diff --git a/docs/testing/test-maintenance.md b/docs/testing/test-maintenance.md index 6c3b1b11..7cc9d6ad 100644 --- a/docs/testing/test-maintenance.md +++ b/docs/testing/test-maintenance.md @@ -1 +1,1127 @@ -*This documentation is under development.* \ No newline at end of file +# Test Maintenance Guide + +## Overview + +This guide provides detailed information on maintaining and extending the test suite for the Wizamart platform. It covers test structure, configuration files, adding new tests, updating fixtures, and keeping tests maintainable as the codebase evolves. + +## Table of Contents + +- [Test Configuration](#test-configuration) +- [Fixture System](#fixture-system) +- [Adding New Tests](#adding-new-tests) +- [Updating Tests](#updating-tests) +- [Test Coverage](#test-coverage) +- [Continuous Improvements](#continuous-improvements) +- [Common Maintenance Tasks](#common-maintenance-tasks) + +--- + +## Test Configuration + +### pytest.ini + +The `pytest.ini` file at the root of the project contains all pytest configuration: + +```ini +[pytest] +testpaths = tests +python_files = test_*.py +python_classes = Test* +python_functions = test_* + +# Enhanced addopts for better development experience +addopts = + -v # Verbose output + --tb=short # Short traceback format + --strict-markers # Enforce marker registration + --strict-config # Enforce strict config + --color=yes # Colored output + --durations=10 # Show 10 slowest tests + --showlocals # Show local variables on failure + -ra # Show summary of all test outcomes + --cov=app # Coverage for app module + --cov=models # Coverage for models module + --cov=middleware # Coverage for middleware module + --cov-report=term-missing # Show missing lines in terminal + --cov-report=html:htmlcov # Generate HTML coverage report + --cov-fail-under=80 # Fail if coverage < 80% + +minversion = 6.0 + +# Test filtering shortcuts +filterwarnings = + ignore::UserWarning + ignore::DeprecationWarning + ignore::PendingDeprecationWarning 
+ ignore::sqlalchemy.exc.SAWarning + +# Timeout settings +timeout = 300 +timeout_method = thread + +# Logging configuration +log_cli = true +log_cli_level = INFO +log_cli_format = %(asctime)s [%(levelname)8s] %(name)s: %(message)s +log_cli_date_format = %Y-%m-%d %H:%M:%S +``` + +#### Adding New Test Markers + +When you need a new test category, add it to the `markers` section in `pytest.ini`: + +```ini +markers = + unit: marks tests as unit tests + integration: marks tests as integration tests + # ... existing markers ... + your_new_marker: description of your new marker category +``` + +Then use it in your tests: + +```python +@pytest.mark.your_new_marker +def test_something(): + """Test something specific.""" + pass +``` + +#### Adjusting Coverage Thresholds + +To modify coverage requirements: + +```ini +addopts = + # ... other options ... + --cov-fail-under=85 # Change from 80% to 85% +``` + +### Directory Structure + +Understanding the test directory structure: + +``` +tests/ +├── conftest.py # Root conftest - core fixtures +├── pytest.ini # Moved to root (symlinked here) +│ +├── fixtures/ # Reusable fixture modules +│ ├── __init__.py +│ ├── testing_fixtures.py # Testing utilities (empty_db, db_with_error) +│ ├── auth_fixtures.py # Auth fixtures (test_user, test_admin, auth_headers) +│ ├── vendor_fixtures.py # Vendor fixtures (test_vendor, vendor_factory) +│ ├── marketplace_product_fixtures.py # Product fixtures +│ ├── marketplace_import_job_fixtures.py # Import job fixtures +│ └── customer_fixtures.py # Customer fixtures +│ +├── unit/ # Unit tests +│ ├── conftest.py # Unit-specific fixtures +│ ├── services/ # Service layer tests +│ │ ├── test_auth_service.py +│ │ ├── test_vendor_service.py +│ │ ├── test_product_service.py +│ │ ├── test_inventory_service.py +│ │ ├── test_admin_service.py +│ │ ├── test_marketplace_service.py +│ │ └── test_stats_service.py +│ ├── middleware/ # Middleware unit tests +│ │ ├── test_auth.py +│ │ ├── test_context.py +│ │ ├── 
test_vendor_context.py +│ │ ├── test_theme_context.py +│ │ ├── test_rate_limiter.py +│ │ ├── test_logging.py +│ │ └── test_decorators.py +│ ├── models/ # Model tests +│ │ └── test_database_models.py +│ └── utils/ # Utility tests +│ ├── test_csv_processor.py +│ ├── test_data_validation.py +│ └── test_data_processing.py +│ +├── integration/ # Integration tests +│ ├── conftest.py # Integration-specific fixtures +│ ├── api/ # API endpoint tests +│ │ └── v1/ +│ │ ├── test_auth_endpoints.py +│ │ ├── test_vendor_endpoints.py +│ │ ├── test_product_endpoints.py +│ │ ├── test_inventory_endpoints.py +│ │ ├── test_admin_endpoints.py +│ │ ├── test_marketplace_products_endpoints.py +│ │ ├── test_marketplace_import_job_endpoints.py +│ │ ├── test_marketplace_product_export.py +│ │ ├── test_stats_endpoints.py +│ │ ├── test_pagination.py +│ │ └── test_filtering.py +│ ├── middleware/ # Middleware integration tests +│ │ ├── conftest.py +│ │ ├── test_middleware_stack.py +│ │ ├── test_context_detection_flow.py +│ │ ├── test_vendor_context_flow.py +│ │ └── test_theme_loading_flow.py +│ ├── security/ # Security tests +│ │ ├── test_authentication.py +│ │ ├── test_authorization.py +│ │ └── test_input_validation.py +│ ├── tasks/ # Background task tests +│ │ └── test_background_tasks.py +│ └── workflows/ # Multi-step workflow tests +│ └── test_integration.py +│ +├── system/ # System tests +│ ├── conftest.py # System-specific fixtures +│ └── test_error_handling.py # System-wide error handling +│ +├── performance/ # Performance tests +│ ├── conftest.py # Performance-specific fixtures +│ └── test_api_performance.py # API performance tests +│ +└── test_data/ # Static test data files + └── csv/ + └── sample_products.csv +``` + +--- + +## Fixture System + +### Fixture Hierarchy + +Fixtures are organized hierarchically with different scopes: + +#### 1. 
Core Fixtures (tests/conftest.py) + +**Session-scoped fixtures** - Created once per test session: + +```python +@pytest.fixture(scope="session") +def engine(): + """Create test database engine - reused across entire test session.""" + return create_engine( + "sqlite:///:memory:", + connect_args={"check_same_thread": False}, + poolclass=StaticPool, + echo=False + ) + +@pytest.fixture(scope="session") +def testing_session_local(engine): + """Create session factory - reused across entire test session.""" + return sessionmaker(autocommit=False, autoflush=False, bind=engine) +``` + +**Function-scoped fixtures** - Created for each test: + +```python +@pytest.fixture(scope="function") +def db(engine, testing_session_local): + """ + Create a clean database session for each test. + + - Creates all tables before test + - Yields session to test + - Cleans up after test + """ + Base.metadata.create_all(bind=engine) + db_session = testing_session_local() + + try: + yield db_session + finally: + db_session.close() + Base.metadata.drop_all(bind=engine) + Base.metadata.create_all(bind=engine) + +@pytest.fixture(scope="function") +def client(db): + """ + Create test client with database dependency override. + + Overrides FastAPI's get_db dependency to use test database. + """ + def override_get_db(): + try: + yield db + finally: + pass + + app.dependency_overrides[get_db] = override_get_db + + try: + client = TestClient(app) + yield client + finally: + if get_db in app.dependency_overrides: + del app.dependency_overrides[get_db] +``` + +#### 2. 
Fixture Modules (tests/fixtures/) + +Fixtures are organized by domain in separate modules: + +**tests/fixtures/auth_fixtures.py:** +- `auth_manager` - AuthManager instance +- `test_user` - Regular user +- `test_admin` - Admin user +- `other_user` - Additional user for access control tests +- `another_admin` - Additional admin for admin interaction tests +- `auth_headers` - Authentication headers for test_user +- `admin_headers` - Authentication headers for test_admin + +**tests/fixtures/vendor_fixtures.py:** +- `test_vendor` - Basic test vendor +- `unique_vendor` - Vendor with unique code +- `inactive_vendor` - Inactive vendor +- `verified_vendor` - Verified vendor +- `test_product` - Vendor product relationship +- `test_inventory` - Inventory entry +- `vendor_factory` - Factory function to create vendors dynamically + +**tests/fixtures/marketplace_product_fixtures.py:** +- `unique_product` - Marketplace product +- `multiple_products` - List of products + +**tests/fixtures/testing_fixtures.py:** +- `empty_db` - Empty database for edge case testing +- `db_with_error` - Mock database that raises errors + +#### 3. 
Test-Level Fixtures (tests/{level}/conftest.py) + +Each test level can have specific fixtures: + +**tests/unit/conftest.py:** +```python +# Unit test specific fixtures +# Currently minimal - add unit-specific mocks here +``` + +**tests/integration/conftest.py:** +```python +# Integration test specific fixtures +# Currently minimal - add integration-specific setup here +``` + +**tests/system/conftest.py:** +```python +# System test specific fixtures +# Currently minimal - add system-specific setup here +``` + +**tests/performance/conftest.py:** +```python +@pytest.fixture +def performance_db_session(db): + """Database session optimized for performance testing.""" + return db +``` + +### Registering Fixture Modules + +Fixture modules must be registered in `tests/conftest.py`: + +```python +# Import fixtures from fixture modules +pytest_plugins = [ + "tests.fixtures.auth_fixtures", + "tests.fixtures.marketplace_product_fixtures", + "tests.fixtures.vendor_fixtures", + "tests.fixtures.customer_fixtures", + "tests.fixtures.marketplace_import_job_fixtures", + "tests.fixtures.testing_fixtures", +] +``` + +### Creating New Fixtures + +When adding new fixtures: + +1. **Determine the appropriate location:** + - Core database/client fixtures → `tests/conftest.py` + - Domain-specific fixtures → `tests/fixtures/{domain}_fixtures.py` + - Test-level specific → `tests/{level}/conftest.py` + +2. **Choose the right scope:** + - `session` - Reuse across all tests (e.g., database engine) + - `module` - Reuse within a test module + - `function` - New instance per test (default, safest) + +3. **Follow naming conventions:** + - Use descriptive names: `test_user`, `test_admin`, `test_vendor` + - Use `_factory` suffix for factory functions: `vendor_factory` + - Use `mock_` prefix for mocked objects: `mock_service` + +4. 
**Clean up resources:** + - Always use `db.expunge()` for database objects + - Use try/finally blocks for cleanup + - Clear dependency overrides + +**Example - Adding a new fixture module:** + +```python +# tests/fixtures/order_fixtures.py +import pytest +import uuid +from models.database.order import Order + +@pytest.fixture +def test_order(db, test_user, test_vendor): + """Create a test order.""" + unique_id = str(uuid.uuid4())[:8] + order = Order( + order_number=f"ORDER_{unique_id}", + user_id=test_user.id, + vendor_id=test_vendor.id, + total_amount=100.00, + status="pending" + ) + db.add(order) + db.commit() + db.refresh(order) + db.expunge(order) + return order + +@pytest.fixture +def order_factory(): + """Factory to create multiple orders.""" + def _create_order(db, user_id, vendor_id, **kwargs): + unique_id = str(uuid.uuid4())[:8] + defaults = { + "order_number": f"ORDER_{unique_id}", + "user_id": user_id, + "vendor_id": vendor_id, + "total_amount": 100.00, + "status": "pending" + } + defaults.update(kwargs) + + order = Order(**defaults) + db.add(order) + db.commit() + db.refresh(order) + return order + + return _create_order +``` + +Then register it in `tests/conftest.py`: + +```python +pytest_plugins = [ + "tests.fixtures.auth_fixtures", + "tests.fixtures.vendor_fixtures", + "tests.fixtures.order_fixtures", # Add new module + # ... other fixtures +] +``` + +--- + +## Adding New Tests + +### 1. Determine Test Level + +Choose the appropriate test level: + +| Level | When to Use | Example | +|-------|-------------|---------| +| **Unit** | Testing a single function/class in isolation | Service method logic, utility functions | +| **Integration** | Testing multiple components together | API endpoints, database operations, middleware flow | +| **System** | Testing complete application behavior | Error handling across app, user workflows | +| **Performance** | Testing speed and scalability | Response times, concurrent requests | + +### 2. 
Create Test File + +Follow naming conventions: + +``` +tests/{level}/{layer}/test_{component}.py +``` + +Examples: +- `tests/unit/services/test_order_service.py` +- `tests/integration/api/v1/test_order_endpoints.py` +- `tests/system/test_order_workflows.py` + +### 3. Write Test Class + +Use descriptive class names and apply markers: + +```python +# tests/unit/services/test_order_service.py +import pytest +from app.services.order_service import OrderService +from app.exceptions import OrderNotFoundException + +@pytest.mark.unit +@pytest.mark.orders # Add new marker to pytest.ini first +class TestOrderService: + """Test suite for OrderService.""" + + def setup_method(self): + """Initialize service instance before each test.""" + self.service = OrderService() + + def test_create_order_success(self, db, test_user, test_vendor): + """Test successful order creation.""" + # Arrange + order_data = { + "vendor_id": test_vendor.id, + "total_amount": 100.00 + } + + # Act + order = self.service.create_order(db, test_user.id, order_data) + + # Assert + assert order is not None + assert order.user_id == test_user.id + assert order.vendor_id == test_vendor.id + assert order.total_amount == 100.00 + + def test_get_order_not_found(self, db): + """Test getting non-existent order raises exception.""" + # Act & Assert + with pytest.raises(OrderNotFoundException): + self.service.get_order(db, order_id=99999) +``` + +### 4. Add Required Fixtures + +If you need new fixtures, create them as described in [Creating New Fixtures](#creating-new-fixtures). + +### 5. Run Your Tests + +```bash +# Run your new test file +pytest tests/unit/services/test_order_service.py -v + +# Run with coverage +pytest tests/unit/services/test_order_service.py --cov=app.services.order_service +``` + +--- + +## Updating Tests + +### When to Update Tests + +Update tests when: +1. **API changes** - Endpoint modifications, new parameters, response structure changes +2. 
**Business logic changes** - Modified validation rules, new requirements +3. **Database schema changes** - New fields, relationships, constraints +4. **Exception handling changes** - New exception types, modified error messages +5. **Test failures** - Legitimate changes that require test updates + +### Update Process + +1. **Run existing tests to identify failures:** +```bash +pytest tests/ -v +``` + +2. **Identify the cause:** + - Code change (legitimate update needed) + - Test bug (test was incorrect) + - Regression (code broke functionality) + +3. **Update tests appropriately:** + +**Example - API endpoint added new field:** + +```python +# OLD TEST +def test_create_vendor(self, client, auth_headers): + response = client.post( + "/api/v1/vendor", + headers=auth_headers, + json={ + "vendor_code": "TEST123", + "name": "Test Vendor" + } + ) + assert response.status_code == 200 + +# UPDATED TEST - New required field "subdomain" +def test_create_vendor(self, client, auth_headers): + response = client.post( + "/api/v1/vendor", + headers=auth_headers, + json={ + "vendor_code": "TEST123", + "name": "Test Vendor", + "subdomain": "testvendor" # New required field + } + ) + assert response.status_code == 200 + assert response.json()["subdomain"] == "testvendor" # Verify new field +``` + +4. **Update fixtures if necessary:** + +```python +# tests/fixtures/vendor_fixtures.py +@pytest.fixture +def test_vendor(db, test_user): + """Create a test vendor.""" + unique_id = str(uuid.uuid4())[:8].upper() + vendor = Vendor( + vendor_code=f"TESTVENDOR_{unique_id}", + name=f"Test Vendor {unique_id}", + subdomain=f"testvendor{unique_id.lower()}", # ADD NEW FIELD + owner_user_id=test_user.id, + is_active=True, + is_verified=True + ) + db.add(vendor) + db.commit() + db.refresh(vendor) + db.expunge(vendor) + return vendor +``` + +5. **Run tests again to verify:** +```bash +pytest tests/ -v +``` + +### Handling Breaking Changes + +When making breaking changes: + +1. 
**Update all affected tests** - Search for usage +2. **Update fixtures** - Modify fixture creation +3. **Update documentation** - Keep testing guide current +4. **Communicate changes** - Inform team of test updates + +--- + +## Test Coverage + +### Measuring Coverage + +The project enforces 80% code coverage minimum (configured in `pytest.ini`). + +#### View Coverage Report + +```bash +# Generate coverage report +make test-coverage + +# Or with pytest directly +pytest tests/ --cov=app --cov=models --cov=middleware --cov-report=html + +# Open HTML report +open htmlcov/index.html # macOS +xdg-open htmlcov/index.html # Linux +start htmlcov/index.html # Windows +``` + +#### Coverage Report Structure + +The HTML report shows: +- **Overall coverage percentage** - Should be > 80% +- **Per-file coverage** - Individual module coverage +- **Missing lines** - Specific lines not covered +- **Branch coverage** - Conditional paths covered + +### Improving Coverage + +#### 1. Identify Uncovered Code + +Look for: +- Red highlighted lines (not executed) +- Yellow highlighted lines (partial branch coverage) +- Functions/classes with 0% coverage + +#### 2. 
Add Missing Tests + +Focus on: +- **Error paths** - Exception handling +- **Edge cases** - Empty inputs, boundary values +- **Conditional branches** - if/else paths +- **Alternative flows** - Different code paths + +**Example - Improving branch coverage:** + +```python +# Code with conditional +def get_vendor_status(vendor): + if vendor.is_active and vendor.is_verified: + return "active" + elif vendor.is_active: + return "pending_verification" + else: + return "inactive" + +# Tests needed for 100% coverage +def test_vendor_status_active_and_verified(test_vendor): + """Test status when vendor is active and verified.""" + test_vendor.is_active = True + test_vendor.is_verified = True + assert get_vendor_status(test_vendor) == "active" + +def test_vendor_status_active_not_verified(test_vendor): + """Test status when vendor is active but not verified.""" + test_vendor.is_active = True + test_vendor.is_verified = False + assert get_vendor_status(test_vendor) == "pending_verification" + +def test_vendor_status_inactive(test_vendor): + """Test status when vendor is inactive.""" + test_vendor.is_active = False + assert get_vendor_status(test_vendor) == "inactive" +``` + +#### 3. Exclude Untestable Code + +For code that shouldn't be tested (e.g., main blocks, debug code): + +```python +# Use pragma to exclude from coverage +if __name__ == "__main__": # pragma: no cover + # This won't count against coverage + main() +``` + +### Coverage Goals by Test Level + +| Test Level | Coverage Goal | +|------------|---------------| +| Unit Tests | > 90% | +| Integration Tests | > 80% | +| Overall | > 80% (enforced) | + +--- + +## Continuous Improvements + +### Identifying Improvement Opportunities + +#### 1. 
Analyze Test Execution Time + +```bash +# Show 10 slowest tests +pytest tests/ --durations=10 + +# Show all durations +pytest tests/ --durations=0 +``` + +**Action Items:** +- Mark slow tests with `@pytest.mark.slow` +- Optimize fixture creation +- Reduce database operations +- Use mocks where appropriate + +#### 2. Review Test Failures + +```bash +# Run only previously failed tests +pytest tests/ --lf + +# Run failed tests first +pytest tests/ --ff +``` + +**Action Items:** +- Fix flaky tests (tests that fail intermittently) +- Improve test isolation +- Add better error messages + +#### 3. Monitor Test Growth + +Track metrics: +- Number of tests per module +- Test coverage percentage +- Average test execution time +- Failed test rate + +### Refactoring Tests + +#### When to Refactor + +- Tests have duplicated code +- Setup code is repeated +- Tests are hard to understand +- Tests are brittle (fail frequently on minor changes) + +#### Refactoring Techniques + +**1. Extract common setup to fixtures:** + +```python +# BEFORE - Duplicated setup +def test_vendor_creation(db, test_user): + vendor_data = VendorCreate(vendor_code="TEST", name="Test") + vendor = VendorService().create_vendor(db, vendor_data, test_user) + assert vendor.vendor_code == "TEST" + +def test_vendor_update(db, test_user): + vendor_data = VendorCreate(vendor_code="TEST", name="Test") + vendor = VendorService().create_vendor(db, vendor_data, test_user) + # ... update logic + +# AFTER - Use fixture +@pytest.fixture +def created_vendor(db, test_user): + """Vendor already created in database.""" + vendor_data = VendorCreate(vendor_code="TEST", name="Test") + return VendorService().create_vendor(db, vendor_data, test_user) + +def test_vendor_creation(created_vendor): + assert created_vendor.vendor_code == "TEST" + +def test_vendor_update(db, created_vendor): + # Directly use created_vendor + pass +``` + +**2. 
Use factory functions for variations:**
+
+```python
+# Factory pattern for creating test data variations
+@pytest.fixture
+def vendor_factory(db, test_user):
+    """Create vendors with custom attributes."""
+    def _create(**kwargs):
+        defaults = {
+            "vendor_code": f"TEST{str(uuid.uuid4())[:8]}",
+            "name": "Test Vendor",
+            "is_active": True,
+            "is_verified": False
+        }
+        defaults.update(kwargs)
+
+        vendor_data = VendorCreate(**defaults)
+        return VendorService().create_vendor(db, vendor_data, test_user)
+
+    return _create
+
+# Use factory in tests
+def test_inactive_vendor(vendor_factory):
+    vendor = vendor_factory(is_active=False)
+    assert not vendor.is_active
+
+def test_verified_vendor(vendor_factory):
+    vendor = vendor_factory(is_verified=True)
+    assert vendor.is_verified
+```
+
+**3. Use parametrize for similar test cases:**
+
+```python
+# BEFORE - Multiple similar tests
+def test_invalid_email_missing_at(db):
+    with pytest.raises(ValidationException):
+        create_user(db, email="invalidemail.com")
+
+def test_invalid_email_missing_domain(db):
+    with pytest.raises(ValidationException):
+        create_user(db, email="invalid@")
+
+def test_invalid_email_missing_tld(db):
+    with pytest.raises(ValidationException):
+        create_user(db, email="invalid@domain")
+
+# AFTER - Parametrized test
+@pytest.mark.parametrize("invalid_email", [
+    "invalidemail.com",  # Missing @
+    "invalid@",          # Missing domain
+    "invalid@domain",    # Missing TLD
+    "@domain.com",       # Missing local part
+])
+def test_invalid_email_format(db, invalid_email):
+    """Test that invalid email formats raise validation error."""
+    with pytest.raises(ValidationException):
+        create_user(db, email=invalid_email)
+```
+
+### Testing Best Practices Evolution
+
+As the project grows, continuously improve testing practices:
+
+1. **Review and update this documentation**
+2. **Share knowledge within the team**
+3. **Conduct test code reviews**
+4. **Refactor tests alongside application code**
+5. 
**Keep tests simple and maintainable** + +--- + +## Common Maintenance Tasks + +### Task 1: Adding a New Model + +When adding a new database model: + +1. **Create model tests:** +```python +# tests/unit/models/test_order_model.py +import pytest + +@pytest.mark.unit +class TestOrderModel: + def test_order_creation(self, db, test_user, test_vendor): + """Test Order model can be created.""" + order = Order( + order_number="ORDER001", + user_id=test_user.id, + vendor_id=test_vendor.id, + total_amount=100.00 + ) + db.add(order) + db.commit() + + assert order.id is not None + assert order.order_number == "ORDER001" +``` + +2. **Create fixtures:** +```python +# tests/fixtures/order_fixtures.py +@pytest.fixture +def test_order(db, test_user, test_vendor): + """Create a test order.""" + # ... implementation +``` + +3. **Register fixtures:** +```python +# tests/conftest.py +pytest_plugins = [ + # ... existing + "tests.fixtures.order_fixtures", +] +``` + +4. **Add service tests:** +```python +# tests/unit/services/test_order_service.py +@pytest.mark.unit +class TestOrderService: + # ... test methods +``` + +5. **Add API endpoint tests:** +```python +# tests/integration/api/v1/test_order_endpoints.py +@pytest.mark.integration +@pytest.mark.api +class TestOrderAPI: + # ... test methods +``` + +### Task 2: Updating an Existing API Endpoint + +When modifying an API endpoint: + +1. **Update integration tests:** +```python +# tests/integration/api/v1/test_vendor_endpoints.py +def test_create_vendor_with_new_field(self, client, auth_headers): + response = client.post( + "/api/v1/vendor", + headers=auth_headers, + json={ + "vendor_code": "TEST", + "name": "Test", + "new_field": "value" # Add new field + } + ) + assert response.status_code == 200 + assert response.json()["new_field"] == "value" +``` + +2. 
**Update service tests if logic changed:** +```python +# tests/unit/services/test_vendor_service.py +def test_create_vendor_validates_new_field(self, db, test_user): + # Test new validation logic + pass +``` + +3. **Update fixtures if model changed:** +```python +# tests/fixtures/vendor_fixtures.py +@pytest.fixture +def test_vendor(db, test_user): + vendor = Vendor( + # ... existing fields + new_field="default_value" # Add new field + ) + # ... +``` + +### Task 3: Fixing Flaky Tests + +Flaky tests fail intermittently. Common causes: + +**1. Test order dependency:** +```python +# BAD - Tests depend on order +test_data = None + +def test_create(): + global test_data + test_data = create_something() + +def test_update(): + update_something(test_data) # Fails if test_create doesn't run first + +# GOOD - Independent tests +def test_create(db): + data = create_something(db) + assert data is not None + +def test_update(db): + data = create_something(db) # Create own test data + update_something(db, data) +``` + +**2. Timing issues:** +```python +# BAD - Timing-dependent +def test_async_operation(): + start_async_task() + result = get_result() # May not be ready yet + assert result == expected + +# GOOD - Wait for completion +def test_async_operation(): + task = start_async_task() + result = task.wait() # Wait for completion + assert result == expected +``` + +**3. Shared state:** +```python +# BAD - Shared mutable state +shared_list = [] + +def test_append(): + shared_list.append(1) + assert len(shared_list) == 1 # Fails if test runs twice + +# GOOD - Fresh state per test +def test_append(): + test_list = [] # New list per test + test_list.append(1) + assert len(test_list) == 1 +``` + +### Task 4: Adding Test Markers + +When you need to categorize tests: + +1. **Add marker to pytest.ini:** +```ini +markers = + # ... existing markers + orders: marks tests as order-related functionality + payments: marks tests as payment-related functionality +``` + +2. 
**Apply marker to tests:** +```python +@pytest.mark.orders +class TestOrderService: + pass +``` + +3. **Run tests by marker:** +```bash +pytest -m orders +pytest -m "orders or payments" +``` + +### Task 5: Updating Test Data + +When test data needs to change: + +**Option 1: Update fixture:** +```python +# tests/fixtures/product_fixtures.py +@pytest.fixture +def sample_products(): + """Return sample product data.""" + return [ + {"name": "Product 1", "price": 10.00, "category": "Electronics"}, + {"name": "Product 2", "price": 20.00, "category": "Books"}, + # Add more or modify existing + ] +``` + +**Option 2: Update static test data:** +```csv +# tests/test_data/csv/sample_products.csv +gtin,title,brand,price +1234567890123,Product 1,Brand A,10.00 +2345678901234,Product 2,Brand B,20.00 +``` + +**Option 3: Use factory with parameters:** +```python +@pytest.fixture +def product_factory(db): + """Create products dynamically.""" + def _create(name="Product", price=10.00, **kwargs): + # ... create product + return product + return _create +``` + +--- + +## Summary + +This test maintenance guide covers: + +- ✅ Test configuration in `pytest.ini` +- ✅ Understanding fixture hierarchy and organization +- ✅ Adding new tests at appropriate levels +- ✅ Updating tests when code changes +- ✅ Measuring and improving test coverage +- ✅ Continuous improvement strategies +- ✅ Common maintenance tasks and patterns +- ✅ Refactoring techniques for better maintainability + +### Key Takeaways + +1. **Keep tests organized** - Follow the established directory structure +2. **Use appropriate test levels** - Unit for isolation, Integration for component interaction +3. **Maintain fixtures** - Keep fixture modules clean and well-documented +4. **Monitor coverage** - Aim for >80% overall, >90% for critical components +5. **Refactor regularly** - Keep tests maintainable as code evolves +6. **Document changes** - Update this guide as testing practices evolve +7. 
**Review tests** - Include test code in code reviews + +### Quick Reference + +```bash +# Run all tests +make test + +# Run specific test level +make test-unit +make test-integration + +# Run with coverage +make test-coverage + +# Run tests by marker +pytest -m auth +pytest -m "unit and vendors" + +# Run specific test +pytest tests/unit/services/test_vendor_service.py::TestVendorService::test_create_vendor_success + +# Debug test +pytest tests/unit/services/test_vendor_service.py -vv --pdb + +# Show slowest tests +pytest tests/ --durations=10 +``` + +For more information on writing tests, see the [Testing Guide](testing-guide.md). diff --git a/docs/testing/testing-guide.md b/docs/testing/testing-guide.md index 6c3b1b11..cbd3a484 100644 --- a/docs/testing/testing-guide.md +++ b/docs/testing/testing-guide.md @@ -1 +1,1210 @@ -*This documentation is under development.* \ No newline at end of file +# Testing Guide + +## Overview + +The Wizamart platform employs a comprehensive testing strategy with four distinct test levels: **Unit**, **Integration**, **System**, and **Performance** tests. This guide explains how to write, run, and maintain tests in the application. + +## Table of Contents + +- [Test Architecture](#test-architecture) +- [Running Tests](#running-tests) +- [Test Structure](#test-structure) +- [Writing Tests](#writing-tests) +- [Fixtures](#fixtures) +- [Mocking](#mocking) +- [Best Practices](#best-practices) +- [Troubleshooting](#troubleshooting) + +--- + +## Test Architecture + +### Test Pyramid + +Our testing strategy follows the test pyramid approach: + +``` + /\ + / \ + / P \ Performance Tests (Slow, End-to-End) + /______\ + / \ + / System \ System Tests (End-to-End Scenarios) + /____________\ + / \ + / Integration \ Integration Tests (Multiple Components) + /__________________\ + / \ +/ Unit \ Unit Tests (Individual Components) +/______________________\ +``` + +### Test Levels + +#### 1. 
Unit Tests (`tests/unit/`) +- **Purpose**: Test individual components in isolation +- **Scope**: Single functions, methods, or classes +- **Dependencies**: Mocked or stubbed +- **Speed**: Very fast (< 100ms per test) +- **Marker**: `@pytest.mark.unit` + +**Example Structure:** +``` +tests/unit/ +├── conftest.py # Unit test specific fixtures +├── services/ # Service layer tests +│ ├── test_auth_service.py +│ ├── test_vendor_service.py +│ └── test_product_service.py +├── middleware/ # Middleware tests +│ ├── test_auth.py +│ ├── test_context.py +│ └── test_rate_limiter.py +├── models/ # Model tests +│ └── test_database_models.py +└── utils/ # Utility function tests + ├── test_csv_processor.py + └── test_data_validation.py +``` + +#### 2. Integration Tests (`tests/integration/`) +- **Purpose**: Test multiple components working together +- **Scope**: API endpoints, database interactions, middleware flows +- **Dependencies**: Real database (in-memory SQLite), test client +- **Speed**: Fast to moderate (100ms - 1s per test) +- **Marker**: `@pytest.mark.integration` + +**Example Structure:** +``` +tests/integration/ +├── conftest.py # Integration test specific fixtures +├── api/ # API endpoint tests +│ └── v1/ +│ ├── test_auth_endpoints.py +│ ├── test_vendor_endpoints.py +│ ├── test_pagination.py +│ └── test_filtering.py +├── middleware/ # Middleware stack tests +│ ├── test_middleware_stack.py +│ └── test_context_detection_flow.py +├── security/ # Security integration tests +│ ├── test_authentication.py +│ ├── test_authorization.py +│ └── test_input_validation.py +└── workflows/ # Multi-step workflow tests + └── test_integration.py +``` + +#### 3. 
System Tests (`tests/system/`) +- **Purpose**: Test complete application behavior +- **Scope**: End-to-end user scenarios +- **Dependencies**: Full application stack +- **Speed**: Moderate to slow (1s - 5s per test) +- **Marker**: `@pytest.mark.system` + +**Example Structure:** +``` +tests/system/ +├── conftest.py # System test specific fixtures +└── test_error_handling.py # System-wide error handling tests +``` + +#### 4. Performance Tests (`tests/performance/`) +- **Purpose**: Test application performance and load handling +- **Scope**: Response times, concurrent requests, large datasets +- **Dependencies**: Full application stack with data +- **Speed**: Slow (5s+ per test) +- **Markers**: `@pytest.mark.performance` and `@pytest.mark.slow` + +**Example Structure:** +``` +tests/performance/ +├── conftest.py # Performance test specific fixtures +└── test_api_performance.py # API performance tests +``` + +--- + +## Running Tests + +### Using Make Commands + +The project provides convenient Make commands for running tests: + +```bash +# Run all tests +make test + +# Run specific test levels +make test-unit # Unit tests only +make test-integration # Integration tests only + +# Run with coverage report +make test-coverage # Generates HTML coverage report in htmlcov/ + +# Run fast tests (excludes @pytest.mark.slow) +make test-fast + +# Run slow tests only +make test-slow +``` + +### Using pytest Directly + +```bash +# Run all tests +pytest tests/ + +# Run specific test level by marker +pytest tests/ -m unit +pytest tests/ -m integration +pytest tests/ -m "system or performance" + +# Run specific test file +pytest tests/unit/services/test_auth_service.py + +# Run specific test class +pytest tests/unit/services/test_auth_service.py::TestAuthService + +# Run specific test method +pytest tests/unit/services/test_auth_service.py::TestAuthService::test_register_user_success + +# Run with verbose output +pytest tests/ -v + +# Run with coverage +pytest tests/ --cov=app 
--cov=models --cov=middleware + +# Run tests matching a pattern +pytest tests/ -k "auth" # Runs all tests with "auth" in the name +``` + +### Using Test Markers + +Markers allow you to categorize and selectively run tests: + +```bash +# Run by functionality marker +pytest -m auth # Authentication tests +pytest -m products # Product tests +pytest -m vendors # Vendor tests +pytest -m api # API endpoint tests + +# Run by test type marker +pytest -m unit # Unit tests +pytest -m integration # Integration tests +pytest -m system # System tests +pytest -m performance # Performance tests + +# Run by speed marker +pytest -m "not slow" # Exclude slow tests +pytest -m slow # Run only slow tests + +# Combine markers +pytest -m "unit and auth" # Unit tests for auth +pytest -m "integration and api" # Integration tests for API +pytest -m "not (slow or external)" # Exclude slow and external tests +``` + +### Test Markers Reference + +All available markers are defined in `pytest.ini`: + +| Marker | Description | +|--------|-------------| +| `unit` | Fast, isolated component tests | +| `integration` | Multiple components working together | +| `system` | Full application behavior tests | +| `e2e` | End-to-end user workflow tests | +| `slow` | Tests that take significant time (>1s) | +| `performance` | Performance and load tests | +| `auth` | Authentication and authorization tests | +| `products` | Product management functionality | +| `inventory` | Inventory management tests | +| `vendors` | Vendor management functionality | +| `admin` | Admin functionality and permissions | +| `marketplace` | Marketplace import functionality | +| `stats` | Statistics and reporting | +| `database` | Tests requiring database operations | +| `external` | Tests requiring external services | +| `api` | API endpoint tests | +| `security` | Security-related tests | +| `ci` | Tests that should only run in CI | +| `dev` | Development-specific tests | + +--- + +## Test Structure + +### Test Organization + 
+Tests are organized by: +1. **Test level** (unit/integration/system/performance) +2. **Application layer** (services/middleware/models/utils) +3. **Functionality** (auth/vendors/products/inventory) + +### File Naming Conventions + +- Test files: `test_*.py` +- Test classes: `Test*` +- Test functions: `test_*` +- Fixture files: `*_fixtures.py` +- Configuration files: `conftest.py` + +### Test Class Structure + +```python +import pytest +from app.services.auth_service import AuthService +from app.exceptions import InvalidCredentialsException + +@pytest.mark.unit +@pytest.mark.auth +class TestAuthService: + """Test suite for AuthService.""" + + def setup_method(self): + """Setup method runs before each test.""" + self.service = AuthService() + + def test_successful_login(self, db, test_user): + """Test successful user login.""" + # Arrange + username = test_user.username + password = "testpass123" + + # Act + result = self.service.login(db, username, password) + + # Assert + assert result is not None + assert result.access_token is not None + + def test_invalid_credentials(self, db, test_user): + """Test login with invalid credentials.""" + # Act & Assert + with pytest.raises(InvalidCredentialsException): + self.service.login(db, test_user.username, "wrongpassword") +``` + +### Test Naming Conventions + +Use descriptive test names that explain: +- What is being tested +- Under what conditions +- What the expected outcome is + +**Good examples:** +```python +def test_create_vendor_success(self, db, test_user): + """Test successful vendor creation by regular user.""" + +def test_create_vendor_duplicate_code_raises_exception(self, db, test_user): + """Test vendor creation fails when vendor code already exists.""" + +def test_admin_can_delete_any_vendor(self, db, test_admin, test_vendor): + """Test admin has permission to delete any vendor.""" +``` + +**Poor examples:** +```python +def test_vendor(self): # Too vague +def test_1(self): # Non-descriptive +def 
test_error(self): # Unclear what error +``` + +--- + +## Writing Tests + +### Unit Test Example + +Unit tests focus on testing a single component in isolation: + +```python +# tests/unit/services/test_auth_service.py +import pytest +from app.services.auth_service import AuthService +from app.exceptions import UserAlreadyExistsException +from models.schema.auth import UserRegister + +@pytest.mark.unit +@pytest.mark.auth +class TestAuthService: + """Test suite for AuthService.""" + + def setup_method(self): + """Initialize service instance.""" + self.service = AuthService() + + def test_register_user_success(self, db): + """Test successful user registration.""" + # Arrange + user_data = UserRegister( + email="newuser@example.com", + username="newuser123", + password="securepass123" + ) + + # Act + user = self.service.register_user(db, user_data) + + # Assert + assert user is not None + assert user.email == "newuser@example.com" + assert user.username == "newuser123" + assert user.role == "user" + assert user.is_active is True + assert user.hashed_password != "securepass123" + + def test_register_user_duplicate_email(self, db, test_user): + """Test registration fails when email already exists.""" + # Arrange + user_data = UserRegister( + email=test_user.email, + username="differentuser", + password="securepass123" + ) + + # Act & Assert + with pytest.raises(UserAlreadyExistsException) as exc_info: + self.service.register_user(db, user_data) + + # Verify exception details + exception = exc_info.value + assert exception.error_code == "USER_ALREADY_EXISTS" + assert exception.status_code == 409 +``` + +### Integration Test Example + +Integration tests verify multiple components work together: + +```python +# tests/integration/api/v1/test_auth_endpoints.py +import pytest + +@pytest.mark.integration +@pytest.mark.api +@pytest.mark.auth +class TestAuthenticationAPI: + """Integration tests for authentication API endpoints.""" + + def test_register_user_success(self, client, 
db): + """Test successful user registration via API.""" + # Act + response = client.post( + "/api/v1/auth/register", + json={ + "email": "newuser@example.com", + "username": "newuser", + "password": "securepass123" + } + ) + + # Assert + assert response.status_code == 200 + data = response.json() + assert data["email"] == "newuser@example.com" + assert data["username"] == "newuser" + assert data["role"] == "user" + assert "hashed_password" not in data + + def test_login_returns_access_token(self, client, test_user): + """Test login endpoint returns valid access token.""" + # Act + response = client.post( + "/api/v1/auth/login", + json={ + "username": test_user.username, + "password": "testpass123" + } + ) + + # Assert + assert response.status_code == 200 + data = response.json() + assert "access_token" in data + assert data["token_type"] == "bearer" + + def test_protected_endpoint_requires_authentication(self, client): + """Test protected endpoint returns 401 without token.""" + # Act + response = client.get("/api/v1/vendor") + + # Assert + assert response.status_code == 401 +``` + +### System Test Example + +System tests verify complete end-to-end scenarios: + +```python +# tests/system/test_error_handling.py +import pytest + +@pytest.mark.system +class TestErrorHandling: + """System tests for error handling across the API.""" + + def test_invalid_json_request(self, client, auth_headers): + """Test handling of malformed JSON requests.""" + # Act + response = client.post( + "/api/v1/vendor", + headers=auth_headers, + content="{ invalid json syntax" + ) + + # Assert + assert response.status_code == 422 + data = response.json() + assert data["error_code"] == "VALIDATION_ERROR" + assert data["message"] == "Request validation failed" + assert "validation_errors" in data["details"] +``` + +### Performance Test Example + +Performance tests measure response times and system capacity: + +```python +# tests/performance/test_api_performance.py +import time +import pytest 
+from models.database.marketplace_product import MarketplaceProduct + +@pytest.mark.performance +@pytest.mark.slow +@pytest.mark.database +class TestPerformance: + """Performance tests for API endpoints.""" + + def test_product_list_performance(self, client, auth_headers, db): + """Test performance of product listing with many products.""" + # Arrange - Create test data + products = [] + for i in range(100): + product = MarketplaceProduct( + marketplace_product_id=f"PERF{i:03d}", + title=f"Performance Test Product {i}", + price=f"{i}.99", + marketplace="Performance" + ) + products.append(product) + + db.add_all(products) + db.commit() + + # Act - Time the request + start_time = time.time() + response = client.get( + "/api/v1/marketplace/product?limit=100", + headers=auth_headers + ) + end_time = time.time() + + # Assert + assert response.status_code == 200 + assert len(response.json()["products"]) == 100 + assert end_time - start_time < 2.0 # Must complete in 2s +``` + +--- + +## Fixtures + +### Fixture Architecture + +Fixtures are organized in a hierarchical structure: + +``` +tests/ +├── conftest.py # Core fixtures (session-scoped) +│ ├── engine # Database engine +│ ├── testing_session_local # Session factory +│ ├── db # Database session (function-scoped) +│ └── client # Test client +│ +├── fixtures/ # Reusable fixture modules +│ ├── testing_fixtures.py # Testing utilities +│ ├── auth_fixtures.py # Auth-related fixtures +│ ├── vendor_fixtures.py # Vendor fixtures +│ ├── marketplace_product_fixtures.py +│ ├── marketplace_import_job_fixtures.py +│ └── customer_fixtures.py +│ +├── unit/conftest.py # Unit test specific fixtures +├── integration/conftest.py # Integration test specific fixtures +├── system/conftest.py # System test specific fixtures +└── performance/conftest.py # Performance test specific fixtures +``` + +### Core Fixtures (tests/conftest.py) + +These fixtures are available to all tests: + +```python +# Database fixtures 
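+# `testing_session_local` is consumed by the `db` fixture below but its body
+# is not shown in this excerpt; a plausible sketch (an assumption -- the real
+# fixture may differ) is a session factory bound to the session-scoped engine:
+@pytest.fixture(scope="session")
+def testing_session_local(engine):
+    """Create a session factory bound to the shared test engine."""
+    from sqlalchemy.orm import sessionmaker
+    return sessionmaker(autoflush=False, bind=engine)
+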
+@pytest.fixture(scope="session") +def engine(): + """Create test database engine (in-memory SQLite).""" + return create_engine( + "sqlite:///:memory:", + connect_args={"check_same_thread": False}, + poolclass=StaticPool, + echo=False + ) + +@pytest.fixture(scope="function") +def db(engine, testing_session_local): + """Create a clean database session for each test.""" + Base.metadata.create_all(bind=engine) + db_session = testing_session_local() + + try: + yield db_session + finally: + db_session.close() + Base.metadata.drop_all(bind=engine) + Base.metadata.create_all(bind=engine) + +@pytest.fixture(scope="function") +def client(db): + """Create a test client with database dependency override.""" + def override_get_db(): + try: + yield db + finally: + pass + + app.dependency_overrides[get_db] = override_get_db + + try: + client = TestClient(app) + yield client + finally: + if get_db in app.dependency_overrides: + del app.dependency_overrides[get_db] +``` + +### Auth Fixtures (tests/fixtures/auth_fixtures.py) + +Authentication and user-related fixtures: + +```python +@pytest.fixture +def test_user(db, auth_manager): + """Create a test user with unique username.""" + unique_id = str(uuid.uuid4())[:8] + hashed_password = auth_manager.hash_password("testpass123") + user = User( + email=f"test_{unique_id}@example.com", + username=f"testuser_{unique_id}", + hashed_password=hashed_password, + role="user", + is_active=True + ) + db.add(user) + db.commit() + db.refresh(user) + db.expunge(user) # Prevent resource warnings + return user + +@pytest.fixture +def test_admin(db, auth_manager): + """Create a test admin user.""" + unique_id = str(uuid.uuid4())[:8] + hashed_password = auth_manager.hash_password("adminpass123") + admin = User( + email=f"admin_{unique_id}@example.com", + username=f"admin_{unique_id}", + hashed_password=hashed_password, + role="admin", + is_active=True + ) + db.add(admin) + db.commit() + db.refresh(admin) + db.expunge(admin) + return admin + 
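+# The `auth_manager` fixture consumed by the user fixtures above is defined
+# elsewhere in the suite; it only needs to expose hash_password(). The
+# stand-in below is illustrative only -- the class name and hashing algorithm
+# are assumptions, NOT the production implementation:
+class _StubAuthManager:
+    """Stand-in exposing the same hash_password() interface."""
+
+    def hash_password(self, password: str) -> str:
+        import hashlib
+        import os
+        salt = os.urandom(16)
+        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
+        return f"{salt.hex()}${digest.hex()}"
+
+
+@pytest.fixture
+def auth_manager():
+    """Provide an object with hash_password() for the user fixtures."""
+    return _StubAuthManager()
+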
+@pytest.fixture +def auth_headers(client, test_user): + """Get authentication headers for test user.""" + response = client.post( + "/api/v1/auth/login", + json={"username": test_user.username, "password": "testpass123"} + ) + assert response.status_code == 200 + token = response.json()["access_token"] + return {"Authorization": f"Bearer {token}"} + +@pytest.fixture +def admin_headers(client, test_admin): + """Get authentication headers for admin user.""" + response = client.post( + "/api/v1/auth/login", + json={"username": test_admin.username, "password": "adminpass123"} + ) + assert response.status_code == 200 + token = response.json()["access_token"] + return {"Authorization": f"Bearer {token}"} +``` + +### Vendor Fixtures (tests/fixtures/vendor_fixtures.py) + +Vendor-related fixtures: + +```python +@pytest.fixture +def test_vendor(db, test_user): + """Create a test vendor.""" + unique_id = str(uuid.uuid4())[:8].upper() + vendor = Vendor( + vendor_code=f"TESTVENDOR_{unique_id}", + subdomain=f"testvendor{unique_id.lower()}", + name=f"Test Vendor {unique_id.lower()}", + owner_user_id=test_user.id, + is_active=True, + is_verified=True + ) + db.add(vendor) + db.commit() + db.refresh(vendor) + db.expunge(vendor) + return vendor + +@pytest.fixture +def vendor_factory(): + """Factory function to create unique vendors.""" + return create_unique_vendor_factory() + +def create_unique_vendor_factory(): + """Factory function to create unique vendors in tests.""" + def _create_vendor(db, owner_user_id, **kwargs): + unique_id = str(uuid.uuid4())[:8] + defaults = { + "vendor_code": f"FACTORY_{unique_id.upper()}", + "subdomain": f"factory{unique_id.lower()}", + "name": f"Factory Vendor {unique_id}", + "owner_user_id": owner_user_id, + "is_active": True, + "is_verified": False + } + defaults.update(kwargs) + + vendor = Vendor(**defaults) + db.add(vendor) + db.commit() + db.refresh(vendor) + return vendor + + return _create_vendor +``` + +### Using Fixtures + +Fixtures are 
automatically injected by pytest:
+
+```python
+def test_example(client, db, test_user, auth_headers):
+    """
+    This test receives four fixtures:
+    - client: Test client for making API calls
+    - db: Database session
+    - test_user: A test user object
+    - auth_headers: Authentication headers for API calls
+    """
+    # Use the fixtures
+    assert test_user.id is not None
+    response = client.get("/api/v1/profile", headers=auth_headers)
+    assert response.status_code == 200
+```
+
+### Creating Custom Fixtures
+
+Add fixtures to the appropriate conftest.py or fixture files:
+
+```python
+# tests/unit/conftest.py
+import pytest
+
+@pytest.fixture
+def mock_service():
+    """Create a mocked service for unit testing."""
+    from unittest.mock import Mock
+    service = Mock()
+    service.get_data.return_value = {"key": "value"}
+    return service
+```
+
+---
+
+## Mocking
+
+### When to Use Mocks
+
+Use mocks to:
+- Isolate unit tests from external dependencies
+- Simulate error conditions
+- Test edge cases
+- Speed up tests by avoiding slow operations
+
+### Mock Patterns
+
+#### 1. Mocking with unittest.mock
+
+```python
+import pytest
+from unittest.mock import Mock, MagicMock, patch
+
+from app.services.auth_service import AuthService
+
+@pytest.mark.unit
+class TestWithMocks:
+    def test_with_mock(self):
+        """Test using a simple mock."""
+        mock_service = Mock()
+        mock_service.get_user.return_value = {"id": 1, "name": "Test"}
+
+        result = mock_service.get_user(1)
+
+        assert result["name"] == "Test"
+        mock_service.get_user.assert_called_once_with(1)
+
+    @patch('app.services.auth_service.jwt.encode')
+    def test_with_patch(self, mock_encode):
+        """Test using patch decorator."""
+        mock_encode.return_value = "fake_token"
+
+        service = AuthService()
+        token = service.create_token({"user_id": 1})
+
+        assert token == "fake_token"
+        mock_encode.assert_called_once()
+```
+
+#### 2. 
Mocking Database Errors + +```python +from sqlalchemy.exc import SQLAlchemyError + +def test_database_error_handling(): + """Test handling of database errors.""" + mock_db = Mock() + mock_db.commit.side_effect = SQLAlchemyError("DB connection failed") + + service = VendorService() + + with pytest.raises(DatabaseException): + service.create_vendor(mock_db, vendor_data, user) +``` + +#### 3. Mocking External Services + +```python +@patch('app.services.email_service.send_email') +def test_user_registration_sends_email(mock_send_email, db): + """Test that user registration sends a welcome email.""" + service = AuthService() + user_data = UserRegister( + email="test@example.com", + username="testuser", + password="password123" + ) + + service.register_user(db, user_data) + + # Verify email was sent + mock_send_email.assert_called_once() + call_args = mock_send_email.call_args + assert "test@example.com" in str(call_args) +``` + +#### 4. Mocking with Context Managers + +```python +@pytest.mark.unit +class TestFileOperations: + @patch('builtins.open', create=True) + def test_file_read(self, mock_open): + """Test file reading operation.""" + mock_open.return_value.__enter__.return_value.read.return_value = "test data" + + # Code that opens and reads a file + with open('test.txt', 'r') as f: + data = f.read() + + assert data == "test data" + mock_open.assert_called_once_with('test.txt', 'r') +``` + +### Fixtures for Error Testing + +Use fixtures to create consistent mock errors: + +```python +# tests/fixtures/testing_fixtures.py +@pytest.fixture +def db_with_error(): + """Database session that raises errors.""" + mock_db = Mock() + mock_db.query.side_effect = SQLAlchemyError("Database connection failed") + mock_db.add.side_effect = SQLAlchemyError("Database insert failed") + mock_db.commit.side_effect = SQLAlchemyError("Database commit failed") + mock_db.rollback.return_value = None + return mock_db + +# Usage in tests +def test_handles_db_error(db_with_error): + """Test 
graceful handling of database errors.""" + service = VendorService() + + with pytest.raises(DatabaseException): + service.create_vendor(db_with_error, vendor_data, user) +``` + +--- + +## Best Practices + +### 1. Test Independence + +Each test should be completely independent: + +```python +# ✅ GOOD - Independent tests +def test_create_user(db): + user = create_user(db, "user1@test.com") + assert user.email == "user1@test.com" + +def test_delete_user(db): + user = create_user(db, "user2@test.com") # Creates own data + delete_user(db, user.id) + assert get_user(db, user.id) is None + +# ❌ BAD - Tests depend on each other +user_id = None + +def test_create_user(db): + global user_id + user = create_user(db, "user@test.com") + user_id = user.id # State shared between tests + +def test_delete_user(db): + delete_user(db, user_id) # Depends on previous test +``` + +### 2. Arrange-Act-Assert Pattern + +Structure tests clearly using AAA pattern: + +```python +def test_vendor_creation(db, test_user): + # Arrange - Set up test data + vendor_data = VendorCreate( + vendor_code="TESTVENDOR", + vendor_name="Test Vendor" + ) + + # Act - Perform the action + vendor = VendorService().create_vendor(db, vendor_data, test_user) + + # Assert - Verify the results + assert vendor.vendor_code == "TESTVENDOR" + assert vendor.owner_user_id == test_user.id +``` + +### 3. 
Test One Thing + +Each test should verify a single behavior: + +```python +# ✅ GOOD - Tests one specific behavior +def test_admin_creates_verified_vendor(db, test_admin): + """Test that admin users create verified vendors.""" + vendor = create_vendor(db, test_admin) + assert vendor.is_verified is True + +def test_regular_user_creates_unverified_vendor(db, test_user): + """Test that regular users create unverified vendors.""" + vendor = create_vendor(db, test_user) + assert vendor.is_verified is False + +# ❌ BAD - Tests multiple behaviors +def test_vendor_creation(db, test_admin, test_user): + """Test vendor creation.""" # Vague docstring + admin_vendor = create_vendor(db, test_admin) + user_vendor = create_vendor(db, test_user) + + assert admin_vendor.is_verified is True + assert user_vendor.is_verified is False + assert admin_vendor.is_active is True + assert user_vendor.is_active is True + # Testing too many things +``` + +### 4. Use Descriptive Names + +Test names should describe what is being tested: + +```python +# ✅ GOOD +def test_create_vendor_with_duplicate_code_raises_exception(db, test_vendor): + """Test that creating vendor with duplicate code raises VendorAlreadyExistsException.""" + +# ❌ BAD +def test_vendor_error(db): + """Test vendor.""" +``` + +### 5. Test Error Cases + +Always test both success and failure paths: + +```python +class TestVendorService: + def test_create_vendor_success(self, db, test_user): + """Test successful vendor creation.""" + # Test happy path + + def test_create_vendor_duplicate_code(self, db, test_user, test_vendor): + """Test error when vendor code already exists.""" + # Test error case + + def test_create_vendor_invalid_data(self, db, test_user): + """Test error with invalid vendor data.""" + # Test validation error + + def test_create_vendor_unauthorized(self, db): + """Test error when user is not authorized.""" + # Test authorization error +``` + +### 6. 
Use Markers Consistently + +Apply appropriate markers to all tests: + +```python +@pytest.mark.unit # Test level +@pytest.mark.auth # Functionality +class TestAuthService: + def test_login_success(self): + """Test successful login.""" +``` + +### 7. Clean Up Resources + +Ensure proper cleanup to prevent resource leaks: + +```python +@pytest.fixture +def test_user(db, auth_manager): + """Create a test user.""" + user = User(...) + db.add(user) + db.commit() + db.refresh(user) + db.expunge(user) # ✅ Detach from session + return user +``` + +### 8. Avoid Testing Implementation Details + +Test behavior, not implementation: + +```python +# ✅ GOOD - Tests behavior +def test_password_is_hashed(db): + """Test that passwords are stored hashed.""" + user = create_user(db, "test@example.com", "password123") + assert user.hashed_password != "password123" + assert len(user.hashed_password) > 50 + +# ❌ BAD - Tests implementation +def test_password_uses_bcrypt(db): + """Test that password hashing uses bcrypt.""" + user = create_user(db, "test@example.com", "password123") + assert user.hashed_password.startswith("$2b$") # Assumes bcrypt +``` + +### 9. Test Data Uniqueness + +Use UUIDs to ensure test data uniqueness: + +```python +def test_concurrent_user_creation(db): + """Test creating multiple users with unique data.""" + unique_id = str(uuid.uuid4())[:8] + user = User( + email=f"test_{unique_id}@example.com", + username=f"testuser_{unique_id}" + ) + db.add(user) + db.commit() +``` + +### 10. 
Coverage Goals + +Aim for high test coverage: +- Unit tests: > 90% coverage +- Integration tests: > 80% coverage +- Overall: > 80% coverage (enforced in pytest.ini) + +Check coverage with: +```bash +make test-coverage +# Opens htmlcov/index.html in browser +``` + +--- + +## Troubleshooting + +### Common Issues + +#### Issue: Tests Fail Intermittently + +**Cause**: Tests depend on each other or share state + +**Solution**: Ensure test independence, use function-scoped fixtures + +```python +# Use function scope for fixtures that modify data +@pytest.fixture(scope="function") # ✅ New instance per test +def db(engine, testing_session_local): + ... +``` + +#### Issue: "ResourceWarning: unclosed connection" + +**Cause**: Database sessions not properly closed + +**Solution**: Use `db.expunge()` after creating test objects + +```python +@pytest.fixture +def test_user(db): + user = User(...) + db.add(user) + db.commit() + db.refresh(user) + db.expunge(user) # ✅ Detach from session + return user +``` + +#### Issue: Tests Slow + +**Cause**: Too many database operations, not using transactions + +**Solution**: +- Mark slow tests with `@pytest.mark.slow` +- Use session-scoped fixtures where possible +- Batch database operations + +```python +# ✅ Batch create +products = [Product(...) for i in range(100)] +db.add_all(products) +db.commit() + +# ❌ Individual creates (slow) +for i in range(100): + product = Product(...) + db.add(product) + db.commit() +``` + +#### Issue: Import Errors + +**Cause**: Python path not set correctly + +**Solution**: Run tests from project root: + +```bash +# ✅ From project root +pytest tests/ + +# ❌ From tests directory +cd tests && pytest . 
+``` + +#### Issue: Fixture Not Found + +**Cause**: Fixture not imported or registered + +**Solution**: Check `pytest_plugins` in conftest.py: + +```python +# tests/conftest.py +pytest_plugins = [ + "tests.fixtures.auth_fixtures", + "tests.fixtures.vendor_fixtures", + # Add your fixture module here +] +``` + +### Debug Tips + +#### 1. Verbose Output + +```bash +pytest tests/ -v # Verbose test names +pytest tests/ -vv # Very verbose (shows assertion details) +``` + +#### 2. Show Local Variables + +```bash +pytest tests/ --showlocals # Show local vars on failure +pytest tests/ -l # Short form +``` + +#### 3. Drop to Debugger on Failure + +```bash +pytest tests/ --pdb # Drop to pdb on first failure +``` + +#### 4. Run Specific Test + +```bash +pytest tests/unit/services/test_auth_service.py::TestAuthService::test_login_success -v +``` + +#### 5. Show Print Statements + +```bash +pytest tests/ -s # Show print() and log output +``` + +#### 6. Run Last Failed Tests + +```bash +pytest tests/ --lf # Last failed +pytest tests/ --ff # Failed first +``` + +### Getting Help + +If you encounter issues: + +1. Check this documentation +2. Review test examples in the codebase +3. Check pytest.ini for configuration +4. Review conftest.py files for available fixtures +5. Ask team members or create an issue + +--- + +## Summary + +This testing guide covers: +- ✅ Four-level test architecture (Unit/Integration/System/Performance) +- ✅ Running tests with Make commands and pytest +- ✅ Test structure and organization +- ✅ Writing tests for each level +- ✅ Using fixtures effectively +- ✅ Mocking patterns and best practices +- ✅ Best practices for maintainable tests +- ✅ Troubleshooting common issues + +For information about maintaining and extending the test suite, see [Test Maintenance Guide](test-maintenance.md).