# Test Maintenance Guide
## Overview
This guide provides detailed information on maintaining and extending the test suite for the Orion platform. It covers test structure, configuration files, adding new tests, updating fixtures, and keeping tests maintainable as the codebase evolves.
## Table of Contents
- [Test Configuration](#test-configuration)
- [Fixture System](#fixture-system)
- [Adding New Tests](#adding-new-tests)
- [Updating Tests](#updating-tests)
- [Test Coverage](#test-coverage)
- [Continuous Improvements](#continuous-improvements)
- [Common Maintenance Tasks](#common-maintenance-tasks)
---
## Test Configuration
### pytest.ini
The `pytest.ini` file at the root of the project contains all pytest configuration:
```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*

# Enhanced addopts for better development experience
addopts =
    -v                          # Verbose output
    --tb=short                  # Short traceback format
    --strict-markers            # Enforce marker registration
    --strict-config             # Enforce strict config
    --color=yes                 # Colored output
    --durations=10              # Show 10 slowest tests
    --showlocals                # Show local variables on failure
    -ra                         # Show summary of all test outcomes
    --cov=app                   # Coverage for app module
    --cov=models                # Coverage for models module
    --cov=middleware            # Coverage for middleware module
    --cov-report=term-missing   # Show missing lines in terminal
    --cov-report=html:htmlcov   # Generate HTML coverage report
    --cov-fail-under=80         # Fail if coverage < 80%

minversion = 6.0

# Warning filters
filterwarnings =
    ignore::UserWarning
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::sqlalchemy.exc.SAWarning

# Timeout settings
timeout = 300
timeout_method = thread

# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)8s] %(name)s: %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
```
#### Adding New Test Markers
When you need a new test category, add it to the `markers` section in `pytest.ini`:
```ini
markers =
    unit: marks tests as unit tests
    integration: marks tests as integration tests
    # ... existing markers ...
    your_new_marker: description of your new marker category
```
Then use it in your tests:
```python
@pytest.mark.your_new_marker
def test_something():
    """Test something specific."""
    pass
```
#### Adjusting Coverage Thresholds
To modify coverage requirements:
```ini
addopts =
    # ... other options ...
    --cov-fail-under=85   # Change from 80% to 85%
```
### Directory Structure
Understanding the test directory structure:
```
tests/
├── conftest.py                             # Root conftest - core fixtures
├── pytest.ini                              # Moved to root (symlinked here)
├── fixtures/                               # Reusable fixture modules
│   ├── __init__.py
│   ├── testing_fixtures.py                 # Testing utilities (empty_db, db_with_error)
│   ├── auth_fixtures.py                    # Auth fixtures (test_user, test_admin, auth_headers)
│   ├── store_fixtures.py                   # Store fixtures (test_store, store_factory)
│   ├── marketplace_product_fixtures.py     # Product fixtures
│   ├── marketplace_import_job_fixtures.py  # Import job fixtures
│   └── customer_fixtures.py                # Customer fixtures
├── unit/                                   # Unit tests
│   ├── conftest.py                         # Unit-specific fixtures
│   ├── services/                           # Service layer tests
│   │   ├── test_auth_service.py
│   │   ├── test_store_service.py
│   │   ├── test_product_service.py
│   │   ├── test_inventory_service.py
│   │   ├── test_admin_service.py
│   │   ├── test_marketplace_service.py
│   │   └── test_stats_service.py
│   ├── middleware/                         # Middleware unit tests
│   │   ├── test_auth.py
│   │   ├── test_context.py
│   │   ├── test_store_context.py
│   │   ├── test_theme_context.py
│   │   ├── test_rate_limiter.py
│   │   ├── test_logging.py
│   │   └── test_decorators.py
│   ├── models/                             # Model tests
│   │   └── test_database_models.py
│   └── utils/                              # Utility tests
│       ├── test_csv_processor.py
│       ├── test_data_validation.py
│       └── test_data_processing.py
├── integration/                            # Integration tests
│   ├── conftest.py                         # Integration-specific fixtures
│   ├── api/                                # API endpoint tests
│   │   └── v1/
│   │       ├── test_auth_endpoints.py
│   │       ├── test_store_endpoints.py
│   │       ├── test_product_endpoints.py
│   │       ├── test_inventory_endpoints.py
│   │       ├── test_admin_endpoints.py
│   │       ├── test_marketplace_products_endpoints.py
│   │       ├── test_marketplace_import_job_endpoints.py
│   │       ├── test_marketplace_product_export.py
│   │       ├── test_stats_endpoints.py
│   │       ├── test_pagination.py
│   │       └── test_filtering.py
│   ├── middleware/                         # Middleware integration tests
│   │   ├── conftest.py
│   │   ├── test_middleware_stack.py
│   │   ├── test_context_detection_flow.py
│   │   ├── test_store_context_flow.py
│   │   └── test_theme_loading_flow.py
│   ├── security/                           # Security tests
│   │   ├── test_authentication.py
│   │   ├── test_authorization.py
│   │   └── test_input_validation.py
│   ├── tasks/                              # Background task tests
│   │   └── test_background_tasks.py
│   └── workflows/                          # Multi-step workflow tests
│       └── test_integration.py
├── system/                                 # System tests
│   ├── conftest.py                         # System-specific fixtures
│   └── test_error_handling.py              # System-wide error handling
├── performance/                            # Performance tests
│   ├── conftest.py                         # Performance-specific fixtures
│   └── test_api_performance.py             # API performance tests
└── test_data/                              # Static test data files
    └── csv/
        └── sample_products.csv
```
---
## Fixture System
### Fixture Hierarchy
Fixtures are organized hierarchically with different scopes:
#### 1. Core Fixtures (tests/conftest.py)
**Session-scoped fixtures** - Created once per test session:
```python
@pytest.fixture(scope="session")
def engine():
    """Create test database engine - reused across entire test session."""
    return create_engine(
        "sqlite:///:memory:",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
        echo=False
    )

@pytest.fixture(scope="session")
def testing_session_local(engine):
    """Create session factory - reused across entire test session."""
    return sessionmaker(autocommit=False, autoflush=False, bind=engine)
```
**Function-scoped fixtures** - Created for each test:
```python
@pytest.fixture(scope="function")
def db(engine, testing_session_local):
    """
    Create a clean database session for each test.

    - Creates all tables before test
    - Yields session to test
    - Cleans up after test
    """
    Base.metadata.create_all(bind=engine)
    db_session = testing_session_local()
    try:
        yield db_session
    finally:
        db_session.close()
        Base.metadata.drop_all(bind=engine)
        Base.metadata.create_all(bind=engine)

@pytest.fixture(scope="function")
def client(db):
    """
    Create test client with database dependency override.

    Overrides FastAPI's get_db dependency to use test database.
    """
    def override_get_db():
        try:
            yield db
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db
    try:
        client = TestClient(app)
        yield client
    finally:
        if get_db in app.dependency_overrides:
            del app.dependency_overrides[get_db]
```
#### 2. Fixture Modules (tests/fixtures/)
Fixtures are organized by domain in separate modules:
**tests/fixtures/auth_fixtures.py:**
- `auth_manager` - AuthManager instance
- `test_user` - Regular user
- `test_admin` - Admin user
- `other_user` - Additional user for access control tests
- `another_admin` - Additional admin for admin interaction tests
- `auth_headers` - Authentication headers for test_user
- `admin_headers` - Authentication headers for test_admin
**tests/fixtures/store_fixtures.py:**
- `test_store` - Basic test store
- `unique_store` - Store with unique code
- `inactive_store` - Inactive store
- `verified_store` - Verified store
- `test_product` - Store product relationship
- `test_inventory` - Inventory entry
- `store_factory` - Factory function to create stores dynamically
**tests/fixtures/marketplace_product_fixtures.py:**
- `unique_product` - Marketplace product
- `multiple_products` - List of products
**tests/fixtures/testing_fixtures.py:**
- `empty_db` - Empty database for edge case testing
- `db_with_error` - Mock database that raises errors
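The `db_with_error` fixture relies on failure injection: hand the code under test a session whose operations raise, then assert on the error path. A minimal stdlib-only sketch of that pattern (the helper name and error type here are illustrative, not the project's actual implementation):

```python
from unittest.mock import MagicMock

def make_failing_session(exc=None):
    """Build a mock DB session whose commit() raises, for error-path tests."""
    session = MagicMock()
    session.commit.side_effect = exc or RuntimeError("simulated database failure")
    return session
```

A `db_with_error`-style fixture can then simply return `make_failing_session()`, and tests assert that the service layer translates the failure into its own exception type.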
#### 3. Test-Level Fixtures (tests/{level}/conftest.py)
Each test level can have specific fixtures:
**tests/unit/conftest.py:**
```python
# Unit test specific fixtures
# Currently minimal - add unit-specific mocks here
```
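As unit-specific needs appear, this is where shared mocks would live. For instance, a stub for an outbound dependency might look like this (the `mock_email_service` name and its `send` API are hypothetical, shown only to illustrate the shape):

```python
import pytest
from unittest.mock import MagicMock

def build_email_stub():
    """Stub with a `send` method that always succeeds (hypothetical API)."""
    service = MagicMock()
    service.send.return_value = True
    return service

@pytest.fixture
def mock_email_service():
    """Unit-level fixture so tests never touch the network."""
    return build_email_stub()
```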
**tests/integration/conftest.py:**
```python
# Integration test specific fixtures
# Currently minimal - add integration-specific setup here
```
**tests/system/conftest.py:**
```python
# System test specific fixtures
# Currently minimal - add system-specific setup here
```
**tests/performance/conftest.py:**
```python
@pytest.fixture
def performance_db_session(db):
    """Database session optimized for performance testing."""
    return db
```
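Beyond reusing `db`, performance tests usually need a way to assert on elapsed time. One illustrative stdlib helper (not part of the current fixtures) is a context manager with a time budget:

```python
import time
from contextlib import contextmanager

@contextmanager
def assert_faster_than(seconds):
    """Fail the test if the wrapped block takes longer than `seconds`."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    assert elapsed < seconds, f"took {elapsed:.3f}s, budget was {seconds}s"
```

A test would then wrap the call under measurement, e.g. `with assert_faster_than(0.5): client.get(...)`, keeping the budget explicit in the test body.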
### Registering Fixture Modules
Fixture modules must be registered in `tests/conftest.py`:
```python
# Import fixtures from fixture modules
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.marketplace_product_fixtures",
    "tests.fixtures.store_fixtures",
    "tests.fixtures.customer_fixtures",
    "tests.fixtures.marketplace_import_job_fixtures",
    "tests.fixtures.testing_fixtures",
]
```
### Creating New Fixtures
When adding new fixtures:
1. **Determine the appropriate location:**
- Core database/client fixtures → `tests/conftest.py`
- Domain-specific fixtures → `tests/fixtures/{domain}_fixtures.py`
- Test-level specific → `tests/{level}/conftest.py`
2. **Choose the right scope:**
- `session` - Reuse across all tests (e.g., database engine)
- `module` - Reuse within a test module
- `function` - New instance per test (default, safest)
3. **Follow naming conventions:**
- Use descriptive names: `test_user`, `test_admin`, `test_store`
- Use `_factory` suffix for factory functions: `store_factory`
- Use `mock_` prefix for mocked objects: `mock_service`
4. **Clean up resources:**
- Always use `db.expunge()` for database objects
- Use try/finally blocks for cleanup
- Clear dependency overrides
**Example - Adding a new fixture module:**
```python
# tests/fixtures/order_fixtures.py
import pytest
import uuid

from models.database.order import Order

@pytest.fixture
def test_order(db, test_user, test_store):
    """Create a test order."""
    unique_id = str(uuid.uuid4())[:8]
    order = Order(
        order_number=f"ORDER_{unique_id}",
        user_id=test_user.id,
        store_id=test_store.id,
        total_amount=100.00,
        status="pending"
    )
    db.add(order)
    db.commit()
    db.refresh(order)
    db.expunge(order)
    return order

@pytest.fixture
def order_factory():
    """Factory to create multiple orders."""
    def _create_order(db, user_id, store_id, **kwargs):
        unique_id = str(uuid.uuid4())[:8]
        defaults = {
            "order_number": f"ORDER_{unique_id}",
            "user_id": user_id,
            "store_id": store_id,
            "total_amount": 100.00,
            "status": "pending"
        }
        defaults.update(kwargs)
        order = Order(**defaults)
        db.add(order)
        db.commit()
        db.refresh(order)
        return order
    return _create_order
```
Then register it in `tests/conftest.py`:
```python
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.store_fixtures",
    "tests.fixtures.order_fixtures",  # Add new module
    # ... other fixtures
]
```
---
## Adding New Tests
### 1. Determine Test Level
Choose the appropriate test level:
| Level | When to Use | Example |
|-------|-------------|---------|
| **Unit** | Testing a single function/class in isolation | Service method logic, utility functions |
| **Integration** | Testing multiple components together | API endpoints, database operations, middleware flow |
| **System** | Testing complete application behavior | Error handling across app, user workflows |
| **Performance** | Testing speed and scalability | Response times, concurrent requests |
### 2. Create Test File
Follow naming conventions:
```
tests/{level}/{layer}/test_{component}.py
```
Examples:
- `tests/unit/services/test_order_service.py`
- `tests/integration/api/v1/test_order_endpoints.py`
- `tests/system/test_order_workflows.py`
### 3. Write Test Class
Use descriptive class names and apply markers:
```python
# tests/unit/services/test_order_service.py
import pytest

from app.services.order_service import OrderService
from app.exceptions import OrderNotFoundException

@pytest.mark.unit
@pytest.mark.orders  # Add new marker to pytest.ini first
class TestOrderService:
    """Test suite for OrderService."""

    def setup_method(self):
        """Initialize service instance before each test."""
        self.service = OrderService()

    def test_create_order_success(self, db, test_user, test_store):
        """Test successful order creation."""
        # Arrange
        order_data = {
            "store_id": test_store.id,
            "total_amount": 100.00
        }

        # Act
        order = self.service.create_order(db, test_user.id, order_data)

        # Assert
        assert order is not None
        assert order.user_id == test_user.id
        assert order.store_id == test_store.id
        assert order.total_amount == 100.00

    def test_get_order_not_found(self, db):
        """Test getting non-existent order raises exception."""
        # Act & Assert
        with pytest.raises(OrderNotFoundException):
            self.service.get_order(db, order_id=99999)
```
### 4. Add Required Fixtures
If you need new fixtures, create them as described in [Creating New Fixtures](#creating-new-fixtures).
### 5. Run Your Tests
```bash
# Run your new test file
pytest tests/unit/services/test_order_service.py -v
# Run with coverage
pytest tests/unit/services/test_order_service.py --cov=app.services.order_service
```
---
## Updating Tests
### When to Update Tests
Update tests when:
1. **API changes** - Endpoint modifications, new parameters, response structure changes
2. **Business logic changes** - Modified validation rules, new requirements
3. **Database schema changes** - New fields, relationships, constraints
4. **Exception handling changes** - New exception types, modified error messages
5. **Test failures** - Legitimate changes that require test updates
### Update Process
1. **Run existing tests to identify failures:**
```bash
pytest tests/ -v
```
2. **Identify the cause:**
- Code change (legitimate update needed)
- Test bug (test was incorrect)
- Regression (code broke functionality)
3. **Update tests appropriately:**
**Example - API endpoint added new field:**
```python
# OLD TEST
def test_create_store(self, client, auth_headers):
    response = client.post(
        "/api/v1/store",
        headers=auth_headers,
        json={
            "store_code": "TEST123",
            "name": "Test Store"
        }
    )
    assert response.status_code == 200

# UPDATED TEST - New required field "subdomain"
def test_create_store(self, client, auth_headers):
    response = client.post(
        "/api/v1/store",
        headers=auth_headers,
        json={
            "store_code": "TEST123",
            "name": "Test Store",
            "subdomain": "teststore"  # New required field
        }
    )
    assert response.status_code == 200
    assert response.json()["subdomain"] == "teststore"  # Verify new field
```
4. **Update fixtures if necessary:**
```python
# tests/fixtures/store_fixtures.py
@pytest.fixture
def test_store(db, test_user):
    """Create a test store."""
    unique_id = str(uuid.uuid4())[:8].upper()
    store = Store(
        store_code=f"TESTSTORE_{unique_id}",
        name=f"Test Store {unique_id}",
        subdomain=f"teststore{unique_id.lower()}",  # ADD NEW FIELD
        owner_user_id=test_user.id,
        is_active=True,
        is_verified=True
    )
    db.add(store)
    db.commit()
    db.refresh(store)
    db.expunge(store)
    return store
```
5. **Run tests again to verify:**
```bash
pytest tests/ -v
```
### Handling Breaking Changes
When making breaking changes:
1. **Update all affected tests** - Search for usage
2. **Update fixtures** - Modify fixture creation
3. **Update documentation** - Keep testing guide current
4. **Communicate changes** - Inform team of test updates
---
## Test Coverage
### Measuring Coverage
The project enforces 80% code coverage minimum (configured in `pytest.ini`).
#### View Coverage Report
```bash
# Generate coverage report
make test-coverage
# Or with pytest directly
pytest tests/ --cov=app --cov=models --cov=middleware --cov-report=html
# Open HTML report
open htmlcov/index.html # macOS
xdg-open htmlcov/index.html # Linux
start htmlcov/index.html # Windows
```
#### Coverage Report Structure
The HTML report shows:
- **Overall coverage percentage** - Should be > 80%
- **Per-file coverage** - Individual module coverage
- **Missing lines** - Specific lines not covered
- **Branch coverage** - Conditional paths covered
### Improving Coverage
#### 1. Identify Uncovered Code
Look for:
- Red highlighted lines (not executed)
- Yellow highlighted lines (partial branch coverage)
- Functions/classes with 0% coverage
#### 2. Add Missing Tests
Focus on:
- **Error paths** - Exception handling
- **Edge cases** - Empty inputs, boundary values
- **Conditional branches** - if/else paths
- **Alternative flows** - Different code paths
**Example - Improving branch coverage:**
```python
# Code with conditional
def get_store_status(store):
    if store.is_active and store.is_verified:
        return "active"
    elif store.is_active:
        return "pending_verification"
    else:
        return "inactive"

# Tests needed for 100% coverage
def test_store_status_active_and_verified(test_store):
    """Test status when store is active and verified."""
    test_store.is_active = True
    test_store.is_verified = True
    assert get_store_status(test_store) == "active"

def test_store_status_active_not_verified(test_store):
    """Test status when store is active but not verified."""
    test_store.is_active = True
    test_store.is_verified = False
    assert get_store_status(test_store) == "pending_verification"

def test_store_status_inactive(test_store):
    """Test status when store is inactive."""
    test_store.is_active = False
    assert get_store_status(test_store) == "inactive"
```
#### 3. Exclude Untestable Code
For code that shouldn't be tested (e.g., main blocks, debug code):
```python
# Use pragma to exclude from coverage
if __name__ == "__main__":  # pragma: no cover
    # This won't count against coverage
    main()
```
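Exclusions can also be declared once for the whole project instead of per line; coverage.py reads regex patterns from an `exclude_lines` list (a sketch in the `setup.cfg`/`.coveragerc` section style):

```ini
[coverage:report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
    raise NotImplementedError
```

Note that overriding `exclude_lines` replaces coverage.py's default list, so `pragma: no cover` must be restated explicitly.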
### Coverage Goals by Test Level
| Test Level | Coverage Goal |
|------------|---------------|
| Unit Tests | > 90% |
| Integration Tests | > 80% |
| Overall | > 80% (enforced) |
---
## Continuous Improvements
### Identifying Improvement Opportunities
#### 1. Analyze Test Execution Time
```bash
# Show 10 slowest tests
pytest tests/ --durations=10
# Show all durations
pytest tests/ --durations=0
```
**Action Items:**
- Mark slow tests with `@pytest.mark.slow`
- Optimize fixture creation
- Reduce database operations
- Use mocks where appropriate
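The first of these action items can be sketched as follows; the marker must be registered in `pytest.ini` first (the suite uses `--strict-markers`), and the test name here is illustrative:

```python
import pytest

@pytest.mark.slow
def test_full_catalog_export():
    """A long-running test, deselectable during quick local runs."""
    pass
```

Day-to-day runs can then skip the slow bucket with `pytest -m "not slow"` and leave it to CI.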
#### 2. Review Test Failures
```bash
# Run only previously failed tests
pytest tests/ --lf
# Run failed tests first
pytest tests/ --ff
```
**Action Items:**
- Fix flaky tests (tests that fail intermittently)
- Improve test isolation
- Add better error messages
#### 3. Monitor Test Growth
Track metrics:
- Number of tests per module
- Test coverage percentage
- Average test execution time
- Failed test rate
### Refactoring Tests
#### When to Refactor
- Tests have duplicated code
- Setup code is repeated
- Tests are hard to understand
- Tests are brittle (fail frequently on minor changes)
#### Refactoring Techniques
**1. Extract common setup to fixtures:**
```python
# BEFORE - Duplicated setup
def test_store_creation(db, test_user):
    store_data = StoreCreate(store_code="TEST", name="Test")
    store = StoreService().create_store(db, store_data, test_user)
    assert store.store_code == "TEST"

def test_store_update(db, test_user):
    store_data = StoreCreate(store_code="TEST", name="Test")
    store = StoreService().create_store(db, store_data, test_user)
    # ... update logic

# AFTER - Use fixture
@pytest.fixture
def created_store(db, test_user):
    """Store already created in database."""
    store_data = StoreCreate(store_code="TEST", name="Test")
    return StoreService().create_store(db, store_data, test_user)

def test_store_creation(created_store):
    assert created_store.store_code == "TEST"

def test_store_update(db, created_store):
    # Directly use created_store
    pass
```
**2. Use factory functions for variations:**
```python
# Factory pattern for creating test data variations
@pytest.fixture
def store_factory(db, test_user):
    """Create stores with custom attributes."""
    def _create(**kwargs):
        defaults = {
            "store_code": f"TEST{str(uuid.uuid4())[:8]}",
            "name": "Test Store",
            "is_active": True,
            "is_verified": False
        }
        defaults.update(kwargs)
        store_data = StoreCreate(**defaults)
        return StoreService().create_store(db, store_data, test_user)
    return _create

# Use factory in tests
def test_inactive_store(store_factory):
    store = store_factory(is_active=False)
    assert not store.is_active

def test_verified_store(store_factory):
    store = store_factory(is_verified=True)
    assert store.is_verified
```
**3. Use parametrize for similar test cases:**
```python
# BEFORE - Multiple similar tests
def test_invalid_email_missing_at(db):
    with pytest.raises(ValidationException):
        create_user(db, email="invalidemail.com")

def test_invalid_email_missing_domain(db):
    with pytest.raises(ValidationException):
        create_user(db, email="invalid@")

def test_invalid_email_missing_tld(db):
    with pytest.raises(ValidationException):
        create_user(db, email="invalid@domain")

# AFTER - Parametrized test
@pytest.mark.parametrize("invalid_email", [
    "invalidemail.com",  # Missing @
    "invalid@",          # Missing domain
    "invalid@domain",    # Missing TLD
    "@domain.com",       # Missing local part
])
def test_invalid_email_format(db, invalid_email):
    """Test that invalid email formats raise validation error."""
    with pytest.raises(ValidationException):
        create_user(db, email=invalid_email)
```
### Testing Best Practices Evolution
As the project grows, continuously improve testing practices:
1. **Review and update this documentation**
2. **Share knowledge within the team**
3. **Conduct test code reviews**
4. **Refactor tests alongside application code**
5. **Keep tests simple and maintainable**
---
## Common Maintenance Tasks
### Task 1: Adding a New Model
When adding a new database model:
1. **Create model tests:**
```python
# tests/unit/models/test_order_model.py
import pytest

from models.database.order import Order

@pytest.mark.unit
class TestOrderModel:
    def test_order_creation(self, db, test_user, test_store):
        """Test Order model can be created."""
        order = Order(
            order_number="ORDER001",
            user_id=test_user.id,
            store_id=test_store.id,
            total_amount=100.00
        )
        db.add(order)
        db.commit()

        assert order.id is not None
        assert order.order_number == "ORDER001"
```
2. **Create fixtures:**
```python
# tests/fixtures/order_fixtures.py
@pytest.fixture
def test_order(db, test_user, test_store):
    """Create a test order."""
    # ... implementation
```
3. **Register fixtures:**
```python
# tests/conftest.py
pytest_plugins = [
    # ... existing
    "tests.fixtures.order_fixtures",
]
```
4. **Add service tests:**
```python
# tests/unit/services/test_order_service.py
@pytest.mark.unit
class TestOrderService:
    # ... test methods
```
5. **Add API endpoint tests:**
```python
# tests/integration/api/v1/test_order_endpoints.py
@pytest.mark.integration
@pytest.mark.api
class TestOrderAPI:
    # ... test methods
```
### Task 2: Updating an Existing API Endpoint
When modifying an API endpoint:
1. **Update integration tests:**
```python
# tests/integration/api/v1/test_store_endpoints.py
def test_create_store_with_new_field(self, client, auth_headers):
    response = client.post(
        "/api/v1/store",
        headers=auth_headers,
        json={
            "store_code": "TEST",
            "name": "Test",
            "new_field": "value"  # Add new field
        }
    )
    assert response.status_code == 200
    assert response.json()["new_field"] == "value"
```
2. **Update service tests if logic changed:**
```python
# tests/unit/services/test_store_service.py
def test_create_store_validates_new_field(self, db, test_user):
    # Test new validation logic
    pass
```
3. **Update fixtures if model changed:**
```python
# tests/fixtures/store_fixtures.py
@pytest.fixture
def test_store(db, test_user):
    store = Store(
        # ... existing fields
        new_field="default_value"  # Add new field
    )
    # ...
```
### Task 3: Fixing Flaky Tests
Flaky tests fail intermittently. Common causes:
**1. Test order dependency:**
```python
# BAD - Tests depend on order
test_data = None

def test_create():
    global test_data
    test_data = create_something()

def test_update():
    update_something(test_data)  # Fails if test_create doesn't run first

# GOOD - Independent tests
def test_create(db):
    data = create_something(db)
    assert data is not None

def test_update(db):
    data = create_something(db)  # Create own test data
    update_something(db, data)
```
**2. Timing issues:**
```python
# BAD - Timing-dependent
def test_async_operation():
    start_async_task()
    result = get_result()  # May not be ready yet
    assert result == expected

# GOOD - Wait for completion
def test_async_operation():
    task = start_async_task()
    result = task.wait()  # Wait for completion
    assert result == expected
```
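When the task handle offers no blocking `wait()`, a fixed `sleep()` is still the wrong fix; a small polling helper with a timeout (illustrative, stdlib-only) keeps the test fast when the task is quick and deterministic when it is slow:

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll `predicate` until it returns truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

A test would then assert on it, e.g. `assert wait_until(lambda: job.status == "done", timeout=2.0)` (the `job` object here is hypothetical).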
**3. Shared state:**
```python
# BAD - Shared mutable state
shared_list = []

def test_append():
    shared_list.append(1)
    assert len(shared_list) == 1  # Fails if test runs twice

# GOOD - Fresh state per test
def test_append():
    test_list = []  # New list per test
    test_list.append(1)
    assert len(test_list) == 1
```
### Task 4: Adding Test Markers
When you need to categorize tests:
1. **Add marker to pytest.ini:**
```ini
markers =
    # ... existing markers
    orders: marks tests as order-related functionality
    payments: marks tests as payment-related functionality
```
2. **Apply marker to tests:**
```python
@pytest.mark.orders
class TestOrderService:
    pass
```
3. **Run tests by marker:**
```bash
pytest -m orders
pytest -m "orders or payments"
```
### Task 5: Updating Test Data
When test data needs to change:
**Option 1: Update fixture:**
```python
# tests/fixtures/product_fixtures.py
@pytest.fixture
def sample_products():
    """Return sample product data."""
    return [
        {"name": "Product 1", "price": 10.00, "category": "Electronics"},
        {"name": "Product 2", "price": 20.00, "category": "Books"},
        # Add more or modify existing
    ]
```
```
**Option 2: Update static test data:**
```csv
# tests/test_data/csv/sample_products.csv
gtin,title,brand,price
1234567890123,Product 1,Brand A,10.00
2345678901234,Product 2,Brand B,20.00
```
**Option 3: Use factory with parameters:**
```python
@pytest.fixture
def product_factory(db):
    """Create products dynamically."""
    def _create(name="Product", price=10.00, **kwargs):
        # ... create product
        return product
    return _create
```
---
## Summary
This test maintenance guide covers:
- ✅ Test configuration in `pytest.ini`
- ✅ Understanding fixture hierarchy and organization
- ✅ Adding new tests at appropriate levels
- ✅ Updating tests when code changes
- ✅ Measuring and improving test coverage
- ✅ Continuous improvement strategies
- ✅ Common maintenance tasks and patterns
- ✅ Refactoring techniques for better maintainability
### Key Takeaways
1. **Keep tests organized** - Follow the established directory structure
2. **Use appropriate test levels** - Unit for isolation, Integration for component interaction
3. **Maintain fixtures** - Keep fixture modules clean and well-documented
4. **Monitor coverage** - Aim for >80% overall, >90% for critical components
5. **Refactor regularly** - Keep tests maintainable as code evolves
6. **Document changes** - Update this guide as testing practices evolve
7. **Review tests** - Include test code in code reviews
### Quick Reference
```bash
# Run all tests
make test
# Run specific test level
make test-unit
make test-integration
# Run with coverage
make test-coverage
# Run tests by marker
pytest -m auth
pytest -m "unit and stores"
# Run specific test
pytest tests/unit/services/test_store_service.py::TestStoreService::test_create_store_success
# Debug test
pytest tests/unit/services/test_store_service.py -vv --pdb
# Show slowest tests
pytest tests/ --durations=10
```
For more information on writing tests, see the [Testing Guide](testing-guide.md).