Test Maintenance Guide
This guide covers how to maintain, update, and contribute to our test suite as the application evolves. It's designed for developers who need to modify existing tests or add new test coverage.
Test Maintenance Philosophy
Our test suite follows these core principles:
- Tests should be reliable - They pass consistently and fail only when there are real issues
- Tests should be fast - Unit tests complete in milliseconds, integration tests in seconds
- Tests should be maintainable - Easy to update when code changes
- Tests should provide clear feedback - Failures should clearly indicate what went wrong
When to Update Tests
Code Changes That Require Test Updates
API Changes:
# When you modify API endpoints, update integration tests
pytest tests/integration/api/v1/test_*_endpoints.py -v
Business Logic Changes:
# When you modify service logic, update unit tests
pytest tests/unit/services/test_*_service.py -v
Database Model Changes:
# When you modify models, update model tests
pytest tests/unit/models/test_database_models.py -v
New Features:
- Add new test files following our naming conventions
- Ensure both unit and integration test coverage
- Add appropriate pytest markers
Common Maintenance Tasks
Adding Tests for New Features
Step 1: Determine Test Type and Location
# New business logic → Unit test
tests/unit/services/test_new_feature_service.py
# New API endpoint → Integration test
tests/integration/api/v1/test_new_feature_endpoints.py
# New workflow → Integration test
tests/integration/workflows/test_new_feature_workflow.py
Step 2: Create Test File with Proper Structure
import pytest
from app.services.new_feature_service import NewFeatureService


@pytest.mark.unit
@pytest.mark.new_feature  # Add domain marker
class TestNewFeatureService:
    """Unit tests for NewFeatureService"""

    def setup_method(self):
        """Setup run before each test"""
        self.service = NewFeatureService()

    def test_new_feature_with_valid_input_succeeds(self):
        """Test the happy path"""
        # Test implementation
        pass

    def test_new_feature_with_invalid_input_raises_error(self):
        """Test error handling"""
        # Test implementation
        pass
Step 3: Add Domain Marker to pytest.ini
# Add to markers section in pytest.ini
new_feature: marks tests related to new feature functionality
Updating Tests for API Changes
Example: Adding a new field to product creation
# Before: tests/integration/api/v1/test_product_endpoints.py
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99"
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200

# After: Adding 'category' field
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99",
        "category": "Electronics"  # New field
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200
    assert response.json()["category"] == "Electronics"  # Verify new field

# Add test for validation
def test_create_product_with_invalid_category_fails(self, client, auth_headers):
    product_data = {
        "product_id": "TEST002",
        "title": "Test Product",
        "price": "19.99",
        "category": ""  # Invalid empty category
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 422
Updating Fixtures for Model Changes
When you add fields to database models, update fixtures:
# tests/fixtures/product_fixtures.py - Before
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99"
    )
    # ... rest of fixture

# After: Adding new category field
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99",
        category="Electronics"  # Add new field with sensible default
    )
    # ... rest of fixture
Handling Breaking Changes
When making breaking changes that affect many tests:
- Update fixtures first to include new required fields (see the sketch after this list)
- Run tests to identify failures: pytest -x (stop on first failure)
- Update tests systematically by domain
- Verify coverage hasn't decreased:
make test-coverage
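For the first step, it helps to keep required fields in a single place so a breaking model change needs only one edit. A minimal sketch, assuming a hypothetical new required category field on Product (the field name and default values are illustrative, not the actual schema):
# tests/fixtures/product_fixtures.py (illustrative sketch)
import pytest
from models.database import Product

# Single source of truth for required fields; update here when the model changes
PRODUCT_DEFAULTS = {
    "product_id": "TEST001",
    "title": "Test Product",
    "price": "10.99",
    "category": "Electronics",  # hypothetical new required field, added once
}

@pytest.fixture
def product_factory():
    """Factory that fills in required fields so existing tests keep passing"""
    def _create_product(db, **overrides):
        data = {**PRODUCT_DEFAULTS, **overrides}
        product = Product(**data)
        db.add(product)
        db.commit()
        db.refresh(product)
        return product
    return _create_product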
Test Data Management
Creating New Fixtures
Add domain-specific fixtures to appropriate files:
# tests/fixtures/new_domain_fixtures.py
import pytest
from models.database import NewModel


@pytest.fixture
def test_new_model(db):
    """Create a test instance of NewModel"""
    model = NewModel(
        name="Test Model",
        value="test_value"
    )
    db.add(model)
    db.commit()
    db.refresh(model)
    return model


@pytest.fixture
def new_model_factory():
    """Factory for creating custom NewModel instances"""
    def _create_new_model(db, **kwargs):
        defaults = {"name": "Default Name", "value": "default"}
        defaults.update(kwargs)
        model = NewModel(**defaults)
        db.add(model)
        db.commit()
        db.refresh(model)
        return model
    return _create_new_model
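A test can then request the factory and override only the fields it cares about. A short usage sketch, assuming the db fixture used elsewhere in this guide:
@pytest.mark.unit
def test_new_model_with_custom_value(db, new_model_factory):
    """Create an instance through the factory, overriding one field"""
    model = new_model_factory(db, value="custom_value")
    assert model.value == "custom_value"
    assert model.name == "Default Name"  # default supplied by the factory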
Register new fixture module in conftest.py:
# tests/conftest.py
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.product_fixtures",
    "tests.fixtures.shop_fixtures",
    "tests.fixtures.marketplace_fixtures",
    "tests.fixtures.new_domain_fixtures",  # Add new fixture module
]
Managing Test Data Files
Static test data in tests/test_data/:
tests/test_data/
├── csv/
│ ├── valid_products.csv # Standard valid product data
│ ├── invalid_products.csv # Data with validation errors
│ ├── large_product_set.csv # Performance testing data
│ └── new_feature_data.csv # Data for new feature testing
├── json/
│ ├── api_responses.json # Mock API responses
│ └── configuration_samples.json # Configuration test data
└── fixtures/
└── database_seeds.json # Database seed data
Update test data when adding new fields:
# Before: tests/test_data/csv/valid_products.csv
product_id,title,price
TEST001,Product 1,19.99
TEST002,Product 2,29.99
# After: Adding category field
product_id,title,price,category
TEST001,Product 1,19.99,Electronics
TEST002,Product 2,29.99,Books
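If any tests read these files directly, extend their assertions to cover the new column as well. A minimal sketch using the standard library csv module (the test name and assertion are illustrative; the path follows the layout shown above):
import csv
from pathlib import Path

TEST_DATA_DIR = Path(__file__).parent / "test_data"

def test_valid_products_csv_includes_category():
    """Every row of the valid product data should carry the new category column"""
    with open(TEST_DATA_DIR / "csv" / "valid_products.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    assert rows, "expected at least one product row"
    assert all(row.get("category") for row in rows)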
Performance Test Maintenance
Updating Performance Baselines
When application performance improves or requirements change:
# tests/performance/test_api_performance.py
def test_product_list_performance(self, client, auth_headers, db):
    # Create test data
    products = [Product(product_id=f"PERF{i:03d}") for i in range(100)]
    db.add_all(products)
    db.commit()

    # Time the request
    start_time = time.time()
    response = client.get("/api/v1/product?limit=100", headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert len(response.json()["products"]) == 100

    # Update baseline if performance has improved
    assert end_time - start_time < 1.5  # Previously was 2.0 seconds
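One option, not an existing project convention, is to keep baselines as named module-level constants so a baseline review only touches one spot. A small sketch with illustrative names and values:
# Named baselines make it obvious what to change when performance shifts
PRODUCT_LIST_BASELINE_SECONDS = 1.5  # previously 2.0 seconds

def assert_within_baseline(elapsed: float, baseline: float) -> None:
    """Fail with a clear message when a baseline is exceeded"""
    assert elapsed < baseline, f"took {elapsed:.2f}s, baseline is {baseline:.2f}s"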
Adding Performance Tests for New Features
@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.new_feature
def test_new_feature_performance_with_large_dataset(self, client, auth_headers, db):
    """Test new feature performance with realistic data volume"""
    # Create large dataset
    large_dataset = [NewModel(data=f"item_{i}") for i in range(1000)]
    db.add_all(large_dataset)
    db.commit()

    # Test performance
    start_time = time.time()
    response = client.post("/api/v1/new-feature/process",
                           json={"process_all": True},
                           headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert end_time - start_time < 10.0  # Should complete within 10 seconds
Debugging and Troubleshooting
Identifying Flaky Tests
Tests that pass/fail inconsistently need attention:
# Run the same test multiple times to identify flaky behavior
# (the --count option requires the pytest-repeat plugin)
pytest tests/path/to/flaky_test.py -v --count=10
# Run with more verbose output to see what's changing
pytest tests/path/to/flaky_test.py -vv --tb=long --showlocals
Common causes of flaky tests:
- Database state not properly cleaned between tests
- Timing issues in async operations
- External service dependencies
- Shared mutable state between tests
Fixing Common Test Issues
Database State Issues:
# Ensure proper cleanup in fixtures
@pytest.fixture
def clean_database(db):
    """Ensure clean database state"""
    yield db
    # Explicit cleanup if needed
    db.query(SomeModel).delete()
    db.commit()
Async Test Issues:
# Ensure proper async test setup
@pytest.mark.asyncio
async def test_async_operation():
    # Use await for all async operations
    result = await async_service.process_data()
    assert result is not None
Mock-Related Issues:
# Ensure mocks are properly reset between tests
def setup_method(self):
    """Reset mocks before each test"""
    self.mock_service.reset_mock()
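Creating the mocks inside setup_method avoids the problem entirely, since every test starts with fresh mock objects. A sketch of that pattern; the injected repository argument and its add method are assumptions for illustration, not the actual ProductService constructor:
from unittest.mock import MagicMock

class TestProductServiceWithMocks:
    def setup_method(self):
        """Build fresh mocks for every test instead of sharing them"""
        self.mock_repository = MagicMock()
        self.service = ProductService(repository=self.mock_repository)  # assumed injection point

    def test_create_product_persists_once(self):
        self.service.create_product({"name": "Test", "price": "10.99"})
        self.mock_repository.add.assert_called_once()  # assumed repository method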
Test Coverage Issues
Identifying gaps in coverage:
# Generate coverage report with missing lines
pytest --cov=app --cov-report=term-missing
# View HTML report for detailed analysis
pytest --cov=app --cov-report=html
open htmlcov/index.html
Adding tests for uncovered code:
# Example: Adding test for error handling branch
def test_service_method_handles_database_error(self, mock_db):
    """Test error handling path that wasn't covered"""
    # Setup mock to raise exception
    mock_db.commit.side_effect = DatabaseError("Connection failed")

    # Test that error is handled appropriately
    with pytest.raises(ServiceError):
        self.service.save_data(test_data)
Code Quality Standards
Test Code Review Checklist
Before submitting test changes:
- Tests have descriptive names explaining the scenario
- Appropriate pytest markers are used
- Test coverage hasn't decreased
- Tests are in the correct category (unit/integration/system)
- No hardcoded values that could break in different environments
- Error cases are tested, not just happy paths
- New fixtures are properly documented
- Performance tests have reasonable baselines
Refactoring Tests
When refactoring test code:
# Before: Repetitive test setup
class TestProductService:
    def test_create_product_success(self):
        service = ProductService()
        data = {"name": "Test", "price": "10.99"}
        result = service.create_product(data)
        assert result is not None

    def test_create_product_validation_error(self):
        service = ProductService()  # Duplicate setup
        data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            service.create_product(data)

# After: Using setup_method and constants
class TestProductService:
    def setup_method(self):
        self.service = ProductService()
        self.valid_data = {"name": "Test", "price": "10.99"}

    def test_create_product_success(self):
        result = self.service.create_product(self.valid_data)
        assert result is not None

    def test_create_product_validation_error(self):
        invalid_data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            self.service.create_product(invalid_data)
Working with CI/CD
Test Categories in CI Pipeline
Our CI pipeline runs tests in stages:
Stage 1: Fast Feedback
make test-fast # Unit tests + fast integration tests
Stage 2: Comprehensive Testing
make test-coverage # Full suite with coverage
Stage 3: Performance Validation (on release branches)
pytest -m performance
Making Tests CI-Friendly
Ensure tests are deterministic:
# Bad: Tests that depend on current time
def test_user_creation():
    user = create_user()
    assert user.created_at.day == datetime.now().day  # Flaky at midnight

# Good: Tests with controlled time (freezer fixture, e.g. from pytest-freezegun)
def test_user_creation(freezer):
    freezer.move_to("2024-01-15 10:00:00")
    user = create_user()
    assert user.created_at == datetime(2024, 1, 15, 10, 0, 0)
Make tests environment-independent:
# Use relative paths and environment variables
TEST_DATA_DIR = Path(__file__).parent / "test_data"
CSV_FILE = TEST_DATA_DIR / "sample_products.csv"
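Settings that differ between developer machines and CI can come from environment variables with safe defaults. A small sketch (the variable name and fallback URL are assumptions, not existing project settings):
import os

# Fall back to a local SQLite file when CI does not provide a database URL
TEST_DATABASE_URL = os.getenv("TEST_DATABASE_URL", "sqlite:///./test.db")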
Migration and Upgrade Strategies
When Upgrading Dependencies
Test dependency upgrades:
# Test with new versions before upgrading
pip install pytest==8.0.0 pytest-cov==5.0.0
make test
# If tests fail, identify compatibility issues
pytest --tb=short -x
Update test configuration for new pytest versions:
# pytest.ini - may need updates for new versions
minversion = 8.0
# Check if any deprecated features are used
Database Schema Changes
When modifying database models:
- Update model test fixtures first
- Run migration on test database
- Update affected test data files
- Run integration tests to catch relationship issues
# Update fixtures for new required fields
@pytest.fixture
def test_product(db):
    product = Product(
        # ... existing fields
        new_required_field="default_value"  # Add with sensible default
    )
    return product
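If the project manages its schema with Alembic (an assumption here; substitute whichever migration tool is actually in use), step 2 can be automated with a session-scoped fixture so the test database is always migrated before tests run. A rough sketch:
# tests/conftest.py (sketch, assuming Alembic and a TEST_DATABASE_URL environment variable)
import os

import pytest
from alembic import command
from alembic.config import Config

TEST_DATABASE_URL = os.getenv("TEST_DATABASE_URL", "sqlite:///./test.db")

@pytest.fixture(scope="session", autouse=True)
def migrated_test_database():
    """Bring the test database schema up to date before any tests run"""
    config = Config("alembic.ini")
    config.set_main_option("sqlalchemy.url", TEST_DATABASE_URL)
    command.upgrade(config, "head")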
Documentation and Knowledge Sharing
Documenting Complex Test Scenarios
For complex business logic tests:
def test_complex_pricing_calculation_scenario(self):
    """
    Test pricing calculation with multiple discounts and tax rules.

    Scenario:
    - Product price: $100
    - Member discount: 10%
    - Seasonal discount: 5% (applied after member discount)
    - Tax rate: 8.5%

    Expected calculation:
    Base: $100 → Member discount: $90 → Seasonal: $85.50 → Tax: $92.77
    """
    # Test implementation with clear steps
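The docstring's arithmetic can also be encoded directly in the test body so it fails loudly if the rules change. A sketch of the expected numbers using Decimal; the pricing_service call at the end is a hypothetical name, not the project's actual API:
from decimal import Decimal

# Expected values derived from the docstring scenario
base = Decimal("100.00")
after_member_discount = base * Decimal("0.90")                      # $90.00
after_seasonal_discount = after_member_discount * Decimal("0.95")   # $85.50
expected_total = (after_seasonal_discount * Decimal("1.085")).quantize(Decimal("0.01"))
assert expected_total == Decimal("92.77")

# result = pricing_service.calculate_total(product)  # hypothetical call to compare against expected_total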
Team Knowledge Sharing
Maintain test documentation:
- Update this guide when adding new test patterns
- Document complex fixture relationships
- Share test debugging techniques in team meetings
- Create examples for new team members
Summary: Test Maintenance Best Practices
Daily Practices:
- Run relevant tests before committing code
- Add tests for new functionality immediately
- Keep test names descriptive and current
- Update fixtures when models change
Regular Maintenance:
- Review and update performance baselines
- Refactor repetitive test code
- Clean up unused fixtures and test data
- Monitor test execution times
Long-term Strategy:
- Plan test architecture for new features
- Evaluate test coverage trends
- Update testing tools and practices
- Share knowledge across the team
Remember: Good tests are living documentation of your system's behavior. Keep them current, clear, and comprehensive to maintain a healthy codebase.
Use this guide alongside the Testing Guide for complete test management knowledge.