Test Maintenance Guide
Overview
This guide provides detailed information on maintaining and extending the test suite for the Wizamart platform. It covers test structure, configuration files, adding new tests, updating fixtures, and keeping tests maintainable as the codebase evolves.
Table of Contents
- Test Configuration
- Fixture System
- Adding New Tests
- Updating Tests
- Test Coverage
- Continuous Improvements
- Common Maintenance Tasks
Test Configuration
pytest.ini
The pytest.ini file at the root of the project contains all pytest configuration:
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
# Enhanced addopts for better development experience
addopts =
-v # Verbose output
--tb=short # Short traceback format
--strict-markers # Enforce marker registration
--strict-config # Enforce strict config
--color=yes # Colored output
--durations=10 # Show 10 slowest tests
--showlocals # Show local variables on failure
-ra # Show summary of all test outcomes
--cov=app # Coverage for app module
--cov=models # Coverage for models module
--cov=middleware # Coverage for middleware module
--cov-report=term-missing # Show missing lines in terminal
--cov-report=html:htmlcov # Generate HTML coverage report
--cov-fail-under=80 # Fail if coverage < 80%
minversion = 6.0
# Test filtering shortcuts
filterwarnings =
ignore::UserWarning
ignore::DeprecationWarning
ignore::PendingDeprecationWarning
ignore::sqlalchemy.exc.SAWarning
# Timeout settings
timeout = 300
timeout_method = thread
# Logging configuration
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)8s] %(name)s: %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
Adding New Test Markers
When you need a new test category, add it to the markers section in pytest.ini:
markers =
unit: marks tests as unit tests
integration: marks tests as integration tests
# ... existing markers ...
your_new_marker: description of your new marker category
Then use it in your tests:
@pytest.mark.your_new_marker
def test_something():
"""Test something specific."""
pass
Adjusting Coverage Thresholds
To modify coverage requirements:
addopts =
# ... other options ...
--cov-fail-under=85 # Change from 80% to 85%
Directory Structure
Understanding the test directory structure:
tests/
├── conftest.py # Root conftest - core fixtures
├── pytest.ini # Moved to root (symlinked here)
│
├── fixtures/ # Reusable fixture modules
│ ├── __init__.py
│ ├── testing_fixtures.py # Testing utilities (empty_db, db_with_error)
│ ├── auth_fixtures.py # Auth fixtures (test_user, test_admin, auth_headers)
│ ├── vendor_fixtures.py # Vendor fixtures (test_vendor, vendor_factory)
│ ├── marketplace_product_fixtures.py # Product fixtures
│ ├── marketplace_import_job_fixtures.py # Import job fixtures
│ └── customer_fixtures.py # Customer fixtures
│
├── unit/ # Unit tests
│ ├── conftest.py # Unit-specific fixtures
│ ├── services/ # Service layer tests
│ │ ├── test_auth_service.py
│ │ ├── test_vendor_service.py
│ │ ├── test_product_service.py
│ │ ├── test_inventory_service.py
│ │ ├── test_admin_service.py
│ │ ├── test_marketplace_service.py
│ │ └── test_stats_service.py
│ ├── middleware/ # Middleware unit tests
│ │ ├── test_auth.py
│ │ ├── test_context.py
│ │ ├── test_vendor_context.py
│ │ ├── test_theme_context.py
│ │ ├── test_rate_limiter.py
│ │ ├── test_logging.py
│ │ └── test_decorators.py
│ ├── models/ # Model tests
│ │ └── test_database_models.py
│ └── utils/ # Utility tests
│ ├── test_csv_processor.py
│ ├── test_data_validation.py
│ └── test_data_processing.py
│
├── integration/ # Integration tests
│ ├── conftest.py # Integration-specific fixtures
│ ├── api/ # API endpoint tests
│ │ └── v1/
│ │ ├── test_auth_endpoints.py
│ │ ├── test_vendor_endpoints.py
│ │ ├── test_product_endpoints.py
│ │ ├── test_inventory_endpoints.py
│ │ ├── test_admin_endpoints.py
│ │ ├── test_marketplace_products_endpoints.py
│ │ ├── test_marketplace_import_job_endpoints.py
│ │ ├── test_marketplace_product_export.py
│ │ ├── test_stats_endpoints.py
│ │ ├── test_pagination.py
│ │ └── test_filtering.py
│ ├── middleware/ # Middleware integration tests
│ │ ├── conftest.py
│ │ ├── test_middleware_stack.py
│ │ ├── test_context_detection_flow.py
│ │ ├── test_vendor_context_flow.py
│ │ └── test_theme_loading_flow.py
│ ├── security/ # Security tests
│ │ ├── test_authentication.py
│ │ ├── test_authorization.py
│ │ └── test_input_validation.py
│ ├── tasks/ # Background task tests
│ │ └── test_background_tasks.py
│ └── workflows/ # Multi-step workflow tests
│ └── test_integration.py
│
├── system/ # System tests
│ ├── conftest.py # System-specific fixtures
│ └── test_error_handling.py # System-wide error handling
│
├── performance/ # Performance tests
│ ├── conftest.py # Performance-specific fixtures
│ └── test_api_performance.py # API performance tests
│
└── test_data/ # Static test data files
└── csv/
└── sample_products.csv
Fixture System
Fixture Hierarchy
Fixtures are organized hierarchically with different scopes:
1. Core Fixtures (tests/conftest.py)
Session-scoped fixtures - Created once per test session:
@pytest.fixture(scope="session")
def engine():
"""Create test database engine - reused across entire test session."""
return create_engine(
"sqlite:///:memory:",
connect_args={"check_same_thread": False},
poolclass=StaticPool,
echo=False
)
@pytest.fixture(scope="session")
def testing_session_local(engine):
"""Create session factory - reused across entire test session."""
return sessionmaker(autocommit=False, autoflush=False, bind=engine)
Function-scoped fixtures - Created for each test:
@pytest.fixture(scope="function")
def db(engine, testing_session_local):
"""
Create a clean database session for each test.
- Creates all tables before test
- Yields session to test
- Cleans up after test
"""
Base.metadata.create_all(bind=engine)
db_session = testing_session_local()
try:
yield db_session
finally:
db_session.close()
Base.metadata.drop_all(bind=engine)
Base.metadata.create_all(bind=engine)
@pytest.fixture(scope="function")
def client(db):
"""
Create test client with database dependency override.
Overrides FastAPI's get_db dependency to use test database.
"""
def override_get_db():
try:
yield db
finally:
pass
app.dependency_overrides[get_db] = override_get_db
try:
client = TestClient(app)
yield client
finally:
if get_db in app.dependency_overrides:
del app.dependency_overrides[get_db]
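The client fixture works because FastAPI consults `app.dependency_overrides` before calling the real dependency. A conceptual, dependency-free sketch of that mapping behaviour (simplified for illustration; not FastAPI's actual implementation):

```python
# Conceptual sketch of FastAPI's dependency_overrides mapping:
# resolution checks the overrides dict first, which is exactly what
# the client fixture installs and then removes in its finally block.
def get_db():
    return "real-db"

overrides = {}

def resolve(dependency):
    # Use the override when present, otherwise the real dependency
    return overrides.get(dependency, dependency)()

before = resolve(get_db)               # "real-db"
overrides[get_db] = lambda: "test-db"  # what the fixture installs
during = resolve(get_db)               # "test-db" while the fixture is active
del overrides[get_db]                  # cleanup, as in the finally block
after = resolve(get_db)                # back to "real-db"
```

Because the override is keyed by the dependency function itself, cleanup is just deleting that key, which is why the fixture's finally block restores normal behaviour for subsequent tests.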
2. Fixture Modules (tests/fixtures/)
Fixtures are organized by domain in separate modules:
tests/fixtures/auth_fixtures.py:
- auth_manager - AuthManager instance
- test_user - Regular user
- test_admin - Admin user
- other_user - Additional user for access control tests
- another_admin - Additional admin for admin interaction tests
- auth_headers - Authentication headers for test_user
- admin_headers - Authentication headers for test_admin
tests/fixtures/vendor_fixtures.py:
- test_vendor - Basic test vendor
- unique_vendor - Vendor with unique code
- inactive_vendor - Inactive vendor
- verified_vendor - Verified vendor
- test_product - Vendor product relationship
- test_inventory - Inventory entry
- vendor_factory - Factory function to create vendors dynamically
tests/fixtures/marketplace_product_fixtures.py:
- unique_product - Marketplace product
- multiple_products - List of products
tests/fixtures/testing_fixtures.py:
- empty_db - Empty database for edge case testing
- db_with_error - Mock database that raises errors
3. Test-Level Fixtures (tests/{level}/conftest.py)
Each test level can have specific fixtures:
tests/unit/conftest.py:
# Unit test specific fixtures
# Currently minimal - add unit-specific mocks here
tests/integration/conftest.py:
# Integration test specific fixtures
# Currently minimal - add integration-specific setup here
tests/system/conftest.py:
# System test specific fixtures
# Currently minimal - add system-specific setup here
tests/performance/conftest.py:
@pytest.fixture
def performance_db_session(db):
"""Database session optimized for performance testing."""
return db
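Performance tests built on this fixture usually wrap a call in a wall-clock assertion. A minimal standard-library sketch (the helper name and threshold are illustrative, not part of the project):

```python
import time

def assert_fast(fn, max_seconds=0.5):
    """Fail if fn takes longer than max_seconds of wall-clock time."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    assert elapsed < max_seconds, f"took {elapsed:.3f}s (limit {max_seconds}s)"
    return result

# A trivially fast operation passes the check
value = assert_fast(lambda: sum(range(1000)))
```

In real performance tests, prefer `time.perf_counter()` over `time.time()` for interval measurement, and keep thresholds generous enough to tolerate CI machine variance.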
Registering Fixture Modules
Fixture modules must be registered in tests/conftest.py:
# Import fixtures from fixture modules
pytest_plugins = [
"tests.fixtures.auth_fixtures",
"tests.fixtures.marketplace_product_fixtures",
"tests.fixtures.vendor_fixtures",
"tests.fixtures.customer_fixtures",
"tests.fixtures.marketplace_import_job_fixtures",
"tests.fixtures.testing_fixtures",
]
Creating New Fixtures
When adding new fixtures:
1. Determine the appropriate location:
- Core database/client fixtures → tests/conftest.py
- Domain-specific fixtures → tests/fixtures/{domain}_fixtures.py
- Test-level specific fixtures → tests/{level}/conftest.py
2. Choose the right scope:
- session - Reuse across all tests (e.g., database engine)
- module - Reuse within a test module
- function - New instance per test (default, safest)
3. Follow naming conventions:
- Use descriptive names: test_user, test_admin, test_vendor
- Use the _factory suffix for factory functions: vendor_factory
- Use the mock_ prefix for mocked objects: mock_service
4. Clean up resources:
- Always use db.expunge() for database objects
- Use try/finally blocks for cleanup
- Clear dependency overrides
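The try/finally cleanup guideline above is the pattern pytest's yield fixtures build on: everything before the yield is setup, and the finally block is teardown that runs even when the test fails. A dependency-free sketch (the resource dict stands in for a DB session):

```python
# Generator-based fixture pattern: code before yield is setup,
# the finally block is teardown and is guaranteed to run.
def resource_fixture(log):
    resource = {"open": True}
    try:
        yield resource
    finally:
        resource["open"] = False   # e.g. db_session.close()
        log.append("cleaned")

log = []
gen = resource_fixture(log)
res = next(gen)     # setup runs, fixture yields the resource
gen.close()         # teardown: the finally block executes
```

pytest drives this generator for you; the sketch only makes the setup/teardown ordering visible.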
Example - Adding a new fixture module:
# tests/fixtures/order_fixtures.py
import pytest
import uuid
from models.database.order import Order
@pytest.fixture
def test_order(db, test_user, test_vendor):
"""Create a test order."""
unique_id = str(uuid.uuid4())[:8]
order = Order(
order_number=f"ORDER_{unique_id}",
user_id=test_user.id,
vendor_id=test_vendor.id,
total_amount=100.00,
status="pending"
)
db.add(order)
db.commit()
db.refresh(order)
db.expunge(order)
return order
@pytest.fixture
def order_factory():
"""Factory to create multiple orders."""
def _create_order(db, user_id, vendor_id, **kwargs):
unique_id = str(uuid.uuid4())[:8]
defaults = {
"order_number": f"ORDER_{unique_id}",
"user_id": user_id,
"vendor_id": vendor_id,
"total_amount": 100.00,
"status": "pending"
}
defaults.update(kwargs)
order = Order(**defaults)
db.add(order)
db.commit()
db.refresh(order)
return order
return _create_order
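The key behaviour of the factory is override merging: caller kwargs win over the defaults. That can be seen without a database; this sketch replaces the Order model and session with a plain list (purely illustrative):

```python
import uuid

def make_order_factory(store):
    """Return a factory that merges caller overrides into defaults."""
    def _create_order(user_id, vendor_id, **kwargs):
        defaults = {
            "order_number": f"ORDER_{str(uuid.uuid4())[:8]}",
            "user_id": user_id,
            "vendor_id": vendor_id,
            "total_amount": 100.00,
            "status": "pending",
        }
        defaults.update(kwargs)   # kwargs win over defaults
        store.append(defaults)    # stands in for db.add()/db.commit()
        return defaults
    return _create_order

orders = []
create_order = make_order_factory(orders)
first = create_order(user_id=1, vendor_id=2)
second = create_order(user_id=1, vendor_id=2, status="shipped")
```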
Then register it in tests/conftest.py:
pytest_plugins = [
"tests.fixtures.auth_fixtures",
"tests.fixtures.vendor_fixtures",
"tests.fixtures.order_fixtures", # Add new module
# ... other fixtures
]
Adding New Tests
1. Determine Test Level
Choose the appropriate test level:
| Level | When to Use | Example |
|---|---|---|
| Unit | Testing a single function/class in isolation | Service method logic, utility functions |
| Integration | Testing multiple components together | API endpoints, database operations, middleware flow |
| System | Testing complete application behavior | Error handling across app, user workflows |
| Performance | Testing speed and scalability | Response times, concurrent requests |
2. Create Test File
Follow naming conventions:
tests/{level}/{layer}/test_{component}.py
Examples:
- tests/unit/services/test_order_service.py
- tests/integration/api/v1/test_order_endpoints.py
- tests/system/test_order_workflows.py
3. Write Test Class
Use descriptive class names and apply markers:
# tests/unit/services/test_order_service.py
import pytest
from app.services.order_service import OrderService
from app.exceptions import OrderNotFoundException
@pytest.mark.unit
@pytest.mark.orders # Add new marker to pytest.ini first
class TestOrderService:
"""Test suite for OrderService."""
def setup_method(self):
"""Initialize service instance before each test."""
self.service = OrderService()
def test_create_order_success(self, db, test_user, test_vendor):
"""Test successful order creation."""
# Arrange
order_data = {
"vendor_id": test_vendor.id,
"total_amount": 100.00
}
# Act
order = self.service.create_order(db, test_user.id, order_data)
# Assert
assert order is not None
assert order.user_id == test_user.id
assert order.vendor_id == test_vendor.id
assert order.total_amount == 100.00
def test_get_order_not_found(self, db):
"""Test getting non-existent order raises exception."""
# Act & Assert
with pytest.raises(OrderNotFoundException):
self.service.get_order(db, order_id=99999)
4. Add Required Fixtures
If you need new fixtures, create them as described in Creating New Fixtures.
5. Run Your Tests
# Run your new test file
pytest tests/unit/services/test_order_service.py -v
# Run with coverage
pytest tests/unit/services/test_order_service.py --cov=app.services.order_service
Updating Tests
When to Update Tests
Update tests when:
- API changes - Endpoint modifications, new parameters, response structure changes
- Business logic changes - Modified validation rules, new requirements
- Database schema changes - New fields, relationships, constraints
- Exception handling changes - New exception types, modified error messages
- Test failures - Legitimate changes that require test updates
Update Process
- Run existing tests to identify failures:
pytest tests/ -v
-
Identify the cause:
- Code change (legitimate update needed)
- Test bug (test was incorrect)
- Regression (code broke functionality)
-
Update tests appropriately:
Example - API endpoint added new field:
# OLD TEST
def test_create_vendor(self, client, auth_headers):
response = client.post(
"/api/v1/vendor",
headers=auth_headers,
json={
"vendor_code": "TEST123",
"name": "Test Vendor"
}
)
assert response.status_code == 200
# UPDATED TEST - New required field "subdomain"
def test_create_vendor(self, client, auth_headers):
response = client.post(
"/api/v1/vendor",
headers=auth_headers,
json={
"vendor_code": "TEST123",
"name": "Test Vendor",
"subdomain": "testvendor" # New required field
}
)
assert response.status_code == 200
assert response.json()["subdomain"] == "testvendor" # Verify new field
- Update fixtures if necessary:
# tests/fixtures/vendor_fixtures.py
@pytest.fixture
def test_vendor(db, test_user):
"""Create a test vendor."""
unique_id = str(uuid.uuid4())[:8].upper()
vendor = Vendor(
vendor_code=f"TESTVENDOR_{unique_id}",
name=f"Test Vendor {unique_id}",
subdomain=f"testvendor{unique_id.lower()}", # ADD NEW FIELD
owner_user_id=test_user.id,
is_active=True,
is_verified=True
)
db.add(vendor)
db.commit()
db.refresh(vendor)
db.expunge(vendor)
return vendor
- Run tests again to verify:
pytest tests/ -v
Handling Breaking Changes
When making breaking changes:
- Update all affected tests - Search for usage
- Update fixtures - Modify fixture creation
- Update documentation - Keep testing guide current
- Communicate changes - Inform team of test updates
Test Coverage
Measuring Coverage
The project enforces a minimum of 80% code coverage (configured in pytest.ini).
View Coverage Report
# Generate coverage report
make test-coverage
# Or with pytest directly
pytest tests/ --cov=app --cov=models --cov=middleware --cov-report=html
# Open HTML report
open htmlcov/index.html # macOS
xdg-open htmlcov/index.html # Linux
start htmlcov/index.html # Windows
Coverage Report Structure
The HTML report shows:
- Overall coverage percentage - Should be > 80%
- Per-file coverage - Individual module coverage
- Missing lines - Specific lines not covered
- Branch coverage - Conditional paths covered
Improving Coverage
1. Identify Uncovered Code
Look for:
- Red highlighted lines (not executed)
- Yellow highlighted lines (partial branch coverage)
- Functions/classes with 0% coverage
2. Add Missing Tests
Focus on:
- Error paths - Exception handling
- Edge cases - Empty inputs, boundary values
- Conditional branches - if/else paths
- Alternative flows - Different code paths
Example - Improving branch coverage:
# Code with conditional
def get_vendor_status(vendor):
if vendor.is_active and vendor.is_verified:
return "active"
elif vendor.is_active:
return "pending_verification"
else:
return "inactive"
# Tests needed for 100% coverage
def test_vendor_status_active_and_verified(test_vendor):
"""Test status when vendor is active and verified."""
test_vendor.is_active = True
test_vendor.is_verified = True
assert get_vendor_status(test_vendor) == "active"
def test_vendor_status_active_not_verified(test_vendor):
"""Test status when vendor is active but not verified."""
test_vendor.is_active = True
test_vendor.is_verified = False
assert get_vendor_status(test_vendor) == "pending_verification"
def test_vendor_status_inactive(test_vendor):
"""Test status when vendor is inactive."""
test_vendor.is_active = False
assert get_vendor_status(test_vendor) == "inactive"
3. Exclude Untestable Code
For code that shouldn't be tested (e.g., main blocks, debug code):
# Use pragma to exclude from coverage
if __name__ == "__main__": # pragma: no cover
# This won't count against coverage
main()
Coverage Goals by Test Level
| Test Level | Coverage Goal |
|---|---|
| Unit Tests | > 90% |
| Integration Tests | > 80% |
| Overall | > 80% (enforced) |
Continuous Improvements
Identifying Improvement Opportunities
1. Analyze Test Execution Time
# Show 10 slowest tests
pytest tests/ --durations=10
# Show all durations
pytest tests/ --durations=0
Action Items:
- Mark slow tests with @pytest.mark.slow
- Optimize fixture creation
- Reduce database operations
- Use mocks where appropriate
2. Review Test Failures
# Run only previously failed tests
pytest tests/ --lf
# Run failed tests first
pytest tests/ --ff
Action Items:
- Fix flaky tests (tests that fail intermittently)
- Improve test isolation
- Add better error messages
3. Monitor Test Growth
Track metrics:
- Number of tests per module
- Test coverage percentage
- Average test execution time
- Failed test rate
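One cheap way to track test-file growth over time, without invoking pytest at all, is to count files matching the `test_*.py` convention. A standard-library sketch, demonstrated on a throwaway directory:

```python
import tempfile
from pathlib import Path

def count_test_files(root):
    """Count files matching the test_*.py naming convention under root."""
    return sum(1 for _ in Path(root).rglob("test_*.py"))

# Demo on a temporary tree standing in for the project's tests/ directory
demo = Path(tempfile.mkdtemp())
(demo / "unit").mkdir()
(demo / "unit" / "test_example.py").write_text("def test_ok():\n    assert True\n")
(demo / "unit" / "helper.py").write_text("")  # not counted
count = count_test_files(demo)
```

For test counts rather than file counts, `pytest --collect-only -q` gives exact numbers; the file-level metric is just faster to record in CI.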
Refactoring Tests
When to Refactor
- Tests have duplicated code
- Setup code is repeated
- Tests are hard to understand
- Tests are brittle (fail frequently on minor changes)
Refactoring Techniques
1. Extract common setup to fixtures:
# BEFORE - Duplicated setup
def test_vendor_creation(db, test_user):
vendor_data = VendorCreate(vendor_code="TEST", name="Test")
vendor = VendorService().create_vendor(db, vendor_data, test_user)
assert vendor.vendor_code == "TEST"
def test_vendor_update(db, test_user):
vendor_data = VendorCreate(vendor_code="TEST", name="Test")
vendor = VendorService().create_vendor(db, vendor_data, test_user)
# ... update logic
# AFTER - Use fixture
@pytest.fixture
def created_vendor(db, test_user):
"""Vendor already created in database."""
vendor_data = VendorCreate(vendor_code="TEST", name="Test")
return VendorService().create_vendor(db, vendor_data, test_user)
def test_vendor_creation(created_vendor):
assert created_vendor.vendor_code == "TEST"
def test_vendor_update(db, created_vendor):
# Directly use created_vendor
pass
2. Use factory functions for variations:
# Factory pattern for creating test data variations
@pytest.fixture
def vendor_factory(db, test_user):
"""Create vendors with custom attributes."""
def _create(**kwargs):
defaults = {
"vendor_code": f"TEST{uuid.uuid4()[:8]}",
"name": "Test Vendor",
"is_active": True,
"is_verified": False
}
defaults.update(kwargs)
vendor_data = VendorCreate(**defaults)
return VendorService().create_vendor(db, vendor_data, test_user)
return _create
# Use factory in tests
def test_inactive_vendor(vendor_factory):
vendor = vendor_factory(is_active=False)
assert not vendor.is_active
def test_verified_vendor(vendor_factory):
vendor = vendor_factory(is_verified=True)
assert vendor.is_verified
3. Use parametrize for similar test cases:
# BEFORE - Multiple similar tests
def test_invalid_email_missing_at(db):
with pytest.raises(ValidationException):
create_user(db, email="invalidemail.com")
def test_invalid_email_missing_domain(db):
with pytest.raises(ValidationException):
create_user(db, email="invalid@")
def test_invalid_email_missing_tld(db):
with pytest.raises(ValidationException):
create_user(db, email="invalid@domain")
# AFTER - Parametrized test
@pytest.mark.parametrize("invalid_email", [
"invalidemail.com", # Missing @
"invalid@", # Missing domain
"invalid@domain", # Missing TLD
"@domain.com", # Missing local part
])
def test_invalid_email_format(db, invalid_email):
"""Test that invalid email formats raise validation error."""
with pytest.raises(ValidationException):
create_user(db, email=invalid_email)
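For reference, a deliberately naive validator that the parametrized cases above would exercise; real projects should rely on their schema/validation library (e.g. Pydantic) rather than a hand-rolled regex:

```python
import re

# Naive pattern: non-empty local part, "@", domain with at least one dot.
# Illustrative only; it rejects many technically valid addresses.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(email):
    return bool(EMAIL_RE.match(email))
```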
Testing Best Practices Evolution
As the project grows, continuously improve testing practices:
- Review and update this documentation
- Share knowledge within the team
- Conduct test code reviews
- Refactor tests alongside application code
- Keep tests simple and maintainable
Common Maintenance Tasks
Task 1: Adding a New Model
When adding a new database model:
- Create model tests:
# tests/unit/models/test_order_model.py
import pytest
@pytest.mark.unit
class TestOrderModel:
def test_order_creation(self, db, test_user, test_vendor):
"""Test Order model can be created."""
order = Order(
order_number="ORDER001",
user_id=test_user.id,
vendor_id=test_vendor.id,
total_amount=100.00
)
db.add(order)
db.commit()
assert order.id is not None
assert order.order_number == "ORDER001"
- Create fixtures:
# tests/fixtures/order_fixtures.py
@pytest.fixture
def test_order(db, test_user, test_vendor):
"""Create a test order."""
# ... implementation
- Register fixtures:
# tests/conftest.py
pytest_plugins = [
# ... existing
"tests.fixtures.order_fixtures",
]
- Add service tests:
# tests/unit/services/test_order_service.py
@pytest.mark.unit
class TestOrderService:
# ... test methods
- Add API endpoint tests:
# tests/integration/api/v1/test_order_endpoints.py
@pytest.mark.integration
@pytest.mark.api
class TestOrderAPI:
# ... test methods
Task 2: Updating an Existing API Endpoint
When modifying an API endpoint:
- Update integration tests:
# tests/integration/api/v1/test_vendor_endpoints.py
def test_create_vendor_with_new_field(self, client, auth_headers):
response = client.post(
"/api/v1/vendor",
headers=auth_headers,
json={
"vendor_code": "TEST",
"name": "Test",
"new_field": "value" # Add new field
}
)
assert response.status_code == 200
assert response.json()["new_field"] == "value"
- Update service tests if logic changed:
# tests/unit/services/test_vendor_service.py
def test_create_vendor_validates_new_field(self, db, test_user):
# Test new validation logic
pass
- Update fixtures if model changed:
# tests/fixtures/vendor_fixtures.py
@pytest.fixture
def test_vendor(db, test_user):
vendor = Vendor(
# ... existing fields
new_field="default_value" # Add new field
)
# ...
Task 3: Fixing Flaky Tests
Flaky tests fail intermittently. Common causes:
1. Test order dependency:
# BAD - Tests depend on order
test_data = None
def test_create():
global test_data
test_data = create_something()
def test_update():
update_something(test_data) # Fails if test_create doesn't run first
# GOOD - Independent tests
def test_create(db):
data = create_something(db)
assert data is not None
def test_update(db):
data = create_something(db) # Create own test data
update_something(db, data)
2. Timing issues:
# BAD - Timing-dependent
def test_async_operation():
start_async_task()
result = get_result() # May not be ready yet
assert result == expected
# GOOD - Wait for completion
def test_async_operation():
task = start_async_task()
result = task.wait() # Wait for completion
assert result == expected
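In standard-library terms, "wait for completion" usually means blocking on a Future; a minimal sketch with concurrent.futures (`slow_double` stands in for the background task):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_double(x):
    # Stands in for the asynchronous/background operation
    return x * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_double, 21)
    result = future.result()  # blocks until the task has finished
```

`future.result()` also re-raises any exception from the task, so the test fails loudly instead of asserting against a half-finished result.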
3. Shared state:
# BAD - Shared mutable state
shared_list = []
def test_append():
shared_list.append(1)
assert len(shared_list) == 1 # Fails if test runs twice
# GOOD - Fresh state per test
def test_append():
test_list = [] # New list per test
test_list.append(1)
assert len(test_list) == 1
Task 4: Adding Test Markers
When you need to categorize tests:
- Add marker to pytest.ini:
markers =
# ... existing markers
orders: marks tests as order-related functionality
payments: marks tests as payment-related functionality
- Apply marker to tests:
@pytest.mark.orders
class TestOrderService:
pass
- Run tests by marker:
pytest -m orders
pytest -m "orders or payments"
Task 5: Updating Test Data
When test data needs to change:
Option 1: Update fixture:
# tests/fixtures/product_fixtures.py
@pytest.fixture
def sample_products():
"""Return sample product data."""
return [
{"name": "Product 1", "price": 10.00, "category": "Electronics"},
{"name": "Product 2", "price": 20.00, "category": "Books"},
# Add more or modify existing
]
Option 2: Update static test data:
# tests/test_data/csv/sample_products.csv
gtin,title,brand,price
1234567890123,Product 1,Brand A,10.00
2345678901234,Product 2,Brand B,20.00
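Tests typically load this static file with the stdlib csv module. A sketch using the sample rows inline (in real tests you would open tests/test_data/csv/sample_products.csv instead):

```python
import csv
import io

# Inline copy of the sample rows shown above
SAMPLE = """gtin,title,brand,price
1234567890123,Product 1,Brand A,10.00
2345678901234,Product 2,Brand B,20.00
"""

# DictReader yields one dict per row, keyed by the header line
rows = list(csv.DictReader(io.StringIO(SAMPLE)))
```

Note that csv values are strings; convert `price` explicitly if a test needs numeric comparison.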
Option 3: Use factory with parameters:
@pytest.fixture
def product_factory(db):
"""Create products dynamically."""
def _create(name="Product", price=10.00, **kwargs):
# ... create product
return product
return _create
Summary
This test maintenance guide covers:
- ✅ Test configuration in pytest.ini
- ✅ Understanding fixture hierarchy and organization
- ✅ Adding new tests at appropriate levels
- ✅ Updating tests when code changes
- ✅ Measuring and improving test coverage
- ✅ Continuous improvement strategies
- ✅ Common maintenance tasks and patterns
- ✅ Refactoring techniques for better maintainability
Key Takeaways
- Keep tests organized - Follow the established directory structure
- Use appropriate test levels - Unit for isolation, Integration for component interaction
- Maintain fixtures - Keep fixture modules clean and well-documented
- Monitor coverage - Aim for >80% overall, >90% for critical components
- Refactor regularly - Keep tests maintainable as code evolves
- Document changes - Update this guide as testing practices evolve
- Review tests - Include test code in code reviews
Quick Reference
# Run all tests
make test
# Run specific test level
make test-unit
make test-integration
# Run with coverage
make test-coverage
# Run tests by marker
pytest -m auth
pytest -m "unit and vendors"
# Run specific test
pytest tests/unit/services/test_vendor_service.py::TestVendorService::test_create_vendor_success
# Debug test
pytest tests/unit/services/test_vendor_service.py -vv --pdb
# Show slowest tests
pytest tests/ --durations=10
For more information on writing tests, see the Testing Guide.