Testing Guide for Developers
This guide explains how our test suite is structured, how to run tests effectively, and how to maintain test quality.
Quick Start
# Install test dependencies
make install-test
# Run all tests
make test
# Run fast tests only (development workflow)
make test-fast
# Run with coverage
make test-coverage
Test Structure Overview
Our test suite is organized hierarchically by test type and execution speed to optimize development workflows:
tests/
├── conftest.py # Core test configuration and database fixtures
├── pytest.ini # Test configuration with markers and coverage
├── fixtures/ # Domain-organized test fixtures
│ ├── auth_fixtures.py # Users, tokens, authentication headers
│ ├── product_fixtures.py # Products, factories, bulk test data
│ ├── shop_fixtures.py # Shops, stock, shop-product relationships
│ └── marketplace_fixtures.py # Import jobs and marketplace data
├── unit/ # Fast, isolated component tests (< 1 second)
│ ├── models/ # Database and API model tests
│ ├── utils/ # Utility function tests
│ ├── services/ # Business logic tests
│ └── middleware/ # Middleware component tests
├── integration/ # Multi-component tests (1-10 seconds)
│ ├── api/v1/ # API endpoint tests with database
│ ├── security/ # Authentication, authorization tests
│ ├── tasks/ # Background task integration tests
│ └── workflows/ # Multi-step process tests
├── performance/ # Performance benchmarks (10+ seconds)
│ └── test_api_performance.py # Load testing and benchmarks
├── system/ # End-to-end system tests (30+ seconds)
│ └── test_error_handling.py # Application-wide error handling
└── test_data/ # Static test data files
└── csv/sample_products.csv # Sample CSV for import testing
Test Categories and When to Use Each
Unit Tests (tests/unit/)
Purpose: Test individual components in isolation
Speed: Very fast (< 1 second each)
Use when: Testing business logic, data processing, model validation
# Run during active development
pytest -m unit
# Example locations:
tests/unit/services/test_product_service.py # Business logic
tests/unit/utils/test_data_processing.py # Utility functions
tests/unit/models/test_database_models.py # Model validation
Integration Tests (tests/integration/)
Purpose: Test component interactions
Speed: Moderate (1-10 seconds each)
Use when: Testing API endpoints, service interactions, workflows
# Run before commits
pytest -m integration
# Example locations:
tests/integration/api/v1/test_admin_endpoints.py # API endpoints
tests/integration/security/test_authentication.py # Auth workflows
tests/integration/workflows/test_product_import.py # Multi-step processes
Performance Tests (tests/performance/)
Purpose: Validate performance requirements
Speed: Slow (10+ seconds each)
Use when: Testing response times, load capacity, large data processing
# Run periodically or in CI
pytest -m performance
System Tests (tests/system/)
Purpose: End-to-end application behavior
Speed: Slowest (30+ seconds each)
Use when: Testing complete user scenarios, error handling across layers
# Run before releases
pytest -m system
Daily Development Workflow
During Active Development
# Quick feedback loop - run relevant unit tests
pytest tests/unit/services/test_product_service.py -v
# Test specific functionality you're working on
pytest -k "product and create" -m unit
# Fast comprehensive check
make test-fast # Equivalent to: pytest -m "not slow"
Before Committing Code
# Run unit and integration tests
make test-unit
make test-integration
# Or run both with coverage
make test-coverage
Before Creating Pull Request
# Full test suite with linting
make ci # Runs format, lint, and test-coverage
# Check if all tests pass
make test
Running Specific Tests
By Test Type
# Fast unit tests only
pytest -m unit
# Integration tests only
pytest -m integration
# Everything except slow tests
pytest -m "not slow"
# Database-dependent tests
pytest -m database
# Authentication-related tests
pytest -m auth
By Component/Domain
# All product-related tests
pytest -k "product"
# Admin functionality tests
pytest -m admin
# API endpoint tests
pytest -m api
# All tests in a directory
pytest tests/unit/services/ -v
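These markers only work if they are registered in `pytest.ini`; unregistered markers produce warnings (or errors under `--strict-markers`). A plausible fragment — the marker names come from this guide, but the descriptions are assumptions, so check the actual file:

```ini
[pytest]
markers =
    unit: fast, isolated component tests
    integration: multi-component tests that hit the database
    performance: benchmarks and load tests
    system: end-to-end scenarios
    slow: long-running tests, excluded by -m "not slow"
    database: tests requiring a database
    auth: authentication-related tests
    admin: admin functionality tests
    api: API endpoint tests
    products: product-domain tests
```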
By Specific Files or Methods
# Specific test file
pytest tests/unit/services/test_product_service.py -v
# Specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v
# Specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
Test Fixtures and Data
Using Existing Fixtures
Our fixtures are organized by domain in the fixtures/ directory:
# In your test file
def test_product_creation(test_user, test_shop, auth_headers):
    """Uses fixtures from auth_fixtures.py and shop_fixtures.py"""
    # test_user: Creates a test user
    # test_shop: Creates a test shop owned by test_user
    # auth_headers: Provides authentication headers for API calls

def test_multiple_products(multiple_products):
    """Uses product_fixtures.py fixtures"""
    # multiple_products: Creates 5 test products with different attributes
    assert len(multiple_products) == 5

def test_with_factory(product_factory, db):
    """Uses factory fixtures for custom test data"""
    # Create custom product with specific attributes
    product = product_factory(db, title="Custom Product", price="99.99")
    assert product.title == "Custom Product"
Available Fixtures by Domain
Authentication (auth_fixtures.py):
- `test_user`, `test_admin`, `other_user`
- `auth_headers`, `admin_headers`
- `auth_manager`
Products (product_fixtures.py):
- `test_product`, `unique_product`, `multiple_products`
- `product_factory` (for custom products)
Shops (shop_fixtures.py):
- `test_shop`, `unique_shop`, `inactive_shop`, `verified_shop`
- `shop_product`, `test_stock`, `multiple_stocks`
- `shop_factory` (for custom shops)
Marketplace (marketplace_fixtures.py):
- `test_marketplace_job`
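If a factory fixture you need doesn't exist yet, the pattern is straightforward. A minimal sketch of how a fixture like `product_factory` might be built — the `Product` stand-in, the default values, and the `db` handling here are illustrative, not the real model:

```python
import pytest
from decimal import Decimal

class Product:
    """Illustrative stand-in; the real model lives in the app package."""
    def __init__(self, title, price):
        self.title = title
        self.price = Decimal(price)

def make_product(db=None, title="Fixture Product", price="9.99"):
    """Build a product with overridable defaults.
    A real factory would add the object to `db` and flush/commit."""
    return Product(title=title, price=price)

@pytest.fixture
def product_factory():
    """Expose the factory so each test can create tailored products."""
    return make_product
```

Keeping the construction logic in a plain helper (`make_product`) and returning it from the fixture makes the factory easy to call repeatedly within one test.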
Writing New Tests
Test File Location
Choose location based on what you're testing:
# Business logic → unit tests
tests/unit/services/test_my_new_service.py
# API endpoints → integration tests
tests/integration/api/v1/test_my_new_endpoints.py
# Multi-component workflows → integration tests
tests/integration/workflows/test_my_new_workflow.py
# Performance concerns → performance tests
tests/performance/test_my_performance.py
Test Class Structure
import pytest
from app.services.my_service import MyService
from app.exceptions import ValidationError  # adjust to wherever your project defines it

@pytest.mark.unit  # Always add appropriate markers
@pytest.mark.products  # Domain-specific marker
class TestMyService:
    """Test suite for MyService business logic"""

    def setup_method(self):
        """Run before each test method"""
        self.service = MyService()

    def test_create_item_with_valid_data_succeeds(self):
        """Test successful item creation - descriptive name explaining scenario"""
        # Arrange
        item_data = {"name": "Test Item", "price": "10.99"}
        # Act
        result = self.service.create_item(item_data)
        # Assert
        assert result is not None
        assert result.name == "Test Item"

    def test_create_item_with_invalid_data_raises_validation_error(self):
        """Test validation error handling"""
        # Arrange
        invalid_data = {"name": "", "price": "invalid"}
        # Act & Assert
        with pytest.raises(ValidationError):
            self.service.create_item(invalid_data)
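Paired happy-path and error tests like the two above can also be written compactly with `pytest.mark.parametrize`. A sketch — the `validate_price` helper is purely illustrative, not part of our codebase:

```python
import pytest

def validate_price(raw):
    """Illustrative validator: parse a price string, reject non-positive values."""
    value = float(raw)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("price must be positive")
    return value

@pytest.mark.unit
@pytest.mark.parametrize("raw, expected", [("10.99", 10.99), ("1", 1.0)])
def test_validate_price_accepts_valid_values(raw, expected):
    assert validate_price(raw) == expected

@pytest.mark.unit
@pytest.mark.parametrize("raw", ["0", "-5", "abc"])
def test_validate_price_rejects_invalid_values(raw):
    with pytest.raises(ValueError):
        validate_price(raw)
```

Each parameter tuple runs as its own test case, so a failure report pinpoints the exact input that broke.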
API Integration Test Example
import pytest

@pytest.mark.integration
@pytest.mark.api
@pytest.mark.products
class TestProductEndpoints:
    """Integration tests for product API endpoints"""

    def test_create_product_endpoint_success(self, client, auth_headers):
        """Test successful product creation via API"""
        # Arrange
        product_data = {
            "product_id": "TEST001",
            "title": "Test Product",
            "price": "19.99"
        }
        # Act
        response = client.post("/api/v1/product",
                               json=product_data,
                               headers=auth_headers)
        # Assert
        assert response.status_code == 200
        assert response.json()["product_id"] == "TEST001"
Test Naming Conventions
Files
- `test_{component_name}.py` for the file name
- Mirror your source structure: `app/services/product.py` → `tests/unit/services/test_product_service.py`
Classes
- `TestComponentName` for the main component
- `TestComponentValidation` for validation-specific tests
- `TestComponentErrorHandling` for error scenarios
Methods
Use descriptive names that explain the scenario:
# Good - explains what, when, and expected outcome
def test_create_product_with_valid_data_returns_product(self):
def test_create_product_with_duplicate_id_raises_error(self):
def test_get_product_when_not_found_returns_404(self):
# Acceptable shorter versions
def test_create_product_success(self):
def test_create_product_validation_error(self):
def test_get_product_not_found(self):
Coverage Requirements
We maintain high coverage standards:
- Minimum overall coverage: 80%
- New code coverage: 90%+
- Critical paths: 95%+
# Check coverage
make test-coverage
# View detailed HTML report
open htmlcov/index.html
# Fail build if coverage too low
pytest --cov=app --cov-fail-under=80
Debugging Failed Tests
Get Detailed Information
# Verbose output with local variables
pytest tests/path/to/test.py -vv --tb=long --showlocals
# Stop on first failure
pytest -x
# Re-run only failed tests
pytest --lf
Common Issues and Solutions
Import Errors:
# Ensure you're in project root and have installed in dev mode
pip install -e .
PYTHONPATH=. pytest
Database Issues:
# Tests use in-memory SQLite by default
# Check if fixtures are properly imported
pytest --fixtures tests/
Fixture Not Found:
# Ensure fixture modules are listed in conftest.py pytest_plugins
# Check fixture dependencies (test_shop needs test_user)
Performance and Optimization
Speed Up Test Runs
# Run in parallel (install pytest-xdist first)
pytest -n auto
# Skip slow tests during development
pytest -m "not slow"
# Run only changed tests (install pytest-testmon)
pytest --testmon
Find Slow Tests
# Show 10 slowest tests
pytest --durations=10
# Show all test durations
pytest --durations=0
Continuous Integration
Our tests integrate with CI/CD pipelines through make targets:
# Commands used in CI
make ci # Format, lint, test with coverage
make test-fast # Quick feedback in early CI stages
make test-coverage # Full test run with coverage reporting
The CI pipeline:
- Runs `make test-fast` for quick feedback
- Runs `make ci` for comprehensive checks
- Generates coverage reports in XML format
- Uploads coverage to reporting tools
Best Practices Summary
DO:
- Write tests for new code before committing
- Use descriptive test names explaining the scenario
- Keep unit tests fast (< 1 second each)
- Use appropriate fixtures for test data
- Add proper pytest markers to categorize tests
- Test both happy path and error scenarios
- Maintain good test coverage (80%+)
DON'T:
- Write tests that depend on external services (use mocks)
- Create tests that depend on execution order
- Use hardcoded values that might change
- Write overly complex test setups
- Ignore failing tests
- Skip adding tests for bug fixes
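For the first DON'T, `unittest.mock` keeps external services out of the test run. A short sketch — the client and its `get_rate` method are hypothetical, not part of our codebase:

```python
from unittest.mock import MagicMock

def fetch_exchange_rate(client, currency):
    """Hypothetical code under test: delegates to an external API client."""
    return client.get_rate(currency)

def test_fetch_exchange_rate_uses_mocked_client():
    client = MagicMock()
    client.get_rate.return_value = 1.08  # canned response, no network call
    assert fetch_exchange_rate(client, "EUR") == 1.08
    client.get_rate.assert_called_once_with("EUR")
```

The mock both stands in for the network dependency and verifies the interaction (which method was called, with which arguments).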
Getting Help
- Examples: Look at existing tests in similar components
- Fixtures: Check `tests/fixtures/` for available test data
- Configuration: See `pytest.ini` for available markers
- Make targets: Run `make help` to see all available commands
- Team support: Ask in team channels or create GitHub issues
Make Commands Reference
make install-test # Install test dependencies
make test # Run all tests
make test-unit # Run unit tests only
make test-integration # Run integration tests only
make test-fast # Run all except slow tests
make test-coverage # Run with coverage report
make ci # Full CI pipeline (format, lint, test)
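These targets map onto pytest invocations roughly as follows; the recipes are assumed from the commands shown in this guide, and the real Makefile is authoritative:

```make
test-fast:
	pytest -m "not slow"

test-unit:
	pytest -m unit

test-coverage:
	pytest --cov=app --cov-report=html --cov-report=xml --cov-fail-under=80
```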
Use this guide as your daily reference for testing. The structure is designed to give you fast feedback during development while maintaining comprehensive test coverage.