Added placeholder for documentation

This commit is contained in:
2025-09-21 17:16:55 +02:00
parent bca894afc2
commit 5ef018d4ff
24 changed files with 681 additions and 1024 deletions


@@ -1,551 +1 @@
# Test Maintenance Guide
This guide covers how to maintain, update, and contribute to our test suite as the application evolves. It's designed for developers who need to modify existing tests or add new test coverage.
## Test Maintenance Philosophy
Our test suite follows these core principles:
- **Tests should be reliable** - They pass consistently and fail only when there are real issues
- **Tests should be fast** - Unit tests complete in milliseconds, integration tests in seconds
- **Tests should be maintainable** - Easy to update when code changes
- **Tests should provide clear feedback** - Failures should clearly indicate what went wrong
## When to Update Tests
### Code Changes That Require Test Updates
**API Changes**:
```bash
# When you modify API endpoints, update integration tests
pytest tests/integration/api/v1/test_*_endpoints.py -v
```
**Business Logic Changes**:
```bash
# When you modify service logic, update unit tests
pytest tests/unit/services/test_*_service.py -v
```
**Database Model Changes**:
```bash
# When you modify models, update model tests
pytest tests/unit/models/test_database_models.py -v
```
**New Features**:
- Add new test files following our naming conventions
- Ensure both unit and integration test coverage
- Add appropriate pytest markers
## Common Maintenance Tasks
### Adding Tests for New Features
**Step 1: Determine Test Type and Location**
```python
# New business logic → Unit test
tests/unit/services/test_new_feature_service.py
# New API endpoint → Integration test
tests/integration/api/v1/test_new_feature_endpoints.py
# New workflow → Integration test
tests/integration/workflows/test_new_feature_workflow.py
```
**Step 2: Create Test File with Proper Structure**
```python
import pytest
from app.services.new_feature_service import NewFeatureService


@pytest.mark.unit
@pytest.mark.new_feature  # Add domain marker
class TestNewFeatureService:
    """Unit tests for NewFeatureService"""

    def setup_method(self):
        """Setup run before each test"""
        self.service = NewFeatureService()

    def test_new_feature_with_valid_input_succeeds(self):
        """Test the happy path"""
        # Test implementation
        pass

    def test_new_feature_with_invalid_input_raises_error(self):
        """Test error handling"""
        # Test implementation
        pass
```
**Step 3: Add Domain Marker to pytest.ini**
```ini
# Add to markers section in pytest.ini
new_feature: marks tests related to new feature functionality
```
### Updating Tests for API Changes
**Example: Adding a new field to product creation**
```python
# Before: tests/integration/api/v1/test_product_endpoints.py
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99"
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200


# After: Adding 'category' field
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99",
        "category": "Electronics"  # New field
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200
    assert response.json()["category"] == "Electronics"  # Verify new field


# Add a test for validation of the new field
def test_create_product_with_invalid_category_fails(self, client, auth_headers):
    product_data = {
        "product_id": "TEST002",
        "title": "Test Product",
        "price": "19.99",
        "category": ""  # Invalid empty category
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 422
```
### Updating Fixtures for Model Changes
**When you add fields to database models, update fixtures**:
```python
# tests/fixtures/product_fixtures.py - Before
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99"
    )
    # ... rest of fixture


# After: Adding new category field
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99",
        category="Electronics"  # Add new field with sensible default
    )
    # ... rest of fixture
```
### Handling Breaking Changes
**When making breaking changes that affect many tests**:
1. **Update fixtures first** to include new required fields
2. **Run tests to identify failures**: `pytest -x` (stop on first failure)
3. **Update tests systematically** by domain
4. **Verify coverage hasn't decreased**: `make test-coverage`
## Test Data Management
### Creating New Fixtures
**Add domain-specific fixtures to appropriate files**:
```python
# tests/fixtures/new_domain_fixtures.py
import pytest
from models.database import NewModel


@pytest.fixture
def test_new_model(db):
    """Create a test instance of NewModel"""
    model = NewModel(
        name="Test Model",
        value="test_value"
    )
    db.add(model)
    db.commit()
    db.refresh(model)
    return model


@pytest.fixture
def new_model_factory():
    """Factory for creating custom NewModel instances"""
    def _create_new_model(db, **kwargs):
        defaults = {"name": "Default Name", "value": "default"}
        defaults.update(kwargs)
        model = NewModel(**defaults)
        db.add(model)
        db.commit()
        db.refresh(model)
        return model
    return _create_new_model
```
**Register new fixture module in conftest.py**:
```python
# tests/conftest.py
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.product_fixtures",
    "tests.fixtures.shop_fixtures",
    "tests.fixtures.marketplace_fixtures",
    "tests.fixtures.new_domain_fixtures",  # Add new fixture module
]
```
### Managing Test Data Files
**Static test data in tests/test_data/**:
```
tests/test_data/
├── csv/
│ ├── valid_products.csv # Standard valid product data
│ ├── invalid_products.csv # Data with validation errors
│ ├── large_product_set.csv # Performance testing data
│ └── new_feature_data.csv # Data for new feature testing
├── json/
│ ├── api_responses.json # Mock API responses
│ └── configuration_samples.json # Configuration test data
└── fixtures/
└── database_seeds.json # Database seed data
```
**Update test data when adding new fields**:
```csv
# Before: tests/test_data/csv/valid_products.csv
product_id,title,price
TEST001,Product 1,19.99
TEST002,Product 2,29.99
# After: Adding category field
product_id,title,price,category
TEST001,Product 1,19.99,Electronics
TEST002,Product 2,29.99,Books
```
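When a schema change adds a column, a lightweight guard can catch stale CSV test data early. A sketch, assuming the column set from the example above; the inline sample stands in for `tests/test_data/csv/valid_products.csv`:

```python
import csv
import io

# Sanity check that a test-data CSV carries every column the current
# schema expects. EXPECTED_COLUMNS is an assumption for illustration.
EXPECTED_COLUMNS = {"product_id", "title", "price", "category"}

sample = "product_id,title,price,category\nTEST001,Product 1,19.99,Electronics\n"
reader = csv.DictReader(io.StringIO(sample))
missing = EXPECTED_COLUMNS - set(reader.fieldnames)
assert not missing, f"test data is missing columns: {missing}"
```

In practice the same check can live in a small unit test that opens each file under `tests/test_data/csv/`, so a forgotten CSV fails fast instead of surfacing as a confusing import error.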
## Performance Test Maintenance
### Updating Performance Baselines
**When application performance improves or requirements change**:
```python
# tests/performance/test_api_performance.py
import time


def test_product_list_performance(self, client, auth_headers, db):
    # Create test data
    products = [Product(product_id=f"PERF{i:03d}") for i in range(100)]
    db.add_all(products)
    db.commit()

    # Time the request
    start_time = time.time()
    response = client.get("/api/v1/product?limit=100", headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert len(response.json()["products"]) == 100
    # Update the baseline if performance has improved
    assert end_time - start_time < 1.5  # Previously 2.0 seconds
```
### Adding Performance Tests for New Features
```python
@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.new_feature
def test_new_feature_performance_with_large_dataset(self, client, auth_headers, db):
    """Test new feature performance with realistic data volume"""
    # Create large dataset
    large_dataset = [NewModel(data=f"item_{i}") for i in range(1000)]
    db.add_all(large_dataset)
    db.commit()

    # Test performance
    start_time = time.time()
    response = client.post("/api/v1/new-feature/process",
                           json={"process_all": True},
                           headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert end_time - start_time < 10.0  # Should complete within 10 seconds
```
## Debugging and Troubleshooting
### Identifying Flaky Tests
**Tests that pass/fail inconsistently need attention**:
```bash
# Run the same test repeatedly to expose flaky behavior
# (the --count option comes from the pytest-repeat plugin)
pytest tests/path/to/flaky_test.py -v --count=10

# Run with more verbose output to see what's changing
pytest tests/path/to/flaky_test.py -vv --tb=long --showlocals
```
**Common causes of flaky tests**:
- Database state not properly cleaned between tests
- Timing issues in async operations
- External service dependencies
- Shared mutable state between tests
### Fixing Common Test Issues
**Database State Issues**:
```python
# Ensure proper cleanup in fixtures
@pytest.fixture
def clean_database(db):
    """Ensure clean database state"""
    yield db
    # Explicit cleanup if needed
    db.query(SomeModel).delete()
    db.commit()
```
**Async Test Issues**:
```python
# Ensure proper async test setup
@pytest.mark.asyncio
async def test_async_operation():
    # Use await for all async operations
    result = await async_service.process_data()
    assert result is not None
```
**Mock-Related Issues**:
```python
# Ensure mocks are properly reset between tests
def setup_method(self):
    """Reset mocks before each test"""
    self.mock_service.reset_mock()
```
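To see what `reset_mock()` actually does, here is a small self-contained sketch using `unittest.mock` (a standalone illustration, not code from our suite):

```python
from unittest.mock import Mock

# A Mock keeps its call history across tests unless it is reset, so
# assertions on call_count can leak state from a previous test.
service = Mock()
service.fetch.return_value = {"id": 1}

service.fetch()
assert service.fetch.call_count == 1

service.reset_mock()  # clears call history; return_value is kept by default
assert service.fetch.call_count == 0
assert service.fetch() == {"id": 1}  # configured return value survives the reset
```

Note that `reset_mock()` deliberately preserves `return_value` and `side_effect`; pass `reset_mock(return_value=True, side_effect=True)` to clear those too.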
### Test Coverage Issues
**Identifying gaps in coverage**:
```bash
# Generate coverage report with missing lines
pytest --cov=app --cov-report=term-missing
# View HTML report for detailed analysis
pytest --cov=app --cov-report=html
open htmlcov/index.html
```
**Adding tests for uncovered code**:
```python
# Example: Adding a test for an error-handling branch
def test_service_method_handles_database_error(self, mock_db):
    """Test error handling path that wasn't covered"""
    # Setup mock to raise exception
    mock_db.commit.side_effect = DatabaseError("Connection failed")

    # Test that the error is handled appropriately
    with pytest.raises(ServiceError):
        self.service.save_data(test_data)
```
## Code Quality Standards
### Test Code Review Checklist
**Before submitting test changes**:
- [ ] Tests have descriptive names explaining the scenario
- [ ] Appropriate pytest markers are used
- [ ] Test coverage hasn't decreased
- [ ] Tests are in the correct category (unit/integration/system)
- [ ] No hardcoded values that could break in different environments
- [ ] Error cases are tested, not just happy paths
- [ ] New fixtures are properly documented
- [ ] Performance tests have reasonable baselines
### Refactoring Tests
**When refactoring test code**:
```python
# Before: Repetitive test setup
class TestProductService:
    def test_create_product_success(self):
        service = ProductService()
        data = {"name": "Test", "price": "10.99"}
        result = service.create_product(data)
        assert result is not None

    def test_create_product_validation_error(self):
        service = ProductService()  # Duplicate setup
        data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            service.create_product(data)


# After: Using setup_method and shared test data
class TestProductService:
    def setup_method(self):
        self.service = ProductService()
        self.valid_data = {"name": "Test", "price": "10.99"}

    def test_create_product_success(self):
        result = self.service.create_product(self.valid_data)
        assert result is not None

    def test_create_product_validation_error(self):
        invalid_data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            self.service.create_product(invalid_data)
```
## Working with CI/CD
### Test Categories in CI Pipeline
Our CI pipeline runs tests in stages:
**Stage 1: Fast Feedback**
```bash
make test-fast # Unit tests + fast integration tests
```
**Stage 2: Comprehensive Testing**
```bash
make test-coverage # Full suite with coverage
```
**Stage 3: Performance Validation** (on release branches)
```bash
pytest -m performance
```
### Making Tests CI-Friendly
**Ensure tests are deterministic**:
```python
from datetime import datetime
from freezegun import freeze_time

# Bad: Tests that depend on the current time
def test_user_creation():
    user = create_user()
    assert user.created_at.day == datetime.now().day  # Flaky at midnight


# Good: Tests with controlled time (using the freezegun library)
@freeze_time("2024-01-15 10:00:00")
def test_user_creation():
    user = create_user()
    assert user.created_at == datetime(2024, 1, 15, 10, 0, 0)
```
**Make tests environment-independent**:
```python
import os
from pathlib import Path

# Use relative paths and environment variables instead of absolute paths
TEST_DATA_DIR = Path(__file__).parent / "test_data"
CSV_FILE = TEST_DATA_DIR / "sample_products.csv"
# Illustrative: overridable settings come from the environment with safe defaults
DATABASE_URL = os.environ.get("TEST_DATABASE_URL", "sqlite:///:memory:")
```
## Migration and Upgrade Strategies
### When Upgrading Dependencies
**Test dependency upgrades**:
```bash
# Test with new versions before upgrading
pip install pytest==8.0.0 pytest-cov==5.0.0
make test
# If tests fail, identify compatibility issues
pytest --tb=short -x
```
**Update test configuration for new pytest versions**:
```ini
# pytest.ini - may need updates for new versions
minversion = 8.0
# Check if any deprecated features are used
```
### Database Schema Changes
**When modifying database models**:
1. Update model test fixtures first
2. Run migration on test database
3. Update affected test data files
4. Run integration tests to catch relationship issues
```python
# Update fixtures for new required fields
@pytest.fixture
def test_product(db):
    product = Product(
        # ... existing fields
        new_required_field="default_value"  # Add with a sensible default
    )
    return product
```
## Documentation and Knowledge Sharing
### Documenting Complex Test Scenarios
**For complex business logic tests**:
```python
def test_complex_pricing_calculation_scenario(self):
    """
    Test pricing calculation with multiple discounts and tax rules.

    Scenario:
    - Product price: $100
    - Member discount: 10%
    - Seasonal discount: 5% (applied after member discount)
    - Tax rate: 8.5%

    Expected calculation:
    Base: $100 → Member discount: $90 → Seasonal: $85.50 → Tax: $92.77
    """
    # Test implementation with clear steps
```
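Expected figures in a docstring like this are worth double-checking with a quick standalone calculation. A sketch (the `final_price` helper and half-up rounding rule are assumptions for illustration, not the real service API):

```python
from decimal import Decimal, ROUND_HALF_UP

def final_price(base, member_disc, seasonal_disc, tax_rate):
    """Apply member discount, then seasonal discount, then tax."""
    price = base * (1 - member_disc)     # member discount first
    price = price * (1 - seasonal_disc)  # then seasonal discount
    price = price * (1 + tax_rate)       # tax applied last
    return price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

result = final_price(Decimal("100"), Decimal("0.10"),
                     Decimal("0.05"), Decimal("0.085"))
assert result == Decimal("92.77")  # matches the docstring: 100 → 90 → 85.50 → 92.77
```

Using `Decimal` rather than floats keeps the intermediate values exact, which matters when the assertion is on a rounded currency amount.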
### Team Knowledge Sharing
**Maintain test documentation**:
- Update this guide when adding new test patterns
- Document complex fixture relationships
- Share test debugging techniques in team meetings
- Create examples for new team members
## Summary: Test Maintenance Best Practices
**Daily Practices**:
- Run relevant tests before committing code
- Add tests for new functionality immediately
- Keep test names descriptive and current
- Update fixtures when models change
**Regular Maintenance**:
- Review and update performance baselines
- Refactor repetitive test code
- Clean up unused fixtures and test data
- Monitor test execution times
**Long-term Strategy**:
- Plan test architecture for new features
- Evaluate test coverage trends
- Update testing tools and practices
- Share knowledge across the team
**Remember**: Good tests are living documentation of your system's behavior. Keep them current, clear, and comprehensive to maintain a healthy codebase.
Use this guide alongside the [Testing Guide](testing-guide.md) for complete test management knowledge.
*This documentation is under development.*


@@ -1,470 +1 @@
# Testing Guide for Developers
This guide covers our test suite structure, how to run tests effectively, and how to maintain test quality.
## Quick Start
```bash
# Install test dependencies
make install-test
# Run all tests
make test
# Run fast tests only (development workflow)
make test-fast
# Run with coverage
make test-coverage
```
## Test Structure Overview
Our test suite is organized hierarchically by test type and execution speed to optimize development workflows:
```
tests/
├── conftest.py # Core test configuration and database fixtures
├── pytest.ini # Test configuration with markers and coverage
├── fixtures/ # Domain-organized test fixtures
│ ├── auth_fixtures.py # Users, tokens, authentication headers
│ ├── product_fixtures.py # Products, factories, bulk test data
│ ├── shop_fixtures.py # Shops, stock, shop-product relationships
│ └── marketplace_fixtures.py # Import jobs and marketplace data
├── unit/ # Fast, isolated component tests (< 1 second)
│ ├── models/ # Database and API model tests
│ ├── utils/ # Utility function tests
│ ├── services/ # Business logic tests
│ └── middleware/ # Middleware component tests
├── integration/ # Multi-component tests (1-10 seconds)
│ ├── api/v1/ # API endpoint tests with database
│ ├── security/ # Authentication, authorization tests
│ ├── tasks/ # Background task integration tests
│ └── workflows/ # Multi-step process tests
├── performance/ # Performance benchmarks (10+ seconds)
│ └── test_api_performance.py # Load testing and benchmarks
├── system/ # End-to-end system tests (30+ seconds)
│ └── test_error_handling.py # Application-wide error handling
└── test_data/ # Static test data files
└── csv/sample_products.csv # Sample CSV for import testing
```
## Test Categories and When to Use Each
### Unit Tests (`tests/unit/`)
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Use when**: Testing business logic, data processing, model validation
```bash
# Run during active development
pytest -m unit
# Example locations:
tests/unit/services/test_product_service.py # Business logic
tests/unit/utils/test_data_processing.py # Utility functions
tests/unit/models/test_database_models.py # Model validation
```
### Integration Tests (`tests/integration/`)
**Purpose**: Test component interactions
**Speed**: Moderate (1-10 seconds each)
**Use when**: Testing API endpoints, service interactions, workflows
```bash
# Run before commits
pytest -m integration
# Example locations:
tests/integration/api/v1/test_admin_endpoints.py # API endpoints
tests/integration/security/test_authentication.py # Auth workflows
tests/integration/workflows/test_product_import.py # Multi-step processes
```
### Performance Tests (`tests/performance/`)
**Purpose**: Validate performance requirements
**Speed**: Slow (10+ seconds each)
**Use when**: Testing response times, load capacity, large data processing
```bash
# Run periodically or in CI
pytest -m performance
```
### System Tests (`tests/system/`)
**Purpose**: End-to-end application behavior
**Speed**: Slowest (30+ seconds each)
**Use when**: Testing complete user scenarios, error handling across layers
```bash
# Run before releases
pytest -m system
```
## Daily Development Workflow
### During Active Development
```bash
# Quick feedback loop - run relevant unit tests
pytest tests/unit/services/test_product_service.py -v
# Test specific functionality you're working on
pytest -k "product and create" -m unit
# Fast comprehensive check
make test-fast # Equivalent to: pytest -m "not slow"
```
### Before Committing Code
```bash
# Run unit and integration tests
make test-unit
make test-integration
# Or run both with coverage
make test-coverage
```
### Before Creating Pull Request
```bash
# Full test suite with linting
make ci # Runs format, lint, and test-coverage
# Check if all tests pass
make test
```
## Running Specific Tests
### By Test Type
```bash
# Fast unit tests only
pytest -m unit
# Integration tests only
pytest -m integration
# Everything except slow tests
pytest -m "not slow"
# Database-dependent tests
pytest -m database
# Authentication-related tests
pytest -m auth
```
### By Component/Domain
```bash
# All product-related tests
pytest -k "product"
# Admin functionality tests
pytest -m admin
# API endpoint tests
pytest -m api
# All tests in a directory
pytest tests/unit/services/ -v
```
### By Specific Files or Methods
```bash
# Specific test file
pytest tests/unit/services/test_product_service.py -v
# Specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v
# Specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
```
## Test Fixtures and Data
### Using Existing Fixtures
Our fixtures are organized by domain in the `fixtures/` directory:
```python
# In your test file
def test_product_creation(test_user, test_shop, auth_headers):
    """Uses auth_fixtures.py fixtures"""
    # test_user: Creates a test user
    # test_shop: Creates a test shop owned by test_user
    # auth_headers: Provides authentication headers for API calls


def test_multiple_products(multiple_products):
    """Uses product_fixtures.py fixtures"""
    # multiple_products: Creates 5 test products with different attributes
    assert len(multiple_products) == 5


def test_with_factory(product_factory, db):
    """Uses factory fixtures for custom test data"""
    # Create a custom product with specific attributes
    product = product_factory(db, title="Custom Product", price="99.99")
    assert product.title == "Custom Product"
```
### Available Fixtures by Domain
**Authentication (`auth_fixtures.py`)**:
- `test_user`, `test_admin`, `other_user`
- `auth_headers`, `admin_headers`
- `auth_manager`
**Products (`product_fixtures.py`)**:
- `test_product`, `unique_product`, `multiple_products`
- `product_factory` (for custom products)
**Shops (`shop_fixtures.py`)**:
- `test_shop`, `unique_shop`, `inactive_shop`, `verified_shop`
- `shop_product`, `test_stock`, `multiple_stocks`
- `shop_factory` (for custom shops)
**Marketplace (`marketplace_fixtures.py`)**:
- `test_marketplace_job`
## Writing New Tests
### Test File Location
Choose location based on what you're testing:
```python
# Business logic → unit tests
tests/unit/services/test_my_new_service.py
# API endpoints → integration tests
tests/integration/api/v1/test_my_new_endpoints.py
# Multi-component workflows → integration tests
tests/integration/workflows/test_my_new_workflow.py
# Performance concerns → performance tests
tests/performance/test_my_performance.py
```
### Test Class Structure
```python
import pytest
from app.services.my_service import MyService


@pytest.mark.unit      # Always add appropriate markers
@pytest.mark.products  # Domain-specific marker
class TestMyService:
    """Test suite for MyService business logic"""

    def setup_method(self):
        """Run before each test method"""
        self.service = MyService()

    def test_create_item_with_valid_data_succeeds(self):
        """Test successful item creation - descriptive name explaining the scenario"""
        # Arrange
        item_data = {"name": "Test Item", "price": "10.99"}

        # Act
        result = self.service.create_item(item_data)

        # Assert
        assert result is not None
        assert result.name == "Test Item"

    def test_create_item_with_invalid_data_raises_validation_error(self):
        """Test validation error handling"""
        # Arrange
        invalid_data = {"name": "", "price": "invalid"}

        # Act & Assert
        with pytest.raises(ValidationError):
            self.service.create_item(invalid_data)
```
### API Integration Test Example
```python
import pytest


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.products
class TestProductEndpoints:
    """Integration tests for product API endpoints"""

    def test_create_product_endpoint_success(self, client, auth_headers):
        """Test successful product creation via API"""
        # Arrange
        product_data = {
            "product_id": "TEST001",
            "title": "Test Product",
            "price": "19.99"
        }

        # Act
        response = client.post("/api/v1/product",
                               json=product_data,
                               headers=auth_headers)

        # Assert
        assert response.status_code == 200
        assert response.json()["product_id"] == "TEST001"
```
## Test Naming Conventions
### Files
- `test_{component_name}.py` for the file name
- Mirror your source structure: `app/services/product.py` → `tests/unit/services/test_product_service.py`
### Classes
- `TestComponentName` for the main component
- `TestComponentValidation` for validation-specific tests
- `TestComponentErrorHandling` for error scenarios
### Methods
Use descriptive names that explain the scenario:
```python
# Good - explains what, when, and expected outcome
def test_create_product_with_valid_data_returns_product(self):
def test_create_product_with_duplicate_id_raises_error(self):
def test_get_product_when_not_found_returns_404(self):
# Acceptable shorter versions
def test_create_product_success(self):
def test_create_product_validation_error(self):
def test_get_product_not_found(self):
```
## Coverage Requirements
We maintain high coverage standards:
- **Minimum overall coverage**: 80%
- **New code coverage**: 90%+
- **Critical paths**: 95%+
```bash
# Check coverage
make test-coverage
# View detailed HTML report
open htmlcov/index.html
# Fail build if coverage too low
pytest --cov=app --cov-fail-under=80
```
## Debugging Failed Tests
### Get Detailed Information
```bash
# Verbose output with local variables
pytest tests/path/to/test.py -vv --tb=long --showlocals
# Stop on first failure
pytest -x
# Re-run only failed tests
pytest --lf
```
### Common Issues and Solutions
**Import Errors**:
```bash
# Ensure you're in project root and have installed in dev mode
pip install -e .
PYTHONPATH=. pytest
```
**Database Issues**:
```bash
# Tests use in-memory SQLite by default
# Check if fixtures are properly imported
pytest --fixtures tests/
```
**Fixture Not Found**:
```bash
# Ensure fixture modules are listed in conftest.py pytest_plugins
# Check fixture dependencies (test_shop needs test_user)
```
## Performance and Optimization
### Speed Up Test Runs
```bash
# Run in parallel (install pytest-xdist first)
pytest -n auto
# Skip slow tests during development
pytest -m "not slow"
# Run only changed tests (install pytest-testmon)
pytest --testmon
```
### Find Slow Tests
```bash
# Show 10 slowest tests
pytest --durations=10
# Show all test durations
pytest --durations=0
```
## Continuous Integration Integration
Our tests integrate with CI/CD pipelines through make targets:
```bash
# Commands used in CI
make ci # Format, lint, test with coverage
make test-fast # Quick feedback in early CI stages
make test-coverage # Full test run with coverage reporting
```
The CI pipeline:
1. Runs `make test-fast` for quick feedback
2. Runs `make ci` for comprehensive checks
3. Generates coverage reports in XML format
4. Uploads coverage to reporting tools
## Best Practices Summary
### DO:
- Write tests for new code before committing
- Use descriptive test names explaining the scenario
- Keep unit tests fast (< 1 second each)
- Use appropriate fixtures for test data
- Add proper pytest markers to categorize tests
- Test both happy path and error scenarios
- Maintain good test coverage (80%+)
### DON'T:
- Write tests that depend on external services (use mocks)
- Create tests that depend on execution order
- Use hardcoded values that might change
- Write overly complex test setups
- Ignore failing tests
- Skip adding tests for bug fixes
## Getting Help
- **Examples**: Look at existing tests in similar components
- **Fixtures**: Check `tests/fixtures/` for available test data
- **Configuration**: See `pytest.ini` for available markers
- **Make targets**: Run `make help` to see all available commands
- **Team support**: Ask in team channels or create GitHub issues
## Make Commands Reference
```bash
make install-test # Install test dependencies
make test # Run all tests
make test-unit # Run unit tests only
make test-integration # Run integration tests only
make test-fast # Run all except slow tests
make test-coverage # Run with coverage report
make ci # Full CI pipeline (format, lint, test)
```
Use this guide as your daily reference for testing. The structure is designed to give you fast feedback during development while maintaining comprehensive test coverage.
*This documentation is under development.*