# Testing Guide for Developers

This guide covers everything your development team needs to know: how our test suite is structured, how to run tests effectively, and how to maintain test quality.

## Quick Start
```bash
# Install test dependencies
make install-test

# Run all tests
make test

# Run fast tests only (development workflow)
make test-fast

# Run with coverage
make test-coverage
```
## Test Structure Overview

Our test suite is organized hierarchically by test type and execution speed to optimize development workflows:

```
tests/
├── conftest.py                  # Core test configuration and database fixtures
├── pytest.ini                   # Test configuration with markers and coverage
├── fixtures/                    # Domain-organized test fixtures
│   ├── auth_fixtures.py         # Users, tokens, authentication headers
│   ├── product_fixtures.py      # Products, factories, bulk test data
│   ├── shop_fixtures.py         # Shops, stock, shop-product relationships
│   └── marketplace_fixtures.py  # Import jobs and marketplace data
├── unit/                        # Fast, isolated component tests (< 1 second)
│   ├── models/                  # Database and API model tests
│   ├── utils/                   # Utility function tests
│   ├── services/                # Business logic tests
│   └── middleware/              # Middleware component tests
├── integration/                 # Multi-component tests (1-10 seconds)
│   ├── api/v1/                  # API endpoint tests with database
│   ├── security/                # Authentication, authorization tests
│   ├── tasks/                   # Background task integration tests
│   └── workflows/               # Multi-step process tests
├── performance/                 # Performance benchmarks (10+ seconds)
│   └── test_api_performance.py  # Load testing and benchmarks
├── system/                      # End-to-end system tests (30+ seconds)
│   └── test_error_handling.py   # Application-wide error handling
└── test_data/                   # Static test data files
    └── csv/sample_products.csv  # Sample CSV for import testing
```
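
The speed and domain markers used throughout this guide are registered in `pytest.ini`. As a rough sketch of what that registration can look like — the marker descriptions and the `addopts` line below are illustrative, not a copy of our actual file:

```ini
[pytest]
markers =
    unit: fast, isolated component tests
    integration: multi-component tests
    performance: performance benchmarks
    system: end-to-end system tests
    slow: tests excluded from the fast development loop
    database: tests that require a database
    auth: authentication-related tests
    admin: admin functionality tests
    api: API endpoint tests
    products: product domain tests
; illustrative: coverage defaults may instead live in the Makefile
addopts = --cov=app
```

Registering markers this way also makes `pytest --strict-markers` catch typos like `@pytest.mark.unti`.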

## Test Categories and When to Use Each

### Unit Tests (`tests/unit/`)
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Use when**: Testing business logic, data processing, model validation

```bash
# Run during active development
pytest -m unit

# Example locations:
tests/unit/services/test_product_service.py  # Business logic
tests/unit/utils/test_data_processing.py     # Utility functions
tests/unit/models/test_database_models.py    # Model validation
```

### Integration Tests (`tests/integration/`)
**Purpose**: Test component interactions
**Speed**: Moderate (1-10 seconds each)
**Use when**: Testing API endpoints, service interactions, workflows

```bash
# Run before commits
pytest -m integration

# Example locations:
tests/integration/api/v1/test_admin_endpoints.py    # API endpoints
tests/integration/security/test_authentication.py   # Auth workflows
tests/integration/workflows/test_product_import.py  # Multi-step processes
```

### Performance Tests (`tests/performance/`)
**Purpose**: Validate performance requirements
**Speed**: Slow (10+ seconds each)
**Use when**: Testing response times, load capacity, large data processing

```bash
# Run periodically or in CI
pytest -m performance
```

### System Tests (`tests/system/`)
**Purpose**: End-to-end application behavior
**Speed**: Slowest (30+ seconds each)
**Use when**: Testing complete user scenarios, error handling across layers

```bash
# Run before releases
pytest -m system
```

## Daily Development Workflow

### During Active Development
```bash
# Quick feedback loop - run relevant unit tests
pytest tests/unit/services/test_product_service.py -v

# Test specific functionality you're working on
pytest -k "product and create" -m unit

# Fast comprehensive check
make test-fast  # Equivalent to: pytest -m "not slow"
```

### Before Committing Code
```bash
# Run unit and integration tests
make test-unit
make test-integration

# Or run both with coverage
make test-coverage
```

### Before Creating a Pull Request
```bash
# Full test suite with linting
make ci  # Runs format, lint, and test-coverage

# Check that all tests pass
make test
```

## Running Specific Tests

### By Test Type
```bash
# Fast unit tests only
pytest -m unit

# Integration tests only
pytest -m integration

# Everything except slow tests
pytest -m "not slow"

# Database-dependent tests
pytest -m database

# Authentication-related tests
pytest -m auth
```

### By Component/Domain
```bash
# All product-related tests
pytest -k "product"

# Admin functionality tests
pytest -m admin

# API endpoint tests
pytest -m api

# All tests in a directory
pytest tests/unit/services/ -v
```

### By Specific Files or Methods
```bash
# Specific test file
pytest tests/unit/services/test_product_service.py -v

# Specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v

# Specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
```

## Test Fixtures and Data

### Using Existing Fixtures
Our fixtures are organized by domain in the `fixtures/` directory:

```python
# In your test file
def test_product_creation(test_user, test_shop, auth_headers):
    """Uses fixtures from auth_fixtures.py and shop_fixtures.py"""
    # test_user: creates a test user
    # test_shop: creates a test shop owned by test_user
    # auth_headers: provides authentication headers for API calls

def test_multiple_products(multiple_products):
    """Uses product_fixtures.py fixtures"""
    # multiple_products: creates 5 test products with different attributes
    assert len(multiple_products) == 5

def test_with_factory(product_factory, db):
    """Uses factory fixtures for custom test data"""
    # Create a custom product with specific attributes
    product = product_factory(db, title="Custom Product", price="99.99")
    assert product.title == "Custom Product"
```
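
If you want a mental model of how a factory fixture like `product_factory` is put together, here is a minimal sketch. The real implementation lives in `tests/fixtures/product_fixtures.py` and uses the project's actual `Product` model and database session; the dataclass and the list-backed "db" below are stand-ins for illustration only:

```python
from dataclasses import dataclass


@dataclass
class Product:
    """Stand-in for the project's real Product model."""
    title: str
    price: str


# In the real suite this would be decorated with @pytest.fixture so tests
# can request it by name; shown here as a plain callable.
def product_factory():
    def _make(db, title="Test Product", price="9.99"):
        product = Product(title=title, price=price)
        db.append(product)  # stand-in for db.add(...) / db.commit()
        return product
    return _make
```

The factory-as-fixture pattern keeps defaults in one place while letting each test override only the attributes it cares about.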

### Available Fixtures by Domain

**Authentication (`auth_fixtures.py`)**:
- `test_user`, `test_admin`, `other_user`
- `auth_headers`, `admin_headers`
- `auth_manager`

**Products (`product_fixtures.py`)**:
- `test_product`, `unique_product`, `multiple_products`
- `product_factory` (for custom products)

**Shops (`shop_fixtures.py`)**:
- `test_shop`, `unique_shop`, `inactive_shop`, `verified_shop`
- `shop_product`, `test_stock`, `multiple_stocks`
- `shop_factory` (for custom shops)

**Marketplace (`marketplace_fixtures.py`)**:
- `test_marketplace_job`

## Writing New Tests

### Test File Location
Choose the location based on what you're testing:

```
# Business logic → unit tests
tests/unit/services/test_my_new_service.py

# API endpoints → integration tests
tests/integration/api/v1/test_my_new_endpoints.py

# Multi-component workflows → integration tests
tests/integration/workflows/test_my_new_workflow.py

# Performance concerns → performance tests
tests/performance/test_my_performance.py
```

### Test Class Structure
```python
import pytest

from app.services.my_service import MyService
# Import ValidationError from wherever your project defines it, e.g.:
# from app.exceptions import ValidationError


@pytest.mark.unit      # Always add appropriate markers
@pytest.mark.products  # Domain-specific marker
class TestMyService:
    """Test suite for MyService business logic"""

    def setup_method(self):
        """Run before each test method"""
        self.service = MyService()

    def test_create_item_with_valid_data_succeeds(self):
        """Test successful item creation - descriptive name explaining the scenario"""
        # Arrange
        item_data = {"name": "Test Item", "price": "10.99"}

        # Act
        result = self.service.create_item(item_data)

        # Assert
        assert result is not None
        assert result.name == "Test Item"

    def test_create_item_with_invalid_data_raises_validation_error(self):
        """Test validation error handling"""
        # Arrange
        invalid_data = {"name": "", "price": "invalid"}

        # Act & Assert
        with pytest.raises(ValidationError):
            self.service.create_item(invalid_data)
```

### API Integration Test Example
```python
import pytest


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.products
class TestProductEndpoints:
    """Integration tests for product API endpoints"""

    def test_create_product_endpoint_success(self, client, auth_headers):
        """Test successful product creation via API"""
        # Arrange
        product_data = {
            "product_id": "TEST001",
            "title": "Test Product",
            "price": "19.99",
        }

        # Act
        response = client.post(
            "/api/v1/product",
            json=product_data,
            headers=auth_headers,
        )

        # Assert
        assert response.status_code == 200
        assert response.json()["product_id"] == "TEST001"
```

## Test Naming Conventions

### Files
- `test_{component_name}.py` for the file name
- Mirror your source structure: `app/services/product.py` → `tests/unit/services/test_product_service.py`

### Classes
- `TestComponentName` for the main component
- `TestComponentValidation` for validation-specific tests
- `TestComponentErrorHandling` for error scenarios

### Methods
Use descriptive names that explain the scenario:
```python
# Good - explains what, when, and the expected outcome
def test_create_product_with_valid_data_returns_product(self): ...
def test_create_product_with_duplicate_id_raises_error(self): ...
def test_get_product_when_not_found_returns_404(self): ...

# Acceptable shorter versions
def test_create_product_success(self): ...
def test_create_product_validation_error(self): ...
def test_get_product_not_found(self): ...
```

## Coverage Requirements

We maintain high coverage standards:
- **Minimum overall coverage**: 80%
- **New code coverage**: 90%+
- **Critical paths**: 95%+

```bash
# Check coverage
make test-coverage

# View the detailed HTML report
open htmlcov/index.html

# Fail the build if coverage is too low
pytest --cov=app --cov-fail-under=80
```

## Debugging Failed Tests

### Get Detailed Information
```bash
# Verbose output with local variables
pytest tests/path/to/test.py -vv --tb=long --showlocals

# Stop on the first failure
pytest -x

# Re-run only failed tests
pytest --lf
```

### Common Issues and Solutions

**Import Errors**:
```bash
# Ensure you're in the project root and have installed in dev mode
pip install -e .
PYTHONPATH=. pytest
```

**Database Issues**:
```bash
# Tests use in-memory SQLite by default
# Check whether fixtures are properly imported
pytest --fixtures tests/
```
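
To make the in-memory idea concrete, here is a minimal stdlib `sqlite3` sketch. Our actual fixtures in `conftest.py` manage this for you (presumably through the ORM); the table schema below is illustrative only:

```python
import sqlite3


def make_test_db():
    """Create a throwaway in-memory database for a single test."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE products (product_id TEXT PRIMARY KEY, title TEXT)"
    )
    return conn

# Each call returns a fresh, isolated database that disappears when the
# connection is closed - which is why tests never pollute each other.
```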

**Fixture Not Found**:
```bash
# Ensure fixture modules are listed in conftest.py's pytest_plugins
# Check fixture dependencies (test_shop needs test_user)
```

## Performance and Optimization

### Speed Up Test Runs
```bash
# Run in parallel (install pytest-xdist first)
pytest -n auto

# Skip slow tests during development
pytest -m "not slow"

# Run only changed tests (install pytest-testmon first)
pytest --testmon
```

### Find Slow Tests
```bash
# Show the 10 slowest tests
pytest --durations=10

# Show all test durations
pytest --durations=0
```

## Continuous Integration

Our tests integrate with CI/CD pipelines through make targets:

```bash
# Commands used in CI
make ci             # Format, lint, test with coverage
make test-fast      # Quick feedback in early CI stages
make test-coverage  # Full test run with coverage reporting
```

The CI pipeline:
1. Runs `make test-fast` for quick feedback
2. Runs `make ci` for comprehensive checks
3. Generates coverage reports in XML format
4. Uploads coverage to reporting tools

## Best Practices Summary

### DO:
- Write tests for new code before committing
- Use descriptive test names explaining the scenario
- Keep unit tests fast (< 1 second each)
- Use appropriate fixtures for test data
- Add proper pytest markers to categorize tests
- Test both the happy path and error scenarios
- Maintain good test coverage (80%+)

### DON'T:
- Write tests that depend on external services (use mocks)
- Create tests that depend on execution order
- Use hardcoded values that might change
- Write overly complex test setups
- Ignore failing tests
- Skip adding tests for bug fixes
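
For the "no external services" rule above, the standard-library `unittest.mock` module is the usual tool. A minimal sketch — `fetch_price` and its client interface are hypothetical, not part of our codebase:

```python
from unittest import mock


def fetch_price(client, product_id):
    """Hypothetical helper that would normally call an external service."""
    return client.get(f"/price/{product_id}")["price"]


def test_fetch_price_uses_mocked_client():
    # Arrange: a fake client, so no network call is ever made
    fake_client = mock.Mock()
    fake_client.get.return_value = {"price": "19.99"}

    # Act
    price = fetch_price(fake_client, "TEST001")

    # Assert: both the result and the exact call the helper made
    assert price == "19.99"
    fake_client.get.assert_called_once_with("/price/TEST001")
```

Injecting the client (rather than importing it inside the helper) is what makes the swap trivial; `mock.patch` covers the cases where injection isn't possible.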

## Getting Help

- **Examples**: Look at existing tests in similar components
- **Fixtures**: Check `tests/fixtures/` for available test data
- **Configuration**: See `pytest.ini` for available markers
- **Make targets**: Run `make help` to see all available commands
- **Team support**: Ask in team channels or create GitHub issues
## Make Commands Reference
|
||||
|
||||
```bash
|
||||
make install-test # Install test dependencies
|
||||
make test # Run all tests
|
||||
make test-unit # Run unit tests only
|
||||
make test-integration # Run integration tests only
|
||||
make test-fast # Run all except slow tests
|
||||
make test-coverage # Run with coverage report
|
||||
make ci # Full CI pipeline (format, lint, test)
|
||||
```
|
||||
|
||||
Use this guide as your daily reference for testing. The structure is designed to give you fast feedback during development while maintaining comprehensive test coverage.