# Testing Overview

Our comprehensive test suite ensures code quality, reliability, and maintainability. This section covers everything you need to know about testing in the Letzshop Import project.

## Test Structure

We use a hierarchical test organization based on test types and scope:

```
tests/
├── conftest.py       # Core test configuration and fixtures
├── pytest.ini        # Pytest configuration with custom markers
├── fixtures/         # Shared test fixtures by domain
├── unit/             # Fast, isolated component tests
├── integration/      # Multi-component interaction tests
├── performance/      # Performance and load tests
├── system/           # End-to-end system behavior tests
└── test_data/        # Test data files (CSV, JSON, etc.)
```

## Test Types

### 🔧 Unit Tests
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Scope**: Single function, method, or class

```bash
# Run unit tests
pytest -m unit
```

**Examples**:
- Data processing utilities
- Model validation
- Service business logic
- Individual API endpoint handlers

### 🔗 Integration Tests
**Purpose**: Test component interactions
**Speed**: Fast to moderate (1-5 seconds each)
**Scope**: Multiple components working together

```bash
# Run integration tests
pytest -m integration
```

**Examples**:
- API endpoints with database
- Service layer interactions
- Authentication workflows
- File processing pipelines

### 🏗️ System Tests
**Purpose**: Test complete application behavior
**Speed**: Moderate (5-30 seconds each)
**Scope**: End-to-end user scenarios

```bash
# Run system tests
pytest -m system
```

**Examples**:
- Complete user registration flow
- Full CSV import process
- Multi-step workflows
- Error handling across layers

### ⚡ Performance Tests
**Purpose**: Validate performance requirements
**Speed**: Slow (30+ seconds each)
**Scope**: Load, stress, and performance testing

```bash
# Run performance tests
pytest -m performance
```

**Examples**:
- API response times
- Database query performance
- Large file processing
- Concurrent user scenarios
## Running Tests

### Basic Commands

```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run specific test type
pytest -m unit
pytest -m integration
pytest -m "unit or integration"

# Run tests in specific directory
pytest tests/unit/
pytest tests/integration/api/

# Run specific test file
pytest tests/unit/services/test_product_service.py

# Run specific test class
pytest tests/unit/services/test_product_service.py::TestProductService

# Run specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success
```

### Advanced Options

```bash
# Run with coverage report
pytest --cov=app --cov-report=html

# Run tests matching pattern
pytest -k "product and not slow"

# Stop on first failure
pytest -x

# Run failed tests from last run
pytest --lf

# Run tests in parallel (if pytest-xdist installed)
pytest -n auto

# Show slowest tests
pytest --durations=10
```
## Test Markers

We use pytest markers to categorize and selectively run tests:

| Marker | Purpose |
|--------|---------|
| `@pytest.mark.unit` | Fast, isolated component tests |
| `@pytest.mark.integration` | Multi-component interaction tests |
| `@pytest.mark.system` | End-to-end system tests |
| `@pytest.mark.performance` | Performance and load tests |
| `@pytest.mark.slow` | Long-running tests |
| `@pytest.mark.api` | API endpoint tests |
| `@pytest.mark.database` | Tests requiring database |
| `@pytest.mark.auth` | Authentication/authorization tests |

### Example Usage

```python
@pytest.mark.unit
@pytest.mark.products
class TestProductService:
    def test_create_product_success(self):
        # Test implementation
        pass


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.database
def test_product_creation_endpoint():
    # Test implementation
    pass
```
## Test Configuration

### pytest.ini
Our pytest configuration includes:

- **Custom markers** for test categorization
- **Coverage settings** with 80% minimum threshold
- **Test discovery** patterns and paths
- **Output formatting** for better readability

### conftest.py
Core test fixtures and configuration:

- **Database fixtures** for test isolation
- **Authentication fixtures** for user/admin testing
- **Client fixtures** for API testing
- **Mock fixtures** for external dependencies
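
A minimal sketch of what such a database fixture might look like — the names and the in-memory SQLite setup here are illustrative assumptions, not the project's actual `conftest.py`:

```python
# tests/conftest.py — illustrative sketch only; the real fixtures differ.
import sqlite3

import pytest


def make_test_connection():
    """Create an isolated in-memory database for a single test."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (product_id TEXT, title TEXT)")
    return conn


@pytest.fixture
def db():
    """Database fixture: a fresh connection per test, closed afterwards."""
    conn = make_test_connection()
    yield conn
    conn.close()  # guarantees isolation between tests
```

Because every test gets its own connection and schema, no cleanup order between tests matters.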

## Test Data Management

### Fixtures
We organize fixtures by domain:

```python
# tests/fixtures/product_fixtures.py
@pytest.fixture
def sample_product():
    return {
        "name": "Test Product",
        "gtin": "1234567890123",
        "price": "19.99"
    }


@pytest.fixture
def product_factory():
    def _create_product(**kwargs):
        defaults = {"name": "Test", "gtin": "1234567890123"}
        defaults.update(kwargs)
        return defaults
    return _create_product
```

### Test Data Files
Static test data in `tests/test_data/`:

- CSV files for import testing
- JSON files for API testing
- Sample configuration files
## Coverage Requirements

We maintain high test coverage standards:

- **Minimum coverage**: 80% overall
- **Critical paths**: 95%+ coverage required
- **New code**: Must include tests
- **HTML reports**: Generated in `htmlcov/`

```bash
# Generate coverage report
pytest --cov=app --cov-report=html --cov-report=term-missing
```
## Best Practices

### Test Naming
- Use descriptive test names that explain the scenario
- Follow the pattern: `test_{action}_{scenario}_{expected_outcome}`
- See our [Test Naming Conventions](test-naming-conventions.md) for details

### Test Structure
- **Arrange**: Set up test data and conditions
- **Act**: Execute the code being tested
- **Assert**: Verify the expected outcome

```python
def test_create_product_with_valid_data_returns_product(self):
    # Arrange
    product_data = {"name": "Test", "gtin": "1234567890123"}

    # Act
    result = product_service.create_product(product_data)

    # Assert
    assert result is not None
    assert result.name == "Test"
```

### Test Isolation
- Each test should be independent
- Use database transactions that roll back
- Mock external dependencies
- Clean up test data
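
As an illustration of the "mock external dependencies" point, a unit test can swap a network-bound client for a `unittest.mock.Mock`. The `PriceClient` and `price_with_vat` names below are hypothetical stand-ins, not project code:

```python
from unittest.mock import Mock


class PriceClient:
    """Stand-in for a real client that would hit an external API."""

    def get_price(self, product_id):
        raise RuntimeError("network call - not allowed in unit tests")


def price_with_vat(client, product_id, vat=0.17):
    """Business logic under test: price plus VAT, rounded to cents."""
    return round(client.get_price(product_id) * (1 + vat), 2)


def test_price_with_vat_uses_client_price():
    # Arrange: replace the real client with a mock
    mock_client = Mock()
    mock_client.get_price.return_value = 100.0

    # Act
    result = price_with_vat(mock_client, "TEST001")

    # Assert: logic is verified without any network access
    assert result == 117.0
    mock_client.get_price.assert_called_once_with("TEST001")
```

The test stays fast and deterministic because the only "network" behavior is the value programmed into the mock.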

## Continuous Integration

Our CI pipeline runs:

1. **Linting** with flake8 and black
2. **Type checking** with mypy
3. **Security scanning** with bandit
4. **Unit tests** on every commit
5. **Integration tests** on pull requests
6. **Performance tests** on releases

## Tools and Libraries

- **pytest**: Test framework and runner
- **pytest-cov**: Coverage reporting
- **pytest-asyncio**: Async test support
- **pytest-mock**: Mocking utilities
- **faker**: Test data generation
- **httpx**: HTTP client for API testing
- **factory-boy**: Test object factories

## Getting Started

1. **Read the conventions**: [Test Naming Conventions](test-naming-conventions.md)
2. **Run existing tests**: `pytest -v`
3. **Write your first test**: See examples in existing test files
4. **Check coverage**: `pytest --cov`

## Need Help?

- **Examples**: Look at existing tests in the `tests/` directory
- **Documentation**: This testing section has detailed guides
- **Issues**: Create a GitHub issue for testing questions
- **Standards**: Follow our [naming conventions](test-naming-conventions.md)
# Running Tests

This guide covers everything you need to know about running tests in the Letzshop Import project.

## Prerequisites

Ensure you have the test dependencies installed:

```bash
pip install -r tests/requirements-test.txt
```

Required packages:
- `pytest>=7.4.0` - Test framework
- `pytest-cov>=4.1.0` - Coverage reporting
- `pytest-asyncio>=0.21.0` - Async test support
- `pytest-mock>=3.11.0` - Mocking utilities
- `httpx>=0.24.0` - HTTP client for API tests
- `faker>=19.0.0` - Test data generation

## Basic Test Commands

### Run All Tests
```bash
# Run the entire test suite
pytest

# Run with verbose output
pytest -v

# Run with very verbose output (show individual test results)
pytest -vv
```

### Run by Test Type
```bash
# Run only unit tests (fast)
pytest -m unit

# Run only integration tests
pytest -m integration

# Run unit and integration tests
pytest -m "unit or integration"

# Run everything except slow tests
pytest -m "not slow"

# Run performance tests only
pytest -m performance
```

### Run by Directory
```bash
# Run all unit tests
pytest tests/unit/

# Run all integration tests
pytest tests/integration/

# Run API endpoint tests
pytest tests/integration/api/

# Run service layer tests
pytest tests/unit/services/
```

### Run Specific Files
```bash
# Run specific test file
pytest tests/unit/services/test_product_service.py

# Run multiple specific files
pytest tests/unit/services/test_product_service.py tests/unit/utils/test_data_processing.py
```

### Run Specific Tests
```bash
# Run specific test class
pytest tests/unit/services/test_product_service.py::TestProductService

# Run specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success

# Run tests matching pattern
pytest -k "product and create"
pytest -k "test_create_product"
```
## Advanced Test Options

### Coverage Reporting
```bash
# Run with coverage report
pytest --cov=app

# Coverage with missing lines
pytest --cov=app --cov-report=term-missing

# Generate HTML coverage report
pytest --cov=app --cov-report=html

# Coverage for specific modules
pytest --cov=app.services --cov=app.api

# Fail if coverage is below the threshold
pytest --cov=app --cov-fail-under=80
```

### Output and Debugging
```bash
# Show local variables on failure
pytest --tb=short --showlocals

# Stop on first failure
pytest -x

# Stop after N failures
pytest --maxfail=3

# Show print statements (disable output capture)
pytest -s

# Treat warnings as errors
pytest -W error

# Capture output at the Python level instead of file descriptors
pytest --capture=sys
```

### Test Selection and Filtering
```bash
# Run only failed tests from last run
pytest --lf

# Run failed tests first, then continue
pytest --ff

# Run tests that match keyword expression
pytest -k "user and not admin"

# Run only tests affected by code changes (requires pytest-testmon)
pytest --testmon

# Collect tests without running
pytest --collect-only
```

### Performance and Parallel Execution
```bash
# Show 10 slowest tests
pytest --durations=10

# Show all test durations
pytest --durations=0

# Run tests in parallel (requires pytest-xdist)
pytest -n auto
pytest -n 4  # Use 4 workers
```
## Test Environment Setup

### Database Tests
```bash
# Run database tests (uses the test database)
pytest -m database

# Run database integration tests with short tracebacks
pytest --tb=short tests/integration/database/
```

### API Tests
```bash
# Run API endpoint tests
pytest -m api

# Run API integration tests verbosely (uses the test client fixtures)
pytest tests/integration/api/ -v
```

### Authentication Tests
```bash
# Run auth-related tests
pytest -m auth

# Run security integration tests (uses the user fixtures)
pytest tests/integration/security/ -v
```
## Configuration Options

### pytest.ini Settings
Our `pytest.ini` is configured with:

```ini
[pytest]
# Test discovery
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*

# Default options
addopts =
    -v
    --tb=short
    --strict-markers
    --color=yes
    --durations=10
    --cov=app
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-fail-under=80

# Custom markers
markers =
    unit: Unit tests
    integration: Integration tests
    # ... other markers
```
### Environment Variables
```bash
# Set test environment
export TESTING=true

# Database URL for tests (uses in-memory SQLite by default)
export TEST_DATABASE_URL="sqlite:///:memory:"

# Disable external API calls in tests
export MOCK_EXTERNAL_APIS=true
```
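
Test code might resolve these variables roughly as below — the function names and defaults are illustrative assumptions, not the project's actual configuration module:

```python
import os


def test_database_url(env=None):
    """Resolve the test database URL, defaulting to in-memory SQLite."""
    env = os.environ if env is None else env
    return env.get("TEST_DATABASE_URL", "sqlite:///:memory:")


def is_testing(env=None):
    """True when the TESTING flag is set (case-insensitive)."""
    env = os.environ if env is None else env
    return env.get("TESTING", "false").lower() == "true"
```

Passing the environment mapping as a parameter keeps these helpers trivially testable without mutating `os.environ`.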

## Common Test Scenarios

### Development Workflow
```bash
# Quick smoke test (unit tests only)
pytest -m unit --maxfail=1

# Full test run before commit
pytest -m "unit or integration"

# Pre-push comprehensive test
pytest --cov=app --cov-fail-under=80
```

### Debugging Failed Tests
```bash
# Rerun failed tests with detailed output
pytest --lf -vv --tb=long --showlocals

# Drop into the debugger on failure
pytest --pdb

# Run a specific failing test in isolation
pytest tests/path/to/test.py::TestClass::test_method -vv -s
```

### Performance Testing
```bash
# Run performance tests only
pytest -m performance --durations=0

# Run with performance profiling (requires pytest-profiling)
pytest -m performance --profile

# Load testing specific endpoints
pytest tests/performance/test_api_performance.py -v
```

### CI/CD Pipeline Tests
```bash
# Minimal test run (fast feedback)
pytest -m "unit and not slow" --maxfail=5

# Full CI test run
pytest --cov=app --cov-report=xml --junitxml=test-results.xml

# Security and integration tests
pytest -m "security or integration" --tb=short
```
## Test Reports and Output

### Coverage Reports
```bash
# Terminal coverage report
pytest --cov=app --cov-report=term

# HTML coverage report (open in a browser)
pytest --cov=app --cov-report=html
open htmlcov/index.html

# XML coverage for CI
pytest --cov=app --cov-report=xml
```

### JUnit XML Reports
```bash
# Generate JUnit XML (for CI integration)
pytest --junitxml=test-results.xml

# With coverage and JUnit
pytest --cov=app --cov-report=xml --junitxml=test-results.xml
```
## Troubleshooting

### Common Issues

#### Import Errors
```bash
# If you get import errors, ensure PYTHONPATH is set
PYTHONPATH=. pytest

# Or install the package in development mode
pip install -e .
```

#### Database Connection Issues
```bash
# Check that the test database is accessible
python -c "from tests.conftest import engine; engine.connect()"

# Run with an in-memory database
pytest --override-db-url="sqlite:///:memory:"
```

#### Fixture Not Found
```bash
# Ensure conftest.py is in the right location
ls tests/conftest.py

# List the fixtures available to a test directory
pytest --fixtures tests/unit/
```

#### Permission Issues
```bash
# Ensure test files are readable
chmod -R u+rw tests/

# Clear the pytest cache
rm -rf .pytest_cache/
```

### Performance Issues
```bash
# Identify slow tests taking at least 1 second
pytest --durations=10 --durations-min=1.0

# Profile test execution (requires pytest-profiling)
pytest --profile-svg

# Run a subset of tests for faster feedback
pytest -m "unit and not slow"
```
## Integration with IDEs

### VS Code
Add to `.vscode/settings.json`:
```json
{
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": [
        "tests",
        "-v"
    ],
    "python.testing.cwd": "${workspaceFolder}"
}
```

### PyCharm
1. Go to Settings → Tools → Python Integrated Tools
2. Set Testing → Default test runner to "pytest"
3. Set pytest options: `-v --tb=short`
## Best Practices

### During Development
1. **Run unit tests frequently** - Fast feedback loop
2. **Run integration tests before commits** - Catch interaction issues
3. **Check coverage regularly** - Ensure good test coverage
4. **Use descriptive test names** - Easy to understand failures

### Before Code Review
1. **Run the full test suite** - `pytest`
2. **Check coverage meets the threshold** - `pytest --cov-fail-under=80`
3. **Ensure no warnings** - `pytest -W error`
4. **Test with a fresh environment** - New terminal/clean cache

### In CI/CD
1. **Fail fast on unit tests** - `pytest -m unit --maxfail=1`
2. **Generate reports** - Coverage and JUnit XML
3. **Run performance tests on a schedule** - Not every commit
4. **Archive test results** - For debugging and trends

---

Need help with a specific testing scenario? Check our [Testing FAQ](testing-faq.md) or open a GitHub issue!
# Test Maintenance Guide

This guide covers how to maintain, update, and contribute to our test suite as the application evolves. It's designed for developers who need to modify existing tests or add new test coverage.

## Test Maintenance Philosophy

Our test suite follows these core principles:
- **Tests should be reliable** - They pass consistently and fail only when there are real issues
- **Tests should be fast** - Unit tests complete in milliseconds, integration tests in seconds
- **Tests should be maintainable** - Easy to update when code changes
- **Tests should provide clear feedback** - Failures should clearly indicate what went wrong
## When to Update Tests

### Code Changes That Require Test Updates

**API Changes**:
```bash
# When you modify API endpoints, update integration tests
pytest tests/integration/api/v1/test_*_endpoints.py -v
```

**Business Logic Changes**:
```bash
# When you modify service logic, update unit tests
pytest tests/unit/services/test_*_service.py -v
```

**Database Model Changes**:
```bash
# When you modify models, update model tests
pytest tests/unit/models/test_database_models.py -v
```

**New Features**:
- Add new test files following our naming conventions
- Ensure both unit and integration test coverage
- Add appropriate pytest markers
## Common Maintenance Tasks

### Adding Tests for New Features

**Step 1: Determine Test Type and Location**
```
# New business logic → unit test
tests/unit/services/test_new_feature_service.py

# New API endpoint → integration test
tests/integration/api/v1/test_new_feature_endpoints.py

# New workflow → integration test
tests/integration/workflows/test_new_feature_workflow.py
```

**Step 2: Create Test File with Proper Structure**
```python
import pytest

from app.services.new_feature_service import NewFeatureService


@pytest.mark.unit
@pytest.mark.new_feature  # Add domain marker
class TestNewFeatureService:
    """Unit tests for NewFeatureService"""

    def setup_method(self):
        """Setup run before each test"""
        self.service = NewFeatureService()

    def test_new_feature_with_valid_input_succeeds(self):
        """Test the happy path"""
        # Test implementation
        pass

    def test_new_feature_with_invalid_input_raises_error(self):
        """Test error handling"""
        # Test implementation
        pass
```

**Step 3: Add Domain Marker to pytest.ini**
```ini
# Add to the markers section in pytest.ini
new_feature: marks tests related to new feature functionality
```
### Updating Tests for API Changes

**Example: Adding a new field to product creation**

```python
# Before: tests/integration/api/v1/test_product_endpoints.py
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99"
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200


# After: adding the 'category' field
def test_create_product_success(self, client, auth_headers):
    product_data = {
        "product_id": "TEST001",
        "title": "Test Product",
        "price": "19.99",
        "category": "Electronics"  # New field
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 200
    assert response.json()["category"] == "Electronics"  # Verify new field


# Add a test for validation
def test_create_product_with_invalid_category_fails(self, client, auth_headers):
    product_data = {
        "product_id": "TEST002",
        "title": "Test Product",
        "price": "19.99",
        "category": ""  # Invalid empty category
    }
    response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
    assert response.status_code == 422
```
### Updating Fixtures for Model Changes

**When you add fields to database models, update the fixtures**:

```python
# tests/fixtures/product_fixtures.py - before
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99"
    )
    # ... rest of fixture


# After: adding the new category field
@pytest.fixture
def test_product(db):
    product = Product(
        product_id="TEST001",
        title="Test Product",
        price="10.99",
        category="Electronics"  # Add new field with a sensible default
    )
    # ... rest of fixture
```
### Handling Breaking Changes

**When making breaking changes that affect many tests**:

1. **Update fixtures first** to include new required fields
2. **Run tests to identify failures**: `pytest -x` (stop on first failure)
3. **Update tests systematically** by domain
4. **Verify coverage hasn't decreased**: `make test-coverage`
## Test Data Management

### Creating New Fixtures

**Add domain-specific fixtures to the appropriate files**:

```python
# tests/fixtures/new_domain_fixtures.py
import pytest

from models.database import NewModel


@pytest.fixture
def test_new_model(db):
    """Create a test instance of NewModel"""
    model = NewModel(
        name="Test Model",
        value="test_value"
    )
    db.add(model)
    db.commit()
    db.refresh(model)
    return model


@pytest.fixture
def new_model_factory():
    """Factory for creating custom NewModel instances"""
    def _create_new_model(db, **kwargs):
        defaults = {"name": "Default Name", "value": "default"}
        defaults.update(kwargs)
        model = NewModel(**defaults)
        db.add(model)
        db.commit()
        db.refresh(model)
        return model
    return _create_new_model
```

**Register the new fixture module in conftest.py**:
```python
# tests/conftest.py
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.product_fixtures",
    "tests.fixtures.shop_fixtures",
    "tests.fixtures.marketplace_fixtures",
    "tests.fixtures.new_domain_fixtures",  # Add the new fixture module
]
```
### Managing Test Data Files

**Static test data in `tests/test_data/`**:
```
tests/test_data/
├── csv/
│   ├── valid_products.csv           # Standard valid product data
│   ├── invalid_products.csv         # Data with validation errors
│   ├── large_product_set.csv        # Performance testing data
│   └── new_feature_data.csv         # Data for new feature testing
├── json/
│   ├── api_responses.json           # Mock API responses
│   └── configuration_samples.json   # Configuration test data
└── fixtures/
    └── database_seeds.json          # Database seed data
```

**Update test data when adding new fields**:
```csv
# Before: tests/test_data/csv/valid_products.csv
product_id,title,price
TEST001,Product 1,19.99
TEST002,Product 2,29.99

# After: adding the category field
product_id,title,price,category
TEST001,Product 1,19.99,Electronics
TEST002,Product 2,29.99,Books
```
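
A test consuming the updated file can read the new column with the standard `csv` module. The inline sample below mirrors the "after" data; a real test would open the file from `tests/test_data/csv/` instead:

```python
import csv
import io

# Inline copy of the updated CSV from above, parsed as a file would be.
SAMPLE_CSV = """product_id,title,price,category
TEST001,Product 1,19.99,Electronics
TEST002,Product 2,29.99,Books
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
assert rows[0]["category"] == "Electronics"
assert [r["product_id"] for r in rows] == ["TEST001", "TEST002"]
```

Using `csv.DictReader` means tests keep working by field name even if the column order changes later.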

## Performance Test Maintenance

### Updating Performance Baselines

**When application performance improves or requirements change**:

```python
# tests/performance/test_api_performance.py
def test_product_list_performance(self, client, auth_headers, db):
    # Create test data
    products = [Product(product_id=f"PERF{i:03d}") for i in range(100)]
    db.add_all(products)
    db.commit()

    # Time the request
    start_time = time.time()
    response = client.get("/api/v1/product?limit=100", headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert len(response.json()["products"]) == 100
    # Update the baseline if performance has improved
    assert end_time - start_time < 1.5  # Previously 2.0 seconds
```

### Adding Performance Tests for New Features

```python
@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.new_feature
def test_new_feature_performance_with_large_dataset(self, client, auth_headers, db):
    """Test new feature performance with a realistic data volume"""
    # Create a large dataset
    large_dataset = [NewModel(data=f"item_{i}") for i in range(1000)]
    db.add_all(large_dataset)
    db.commit()

    # Test performance
    start_time = time.time()
    response = client.post(
        "/api/v1/new-feature/process",
        json={"process_all": True},
        headers=auth_headers,
    )
    end_time = time.time()

    assert response.status_code == 200
    assert end_time - start_time < 10.0  # Should complete within 10 seconds
```
## Debugging and Troubleshooting

### Identifying Flaky Tests

**Tests that pass and fail inconsistently need attention**:

```bash
# Run the same test repeatedly to expose flaky behavior (requires pytest-repeat)
pytest tests/path/to/flaky_test.py -v --count=10

# Run with more verbose output to see what changes between runs
pytest tests/path/to/flaky_test.py -vv --tb=long --showlocals
```

**Common causes of flaky tests**:
- Database state not properly cleaned between tests
- Timing issues in async operations
- External service dependencies
- Shared mutable state between tests
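
The last cause — shared mutable state — deserves a concrete sketch. A module-level cache that one test fills and another reads will pass or fail depending on execution order; an explicit reset restores independence. The names below are illustrative, not project code:

```python
# A module-level cache shared by every test in the session.
_cache = {}


def register(key, value):
    """Code under test: caches a value and reports the cache size."""
    _cache[key] = value
    return len(_cache)


def reset_cache():
    """Call this from a setup hook so each test starts from a clean slate."""
    _cache.clear()


# Without the resets, the second assertion would see len(_cache) == 2
# and fail whenever it runs after the first one.
reset_cache()
assert register("a", 1) == 1

reset_cache()
assert register("b", 2) == 1
```

In pytest, that reset would typically live in `setup_method` or an autouse fixture rather than inline in each test.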

### Fixing Common Test Issues

**Database State Issues**:
```python
# Ensure proper cleanup in fixtures
@pytest.fixture
def clean_database(db):
    """Ensure a clean database state"""
    yield db
    # Explicit cleanup if needed
    db.query(SomeModel).delete()
    db.commit()
```

**Async Test Issues**:
```python
# Ensure proper async test setup
@pytest.mark.asyncio
async def test_async_operation():
    # Use await for all async operations
    result = await async_service.process_data()
    assert result is not None
```

**Mock-Related Issues**:
```python
# Ensure mocks are properly reset between tests
def setup_method(self):
    """Reset mocks before each test"""
    self.mock_service.reset_mock()
```
### Test Coverage Issues

**Identifying gaps in coverage**:
```bash
# Generate a coverage report with missing lines
pytest --cov=app --cov-report=term-missing

# View the HTML report for detailed analysis
pytest --cov=app --cov-report=html
open htmlcov/index.html
```

**Adding tests for uncovered code**:
```python
# Example: adding a test for an error-handling branch
def test_service_method_handles_database_error(self, mock_db):
    """Test an error-handling path that wasn't covered"""
    # Set up the mock to raise an exception
    mock_db.commit.side_effect = DatabaseError("Connection failed")

    # Verify the error is handled appropriately
    with pytest.raises(ServiceError):
        self.service.save_data(test_data)
```
## Code Quality Standards

### Test Code Review Checklist

**Before submitting test changes**:
- [ ] Tests have descriptive names explaining the scenario
- [ ] Appropriate pytest markers are used
- [ ] Test coverage hasn't decreased
- [ ] Tests are in the correct category (unit/integration/system)
- [ ] No hardcoded values that could break in different environments
- [ ] Error cases are tested, not just happy paths
- [ ] New fixtures are properly documented
- [ ] Performance tests have reasonable baselines
### Refactoring Tests

**When refactoring test code**:
```python
# Before: Repetitive test setup
class TestProductService:
    def test_create_product_success(self):
        service = ProductService()
        data = {"name": "Test", "price": "10.99"}
        result = service.create_product(data)
        assert result is not None

    def test_create_product_validation_error(self):
        service = ProductService()  # Duplicate setup
        data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            service.create_product(data)


# After: Using setup_method and shared defaults
class TestProductService:
    def setup_method(self):
        self.service = ProductService()
        self.valid_data = {"name": "Test", "price": "10.99"}

    def test_create_product_success(self):
        result = self.service.create_product(self.valid_data)
        assert result is not None

    def test_create_product_validation_error(self):
        invalid_data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            self.service.create_product(invalid_data)
```
## Working with CI/CD

### Test Categories in CI Pipeline

Our CI pipeline runs tests in stages:

**Stage 1: Fast Feedback**
```bash
make test-fast  # Unit tests + fast integration tests
```

**Stage 2: Comprehensive Testing**
```bash
make test-coverage  # Full suite with coverage
```

**Stage 3: Performance Validation** (on release branches)
```bash
pytest -m performance
```
### Making Tests CI-Friendly

**Ensure tests are deterministic**:
```python
# Bad: Tests that depend on current time
def test_user_creation():
    user = create_user()
    assert user.created_at.day == datetime.now().day  # Flaky at midnight


# Good: Tests with controlled time (requires the freezegun package)
from freezegun import freeze_time

@freeze_time("2024-01-15 10:00:00")
def test_user_creation_with_frozen_time():
    user = create_user()
    assert user.created_at == datetime(2024, 1, 15, 10, 0, 0)
```

**Make tests environment-independent**:
```python
# Use relative paths and environment variables
TEST_DATA_DIR = Path(__file__).parent / "test_data"
CSV_FILE = TEST_DATA_DIR / "sample_products.csv"
```
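
For values that do come from the environment, pytest's built-in `monkeypatch` and `tmp_path` fixtures keep tests independent of the developer's machine. A small sketch (`IMPORT_DATA_DIR` and `read_data_dir` are illustrative names, not part of the project):

```python
import os

def read_data_dir() -> str:
    # Hypothetical helper: resolve the data directory from the environment,
    # falling back to a repository-relative default
    return os.environ.get("IMPORT_DATA_DIR", "test_data")

def test_read_data_dir_respects_env(monkeypatch, tmp_path):
    # monkeypatch restores the original environment after the test
    monkeypatch.setenv("IMPORT_DATA_DIR", str(tmp_path))
    assert read_data_dir() == str(tmp_path)
```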

## Migration and Upgrade Strategies

### When Upgrading Dependencies

**Test dependency upgrades**:
```bash
# Test with new versions before upgrading
pip install pytest==8.0.0 pytest-cov==5.0.0
make test

# If tests fail, identify compatibility issues
pytest --tb=short -x
```

**Update test configuration for new pytest versions**:
```ini
# pytest.ini - may need updates for new versions
[pytest]
minversion = 8.0
# Check whether any deprecated features are used
```

### Database Schema Changes

**When modifying database models**:
1. Update model test fixtures first
2. Run migration on test database
3. Update affected test data files
4. Run integration tests to catch relationship issues

```python
# Update fixtures for new required fields
@pytest.fixture
def test_product(db):
    product = Product(
        # ... existing fields
        new_required_field="default_value"  # Add with sensible default
    )
    return product
```

## Documentation and Knowledge Sharing

### Documenting Complex Test Scenarios

**For complex business logic tests**:
```python
def test_complex_pricing_calculation_scenario(self):
    """
    Test pricing calculation with multiple discounts and tax rules.

    Scenario:
    - Product price: $100
    - Member discount: 10%
    - Seasonal discount: 5% (applied after member discount)
    - Tax rate: 8.5%

    Expected calculation:
    Base: $100 → Member discount: $90 → Seasonal: $85.50 → Tax: $92.77
    """
    # Test implementation with clear steps
```
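
The docstring's arithmetic can be checked with a small helper. This is a sketch of the calculation as described, not the project's actual pricing code; `Decimal` avoids the float rounding drift that makes money assertions flaky:

```python
from decimal import Decimal, ROUND_HALF_UP

def price_with_discounts_and_tax(base, discounts, tax_rate):
    """Apply each discount to the running total, then add tax, rounding to cents."""
    price = Decimal(base)
    for rate in discounts:
        price -= price * Decimal(rate)  # sequential, not additive, discounts
    price *= Decimal("1") + Decimal(tax_rate)
    return price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# $100 → member 10% → $90 → seasonal 5% → $85.50 → 8.5% tax → $92.77
total = price_with_discounts_and_tax("100", ["0.10", "0.05"], "0.085")
```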

### Team Knowledge Sharing

**Maintain test documentation**:
- Update this guide when adding new test patterns
- Document complex fixture relationships
- Share test debugging techniques in team meetings
- Create examples for new team members

## Summary: Test Maintenance Best Practices

**Daily Practices**:
- Run relevant tests before committing code
- Add tests for new functionality immediately
- Keep test names descriptive and current
- Update fixtures when models change

**Regular Maintenance**:
- Review and update performance baselines
- Refactor repetitive test code
- Clean up unused fixtures and test data
- Monitor test execution times

**Long-term Strategy**:
- Plan test architecture for new features
- Evaluate test coverage trends
- Update testing tools and practices
- Share knowledge across the team

**Remember**: Good tests are living documentation of your system's behavior. Keep them current, clear, and comprehensive to maintain a healthy codebase.

Use this guide alongside the [Testing Guide](testing-guide.md) for complete test management knowledge.

---

# Test Naming Conventions

This document outlines the naming conventions used in our test suite to ensure consistency, clarity, and maintainability across the project.

## Overview

Our test suite follows a hierarchical structure organized by test type and component, with clear naming patterns that make it easy to locate and understand test purposes.

## Directory Structure

```
tests/
├── conftest.py          # Core test configuration and fixtures
├── pytest.ini           # Pytest configuration
├── fixtures/            # Shared test fixtures organized by domain
├── unit/                # Fast, isolated component tests
├── integration/         # Multi-component interaction tests
├── performance/         # Performance and load tests
├── system/              # End-to-end system behavior tests
└── test_data/           # Test data files (CSV, JSON, etc.)
```
## File Naming Conventions

### General Rule
All test files must start with the `test_` prefix for pytest auto-discovery.

**Pattern:** `test_{component/feature}.py`

### Unit Tests (`tests/unit/`)

Focus on testing individual components in isolation.

```
tests/unit/
├── models/
│   ├── test_database_models.py      # Database model tests
│   ├── test_api_models.py           # API model validation
│   └── test_model_relationships.py  # Model relationship tests
├── utils/
│   ├── test_data_processing.py      # Data processing utilities
│   ├── test_csv_processor.py        # CSV processing utilities
│   ├── test_validators.py           # Validation utilities
│   └── test_formatters.py           # Data formatting utilities
├── services/
│   ├── test_admin_service.py        # Admin service business logic
│   ├── test_auth_service.py         # Authentication service
│   ├── test_product_service.py      # Product management service
│   ├── test_shop_service.py         # Shop management service
│   ├── test_stock_service.py        # Stock management service
│   ├── test_marketplace_service.py  # Marketplace import service
│   └── test_stats_service.py        # Statistics service
└── core/
    ├── test_config.py               # Configuration management
    ├── test_database.py             # Database utilities
    └── test_logging.py              # Logging functionality
```
### Integration Tests (`tests/integration/`)

Focus on testing interactions between multiple components.

```
tests/integration/
├── api/v1/
│   ├── test_auth_endpoints.py         # Authentication API endpoints
│   ├── test_admin_endpoints.py        # Admin API endpoints
│   ├── test_product_endpoints.py      # Product CRUD endpoints
│   ├── test_shop_endpoints.py         # Shop management endpoints
│   ├── test_stock_endpoints.py        # Stock management endpoints
│   ├── test_marketplace_endpoints.py  # Import endpoints
│   ├── test_stats_endpoints.py        # Statistics endpoints
│   └── test_pagination.py             # Pagination functionality
├── database/
│   ├── test_migrations.py             # Database migration tests
│   ├── test_transactions.py           # Transaction handling
│   └── test_constraints.py            # Database constraints
├── security/
│   ├── test_authentication.py         # Authentication mechanisms
│   ├── test_authorization.py          # Permission and role tests
│   ├── test_input_validation.py       # Input security validation
│   ├── test_rate_limiting.py          # Rate limiting middleware
│   └── test_cors.py                   # CORS configuration
└── workflows/
    ├── test_product_import_workflow.py  # Complete import process
    ├── test_user_registration_flow.py   # User registration process
    └── test_shop_setup_flow.py          # Shop creation workflow
```
### System Tests (`tests/system/`)

Focus on end-to-end application behavior.

```
tests/system/
├── test_application_startup.py    # Application initialization
├── test_error_handling.py         # Global error handling
├── test_health_checks.py          # Health endpoint tests
├── test_database_connectivity.py  # Database connection tests
└── test_external_dependencies.py  # Third-party service tests
```

### Performance Tests (`tests/performance/`)

Focus on performance benchmarks and load testing.

```
tests/performance/
├── test_api_performance.py       # API endpoint performance
├── test_database_performance.py  # Database query performance
├── test_import_performance.py    # Large file import tests
└── test_concurrent_users.py      # Multi-user scenarios
```
## Class Naming Conventions

Use descriptive class names that clearly indicate what is being tested:

### Pattern: `Test{ComponentName}{TestType}`

```python
# Good examples
class TestProductService:        # Testing ProductService class
class TestProductModel:          # Testing Product model
class TestProductEndpoints:      # Testing product API endpoints
class TestProductValidation:     # Testing product validation logic
class TestProductPermissions:    # Testing product access control

# Avoid generic names
class TestProduct:               # Too vague - what aspect of Product?
class Tests:                     # Too generic
```

### Specialized Class Naming

```python
class TestProductCRUD:           # CRUD operations
class TestProductSecurity:       # Security-related tests
class TestProductErrorHandling:  # Error scenario tests
class TestProductEdgeCases:      # Boundary condition tests
```
## Method Naming Conventions

Test method names should be descriptive and follow a clear pattern that explains:
1. What is being tested
2. The scenario/condition
3. Expected outcome

### Pattern: `test_{action}_{scenario}_{expected_outcome}`

```python
# Good examples - clear and descriptive
def test_create_product_with_valid_data_returns_product():
def test_create_product_with_invalid_gtin_raises_validation_error():
def test_get_product_by_id_when_not_found_returns_404():
def test_update_product_without_permission_raises_forbidden():
def test_delete_product_with_existing_stock_prevents_deletion():

# Acceptable shorter versions
def test_create_product_success():
def test_create_product_invalid_data():
def test_create_product_unauthorized():
def test_get_product_not_found():
```
### Method Naming Patterns by Scenario

**Happy Path Tests:**
```python
def test_create_user_success():
def test_login_valid_credentials():
def test_import_csv_valid_format():
```

**Error/Edge Case Tests:**
```python
def test_create_user_duplicate_email_fails():
def test_login_invalid_credentials_rejected():
def test_import_csv_malformed_data_raises_error():
```

**Permission/Security Tests:**
```python
def test_admin_endpoint_requires_admin_role():
def test_user_cannot_access_other_user_data():
def test_api_key_required_for_marketplace_import():
```

**Boundary/Edge Cases:**
```python
def test_pagination_with_zero_items():
def test_price_with_maximum_decimal_places():
def test_gtin_with_minimum_valid_length():
```
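
When several boundary cases share one assertion, `pytest.mark.parametrize` keeps the descriptive naming while avoiding near-duplicate methods; pytest appends each parameter set to the test name in reports. A sketch (the length-13 rule mirrors the GTIN examples above; the real validator lives in the application code):

```python
import pytest

@pytest.mark.parametrize("gtin, expected_valid", [
    ("1" * 13, True),   # exactly 13 digits - valid length
    ("1" * 12, False),  # one digit short
    ("1" * 14, False),  # one digit too long
])
def test_gtin_length_boundary(gtin, expected_valid):
    # Inline length check stands in for the project's validate_gtin
    assert (len(gtin) == 13) == expected_valid
```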

## Marker Usage

Use pytest markers to categorize and run specific test types:

```python
@pytest.mark.unit
class TestProductService:
    """Unit tests for ProductService business logic"""


@pytest.mark.integration
@pytest.mark.api
class TestProductEndpoints:
    """Integration tests for Product API endpoints"""


@pytest.mark.slow
@pytest.mark.performance
class TestImportPerformance:
    """Performance tests for large CSV imports"""
```

### Available Markers

| Marker | Purpose |
|--------|---------|
| `@pytest.mark.unit` | Fast, isolated component tests |
| `@pytest.mark.integration` | Multi-component interaction tests |
| `@pytest.mark.system` | End-to-end system tests |
| `@pytest.mark.performance` | Performance and load tests |
| `@pytest.mark.slow` | Long-running tests |
| `@pytest.mark.api` | API endpoint tests |
| `@pytest.mark.database` | Tests requiring database |
| `@pytest.mark.auth` | Authentication/authorization tests |
| `@pytest.mark.admin` | Admin functionality tests |
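
Custom markers must be registered in `pytest.ini`, or pytest warns about (and under `--strict-markers` rejects) unknown marks. A sketch of the registration, assuming the marker set above:

```ini
[pytest]
markers =
    unit: fast, isolated component tests
    integration: multi-component interaction tests
    system: end-to-end system tests
    performance: performance and load tests
    slow: long-running tests
    api: API endpoint tests
    database: tests requiring database
    auth: authentication/authorization tests
    admin: admin functionality tests
```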

## File Organization Best Practices

### Mirror Source Structure
Test files should mirror the source code structure:

```
app/services/product_service.py → tests/unit/services/test_product_service.py
app/api/v1/admin.py             → tests/integration/api/v1/test_admin_endpoints.py
```

### Group Related Tests
Keep related tests in the same file:

```python
# test_product_service.py
class TestProductCreation:    # All product creation scenarios
class TestProductValidation:  # All validation scenarios
class TestProductQueries:     # All query scenarios
```

### Separate Complex Workflows
Break complex workflows into separate files:

```
test_user_registration_flow.py   # Complete registration process
test_shop_setup_flow.py          # Shop creation workflow
test_product_import_workflow.py  # CSV import end-to-end
```
## Running Tests by Name Pattern

Use pytest's flexible test discovery:

```bash
# Run all unit tests
pytest tests/unit/ -v

# Run specific component tests
pytest tests/ -k "product" -v

# Run by marker
pytest -m "unit and not slow" -v

# Run specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v

# Run specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
```
## Examples

### Complete Test File Example

```python
# tests/unit/services/test_product_service.py
import pytest
from unittest.mock import Mock, patch

from app.services.product_service import ProductService
from app.models.database_models import Product


@pytest.mark.unit
class TestProductService:
    """Unit tests for ProductService business logic"""

    def setup_method(self):
        """Setup run before each test method"""
        self.service = ProductService()
        self.mock_db = Mock()

    def test_create_product_with_valid_data_returns_product(self):
        """Test successful product creation with valid input data"""
        # Arrange
        product_data = {
            "name": "Test Product",
            "gtin": "1234567890123",
            "price": "19.99"
        }

        # Act
        result = self.service.create_product(product_data)

        # Assert
        assert result is not None
        assert result.name == "Test Product"

    def test_create_product_with_invalid_gtin_raises_validation_error(self):
        """Test product creation fails with invalid GTIN format"""
        # Arrange
        product_data = {
            "name": "Test Product",
            "gtin": "invalid_gtin",
            "price": "19.99"
        }

        # Act & Assert
        with pytest.raises(ValidationError, match="Invalid GTIN format"):
            self.service.create_product(product_data)


@pytest.mark.unit
class TestProductValidation:
    """Unit tests for product validation logic"""

    def test_validate_gtin_with_valid_format_returns_true(self):
        """Test GTIN validation accepts valid 13-digit GTIN"""
        assert ProductService.validate_gtin("1234567890123") is True

    def test_validate_gtin_with_invalid_format_returns_false(self):
        """Test GTIN validation rejects invalid format"""
        assert ProductService.validate_gtin("invalid") is False
```
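
For reference, here is a self-contained sketch of what a format-only `validate_gtin` like the one exercised above might look like. The project's real implementation may differ, for instance by also verifying the GS1 check digit:

```python
import re

def validate_gtin(gtin: str) -> bool:
    """Format check: exactly 13 digits, as the tests above expect.

    A stricter version would also verify the GS1 check digit.
    """
    return bool(re.fullmatch(r"\d{13}", gtin))
```

With this sketch, `validate_gtin("1234567890123")` is `True` and `validate_gtin("invalid")` is `False`, matching both test methods.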

## Naming Anti-Patterns to Avoid

❌ **Avoid these patterns:**

```python
# Too generic
def test_product():
def test_user():
def test_api():

# Unclear purpose
def test_function():
def test_method():
def test_endpoint():

# Non-descriptive
def test_1():
def test_a():
def test_scenario():

# Inconsistent naming
def testProductCreation():    # Wrong case
def test_product_creation():  # Good
def TestProductCreation():    # Wrong - this style is for classes
```

✅ **Use these patterns instead:**

```python
# Clear, descriptive, consistent
def test_create_product_with_valid_data_succeeds():
def test_authenticate_user_with_invalid_token_fails():
def test_get_products_endpoint_returns_paginated_results():
```

---

Following these naming conventions will help maintain a clean, understandable, and maintainable test suite that scales with your project.

---

# Testing Guide for Developers

This guide provides everything your development team needs to know about our comprehensive test suite structure, how to run tests effectively, and how to maintain test quality.

## Quick Start

```bash
# Install test dependencies
make install-test

# Run all tests
make test

# Run fast tests only (development workflow)
make test-fast

# Run with coverage
make test-coverage
```
## Test Structure Overview

Our test suite is organized hierarchically by test type and execution speed to optimize development workflows:

```
tests/
├── conftest.py                  # Core test configuration and database fixtures
├── pytest.ini                   # Test configuration with markers and coverage
├── fixtures/                    # Domain-organized test fixtures
│   ├── auth_fixtures.py         # Users, tokens, authentication headers
│   ├── product_fixtures.py      # Products, factories, bulk test data
│   ├── shop_fixtures.py         # Shops, stock, shop-product relationships
│   └── marketplace_fixtures.py  # Import jobs and marketplace data
├── unit/                        # Fast, isolated component tests (< 1 second)
│   ├── models/                  # Database and API model tests
│   ├── utils/                   # Utility function tests
│   ├── services/                # Business logic tests
│   └── middleware/              # Middleware component tests
├── integration/                 # Multi-component tests (1-10 seconds)
│   ├── api/v1/                  # API endpoint tests with database
│   ├── security/                # Authentication, authorization tests
│   ├── tasks/                   # Background task integration tests
│   └── workflows/               # Multi-step process tests
├── performance/                 # Performance benchmarks (10+ seconds)
│   └── test_api_performance.py  # Load testing and benchmarks
├── system/                      # End-to-end system tests (30+ seconds)
│   └── test_error_handling.py   # Application-wide error handling
└── test_data/                   # Static test data files
    └── csv/sample_products.csv  # Sample CSV for import testing
```
## Test Categories and When to Use Each

### Unit Tests (`tests/unit/`)
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Use when**: Testing business logic, data processing, model validation

```bash
# Run during active development
pytest -m unit

# Example locations:
tests/unit/services/test_product_service.py  # Business logic
tests/unit/utils/test_data_processing.py     # Utility functions
tests/unit/models/test_database_models.py    # Model validation
```

### Integration Tests (`tests/integration/`)
**Purpose**: Test component interactions
**Speed**: Moderate (1-10 seconds each)
**Use when**: Testing API endpoints, service interactions, workflows

```bash
# Run before commits
pytest -m integration

# Example locations:
tests/integration/api/v1/test_admin_endpoints.py    # API endpoints
tests/integration/security/test_authentication.py   # Auth workflows
tests/integration/workflows/test_product_import.py  # Multi-step processes
```

### Performance Tests (`tests/performance/`)
**Purpose**: Validate performance requirements
**Speed**: Slow (10+ seconds each)
**Use when**: Testing response times, load capacity, large data processing

```bash
# Run periodically or in CI
pytest -m performance
```

### System Tests (`tests/system/`)
**Purpose**: End-to-end application behavior
**Speed**: Slowest (30+ seconds each)
**Use when**: Testing complete user scenarios, error handling across layers

```bash
# Run before releases
pytest -m system
```
## Daily Development Workflow

### During Active Development
```bash
# Quick feedback loop - run relevant unit tests
pytest tests/unit/services/test_product_service.py -v

# Test specific functionality you're working on
pytest -k "product and create" -m unit

# Fast comprehensive check
make test-fast  # Equivalent to: pytest -m "not slow"
```

### Before Committing Code
```bash
# Run unit and integration tests
make test-unit
make test-integration

# Or run both with coverage
make test-coverage
```

### Before Creating Pull Request
```bash
# Full test suite with linting
make ci  # Runs format, lint, and test-coverage

# Check if all tests pass
make test
```
## Running Specific Tests

### By Test Type
```bash
# Fast unit tests only
pytest -m unit

# Integration tests only
pytest -m integration

# Everything except slow tests
pytest -m "not slow"

# Database-dependent tests
pytest -m database

# Authentication-related tests
pytest -m auth
```

### By Component/Domain
```bash
# All product-related tests
pytest -k "product"

# Admin functionality tests
pytest -m admin

# API endpoint tests
pytest -m api

# All tests in a directory
pytest tests/unit/services/ -v
```

### By Specific Files or Methods
```bash
# Specific test file
pytest tests/unit/services/test_product_service.py -v

# Specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v

# Specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
```
## Test Fixtures and Data

### Using Existing Fixtures
Our fixtures are organized by domain in the `fixtures/` directory:

```python
# In your test file
def test_product_creation(test_user, test_shop, auth_headers):
    """Uses auth_fixtures.py fixtures"""
    # test_user: Creates a test user
    # test_shop: Creates a test shop owned by test_user
    # auth_headers: Provides authentication headers for API calls

def test_multiple_products(multiple_products):
    """Uses product_fixtures.py fixtures"""
    # multiple_products: Creates 5 test products with different attributes
    assert len(multiple_products) == 5

def test_with_factory(product_factory, db):
    """Uses factory fixtures for custom test data"""
    # Create a custom product with specific attributes
    product = product_factory(db, title="Custom Product", price="99.99")
    assert product.title == "Custom Product"
```
### Available Fixtures by Domain

**Authentication (`auth_fixtures.py`)**:
- `test_user`, `test_admin`, `other_user`
- `auth_headers`, `admin_headers`
- `auth_manager`

**Products (`product_fixtures.py`)**:
- `test_product`, `unique_product`, `multiple_products`
- `product_factory` (for custom products)

**Shops (`shop_fixtures.py`)**:
- `test_shop`, `unique_shop`, `inactive_shop`, `verified_shop`
- `shop_product`, `test_stock`, `multiple_stocks`
- `shop_factory` (for custom shops)

**Marketplace (`marketplace_fixtures.py`)**:
- `test_marketplace_job`
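
Factory fixtures like `product_factory` typically follow one pattern: the fixture returns a builder function that merges defaults with per-test overrides. A standalone sketch of that pattern, using a stand-in dataclass rather than the real `Product` model:

```python
from dataclasses import dataclass

@dataclass
class FakeProduct:  # stand-in for the real Product model
    title: str
    price: str

def make_product(**overrides):
    """Builder returned by a factory fixture: defaults merged with overrides."""
    data = {"title": "Test Product", "price": "10.99", **overrides}
    return FakeProduct(**data)

custom = make_product(title="Custom Product")  # override just one field
```

The real fixture would additionally persist the object through the `db` session before returning it.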

## Writing New Tests

### Test File Location
Choose the location based on what you're testing:

```python
# Business logic → unit tests
tests/unit/services/test_my_new_service.py

# API endpoints → integration tests
tests/integration/api/v1/test_my_new_endpoints.py

# Multi-component workflows → integration tests
tests/integration/workflows/test_my_new_workflow.py

# Performance concerns → performance tests
tests/performance/test_my_performance.py
```
### Test Class Structure
```python
import pytest
from app.services.my_service import MyService


@pytest.mark.unit      # Always add appropriate markers
@pytest.mark.products  # Domain-specific marker
class TestMyService:
    """Test suite for MyService business logic"""

    def setup_method(self):
        """Run before each test method"""
        self.service = MyService()

    def test_create_item_with_valid_data_succeeds(self):
        """Test successful item creation - descriptive name explaining scenario"""
        # Arrange
        item_data = {"name": "Test Item", "price": "10.99"}

        # Act
        result = self.service.create_item(item_data)

        # Assert
        assert result is not None
        assert result.name == "Test Item"

    def test_create_item_with_invalid_data_raises_validation_error(self):
        """Test validation error handling"""
        # Arrange
        invalid_data = {"name": "", "price": "invalid"}

        # Act & Assert
        with pytest.raises(ValidationError):
            self.service.create_item(invalid_data)
```
### API Integration Test Example
```python
import pytest


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.products
class TestProductEndpoints:
    """Integration tests for product API endpoints"""

    def test_create_product_endpoint_success(self, client, auth_headers):
        """Test successful product creation via API"""
        # Arrange
        product_data = {
            "product_id": "TEST001",
            "title": "Test Product",
            "price": "19.99"
        }

        # Act
        response = client.post("/api/v1/product",
                               json=product_data,
                               headers=auth_headers)

        # Assert
        assert response.status_code == 200
        assert response.json()["product_id"] == "TEST001"
```
## Test Naming Conventions

### Files
- `test_{component_name}.py` for the file name
- Mirror your source structure: `app/services/product.py` → `tests/unit/services/test_product_service.py`

### Classes
- `TestComponentName` for the main component
- `TestComponentValidation` for validation-specific tests
- `TestComponentErrorHandling` for error scenarios

### Methods
Use descriptive names that explain the scenario:
```python
# Good - explains what, when, and expected outcome
def test_create_product_with_valid_data_returns_product(self):
def test_create_product_with_duplicate_id_raises_error(self):
def test_get_product_when_not_found_returns_404(self):

# Acceptable shorter versions
def test_create_product_success(self):
def test_create_product_validation_error(self):
def test_get_product_not_found(self):
```
## Coverage Requirements

We maintain high coverage standards:
- **Minimum overall coverage**: 80%
- **New code coverage**: 90%+
- **Critical paths**: 95%+

```bash
# Check coverage
make test-coverage

# View detailed HTML report
open htmlcov/index.html

# Fail the build if coverage is too low
pytest --cov=app --cov-fail-under=80
```

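To enforce the 80% floor on every local run as well, the flags can be baked into the pytest configuration. A sketch, assuming the project keeps its options in `pytest.ini` and has `pytest-cov` installed:

```ini
[pytest]
addopts = --cov=app --cov-fail-under=80 --cov-report=term-missing
```

`term-missing` additionally lists the uncovered line numbers in the terminal summary, which is handy when chasing the last few percent.
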
## Debugging Failed Tests

### Get Detailed Information
```bash
# Verbose output with long tracebacks and local variables
pytest tests/path/to/test.py -vv --tb=long --showlocals

# Stop on the first failure
pytest -x

# Re-run only the tests that failed last time
pytest --lf
```

### Common Issues and Solutions

**Import Errors**:
```bash
# Ensure you're in the project root and have installed in dev mode
pip install -e .
PYTHONPATH=. pytest
```

**Database Issues**:
```bash
# Tests use in-memory SQLite by default
# Check that fixtures are properly imported
pytest --fixtures tests/
```

**Fixture Not Found**:
```bash
# Ensure fixture modules are listed in conftest.py's pytest_plugins
# Check fixture dependencies (test_shop needs test_user)
pytest --fixtures tests/
```

## Performance and Optimization

### Speed Up Test Runs
```bash
# Run in parallel (install pytest-xdist first)
pytest -n auto

# Skip slow tests during development
pytest -m "not slow"

# Run only tests affected by recent changes (install pytest-testmon first)
pytest --testmon
```

### Find Slow Tests
```bash
# Show the 10 slowest tests
pytest --durations=10

# Show all test durations
pytest --durations=0
```

## Continuous Integration

Our tests integrate with CI/CD pipelines through make targets:

```bash
# Commands used in CI
make ci              # Format, lint, test with coverage
make test-fast       # Quick feedback in early CI stages
make test-coverage   # Full test run with coverage reporting
```

The CI pipeline:
1. Runs `make test-fast` for quick feedback
2. Runs `make ci` for comprehensive checks
3. Generates coverage reports in XML format
4. Uploads the coverage reports to reporting tools

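For orientation, the CI-facing targets plausibly reduce to pytest invocations along these lines — a sketch only, not the project's actual `Makefile`:

```make
# Hypothetical Makefile fragment; the recipes are assumptions.
test-fast:
	pytest -m "not slow"

test-coverage:
	pytest --cov=app --cov-report=html --cov-report=xml
```

The XML report is what the upload step in the pipeline consumes; the HTML report is for local inspection via `open htmlcov/index.html`.
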
## Best Practices Summary

### DO:
- Write tests for new code before committing
- Use descriptive test names explaining the scenario
- Keep unit tests fast (< 1 second each)
- Use appropriate fixtures for test data
- Add proper pytest markers to categorize tests
- Test both the happy path and error scenarios
- Maintain good test coverage (80%+)

### DON'T:
- Write tests that depend on external services (use mocks)
- Create tests that depend on execution order
- Use hardcoded values that might change
- Write overly complex test setups
- Ignore failing tests
- Skip adding tests for bug fixes

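As an illustration of the "use mocks" rule, an external dependency can be replaced with a stub from the standard library so the test never touches the network. The `apply_discount` helper and the price-service interface here are hypothetical, not part of the project:

```python
from unittest.mock import Mock

# Hypothetical code under test: depends on an external price service.
def apply_discount(price_service, product_id, pct):
    price = price_service.get_price(product_id)
    return round(price * (1 - pct), 2)

# In the test, the real service is replaced by a Mock with a canned return value.
price_service = Mock()
price_service.get_price.return_value = 19.99

assert apply_discount(price_service, "TEST001", 0.10) == 17.99
price_service.get_price.assert_called_once_with("TEST001")
```

The mock also records how it was called, so the test verifies both the result and the interaction without any external service.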
## Getting Help

- **Examples**: Look at existing tests in similar components
- **Fixtures**: Check `tests/fixtures/` for available test data
- **Configuration**: See `pytest.ini` for the available markers
- **Make targets**: Run `make help` to see all available commands
- **Team support**: Ask in team channels or create GitHub issues

## Make Commands Reference

```bash
make install-test     # Install test dependencies
make test             # Run all tests
make test-unit        # Run unit tests only
make test-integration # Run integration tests only
make test-fast        # Run all except slow tests
make test-coverage    # Run with coverage report
make ci               # Full CI pipeline (format, lint, test)
```

Use this guide as your daily reference for testing. The structure is designed to give you fast feedback during development while maintaining comprehensive test coverage.