# Testing Overview

Our comprehensive test suite ensures code quality, reliability, and maintainability. This section covers everything you need to know about testing in the Letzshop Import project.

## Test Structure

We use a hierarchical test organization based on test types and scope:

```
tests/
├── conftest.py          # Core test configuration and fixtures
├── pytest.ini           # Pytest configuration with custom markers
├── fixtures/            # Shared test fixtures by domain
├── unit/                # Fast, isolated component tests
├── integration/         # Multi-component interaction tests
├── performance/         # Performance and load tests
├── system/              # End-to-end system behavior tests
└── test_data/           # Test data files (CSV, JSON, etc.)
```

## Test Types

### 🔧 Unit Tests
- **Purpose**: Test individual components in isolation
- **Speed**: Very fast (< 1 second each)
- **Scope**: Single function, method, or class

```bash
# Run unit tests
pytest -m unit
```

**Examples**:
- Data processing utilities
- Model validation
- Service business logic
- Individual API endpoint handlers

### 🔗 Integration Tests
- **Purpose**: Test component interactions
- **Speed**: Fast to moderate (1-5 seconds each)
- **Scope**: Multiple components working together

```bash
# Run integration tests
pytest -m integration
```

**Examples**:
- API endpoints with database
- Service layer interactions
- Authentication workflows
- File processing pipelines

### 🏗️ System Tests
- **Purpose**: Test complete application behavior
- **Speed**: Moderate (5-30 seconds each)
- **Scope**: End-to-end user scenarios

```bash
# Run system tests
pytest -m system
```

**Examples**:
- Complete user registration flow
- Full CSV import process
- Multi-step workflows
- Error handling across layers

### ⚡ Performance Tests
- **Purpose**: Validate performance requirements
- **Speed**: Slow (30+ seconds each)
- **Scope**: Load, stress, and performance testing

```bash
# Run performance tests
pytest -m performance
```

**Examples**:
- API response times
- Database query performance
- Large file processing
- Concurrent user scenarios

## Running Tests

### Basic Commands

```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run specific test types
pytest -m unit
pytest -m integration
pytest -m "unit or integration"

# Run tests in a specific directory
pytest tests/unit/
pytest tests/integration/api/

# Run a specific test file
pytest tests/unit/services/test_product_service.py

# Run a specific test class
pytest tests/unit/services/test_product_service.py::TestProductService

# Run a specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success
```

### Advanced Options

```bash
# Run with coverage report
pytest --cov=app --cov-report=html

# Run tests matching a keyword pattern
pytest -k "product and not slow"

# Stop on first failure
pytest -x

# Re-run only the tests that failed last time
pytest --lf

# Run tests in parallel (requires pytest-xdist)
pytest -n auto

# Show the slowest tests
pytest --durations=10
```

## Test Markers

We use pytest markers to categorize and selectively run tests:

| Marker | Purpose |
|--------|---------|
| `@pytest.mark.unit` | Fast, isolated component tests |
| `@pytest.mark.integration` | Multi-component interaction tests |
| `@pytest.mark.system` | End-to-end system tests |
| `@pytest.mark.performance` | Performance and load tests |
| `@pytest.mark.slow` | Long-running tests |
| `@pytest.mark.api` | API endpoint tests |
| `@pytest.mark.database` | Tests requiring a database |
| `@pytest.mark.auth` | Authentication/authorization tests |

### Example Usage

```python
import pytest


@pytest.mark.unit
@pytest.mark.products
class TestProductService:
    def test_create_product_success(self):
        # Test implementation
        pass


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.database
def test_product_creation_endpoint():
    # Test implementation
    pass
```

## Test Configuration

### pytest.ini
Our pytest configuration includes:

- **Custom markers** for test categorization
- **Coverage settings** with an 80% minimum threshold
- **Test discovery** patterns and paths
- **Output formatting** for better readability

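The exact file is not reproduced here, but a minimal sketch of what such a `pytest.ini` typically looks like (marker names from the table above; the coverage flags assume `pytest-cov` is installed):

```ini
[pytest]
testpaths = tests
markers =
    unit: fast, isolated component tests
    integration: multi-component interaction tests
    system: end-to-end system tests
    performance: performance and load tests
    slow: long-running tests
addopts = --cov=app --cov-fail-under=80
```

Registering markers here keeps `pytest -m unit` selections reliable and silences unknown-marker warnings.
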
### conftest.py
Core test fixtures and configuration:

- **Database fixtures** for test isolation
- **Authentication fixtures** for user/admin testing
- **Client fixtures** for API testing
- **Mock fixtures** for external dependencies

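Mock fixtures for external dependencies are usually built on `unittest.mock` (directly or via `pytest-mock`). A minimal standalone sketch — the `catalog_api` client and its `fetch_product` method are hypothetical names, not part of the project's API:

```python
from unittest.mock import Mock

# Hypothetical external dependency (e.g. a remote catalog API client).
catalog_api = Mock()
catalog_api.fetch_product.return_value = {"gtin": "1234567890123", "name": "Test Product"}

# Code under test can call the mock exactly like the real client.
result = catalog_api.fetch_product("1234567890123")
print(result["name"])  # Test Product

# The test can then verify how the dependency was used.
catalog_api.fetch_product.assert_called_once_with("1234567890123")
```
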
## Test Data Management

### Fixtures
We organize fixtures by domain:

```python
# tests/fixtures/product_fixtures.py
import pytest


@pytest.fixture
def sample_product():
    return {
        "name": "Test Product",
        "gtin": "1234567890123",
        "price": "19.99"
    }


@pytest.fixture
def product_factory():
    def _create_product(**kwargs):
        defaults = {"name": "Test", "gtin": "1234567890123"}
        defaults.update(kwargs)
        return defaults
    return _create_product
```

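The factory pattern above lets a test override only the fields it cares about. A sketch of the merge behavior, with the fixture's inner function inlined so it runs standalone:

```python
def _create_product(**kwargs):
    # Same defaults-plus-overrides merge as the product_factory fixture above.
    defaults = {"name": "Test", "gtin": "1234567890123"}
    defaults.update(kwargs)
    return defaults

# Override one field and add another; the rest keep their defaults.
product = _create_product(name="Custom Product", price="9.99")
print(product["name"])  # Custom Product
print(product["gtin"])  # 1234567890123
```
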
### Test Data Files
Static test data lives in `tests/test_data/`:

- CSV files for import testing
- JSON files for API testing
- Sample configuration files

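Tests can load such CSV files with the standard `csv` module. A sketch using an inline sample in place of a real file from `tests/test_data/` (the column names are illustrative, not the project's actual schema):

```python
import csv
import io

# Inline stand-in for a CSV file under tests/test_data/.
sample = "name,gtin,price\nTest Product,1234567890123,19.99\n"

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["gtin"])  # 1234567890123
```
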
## Coverage Requirements

We maintain high test coverage standards:

- **Minimum coverage**: 80% overall
- **Critical paths**: 95%+ coverage required
- **New code**: Must include tests
- **HTML reports**: Generated in `htmlcov/`

```bash
# Generate coverage report
pytest --cov=app --cov-report=html --cov-report=term-missing
```

## Best Practices

### Test Naming
- Use descriptive test names that explain the scenario
- Follow the pattern: `test_{action}_{scenario}_{expected_outcome}`
- See our [Test Naming Conventions](test-naming-conventions.md) for details

### Test Structure
- **Arrange**: Set up test data and conditions
- **Act**: Execute the code being tested
- **Assert**: Verify the expected outcome

```python
def test_create_product_with_valid_data_returns_product(self):
    # Arrange
    product_data = {"name": "Test", "gtin": "1234567890123"}

    # Act
    result = product_service.create_product(product_data)

    # Assert
    assert result is not None
    assert result.name == "Test"
```

### Test Isolation
- Each test should be independent
- Use database transactions that roll back after each test
- Mock external dependencies
- Clean up test data

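Isolation can be demonstrated with an in-memory SQLite database: each test gets its own connection, so nothing leaks between tests. A standalone sketch (a real suite would wrap this in a function-scoped fixture):

```python
import sqlite3

def fresh_db():
    # Each call returns an isolated in-memory database, mirroring what a
    # function-scoped database fixture would hand to each test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (name TEXT, gtin TEXT)")
    return conn

db1 = fresh_db()
db1.execute("INSERT INTO products VALUES ('Test Product', '1234567890123')")

db2 = fresh_db()  # a second "test" starts from a clean slate
print(db2.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 0
```
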
## Continuous Integration

Our CI pipeline runs:

1. **Linting** with flake8 and black
2. **Type checking** with mypy
3. **Security scanning** with bandit
4. **Unit tests** on every commit
5. **Integration tests** on pull requests
6. **Performance tests** on releases

## Tools and Libraries

- **pytest**: Test framework and runner
- **pytest-cov**: Coverage reporting
- **pytest-asyncio**: Async test support
- **pytest-mock**: Mocking utilities
- **faker**: Test data generation
- **httpx**: HTTP client for API testing
- **factory-boy**: Test object factories

## Getting Started

1. **Read the conventions**: [Test Naming Conventions](test-naming-conventions.md)
2. **Run existing tests**: `pytest -v`
3. **Write your first test**: See examples in existing test files
4. **Check coverage**: `pytest --cov`

## Need Help?

- **Examples**: Look at existing tests in the `tests/` directory
- **Documentation**: This testing section has detailed guides
- **Issues**: Create a GitHub issue for testing questions
- **Standards**: Follow our [naming conventions](test-naming-conventions.md)