Updating documentation

commit 91a0a13daa (parent 366093bbc6)
2025-09-19 21:34:56 +02:00
3 changed files with 805 additions and 0 deletions

docs/testing/index.md (new file, 299 lines)
# Testing Overview
Our comprehensive test suite ensures code quality, reliability, and maintainability. This section covers everything you need to know about testing in the Letzshop Import project.
## Test Structure
We use a hierarchical test organization based on test types and scope:
```
tests/
├── conftest.py # Core test configuration and fixtures
├── pytest.ini # Pytest configuration with custom markers
├── fixtures/ # Shared test fixtures by domain
├── unit/ # Fast, isolated component tests
├── integration/ # Multi-component interaction tests
├── performance/ # Performance and load tests
├── system/ # End-to-end system behavior tests
└── test_data/ # Test data files (CSV, JSON, etc.)
```
## Test Types
### 🔧 Unit Tests
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Scope**: Single function, method, or class
```bash
# Run unit tests
pytest -m unit
```
**Examples**:
- Data processing utilities
- Model validation
- Service business logic
- Individual API endpoint handlers
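As an illustration, a unit test for a small data-processing utility might look like the sketch below; the `normalize_gtin` helper is a hypothetical example, not project code:

```python
import pytest

# Hypothetical utility under test: pad a raw GTIN up to 13 digits
def normalize_gtin(raw: str) -> str:
    digits = raw.strip()
    if not digits.isdigit():
        raise ValueError(f"invalid GTIN: {raw!r}")
    return digits.zfill(13)

@pytest.mark.unit
def test_normalize_gtin_pads_short_codes():
    assert normalize_gtin(" 12345 ") == "0000000012345"

@pytest.mark.unit
def test_normalize_gtin_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        normalize_gtin("not-a-gtin")
```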
### 🔗 Integration Tests
**Purpose**: Test component interactions
**Speed**: Fast to moderate (1-5 seconds each)
**Scope**: Multiple components working together
```bash
# Run integration tests
pytest -m integration
```
**Examples**:
- API endpoints with database
- Service layer interactions
- Authentication workflows
- File processing pipelines
### 🏗️ System Tests
**Purpose**: Test complete application behavior
**Speed**: Moderate (5-30 seconds each)
**Scope**: End-to-end user scenarios
```bash
# Run system tests
pytest -m system
```
**Examples**:
- Complete user registration flow
- Full CSV import process
- Multi-step workflows
- Error handling across layers
### ⚡ Performance Tests
**Purpose**: Validate performance requirements
**Speed**: Slow (30+ seconds each)
**Scope**: Load, stress, and performance testing
```bash
# Run performance tests
pytest -m performance
```
**Examples**:
- API response times
- Database query performance
- Large file processing
- Concurrent user scenarios
## Running Tests
### Basic Commands
```bash
# Run all tests
pytest
# Run with verbose output
pytest -v
# Run specific test type
pytest -m unit
pytest -m integration
pytest -m "unit or integration"
# Run tests in specific directory
pytest tests/unit/
pytest tests/integration/api/
# Run specific test file
pytest tests/unit/services/test_product_service.py
# Run specific test class
pytest tests/unit/services/test_product_service.py::TestProductService
# Run specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success
```
### Advanced Options
```bash
# Run with coverage report
pytest --cov=app --cov-report=html
# Run tests matching pattern
pytest -k "product and not slow"
# Stop on first failure
pytest -x
# Run failed tests from last run
pytest --lf
# Run tests in parallel (if pytest-xdist installed)
pytest -n auto
# Show slowest tests
pytest --durations=10
```
## Test Markers
We use pytest markers to categorize and selectively run tests:
| Marker | Purpose |
|--------|---------|
| `@pytest.mark.unit` | Fast, isolated component tests |
| `@pytest.mark.integration` | Multi-component interaction tests |
| `@pytest.mark.system` | End-to-end system tests |
| `@pytest.mark.performance` | Performance and load tests |
| `@pytest.mark.slow` | Long-running tests |
| `@pytest.mark.api` | API endpoint tests |
| `@pytest.mark.database` | Tests requiring database |
| `@pytest.mark.auth` | Authentication/authorization tests |
### Example Usage
```python
@pytest.mark.unit
@pytest.mark.products
class TestProductService:
    def test_create_product_success(self):
        # Test implementation
        pass


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.database
def test_product_creation_endpoint():
    # Test implementation
    pass
```
## Test Configuration
### pytest.ini
Our pytest configuration includes:
- **Custom markers** for test categorization
- **Coverage settings** with 80% minimum threshold
- **Test discovery** patterns and paths
- **Output formatting** for better readability
### conftest.py
Core test fixtures and configuration:
- **Database fixtures** for test isolation
- **Authentication fixtures** for user/admin testing
- **Client fixtures** for API testing
- **Mock fixtures** for external dependencies
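A minimal sketch of what such fixtures can look like; the fixture names and the in-memory SQLite database are illustrative assumptions, not the project's actual `conftest.py`:

```python
import sqlite3
import pytest

def make_test_connection():
    # An in-memory SQLite database keeps each test's data isolated and disposable
    return sqlite3.connect(":memory:")

@pytest.fixture
def db_connection():
    conn = make_test_connection()
    try:
        yield conn
    finally:
        conn.close()

@pytest.fixture
def admin_user():
    # Minimal stand-in for an authenticated admin account
    return {"username": "admin", "role": "admin"}
```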
## Test Data Management
### Fixtures
We organize fixtures by domain:
```python
# tests/fixtures/product_fixtures.py

@pytest.fixture
def sample_product():
    return {
        "name": "Test Product",
        "gtin": "1234567890123",
        "price": "19.99",
    }


@pytest.fixture
def product_factory():
    def _create_product(**kwargs):
        defaults = {"name": "Test", "gtin": "1234567890123"}
        defaults.update(kwargs)
        return defaults

    return _create_product
```
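A test can then request the factory and override only the fields it cares about. The sketch below repeats the factory body so it is self-contained; exposing the inner function at module level is one way to keep it directly callable outside the fixture:

```python
import pytest

def make_product(**kwargs):
    defaults = {"name": "Test", "gtin": "1234567890123"}
    defaults.update(kwargs)
    return defaults

@pytest.fixture
def product_factory():
    return make_product

def test_renamed_product_keeps_default_gtin(product_factory):
    # Only the overridden fields change; defaults fill in the rest
    product = product_factory(name="Discounted", price="9.99")
    assert product["name"] == "Discounted"
    assert product["gtin"] == "1234567890123"
```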
### Test Data Files
Static test data in `tests/test_data/`:
- CSV files for import testing
- JSON files for API testing
- Sample configuration files
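Loading static test data is usually a one-liner with the standard library. The sketch below parses an in-memory sample so it runs anywhere; in a real test the source would be a file under `tests/test_data/` (the `products.csv` name is illustrative):

```python
import csv
import io
from pathlib import Path

TEST_DATA_DIR = Path("tests/test_data")

def load_product_rows(source):
    """Parse product CSV data into a list of row dicts."""
    return list(csv.DictReader(source))

# In a real test:
#   with open(TEST_DATA_DIR / "products.csv", newline="") as f:
#       rows = load_product_rows(f)
sample = io.StringIO("name,gtin\nTest Product,1234567890123\n")
rows = load_product_rows(sample)
```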
## Coverage Requirements
We maintain high test coverage standards:
- **Minimum coverage**: 80% overall
- **Critical paths**: 95%+ coverage required
- **New code**: Must include tests
- **HTML reports**: Generated in `htmlcov/`
```bash
# Generate coverage report
pytest --cov=app --cov-report=html --cov-report=term-missing
```
## Best Practices
### Test Naming
- Use descriptive test names that explain the scenario
- Follow the pattern: `test_{action}_{scenario}_{expected_outcome}`
- See our [Test Naming Conventions](test-naming-conventions.md) for details
### Test Structure
- **Arrange**: Set up test data and conditions
- **Act**: Execute the code being tested
- **Assert**: Verify the expected outcome
```python
def test_create_product_with_valid_data_returns_product():
    # Arrange
    product_data = {"name": "Test", "gtin": "1234567890123"}

    # Act
    result = product_service.create_product(product_data)

    # Assert
    assert result is not None
    assert result.name == "Test"
```
### Test Isolation
- Each test should be independent
- Use database transactions that rollback
- Mock external dependencies
- Clean up test data
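Mocking an external dependency is often the cheapest isolation win. A sketch using `unittest.mock` (both function names are hypothetical):

```python
from unittest import mock

def fetch_exchange_rate(currency: str) -> float:
    # Stands in for a real call to an external API
    raise RuntimeError("network calls are forbidden in tests")

def convert_price(amount: float, currency: str) -> float:
    return round(amount * fetch_exchange_rate(currency), 2)

def test_convert_price_uses_mocked_rate():
    # Patch the external call so the test is fast and deterministic
    with mock.patch(f"{__name__}.fetch_exchange_rate", return_value=1.1):
        assert convert_price(10.0, "USD") == 11.0
```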
## Continuous Integration
Our CI pipeline runs:
1. **Linting** with flake8 and black
2. **Type checking** with mypy
3. **Security scanning** with bandit
4. **Unit tests** on every commit
5. **Integration tests** on pull requests
6. **Performance tests** on releases
## Tools and Libraries
- **pytest**: Test framework and runner
- **pytest-cov**: Coverage reporting
- **pytest-asyncio**: Async test support
- **pytest-mock**: Mocking utilities
- **faker**: Test data generation
- **httpx**: HTTP client for API testing
- **factory-boy**: Test object factories
## Getting Started
1. **Read the conventions**: [Test Naming Conventions](test-naming-conventions.md)
2. **Run existing tests**: `pytest -v`
3. **Write your first test**: See examples in existing test files
4. **Check coverage**: `pytest --cov`
## Need Help?
- **Examples**: Look at existing tests in `tests/` directory
- **Documentation**: This testing section has detailed guides
- **Issues**: Create a GitHub issue for testing questions
- **Standards**: Follow our [naming conventions](test-naming-conventions.md)

---

(new file, 403 lines)
# Running Tests
This guide covers everything you need to know about running tests in the Letzshop Import project.
## Prerequisites
Ensure you have the test dependencies installed:
```bash
pip install -r tests/requirements-test.txt
```
Required packages:
- `pytest>=7.4.0` - Test framework
- `pytest-cov>=4.1.0` - Coverage reporting
- `pytest-asyncio>=0.21.0` - Async test support
- `pytest-mock>=3.11.0` - Mocking utilities
- `httpx>=0.24.0` - HTTP client for API tests
- `faker>=19.0.0` - Test data generation
## Basic Test Commands
### Run All Tests
```bash
# Run the entire test suite
pytest
# Run with verbose output
pytest -v
# Run with very verbose output (show individual test results)
pytest -vv
```
### Run by Test Type
```bash
# Run only unit tests (fast)
pytest -m unit
# Run only integration tests
pytest -m integration
# Run unit and integration tests
pytest -m "unit or integration"
# Run everything except slow tests
pytest -m "not slow"
# Run performance tests only
pytest -m performance
```
### Run by Directory
```bash
# Run all unit tests
pytest tests/unit/
# Run all integration tests
pytest tests/integration/
# Run API endpoint tests
pytest tests/integration/api/
# Run service layer tests
pytest tests/unit/services/
```
### Run Specific Files
```bash
# Run specific test file
pytest tests/unit/services/test_product_service.py
# Run multiple specific files
pytest tests/unit/services/test_product_service.py tests/unit/utils/test_data_processing.py
```
### Run Specific Tests
```bash
# Run specific test class
pytest tests/unit/services/test_product_service.py::TestProductService
# Run specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success
# Run tests matching pattern
pytest -k "product and create"
pytest -k "test_create_product"
```
## Advanced Test Options
### Coverage Reporting
```bash
# Run with coverage report
pytest --cov=app
# Coverage with missing lines
pytest --cov=app --cov-report=term-missing
# Generate HTML coverage report
pytest --cov=app --cov-report=html
# Coverage for specific modules
pytest --cov=app.services --cov=app.api
# Fail if coverage below threshold
pytest --cov=app --cov-fail-under=80
```
### Output and Debugging
```bash
# Show local variables on failure
pytest --tb=short --showlocals
# Stop on first failure
pytest -x
# Stop after N failures
pytest --maxfail=3
# Show print statements
pytest -s
# Turn warnings into errors
pytest -W error
# Capture output at the Python level (the default is file-descriptor capture)
pytest --capture=sys
```
### Test Selection and Filtering
```bash
# Run only failed tests from last run
pytest --lf
# Run failed tests first, then continue
pytest --ff
# Run tests that match keyword expression
pytest -k "user and not admin"
# Run only tests affected by recent code changes (requires pytest-testmon)
pytest --testmon
# Collect tests without running
pytest --collect-only
```
### Performance and Parallel Execution
```bash
# Show 10 slowest tests
pytest --durations=10
# Show all test durations
pytest --durations=0
# Run tests in parallel (requires pytest-xdist)
pytest -n auto
pytest -n 4 # Use 4 workers
```
## Test Environment Setup
### Database Tests
```bash
# Run database tests (uses test database)
pytest -m database
# Run with test database reset
pytest --tb=short tests/integration/database/
```
### API Tests
```bash
# Run API endpoint tests
pytest -m api
# Run with test client
pytest tests/integration/api/ -v
```
### Authentication Tests
```bash
# Run auth-related tests
pytest -m auth
# Run with user fixtures
pytest tests/integration/security/ -v
```
## Configuration Options
### pytest.ini Settings
Our `pytest.ini` is configured with:
```ini
[pytest]
# Test discovery
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
# Default options
addopts =
-v
--tb=short
--strict-markers
--color=yes
--durations=10
--cov=app
--cov-report=term-missing
--cov-report=html:htmlcov
--cov-fail-under=80
# Custom markers
markers =
unit: Unit tests
integration: Integration tests
# ... other markers
```
### Environment Variables
```bash
# Set test environment
export TESTING=true
# Database URL for tests (uses in-memory SQLite by default)
export TEST_DATABASE_URL="sqlite:///:memory:"
# Disable external API calls in tests
export MOCK_EXTERNAL_APIS=true
```
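A sketch of how test configuration might read these variables; the defaults shown are assumptions:

```python
import os

def get_test_database_url() -> str:
    # Fall back to an in-memory SQLite database when nothing is configured
    return os.environ.get("TEST_DATABASE_URL", "sqlite:///:memory:")

def external_apis_mocked() -> bool:
    return os.environ.get("MOCK_EXTERNAL_APIS", "false").lower() == "true"
```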
## Common Test Scenarios
### Development Workflow
```bash
# Quick smoke test (unit tests only)
pytest -m unit --maxfail=1
# Full test run before commit
pytest -m "unit or integration"
# Pre-push comprehensive test
pytest --cov=app --cov-fail-under=80
```
### Debugging Failed Tests
```bash
# Run failed test with detailed output
pytest --lf -vv --tb=long --showlocals
# Drop into the debugger on failure (uses the built-in pdb)
pytest --pdb
# Run specific failing test in isolation
pytest tests/path/to/test.py::TestClass::test_method -vv -s
```
### Performance Testing
```bash
# Run performance tests only
pytest -m performance --durations=0
# Run with performance profiling (requires pytest-profiling)
pytest -m performance --profile
# Load testing specific endpoints
pytest tests/performance/test_api_performance.py -v
```
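A performance test itself can be as simple as a timed assertion. The 0.5-second budget and the `process_rows` function below are illustrative, not project code:

```python
import time
import pytest

def process_rows(rows):
    # Stand-in for real per-row processing work
    return [r.strip().lower() for r in rows]

@pytest.mark.performance
def test_processing_10k_rows_stays_under_budget():
    rows = [f"  Row-{i}  " for i in range(10_000)]
    start = time.perf_counter()
    process_rows(rows)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"processing took {elapsed:.3f}s"
```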
### CI/CD Pipeline Tests
```bash
# Minimal test run (fast feedback)
pytest -m "unit and not slow" --maxfail=5
# Full CI test run
pytest --cov=app --cov-report=xml --junitxml=test-results.xml
# Security and integration tests
pytest -m "security or integration" --tb=short
```
## Test Reports and Output
### Coverage Reports
```bash
# Terminal coverage report
pytest --cov=app --cov-report=term
# HTML coverage report (opens in browser)
pytest --cov=app --cov-report=html
open htmlcov/index.html
# XML coverage for CI
pytest --cov=app --cov-report=xml
```
### JUnit XML Reports
```bash
# Generate JUnit XML (for CI integration)
pytest --junitxml=test-results.xml
# With coverage and JUnit
pytest --cov=app --cov-report=xml --junitxml=test-results.xml
```
## Troubleshooting
### Common Issues
#### Import Errors
```bash
# If getting import errors, ensure PYTHONPATH is set
PYTHONPATH=. pytest
# Or install in development mode
pip install -e .
```
#### Database Connection Issues
```bash
# Check if test database is accessible
python -c "from tests.conftest import engine; engine.connect()"
# Run with the in-memory test database
TEST_DATABASE_URL="sqlite:///:memory:" pytest
```
#### Fixture Not Found
```bash
# Ensure conftest.py is in the right location
ls tests/conftest.py
# Check fixture imports
pytest --fixtures tests/unit/
```
#### Permission Issues
```bash
# Fix test file permissions (test files need to be readable, not executable)
chmod 644 tests/**/*.py
# Clear pytest cache
rm -rf .pytest_cache/
```
### Performance Issues
```bash
# Identify slow tests
pytest --durations=10 --durations-min=1.0
# Profile test execution as an SVG call graph (requires pytest-profiling)
pytest --profile-svg
# Run subset of tests for faster feedback
pytest -m "unit and not slow"
```
## Integration with IDEs
### VS Code
Add to `.vscode/settings.json`:
```json
{
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"tests",
"-v"
],
"python.testing.cwd": "${workspaceFolder}"
}
```
### PyCharm
1. Go to Settings → Tools → Python Integrated Tools
2. Set Testing → Default test runner to "pytest"
3. Set pytest options: `-v --tb=short`
## Best Practices
### During Development
1. **Run unit tests frequently** - Fast feedback loop
2. **Run integration tests before commits** - Catch interaction issues
3. **Check coverage regularly** - Ensure good test coverage
4. **Use descriptive test names** - Easy to understand failures
### Before Code Review
1. **Run full test suite** - `pytest`
2. **Check coverage meets threshold** - `pytest --cov-fail-under=80`
3. **Ensure no warnings** - `pytest -W error`
4. **Test with fresh environment** - New terminal/clean cache
### In CI/CD
1. **Fail fast on unit tests** - `pytest -m unit --maxfail=1`
2. **Generate reports** - Coverage and JUnit XML
3. **Run performance tests on schedule** - Not every commit
4. **Archive test results** - For debugging and trends
---
Need help with a specific testing scenario? Check our [Testing FAQ](testing-faq.md) or open a GitHub issue!