# Testing Guide

## Overview

The Orion platform employs a comprehensive testing strategy with four distinct test levels: **Unit**, **Integration**, **System**, and **Performance** tests. This guide explains how to write, run, and maintain tests in the application.

## Table of Contents

- [Test Architecture](#test-architecture)
- [Running Tests](#running-tests)
- [Test Structure](#test-structure)
- [Writing Tests](#writing-tests)
- [Fixtures](#fixtures)
- [Mocking](#mocking)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
- [Testing Hub (Admin Dashboard)](#testing-hub-admin-dashboard)

---

## Test Architecture

### Test Pyramid

Our testing strategy follows the test pyramid approach:

```
            /\
           /  \
          / P  \          Performance Tests (Slow, End-to-End)
         /______\
        /        \
       /  System  \       System Tests (End-to-End Scenarios)
      /____________\
     /              \
    /  Integration   \    Integration Tests (Multiple Components)
   /__________________\
  /                    \
 /         Unit         \ Unit Tests (Individual Components)
/________________________\
```

### Test Levels

#### 1. Unit Tests (`tests/unit/`)

- **Purpose**: Test individual components in isolation
- **Scope**: Single functions, methods, or classes
- **Dependencies**: Mocked or stubbed
- **Speed**: Very fast (< 100ms per test)
- **Marker**: `@pytest.mark.unit`

**Example Structure:**

```
tests/unit/
├── conftest.py                 # Unit test specific fixtures
├── services/                   # Service layer tests
│   ├── test_auth_service.py
│   ├── test_store_service.py
│   └── test_product_service.py
├── middleware/                 # Middleware tests
│   ├── test_auth.py
│   ├── test_context.py
│   └── test_rate_limiter.py
├── models/                     # Model tests
│   └── test_database_models.py
└── utils/                      # Utility function tests
    ├── test_csv_processor.py
    └── test_data_validation.py
```

#### 2. Integration Tests (`tests/integration/`)

- **Purpose**: Test multiple components working together
- **Scope**: API endpoints, database interactions, middleware flows
- **Dependencies**: Real database (in-memory SQLite), test client
- **Speed**: Fast to moderate (100ms - 1s per test)
- **Marker**: `@pytest.mark.integration`

**Example Structure:**

```
tests/integration/
├── conftest.py                    # Integration test specific fixtures
├── api/                           # API endpoint tests
│   └── v1/
│       ├── test_auth_endpoints.py
│       ├── test_store_endpoints.py
│       ├── test_pagination.py
│       └── test_filtering.py
├── middleware/                    # Middleware stack tests
│   ├── test_middleware_stack.py
│   └── test_context_detection_flow.py
├── security/                      # Security integration tests
│   ├── test_authentication.py
│   ├── test_authorization.py
│   └── test_input_validation.py
└── workflows/                     # Multi-step workflow tests
    └── test_integration.py
```

#### 3. System Tests (`tests/system/`)

- **Purpose**: Test complete application behavior
- **Scope**: End-to-end user scenarios
- **Dependencies**: Full application stack
- **Speed**: Moderate to slow (1s - 5s per test)
- **Marker**: `@pytest.mark.system`

**Example Structure:**

```
tests/system/
├── conftest.py              # System test specific fixtures
└── test_error_handling.py   # System-wide error handling tests
```

#### 4. Performance Tests (`tests/performance/`)

- **Purpose**: Test application performance and load handling
- **Scope**: Response times, concurrent requests, large datasets
- **Dependencies**: Full application stack with data
- **Speed**: Slow (5s+ per test)
- **Markers**: `@pytest.mark.performance` and `@pytest.mark.slow`

**Example Structure:**

```
tests/performance/
├── conftest.py               # Performance test specific fixtures
└── test_api_performance.py   # API performance tests
```

---

## Running Tests

### Using Make Commands

The project provides convenient Make commands for running tests:

```bash
# Run all tests
make test

# Run specific test levels
make test-unit            # Unit tests only
make test-integration     # Integration tests only

# Run with coverage report
make test-coverage        # Generates HTML coverage report in htmlcov/

# Run fast tests (excludes @pytest.mark.slow)
make test-fast

# Run slow tests only
make test-slow
```

### Using pytest Directly

```bash
# Run all tests
pytest tests/

# Run specific test level by marker
pytest tests/ -m unit
pytest tests/ -m integration
pytest tests/ -m "system or performance"

# Run specific test file
pytest tests/unit/services/test_auth_service.py

# Run specific test class
pytest tests/unit/services/test_auth_service.py::TestAuthService

# Run specific test method
pytest tests/unit/services/test_auth_service.py::TestAuthService::test_register_user_success

# Run with verbose output
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=app --cov=models --cov=middleware

# Run tests matching a pattern
pytest tests/ -k "auth"   # Runs all tests with "auth" in the name
```

### Using Test Markers

Markers allow you to categorize and selectively run tests:

```bash
# Run by functionality marker
pytest -m auth        # Authentication tests
pytest -m products    # Product tests
pytest -m stores      # Store tests
pytest -m api         # API endpoint tests

# Run by test type marker
pytest -m unit            # Unit tests
pytest -m integration     # Integration tests
pytest -m system          # System tests
pytest -m performance     # Performance tests

# Run by speed marker
pytest -m "not slow"      # Exclude slow tests
pytest -m slow            # Run only slow tests

# Combine markers
pytest -m "unit and auth"            # Unit tests for auth
pytest -m "integration and api"      # Integration tests for API
pytest -m "not (slow or external)"   # Exclude slow and external tests
```

### Test Markers Reference

All available markers are defined in `pytest.ini`:

| Marker | Description |
|--------|-------------|
| `unit` | Fast, isolated component tests |
| `integration` | Multiple components working together |
| `system` | Full application behavior tests |
| `e2e` | End-to-end user workflow tests |
| `slow` | Tests that take significant time (>1s) |
| `performance` | Performance and load tests |
| `auth` | Authentication and authorization tests |
| `products` | Product management functionality |
| `inventory` | Inventory management tests |
| `stores` | Store management functionality |
| `admin` | Admin functionality and permissions |
| `marketplace` | Marketplace import functionality |
| `stats` | Statistics and reporting |
| `database` | Tests requiring database operations |
| `external` | Tests requiring external services |
| `api` | API endpoint tests |
| `security` | Security-related tests |
| `ci` | Tests that should only run in CI |
| `dev` | Development-specific tests |
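
Registering markers in `pytest.ini` is what lets `pytest --strict-markers` reject typos in marker names. A sketch of what such a registration block could look like (marker names are from the table above; the descriptions are illustrative, so check the actual `pytest.ini` for the authoritative list):

```ini
[pytest]
markers =
    unit: fast, isolated component tests
    integration: multiple components working together
    system: full application behavior tests
    performance: performance and load tests
    slow: tests that take significant time (>1s)
    auth: authentication and authorization tests
```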
---

## Test Structure

### Test Organization

Tests are organized by:

1. **Test level** (unit/integration/system/performance)
2. **Application layer** (services/middleware/models/utils)
3. **Functionality** (auth/stores/products/inventory)

### File Naming Conventions

- Test files: `test_*.py`
- Test classes: `Test*`
- Test functions: `test_*`
- Fixture files: `*_fixtures.py`
- Configuration files: `conftest.py`

### Test Class Structure

```python
import pytest

from app.services.auth_service import AuthService
from app.exceptions import InvalidCredentialsException


@pytest.mark.unit
@pytest.mark.auth
class TestAuthService:
    """Test suite for AuthService."""

    def setup_method(self):
        """Setup method runs before each test."""
        self.service = AuthService()

    def test_successful_login(self, db, test_user):
        """Test successful user login."""
        # Arrange
        username = test_user.username
        password = "testpass123"

        # Act
        result = self.service.login(db, username, password)

        # Assert
        assert result is not None
        assert result.access_token is not None

    def test_invalid_credentials(self, db, test_user):
        """Test login with invalid credentials."""
        # Act & Assert
        with pytest.raises(InvalidCredentialsException):
            self.service.login(db, test_user.username, "wrongpassword")
```

### Test Naming Conventions

Use descriptive test names that explain:

- What is being tested
- Under what conditions
- What the expected outcome is

**Good examples:**

```python
def test_create_store_success(self, db, test_user):
    """Test successful store creation by regular user."""

def test_create_store_duplicate_code_raises_exception(self, db, test_user):
    """Test store creation fails when store code already exists."""

def test_admin_can_delete_any_store(self, db, test_admin, test_store):
    """Test admin has permission to delete any store."""
```

**Poor examples:**

```python
def test_store(self):   # Too vague
def test_1(self):       # Non-descriptive
def test_error(self):   # Unclear what error
```

---

## Writing Tests

### Unit Test Example

Unit tests focus on testing a single component in isolation:

```python
# tests/unit/services/test_auth_service.py
import pytest

from app.services.auth_service import AuthService
from app.exceptions import UserAlreadyExistsException
from models.schema.auth import UserRegister


@pytest.mark.unit
@pytest.mark.auth
class TestAuthService:
    """Test suite for AuthService."""

    def setup_method(self):
        """Initialize service instance."""
        self.service = AuthService()

    def test_register_user_success(self, db):
        """Test successful user registration."""
        # Arrange
        user_data = UserRegister(
            email="newuser@example.com",
            username="newuser123",
            password="securepass123"
        )

        # Act
        user = self.service.register_user(db, user_data)

        # Assert
        assert user is not None
        assert user.email == "newuser@example.com"
        assert user.username == "newuser123"
        assert user.role == "user"
        assert user.is_active is True
        assert user.hashed_password != "securepass123"

    def test_register_user_duplicate_email(self, db, test_user):
        """Test registration fails when email already exists."""
        # Arrange
        user_data = UserRegister(
            email=test_user.email,
            username="differentuser",
            password="securepass123"
        )

        # Act & Assert
        with pytest.raises(UserAlreadyExistsException) as exc_info:
            self.service.register_user(db, user_data)

        # Verify exception details
        exception = exc_info.value
        assert exception.error_code == "USER_ALREADY_EXISTS"
        assert exception.status_code == 409
```

### Integration Test Example

Integration tests verify multiple components work together:

```python
# tests/integration/api/v1/test_auth_endpoints.py
import pytest


@pytest.mark.integration
@pytest.mark.api
@pytest.mark.auth
class TestAuthenticationAPI:
    """Integration tests for authentication API endpoints."""

    def test_register_user_success(self, client, db):
        """Test successful user registration via API."""
        # Act
        response = client.post(
            "/api/v1/auth/register",
            json={
                "email": "newuser@example.com",
                "username": "newuser",
                "password": "securepass123"
            }
        )

        # Assert
        assert response.status_code == 200
        data = response.json()
        assert data["email"] == "newuser@example.com"
        assert data["username"] == "newuser"
        assert data["role"] == "user"
        assert "hashed_password" not in data

    def test_login_returns_access_token(self, client, test_user):
        """Test login endpoint returns a valid access token."""
        # Act
        response = client.post(
            "/api/v1/auth/login",
            json={
                "username": test_user.username,
                "password": "testpass123"
            }
        )

        # Assert
        assert response.status_code == 200
        data = response.json()
        assert "access_token" in data
        assert data["token_type"] == "bearer"

    def test_protected_endpoint_requires_authentication(self, client):
        """Test protected endpoint returns 401 without a token."""
        # Act
        response = client.get("/api/v1/store")

        # Assert
        assert response.status_code == 401
```

### System Test Example

System tests verify complete end-to-end scenarios:

```python
# tests/system/test_error_handling.py
import pytest


@pytest.mark.system
class TestErrorHandling:
    """System tests for error handling across the API."""

    def test_invalid_json_request(self, client, auth_headers):
        """Test handling of malformed JSON requests."""
        # Act
        response = client.post(
            "/api/v1/store",
            headers=auth_headers,
            content="{ invalid json syntax"
        )

        # Assert
        assert response.status_code == 422
        data = response.json()
        assert data["error_code"] == "VALIDATION_ERROR"
        assert data["message"] == "Request validation failed"
        assert "validation_errors" in data["details"]
```

### Performance Test Example

Performance tests measure response times and system capacity:

```python
# tests/performance/test_api_performance.py
import time

import pytest

from models.database.marketplace_product import MarketplaceProduct


@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.database
class TestPerformance:
    """Performance tests for API endpoints."""

    def test_product_list_performance(self, client, auth_headers, db):
        """Test performance of product listing with many products."""
        # Arrange - Create test data
        products = []
        for i in range(100):
            product = MarketplaceProduct(
                marketplace_product_id=f"PERF{i:03d}",
                title=f"Performance Test Product {i}",
                price=f"{i}.99",
                marketplace="Performance"
            )
            products.append(product)

        db.add_all(products)
        db.commit()

        # Act - Time the request (perf_counter is monotonic,
        # so it is not affected by system clock adjustments)
        start_time = time.perf_counter()
        response = client.get(
            "/api/v1/marketplace/product?limit=100",
            headers=auth_headers
        )
        end_time = time.perf_counter()

        # Assert
        assert response.status_code == 200
        assert len(response.json()["products"]) == 100
        assert end_time - start_time < 2.0  # Must complete in 2s
```

---

## Fixtures

### Fixture Architecture

Fixtures are organized in a hierarchical structure:

```
tests/
├── conftest.py                  # Core fixtures (session-scoped)
│   ├── engine                   # Database engine
│   ├── testing_session_local    # Session factory
│   ├── db                       # Database session (function-scoped)
│   └── client                   # Test client
│
├── fixtures/                    # Reusable fixture modules
│   ├── testing_fixtures.py      # Testing utilities
│   ├── auth_fixtures.py         # Auth-related fixtures
│   ├── store_fixtures.py        # Store fixtures
│   ├── marketplace_product_fixtures.py
│   ├── marketplace_import_job_fixtures.py
│   └── customer_fixtures.py
│
├── unit/conftest.py             # Unit test specific fixtures
├── integration/conftest.py      # Integration test specific fixtures
├── system/conftest.py           # System test specific fixtures
└── performance/conftest.py      # Performance test specific fixtures
```

### Core Fixtures (tests/conftest.py)

These fixtures are available to all tests:

```python
# Database fixtures
@pytest.fixture(scope="session")
def engine():
    """Create test database engine (in-memory SQLite)."""
    return create_engine(
        "sqlite:///:memory:",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
        echo=False
    )


@pytest.fixture(scope="function")
def db(engine, testing_session_local):
    """Create a clean database session for each test."""
    Base.metadata.create_all(bind=engine)
    db_session = testing_session_local()

    try:
        yield db_session
    finally:
        db_session.close()
        Base.metadata.drop_all(bind=engine)
        Base.metadata.create_all(bind=engine)


@pytest.fixture(scope="function")
def client(db):
    """Create a test client with database dependency override."""
    def override_get_db():
        try:
            yield db
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db

    try:
        client = TestClient(app)
        yield client
    finally:
        if get_db in app.dependency_overrides:
            del app.dependency_overrides[get_db]
```
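
The `db` fixture above depends on a `testing_session_local` session factory that is not shown here. A minimal sketch of what it could look like, assuming SQLAlchemy's `sessionmaker` and the `expire_on_commit=False` setting referenced later in the Best Practices section (verify against the actual `tests/conftest.py`):

```python
import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool


@pytest.fixture(scope="session")
def testing_session_local(engine):
    """Session factory bound to the test engine.

    expire_on_commit=False keeps fixture objects readable after commits,
    which the db teardown and refresh patterns in this guide rely on.
    """
    return sessionmaker(bind=engine, expire_on_commit=False)
```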

### Auth Fixtures (tests/fixtures/auth_fixtures.py)

Authentication and user-related fixtures:

```python
@pytest.fixture
def test_user(db, auth_manager):
    """Create a test user with unique username."""
    unique_id = str(uuid.uuid4())[:8]
    hashed_password = auth_manager.hash_password("testpass123")
    user = User(
        email=f"test_{unique_id}@example.com",
        username=f"testuser_{unique_id}",
        hashed_password=hashed_password,
        role="user",
        is_active=True
    )
    db.add(user)
    db.commit()
    db.refresh(user)
    return user  # Stays attached to the session (see SQLAlchemy best practices)


@pytest.fixture
def test_admin(db, auth_manager):
    """Create a test admin user."""
    unique_id = str(uuid.uuid4())[:8]
    hashed_password = auth_manager.hash_password("adminpass123")
    admin = User(
        email=f"admin_{unique_id}@example.com",
        username=f"admin_{unique_id}",
        hashed_password=hashed_password,
        role="admin",
        is_active=True
    )
    db.add(admin)
    db.commit()
    db.refresh(admin)
    return admin


@pytest.fixture
def auth_headers(client, test_user):
    """Get authentication headers for test user."""
    response = client.post(
        "/api/v1/auth/login",
        json={"username": test_user.username, "password": "testpass123"}
    )
    assert response.status_code == 200
    token = response.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}


@pytest.fixture
def admin_headers(client, test_admin):
    """Get authentication headers for admin user."""
    response = client.post(
        "/api/v1/auth/login",
        json={"username": test_admin.username, "password": "adminpass123"}
    )
    assert response.status_code == 200
    token = response.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}
```

### Store Fixtures (tests/fixtures/store_fixtures.py)

Store-related fixtures:

```python
@pytest.fixture
def test_store(db, test_user):
    """Create a test store."""
    unique_id = str(uuid.uuid4())[:8].upper()
    store = Store(
        store_code=f"TESTSTORE_{unique_id}",
        subdomain=f"teststore{unique_id.lower()}",
        name=f"Test Store {unique_id.lower()}",
        owner_user_id=test_user.id,
        is_active=True,
        is_verified=True
    )
    db.add(store)
    db.commit()
    db.refresh(store)
    return store


@pytest.fixture
def store_factory():
    """Factory fixture to create unique stores."""
    return create_unique_store_factory()


def create_unique_store_factory():
    """Factory function to create unique stores in tests."""
    def _create_store(db, owner_user_id, **kwargs):
        unique_id = str(uuid.uuid4())[:8]
        defaults = {
            "store_code": f"FACTORY_{unique_id.upper()}",
            "subdomain": f"factory{unique_id.lower()}",
            "name": f"Factory Store {unique_id}",
            "owner_user_id": owner_user_id,
            "is_active": True,
            "is_verified": False
        }
        defaults.update(kwargs)

        store = Store(**defaults)
        db.add(store)
        db.commit()
        db.refresh(store)
        return store

    return _create_store
```

### Using Fixtures

Fixtures are automatically injected by pytest:

```python
def test_example(client, db, test_user, auth_headers):
    """
    This test receives four fixtures:
    - client: Test client for making API calls
    - db: Database session
    - test_user: A test user object
    - auth_headers: Authentication headers for API calls
    """
    # Use the fixtures
    assert test_user.id is not None
    response = client.get("/api/v1/profile", headers=auth_headers)
    assert response.status_code == 200
```

### Creating Custom Fixtures

Add fixtures to the appropriate conftest.py or fixture files:

```python
# tests/unit/conftest.py
import pytest


@pytest.fixture
def mock_service():
    """Create a mocked service for unit testing."""
    from unittest.mock import Mock
    service = Mock()
    service.get_data.return_value = {"key": "value"}
    return service
```

---

## Mocking

### When to Use Mocks

Use mocks to:

- Isolate unit tests from external dependencies
- Simulate error conditions
- Test edge cases
- Speed up tests by avoiding slow operations
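
Beyond fixed return values, `Mock.side_effect` accepts a list that scripts successive call results, which is useful for the "simulate error conditions" case above. A small stdlib-only sketch (the `fetch_with_retry` helper is hypothetical, not part of the codebase):

```python
from unittest.mock import Mock


def fetch_with_retry(client, attempts=2):
    """Call client.fetch(), retrying on TimeoutError up to `attempts` times."""
    for _ in range(attempts):
        try:
            return client.fetch()
        except TimeoutError:
            continue
    raise RuntimeError("all attempts failed")


# side_effect as a list: the first call raises, the second returns a value
client = Mock()
client.fetch.side_effect = [TimeoutError("try again"), {"status": "ok"}]

assert fetch_with_retry(client) == {"status": "ok"}
assert client.fetch.call_count == 2
```

Exception instances (or classes) in the `side_effect` list are raised instead of returned, so one mock can model a transient failure followed by a recovery.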

### Mock Patterns

#### 1. Mocking with unittest.mock

```python
from unittest.mock import Mock, MagicMock, patch


@pytest.mark.unit
class TestWithMocks:
    def test_with_mock(self):
        """Test using a simple mock."""
        mock_service = Mock()
        mock_service.get_user.return_value = {"id": 1, "name": "Test"}

        result = mock_service.get_user(1)

        assert result["name"] == "Test"
        mock_service.get_user.assert_called_once_with(1)

    @patch('app.services.auth_service.jwt.encode')
    def test_with_patch(self, mock_encode):
        """Test using the patch decorator."""
        mock_encode.return_value = "fake_token"

        service = AuthService()
        token = service.create_token({"user_id": 1})

        assert token == "fake_token"
        mock_encode.assert_called_once()
```

#### 2. Mocking Database Errors

```python
from unittest.mock import Mock

from sqlalchemy.exc import SQLAlchemyError


def test_database_error_handling():
    """Test handling of database errors."""
    # store_data and user are assumed to be prepared earlier in the test
    mock_db = Mock()
    mock_db.commit.side_effect = SQLAlchemyError("DB connection failed")

    service = StoreService()

    with pytest.raises(DatabaseException):
        service.create_store(mock_db, store_data, user)
```

#### 3. Mocking External Services

```python
@patch('app.services.email_service.send_email')
def test_user_registration_sends_email(mock_send_email, db):
    """Test that user registration sends a welcome email."""
    service = AuthService()
    user_data = UserRegister(
        email="test@example.com",
        username="testuser",
        password="password123"
    )

    service.register_user(db, user_data)

    # Verify email was sent
    mock_send_email.assert_called_once()
    call_args = mock_send_email.call_args
    assert "test@example.com" in str(call_args)
```

#### 4. Mocking with Context Managers

```python
@pytest.mark.unit
class TestFileOperations:
    @patch('builtins.open', create=True)
    def test_file_read(self, mock_open):
        """Test file reading operation."""
        mock_open.return_value.__enter__.return_value.read.return_value = "test data"

        # Code that opens and reads a file
        with open('test.txt', 'r') as f:
            data = f.read()

        assert data == "test data"
        mock_open.assert_called_once_with('test.txt', 'r')
```
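
The standard library also ships `unittest.mock.mock_open`, which builds the nested `__enter__`/`read` mock chain shown above in a single call. A stdlib-only sketch (the `read_config` helper is illustrative):

```python
from unittest.mock import mock_open, patch


def read_config(path):
    """Open a file and return its contents."""
    with open(path, "r") as f:
        return f.read()


def test_file_read_with_mock_open():
    m = mock_open(read_data="test data")
    with patch("builtins.open", m):
        assert read_config("test.txt") == "test data"
    m.assert_called_once_with("test.txt", "r")


test_file_read_with_mock_open()
```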

### Fixtures for Error Testing

Use fixtures to create consistent mock errors:

```python
# tests/fixtures/testing_fixtures.py
@pytest.fixture
def db_with_error():
    """Database session that raises errors."""
    mock_db = Mock()
    mock_db.query.side_effect = SQLAlchemyError("Database connection failed")
    mock_db.add.side_effect = SQLAlchemyError("Database insert failed")
    mock_db.commit.side_effect = SQLAlchemyError("Database commit failed")
    mock_db.rollback.return_value = None
    return mock_db


# Usage in tests
def test_handles_db_error(db_with_error):
    """Test graceful handling of database errors."""
    # store_data and user are assumed to be prepared earlier in the test
    service = StoreService()

    with pytest.raises(DatabaseException):
        service.create_store(db_with_error, store_data, user)
```

---

## Best Practices

### 1. Test Independence

Each test should be completely independent:

```python
# ✅ GOOD - Independent tests
def test_create_user(db):
    user = create_user(db, "user1@test.com")
    assert user.email == "user1@test.com"

def test_delete_user(db):
    user = create_user(db, "user2@test.com")  # Creates own data
    delete_user(db, user.id)
    assert get_user(db, user.id) is None


# ❌ BAD - Tests depend on each other
user_id = None

def test_create_user(db):
    global user_id
    user = create_user(db, "user@test.com")
    user_id = user.id  # State shared between tests

def test_delete_user(db):
    delete_user(db, user_id)  # Depends on previous test
```

### 2. SQLAlchemy Fixture Best Practices

**IMPORTANT**: Follow these rules when working with SQLAlchemy fixtures to avoid common pitfalls:

#### ❌ Never Use `db.expunge()` in Fixtures

```python
# ❌ BAD - Causes lazy loading errors
@pytest.fixture
def test_user(db, auth_manager):
    user = User(email="test@example.com", ...)
    db.add(user)
    db.commit()
    db.refresh(user)
    db.expunge(user)  # ❌ NEVER DO THIS!
    return user

# When you later try to access relationships:
# user.merchant.name  # ❌ DetachedInstanceError!
```

```python
# ✅ GOOD - Keep objects attached to the session
@pytest.fixture
def test_user(db, auth_manager):
    user = User(email="test@example.com", ...)
    db.add(user)
    db.commit()
    db.refresh(user)
    return user  # ✅ Object stays attached to the session
```

#### Why `db.expunge()` Is an Anti-Pattern

1. **Breaks lazy loading**: Detached objects cannot access relationships like `user.merchant` or `product.marketplace_product`
2. **Causes DetachedInstanceError**: SQLAlchemy throws errors when accessing unloaded relationships on detached objects
3. **Test isolation is already handled**: The `db` fixture drops and recreates all tables after each test

#### How Test Isolation Works

The `db` fixture in `tests/conftest.py` provides isolation by:

- Creating fresh tables before each test
- Dropping all tables after each test completes
- Using `expire_on_commit=False` to prevent objects from expiring after commits

```python
@pytest.fixture(scope="function")
def db(engine, testing_session_local):
    Base.metadata.create_all(bind=engine)  # Fresh tables
    db_session = testing_session_local()

    try:
        yield db_session
    finally:
        db_session.close()
        Base.metadata.drop_all(bind=engine)    # Clean up
        Base.metadata.create_all(bind=engine)  # Ready for next test
```

#### If You Need Fresh Data

Use `db.refresh(obj)` instead of expunge/re-query patterns:

```python
# ✅ GOOD - Refresh for fresh data
def test_update_user(db, test_user):
    update_user_email(db, test_user.id, "new@example.com")
    db.refresh(test_user)  # Get latest data from the database
    assert test_user.email == "new@example.com"
```
### 3. Arrange-Act-Assert Pattern
|
|
|
|
Structure tests clearly using AAA pattern:
|
|
|
|
```python
|
|
def test_store_creation(db, test_user):
|
|
# Arrange - Set up test data
|
|
store_data = StoreCreate(
|
|
store_code="TESTSTORE",
|
|
store_name="Test Store"
|
|
)
|
|
|
|
# Act - Perform the action
|
|
store = StoreService().create_store(db, store_data, test_user)
|
|
|
|
# Assert - Verify the results
|
|
assert store.store_code == "TESTSTORE"
|
|
assert store.owner_user_id == test_user.id
|
|
```
|
|
|
|
### 3. Test One Thing
|
|
|
|
Each test should verify a single behavior:
|
|
|
|
```python
|
|
# ✅ GOOD - Tests one specific behavior
|
|
def test_admin_creates_verified_store(db, test_admin):
|
|
"""Test that admin users create verified stores."""
|
|
store = create_store(db, test_admin)
|
|
assert store.is_verified is True
|
|
|
|
def test_regular_user_creates_unverified_store(db, test_user):
|
|
"""Test that regular users create unverified stores."""
|
|
store = create_store(db, test_user)
|
|
assert store.is_verified is False
|
|
|
|
# ❌ BAD - Tests multiple behaviors
|
|
def test_store_creation(db, test_admin, test_user):
|
|
"""Test store creation.""" # Vague docstring
|
|
admin_store = create_store(db, test_admin)
|
|
user_store = create_store(db, test_user)
|
|
|
|
assert admin_store.is_verified is True
|
|
assert user_store.is_verified is False
|
|
assert admin_store.is_active is True
|
|
assert user_store.is_active is True
|
|
# Testing too many things
|
|
```
|
|
|
|
### 4. Use Descriptive Names
|
|
|
|
Test names should describe what is being tested:
|
|
|
|
```python
|
|
# ✅ GOOD
|
|
def test_create_store_with_duplicate_code_raises_exception(db, test_store):
|
|
"""Test that creating store with duplicate code raises StoreAlreadyExistsException."""
|
|
|
|
# ❌ BAD
|
|
def test_store_error(db):
|
|
"""Test store."""
|
|
```
|
|
|
|
### 5. Test Error Cases
|
|
|
|
Always test both success and failure paths:
|
|
|
|
```python
|
|
class TestStoreService:
|
|
def test_create_store_success(self, db, test_user):
|
|
"""Test successful store creation."""
|
|
# Test happy path
|
|
|
|
def test_create_store_duplicate_code(self, db, test_user, test_store):
|
|
"""Test error when store code already exists."""
|
|
# Test error case
|
|
|
|
def test_create_store_invalid_data(self, db, test_user):
|
|
"""Test error with invalid store data."""
|
|
# Test validation error
|
|
|
|
def test_create_store_unauthorized(self, db):
|
|
"""Test error when user is not authorized."""
|
|
# Test authorization error
|
|
```
|
|
|
|
### 6. Use Markers Consistently
|
|
|
|
Apply appropriate markers to all tests:
|
|
|
|
```python
|
|
@pytest.mark.unit # Test level
|
|
@pytest.mark.auth # Functionality
|
|
class TestAuthService:
|
|
def test_login_success(self):
|
|
"""Test successful login."""
|
|
```
|
|
|
|
### 7. Clean Up Resources
|
|
|
|
Ensure proper cleanup to prevent resource leaks:
|
|
|
|
```python
|
|
@pytest.fixture
|
|
def test_user(db, auth_manager):
|
|
"""Create a test user."""
|
|
user = User(...)
|
|
db.add(user)
|
|
db.commit()
|
|
db.refresh(user)
|
|
db.expunge(user) # ✅ Detach from session
|
|
return user
|
|
```
|
|
|
|
### 8. Avoid Testing Implementation Details

Test behavior, not implementation:

```python
# ✅ GOOD - Tests behavior
def test_password_is_hashed(db):
    """Test that passwords are stored hashed."""
    user = create_user(db, "test@example.com", "password123")
    assert user.hashed_password != "password123"
    assert len(user.hashed_password) > 50

# ❌ BAD - Tests implementation
def test_password_uses_bcrypt(db):
    """Test that password hashing uses bcrypt."""
    user = create_user(db, "test@example.com", "password123")
    assert user.hashed_password.startswith("$2b$")  # Assumes bcrypt
```

### 9. Test Data Uniqueness

Use UUIDs to ensure test data uniqueness:

```python
def test_concurrent_user_creation(db):
    """Test creating multiple users with unique data."""
    unique_id = str(uuid.uuid4())[:8]
    user = User(
        email=f"test_{unique_id}@example.com",
        username=f"testuser_{unique_id}",
    )
    db.add(user)
    db.commit()
```

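The suffix pattern in section 9 is easy to factor into small helpers so every test doesn't repeat the slicing (illustrative helpers, not existing fixtures in the codebase):

```python
import uuid


def unique_suffix(length: int = 8) -> str:
    """Return a short random hex suffix for test data."""
    return uuid.uuid4().hex[:length]


def unique_email(prefix: str = "test") -> str:
    """Build a collision-free test email address."""
    return f"{prefix}_{unique_suffix()}@example.com"
```

Two calls in the same test run are effectively guaranteed to differ, so parallel or repeated runs don't collide on unique database columns.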
### 10. Coverage Goals

Aim for high test coverage:

- Unit tests: > 90% coverage
- Integration tests: > 80% coverage
- Overall: > 80% coverage (enforced in pytest.ini)

Check coverage with:

```bash
make test-coverage
# Opens htmlcov/index.html in browser
```

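The 80% overall floor is typically enforced through pytest-cov's `--cov-fail-under` option. A representative pytest.ini fragment (illustrative only — the `--cov=app` target package and exact values should be checked against the project's real pytest.ini):

```ini
[pytest]
addopts =
    --cov=app
    --cov-report=html
    --cov-fail-under=80
```

With this in place, `pytest` exits non-zero when total coverage drops below 80%, which makes the goal enforceable in CI rather than aspirational.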
---

## Troubleshooting

### Common Issues

#### Issue: Tests Fail Intermittently

**Cause**: Tests depend on each other or share state

**Solution**: Ensure test independence, use function-scoped fixtures

```python
# Use function scope for fixtures that modify data
@pytest.fixture(scope="function")  # ✅ New instance per test
def db(engine, testing_session_local):
    ...
```

#### Issue: "ResourceWarning: unclosed connection"

**Cause**: Database sessions not properly closed

**Solution**: Use `db.expunge()` after creating test objects

```python
@pytest.fixture
def test_user(db):
    user = User(...)
    db.add(user)
    db.commit()
    db.refresh(user)
    db.expunge(user)  # ✅ Detach from session
    return user
```

#### Issue: Tests Are Slow

**Cause**: Too many database operations, not using transactions

**Solution**:

- Mark slow tests with `@pytest.mark.slow`
- Use session-scoped fixtures where possible
- Batch database operations

```python
# ✅ Batch create
products = [Product(...) for i in range(100)]
db.add_all(products)
db.commit()

# ❌ Individual creates (slow)
for i in range(100):
    product = Product(...)
    db.add(product)
    db.commit()
```

#### Issue: Import Errors

**Cause**: Python path not set correctly

**Solution**: Run tests from the project root:

```bash
# ✅ From project root
pytest tests/

# ❌ From tests directory
cd tests && pytest .
```

#### Issue: Fixture Not Found

**Cause**: Fixture not imported or registered

**Solution**: Check `pytest_plugins` in conftest.py:

```python
# tests/conftest.py
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.store_fixtures",
    # Add your fixture module here
]
```

### Debug Tips

#### 1. Verbose Output

```bash
pytest tests/ -v   # Verbose test names
pytest tests/ -vv  # Very verbose (shows assertion details)
```

#### 2. Show Local Variables

```bash
pytest tests/ --showlocals  # Show local vars on failure
pytest tests/ -l            # Short form
```

#### 3. Drop to Debugger on Failure

```bash
pytest tests/ --pdb  # Drop to pdb on first failure
```

#### 4. Run a Specific Test

```bash
pytest tests/unit/services/test_auth_service.py::TestAuthService::test_login_success -v
```

#### 5. Show Print Statements

```bash
pytest tests/ -s  # Show print() and log output
```

#### 6. Run Last Failed Tests

```bash
pytest tests/ --lf  # Last failed
pytest tests/ --ff  # Failed first
```

### Getting Help

If you encounter issues:

1. Check this documentation
2. Review test examples in the codebase
3. Check pytest.ini for configuration
4. Review conftest.py files for available fixtures
5. Ask team members or create an issue

---

## Testing Hub (Admin Dashboard)

The platform includes a web-based Testing Hub accessible at `/admin/testing` that provides a visual interface for running tests and monitoring test health.

### Accessing the Testing Hub

Navigate to **Platform Health → Testing Hub** in the admin sidebar, or go directly to `/admin/testing`.

### Features

#### 1. Test Collection

Click **Collect Tests** to discover all available tests in the codebase without running them. This displays:

- **Total Tests**: Number of test functions discovered
- **Unit Tests**: Tests in `tests/unit/`
- **Integration Tests**: Tests in `tests/integration/`
- **Performance Tests**: Tests in `tests/performance/`
- **Test Files**: Number of test files

#### 2. Running Tests

Click **Run Tests** to execute the test suite. Tests run in the background, so you can:

- Leave the page and return later
- Monitor elapsed time in real time
- See results when tests complete

#### 3. Dashboard Statistics

The dashboard shows results from the last test run:

- **Pass Rate**: Percentage of tests passing
- **Passed/Failed/Errors**: Test outcome counts
- **Duration**: How long the test run took
- **Skipped**: Tests marked to skip

#### 4. Trend Analysis

View pass rate trends across the last 10 test runs to identify regressions.

#### 5. Tests by Category

See a breakdown of passed/failed tests by category (Unit, Integration, etc.).

#### 6. Top Failing Tests

Quickly identify tests that fail most frequently across runs.

### API Endpoints

The Testing Hub uses these API endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/v1/admin/tests/stats` | GET | Dashboard statistics |
| `/api/v1/admin/tests/run` | POST | Start a test run |
| `/api/v1/admin/tests/runs` | GET | List recent test runs |
| `/api/v1/admin/tests/runs/{id}` | GET | Get specific run details |
| `/api/v1/admin/tests/collect` | POST | Collect test information |

### Background Execution

Test runs execute as background tasks, allowing:

- Non-blocking UI during long test runs
- Ability to navigate away and return
- Automatic status polling every 2 seconds

View all background tasks, including test runs, at **Platform Health → Background Tasks**.

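The dashboard's 2-second polling loop can be approximated as follows. This is a sketch, not the dashboard's actual client code: the status fetcher is injected as a callable (standing in for a GET to `/api/v1/admin/tests/runs/{id}` from the endpoint table above) so the loop can be exercised without a live server:

```python
import time
from typing import Callable


def poll_test_run(fetch_status: Callable[[], str],
                  interval: float = 2.0,
                  max_polls: int = 100) -> str:
    """Poll a test run until it leaves the 'running' state.

    fetch_status stands in for an HTTP call to the run-details
    endpoint; interval mirrors the dashboard's 2-second cadence.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status != "running":
            return status  # e.g. "passed" or "failed"
        time.sleep(interval)
    return "timeout"  # give up after max_polls attempts
```

Injecting the fetcher keeps the polling logic trivially testable with a stub that yields a scripted status sequence.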
---

## Summary

This testing guide covers:

- ✅ Four-level test architecture (Unit/Integration/System/Performance)
- ✅ Running tests with Make commands and pytest
- ✅ Test structure and organization
- ✅ Writing tests for each level
- ✅ Using fixtures effectively
- ✅ Mocking patterns and best practices
- ✅ Best practices for maintainable tests
- ✅ Troubleshooting common issues
- ✅ Testing Hub admin dashboard

For information about maintaining and extending the test suite, see the [Test Maintenance Guide](test-maintenance.md).