Testing Guide
Overview
The Wizamart platform employs a comprehensive testing strategy with four distinct test levels: Unit, Integration, System, and Performance tests. This guide explains how to write, run, and maintain tests in the application.
Table of Contents
- Test Architecture
- Running Tests
- Test Structure
- Writing Tests
- Fixtures
- Mocking
- Best Practices
- Troubleshooting
Test Architecture
Test Pyramid
Our testing strategy follows the test pyramid approach:
              /\
             /  \
            / P  \            Performance Tests (Slow, End-to-End)
           /______\
          /        \
         /  System  \         System Tests (End-to-End Scenarios)
        /____________\
       /              \
      /  Integration   \      Integration Tests (Multiple Components)
     /__________________\
    /                    \
   /         Unit         \   Unit Tests (Individual Components)
  /________________________\
Test Levels
1. Unit Tests (tests/unit/)
- Purpose: Test individual components in isolation
- Scope: Single functions, methods, or classes
- Dependencies: Mocked or stubbed
- Speed: Very fast (< 100ms per test)
- Marker:
@pytest.mark.unit
Example Structure:
tests/unit/
├── conftest.py # Unit test specific fixtures
├── services/ # Service layer tests
│ ├── test_auth_service.py
│ ├── test_vendor_service.py
│ └── test_product_service.py
├── middleware/ # Middleware tests
│ ├── test_auth.py
│ ├── test_context.py
│ └── test_rate_limiter.py
├── models/ # Model tests
│ └── test_database_models.py
└── utils/ # Utility function tests
├── test_csv_processor.py
└── test_data_validation.py
2. Integration Tests (tests/integration/)
- Purpose: Test multiple components working together
- Scope: API endpoints, database interactions, middleware flows
- Dependencies: Real database (in-memory SQLite), test client
- Speed: Fast to moderate (100ms - 1s per test)
- Marker:
@pytest.mark.integration
Example Structure:
tests/integration/
├── conftest.py # Integration test specific fixtures
├── api/ # API endpoint tests
│ └── v1/
│ ├── test_auth_endpoints.py
│ ├── test_vendor_endpoints.py
│ ├── test_pagination.py
│ └── test_filtering.py
├── middleware/ # Middleware stack tests
│ ├── test_middleware_stack.py
│ └── test_context_detection_flow.py
├── security/ # Security integration tests
│ ├── test_authentication.py
│ ├── test_authorization.py
│ └── test_input_validation.py
└── workflows/ # Multi-step workflow tests
└── test_integration.py
3. System Tests (tests/system/)
- Purpose: Test complete application behavior
- Scope: End-to-end user scenarios
- Dependencies: Full application stack
- Speed: Moderate to slow (1s - 5s per test)
- Marker:
@pytest.mark.system
Example Structure:
tests/system/
├── conftest.py # System test specific fixtures
└── test_error_handling.py # System-wide error handling tests
4. Performance Tests (tests/performance/)
- Purpose: Test application performance and load handling
- Scope: Response times, concurrent requests, large datasets
- Dependencies: Full application stack with data
- Speed: Slow (5s+ per test)
- Markers:
@pytest.mark.performance and @pytest.mark.slow
Example Structure:
tests/performance/
├── conftest.py # Performance test specific fixtures
└── test_api_performance.py # API performance tests
Running Tests
Using Make Commands
The project provides convenient Make commands for running tests:
# Run all tests
make test
# Run specific test levels
make test-unit # Unit tests only
make test-integration # Integration tests only
# Run with coverage report
make test-coverage # Generates HTML coverage report in htmlcov/
# Run fast tests (excludes @pytest.mark.slow)
make test-fast
# Run slow tests only
make test-slow
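These targets are thin wrappers around pytest. A sketch of what the corresponding Makefile rules might look like (assumed for illustration — check the project's actual Makefile, which may add extra flags):

```make
test:
	pytest tests/

test-unit:
	pytest tests/ -m unit

test-integration:
	pytest tests/ -m integration

test-coverage:
	pytest tests/ --cov=app --cov-report=html

test-fast:
	pytest tests/ -m "not slow"

test-slow:
	pytest tests/ -m slow
```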
Using pytest Directly
# Run all tests
pytest tests/
# Run specific test level by marker
pytest tests/ -m unit
pytest tests/ -m integration
pytest tests/ -m "system or performance"
# Run specific test file
pytest tests/unit/services/test_auth_service.py
# Run specific test class
pytest tests/unit/services/test_auth_service.py::TestAuthService
# Run specific test method
pytest tests/unit/services/test_auth_service.py::TestAuthService::test_register_user_success
# Run with verbose output
pytest tests/ -v
# Run with coverage
pytest tests/ --cov=app --cov=models --cov=middleware
# Run tests matching a pattern
pytest tests/ -k "auth" # Runs all tests with "auth" in the name
Using Test Markers
Markers allow you to categorize and selectively run tests:
# Run by functionality marker
pytest -m auth # Authentication tests
pytest -m products # Product tests
pytest -m vendors # Vendor tests
pytest -m api # API endpoint tests
# Run by test type marker
pytest -m unit # Unit tests
pytest -m integration # Integration tests
pytest -m system # System tests
pytest -m performance # Performance tests
# Run by speed marker
pytest -m "not slow" # Exclude slow tests
pytest -m slow # Run only slow tests
# Combine markers
pytest -m "unit and auth" # Unit tests for auth
pytest -m "integration and api" # Integration tests for API
pytest -m "not (slow or external)" # Exclude slow and external tests
Test Markers Reference
All available markers are defined in pytest.ini:
| Marker | Description |
|---|---|
| unit | Fast, isolated component tests |
| integration | Multiple components working together |
| system | Full application behavior tests |
| e2e | End-to-end user workflow tests |
| slow | Tests that take significant time (>1s) |
| performance | Performance and load tests |
| auth | Authentication and authorization tests |
| products | Product management functionality |
| inventory | Inventory management tests |
| vendors | Vendor management functionality |
| admin | Admin functionality and permissions |
| marketplace | Marketplace import functionality |
| stats | Statistics and reporting |
| database | Tests requiring database operations |
| external | Tests requiring external services |
| api | API endpoint tests |
| security | Security-related tests |
| ci | Tests that should only run in CI |
| dev | Development-specific tests |
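pytest only recognizes custom markers that are registered in configuration (unregistered markers trigger warnings, or errors under --strict-markers). A minimal sketch of how the table above might be registered in pytest.ini — the project's real file likely lists more options:

```ini
[pytest]
markers =
    unit: Fast, isolated component tests
    integration: Multiple components working together
    system: Full application behavior tests
    slow: Tests that take significant time (>1s)
    performance: Performance and load tests
    auth: Authentication and authorization tests
```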
Test Structure
Test Organization
Tests are organized by:
- Test level (unit/integration/system/performance)
- Application layer (services/middleware/models/utils)
- Functionality (auth/vendors/products/inventory)
File Naming Conventions
- Test files: test_*.py
- Test classes: Test*
- Test functions: test_*
- Fixture files: *_fixtures.py
- Configuration files: conftest.py
Test Class Structure
import pytest
from app.services.auth_service import AuthService
from app.exceptions import InvalidCredentialsException

@pytest.mark.unit
@pytest.mark.auth
class TestAuthService:
    """Test suite for AuthService."""

    def setup_method(self):
        """Setup method runs before each test."""
        self.service = AuthService()

    def test_successful_login(self, db, test_user):
        """Test successful user login."""
        # Arrange
        username = test_user.username
        password = "testpass123"

        # Act
        result = self.service.login(db, username, password)

        # Assert
        assert result is not None
        assert result.access_token is not None

    def test_invalid_credentials(self, db, test_user):
        """Test login with invalid credentials."""
        # Act & Assert
        with pytest.raises(InvalidCredentialsException):
            self.service.login(db, test_user.username, "wrongpassword")
Test Naming Conventions
Use descriptive test names that explain:
- What is being tested
- Under what conditions
- What the expected outcome is
Good examples:
def test_create_vendor_success(self, db, test_user):
    """Test successful vendor creation by regular user."""

def test_create_vendor_duplicate_code_raises_exception(self, db, test_user):
    """Test vendor creation fails when vendor code already exists."""

def test_admin_can_delete_any_vendor(self, db, test_admin, test_vendor):
    """Test admin has permission to delete any vendor."""
Poor examples:
def test_vendor(self):   # Too vague

def test_1(self):        # Non-descriptive

def test_error(self):    # Unclear what error
Writing Tests
Unit Test Example
Unit tests focus on testing a single component in isolation:
# tests/unit/services/test_auth_service.py
import pytest
from app.services.auth_service import AuthService
from app.exceptions import UserAlreadyExistsException
from models.schema.auth import UserRegister

@pytest.mark.unit
@pytest.mark.auth
class TestAuthService:
    """Test suite for AuthService."""

    def setup_method(self):
        """Initialize service instance."""
        self.service = AuthService()

    def test_register_user_success(self, db):
        """Test successful user registration."""
        # Arrange
        user_data = UserRegister(
            email="newuser@example.com",
            username="newuser123",
            password="securepass123"
        )

        # Act
        user = self.service.register_user(db, user_data)

        # Assert
        assert user is not None
        assert user.email == "newuser@example.com"
        assert user.username == "newuser123"
        assert user.role == "user"
        assert user.is_active is True
        assert user.hashed_password != "securepass123"

    def test_register_user_duplicate_email(self, db, test_user):
        """Test registration fails when email already exists."""
        # Arrange
        user_data = UserRegister(
            email=test_user.email,
            username="differentuser",
            password="securepass123"
        )

        # Act & Assert
        with pytest.raises(UserAlreadyExistsException) as exc_info:
            self.service.register_user(db, user_data)

        # Verify exception details
        exception = exc_info.value
        assert exception.error_code == "USER_ALREADY_EXISTS"
        assert exception.status_code == 409
Integration Test Example
Integration tests verify multiple components work together:
# tests/integration/api/v1/test_auth_endpoints.py
import pytest

@pytest.mark.integration
@pytest.mark.api
@pytest.mark.auth
class TestAuthenticationAPI:
    """Integration tests for authentication API endpoints."""

    def test_register_user_success(self, client, db):
        """Test successful user registration via API."""
        # Act
        response = client.post(
            "/api/v1/auth/register",
            json={
                "email": "newuser@example.com",
                "username": "newuser",
                "password": "securepass123"
            }
        )

        # Assert
        assert response.status_code == 200
        data = response.json()
        assert data["email"] == "newuser@example.com"
        assert data["username"] == "newuser"
        assert data["role"] == "user"
        assert "hashed_password" not in data

    def test_login_returns_access_token(self, client, test_user):
        """Test login endpoint returns valid access token."""
        # Act
        response = client.post(
            "/api/v1/auth/login",
            json={
                "username": test_user.username,
                "password": "testpass123"
            }
        )

        # Assert
        assert response.status_code == 200
        data = response.json()
        assert "access_token" in data
        assert data["token_type"] == "bearer"

    def test_protected_endpoint_requires_authentication(self, client):
        """Test protected endpoint returns 401 without token."""
        # Act
        response = client.get("/api/v1/vendor")

        # Assert
        assert response.status_code == 401
System Test Example
System tests verify complete end-to-end scenarios:
# tests/system/test_error_handling.py
import pytest

@pytest.mark.system
class TestErrorHandling:
    """System tests for error handling across the API."""

    def test_invalid_json_request(self, client, auth_headers):
        """Test handling of malformed JSON requests."""
        # Act
        response = client.post(
            "/api/v1/vendor",
            headers=auth_headers,
            content="{ invalid json syntax"
        )

        # Assert
        assert response.status_code == 422
        data = response.json()
        assert data["error_code"] == "VALIDATION_ERROR"
        assert data["message"] == "Request validation failed"
        assert "validation_errors" in data["details"]
Performance Test Example
Performance tests measure response times and system capacity:
# tests/performance/test_api_performance.py
import time

import pytest
from models.database.marketplace_product import MarketplaceProduct

@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.database
class TestPerformance:
    """Performance tests for API endpoints."""

    def test_product_list_performance(self, client, auth_headers, db):
        """Test performance of product listing with many products."""
        # Arrange - Create test data
        products = []
        for i in range(100):
            product = MarketplaceProduct(
                marketplace_product_id=f"PERF{i:03d}",
                title=f"Performance Test Product {i}",
                price=f"{i}.99",
                marketplace="Performance"
            )
            products.append(product)
        db.add_all(products)
        db.commit()

        # Act - Time the request
        start_time = time.time()
        response = client.get(
            "/api/v1/marketplace/product?limit=100",
            headers=auth_headers
        )
        end_time = time.time()

        # Assert
        assert response.status_code == 200
        assert len(response.json()["products"]) == 100
        assert end_time - start_time < 2.0  # Must complete in 2s
Fixtures
Fixture Architecture
Fixtures are organized in a hierarchical structure:
tests/
├── conftest.py # Core fixtures (session-scoped)
│ ├── engine # Database engine
│ ├── testing_session_local # Session factory
│ ├── db # Database session (function-scoped)
│ └── client # Test client
│
├── fixtures/ # Reusable fixture modules
│ ├── testing_fixtures.py # Testing utilities
│ ├── auth_fixtures.py # Auth-related fixtures
│ ├── vendor_fixtures.py # Vendor fixtures
│ ├── marketplace_product_fixtures.py
│ ├── marketplace_import_job_fixtures.py
│ └── customer_fixtures.py
│
├── unit/conftest.py # Unit test specific fixtures
├── integration/conftest.py # Integration test specific fixtures
├── system/conftest.py # System test specific fixtures
└── performance/conftest.py # Performance test specific fixtures
Core Fixtures (tests/conftest.py)
These fixtures are available to all tests:
# Database fixtures
@pytest.fixture(scope="session")
def engine():
    """Create test database engine (in-memory SQLite)."""
    return create_engine(
        "sqlite:///:memory:",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
        echo=False
    )

@pytest.fixture(scope="function")
def db(engine, testing_session_local):
    """Create a clean database session for each test."""
    Base.metadata.create_all(bind=engine)
    db_session = testing_session_local()
    try:
        yield db_session
    finally:
        db_session.close()
        Base.metadata.drop_all(bind=engine)
        Base.metadata.create_all(bind=engine)

@pytest.fixture(scope="function")
def client(db):
    """Create a test client with database dependency override."""
    def override_get_db():
        try:
            yield db
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db
    try:
        client = TestClient(app)
        yield client
    finally:
        if get_db in app.dependency_overrides:
            del app.dependency_overrides[get_db]
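The testing_session_local fixture that db depends on is not shown in this excerpt. A plausible sketch of it, assuming the usual SQLAlchemy pattern of a session factory bound to the test engine (check tests/conftest.py for the actual definition):

```python
import pytest
from sqlalchemy.orm import sessionmaker

@pytest.fixture(scope="session")
def testing_session_local(engine):
    """Session factory bound to the in-memory test engine (assumed implementation)."""
    # autoflush=False keeps tests explicit about when writes hit the database
    return sessionmaker(autoflush=False, bind=engine)
```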
Auth Fixtures (tests/fixtures/auth_fixtures.py)
Authentication and user-related fixtures:
@pytest.fixture
def test_user(db, auth_manager):
    """Create a test user with unique username."""
    unique_id = str(uuid.uuid4())[:8]
    hashed_password = auth_manager.hash_password("testpass123")
    user = User(
        email=f"test_{unique_id}@example.com",
        username=f"testuser_{unique_id}",
        hashed_password=hashed_password,
        role="user",
        is_active=True
    )
    db.add(user)
    db.commit()
    db.refresh(user)
    db.expunge(user)  # Prevent resource warnings
    return user

@pytest.fixture
def test_admin(db, auth_manager):
    """Create a test admin user."""
    unique_id = str(uuid.uuid4())[:8]
    hashed_password = auth_manager.hash_password("adminpass123")
    admin = User(
        email=f"admin_{unique_id}@example.com",
        username=f"admin_{unique_id}",
        hashed_password=hashed_password,
        role="admin",
        is_active=True
    )
    db.add(admin)
    db.commit()
    db.refresh(admin)
    db.expunge(admin)
    return admin

@pytest.fixture
def auth_headers(client, test_user):
    """Get authentication headers for test user."""
    response = client.post(
        "/api/v1/auth/login",
        json={"username": test_user.username, "password": "testpass123"}
    )
    assert response.status_code == 200
    token = response.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}

@pytest.fixture
def admin_headers(client, test_admin):
    """Get authentication headers for admin user."""
    response = client.post(
        "/api/v1/auth/login",
        json={"username": test_admin.username, "password": "adminpass123"}
    )
    assert response.status_code == 200
    token = response.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}
Vendor Fixtures (tests/fixtures/vendor_fixtures.py)
Vendor-related fixtures:
@pytest.fixture
def test_vendor(db, test_user):
    """Create a test vendor."""
    unique_id = str(uuid.uuid4())[:8].upper()
    vendor = Vendor(
        vendor_code=f"TESTVENDOR_{unique_id}",
        subdomain=f"testvendor{unique_id.lower()}",
        name=f"Test Vendor {unique_id.lower()}",
        owner_user_id=test_user.id,
        is_active=True,
        is_verified=True
    )
    db.add(vendor)
    db.commit()
    db.refresh(vendor)
    db.expunge(vendor)
    return vendor

@pytest.fixture
def vendor_factory():
    """Factory function to create unique vendors."""
    return create_unique_vendor_factory()

def create_unique_vendor_factory():
    """Factory function to create unique vendors in tests."""
    def _create_vendor(db, owner_user_id, **kwargs):
        unique_id = str(uuid.uuid4())[:8]
        defaults = {
            "vendor_code": f"FACTORY_{unique_id.upper()}",
            "subdomain": f"factory{unique_id.lower()}",
            "name": f"Factory Vendor {unique_id}",
            "owner_user_id": owner_user_id,
            "is_active": True,
            "is_verified": False
        }
        defaults.update(kwargs)
        vendor = Vendor(**defaults)
        db.add(vendor)
        db.commit()
        db.refresh(vendor)
        return vendor
    return _create_vendor
Using Fixtures
Fixtures are automatically injected by pytest:
def test_example(client, db, test_user, auth_headers):
    """
    This test receives four fixtures:
    - client: Test client for making API calls
    - db: Database session
    - test_user: A test user object
    - auth_headers: Authentication headers for API calls
    """
    # Use the fixtures
    assert test_user.id is not None
    response = client.get("/api/v1/profile", headers=auth_headers)
    assert response.status_code == 200
Creating Custom Fixtures
Add fixtures to appropriate conftest.py or fixture files:
# tests/unit/conftest.py
import pytest

@pytest.fixture
def mock_service():
    """Create a mocked service for unit testing."""
    from unittest.mock import Mock
    service = Mock()
    service.get_data.return_value = {"key": "value"}
    return service
Mocking
When to Use Mocks
Use mocks to:
- Isolate unit tests from external dependencies
- Simulate error conditions
- Test edge cases
- Speed up tests by avoiding slow operations
Mock Patterns
1. Mocking with unittest.mock
from unittest.mock import Mock, MagicMock, patch

@pytest.mark.unit
class TestWithMocks:
    def test_with_mock(self):
        """Test using a simple mock."""
        mock_service = Mock()
        mock_service.get_user.return_value = {"id": 1, "name": "Test"}

        result = mock_service.get_user(1)

        assert result["name"] == "Test"
        mock_service.get_user.assert_called_once_with(1)

    @patch('app.services.auth_service.jwt.encode')
    def test_with_patch(self, mock_encode):
        """Test using patch decorator."""
        mock_encode.return_value = "fake_token"

        service = AuthService()
        token = service.create_token({"user_id": 1})

        assert token == "fake_token"
        mock_encode.assert_called_once()
2. Mocking Database Errors
from sqlalchemy.exc import SQLAlchemyError

def test_database_error_handling():
    """Test handling of database errors."""
    mock_db = Mock()
    mock_db.commit.side_effect = SQLAlchemyError("DB connection failed")

    service = VendorService()
    with pytest.raises(DatabaseException):
        service.create_vendor(mock_db, vendor_data, user)
3. Mocking External Services
@patch('app.services.email_service.send_email')
def test_user_registration_sends_email(mock_send_email, db):
    """Test that user registration sends a welcome email."""
    service = AuthService()
    user_data = UserRegister(
        email="test@example.com",
        username="testuser",
        password="password123"
    )

    service.register_user(db, user_data)

    # Verify email was sent
    mock_send_email.assert_called_once()
    call_args = mock_send_email.call_args
    assert "test@example.com" in str(call_args)
4. Mocking with Context Managers
@pytest.mark.unit
class TestFileOperations:
    @patch('builtins.open', create=True)
    def test_file_read(self, mock_open):
        """Test file reading operation."""
        mock_open.return_value.__enter__.return_value.read.return_value = "test data"

        # Code that opens and reads a file
        with open('test.txt', 'r') as f:
            data = f.read()

        assert data == "test data"
        mock_open.assert_called_once_with('test.txt', 'r')
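The chained __enter__ setup shown above can be avoided with the mock_open helper that ships with unittest.mock, which builds a file-like mock with the context-manager plumbing already wired up:

```python
from unittest.mock import mock_open, patch

def test_file_read_with_mock_open():
    """Same scenario as above, using the mock_open helper."""
    m = mock_open(read_data="test data")
    with patch('builtins.open', m):
        with open('test.txt', 'r') as f:
            data = f.read()

    assert data == "test data"
    m.assert_called_once_with('test.txt', 'r')
```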
Fixtures for Error Testing
Use fixtures to create consistent mock errors:
# tests/fixtures/testing_fixtures.py
@pytest.fixture
def db_with_error():
    """Database session that raises errors."""
    mock_db = Mock()
    mock_db.query.side_effect = SQLAlchemyError("Database connection failed")
    mock_db.add.side_effect = SQLAlchemyError("Database insert failed")
    mock_db.commit.side_effect = SQLAlchemyError("Database commit failed")
    mock_db.rollback.return_value = None
    return mock_db

# Usage in tests
def test_handles_db_error(db_with_error):
    """Test graceful handling of database errors."""
    service = VendorService()
    with pytest.raises(DatabaseException):
        service.create_vendor(db_with_error, vendor_data, user)
Best Practices
1. Test Independence
Each test should be completely independent:
# ✅ GOOD - Independent tests
def test_create_user(db):
    user = create_user(db, "user1@test.com")
    assert user.email == "user1@test.com"

def test_delete_user(db):
    user = create_user(db, "user2@test.com")  # Creates own data
    delete_user(db, user.id)
    assert get_user(db, user.id) is None

# ❌ BAD - Tests depend on each other
user_id = None

def test_create_user(db):
    global user_id
    user = create_user(db, "user@test.com")
    user_id = user.id  # State shared between tests

def test_delete_user(db):
    delete_user(db, user_id)  # Depends on previous test
2. Arrange-Act-Assert Pattern
Structure tests clearly using AAA pattern:
def test_vendor_creation(db, test_user):
    # Arrange - Set up test data
    vendor_data = VendorCreate(
        vendor_code="TESTVENDOR",
        vendor_name="Test Vendor"
    )

    # Act - Perform the action
    vendor = VendorService().create_vendor(db, vendor_data, test_user)

    # Assert - Verify the results
    assert vendor.vendor_code == "TESTVENDOR"
    assert vendor.owner_user_id == test_user.id
3. Test One Thing
Each test should verify a single behavior:
# ✅ GOOD - Tests one specific behavior
def test_admin_creates_verified_vendor(db, test_admin):
    """Test that admin users create verified vendors."""
    vendor = create_vendor(db, test_admin)
    assert vendor.is_verified is True

def test_regular_user_creates_unverified_vendor(db, test_user):
    """Test that regular users create unverified vendors."""
    vendor = create_vendor(db, test_user)
    assert vendor.is_verified is False

# ❌ BAD - Tests multiple behaviors
def test_vendor_creation(db, test_admin, test_user):
    """Test vendor creation."""  # Vague docstring
    admin_vendor = create_vendor(db, test_admin)
    user_vendor = create_vendor(db, test_user)
    assert admin_vendor.is_verified is True
    assert user_vendor.is_verified is False
    assert admin_vendor.is_active is True
    assert user_vendor.is_active is True
    # Testing too many things
4. Use Descriptive Names
Test names should describe what is being tested:
# ✅ GOOD
def test_create_vendor_with_duplicate_code_raises_exception(db, test_vendor):
    """Test that creating vendor with duplicate code raises VendorAlreadyExistsException."""

# ❌ BAD
def test_vendor_error(db):
    """Test vendor."""
5. Test Error Cases
Always test both success and failure paths:
class TestVendorService:
    def test_create_vendor_success(self, db, test_user):
        """Test successful vendor creation."""
        # Test happy path

    def test_create_vendor_duplicate_code(self, db, test_user, test_vendor):
        """Test error when vendor code already exists."""
        # Test error case

    def test_create_vendor_invalid_data(self, db, test_user):
        """Test error with invalid vendor data."""
        # Test validation error

    def test_create_vendor_unauthorized(self, db):
        """Test error when user is not authorized."""
        # Test authorization error
6. Use Markers Consistently
Apply appropriate markers to all tests:
@pytest.mark.unit   # Test level
@pytest.mark.auth   # Functionality
class TestAuthService:
    def test_login_success(self):
        """Test successful login."""
7. Clean Up Resources
Ensure proper cleanup to prevent resource leaks:
@pytest.fixture
def test_user(db, auth_manager):
    """Create a test user."""
    user = User(...)
    db.add(user)
    db.commit()
    db.refresh(user)
    db.expunge(user)  # ✅ Detach from session
    return user
8. Avoid Testing Implementation Details
Test behavior, not implementation:
# ✅ GOOD - Tests behavior
def test_password_is_hashed(db):
    """Test that passwords are stored hashed."""
    user = create_user(db, "test@example.com", "password123")
    assert user.hashed_password != "password123"
    assert len(user.hashed_password) > 50

# ❌ BAD - Tests implementation
def test_password_uses_bcrypt(db):
    """Test that password hashing uses bcrypt."""
    user = create_user(db, "test@example.com", "password123")
    assert user.hashed_password.startswith("$2b$")  # Assumes bcrypt
9. Test Data Uniqueness
Use UUIDs to ensure test data uniqueness:
def test_concurrent_user_creation(db):
    """Test creating multiple users with unique data."""
    unique_id = str(uuid.uuid4())[:8]
    user = User(
        email=f"test_{unique_id}@example.com",
        username=f"testuser_{unique_id}"
    )
    db.add(user)
    db.commit()
10. Coverage Goals
Aim for high test coverage:
- Unit tests: > 90% coverage
- Integration tests: > 80% coverage
- Overall: > 80% coverage (enforced in pytest.ini)
Check coverage with:
make test-coverage
# Opens htmlcov/index.html in browser
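The 80% overall threshold is typically enforced through pytest-cov options in pytest.ini. A hedged sketch of what that enforcement might look like — the flag names are standard pytest-cov options, but the project's actual configuration may differ:

```ini
[pytest]
addopts =
    --cov=app
    --cov=models
    --cov=middleware
    --cov-report=html
    --cov-fail-under=80
```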
Troubleshooting
Common Issues
Issue: Tests Fail Intermittently
Cause: Tests depend on each other or share state
Solution: Ensure test independence, use function-scoped fixtures
# Use function scope for fixtures that modify data
@pytest.fixture(scope="function")  # ✅ New instance per test
def db(engine, testing_session_local):
    ...
Issue: "ResourceWarning: unclosed connection"
Cause: Database sessions not properly closed
Solution: Use db.expunge() after creating test objects
@pytest.fixture
def test_user(db):
    user = User(...)
    db.add(user)
    db.commit()
    db.refresh(user)
    db.expunge(user)  # ✅ Detach from session
    return user
Issue: Tests Run Slowly
Cause: Too many database operations, not using transactions
Solution:
- Mark slow tests with @pytest.mark.slow
- Use session-scoped fixtures where possible
- Batch database operations
# ✅ Batch create
products = [Product(...) for i in range(100)]
db.add_all(products)
db.commit()

# ❌ Individual creates (slow)
for i in range(100):
    product = Product(...)
    db.add(product)
    db.commit()
Issue: Import Errors
Cause: Python path not set correctly
Solution: Run tests from project root:
# ✅ From project root
pytest tests/
# ❌ From tests directory
cd tests && pytest .
Issue: Fixture Not Found
Cause: Fixture not imported or registered
Solution: Check pytest_plugins in conftest.py:
# tests/conftest.py
pytest_plugins = [
    "tests.fixtures.auth_fixtures",
    "tests.fixtures.vendor_fixtures",
    # Add your fixture module here
]
Debug Tips
1. Verbose Output
pytest tests/ -v # Verbose test names
pytest tests/ -vv # Very verbose (shows assertion details)
2. Show Local Variables
pytest tests/ --showlocals # Show local vars on failure
pytest tests/ -l # Short form
3. Drop to Debugger on Failure
pytest tests/ --pdb # Drop to pdb on first failure
4. Run Specific Test
pytest tests/unit/services/test_auth_service.py::TestAuthService::test_login_success -v
5. Show Print Statements
pytest tests/ -s # Show print() and log output
6. Run Last Failed Tests
pytest tests/ --lf # Last failed
pytest tests/ --ff # Failed first
Getting Help
If you encounter issues:
- Check this documentation
- Review test examples in the codebase
- Check pytest.ini for configuration
- Review conftest.py files for available fixtures
- Ask team members or create an issue
Summary
This testing guide covers:
- ✅ Four-level test architecture (Unit/Integration/System/Performance)
- ✅ Running tests with Make commands and pytest
- ✅ Test structure and organization
- ✅ Writing tests for each level
- ✅ Using fixtures effectively
- ✅ Mocking patterns and best practices
- ✅ Best practices for maintainable tests
- ✅ Troubleshooting common issues
For information about maintaining and extending the test suite, see Test Maintenance Guide.