Added placeholder for documentation

docs/api/authentication.md (new file, 63 lines):
# Authentication

JWT-based authentication system for the Letzshop Import API.

## Overview

The API uses JSON Web Tokens (JWT) for authentication. Users register, log in to receive a token, and then include that token in subsequent requests.

## Authentication Flow

1. **Register** - Create a new user account
2. **Login** - Authenticate and receive a JWT token
3. **Use Token** - Include the token in API requests

## Endpoints

### Register User

```http
POST /api/v1/auth/register
Content-Type: application/json

{
  "email": "user@example.com",
  "username": "testuser",
  "password": "securepassword123"
}
```

### Login

```http
POST /api/v1/auth/login
Content-Type: application/json

{
  "username": "testuser",
  "password": "securepassword123"
}
```

Response:

```json
{
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
  "token_type": "bearer",
  "expires_in": 86400
}
```

## Using Authentication

Include the JWT token in the Authorization header:

```http
GET /api/v1/product
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...
```

## User Roles

- **User** - Basic access to own resources
- **Admin** - Full system access

*This documentation is under development.*

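
For reference, the token format shown above can be reproduced with nothing but the standard library. The following is an illustrative sketch of HS256 signing and verification, not the API's actual server-side implementation; the secret and claim names are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(payload: dict, secret: str) -> str:
    """Create an HS256 JWT of the form header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"


def verify_jwt(token: str, secret: str) -> dict:
    """Verify the signature and return the decoded payload."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("invalid signature")
    # Restore base64 padding before decoding the payload segment
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))


token = sign_jwt({"sub": "testuser", "exp": int(time.time()) + 86400}, "dev-secret")
claims = verify_jwt(token, "dev-secret")
```

Note how the header segment of every such token begins with `eyJ`, matching the `access_token` examples above.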
docs/api/error-handling.md (new file, empty)
docs/api/rate-limiting.md (new file, empty)
docs/deployment/docker.md (new file, empty)
docs/deployment/environment.md (new file, empty)
docs/deployment/index.md (new file, empty)
docs/deployment/production.md (new file, empty)
docs/development/architecture.md (new file, empty)
docs/development/contributing.md (new file, empty)

docs/development/database-migrations.md (new file, 328 lines):
# Database Migrations Guide

This guide covers advanced database migration workflows for developers working on schema changes.

## Overview

Our project uses Alembic for database migrations. All schema changes must go through the migration system to ensure:

- Reproducible deployments
- Team synchronization
- Production safety
- Rollback capability

## Migration Commands Reference

### Creating Migrations

```bash
# Auto-generate a migration from model changes
make migrate-create message="add_user_profile_table"

# Create an empty migration template for manual changes
make migrate-create-manual message="add_custom_indexes"
```

### Applying Migrations

```bash
# Apply all pending migrations
make migrate-up

# Roll back the last migration
make migrate-down

# Roll back to a specific revision
make migrate-down-to revision="abc123"
```

### Migration Status

```bash
# Show current migration status
make migrate-status

# Show detailed migration history
alembic history --verbose

# Show details for a specific migration
make migrate-show revision="abc123"
```

### Backup and Safety

```bash
# Create a database backup before major changes
make backup-db

# Verify database setup
make verify-setup
```

## Development Workflows

### Adding New Database Fields

1. **Modify your SQLAlchemy model**:
```python
# In models/database/user.py
class User(Base):
    # ... existing fields
    profile_image = Column(String, nullable=True)  # NEW FIELD
```

2. **Generate the migration**:
```bash
make migrate-create message="add_profile_image_to_users"
```

3. **Review the generated migration**:
```python
# Check alembic/versions/xxx_add_profile_image_to_users.py
def upgrade() -> None:
    op.add_column('users', sa.Column('profile_image', sa.String(), nullable=True))

def downgrade() -> None:
    op.drop_column('users', 'profile_image')
```

4. **Apply the migration**:
```bash
make migrate-up
```

### Adding Database Indexes

1. **Create a manual migration**:
```bash
make migrate-create-manual message="add_performance_indexes"
```

2. **Edit the migration file**:
```python
def upgrade() -> None:
    # Add indexes for better query performance
    op.create_index('idx_products_marketplace_shop', 'products', ['marketplace', 'shop_name'])
    op.create_index('idx_users_email_active', 'users', ['email', 'is_active'])

def downgrade() -> None:
    op.drop_index('idx_users_email_active', table_name='users')
    op.drop_index('idx_products_marketplace_shop', table_name='products')
```

3. **Apply the migration**:
```bash
make migrate-up
```

### Complex Schema Changes

For complex changes that require data transformation:

1. **Create a migration with data handling**:
```python
def upgrade() -> None:
    # Create the new column
    op.add_column('products', sa.Column('normalized_price', sa.Numeric(10, 2)))

    # Migrate existing data
    connection = op.get_bind()
    connection.execute(
        text("UPDATE products SET normalized_price = CAST(price AS NUMERIC) WHERE price ~ '^[0-9.]+$'")
    )

    # Make the column non-nullable after the data migration
    op.alter_column('products', 'normalized_price', nullable=False)

def downgrade() -> None:
    op.drop_column('products', 'normalized_price')
```

## Production Deployment

### Pre-Deployment Checklist

- [ ] All migrations tested locally
- [ ] Database backup created
- [ ] Migration rollback plan prepared
- [ ] Team notified of schema changes

### Deployment Process

```bash
# 1. Pre-deployment checks
make pre-deploy-check

# 2. Back up the production database
make backup-db

# 3. Deploy with migrations
make deploy-prod  # This includes migrate-up
```

### Rollback Process

```bash
# If the deployment fails, roll back
make rollback-prod  # This includes migrate-down
```

## Best Practices

### Migration Naming

Use clear, descriptive names:

```bash
# Good examples
make migrate-create message="add_user_profile_table"
make migrate-create message="remove_deprecated_product_fields"
make migrate-create message="add_indexes_for_search_performance"

# Avoid vague names
make migrate-create message="update_database"  # Too vague
make migrate-create message="fix_stuff"        # Not descriptive
```

### Safe Schema Changes

**Always safe**:
- Adding nullable columns
- Adding indexes
- Adding new tables
- Increasing column size (varchar(50) → varchar(100))

**Potentially unsafe** (require careful planning):
- Dropping columns
- Changing column types
- Adding non-nullable columns without defaults
- Renaming tables or columns

**Multi-step process for unsafe changes**:

```python
# Step 1: Add the new column
def upgrade() -> None:
    op.add_column('users', sa.Column('email_new', sa.String(255)))

# Step 2: Migrate data (separate migration)
def upgrade() -> None:
    connection = op.get_bind()
    connection.execute(text("UPDATE users SET email_new = email"))

# Step 3: Switch columns (separate migration)
def upgrade() -> None:
    op.drop_column('users', 'email')
    op.alter_column('users', 'email_new', new_column_name='email')
```

### Testing Migrations

1. **Test on a copy of production data**:
```bash
# Restore a production backup to a test database
# Run migrations on the test database
# Verify data integrity
```

2. **Test the rollback process**:
```bash
make migrate-up    # Apply migration
# Test application functionality
make migrate-down  # Test rollback
# Verify the rollback worked correctly
```

## Advanced Features

### Environment-Specific Migrations

Use the migration context to handle different environments:

```python
from alembic import context

def upgrade() -> None:
    # Only add sample data in development
    if context.get_x_argument(as_dictionary=True).get('dev_data', False):
        # Add development sample data
        pass

    # Always apply schema changes
    op.create_table(...)
```

Run with an environment flag:

```bash
alembic upgrade head -x dev_data=true
```

### Data Migrations

For large data transformations, use batch processing:

```python
def upgrade() -> None:
    connection = op.get_bind()

    # Process in batches to avoid memory issues
    batch_size = 1000
    offset = 0

    while True:
        result = connection.execute(
            text(f"SELECT id, old_field FROM products LIMIT {batch_size} OFFSET {offset}")
        )
        rows = result.fetchall()

        if not rows:
            break

        for row in rows:
            # Transform the data
            new_value = transform_function(row.old_field)
            connection.execute(
                text("UPDATE products SET new_field = :new_val WHERE id = :id"),
                {"new_val": new_value, "id": row.id}
            )

        offset += batch_size
```

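
In the batch example, `transform_function` is a placeholder for your real transformation, and `text` comes from the usual migration-file import (`from sqlalchemy import text`). One caveat: `OFFSET` pagination re-scans all skipped rows on every batch. A keyset (seek-by-id) variant avoids that; here is a minimal sketch against an in-memory SQLite table, with a trivial uppercase transform standing in for the real one.

```python
import sqlite3

# Stand-in for the products table being migrated
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, old_field TEXT, new_field TEXT)")
conn.executemany(
    "INSERT INTO products (old_field) VALUES (?)",
    [(f"value-{i}",) for i in range(2500)],
)

batch_size = 1000
last_id = 0
while True:
    # Seek past the last processed id instead of using OFFSET
    rows = conn.execute(
        "SELECT id, old_field FROM products WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, batch_size),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE products SET new_field = ? WHERE id = ?",
        [(old.upper(), pk) for pk, old in rows],  # placeholder transform
    )
    last_id = rows[-1][0]

migrated = conn.execute(
    "SELECT COUNT(*) FROM products WHERE new_field IS NOT NULL"
).fetchone()[0]
```

The same `WHERE id > :last_id ORDER BY id LIMIT :n` pattern translates directly into the `connection.execute(text(...))` calls used in a real Alembic migration.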
## Troubleshooting

### Common Issues

**Migration conflicts**:
```bash
# When multiple developers create migrations simultaneously,
# resolve by creating a merge migration
alembic merge -m "merge migrations" head1 head2
```

**Failed migration**:
```bash
# Check the current state
make migrate-status

# Manually fix the database if needed,
# then mark the migration as applied
alembic stamp head
```

**Out-of-sync database**:
```bash
# Reset to a known good state
make backup-db
alembic downgrade base
make migrate-up
```

### Recovery Procedures

1. **Database corruption**: Restore from backup, replay migrations
2. **Failed deployment**: Use the rollback process, investigate the issue
3. **Development issues**: Reset the local database, pull the latest migrations

## Integration with CI/CD

Our deployment pipeline automatically:

1. Runs migration checks in CI
2. Creates database backups before deployment
3. Applies migrations during deployment
4. Provides rollback capability

Migration failures halt the deployment to prevent data corruption.

## Further Reading

- [Alembic Official Documentation](https://alembic.sqlalchemy.org/)
- [Database Schema Documentation](database-schema.md)
- [Deployment Guide](../deployment/production.md)

docs/development/database-schema.md (new file, empty)
docs/development/services.md (new file, empty)

docs/development/troubleshooting.md (new file, 1 line):

*This documentation is under development.*

docs/getting-started/configuration.md (new file, 52 lines):
# Configuration Guide

Environment configuration for the Letzshop Import API.

## Environment Variables

Create a `.env` file in your project root:

```env
# Database Configuration
DATABASE_URL=sqlite:///./ecommerce.db
# For PostgreSQL: DATABASE_URL=postgresql://user:password@localhost:5432/ecommerce

# Security
JWT_SECRET_KEY=your-super-secret-key-change-in-production
JWT_EXPIRE_HOURS=24

# API Settings
API_HOST=0.0.0.0
API_PORT=8000
DEBUG=True

# Rate Limiting
RATE_LIMIT_ENABLED=True
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=3600
```

## Configuration Options

| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `DATABASE_URL` | Database connection string | SQLite | Yes |
| `JWT_SECRET_KEY` | JWT signing key | - | Yes |
| `DEBUG` | Enable debug mode | False | No |

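
At startup, these variables can be loaded into a typed settings object that fails fast when a required value is missing. A stdlib-only sketch follows (the project may well use a settings library instead; the names mirror the table above):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    database_url: str
    jwt_secret_key: str
    jwt_expire_hours: int = 24
    debug: bool = False


def load_settings(env=os.environ) -> Settings:
    """Build Settings from environment variables, failing fast on missing required ones."""
    missing = [name for name in ("DATABASE_URL", "JWT_SECRET_KEY") if name not in env]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return Settings(
        database_url=env["DATABASE_URL"],
        jwt_secret_key=env["JWT_SECRET_KEY"],
        jwt_expire_hours=int(env.get("JWT_EXPIRE_HOURS", "24")),
        debug=env.get("DEBUG", "False").lower() in ("1", "true", "yes"),
    )
```

Passing a plain dict as `env` makes the loader easy to unit-test without touching the real environment.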
## Environment-Specific Setup

### Development
```env
DEBUG=True
DATABASE_URL=sqlite:///./ecommerce.db
```

### Production
```env
DEBUG=False
DATABASE_URL=postgresql://user:password@host:5432/db
JWT_SECRET_KEY=production-secret-key
```

*This guide is under development. See [Installation](installation.md) for complete setup instructions.*

docs/getting-started/database-setup.md (new file, 192 lines):
# Database Setup Guide

This guide will help new team members set up and understand the database system used in this project.

## Quick Setup (New Team Members)

After cloning the repository, follow these steps to get your database ready:

### 1. Install Dependencies
```bash
# Install all dependencies, including Alembic
make install-all
```

### 2. Set Up the Environment
```bash
# Copy the example environment file
cp .env.example .env

# Edit .env with your database configuration.
# For development, you can use SQLite:
DATABASE_URL=sqlite:///./ecommerce.db

# For PostgreSQL (recommended for production-like development):
# DATABASE_URL=postgresql://username:password@localhost:5432/ecommerce_dev
```

### 3. Run Database Migrations
```bash
# Apply all migrations to create the database schema
make migrate-up
```

### 4. Verify the Setup
```bash
# Check that everything is working
make verify-setup
```

### 5. Start Development
```bash
# Start the development server
make dev
```

## Understanding Our Database System

### What is Alembic?

Alembic is a database migration tool that helps us:

- **Version-control our database schema** - Every database change is tracked
- **Share schema changes with the team** - When you pull code, you get database updates too
- **Deploy safely** - Production deployments include database updates
- **Roll back if needed** - We can undo problematic database changes

### Key Concepts

**Migrations**: Files that describe how to change the database schema
- Located in `alembic/versions/`
- Each migration has a unique ID and timestamp
- Migrations run in order to build up the complete schema

**Migration Status**: Alembic tracks which migrations have been applied
- `alembic current` - Shows the current migration
- `alembic history` - Shows all available migrations

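
Under the hood, Alembic records the applied revision in a single-column version table in your database (named `alembic_version` by default); `alembic current` effectively reads this value. An illustrative peek using sqlite3, with a made-up revision id:

```python
import sqlite3

# Simulate the bookkeeping table Alembic maintains in your database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('abc123')")

# "alembic current" reports the revision stored here
current = conn.execute("SELECT version_num FROM alembic_version").fetchone()[0]
```

This is also why restoring a database backup restores the migration state along with it: the version row travels with the data.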
## Daily Workflow
|
||||
|
||||
### When You Pull Code
|
||||
```bash
|
||||
# After pulling changes from git, check for new migrations
|
||||
make migrate-status
|
||||
|
||||
# If there are pending migrations, apply them
|
||||
make migrate-up
|
||||
```
|
||||
|
||||
### When Working with Models
|
||||
```bash
|
||||
# After modifying SQLAlchemy models, create a migration
|
||||
make migrate-create message="add_user_profile_table"
|
||||
|
||||
# Review the generated migration file in alembic/versions/
|
||||
# Then apply it
|
||||
make migrate-up
|
||||
```
|
||||
|
||||
## Common Scenarios
|
||||
|
||||
### First Time Setup
|
||||
```bash
|
||||
make setup # This handles everything automatically
|
||||
```
|
||||
|
||||
### Database is Out of Sync
|
||||
```bash
|
||||
# Check current status
|
||||
make migrate-status
|
||||
|
||||
# Apply any missing migrations
|
||||
make migrate-up
|
||||
```
|
||||
|
||||
### Something Went Wrong
|
||||
```bash
|
||||
# Create a backup first
|
||||
make backup-db
|
||||
|
||||
# Check what migrations are available
|
||||
make migrate-status
|
||||
|
||||
# If you need to rollback the last migration
|
||||
make migrate-down
|
||||
```
|
||||
|
||||
### Starting Fresh (Development Only)
|
||||
```bash
|
||||
# Backup first (just in case)
|
||||
make backup-db
|
||||
|
||||
# Delete database and recreate from scratch
|
||||
del ecommerce.db # or drop PostgreSQL database
|
||||
make migrate-up
|
||||
```
|
||||
|
||||
## Environment-Specific Setup
|
||||
|
||||
### Development (SQLite)
|
||||
```env
|
||||
DATABASE_URL=sqlite:///./ecommerce.db
|
||||
```
|
||||
- Quick setup, no additional software needed
|
||||
- File-based database, easy to backup/restore
|
||||
- Good for local development and testing
|
||||
|
||||
### Development (PostgreSQL)
|
||||
```env
|
||||
DATABASE_URL=postgresql://user:password@localhost:5432/ecommerce_dev
|
||||
```
|
||||
- More production-like environment
|
||||
- Better for testing complex queries
|
||||
- Required for certain advanced features
|
||||
|
||||
### Production
|
||||
```env
|
||||
DATABASE_URL=postgresql://user:password@production-host:5432/ecommerce_prod
|
||||
```
|
||||
- Always use PostgreSQL in production
|
||||
- Migrations are applied automatically during deployment
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### "No module named 'models'"
|
||||
```bash
|
||||
# Make sure you're in the project root and have activated the virtual environment
|
||||
cd /path/to/project
|
||||
source venv/bin/activate # or venv\Scripts\activate on Windows
|
||||
```
|
||||
|
||||
### "Database connection failed"
|
||||
```bash
|
||||
# Check your DATABASE_URL in .env
|
||||
# For SQLite, make sure the directory exists and is writable
|
||||
# For PostgreSQL, ensure the server is running and credentials are correct
|
||||
```
|
||||
|
||||
### "Migration conflicts"
|
||||
```bash
|
||||
# Check migration status
|
||||
make migrate-status
|
||||
|
||||
# If there are conflicts, contact the team lead
|
||||
# We may need to merge or reorder migrations
|
||||
```
|
||||
|
||||
### Need Help?
|
||||
- Check `make help-db` for database commands
|
||||
- Use `make verify-setup` to diagnose issues
|
||||
- Ask in the team chat if you're stuck
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Always run migrations after pulling code**
|
||||
2. **Never edit migration files after they're committed**
|
||||
3. **Test your migrations on a copy of production data**
|
||||
4. **Include meaningful messages when creating migrations**
|
||||
5. **Review generated migrations before applying them**
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Database Schema Documentation](../development/database-schema.md) - Understand our data model
|
||||
- [Database Migrations Guide](../development/database-migrations.md) - Advanced migration workflows
|
||||
- [API Documentation](../api/index.md) - Start building features
|
||||
38
docs/getting-started/quickstart.md
Normal file
38
docs/getting-started/quickstart.md
Normal file
@@ -0,0 +1,38 @@
# Quick Start Guide

Get up and running with the Letzshop Import API in 5 minutes.

## Prerequisites

- Python 3.10+
- Git

## Quick Setup

```bash
# 1. Clone and set up
git clone <your-repo>
cd letzshop-import
make setup

# 2. Start development
make dev
```

## First API Call

```bash
# Check health
curl http://localhost:8000/health

# View the API docs
open http://localhost:8000/docs
```

## Next Steps

- [Database Setup](database-setup.md) - Configure your database
- [Configuration](configuration.md) - Environment configuration
- [API Documentation](../api/index.md) - Explore the API

*This guide is under development. For detailed instructions, see [Installation](installation.md).*

docs/guides/csv-import.md (new file, empty)
docs/guides/marketplace-integration.md (new file, empty)
docs/guides/product-management.md (new file, empty)
docs/guides/shop-setup.md (new file, empty)
@@ -33,12 +33,13 @@ Letzshop Import is a powerful web application that enables:

- [**Shop Setup**](guides/shop-setup.md) - Configuring shops

### 🧪 Testing
- [**Test Naming Conventions**](testing/test-naming-conventions.md) - Our testing standards
- [**Running Tests**](testing/running-tests.md) - How to run the test suite
- [**Testing Guide**](testing/testing-guide.md) - Our testing standards and how to run tests
- [**Test Maintenance**](testing/test-maintenance.md) - Test suite maintenance

### 🔧 Development
- [**Architecture**](development/architecture.md) - System design overview
- [**Database Schema**](development/database-schema.md) - Data model documentation
- [**Troubleshooting**](development/troubleshooting.md) - How to troubleshoot
- [**Contributing**](development/contributing.md) - How to contribute

### 🚢 Deployment

@@ -1,551 +1 @@
|
||||
# Test Maintenance Guide
|
||||
|
||||
This guide covers how to maintain, update, and contribute to our test suite as the application evolves. It's designed for developers who need to modify existing tests or add new test coverage.
|
||||
|
||||
## Test Maintenance Philosophy
|
||||
|
||||
Our test suite follows these core principles:
|
||||
- **Tests should be reliable** - They pass consistently and fail only when there are real issues
|
||||
- **Tests should be fast** - Unit tests complete in milliseconds, integration tests in seconds
|
||||
- **Tests should be maintainable** - Easy to update when code changes
|
||||
- **Tests should provide clear feedback** - Failures should clearly indicate what went wrong
|
||||
|
||||
## When to Update Tests
|
||||
|
||||
### Code Changes That Require Test Updates
|
||||
|
||||
**API Changes**:
|
||||
```bash
|
||||
# When you modify API endpoints, update integration tests
|
||||
pytest tests/integration/api/v1/test_*_endpoints.py -v
|
||||
```
|
||||
|
||||
**Business Logic Changes**:
|
||||
```bash
|
||||
# When you modify service logic, update unit tests
|
||||
pytest tests/unit/services/test_*_service.py -v
|
||||
```
|
||||
|
||||
**Database Model Changes**:
|
||||
```bash
|
||||
# When you modify models, update model tests
|
||||
pytest tests/unit/models/test_database_models.py -v
|
||||
```
|
||||
|
||||
**New Features**:
|
||||
- Add new test files following our naming conventions
|
||||
- Ensure both unit and integration test coverage
|
||||
- Add appropriate pytest markers
|
||||
|
||||
## Common Maintenance Tasks
|
||||
|
||||
### Adding Tests for New Features
|
||||
|
||||
**Step 1: Determine Test Type and Location**
|
||||
```python
|
||||
# New business logic → Unit test
|
||||
tests/unit/services/test_new_feature_service.py
|
||||
|
||||
# New API endpoint → Integration test
|
||||
tests/integration/api/v1/test_new_feature_endpoints.py
|
||||
|
||||
# New workflow → Integration test
|
||||
tests/integration/workflows/test_new_feature_workflow.py
|
||||
```
|
||||
|
||||
**Step 2: Create Test File with Proper Structure**
|
||||
```python
|
||||
import pytest
|
||||
from app.services.new_feature_service import NewFeatureService
|
||||
|
||||
@pytest.mark.unit
|
||||
@pytest.mark.new_feature # Add domain marker
|
||||
class TestNewFeatureService:
|
||||
"""Unit tests for NewFeatureService"""
|
||||
|
||||
def setup_method(self):
|
||||
"""Setup run before each test"""
|
||||
self.service = NewFeatureService()
|
||||
|
||||
def test_new_feature_with_valid_input_succeeds(self):
|
||||
"""Test the happy path"""
|
||||
# Test implementation
|
||||
pass
|
||||
|
||||
def test_new_feature_with_invalid_input_raises_error(self):
|
||||
"""Test error handling"""
|
||||
# Test implementation
|
||||
pass
|
||||
```
|
||||
|
||||
**Step 3: Add Domain Marker to pytest.ini**
|
||||
```ini
|
||||
# Add to markers section in pytest.ini
|
||||
new_feature: marks tests related to new feature functionality
|
||||
```
|
||||
|
||||
### Updating Tests for API Changes
|
||||
|
||||
**Example: Adding a new field to product creation**
|
||||
|
||||
```python
|
||||
# Before: tests/integration/api/v1/test_product_endpoints.py
|
||||
def test_create_product_success(self, client, auth_headers):
|
||||
product_data = {
|
||||
"product_id": "TEST001",
|
||||
"title": "Test Product",
|
||||
"price": "19.99"
|
||||
}
|
||||
response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
|
||||
assert response.status_code == 200
|
||||
|
||||
# After: Adding 'category' field
|
||||
def test_create_product_success(self, client, auth_headers):
|
||||
product_data = {
|
||||
"product_id": "TEST001",
|
||||
"title": "Test Product",
|
||||
"price": "19.99",
|
||||
"category": "Electronics" # New field
|
||||
}
|
||||
response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
|
||||
assert response.status_code == 200
|
||||
assert response.json()["category"] == "Electronics" # Verify new field
|
||||
|
||||
# Add test for validation
|
||||
def test_create_product_with_invalid_category_fails(self, client, auth_headers):
|
||||
product_data = {
|
||||
"product_id": "TEST002",
|
||||
"title": "Test Product",
|
||||
"price": "19.99",
|
||||
"category": "" # Invalid empty category
|
||||
}
|
||||
response = client.post("/api/v1/product", json=product_data, headers=auth_headers)
|
||||
assert response.status_code == 422
|
||||
```
|
||||
|
||||
### Updating Fixtures for Model Changes
|
||||
|
||||
**When you add fields to database models, update fixtures**:
|
||||
|
||||
```python
|
||||
# tests/fixtures/product_fixtures.py - Before
|
||||
@pytest.fixture
|
||||
def test_product(db):
|
||||
product = Product(
|
||||
product_id="TEST001",
|
||||
title="Test Product",
|
||||
price="10.99"
|
||||
)
|
||||
# ... rest of fixture
|
||||
|
||||
# After: Adding new category field
|
||||
@pytest.fixture
|
||||
def test_product(db):
|
||||
product = Product(
|
||||
product_id="TEST001",
|
||||
title="Test Product",
|
||||
price="10.99",
|
||||
category="Electronics" # Add new field with sensible default
|
||||
)
|
||||
# ... rest of fixture
|
||||
```
|
||||
|
||||
### Handling Breaking Changes
|
||||
|
||||
**When making breaking changes that affect many tests**:
|
||||
|
||||
1. **Update fixtures first** to include new required fields
|
||||
2. **Run tests to identify failures**: `pytest -x` (stop on first failure)
|
||||
3. **Update tests systematically** by domain
|
||||
4. **Verify coverage hasn't decreased**: `make test-coverage`
|
||||
|
||||
## Test Data Management
|
||||
|
||||
### Creating New Fixtures
|
||||
|
||||
**Add domain-specific fixtures to appropriate files**:
|
||||
|
||||
```python
|
||||
# tests/fixtures/new_domain_fixtures.py
|
||||
import pytest
|
||||
from models.database import NewModel
|
||||
|
||||
@pytest.fixture
|
||||
def test_new_model(db):
|
||||
"""Create a test instance of NewModel"""
|
||||
model = NewModel(
|
||||
name="Test Model",
|
||||
value="test_value"
|
||||
)
|
||||
db.add(model)
|
||||
db.commit()
|
||||
db.refresh(model)
|
||||
return model
|
||||
|
||||
@pytest.fixture
|
||||
def new_model_factory():
|
||||
"""Factory for creating custom NewModel instances"""
|
||||
def _create_new_model(db, **kwargs):
|
||||
defaults = {"name": "Default Name", "value": "default"}
|
||||
defaults.update(kwargs)
|
||||
model = NewModel(**defaults)
|
||||
db.add(model)
|
||||
db.commit()
|
||||
db.refresh(model)
|
||||
return model
|
||||
return _create_new_model
|
||||
```
|
||||
|
||||
**Register new fixture module in conftest.py**:
|
||||
```python
|
||||
# tests/conftest.py
|
||||
pytest_plugins = [
|
||||
"tests.fixtures.auth_fixtures",
|
||||
"tests.fixtures.product_fixtures",
|
||||
"tests.fixtures.shop_fixtures",
|
||||
"tests.fixtures.marketplace_fixtures",
|
||||
"tests.fixtures.new_domain_fixtures", # Add new fixture module
|
||||
]
|
||||
```
|
||||
|
||||
### Managing Test Data Files
|
||||
|
||||
**Static test data in tests/test_data/**:
|
||||
```
|
||||
tests/test_data/
|
||||
├── csv/
|
||||
│ ├── valid_products.csv # Standard valid product data
|
||||
│ ├── invalid_products.csv # Data with validation errors
|
||||
│ ├── large_product_set.csv # Performance testing data
|
||||
│ └── new_feature_data.csv # Data for new feature testing
|
||||
├── json/
|
||||
│ ├── api_responses.json # Mock API responses
|
||||
│ └── configuration_samples.json # Configuration test data
|
||||
└── fixtures/
|
||||
└── database_seeds.json # Database seed data
|
||||
```
|
||||
|
||||
**Update test data when adding new fields**:
|
||||
```csv
|
||||
# Before: tests/test_data/csv/valid_products.csv
|
||||
product_id,title,price
|
||||
TEST001,Product 1,19.99
|
||||
TEST002,Product 2,29.99
|
||||
|
||||
# After: Adding category field
|
||||
product_id,title,price,category
|
||||
TEST001,Product 1,19.99,Electronics
|
||||
TEST002,Product 2,29.99,Books
|
||||
```
|
||||
|
||||
## Performance Test Maintenance
|
||||
|
||||
### Updating Performance Baselines
|
||||
|
||||
**When application performance improves or requirements change**:
|
||||
|
||||
```python
|
||||
# tests/performance/test_api_performance.py
|
||||
def test_product_list_performance(self, client, auth_headers, db):
|
||||
# Create test data
|
||||
products = [Product(product_id=f"PERF{i:03d}") for i in range(100)]
|
||||
db.add_all(products)
|
||||
db.commit()
|
||||
|
||||
# Time the request
|
||||
start_time = time.time()
|
||||
response = client.get("/api/v1/product?limit=100", headers=auth_headers)
|
||||
end_time = time.time()
|
||||
|
||||
assert response.status_code == 200
|
||||
assert len(response.json()["products"]) == 100
|
||||
# Update baseline if performance has improved
|
||||
assert end_time - start_time < 1.5 # Previously was 2.0 seconds
|
||||
```

### Adding Performance Tests for New Features

```python
@pytest.mark.performance
@pytest.mark.slow
@pytest.mark.new_feature
def test_new_feature_performance_with_large_dataset(self, client, auth_headers, db):
    """Test new feature performance with realistic data volume"""
    # Create large dataset
    large_dataset = [NewModel(data=f"item_{i}") for i in range(1000)]
    db.add_all(large_dataset)
    db.commit()

    # Test performance
    start_time = time.time()
    response = client.post("/api/v1/new-feature/process",
                           json={"process_all": True},
                           headers=auth_headers)
    end_time = time.time()

    assert response.status_code == 200
    assert end_time - start_time < 10.0  # Should complete within 10 seconds
```
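
The start/stop timing boilerplate repeated in these tests can be centralized in a small context manager. This helper is a suggestion, not existing project code:

```python
import time
from contextlib import contextmanager

@contextmanager
def assert_max_duration(seconds):
    """Fail the enclosing test if the wrapped block exceeds `seconds`."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    assert elapsed < seconds, f"took {elapsed:.2f}s, limit {seconds}s"
```

The timing section of a test then shrinks to `with assert_max_duration(10.0): client.post(...)`, and the baseline lives in exactly one place per test.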

## Debugging and Troubleshooting

### Identifying Flaky Tests

**Tests that pass/fail inconsistently need attention**:

```bash
# Run the same test multiple times to identify flaky behavior
# (--count requires the pytest-repeat plugin)
pytest tests/path/to/flaky_test.py -v --count=10

# Run with more verbose output to see what's changing
pytest tests/path/to/flaky_test.py -vv --tb=long --showlocals
```

**Common causes of flaky tests**:
- Database state not properly cleaned between tests
- Timing issues in async operations
- External service dependencies
- Shared mutable state between tests
### Fixing Common Test Issues

**Database State Issues**:
```python
# Ensure proper cleanup in fixtures
@pytest.fixture
def clean_database(db):
    """Ensure clean database state"""
    yield db
    # Explicit cleanup if needed
    db.query(SomeModel).delete()
    db.commit()
```

**Async Test Issues**:
```python
# Ensure proper async test setup
@pytest.mark.asyncio
async def test_async_operation():
    # Use await for all async operations
    result = await async_service.process_data()
    assert result is not None
```

**Mock-Related Issues**:
```python
# Ensure mocks are properly reset between tests
def setup_method(self):
    """Reset mocks before each test"""
    self.mock_service.reset_mock()
```
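
An alternative that removes the reset bookkeeping entirely is to create a fresh mock in `setup_method`, so no call history can survive from one test to the next. A sketch with illustrative names (`fetch` is not a real project method):

```python
from unittest.mock import MagicMock

class TestWithIsolatedMocks:
    def setup_method(self):
        # A brand-new mock per test: nothing to reset, nothing to leak
        self.mock_client = MagicMock()

    def test_records_one_call(self):
        self.mock_client.fetch("a")
        assert self.mock_client.fetch.call_count == 1

    def test_starts_with_clean_history(self):
        # Would fail if the mock from the previous test were reused
        assert self.mock_client.fetch.call_count == 0
```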

### Test Coverage Issues

**Identifying gaps in coverage**:
```bash
# Generate coverage report with missing lines
pytest --cov=app --cov-report=term-missing

# View HTML report for detailed analysis
pytest --cov=app --cov-report=html
open htmlcov/index.html
```

**Adding tests for uncovered code**:
```python
# Example: Adding test for error handling branch
def test_service_method_handles_database_error(self, mock_db):
    """Test error handling path that wasn't covered"""
    # Setup mock to raise exception
    mock_db.commit.side_effect = DatabaseError("Connection failed")

    # Test that error is handled appropriately
    with pytest.raises(ServiceError):
        self.service.save_data(test_data)
```

## Code Quality Standards

### Test Code Review Checklist

**Before submitting test changes**:
- [ ] Tests have descriptive names explaining the scenario
- [ ] Appropriate pytest markers are used
- [ ] Test coverage hasn't decreased
- [ ] Tests are in the correct category (unit/integration/system)
- [ ] No hardcoded values that could break in different environments
- [ ] Error cases are tested, not just happy paths
- [ ] New fixtures are properly documented
- [ ] Performance tests have reasonable baselines
### Refactoring Tests

**When refactoring test code**:
```python
# Before: Repetitive test setup
class TestProductService:
    def test_create_product_success(self):
        service = ProductService()
        data = {"name": "Test", "price": "10.99"}
        result = service.create_product(data)
        assert result is not None

    def test_create_product_validation_error(self):
        service = ProductService()  # Duplicate setup
        data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            service.create_product(data)

# After: Using setup_method and shared test data
class TestProductService:
    def setup_method(self):
        self.service = ProductService()
        self.valid_data = {"name": "Test", "price": "10.99"}

    def test_create_product_success(self):
        result = self.service.create_product(self.valid_data)
        assert result is not None

    def test_create_product_validation_error(self):
        invalid_data = {"name": "", "price": "invalid"}
        with pytest.raises(ValidationError):
            self.service.create_product(invalid_data)
```
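
Repeated assertion blocks can be refactored the same way as repeated setup: extract them into a helper so each test states only what is unique to it. A sketch with illustrative names, using plain dicts rather than the real model:

```python
def assert_valid_product(result, expected_name):
    """Shared assertions for any test that creates a product."""
    assert result is not None
    assert result["name"] == expected_name

# Inside a test, the assertion section collapses to one line:
# assert_valid_product(self.service.create_product(self.valid_data), "Test")
```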

## Working with CI/CD

### Test Categories in CI Pipeline

Our CI pipeline runs tests in stages:

**Stage 1: Fast Feedback**
```bash
make test-fast  # Unit tests + fast integration tests
```

**Stage 2: Comprehensive Testing**
```bash
make test-coverage  # Full suite with coverage
```

**Stage 3: Performance Validation** (on release branches)
```bash
pytest -m performance
```

### Making Tests CI-Friendly

**Ensure tests are deterministic**:
```python
# Bad: Tests that depend on current time
def test_user_creation():
    user = create_user()
    assert user.created_at.day == datetime.now().day  # Flaky at midnight

# Good: Tests with controlled time (using the freezegun library)
from freezegun import freeze_time

def test_user_creation():
    with freeze_time("2024-01-15 10:00:00"):
        user = create_user()
    assert user.created_at == datetime(2024, 1, 15, 10, 0, 0)
```

**Make tests environment-independent**:
```python
# Use relative paths and environment variables
TEST_DATA_DIR = Path(__file__).parent / "test_data"
CSV_FILE = TEST_DATA_DIR / "sample_products.csv"
```
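
Environment variables can complement relative paths when CI needs to point tests at a different data location. One possible pattern (the `TEST_DATA_DIR` variable name is an assumption, not an existing project convention):

```python
import os
from pathlib import Path

def test_data_dir():
    """Resolve the test-data directory, letting CI override it via env."""
    override = os.environ.get("TEST_DATA_DIR")
    if override:
        return Path(override)
    # Default: relative to this file, independent of the working directory
    return Path(__file__).parent / "test_data"
```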

## Migration and Upgrade Strategies

### When Upgrading Dependencies

**Test dependency upgrades**:
```bash
# Test with new versions before upgrading
pip install pytest==8.0.0 pytest-cov==5.0.0
make test

# If tests fail, identify compatibility issues
pytest --tb=short -x
```

**Update test configuration for new pytest versions**:
```ini
# pytest.ini - may need updates for new versions
minversion = 8.0
# Check if any deprecated features are used
```

### Database Schema Changes

**When modifying database models**:
1. Update model test fixtures first
2. Run the migration on the test database
3. Update affected test data files
4. Run integration tests to catch relationship issues

```python
# Update fixtures for new required fields
@pytest.fixture
def test_product(db):
    product = Product(
        # ... existing fields
        new_required_field="default_value"  # Add with a sensible default
    )
    return product
```

## Documentation and Knowledge Sharing

### Documenting Complex Test Scenarios

**For complex business logic tests**:
```python
def test_complex_pricing_calculation_scenario(self):
    """
    Test pricing calculation with multiple discounts and tax rules.

    Scenario:
    - Product price: $100
    - Member discount: 10%
    - Seasonal discount: 5% (applied after member discount)
    - Tax rate: 8.5%

    Expected calculation:
    Base: $100 → Member discount: $90 → Seasonal: $85.50 → Tax: $92.77
    """
    # Test implementation with clear steps
```
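
The docstring's arithmetic can be checked mechanically. A sketch of the calculation as a standalone function (the discount ordering and rounding follow the docstring; the function itself is illustrative, not the real pricing service):

```python
from decimal import Decimal, ROUND_HALF_UP

def final_price(base, member_disc, seasonal_disc, tax_rate):
    after_member = base * (1 - member_disc)              # 100.00 -> 90.00
    after_seasonal = after_member * (1 - seasonal_disc)  # 90.00 -> 85.50
    with_tax = after_seasonal * (1 + tax_rate)           # 85.50 -> 92.7675
    return with_tax.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

`final_price(Decimal("100"), Decimal("0.10"), Decimal("0.05"), Decimal("0.085"))` yields `Decimal("92.77")`, matching the docstring's expected value.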

### Team Knowledge Sharing

**Maintain test documentation**:
- Update this guide when adding new test patterns
- Document complex fixture relationships
- Share test debugging techniques in team meetings
- Create examples for new team members

## Summary: Test Maintenance Best Practices

**Daily Practices**:
- Run relevant tests before committing code
- Add tests for new functionality immediately
- Keep test names descriptive and current
- Update fixtures when models change

**Regular Maintenance**:
- Review and update performance baselines
- Refactor repetitive test code
- Clean up unused fixtures and test data
- Monitor test execution times

**Long-term Strategy**:
- Plan test architecture for new features
- Evaluate test coverage trends
- Update testing tools and practices
- Share knowledge across the team

**Remember**: Good tests are living documentation of your system's behavior. Keep them current, clear, and comprehensive to maintain a healthy codebase.

Use this guide alongside the [Testing Guide](testing-guide.md) for complete test management knowledge.

*This documentation is under development.*
# Testing Guide for Developers

This guide provides everything your development team needs to know about our comprehensive test suite structure, how to run tests effectively, and how to maintain test quality.

## Quick Start

```bash
# Install test dependencies
make install-test

# Run all tests
make test

# Run fast tests only (development workflow)
make test-fast

# Run with coverage
make test-coverage
```

## Test Structure Overview

Our test suite is organized hierarchically by test type and execution speed to optimize development workflows:

```
tests/
├── conftest.py                  # Core test configuration and database fixtures
├── pytest.ini                   # Test configuration with markers and coverage
├── fixtures/                    # Domain-organized test fixtures
│   ├── auth_fixtures.py         # Users, tokens, authentication headers
│   ├── product_fixtures.py      # Products, factories, bulk test data
│   ├── shop_fixtures.py         # Shops, stock, shop-product relationships
│   └── marketplace_fixtures.py  # Import jobs and marketplace data
├── unit/                        # Fast, isolated component tests (< 1 second)
│   ├── models/                  # Database and API model tests
│   ├── utils/                   # Utility function tests
│   ├── services/                # Business logic tests
│   └── middleware/              # Middleware component tests
├── integration/                 # Multi-component tests (1-10 seconds)
│   ├── api/v1/                  # API endpoint tests with database
│   ├── security/                # Authentication, authorization tests
│   ├── tasks/                   # Background task integration tests
│   └── workflows/               # Multi-step process tests
├── performance/                 # Performance benchmarks (10+ seconds)
│   └── test_api_performance.py  # Load testing and benchmarks
├── system/                      # End-to-end system tests (30+ seconds)
│   └── test_error_handling.py   # Application-wide error handling
└── test_data/                   # Static test data files
    └── csv/sample_products.csv  # Sample CSV for import testing
```

## Test Categories and When to Use Each

### Unit Tests (`tests/unit/`)
**Purpose**: Test individual components in isolation
**Speed**: Very fast (< 1 second each)
**Use when**: Testing business logic, data processing, model validation

```bash
# Run during active development
pytest -m unit

# Example locations:
tests/unit/services/test_product_service.py  # Business logic
tests/unit/utils/test_data_processing.py     # Utility functions
tests/unit/models/test_database_models.py    # Model validation
```

### Integration Tests (`tests/integration/`)
**Purpose**: Test component interactions
**Speed**: Moderate (1-10 seconds each)
**Use when**: Testing API endpoints, service interactions, workflows

```bash
# Run before commits
pytest -m integration

# Example locations:
tests/integration/api/v1/test_admin_endpoints.py   # API endpoints
tests/integration/security/test_authentication.py  # Auth workflows
tests/integration/workflows/test_product_import.py # Multi-step processes
```

### Performance Tests (`tests/performance/`)
**Purpose**: Validate performance requirements
**Speed**: Slow (10+ seconds each)
**Use when**: Testing response times, load capacity, large data processing

```bash
# Run periodically or in CI
pytest -m performance
```

### System Tests (`tests/system/`)
**Purpose**: End-to-end application behavior
**Speed**: Slowest (30+ seconds each)
**Use when**: Testing complete user scenarios, error handling across layers

```bash
# Run before releases
pytest -m system
```

## Daily Development Workflow

### During Active Development
```bash
# Quick feedback loop - run relevant unit tests
pytest tests/unit/services/test_product_service.py -v

# Test specific functionality you're working on
pytest -k "product and create" -m unit

# Fast comprehensive check
make test-fast  # Equivalent to: pytest -m "not slow"
```

### Before Committing Code
```bash
# Run unit and integration tests
make test-unit
make test-integration

# Or run both with coverage
make test-coverage
```

### Before Creating Pull Request
```bash
# Full test suite with linting
make ci  # Runs format, lint, and test-coverage

# Check if all tests pass
make test
```

## Running Specific Tests

### By Test Type
```bash
# Fast unit tests only
pytest -m unit

# Integration tests only
pytest -m integration

# Everything except slow tests
pytest -m "not slow"

# Database-dependent tests
pytest -m database

# Authentication-related tests
pytest -m auth
```

### By Component/Domain
```bash
# All product-related tests
pytest -k "product"

# Admin functionality tests
pytest -m admin

# API endpoint tests
pytest -m api

# All tests in a directory
pytest tests/unit/services/ -v
```

### By Specific Files or Methods
```bash
# Specific test file
pytest tests/unit/services/test_product_service.py -v

# Specific test class
pytest tests/unit/services/test_product_service.py::TestProductService -v

# Specific test method
pytest tests/unit/services/test_product_service.py::TestProductService::test_create_product_success -v
```

## Test Fixtures and Data

### Using Existing Fixtures
Our fixtures are organized by domain in the `fixtures/` directory:

```python
# In your test file
def test_product_creation(test_user, test_shop, auth_headers):
    """Uses auth_fixtures.py fixtures"""
    # test_user: Creates a test user
    # test_shop: Creates a test shop owned by test_user
    # auth_headers: Provides authentication headers for API calls

def test_multiple_products(multiple_products):
    """Uses product_fixtures.py fixtures"""
    # multiple_products: Creates 5 test products with different attributes
    assert len(multiple_products) == 5

def test_with_factory(product_factory, db):
    """Uses factory fixtures for custom test data"""
    # Create custom product with specific attributes
    product = product_factory(db, title="Custom Product", price="99.99")
    assert product.title == "Custom Product"
```

### Available Fixtures by Domain

**Authentication (`auth_fixtures.py`)**:
- `test_user`, `test_admin`, `other_user`
- `auth_headers`, `admin_headers`
- `auth_manager`

**Products (`product_fixtures.py`)**:
- `test_product`, `unique_product`, `multiple_products`
- `product_factory` (for custom products)

**Shops (`shop_fixtures.py`)**:
- `test_shop`, `unique_shop`, `inactive_shop`, `verified_shop`
- `shop_product`, `test_stock`, `multiple_stocks`
- `shop_factory` (for custom shops)

**Marketplace (`marketplace_fixtures.py`)**:
- `test_marketplace_job`

## Writing New Tests

### Test File Location
Choose the location based on what you're testing:

```
# Business logic → unit tests
tests/unit/services/test_my_new_service.py

# API endpoints → integration tests
tests/integration/api/v1/test_my_new_endpoints.py

# Multi-component workflows → integration tests
tests/integration/workflows/test_my_new_workflow.py

# Performance concerns → performance tests
tests/performance/test_my_performance.py
```

### Test Class Structure
```python
import pytest
from app.services.my_service import MyService

@pytest.mark.unit      # Always add appropriate markers
@pytest.mark.products  # Domain-specific marker
class TestMyService:
    """Test suite for MyService business logic"""

    def setup_method(self):
        """Run before each test method"""
        self.service = MyService()

    def test_create_item_with_valid_data_succeeds(self):
        """Test successful item creation - descriptive name explaining scenario"""
        # Arrange
        item_data = {"name": "Test Item", "price": "10.99"}

        # Act
        result = self.service.create_item(item_data)

        # Assert
        assert result is not None
        assert result.name == "Test Item"

    def test_create_item_with_invalid_data_raises_validation_error(self):
        """Test validation error handling"""
        # Arrange
        invalid_data = {"name": "", "price": "invalid"}

        # Act & Assert
        with pytest.raises(ValidationError):
            self.service.create_item(invalid_data)
```

### API Integration Test Example
```python
import pytest

@pytest.mark.integration
@pytest.mark.api
@pytest.mark.products
class TestProductEndpoints:
    """Integration tests for product API endpoints"""

    def test_create_product_endpoint_success(self, client, auth_headers):
        """Test successful product creation via API"""
        # Arrange
        product_data = {
            "product_id": "TEST001",
            "title": "Test Product",
            "price": "19.99"
        }

        # Act
        response = client.post("/api/v1/product",
                               json=product_data,
                               headers=auth_headers)

        # Assert
        assert response.status_code == 200
        assert response.json()["product_id"] == "TEST001"
```
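
For reference, the value an `auth_headers`-style fixture supplies is a standard Bearer header, matching the API's JWT authentication. A minimal sketch of that shape (the helper function is illustrative; real fixtures obtain the token from the login endpoint):

```python
def bearer_headers(token):
    """Build the Authorization header the API expects."""
    return {"Authorization": f"Bearer {token}"}
```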

## Test Naming Conventions

### Files
- `test_{component_name}.py` for the file name
- Mirror your source structure: `app/services/product.py` → `tests/unit/services/test_product_service.py`

### Classes
- `TestComponentName` for the main component
- `TestComponentValidation` for validation-specific tests
- `TestComponentErrorHandling` for error scenarios

### Methods
Use descriptive names that explain the scenario:
```python
# Good - explains what, when, and expected outcome
def test_create_product_with_valid_data_returns_product(self):
def test_create_product_with_duplicate_id_raises_error(self):
def test_get_product_when_not_found_returns_404(self):

# Acceptable shorter versions
def test_create_product_success(self):
def test_create_product_validation_error(self):
def test_get_product_not_found(self):
```

## Coverage Requirements

We maintain high coverage standards:
- **Minimum overall coverage**: 80%
- **New code coverage**: 90%+
- **Critical paths**: 95%+

```bash
# Check coverage
make test-coverage

# View detailed HTML report
open htmlcov/index.html

# Fail the build if coverage is too low
pytest --cov=app --cov-fail-under=80
```

## Debugging Failed Tests

### Get Detailed Information
```bash
# Verbose output with local variables
pytest tests/path/to/test.py -vv --tb=long --showlocals

# Stop on first failure
pytest -x

# Re-run only failed tests
pytest --lf
```

### Common Issues and Solutions

**Import Errors**:
```bash
# Ensure you're in the project root and have installed in dev mode
pip install -e .
PYTHONPATH=. pytest
```

**Database Issues**:
```bash
# Tests use in-memory SQLite by default
# Check if fixtures are properly imported
pytest --fixtures tests/
```

**Fixture Not Found**:
```bash
# Ensure fixture modules are listed in conftest.py pytest_plugins
# Check fixture dependencies (test_shop needs test_user)
```

## Performance and Optimization

### Speed Up Test Runs
```bash
# Run in parallel (install pytest-xdist first)
pytest -n auto

# Skip slow tests during development
pytest -m "not slow"

# Run only changed tests (install pytest-testmon)
pytest --testmon
```

### Find Slow Tests
```bash
# Show 10 slowest tests
pytest --durations=10

# Show all test durations
pytest --durations=0
```

## Continuous Integration

Our tests integrate with CI/CD pipelines through make targets:

```bash
# Commands used in CI
make ci            # Format, lint, test with coverage
make test-fast     # Quick feedback in early CI stages
make test-coverage # Full test run with coverage reporting
```

The CI pipeline:
1. Runs `make test-fast` for quick feedback
2. Runs `make ci` for comprehensive checks
3. Generates coverage reports in XML format
4. Uploads coverage to reporting tools

## Best Practices Summary

### DO:
- Write tests for new code before committing
- Use descriptive test names explaining the scenario
- Keep unit tests fast (< 1 second each)
- Use appropriate fixtures for test data
- Add proper pytest markers to categorize tests
- Test both happy path and error scenarios
- Maintain good test coverage (80%+)

### DON'T:
- Write tests that depend on external services (use mocks)
- Create tests that depend on execution order
- Use hardcoded values that might change
- Write overly complex test setups
- Ignore failing tests
- Skip adding tests for bug fixes

## Getting Help

- **Examples**: Look at existing tests in similar components
- **Fixtures**: Check `tests/fixtures/` for available test data
- **Configuration**: See `pytest.ini` for available markers
- **Make targets**: Run `make help` to see all available commands
- **Team support**: Ask in team channels or create GitHub issues

## Make Commands Reference

```bash
make install-test     # Install test dependencies
make test             # Run all tests
make test-unit        # Run unit tests only
make test-integration # Run integration tests only
make test-fast        # Run all except slow tests
make test-coverage    # Run with coverage report
make ci               # Full CI pipeline (format, lint, test)
```

Use this guide as your daily reference for testing. The structure is designed to give you fast feedback during development while maintaining comprehensive test coverage.

*This documentation is under development.*

---

```yaml
# mkdocs.yml (excerpt)
repo_name: letzshop-import
repo_url: https://github.com/yourusername/letzshop-import
edit_uri: edit/main/docs/

# Navigation structure
nav:
  - Home: index.md
  - Getting Started:
      - Installation: getting-started/installation.md
      - Quick Start: getting-started/quickstart.md
      - Database Setup: getting-started/database-setup.md # NEW
      - Configuration: getting-started/configuration.md
  - API:
      - Overview: api/index.md
  # ...
  - Development:
      - Architecture: development/architecture.md
      - Database Schema: development/database-schema.md
      - Database Migrations: development/database-migrations.md # NEW
      - Services: development/services.md
      - Contributing: development/contributing.md
  - Deployment: