feat: implement background task architecture for code quality scans

- Add status fields to ArchitectureScan model (status, started_at,
  completed_at, error_message, progress_message)
- Create database migration for new status fields
- Create background task function execute_code_quality_scan()
- Update API to return 202 with job IDs and support polling
- Add code quality scans to unified BackgroundTasksService
- Integrate scans into background tasks API and page
- Implement frontend polling with 3-second interval
- Add progress banner showing scan status
- Users can navigate away while scans run in background
- Document the implementation in architecture docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 20:57:08 +01:00
parent 9cf0a568c0
commit 6a903e16c6
10 changed files with 1710 additions and 83 deletions


@@ -0,0 +1,316 @@
# Background Tasks Architecture Rules
# ====================================
# Enforces consistent patterns for all background tasks in the application.
# See docs/architecture/background-tasks.md for full specification.
background_task_rules:
  # ─────────────────────────────────────────────────────────────────
  # Model Rules
  # ─────────────────────────────────────────────────────────────────
  - id: BG-001
    name: Background task models must have status field
    description: |
      All database models for background tasks must include a 'status' field
      with proper indexing for efficient querying.
    severity: error
    patterns:
      # Models that should have status field
      - file: "models/database/marketplace_import_job.py"
        must_contain: "status = Column"
      - file: "models/database/letzshop.py"
        must_contain: "status = Column"
      - file: "models/database/test_run.py"
        must_contain: "status = Column"
      - file: "models/database/architecture_scan.py"
        must_contain: "status = Column"
    suggestion: |
      Add: status = Column(String(30), nullable=False, default="pending", index=True)

  - id: BG-002
    name: Background task models must have timestamp fields
    description: |
      All background task models must include created_at, started_at, and
      completed_at timestamp fields for proper lifecycle tracking.
    severity: error
    patterns:
      - file: "models/database/*_job.py"
        must_contain:
          - "started_at"
          - "completed_at"
      - file: "models/database/test_run.py"
        must_contain:
          - "started_at"
          - "completed_at"
    suggestion: |
      Add timestamp fields:
      created_at = Column(DateTime, default=lambda: datetime.now(UTC))
      started_at = Column(DateTime, nullable=True)
      completed_at = Column(DateTime, nullable=True)

  - id: BG-003
    name: Background task models must have error_message field
    description: |
      All background task models must include an error_message field
      to store failure details.
    severity: warning
    patterns:
      - file: "models/database/marketplace_import_job.py"
        must_contain: "error_message"
      - file: "models/database/letzshop.py"
        must_contain: "error_message"
      - file: "models/database/test_run.py"
        must_contain: "error_message"
    suggestion: |
      Add: error_message = Column(Text, nullable=True)

  - id: BG-004
    name: Background task models must have triggered_by field
    description: |
      All background task models must track who/what triggered the task
      for audit purposes.
    severity: warning
    patterns:
      - file: "models/database/*_job.py"
        must_contain: "triggered_by"
      - file: "models/database/test_run.py"
        must_contain: "triggered_by"
    suggestion: |
      Add: triggered_by = Column(String(100), nullable=True)
      Format: "manual:username", "scheduled", "api:client_id"
  # ─────────────────────────────────────────────────────────────────
  # Status Value Rules
  # ─────────────────────────────────────────────────────────────────
  - id: BG-010
    name: Use standard status values
    description: |
      Background task status must use standard values:
      - pending: Task created, not started
      - running: Task actively executing
      - completed: Task finished successfully
      - failed: Task failed with error
      - completed_with_warnings: Task completed with non-fatal issues

      Legacy values to migrate:
      - "processing" -> "running"
      - "fetching" -> "running"
      - "passed" -> "completed"
      - "error" -> "failed"
    severity: warning
    anti_patterns:
      - pattern: 'status.*=.*["\']processing["\']'
        message: "Use 'running' instead of 'processing'"
      - pattern: 'status.*=.*["\']fetching["\']'
        message: "Use 'running' instead of 'fetching'"
    suggestion: |
      Use the standard status enum:
      class TaskStatus(str, Enum):
          PENDING = "pending"
          RUNNING = "running"
          COMPLETED = "completed"
          FAILED = "failed"
          COMPLETED_WITH_WARNINGS = "completed_with_warnings"
  # ─────────────────────────────────────────────────────────────────
  # API Rules
  # ─────────────────────────────────────────────────────────────────
  - id: BG-020
    name: Task trigger endpoints must return job_id
    description: |
      Endpoints that trigger background tasks must return the job ID
      so the frontend can poll for status.
    severity: error
    patterns:
      - file: "app/api/v1/**/marketplace.py"
        endpoint_pattern: "@router.post.*import"
        must_return: "job_id"
      - file: "app/api/v1/**/tests.py"
        endpoint_pattern: "@router.post.*run"
        must_return: "run_id"
    suggestion: |
      Return a response like:
      {
        "job_id": job.id,
        "status": "pending",
        "message": "Task queued successfully"
      }

  - id: BG-021
    name: Use FastAPI BackgroundTasks for async execution
    description: |
      All long-running tasks must use FastAPI's BackgroundTasks
      for async execution, not synchronous blocking calls.
    severity: error
    patterns:
      - file: "app/api/v1/**/*.py"
        must_contain: "BackgroundTasks"
        when_contains: ["import_", "export_", "run_scan", "execute_"]
    anti_patterns:
      - pattern: "subprocess\\.run.*wait.*True"
        message: "Don't wait synchronously for subprocesses in API handlers"
      - pattern: "time\\.sleep"
        file: "app/api/**/*.py"
        message: "Don't use time.sleep in API handlers"
    suggestion: |
      Use BackgroundTasks:
      async def trigger_task(background_tasks: BackgroundTasks):
          job = create_job_record(db)
          background_tasks.add_task(execute_task, job.id)
          return {"job_id": job.id}

  - id: BG-022
    name: Tasks must be registered in BackgroundTasksService
    description: |
      All background task types must be registered in the
      BackgroundTasksService for unified monitoring.
    severity: warning
    patterns:
      - file: "app/services/background_tasks_service.py"
        must_reference:
          - "MarketplaceImportJob"
          - "LetzshopHistoricalImportJob"
          - "TestRun"
          - "ArchitectureScan"
    suggestion: |
      Add task model to TASK_MODELS in BackgroundTasksService:
      TASK_MODELS = {
          'product_import': MarketplaceImportJob,
          'order_import': LetzshopHistoricalImportJob,
          'test_run': TestRun,
          'code_quality_scan': ArchitectureScan,
      }
  # ─────────────────────────────────────────────────────────────────
  # Task Implementation Rules
  # ─────────────────────────────────────────────────────────────────
  - id: BG-030
    name: Tasks must update status on start
    description: |
      Background task functions must set status to 'running'
      and record started_at timestamp at the beginning.
    severity: error
    patterns:
      - file: "app/tasks/*.py"
        must_contain:
          - 'status = "running"'
          - "started_at"
    suggestion: |
      At task start:
      job.status = "running"
      job.started_at = datetime.now(UTC)
      db.commit()

  - id: BG-031
    name: Tasks must update status on completion
    description: |
      Background task functions must set appropriate final status
      and record completed_at timestamp.
    severity: error
    patterns:
      - file: "app/tasks/*.py"
        must_contain:
          - "completed_at"
        must_contain_one_of:
          - 'status = "completed"'
          - 'status = "failed"'
    suggestion: |
      On completion:
      job.status = "completed"  # or "failed"
      job.completed_at = datetime.now(UTC)
      db.commit()

  - id: BG-032
    name: Tasks must handle exceptions
    description: |
      Background tasks must catch exceptions, set status to 'failed',
      store error message, and optionally notify admins.
    severity: error
    patterns:
      - file: "app/tasks/*.py"
        must_contain:
          - "try:"
          - "except"
          - 'status = "failed"'
          - "error_message"
    suggestion: |
      Use try/except pattern:
      try:
          # Task logic
          job.status = "completed"
      except Exception as e:
          job.status = "failed"
          job.error_message = str(e)
          logger.error(f"Task failed: {e}")
      finally:
          job.completed_at = datetime.now(UTC)
          db.commit()

  - id: BG-033
    name: Tasks must use separate database sessions
    description: |
      Background tasks must create their own database session
      since FastAPI request sessions are closed after response.
    severity: error
    patterns:
      - file: "app/tasks/*.py"
        must_contain: "SessionLocal()"
    anti_patterns:
      - pattern: "def.*task.*db.*Session"
        message: "Don't pass request db session to background tasks"
    suggestion: |
      Create session in task:
      async def my_task(job_id: int):
          db = SessionLocal()
          try:
              # Task logic
          finally:
              db.close()
  # ─────────────────────────────────────────────────────────────────
  # Frontend Rules
  # ─────────────────────────────────────────────────────────────────
  - id: BG-040
    name: Pages with tasks must show status on return
    description: |
      Pages that trigger background tasks must check for active/recent
      tasks on load and display appropriate status banners.
    severity: info
    patterns:
      - file: "static/admin/js/*.js"
        when_contains: "BackgroundTasks"
        must_contain:
          - "init"
          - "activeTask"
    suggestion: |
      In Alpine component init():
      async init() {
          // Check for active tasks for this page
          await this.checkActiveTask();
          if (this.activeTask) {
              this.startPolling();
          }
      }

  - id: BG-041
    name: Use consistent polling interval
    description: |
      Polling for background task status should use 3-5 second intervals
      to balance responsiveness with server load.
    severity: info
    patterns:
      - file: "static/admin/js/*.js"
        when_contains: "setInterval"
        should_match: "setInterval.*[3-5]000"
    anti_patterns:
      - pattern: "setInterval.*1000"
        message: "1 second polling is too frequent"
      - pattern: "setInterval.*10000"
        message: "10 second polling may feel unresponsive"
    suggestion: |
      Use 3-5 second polling:
      this.pollInterval = setInterval(() => this.pollStatus(), 5000);
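The `must_contain` patterns above are plain substring checks, so enforcement can stay small. A minimal sketch of such a checker follows; `check_rule` and `Violation` are hypothetical names for illustration, not the repo's actual enforcement script, and globs plus the other pattern keys (`must_return`, `anti_patterns`, ...) are deliberately left out.

```python
# Minimal sketch of a checker for the `must_contain` rule patterns above.
# Hypothetical names (check_rule, Violation); globs and absent files are skipped.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Violation:
    rule_id: str
    file: str
    missing: str


def check_rule(rule: dict, root: Path) -> list[Violation]:
    """Return one Violation per required snippet missing from a checked file."""
    violations = []
    for pattern in rule.get("patterns", []):
        path = root / pattern["file"]
        if "*" in pattern["file"] or not path.exists():
            continue  # globs and missing files are out of scope for this sketch
        text = path.read_text(encoding="utf-8")
        required = pattern.get("must_contain", [])
        if isinstance(required, str):
            required = [required]  # the YAML allows a scalar or a list
        for snippet in required:
            if snippet not in text:
                violations.append(Violation(rule["id"], pattern["file"], snippet))
    return violations
```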


@@ -0,0 +1,82 @@
"""add_scan_status_fields

Add background task status fields to architecture_scans table
for harmonized background task architecture.

Revision ID: g5b6c7d8e9f0
Revises: f4a5b6c7d8e9
Create Date: 2024-12-21
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "g5b6c7d8e9f0"
down_revision: str | None = "f4a5b6c7d8e9"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    # Add status field with default 'completed' for existing records
    # New records will use 'pending' as default
    op.add_column(
        "architecture_scans",
        sa.Column(
            "status",
            sa.String(length=30),
            nullable=False,
            server_default="completed",  # Existing scans are already completed
        ),
    )
    op.create_index(
        op.f("ix_architecture_scans_status"), "architecture_scans", ["status"]
    )

    # Add started_at - for existing records, use timestamp as started_at
    op.add_column(
        "architecture_scans",
        sa.Column("started_at", sa.DateTime(timezone=True), nullable=True),
    )

    # Add completed_at - for existing records, use timestamp + duration as completed_at
    op.add_column(
        "architecture_scans",
        sa.Column("completed_at", sa.DateTime(timezone=True), nullable=True),
    )

    # Add error_message for failed scans
    op.add_column(
        "architecture_scans",
        sa.Column("error_message", sa.Text(), nullable=True),
    )

    # Add progress_message for showing current step
    op.add_column(
        "architecture_scans",
        sa.Column("progress_message", sa.String(length=255), nullable=True),
    )

    # Update existing records to have proper started_at and completed_at
    # This is done via raw SQL for efficiency
    op.execute(
        """
        UPDATE architecture_scans
        SET started_at = timestamp,
            completed_at = datetime(timestamp, '+' || CAST(duration_seconds AS TEXT) || ' seconds')
        WHERE started_at IS NULL
        """
    )


def downgrade() -> None:
    op.drop_index(op.f("ix_architecture_scans_status"), table_name="architecture_scans")
    op.drop_column("architecture_scans", "progress_message")
    op.drop_column("architecture_scans", "error_message")
    op.drop_column("architecture_scans", "completed_at")
    op.drop_column("architecture_scans", "started_at")
    op.drop_column("architecture_scans", "status")
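The backfill UPDATE above relies on SQLite's `datetime()` modifier syntax (`'+N seconds'`); that the backend is SQLite is an assumption read off the raw SQL itself. A standalone check of that expression:

```python
# Verify the SQLite datetime() modifier expression used in the backfill UPDATE.
# Assumes a SQLite backend, as the migration's raw SQL implies.
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute(
    "SELECT datetime(?, '+' || CAST(? AS TEXT) || ' seconds')",
    ("2024-12-21 10:00:00", 90),  # timestamp, duration_seconds
).fetchone()
print(row[0])  # → 2024-12-21 10:01:30
conn.close()
```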

View File

@@ -22,7 +22,7 @@ class BackgroundTaskResponse(BaseModel):
     """Unified background task response"""

     id: int
-    task_type: str  # 'import' or 'test_run'
+    task_type: str  # 'import', 'test_run', or 'code_quality_scan'
     status: str
     started_at: str | None
     completed_at: str | None
@@ -46,6 +46,7 @@ class BackgroundTasksStatsResponse(BaseModel):
     # By type
     import_jobs: dict
     test_runs: dict
+    code_quality_scans: dict


 def _convert_import_to_response(job) -> BackgroundTaskResponse:
@@ -107,11 +108,47 @@ def _convert_test_run_to_response(run) -> BackgroundTaskResponse:
     )


+def _convert_scan_to_response(scan) -> BackgroundTaskResponse:
+    """Convert ArchitectureScan to BackgroundTaskResponse"""
+    duration = scan.duration_seconds
+    if scan.status in ["pending", "running"] and scan.started_at:
+        duration = (datetime.now(UTC) - scan.started_at).total_seconds()
+
+    # Map validator type to human-readable name
+    validator_names = {
+        "architecture": "Architecture",
+        "security": "Security",
+        "performance": "Performance",
+    }
+    validator_name = validator_names.get(scan.validator_type, scan.validator_type)
+
+    return BackgroundTaskResponse(
+        id=scan.id,
+        task_type="code_quality_scan",
+        status=scan.status,
+        started_at=scan.started_at.isoformat() if scan.started_at else None,
+        completed_at=scan.completed_at.isoformat() if scan.completed_at else None,
+        duration_seconds=duration,
+        description=f"{validator_name} code quality scan",
+        triggered_by=scan.triggered_by,
+        error_message=scan.error_message,
+        details={
+            "validator_type": scan.validator_type,
+            "total_files": scan.total_files,
+            "total_violations": scan.total_violations,
+            "errors": scan.errors,
+            "warnings": scan.warnings,
+            "git_commit_hash": scan.git_commit_hash,
+            "progress_message": scan.progress_message,
+        },
+    )
+
+
 @router.get("/tasks", response_model=list[BackgroundTaskResponse])
 async def list_background_tasks(
     status: str | None = Query(None, description="Filter by status"),
     task_type: str | None = Query(
-        None, description="Filter by type (import, test_run)"
+        None, description="Filter by type (import, test_run, code_quality_scan)"
     ),
     limit: int = Query(50, ge=1, le=200),
     db: Session = Depends(get_db),
@@ -120,7 +157,7 @@ async def list_background_tasks(
     """
     List all background tasks across the system

-    Returns a unified view of import jobs and test runs.
+    Returns a unified view of import jobs, test runs, and code quality scans.
     """
     tasks = []
@@ -138,6 +175,13 @@ async def list_background_tasks(
         )
         tasks.extend([_convert_test_run_to_response(run) for run in test_runs])

+    # Get code quality scans
+    if task_type is None or task_type == "code_quality_scan":
+        scans = background_tasks_service.get_code_quality_scans(
+            db, status=status, limit=limit
+        )
+        tasks.extend([_convert_scan_to_response(scan) for scan in scans])
+
     # Sort by start time (most recent first)
     tasks.sort(
         key=lambda t: t.started_at or "1970-01-01T00:00:00",
@@ -157,22 +201,31 @@ async def get_background_tasks_stats(
     """
     import_stats = background_tasks_service.get_import_stats(db)
     test_stats = background_tasks_service.get_test_run_stats(db)
+    scan_stats = background_tasks_service.get_scan_stats(db)

     # Combined stats
-    total_running = import_stats["running"] + test_stats["running"]
-    total_completed = import_stats["completed"] + test_stats["completed"]
-    total_failed = import_stats["failed"] + test_stats["failed"]
-    total_tasks = import_stats["total"] + test_stats["total"]
+    total_running = (
+        import_stats["running"] + test_stats["running"] + scan_stats["running"]
+    )
+    total_completed = (
+        import_stats["completed"] + test_stats["completed"] + scan_stats["completed"]
+    )
+    total_failed = (
+        import_stats["failed"] + test_stats["failed"] + scan_stats["failed"]
+    )
+    total_tasks = import_stats["total"] + test_stats["total"] + scan_stats["total"]
+    tasks_today = import_stats["today"] + test_stats["today"] + scan_stats["today"]

     return BackgroundTasksStatsResponse(
         total_tasks=total_tasks,
         running=total_running,
         completed=total_completed,
         failed=total_failed,
-        tasks_today=import_stats["today"] + test_stats["today"],
+        tasks_today=tasks_today,
         avg_duration_seconds=test_stats.get("avg_duration"),
         import_jobs=import_stats,
         test_runs=test_stats,
+        code_quality_scans=scan_stats,
     )
@@ -194,4 +247,8 @@ async def list_running_tasks(
     running_tests = background_tasks_service.get_running_test_runs(db)
     tasks.extend([_convert_test_run_to_response(run) for run in running_tests])

+    # Running code quality scans
+    running_scans = background_tasks_service.get_running_scans(db)
+    tasks.extend([_convert_scan_to_response(scan) for scan in running_scans])
+
     return tasks
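The stats endpoint above sums the same keys across the per-type stat dicts. A generic combiner, a hypothetical helper not in the codebase, sketches how that stays one line to extend when a fourth task type arrives:

```python
# Sketch: fold per-type stat dicts (imports, test runs, scans, ...) into
# combined totals, mirroring what get_background_tasks_stats does inline.
def combine_stats(*stats: dict) -> dict:
    """Sum the standard counters across any number of per-type stat dicts."""
    keys = ("total", "running", "completed", "failed", "today")
    return {k: sum(s.get(k, 0) for s in stats) for k in keys}


# Illustrative inputs in the shape the service's get_*_stats methods return.
imports_ = {"total": 5, "running": 1, "completed": 3, "failed": 1, "today": 2}
tests_ = {"total": 2, "running": 0, "completed": 2, "failed": 0, "today": 1}
scans_ = {"total": 4, "running": 2, "completed": 1, "failed": 1, "today": 4}
print(combine_stats(imports_, tests_, scans_))
# → {'total': 11, 'running': 3, 'completed': 6, 'failed': 2, 'today': 7}
```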


@@ -1,24 +1,42 @@
"""
Code Quality API Endpoints
RESTful API for architecture validation and violation management
RESTful API for code quality validation and violation management
Supports multiple validator types: architecture, security, performance
"""
from datetime import datetime
from datetime import UTC, datetime
from enum import Enum
from fastapi import APIRouter, Depends, Query
from fastapi import APIRouter, BackgroundTasks, Depends, Query
from pydantic import BaseModel, Field
from sqlalchemy.orm import Session
from app.api.deps import get_current_admin_api
from app.core.database import get_db
from app.exceptions import ViolationNotFoundException
from app.services.code_quality_service import code_quality_service
from app.exceptions import ScanNotFoundException, ViolationNotFoundException
from app.services.code_quality_service import (
VALID_VALIDATOR_TYPES,
code_quality_service,
)
from app.tasks.code_quality_tasks import execute_code_quality_scan
from models.database.architecture_scan import ArchitectureScan
from models.database.user import User
from models.schema.stats import CodeQualityDashboardStatsResponse
router = APIRouter()
# Enums and Constants
class ValidatorType(str, Enum):
"""Supported validator types"""
ARCHITECTURE = "architecture"
SECURITY = "security"
PERFORMANCE = "performance"
# Pydantic Models for API
@@ -27,23 +45,65 @@ class ScanResponse(BaseModel):
id: int
timestamp: str
validator_type: str
status: str
started_at: str | None
completed_at: str | None
progress_message: str | None
total_files: int
total_violations: int
errors: int
warnings: int
duration_seconds: float
triggered_by: str
triggered_by: str | None
git_commit_hash: str | None
error_message: str | None = None
class Config:
from_attributes = True
class ScanRequest(BaseModel):
"""Request model for triggering scans"""
validator_types: list[ValidatorType] = Field(
default=[ValidatorType.ARCHITECTURE, ValidatorType.SECURITY, ValidatorType.PERFORMANCE],
description="Validator types to run",
)
class ScanJobResponse(BaseModel):
"""Response model for a queued scan job"""
id: int
validator_type: str
status: str
message: str
class MultiScanJobResponse(BaseModel):
"""Response model for multiple queued scans (background task pattern)"""
scans: list[ScanJobResponse]
message: str
status_url: str
class MultiScanResponse(BaseModel):
"""Response model for completed scans (legacy sync pattern)"""
scans: list[ScanResponse]
total_violations: int
total_errors: int
total_warnings: int
class ViolationResponse(BaseModel):
"""Response model for a violation"""
id: int
scan_id: int
validator_type: str
rule_id: str
rule_name: str
severity: str
@@ -111,37 +171,124 @@ class AddCommentRequest(BaseModel):
# API Endpoints
@router.post("/scan", response_model=ScanResponse)
async def trigger_scan(
db: Session = Depends(get_db), current_user: User = Depends(get_current_admin_api)
):
"""
Trigger a new architecture scan
Requires authentication. Runs the validator script and stores results.
Domain exceptions (ScanTimeoutException, ScanParseException) bubble up to global handler.
"""
scan = code_quality_service.run_scan(
db, triggered_by=f"manual:{current_user.username}"
)
db.commit()
def _scan_to_response(scan: ArchitectureScan) -> ScanResponse:
"""Convert ArchitectureScan to ScanResponse."""
return ScanResponse(
id=scan.id,
timestamp=scan.timestamp.isoformat(),
total_files=scan.total_files,
total_violations=scan.total_violations,
errors=scan.errors,
warnings=scan.warnings,
duration_seconds=scan.duration_seconds,
timestamp=scan.timestamp.isoformat() if scan.timestamp else None,
validator_type=scan.validator_type,
status=scan.status,
started_at=scan.started_at.isoformat() if scan.started_at else None,
completed_at=scan.completed_at.isoformat() if scan.completed_at else None,
progress_message=scan.progress_message,
total_files=scan.total_files or 0,
total_violations=scan.total_violations or 0,
errors=scan.errors or 0,
warnings=scan.warnings or 0,
duration_seconds=scan.duration_seconds or 0.0,
triggered_by=scan.triggered_by,
git_commit_hash=scan.git_commit_hash,
error_message=scan.error_message,
)
@router.post("/scan", response_model=MultiScanJobResponse, status_code=202)
async def trigger_scan(
request: ScanRequest = None,
background_tasks: BackgroundTasks = None,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_admin_api),
):
"""
Trigger code quality scan(s) as background tasks.
By default runs all validators. Specify validator_types to run specific validators.
Returns immediately with job IDs. Poll /scan/{scan_id}/status for progress.
Scans run asynchronously - users can browse other pages while scans execute.
"""
if request is None:
request = ScanRequest()
scan_jobs = []
triggered_by = f"manual:{current_user.username}"
for vtype in request.validator_types:
# Create scan record with pending status
scan = ArchitectureScan(
timestamp=datetime.now(UTC),
validator_type=vtype.value,
status="pending",
triggered_by=triggered_by,
)
db.add(scan)
db.flush() # Get scan.id
# Queue background task
background_tasks.add_task(execute_code_quality_scan, scan.id)
scan_jobs.append(
ScanJobResponse(
id=scan.id,
validator_type=vtype.value,
status="pending",
message=f"{vtype.value.capitalize()} scan queued",
)
)
db.commit()
validator_names = ", ".join(vtype.value for vtype in request.validator_types)
return MultiScanJobResponse(
scans=scan_jobs,
message=f"Scans queued for: {validator_names}",
status_url="/admin/code-quality/scans/running",
)
@router.get("/scans/{scan_id}/status", response_model=ScanResponse)
async def get_scan_status(
scan_id: int,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_admin_api),
):
"""
Get status of a specific scan.
Use this endpoint to poll for scan completion.
"""
scan = db.query(ArchitectureScan).filter(ArchitectureScan.id == scan_id).first()
if not scan:
raise ScanNotFoundException(scan_id)
return _scan_to_response(scan)
@router.get("/scans/running", response_model=list[ScanResponse])
async def get_running_scans(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_admin_api),
):
"""
Get all currently running scans.
Returns scans with status 'pending' or 'running'.
"""
scans = (
db.query(ArchitectureScan)
.filter(ArchitectureScan.status.in_(["pending", "running"]))
.order_by(ArchitectureScan.timestamp.desc())
.all()
)
return [_scan_to_response(scan) for scan in scans]
@router.get("/scans", response_model=list[ScanResponse])
async def list_scans(
limit: int = Query(30, ge=1, le=100, description="Number of scans to return"),
validator_type: ValidatorType | None = Query(
None, description="Filter by validator type"
),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_admin_api),
):
@@ -149,23 +296,13 @@ async def list_scans(
Get scan history
Returns recent scans for trend analysis.
Optionally filter by validator type.
"""
scans = code_quality_service.get_scan_history(db, limit=limit)
return [
ScanResponse(
id=scan.id,
timestamp=scan.timestamp.isoformat(),
total_files=scan.total_files,
total_violations=scan.total_violations,
errors=scan.errors,
warnings=scan.warnings,
duration_seconds=scan.duration_seconds,
triggered_by=scan.triggered_by,
git_commit_hash=scan.git_commit_hash,
scans = code_quality_service.get_scan_history(
db, limit=limit, validator_type=validator_type.value if validator_type else None
)
for scan in scans
]
return [_scan_to_response(scan) for scan in scans]
@router.get("/violations", response_model=ViolationListResponse)
@@ -173,8 +310,11 @@ async def list_violations(
scan_id: int | None = Query(
None, description="Filter by scan ID (defaults to latest)"
),
validator_type: ValidatorType | None = Query(
None, description="Filter by validator type"
),
severity: str | None = Query(
None, description="Filter by severity (error, warning)"
None, description="Filter by severity (error, warning, info)"
),
status: str | None = Query(
None, description="Filter by status (open, assigned, resolved, ignored)"
@@ -191,13 +331,15 @@ async def list_violations(
"""
Get violations with filtering and pagination
Returns violations from latest scan by default.
Returns violations from latest scan(s) by default.
Filter by validator_type to get violations from a specific validator.
"""
offset = (page - 1) * page_size
violations, total = code_quality_service.get_violations(
db,
scan_id=scan_id,
validator_type=validator_type.value if validator_type else None,
severity=severity,
status=status,
rule_id=rule_id,
@@ -213,6 +355,7 @@ async def list_violations(
ViolationResponse(
id=v.id,
scan_id=v.scan_id,
validator_type=v.validator_type,
rule_id=v.rule_id,
rule_name=v.rule_name,
severity=v.severity,
@@ -280,6 +423,7 @@ async def get_violation(
return ViolationDetailResponse(
id=violation.id,
scan_id=violation.scan_id,
validator_type=violation.validator_type,
rule_id=violation.rule_id,
rule_name=violation.rule_name,
severity=violation.severity,
@@ -429,7 +573,11 @@ async def add_comment(
@router.get("/stats", response_model=CodeQualityDashboardStatsResponse)
async def get_dashboard_stats(
db: Session = Depends(get_db), current_user: User = Depends(get_current_admin_api)
validator_type: ValidatorType | None = Query(
None, description="Filter by validator type (returns combined stats if not specified)"
),
db: Session = Depends(get_db),
current_user: User = Depends(get_current_admin_api),
):
"""
Get dashboard statistics
@@ -440,7 +588,32 @@ async def get_dashboard_stats(
- Trend data (last 7 scans)
- Top violating files
- Violations by rule and module
- Per-validator breakdown
When validator_type is specified, returns stats for that type only.
When not specified, returns combined stats across all validators.
"""
stats = code_quality_service.get_dashboard_stats(db)
stats = code_quality_service.get_dashboard_stats(
db, validator_type=validator_type.value if validator_type else None
)
return CodeQualityDashboardStatsResponse(**stats)
@router.get("/validator-types")
async def get_validator_types(
current_user: User = Depends(get_current_admin_api),
):
"""
Get list of available validator types
Returns the supported validator types for filtering.
"""
return {
"validator_types": VALID_VALIDATOR_TYPES,
"descriptions": {
"architecture": "Architectural patterns and code organization rules",
"security": "Security vulnerabilities and best practices",
"performance": "Performance issues and optimizations",
},
}
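The 202-then-poll contract this API exposes can be exercised client-side with a small loop. The sketch below injects the status fetch so it runs without a server; `wait_for_scan` is a hypothetical helper, and a real client would instead GET `/scans/{scan_id}/status` on each iteration.

```python
# Sketch of a client-side poll loop for the 202 + status-endpoint pattern.
# fetch_status is injected (hypothetical) so the loop is testable offline.
import time

TERMINAL_STATUSES = {"completed", "completed_with_warnings", "failed"}


def wait_for_scan(scan_id, fetch_status, interval=3.0, timeout=600.0, sleep=time.sleep):
    """Poll until the scan reaches a terminal status or the timeout expires."""
    waited = 0.0
    while True:
        scan = fetch_status(scan_id)  # real client: GET /scans/{scan_id}/status
        if scan["status"] in TERMINAL_STATUSES:
            return scan
        if waited >= timeout:
            raise TimeoutError(f"scan {scan_id} still {scan['status']} after {timeout}s")
        sleep(interval)
        waited += interval
```

The 3-second `interval` default matches the frontend polling interval the commit message describes.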


@@ -9,6 +9,7 @@ from datetime import UTC, datetime
 from sqlalchemy import case, desc, func
 from sqlalchemy.orm import Session

+from models.database.architecture_scan import ArchitectureScan
 from models.database.marketplace_import_job import MarketplaceImportJob
 from models.database.test_run import TestRun
@@ -124,6 +125,69 @@ class BackgroundTasksService:
             "avg_duration": round(stats.avg_duration or 0, 1),
         }

+    def get_code_quality_scans(
+        self, db: Session, status: str | None = None, limit: int = 50
+    ) -> list[ArchitectureScan]:
+        """Get code quality scans with optional status filter"""
+        query = db.query(ArchitectureScan)
+        if status:
+            query = query.filter(ArchitectureScan.status == status)
+        return query.order_by(desc(ArchitectureScan.timestamp)).limit(limit).all()
+
+    def get_running_scans(self, db: Session) -> list[ArchitectureScan]:
+        """Get currently running code quality scans"""
+        return (
+            db.query(ArchitectureScan)
+            .filter(ArchitectureScan.status.in_(["pending", "running"]))
+            .all()
+        )
+
+    def get_scan_stats(self, db: Session) -> dict:
+        """Get code quality scan statistics"""
+        today_start = datetime.now(UTC).replace(
+            hour=0, minute=0, second=0, microsecond=0
+        )
+
+        stats = db.query(
+            func.count(ArchitectureScan.id).label("total"),
+            func.sum(
+                case(
+                    (ArchitectureScan.status.in_(["pending", "running"]), 1), else_=0
+                )
+            ).label("running"),
+            func.sum(
+                case(
+                    (
+                        ArchitectureScan.status.in_(
+                            ["completed", "completed_with_warnings"]
+                        ),
+                        1,
+                    ),
+                    else_=0,
+                )
+            ).label("completed"),
+            func.sum(
+                case((ArchitectureScan.status == "failed", 1), else_=0)
+            ).label("failed"),
+            func.avg(ArchitectureScan.duration_seconds).label("avg_duration"),
+        ).first()
+
+        today_count = (
+            db.query(func.count(ArchitectureScan.id))
+            .filter(ArchitectureScan.timestamp >= today_start)
+            .scalar()
+            or 0
+        )
+
+        return {
+            "total": stats.total or 0,
+            "running": stats.running or 0,
+            "completed": stats.completed or 0,
+            "failed": stats.failed or 0,
+            "today": today_count,
+            "avg_duration": round(stats.avg_duration or 0, 1),
+        }


 # Singleton instance
 background_tasks_service = BackgroundTasksService()
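The conditional aggregates in `get_scan_stats` bucket raw status values into running/completed/failed counters. The same mapping in plain Python, as a reference sketch only (not code from the service):

```python
# Sketch: the status-to-bucket mapping used by get_scan_stats, in plain Python.
from collections import Counter


def bucket_statuses(statuses: list[str]) -> dict:
    """Count statuses into the running/completed/failed buckets plus a total."""
    counts = Counter()
    for s in statuses:
        if s in ("pending", "running"):
            counts["running"] += 1  # queued and active both count as running
        elif s in ("completed", "completed_with_warnings"):
            counts["completed"] += 1
        elif s == "failed":
            counts["failed"] += 1
    counts["total"] = len(statuses)
    return dict(counts)


print(bucket_statuses(["pending", "running", "completed", "failed", "completed_with_warnings"]))
# → {'running': 2, 'completed': 2, 'failed': 1, 'total': 5}
```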


@@ -0,0 +1,217 @@
# app/tasks/code_quality_tasks.py
"""Background tasks for code quality scans."""
import json
import logging
import subprocess
from datetime import UTC, datetime
from app.core.database import SessionLocal
from app.services.admin_notification_service import admin_notification_service
from models.database.architecture_scan import ArchitectureScan, ArchitectureViolation
logger = logging.getLogger(__name__)
# Validator type constants
VALIDATOR_ARCHITECTURE = "architecture"
VALIDATOR_SECURITY = "security"
VALIDATOR_PERFORMANCE = "performance"
VALID_VALIDATOR_TYPES = [VALIDATOR_ARCHITECTURE, VALIDATOR_SECURITY, VALIDATOR_PERFORMANCE]
# Map validator types to their scripts
VALIDATOR_SCRIPTS = {
VALIDATOR_ARCHITECTURE: "scripts/validate_architecture.py",
VALIDATOR_SECURITY: "scripts/validate_security.py",
VALIDATOR_PERFORMANCE: "scripts/validate_performance.py",
}
# Human-readable names
VALIDATOR_NAMES = {
VALIDATOR_ARCHITECTURE: "Architecture",
VALIDATOR_SECURITY: "Security",
VALIDATOR_PERFORMANCE: "Performance",
}
def _get_git_commit_hash() -> str | None:
"""Get current git commit hash"""
try:
result = subprocess.run(
["git", "rev-parse", "HEAD"],
capture_output=True,
text=True,
timeout=5,
)
if result.returncode == 0:
return result.stdout.strip()[:40]
except Exception:
pass
return None
async def execute_code_quality_scan(scan_id: int):
    """
    Background task to execute a code quality scan.

    This task:
    1. Gets the scan record from the DB
    2. Updates status to 'running'
    3. Runs the validator script
    4. Parses JSON output and creates violation records
    5. Updates the scan with results and status 'completed' or 'failed'

    Args:
        scan_id: ID of the ArchitectureScan record
    """
    db = SessionLocal()
    scan = None
    try:
        # Get the scan record
        scan = db.query(ArchitectureScan).filter(ArchitectureScan.id == scan_id).first()
        if not scan:
            logger.error(f"Code quality scan {scan_id} not found")
            return

        validator_type = scan.validator_type
        if validator_type not in VALID_VALIDATOR_TYPES:
            raise ValueError(f"Invalid validator type: {validator_type}")

        script_path = VALIDATOR_SCRIPTS[validator_type]
        validator_name = VALIDATOR_NAMES[validator_type]

        # Update status to running
        scan.status = "running"
        scan.started_at = datetime.now(UTC)
        scan.progress_message = f"Running {validator_name} validator..."
        scan.git_commit_hash = _get_git_commit_hash()
        db.commit()

        logger.info(f"Starting {validator_name} scan (scan_id={scan_id})")

        # Run validator with JSON output
        start_time = datetime.now(UTC)
        try:
            result = subprocess.run(
                ["python", script_path, "--json"],
                capture_output=True,
                text=True,
                timeout=600,  # 10 minute timeout
            )
        except subprocess.TimeoutExpired:
            logger.error(f"{validator_name} scan {scan_id} timed out after 10 minutes")
            scan.status = "failed"
            scan.error_message = "Scan timed out after 10 minutes"
            scan.completed_at = datetime.now(UTC)
            db.commit()
            return

        duration = (datetime.now(UTC) - start_time).total_seconds()

        # Update progress
        scan.progress_message = "Parsing results..."
        db.commit()

        # Parse JSON output (get only the JSON part, skip progress messages)
        try:
            lines = result.stdout.strip().split("\n")
            json_start = -1
            for i, line in enumerate(lines):
                if line.strip().startswith("{"):
                    json_start = i
                    break
            if json_start == -1:
                raise ValueError("No JSON output found in validator output")
            json_output = "\n".join(lines[json_start:])
            data = json.loads(json_output)
        except (json.JSONDecodeError, ValueError) as e:
            logger.error(f"Failed to parse {validator_name} validator output: {e}")
            logger.error(f"Stdout: {result.stdout[:1000]}")
            logger.error(f"Stderr: {result.stderr[:1000]}")
            scan.status = "failed"
            scan.error_message = f"Failed to parse validator output: {e}"
            scan.completed_at = datetime.now(UTC)
            scan.duration_seconds = duration
            db.commit()
            return

        # Update progress
        scan.progress_message = "Storing violations..."
        db.commit()

        # Create violation records
        violations_data = data.get("violations", [])
        logger.info(f"Creating {len(violations_data)} {validator_name} violation records")
        for v in violations_data:
            violation = ArchitectureViolation(
                scan_id=scan.id,
                validator_type=validator_type,
                rule_id=v.get("rule_id", "UNKNOWN"),
                rule_name=v.get("rule_name", "Unknown Rule"),
                severity=v.get("severity", "warning"),
                file_path=v.get("file_path", ""),
                line_number=v.get("line_number", 0),
                message=v.get("message", ""),
                context=v.get("context", ""),
                suggestion=v.get("suggestion", ""),
                status="open",
            )
            db.add(violation)

        # Update scan with results
        scan.total_files = data.get("files_checked", 0)
        scan.total_violations = data.get("total_violations", len(violations_data))
        scan.errors = data.get("errors", 0)
        scan.warnings = data.get("warnings", 0)
        scan.duration_seconds = duration
        scan.completed_at = datetime.now(UTC)
        scan.progress_message = None

        # Set final status: validator findings are scan results, not task failures
        if scan.errors > 0:
            scan.status = "completed_with_warnings"
        else:
            scan.status = "completed"
        db.commit()

        logger.info(
            f"{validator_name} scan {scan_id} completed: "
            f"files={scan.total_files}, violations={scan.total_violations}, "
            f"errors={scan.errors}, warnings={scan.warnings}, "
            f"duration={duration:.1f}s"
        )
    except Exception as e:
        logger.error(f"Code quality scan {scan_id} failed: {e}", exc_info=True)
        if scan is not None:
            try:
                scan.status = "failed"
                scan.error_message = str(e)[:500]  # Truncate long errors
                scan.completed_at = datetime.now(UTC)
                scan.progress_message = None
                # Create admin notification for scan failure
                admin_notification_service.create_notification(
                    db=db,
                    title="Code Quality Scan Failed",
                    message=f"{VALIDATOR_NAMES.get(scan.validator_type, 'Unknown')} scan failed: {str(e)[:200]}",
                    notification_type="error",
                    category="code_quality",
                    action_url="/admin/code-quality",
                )
                db.commit()
            except Exception as commit_error:
                logger.error(f"Failed to update scan status: {commit_error}")
                db.rollback()
    finally:
        try:
            db.close()
        except Exception as close_error:
            logger.error(f"Error closing database session: {close_error}")

View File

@@ -1,7 +1,7 @@
{# app/templates/admin/code-quality-dashboard.html #}
{% extends "admin/base.html" %}
{% from 'shared/macros/alerts.html' import loading_state, error_state, alert_dynamic %}
{% from 'shared/macros/headers.html' import page_header_flex, refresh_button, action_button %}
{% from 'shared/macros/headers.html' import page_header_flex, refresh_button %}
{% block title %}Code Quality Dashboard{% endblock %}
@@ -12,9 +12,46 @@
{% endblock %}
{% block content %}
{% call page_header_flex(title='Code Quality Dashboard', subtitle='Architecture validation and technical debt tracking') %}
{% call page_header_flex(title='Code Quality Dashboard', subtitle='Unified code quality tracking: architecture, security, and performance') %}
{{ refresh_button(variant='secondary') }}
{{ action_button('Run Scan', 'Scanning...', 'scanning', 'runScan()', icon='search') }}
<!-- Scan Dropdown -->
<div x-data="{ scanDropdownOpen: false }" class="relative">
<button @click="scanDropdownOpen = !scanDropdownOpen"
:disabled="scanning"
class="flex items-center px-4 py-2 text-sm font-medium leading-5 text-white transition-colors duration-150 bg-purple-600 border border-transparent rounded-lg hover:bg-purple-700 focus:outline-none focus:shadow-outline-purple disabled:opacity-50">
<template x-if="!scanning">
<span class="flex items-center">
<span x-html="$icon('search', 'w-4 h-4 mr-2')"></span>
Run Scan
<span x-html="$icon('chevron-down', 'w-4 h-4 ml-1')"></span>
</span>
</template>
<template x-if="scanning">
<span>Scanning...</span>
</template>
</button>
<div x-show="scanDropdownOpen"
@click.away="scanDropdownOpen = false"
x-transition
class="absolute right-0 mt-2 w-48 bg-white dark:bg-gray-800 rounded-lg shadow-lg z-10 border border-gray-200 dark:border-gray-700">
<button @click="runScan('all'); scanDropdownOpen = false"
class="block w-full px-4 py-2 text-sm text-left text-gray-700 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-gray-700 rounded-t-lg">
Run All Validators
</button>
<button @click="runScan('architecture'); scanDropdownOpen = false"
class="block w-full px-4 py-2 text-sm text-left text-gray-700 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-gray-700">
Architecture Only
</button>
<button @click="runScan('security'); scanDropdownOpen = false"
class="block w-full px-4 py-2 text-sm text-left text-gray-700 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-gray-700">
Security Only
</button>
<button @click="runScan('performance'); scanDropdownOpen = false"
class="block w-full px-4 py-2 text-sm text-left text-gray-700 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-gray-700 rounded-b-lg">
Performance Only
</button>
</div>
</div>
{% endcall %}
{{ loading_state('Loading dashboard...') }}
@@ -23,8 +60,79 @@
{{ alert_dynamic(type='success', message_var='successMessage', show_condition='successMessage') }}
<!-- Scan Progress Alert -->
<div x-show="scanning && scanProgress"
x-transition
class="flex items-center p-4 mb-4 text-sm text-blue-800 rounded-lg bg-blue-50 dark:bg-gray-800 dark:text-blue-400"
role="alert">
<svg class="animate-spin -ml-1 mr-3 h-5 w-5 text-blue-500" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
<span x-text="scanProgress">Running scan...</span>
<span class="ml-2 text-xs text-gray-500 dark:text-gray-400">(You can navigate away - scan runs in background)</span>
</div>
<!-- Dashboard Content -->
<div x-show="!loading && !error">
<!-- Validator Type Tabs -->
<div class="mb-6">
<div class="flex flex-wrap space-x-1 bg-gray-100 dark:bg-gray-700 rounded-lg p-1 inline-flex">
<button @click="selectValidator('all')"
:class="selectedValidator === 'all' ? 'bg-white dark:bg-gray-800 text-purple-600 dark:text-purple-400 shadow-sm' : 'text-gray-600 dark:text-gray-400 hover:text-gray-800 dark:hover:text-gray-200'"
class="px-4 py-2 rounded-md text-sm font-medium transition-colors duration-150">
All
</button>
<button @click="selectValidator('architecture')"
:class="selectedValidator === 'architecture' ? 'bg-white dark:bg-gray-800 text-purple-600 dark:text-purple-400 shadow-sm' : 'text-gray-600 dark:text-gray-400 hover:text-gray-800 dark:hover:text-gray-200'"
class="px-4 py-2 rounded-md text-sm font-medium transition-colors duration-150">
Architecture
</button>
<button @click="selectValidator('security')"
:class="selectedValidator === 'security' ? 'bg-white dark:bg-gray-800 text-red-600 dark:text-red-400 shadow-sm' : 'text-gray-600 dark:text-gray-400 hover:text-gray-800 dark:hover:text-gray-200'"
class="px-4 py-2 rounded-md text-sm font-medium transition-colors duration-150">
Security
</button>
<button @click="selectValidator('performance')"
:class="selectedValidator === 'performance' ? 'bg-white dark:bg-gray-800 text-yellow-600 dark:text-yellow-400 shadow-sm' : 'text-gray-600 dark:text-gray-400 hover:text-gray-800 dark:hover:text-gray-200'"
class="px-4 py-2 rounded-md text-sm font-medium transition-colors duration-150">
Performance
</button>
</div>
</div>
<!-- Per-Validator Summary (shown when "All" is selected) -->
<div x-show="selectedValidator === 'all' && stats.by_validator && Object.keys(stats.by_validator).length > 0" class="grid gap-4 mb-6 md:grid-cols-3">
<template x-for="vtype in ['architecture', 'security', 'performance']" :key="vtype">
<div class="p-4 bg-white rounded-lg shadow-xs dark:bg-gray-800 cursor-pointer hover:ring-2 hover:ring-purple-500"
@click="selectValidator(vtype)">
<div class="flex items-center justify-between">
<div>
<p class="text-sm font-medium text-gray-600 dark:text-gray-400 capitalize" x-text="vtype"></p>
<p class="text-xl font-semibold text-gray-700 dark:text-gray-200"
x-text="stats.by_validator[vtype]?.total_violations || 0"></p>
</div>
<div class="p-2 rounded-full"
:class="{
'bg-purple-100 text-purple-600 dark:bg-purple-900 dark:text-purple-400': vtype === 'architecture',
'bg-red-100 text-red-600 dark:bg-red-900 dark:text-red-400': vtype === 'security',
'bg-yellow-100 text-yellow-600 dark:bg-yellow-900 dark:text-yellow-400': vtype === 'performance'
}">
<span x-html="vtype === 'architecture' ? $icon('cube', 'w-5 h-5') : (vtype === 'security' ? $icon('shield-check', 'w-5 h-5') : $icon('lightning-bolt', 'w-5 h-5'))"></span>
</div>
</div>
<div class="mt-2 flex space-x-3 text-xs">
<span class="text-red-600 dark:text-red-400">
<span x-text="stats.by_validator[vtype]?.errors || 0"></span> errors
</span>
<span class="text-yellow-600 dark:text-yellow-400">
<span x-text="stats.by_validator[vtype]?.warnings || 0"></span> warnings
</span>
</div>
</div>
</template>
</div>
<!-- Stats Cards -->
<div class="grid gap-6 mb-8 md:grid-cols-2 xl:grid-cols-4">
<!-- Card: Total Violations -->
@@ -192,7 +300,15 @@
<template x-if="stats.by_rule && Object.keys(stats.by_rule).length > 0">
<template x-for="[rule_id, count] in Object.entries(stats.by_rule)" :key="rule_id">
<div class="flex justify-between items-center text-sm">
<span class="text-gray-700 dark:text-gray-300" x-text="rule_id"></span>
<span class="text-gray-700 dark:text-gray-300 flex items-center">
<span class="inline-block w-2 h-2 rounded-full mr-2"
:class="{
'bg-purple-500': rule_id.startsWith('API') || rule_id.startsWith('SVC') || rule_id.startsWith('FE'),
'bg-red-500': rule_id.startsWith('SEC'),
'bg-yellow-500': rule_id.startsWith('PERF')
}"></span>
<span x-text="rule_id"></span>
</span>
<span class="font-semibold text-gray-900 dark:text-gray-100" x-text="count"></span>
</div>
</template>
@@ -231,17 +347,17 @@
Quick Actions
</h4>
<div class="flex flex-wrap gap-3">
<a href="/admin/code-quality/violations"
<a :href="'/admin/code-quality/violations' + (selectedValidator !== 'all' ? '?validator_type=' + selectedValidator : '')"
class="flex items-center px-4 py-2 text-sm font-medium leading-5 text-white transition-colors duration-150 bg-purple-600 border border-transparent rounded-lg hover:bg-purple-700 focus:outline-none focus:shadow-outline-purple">
<span x-html="$icon('clipboard-list', 'w-4 h-4 mr-2')"></span>
View All Violations
</a>
<a href="/admin/code-quality/violations?status=open"
<a :href="'/admin/code-quality/violations?status=open' + (selectedValidator !== 'all' ? '&validator_type=' + selectedValidator : '')"
class="flex items-center px-4 py-2 text-sm font-medium leading-5 text-gray-700 dark:text-gray-300 transition-colors duration-150 bg-white dark:bg-gray-700 border border-gray-300 dark:border-gray-600 rounded-lg hover:bg-gray-50 dark:hover:bg-gray-600 focus:outline-none focus:shadow-outline-gray">
<span x-html="$icon('folder-open', 'w-4 h-4 mr-2')"></span>
Open Violations
</a>
<a href="/admin/code-quality/violations?severity=error"
<a :href="'/admin/code-quality/violations?severity=error' + (selectedValidator !== 'all' ? '&validator_type=' + selectedValidator : '')"
class="flex items-center px-4 py-2 text-sm font-medium leading-5 text-gray-700 dark:text-gray-300 transition-colors duration-150 bg-white dark:bg-gray-700 border border-gray-300 dark:border-gray-600 rounded-lg hover:bg-gray-50 dark:hover:bg-gray-600 focus:outline-none focus:shadow-outline-gray">
<span x-html="$icon('exclamation', 'w-4 h-4 mr-2')"></span>
Errors Only
@@ -253,6 +369,9 @@
<!-- Last Scan Info -->
<div x-show="stats.last_scan" class="text-sm text-gray-600 dark:text-gray-400 text-center">
Last scan: <span x-text="stats.last_scan ? new Date(stats.last_scan).toLocaleString() : 'Never'"></span>
<template x-if="selectedValidator !== 'all'">
<span class="ml-2">(<span class="capitalize" x-text="selectedValidator"></span> validator)</span>
</template>
</div>
</div>
{% endblock %}

View File

@@ -0,0 +1,393 @@
# Background Tasks Architecture
## Overview
This document defines the harmonized architecture for all background tasks in the application. Background tasks are long-running operations that execute asynchronously, allowing users to continue browsing while the task completes.
## Current State Analysis
| Task Type | Database Model | Status Values | Tracked in BG Tasks Page |
|-----------|---------------|---------------|--------------------------|
| Product Import | `MarketplaceImportJob` | pending, processing, completed, failed, completed_with_errors | Yes |
| Order Import | `LetzshopHistoricalImportJob` | pending, fetching, processing, completed, failed | No |
| Test Runs | `TestRun` | running, passed, failed, error | Yes |
| Product Export | `LetzshopSyncLog` | success, partial (post-facto logging only) | No |
| Code Quality Scan | `ArchitectureScan` | pending, running, completed, failed, completed_with_warnings | **Yes** ✓ |
### Identified Issues
1. **Inconsistent status values** - "processing" vs "running" vs "fetching"
2. **Inconsistent field naming** - `started_at`/`completed_at` vs `timestamp`
3. **Incomplete tracking** - Not all tasks appear on background tasks page
4. ~~**Missing status on some models** - Code quality scans have no status field~~ ✓ Fixed
5. **Exports not async** - Product exports run synchronously
---
## Harmonized Architecture
### Standard Status Values
All background tasks MUST use these standard status values:
| Status | Description | When Set |
|--------|-------------|----------|
| `pending` | Task created but not yet started | On job creation |
| `running` | Task is actively executing | When processing begins |
| `completed` | Task finished successfully | On successful completion |
| `failed` | Task failed with error | On unrecoverable error |
| `completed_with_warnings` | Task completed but with non-fatal issues | On partial success |
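To keep these strings from drifting between validators and services, they could live in one shared module. A minimal sketch, assuming a str-backed enum (the module placement and the names `TaskStatus`/`is_terminal` are illustrative, not existing code):

```python
from enum import Enum


class TaskStatus(str, Enum):
    """Approved status values for all background tasks."""

    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"
    COMPLETED_WITH_WARNINGS = "completed_with_warnings"


# Statuses that never change once set
TERMINAL_STATUSES = frozenset({"completed", "failed", "completed_with_warnings"})


def is_terminal(status: str) -> bool:
    """True if a task in this status has finished (successfully or not)."""
    return status in TERMINAL_STATUSES
```

Because the enum mixes in `str`, members compare equal to their raw string values, so database rows and API payloads can use plain strings while code references the enum.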
### Standard Database Model Fields
All background task models MUST include these fields:
```python
class BackgroundTaskMixin:
    """Mixin providing standard background task fields."""

    # Required fields
    id = Column(Integer, primary_key=True)
    status = Column(String(30), nullable=False, default="pending", index=True)
    created_at = Column(DateTime, default=lambda: datetime.now(UTC))
    started_at = Column(DateTime, nullable=True)
    completed_at = Column(DateTime, nullable=True)
    triggered_by = Column(String(100), nullable=True)  # "manual:username", "scheduled", "api"
    error_message = Column(Text, nullable=True)

    # Optional but recommended
    progress_percent = Column(Integer, nullable=True)  # 0-100 for progress tracking
    progress_message = Column(String(255), nullable=True)  # Current step description
```
### Standard Task Type Identifier
Each task type MUST have a unique identifier used for:
- Filtering on background tasks page
- API routing
- Frontend component selection
| Task Type ID | Description | Model |
|--------------|-------------|-------|
| `product_import` | Marketplace product CSV import | `MarketplaceImportJob` |
| `order_import` | Letzshop historical order import | `LetzshopHistoricalImportJob` |
| `product_export` | Product feed export | `ProductExportJob` (new) |
| `test_run` | Pytest execution | `TestRun` |
| `code_quality_scan` | Architecture/Security/Performance scan | `ArchitectureScan` |
---
## API Design Pattern
### Trigger Endpoint
All background tasks MUST follow this pattern:
```
POST /api/v1/{domain}/{task-type}
```
**Request**: Task-specific parameters
**Response** (202 Accepted):
```json
{
"job_id": 123,
"task_type": "product_import",
"status": "pending",
"message": "Task queued successfully",
"status_url": "/api/v1/admin/background-tasks/123"
}
```
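A small helper can keep trigger endpoints consistent so every task type returns the same 202 payload shape. A sketch (the function name and its placement are assumptions, not existing code):

```python
def build_accepted_response(job_id: int, task_type: str) -> dict:
    """Build the standard 202 Accepted payload for a newly queued background task."""
    return {
        "job_id": job_id,
        "task_type": task_type,
        "status": "pending",
        "message": "Task queued successfully",
        "status_url": f"/api/v1/admin/background-tasks/{job_id}",
    }
```

Each trigger endpoint would create its job row, queue the task, and return this dict with HTTP status 202.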
### Status Endpoint
```
GET /api/v1/admin/background-tasks/{job_id}
```
**Response**:
```json
{
"id": 123,
"task_type": "product_import",
"status": "running",
"progress_percent": 45,
"progress_message": "Processing batch 5 of 11",
"started_at": "2024-01-15T10:30:00Z",
"completed_at": null,
"triggered_by": "manual:admin",
"error_message": null,
"details": {
// Task-specific details
}
}
```
### Unified List Endpoint
```
GET /api/v1/admin/background-tasks
```
Query parameters:
- `task_type` - Filter by type (product_import, order_import, etc.)
- `status` - Filter by status (pending, running, completed, failed)
- `limit` - Number of results (default: 50)
---
## Frontend Design Pattern
### Page-Level Task Status Component
Every page that triggers a background task MUST include:
1. **Task Trigger Button** - Initiates the task
2. **Running Indicator** - Shows when task is in progress
3. **Status Banner** - Shows task status when returning to page
4. **Results Summary** - Shows outcome on completion
### Standard JavaScript Pattern
```javascript
// Task status tracking mixin
const BackgroundTaskMixin = {
    // State
    activeTask: null,   // Current running task
    taskHistory: [],    // Recent tasks for this page
    pollInterval: null,

    // Initialize - check for active tasks on page load
    async initTaskStatus() {
        const tasks = await this.fetchActiveTasks();
        if (tasks.length > 0) {
            this.activeTask = tasks[0];
            this.startPolling();
        }
    },

    // Start a new task
    async startTask(endpoint, params) {
        const result = await apiClient.post(endpoint, params);
        this.activeTask = {
            id: result.job_id,
            status: 'pending',
            task_type: result.task_type
        };
        this.startPolling();
        return result;
    },

    // Poll for status updates
    startPolling() {
        this.pollInterval = setInterval(() => this.pollStatus(), 3000);
    },

    async pollStatus() {
        if (!this.activeTask) return;
        const status = await apiClient.get(
            `/admin/background-tasks/${this.activeTask.id}`
        );
        this.activeTask = status;
        if (['completed', 'failed', 'completed_with_warnings'].includes(status.status)) {
            this.stopPolling();
            this.onTaskComplete(status);
        }
    },

    stopPolling() {
        if (this.pollInterval) {
            clearInterval(this.pollInterval);
            this.pollInterval = null;
        }
    },

    // Override in component
    onTaskComplete(result) {
        // Handle completion
    }
};
```
### Status Banner Component
```html
<!-- Task Status Banner - Include on all pages with background tasks -->
<template x-if="activeTask">
    <div class="mb-4 p-4 rounded-lg"
         :class="{
             'bg-blue-50 dark:bg-blue-900': activeTask.status === 'running',
             'bg-green-50 dark:bg-green-900': activeTask.status === 'completed',
             'bg-red-50 dark:bg-red-900': activeTask.status === 'failed',
             'bg-yellow-50 dark:bg-yellow-900': activeTask.status === 'completed_with_warnings'
         }">
        <div class="flex items-center justify-between">
            <div class="flex items-center">
                <template x-if="activeTask.status === 'running'">
                    <span class="animate-spin mr-2">...</span>
                </template>
                <span x-text="getTaskStatusMessage(activeTask)"></span>
            </div>
            <template x-if="activeTask.progress_percent">
                <div class="w-32 bg-gray-200 rounded-full h-2">
                    <div class="bg-blue-600 h-2 rounded-full"
                         :style="`width: ${activeTask.progress_percent}%`"></div>
                </div>
            </template>
        </div>
    </div>
</template>
```
---
## Background Tasks Service
### Unified Query Interface
The `BackgroundTasksService` MUST provide methods to query all task types:
```python
class BackgroundTasksService:
    """Unified service for all background task types."""

    TASK_MODELS = {
        'product_import': MarketplaceImportJob,
        'order_import': LetzshopHistoricalImportJob,
        'test_run': TestRun,
        'code_quality_scan': ArchitectureScan,
        'product_export': ProductExportJob,
    }

    def get_all_running_tasks(self, db: Session) -> list[BackgroundTaskResponse]:
        """Get all currently running tasks across all types."""

    def get_tasks(
        self,
        db: Session,
        task_type: str | None = None,
        status: str | None = None,
        limit: int = 50,
    ) -> list[BackgroundTaskResponse]:
        """Get tasks with optional filtering."""

    def get_task_by_id(
        self,
        db: Session,
        task_id: int,
        task_type: str,
    ) -> BackgroundTaskResponse:
        """Get a specific task by ID and type."""
```
---
## Implementation Checklist
### For Each Background Task Type:
- [ ] Database model includes all standard fields (status, started_at, completed_at, etc.)
- [ ] Status values follow standard enum (pending, running, completed, failed)
- [ ] API returns 202 with job_id on task creation
- [ ] Status endpoint available at standard path
- [ ] Task appears on unified background tasks page
- [ ] Frontend shows status banner on originating page
- [ ] Polling implemented with 3-5 second interval
- [ ] Error handling stores message in error_message field
- [ ] Admin notification triggered on failures
### Migration Plan
| Task | Current State | Required Changes |
|------|---------------|------------------|
| Product Import | Mostly compliant | Change "processing" to "running" |
| Order Import | Partially compliant | Add to background tasks page, standardize status |
| Test Runs | Mostly compliant | Add "pending" status before run starts |
| Product Export | Not async | Create job model, make async |
| Code Quality Scan | **Implemented** ✓ | ~~Add status field, create job pattern, make async~~ |
---
## Code Quality Scan Implementation
The code quality scan background task implementation serves as a reference for the harmonized architecture.
### Files Modified/Created
| File | Purpose |
|------|---------|
| `models/database/architecture_scan.py` | Added status fields (status, started_at, completed_at, error_message, progress_message) |
| `alembic/versions/g5b6c7d8e9f0_add_scan_status_fields.py` | Database migration for status fields |
| `app/tasks/code_quality_tasks.py` | Background task function `execute_code_quality_scan()` |
| `app/api/v1/admin/code_quality.py` | Updated endpoints with 202 response and polling |
| `app/services/background_tasks_service.py` | Added scan methods to unified service |
| `app/api/v1/admin/background_tasks.py` | Integrated scans into unified background tasks page |
| `static/admin/js/code-quality-dashboard.js` | Frontend polling and status display |
| `app/templates/admin/code-quality-dashboard.html` | Progress banner UI |
### API Endpoints
```
POST /admin/code-quality/scan
→ Returns 202 with job IDs immediately
→ Response: { scans: [{id, validator_type, status, message}], status_url }
GET /admin/code-quality/scans/{scan_id}/status
→ Poll for individual scan status
→ Response: { id, status, progress_message, total_violations, ... }
GET /admin/code-quality/scans/running
→ Get all currently running scans
→ Response: [{ id, status, progress_message, ... }]
```
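From a client's perspective, the polling contract above reduces to "GET the status endpoint until a terminal status appears". A sketch with an injectable `get` callable so it works with any HTTP client (the function and parameter names are illustrative, not part of the codebase):

```python
import time

TERMINAL_STATUSES = ("completed", "failed", "completed_with_warnings")


def wait_for_scan(get, scan_id: int, interval: float = 3.0, timeout: float = 600.0) -> dict:
    """Poll the scan status endpoint until the scan reaches a terminal status.

    `get` is any callable that takes a path and returns the decoded JSON dict.
    Raises TimeoutError if the scan is still running after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        scan = get(f"/admin/code-quality/scans/{scan_id}/status")
        if scan["status"] in TERMINAL_STATUSES:
            return scan
        time.sleep(interval)
    raise TimeoutError(f"Scan {scan_id} still running after {timeout}s")
```

This mirrors what the dashboard JavaScript does with `setInterval`, and is handy for CI scripts that trigger a scan and want to block on the result.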
### Frontend Behavior
1. **On page load**: Checks for running scans via `/scans/running`
2. **On scan trigger**: Creates scans, stores IDs, starts polling
3. **Polling**: Every 3 seconds via `setInterval`
4. **Progress display**: Shows spinner and progress message
5. **On completion**: Fetches final results, updates dashboard
6. **User can navigate away**: Scan continues in background
### Background Task Pattern
```python
async def execute_code_quality_scan(scan_id: int):
    db = SessionLocal()  # Own database session
    scan = None
    try:
        scan = db.query(ArchitectureScan).filter(ArchitectureScan.id == scan_id).first()
        if not scan:
            return
        scan.status = "running"
        scan.started_at = datetime.now(UTC)
        scan.progress_message = "Running validator..."
        db.commit()

        # Execute validator...

        scan.status = "completed"
        scan.completed_at = datetime.now(UTC)
        db.commit()
    except Exception as e:
        if scan is not None:
            scan.status = "failed"
            scan.error_message = str(e)
            db.commit()
        # Create admin notification
    finally:
        db.close()
---
## Architecture Rules
See `.architecture-rules/background_tasks.yaml` for enforceable rules.
Key rules:
- `BG-001`: All background task models must include standard fields
- `BG-002`: Status values must be from approved set
- `BG-003`: Task triggers must return 202 with job_id
- `BG-004`: All tasks must be registered in BackgroundTasksService

View File

@@ -21,7 +21,7 @@ from app.core.database import Base
class ArchitectureScan(Base):
"""Represents a single run of the architecture validator"""
"""Represents a single run of a code quality validator"""
__tablename__ = "architecture_scans"
@@ -29,12 +29,26 @@ class ArchitectureScan(Base):
timestamp = Column(
DateTime(timezone=True), server_default=func.now(), nullable=False, index=True
)
validator_type = Column(
String(20), nullable=False, index=True, default="architecture"
) # 'architecture', 'security', 'performance'
# Background task status fields (harmonized architecture)
status = Column(
String(30), nullable=False, default="pending", index=True
) # 'pending', 'running', 'completed', 'failed', 'completed_with_warnings'
started_at = Column(DateTime(timezone=True), nullable=True)
completed_at = Column(DateTime(timezone=True), nullable=True)
error_message = Column(Text, nullable=True)
progress_message = Column(String(255), nullable=True) # Current step description
# Scan results
total_files = Column(Integer, default=0)
total_violations = Column(Integer, default=0)
errors = Column(Integer, default=0)
warnings = Column(Integer, default=0)
duration_seconds = Column(Float, default=0.0)
triggered_by = Column(String(100)) # 'manual', 'scheduled', 'ci/cd'
triggered_by = Column(String(100)) # 'manual:username', 'scheduled', 'ci/cd'
git_commit_hash = Column(String(40))
# Relationship to violations
@@ -47,7 +61,7 @@ class ArchitectureScan(Base):
class ArchitectureViolation(Base):
"""Represents a single architectural violation found during a scan"""
"""Represents a single code quality violation found during a scan"""
__tablename__ = "architecture_violations"
@@ -55,7 +69,10 @@ class ArchitectureViolation(Base):
scan_id = Column(
Integer, ForeignKey("architecture_scans.id"), nullable=False, index=True
)
rule_id = Column(String(20), nullable=False, index=True) # e.g., 'API-001'
validator_type = Column(
String(20), nullable=False, index=True, default="architecture"
) # 'architecture', 'security', 'performance'
rule_id = Column(String(20), nullable=False, index=True) # e.g., 'API-001', 'SEC-001', 'PERF-001'
rule_name = Column(String(200), nullable=False)
severity = Column(
String(10), nullable=False, index=True
@@ -96,17 +113,20 @@ class ArchitectureViolation(Base):
class ArchitectureRule(Base):
"""Architecture rules configuration (from YAML with database overrides)"""
"""Code quality rules configuration (from YAML with database overrides)"""
__tablename__ = "architecture_rules"
id = Column(Integer, primary_key=True, index=True)
rule_id = Column(
String(20), unique=True, nullable=False, index=True
) # e.g., 'API-001'
) # e.g., 'API-001', 'SEC-001', 'PERF-001'
validator_type = Column(
String(20), nullable=False, index=True, default="architecture"
) # 'architecture', 'security', 'performance'
category = Column(
String(50), nullable=False
) # 'api_endpoint', 'service_layer', etc.
) # 'api_endpoint', 'service_layer', 'authentication', 'database', etc.
name = Column(String(200), nullable=False)
description = Column(Text)
severity = Column(String(10), nullable=False) # Can override default from YAML

View File

@@ -1,9 +1,10 @@
/**
* Code Quality Dashboard Component
* Manages the code quality dashboard page
* Manages the unified code quality dashboard page
* Supports multiple validator types: architecture, security, performance
*/
// Use centralized logger
const codeQualityLog = window.LogConfig.createLogger('CODE-QUALITY');
function codeQualityDashboard() {
@@ -14,15 +15,23 @@ function codeQualityDashboard() {
// Set current page for navigation
currentPage: 'code-quality',
// Validator type selection
selectedValidator: 'all', // 'all', 'architecture', 'security', 'performance'
validatorTypes: ['architecture', 'security', 'performance'],
// Dashboard-specific data
loading: false,
scanning: false,
error: null,
successMessage: null,
scanProgress: null, // Progress message during scan
runningScans: [], // Track running scan IDs
pollInterval: null, // Polling interval ID
stats: {
total_violations: 0,
errors: 0,
warnings: 0,
info: 0,
open: 0,
assigned: 0,
resolved: 0,
@@ -33,11 +42,57 @@ function codeQualityDashboard() {
by_rule: {},
by_module: {},
top_files: [],
last_scan: null
last_scan: null,
validator_type: null,
by_validator: {}
},
async init() {
// Check URL for validator_type parameter
const urlParams = new URLSearchParams(window.location.search);
const urlValidator = urlParams.get('validator_type');
if (urlValidator && this.validatorTypes.includes(urlValidator)) {
this.selectedValidator = urlValidator;
} else {
// Ensure 'all' is explicitly set as default
this.selectedValidator = 'all';
}
await this.loadStats();
// Check for any running scans on page load
await this.checkRunningScans();
},
async checkRunningScans() {
try {
const runningScans = await apiClient.get('/admin/code-quality/scans/running');
if (runningScans && runningScans.length > 0) {
this.scanning = true;
this.runningScans = runningScans.map(s => s.id);
this.updateProgressMessage(runningScans);
this.startPolling();
}
} catch (err) {
codeQualityLog.error('Failed to check running scans:', err);
}
},
updateProgressMessage(scans) {
const runningScans = scans.filter(s => s.status === 'running' || s.status === 'pending');
if (runningScans.length === 0) {
this.scanProgress = null;
return;
}
// Show progress from the first running scan
const firstRunning = runningScans.find(s => s.status === 'running');
if (firstRunning && firstRunning.progress_message) {
this.scanProgress = firstRunning.progress_message;
} else {
const validatorNames = runningScans.map(s => this.capitalizeFirst(s.validator_type));
this.scanProgress = `Running ${validatorNames.join(', ')} scan${runningScans.length > 1 ? 's' : ''}...`;
}
},
async loadStats() {
@@ -45,7 +100,13 @@ function codeQualityDashboard() {
this.error = null;
try {
const stats = await apiClient.get('/admin/code-quality/stats');
// Build URL with validator_type filter if not 'all'
let url = '/admin/code-quality/stats';
if (this.selectedValidator !== 'all') {
url += `?validator_type=${this.selectedValidator}`;
}
const stats = await apiClient.get(url);
this.stats = stats;
} catch (err) {
codeQualityLog.error('Failed to load stats:', err);
@@ -60,37 +121,162 @@ function codeQualityDashboard() {
}
},
async runScan() {
async selectValidator(validatorType) {
if (this.selectedValidator !== validatorType) {
this.selectedValidator = validatorType;
await this.loadStats();
// Update URL without reload
const url = new URL(window.location);
if (validatorType === 'all') {
url.searchParams.delete('validator_type');
} else {
url.searchParams.set('validator_type', validatorType);
}
window.history.pushState({}, '', url);
}
},
async runScan(validatorType = 'all') {
this.scanning = true;
this.error = null;
this.successMessage = null;
this.scanProgress = 'Queuing scan...';
try {
const scan = await apiClient.post('/admin/code-quality/scan');
this.successMessage = `Scan completed: ${scan.total_violations} violations found (${scan.errors} errors, ${scan.warnings} warnings)`;
// Determine which validators to run
const validatorTypesToRun = validatorType === 'all'
? this.validatorTypes
: [validatorType];
// Reload stats after scan
await this.loadStats();
const result = await apiClient.post('/admin/code-quality/scan', {
validator_types: validatorTypesToRun
});
// Store running scan IDs for polling
this.runningScans = result.scans.map(s => s.id);
// Show initial status message
const validatorNames = validatorTypesToRun.map(v => this.capitalizeFirst(v));
this.scanProgress = `Running ${validatorNames.join(', ')} scan${validatorNames.length > 1 ? 's' : ''}...`;
// Start polling for completion
this.startPolling();
} catch (err) {
codeQualityLog.error('Failed to run scan:', err);
this.error = err.message;
this.scanning = false;
this.scanProgress = null;
// Redirect to login if unauthorized
if (err.message.includes('Unauthorized')) {
window.location.href = '/admin/login';
}
}
},
startPolling() {
// Clear any existing polling
if (this.pollInterval) {
clearInterval(this.pollInterval);
}
// Poll every 3 seconds
this.pollInterval = setInterval(async () => {
await this.pollScanStatus();
}, 3000);
},
stopPolling() {
if (this.pollInterval) {
clearInterval(this.pollInterval);
this.pollInterval = null;
}
},
async pollScanStatus() {
if (this.runningScans.length === 0) {
this.stopPolling();
this.scanning = false;
this.scanProgress = null;
return;
}
try {
const runningScans = await apiClient.get('/admin/code-quality/scans/running');
// Update progress message from running scans
this.updateProgressMessage(runningScans);
// Check if our scans have completed
const stillRunning = this.runningScans.filter(id =>
runningScans.some(s => s.id === id)
);
if (stillRunning.length === 0) {
// All scans completed - get final results
await this.handleScanCompletion();
} else {
// Update running scans list
this.runningScans = stillRunning;
}
} catch (err) {
codeQualityLog.error('Failed to poll scan status:', err);
}
},
async handleScanCompletion() {
this.stopPolling();
// Get results for all completed scans
let totalViolations = 0;
let totalErrors = 0;
let totalWarnings = 0;
const completedScans = [];
for (const scanId of this.runningScans) {
try {
const scan = await apiClient.get(`/admin/code-quality/scans/${scanId}/status`);
completedScans.push(scan);
totalViolations += scan.total_violations || 0;
totalErrors += scan.errors || 0;
totalWarnings += scan.warnings || 0;
} catch (err) {
codeQualityLog.error(`Failed to get scan ${scanId} results:`, err);
}
}
// Format success message based on number of validators run
if (completedScans.length > 1) {
this.successMessage = `Scan completed: ${totalViolations} total violations found (${totalErrors} errors, ${totalWarnings} warnings) across ${completedScans.length} validators`;
} else if (completedScans.length === 1) {
const scan = completedScans[0];
this.successMessage = `${this.capitalizeFirst(scan.validator_type)} scan completed: ${scan.total_violations} violations found (${scan.errors} errors, ${scan.warnings} warnings)`;
} else {
this.successMessage = 'Scan completed';
}
// Reload stats after scan
await this.loadStats();
// Reset scanning state
this.scanning = false;
this.scanProgress = null;
this.runningScans = [];
// Clear success message after 5 seconds
setTimeout(() => {
this.successMessage = null;
}, 5000);
},
async refresh() {
await this.loadStats();
},
capitalizeFirst(str) {
return str.charAt(0).toUpperCase() + str.slice(1);
}
};
}