# Code Quality Dashboard Implementation Plan

**Status:** Phase 1 Complete (Database Foundation)

This document tracks the implementation of the Code Quality Dashboard for tracking architecture violations in the admin UI.

---

## Overview

The Code Quality Dashboard provides a UI for viewing, managing, and tracking architecture violations detected by `scripts/validate/validate_architecture.py`. This allows teams to:

- View violations in a user-friendly interface
- Track technical debt over time
- Assign violations to developers
- Prioritize fixes by severity
- Monitor code quality improvements

---

## Implementation Phases

### ✅ Phase 1: Database Foundation (COMPLETED)

**Files Created:**

- `app/models/architecture_scan.py` - Database models
- `alembic/versions/7a7ce92593d5_*.py` - Migration

**Database Schema:**

```
architecture_scans
├── id (PK)
├── timestamp
├── total_files
├── total_violations
├── errors, warnings
├── duration_seconds
├── triggered_by
└── git_commit_hash

architecture_violations
├── id (PK)
├── scan_id (FK → scans)
├── rule_id, rule_name
├── severity, status
├── file_path, line_number
├── message, context, suggestion
├── assigned_to (FK → users)
├── resolved_at, resolved_by
└── resolution_note

architecture_rules
├── id (PK)
├── rule_id (unique)
├── category, name
├── description, severity
├── enabled
└── custom_config (JSON)

violation_assignments
├── id (PK)
├── violation_id (FK)
├── user_id (FK)
├── assigned_by, due_date
└── priority

violation_comments
├── id (PK)
├── violation_id (FK)
├── user_id (FK)
├── comment
└── created_at
```

---

### 🔨 Phase 2: Service Layer (TODO)

**File to Create:** `app/services/code_quality_service.py`

**Required Methods:**

```python
class CodeQualityService:
    """Business logic for code quality tracking"""

    # Scan Management

    def run_scan(db: Session, triggered_by: str = 'manual') -> ArchitectureScan:
        """
        Run architecture validator and store results

        Steps:
        1. Execute scripts/validate/validate_architecture.py
        2. Parse JSON output
        3. Create ArchitectureScan record
        4. Create ArchitectureViolation records
        5. Calculate statistics
        """
        pass

    def get_latest_scan(db: Session) -> ArchitectureScan:
        """Get most recent scan"""
        pass

    def get_scan_history(db: Session, limit: int = 30) -> List[ArchitectureScan]:
        """Get scan history for trend graphs"""
        pass

    # Violation Management

    def get_violations(
        db: Session,
        scan_id: int = None,
        severity: str = None,
        status: str = None,
        rule_id: str = None,
        file_path: str = None,
        limit: int = 100,
        offset: int = 0
    ) -> Tuple[List[ArchitectureViolation], int]:
        """Get violations with filtering and pagination"""
        pass

    def get_violation_by_id(db: Session, violation_id: int) -> ArchitectureViolation:
        """Get single violation with details"""
        pass

    def assign_violation(
        db: Session,
        violation_id: int,
        user_id: int,
        assigned_by: int,
        due_date: datetime = None,
        priority: str = 'medium'
    ) -> ViolationAssignment:
        """Assign violation to developer"""
        pass

    def resolve_violation(
        db: Session,
        violation_id: int,
        resolved_by: int,
        resolution_note: str
    ) -> ArchitectureViolation:
        """Mark violation as resolved"""
        pass

    def ignore_violation(
        db: Session,
        violation_id: int,
        ignored_by: int,
        reason: str
    ) -> ArchitectureViolation:
        """Mark violation as ignored/won't fix"""
        pass

    # Statistics

    def get_dashboard_stats(db: Session) -> dict:
        """
        Get statistics for dashboard

        Returns:
        {
            'total_violations': 2961,
            'errors': 100,
            'warnings': 2861,
            'open': 2850,
            'assigned': 50,
            'resolved': 61,
            'technical_debt_score': 72,
            'trend': {
                'last_7_days': [...],
                'last_30_days': [...]
            },
            'by_severity': {'error': 100, 'warning': 2861},
            'by_rule': {'API-001': 45, 'API-002': 23, ...},
            'by_module': {'app/api': 234, 'app/services': 156, ...},
            'top_files': [
                {'file': 'app/api/stores.py', 'count': 12},
                ...
            ]
        }
        """
        pass

    def calculate_technical_debt_score(db: Session) -> int:
        """
        Calculate technical debt score (0-100)

        Formula: 100 - (errors * 0.5 + warnings * 0.05)
        """
        pass

    # Rule Management

    def get_all_rules(db: Session) -> List[ArchitectureRule]:
        """Get all rules with configuration"""
        pass

    def update_rule(
        db: Session,
        rule_id: str,
        severity: str = None,
        enabled: bool = None
    ) -> ArchitectureRule:
        """Update rule configuration"""
        pass
```

**Integration with Validator:**

```python
def run_scan(db: Session, triggered_by: str = 'manual') -> ArchitectureScan:
    import subprocess
    import json
    from datetime import datetime

    # Run validator (it exits non-zero when errors are found, so don't use check=True)
    start_time = datetime.now()
    result = subprocess.run(
        ['python', 'scripts/validate/validate_architecture.py', '--json'],
        capture_output=True,
        text=True
    )
    duration = (datetime.now() - start_time).total_seconds()

    # Parse output
    data = json.loads(result.stdout)

    # Create scan record
    scan = ArchitectureScan(
        timestamp=datetime.now(),
        total_files=data['files_checked'],
        total_violations=len(data['violations']),
        errors=len([v for v in data['violations'] if v['severity'] == 'error']),
        warnings=len([v for v in data['violations'] if v['severity'] == 'warning']),
        duration_seconds=duration,
        triggered_by=triggered_by,
        git_commit_hash=get_git_commit_hash()
    )
    db.add(scan)
    db.flush()  # populate scan.id before creating violation rows

    # Create violation records
    for v in data['violations']:
        violation = ArchitectureViolation(
            scan_id=scan.id,
            rule_id=v['rule_id'],
            rule_name=v['rule_name'],
            severity=v['severity'],
            file_path=v['file_path'],
            line_number=v['line_number'],
            message=v['message'],
            context=v.get('context'),
            suggestion=v.get('suggestion'),
            status='open'
        )
        db.add(violation)

    db.commit()
    return scan
```

---

### 🔨 Phase 3: API Endpoints (TODO)

**File to Create:** `app/api/v1/admin/code_quality.py`

**Endpoints:**

```
# Scan Management
POST /admin/code-quality/scan                       # Trigger new scan
GET  /admin/code-quality/scans                      # List scans
GET  /admin/code-quality/scans/{scan_id}            # Scan details

# Violations
GET  /admin/code-quality/violations                 # List with filters
GET  /admin/code-quality/violations/{id}            # Violation details
PUT  /admin/code-quality/violations/{id}            # Update status
POST /admin/code-quality/violations/{id}/assign     # Assign
POST /admin/code-quality/violations/{id}/resolve    # Resolve
POST /admin/code-quality/violations/{id}/ignore     # Ignore
POST /admin/code-quality/violations/{id}/comments   # Add comment

# Statistics
GET  /admin/code-quality/stats                      # Dashboard stats
GET  /admin/code-quality/stats/trend                # Trend data

# Rules
GET  /admin/code-quality/rules                      # List all rules
PUT  /admin/code-quality/rules/{rule_id}            # Update rule
```

**Request/Response Models:**

```python
from pydantic import BaseModel
from typing import Optional, List
from datetime import datetime


class ScanTriggerRequest(BaseModel):
    triggered_by: str = 'manual'


class ScanResponse(BaseModel):
    id: int
    timestamp: datetime
    total_violations: int
    errors: int
    warnings: int
    duration_seconds: float

    class Config:
        from_attributes = True


class ViolationResponse(BaseModel):
    id: int
    rule_id: str
    rule_name: str
    severity: str
    file_path: str
    line_number: int
    message: str
    context: Optional[str]
    suggestion: Optional[str]
    status: str
    created_at: datetime

    class Config:
        from_attributes = True


class ViolationListResponse(BaseModel):
    violations: List[ViolationResponse]
    total: int
    page: int
    per_page: int


class AssignViolationRequest(BaseModel):
    user_id: int
    due_date: Optional[datetime] = None
    priority: str = 'medium'


class ResolveViolationRequest(BaseModel):
    resolution_note: str


class DashboardStatsResponse(BaseModel):
    total_violations: int
    errors: int
    warnings: int
    open: int
    assigned: int
    resolved: int
    technical_debt_score: int
    by_severity: dict
    by_rule: dict
    by_module: dict
    top_files: List[dict]
```

---

### 🔨 Phase 4: Frontend (TODO)

**Files to Create:**

1. **Templates:**
   - `app/templates/admin/code-quality-dashboard.html` - Overview page
   - `app/templates/admin/code-quality-violations.html` - Violations list
   - `app/templates/admin/code-quality-detail.html` - Violation detail
   - `app/templates/admin/code-quality-rules.html` - Rule management

2. **JavaScript:**
   - `static/admin/js/code-quality-dashboard.js` - Dashboard interactions
   - `static/admin/js/code-quality-violations.js` - Violations list
   - `static/admin/js/code-quality-detail.js` - Violation detail

3. **Routes:**
   - Add to `app/routes/admin_pages.py`:

   ```python
   @router.get("/code-quality")
   @router.get("/code-quality/violations")
   @router.get("/code-quality/violations/{violation_id}")
   @router.get("/code-quality/rules")
   ```

**Dashboard Layout:**

```html
<!-- Layout sketch reconstructed from the mockup; element structure, ids, and
     card labels are placeholders (labels assumed from the dashboard stats payload) -->
<h1>Code Quality</h1>

<!-- Stat cards, populated from /admin/code-quality/stats -->
<div class="stat-cards">
  <div class="stat-card">Total: <span id="total-violations">0</span></div>
  <div class="stat-card">Errors: <span id="errors">0</span></div>
  <div class="stat-card">Warnings: <span id="warnings">0</span></div>
  <div class="stat-card">Open: <span id="open">0</span></div>
</div>

<!-- Technical debt score -->
<div class="debt-score"><span id="debt-score">0</span>/100</div>

<!-- Violations table -->
<table>
  <thead>
    <tr><th>Rule</th><th>File</th><th>Severity</th><th>Status</th><th>Actions</th></tr>
  </thead>
  <tbody><!-- rows rendered by code-quality-dashboard.js --></tbody>
</table>
```

---

## Validator Output Format

**Update:** `scripts/validate/validate_architecture.py`

Add JSON output support:

```python
parser.add_argument(
    '--json',
    action='store_true',
    help='Output results as JSON'
)

# In print_report():
if args.json:
    output = {
        'files_checked': self.result.files_checked,
        'violations': [
            {
                'rule_id': v.rule_id,
                'rule_name': v.rule_name,
                'severity': v.severity.value,
                'file_path': str(v.file_path),
                'line_number': v.line_number,
                'message': v.message,
                'context': v.context,
                'suggestion': v.suggestion
            }
            for v in self.result.violations
        ]
    }
    print(json.dumps(output))
    return 0 if not self.result.has_errors() else 1
```

---

## Testing Checklist

- [ ] Database models create correctly
- [ ] Migration runs without errors
- [ ] Service layer runs validator successfully
- [ ] Service layer parses output correctly
- [ ] API endpoints return proper responses
- [ ] Dashboard loads without errors
- [ ] Scan button triggers validator
- [ ] Violations display in table
- [ ] Filtering works (severity, status, rule)
- [ ] Pagination works
- [ ] Violation detail page shows context
- [ ] Assignment workflow works
- [ ] Resolution workflow works
- [ ] Trend chart displays correctly
- [ ] CSV export works
- [ ] Rule management works

---

## Next Session Tasks

**Priority Order:**

1. **Update validator script** - Add `--json` flag
2. **Implement service layer** - `code_quality_service.py`
3. **Create API endpoints** - `code_quality.py`
4. **Build dashboard page** - Basic UI with stats and table
5. **Test full flow** - Scan → Store → Display

**Estimated Time:** 3-4 hours

---

## Notes

- Consider adding background job support (Celery/RQ) for long-running scans
- Add email notifications when violations are assigned
- Consider Gitea/GitHub integration (comment on PRs)
- Add historical comparison (violations introduced vs fixed)
- Consider rule suggestions based on common violations

---

**Created:** 2025-11-28
**Last Updated:** 2025-11-28
**Status:** Phase 1 Complete, Phases 2-4 Pending
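
---

## Appendix: Sanity-Checking the Debt Score Formula

The Phase 2 formula `100 - (errors * 0.5 + warnings * 0.05)` goes negative at the current counts (100 errors, 2,861 warnings gives roughly -93), so the eventual `calculate_technical_debt_score` will likely need to clamp the result. A minimal standalone sketch, where the clamping and rounding are assumptions rather than part of the plan above:

```python
def technical_debt_score(errors: int, warnings: int) -> int:
    """Score from 0 (worst) to 100 (clean), per the Phase 2 formula.

    Clamping to [0, 100] is an assumption: the raw formula is
    negative for large violation counts.
    """
    raw = 100 - (errors * 0.5 + warnings * 0.05)
    return max(0, min(100, round(raw)))


print(technical_debt_score(0, 0))       # 100 (clean codebase)
print(technical_debt_score(10, 100))    # 90
print(technical_debt_score(100, 2861))  # 0 (raw value ~-93, clamped)
```

This also suggests the weights may need tuning once real scan data is in the database, since any codebase with roughly 2,000+ warnings scores a flat 0.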