Understanding Your Results
Learn how to interpret AI analysis results and make the most of your Mathom reports.
Report Structure
Overall Summary
- Overall Grade: A-F rating of the entire project
- Total Findings: Count of issues identified
- Severity Breakdown: Distribution across Critical/High/Medium/Low
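If you work with a JSON export of the report (see Exporting Results below), the overall summary might look roughly like the sketch that follows. The field names here are illustrative assumptions, not Mathom's documented schema.

```python
# Hypothetical shape of the Overall Summary in a JSON export.
# All key names are assumptions for illustration, not a documented schema.
summary = {
    "overall_grade": "B",        # A-F rating of the entire project
    "total_findings": 42,        # count of issues identified
    "severity_breakdown": {      # distribution across severity levels
        "critical": 1,
        "high": 6,
        "medium": 15,
        "low": 20,
    },
}
```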
Review Areas
Results are organized by configured review areas:
- Code Quality & Architecture
- Security & Vulnerabilities
- Technical Debt
- Documentation Quality
- Business Logic Review
- (Custom areas as configured)
Understanding Findings
Each finding includes:
Title & Description
Clear summary of the issue or observation
Severity Rating
- Critical 🔴 - Immediate attention required, deal-breaking issues
- High 🟠 - Significant concerns that need addressing
- Medium 🟡 - Moderate issues to be aware of
- Low 🟢 - Minor observations or best practices
Evidence
Specific excerpts from your documents supporting the finding
Recommendations
Actionable steps to address or mitigate the issue
Impact Assessment
Business and technical implications
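Putting these parts together, a single finding in a JSON export might look roughly like the sketch below. Again, the key names and sample values are assumptions for illustration, not a documented schema.

```python
# Hypothetical structure of one finding; keys and values are illustrative.
finding = {
    "title": "Hard-coded database credentials",
    "description": "Connection strings with plaintext passwords appear in config files.",
    "severity": "critical",                       # critical / high / medium / low
    "review_area": "Security & Vulnerabilities",
    "evidence": ["config/settings.py: DB_PASSWORD = 'hunter2'"],
    "recommendations": ["Move secrets to environment variables or a secrets manager."],
    "impact": "Anyone with repository access can read production credentials.",
}
```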
Severity Criteria
Critical Findings
- Security vulnerabilities
- Legal compliance issues
- Data integrity problems
- System-breaking bugs
High Findings
- Performance bottlenecks
- Architectural anti-patterns
- Missing critical documentation
- Significant technical debt
Medium Findings
- Code quality issues
- Incomplete error handling
- Documentation gaps
- Minor security concerns
Low Findings
- Style inconsistencies
- Optimization opportunities
- Enhancement suggestions
Grading System
- A (90-100): Excellent - minimal issues, best practices followed
- B (80-89): Good - few issues, generally well-maintained
- C (70-79): Acceptable - some issues need attention
- D (60-69): Concerning - multiple significant issues
- F (<60): Poor - critical issues requiring immediate attention
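The score-to-letter mapping is simple to reproduce. Assuming your export includes a numeric 0-100 score (an assumption about the schema), the ranges above translate directly to code:

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 score to Mathom's letter grades using the ranges above."""
    if score >= 90:
        return "A"  # Excellent
    if score >= 80:
        return "B"  # Good
    if score >= 70:
        return "C"  # Acceptable
    if score >= 60:
        return "D"  # Concerning
    return "F"      # Poor

assert letter_grade(84) == "B"
```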
Using Filters
Filter findings by:
- Severity: Focus on critical issues first
- Review Area: Examine specific aspects
- Search: Find specific topics or keywords
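The same filters are easy to apply programmatically to exported data. Here is a minimal sketch, assuming each finding is a dict shaped like the hypothetical example in Understanding Findings:

```python
# Sketch: replicate the UI filters on a list of finding dicts.
# The dict keys ("severity", "review_area", ...) are assumed, not documented.
findings = [
    {"title": "Hard-coded credentials", "description": "Plaintext password in config.",
     "severity": "critical", "review_area": "Security & Vulnerabilities"},
    {"title": "Inconsistent naming", "description": "Mixed camelCase and snake_case.",
     "severity": "low", "review_area": "Code Quality & Architecture"},
]

def filter_findings(findings, severity=None, area=None, keyword=None):
    results = findings
    if severity:
        results = [f for f in results if f["severity"] == severity]
    if area:
        results = [f for f in results if f["review_area"] == area]
    if keyword:
        kw = keyword.lower()
        results = [f for f in results
                   if kw in f["title"].lower() or kw in f["description"].lower()]
    return results

print(filter_findings(findings, severity="critical"))
```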
Exporting Results
Export your analysis:
- PDF Report: Share with stakeholders
- CSV: Import into spreadsheets
- JSON: Integrate with other tools
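The CSV and JSON exports can be consumed with nothing but the standard library. A sketch, assuming a top-level "findings" array in the JSON and matching column names in the CSV (both are assumptions; check your actual export):

```python
import csv
import json

# JSON export: assumes a top-level "findings" list.
with open("mathom_report.json") as fh:
    report = json.load(fh)
for finding in report.get("findings", []):
    print(finding["severity"], "-", finding["title"])

# CSV export: column names here are illustrative, not documented.
with open("mathom_report.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        print(row["severity"], "-", row["title"])
```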
Taking Action
Prioritization
- Address Critical findings first
- Plan for High findings
- Track Medium findings
- Defer Low findings if needed
Creating Action Items
- Assign findings to team members
- Set deadlines for remediation
- Track progress in your PM tool
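If your PM tool supports CSV import, one low-effort way to create action items is to convert filtered findings into an importable task list. A sketch, using the same assumed finding shape as above:

```python
import csv

def findings_to_tasks(findings, out_path="action_items.csv"):
    """Write findings as a CSV of action items for PM-tool import (sketch)."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Title", "Priority", "Description", "Assignee", "Due date"])
        for f in findings:
            # Leave assignee and due date blank; fill them in during triage.
            writer.writerow([f["title"], f["severity"], f["description"], "", ""])
```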
Follow-up Analysis
Re-run the analysis after making changes to verify improvements.
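One way to verify improvements is to compare severity counts between two JSON exports, before and after your changes. A minimal sketch, under the same schema assumptions as above:

```python
import json
from collections import Counter

def severity_counts(path):
    """Count findings per severity in a JSON export (schema assumed)."""
    with open(path) as fh:
        report = json.load(fh)
    return Counter(f["severity"] for f in report.get("findings", []))

before = severity_counts("report_before.json")
after = severity_counts("report_after.json")
for level in ("critical", "high", "medium", "low"):
    print(f"{level}: {before[level]} -> {after[level]}")
```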
Common Patterns
All Green
Indicates a well-maintained codebase with few issues - still review carefully
Many Critical
Suggests significant risks - prioritize remediation before proceeding
Mostly Low/Medium
Typical for mature products - focus on high-impact improvements
Questions to Ask
- Are critical findings deal-breakers?
- What's the cost to remediate?
- Are there systemic patterns?
- What's the risk of deferring fixes?
Best Practices
✅ Review with Team: Discuss findings with technical and business stakeholders
✅ Validate Findings: AI is powerful, but verify important findings yourself
✅ Document Decisions: Track which issues to fix, defer, or accept
✅ Track Trends: Run periodic analyses to monitor code health
Need Help Interpreting?
- Schedule a demo with our team
- Contact Support for analysis review
- Join our community forum for peer advice