Why Explainability Matters
When AI systems make recommendations about infrastructure safety, maintenance priorities, or asset conditions, stakeholders need to understand why. Inspectors need confidence that AI findings are accurate. Managers need justification for resource allocation. Regulators need documentation of decision rationale. And when AI makes mistakes, understanding the failure is essential for improvement.
MuVeraAI includes comprehensive explainability tools designed for infrastructure inspection use cases. This guide shows you how to use them effectively.
Accessing Explainability Features
MuVeraAI's explainability features are integrated throughout the platform.
Inspection-Level Explanations
Every AI-generated finding includes an explanation accessible through the inspection interface.
Finding the Explanation Panel
- Navigate to any inspection in the platform
- Select an AI-generated finding
- Click the "Explain" button (lightbulb icon) in the finding toolbar
- The explanation panel opens alongside the finding
Explanation Panel Contents
The explanation panel contains several sections:
- Summary: Natural language explanation of the finding
- Evidence: Specific data points that contributed to the finding
- Confidence: Model confidence level with calibration context
- Similar Cases: Historical findings for comparison
- Alternative Interpretations: Other possibilities the model considered
Dashboard Explanations
Aggregate views and dashboards also include explainability features.
Risk Score Explanations
When viewing asset risk scores:
- Hover over any risk indicator
- Click "View breakdown"
- See factor contributions to the overall score
Trend Explanations
For trend indicators:
- Click on any trend chart
- Select "Explain trend"
- View factors driving the observed pattern
Understanding Visual Explanations
For image-based AI findings, visual explanations show what the model detected.
Attention Maps
Attention maps highlight regions the model focused on.
How to Access
- Open any image-based finding
- Click "Show attention map" in the image viewer
- Overlay appears showing model focus areas
Interpreting Attention Maps
- Red/warm colors: High model attention (primary decision factors)
- Blue/cool colors: Low attention (context, not primary)
- White/no color: Minimal model attention
Best Practices
- Attention should focus on the identified defect
- If attention is scattered, the finding may be unreliable
- Compare attention to your own visual inspection
- Use unexpected attention patterns to identify model errors
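The platform renders attention overlays for you, but the warm/cool convention above can be reproduced with a standard heatmap overlay if you export the attention array. The sketch below is illustrative only; the array names, shapes, and colormap choice are assumptions, not MuVeraAI's implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: 'image' and 'attention' stand in for an inspection photo
# and the model's normalized attention map (values in [0, 1]).
rng = np.random.default_rng(0)
image = rng.random((256, 256))        # placeholder grayscale photo
attention = np.zeros((256, 256))
attention[100:140, 80:160] = 1.0      # pretend the model focused here

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")
# A 'jet'-style colormap maps low values to cool blues and high values to warm
# reds, matching the red = high attention / blue = low attention convention above.
ax.imshow(attention, cmap="jet", alpha=0.4)
ax.set_title("Attention overlay (warm = high attention)")
plt.show()
```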
Segmentation Masks
For defect detection, segmentation masks show exact boundaries.
How to Access
- Open image-based finding
- Click "Show segmentation" in viewer
- Color-coded overlay shows detected regions
Mask Categories
Different colors indicate different detection types:
- Red: Detected defects
- Yellow: Uncertain regions (borderline confidence)
- Blue: Reference/context regions
- Green: Normal condition baseline
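If you export a per-pixel class mask, the color coding above corresponds to a simple categorical overlay. The class codes and colors below are assumptions for illustration, not the platform's export schema.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Illustrative only: 'mask' stands in for an exported per-pixel class mask.
# Assumed class codes: 0 = background, 1 = defect (red), 2 = uncertain (yellow),
# 3 = reference/context (blue), 4 = normal baseline (green).
mask = np.zeros((128, 128), dtype=int)
mask[40:60, 30:90] = 1    # detected defect
mask[60:70, 30:90] = 2    # borderline-confidence fringe

cmap = ListedColormap(["white", "red", "yellow", "blue", "green"])
plt.imshow(mask, cmap=cmap, vmin=0, vmax=4)
plt.title("Segmentation mask by category")
plt.show()
```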
Confidence Contours
Confidence contours show certainty variation across images.
How to Access
- Enable "Confidence overlay" in image viewer settings
- Contour lines appear on the image
- Inner contours = higher confidence
Interpreting Contours
- Tight contours around defects indicate confident detection
- Wide or irregular contours suggest uncertainty
- Use contours to identify areas needing human review
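Conceptually, the contour lines trace level sets of a per-pixel confidence surface. The sketch below shows that idea on a synthetic surface; the data and variable names are placeholders, not a platform export.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: 'confidence' stands in for a per-pixel confidence surface
# associated with a detection (values in [0, 1]).
y, x = np.mgrid[0:128, 0:128]
confidence = np.exp(-((x - 70) ** 2 + (y - 50) ** 2) / (2 * 15.0 ** 2))

# Inner contour levels correspond to higher confidence, as described above.
cs = plt.contour(confidence, levels=[0.5, 0.7, 0.9], colors="black")
plt.clabel(cs, fmt="%.1f")
plt.title("Confidence contours (inner = higher confidence)")
plt.show()
```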
Understanding Numerical Explanations
For sensor data and numerical predictions, different explanation types apply.
Feature Importance
Feature importance shows which inputs most influenced the output.
How to Access
- View any numerical prediction or score
- Click "Feature importance" in the explanation panel
- Bar chart shows contribution of each input
Reading Feature Importance
- Bars extending right indicate positive contributions (increasing the score)
- Bars extending left indicate negative contributions
- Bar length indicates magnitude of contribution
- Features are ordered by importance
Example: Equipment Health Score
For an equipment health score of 73 out of 100:
- Vibration level: -12 (elevated vibration reduces score)
- Operating temperature: -8 (slightly high temperature)
- Run hours: -5 (accumulated wear)
- Last maintenance: +3 (recent maintenance helps)
- Oil condition: -5 (degraded lubrication)
This breakdown shows why the score is 73 rather than 100: the contributions sum to -27 against the 100-point baseline. A short arithmetic check follows below.
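Assuming the contributions are additive against the maximum score, as the example implies, a few lines verify the figure and reproduce the importance ordering. The values are copied from the example above; everything else is illustrative.

```python
# Additive feature contributions from the example above (values are illustrative).
baseline = 100
contributions = {
    "Vibration level": -12,
    "Operating temperature": -8,
    "Run hours": -5,
    "Last maintenance": +3,
    "Oil condition": -5,
}

score = baseline + sum(contributions.values())
print(f"Health score: {score}")  # 100 - 12 - 8 - 5 + 3 - 5 = 73

# Order features by magnitude of contribution, mirroring the bar chart ordering.
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:22s} {value:+d}")
```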
Threshold Explanations
When findings are based on threshold crossings, explanations show threshold context.
How to Access
- View threshold-based finding
- Click "Threshold details"
- See current value, threshold, and margin
Threshold Information Includes
- Current measured value
- Applicable threshold
- Margin above/below threshold
- Trend toward/away from threshold
- Historical threshold crossings
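The margin and trend fields above reduce to simple arithmetic on the current reading, a prior reading, and the threshold. The sketch below is a minimal example with made-up numbers; the variable names are assumptions, not the platform's export schema.

```python
# Illustrative only: values and variable names are assumptions.
current_value = 6.9     # latest measured value (e.g., vibration in mm/s)
previous_value = 6.5    # prior reading, used to judge the trend
threshold = 7.1         # applicable alert threshold

margin = current_value - threshold
print(f"Margin: {margin:+.1f} ({'above' if margin > 0 else 'below'} threshold)")

# Moving toward the threshold means the gap |value - threshold| is shrinking.
gap_now = abs(current_value - threshold)
gap_before = abs(previous_value - threshold)
trend = "toward" if gap_now < gap_before else "away from"
print(f"Trending {trend} the threshold")
```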
Trend Decomposition
For trend-based findings, decomposition shows trend components.
How to Access
- View trend-based finding
- Click "Decompose trend"
- See component breakdown
Trend Components
- Baseline: Long-term average level
- Trend: Gradual directional change
- Seasonal: Repeating patterns (daily, weekly, annual)
- Residual: Unexplained variation
- Events: Discrete changes from known events
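To make the components concrete, here is a minimal additive decomposition of a synthetic daily series with a weekly cycle. The platform's decomposition method may differ and also models discrete events, which are omitted here; the data and approach are assumptions for illustration.

```python
import numpy as np

# Synthetic daily sensor series: baseline 50, slow upward trend, weekly cycle, noise.
rng = np.random.default_rng(1)
days = np.arange(365)
series = 50 + 0.02 * days + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 1, 365)

period, half = 7, 3
# Centered moving average estimates the combined baseline + trend component.
trend = np.convolve(series, np.ones(period) / period, mode="valid")
core = slice(half, len(series) - half)          # align series with the valid window

# Average detrended values by day-of-week to estimate the seasonal component.
detrended = series[core] - trend
weekday = days[core] % period
seasonal = np.array([detrended[weekday == d].mean() for d in range(period)])[weekday]

residual = series[core] - trend - seasonal      # what the components leave unexplained
print(f"Estimated trend: {np.polyfit(days[core], trend, 1)[0]:.3f} units/day")
print(f"Residual std: {residual.std():.2f}")
```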
Generating Explanation Reports
For documentation and stakeholder communication, export explanation reports.
Finding-Level Reports
Export detailed explanation for individual findings.
How to Generate
- Open finding explanation panel
- Click "Export" button
- Select format (PDF, Word, HTML)
- Choose detail level (Summary, Standard, Detailed)
- Download or email report
Report Contents
- Finding summary and classification
- Evidence with visual and numerical explanations
- Confidence level and calibration
- Similar historical cases
- Model version and timestamp
Inspection-Level Reports
Generate comprehensive explanation for entire inspections.
How to Generate
- Open inspection summary view
- Click "Generate explanation report"
- Select which findings to include
- Choose format and detail level
- Generate and download
Portfolio Reports
For management and regulatory purposes, generate portfolio-level explanation reports.
How to Generate
- Navigate to Portfolio view
- Select assets or time period
- Click "Generate portfolio explanation"
- Configure report parameters
- Choose scheduled or immediate generation
Portfolio Report Contents
- Aggregate explanation of AI decision patterns
- Model performance summary
- Significant findings with explanations
- Trend explanations across portfolio
- Recommendation justifications
Configuring Explanation Preferences
Customize how explanations are presented.
User Preferences
Set your personal explanation preferences.
Navigation: Settings > User Preferences > Explanations
Configurable Options
- Default detail level: How much explanation to show by default
- Automatic display: Whether explanations open automatically
- Visualization preferences: Which visual explanation types to show
- Export defaults: Default format and settings for exports
Organization Settings
Administrators can configure organization-wide defaults.
Navigation: Admin > Settings > Explanation Configuration
Configurable Options
- Minimum explanation level: The baseline level of explanation required for every finding
- Mandatory documentation: Explanation requirements for specific finding types
- Retention policies: How long explanation data is retained
- Access controls: Who can access detailed explanations
Using Explanations for Quality Assurance
Explanations support quality assurance of AI findings.
Identifying Unreliable Findings
Use explanations to flag findings needing human review.
Warning Signs
- Scattered attention maps without clear focus
- Wide confidence contours
- Conflicting feature importances
- Similar cases that do not actually resemble the finding
- Alternative interpretations with similar confidence
QA Workflow
- Review automatically flagged low-confidence findings
- Examine explanations for coherence
- Compare to your own assessment
- Confirm, modify, or reject findings
- Provide feedback to improve future AI performance
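If finding metadata can be exported, the warning signs above can also be screened programmatically before human review. The heuristic below is only a sketch: the field names (`confidence`, `attention_entropy`, `alternatives`) and thresholds are assumptions, not MuVeraAI's schema or API.

```python
# Sketch of a pre-review screen based on the warning signs above.
def needs_human_review(finding: dict) -> bool:
    reasons = []
    if finding["confidence"] < 0.80:
        reasons.append("low confidence")
    if finding.get("attention_entropy", 0.0) > 0.7:      # scattered attention
        reasons.append("scattered attention")
    # Alternative interpretation nearly as confident as the primary finding.
    alternatives = finding.get("alternatives", [])
    if alternatives and max(alternatives) > finding["confidence"] - 0.05:
        reasons.append("competing interpretation")
    if reasons:
        print(f"Flag {finding['id']}: {', '.join(reasons)}")
    return bool(reasons)

needs_human_review({"id": "F-1042", "confidence": 0.78,
                    "attention_entropy": 0.82, "alternatives": [0.74]})
```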
Auditing AI Decisions
For regulatory or internal audits, explanations provide documentation.
Audit Trail
Every AI decision includes:
- Timestamp and model version
- Input data used
- Model output with confidence
- Explanation generated
- Any human review or override
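The audit-trail fields above map naturally onto a simple record. The dataclass below is only a sketch of what such a record might look like; the field names and example values are assumptions, not the platform's audit schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Sketch of an audit-trail record matching the fields listed above.
@dataclass
class AIDecisionRecord:
    timestamp: datetime
    model_version: str
    input_refs: list[str]                 # references to the input data used
    output: str                           # model finding or score
    confidence: float
    explanation_id: str                   # link to the generated explanation
    human_review: Optional[str] = None    # reviewer note, override, or confirmation

record = AIDecisionRecord(
    timestamp=datetime(2024, 5, 14, 9, 30),
    model_version="corrosion-v3.2",
    input_refs=["image-8841"],
    output="Severe corrosion, girder G4",
    confidence=0.93,
    explanation_id="exp-5512",
)
```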
Audit Report Generation
- Navigate to Audit > AI Decisions
- Select date range and decision types
- Generate audit report
- Export with full explanations
Understanding Model Confidence
Confidence scores require proper interpretation.
Confidence Calibration
MuVeraAI models are calibrated so confidence reflects accuracy.
What Calibration Means
- 90% confidence = approximately 90% of such predictions are correct
- Calibration verified through ongoing validation
- Calibration status shown in model information
Viewing Calibration Information
- Open any explanation panel
- Click confidence score
- View calibration context and historical accuracy
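Calibration can also be spot-checked independently if you can export predictions together with their eventual outcomes: within each confidence bin, observed accuracy should roughly match the stated confidence. The sketch below assumes such an export exists and uses simulated data in its place.

```python
import numpy as np

# Reliability check: per confidence bin, compare stated confidence to observed
# accuracy. The arrays are placeholders for exported (confidence, was_correct) pairs.
rng = np.random.default_rng(2)
confidence = rng.uniform(0.6, 1.0, 2000)
was_correct = rng.random(2000) < confidence     # simulate a well-calibrated model

bins = np.array([0.6, 0.7, 0.8, 0.9, 1.01])
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (confidence >= lo) & (confidence < hi)
    print(f"{lo:.0%}-{min(hi, 1.0):.0%}: "
          f"stated {confidence[in_bin].mean():.1%}, observed {was_correct[in_bin].mean():.1%}")
```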
Using Confidence Appropriately
Different confidence levels warrant different actions:
High Confidence (>95%)
- Model is highly certain
- Still verify critical findings
- Low false positive rate expected
Medium Confidence (80-95%)
- Model is reasonably certain
- Human review recommended for important findings
- Some false positives expected
Low Confidence (60-80%)
- Model is uncertain
- Human review strongly recommended
- Finding may be correct but needs validation
Very Low Confidence (<60%)
- Model is guessing
- Treat as hypothesis, not finding
- Investigate further before acting
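The bands above translate directly into a triage rule. The mapping below copies the band boundaries; the action wording is ours and is illustrative, not a platform setting.

```python
# Minimal triage rule mirroring the confidence bands above.
def recommended_action(confidence: float) -> str:
    if confidence > 0.95:
        return "Accept; still verify critical findings"
    if confidence >= 0.80:
        return "Human review recommended for important findings"
    if confidence >= 0.60:
        return "Human review strongly recommended"
    return "Treat as hypothesis; investigate before acting"

for c in (0.97, 0.88, 0.72, 0.45):
    print(f"{c:.0%}: {recommended_action(c)}")
```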
Troubleshooting Explanations
Common issues and solutions.
Missing Explanations
Problem: No explanation available for a finding.
Possible Causes
- Finding generated by legacy model version
- Explanation computation in progress
- System configuration issue
Solutions
- Wait for explanation processing to complete
- Check if model version supports explanations
- Contact support if persistent
Inconsistent Explanations
Problem: Explanation does not match the finding.
Possible Causes
- Display rendering issue
- Data synchronization delay
- Actual model error
Solutions
- Refresh the page
- Clear browser cache
- Report as potential model issue
Explanation Performance
Problem: Explanations load slowly.
Possible Causes
- Large image analysis
- Complex trend decomposition
- System under heavy load
Solutions
- Request summary explanation first
- Schedule detailed explanations for later
- Contact administrator about performance
Best Practices
Maximize value from explainability features.
For Inspectors
- Review explanations for all non-obvious findings
- Use visual explanations to validate image-based detection
- Compare attention maps to your own inspection
- Document disagreements with AI explanations
- Provide feedback to improve model accuracy
For Managers
- Include explanations in decision documentation
- Use portfolio explanations for trend understanding
- Ensure regulatory documentation requirements are met
- Monitor explanation quality as AI performance indicator
- Train staff on explanation interpretation
For Auditors
- Verify explanation completeness in audit trails
- Check calibration information for model validity
- Review sample of explanations for coherence
- Ensure retention policies meet requirements
- Document explanation review in audit reports
Conclusion
Explainability transforms AI from a black box into a transparent partner. By understanding how MuVeraAI reaches its conclusions, you can use AI findings with appropriate confidence, meet regulatory requirements, and continuously improve system performance through informed feedback.
The tools described in this guide make explanations accessible and actionable. Use them regularly to build the trust and understanding that enable AI to enhance rather than replace human expertise.
Learn More About MuVeraAI Explainability
MuVeraAI is designed for transparency, with explainability features built into every aspect of the platform. See our explainable AI capabilities in action.
Ready to see transparent AI in practice?
Schedule a Demo to explore MuVeraAI's explainability features with your own data.