Measuring Success
Traditional training measures completion. This framework measures growth.
Fluency Slope: The Key Metric
Fluency slope measures the rate of AI competency growth over time — not just where someone is, but how fast they’re moving.
A beginner who’s improving weekly is more valuable to the organization than an expert who plateaued six months ago. Slope captures this.
How It Works
Each week, learners answer three simple questions:
- What AI tools did you use this week? (breadth)
- What’s the most complex thing you used AI for? (depth)
- What did you try that didn’t work? (experimentation)
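To turn these check-ins into trend data, each response has to collapse to a single number. Here is a minimal sketch in Python, assuming responses are stored as structured records; the field names, the 1-to-5 complexity scale, and the scoring weights are illustrative assumptions, not part of the framework.

```python
from dataclasses import dataclass

@dataclass
class CheckIn:
    """One learner's weekly three-question response (illustrative schema)."""
    week: int                  # weeks since enrollment
    tools_used: list[str]      # breadth: which AI tools were used this week
    max_complexity: int        # depth: 1 (simple prompt) to 5 (multi-step workflow)
    experiments: list[str]     # experimentation: things tried that didn't work

def fluency_score(c: CheckIn) -> float:
    """Collapse one check-in to a 0-100 score. Weights are assumptions to tune."""
    breadth = min(len(c.tools_used), 5) / 5            # cap credit at 5 tools
    depth = c.max_complexity / 5
    experimentation = min(len(c.experiments), 3) / 3   # failed attempts count positively
    return 100 * (0.4 * breadth + 0.4 * depth + 0.2 * experimentation)
```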
These responses are tracked over time. The pattern reveals slope:
| Slope | Signal | Action |
|---|---|---|
| Steep positive | Rapid adoption, high engagement | Accelerate to next level, pair with others |
| Gradual positive | Steady growth, building habits | Continue current level, reinforce wins |
| Flat | Stalled — could be stuck or disengaged | Targeted intervention, adjust content |
| Declining | Regression — tools abandoned or frustration building | Direct support, identify blockers |
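As a concrete sketch, the slope itself can be a least-squares fit over weekly fluency scores (points gained per week); the bucket thresholds below are illustrative assumptions, not prescribed by the framework.

```python
import statistics

def fluency_slope(weekly_scores: list[float]) -> float:
    """Least-squares slope of weekly fluency scores, in points per week."""
    weeks = range(len(weekly_scores))
    return statistics.linear_regression(weeks, weekly_scores).slope

def classify(slope: float) -> str:
    """Map a slope onto the table above. Thresholds are assumptions."""
    if slope >= 5:
        return "steep positive"
    if slope >= 1:
        return "gradual positive"
    if slope > -1:
        return "flat"
    return "declining"

print(classify(fluency_slope([10, 25, 45, 60])))  # steep positive (17 points/week)
```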
Fluency Slope Visualization
```mermaid
xychart-beta
    title "Fluency Slope by Team (Example)"
    x-axis ["Month 1", "Month 2", "Month 3", "Month 4", "Month 5", "Month 6"]
    y-axis "Fluency Score" 0 --> 100
    line "Operations" [10, 25, 45, 60, 72, 80]
    line "Admin Team" [15, 22, 30, 38, 50, 55]
    line "IT / Engineering" [40, 50, 58, 65, 75, 85]
```
Organizational KPIs
Adoption Metrics
- Enrollment rate — % of target audience actively participating
- Level progression — Distribution across Beginner → Expert over time
- Time-to-capable — How quickly new learners reach the “Capable” minimum bar
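As a sketch of how two of these adoption numbers might be computed, assuming each learner's level is recorded per week; the schema and function names are assumptions.

```python
def enrollment_rate(enrolled: int, target_audience: int) -> float:
    """Percent of the target audience actively participating."""
    return 100 * enrolled / target_audience

def time_to_capable(level_by_week: dict[int, str]) -> int | None:
    """First week a learner reached at least "Capable"; None if not yet."""
    reached = [week for week, level in sorted(level_by_week.items())
               if level in ("Capable", "Proficient", "Expert")]
    return reached[0] if reached else None

print(time_to_capable({1: "Beginner", 2: "Beginner", 3: "Capable"}))  # 3
```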
Impact Metrics
- AI usage frequency — Self-reported weekly AI tool usage across teams
- Workflow integrations — Number of processes redesigned with AI (Proficient+ level)
- Time savings — Estimated hours recaptured through AI-assisted tasks
- Knowledge sharing — Cross-team exercise and pattern sharing activity
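Time savings is the softest of these figures because it rests on self-reported estimates. One way to keep the estimate honest is to make every input explicit, as in this sketch; all three parameters are assumptions to be revisited, not measured constants.

```python
def hours_recaptured_per_week(tasks_per_week: int,
                              minutes_saved_per_task: float,
                              ai_adoption_fraction: float) -> float:
    """Estimated weekly hours recaptured across a team's AI-assisted tasks."""
    return tasks_per_week * ai_adoption_fraction * minutes_saved_per_task / 60

print(hours_recaptured_per_week(200, 12, 0.5))  # 20.0 hours/week
```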
Quality Metrics
- Safety compliance — Zero data privacy incidents in AI tool usage
- Output verification rate — % of AI outputs that are fact-checked before use
- Incident reports — Tracked and resolved (a healthy number means people are reporting)
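The verification rate is a simple ratio, but it is worth guarding against the empty case; a minimal sketch:

```python
def verification_rate(outputs_checked: int, outputs_used: int) -> float:
    """Percent of AI outputs fact-checked before use; 0.0 if nothing used yet."""
    return 100 * outputs_checked / outputs_used if outputs_used else 0.0
```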
Measurement Cadence
| Timeframe | What We Measure | Who Reviews |
|---|---|---|
| Weekly | Fluency slope check-ins (3 questions) | Automated collection |
| Monthly | Level progression, adoption rates | Program administrators |
| Quarterly | Full KPI dashboard, content effectiveness | Leadership + AI agents |
| Annually | Organizational fluency benchmark, program ROI | Executive leadership |
AI Agent Role in Measurement
The measurement system feeds directly into the AI agent maintenance loop:
- Feedback Agent aggregates responses and identifies patterns (“Admin team stalling at Capable level — exercises may not match their workflow”)
- Content Review Agent uses these insights to prioritize updates (“Add more examples for ops roles”)
- Personalization Agent adjusts individual learning paths (“This learner completed Beginner quickly — suggest jumping to Capable exercises 3-5”)
This creates a closed loop: measure → identify gaps → adapt content → measure again. The program improves itself.
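One turn of that loop might look like the sketch below, assuming each agent reduces to a plain function; the signatures, the stalling threshold, and the insight strings are illustrative assumptions (the Personalization Agent is omitted for brevity).

```python
def feedback_agent(slope_by_team: dict[str, float]) -> list[str]:
    """Aggregate fluency slopes into pattern insights (stub: flag flat teams)."""
    return [f"{team} stalling, slope {slope:.1f}"
            for team, slope in slope_by_team.items() if slope < 1]

def content_review_agent(backlog: list[str], patterns: list[str]) -> list[str]:
    """Turn insights into content updates, prioritized ahead of the backlog (stub)."""
    return [f"Review exercises for: {p}" for p in patterns] + backlog

def run_cycle(slope_by_team: dict[str, float], backlog: list[str]) -> list[str]:
    """measure -> identify gaps -> adapt content; the next cycle measures again."""
    return content_review_agent(backlog, feedback_agent(slope_by_team))

print(run_cycle({"Admin": 0.4, "Ops": 6.2}, []))
# ['Review exercises for: Admin stalling, slope 0.4']
```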
Success Criteria
For a pilot program, these targets indicate the framework is working:
| Metric | 30-Day Target | 90-Day Target |
|---|---|---|
| Enrollment | 80% of pilot group | 100% of pilot group |
| Reached “Capable” | 40% | 75% |
| Positive fluency slope | 70% | 85% |
| AI used weekly | 50% | 70% |
| Safety incidents | 0 | 0 |
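A dashboard could evaluate these targets automatically. Here is a sketch of the 30-day check, assuming each percentage metric arrives as a number under an assumed key name; the keys and structure are illustrative.

```python
TARGETS_30_DAY = {"enrollment": 80, "reached_capable": 40,
                  "positive_slope": 70, "weekly_ai_use": 50}

def pilot_on_track(observed_pct: dict[str, float], safety_incidents: int) -> bool:
    """True when every 30-day percentage target is met and incidents are zero."""
    return safety_incidents == 0 and all(
        observed_pct.get(metric, 0) >= target
        for metric, target in TARGETS_30_DAY.items())

print(pilot_on_track({"enrollment": 85, "reached_capable": 45,
                      "positive_slope": 72, "weekly_ai_use": 55},
                     safety_incidents=0))  # True
```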