Measuring Success

Traditional training measures completion. This framework measures growth.

Fluency Slope: The Key Metric

Fluency slope measures the rate of AI competency growth over time — not just where someone is, but how fast they’re moving.

A beginner who’s improving weekly is more valuable to the organization than an expert who plateaued six months ago. Slope captures this.

How It Works

Learners periodically answer three simple questions:

  1. What AI tools did you use this week? (breadth)
  2. What’s the most complex thing you used AI for? (depth)
  3. What did you try that didn’t work? (experimentation)

These responses are tracked over time. The pattern reveals slope:

| Slope | Signal | Action |
|---|---|---|
| Steep positive | Rapid adoption, high engagement | Accelerate to next level; pair with others |
| Gradual positive | Steady growth, building habits | Continue current level; reinforce wins |
| Flat | Stalled: could be stuck or disengaged | Targeted intervention; adjust content |
| Declining | Regression: tools abandoned or frustration | Direct support; identify blockers |
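The slope can be estimated with an ordinary least-squares fit over a learner's check-in scores and bucketed into the categories in the table above. A minimal sketch, assuming the weekly answers have already been scored onto a 0-100 fluency scale upstream; the threshold values here are illustrative, not part of the framework:

```python
def fluency_slope(scores: list[float]) -> float:
    """Least-squares slope of weekly fluency scores, in points per week."""
    n = len(scores)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def classify(slope: float) -> str:
    """Map a slope to an action category (thresholds are illustrative)."""
    if slope >= 5:
        return "steep positive"
    if slope > 1:
        return "gradual positive"
    if slope >= -1:
        return "flat"
    return "declining"
```

With four weeks of scores like `[10, 25, 45, 60]`, the fit yields a slope of 17 points per week, which falls in the "steep positive" bucket.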

Fluency Slope Visualization

```mermaid
xychart-beta
    title "Fluency Slope by Team (Example)"
    x-axis ["Month 1", "Month 2", "Month 3", "Month 4", "Month 5", "Month 6"]
    y-axis "Fluency Score" 0 --> 100
    line "Operations" [10, 25, 45, 60, 72, 80]
    line "Admin Team" [15, 22, 30, 38, 50, 55]
    line "IT / Engineering" [40, 50, 58, 65, 75, 85]
```

Organizational KPIs

Adoption Metrics

Impact Metrics

Quality Metrics

Measurement Cadence

| Timeframe | What We Measure | Who Reviews |
|---|---|---|
| Weekly | Fluency slope check-ins (3 questions) | Automated collection |
| Monthly | Level progression, adoption rates | Program administrators |
| Quarterly | Full KPI dashboard, content effectiveness | Leadership + AI agents |
| Annually | Organizational fluency benchmark, program ROI | Executive leadership |

AI Agent Role in Measurement

The measurement system feeds directly into the AI agent maintenance loop:

  1. Feedback Agent aggregates responses and identifies patterns (“Admin team stalling at Capable level — exercises may not match their workflow”)
  2. Content Review Agent uses these insights to prioritize updates (“Add more examples for ops roles”)
  3. Personalization Agent adjusts individual learning paths (“This learner completed Beginner quickly — suggest jumping to Capable exercises 3-5”)

This creates a closed loop: measure → identify gaps → adapt content → measure again. The program improves itself.
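As a sketch, the first two steps of that loop might be wired together as below. The `CheckIn` record, the stall threshold, and the function names are illustrative assumptions, not the framework's actual agent interfaces:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CheckIn:
    team: str
    week: int
    score: float  # 0-100 fluency score derived from the weekly questions

def identify_gaps(checkins: list[CheckIn], min_gain: float = 1.0) -> list[str]:
    """Feedback-agent step: flag teams whose latest week-over-week gain stalled."""
    by_team: dict[str, dict[int, list[float]]] = defaultdict(lambda: defaultdict(list))
    for c in checkins:
        by_team[c.team][c.week].append(c.score)
    stalled = []
    for team, weeks in by_team.items():
        ordered = sorted(weeks)
        if len(ordered) < 2:
            continue
        prev = sum(weeks[ordered[-2]]) / len(weeks[ordered[-2]])
        last = sum(weeks[ordered[-1]]) / len(weeks[ordered[-1]])
        if last - prev < min_gain:
            stalled.append(team)
    return stalled

def prioritize_updates(stalled_teams: list[str]) -> list[str]:
    """Content-review-agent step: turn flagged teams into content update tasks."""
    return [f"Review exercises for {team}: add role-specific examples" for team in stalled_teams]
```

The third step, personalization, would consume the same check-in records per learner rather than per team.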

Success Criteria

For a pilot program, these targets indicate the framework is working:

| Metric | 30-Day Target | 90-Day Target |
|---|---|---|
| Enrollment | 80% of pilot group | 100% of pilot group |
| Reached “Capable” | 40% | 75% |
| Positive fluency slope | 70% | 85% |
| AI used weekly | 50% | 70% |
| Safety incidents | 0 | 0 |
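Checking pilot results against these targets is easy to automate. A minimal sketch, with the 30-day rates taken from the table above; the metric key names are illustrative assumptions:

```python
# 30-day targets from the success-criteria table, expressed as rates.
TARGETS_30_DAY = {
    "enrollment": 0.80,
    "reached_capable": 0.40,
    "positive_slope": 0.70,
    "ai_used_weekly": 0.50,
}

def unmet_targets(observed: dict[str, float], targets: dict[str, float]) -> list[str]:
    """Return the metrics whose observed rate falls short of the target."""
    return [m for m, t in targets.items() if observed.get(m, 0.0) < t]
```

For example, a pilot at 85% enrollment but only 35% at Capable would come back with `["reached_capable"]` as the single unmet 30-day target. Safety incidents are a hard zero rather than a rate, so they would be checked separately.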

← Back: Safety Guidelines | Next: Implementation →