TL;DR
Workshops create awareness. Real capability comes from practice in context, timely feedback, and a system that forces repetition. If you want your team to become actually good with AI, stop running one-off trainings and build micro-practice loops, explicit success criteria, and low-friction feedback channels. This edition gives you a 30/60/90-day playbook, copy-paste templates, and metrics leaders can use tomorrow.
A quick note
Amazon just announced 30,000 layoffs. UPS cut 48,000. Both cited AI as a key driver, and it’s not stopping there.
If you were recently affected and want to pivot into an AI-related role, I want to help. I’m putting together a small effort to connect people with resources, mentors, and companies hiring for AI-skilled roles.
I’ve been in the AI space for 8+ years, worked with ML systems at Meta, founded an AI education non-profit that reached 70,000 people, and now run an AI testing platform where I see firsthand how companies are implementing AI and reshaping their approach to business.
If that sounds useful, you can fill out the form below. I’ll share what I learn as I help people navigate this shift.
The problem, short version
Every company runs workshops. Everyone feels optimistic for a week. Then old habits return.
Why? Because workshops are events. Work is a continuous stream of decisions. Events produce knowledge; systems produce capability.
Put simply:
Workshops = awareness spikes
Systems = behavior change over time
If you want capability, design a system.
The science (brief, useful)
Three cognitive truths that matter for any learning program:
The forgetting curve. People forget quickly unless the material is revisited and applied.
Desirable difficulty. Learning that feels hard — spaced, varied practice — produces durable skill.
Context-dependent memory. People recall and use what they practiced in the environment where the skill is needed.
Workshops ignore all three. They front-load information but provide little opportunity to practice in the actual workflows where the skills matter.
Diagnose: Are you confusing activity with progress?
Quick diagnostic — answer these for a program you run:
After two weeks, what percent of attendees use what they learned in their daily work? (use the real number, not a guess)
Can you point to three concrete decisions changed because of the workshop?
Do you have micro-tasks that force application within 48 hours?
Is there personalized feedback for each learner?
If any answer is “no” — you’re running events, not building capability.
What actually works — the practice-first framework
Build learning as a closed-loop system with five parts:
Trigger (context): The exact workflow moment where a skill is needed.
Micro-practice (do): Short, focused tasks done in the real tool or workflow.
Feedback (fix): Fast, specific, and tied to real outcomes.
Reflection (meta): A 5–10 minute async write-up that reinforces the lesson.
Spacing (repeat): Revisit the micro-practice at increasing intervals.
Repeat until behavior changes.
A simple 30/60/90-day playbook (copy-paste and run)
0–30 days — stop the leak
Replace one 90-minute workshop with three 15-minute micro-practices embedded in the actual workflow.
Add a “do-it-now” task to onboarding that every new hire completes within days 1–3.
Run a “skill triage”: pick 2 high-impact skills (e.g., prompt design for product, core reconciliation flow for finance) and build one micro-practice for each.
Micro-practice template
TITLE: <skill name — 1 line>
CONTEXT: <where this applies — e.g., drafting customer replies>
TASK (5–10 mins): <action — e.g., use the template to draft a reply for ticket X>
SUCCESS SIGNAL: <what counts as ‘good’ — e.g., reply accepted, score > 0.8>
FEEDBACK: <who or what will grade it — human / rubric / auto-check>
REPEAT: <when to revisit — day 3, day 10, day 30>
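If you plan to track these practices anywhere more structured than a doc (a spreadsheet export, a lightweight internal tool), the template maps directly onto data. A minimal sketch in Python, where the field names mirror the template above and the example values are purely illustrative:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class MicroPractice:
    """One micro-practice, mirroring the copy-paste template above."""
    title: str             # TITLE: skill name, one line
    context: str           # CONTEXT: where this applies
    task: str              # TASK (5-10 mins): the action to take
    success_signal: str    # SUCCESS SIGNAL: what counts as 'good'
    feedback_source: str   # FEEDBACK: human / rubric / auto-check
    repeat_offsets: list[int] = field(default_factory=lambda: [3, 10, 30])  # REPEAT: days after first attempt

    def schedule(self, start: date) -> list[date]:
        """Spaced-repetition dates for this practice."""
        return [start + timedelta(days=d) for d in self.repeat_offsets]


# Illustrative example only; the names and values are made up
reply_drafting = MicroPractice(
    title="Draft customer replies with the model",
    context="Support queue, tickets tagged 'billing'",
    task="Use the team prompt template to draft a reply for one real ticket",
    success_signal="Reply accepted by reviewer, rubric score of 2 or higher",
    feedback_source="Peer review against the prompt quality rubric",
)
print(reply_drafting.schedule(date.today()))  # repeat dates on days 3, 10, and 30
```

None of this requires engineering effort. The point is that a practice defined this concretely is one you can schedule, grade, and count.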
30–60 days — systemize feedback
Build fast feedback channels: peer review, short coach rounds, or automated scoring for structured tasks.
Run weekly “practice sprints” (30-min block) with calibration sessions: compare outputs, discuss differences, update rubric.
Start tracking transfer rate: % of learners who applied the skill within 7 days.
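Automated scoring can come later; transfer tracking should not. A minimal sketch, assuming each completion record carries the practice date and the date the skill was first applied to real work (the record fields here are hypothetical):

```python
from datetime import date

# Hypothetical records: one per learner per micro-practice.
# applied_on is None if the learner never used the skill on real work.
records = [
    {"learner": "a", "practiced_on": date(2025, 1, 6), "applied_on": date(2025, 1, 9)},
    {"learner": "b", "practiced_on": date(2025, 1, 6), "applied_on": None},
    {"learner": "c", "practiced_on": date(2025, 1, 7), "applied_on": date(2025, 1, 20)},
]


def transfer_rate(records, window_days=7):
    """% of learners who applied the skill within `window_days` of practicing."""
    applied = sum(
        1 for r in records
        if r["applied_on"] is not None
        and (r["applied_on"] - r["practiced_on"]).days <= window_days
    )
    return applied / len(records) * 100


print(f"Transfer rate: {transfer_rate(records):.0f}%")  # 33% for the sample data
```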
60–90 days — scale & measure impact
Add the practice tasks to role-based onboarding and performance checklists.
Measure business outcomes tied to practice (time saved, error reduction, response quality).
Iterate on the rubrics and expand to 2–3 more skills.
Two ready-to-use rubrics (copy-paste)
Prompt quality rubric (0–3) — quick human eval
0 — Unusable: irrelevant or dangerous
1 — Basic: answers prompt but lacks specificity or evidence
2 — Good: usable, cites context, minor edits required
3 — Excellent: precise, follows output contract, ready to deploy
Applied task rubric (0–4) — for structured micro-practice
0 — No attempt / wrong approach
1 — Attempted with major faults
2 — Completes task with guidance
3 — Completes independently, minor issues
4 — Exemplary — follows best practice and documents rationale
Two practical examples (not theoretical)
Example — Customer Support
Workshop: “How to use AI to draft replies.”
Practice-first rewrite: every rep receives 3 real tickets and drafts replies with the model; peers score the drafts using the prompt quality rubric; a coach adds 5 minutes of feedback. Repeat in 3 days with different ticket types. Measure the reduction in average response time and escalation rate.
Example — Product Discovery
Workshop: “Use AI to summarize user interviews.”
Practice-first rewrite: PMs must submit 2 summaries from real interviews within 48 hours and tag the insights in the team’s decision log. Feedback: the TL reviews each summary and marks whether the insight led to an experiment. Track the % of summaries that convert into experiments.
Metrics that prove you’re building capability (pick 3)
Transfer rate: % of participants who apply the skill within 7 days.
Retention of practice: % who complete repeated micro-practices at scheduled intervals.
Business conversion: % of practices that lead to measurable action (experiment launched, ticket resolved).
Quality lift: rubric score delta pre → post practice.
Time-to-autonomy: median days until a learner finishes a practice without feedback.
Measure these weekly. If transfer < 30% after 30 days, redesign your micro-practice.
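Quality lift and time-to-autonomy fall out of the same records. A minimal sketch with hypothetical field names, assuming you store each learner's first and latest rubric scores plus the day they first completed a practice without feedback:

```python
from statistics import median

# Hypothetical per-learner records accumulated over a few weeks of practice.
learners = [
    {"first_score": 1, "latest_score": 3, "days_to_unassisted": 12},
    {"first_score": 2, "latest_score": 2, "days_to_unassisted": 21},
    {"first_score": 0, "latest_score": 2, "days_to_unassisted": None},  # still needs feedback
]

# Quality lift: average rubric score delta, pre -> post practice
quality_lift = sum(l["latest_score"] - l["first_score"] for l in learners) / len(learners)

# Time-to-autonomy: median days until a learner finishes a practice without feedback
autonomous = [l["days_to_unassisted"] for l in learners if l["days_to_unassisted"] is not None]
time_to_autonomy = median(autonomous) if autonomous else None

print(f"Quality lift: +{quality_lift:.1f} rubric points on average")
print(f"Time-to-autonomy: {time_to_autonomy} days (median, among learners who got there)")
```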
Common traps (and what to do instead)
Trap: “Gamify it.” People don’t need leaderboards; they need usefulness.
Do: Tie practice to real decisions and outcomes.
Trap: “We’ll make a course.” Courses are libraries, not engines.
Do: Replace some course time with supervised in-work tasks.
Trap: “Certification = competence.” Certifications measure completion, not behavior.
Do: Measure transfer and impact, not completion rates.
Trap: “One-size-fits-all.” Learning speed varies.
Do: Scope micro-practice difficulty and allow adaptive pacing.
One short playbook for leaders (3 prompts)
Use these in your next meeting or planning doc.
Before a kickoff
Write a 2-line “how we will know this worked” statement. Define a single transfer metric we can measure within 14 days.
For L&D redesign
Replace one full-day training with three micro-practices. Each practice must be done inside the tool the team uses and have a one-sentence success signal.
Weekly standup prompt
One line: what practice did you complete this week and what changed in your work because of it?
Final note
Workshops are comforting. Systems are uncomfortable. If you want people who use AI well — design practice into their day.
Start small. Measure transfer. Iterate fast.
Build the muscle first — the outcomes will follow.
👉 If you found this issue useful, share it with a teammate or founder navigating AI adoption.
And subscribe to AI Ready for weekly lessons on how leaders are making AI real at scale.
Until next time,
Haroon
