Listen now on YouTube, Apple Podcasts, or Spotify.

👤 Guest Bio

Elie Schoppik leads Technical Education at Anthropic, building Claude Code’s learning-first workflows and Anthropic Academy to teach people how to verify AI outputs (not just consume them).

Before Anthropic, he built early ed-tech products and founded Rithm School, a Bay Area coding bootcamp that trained thousands of developers. Elie’s work sits at the intersection of engineering and pedagogy: designing exercises, evals, and prototype workflows that teach verification, build durable judgment, and make AI safe and useful in the real world.

PRESENTED BY AUTOSKILLS

Take your team from AI-curious to AI-ready in days, not months

Autoskills helps teams go from AI-curious to AI-ready with:

→ AI acceleration sprints
→ Fractional AI automation engineers to build AI workflows
→ Custom AI transformations

Teams that work with Autoskills cut hours of repetitive work, identify high-ROI use cases, and leave with the confidence (and playbook) to scale AI responsibly.

Limited to 3 clients per quarter: book a free AI Readiness Audit today!

🎙 Episode Intro

When ChatGPT landed, “fast answers” replaced judgment for many learners, creating a gap between getting an answer and knowing if it’s correct. Elie’s work is about closing that gap: building workflows where AI is the teacher’s assistant, not the oracle. He gives us practical rules, concrete design patterns (examples-first prompts, end-state screenshots, evals), and ready-to-run exercises every team can use to build AI fluency.

What’s Covered

  • (02:29) Elie’s background and leading technical education at Anthropic

  • (03:41) How AI education differs from coding education

  • (07:15) Anthropic’s products: Claude Code, MCP, and model actions

  • (17:36) Why examples and context matter in prompting

  • (18:45) Context engineering: narrowing tasks to get better model output

  • (19:59) Core topics to study beyond tool use

  • (32:26) Role of education in helping companies join the “successful 5%”

  • (35:37) Hackathons as a catalyst for AI adoption in organizations

  • (44:59) Dopamine hits, quick wins, and why they matter in AI learning

  • (49:03) Can mastering AI tools reduce “AI anxiety”?

  • (50:47) Balancing speed vs. responsibility in an AGI timeline

  • (52:29) Anthropic Academy resources for technical + non-technical teams

  • (55:17) What excites Elie most about the future of AI learning

  • (56:03) How parents can prepare kids for an AI-driven future

💡 Key Takeaways

  • Teach verification, not shortcuts. Build learning modes that require users to answer follow-ups, explain reasoning, or run checks. Don’t let the model hand over the whole answer.

  • Start with the building blocks. Teach high-level context engineering (MCP, data connectors, subagents) so teams understand tradeoffs even if they never run a server.

  • Prototype with an end state. Give the model concrete examples or a screenshot of “what good looks like” and iterate. It’s often faster and more reliable than vague instructions.

  • Ship evals early. Define unit tests/evals for any LLM workflow to avoid brittle deployments and false confidence (a minimal eval sketch follows this list).

  • Short sprints beat courses. Pair concentrated, hands-on learning with immediate application to capture momentum and build confidence.

  • Make the loop self-improving. Use Claude Code + Claude to generate assessments (quizzes), evaluate responses, and refine the learning flow — that closed loop is a force multiplier for fluency.
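
To make the evals takeaway concrete, here is a minimal sketch of what “unit tests for an LLM task” can look like, using the Anthropic Python SDK. The model name, prompt, and assertions are illustrative placeholders, not a prescribed setup.

```python
# Minimal eval sketch: treat LLM outputs like any other code under test.
# Assumes the `anthropic` Python SDK; the model name, prompt, and checks
# below are illustrative placeholders, not a prescribed setup.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize(email: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Summarize this customer email in one sentence:\n\n{email}",
        }],
    )
    return response.content[0].text

# Each eval is a plain assertion over a known input: cheap, fast, and honest.
EMAIL = "Hi, my order #4521 arrived damaged. I'd like a replacement, not a refund."

def test_mentions_order_number():
    assert "4521" in summarize(EMAIL)

def test_stays_short():
    assert len(summarize(EMAIL).split()) < 40

def test_captures_intent():
    assert "replacement" in summarize(EMAIL).lower()
```

Run it with pytest; if any assertion is flaky, that flakiness is signal about your prompt, not noise to suppress.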

Exercises you can run this week

  • 90-minute “Play Sprint” — pick a painful recurring task, give Claude a single “end state” example, and iterate until you get a working prototype. (Use Claude Code for integrations.)

  • Eval the evals (45 min) — for one LLM task (e.g., summarizing customer emails), write 5 unit tests that define success/failure and run them against model outputs. If none exist, you’re building on a house of cards.

  • Make-a-quiz loop (60 min) — ask Claude to create a short quiz on an internal topic, have Claude Code assemble context (Slack, Drive snippets), run the quiz with a colleague, and let Claude grade + explain mistakes (a sketch of this generate-and-grade loop follows this list).
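
Here is a minimal sketch of that make-a-quiz loop, again assuming the Anthropic Python SDK. The model name, context file, and prompts are hypothetical stand-ins for your own setup.

```python
# Sketch of the make-a-quiz loop: one Claude call writes the quiz,
# a second call grades the answers and explains mistakes.
# Assumes the `anthropic` SDK; the model name, context file, and
# prompts are hypothetical stand-ins for your own setup.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder model name

def ask(prompt: str) -> str:
    response = client.messages.create(
        model=MODEL,
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# 1. Generate a short quiz from whatever internal context you've assembled.
context = open("onboarding_notes.md").read()  # hypothetical context file
quiz = ask(f"Write a 5-question quiz on this material:\n\n{context}")
print(quiz)

# 2. A colleague answers (collected however you like).
answers = input("Paste your answers:\n")

# 3. Claude grades and explains, closing the loop.
print(ask(
    "Grade these answers against this quiz. For each mistake, explain "
    f"the underlying misconception.\n\nQUIZ:\n{quiz}\n\nANSWERS:\n{answers}"
))
```

In practice you’d have Claude Code gather the context (Slack threads, Drive snippets) instead of a static file; the grading step is where the “teach verification, not shortcuts” principle lives.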

📚 References & resources

  • Anthropic: Research lab building safer, more helpful LLMs (helps you rely on higher-quality assistant outputs).

  • Anthropic Academy (courses & labs): Free hands-on courses and labs to build AI fluency (helps teams get practical, job-ready skills fast).

  • Model Context Protocol (MCP): Open standard for wiring models to data/tools (Gmail, Drive, Slack). Powers reliable model ↔ data integrations (helps you connect LLMs to real systems safely; a toy server sketch follows this list).

  • Claude Code / Claude desktop: Agentic coding tooling for prototyping and executing tasks (helps teams prototype automations and integrate with existing stacks).

  • Constitutional AI (research paper, 2022): Training method that uses a “constitution” to make outputs helpful, honest, and harmless (helps you align model behavior to safety goals).
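
For anyone curious what “wiring models to tools” looks like in practice, here is a toy MCP server sketch, assuming the official `mcp` Python SDK’s FastMCP helper. The server name, tool, and data are stubs for illustration; a real integration would connect to your actual systems.

```python
# Toy MCP server: a minimal sketch of exposing a tool over the open
# Model Context Protocol standard. Assumes the official `mcp` Python SDK;
# the server name, tool, and data below are illustrative stubs.
from mcp.server.fastmcp import FastMCP

server = FastMCP("order-lookup")  # hypothetical server name

@server.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed; a real server would query your DB)."""
    fake_db = {"4521": "replacement shipped"}
    return fake_db.get(order_id, "order not found")

if __name__ == "__main__":
    server.run()  # speaks MCP over stdio, so a client like Claude Code can call the tool
```

This is the building-blocks point from the takeaways: even teams that never run a server benefit from seeing how small the surface area is.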

🔗 Where to Find Elie

👉 If you found this episode useful, share it with a teammate or founder navigating AI adoption.

And subscribe to AI Ready for weekly lessons on how leaders are making AI real at scale.

Until next time,
Haroon
