I watched something shift last week. Not slowly, but suddenly.
Andrej Karpathy (ex-Tesla AI Director, OpenAI co-founder) posted about his workflow changing "overnight" after using Claude Code intensively. Not incrementally better. Fundamentally different.
When I first read his thread, I'll admit, I was skeptical. We've all heard the hype about AI coding tools. But this felt different. He wasn't talking about auto-complete getting better. He was describing a complete paradigm shift in how he thinks about building software.
Here's what caught my attention: he's stopped telling the AI how to code. He's telling it what to achieve.
I've been playing with this approach for a few weeks now, and I can tell you: it changes everything about how you work with AI. Let me show you what I mean.
Quick ask
I'm making some big decisions about AI Ready Newsletter, and it would be great if you could answer some quick questions.
Here's what I'm trying to figure out:
Should we focus more on tactical guides or industry news?
How often do you actually want to hear from me?
What topics matter most to you right now?
Your feedback directly shapes what I create. Excited to hear from you guys.
The Imperative → Declarative Shift
You've probably written code like this (I know I have):
Imperative (the old way):
Read the CSV file.
Skip the header row.
Loop through each row.
Split by comma.
Take column 3 for age.
Take column 5 for city.
Filter where age > 30 AND city == "Beijing".

Declarative (what's happening now):

Find all users over 30 who live in Beijing from this CSV file.

The first is how. The second is what. Clear difference, right?
With AI agents that can loop, test, and iterate autonomously, declarative wins every time. So you’re basically defining the success criteria and letting the AI figure out the path.
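To make the contrast concrete, here's that imperative recipe as runnable Python next to the declarative prompt you'd hand an agent instead. The column positions come from the steps above; the file name and success criteria are my own illustrative assumptions.

```python
import csv

# Imperative: you spell out every step of the "how" yourself.
def users_over_30_in_beijing(path: str) -> list[list[str]]:
    results = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)                  # skip the header row
        for row in reader:
            age = int(row[3])         # column 3 holds age, per the steps above
            city = row[5]             # column 5 holds city
            if age > 30 and city == "Beijing":
                results.append(row)
    return results

# Declarative: you hand an agent the goal and the success criteria instead.
PROMPT = """Find all users over 30 who live in Beijing in users.csv.
Success: return the matching rows; skip rows with a missing or non-numeric age."""
```

Notice the imperative version silently crashes on a missing age; the declarative version names that edge case as a success criterion and lets the agent handle it.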
The first time I tried this, it felt wrong. Like I was being lazy or not doing "real work." But then I watched the AI loop through three different implementations, test each one, and arrive at something better than what I would've written manually. In a fraction of the time.
Karpathy's insight: "Change your approach from imperative to declarative to get the agents looping longer and gain leverage."
A Library With Zero Code
This leads somewhere wild.
Last week, a developer named Drew Breunig released something that made me stop and think: whenwords, a time-formatting library that turns timestamps into strings like "3 hours ago."
Here's the kicker: it contains zero code.
When I first heard about this, I thought it was a joke. A library without code? What's the point?
But then I looked at what it actually contains:
SPEC.md - Detailed behavior specification
tests.yaml - 125 test cases as input/output pairs
INSTALL.md - Installation instructions
The installation instructions are comically simple. Literally just:
Implement the whenwords library in [LANGUAGE].
1. Read SPEC.md
2. Parse tests.yaml and generate tests
3. Implement the five functions
4. Run tests until all pass
5. Place implementation in [LOCATION]

I tried it myself. Pasted that prompt into Claude. It worked. First try.
It works in Ruby, Python, Rust, Elixir, Swift, PHP, Bash, and even Excel formulas.
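The real spec defines five functions I won't reproduce here, but to give a feel for the spec-plus-tests pattern, here's a minimal sketch in Python: a handful of input/output pairs standing in for tests.yaml, and an implementation written only to satisfy them. The function name, thresholds, and cases are my own assumptions, not the whenwords spec.

```python
# Hypothetical input/output pairs in the spirit of tests.yaml:
# elapsed seconds -> expected phrase. Thresholds are illustrative, not the real spec.
CASES = {
    30: "just now",
    90: "1 minute ago",
    3 * 3600: "3 hours ago",
    2 * 86400: "2 days ago",
}

def time_ago(seconds: float) -> str:
    """Turn an elapsed duration in seconds into a rough human phrase."""
    for unit, size in (("day", 86400), ("hour", 3600), ("minute", 60)):
        count = int(seconds // size)
        if count >= 1:
            return f"{count} {unit}{'s' if count != 1 else ''} ago"
    return "just now"

# Step 4 of INSTALL.md in miniature: run the tests until they all pass.
for secs, expected in CASES.items():
    assert time_ago(secs) == expected, (secs, time_ago(secs), expected)
print("all cases pass")
```

An agent given SPEC.md and tests.yaml loops on exactly this cycle, just with 125 cases instead of four.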
Why does this matter?
Because we're approaching a future where the specification is the product. The code is just one instantiation of it, generated on demand for your specific language, framework, and environment.
That's not a theoretical future. That's happening right now.
What This Actually Means
The shift:
From: Writing code
To: Defining what success looks like
Think about it: you don't worry about how your Python code ends up running as machine instructions. That abstraction layer is invisible to you.
Soon, writing the code itself becomes the invisible layer. You'll think about:
What problem am I solving?
What are the success criteria?
What are the edge cases?
How do I test this?
The AI handles translating that into working code.
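One way to practice that shift today is to put those four questions into code before any implementation exists. A minimal sketch, assuming a hypothetical slugify function and success criteria of my own choosing:

```python
# Spec-as-code: state the "what" (behavior and edge cases) and leave the "how" empty.
# The function name and criteria below are illustrative, not from any real library.
def slugify(title: str) -> str:
    """Turn an article title into a URL slug.

    Success criteria:
    - "Hello, World!" -> "hello-world"
    - Whitespace runs collapse to a single hyphen; leading/trailing whitespace is stripped.
    Edge cases:
    - Empty string -> "" (not an error).
    """
    raise NotImplementedError("the 'how' is delegated to the agent")

# Tests encode the success criteria; the agent iterates until they pass.
import pytest  # assumed available

@pytest.mark.parametrize("title, slug", [
    ("Hello, World!", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("", ""),
])
def test_slugify(title, slug):
    assert slugify(title) == slug
```

You wrote zero implementation there, but you did the part that actually requires judgment.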
I've been testing this with my team for the past few weeks. The productivity gains are real, but they require a mental shift. You have to let go of thinking in implementation details and start thinking in outcomes.
It's harder than it sounds. But once you get it, you don't want to go back.
When You Still Need Real Code
Now, before you go delete all your existing codebases (don't), spec-only development isn't a silver bullet.
Drew identifies five reasons you still want actual code libraries, and honestly, I think he's right on all of them:
1. When performance matters. Browsers, databases, real-time systems. You need humans obsessively optimizing every byte and millisecond. I'm not running my production database on AI-generated code. Not yet, anyway.
2. When testing gets complicated. A spec change that fixes Elixir might break Ruby. Do you test against 20 languages × 4 AI models? That's 80 implementations to validate. Good luck with that CI/CD pipeline.
3. When you need support. If a customer's AI-generated codebase breaks, how do you debug it? The probabilistic nature of AI means their code and your code might be meaningfully different. I've hit this problem already. It's annoying.
4. When updates matter. Security patches, bug fixes, new features: spec-only libraries work best for "implement and forget" utilities, not foundations you actively maintain. Your authentication library? Probably still want that as actual code.
5. When community matters. Open source is more than code. It's people, culture, collaboration. The magic is in the community that forms around a shared goal. You can't spec your way into that kind of ecosystem.
What to Do Today
Here's what this means for you right now, based on what I've learned:
Start thinking declaratively with AI tools.
Next time you're using Claude, Cursor, or any AI coding assistant, try this:
❌ Don't: "Create a function that loops through this array, checks if each item meets condition X, then maps it to format Y..."
✅ Do: "I need to transform this user data into a dashboard-ready format. Success means: all users with active subscriptions, sorted by lifetime value, with their last login date formatted as 'X days ago'."
Then add: "Write tests first. Make sure it handles edge cases like missing data and timezone differences."
The AI will figure out the how. You focus on the what.
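For a sense of where that prompt converges, here's one plausible implementation an agent might land on. The field names (active, lifetime_value, last_login) and the assumption that last_login is a timezone-aware datetime are mine, not part of the prompt.

```python
from datetime import datetime, timezone

def dashboard_rows(users: list[dict]) -> list[dict]:
    """Active subscribers, sorted by lifetime value, with a human-readable last login."""
    now = datetime.now(timezone.utc)
    rows = [dict(u) for u in users if u.get("active")]  # active subscriptions only; copy, don't mutate
    rows.sort(key=lambda u: u["lifetime_value"], reverse=True)
    for u in rows:
        last = u.get("last_login")                      # edge case: missing data
        u["last_seen"] = "never" if last is None else f"{(now - last).days} days ago"
    return rows
```

Notice how the edge cases from the prompt show up directly: the None check covers missing data, and comparing timezone-aware datetimes in UTC sidesteps timezone drift.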
I've been doing this for two weeks. My velocity is up, but more importantly, I'm spending more time on architecture and less time on implementation details. That's where I should be spending my time anyway.
The Three Software Eras
We're moving through three paradigms:
Software 1.0: Humans write explicit instructions (traditional code)
Software 2.0: Humans curate data, optimizers generate neural network weights (machine learning era)
Software 3.0: Humans write specs in natural language, AI generates the code (now)
Each layer abstracts the previous one. Each makes the "how" less relevant and the "what" more critical.
The skill that matters isn't "knowing how to write a binary search algorithm." It's "knowing what problem needs solving and how to articulate success criteria."
That's a fundamentally different skillset. And honestly? It's more accessible. If you can think clearly and write clearly, you can build software.
I've watched this happen in real-time with non-technical people on my team. They're building internal tools now. Not prototypes, but actual production tools. That wouldn't have been possible six months ago.
The Open Questions
This paradigm shift raises questions I don't have answers to yet:
How do we handle versioning when the "product" is a spec, not code?
What happens to code review culture when there's no code to review?
Do we need new tools for "spec review" instead?
How do security audits work when implementations are ephemeral?
I'll be exploring these in future issues. If you're thinking about this stuff too, hit reply. I genuinely want to hear what you're seeing in your own work.
Quick ask: Have you used Claude Code, Cursor, or similar tools yet?
Reply with:
YES - and tell me what surprised you most
NO - but I'm curious
NO - and I'm skeptical
I'll share the most interesting responses on Thursday.
👉 If you found this issue useful, share it with a teammate or founder navigating AI adoption.
And subscribe to AI Ready for weekly lessons on how leaders are making AI real at scale.
Until next time,
Haroon
