OpenAI CEO Sam Altman once asked: What happens when you tell an AI to make you a business and it simply does everything?

This week, we’ll talk about the growing number of solo founders who are quietly building real businesses with AI handling the execution while they sleep. At the same time, Anthropic published a labor market report that reframes the entire jobs conversation.

Let’s get into all of it.

The Solo Founder Is No Longer a Metaphor

For a long time, "solo founder" meant one person wearing every hat, burning out slowly, until they could afford to hire. We’re seeing that definition being rewritten in real time.

In early 2026, two experiments caught my attention. Ben Cera launched Polsia in late 2025 — one person, no team. By early March 2026, Polsia was autonomously running over 1,000 companies (now more than 3,000), handling engineering, marketing, support, and daily operations, while hitting a $1M run rate in roughly two weeks.

Nat Eliason took it further. He gave an AI agent named Felix a $1,000 seed budget and a simple instruction: build a zero-human company and make a million dollars. Thirty days later, Felix had generated $80,000 in revenue across three products, running 24/7 on roughly $400 a month in infrastructure costs. Neither was a prototype; both are generating real money.

While the mechanics may differ, the architecture is pretty much the same. Cera's Polsia provisions a full business environment on signup — GitHub, web server, database, Stripe, Meta ads — and sends the founder a daily email with what the AI did, what it's planning, and where it needs direction. Eliason's Felix runs nightly self-improvement sessions, identifies one bottleneck, fixes it, and files the change to memory.

When support volume spiked, Felix hired a subordinate AI agent to handle it. When sales got complex, Felix calculated that a human salesperson would cost $150K a year for minimal net gain and decided against it.
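Eliason describes Felix's nightly routine as: measure, find the single worst bottleneck, fix it, and file the change to memory. Here is a minimal sketch of that loop in Python — the class, metric names, and the stand-in "fix" are my assumptions, not Felix's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical Felix-style agent: nightly review picks one bottleneck."""
    metrics: dict          # metric name -> score (higher is better)
    memory: list = field(default_factory=list)

    def nightly_review(self) -> str:
        # Identify the single weakest metric -- the "one bottleneck".
        bottleneck = min(self.metrics, key=self.metrics.get)
        fix = f"improve {bottleneck}"
        self.metrics[bottleneck] += 0.1   # stand-in for the actual fix
        self.memory.append(fix)           # file the change to memory
        return fix

agent = Agent(metrics={"conversion": 0.2, "support_response": 0.7})
print(agent.nightly_review())  # -> "improve conversion"
```

The key design choice is the constraint: one bottleneck per night, logged to persistent memory, rather than open-ended self-modification.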

Here are some points to note:

  • The bottleneck has shifted from resources to judgment. Starting a company used to require capital, a team, and technical skills. Both Polsia and Felix reduce all three to near-zero. Polsia charges $49 a month. Felix runs on $400 a month in API and hosting costs. What neither can provision is an idea worth building or the taste to know when something is good. That judgment is now the only meaningful barrier to entry, and it's a fundamentally different bar than the one that existed two years ago.

  • Cera's "surprise me" feature, where the AI researches the founder and proposes a business idea, has a 30-40% adoption rate. Users who let the AI pick the idea often see better outcomes because they don't micromanage the execution.

  • Scope is the variable that determines how much you can trust an AI team. Eliason doesn't give Felix open-ended autonomy. Felix manages subordinate agents, checks on them every 15 minutes, and runs structured nightly reviews. Cera scopes his agents tightly: they handle only bugs below a certain severity threshold, never touch payments or onboarding, and every change is double-checked before shipping. The question is no longer whether AI can be trusted. It's whether the boundaries are defined clearly enough that trust becomes a secondary concern.

  • Distribution is still the hard part, and it's still human. Both experiments relied on the founder's existing audience to generate early traction. Eliason's viral X posts drove organic leads to Felix. Cera is explicit that Polsia works best for founders who bring their own reach. A purely algorithmic system has to buy attention through ads, which compresses margins. Humans with existing networks can market at near-zero cost. The AI handles everything after the first customer. Getting that first customer is still on you.

  • This is a preview of how larger organizations will restructure. The Polsia and Felix models scale down to one person. The same logic scales up to a 50-person company deciding it only needs 10. The 80/20 framework (80% AI execution, 20% human judgment) is increasingly proving to be an operational thesis. Identify what genuinely requires human judgment, and let AI handle everything else.
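The scoping rules described above can be thought of as a simple policy gate: low-severity changes outside protected areas ship after an automated check, and everything else escalates to a human. This is a hypothetical sketch — the area names, severity scale, and routing labels are assumptions, not Polsia's actual configuration:

```python
from dataclasses import dataclass

# Assumed policy: agents never touch payments or onboarding, and
# anything above a low severity threshold goes to human review.
PROTECTED_AREAS = {"payments", "onboarding"}
MAX_AUTO_SEVERITY = 2   # 1 = cosmetic ... 5 = critical

@dataclass
class Change:
    area: str
    severity: int

def route(change: Change) -> str:
    if change.area in PROTECTED_AREAS:
        return "human_review"        # protected area: always escalate
    if change.severity > MAX_AUTO_SEVERITY:
        return "human_review"        # too severe to auto-ship
    return "auto_ship_after_check"   # low-risk: double-check, then ship

print(route(Change("marketing", 1)))  # -> auto_ship_after_check
print(route(Change("payments", 1)))   # -> human_review
```

The point of a gate like this is that trust stops being a debate about the model and becomes a property of the boundaries you wrote down.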

What This Means For You:

If you run a team, the Cera model isn't a threat to your business. It's a preview of your internal operations in two to three years. The companies that figure out how to operate with AI handling execution while humans handle judgment will run circles around the ones still debating which tools to pilot.

You don't need to build what Cera built. But the question his story forces is worth sitting with: what in your organization could an agent be doing right now, while your team focuses on the work that actually requires a human?

The leverage available today is not evenly distributed. The founders and operators who are actively experimenting (actually deploying, actually failing, actually iterating) are building intuitions that don't come from reading about it. The window to get ahead of this curve is still open, but it's not as wide as it was a year ago.

Hire AI teammates that work 24/7. Securely.

If you're reading the stories above and thinking about deploying OpenClaw agents for your own team or business, you're going to hit the same wall everyone does: security.

We built Shellbox to fix that. Dedicated AI employees that handle real work across email, Slack, CRM, and code, inside a zero-trust perimeter you control. Each agent runs in its own isolated environment. Credentials never touch the agent. Full audit trail on every action. Instant kill switch if anything goes sideways.

Deployed in days, not months. Powered by OpenClaw. Already running at companies where the security bar is high.

  • Anthropic published a labor market report. Entry-level hiring is freezing.
    Anthropic economists tracked actual Claude usage across 800+ occupations and found that AI can theoretically handle 94% of tasks in computer and math roles, but is currently doing about 33%. More alarming: hiring of workers aged 22-25 in high-exposure fields has already dropped 14% since ChatGPT launched.

  • Morgan Stanley, eBay, and Oracle announce cuts
    Morgan Stanley cut 2,500 employees across all three of its major divisions. eBay eliminated 800 roles, about 6% of its workforce. Oracle is reportedly planning to cut thousands to free up cash for its AI datacenter buildout. Through February, US employers announced 156,742 job cuts in 2026, the fifth-highest January-February total since 2009.

  • OpenAI ads are live inside ChatGPT
    Criteo became the first ad tech company to integrate with OpenAI's advertising pilot, placing brand ads inside ChatGPT conversations. Early data shows LLM-referred traffic converts at higher rates than traditional referral sources. The next search wars will be fought over whose AI assistant gets to recommend your product.

The Anthropic labor market report dropped this week, and I've had people send it to me with two completely different reactions.

The first: "See, AI isn't actually taking jobs yet. We're fine." The second: "Entry-level hiring is already down 14% in exposed occupations. The jobs aren't being cut; they're just not being created."

Both are reading the same data. The second group is reading it correctly.

What the report actually shows is that companies aren't firing people and replacing them with AI. They're just not backfilling. When someone leaves a role that AI can handle, that role doesn't get posted. The workforce shrinks by attrition, not by announcement. No press release, no severance package, no stock pop.

This is the quieter version of the same story playing out across every sector right now. And it's already underway.

Haroon

PS - If you're trying to get ahead of this for your team and figure out where AI gives you leverage before your board starts asking, that's exactly what we do at Seeko. Reply, and we'll find time to talk.
