Welcome back.

Two weeks ago, I laid out the standoff between Anthropic and the Pentagon and asked: if you were on Anthropic's board, would you hold the line or negotiate?

A lot has happened since then.

The US and Israel went to war with Iran the same week the deal collapsed. The question of what AI should be allowed to do without a human in the loop has stopped being hypothetical. So here's the full timeline, and what I think it means.

AI News Roundup

Cal AI Sold to MyFitnessPal — Built by Teens, $40M in Revenue
Cal AI, the photo-based calorie tracking app, was acquired by MyFitnessPal this week. The founders built the app in high school. It hit 15 million downloads and $40 million in annual revenue in under two years, with a team of seven. The app stays independent, now with access to MyFitnessPal's database of 20 million foods and 380+ restaurant chains.

OpenAI Raised $110B at a $730B Valuation
Amazon ($50B), Nvidia ($30B), and SoftBank ($30B) led the largest private funding round in history. Amazon becomes the exclusive third-party cloud distribution provider for OpenAI's enterprise platform, and OpenAI committed $100 billion to AWS over the next eight years. OpenAI and Microsoft put out a joint statement confirming their partnership is unchanged. The round is still open.

Claude Hit #1 on the App Store
After the Pentagon blacklisted Anthropic, Claude overtook ChatGPT as the most downloaded free app in the US App Store. The surge crashed Anthropic's servers. A Reddit post titled "Cancel and Delete ChatGPT" hit 30,000 upvotes. Anthropic also launched Claude Import Memory this week, letting users transfer their ChatGPT and Gemini memory into Claude with a copy-paste flow.

Deploy OpenClaw. Securely.

Your team wants the productivity of autonomous AI agents. You need the security. We deploy OpenClaw on isolated, hardened infrastructure so you get both: credential vaults, curated skills, full audit trails, and zero direct internet access for the agent.

Tell us what you want to automate. If it's a fit, we can deploy the same week.

The Full Timeline

For those who missed the February 17th issue, here's the quick backstory: last July, Anthropic signed a $200 million contract with the Pentagon, and Claude became the first AI model authorized for use on classified military networks. Then negotiations broke down over two specific limits Anthropic refused to drop: no mass domestic surveillance and no fully autonomous lethal weapons.

This is what happened over the past two weeks.

February 25: Axios reported the Pentagon took its first step toward blacklisting Anthropic. Talks had been quietly deteriorating for months.

February 26: Anthropic formally rejected what the Pentagon called its "final offer." The Pentagon wanted Claude available for "all lawful purposes," with no carve-outs. Anthropic's position was that existing surveillance law hasn't caught up to what AI can do at scale, so the Pentagon's definition of "lawful" isn't a sufficient guardrail.

February 27: Trump ordered all federal agencies to phase out Anthropic products within six months. Defense Secretary Hegseth designated Anthropic a formal "supply chain risk" — a label previously reserved for Chinese hardware companies, not American AI labs. Hours later, OpenAI announced a deal to replace Anthropic on the Pentagon's classified network.

February 27-28: The US and Israel launched coordinated airstrikes on Iran. Overnight, the framing of this dispute shifted from theoretical AI ethics to live military operations.

February 28: Anthropic filed a legal challenge, calling the supply chain risk designation "legally unsound." Dario Amodei published a statement on Anthropic's website: "Threats do not change our position: we cannot in good conscience accede to their request." Hundreds of tech workers signed an open letter calling on Congress to examine whether the designation was an appropriate use of government authority.

March 1-2: Claude went viral. Consumer downloads surged, Claude hit #1 on the App Store, and Anthropic's servers went down from the load. The public reaction was unambiguous: holding the line was good for the brand.

March 3 (today): Altman admitted on X that OpenAI's Pentagon deal "looked opportunistic and sloppy" and said he "shouldn't have rushed." OpenAI and the Pentagon quietly amended the contract to add stronger surveillance protections, closing a loophole that would have allowed AI to be used on "commercially acquired" data — meaning geolocation data, browsing history, and financial records purchased from data brokers.

What I Actually Think

A few things stand out to me here.

Anthropic's limits weren't radical. The US military has historically required humans in the decision chain for lethal force. What Anthropic asked for — no mass domestic surveillance, no fully autonomous weapons — is consistent with decades of American military doctrine. The Pentagon's push for "all lawful purposes" was a departure from precedent.

It is also no coincidence that all of this is unfolding alongside the US-Iran conflict. The strikes started the same day as the blacklist. And we now have US military operations underway in a conflict where AI-assisted intelligence and targeting are almost certainly in play.

The abstract question of whether AI should help identify targets without human oversight is now a live operational question. That context makes the Anthropic-Pentagon dispute feel less like a contract negotiation and more like a foundational argument about how this technology gets used in war.

And OpenAI took a real hit. Altman's admission that the deal was "opportunistic and sloppy" is unusually candid for a CEO of his stature. The amended contract is meaningful — closing the data broker loophole is a genuine improvement. But the sequence of events (Anthropic holds the line, OpenAI swoops in within hours, public backlash, contract quietly amended) doesn't reflect well on the process, even if the outcome ends up in roughly the right place.

Lawfare's analysis is worth reading in full. The legal experts there argue the supply chain risk designation likely won't survive a legal challenge: Hegseth's public statements may have already undermined the government's litigation posture, the designation exceeds what the statute authorizes, and the required findings don't hold up.

What This Means for You

If you're a business leader who uses Claude, the practical impact is limited. The six-month phase-out applies to federal agencies and their contractors. Consumer and enterprise access to Claude is unaffected.

But the broader question this raises matters for every organization deploying AI: who gets to decide what your AI tools are allowed to do?

Right now, AI companies set the limits. The Pentagon tried to override those limits, first at the negotiating table and then by blacklist. The public backed the company. And the legal system may ultimately determine what authority the government actually has here.

That won't always play out this cleanly. The next dispute won't necessarily involve a sympathetic AI lab and a clear-cut ethical line. The usage limits on your AI tools are a negotiated boundary, not a fixed one, and that's worth keeping in mind as you build your own AI workflows.

The Question I Asked Two Weeks Ago

I asked: if you were Anthropic's board, would you hold the line or negotiate toward "all lawful purposes"?

You answered. The most common response was some version of: hold the line, but make the limits explicit and publish them so there's no ambiguity about what you're agreeing to.

That's more or less exactly what Anthropic did.

Now I want to hear from you...

This week's question: Does the public backing Anthropic change how you think about which AI tools your organization should be using? Or does this feel like a DC story that doesn't affect your day-to-day? Hit reply.

Until next week,

Haroon
