Welcome back.

You've probably seen the headlines by now.

And if you haven’t, here’s the gist: The Pentagon is threatening to designate Anthropic a "supply chain risk."

Defense Secretary Pete Hegseth is reportedly "close" to cutting business ties entirely. A senior Pentagon official told Axios this week: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

That's the kind of language usually reserved for foreign adversaries. Not American AI companies.

I've spent the past few days reading every account of this. Here's my honest assessment: the framing is wrong on both sides, and what's actually happening is more consequential than anyone's admitting.

Let’s unpack.

AI News Roundup

OpenClaw's Creator Just Joined OpenAI
The viral AI agent — formerly Clawdbot, then Moltbot, now OpenClaw — is being absorbed into OpenAI after founder Peter Steinberger accepted an offer to "drive the next generation of personal agents." OpenClaw will live on as an open-source foundation project that OpenAI will sponsor. Translation: OpenAI just acquired one of the hottest agent projects in the ecosystem without writing a check for a company.

OpenAI Retired the Model People Actually Loved
After massive user backlash forced a reversal last August, OpenAI finally pulled GPT-4o from ChatGPT on February 13th — for good. Users had grown attached to its warmth and personality. OpenAI says the feedback "directly shaped" GPT-5.1 and GPT-5.2. Make of that what you will.

India Just Hosted the First Global AI Summit in the Global South
PM Modi kicked off the India AI Impact Summit this week in New Delhi — a five-day event drawing 20+ heads of state, Sam Altman, Sundar Pichai, and over 500 global AI leaders.

Introducing Seeko: White-glove AI workflow automations for companies

I started Seeko to help engineering, operations, and product leaders find the workflows eating their team's time — and build the systems to automate them.

We have space for 2 more clients this month. If you’re interested in working with us, click the button below.

The Timeline

Let me catch you up, because the timeline matters.

Last July, Anthropic signed a $200 million contract with the Pentagon. And Claude became the first AI model authorized for use in classified military systems — a genuinely big deal. OpenAI, Google, and xAI were available for unclassified work, but Claude was the one the military trusted with its most sensitive data.

Then came Venezuela.

Last month, U.S. special forces conducted the operation that captured Nicolás Maduro. The Wall Street Journal reported that Claude was used in that operation through Anthropic's partnership with Palantir, which has deep Pentagon ties. According to a senior administration official, after the raid, an Anthropic executive reached out to ask whether Claude had been involved.

The Pentagon's interpretation: Anthropic was signaling disapproval.

Anthropic denied it. Their spokesperson said the company "has not discussed the use of Claude for specific operations with the Department of War."

The Actual Dispute

The Venezuela moment may have accelerated the fight, but it didn't start it. Anthropic and the Pentagon have been in contentious negotiations for months over a core question: what can the military actually do with Claude?

The Pentagon's position: AI should be available for "all lawful purposes" — weapons development, intelligence collection, battlefield operations. If it's legal, Claude should help.

Anthropic's position: Two hard limits.

  • No mass surveillance of Americans

  • No fully autonomous weapons — systems that make lethal decisions without a human in the loop

An Anthropic official told Axios that surveillance law "has not in any way caught up to what AI can do." The Pentagon can already legally collect massive amounts of data. AI doesn't change what's legal. It just changes what's possible at scale.

That's the sticking point. And the Pentagon's not entirely wrong.

Meanwhile, OpenAI, Google, and xAI have all agreed to lift civilian safeguards for military use. One has already agreed to the "all lawful purposes" standard across all systems. Anthropic is the lone holdout.

Here's Where I Dig Deeper

Anthropic isn't being naive or anti-military. They built Claude Gov for national security. They were first into classified networks. They partnered with Palantir.

What they're arguing is narrower: no fully autonomous lethal weapons and no AI-powered mass surveillance of Americans. Lethal decisions should always have a human in the loop.

That's not a radical position. The U.S. military has historically required humans in the decision chain for lethal force. The question is whether AI changes that at the speed and scale modern operations demand.

There's also internal pressure. A source told Axios that Anthropic engineers have significant concerns about Pentagon work. Last week, Mrinank Sharma — former head of Anthropic's Safeguards Research Team — resigned with a warning that "the world is in peril."

This is a founding-level tension playing out in public.

The Part That Should Actually Concern You

The Pentagon's "supply chain risk" threat isn't just posturing. If it happens, any company doing business with the U.S. military would have to certify that it doesn't use Anthropic tools.

Eight of the ten biggest U.S. companies use Claude.

The precedent: if a safety-focused AI lab gets blacklisted for maintaining usage restrictions, what does that signal to every other AI company about responsible deployment?

The Pentagon's logic — "if it's lawful, it's permitted" — outsources all ethical judgment to existing law. Existing law was written before AI existed.

The Takeaway

Anthropic's handling isn't perfect. Asking about the Maduro raid — whatever the intention — looked bad.

The Pentagon's overreacting. "Supply chain risk" is for Huawei, not a San Francisco AI lab with a $200 million Pentagon contract.

But this negotiation will define AI deployment in national security for the next decade. Every AI company is watching. Every government contractor is watching. Every enterprise buyer thinking about risk is watching.

The real question: who gets to decide what AI is allowed to do — the company that built it, or the customer who bought it?

Every organization adopting AI will eventually face that question in its own way.

Now I want to hear from you…

This week's question: If you were Anthropic's board, what would you do?

Hold the line on autonomous weapons and surveillance limits, and risk losing the Pentagon entirely? Or negotiate toward "all lawful purposes" and accept that the military gets to define the limits?

Hit reply. The most interesting answers get featured on Thursday.

Until then,

Haroon
