AI Ready
Daily Brief

The 3.5x intelligence gap inside enterprises is widening

OpenAI's enterprise data shows depth of use, not access, is now the divide. Plus IBM's Think keynote, GPT-5.5 Instant, and CPC ads in ChatGPT.

By Haroon Choudery·May 7, 2026·8 min read

THE AI BRIEF

Today's signal: depth of AI use, not access, is what separates enterprises pulling ahead from everyone else. Plus four shorter reads on Anthropic's new agent features, IBM's Think keynote, GPT-5.5 Instant, and ChatGPT ads.

In today's issue:

  • Main story: Frontier firms now use 3.5x as much AI per worker as everyone else

  • Also worth knowing: Anthropic ships new Claude Managed Agents features, IBM repositions watsonx, GPT-5.5 Instant becomes the ChatGPT default, and OpenAI opens a self-serve Ads Manager with CPC bidding

  • Free webinar: Build Notion Agents to Automate Complex Tasks, a 30-minute Lightning Lesson, May 14 at 2 pm ET. Save your seat →

THE READ

OpenAI's first enterprise-data report shows the lead is widening, and it's not coming from more seats.

OpenAI released the first B2B Signals report yesterday, drawn from de-identified usage across its enterprise customers. The headline number: firms in the 95th percentile of AI use now consume 3.5x as much intelligence per worker as typical firms, up from 2x a year ago. Message volume explains only 36% of that gap. The rest is depth, which the report defines as longer prompts, richer context, and more substantive outputs per interaction.

The bigger gap shows up in agent and code tools. Frontier firms send 16x as many Codex messages per worker as typical firms. ChatGPT Agent, Apps in ChatGPT, Deep Research, and GPTs show similar patterns. The report names Cisco as a case in point: Codex helped the company cut build times by about 20%, save 1,500 engineering hours per month, and increase defect-resolution throughput 10 to 15 times. The Cisco team described the shift as treating Codex "as part of the team" rather than as a tool.

AI READY IN 21 DAYS · FREE

A 21-day AI program. In your inbox. Then it ends.

One email every morning at 7 am ET. Each one is a short read and one real thing to try before lunch. By Day 21, you have nine concrete capabilities, including prompting that hits your bar on the first draft, AI workflows that take five hours a week off your plate, and the language to lead the AI conversation at your company.

Built for ops leaders, COOs, chiefs of staff, founders, and team leads at mid-market companies. The four weeks move you from foundations (effective prompting, catching hallucinations, learning AI with AI) to connect (a personal context layer, daily automation, five hours a week back), to build (reusable skills, agent design and debugging), to lead (tool and model evaluation, leading the conversation at work). Twenty-one mornings, no upsells, no community Slack.

Not sure if it is for you? Take the 2-minute diagnostic.

Most internal AI metrics are still about access: seat counts, license utilization, the percentage of employees with a login. Those are easy to count, and according to OpenAI's data, they are no longer the thing that correlates with results. The companies that have moved past access are measuring something closer to depth, including how much of a real workflow is being delegated, how much context the AI is being given, and what share of multi-step work is actually being completed.
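The access-vs-depth distinction above can be made concrete with a few lines of code. This is a minimal sketch, not OpenAI's methodology: the log fields (`user`, `prompt_chars`, `steps_completed`) and the function name are hypothetical stand-ins for whatever your own usage telemetry captures.

```python
from collections import defaultdict

def adoption_metrics(usage_log, licensed_users):
    """Contrast an access metric (share of licensed users who showed up)
    with depth metrics (context given, multi-step work completed)."""
    per_user = defaultdict(list)
    for event in usage_log:
        per_user[event["user"]].append(event)

    # Access: the seat-count view. Easy to measure, weakly tied to results.
    access_rate = len(per_user) / len(licensed_users)

    # Depth: how much context each interaction carries, and what share of
    # interactions are multi-step delegated work rather than one-off Q&A.
    events = [e for evts in per_user.values() for e in evts]
    avg_prompt_chars = sum(e["prompt_chars"] for e in events) / len(events)
    multi_step_share = sum(1 for e in events if e["steps_completed"] > 1) / len(events)

    return {
        "access_rate": round(access_rate, 2),
        "avg_prompt_chars": round(avg_prompt_chars, 1),
        "multi_step_share": round(multi_step_share, 2),
    }
```

A dashboard built on the first number looks healthy while the other two stagnate; the report's argument is that the second and third are where the 3.5x gap actually lives.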

I want to be honest about the bear case. This is OpenAI's data, scoped to OpenAI's products, and OpenAI has every reason to publish a report whose conclusion is "use our advanced tools more." The 3.5x figure is an interesting signal, not a benchmark to manage to. The more useful question for an operator is the one underneath it: in your company, how would you tell the difference between an employee who uses ChatGPT to answer questions and an employee who uses it to do the actual work? If you can't tell, your dashboards are still measuring access, and the gap the report describes is one you would not be able to detect even if it were happening inside your own four walls.

SPONSORED BY CLUTCH

Hire secure AI teammates that work 24/7.

Hire pre-built AI teammates. Give your engineers and operators a platform to ship their own AI apps. Stop losing sleep over what is running where.

Clutch is the platform behind both: pre-built agents for the workflows your ops team should automate first, plus the integration plane your team's vibe-coded apps and Claude Code projects plug into. One platform. Real production. Visible and safe by default.

Built for ops, engineering, and security teams that are tired of the shadow-AI surface area inside their own company.

ALSO WORTH KNOWING

Anthropic's Code with Claude conference shipped three new Claude Managed Agents features yesterday 
Multi-agent orchestration, Outcomes for self-directed iteration, and Dreaming (a loop where Claude reviews its own sessions to catch what it missed). API traffic is up 17x since last year. An agent that can coordinate, self-direct, and self-correct is a different category of tool than what most teams have deployed.

IBM used its Think 2026 keynote to reposition watsonx as a multi-agent control plane
Arvind Krishna announced the next generation of watsonx Orchestrate, the new IBM Concert operations platform, and Confluent integration for real-time data. A Nestlé proof-of-concept on watsonx.data delivered 83% cost savings on a global data mart. The incumbent enterprise stack is making its case against the new lab-led "deployment companies."

OpenAI replaced ChatGPT's default model with GPT-5.5 Instant
The new default cuts hallucinated claims by 52.5% on high-stakes prompts in medicine, law, and finance, and by 37.3% on conversations users had flagged as factually wrong. For any team that has staff using ChatGPT for work that touches regulated content, the floor on accuracy moved this week.

OpenAI opened a self-serve Ads Manager and added cost-per-click bidding inside ChatGPT
US advertisers can now buy ChatGPT ads directly without going through Dentsu, Omnicom, Publicis, or WPP. CPC bidding is a marketer-native pricing model. If your customers are searching inside ChatGPT instead of Google, this is the first time you can target them at scale on a familiar buying motion.

P.S.

If you want to stress-test how deep your team's AI use actually is, hit reply with one workflow you've been meaning to delegate end-to-end. I'll send back two questions to ask before you scope it. I read every reply.

FORWARD THIS

If someone on your team is still measuring AI adoption by seat count, forward this. The gap the report describes is one you would not be able to detect on a utilization dashboard, and the companies pulling ahead are not buying more licenses.

Back tomorrow,
Haroon

Enjoying this issue?
Get the next Daily Brief as it lands.
Free. Four sends a week.