TL;DR
Everyone's building agents. Almost no one is building the memory layer agents actually need.
Foundation Capital's context graph essay hit because it names something obvious once you see it: your Salesforce knows what happened, but has no idea why. When Sarah from Support approved that refund exception, when Engineering escalated that bug to P0, when Finance waived the late fee — the decision logic died in Slack.
Here's why this matters more than most AI infrastructure debates: the companies capturing decision context today will own the precedent layer tomorrow. And right now, that's nobody.
The incumbents (Salesforce, ServiceNow, Workday) spent 20 years optimizing for state storage. They're architecturally bad at decision lineage. The AI vendors are building orchestration but treating context as a retrieval problem. The actual gap is neither — it's a capture problem that only gets solved by sitting in the execution path.
The controversial bit: I think the context graph opportunity fragments, not consolidates. You won't get one universal graph. You'll get vertical context graphs (sales decisions, support escalations, security approvals) that become moats in their specific workflows. The winner in sales context ≠ the winner in DevOps context.
By the way, I'm teaching everything I know about AI
Hi. If you’re reading this, it’s because you’re one of the 30,000 people who care about learning AI fluency.
I know this is true because… you’re on this newsletter.
And I figured I’d just invite you to my workshop so you can stop prompting like it’s 2021.
Tomorrow I’m teaching the LOGIC Method. The framework I spent the last year building. It’s what I wish existed back when 50,000 of us were just getting started.
So if you’ve been reading these emails thinking, “yeah, I’ll check it out eventually,” well, eventually is tomorrow, 12 pm ET. 60 minutes. Free. No upsell.
P.S. If you’re already registered, please forward this to anyone who’s been struggling to get value from AI tools.
P.P.S. Anything you want me to cover during the workshop? Reply here or drop it when you register.
What the Essay Actually Says (No Jargon)
Foundation Capital's argument:
Systems of record track state, not reasoning. Your CRM knows the deal closed. It doesn't know why legal approved the non-standard terms.
Agents need the "why," not just the "what." When an AI handles the next exception, it needs to know: what precedent exists? Who decided? What was the edge case?
That "why" lives in unstructured chaos — Slack threads, email chains, Zoom calls, people's heads.
Capture it at decision time = searchable precedent. Record the inputs, the approval path, and the rationale. Store it as a graph of connected decisions.
The graph becomes a platform. Agents query past decisions to handle new ones. Startups in the execution path (agent orchestration, approval flows) are positioned to capture this.
The thesis: Incumbents own state. Newcomers will own context. That's the new platform layer.
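To make "a graph of connected decisions" concrete, here's a minimal sketch in Python. Everything in it — the `Decision` fields, the example refund scenario, the `precedent_chain` walk — is hypothetical illustration, not anyone's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One captured decision: the 'why', not just the 'what'."""
    id: str
    what: str                     # outcome the system of record already stores
    why: str                      # one-line rationale, captured at decision time
    decided_by: str               # name + role of the approver
    precedents: list[str] = field(default_factory=list)  # ids of earlier decisions cited

# A tiny graph: a new exception links back to the precedent it relied on.
graph = {
    "d1": Decision("d1", "refund approved outside policy",
                   "customer hit by known billing bug", "Sarah (Support)"),
    "d2": Decision("d2", "refund approved outside policy",
                   "same billing bug as d1", "Jake (Finance)", precedents=["d1"]),
}

def precedent_chain(graph, decision_id):
    """Walk back through cited precedents so an agent can see prior rationale."""
    chain, todo = [], [decision_id]
    while todo:
        d = graph[todo.pop()]
        chain.append((d.id, d.why))
        todo.extend(d.precedents)
    return chain

print(precedent_chain(graph, "d2"))
# d2's chain surfaces d1's rationale, not just d1's outcome
```

The point of the edge (`precedents`) is that an agent querying d2 gets the reasoning behind d1 for free — exactly what a bare CRM row can't give it.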
Why This Hit (And What People Are Arguing About)
The essay landed because it explains why your "AI-ready" data warehouse still can't answer "why did we do that?"
Three camps emerged:
Believers (VCs, agent builders): "Finally someone named the gap. We're funding this." Expect a wave of "context graph infrastructure" pitches in Q1.
Skeptics (enterprise architects, incumbents): "We can add this. It's metadata + lineage, which we already do poorly but could do better." Watch for Salesforce to launch "Einstein Decision Graphs" by Dreamforce.
Engineers: "Cool idea. Now show me the schema, the privacy model, and how you handle conflicting traces across 12 systems." The implementation questions are real and unsolved.
The debate isn't whether context matters — everyone agrees. It's who captures it and whether it consolidates or fragments.
The Hard Questions No One's Answering Yet
Who owns the trace when decisions span systems?
Example: A refund approval touches Stripe, Zendesk, Slack, your approval tool, and the CRM. Which system is the source of truth for the decision graph?
My take: The system where the human acted. That's usually the approval/orchestration layer, not the record system.
What happens when decision context conflicts?
Sarah in Support and Jake in Finance both remember the rationale differently. Whose trace wins?
Current answer: Nothing. These systems don't exist yet.
Privacy landmine:
Decision traces will contain PII, internal justifications, and sensitive approvals.
Storing "why we denied this person" creates compliance nightmares incumbents are allergic to.
Advantage: Startups can design privacy-first from day one.
The retrieval problem everyone's ignoring:
Capturing context is step one. Surfacing the right precedent when an agent needs it is step two.
This is where most implementations will fail. Noisy retrieval = useless graph.
What to Actually Do (If You're Building With Agents)
Stop theorizing. Start instrumenting. Here's the 30-day experiment:
Week 1: Find Your Decision Graveyard
Pick one recurring decision where outcomes vary (discount approvals, security exceptions, escalation paths).
Audit: Can you explain, from your systems alone, why the last 10 edge cases were resolved the way they were?
If the answer is "I'd have to ask Sarah" → you have tribal knowledge, not searchable precedent.
Week 2: Capture One Decision Thread
For every instance of that decision, require:
Inputs: What data points were considered?
Path: Who was involved? (Name + role)
Why: One-line rationale for the final call
Source: Link to Slack thread/email/meeting note
Store it as metadata in the ticket/case/record.
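The four required fields above can live as a plain metadata record with a validation gate. A minimal sketch — the field names mirror the checklist, and the deal values, names, and Slack URL are made up for illustration:

```python
# Hypothetical trace for one discount-approval decision.
trace = {
    "decision": "discount_approval",
    "inputs": {"deal_size": 48000, "requested_discount": 0.22, "segment": "mid-market"},
    "path": [{"name": "Sarah", "role": "AE"}, {"name": "Jake", "role": "Finance"}],
    "why": "Competitor undercut; customer committed to a 2-year term in exchange.",
    "source": "https://example.slack.com/archives/C123/p456",  # link to the thread
}

def validate(trace):
    """Reject a trace that skips any of the four required fields."""
    required = {"decision", "inputs", "path", "why", "source"}
    missing = required - trace.keys()
    if missing:
        raise ValueError(f"trace missing fields: {sorted(missing)}")
    if not trace["why"].strip():
        raise ValueError("rationale must be non-empty")
    return True

print(validate(trace))  # True
```

The design choice worth stealing: make "why" a hard requirement at write time. A trace without a rationale is just another state record.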
Week 3: Test Retrieval
When the next similar case arrives, try to surface the relevant precedent in <2 minutes using only your captured context.
Pass condition: You can show the decision-maker "here's what we did last time and why" without memory or asking around.
Fail condition: The context is too noisy, too sparse, or too hard to find.
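The pass/fail test above can be automated crudely. This sketch scores stored traces by term overlap with the new case — deliberately naive (real retrieval would use embeddings), and all the trace data is invented:

```python
def score(query_terms, trace):
    """Crude relevance: count query terms appearing in a stored rationale."""
    text = (trace["why"] + " " + trace["decision"]).lower()
    return sum(1 for t in query_terms if t.lower() in text)

traces = [
    {"decision": "discount_approval", "why": "competitor undercut, 2-year term committed"},
    {"decision": "refund_exception", "why": "billing bug double-charged the customer"},
]

def best_precedent(query, traces, min_score=1):
    """Return the closest past decision, or None (the fail condition)."""
    terms = query.split()
    ranked = sorted(traces, key=lambda t: score(terms, t), reverse=True)
    top = ranked[0]
    return top if score(terms, top) >= min_score else None

hit = best_precedent("customer double-charged billing bug", traces)
print(hit["decision"])  # refund_exception
```

If even this keyword-level baseline can't find the precedent, the problem is sparse capture, not retrieval sophistication — which is the Week 3 diagnosis in one function.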
Week 4: Measure Tribal Knowledge Dependence
Count how many times in 20 decisions the team says:
"I remember we did X because…"
"Let me ask [person] why we…"
"I think it was because…"
If >30% → you're running on human memory, not recorded context. That breaks when the agent handles the decision.
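The three phrases above are countable. A toy scorer, assuming you have decision discussion notes as plain text (the marker list and sample notes are illustrative, not a validated taxonomy):

```python
TRIBAL_MARKERS = ("i remember", "let me ask", "i think it was")

def tribal_ratio(decision_notes):
    """Fraction of decisions whose discussion leaned on memory, not records."""
    flagged = sum(
        1 for note in decision_notes
        if any(m in note.lower() for m in TRIBAL_MARKERS)
    )
    return flagged / len(decision_notes)

notes = [
    "Approved per documented precedent d1.",
    "I remember we did X because the Q3 promo was live.",
    "Let me ask Jake why we waived the fee last time.",
]
ratio = tribal_ratio(notes)
print(f"{ratio:.0%}")  # 67% — well above the 30% threshold
```

Anything above 0.30 means the "database" is in people's heads, and it won't be there when an agent takes the call.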
Signals the Category Is Moving (Watch These)
Product:
Salesforce, ServiceNow, or Workday announces "decision lineage" features (defensive play)
Agent orchestration startups (Dust, Multi-On, Relevance AI) add explicit context capture
Someone launches "the Segment for decision traces" (scary good positioning)
Funding:
Seed rounds with "context graph," "decision intelligence," or "agent memory" in the pitch
Andreessen, Sequoia, or Benchmark backs a precedent-capture startup
Standards:
Open-source schema proposals for decision traces (like OpenTelemetry for observability)
A consortium forms around interoperable context graphs (a sign the category is becoming a platform)
Proof points:
A public case study: "We deployed agents with context capture and reduced escalations by 40%"
First ROI stories will come from support, sales ops, or security workflows
Who Wins (My Prediction)
Won't win:
Pure infrastructure plays ("we're the universal context layer!") — too abstract, too far from value
Incumbents bolting this onto existing products — they're architected wrong and culturally slow
Will win:
Vertical agent platforms that naturally sit in decision flows (approval tools, agent orchestrators, workflow automation) and capture context as a byproduct
Domain-specific context graphs — whoever owns "how sales teams handle exceptions" or "how security teams approve risks" builds a moat in that vertical
Example: The Gong for decision context in sales. The PagerDuty for incident decision graphs.
Dark horse:
A technical founder who builds decision lineage as open-source infrastructure, then monetizes the hosting/querying layer (the Supabase model)
Reading List (Start Here)
Essential:
Foundation Capital: Context Graphs — the original essay
Reactions:
Jaya Gupta's LinkedIn thread (best summary + debate in comments)
Prukalpa's counterargument (integrators might win, not orchestrators)
PlayerZero's thread (production engineering angle)
Technical context:
Arize AI on agent tracing and observability
LangGraph documentation (how traces work in practice today)
Plain-language explainer:
"Context Graphs Explained Simply" (Medium) — for non-technical stakeholders
Bottom Line
The context graph thesis is right: agents need decision lineage, and no one's capturing it systematically. But the "one platform to rule them all" framing is wrong.
This fragments. Vertical workflows (sales, support, security, DevOps) will each get their own context graphs, owned by whoever sits closest to the decision. The companies that instrument this today — before agents are deployed at scale — will own the precedent layer when agents become standard.
If you're building agents and not instrumenting decisions, you're building on sand. Start capturing context now, even if it's manual and messy. The graph you build in the next 6 months becomes your moat in 18.
→ What's your take? Are context graphs one platform or many vertical moats? Reply or hit me up — I want to hear what you're seeing in production.
👉 If you found this issue useful, share it with a teammate or founder navigating AI adoption.
And subscribe to AI Ready for weekly lessons on how leaders are making AI real at scale.
Until next time,
Haroon
