# Ambient Advantage — May 15, 2026

*Friday · May 15, 2026 · [Episode page](https://podcast.ambient-advantage.ai/episodes/2026-05-15.html) · [Audio](https://storage.googleapis.com/ambient-advantage-podcast/2026-05-15-ambient-advantage.mp3)*

[AVA]

Claude just passed ChatGPT in U.S. business adoption. First time ever. And the data isn't from a survey — it's from a hundred billion dollars in actual corporate spend.

[JON]

Well, that's one way to start a Friday. Let's get into it.

[JON]

Welcome to Ambient Advantage — I'm Jon, and this is Ava. It's Friday, May 15, 2026, and here's what matters in AI today. We have a big show — a historic shift in enterprise AI adoption, a blockbuster IPO, a software supply chain attack that breaks some fundamental security assumptions, and Andrej Karpathy giving us what might be the most useful mental model of the year. Ava, let's start with the lead.

[AVA]

So Ramp — the corporate card and spend management platform — publishes a monthly AI Index. They're tracking over a hundred billion dollars in annual spend across fifty thousand U.S. businesses. This is not vibes. This is receipts. And in their May report, Anthropic hit 34.4% business adoption versus OpenAI at 32.3%. First time Claude has led.

[JON]

That's a genuine milestone. What's driving it?

[AVA]

One product: Claude Code. It's been growing at roughly eighty X per year and now accounts for an estimated two-and-a-half-billion-dollar annual revenue run rate. Anthropic quadrupled its enterprise adoption over the past twelve months while OpenAI grew by a fraction of a percent. The developer community has made its choice, at least for now.

[JON]

So if you're an enterprise buyer, what does this actually mean for your procurement decisions?

[AVA]

It means the competitive landscape has genuinely shifted. A year ago you'd have defaulted to OpenAI for any serious enterprise deployment. Today, Claude is where the developer energy is concentrated — and developer energy tends to predict where enterprise tooling goes next. But here's the counterpoint that makes this interesting. Ramp's own data flags that Anthropic's token-based pricing is creating real sticker shock. Uber apparently blew through its entire 2026 AI budget ahead of schedule. And the fastest-growing category on Ramp's platform isn't Anthropic or OpenAI — it's cheap open-source inference platforms.

[JON]

So Anthropic is winning adoption but potentially losing on cost?

[AVA]

Exactly. And that's the tension every enterprise buyer needs to sit with. Claude Code is incredibly productive. Developers love it. But token costs at scale are not trivial. The smart play right now is to build on Claude where productivity gains are clear and measurable, while simultaneously investing in open-source inference capabilities for the workloads where you need volume more than cutting-edge quality.
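For listeners who want to put numbers on that token-cost anxiety, here's a minimal back-of-the-envelope estimator. Every price and volume in it is a hypothetical placeholder, not Anthropic's (or anyone's) actual rates — plug in your own contract terms.

```python
# Rough monthly token-cost estimator. All prices and volumes below are
# hypothetical placeholders, NOT any vendor's actual pricing.

def monthly_cost_usd(requests_per_day: int,
                     input_tokens: int,
                     output_tokens: int,
                     price_in_per_mtok: float,
                     price_out_per_mtok: float,
                     days: int = 30) -> float:
    """Per-request cost (tokens x price per million tokens), scaled to a month."""
    per_request = (input_tokens * price_in_per_mtok +
                   output_tokens * price_out_per_mtok) / 1_000_000
    return per_request * requests_per_day * days

# Example: 50k requests/day, 2k tokens in / 1k out,
# at an assumed $3 / $15 per million input/output tokens.
cost = monthly_cost_usd(50_000, 2_000, 1_000, 3.0, 15.0)
print(f"${cost:,.0f}/month")  # -> $31,500/month under these assumptions
```

Run the same numbers against a cheap open-source inference endpoint and the gap is what you're actually deciding about when you split workloads between the two.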

[JON]

And it's worth noting — the same week this data drops, Anthropic launches Claude for Small Business with twenty-seven pre-built workflows targeting QuickBooks, PayPal, HubSpot, Google Workspace. They're not just winning developers. They're going after the mid-market now.

[AVA]

Which is a very deliberate strategic move. Anthropic has been closing the distribution gap with OpenAI methodically — first enterprise, now SMB. For consultants and resellers, this opens up a real channel opportunity with pre-packaged ROI stories tied to tools small businesses already use. Anthropic is playing the full board now.

[JON]

Alright, let's move into the rundown. We've got a lot to cover. Ava, Cerebras went public today.

[AVA]

And it was a monster. Cerebras priced at 185 a share — above its already elevated range — raised five and a half billion dollars, and closed its first day of trading up 68%. Market cap near a hundred billion. Revenue hit 510 million last year, up 76%, and they actually turned a profit — 88 million in net income after losing 481 million the year before.

[JON]

That's a dramatic swing. What should enterprise leaders take away from this?

[AVA]

Two things. First, inference compute is now proven, profitable demand — not speculative. The market is validating that with real money. Second, the Cerebras-OpenAI twenty-billion-dollar cloud deal signed in January tells you where this is heading: tighter vertical integration between model providers and chip makers. If you're negotiating cloud contracts for AI workloads, expect continued upward pressure on inference costs as hardware suppliers gain pricing power.

[JON]

Next up — and this one is genuinely scary — a supply chain attack called Mini Shai-Hulud. Love the Dune reference, hate everything else about it.

[AVA]

This is a watershed moment for AI-era software security. A threat group called TeamPCP deployed a self-propagating worm through a chained GitHub Actions exploit that compromised over 170 npm and PyPI packages. We're talking TanStack — twelve million plus weekly downloads — Mistral AI's official SDK, guardrails-ai. OpenAI confirmed two employee devices were breached. The CVE carries a 9.6 critical score.

[JON]

But the really alarming part is what this means for how we verify software supply chains, right?

[AVA]

Right. This is the first documented case of a malicious worm publishing packages with valid SLSA Build Level 3 cryptographic provenance. That's the standard the industry has been telling you to trust. Valid attestation checks would have passed on these compromised packages. So the foundational assumption — signed means safe — is broken. If your pipeline ran npm install on May 11th, rotate your credentials today. And security teams everywhere need to update their trusted package criteria. Signing alone is no longer sufficient.

[JON]

Moving to something a bit more optimistic — Meta launched what they're calling Incognito Chat on WhatsApp.

[AVA]

This is actually significant. It's AI conversations running inside Trusted Execution Environments — AMD SEV-SNP, Nvidia H100 confidential computing hardware — where Meta claims even its own engineers cannot see the conversations. No server-side logs. Zuckerberg called it the first major AI product with no conversation log.

[JON]

So privacy-first AI is now a product you can ship?

[AVA]

It's a product category now, not a talking point. For enterprises considering AI for HR, legal, or health use cases, this architecture matters. Confidential computing in AI pipelines may become a compliance-driven requirement. The tension, of course, is that if Meta can't read the logs, it also loses its safety net for detecting harm. Regulators will notice that tradeoff.

[JON]

Google also made hardware news this week — they revealed what they're calling the Googlebook.

[AVA]

Their first original laptop in fifteen years, built from the ground up around Gemini. The headline feature is Magic Pointer — an AI cursor that can summon Gemini anywhere on screen without switching apps. The hardware specs are almost beside the point. This is about owning the AI-native computing stack. For IT buyers, it introduces a credible Google alternative to Apple in enterprise laptops, with deep Workspace integration at the firmware level.

[JON]

And quickly — Notion launched an AI agent hub, turning their workspace into an orchestration layer for multi-step workflows across connected tools. The significance here?

[AVA]

Every major SaaS platform is racing to become an agent host, not just an AI feature recipient. For teams already living in Notion, this means you can build agents without a separate orchestration tool or dedicated developer resources. It lowers the activation energy dramatically. The workspace wars are now agent platform wars.

[JON]

Alright, Ava, let's zoom out. The bigger picture. You've been connecting dots all morning — what's the thread?

[AVA]

So here's how I see today's stories forming a single narrative. Act one: Anthropic wins the enterprise. The Ramp data proves it. Claude for Small Business extends it. Act two: the bill arrives. Uber blows its AI budget. Token cost anxiety is real. A major supply chain breach hits the AI ecosystem's own tooling. Act three: the market answers with the Cerebras IPO — a hundred-billion-dollar bet on where the real money flows in an agentic world. And tying all of this together is a framework from Andrej Karpathy at Sequoia AI Ascent that I think every executive should internalize.

[JON]

The "automate what you can verify" idea?

[AVA]

Exactly. Karpathy introduced what he calls Software 3.0 — where the context window is the programming surface and agents are the interpreter. His central insight is this: traditional computers automate what you can specify in code. LLMs can automate what you can verify. That distinction is everything.

[JON]

Unpack that for someone running a business unit.

[AVA]

Tasks with clear success criteria — code compilation, data extraction, document processing, invoice matching — those are automation-ready right now. You can verify the output. Deploy agents aggressively. Tasks requiring human judgment on outcomes that nobody can formally define — brand strategy, relationship management, nuanced legal reasoning — those remain stubbornly manual. And here's the key warning: the dangerous middle ground is deploying agents on tasks where neither your team nor your vendor can define what correct looks like. That's where the expensive failures will happen in 2026. Audit your workflows through that lens before you set your AI budgets.
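That audit can be sketched in a few lines of code. This is an illustrative reading of Karpathy's "automate what you can verify" filter, not a real product API — the agent and verifier below are stand-ins: if a programmatic verifier exists and passes, the task is automation-ready; if none exists, it stays with a human.

```python
# Sketch of a verification-gated automation filter: accept agent output
# only when a programmatic verifier exists and passes; otherwise keep a
# human in the loop. Agent and verifiers here are illustrative stand-ins.

from typing import Callable, Optional

def triage_task(task: str,
                agent: Callable[[str], str],
                verifier: Optional[Callable[[str], bool]]) -> str:
    if verifier is None:
        # No formal definition of "correct" -> the dangerous middle ground.
        return f"{task}: no success criteria -> keep human in the loop"
    output = agent(task)
    if verifier(output):
        return f"{task}: verified -> safe to automate"
    return f"{task}: failed verification -> escalate to human"

# Invoice matching has a checkable success criterion; brand strategy does not.
agent = lambda task: "total=1250.00"                 # stand-in agent output
matches_ledger = lambda out: out == "total=1250.00"  # stand-in verifier

print(triage_task("invoice matching", agent, matches_ledger))
print(triage_task("brand strategy", agent, None))
```

The point of the sketch is the `verifier is None` branch: a workflow with no checkable definition of success should never be silently handed to an agent.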

[JON]

That's a really practical filter. Verifiable versus non-verifiable. Map your workflows accordingly.

[AVA]

And the companies compounding their productivity right now — the ones pulling ahead — are exactly the ones that have done that mapping. They've staffed agents on the verifiable bucket and kept humans on the rest. No heroics. Just discipline.

[JON]

What should people be watching over the next few days?

[AVA]

Two things. First, watch the Cerebras stock over the next week. Post-IPO trading will tell us whether the market sees inference hardware as a generational bet or an AI-bubble artifact. The answer matters for every infrastructure procurement decision you're going to make this year. Second, if you're in security or DevOps, the Mini Shai-Hulud attack has a hard deadline: Apple and OpenAI are requiring macOS certificate rotation before June 12th. Don't wait.

[JON]

And I'll drop three resources in the show notes — Ben Thompson's Stratechery essay on the inference shift, Karpathy's Sequoia AI Ascent talk, and the full Ramp AI Index for May. All three are worth your time this week.

[AVA]

That's your Ambient Advantage for Friday, May 15, 2026.

[JON]

Share it with a colleague figuring out what AI means for their business. See you tomorrow.
