Good Wednesday —
Anthropic tells all on Claude Mythos, its most powerful model. It's almost unbelievable.
Anthropic released details on Claude Mythos, its most powerful AI model, and decided not to make it generally available to the public. Instead, access is restricted to ~40 named partners (AWS, Apple, Cisco, CrowdStrike, Google, etc.) for cybersecurity purposes.
72.4%. That's the rate at which Mythos Preview converts discovered Firefox vulnerabilities into working exploits.
<1%. That's the estimated share of Mythos-discovered vulnerabilities that have actually been patched. Anthropic has found thousands of high-severity zero-days across every major OS and browser, and hired professional security contractors to manually validate every bug report before disclosure — yet remediation is still running at a fraction of a percent.
Cybersecurity is forever changed. The validator bottleneck is human. The discovery rate is now machine-speed. That gap isn't closing on its own.
In raw capabilities, Claude Mythos also leads across the exact combination that matters most in practice: coding, terminal tasks, hard reasoning, math, and tool use. That's the profile of a model that can become the default across serious workflows. Anthropic is hitting all the right bases.

Know someone allocating to AI infra? They probably saw the Anthropic headline without the analysis. Forward this email.
On Our Radar
UBS projects DRAM prices up 125% in 2026 on the supply-demand imbalance. Hyperscaler capex budgets now face a second input shock (memory) beyond compute, compressing margins on inference workloads.
Perplexity's $450M ARR run-rate (50% growth in 3 months) is driven by enterprise adoption of search agents, not consumer freemium. This sets up margin pressure on traditional search and forces LLM inference cost economics into the open.
Gartner's 2026 forecast of 64% semiconductor growth and a 125% DRAM price jump is driven by AI capex, not consumer demand. This validates the memory scarcity thesis and sets up a two-year window where memory vendors capture outsized margins.
Codex at 3M WAU, with plans to scale to 10M, reveals OpenAI is resetting usage caps to drive adoption, not revenue. This signals confidence in underlying inference capacity and a willingness to absorb margin compression to lock in developers.
Regime Snapshot
Compute (CRS): 65 — Scarcity. More buyers than available capacity. Lead times extending.
Memory (MRS): 77 — Tight. Demand outstripping supply. No relief until new capacity comes online.
Narratives Moving Today
The race to de-China the chip supply chain — conviction 74, ▲12 pts this week.
This is exactly the kind of structural shift the terminal tracks in real time — which companies are exposed, how the constraint map is shifting, and what the regime history says about timing.
We’re opening the private beta to a small group of investors and analysts. If you want the full picture, not just the daily snapshot:
— Teng
