Good Monday,
Google deepening custom silicon with Marvell accelerates its escape from merchant chip dependency

The coverage on Google and Marvell is framing this as a hyperscaler escaping NVIDIA. Not really.
The Marvell talks surfaced days after Broadcom locked in a through-2031 agreement to design and supply Google's TPUs. What Google is reportedly discussing with Marvell are two inference-side chips: a memory processing unit to ease data movement alongside existing TPUs, and an inference-optimized TPU. Neither chip touches training. Broadcom holds the training relationship through the end of the decade. What looks like Google diversifying away from merchant silicon is actually Google adding a third ASIC vendor to the inference layer while Broadcom's grip on the higher-value training work stays intact.
NVIDIA invested $2 billion in Marvell last month through NVLink Fusion, partnering to build custom XPUs and NVLink-compatible networking. Marvell is simultaneously the ASIC alternative to NVIDIA and a contracted extension of NVIDIA's own ecosystem. The company is playing both sides, which is a fine business to be in.
The more interesting read on Marvell isn't "NVIDIA threat vector." It's toll collector across competing architectures, collecting rent regardless of which silicon wins. Neither company has confirmed that the talks have even produced a deal.
Earnings season kicks off this week, with INTC reporting on Thursday. The Street prices Intel as a near-breakeven turnaround option with a $0.01 EPS bar. The call is a proof-of-life test for Intel Foundry Services.
Either Intel is quietly lining up an external 18A customer and names one, or the silence on IFS says something louder than any revenue line. A quiet call with no new foundry customer names is, functionally, confirmation that 18A is drifting from competitive process to internal cost center. That changes the multiple.
We've been tracking the IFS pipeline signals in Tessara ahead of the print. If you want to know what management language to listen for, which line items to watch, and where the foundry thesis either holds or quietly collapses, it's in the terminal.
Every earnings call. Every infrastructure name. All in one place.

Stories That Matter
SK Hynix's 192GB SOC-DIMM2 relieves Vera Rubin's memory bottleneck, shifting inference capex from GPU-heavy to memory-heavy. Larger batch sizes and context windows now run without storage swaps.
Cursor's $50B valuation at $2B ARR sets a punishing bar for AI dev tools. A $6B run rate by year-end compresses multiples sharply, signaling margin pressure or downward repricing.
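The multiple math behind that bar can be sketched in a few lines (figures as reported above; the flat-valuation assumption is mine, for illustration only):

```python
# Illustrative ARR-multiple compression for Cursor, holding valuation flat.
VALUATION_B = 50.0  # reported valuation, in $B

def arr_multiple(arr_b: float) -> float:
    """Valuation-to-ARR multiple for a given annual recurring revenue ($B)."""
    return VALUATION_B / arr_b

today = arr_multiple(2.0)     # multiple at the current $2B ARR
year_end = arr_multiple(6.0)  # multiple if the $6B run rate materializes
print(f"{today:.1f}x -> {year_end:.1f}x")
```

Even in the bull case where ARR triples, the multiple only falls from 25x to roughly 8x; anything short of that leaves the valuation leaning on the multiple instead of the revenue.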
Morgan Stanley's 30% TSMC upside is consensus repricing on Meta and Microsoft guidance, not new supply or demand data. Margins and lead times are already forward-priced.
FERC approval clears Tract's 1GW Illinois datacenter for grid connection. Watch for anchor tenant announcements within 90 days. This is one of few projects with explicit power capacity locked.
Regime Snapshot
Compute (CRS): 65, Scarcity.
Memory (MRS): 84, Shortage.
Narratives Moving Today
Agentic AI flips the compute stack: CPUs reclaim the bottleneck? ▲13 pts this week.
See you tomorrow,
Teng
