Note: This is the first in a series of Tessara AI supply-chain theses. Each piece uses live data from our regime scores, supplier maps, earnings calls, and silicon demand tracker to make a clear call, map the listed equities that express it, and state what would prove us wrong.

HBM (high bandwidth memory) is tight. Your analyst knows it, your PM knows it, and the guy posting supply-chain tweets at 2 a.m. definitely knows it too.

That observation was interesting in 2025. It is no longer the useful call.

The real question is whether the monster rally in memory stocks over the last 30 days is supported by evidence, or running ahead of it. In a normal memory cycle, we would be very cautious. This time, the opposite has happened: the evidence improved while the tape moved.

Our memory basket is up bigly

SK Hynix is describing customer behavior as allocation-led. Micron says its entire calendar 2026 HBM supply is covered by price and volume agreements. Samsung says its production-ready HBM4 capacity is fully booked and sold out. ASML says memory customers are sold out for 2026 and constrained beyond it.

Four different vantage points pointing to the same future outcome.

But the equity question has changed. If everyone already knows HBM is tight, the edge is not saying “buy memory.” The edge is knowing which supplier has the cleanest expression of the next leg.

Our view at Tessara:  

The scarce thing is still getting scarcer. HBM remains structurally short through 2026 Q4; SK Hynix is the structural leader, Micron is the catalyst into June 23, and Samsung is the correction trade.

TL;DR

  • HBM is structurally short through 2026 Q4. Three scaled suppliers are not collectively adding enough capacity to clear demand from 17 active HBM-critical silicon programs in our tracker (11 shipping, 6 in early production).

  • The 2026 book is already largely spoken for. SK Hynix, Micron, and Samsung have each described 2026 HBM capacity as committed, booked, or sold out. ASML’s upstream commentary points in the same direction.

  • The demand stack is broader than NVIDIA. NVIDIA remains the largest buyer, but AMD, AWS, Google, and Microsoft now account for a meaningful share of HBM-critical accelerator programs in our tracker.

  • Three equity expressions, three different jobs. Structural leader: SK Hynix. Catalyst expression: Micron. Corrective expression: Samsung.

Why HBM is binding right now

If you have been listening to the AI infrastructure cycle through a capex lens, you know the bottleneck rotates. And every rotation opens up a new bottleneck.

In early 2024, the constraint was CoWoS packaging at TSMC. Through most of 2025, it was grid power and substation lead time. Right now, on our read, it is HBM.

That is not because compute or power eased. It is because HBM tightened harder.

Our Memory Regime Score now sits at 90 out of 100, the highest band we publish.

The reason is simple: each new accelerator generation uses more memory. HBM-per-chip has more than tripled in two years.

NVIDIA’s H100 used 80GB of HBM3.

B200 moved to 192GB of HBM3E.

GB300 / Blackwell Ultra reaches 288GB of HBM3E per GPU.

Rubin (later this year) moves the next leg of demand to HBM4, the latest generation of high-bandwidth memory.
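The step-ups above reduce to simple arithmetic. A minimal sketch of the per-GPU capacity multiples, using the capacity figures quoted in this note (the dict and variable names are ours, for illustration):

```python
# Per-GPU HBM capacity by NVIDIA generation, in GB (figures from the text).
hbm_per_gpu = {"H100": 80, "B200": 192, "GB300": 288}

baseline = hbm_per_gpu["H100"]
for chip, gb in hbm_per_gpu.items():
    # GB300 carries 288 / 80 = 3.6x the HBM of an H100.
    print(f"{chip}: {gb} GB ({gb / baseline:.1f}x vs H100)")
```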

That is only NVIDIA. AMD’s MI300 and MI350 families, AWS Trainium, Google TPU, and Microsoft Maia also pull on the same HBM supplier base.

Only 3 companies can fab the relevant HBM at scale: SK Hynix, Micron, and Samsung. Their 2026 books are already largely committed. That is the structural reality today.

The demand stack: Silicon programs

Chart: five of the chip programs we’re actively tracking. Memory capacity just keeps going up.

The other half of the argument is demand. In Tessara’s silicon tracker, 17 active or ramping accelerator programs carry high HBM signal weight. The customer breadth is wide.

Volume programs: 11 designs

NVIDIA H100 SXM, H200, B100, B200, GB200 NVL72; AMD MI300X, MI325X, MI350; AWS Trainium2; Google TPU v6 Trillium; Google TPU v7 Ironwood.

Early-production or ramping programs: 6 designs

NVIDIA B300, GB300 NVL72; AWS Trainium2e, Trainium3; Google TPU v8 Sunfish; Microsoft Maia 200.

NVIDIA accounts for 7 of the 17 programs in our tracker. The other 10 come from AMD, AWS, Google, and Microsoft.
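The tallies quoted in this section reduce to a simple aggregation over the program lists above. A sketch, with the rows taken from the text (the data structure is ours, not Tessara’s actual tracker schema):

```python
from collections import Counter

# (vendor, program, status) rows taken from the lists above.
programs = [
    ("NVIDIA", "H100 SXM", "volume"), ("NVIDIA", "H200", "volume"),
    ("NVIDIA", "B100", "volume"), ("NVIDIA", "B200", "volume"),
    ("NVIDIA", "GB200 NVL72", "volume"),
    ("AMD", "MI300X", "volume"), ("AMD", "MI325X", "volume"),
    ("AMD", "MI350", "volume"),
    ("AWS", "Trainium2", "volume"),
    ("Google", "TPU v6 Trillium", "volume"),
    ("Google", "TPU v7 Ironwood", "volume"),
    ("NVIDIA", "B300", "early"), ("NVIDIA", "GB300 NVL72", "early"),
    ("AWS", "Trainium2e", "early"), ("AWS", "Trainium3", "early"),
    ("Google", "TPU v8 Sunfish", "early"),
    ("Microsoft", "Maia 200", "early"),
]

by_status = Counter(status for _, _, status in programs)
by_vendor = Counter(vendor for vendor, _, _ in programs)

print(len(programs))        # 17 active programs
print(by_status["volume"])  # 11 shipping
print(by_status["early"])   # 6 in early production
print(by_vendor["NVIDIA"])  # 7 of 17
```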

Every shipping or ramping HBM-critical design today is still HBM3 or HBM3E. HBM4 supply is ramping ahead of the largest customer-side volume wave, which is what you would expect when suppliers are preparing for a constrained next generation. Demand is rising on two axes at once: more accelerator programs and more HBM per program.

That is why “HBM is tight” understates the issue. The better read is: the amount of memory demanded per unit of AI compute is rising faster than the qualified supplier base can clear.

Why we’re making the call

Our thesis rests on 4 public signals.

1. The three-supplier convergence

In their latest disclosed commentary, SK Hynix, Micron, and Samsung each pointed to 2026 HBM capacity being committed, booked, or sold out. In their latest earnings calls:

  • SK Hynix has described customer behavior as volume-security driven and said it is mass-producing the volume requested by customers.

  • Micron said it has completed price and volume agreements for its entire calendar 2026 HBM supply, including HBM4.

  • Samsung said demand for its HBM4 has concentrated around its differentiated performance and that production-ready capacity is fully booked and sold out.

The wording differs by company, but the direction is the same. Customers are not asking, “Can we get a better price?” They are asking, “Can we get allocation?”

That is the strongest signal in the note.

2. SK Hynix is describing allocation-led demand

On its April 22 earnings call, SK Hynix CFO Kim Woo Hyun said customers are prioritizing volume security over price, sustaining current pricing strength.

That is the language of a constrained market. In a normal memory cycle, customers push price. In this regime, customers secure supply.

SK Hynix also guided DRAM shipments up high single digits quarter-over-quarter in Q2. The combination matters: volume is rising, but pricing strength is holding because allocation remains tight.

3. ASML sees the same constraint from upstream

ASML CEO Christophe Fouquet said memory customers are sold out for 2026 and that the supply constraint will last beyond 2026.

ASML does not sell HBM into accelerators; it sells the lithography equipment the memory makers use. It sits upstream, where equipment demand gives a different view of capacity intent. That makes the signal useful because it is not just another supplier defending its own pricing.

HBM suppliers and the upstream equipment vendor are pointing to the same 2026 capacity picture.

4. Samsung is no longer outside the trade

Samsung’s Q1 2026 call a few days ago changed our read.

Management said commercial HBM4 shipment began in February, HBM4 sales are expected to exceed 50% of total HBM sales from Q3 onward, and production-ready capacity is fully booked.

There is a caveat. Samsung’s “world first” language should not be read as proof that it leapfrogged SK Hynix. SK Hynix made its own world-first HBM4 production claim earlier, and industry consensus still supports SK Hynix as the timing leader.

The read on Samsung is narrower and more important for the equity map: Samsung is no longer outside the HBM4 expression set.

That matters because the old market frame had Samsung as the qualification laggard. The new post-earnings call read is that Samsung is an active participant in a sold-out HBM4 cycle, even if it is not the lifecycle leader.

Our three equity expressions

The regime maps into three equity expressions. Each is long the same HBM constraint but expresses it differently.

1. Structural leader: SK Hynix (000660.KS)

SK Hynix is the cleanest listed expression of the regime call.

It is the lifecycle leader in our matrix: production on HBM3E, most mature on HBM4. No other major has the same four-line lead.

The cash-flow read matches the product read. SK Hynix printed 58.6% operating margin in the most recent quarter. Management is describing customer behavior as volume-led, not price-led. At roughly 11.5x PE, the equity still reads more like cyclical memory exposure than a company with several quarters of structural HBM scarcity in front of it.

The risk is timing. The next scheduled SK Hynix earnings print is 3 months away. That leaves room for a Samsung yield surprise, Micron HBM4 acceleration, or a broader memory reversal to compress the discount.

That makes SK Hynix the structural-leader expression, not the near-term catalyst expression.

2. Catalyst expression: Micron (MU)

Micron offers the cleanest grading event: its June 23 earnings print.

The company has begun volume shipment of HBM4 36GB 12-high designed for NVIDIA Vera Rubin. It has completed pricing and volume agreements for its entire calendar 2026 HBM supply. It claims HBM4 pin speed above 11Gbps and expects the HBM4 yield ramp to be faster than HBM3E. It has also sampled next-generation HBM4 16-high at 48GB per stack.

That is a dense set of setup signals heading into earnings.

The June 23 print can test several things at once:

  • HBM4 volume progress

  • Vera Rubin timing

  • HBM4 16-high sampling progress

  • HBM3E mix

  • 2026 pricing visibility

  • customer demand for geographically diversified supply

Micron isn’t the structural leader. SK Hynix still has that position. Micron’s role is different: it gives our HBM regime call a near-term public grading event with upside skew if the company closes the lifecycle gap.

3. Corrective expression: Samsung (005930.KS)

Samsung is the changed part of the map, after this week’s earnings call.

Until recently, the market treated Samsung as the HBM laggard: behind SK Hynix and Micron, burdened by qualification concerns, and less credible as a near-term supplier into NVIDIA-linked demand.

That frame is now stale.

Samsung says it began commercial HBM4 shipment in February. It says production-ready capacity is fully booked and sold out. It expects HBM4 to exceed 50% of total HBM sales from Q3 onward. In our matrix, Samsung now qualifies as Tier 1 NVIDIA-linked HBM exposure.

That makes Samsung the corrective expression. The bet is that the market may still be discounting Samsung for a supplier status that has changed.

At roughly 33.4x PE and 10.1% ROE, it is more expensive than SK Hynix per unit of clean memory exposure. The conglomerate structure also weakens the purity of the read.

The two falsifiers

Most thesis pieces tell you why they are right. Fewer tell you what would prove the thesis wrong.

So we are publicly stating two falsifiers here, one for each horizon.

#1: Regime falsifier: 180-day window

We will mark our regime call broken if HBM average selling prices go flat or negative quarter-over-quarter for two consecutive quarters at any of the three major suppliers.

One weak quarter can be inventory normalization, product mix, or seasonal digestion. Two consecutive flat-to-down quarters would mean demand has met capacity additions faster than we expect.

The “any supplier” condition matters. The bull case requires aggregate scarcity. If even one major supplier prints flat-to-down HBM ASPs while shipping into a market we describe as sold out, we would consider ourselves wrong.
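The regime falsifier is mechanical enough to express as a rule. A minimal sketch, assuming we track quarter-over-quarter HBM ASP changes per supplier (the function name and data shape are ours, for illustration):

```python
def regime_falsified(asp_qoq_changes: dict[str, list[float]]) -> bool:
    """True if any supplier prints two consecutive flat-to-down
    (<= 0) quarter-over-quarter HBM ASP changes."""
    for changes in asp_qoq_changes.values():
        for prev, curr in zip(changes, changes[1:]):
            if prev <= 0 and curr <= 0:
                return True
    return False

# One weak quarter alone does not break the call...
print(regime_falsified({"SK Hynix": [0.05, -0.02, 0.04]}))  # False
# ...two consecutive flat-to-down quarters at any supplier does.
print(regime_falsified({"Samsung": [0.03, 0.0, -0.01]}))    # True
```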

#2: Equity-expression falsifier: through Micron’s June 23 print

We will mark the equity-expression map wrong if one of three things happens before or at Micron’s June 23 print:

  1. SK Hynix, Micron, or Samsung guides to weaker HBM pricing, weaker allocation, or faster-than-expected supply normalization.

  2. Samsung’s upgraded NVIDIA-linked status is contradicted by credible supplier, customer, or teardown evidence.

  3. Micron fails to show HBM4 volume progress, Vera Rubin linkage, or sustained 2026 pricing visibility.

What we are watching next

  1. June 23: Micron earnings. The key read is HBM4 ramp progress, Vera Rubin shipment timing, HBM4 16-high sampling, HBM3E mix, and 2026 pricing visibility.

  2. HBM ASP and allocation commentary from the three majors. Two consecutive flat-to-down HBM ASP quarters at any major supplier breaks the regime call.

  3. NVIDIA Rubin shipping cadence. A faster Rubin ramp pulls HBM4 demand forward and validates supplier-side production ramps.

  4. Samsung qualification evidence. Any credible contradiction of Samsung’s upgraded NVIDIA-linked status weakens the corrective expression.

The AI build is still memory-bound. Viva la memory!

-Teng Yan

This is a Tessara thesis. We do not publish price targets. We identify the binding constraint on the AI build, map the listed equities that express it, and state what would make the call wrong.

This public note is the snapshot. Tessara is the live system behind it: Memory Regime Score, supplier matrix, silicon demand tracker, company pages, and earnings-call extraction. Apply for a Tessara private beta seat if you actively track AI infrastructure, semis, or the memory cycle.

Methodology for our Memory Regime Score is here.

This article is for informational and research purposes only. It is not financial advice, investment advice, or a recommendation to buy or sell any security. Tessara Research does not publish price targets. The views expressed here reflect our analysis at the time of publication and may change as new evidence arrives. Readers should do their own research and consult a qualified financial adviser before making investment decisions.
