Nvidia’s next-gen Rubin GPUs could end up shipping later and in smaller volumes than anticipated due to supply chain challenges, TrendForce warned on Wednesday.
The industry watchers now expect Rubin to account for 22 percent of Nvidia’s high-end GPU shipments in 2026, down from their earlier forecast, which had pinned the mix at 29 percent.
TrendForce cited the time required to validate the newer HBM4 memory used by the chips, challenges with the migration to Nvidia’s faster ConnectX-9 NICs, the systems’ higher overall power consumption, and the more advanced liquid cooling requirements as contributing to the delays.
Shipments of Nvidia’s Hopper GPUs, including H200s bound for the Chinese market, are also expected to be lower than initially forecast due to ongoing geopolitical tensions between the US and China.
In December, the Trump administration said it would permit exceptions to earlier US export rules governing sales of high-end AI accelerators to China, with formal US approval following in January. The decision meant Nvidia could sell its older, but still potent, H200 accelerators to Chinese customers for the first time. In exchange, Nvidia would simply have to fork over a quarter of the revenue from those sales to Uncle Sam.
Despite this, it has taken months to convince Beijing to sign off on the deal. At GTC last month, CEO Jensen Huang revealed Nvidia was in the process of spinning up its manufacturing capacity to produce H200s again for the Chinese market and that it had purchase orders in hand.
TrendForce now expects Hopper accelerators to account for about 7 percent of Nvidia’s GPU shipment mix this year, down from its earlier forecast of 10 percent.
While Rubin and Hopper shipments are expected to be lower than initially forecast, TrendForce says Blackwell GPUs, like the GB300 and B300, are likely to fill the void.
Analysts now expect Blackwell shipments to account for 71 percent of Nvidia GPUs sold this year.
Finally, TrendForce is moderately bullish about demand for Nvidia’s newly announced Groq LPUs, which we explored in depth here. These chips do not rely on conventional DRAM memory, and are designed to work alongside GPUs like Rubin to accelerate the token-generating decode phase of the inference pipeline.
However, due to their limited on-chip SRAM, large quantities are required for this purpose. As such, TrendForce anticipates demand in the "several hundred thousand units" range this year, and roughly double that in 2027.
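Why SRAM-only accelerators are needed in bulk comes down to simple arithmetic: model weights must fit in on-chip memory, and per-chip SRAM is measured in hundreds of mebibytes rather than the tens of gigabytes an HBM stack offers. A quick back-of-the-envelope sketch (all figures below are illustrative assumptions, not published specs):

```python
import math

def chips_needed(params_billion: float, bytes_per_param: int,
                 sram_mib_per_chip: int) -> int:
    """Minimum chip count to hold a model's weights entirely in on-chip SRAM."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    sram_bytes = sram_mib_per_chip * 2**20
    return math.ceil(weight_bytes / sram_bytes)

# Hypothetical example: a 70B-parameter model quantized to 1 byte/param,
# spread across chips with 230 MiB of SRAM each.
print(chips_needed(70, 1, 230))  # hundreds of chips for a single model copy
```

Even before accounting for the KV cache or redundancy, a single model instance can consume hundreds of chips, which is why forecast unit volumes run into the hundreds of thousands.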
In related news, TrendForce also warned this week that consumer DRAM prices could rise another 45-50 percent in the second quarter. That's on top of the 75-80 percent price increase seen in the first quarter.
Memory prices have surged in recent months, with many products like DDR5 and SSDs now selling for more than triple what they were retailing for at this time last year.
As we've previously reported, demand for AI infrastructure, combined with the highly cyclical nature of memory markets, is largely responsible for the sky-high prices.
We've reached out to Nvidia for comment on potential delays to its Rubin lineup; we'll let you know if we hear anything back. ®