
![Jukanlosreve Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::1836240683268759552.png) Jukan [@Jukanlosreve](/creator/twitter/Jukanlosreve) on x 22.9K followers
Created: 2025-07-07 01:08:53 UTC

Recent Trends and Future Outlook in the AI Server Sector By WT

NVIDIA’s GB200 NVL72 shipments are ramping steadily. Shipments reached around 0.5K cabinets in Q1 2025, and WT forecasts Q2–Q4 shipments of 4.0K, 8.1K, and 11.3K cabinets respectively, totaling about 24K units for 2025, in line with market expectations of 21–26K. WT's research indicates ODM shipments are stabilizing, but CSP design revisions have hindered rapid volume increases: currently only CoreWeave, OpenAI, and Oracle are receiving shipments smoothly, while some CSPs are still working through minor design revisions. Thanks to accumulated experience, 3231 Wistron delivers significantly better sub-L10 assembly quality than its peers.
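The quarterly figures above can be summed as a quick sanity check on WT's full-year forecast (a minimal sketch; the quarterly numbers are taken directly from the paragraph above):

```python
# WT's quarterly GB200 NVL72 forecast, in thousands of cabinets.
quarterly_k = {"Q1": 0.5, "Q2": 4.0, "Q3": 8.1, "Q4": 11.3}

total_k = sum(quarterly_k.values())
print(f"2025 total: {total_k:.1f}K cabinets")  # 23.9K, i.e. "about 24K"

# The total should fall inside the quoted market-expectation band of 21-26K.
assert 21 <= total_k <= 26
```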

Rubin hasn’t taped out yet, but foundries expect mass production to begin in Q2 2026. Estimated wafer starts for 2026 are 340–350K. The CoWoS-L reticle area is increasing from 3.5X to 5.5X, and HBM count per package will rise from X to XX. HBM4 12hi will be adopted, with no hybrid bonding in the initial phase, pushing capacity to 576GB. Based on current validation, Samsung’s participation is unlikely. With these wafer volumes, only XXX million Rubin chips may be available in 2026 — equivalent to around 16–17K VR144 cabinets. However, ODMs likely won’t receive the chips until late 2026, and CSP customers may not receive VR144 servers until 2H 2027 or even year-end. Rubin chips require second-gen CoWoS-L packaging and XX HBM4 stacks. WT expects foundries to delay output, pushing VR144 availability to late 2027.

Cloud service providers are also beginning volume production of in-house ASICs. Google's TPU V6p is shipping this year, though some industry insiders believe it has been rebranded as V7. Volume production will begin in 2H 2025, with about 2–3 million units expected for the year. For 2026, TPU V7 or V8 may be co-designed with 2454 MTK; however, due to unresolved issues with Google, no tape-out has occurred yet. If resolved, 2026 TPU shipments could exceed X million units, driven by Gemini-related applications. AWS's Trainium2 is expected to ship around XXX million units in 2025, with a small contribution from the minor Trainium2.5 update by year-end. Mass production of Trainium3 is expected by Q2 2026; it has recently taped out and is developed by Annapurna, with the backend handled by 3661 Alchip and Marvell. Total 2026 shipments are expected to reach 2.3–2.5 million units. Meta's MTIA X will begin shipping in Q4 2025, with XXX million units projected for 2026, and MTIA XXX launching mid-2026 using Andes RISC-V cores. Microsoft's Maia200 recently taped out, with mass production expected in early 2026; as this is Microsoft's first ASIC, market expectations remain limited.

In conclusion, GPGPU growth in 2026 — primarily driven by NVIDIA’s Rubin — will face delays and a high base effect. In contrast, ASICs will show much stronger growth.

In NVIDIA’s substrate supply chain, Ibiden holds 65–75% market share, 3037 Unimicron holds 15–25%, and 3189 Kinsus has 5–10%. Japanese leader Ibiden has nearly all its high-end capacity allocated to NVIDIA. 3037 Unimicron has a strong track record with Blackwell substrates and high-end OAM HDI server boards. It has also demonstrated flexibility in substrate line adjustments and continuous upgrades to its HDI and PCB lines, significantly improving its operational foundation and optimizing its product mix. This positions Unimicron as a true high-end substrate provider, which will be reflected in its future gross margin performance. As we enter the ASIC era, Unimicron's competitive advantage will increasingly stand out.
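The three share ranges quoted above can be cross-checked for internal consistency: the midpoints should sum to roughly 100% if the three suppliers cover essentially the whole market. A minimal sketch (the ranges are taken from the paragraph above; the midpoint treatment is my assumption, not WT's):

```python
# Quoted NVIDIA substrate market-share ranges, in percent.
shares = {"Ibiden": (65, 75), "Unimicron": (15, 25), "Kinsus": (5, 10)}

# Take the midpoint of each range as a point estimate.
midpoints = {name: (lo + hi) / 2 for name, (lo, hi) in shares.items()}
total = sum(midpoints.values())

print(midpoints)
print(f"sum of midpoints = {total:.1f}%")  # 97.5%, close to full coverage
```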


XXXXXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1942028012075246065/c:line.svg)

**Related Topics**
[coins ai](/topic/coins-ai)

[Post Link](https://x.com/Jukanlosreve/status/1942028012075246065)
