gentic.news — AI News Intelligence Platform

chiplet

6 articles about chiplet in AI news

Intel's UCIe-S Hits 48 Gb/s on 22nm, Beats 3nm EMIB

Intel demonstrated a UCIe-S die-to-die interconnect on 22nm reaching 48 Gb/s per lane over a standard organic substrate, delivering 3× the data rate and 2.8× the bandwidth density of a 3nm EMIB design. This signals a strategic shift away from EMIB for Intel's own products toward UCIe over substrate.

85% relevant

Nvidia B200 Costs $6,400 to Produce, Gross Margin Hits 82%

Epoch AI estimates Nvidia's B200 GPU costs $5,700–$7,300 to produce, with HBM memory and advanced packaging accounting for two-thirds of the cost. At a $30k–$40k sale price, chip-level gross margins reach ~82%, though rack-scale margins may be lower.
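The ~82% figure follows from the quoted ranges. As a quick sanity check, here is a minimal sketch using the midpoints of both ranges (the midpoints themselves are assumptions, not figures from the article):

```python
def gross_margin(price: float, cost: float) -> float:
    """Gross margin as a fraction of the sale price."""
    return (price - cost) / price

cost = 6_400    # roughly the midpoint of the $5,700-$7,300 production-cost estimate
price = 35_000  # roughly the midpoint of the $30k-$40k sale-price range

print(f"{gross_margin(price, cost):.0%}")  # → 82%
```

Any price/cost pair inside the quoted ranges lands in roughly the high-70s to mid-80s percent, consistent with the ~82% chip-level estimate.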

74% relevant

Foxconn to Mass-Produce 10,000+ CPO Optical Switches for AI in Q3 2026

Foxconn's manufacturing arm will begin volume production of advanced co-packaged optics (CPO) switches in Q3 2026, targeting over 10,000 units. This move directly addresses the critical bandwidth and power bottlenecks in next-generation AI data center infrastructure.

85% relevant

Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership

Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.

84% relevant

The Invisible Dance: How AI Chip Manufacturing Relies on Microscopic Wire Bonding

High-speed semiconductor wire bonding creates thousands of electrical connections per minute using ultra-fine 25-micron wires. This critical but often overlooked process enables the AI chips powering today's most advanced systems.

85% relevant

Apple's M5 Pro and Max: Fusion Architecture Redefines AI Computing on Silicon

Apple unveils M5 Pro and M5 Max chips with groundbreaking Fusion Architecture, merging two 3nm dies into a single SoC. The chips deliver up to 30% faster CPU performance and over 4x peak GPU compute for AI workloads compared to previous generations.

95% relevant