Nvidia Puts $2 Billion Behind Marvell to Build NVLink Fusion

Nvidia is investing $2 billion in Marvell Technology, deepening a strategic partnership focused on NVLink Fusion, a next-generation interconnect designed to scale GPU clusters far beyond current NVLink domain limits. The investment was reported via MSN, and Marvell shares jumped on the news.
The Deal
- Investment amount: $2 billion
- Target company: Marvell Technology
- Purpose: Co-development of NVLink Fusion interconnect technology
- Market reaction: Marvell shares rose on the announcement
Nvidia's investment represents a significant bet on custom silicon and interconnect design, moving beyond its traditional reliance on InfiniBand and standard Ethernet for scale-out networking.
What Is NVLink Fusion?
NVLink Fusion is Nvidia's emerging architecture for connecting thousands of GPUs into a single, coherent compute domain. While current NVLink (as used in DGX systems) connects up to 8 or 16 GPUs within a node, NVLink Fusion aims to extend that low-latency, high-bandwidth fabric across entire clusters.
Marvell's role centers on providing the custom silicon — likely specialized switch ASICs and PHY layers — that can handle the extreme bandwidth and low latency requirements of NVLink Fusion without the overhead of traditional networking protocols.
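A rough way to see why switch silicon is the bottleneck: in a non-blocking two-tier (leaf-spine) fabric, switches with radix R can connect at most R²/2 endpoints at full bisection bandwidth, so reaching "thousands of GPUs" in few hops demands very high-radix ASICs. A back-of-envelope sketch (the radix values are illustrative, not Marvell or Nvidia specs):

```python
# Back-of-envelope: endpoints supported by a non-blocking two-tier
# (leaf-spine) fat-tree, given switch radix R. Each leaf switch uses
# R/2 ports down (to GPUs) and R/2 up (to spines), and up to R leaf
# switches can be fully meshed to the spine tier, giving R^2 / 2
# endpoints at full bisection bandwidth.

def max_endpoints_two_tier(radix: int) -> int:
    """Maximum GPUs in a non-blocking two-tier fat-tree."""
    return radix * radix // 2

for radix in (64, 128, 144):       # illustrative switch radices
    print(f"radix {radix:3d} -> up to {max_endpoints_two_tier(radix):,} GPUs")
```

The jump from radix 64 (2,048 endpoints) to radix 128 (8,192) shows why higher-radix switch ASICs, rather than extra network tiers, are the preferred path to larger low-latency domains.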
Why Marvell?
Marvell has deep expertise in:
- Data infrastructure silicon: Switches, retimers, and PHYs for hyperscale data centers
- Custom ASIC design: The company has a proven track record with custom compute and networking chips
- High-speed SerDes: Critical for the 800G/1.6T signaling rates NVLink Fusion will require
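The lane math behind those port speeds is simple but unforgiving. A quick sketch, using 100 Gb/s and 200 Gb/s electrical lanes as illustrative rates (the current and next-generation Ethernet lane speeds, not confirmed NVLink Fusion figures):

```python
import math

# Back-of-envelope SerDes lane count for a target port speed.
# Lane rates are illustrative assumptions, not announced specs.

def lanes_needed(port_gbps: float, lane_gbps: float) -> int:
    """Number of electrical lanes required to reach a port speed."""
    return math.ceil(port_gbps / lane_gbps)

print(lanes_needed(800, 100))    # 800G over 100G lanes  -> 8
print(lanes_needed(1600, 200))   # 1.6T over 200G lanes  -> 8
print(lanes_needed(1600, 100))   # 1.6T over 100G lanes  -> 16
```

Doubling the per-lane rate halves the lane count per port, which in turn doubles the usable switch radix for a fixed number of SerDes, so lane speed directly determines how many GPUs one switch tier can reach.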
Nvidia's own Mellanox acquisition (completed in 2020 for $6.9 billion) gave it InfiniBand and Ethernet switch expertise, but NVLink Fusion represents a different architecture — one that may require custom silicon beyond what Mellanox's existing product lines offer.
Competitive Context

This move comes as Nvidia faces increasing competition in the AI interconnect space:
- AMD's Infinity Architecture is pushing similar scale-out GPU fabric concepts
- The UALink (Ultra Accelerator Link) consortium, which counts Intel among its backers, aims to create an open standard for accelerator-to-accelerator communication
- Hyperscalers like Google and AWS are building custom interconnects for their own TPU and Trainium chips
By investing in Marvell, Nvidia is securing a dedicated silicon partner for NVLink Fusion, potentially locking out competitors from Marvell's custom ASIC capacity.
What This Means in Practice
For AI engineers and data center operators, NVLink Fusion promises to reduce the complexity of programming across GPU clusters. Instead of managing MPI, NCCL, and InfiniBand configurations for cross-node communication, a unified NVLink fabric could present thousands of GPUs as a single memory-coherent device.
This would simplify large model training and inference — especially for models requiring tensor parallelism across multiple nodes, which currently requires careful orchestration of network topology and bandwidth.
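To make the orchestration cost concrete: NCCL's standard ring all-reduce moves about 2·(N−1)/N times the buffer size per GPU, so per-GPU traffic plateaus near twice the gradient size, but the number of serialized transfer steps grows linearly with N, which is why network topology and per-hop latency dominate at cluster scale. A minimal sketch of that arithmetic (pure Python, no NCCL, and the model size is a hypothetical example):

```python
# Bytes each GPU transfers in a ring all-reduce of `size_bytes`
# across `n` GPUs: 2 * (n - 1) / n * size_bytes in total, spread
# over 2 * (n - 1) serialized steps around the ring.

def ring_allreduce_bytes_per_gpu(size_bytes: int, n: int) -> float:
    """Total bytes sent (and received) by each GPU."""
    return 2 * (n - 1) / n * size_bytes

# Hypothetical example: fp16 gradients for a 1B-parameter model.
grad_bytes = 1_000_000_000 * 2
for n in (8, 1024):
    gb = ring_allreduce_bytes_per_gpu(grad_bytes, n) / 1e9
    steps = 2 * (n - 1)
    print(f"{n:5d} GPUs -> {gb:.2f} GB per GPU over {steps} ring steps")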
gentic.news Analysis
This $2 billion investment is Nvidia's latest move to vertically integrate the AI hardware stack. Having already absorbed Mellanox (networking), Cumulus (networking software), and invested in CoreWeave (cloud), Nvidia is now locking in custom silicon capacity from Marvell.
The timing is notable. Just last week, Nvidia open-sourced Kimono, a motion diffusion model for humanoid robots, and trained a billion-parameter LLM without backpropagation. The company is simultaneously pushing into new AI paradigms while shoring up its infrastructure moat.
Google, meanwhile, committed up to $40 billion to Anthropic and expanded its own partnership with Nvidia for agentic AI infrastructure (announced April 23). The interconnect layer is becoming a critical battleground — whoever controls the fabric that connects thousands of accelerators controls the economics of large-scale AI training.
Marvell's stock jump reflects investor recognition that Nvidia's interconnect needs are growing faster than its internal capacity to design custom silicon. This deal effectively converts Marvell into a key Nvidia supply chain partner for NVLink Fusion, likely for multiple product generations.
Frequently Asked Questions
Why is Nvidia investing $2 billion in Marvell?
Nvidia needs custom silicon for its NVLink Fusion interconnect architecture, which aims to connect thousands of GPUs into a single coherent compute domain. Marvell has deep expertise in high-speed data infrastructure chips, custom ASIC design, and advanced SerDes technology.
What is NVLink Fusion?
NVLink Fusion is Nvidia's next-generation GPU interconnect that extends the low-latency, high-bandwidth NVLink fabric beyond single-node boundaries (currently 8-16 GPUs) to entire clusters of thousands of GPUs, simplifying distributed training and inference.
How does this affect Nvidia's relationship with Mellanox?
Nvidia acquired Mellanox in 2020 for its InfiniBand and Ethernet networking technology. NVLink Fusion is a different architecture that may require custom silicon beyond Mellanox's existing product lines, making Marvell a complementary rather than competing investment.
Will NVLink Fusion compete with AMD's Infinity Architecture or Intel's UALink?
Yes. NVLink Fusion is Nvidia's proprietary answer to AMD's Infinity Architecture and the open UALink consortium backed by Intel, Google, and others. By investing in Marvell, Nvidia is securing dedicated silicon capacity to maintain its interconnect advantage.