Qualcomm is developing custom silicon for an unnamed hyperscaler, with initial shipments expected in December 2026. The deal represents Qualcomm's most concrete data-center comeback move to date, following its May 2026 acquisition of Alphawave for custom ASIC capabilities.
Key facts
- Initial shipments expected December 2026
- Unnamed hyperscaler customer
- Qualcomm acquired Alphawave in May 2026 for custom ASIC capabilities
- Dedicated CPU for agentic AI revealed May 2026
- Global data center capex runs at $250-300 billion annually
According to DatacenterDynamics, Qualcomm's custom silicon project targets a single hyperscaler customer, though the company did not disclose the buyer's identity or the chip's specifications. The December 2026 shipment timeline suggests a relatively short development cycle, likely leveraging Qualcomm's existing Nuvia CPU cores and AI engine IP.
The Hyperscaler Puzzle

The unnamed hyperscaler is the key unknown. Amazon (AWS), Google, and Microsoft each operate their own custom silicon programs — Trainium, TPU, and Maia, respectively — making them less likely customers. Meta, which has relied on off-the-shelf CPUs and custom accelerators for AI inference, emerges as a plausible candidate. The company has publicly discussed reducing its dependence on merchant silicon, and Qualcomm's power-efficient Arm architecture aligns with Meta's infrastructure goals.
Context: Qualcomm's Data Center Arc
This deal marks Qualcomm's third major data center move in 2026. In May, the company acquired Alphawave, a custom ASIC designer, for $2.5 billion. The same month, Qualcomm revealed a dedicated CPU for agentic AI workloads in data centers, positioning itself against Intel's Xeon and AMD's EPYC lines. The custom silicon deal consolidates these efforts into a single customer relationship.
The broader AI infrastructure market provides tailwinds. Global data center capital expenditure has reached $250-300 billion annually — equivalent to 5-7 Manhattan Projects per year, per recent estimates. Hyperscalers are increasingly seeking custom silicon to optimize total cost of ownership for AI inference, which now dominates compute demand.
Competitive Landscape

Qualcomm enters a crowded field. Marvell and Broadcom both operate large custom ASIC businesses for hyperscalers. Marvell's custom chips power Amazon's Trainium 2, while Broadcom designs Google's TPU and Meta's MTIA accelerators. Qualcomm's differentiation hinges on its CPU architecture — the Nuvia-derived Oryon cores offer performance-per-watt advantages that could appeal to hyperscalers running power-constrained inference farms.
With the chip's specifications still undisclosed, the December 2026 shipment window suggests a first-generation product focused on inference rather than training — the segment where Qualcomm's mobile-derived efficiency cores have the strongest competitive position.
What to watch
Watch for the hyperscaler's identity disclosure — likely at Qualcomm's November 2026 investor day or alongside Q4 earnings. The chip's specifications (process node, core count, power envelope) will reveal whether Qualcomm is targeting inference-only or training-capable workloads. Competing announcements from Marvell or Broadcom regarding similar hyperscaler wins would indicate how the custom silicon market is splitting among suppliers.