At the Optical Fiber Communication Conference (OFC) 2026 last month, Cisco's chief architect Rakesh Chopra presented a critical analysis of the networking architectures powering modern AI infrastructure. The presentation highlighted a fundamental shift from "traditional" data center interconnect (DCI) to "scale-across" networking, with the latter requiring approximately 14 times more bandwidth to support synchronous GPU communication.
The Scale-Across vs. Traditional DCI Divide
The core distinction lies in what's being connected. Traditional DCI handles frontend network traffic between CPUs across data centers—supporting typical web services, databases, and asynchronous workloads. Scale-across networking, in contrast, forms the backend fabric connecting GPUs within and between AI clusters to enable loss-intolerant, synchronous data flows required for distributed training and inference.
As Chopra explained, hyperscalers are managing the oversubscription of intra-datacenter bandwidth relative to inter-datacenter bandwidth through two key technologies:
- Deep switch buffers to absorb traffic bursts without packet loss
- Proactive congestion control mechanisms to maintain synchronous flow requirements (see the sketch after this list)
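Proactive schemes differ in detail (ECN-based DCQCN is a common example in RoCE fabrics), but most share an additive-increase, multiplicative-decrease core. Below is a minimal illustrative sketch of that shape; the class, parameter names, and constants are our assumptions, not a mechanism described in the presentation:

```python
class AimdRateController:
    """Additive-increase / multiplicative-decrease sender pacing (illustrative)."""

    def __init__(self, line_rate_gbps: float,
                 increase_gbps: float = 1.0, decrease_factor: float = 0.5):
        self.line_rate = line_rate_gbps   # physical ceiling of the link
        self.rate = line_rate_gbps        # start optimistic at line rate
        self.increase = increase_gbps     # additive probe step per interval
        self.decrease = decrease_factor   # multiplicative cut on congestion

    def on_interval(self, congestion_signal: bool) -> float:
        if congestion_signal:
            # ECN mark or queue-depth signal: back off sharply so switch
            # buffers drain before any packet is dropped.
            self.rate *= self.decrease
        else:
            # No congestion: probe upward gently toward line rate.
            self.rate = min(self.line_rate, self.rate + self.increase)
        return self.rate
```

The key property for synchronous GPU traffic is that the controller reacts to early congestion signals rather than waiting for loss, which is what lets deep buffers absorb transients instead of overflowing.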
The 14x Bandwidth Multiplier
The most striking technical revelation is the bandwidth disparity. Scale-across networking's approximate 14x bandwidth requirement compared to traditional DCI stems from the fundamentally different traffic patterns of AI workloads:
| Network type | Connects | Traffic pattern | Relative bandwidth |
| --- | --- | --- | --- |
| Traditional DCI | CPUs across the frontend | Asynchronous, bursty | Baseline (1x) |
| Scale-across | GPUs across the backend | Synchronous, continuous | ~14x baseline |

This multiplier reflects how AI training clusters generate continuous, all-to-all communication patterns during distributed training, where thousands of GPUs must exchange gradient updates with minimal latency and zero packet loss.
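To see why the traffic is continuous and voluminous, consider the per-step cost of a ring all-reduce, the collective typically used for gradient exchange in data-parallel training. A back-of-envelope sketch; the model size and GPU count are illustrative assumptions, not figures from the presentation:

```python
def ring_allreduce_bytes_per_gpu(grad_bytes: float, n_gpus: int) -> float:
    # Ring all-reduce: each GPU sends and receives 2*(N-1)/N times
    # the gradient buffer size on every training step.
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes

# Hypothetical example: 70B-parameter model, fp16 gradients (~140 GB),
# sharded across 1,024 GPUs.
per_gpu = ring_allreduce_bytes_per_gpu(140e9, 1024)
print(f"{per_gpu / 1e9:.0f} GB moved per GPU per training step")  # ~280 GB
```

Because every step blocks until the slowest exchange completes, this traffic repeats continuously for the duration of a training run, which is precisely the pattern traditional DCI was never provisioned for.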
Infrastructure Implications and Market Opportunity
Chopra's presentation directly ties this architectural shift to specific hardware opportunities:
1. 800G Coherent Pluggables
The bandwidth density requirements make 800G coherent optics essential for scale-across links, both within and between data centers. These pluggables must handle the stringent latency and synchronization requirements of GPU communication.
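As a rough sizing sketch (the demand figure, derating factor, and helper function are our assumptions, not numbers from the talk), here is how an aggregate inter-site demand translates into 800G link counts:

```python
import math

def pluggables_needed(aggregate_tbps: float, link_gbps: float = 800,
                      utilization: float = 0.7) -> int:
    # Derate each link: synchronous traffic cannot run at sustained
    # line rate without building queues, so plan for headroom.
    effective_gbps = link_gbps * utilization
    return math.ceil(aggregate_tbps * 1000 / effective_gbps)

# Hypothetical: 100 Tb/s of scale-across traffic between two sites.
print(pluggables_needed(100))  # 179 links per direction
```

Even a modest inter-site demand implies hundreds of coherent pluggables, which is why bandwidth density per port is the governing metric.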
2. Deep-Buffered Switches
Conventional data center switches with shallow buffers cannot handle the traffic patterns of scale-across networks. Switches with deep buffers (often using specialized memory like HBM) are becoming necessary to prevent packet loss during congestion.
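A bandwidth-delay-product estimate illustrates why: at inter-data-center round-trip times, a single high-speed port can have on the order of a gigabyte in flight, far beyond the tens of megabytes of on-chip SRAM in commodity switch ASICs. A sketch with assumed link speed and RTT (our numbers, not Chopra's):

```python
def buffer_bytes(port_gbps: float, rtt_ms: float) -> float:
    # Bandwidth-delay product: the bytes in flight that a buffer must
    # absorb if senders pause for one round trip during congestion.
    return port_gbps * 1e9 / 8 * rtt_ms / 1e3

# Hypothetical: one 800G port over a 10 ms inter-data-center RTT.
print(f"{buffer_bytes(800, 10) / 1e9:.0f} GB per port")  # ~1 GB
```

Multiply that across dozens of ports per switch and the case for HBM-backed buffering becomes arithmetic rather than preference.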
3. Multi-Billion Dollar Deployment
The presentation explicitly notes that "significant buildout of scale-across infrastructure at various hyperscalers is expected to result in multi-billion dollar opportunities" for these components. This isn't theoretical—it's already driving procurement decisions at major cloud providers.
SemiAnalysis's Upcoming Tracking
The source indicates that SemiAnalysis's AI Networking Model will soon begin estimating scale-across networking equipment spend across hyperscalers. This suggests the industry is moving from architectural discussion to quantified spending forecasts, with likely breakdowns by vendor, component type, and cloud provider.
gentic.news Analysis
This OFC 2026 presentation confirms what our infrastructure coverage has been tracking: the AI networking stack is diverging from traditional cloud networking. The 14x bandwidth multiplier isn't just incremental—it represents a fundamental rearchitecture of data center networks around GPU communication patterns.
This aligns with several trends we've documented. First, it explains the aggressive adoption of 800G optics we covered in "Cloud Titans Accelerate 800G Deployments for AI Clusters" (March 2026), where Microsoft, Google, and Meta were all racing to deploy 800G infrastructure. Second, it contextualizes the specialized switch developments from companies like NVIDIA (Spectrum-X) and Broadcom (Jericho3-AI), which are specifically designed for AI fabric with deep buffering and congestion control.
The presentation also reveals an interesting competitive dynamic. While Cisco is analyzing this trend, much of the early scale-across deployment appears driven by hyperscalers' custom designs and specialist vendors. Cisco's position as a traditional networking leader gives them visibility into these deployments, but the actual equipment spend might flow to optical component makers (like Coherent, Lumentum, or Innolight) and switch silicon vendors (like Broadcom and NVIDIA) before reaching traditional OEMs.
For AI practitioners, the key takeaway is infrastructure awareness. The scale-across vs. traditional DCI distinction means that where you run distributed training matters significantly—not all data center interconnects are created equal. Teams deploying large models should inquire about their cloud provider's scale-across capabilities, as this directly impacts training efficiency and cost.
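One practical way to act on that advice is to measure effective collective bandwidth on the target fabric directly. A minimal PyTorch/NCCL sketch (our illustration, not a procedure from the presentation; buffer size and iteration counts are arbitrary, and it assumes a torchrun launch):

```python
# Launch with e.g.: torchrun --nnodes=2 --nproc_per_node=8 bench.py
import time
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()
world = dist.get_world_size()
torch.cuda.set_device(rank % torch.cuda.device_count())

# 512 MB fp32 buffer to approximate a gradient shard.
buf = torch.empty(512 * 1024 * 1024 // 4, dtype=torch.float32, device="cuda")

for _ in range(5):                       # warm-up iterations
    dist.all_reduce(buf)
torch.cuda.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    dist.all_reduce(buf)
torch.cuda.synchronize()
elapsed = (time.perf_counter() - start) / iters

# Ring all-reduce moves 2*(N-1)/N of the buffer per rank per call.
bus_bw = 2 * (world - 1) / world * buf.nbytes / elapsed
if rank == 0:
    print(f"effective bus bandwidth: {bus_bw / 1e9:.1f} GB/s")

dist.destroy_process_group()
```

Run across nodes rather than within one, this kind of measurement should expose the gap between a genuine scale-across fabric and a frontend interconnect pressed into backend duty.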
Frequently Asked Questions
What is scale-across networking?
Scale-across networking refers to the backend fabric that connects GPUs within and between AI clusters to enable synchronous, loss-intolerant data flows required for distributed training. Unlike traditional data center interconnects that handle asynchronous CPU traffic, scale-across networks must maintain continuous communication with minimal latency and zero packet loss.
Why does scale-across need 14x more bandwidth than traditional DCI?
The 14x multiplier comes from the fundamentally different traffic patterns. AI training generates continuous all-to-all communication between thousands of GPUs during distributed training, whereas traditional web services and databases produce bursty, asynchronous traffic. The synchronous nature of gradient exchange in AI training requires significantly more consistent bandwidth.
What hardware is needed for scale-across networks?
Two key components are driving the multi-billion dollar opportunity: 800G coherent pluggable optics for high-density bandwidth, and deep-buffered switches that can absorb traffic bursts without packet loss. These components must meet stringent latency and synchronization requirements that conventional data center equipment cannot satisfy.
Which companies benefit from scale-across deployment?
The primary beneficiaries include optical component manufacturers (producing 800G coherent pluggables), switch silicon vendors (designing deep-buffered ASICs), and hyperscalers who can offer differentiated AI infrastructure. Traditional networking OEMs like Cisco have visibility into these trends but may face competition from specialist vendors and hyperscaler custom designs.