gentic.news — AI News Intelligence Platform

interconnect

30 articles about interconnect in AI news

Nvidia Invests $2B in Marvell for NVLink Fusion Interconnect

Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a new interconnect architecture for scaling AI clusters beyond current limits.

82% relevant

IOWN Forum Pushes All-Photonic WAN for AI Neocloud Interconnects

The IOWN Global Forum is focusing its optical networking tech on datacenter interconnects, aiming to let GPU 'neoclouds' and financial firms run AI workloads from cheaper, remote facilities without latency penalties.

78% relevant

AMD Backs UALink Open Interconnect to Challenge NVIDIA NVLink in AI

AMD is supporting the newly formed UALink Consortium, which aims to create an open standard for connecting AI accelerators. This move challenges NVIDIA's control over the critical NVLink technology that underpins its AI data center systems.

84% relevant

Google's Virgo Network Links 134,000 TPU v8 Chips with 47 Pbps Fabric

Google unveiled its Virgo networking stack for TPU v8, capable of linking 134,000 chips in a single fabric with 47 petabits per second of bisection bandwidth. This represents a massive scale-up in interconnect technology for large-scale AI model training.

100% relevant
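A quick back-of-the-envelope check on the figures above. This sketch treats the 47 Pbps as aggregate bisection bandwidth shared evenly across all chips, which is an assumption, not something the article states:

```python
# Rough per-chip share of the quoted Virgo fabric bandwidth (illustrative only).
TOTAL_BISECTION_PBPS = 47   # petabits per second, from the article
NUM_CHIPS = 134_000         # TPU v8 chips in one fabric, from the article

total_gbps = TOTAL_BISECTION_PBPS * 1_000_000  # 1 Pbps = 1,000,000 Gbps
per_chip_gbps = total_gbps / NUM_CHIPS
print(f"~{per_chip_gbps:.0f} Gbps of bisection bandwidth per chip")
```

That works out to roughly 350 Gbps per chip across the bisection, on the same order as a few 100G-class links per device.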

Cisco Reveals Scale-Across GPU Networking Needs 14x DCI Bandwidth

Cisco's chief architect detailed the massive bandwidth requirements for connecting AI clusters via 'scale-across' GPU networking, which needs 14x the capacity of traditional data center interconnects. This shift is creating a multi-billion dollar market for 800G coherent pluggables and deep-buffered switches.

85% relevant

UALink 2.0 Spec Finalized, Aims to Challenge NVLink for AI Clusters

The UALink 2.0 interconnect specification has been finalized, providing a standardized way to link AI accelerators from AMD, Intel, and others. However, it lags behind NVIDIA's established NVLink technology in real-world deployment.

96% relevant

Nvidia's Silicon Photonics Roadmap Targets AI Data Center Bottlenecks

Nvidia is developing its own silicon photonics-based interconnects to address the growing data transfer bottleneck within AI data centers and supercomputers. This move is critical as AI model size and cluster scale continue to grow exponentially.

86% relevant

Nvidia Invests $2B in Marvell to Expand NVLink Fusion Chip Partnership

Nvidia is investing $2 billion in Marvell Technology to deepen their partnership on NVLink Fusion, a chip-to-chip interconnect crucial for scaling AI training clusters. This strategic move aims to secure supply and accelerate development of high-bandwidth links between GPUs and custom AI accelerators.

84% relevant

Open-Source AI Crew Replaces Notion with 8 Local Obsidian Agents

A researcher has built a fully local, open-source system of 8 specialized AI agents that work together to manage an Obsidian vault—handling notes, inboxes, meetings, and deadlines. It replaces separate tools like Notion and inbox triagers with an autonomous, interconnected crew.

87% relevant

Nvidia B200 Costs $6,400 to Produce, Gross Margin Hits 82%

Epoch AI estimates Nvidia's B200 GPU costs $5,700–$7,300 to produce, with HBM memory and advanced packaging accounting for two-thirds of the cost. At a $30k–$40k sale price, chip-level gross margins reach ~82%, though rack-scale margins may be lower.

72% relevant

Gas-Fueled AI Data Centers Could Emit More Than Entire Nations

A WIRED investigation reveals that 11 behind-the-meter natural gas projects for AI data centers could emit 129 million tons of greenhouse gases annually, surpassing Morocco's 2024 emissions. The projects, tied to OpenAI, Meta, Microsoft, and xAI, bypass traditional grids.

70% relevant

Applied Digital Lands 300MW Lease with Hyperscaler at Louisiana Site

Applied Digital secured a 300MW lease with an investment-grade hyperscaler at its Delta Forge 1 site in Louisiana, with a total reported value of $7.5 billion, signaling continued demand for AI data center capacity.

100% relevant

LLM Agents Will Reshape Personalization

Researchers propose that LLM-based assistants are reconfiguring how user representations are produced and exposed, requiring a shift toward inspectable, portable, and revisable user models across services. They identify five research fronts for the future of recommender systems.

84% relevant

LangFuse on Evaluating AI Agents in Production

The article outlines a practical methodology for monitoring and enhancing AI agent performance post-deployment. It emphasizes combining automated LLM-based evaluation with human feedback loops to create actionable datasets for fine-tuning.

78% relevant

Google Cloud Next '26: 8th-gen TPUs, agent platform, $750M fund

At Cloud Next 2026, Google unveiled two 8th-gen TPU chips, a Gemini-based enterprise AI agent platform, and a $750 million partner fund to drive secure, large-scale automation and heavy AI workloads.

88% relevant

OpenAI's 'Freebird' Data Center in Texas to Span 549K Sq Ft, Cost $470M

OpenAI is building a massive 548,950-square-foot data center in Milam, Texas, named 'Freebird,' with a first-phase cost of around $470 million. This infrastructure investment is critical for scaling next-generation AI model training and inference.

92% relevant

Arista Doubles 2026 AI Revenue Target to $3B+ on Open Ethernet

Arista Networks doubled its 2026 AI networking revenue target to over $3 billion, citing expanded roles for open Ethernet in AI data centers. This signals a major shift toward disaggregated, standards-based networking for AI clusters.

89% relevant

GraphRAG-IRL: A Hybrid Framework for More Robust Personalized Recommendation

Researchers propose GraphRAG-IRL, a hybrid recommendation framework that addresses LLMs' weaknesses as standalone rankers. It uses a knowledge graph and inverse reinforcement learning for robust pre-ranking, then applies persona-guided LLM re-ranking to a shortlist, achieving significant NDCG improvements.

92% relevant

Layers on Layers — How You Can Improve Your Recommendation Systems

An IBM article critiques monolithic recommendation engines for trying to do too much with one score. It proposes a layered architecture—candidate generation, ranking, and business logic—to improve performance and adaptability. This is a direct, practical framework for engineering teams.

78% relevant
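The layered split described above can be sketched as three independent stages. This is a minimal illustration of the pattern, not code from the IBM article; the function names and the trivial scoring heuristic are ours:

```python
# Minimal three-layer recommendation pipeline: candidate generation,
# ranking, then business logic, each layer with a single responsibility.

def generate_candidates(catalog, k=100):
    # Cheap recall stage: here, simply the k most recent items (stand-in).
    return catalog[-k:]

def rank(user_history, candidates):
    # Score candidates, e.g. by overlap with the user's history (stand-in model).
    scores = {c: sum(1 for h in user_history if h == c) for c in candidates}
    return sorted(candidates, key=lambda c: scores[c], reverse=True)

def apply_business_logic(ranked, blocked):
    # Final layer: filters and rules live here, not inside the model score.
    return [item for item in ranked if item not in blocked]

catalog = list(range(1000))
recs = apply_business_logic(
    rank([998, 998, 5], generate_candidates(catalog)),
    blocked={999},
)
print(recs[:3])
```

The point of the split is that each layer can be swapped or tuned without touching the others, instead of folding recall, relevance, and policy into one score.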

Bull Delivers HPC Infrastructure to Power Mimer AI Factory

Bull, a subsidiary of Atos, has supplied the core HPC infrastructure for Mimer's new AI factory. This facility is dedicated to training and developing large language models for the European market.

82% relevant

Microsoft's Fairwater AI Data Center Launches Early, Boosts Azure Capacity

Microsoft has launched its Fairwater AI data center ahead of schedule. The facility adds significant high-performance computing capacity to Azure's AI infrastructure, crucial for training and running large models.

92% relevant

Foxconn to Mass-Produce 10,000+ CPO Optical Switches for AI in Q3 2026

Foxconn's manufacturing arm will begin volume production of advanced co-packaged optics (CPO) switches in Q3 2026, targeting over 10,000 units. This move directly addresses the critical bandwidth and power bottlenecks in next-generation AI data center infrastructure.

85% relevant

Google, Marvell in Talks to Co-Develop New AI Chips, Including TPU-Optimized MPU

Google is reportedly in talks with Marvell Technology to co-develop two new AI chips: a memory processing unit (MPU) to pair with TPUs and a new, optimized TPU. This move is a direct effort to bolster Google's custom silicon stack and compete with Nvidia's dominance.

95% relevant

OpenAI Launches GPT-Rosalind for Drug Discovery, GPT-5.4-Cyber for Security

OpenAI launched GPT-Rosalind, a life sciences model performing above the 95th percentile of human experts on novel biological data, and GPT-5.4-Cyber, a cybersecurity variant. These releases, alongside a major Agents SDK update, signal a pivot from general AI to specialized, high-stakes enterprise domains.

90% relevant

Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'

Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) match an NVIDIA A100's performance, calling cloud GPU training a 'scam' and advocating for distributed, consumer-hardware approaches.

77% relevant
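The arithmetic behind the claim is straightforward to check. The A100 figure below is the commonly cited ~19.5 TFLOPS FP32 peak, which is our assumption and not stated in the post; note that peak FLOPS ignores memory bandwidth and interconnect, which dominate real training throughput:

```python
# Peak-FLOPS comparison behind the "7 MacBooks match an A100" claim.
M4_TFLOPS = 2.9           # per-MacBook figure quoted in the post
NUM_MACS = 7
A100_FP32_TFLOPS = 19.5   # assumed A100 FP32 peak (not from the post)

combined = NUM_MACS * M4_TFLOPS
print(f"7x M4: {combined:.1f} TFLOPS vs A100: {A100_FP32_TFLOPS} TFLOPS")
```

On raw peak FP32 the numbers are indeed comparable (~20.3 vs ~19.5 TFLOPS); whether that translates to comparable training throughput over a consumer network is a separate question.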

New Research Proposes CPGRec to Balance Accuracy and Diversity in Game Recommendations

A new arXiv paper introduces CPGRec, a three-module framework for video game recommendations. It aims to solve the common trade-off between accuracy and diversity by using strict game connections and leveraging category/popularity data. Experiments on a Steam dataset show promising results.

74% relevant

Sabicap Develops Brain Wearable to Decode Imagined Speech into Text

Sabicap is developing a brain wearable with tens of thousands of sensors to decode imagined speech into text. The company, backed by Vinod Khosla, aims to create a system that works across users with minimal calibration for broad adoption.

95% relevant

Aehr Test Systems Lands $41M AI Chip Order; H2 Bookings Top $92M

Aehr Test Systems received a record $41 million production order from a key hyperscale AI customer. Total bookings for the second half of its fiscal year exceeded $92 million, highlighting surging demand for semiconductor test and burn-in equipment.

74% relevant

AI Tool 'Build' Generates Wiring Diagrams & BOMs from English Descriptions

A new AI tool, 'Build,' automates the tedious front-end of hardware prototyping. Users describe a project in plain English, and it generates wiring diagrams, a bill of materials, and step-by-step assembly instructions instantly.

85% relevant

Canada's AI Compute Gap: Google Cloud Montreal Offers 2017-Era Chips

A developer's attempt to rent modern AI compute in Canada revealed a stark infrastructure gap: major providers offered chips dating back to 2017, undermining national AI ambitions.

85% relevant