gentic.news — AI News Intelligence Platform

Tool · GPU & Accelerator Comparator

H100, B200, MI300X, TPU v5p, Trainium2 — side by side.

Every spec quoted here comes from the vendor's official datasheet. Pick up to four accelerators to compare; derived per-watt and per-GB ratios update automatically. Use this before you argue about benchmarks.

| Spec | H200 (NVIDIA) | B200 SXM (NVIDIA) | MI300X (AMD) | Trainium2 (AWS) |
|---|---|---|---|---|
| Released | 2024 | 2025 | 2023 | 2024 |
| Architecture | Hopper | Blackwell | CDNA 3 | Trainium2 |
| Process | TSMC 4N | TSMC 4NP (dual die) | TSMC 5nm + 6nm (chiplet) | TSMC 5nm |
| Die | 814 mm² | 2 × 814 mm² (NV-HBI linked) | 8× XCD chiplets + IOD | 2 compute tiles + 4 HBM stacks |
| Transistors | 80B | 208B total | 153B | (undisclosed) |
| Memory | 141 GB | 192 GB | 192 GB | 96 GB |
| Memory type | HBM3e | HBM3e (8 stacks) | HBM3 | HBM3 |
| Memory BW | 4.80 TB/s | 8.00 TB/s | 5.30 TB/s | 2.90 TB/s |
| FP64 | 67 TFLOPS | 40 TFLOPS | 163 TFLOPS | — |
| BF16 | 0.99 PFLOPS | 2.25 PFLOPS | 1.30 PFLOPS | 0.65 PFLOPS |
| FP8 | 1.98 PFLOPS | 4.50 PFLOPS | 2.60 PFLOPS | 1.30 PFLOPS |
| FP4 | — | 9.00 PFLOPS | — | — |
| TDP | 700 W | 1000 W | 750 W | 500 W |
| Scale-up | NVLink 4 (900 GB/s) | NVLink 5 (1.8 TB/s) | Infinity Fabric (896 GB/s, 8-way) | NeuronLink (64-chip UltraServer) |
| Scale-up BW | 0.90 TB/s | 1.80 TB/s | 0.90 TB/s | 0.50 TB/s |
| Scale-out | NDR InfiniBand 400G | NDR/XDR InfiniBand | RoCE / InfiniBand | EFA (Elastic Fabric Adapter) |
| Approx. price | ~$32k | ~$38k | ~$18k | — |
| Availability | Shipping since mid-2024 | Ramping 2025-2026 | Shipping since late 2023 | AWS only |

Notes:

- H200: Same silicon as the H100; an HBM3e refresh with 76% more memory and 43% more bandwidth.
- B200 SXM: First dual-die AI GPU, linked via NV-HBI (10 TB/s). Liquid cooling required.
- MI300X: Largest HBM capacity of any 2023-era GPU. The ROCm software stack has closed the gap but still trails CUDA.
- Trainium2: Powers Project Rainier (500K+ chips for Anthropic). Available only via AWS EC2 Trn2 instances.

📊 Derived ratios (per chip)

| Metric | H200 | B200 SXM | MI300X | Trainium2 |
|---|---|---|---|---|
| Memory per watt (GB/W) | 0.201 | 0.192 | 0.256 | 0.192 |
| BF16 TFLOPS per watt | 1.41 | 2.25 | 1.73 | 1.30 |
| Memory BW per watt (GB/s/W) | 6.86 | 8.00 | 7.07 | 5.80 |
| $/GB HBM (at list) | $227/GB | $198/GB | $94/GB | — |
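The ratios above are straightforward divisions of the headline specs. A minimal sketch of the calculation, using the table's published numbers (the dictionary layout and function names here are illustrative, not the tool's actual code):

```python
# Headline specs from the comparison table:
# (memory GB, BF16 TFLOPS, memory BW GB/s, TDP W, approx. list price USD)
SPECS = {
    "H200":      (141,  990, 4800,  700, 32_000),
    "B200 SXM":  (192, 2250, 8000, 1000, 38_000),
    "MI300X":    (192, 1300, 5300,  750, 18_000),
    "Trainium2": ( 96,  650, 2900,  500, None),  # not sold at list price
}

def derived(mem_gb, bf16_tflops, bw_gbps, tdp_w, price_usd):
    """Per-watt and per-GB ratios as shown in the derived-ratios table."""
    return {
        "mem_per_watt_gb": round(mem_gb / tdp_w, 3),
        "bf16_tflops_per_watt": round(bf16_tflops / tdp_w, 2),
        "bw_per_watt_gbps": round(bw_gbps / tdp_w, 2),
        "usd_per_gb_hbm": round(price_usd / mem_gb) if price_usd else None,
    }

for chip, spec in SPECS.items():
    print(chip, derived(*spec))
```

Running this reproduces the table row for row, e.g. the H200 gives 0.201 GB/W, 1.41 BF16 TFLOPS/W, 6.86 GB/s/W, and $227/GB of HBM.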

Specs are taken from the NVIDIA H100/H200/B200/GB200 datasheets, AMD Instinct MI300X/MI325X sheets, Google Cloud TPU v5p docs, the AWS Trainium2 announcement, and Cerebras WSE-3 specs. Street prices are approximate and for orientation only; actual pricing varies by volume and contract. Highlighted cells mark the best value in the comparison. Rack-scale systems such as the GB200 NVL72 (a rack, not a chip) are excluded because they would skew per-chip ratios.