
AMD Helios rack brings a new era of AI computing – open design instead of closed solutions


Artificial intelligence is no longer just about the performance of a single graphics card. AMD is responding with its most ambitious project yet – the AMD Helios rack, which redefines how the company approaches data centers. Instead of dozens of separate servers, it creates one huge computing organism in which everything – GPU, CPU, networking and cooling – works as a whole.

Open rack, open philosophy

At OCP Global Summit 2025, AMD unveiled its new rack-scale Helios system – an open reference design created in collaboration with the Open Compute Project and Meta. The key is the Open Rack Wide (ORW) standard, which is twice the width of conventional server racks. The result? More room for performance, easier servicing, and better cooling.

AMD Helios rack visualization showing modular GPU and cooling arrangements for AI data centers. Source: AMD

AMD claims the new design reduces service times by more than 30% and offers up to 50% more memory capacity compared to competing racks. At the heart of the new design is openness. While Nvidia is betting on closed systems (NVLink, NVSwitch), AMD’s Helios rack will allow manufacturers such as HPE, Dell and Supermicro to create their own variants on the same design. This is a significant shift towards standardization of AI hardware and more freedom for the market.

The power of 72 GPUs in a single rack

Helios is designed to integrate up to 72 AMD Instinct MI450 GPUs in a single rack. These accelerators build on the CDNA 4 architecture and represent the next step after the MI300 series. Each GPU is planned to offer up to 432 GB of HBM4 memory with a throughput of around 19.6 TB/s, which adds up to more than 31 TB of high-speed memory in a single rack.
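A quick back-of-envelope check shows where the aggregate figure comes from. The GPU count, per-GPU capacity and bandwidth below are AMD's projected numbers quoted above, not confirmed hardware specs:

```python
# Back-of-envelope check of the aggregate HBM4 figures claimed for Helios.
# All inputs are AMD's *projected* specs, not confirmed hardware.
GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432          # up to 432 GB of HBM4 per MI450 (projected)
BANDWIDTH_PER_GPU_TBS = 19.6   # ~19.6 TB/s per GPU (projected)

total_memory_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000   # decimal TB
total_bandwidth_tbs = GPUS_PER_RACK * BANDWIDTH_PER_GPU_TBS

print(f"Aggregate HBM4 capacity:  {total_memory_tb:.1f} TB")     # ~31.1 TB
print(f"Aggregate HBM4 bandwidth: {total_bandwidth_tbs:.0f} TB/s")
```

72 × 432 GB works out to roughly 31.1 TB, matching the "more than 31 TB" claim.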

The GPUs are interconnected using UALink (Ultra Accelerator Link) technology, which provides throughput of up to 260 TB/s within the rack. Between racks, UALoE (UALink over Ethernet) allows multiple units to be connected without proprietary cables. This is a fully open standard that other companies can adopt – and that is the main difference from Nvidia's approach.

An official render of the AMD Helios rack from AMD’s presentation at OCP Global Summit 2025. Source: AMD

AMD states that Helios targets up to 2.9 exaFLOPS of FP4 and 1.4 exaFLOPS of FP8 performance. For now these are projected values, to be confirmed after the launch in 2026.
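Dividing the rack-level targets by the GPU count gives an implied per-accelerator figure. This is only a rough sanity check on the projected numbers, not an official per-GPU spec:

```python
# Implied per-GPU throughput derived from AMD's projected rack-level targets.
# These are rough derived values, not official MI450 specifications.
GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9
RACK_FP8_EXAFLOPS = 1.4

fp4_per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
fp8_per_gpu_pflops = RACK_FP8_EXAFLOPS * 1000 / GPUS_PER_RACK

print(f"Implied FP4 per GPU: ~{fp4_per_gpu_pflops:.0f} PFLOPS")  # ~40
print(f"Implied FP8 per GPU: ~{fp8_per_gpu_pflops:.0f} PFLOPS")  # ~19
```

The implied ~40 PFLOPS of FP4 per GPU is consistent with the rack target being an aggregate of 72 accelerators.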

AMD EPYC “Venice” (Zen 6) processors and Pensando “Vulcano” DPUs are also part of the system. The CPUs handle data flow and task scheduling, while the DPUs accelerate network operations and security functions. The entire rack uses direct liquid cooling to maintain stable performance at lower power consumption.

Simply put – in the AMD ecosystem, every element plays a role:
CDNA 4 defines the internals of the GPU chips, Zen 6 the architecture of the CPU cores, and the AMD Helios rack is the way all of these components are reliably combined into entire AI superclusters. It’s a rack-platform architecture – an open system that defines what a future data center built on AMD components should look like.

The DPU (Data Processing Unit) serves as a standalone processor to manage data streams, network communications and security, offloading both the CPU and GPU and increasing overall system efficiency.

Oracle as a pilot partner

The first partner to confirm a Helios deployment is Oracle Cloud Infrastructure (OCI). A supercluster built on the AMD Helios rack, with tens of thousands of MI450 GPUs, is planned to launch in 2026. The goal is to create an environment for training and inference on models with extreme memory and throughput requirements.

Oracle will be the first hyperscaler to make this type of rack-scale infrastructure publicly available. It will also be a pilot test of how AMD’s open model performs in an environment where Nvidia has long dominated with its NVL72 systems and the upcoming Vera Rubin platform.

AMD Helios rack and its competitors

The year 2025 has been marked by a full-scale AI offensive. Nvidia announced the Vera Rubin platform and is preparing NVL144 racks that extend the NVSwitch ecosystem and its tightly coupled CPU/GPU designs. Intel, in turn, introduced Crescent Island, an Xe3-based GPU aimed at AI inference – full launch is expected in 2026.

AMD, however, is betting on different values. The Helios rack is not about maximum numbers but about open collaboration – so that AI computing doesn’t become a single vendor’s game. From a technology perspective, Helios thus represents a “democratization” of AI infrastructure in which different companies and research teams can take part.

What lies ahead in 2026

Helios is currently in the reference design and testing stage with partners. The first commercial deployments are planned for the second half of 2026, with AMD already confirming that it will expand the project further by working with more cloud providers. In subsequent generations, Helios is also set to receive an MI450X variant with a larger rack width and higher power density.

We’ll know the exact performance and power-consumption figures only after the official launch, but it’s already clear that AMD is entering this new segment boldly, with a vision of an open AI standard.

Conclusion

The AMD Helios rack represents a new direction in AI data center development – open, modular and ready for a future where it’s no longer just about the power of a single chip, but about the efficiency of an entire ecosystem. If AMD delivers on its promises, Helios could fundamentally change the balance of power in AI infrastructures as early as 2026 and become the most open alternative to Nvidia’s dominance.
