NVLink Fusion - Nvidia’s Response to UALink?

May 22, 2025

Among many announcements, Nvidia released news about its NVLink Fusion technology this week at Computex 2025 in Taipei, Taiwan.

Scaling up with NVLink

NVLink is a high-speed interconnect compute fabric which connects together multiple GPUs in a server or rack. In recent years, it has been a key factor in maintaining Nvidia’s dominance in AI, as GPU interconnect speed is one of the greatest barriers to scaling up AI servers, determining the peak performance and power efficiency of an AI system.

NVLink provides unparalleled GPU-to-GPU interconnect bandwidth and latency. In its current fifth generation, Nvidia’s NVLink fabric can support up to 1.8TB/s of bandwidth per GPU (i.e. 900GB/s in each direction) for up to 72 GPUs per rack, around 14x the bandwidth of PCIe 5.0, making it a far better option for training very large AI models.
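As a rough sanity check on the ~14x figure, the nominal rates can be compared directly. The sketch below assumes the PCIe 5.0 comparison point is a full x16 link at 32 GT/s per lane with 128b/130b encoding; all values are published nominal rates, not measured throughput:

```python
# Nominal per-GPU bandwidth comparison: 5th-gen NVLink vs PCIe 5.0 x16.
# All figures are headline spec numbers, not real-world throughput.

NVLINK5_TOTAL_GBPS = 1800                 # 1.8 TB/s per GPU, both directions
NVLINK5_PER_DIRECTION = NVLINK5_TOTAL_GBPS / 2   # 900 GB/s each way

# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 16 lanes, 8 bits per byte
pcie5_per_direction = 32 * (128 / 130) * 16 / 8  # ~63 GB/s per direction
pcie5_bidirectional = pcie5_per_direction * 2    # ~126 GB/s both directions

ratio = NVLINK5_TOTAL_GBPS / pcie5_bidirectional
print(f"PCIe 5.0 x16 bidirectional: {pcie5_bidirectional:.0f} GB/s")
print(f"NVLink 5 advantage: ~{ratio:.1f}x")
```

The result lands at roughly 14x, consistent with the figure quoted above.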

Key to its operation is cache coherency, which ensures data consistency across GPUs, enabling all GPUs in a server or rack to behave as a single large accelerator via shared compute and memory resources. This simplifies programming models because developers do not need to manage memory coherency explicitly. Until now, however, NVLink’s usage has been confined to Nvidia’s GPUs.

Opening up NVLink with Fusion

Nvidia’s GPUs are already being used with AMD and Intel CPUs, and of course, with Nvidia’s own Grace CPUs. NVLink Fusion opens up the standard to allow other vendors’ CPUs and GPUs/custom accelerators to be used in Nvidia’s rackscale infrastructure, such as the upcoming DGX GB300 NVL72 system. However, Nvidia is only partly opening up the standard: it allows either custom CPUs to be connected to Nvidia’s GPUs (Figure 1: middle rack) or third-party GPUs/accelerators to be connected to Nvidia’s CPUs (Figure 1: right rack), i.e. an Nvidia CPU or GPU must always be part of the package.

Figure 1: Nvidia rackscale options enabled by NVLink Fusion. Left: All Nvidia; Middle: Third-Party CPU; Right: Third-Party Accelerator. Source: Nvidia.


NVLink Fusion Ecosystem

A number of vendors have signed up to use NVLink Fusion, including AIchip, Astera Labs, Fujitsu, Marvell, MediaTek, Qualcomm and Synopsys. Notably absent are AMD, Broadcom, Intel and a number of other companies.

Earlier in the week at Computex, Qualcomm announced that it was developing data center CPUs designed to connect with Nvidia’s GPUs using NVLink Fusion. Fujitsu likewise plans to make its next-generation 2nm Arm-based Monaka CPU compatible with NVLink Fusion.

Analyst Viewpoint

NVLink Fusion is Nvidia’s response to the threat of an alternative, fully open compute fabric called UALink. Developed by the UALink Consortium, led by AMD, Broadcom, Cisco, Intel and others, the first specification, UALink 200G, was published last month.

Hyperscalers and enterprises have been concerned for some time about the continuing dominance of Nvidia, which looks set to persist for many years to come. The objective of UALink is to create an alternative, competitive connectivity ecosystem for AI accelerators, enabling companies such as AMD, AWS, Broadcom, Google, Intel, Meta and Microsoft to build less expensive rackscale infrastructure using open, industry-standard technologies. This effort also extends to scale-out networks, with Ultra Ethernet being developed as an alternative to Nvidia’s InfiniBand networking technology. (N.B. Nvidia also offers its proprietary Spectrum-X Ethernet-based solution.)

Commercial availability of UALink is not expected until 2026/27, with GPUs/accelerators expected from AMD and Intel, along with switches from Astera Labs, Broadcom and others. However, UALink’s success will depend on whether it can persuade enough customers to buy AMD and Intel GPUs rather than Nvidia GPUs. Nvidia’s NVLink Fusion announcement this week makes that more challenging, as Fusion provides new options for customers. For Nvidia, selling an NVLink Fusion-based rackscale system is the next best thing to selling an all-Nvidia system, as it allows Nvidia to maintain an iron grip on the remaining rackscale components. Ultimately, however, Counterpoint believes that open-standards-based AI infrastructure will prevail due to its broad industry support; the only question is how long it will take for UALink to gain a sizeable market share. In the meantime, Nvidia will no doubt have other tricks up its sleeve to thwart progress!



Author

Gareth Owen

Gareth has been a technology analyst for over 20 years and has compiled research reports and market share/forecast studies on a range of topics, including wireless technologies, AI & computing, automotive, smartphone hardware, sensors and semiconductors, digital broadcasting and satellite communications.