
ARTIFICIAL INTELLIGENCE

Tech Giants Unite to Boost Ethernet for AI Networks

A new industry consortium, ESUN, is forming to advance Ethernet's capabilities for high-performance scale-up AI infrastructure through open standards and collaboration.

Oct 14, 2025
Summary

Leading technology companies have launched the Ethernet for Scale-Up Networking (ESUN) initiative, an effort to extend Ethernet to meet the demanding requirements of artificial intelligence infrastructure. Hosted by the Open Compute Project, ESUN will focus on open, standards-based Ethernet switching and framing, working in conjunction with existing groups like the Ultra-Ethernet Consortium (UEC) and IEEE 802.3. This collaboration aims to foster interoperability and accelerate innovation within the AI networking landscape, supporting the development of robust, lossless, and error-resilient network topologies for advanced AI workloads.

Modern data centers are rapidly evolving to meet the demands of advanced AI workloads. Credit: Shutterstock

The rapid expansion of artificial intelligence (AI) networking technology has prompted a significant industry collaboration focused on ensuring Ethernet can effectively manage the escalating demands. A new initiative, Ethernet for Scale-Up Networking (ESUN), has been established by the nonprofit Open Compute Project (OCP) to advance Ethernet’s capabilities for scale-up connectivity across accelerated AI infrastructure. This crucial development was announced during the 2025 OCP Global Summit in San Jose, California.

ESUN brings together an impressive roster of technology leaders, including AMD, Arista, ARM, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, Nvidia, OpenAI, and Oracle. These companies are committed to addressing the evolving needs of AI workloads, which are fundamentally reshaping modern data center architectures. The initiative aims to align on open standards, integrate best practices, and accelerate innovation in Ethernet solutions specifically for scale-up networking environments.

Advancing Ethernet for AI Workloads

ESUN’s core mission is to focus exclusively on open, standards-based Ethernet switching and framing for scale-up networking. The initiative will intentionally exclude host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies, emphasizing a clear, open-standard approach. This targeted focus is designed to expand the development and interoperability of XPU network interfaces and Ethernet switch ASICs, which are critical components for high-performance AI networks.

The OCP, in its announcement, highlighted that ESUN’s initial efforts will concentrate on Layer 2 and Layer 3 Ethernet framing and switching. This foundational work is essential for enabling robust, lossless, and error-resilient single-hop and multi-hop topologies, which are vital for the seamless operation of complex AI systems. Ensuring such resilience is paramount, as even minor delays can significantly impede thousands of concurrent operations within AI clusters.
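
To make the framing concept concrete, here is a minimal Python sketch that packs a standard Ethernet II header in front of a payload. It illustrates generic Layer 2 framing only; ESUN's eventual scale-up frame format is not described in the announcement, and the MAC addresses and payload here are placeholder values.

```python
import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Pack a standard Ethernet II header (Layer 2 framing) in front of a payload.

    dst_mac and src_mac are 6-byte MAC addresses; ethertype identifies the
    Layer 3 protocol carried in the payload (e.g., 0x0800 for IPv4).
    """
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

# A minimal frame with placeholder addresses and a padded payload.
frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),  # broadcast destination
    src_mac=bytes.fromhex("02abcdef0001"),  # locally administered source MAC
    ethertype=0x0800,                       # IPv4
    payload=b"\x00" * 46,                   # minimum Ethernet payload size
)
print(len(frame))  # 60 bytes; the NIC appends a 4-byte frame check sequence
```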

A key aspect of ESUN’s strategy involves active engagement with other organizations dedicated to advancing Ethernet for AI networks. This includes established bodies like the IEEE 802.3 Ethernet group and the more recently formed Ultra-Ethernet Consortium (UEC). This collaborative approach underscores a commitment to avoid fragmentation and build upon existing, robust standards, fostering a unified direction for AI networking development.

The Ultra-Ethernet Consortium, formed in 2023 by companies such as AMD, Arista, Broadcom, Cisco, Intel, Meta, and Microsoft, already counts over 75 members. Its objective is to construct a complete Ethernet-based communication stack architecture tailored for high-performance networking. Similarly, the Ultra Accelerator Link (UALink) consortium recently published its first specification, the UALink 200G 1.0 Specification, which defines an open standard interconnect for AI clusters supporting data rates of up to 200 gigatransfers per second (GT/s) per channel. ESUN will leverage the work of both IEEE and UEC where applicable, ensuring compatibility and efficiency in its own developments.

Arista’s CEO Jayshree Ullal and Chief Development Officer Hugh Holbrook outlined a modular framework for Ethernet scale-up, comprising three crucial building blocks. First, they emphasize common Ethernet headers for interoperability, ensuring a broad range of upper-layer protocols and use cases can be supported. Second, an open Ethernet data link layer will provide a high-performance foundation for AI collectives. This layer will utilize standards-based mechanisms such as Link-Layer Retry, Priority-based Flow Control, and Credit-based Flow Control to achieve cost-efficiency, flexibility, and performance. Finally, by relying on the ubiquitous Ethernet physical layer, ESUN guarantees interoperability across multiple vendors and a wide array of optical and copper interconnect options.
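
As a rough illustration of how credit-based flow control keeps a link lossless, the toy Python model below lets a sender transmit only while it holds credits that correspond to free buffer slots at the receiver. This is a simplified sketch of the general mechanism, not the specific standards-based behavior ESUN or the UEC would define, and the buffer size is arbitrary.

```python
from collections import deque

class CreditBasedLink:
    """Toy model of credit-based flow control on a single link."""

    def __init__(self, receiver_buffer_slots: int):
        # One credit per free buffer slot at the receiver.
        self.credits = receiver_buffer_slots
        self.receiver_queue = deque()

    def send(self, frame: bytes) -> bool:
        if self.credits == 0:
            return False  # back-pressure: the sender waits; the frame is never dropped
        self.credits -= 1
        self.receiver_queue.append(frame)
        return True

    def receiver_drain(self) -> None:
        if self.receiver_queue:
            self.receiver_queue.popleft()  # receiver frees a buffer slot...
            self.credits += 1              # ...and returns a credit to the sender

link = CreditBasedLink(receiver_buffer_slots=2)
print(link.send(b"frame-1"), link.send(b"frame-2"), link.send(b"frame-3"))  # True True False
link.receiver_drain()
print(link.send(b"frame-3"))  # True: a credit came back, so transmission resumes
```

Because the sender can never transmit into a full receiver buffer, frames are held back rather than dropped, which is the behavior the framework describes as lossless.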

This framework is also designed to support various upper-layer transports, including Scale-Up Ethernet Transport (SUE-T), a new OCP workstream seeded by Broadcom’s contribution of Scale-Up Ethernet (SUE). SUE-T aims to define functionalities for reliability scheduling, load balancing, and transaction packing, which are critical performance enhancers for specific AI workloads. Essentially, the ESUN framework enables individual accelerators to function as a cohesive, powerful AI supercomputer, where network performance directly influences the speed and efficiency of AI model development and execution. This layered approach promotes innovation without fragmentation, offering XPU accelerator developers flexibility in host-side choices while maintaining system design options and allowing for practical, iterative improvements.
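
To show why transaction packing matters, the sketch below greedily coalesces many small accelerator transactions into a few larger frame payloads, amortizing per-frame header overhead. The record format, sizes, and frame limit are hypothetical and do not reflect SUE-T's actual wire format.

```python
def pack_transactions(transactions: list[bytes], max_frame_payload: int = 4096) -> list[bytes]:
    """Greedily coalesce small transactions into larger frame payloads.

    Each transaction is prefixed with a 2-byte length so the receiver can
    split a payload back into individual transactions.
    """
    frames, current = [], b""
    for txn in transactions:
        record = len(txn).to_bytes(2, "big") + txn
        if current and len(current) + len(record) > max_frame_payload:
            frames.append(current)
            current = b""
        current += record
    if current:
        frames.append(current)
    return frames

# 1,000 tiny 64-byte transactions fit into 17 frames instead of 1,000.
frames = pack_transactions([b"\x00" * 64] * 1000)
print(len(frames))  # 17
```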

The Evolving Landscape of AI Networking Fabrics

The concept of scale-up AI fabrics (SAIF) has garnered significant attention, with research firm Gartner projecting substantial growth in this sector to support AI infrastructure initiatives through 2029. Gartner anticipates a dynamic vendor landscape over the next two years, characterized by the emergence of multiple technology ecosystems. The firm’s report, “What are ‘Scale-Up’ AI Fabrics and Why Should I Care?”, defines SAIF as providing high-bandwidth, low-latency physical network interconnectivity and enhanced memory interaction among nearby AI processors.

Current SAIF implementations are often vendor-proprietary and confined by proximity, typically limited to a single rack or row. Gartner recommends using Ethernet when connecting multiple SAIF systems, citing its optimal scale, performance, and supportability. This recommendation underscores the strategic importance of initiatives like ESUN in standardizing and enhancing Ethernet’s role in this burgeoning field.

Gartner forecasts major shifts in SAIF technology from 2025 through 2027, including increased traction for Nvidia’s SAIF offerings and other alternatives. As of mid-2025, Nvidia continues to dominate this technology segment, evolving and expanding its NVLink technology, branded as Nvidia NVLink Fusion, through partnerships with companies such as Marvell, Fujitsu, Qualcomm, and Astera Labs that integrate directly with Nvidia’s SAIF ecosystem.

However, the emergence of competing ecosystems, such as UALink and others, signals a move towards a more diverse and open environment. These initiatives hold the potential to foster a multivendor ecosystem, offering greater flexibility and reducing vendor lock-in. Such developments are expected to cultivate a more competitive market, ultimately benefiting users through broader choices and accelerated innovation in AI networking solutions. The collaborative spirit embodied by ESUN aligns perfectly with this future, aiming to create a standardized foundation that all participants can build upon.