HPE Unveils AI Networking Innovations and Key Partnerships
HPE recently launched new networking hardware and software at Discover Barcelona, enhancing AI capabilities and deepening collaborations with AMD and Nvidia.
Dec 3, 2025

HPE recently showcased a wide array of networking solutions and software at its Discover Barcelona 2025 customer event, aiming to equip enterprises for the evolving artificial intelligence (AI) networking landscape. This significant launch includes advanced switches and routers, alongside reinforced collaborations with AMD and Nvidia. A key highlight was the detailed plan for integrating AI technology from Juniper, whose acquisition HPE finalized in July.
Rami Rahim, president and general manager of the HPE networking business, explained the strategy to merge the robust features of HPE Aruba Networking Central with Juniper's core Mist AIOps software. Aruba Networking Central serves as HPE's primary cloud-based platform for managing and orchestrating wired and wireless networks across various environments. This integration seeks to create a powerful, unified management experience for diverse network infrastructures.
Unifying AI-Powered Network Management
Juniper's Mist AI and Marvis virtual network assistant (VNA) are known for their ability to collect vast amounts of telemetry and user state data from various network devices. This data helps in providing actionable insights and automating workflows for detecting and resolving complex enterprise networking issues. HPE is now incorporating Mist's Large Experience Model (LEM) into Aruba Networking Central.
The Mist LEM utilizes billions of data points from popular applications like Zoom and Teams, augmented with synthetic data from digital twins. This advanced model is designed to quickly identify, troubleshoot, and predict video conferencing problems, enhancing network reliability. Concurrently, Aruba Networking's Agentic Mesh technology will become available to Mist, boosting its AI-based anomaly detection and root-cause analysis capabilities.
Furthermore, Mist will benefit from the organizational and global network operations center (NOC) views offered by HPE Aruba Networking Central. This integration will enable customers to manage operations seamlessly across both platforms. Rahim emphasized that while Mist was developed largely for cloud deployment, Aruba Networking Central supports a broader range of deployment models. The long-term objective is to unify the user experience of the two platforms by leveraging microservices architecture and cross-pollinating capabilities.
On the hardware front, HPE is introducing new solutions tailored for the AI data center edge and for scaling out network delivery. The Juniper MX series is a flagship routing family traditionally aimed at carriers and large-scale enterprise data centers, while the QFX line serves data center clients, anchoring spine/leaf networks and top-of-rack systems. These new hardware offerings are designed to meet the rigorous demands of AI workloads.
Next-Generation Hardware for AI Infrastructure
HPE's new 1U, 1.6 Tbps MX301 multiservice edge router is immediately available, targeting the deployment of AI inferencing closer to data generation sources. This router is suitable for metro, mobile backhaul, and enterprise routing applications. Rahim highlighted its high-density support for 16 x 1/10/25/50GbE, 10 x 100Gb, and 4 x 400Gb interfaces.
The MX301 acts as a critical on-ramp, providing high-speed, secure connections for distributed inference cluster users, devices, and agents from the edge to the central AI data center. These applications typically demand high performance, extensive logical scaling, and integrated security features. The router is engineered to deliver these capabilities, ensuring efficient and secure data flow for AI operations.
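As a rough sanity check on those density figures, the sketch below sums the quoted interfaces at their top speeds. This is an illustration, not a vendor datasheet: multi-rate ports are assumed to run at their maximum rate, and the 1.6 Tbps figure presumably describes the router's forwarding capacity, so a nominal port sum above it simply reflects the usual edge-router oversubscription.

```python
# Back-of-envelope: nominal aggregate port bandwidth of the MX301
# versus its quoted 1.6 Tbps figure. Assumes every multi-rate port
# runs at its top speed (an assumption, not a datasheet value).

PORT_GROUPS_GBPS = {
    "16 x 1/10/25/50GbE (at 50G)": 16 * 50,
    "10 x 100GbE": 10 * 100,
    "4 x 400GbE": 4 * 400,
}

total_gbps = sum(PORT_GROUPS_GBPS.values())
for group, gbps in PORT_GROUPS_GBPS.items():
    print(f"{group}: {gbps} Gbps")
print(f"Nominal aggregate: {total_gbps / 1000:.1f} Tbps")  # 3.4 Tbps
```

At roughly 3.4 Tbps of nominal port capacity against 1.6 Tbps of forwarding, the design lands near a 2:1 ratio at full tilt, which is unremarkable for an edge platform where not every port runs saturated at once.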
In the QFX series, the new QFX5250 switch is expected in the first quarter of 2026. This fully liquid-cooled system is designed to integrate Nvidia Rubin and/or AMD MI400 GPUs to serve AI workloads across the data center. Built on Broadcom Tomahawk 6 silicon, the QFX5250 supports 102.4 Tbps of Ethernet bandwidth.
Rahim explained that the QFX5250 combines HPE's liquid cooling technology with Juniper's networking software (Junos) and integrated AIOps intelligence. This combination aims to deliver high-performance, power-efficient, and simplified operations for next-generation AI inference workloads. The focus is on providing robust and scalable networking solutions that can handle the intense computational demands of modern AI.
Expanding Strategic Industry Collaborations
A crucial element of HPE and Juniper's AI networking strategy involves deepening their partnerships with Nvidia and AMD. The collaboration with Nvidia now extends to include HPE Juniper edge on-ramp and long-haul data center interconnect (DCI) support within the Nvidia AI Computing by HPE portfolio. This expansion utilizes the MX and Juniper PTX hyperscaler routers to facilitate high-scale, secure, and low-latency connections.
These connections are vital for linking users, devices, and agents to AI factories, as well as for establishing connections between AI clusters deployed across greater distances or multiple cloud environments. Earlier this year, Nvidia AI Computing by HPE was introduced as a partnership to accelerate enterprise AI deployment. It encompasses Nvidia's Enterprise AI Factory validated designs, the Spectrum-X Ethernet networking platform, and Nvidia BlueField-3 data processing units (DPU). In a further development, the companies announced plans to establish an AI factory lab in Grenoble, France, where customers can test and refine their AI workloads.
With AMD, HPE committed to supporting AMD's Helios AI rack scale architecture, featuring integrated scale-up Ethernet networking. Specifically, this system will be a scale-up, turnkey Ethernet package incorporating a purpose-built Juniper Networks switch. This switch will be based on Broadcom's 102.4 Tbps Tomahawk 6 network silicon and will employ the Ultra Accelerator Link over Ethernet (UALoE) specification.
The UALoE specification, established earlier this year by a consortium of 75 members including AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys, defines the technology for supporting a maximum data rate of 200 gigatransfers per second (GT/s) per channel or lane between accelerators and switches. The standard allows up to 1,024 accelerators to be connected within a single AI computing pod. Helios itself is built around the next-generation AMD Instinct MI450 Series GPUs, offering up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth. This architecture aims to ensure high-performance communication across GPUs, nodes, and racks, ultimately supporting trillion-parameter training and large-scale AI inference development.
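To put those figures in one frame: at 200 GT/s per lane, each lane moves roughly 200 Gb/s, or 25 GB/s, assuming one bit per transfer and ignoring encoding overhead (both assumptions for illustration). A back-of-envelope sketch shows how many lane-equivalents it takes to reach the quoted Helios aggregates:

```python
# Back-of-envelope: lane-equivalents needed to reach the quoted
# Helios aggregate bandwidth figures. Assumes ~1 bit per transfer
# per 200 GT/s lane (so ~200 Gb/s, or 25 GB/s, per lane) and
# ignores encoding and protocol overhead.

LANE_GBPS = 200               # 200 GT/s per lane, ~200 Gb/s
LANE_GBYTES = LANE_GBPS / 8   # ~25 GB/s per lane

scale_up_lanes = 260 * 1000 / LANE_GBYTES   # 260 TB/s scale-up
scale_out_lanes = 43 * 1000 / LANE_GBYTES   # 43 TB/s scale-out
print(f"Scale-up:  ~{scale_up_lanes:,.0f} lane-equivalents")
print(f"Scale-out: ~{scale_out_lanes:,.0f} lane-equivalents")
```

On these rough assumptions, the 260 TB/s scale-up figure corresponds to on the order of ten thousand 200 GT/s lanes across a rack, which gives a sense of why the switch silicon and cabling for this tier are a story in themselves.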
Rahim highlighted the groundbreaking nature of this collaboration, emphasizing the introduction of Ethernet to a new layer of the AI data center network. He noted that this scale-up solution leverages standard Ethernet, ensuring an open standard approach that avoids proprietary vendor lock-in. It utilizes proven HPE Juniper networking technology to deliver optimized performance and scalability for AI workloads, marking a significant advancement in AI infrastructure.
Additional Innovations and Platform Enhancements
HPE also announced several other networking advancements. The Apstra Data Center Director and Data Center Assurance software from HPE and Juniper will be integrated with HPE's OpsRamp management package. OpsRamp monitors servers, networks, storage, databases, and applications. This integration aims to enhance data center automation, leveraging Apstra's capabilities to enforce consistent network and security policies across physical and virtual infrastructures. Available through GreenLake, this combined solution will offer full-stack observability, predictive assurance, and proactive issue resolution across compute, storage, networking, and cloud environments.
Additionally, HPE is introducing software-defined networking for virtual machines (VMs) hosted by the HVM hypervisor in HPE Morpheus VM Essentials and HPE Morpheus Enterprise Software. The goal is to bring cloud-enabled networking and security capabilities to the virtual machine platform, providing enhanced flexibility and protection for virtualized workloads.
In the realm of storage, HPE unveiled two new options: the StoreOnce 5720 and the all-flash 7700. These next-generation backup appliances are engineered for rapid protection of critical workloads. Both models integrate directly with HPE Alletra Storage MP and HPE SimpliVity, allowing customers to mount copies directly. This feature simplifies the process of reusing protected data for forensics, analysis, or testing purposes, increasing data utility and operational efficiency.