Cisco Unveils New Router and Chip for Advanced AI Data Centers
Cisco introduces its 8223 routing system and P200 Silicon One chip, engineered to support the demanding, distributed AI workloads of modern data centers.
- 6 min read
- 1,253 words
- Oct 8, 2025
Summary
Cisco has launched a new high-performance router, the 8223 routing system, and a corresponding programmable deep-buffer chip, the P200 from its Silicon One portfolio. This innovation aims to provide robust support for the increasingly distributed and demanding artificial intelligence workloads prevalent in today's hyperscale and large data center environments. The system features advanced optics and a design optimized for efficiency, addressing critical needs such as power consumption and network resiliency across vast AI infrastructures. Its architecture is designed to handle massive data surges and facilitate low-latency communication essential for AI training and applications.

Cisco Elevates AI Data Center Connectivity with New Hardware
Cisco has announced a new high-end router and a powerful chip designed to tackle the growing demands of distributed artificial intelligence workloads. This strategic move aims to bolster support for hyperscalers and major data center operators by providing enhanced connectivity solutions. The new offerings are engineered to handle the complex, geographically dispersed AI clusters that are becoming increasingly common across various industries.
At the core of this innovation is the Cisco 8223 routing system, a robust platform built around the latest iteration of the company’s Silicon One portfolio, the P200 programmable deep-buffer chip. This system incorporates advanced optical form factors, including Octal Small Form-Factor Pluggable (OSFP) and Quad Small Form-Factor Pluggable Double Density (QSFP-DD), crucial for enabling efficient communication within and across widely spread AI infrastructure. This focus on distributed capabilities reflects a broader industry trend toward decentralized AI processing.
Rakesh Chopra, a Cisco Fellow and senior vice president for Silicon One, highlighted the necessity of these advancements. He noted that power constraints and the imperative for resilient operations are driving hyperscale, neocloud, and enterprise entities to adopt distributed AI clusters spanning campus and metropolitan areas. Such configurations require secure, high-performing, high-capacity, and energy-efficient network connectivity, precisely what the 8223 system aims to deliver. Its design is specifically optimized for large-scale disaggregated fabrics, allowing customers to scale their AI infrastructure with unparalleled efficiency and control.
Next-Generation Silicon One Architecture
Cisco’s Silicon One processors are purpose-built for exceptional network bandwidth and performance. A key design principle behind these processors is their versatility; they can be customized for either routing or switching functions from a single chipset. This eliminates the need for distinct silicon architectures for each network role, simplifying design and deployment. The Silicon One system also boasts enhanced Ethernet features, including improved flow control mechanisms and sophisticated congestion awareness and avoidance capabilities, which are vital for maintaining network stability under heavy loads.
The P200 chip, central to the new 8223 system, represents a significant leap in processing power and efficiency. A single P200-based system can manage the traffic volume that previously necessitated six 25.6 Tbps fixed systems or a modular system with four slots. This consolidation dramatically reduces the hardware footprint and operational complexity. Furthermore, the 8223’s 3RU, 51.2 Tbps configuration achieves a remarkable reduction in power consumption, using approximately 65% less energy compared to earlier generations, underscoring Cisco’s commitment to energy efficiency in high-performance computing.
The 8223 supports 64 ports of 800G coherent optics, enabling it to process over 20 billion packets per second. Its advanced buffering capabilities are critical for handling the massive traffic surges inherent in AI training applications. The P200 chip also allows the router to support a full 512 radix, providing extensive connectivity options. This architecture can scale to 13 petabits per second using a two-layer topology, or to 3 exabits per second with a three-layer topology, offering immense scalability for future AI demands.
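The quoted capacities are consistent with standard folded-Clos (fat-tree) scaling. As a rough sketch (assuming, as Cisco's figures are not broken down, that each of the 512 radix ports runs at 100 Gbps, so 512 × 100G = 51.2 Tbps per system, and that the fabric uses classic fat-tree endpoint counts):

```python
# Back-of-the-envelope Clos fabric scaling for a radix-512 system.
# Assumptions (not from Cisco): 100 Gbps per radix port, and a
# standard folded-Clos (fat-tree) built from identical systems.

PORT_GBPS = 100  # per-radix-port rate, assumed
RADIX = 512      # ports per system, from the article

def fabric_capacity_gbps(radix: int, tiers: int, port_gbps: float) -> float:
    """Endpoint-facing capacity of a folded-Clos fat-tree.

    Classic fat-tree counts: two tiers support radix**2 / 2 endpoint
    ports; three tiers support radix**3 / 4.
    """
    if tiers == 2:
        ports = radix ** 2 // 2
    elif tiers == 3:
        ports = radix ** 3 // 4
    else:
        raise ValueError("only 2- or 3-tier fabrics modeled")
    return ports * port_gbps

two_tier = fabric_capacity_gbps(RADIX, 2, PORT_GBPS)
three_tier = fabric_capacity_gbps(RADIX, 3, PORT_GBPS)
print(f"2-tier: {two_tier / 1e6:.1f} Pbps")   # ~13 Pbps, matching the article
print(f"3-tier: {three_tier / 1e9:.2f} Ebps") # ~3 Ebps, matching the article
```

Under these assumptions the model reproduces the article's 13 Pbps and 3 Ebps figures almost exactly.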
Addressing AI Traffic Management Challenges
The use of deep buffers in network traffic management, particularly for AI workloads, has been a subject of debate within the industry. Some experts contend that these buffers can lead to performance issues, as they fill and drain, introducing jitter into workloads and potentially slowing down operations. However, Chopra offers a different perspective, arguing that the true source of these challenges often lies not in the buffers themselves, but in suboptimal congestion management schemes and inefficient load balancing with AI workloads.
According to Chopra, AI workloads are often deterministic and predictable, allowing for proactive strategies to manage flow placement across the network and preemptively avoid congestion. This proactive approach, combined with the capabilities of the 8223, can mitigate the traditional concerns associated with deep buffers. The router’s deep-buffer design provides substantial memory for temporarily storing packets during periods of congestion or traffic bursts. This is an essential feature for AI networks, where inter-GPU communication can generate bursty, high-volume data flows.
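The idea Chopra describes can be illustrated with a minimal sketch (this is not Cisco's implementation): because AI collective flows are known in advance, a scheduler can assign them to fabric paths proactively, rather than relying on reactive hash-based ECMP, where two elephant flows can collide on the same path.

```python
# Hypothetical illustration of proactive flow placement: greedily
# assign each known flow to the currently least-loaded path.
from heapq import heappush, heappop

def place_flows(flows, num_paths):
    """Assign each (flow_id, gbps) to the least-loaded path.

    Returns {flow_id: path_index}. Placing the largest flows first
    keeps per-path load balanced, avoiding the hash collisions that
    reactive ECMP can suffer with a few large AI flows.
    """
    heap = [(0.0, p) for p in range(num_paths)]  # (load, path_index)
    assignment = {}
    for flow_id, gbps in sorted(flows, key=lambda f: -f[1]):
        load, path = heappop(heap)       # least-loaded path so far
        assignment[flow_id] = path
        heappush(heap, (load + gbps, path))
    return assignment

# Four 400G collective flows over two equal-cost paths:
flows = [("allreduce-0", 400), ("allreduce-1", 400),
         ("gather-0", 400), ("gather-1", 400)]
print(place_flows(flows, num_paths=2))  # each path carries exactly 800G
```

With hash-based placement the same four flows could land 1,200G on one path and 400G on the other; deterministic placement makes that outcome impossible.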
Gurudatt Shenoy, Vice President of Cisco Provider Connectivity, further elaborated on the benefits. He explained that when combined with its high-radix architecture, the 8223 enables more devices to connect directly, which reduces latency, conserves rack space, and further decreases power consumption. The outcome is a flatter, more efficient network topology that supports the high-bandwidth, low-latency communication that is paramount for demanding AI workloads. This integrated approach to network design and traffic management is central to the 8223’s effectiveness in AI environments.
Flexible Network Operating System Support
A notable aspect of the 8223 routing system is its initial support for open-source network operating systems. The first operating systems compatible with the 8223 are the Linux Foundation’s Software for Open Networking in the Cloud (SONiC) and Facebook Open Switching System (FBOSS). While Cisco’s proprietary IOS XR will be supported, its integration is slated for a later date. This decision underscores Cisco’s understanding of the evolving landscape of network software and the preferences of hyperscale operators.
SONiC is a significant development in network software, as it decouples the network software from the underlying hardware. This allows it to run on a diverse range of switches and ASICs from multiple vendors, while still providing a full suite of essential network features. These features include Border Gateway Protocol (BGP), remote direct memory access (RDMA), Quality of Service (QoS), and Ethernet/IP protocols. A core component of SONiC is its Switch Abstraction Interface (SAI), which offers a vendor-independent API for controlling forwarding elements such as switching ASICs, network processing units, or software switches in a uniform manner.
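The pattern SAI provides can be sketched as follows. This is a hypothetical illustration of the abstraction-layer idea, not the real SAI (which is a C API); the class and method names here are invented for the example.

```python
# Illustrative sketch of a vendor-independent forwarding abstraction,
# analogous in spirit to SONiC's SAI. All names are hypothetical.
from abc import ABC, abstractmethod

class ForwardingBackend(ABC):
    """Vendor-neutral contract the NOS programs against."""
    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

class SiliconOneBackend(ForwardingBackend):
    """Hypothetical driver for one vendor's ASIC."""
    def __init__(self):
        self.fib = {}
    def add_route(self, prefix, next_hop):
        self.fib[prefix] = next_hop

class OtherVendorBackend(ForwardingBackend):
    """Hypothetical driver for a different vendor's ASIC."""
    def __init__(self):
        self.table = []
    def add_route(self, prefix, next_hop):
        self.table.append((prefix, next_hop))

def program_routes(backend: ForwardingBackend, routes):
    # The NOS-side code is identical regardless of the ASIC underneath.
    for prefix, nh in routes:
        backend.add_route(prefix, nh)

asic = SiliconOneBackend()
program_routes(asic, [("10.0.0.0/8", "192.0.2.1")])
print(asic.fib)  # {'10.0.0.0/8': '192.0.2.1'}
```

Swapping `SiliconOneBackend` for `OtherVendorBackend` changes nothing above the interface, which is the portability property that lets SONiC run across ASICs from multiple vendors.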
SONiC is increasingly seen as a compelling alternative to more traditional, less flexible network operating systems. Its modularity, programmability, and cloud-native architecture make it a viable option for enterprises and hyperscalers seeking to deploy adaptable cloud networking solutions. Chopra noted that hyperscalers are currently the primary users requiring this level of capacity, and the 8000 series is specifically designed for these customers. As more enterprises adopt AI, broader operating system support will naturally follow, accommodating their diverse needs.
Expanding the Silicon One Portfolio for AI
The introduction of the 8223 system represents the latest in a series of strategic upgrades Cisco has made to its Silicon One family of switch and router solutions. These ongoing enhancements demonstrate Cisco’s commitment to evolving its hardware to meet the rapidly changing demands of AI and other high-performance computing applications. The Silicon One portfolio continues to grow, offering a wider range of options for various networking requirements.
Earlier in the year, Cisco expanded its Silicon One-based smart switches with the C9350 Fixed Access Smart Switches and the C9610 Modular Core. Both were specifically designed to handle AI workloads, including agentic AI, generative AI, automation, and augmented/virtual reality applications. Additionally, Cisco unveiled a new family of data center switches based on the Silicon One chip earlier this year. These switches incorporate built-in programmable data processing units from AMD, allowing them to offload complex data processing tasks and free up the switches for dedicated AI and large workload processing.
Cisco now offers 14 distinct varieties of Silicon One ASICs, covering a broad spectrum of applications. These range from leaf/top-of-rack campus switching to high-throughput AI-backbone applications, providing comprehensive solutions for various networking needs. This versatile silicon is a critical component across many of Cisco’s core switches and routers. It powers devices in the Nexus Series 8000, tailored for telco and hyperscaler environments, as well as the Catalyst 9500X/9600X enterprise campus switches and the 8100 line of branch and edge devices, demonstrating the broad applicability and strategic importance of the Silicon One architecture across Cisco’s product lines.