
CO-PACKAGED OPTICS

Co-Packaged Optics Transform AI Data Centers

Co-packaged optics (CPO) is emerging as a vital technology for addressing the escalating demands of artificial intelligence workloads in data centers.

Dec 2, 2025

The surge in artificial intelligence workloads is driving an urgent need for advanced data center network solutions. Co-packaged optics (CPO) offer a promising answer by integrating optical components directly into network switches, thereby enhancing speed, capacity, and crucially, power efficiency. While still in its nascent stages, CPO is gaining traction among major vendors like Nvidia and Broadcom. This approach aims to circumvent the limitations of traditional optical transceivers, particularly power consumption and signal integrity at ever-higher data rates. The technology also introduces new considerations around reliability and deployment, with alternative solutions like linear pluggable optics (LPO) presenting different trade-offs.

Optical technology is becoming increasingly critical for high-speed data center networking. Credit: Shutterstock

The rapid expansion of artificial intelligence workloads is placing unprecedented strain on data center networks. This increasing demand for speed and capacity highlights the urgent need for innovative solutions. One such promising development is co-packaged optics (CPO), a technology designed to embed optical components more deeply within data center network switches.

CPO not only promises to support the higher data rates required by AI but also offers significant reductions in power consumption, a critical consideration for modern data centers. While still in its early phases, and with some industry players exploring alternative paths, many experts believe CPO is a crucial step towards meeting the insatiable demands of AI-driven data processing.

Advancing Data Center Networking with Co-Packaged Optics

Traditional data center switches typically connect to copper network cables via a network interface card (NIC). When transitioning to fiber-optic networking, the NIC must incorporate a transceiver with a digital signal processor (DSP). This DSP translates electronic signals into optical signals for transmission over fiber-optic cables and performs the reverse conversion for incoming optical signals.

Co-packaged optics fundamentally alters this architecture by eliminating the need for a separate transceiver and DSP. Instead, the electronic-to-optical translation capabilities are integrated directly onto the switch application-specific integrated circuit (ASIC). This means all necessary electrical-to-optical conversions occur on a single chip, in silicon. The result is a direct connection for optical cables to the switch, enabling a continuous optical stream from the switch ASIC to the fiber cable without intermediate conversions.

This is not merely a theoretical concept. TSMC, a global leader in semiconductor manufacturing, has developed processes for producing CPO-capable chips. They are actively collaborating with major technology firms, including Nvidia and Broadcom, to facilitate their deployment. Nvidia, for instance, has already unveiled optical switches that incorporate CPO technology, marking a significant step towards widespread adoption. Scott Wilkinson, a lead analyst at Cignal AI, emphasizes that these collaborations are bringing CPO to fruition in real-world applications.

The emergence of CPO is particularly timely given the immense pressure AI workloads exert on data centers and their underlying networks. Modern large data centers typically utilize a leaf-spine architecture. In this setup, servers within a rack connect to a top-of-rack (ToR) switch via copper cables. These ToR switches then form the data center’s backbone, or spine, which is predominantly fiber-optic. Consequently, every ToR switch linking to the fiber-optic backbone requires a NIC equipped with an optical transceiver. In an AI-centric data center, this translates to an enormous number of optical connections and, subsequently, a high volume of NICs.
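The scale of that transceiver count can be sketched with a quick back-of-the-envelope calculation. The rack and uplink numbers below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope count of the optics a leaf-spine fabric needs.
# The rack and uplink counts below are illustrative assumptions, not
# figures from the article.

def optical_links(racks: int, uplinks_per_tor: int) -> int:
    """Fiber links from top-of-rack (ToR) switches into the spine."""
    return racks * uplinks_per_tor

def transceivers_needed(racks: int, uplinks_per_tor: int) -> int:
    """Every fiber link terminates in an optical transceiver at both ends."""
    return 2 * optical_links(racks, uplinks_per_tor)

# e.g. 1,000 racks, each ToR switch with 8 spine uplinks
print(transceivers_needed(1000, 8))  # 16000 transceivers
```

Even at modest assumed uplink counts, a thousand-rack facility needs tens of thousands of optics, which is why eliminating per-link transceivers matters.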

Gilad Shainer, senior vice president of networking at Nvidia, highlights the scale of these deployments, noting that “we’re talking about tens and hundreds and thousands of racks.” Furthermore, AI workloads often span beyond a single server or even a single rack. While copper cables are effective for short-distance connections within a rack or chassis, they cannot deliver the speeds and low latency needed for longer distances between racks. This limitation underscores the critical role of optical networking in meeting AI’s demanding communication requirements.

Enhancing Efficiency and Performance

The increased reliance on optical cables in AI data centers significantly contributes to overall power consumption. Converting electronic signals into light and powering the necessary lasers consumes substantial energy. Each signal transition, involving an electrical-to-light or light-to-electrical conversion, also incurs some degree of signal loss. This necessitates sufficient power to ensure the signal’s integrity through multiple conversions.

According to Shainer, the power consumed by the optical network in a large AI factory can reach nearly 10% of the total compute capacity. By eliminating numerous transceivers, CPO technology has the potential to reduce networking power requirements by at least 3.5 times. While this figure might be an optimistic projection, industry analysts agree that the power savings are substantial. Zeus Kerravala, founder and principal analyst at ZK Research, suggests that CPO can reduce interconnect power by 60% or 70% in some scenarios, leading to considerable savings per switch compared to traditional pluggable modules.
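The quoted figures can be sanity-checked with simple arithmetic. The 100 MW facility size below is an assumption for illustration; the ~10% optics share and 3.5x reduction are the figures cited above:

```python
# Rough arithmetic on the power figures quoted in the article: optics
# at ~10% of an AI factory's power, and CPO cutting networking power
# by ~3.5x. The 100 MW facility size is an assumption for illustration.

def optics_power_mw(facility_mw: float, optics_share: float = 0.10) -> float:
    """Power drawn by the optical network at the quoted ~10% share."""
    return facility_mw * optics_share

def cpo_savings_mw(optics_mw: float, reduction_factor: float = 3.5) -> float:
    """A 3.5x reduction means the new draw is optics_mw / 3.5."""
    return optics_mw - optics_mw / reduction_factor

optics = optics_power_mw(100.0)   # 10.0 MW of optics
saved = cpo_savings_mw(optics)    # about 7.1 MW saved
print(f"optics: {optics:.1f} MW, CPO savings: {saved:.2f} MW")
```

Note that a 3.5x reduction works out to roughly a 71% cut, broadly consistent with Kerravala's 60% to 70% estimate.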

Beyond power efficiency, speed is another critical factor. As computational demands continue to soar, there is an escalating need for faster speeds within racks and chassis, often referred to as “scale-up” networks. This contrasts with “scale-out” networks, which handle connections between racks. Eventually, scale-up networks will require speeds of 400 gigabits per second (Gbps) per lane. A “lane” refers to the capacity of a single laser; for example, an 800G module might use eight lasers, each operating at 100G.

Wilkinson explains that the current generation of 1.6 terabit modules typically uses 200G per lane, with the next generation moving towards 400G per lane. At these speeds, copper connections become inadequate, necessitating optical solutions to achieve the required data density. CPO addresses this by bringing optics closer to the processing unit, thereby enabling these ultra-high speeds.
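The lane arithmetic above can be made concrete. This sketch simply divides module throughput by per-lane rate, using the examples the article gives:

```python
# Module throughput as lanes x per-lane rate, matching the article's
# examples: an 800G module as 8 lasers at 100G each, 1.6T modules at
# 200G per lane today, and 400G per lane in the next generation.

def lanes_required(module_gbps: int, lane_gbps: int) -> int:
    """Number of lanes (i.e., lasers) a module needs at a given lane rate."""
    assert module_gbps % lane_gbps == 0, "rates must divide evenly"
    return module_gbps // lane_gbps

print(lanes_required(800, 100))    # 8 lanes
print(lanes_required(1600, 200))   # 8 lanes (current generation)
print(lanes_required(1600, 400))   # 4 lanes (next generation)
```

Doubling the per-lane rate halves the lane count for the same module capacity, which is the pressure driving the move to 400G lanes.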

The reliability of CPO is an area still under evaluation, as widespread deployments have yet to occur. However, initial projections and design considerations offer compelling arguments. Nvidia claims its CPO-capable switches will enhance resiliency by tenfold at scale compared to previous switch generations. This is partly attributed to the fact that these optical switches require four times fewer lasers. The optical engine is integrated onto the ASIC, allowing multiple optical channels to share a single laser source.
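The laser-sharing arithmetic behind the "four times fewer lasers" claim can be sketched as follows; the channel count and the four-channels-per-laser figure are assumptions chosen to match the quoted ratio:

```python
# Laser-count comparison behind the "four times fewer lasers" claim:
# a pluggable module typically dedicates one laser per channel, while
# a CPO optical engine can share one laser source across several
# channels. channels_per_laser=4 is an assumption matching that ratio.

def lasers_pluggable(channels: int) -> int:
    return channels  # one laser per optical channel

def lasers_cpo(channels: int, channels_per_laser: int = 4) -> int:
    return -(-channels // channels_per_laser)  # ceiling division

print(lasers_pluggable(64))  # 64 lasers
print(lasers_cpo(64))        # 16 lasers, i.e. 4x fewer
```

Fewer lasers means fewer components that can fail, which is one basis for the resiliency claim.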

Furthermore, in Nvidia’s implementation, the laser source sits outside the switch. This design choice allows a failed laser source to be hot-swapped without shutting down the switch. A common concern is that a CPO box would require complete replacement if an embedded photonics engine fails, but Wilkinson dismisses this as a “fallacy.” He points out that these components have few moving parts and are not expected to fail frequently. Kerravala suggests a simple workaround for hyperscalers: overbuild with 5% to 10% more ports than needed, allowing traffic to be moved easily if a port fails.

Industry Landscape and Competing Technologies

The list of major switch vendors fully embracing CPO is still developing, though prominent component manufacturers like TSMC are key facilitators. Among the significant switch providers, Broadcom has been actively progressing with CPO since 2021. Their third-generation offering, the Tomahawk 6 – Davisson (TH6-Davisson), is now being shipped to early access customers and partners. Developed in collaboration with Micas Networks, TSMC, and HPE, the TH6-Davisson is an Ethernet switch supporting 102.4 terabits per second (Tbps) of optically enabled switching capacity.

Nvidia also announced new photonics switches with CPO support in March 2025, developed with partners including TSMC. Their Nvidia Spectrum-X Photonics switches are designed for a total throughput of 400 Tbps across various port configurations. The liquid-cooled Nvidia Quantum-X Photonics switches offer 144 ports of 800 Gbps InfiniBand, based on 200 Gbps SerDes technology. Shipments are anticipated for next year, with the InfiniBand models expected first.

Cisco, on the other hand, is adopting a more cautious approach to its optics strategy, partly due to reliability concerns. While they demonstrated a CPO switch in 2023, a formal product announcement has yet to be made. Bill Gartner, senior vice president and general manager of Cisco’s optical systems and optics business, has expressed concerns about the assembly complexity of CPO packages, which could involve over a thousand optical connections. He believes the industry needs to undergo a “learning curve” to ensure high yield and reliability for such intricate assemblies.

Arista is notably absent from the CPO proponents, instead backing a competing technology: linear pluggable optics (LPO). LPO shares much of CPO’s theoretical foundation but with one key difference: it does not embed optics at the chip level. Instead, it retains the familiar pluggable format. LPO eliminates the DSP on the transceiver, relying instead on linear components such as transimpedance amplifiers (TIAs) to carry the optical signals. Proponents argue that LPO can connect to standard switch ports, is easily replaceable, and offers power efficiency comparable to CPO’s.

Wilkinson explains that the DSP in traditional transceivers compensates for signal distortions across the electrical link from the switch, through a circuit board and connector, to the pluggable optic. LPO assumes that the DSP on the switch itself will be sufficient to correct any distortions. He cautions that while this might be true in some cases, the variability across devices and networks makes a universal guarantee impossible. CPO largely bypasses this issue due to the extremely short electrical link between the switch and the optical engines.

Vijay Vusirikala, a distinguished leader for AI systems at Arista, offers a different perspective. He asserts that the underlying optics, including the number of lasers and silicon photonics technology, are identical for both LPO and CPO. Consequently, both technologies deliver similar power consumption benefits, with LPO potentially consuming less power at higher speeds like 1600G. The core difference, Vusirikala states, lies in packaging: on-chip versus pluggable. He argues that the pluggable nature of LPO significantly impacts serviceability, allowing for quick replacement of a failed optics module in minutes, whereas a CPO failure could necessitate replacing the entire switch, taking hours. Furthermore, pluggable optics modules boast greater maturity, with hundreds of millions of units expected to ship next year, while CPO volume shipments are not anticipated until 2027.

Another emerging technology, co-packaged copper (CPC), is conceptually similar to CPO but employs copper cabling co-packaged with the ASIC instead of optics. Wilkinson notes that CPC’s applicability is limited to short distances, such as “scale-up” connections within an AI node. Ultimately, copper will reach its speed limitations, necessitating a transition to optics. Kerravala concurs, stating that CPC is better suited for legacy interconnects, smaller enterprise racks, and short-reach applications. He emphasizes CPO’s significant advantage in power consumption, as CPC typically results in a larger footprint and higher power usage.

Considering the array of competing technologies and specific use cases, enterprise users may question the relevance of co-packaged optics to their operations. CPO is primarily designed for high-end data center switches, making it largely a domain for hyperscalers. However, it is not exclusively so. Wilkinson suggests that large enterprises with extensive racks of switches performing similar functions might find CPO appealing.

Kerravala advises that while CPO isn’t an immediate necessity for most enterprises, it warrants consideration as they begin building their own private AI and high-performance computing clusters. Enterprises need to clearly define their AI roadmap and determine what they plan to build internally versus what they will run in the cloud. Nvidia’s perspective is clear: if a data center is being built, CPO offers compelling advantages. Shainer expresses confidence that data center developers will embrace CPO to reduce power consumption, enhance compute capacity, lower costs, and improve the overall resiliency of their facilities.