Marvell Unveils Advanced Co-Packaged Optics for AI Accelerators

January 9, 2025

Marvell Technology, a leader in data infrastructure semiconductor solutions, has introduced an advanced co-packaged optics (CPO) architecture as part of its custom XPU architecture for AI accelerators. The announcement builds on Marvell's recent custom high-bandwidth memory (HBM) compute architecture, further solidifying the company's position in custom silicon solutions. This leap in technology is poised to address the ever-increasing performance demands of large-scale AI applications and data center infrastructure.

Marvell's new AI accelerator architecture combines XPU compute silicon, HBM, and other chiplets with Marvell's 3D Silicon Photonics (SiPho) Engines on a single substrate, linked by high-speed Serializer/Deserializer (SerDes) interfaces, die-to-die interconnects, and advanced packaging technologies. A key feature of the architecture is that electrical signals no longer leave the XPU package over copper cables or across printed circuit boards. Instead, integrated optics carry data between XPUs, enabling faster transfers over distances up to 100 times longer than traditional electrical cabling supports.
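
To make the described package concrete, the short Python sketch below lists the components the article says share one substrate and compares link reach. The 2-meter passive-copper baseline is an assumption added for illustration only; the article states just the roughly 100x reach advantage of the integrated optics.

```python
# Illustrative sketch of the XPU package composition described above.
# The copper reach baseline (2 m) is an assumed figure for illustration;
# the article only states that integrated optics reach ~100x farther.

ON_PACKAGE_COMPONENTS = [
    "XPU compute silicon",
    "High-bandwidth memory (HBM)",
    "Supporting chiplets",
    "3D Silicon Photonics (SiPho) Engines",
]

ASSUMED_COPPER_REACH_M = 2.0    # assumption: nominal passive-copper reach
OPTICAL_REACH_MULTIPLIER = 100  # stated: ~100x longer than electrical cabling


def optical_reach_m(copper_reach_m: float) -> float:
    """Implied optical reach relative to an assumed copper baseline."""
    return copper_reach_m * OPTICAL_REACH_MULTIPLIER


if __name__ == "__main__":
    print("On-substrate components:", ", ".join(ON_PACKAGE_COMPONENTS))
    print(f"Assumed copper reach: {ASSUMED_COPPER_REACH_M:.0f} m")
    print(f"Implied optical reach: {optical_reach_m(ASSUMED_COPPER_REACH_M):.0f} m")
```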

Co-Packaged Optics (CPO) Technology

Co-packaged optics (CPO) technology, central to Marvell’s latest advancement, integrates optical components within a single package, significantly improving high-speed signal integrity, reducing signal loss, and minimizing latency. This cutting-edge technology allows for scale-up connectivity within AI servers across multiple racks, enabling cloud hyperscalers to create custom XPUs with higher interconnect bandwidths and longer reach. As a result, the CPO technology meets the rapidly growing performance demands of AI applications.

By integrating optical components directly into the XPU package, Marvell notably reduces parasitic losses while improving signal integrity at high speeds. This matters in an increasingly competitive data infrastructure market as demand for robust, scalable AI solutions continues to grow. The efficiency gains from CPO translate into higher data transfer rates and lower power consumption, positioning Marvell favorably as it delivers high-performance solutions for large-scale AI workloads.

In practical terms, the integration of CPO technology into the XPU package means faster data transfers over extensive distances without the traditional limitations of electrical connections. This capability is critical for data centers that require seamless, high-performance interconnectivity across multiple racks, as well as for cloud service providers aiming to deliver enhanced AI processing capabilities. Marvell’s technology achieves this without compromising on power efficiency, which is increasingly vital as power consumption becomes a more prominent issue in data centers.

Marvell’s 3D SiPho Engine

One of the standout components in Marvell’s architecture is the 3D SiPho Engine, demonstrated at the Optical Fiber Communication Conference and Exhibition (OFC) 2024. This engine supports 200Gbps electrical and optical interfaces and plays a crucial role in integrating CPO into XPUs. The 6.4T 3D SiPho Engine is equipped with 32 channels of 200G electrical and optical interfaces, incorporating a range of elements, including modulators, photodetectors, modulator drivers, and microcontrollers. This comprehensive integration delivers twice the bandwidth, twice the input/output bandwidth density, and 30% lower power consumption per bit compared to devices with 100G interfaces.
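
As a quick sanity check on those figures, the sketch below works through the numbers stated in the article: 32 channels at 200 Gbps yields the 6.4 Tbps aggregate, and the bandwidth and power-per-bit comparisons against a 100G-interface device follow directly from the quoted ratios. No absolute power figures are disclosed, so only relative values are computed.

```python
# Back-of-the-envelope check of the 6.4T 3D SiPho Engine figures cited above.
# Only ratios stated in the article are used; no absolute power is assumed.

CHANNELS = 32                    # stated: 32 electrical/optical channels
LANE_RATE_GBPS = 200             # stated: 200G per channel
PREV_LANE_RATE_GBPS = 100        # comparison generation: 100G interfaces
POWER_PER_BIT_REDUCTION = 0.30   # stated: 30% lower power per bit vs. 100G devices

aggregate_tbps = CHANNELS * LANE_RATE_GBPS / 1000
bandwidth_ratio = LANE_RATE_GBPS / PREV_LANE_RATE_GBPS
relative_power_per_bit = 1.0 - POWER_PER_BIT_REDUCTION

print(f"Aggregate bandwidth: {aggregate_tbps:.1f} Tbps")                   # 6.4 Tbps
print(f"Bandwidth vs. 100G generation: {bandwidth_ratio:.0f}x")            # 2x
print(f"Power per bit vs. 100G generation: {relative_power_per_bit:.0%}")  # 70%
```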

The introduction of high-speed SiPho Engines, which combine both electrical and optical interfaces within a unified device, epitomizes innovation in addressing signal integrity and power dissipation challenges. This progress facilitates faster data transfers over longer distances while aligning with the growing demand for AI-driven processing power within data centers. The efficiency and performance gains brought about by the 3D SiPho Engine demonstrate Marvell’s commitment to pushing the boundaries of current AI infrastructure capabilities.

Marvell’s 3D SiPho Engine sets a new standard for integrating electrical and optical interfaces in a single device. This approach not only improves system performance but also significantly enhances power efficiency, addressing one of the critical challenges in developing next-generation AI infrastructure. By combining these advanced elements, Marvell’s technology supports the high-speed data transfer requirements of modern data centers while minimizing energy consumption, contributing to more sustainable and cost-effective operations.

Industry Trends and Shifts

Marvell's advancement in AI accelerator architecture reflects a broader semiconductor industry trend toward improving the bandwidth, data transfer rates, and operational efficiency of data centers. The shift from traditional electrical cabling to optical interconnects is driven by the promise of faster, longer-reach data transfers at lower power consumption, a consideration that grows more pressing as AI applications increase in complexity and scale.

The industry is also experiencing an increased focus on integrating various components within a single package to minimize signal loss and enhance efficiency. This integration and miniaturization enable more compact and efficient AI server designs, addressing both the space and power constraints commonly faced in data centers. By incorporating innovative packaging technologies that reduce latency and optimize power efficiency, Marvell’s architecture represents a significant step forward in the evolution of AI infrastructure.

Architectures supporting large-scale AI applications are crucial as the demand for processing power and fast data transfer grows exponentially. Marvell’s approach, enabling scale-up solutions that span multiple racks, highlights the importance of designing systems capable of handling vast amounts of data with high efficiency and reliability. The overarching trends in the industry point towards a concerted effort to develop more powerful, efficient, and scalable AI server clusters to meet the surging needs of various applications.

Expert Perspectives and Industry Impact

Executives at Marvell have underscored the significance of the development in the context of growing data center needs. Will Chu, senior vice president and general manager of the Custom, Compute, and Storage Group at Marvell, emphasizes the densification and performance enhancements the architecture brings to AI servers. Similarly, Nick Kucharewski, senior vice president and general manager of the Network Switching Business Unit at Marvell, describes the integration of CPO into custom XPUs as a logical step toward the higher interconnect bandwidths and longer reach required by AI scale-up servers.

Radha Nagarajan, senior vice president and chief technology officer of Optical Platforms at Marvell, emphasizes the importance of silicon photonics in meeting the increasing demands for bandwidth, signal integrity, and power efficiency. Since 2017, Marvell has been a pioneer in the silicon photonics space, delivering high-volume production-ready solutions that address the exacting requirements of leading hyperscalers. This ongoing innovation in silicon photonics technology underscores Marvell’s commitment to evolving its solutions to keep pace with rapidly changing industry needs.

Moreover, Vlad Kozlov, founder and CEO of LightCounting, projects a significant growth trajectory for CPO technology, predicting an increase from fewer than 50,000 port shipments today to more than 18 million CPO ports by 2029. This projection highlights the potential of CPO to become a staple of server infrastructure, given its inherent advantages over traditional electrical connections. These expert insights and industry projections underscore the transformative impact of Marvell's advancements in AI infrastructure and data center solutions.
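
Taken at face value, that projection implies a steep growth curve. The sketch below treats the figures as point estimates (50,000 ports today, 18 million in 2029) and assumes a 2025 starting point, neither of which LightCounting specifies precisely, to estimate the implied compound annual growth rate.

```python
# Rough estimate of the growth rate implied by the LightCounting projection.
# Assumptions (not from the article): "today" = 2025, and the bounds
# "<50,000" and ">18 million" are treated as point estimates.

CURRENT_PORTS = 50_000        # "less than 50,000 port shipments today"
PROJECTED_PORTS = 18_000_000  # "over 18 million CPO ports by 2029"
YEARS = 2029 - 2025           # assumed horizon

growth_multiple = PROJECTED_PORTS / CURRENT_PORTS
cagr = growth_multiple ** (1 / YEARS) - 1

print(f"Growth multiple over {YEARS} years: {growth_multiple:.0f}x")  # 360x
print(f"Implied compound annual growth rate: {cagr:.0%}")             # ~336%
```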

Future Directions and Implications

Marvell's CPO roadmap points toward AI server designs in which optical links, rather than copper, carry scale-up traffic between XPUs spanning multiple racks. If LightCounting's projection of more than 18 million CPO ports by 2029 holds, co-packaged optics will move from early deployments to a staple of AI server infrastructure, and custom XPUs built around integrated 3D SiPho Engines are well placed to benefit from that shift.

For hyperscalers, the practical implications are higher interconnect bandwidth, longer reach, and lower power per bit, allowing larger AI clusters to be built without the signal-integrity and power-dissipation penalties of extended electrical links. By combining custom compute silicon, HBM, and integrated optics in a single package, Marvell positions itself to supply these next-generation designs as demand for large-scale AI processing continues to grow.
