DARPA Selects Cerebras to Deliver Next Generation, Real-Time Compute Platform for Advanced Military and Commercial Applications
Cerebras Integrates Wafer Scale Technology and Ranovus Co-Packaged Optics Technology in a New Compute Platform Optimized for Real-Time AI and HPC
SUNNYVALE, Calif. -- April 1, 2025 -- Cerebras Systems, the pioneer in accelerating generative AI, has been awarded a new contract from the Defense Advanced Research Projects Agency (DARPA) for the development of a state-of-the-art high-performance computing system. The Cerebras system will combine the power of Cerebras’ wafer-scale technology and Ranovus’ wafer-scale co-packaged optics to deliver several orders of magnitude better compute performance at a fraction of the power draw.
“By combining wafer-scale technology and co-packaged optics interconnects, Cerebras will deliver a platform capable of real-time, high-fidelity simulations for the most challenging physical environments and the largest-scale AI workloads, pushing the boundaries of what is possible in AI and in high-performance computing,” said Andrew Feldman, co-founder and CEO of Cerebras. “Building on the successes of DARPA’s Digital RF Battlespace Emulator (DRBE) program, where Cerebras is currently executing the third phase and delivering a leading-edge RF emulation supercomputer, in this new initiative Cerebras and its partner Ranovus will deliver the industry’s first wafer-scale photonic interconnect solution.”
Two of the fundamental challenges faced by today’s computer systems are memory and communication bottlenecks. Put simply, compute requirements are growing at a much faster rate than either memory or IO technology can sustain. Cerebras’ pioneering work in wafer-scale integration has already closed the memory bandwidth gap. The Cerebras Wafer-Scale Engine has 7,000 times more memory bandwidth than GPUs and, as a result, delivers the world’s fastest AI inference and the fastest molecular simulations ever conducted.
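Why memory bandwidth, rather than peak compute, governs delivered performance can be seen with a simple roofline-style estimate. The sketch below uses purely illustrative numbers (they are not Cerebras or GPU specifications) to show that for low-arithmetic-intensity workloads such as AI inference, delivered throughput scales with memory bandwidth, not with peak FLOPs:

```python
# Roofline-style estimate: delivered throughput is capped by either raw
# compute or memory bandwidth, whichever binds first.
# All figures below are illustrative placeholders, not vendor specifications.

def attainable_tflops(peak_tflops: float,
                      mem_bw_tbps: float,
                      flops_per_byte: float) -> float:
    """Attainable TFLOP/s = min(peak compute, bandwidth x arithmetic intensity)."""
    return min(peak_tflops, mem_bw_tbps * flops_per_byte)

# A low-arithmetic-intensity workload (e.g. LLM token generation, ~1 FLOP
# per byte moved) is bandwidth-bound: raising memory bandwidth raises
# delivered performance almost linearly, while raising peak FLOPs alone
# changes nothing.
low_intensity = 1.0  # FLOPs per byte moved (illustrative)

baseline = attainable_tflops(peak_tflops=1000.0, mem_bw_tbps=3.0,
                             flops_per_byte=low_intensity)
high_bw = attainable_tflops(peak_tflops=1000.0, mem_bw_tbps=300.0,
                            flops_per_byte=low_intensity)

print(baseline)  # bandwidth-bound: 3.0 TFLOP/s delivered
print(high_bw)   # 100x the bandwidth -> 300.0 TFLOP/s delivered
```

The same reasoning applies to the communication bottleneck discussed next: once the memory wall is removed, inter-chip IO becomes the binding constraint.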
As part of this new DARPA effort, Cerebras will address the communication bottleneck by integrating advanced co-packaged optics interconnects from Ranovus, enabling compute performance impossible to achieve today even in the largest supercomputing clusters. Not only will this solution be orders of magnitude faster than today’s state of the art; it will also deliver compute at a fraction of the power consumed by GPUs tied together with traditional switches. Today’s switches and their optical interconnects are among the most power-hungry components in a large AI or simulation cluster. Integrating the optics into the wafer-scale package provides unmatched power efficiency.
“By solving these fundamental problems of compute bandwidth, communication IO and power per unit compute through Cerebras’ wafer scale technology plus optical integration with Ranovus co-packaged optics, we will unlock solutions to some of the most complex problems in the realm of real-time AI and physical simulations -- solutions that are today utterly unattainable,” said Feldman. “Staying ahead of our rivals with advanced, power efficient compute that enables faster AI and faster simulations is critical for US defense, as well as for the domestic commercial sector.”
The impact of this technology extends beyond national defense, serving dual-use applications in both Department of Defense (DoD) and commercial sectors. In particular, it holds immense potential for running real-time AI directly on sensor data, for real-time battlefield simulations, and for military and commercial robotics.
“We’re thrilled to collaborate with Cerebras on this groundbreaking innovation,” said Hamid Arabzadeh, CEO and founder of Ranovus. “Our Wafer-Scale Co-Packaged Optics platform will deliver 100 times the capacity of current Co-Packaged Optics solutions while significantly enhancing the energy efficiency of AI clusters. This partnership will establish a new industry standard for supercomputing and AI infrastructure, addressing the rising demand for data transmission and processing and paving the way for the next generation of military and commercial simulations and applications.”
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit cerebras.ai.
About Ranovus
Ranovus is a leader in advanced Co-Packaged Optics interconnect solutions, enabling the next generation of AI/ML workloads in data centers and communication networks. With deep expertise and a proven track record in optoelectronic subsystem development and commercialization, Ranovus is driving disruptive innovation in the AI compute industry. Our portfolio of IP cores—including Multi-Wavelength Quantum Dot Laser technology and advanced digital and silicon photonics integrated circuits—sets new benchmarks for power efficiency, size, and cost in optical interconnect solutions. At the heart of this innovation is Ranovus’ Odin® platform, a transformative technology designed to optimize data center architectures for AI/ML and communications applications. For further information, visit www.ranovus.com.