How Standards Are Unleashing the Power of DPUs for Cloud Computing
The slowing of Moore’s Law is driving a new approach to compute as single-core performance flattens out. To handle the increasing demands of data-centric workloads, hyperscalers and modern cloud data centers are looking for a new class of programmable processors that can efficiently process and move data at scale. They also need to support growing deployments of advanced applications – including Generative AI models – that require GPUs, faster networking, and distributed storage. Today’s cloud infrastructure is custom built, from SSDs and HDDs to SmartNICs and video accelerators, and the last standardized component, the server CPU, will not cut it as a universal general-purpose processor going forward. Enter the data processing unit, aka the DPU.
What is a DPU?