How Standards Are Unleashing the Power of DPUs for Cloud Computing
The slowing of Moore’s Law is driving a new approach to compute as single-core performance flattens out. To handle the growing demands of data-centric workloads, hyperscalers and modern cloud data centers are looking for a new class of programmable processors that can efficiently process and move data at scale. They also need to support expanding deployments of advanced applications, including Generative AI models, that require GPUs, faster networking, and distributed storage. Today’s cloud infrastructure is largely custom built, from SSDs and HDDs to SmartNICs and video accelerators, and the last standardized component, the server CPU, will not cut it as a universal general-purpose processor going forward. Enter the data processing unit, or DPU.
What is a DPU?
To read the full article, click here