How Standards Are Unleashing the Power of DPUs for Cloud Computing
The slowing of Moore’s Law is driving a new approach to compute as single-core performance flattens out. To handle the increasing demands of data-centric workloads, hyperscalers and modern cloud data centers are looking for a new class of programmable processors that can efficiently process and move data at scale. They also need to support growing deployments of advanced applications, including generative AI models, that require GPUs, faster networking, and distributed storage. Today’s cloud infrastructure is custom-built, from SSDs and HDDs to SmartNICs and video accelerators, and the last standardized component, the server CPU, will not cut it as a universal general-purpose processor going forward. Enter the data processing unit, aka the DPU.
What is a DPU?
To read the full article, click here