How Standards Are Unleashing the Power of DPUs for Cloud Computing
The slowing of Moore’s Law and the flattening of single-core performance are driving a new approach to compute. To handle the growing demands of data-centric workloads, hyperscalers and modern cloud data centers are looking for a new class of programmable processors that can process and move data efficiently at scale. They also need to support expanding deployments of advanced applications, including Generative AI models, that require GPUs, faster networking and distributed storage. Today’s cloud infrastructure is custom built, from SSDs and HDDs to SmartNICs and video accelerators, and the last standardized component, the server CPU, will not cut it as a universal general-purpose processor going forward. Enter the data processing unit, aka the DPU.
What is a DPU?
To read the full article, click here