Care for Some Gates With Your Server?
One of the big themes at the Linley Data Center Conference earlier this month was the need to get more performance out of each server without increasing power requirements. Adding more processing power, whether as more cores per server or more servers, raises power consumption, so any incremental performance gain comes at the price of a bigger power bill. In addition, many algorithms run out of steam once they have "enough" cores or servers. Very large data centers already have almost unlimited numbers of servers, so it is hard to make any particular algorithm run faster just by adding more, even ignoring the power budget. Google search is not going to get faster by adding another thousand processors (which Google does pretty much every day, by the way).
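The scaling argument here is essentially Amdahl's law: once the serial portion of a workload dominates, extra servers buy almost nothing (while the power bill keeps growing). Below is a minimal illustrative sketch, not from the article; the 95% parallel fraction and the server counts are assumed example values.

```python
# Illustrative sketch (values are assumptions, not from the article):
# Amdahl's law, the standard model for why adding servers stops helping.

def amdahl_speedup(parallel_fraction: float, n_servers: int) -> float:
    """Speedup when only `parallel_fraction` of the work scales with
    server count; the remainder stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_servers)

if __name__ == "__main__":
    # Even with 95% of the work parallelizable, speedup flattens out fast:
    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} servers -> {amdahl_speedup(0.95, n):5.1f}x speedup")
    # Speedup approaches the 1 / (1 - 0.95) = 20x ceiling no matter how
    # many servers (and how much power) you throw at the problem.
```

With these assumed numbers, going from 1,000 to 10,000 servers moves the speedup from roughly 19.6x to 20x, which is the point of the Google search example above.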