Care for Some Gates With Your Server?
One of the big themes at the Linley Data Center Conference earlier this month was the need to get more performance out of each server without increasing its power requirements. Adding processing power in the form of more cores per server, or more servers, also adds power: even when it yields incremental performance gains, they come at the price of a larger power budget. In addition, many algorithms run out of steam once they have "enough" cores or servers. Very large data centers have an almost unlimited number of servers, yet it is hard to make any particular algorithm run faster just by adding more of them, even ignoring the power budget. Google search is not going to run faster by adding another thousand processors (which Google does pretty much every day, by the way).
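To put a rough number on those diminishing returns, here is a minimal sketch (my own illustration, not from the article) that uses Amdahl's law as the scaling model: if only a fraction p of a workload parallelizes and the rest stays serial, the speedup on n processors is 1/((1 - p) + p/n), which saturates at 1/(1 - p) no matter how many servers you add.

```python
# Minimal sketch (assumption: Amdahl's law, perfect scaling of the parallel
# fraction, no communication overhead) of why adding servers stops helping.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n processors when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.95  # hypothetical workload: 95% parallel, 5% serial
    for n in (1, 8, 64, 1000, 10000):
        print(f"{n:>6} processors -> {amdahl_speedup(p, n):6.2f}x speedup")
    # Even at 95% parallel, speedup saturates near 1/(1 - p) = 20x, so going
    # from 1,000 to 10,000 processors buys almost nothing while the power
    # bill keeps growing.
```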