Care for Some Gates With Your Server?
One of the big themes at the Linley Data Center Conference earlier this month was the need to get more performance out of each server without increasing its power requirements. Adding processing power in the form of more cores per server, or more servers, increases power consumption: even if it yields incremental performance improvements, they come at the price of increased power. In addition, many algorithms run out of steam once they have "enough" cores or servers. Very large data centers have an almost unlimited number of servers, so it is hard to make any particular algorithm run faster simply by adding more of them, even ignoring power budget issues. Google search is not going to run faster by adding another thousand processors (which Google does pretty much every day, by the way).
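The diminishing returns the paragraph describes can be sketched with Amdahl's law, the standard model for this effect (the article does not name it explicitly; the 95%-parallel fraction below is an illustrative assumption, not a figure from the article). Speedup is capped by the serial fraction of the work, no matter how many servers you add:

```python
def speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: overall speedup on n processors when only
    `parallel_fraction` of the work can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Assume a workload that is 95% parallelizable (illustrative).
for n in (8, 64, 1024, 1_000_000):
    print(f"{n:>9} processors -> {speedup(0.95, n):5.2f}x speedup")

# The asymptotic limit is 1 / (1 - 0.95) = 20x: going from 1,024
# processors to a million buys almost nothing, while the power bill
# scales with the processor count.
```

Even with 95% of the work parallelized, a thousand processors get within a couple of percent of the 20x ceiling, which is exactly why "just add servers" stops paying off.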
To read the full article, click here