How Low Can You Go? Pushing the Limits of Transistors - Deep Low Voltage Enablement of Embedded Memories and Logic Libraries to Achieve Extreme Low Power
By Synopsys
Rising demand for cutting-edge mobile, IoT, and wearable devices, along with the high compute demands of AI and 5G/6G communications, has driven the need for lower power systems-on-chip (SoCs). Power is a concern not only while a device is active (dynamic power) but also while it is idle (leakage power). In these highly competitive markets, being first to achieve best-in-class power efficiency carries significant rewards, and all of it must be accomplished without sacrificing performance or area. Power, performance, and area (PPA) are the critical metrics for today’s advanced semiconductor SoCs.
Synopsys Foundation IP Memory Compilers and Logic Libraries enable SoC designers to achieve the best possible PPA, extracting maximum performance from their designs while operating at the lowest possible supply voltages (near the threshold voltage of the transistors), thereby significantly reducing overall power consumption. The result is longer battery life and higher performance per watt.
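The attraction of deep low voltage operation comes from the quadratic dependence of dynamic power on supply voltage. The sketch below is a first-order, generic CMOS power model with assumed parameter values, not a Synopsys tool or characterization flow; in practice the achievable frequency and the leakage current also change with voltage, which is why library and memory characterization at these low-voltage corners matters.

```python
# Illustrative sketch only: first-order CMOS power model showing why lowering
# the supply voltage toward the transistor threshold cuts power.
# All parameter values below are assumed, generic numbers for demonstration.

def dynamic_power(alpha, c_eff, vdd, freq):
    """Switching (dynamic) power: P_dyn = alpha * C_eff * Vdd^2 * f."""
    return alpha * c_eff * vdd**2 * freq

def leakage_power(vdd, i_leak):
    """Static (leakage) power: P_leak = Vdd * I_leak."""
    return vdd * i_leak

# Assumed example parameters: 20% activity factor, 1 nF effective switched
# capacitance, 500 MHz clock, 10 mA total leakage current.
alpha, c_eff, freq, i_leak = 0.2, 1e-9, 500e6, 10e-3

for vdd in (0.8, 0.4):  # nominal supply vs. deep low voltage
    p_dyn = dynamic_power(alpha, c_eff, vdd, freq)
    p_leak = leakage_power(vdd, i_leak)
    print(f"Vdd={vdd:.1f} V: dynamic={p_dyn*1e3:.1f} mW, "
          f"leakage={p_leak*1e3:.1f} mW, total={(p_dyn + p_leak)*1e3:.1f} mW")
```

With these assumed numbers, halving Vdd from 0.8 V to 0.4 V cuts dynamic power roughly fourfold (64 mW to 16 mW) and leakage power in half, at the cost of slower transistors, which is the PPA trade-off the rest of this paper addresses.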
In this paper we will discuss:
- Deep low voltage requirements (0.4 V typical and below) for mobile, IoT, high-performance computing (HPC), automotive, and crypto applications
- Various techniques adopted by SoC designers to trade off PPA, including improvements to existing assist techniques for memory compilers
- Architectural and characterization enhancements to support lower voltages for logic libraries
- How Synopsys Memory Compilers and Logic Libraries have been enhanced to support deep low voltages to save power, while still achieving optimal performance and area and maintaining high reliability