Maximizing battery life on embedded platforms - Part 4. Turning off peripherals and subsystems
Chris Shore, ARM
EDN (November 19, 2012)
Friendly and unfriendly peripherals
Some systems have very clever peripherals and they are not just there to fill up the available silicon space. So, use the peripheral system to your advantage. If you have a DMA engine and need to copy large amounts of data about, then use it! You can either have the CPU go off and do something else in parallel during the transfer time or, if nothing else needs doing, put it to sleep and wake it up when the data is in place.
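As a minimal sketch of that idea, assuming a simple memory-mapped DMA controller: the register layout (DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL), the base address and the interrupt handler name below are invented placeholders for whatever your particular device and vector table actually provide.

/* Sketch: start a DMA copy, then sleep the core with WFI until the
   transfer-complete interrupt fires. Register names and addresses are
   hypothetical placeholders for your real DMA controller. */

#include <stdint.h>
#include <stdbool.h>

#define DMA_BASE   0x40010000u                                /* assumed base address */
#define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x00))  /* source address       */
#define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x04))  /* destination address  */
#define DMA_LEN    (*(volatile uint32_t *)(DMA_BASE + 0x08))  /* transfer length      */
#define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x0C))  /* control register     */
#define DMA_START  (1u << 0)

static volatile bool dma_done;   /* set from the transfer-complete interrupt */

void dma_irq_handler(void)       /* assumed to be hooked into the vector table */
{
    dma_done = true;
}

void copy_and_sleep(const void *src, void *dst, uint32_t len)
{
    dma_done = false;

    DMA_SRC  = (uint32_t)(uintptr_t)src;
    DMA_DST  = (uint32_t)(uintptr_t)dst;
    DMA_LEN  = len;
    DMA_CTRL = DMA_START;        /* kick off the transfer */

    /* WFI wakes on any enabled interrupt, so loop until our flag is set. */
    while (!dma_done) {
        __asm volatile ("wfi");
    }
}

While the transfer runs, the core sits in a low-power state instead of executing a copy loop; the completion interrupt brings it straight back to the point where the data is needed.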
Also worth considering is the speed of your core relative to your peripherals. When doing something which is bounded by the speed of the peripherals, there is no point in keeping the core running at full speed: it will spend most of its time waiting. A good example is programming flash memory. The algorithm spends a lot of its time waiting for a response from the memory device, and no matter how fast you run the core, the memory will still respond at the same speed. So, if you are not doing anything else and you can control the clock speed of the core, reduce it to the minimum speed which allows you to respond to the memory in time. That way, the core will spend the same time waiting for each response but will consume less power while it does so.
The overall time to complete the operation will remain the same but the energy usage will go down.
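A sketch of that flash-programming case might look like the following. The names here are assumptions, not a real API: clk_set_core_hz(), flash_write_word() and flash_busy() stand in for whatever clock controller and flash interface your platform exposes, and the frequencies are purely illustrative.

/* Sketch: drop the core clock around a peripheral-bound flash programming
   loop, then restore it afterwards. Platform hooks below are hypothetical. */

#include <stdint.h>

#define CORE_CLK_FULL_HZ   (100u * 1000u * 1000u)   /* normal operating speed        */
#define CORE_CLK_SLOW_HZ   (  1u * 1000u * 1000u)   /* just enough to poll the flash */

extern void clk_set_core_hz(uint32_t hz);
extern void flash_write_word(uint32_t addr, uint32_t data);
extern int  flash_busy(void);

void flash_program(uint32_t addr, const uint32_t *data, uint32_t words)
{
    /* The flash programs at its own pace whatever the core clock, so slow
       the core right down while it has nothing to do but wait. */
    clk_set_core_hz(CORE_CLK_SLOW_HZ);

    for (uint32_t i = 0; i < words; i++) {
        flash_write_word(addr + 4u * i, data[i]);
        while (flash_busy()) {
            /* idle at the reduced clock while the word programs */
        }
    }

    clk_set_core_hz(CORE_CLK_FULL_HZ);   /* restore full speed for real work */
}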