System view of core clock frequency, voltage
Andrew Girson
(11/01/2004 10:00 AM EST)
Over the past few years, the embedded community has shown growing interest in the potential energy savings offered by microprocessors with dynamic core clock frequency and voltage (FV) adjustment capabilities. The decreasing physical size of high-performance wireless devices and the slow growth of battery energy density in recent years have brought new emphasis to low-power device designs incorporating advanced power management techniques such as FV adjustment.
The concept of FV adjustment is part of a new wave of low-power embedded technology. It rests on two facts of semiconductor physics: core voltage can be reduced if core clock frequency is reduced, and the power consumed by a microprocessor core is roughly proportional to the product of its clock frequency and the square of its supply voltage.
Sounds simple enough: just reduce core clock frequency and core voltage so power goes down and battery life increases. But energy is power multiplied by time, and software tasks may take longer to complete at lower clock frequencies, so the total energy consumed at a lower clock frequency is not guaranteed to be appreciably different from that consumed at a higher one.
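To make the tradeoff concrete, the sketch below models core power as proportional to f x V^2 and compares the energy consumed by a fixed, CPU-bound workload at two operating points. The proportionality constant, the two operating points and the assumption that runtime scales inversely with frequency are illustrative, not measurements from any particular part.

```c
#include <stdio.h>

/* Illustrative model only: active core power ~ K * f * V^2, and a purely
 * CPU-bound task's runtime scales as 1/f.  The constant K and the two
 * operating points below are made-up numbers for demonstration. */
#define K 1.0e-9   /* arbitrary scaling constant */

static double core_power(double freq_hz, double volts)
{
    return K * freq_hz * volts * volts;          /* P ~ f * V^2 */
}

int main(void)
{
    const double cycles = 400e6;                 /* fixed work: 400M cycles */

    /* Operating point A: 400 MHz at 1.3 V; point B: 200 MHz at 1.0 V. */
    double p_a = core_power(400e6, 1.3), t_a = cycles / 400e6;
    double p_b = core_power(200e6, 1.0), t_b = cycles / 200e6;

    /* Energy = power * time.  Power drops sharply at point B, but the task
     * takes twice as long, so the energy saving is smaller than the power
     * saving suggests. */
    printf("A: P=%.3f W  t=%.2f s  E=%.3f J\n", p_a, t_a, p_a * t_a);
    printf("B: P=%.3f W  t=%.2f s  E=%.3f J\n", p_b, t_b, p_b * t_b);
    return 0;
}
```

If the voltage cannot be lowered along with the frequency, the two energies come out essentially identical, which is exactly the "not guaranteed" case described above.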
So FV adjustments must be made intelligently, implying firmware algorithms that dynamically analyze hardware and software variables and make judicious modifications, based on a series of device and user criteria. Such algorithms can be implemented in a number of ways; for example, with low-level software running on the microprocessor or with logic inside the microprocessor. But what are the parameters and features that change dynamically and provide useful information for an algorithm to make adjustments?
This article will look at a microprocessor-centric algorithmic approach and at how this limited approach can be expanded to take a systemwide embedded device view that can be incorporated into overall device design decisions.
When it comes to the microprocessor and energy management, relative performance level is a key variable. Reducing the clock frequency lowers performance, but such a reduction may be perfectly acceptable in certain situations. This is the concept behind many microprocessor-focused FV adjustment techniques. These techniques, also known as "just-in-time" approaches, rely on the sporadic nature of device usage.
For example, in a typical high-performance wireless device such as a cellular telephone or a handheld computer, a multitasking operating system manages a variety of software threads performing such tasks as supervising peripherals, decoding compressed video, updating the display and acquiring audio data. For much of the time that the device is "on," these threads are "blocked" and simply waiting for an event or interrupt to occur. In these situations, the operating system will place the microprocessor in a low-power, nonexecuting idle state until the next interrupt occurs. A microprocessor may be in idle mode for more than 90 percent of the time that a wireless device is turned on.
A just-in-time FV adjustment algorithm takes this into account and attempts to minimize the amount of time spent idling. For example, if the microprocessor spends 60 percent of its time idling and 40 percent executing code while running at a certain core clock frequency, halving the core clock frequency and simultaneously reducing the core voltage may result in a smaller idle time and a larger execution time, especially if the microprocessor is doing internal cache-based processing. But as long as the idle time does not approach 0 percent, the device is still completing all of its activities. So, with a corresponding reduction in both core clock frequency and voltage, the power consumption of the microprocessor core is reduced by more than half. As a result, the overall energy that the microprocessor consumes during this time may be reduced, resulting in longer battery life.
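A minimal just-in-time policy of the kind described above could be expressed as a periodic loop that measures the idle fraction and steps the operating point up or down to keep a small idle margin. The thresholds, the operating-point table and the hooks read_idle_fraction() and apply_operating_point() are hypothetical placeholders for whatever the operating system and silicon actually provide.

```c
/* Sketch of a just-in-time FV policy: keep a small amount of idle time by
 * stepping through a table of (frequency, voltage) pairs.  All names and
 * numbers here are hypothetical; real hardware exposes its own mechanisms. */
typedef struct {
    unsigned freq_mhz;
    unsigned millivolts;
} op_point_t;

static const op_point_t op_table[] = {
    {100,  900}, {200, 1000}, {300, 1100}, {400, 1300}
};
enum { NUM_OPS = sizeof(op_table) / sizeof(op_table[0]) };

extern double read_idle_fraction(void);        /* 0.0 .. 1.0 over last window */
extern void   apply_operating_point(const op_point_t *op);

void fv_policy_tick(void)
{
    static int level = NUM_OPS - 1;             /* start at full speed */
    double idle = read_idle_fraction();

    if (idle < 0.10 && level < NUM_OPS - 1)
        level++;                                /* nearly saturated: speed up */
    else if (idle > 0.50 && level > 0)
        level--;                                /* mostly idle: slow down */

    apply_operating_point(&op_table[level]);
}
```

Called from a periodic timer, a loop like this keeps nudging the core toward the lowest operating point that still leaves some idle headroom.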
A system view
What happens, though, in a just-in-time algorithm if the device has hard real-time processing requirements? In that case, even though all processing still gets completed, reducing idle time may cause some time-critical task to take longer than desired and create a systemwide problem. What happens when the microprocessor is not idling at all? And how does the changing current load that accompanies FV adjustments affect a battery's discharge curve?
These questions only can be answered by taking a system view of FV adjustment. Algorithms based on a system view recognize that the microprocessor is just one component of a device design that incorporates peripherals, power supplies, batteries, firmware, software drivers and software applications.
A good example of the difference between a just-in-time view and a system view of FV adjustment is the contrast between cache-based processing and peripheral-based processing. When the microprocessor is executing code 100 percent of the time with no idle time, a just-in-time algorithm will generally increase the clock frequency in an attempt to reach a steady-state condition with a small amount of idle time. If the microprocessor is executing a tight algorithm out of the cache (for example, a complex matrix calculation), the just-in-time approach works.
But suppose the processor is spending 100 percent of its time reading in data from an uncached peripheral. Now the task is no longer microprocessor-centric, and the task's performance is therefore bound not by the microprocessor but by the much slower peripheral. In this case, increasing the clock frequency has a negligible impact on performance (the data can't arrive any faster, no matter what the core clock frequency), so the idle time stays at 0 percent. Battery life decreases because power and energy increase, yet there is no gain in performance.
An FV adjustment algorithm that takes a systemwide approach would look at other factors and variables that provide a more nuanced view of performance, making adjustments based not just on idle time but also on such performance factors as data throughput. In this example, such an algorithm would actually reduce the core clock frequency and voltage rather than increase them, because the reduction would have a negligible effect on performance. With such techniques, a 10 to 35 percent decrease in total device energy consumption during periods of high data throughput can be achieved with minimal consequences.
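Continuing the earlier sketch, a system-aware policy might consult a throughput indicator before concluding that zero idle time calls for a higher clock. The helper read_io_wait_fraction() is, again, a hypothetical stand-in for whatever the platform exposes (bus-utilization counters, DMA statistics or driver instrumentation).

```c
/* System-view refinement of the earlier sketch: if the core is busy but mostly
 * stalled waiting on a slow peripheral, raising the clock buys nothing, so step
 * down instead.  read_io_wait_fraction() is a hypothetical measure of time
 * spent blocked on uncached peripheral transfers during the last window. */
extern double read_io_wait_fraction(void);

void fv_policy_tick_system_view(void)
{
    static int level = NUM_OPS - 1;
    double idle    = read_idle_fraction();
    double io_wait = read_io_wait_fraction();

    if (idle < 0.10) {
        if (io_wait > 0.70 && level > 0)
            level--;        /* peripheral-bound: lower f and V, performance unchanged */
        else if (level < NUM_OPS - 1)
            level++;        /* genuinely CPU-bound: speed up */
    } else if (idle > 0.50 && level > 0) {
        level--;            /* plenty of slack: slow down */
    }

    apply_operating_point(&op_table[level]);
}
```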
A system approach to FV adjustment can take many other factors into account as well. For example, designers painstakingly choose battery chemistry, size, voltage, current and discharge characteristics to suit the device's requirements. But all batteries have discharge curves that vary as a function of load and temperature. Since FV adjustments change the current draw, an algorithm with a system approach might incorporate temperature and current measurements into its decisions, thereby ensuring that the battery operates in its optimal discharge regime.
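One way such battery awareness could look, staying with the same hypothetical sketch: before committing to a higher operating point, the policy checks measured battery temperature and current draw against limits chosen from the cell's discharge curves. The sensor hooks and thresholds below are illustrative assumptions, not values for any real battery.

```c
/* Hypothetical battery guard: veto a step up to a higher (f, V) point if the
 * resulting current draw or temperature would push the battery outside its
 * preferred discharge region.  Sensor hooks and limits are illustrative. */
extern double read_battery_current_ma(void);
extern double read_battery_temp_c(void);

#define MAX_BATTERY_CURRENT_MA  850.0   /* assumed limit from discharge curves */
#define MAX_BATTERY_TEMP_C       45.0   /* assumed thermal limit */

int battery_allows_step_up(void)
{
    return read_battery_current_ma() < MAX_BATTERY_CURRENT_MA &&
           read_battery_temp_c()     < MAX_BATTERY_TEMP_C;
}
```

The policy loop would simply call battery_allows_step_up() before incrementing the operating-point level.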
Recommendations
FV adjustment and related power management techniques are becoming more important as ways to reduce energy consumption in wireless devices. In the coming months and years, microprocessors and run-time software packages will provide a new set of energy-reduction tools based on FV adjustment. As with any sophisticated technology, such techniques must be evaluated wisely and used carefully. If you understand the true system design of your device, you will be able to improve one of its most important features, battery life, without appreciably increasing the size, weight or cost of your design.
Andrew Girson (agirson@inhandelectronics.com) is chief executive officer at InHand Electronics Inc. (Rockville, Md.).