Our eprof software energy profiler relates consumed dynamic energy back to the software that caused the consumption. In contrast to currently available tools, eprof generates energy-usage information at a fine granularity; it can even identify the energy used by individual lines of code.

When trying to write energy-efficient software, developers currently have to rely on their intuition, because few tools and methods exist for reasoning about the energy consumption of software. Software developers cannot make informed choices about which algorithms use less energy. For example, they might have to choose between using CPU resources to evaluate a function each time it is needed versus using memory accesses to look up precomputed values in a table. Or, they might have to choose between storing data in a compressed form on disk, which requires extra CPU resources, versus using an uncompressed format, which requires more disk accesses. Moreover, a software developer who must optimize a large code base to use less energy needs tools to identify the code locations that consume the most energy, so that those locations can be rewritten. Seemingly simple functions with short execution times may, in fact, consume vast amounts of energy because they trigger asynchronous wireless transmissions or other device I/O. On highly resource-constrained embedded platforms, this energy consumption can be estimated using full-system emulators, but this approach is slow and not widely available to the average developer.
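As a sketch of the first trade-off mentioned above, the following C fragment contrasts the two variants a developer might have to choose between: evaluating a function on every call versus looking it up in a precomputed table. The function choice and table size are hypothetical placeholders, not part of eprof; without a tool such as eprof, it is hard to tell which variant consumes less energy on a given platform.

```c
/*
 * Hypothetical sketch of the CPU-vs-memory trade-off described above;
 * the function and table size are placeholders for illustration only.
 */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_SIZE 1024

/* Variant 1: spend CPU cycles re-evaluating the function on every call. */
static double value_computed(unsigned i)
{
    return sin(2.0 * M_PI * (i % TABLE_SIZE) / TABLE_SIZE);
}

/* Variant 2: spend memory (and memory accesses) on a precomputed table. */
static double sin_table[TABLE_SIZE];

static void table_init(void)
{
    for (unsigned i = 0; i < TABLE_SIZE; i++)
        sin_table[i] = sin(2.0 * M_PI * i / TABLE_SIZE);
}

static double value_looked_up(unsigned i)
{
    return sin_table[i % TABLE_SIZE];
}
```

A profiler that attributes energy to individual lines of code can tell the developer whether the cycles spent in value_computed or the memory accesses in value_looked_up dominate the energy consumption on the target platform.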

After an initial calibration phase, eprof requires no external devices or circuitry and thus enables an average developer to profile his or her software for energy consumption.