Quick start

Hardware requirements

CPU power measurement is done with RAPL, which is supported on Intel processors since the Sandy Bridge architecture. To see whether your processor is compatible, first check that the CPU is a GenuineIntel:

$ cat /proc/cpuinfo | grep vendor | uniq | awk '{print $3}'
GenuineIntel

On Linux, RAPL logs the energy consumption in /sys/class/powercap/intel-rapl

Change the permissions so that our program can read these logs:

$ sudo chmod -R 755 /sys/class/powercap/intel-rapl
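
To confirm that the permissions are correct, a minimal sanity check is to read one of the energy counters yourself. The snippet below assumes a package domain named intel-rapl:0; the exact path may differ on your machine.

# Quick sanity check: read the package-0 RAPL counter (value in microjoules).
# Adjust intel-rapl:0 if your machine exposes different domain names.
rapl_file = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"
with open(rapl_file) as f:
    print("current energy counter (uJ):", int(f.read()))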

GPU power is measured with nvidia-smi. Again, not all GPU cards include the required sensors (for example, the Jetson Nano board does not).

A quick check is to run

$ nvidia-smi -q -x

and check whether the XML output contains values in the “power_readings” field.
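
If you prefer to check programmatically, a small sketch along the following lines can parse the XML report. The tag names correspond to older nvidia-smi output; recent drivers may call the section “gpu_power_readings”, so treat the field names as assumptions to adapt.

import subprocess
import xml.etree.ElementTree as ET

# Ask nvidia-smi for its full XML report and look for a power readings section.
xml_output = subprocess.run(["nvidia-smi", "-q", "-x"], capture_output=True, text=True).stdout
root = ET.fromstring(xml_output)
for gpu in root.iter("gpu"):
    # Recent drivers may name this section "gpu_power_readings" instead.
    readings = gpu.find("power_readings")
    if readings is not None and readings.find("power_draw") is not None:
        print("power_draw:", readings.find("power_draw").text)
    else:
        print("no power readings found for this GPU")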

Installation

$ git clone https://github.com/GreenAI-Uppa/AIPowerMeter.git
$ pip install -r requirements.txt
$ python setup.py install
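
A quick way to check that the installation succeeded is to try importing the package (this simply exercises the import path used throughout this guide):

$ python -c "from deep_learning_power_measure.power_measure import experiment, parsers"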

The power recording part is independent of the model type, but it is desirable to also monitor the number of parameters and MAC operations of your experiment.

We rely on PyTorch models for this (optional) aspect:

$ pip install torchinfo
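
As an illustration, torchinfo can report these statistics for a PyTorch model. The small model below is just a placeholder; replace it with the model used in your experiment.

import torch.nn as nn
from torchinfo import summary

# Placeholder model: replace it with the model used in your experiment.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # 32x32 input -> 30x30 feature maps
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 10),
)
stats = summary(model, input_size=(1, 3, 32, 32), verbose=0)
print(stats.total_params, "parameters,", stats.total_mult_adds, "mac operations")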

Measuring my first program

We provide example scripts for pytorch, tensorflow, and numpy, and describe an integration with Docker. In a nutshell, you instantiate an experiment and place the code you want to measure between a start and a stop signal.

from deep_learning_power_measure.power_measure import experiment, parsers

driver = parsers.JsonParser("output_folder")
exp = experiment.Experiment(driver)

p, q = exp.measure_yourself(period=2) # measure every 2 seconds
###################
#  place here the code that you want to profile
###################
q.put(experiment.STOP_MESSAGE)

This will create a directory output_folder in which a power_metrics.json file will contain the power measurements. See the section Recorded fields for details on this file. If the folder already exists, its content will be replaced, so you should use one folder per experiment. You can then get a summary of the recordings:

from deep_learning_power_measure.power_measure import experiment, parsers
driver = parsers.JsonParser("output_folder")
exp_result = experiment.ExpResults(driver)
exp_result.print()

and the console output should look like:

================= EXPERIMENT SUMMARY ===============
MODEL SUMMARY:  28 parameters and  444528 mac operations during the forward pass

ENERGY CONSUMPTION:
on the cpu

RAM consumption not available. Your usage was  4.6GiB with an overhead of 4.5GiB
Total CPU consumption: 107.200 joules, your experiment consumption:  106.938 joules
total intel power:  146.303 joules
total psys power:  -4.156 joules


on the gpu
nvidia total consumption: 543.126 joules, your consumption:  543.126, average memory used: 1.6GiB
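
Putting the pieces together, a complete toy script could look as follows. The numpy workload is purely illustrative; replace it with your own code.

import numpy as np
from deep_learning_power_measure.power_measure import experiment, parsers

driver = parsers.JsonParser("output_folder")
exp = experiment.Experiment(driver)

p, q = exp.measure_yourself(period=2)  # measure every 2 seconds
# Illustrative workload: repeated matrix multiplications.
for _ in range(200):
    a = np.random.rand(1000, 1000)
    b = np.random.rand(1000, 1000)
    np.dot(a, b)
q.put(experiment.STOP_MESSAGE)

# Read back and print the recorded consumption.
exp_result = experiment.ExpResults(driver)
exp_result.print()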

TIPS and use cases

  • We store examples in this folder. exp_deep_learning.py is a good one to start with.

  • You can measure the consumption of a console command, for example to evaluate “exp_deep_learning.py”:

python -m deep_learning_power_measure --output_folder "/home/paul/test" --cmd  "python examples/exp_deep_learning.py"

  • Record separately the consumption of the training and test phases of your deep learning experiments, starting from this example (a minimal sketch is given after this list).

  • Permanently enable read access to the RAPL files:

sudo apt install sysfsutils
echo "mode class/powercap/intel-rapl:0/energy_uj = 0444" >> /etc/sysfs.conf

then reboot
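
As a minimal sketch of the train/test separation mentioned above, you can simply run one experiment per phase, each writing to its own folder. Here record_phase, train, and test are hypothetical placeholders for your own code, and the folder names are arbitrary.

from deep_learning_power_measure.power_measure import experiment, parsers

def train():
    pass  # placeholder: your training loop goes here

def test():
    pass  # placeholder: your evaluation loop goes here

def record_phase(output_folder, phase_fn):
    """Record the consumption of one phase into its own output folder."""
    driver = parsers.JsonParser(output_folder)
    exp = experiment.Experiment(driver)
    p, q = exp.measure_yourself(period=2)  # measure every 2 seconds
    phase_fn()
    q.put(experiment.STOP_MESSAGE)

# One folder per phase, since each folder holds one power_metrics.json.
record_phase("output_train", train)
record_phase("output_test", test)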