Submit your algorithm, and be ranked according to your energy consumption.


A competition where participants optimize both their energy consumption and the generalization ability of their models over the CIFAR-10 dataset with common GPU architectures.

A prize of 1000 euros will reward the best teams.


The growth of AI-powered services hosted on GPU servers has so far been guided by accuracy performance, although a better compromise in terms of Joules spent is likely possible.

Unlike previous competitions that targeted specific low-power devices to improve portability on smartphones or Raspberry Pis, the idea of this competition is to monitor consumption on standard workstation processors and GP-GPUs, as these are the most commonly used devices and are responsible for the majority of energy consumption and carbon footprint in the AI/deep learning field.


The best algorithms will be rewarded within different accuracy intervals. The intervals are:

For instance: if three algorithms have the following performances:

Then, Algo1 will be ranked first in the [95% - 100%] category, Algo3 will be second, and Algo2 will be ranked first in the [90% - 95%] interval.
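The ranking rule above can be sketched as follows: algorithms are grouped into accuracy intervals, then ranked by ascending energy within each interval. The accuracy and energy numbers below are hypothetical, chosen only to reproduce the example ranking from the text.

```python
def rank_by_interval(results, intervals):
    """Group (name, accuracy, energy) tuples into accuracy intervals,
    then sort each group by ascending energy consumption."""
    ranking = {}
    for low, high in intervals:
        group = [r for r in results if low <= r[1] <= high]
        ranking[(low, high)] = sorted(group, key=lambda r: r[2])
    return ranking

# Hypothetical results: (name, accuracy in %, energy in joules)
results = [
    ("Algo1", 96.0, 500.0),
    ("Algo2", 93.0, 300.0),
    ("Algo3", 95.5, 800.0),
]
intervals = [(90, 95), (95, 100)]
ranking = rank_by_interval(results, intervals)
print([r[0] for r in ranking[(95, 100)]])  # ['Algo1', 'Algo3']
print([r[0] for r in ranking[(90, 95)]])   # ['Algo2']
```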

The IAPowerMeter will be used to measure the energy consumed by your program during inference. You can already use it to check how efficient your algorithm is.

In practice, we will run 200 iterations of predicting the full CIFAR-10 dataset on a GeForce RTX 3090 GPU with a 16-core Intel i9 CPU. If a different number of iterations is required because some proposed algorithms are very fast (a good sign) or very slow (a bad sign), we will normalise by the number of iterations. We will then sum the CPU (total_intel_power) and GPU (nvidia_draw_absolute) energy consumptions. If you want to test the procedure on your own server, be careful that your program is the only one running during the measurement.
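The scoring arithmetic described above can be sketched in a few lines: sum the CPU (total_intel_power) and GPU (nvidia_draw_absolute) energies, then normalise by the number of iterations. The field names follow the text; the measurement values below are invented for illustration.

```python
def normalised_energy(total_intel_power, nvidia_draw_absolute, n_iterations):
    """Energy per full-dataset inference pass, in joules.

    total_intel_power     -- total CPU energy over the whole run (J)
    nvidia_draw_absolute  -- total GPU energy over the whole run (J)
    n_iterations          -- number of passes over the dataset
    """
    return (total_intel_power + nvidia_draw_absolute) / n_iterations

# Hypothetical measurements (in joules) for a 200-iteration run
energy = normalised_energy(total_intel_power=12_000.0,
                           nvidia_draw_absolute=48_000.0,
                           n_iterations=200)
print(energy)  # 300.0 joules per full-dataset inference
```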

The energy consumption depends not only on the algorithm, but also on the whole pipeline, the efficient use of the GPU, and other dedicated hardware components. Unfortunately, it is hard to integrate all these aspects into the competition. We encourage such contributions to be submitted as papers at the workshop.


Send an email with a team ID to paul dot gay at

You will receive a link to an online storage where you will drop your submission.


Every team should supply:

It boils down to a Python script which loads your model and provides a predict function that we can call to run the inference.
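A minimal sketch of that interface is shown below. The function names and the dummy model are assumptions for illustration only; a real submission would load your trained CIFAR-10 network in `load_model` instead of the stand-in classifier used here.

```python
import numpy as np

def load_model(path=None):
    """Stand-in for loading your trained model from disk."""
    rng = np.random.default_rng(0)
    # Dummy linear classifier: 32*32*3 pixel inputs -> 10 CIFAR-10 classes
    return rng.standard_normal((32 * 32 * 3, 10))

def predict(model, images):
    """Return a class index in [0, 10) for each input image.

    `images` is an array of shape (n, 32, 32, 3), as in CIFAR-10.
    """
    flat = images.reshape(len(images), -1)
    return (flat @ model).argmax(axis=1)

model = load_model()
batch = np.zeros((4, 32, 32, 3))   # placeholder batch of 4 images
labels = predict(model, batch)
print(labels.shape)                # one predicted class per image: (4,)
```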

Important dates (UTC)