As the AI for Good Summit opened its doors yesterday in Geneva, a joint study by UNESCO and University College London (UCL) reveals that simple adjustments to the design and use of language models can cut their energy consumption by up to 90% without affecting performance. At a time when the environmental footprint of AI is becoming a strategic issue, this finding invites a rethink of how LLMs are trained, deployed, and used worldwide.
An Invisible but Exponential Consumption
Every request addressed to a generative AI such as ChatGPT consumes an average of 0.34 watt-hours. A seemingly trivial figure, until it is multiplied by the massive use of these tools. More than a billion people now use them: a single daily interaction from each user adds up to an annual consumption of over 310 gigawatt-hours, equivalent to the electricity used in a year by about 3 million inhabitants of a low-income African country.
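The arithmetic behind such headline figures is a simple product of per-query energy and query volume. A minimal sketch, assuming one query per user per day (the 0.34 Wh/query value comes from the article; the daily query volume is a hypothetical stand-in, and the report's own 310 GWh total reflects its own usage assumptions):

```python
WH_PER_QUERY = 0.34               # average energy per request, from the article
QUERIES_PER_DAY = 1_000_000_000   # assumed: one query per user per day

def annual_energy_gwh(wh_per_query: float, queries_per_day: float) -> float:
    """Yearly consumption in gigawatt-hours (1 GWh = 1e9 Wh)."""
    return wh_per_query * queries_per_day * 365 / 1e9

print(f"{annual_energy_gwh(WH_PER_QUERY, QUERIES_PER_DAY):.0f} GWh/year")
```

Varying the two inputs shows how quickly totals scale: doubling either query length (hence energy per query) or query volume doubles the annual figure.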
Yet, only 5% of African AI experts have access to the necessary infrastructure, a glaring imbalance that widens the digital divide with high-income countries, where most computing capacities are concentrated.
Three Levers for a Less Energy-Intensive AI
Experiments with several open-source LLMs led UCL researchers to identify three approaches for reducing the carbon footprint of generative AI:
- Use Smaller, Specialized Models: Contrary to the received idea that bigger means smarter, UCL's results show that compact models specialized in specific tasks (summarization, translation, information extraction) can cut energy consumption tenfold with no loss of performance. The same specialization logic underpins Mixture of Experts (MoE) architectures, which activate only the modules relevant to each task, avoiding wasted resources and improving energy efficiency;
- Reduce the Length of Interactions: More concise prompts and responses can cut energy consumption by more than 50%, according to the tests conducted;
- Compress Models: Techniques such as quantization can shrink a model's size without notable loss of accuracy, yielding energy savings of around 44%. Though well established in research, these approaches remain marginal in commercial deployments.
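The MoE routing idea mentioned above can be sketched in a few lines. This is a toy illustration only: the experts, tasks, and gate below are hypothetical stand-ins, not a trained model and not part of the study.

```python
# Each "expert" handles one narrow task; in a real MoE these are neural
# sub-networks, and a learned gating network scores them per input.
EXPERTS = {
    "summarization": lambda text: text.split(".")[0] + ".",
    "translation":   lambda text: f"<translated> {text}",
    "extraction":    lambda text: [w for w in text.split() if w.istitle()],
}

def gate(task: str) -> str:
    """Trivial router: picks one expert per request. A real gate would
    score all experts and activate only the top-k."""
    return task if task in EXPERTS else "summarization"

def run(task: str, text: str):
    expert = gate(task)           # only the selected expert is activated...
    return EXPERTS[expert](text)  # ...the others consume no compute

print(run("extraction", "Geneva hosted the AI for Good Summit."))
```

The energy argument is the routing itself: per request, only one expert's parameters do any work, so compute scales with the active module rather than the full model.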
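As a rough sketch of the quantization technique, here is symmetric linear int8 quantization over made-up weights. The study's ~44% figure covers whole-pipeline inference energy, which this toy does not measure; it only shows the size/precision trade-off.

```python
import random

random.seed(0)
# Stand-in "weights": 1024 random floats, not a real model.
weights = [random.gauss(0.0, 1.0) for _ in range(1024)]

# Symmetric linear quantization: scale so the largest magnitude maps
# to 127, round to the nearest integer, clip to the int8 range.
scale = max(abs(w) for w in weights) / 127
q = [max(-128, min(127, round(w / scale))) for w in weights]

# Dequantize and check that the round-trip error stays small.
restored = [v * scale for v in q]
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# Each 8-byte float becomes a 1-byte code plus one shared scale factor.
print(f"max round-trip error: {max_err:.4f}")
```

The design choice here is the simplest scheme (one scale per tensor); production quantizers use per-channel scales and calibration data to keep accuracy losses negligible.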
Unanimously adopted by UNESCO's 193 member states in November 2021, the "Recommendation on the Ethics of Artificial Intelligence" includes a chapter dedicated to the environmental impacts of these technologies. The new report builds on that foundation, calling on governments and businesses to invest in R&D for more efficient, ethical, and accessible AI, and in user education, so that people become aware of the energy consequences of their digital practices.