Sustainability and AI: The Hidden Cost of Our Digital Revolution

By Rita Bateson

Rita Bateson, co-founder of Eblana Learning was the keynote speaker at the e-Assessment Association AI Symposium in June 2025. Rita also presented to the eAA’s AI Special Interest Group on Net-Zero Assessment: AI, Evidence & Emissions in July 2025.

 

I’m currently on only one diet – an energy diet. And before you roll your eyes at this oversharing, let me explain why this matters more than you might think, especially if you’re an educator considering AI in your classroom.

I write to you from the future. By which I mean Dublin, a city where resident data centres now consume 50% of the electricity on the grid. Our ageing infrastructure is under considerable pressure from this massive increase in data centre consumption, driven by our very recent addiction to AI.

Data centres require a constant, unbroken flow of energy so that our digital lives are not disrupted, and we now live in a reality where energy diversions, brownouts and blackouts are a very real fear, all to keep Amazon, Meta and Google's servers humming. It's a stark reminder that our digital revolution has a very real, very physical cost.

The Paradox We Can’t Ignore

Here’s the thing about AI: it promises to solve all the world’s sustainability problems, but first we have to spend enormous amounts of energy to get there. By 2026, AI energy consumption is predicted to equal Japan’s entire electricity usage, according to the MIT Technology Review (2025). To put that in perspective, I often explain to students and educators that generating one image can require the same electricity as charging a smartphone one to three times over. This fact alone is staggering when we think of the daily usage of AI, both intended and invisible, in all our lives.

Naturally, the computational and training energy required by AI is changing all the time, and we have to work with imperfect calculations. But the scale of use and the speed of adoption are undeniable. It’s reported that ChatGPT added a million new users per hour in the week following the release of its latest image generation model. The energy cost is eye-watering. Are we really burning this much carbon for a better worksheet or a funny meme?

It helps students to grasp that a typical school assignment (one that might include 10 to 15 queries, a few images and perhaps a short research query) can consume the same amount of energy as 3.5 hours of microwaving. Framing energy consumption in these terms really brings it home for them.
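The arithmetic behind comparisons like this is simple enough to show. The sketch below is illustrative only: the per-task energy figures are assumptions I have chosen for demonstration, not measured values, and published estimates vary by orders of magnitude (which is rather the point of this article).

```python
# Back-of-envelope conversion of AI usage into "microwave minutes".
# All per-task energy figures are illustrative assumptions, not
# measured values; real estimates vary by orders of magnitude.

WH_PER_TEXT_QUERY = 3.0       # assumed Wh per chatbot query
WH_PER_IMAGE = 30.0           # assumed Wh per generated image
WH_PER_RESEARCH_QUERY = 60.0  # assumed Wh per long "research" run
MICROWAVE_WATTS = 1000.0      # typical microwave power draw

def microwave_minutes(text_queries: int, images: int, research: int) -> float:
    """Total energy of the usage mix, expressed as minutes of microwaving."""
    total_wh = (text_queries * WH_PER_TEXT_QUERY
                + images * WH_PER_IMAGE
                + research * WH_PER_RESEARCH_QUERY)
    return total_wh / MICROWAVE_WATTS * 60  # Wh -> hours -> minutes

# The assignment described above: 15 queries, 3 images, 1 research query.
print(round(microwave_minutes(15, 3, 1), 1))  # prints 11.7
```

Swap in different per-task assumptions and the answer swings wildly, which is exactly why any single headline figure deserves scepticism even as the overall trend does not.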

The most impactful statistic is often that one 30-second deepfake video could consume the same energy as watching every episode of Friends 5,000 times. Suddenly, that rocket emoji-filled ChatGPT response doesn’t seem so desirable. The goal isn’t to make students feel guilty; I simply ask that they pause for a moment of considered reflection before choosing AI as their go-to for every task.

The Escalating Energy Ladder

Of course, these are imperfect calculations. True calculations are virtually impossible, as the efficiency of models and the variety of AI tasks are changing all the time. Added to this, AI models are often closed source, and companies are not really incentivised to share much about their proprietary systems. Nevertheless, we can say with certainty that we have moved rapidly up an energy consumption ladder. Even with better efficiencies and lower compute requirements, we see ever-increasing demand and adoption.

Initially, widespread AI use focused on text generation, which requires relatively little energy per query but is now used with incredible frequency. Then came image generation in 2024: 15 billion AI images were created in that year alone, a number reportedly surpassed in just the first three weeks of 2025. Consider the excitement over generating action figures and Ghibli-inspired images, which led to 700 million images being generated in ChatGPT in a single week.

But video generation is the real energy monster. Google’s VEO-3 creates videos so realistic that distinguishing AI from reality will soon be impossible. We now hear about fully immersive, user-designed 3D worlds where we can move around our environments, and all I can think of is the energy this will require. The applications are endless – and endlessly energy-hungry.

This is where the sustainability conversation gets practical. When choosing AI tools, there’s a 61,848x difference between the highest and lowest energy consumption models, according to the best publicly available data, published by the AI Energy Score Leaderboard: https://huggingface.co/spaces/AIEnergyScore/Leaderboard. We can make conscious choices about which model to use, just as we might choose whether to recycle or discard an object.

Four Questions for Conscious AI Use

Before implementing any AI tool, I ask four key questions:

  1. Will this achieve learning outcomes, or is it just an efficiency drive?
  2. Is this a financial imperative or is it genuinely learning-driven?
  3. Are we innovating or just following trends?
  4. What’s the true cost-benefit analysis, including environmental impact?

These aren’t just philosophical questions. When AI companies are under pressure to innovate, they throw money (and by money, read “energy costs”) at the problem. For example, when xAI needed its Grok model and knew it was years behind the competition, it achieved its results by reportedly acquiring over 70% of all cooling facilities in the US and setting up allegedly illegal server stacks on multiple sites. Money was no object, so energy was no object. That’s not sustainable leadership.

The Accountability Model

For this article and my recent presentation, I tracked my AI usage: 13 Claude queries, three Gemini searches, two ChatGPT interactions, plus one deep research query. Combined with my flight emissions from London to Dublin, that equals three trees. So I’ll plant three trees in my increasingly ridiculous garden – my tangible reminder of digital choices.

It may sound like guilt, but I choose to reframe it as consciousness. I ask students to pause before each AI interaction: Is this the best use of energy? Will this help my brain development? Will this support my learning? These are the same questions we need to model as educators.

The Path Forward

We need honest conversations about what percentage of our AI adoption directly supports education versus what could be scaled back. We need to choose models based on efficiency, not just capability. Educators need to teach digital literacy that includes environmental consciousness.

Most importantly, we need to lead by example. Every time I consider generating an image or asking ChatGPT to rewrite something I could write myself, I have a moment of agency. I use it consciously, purposefully and sustainably.

The best time to plant a tree was 20 years ago; the second-best time is now. The same applies to embedding sustainable AI practices. We’re building these systems now – this is our moment to get it right. We have lived through humanity’s love affair with plastic and, more recently, seen the impact of social media on our young people (and ourselves). It is the perfect time to reflect on those lessons and implement the safeguards and limits we have never fully put in place before. Now is not the time for apathy or defeatism. We have a once-in-a-lifetime opportunity to protect the uniquely valuable aspects of education and to introduce ethical, conscious use of AI, sharing and modelling it for students in our own lives too.

And if we all plant a few more actual trees to offset all the AI-produced CO2, then so much the better! 

Our students are watching. They’re learning how to use AI either by their own experimentation or by our modelling and influence. Let’s make sure we’re teaching them to be pioneers of sustainable innovation – for themselves, for learning and for the planet. Someone has to model what conscious consumption looks like in the age of artificial intelligence. It might as well be us.

AI Disclosure: Many of these points were first introduced in a Keynote and AI was used in the initial research and idea development. A first draft of the article was originally written by AI and almost entirely discarded. The words are now the author’s own. 

About Rita Bateson

Rita Bateson is the Director of Education and co-founder of Eblana Learning, where she leads initiatives and consultancy services for International Baccalaureate (IB) schools. Eblana Learning was founded to address the real struggles international schools face today: keeping up with rapid tech changes, accessing trustworthy AI guidance and cutting through the clutter of AI information.

Focusing on bringing ethical, sustainable AI into schools in a way that aligns with the values of the IB, Eblana Learning ensures schools have optimal, trustworthy guidance when adopting AI practices and philosophy.

Find out more at eblanalearning.com/
