Artificial Intelligence and environmental impact

October 21, 2023
7 mins read

The implementation of Artificial Intelligence (AI)-based solutions in digital marketing is not simply a matter of technological innovation; it also demands a critical look at their environmental impact. The carbon footprint of computationally intensive AI models, particularly in data centers, poses a genuine dilemma, pitting operational benefits against ecological responsibilities. This article analyzes the relationship between the use of AI and its environmental impact, exploring strategies to optimize AI models, evaluate their application critically, and emphasize continuous improvement and responsibility in their use. Our goal is to provide a pragmatic resource for digital marketing professionals who want to implement AI solutions consciously and sustainably, balancing technological efficiency with environmental responsibility.

1. Environmental impact of AI

The environmental impact of Artificial Intelligence can be measured at different points of the production chain: from the electricity consumed by computation in data centers to the mining required to extract resources such as silicon, gold, and copper. All of these are necessary to build and run the gigantic server farms behind services such as ChatGPT.

1.1. Carbon footprint

Artificial Intelligence (AI) consumes far more energy than most other computing workloads. A study by the University of Massachusetts Amherst estimated that impact, and the results are striking.

(…) training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs. Research and development of new models multiplies these costs by thousands of times by requiring retraining to experiment with model architectures and hyperparameters. (…)

Emma Strubell, Ananya Ganesh & Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP. arXiv:1906.02243. Page 1. Paragraph 2.

This quote from the authors is fundamental: we are not only talking about the use of the models themselves, but also about their research and development. The very existence of GPT-4, PaLM, or DALL-E 3 is the result of constant use of gigantic server farms whose power consumption can rival that of entire countries. The MIT Technology Review also has interesting articles on the subject, in particular the one entitled “AI's latest challenge: calculating its own carbon footprint”.

Perhaps one useful step is to measure the energy needed to solve a given computing problem. An interesting article by Daniel Rudis entitled “Is energy consumption for AI spiraling out of control?” makes a simple comparison: while a Google search query consumes approximately 0.0003 kWh, a ChatGPT (GPT-3) query consumes around 0.004 kWh, roughly 13 times higher power consumption.
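To make the comparison concrete, here is a back-of-the-envelope calculation in Python using the figures cited above; the daily query volume is an invented assumption, used only to show how the gap compounds at scale:

```python
# Per-query energy figures cited above (Rudis); treat them as rough estimates.
GOOGLE_SEARCH_KWH = 0.0003  # approx. energy per Google search query
GPT3_QUERY_KWH = 0.004      # approx. energy per ChatGPT (GPT-3) query

ratio = GPT3_QUERY_KWH / GOOGLE_SEARCH_KWH
print(f"A GPT-3 query uses roughly {ratio:.0f}x the energy of a search query")

# Hypothetical daily volume, for illustration only.
queries_per_day = 10_000_000
extra_kwh = (GPT3_QUERY_KWH - GOOGLE_SEARCH_KWH) * queries_per_day
print(f"At {queries_per_day:,} queries/day, the extra draw is {extra_kwh:,.0f} kWh/day")
```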

Admittedly, this comparison is superficial: not all AI models consume the same amount of resources, and over time we are seeing the emergence of smaller models that are more than sufficient for specific tasks. The industry is not heading toward ever larger and more expensive models, but rather toward a great diversity of models of different sizes and levels of specialization; it will be our job to choose the most appropriate one and thus minimize the environmental impact of this technology.

This scenario raises the need for critical evaluation and informed decision making in the selection and use of AI models. It is not just a matter of measuring energy consumption in absolute terms, but of understanding energy efficiency in relation to the task performed. For example, an AI model that consumes more energy but solves a significantly more complex task, or does so with much greater accuracy, may still be the more “energy-efficient” choice, especially if the alternative would involve significant human effort or reduced effectiveness.
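One simple way to reason about this, shown below as a minimal sketch with invented numbers, is to normalize energy by outcome rather than by query, for instance kWh per correct answer:

```python
# Illustrative metric: energy per successfully completed task.
# All figures below are invented for the example.
def energy_per_success(kwh_per_query: float, accuracy: float) -> float:
    """Average energy spent to obtain one correct result."""
    return kwh_per_query / accuracy

small = energy_per_success(kwh_per_query=0.001, accuracy=0.70)
large = energy_per_success(kwh_per_query=0.004, accuracy=0.95)

print(f"small model: {small:.5f} kWh per correct answer")
print(f"large model: {large:.5f} kWh per correct answer")
# A costlier-per-query model can still win on this metric when its higher
# accuracy avoids re-runs or human rework; in this toy case the small one wins.
```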

The industry is also showing promising signs of self-correction and adaptation. The development of “model distillation” techniques, which simplify and reduce the size of AI models without significant loss of performance, is one example. Likewise, the increasing viability of edge computing, where data processing is performed locally near the source device, can reduce the need for constant data transmissions to data centers, thereby reducing overall energy consumption.
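For readers curious about what distillation looks like in practice, below is a minimal PyTorch sketch of the classic knowledge-distillation loss (Hinton et al., 2015); the two linear layers are placeholders standing in for a real teacher and student:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Weighted mix of (a) KL divergence between softened teacher and
    student distributions and (b) ordinary cross-entropy on the labels."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a "teacher" supervises a much smaller "student".
teacher = torch.nn.Linear(16, 4)  # stand-in for a large trained model
student = torch.nn.Linear(16, 4)  # stand-in for the compact model we keep
x = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
```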

However, beyond technological improvements, a crucial strategy for minimizing the environmental impact of AI is the implementation of responsible business policies and practices. This includes corporate commitments to sustainability, such as investing in renewable energy, implementing energy efficiency standards, and participating in carbon offset initiatives. In addition, transparency in disclosing the environmental impact of AI products and services can not only better inform customers and end users, but also foster a culture of responsibility and continuous improvement throughout the industry.

This convergence of technological innovation, corporate responsibility and user awareness is critical to guide the development and use of AI towards a more sustainable and climate-conscious path.

1.2. Data centers

Energy consumption is not the whole story: the widespread use of artificial intelligence also demands the construction of new data centers, which means more buildings, more cooling systems, and more mining of materials such as iron, gold, copper, and silicon. In other words, using artificial intelligence has a major impact on the physical world.

Along the same lines, an analysis paper from the Spanish Institute for Strategic Studies estimates that “Information and communications technology consumes around 5-9% of the electricity produced in the world.”

Ana Valdivia has also written about the environmental cost of Artificial Intelligence in rural Spain. This is an issue that should not leave us indifferent: we will undoubtedly have more data centers in the future, and with them greater energy consumption, more mining of natural resources, and further erosion of our planet's natural environments.

This only reinforces the idea that, when using Artificial Intelligence, we must do so consciously and with a clear optimization policy: if a problem can be solved without AI, we should use traditional computing, and if AI is necessary, we should use the model best suited to the problem.

The growing demand for data centers underscores the urgency of adopting sustainable building practices and efficient resource management. This means using green building materials, optimizing building design for energy efficiency, and employing innovative cooling systems that minimize the use of water and energy resources. In addition, the geographic location of these centers can be strategically selected to take advantage of natural climatic conditions or the availability of renewable energy.

Recycling and reusing materials from servers and other computer equipment is another critical area. “Urban mining” initiatives to recover precious metals from discarded electronic devices, and “extended producer responsibility” programs that require manufacturers to manage the full life cycle of their products, can significantly reduce the demand for virgin materials and mitigate the impact on natural ecosystems.

At the operational level, implementing power management software and virtualization techniques can maximize server utilization and reduce resource waste. Adopting “fog computing” technologies can also decentralize data processing, reducing the load on central data centers and decreasing the distance data needs to travel, which saves energy.

Ultimately, the decision to use AI must be measured and justified not only by its technical feasibility or commercial benefit, but also by its environmental sustainability. This requires a holistic assessment that takes into account the entire life cycle of the technology, from material extraction and energy consumption to recycling and reuse potential. Only through a collective commitment to conscious optimization and environmental responsibility can we balance advances in artificial intelligence with the preservation of our physical world.

2. AI model optimization

Artificial Intelligence models are not only becoming bigger, more complex, and more useful; they are also becoming more efficient, and one dimension of this is training. OpenAI has a great article on this subject entitled “AI and efficiency”, which describes how the compute required to train a model to a given level of performance has fallen. In its words, “44x less compute required to get to AlexNet performance 7 years later”; that is, computational needs have been reduced 44-fold to achieve the same results.
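Translating that figure into a rate, a quick calculation shows how fast this kind of efficiency improvement compounds:

```python
import math

# OpenAI's "AI and efficiency" figure: 44x less compute to reach
# AlexNet-level performance over a span of 7 years.
improvement = 44
years = 7

doublings = math.log2(improvement)            # about 5.5 halvings of compute
doubling_time_months = years * 12 / doublings
print(f"Training efficiency doubled roughly every {doubling_time_months:.0f} months")
# Prints ~15 months, in line with the ~16-month doubling time the article cites.
```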

We must also consider model size, since it is one of the main factors influencing a model's energy consumption, and this is where engineering works its magic. It is indisputable that GPT-4 is a flagship model today, and its accuracy far exceeded the expectations of even the most skeptical. However, other papers have shown that models such as Meta's LLaMA 2 can achieve results similar to GPT-4 on text summarization while consuming roughly 30 times less energy. This is not to say that LLaMA is superior to GPT-4; rather, it reinforces the idea that it is important to choose the most suitable and efficient model for a given task, and to favor the most modern, optimized, and polished models available.
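One way to put this advice into practice is to treat model choice as a constrained optimization: pick the cheapest option that clears the quality bar for the task. The catalog below is entirely hypothetical, a sketch of the decision logic rather than real benchmark data:

```python
# Hypothetical catalog: (name, relative energy cost per task, quality score).
# All numbers are invented for illustration.
CATALOG = [
    ("rule-based baseline", 0.01, 0.60),   # no AI at all
    ("small specialized model", 0.10, 0.85),
    ("mid-size open model", 0.30, 0.90),
    ("large frontier model", 1.00, 0.93),
]

def pick_model(min_quality: float):
    """Return the cheapest option that meets the quality bar, preferring
    non-AI or small models whenever they are good enough."""
    viable = [m for m in CATALOG if m[2] >= min_quality]
    return min(viable, key=lambda m: m[1]) if viable else None

print(pick_model(0.80))  # -> ('small specialized model', 0.1, 0.85)
print(pick_model(0.92))  # -> ('large frontier model', 1.0, 0.93)
```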

These trends in training efficiency and model optimization are testaments to progress in “green AI,” an emerging field focused on developing more sustainable AI solutions. However, training efficiency is only part of the equation. The inference phase, where models apply what they have learned to new data, also consumes energy, especially in models that are used millions of times a day globally. Here, model optimization for fast inference and energy efficiency is crucial, and techniques such as quantization and pruning, which reduce model size and computational complexity without significant loss of performance, are valuable areas of research.
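As a concrete, if simplified, example of these techniques, the PyTorch snippet below prunes 30% of the smallest weights in a toy network and then applies dynamic int8 quantization; a real deployment would tune both steps and validate accuracy afterwards:

```python
import torch
from torch import nn
from torch.ao.quantization import quantize_dynamic
from torch.nn.utils import prune

# A toy network standing in for a much larger model.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Pruning: zero out the 30% smallest-magnitude weights of the first layer,
# then make the pruning permanent.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")

# Dynamic quantization: store Linear weights as int8, shrinking the model
# and typically speeding up CPU inference.
quantized = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```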

The AI community is also exploring the sharing and reuse of trained models. Instead of each organization training its own model from scratch, which requires a significant amount of energy and resources, companies can collaborate through “models as a service” (MaaS), where a trained model is offered to multiple users via the cloud. This not only reduces overall energy consumption but also democratizes access to high-quality AI, especially for startups and small businesses that may not have the resources to train their own models.
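In practice, consuming a model as a service usually amounts to a simple API call. The endpoint, payload shape, and key below are placeholders for illustration, not a real provider's API:

```python
import requests

# Hypothetical "model as a service" call: rather than training and hosting
# our own model, we send the task to a shared, already-trained one.
API_URL = "https://api.example-maas.com/v1/summarize"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                               # placeholder key

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Long document to summarize...", "max_words": 100},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```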

In addition, it is important to recognize that efficiency is not just about hardware and algorithms, but also about the people who use them. Training and education in AI best practices, from model selection and efficient use to energy-conscious maintenance and upgrades, are essential to maximizing the potential of these technological advances. Initiatives such as green AI hackathons and certifications in AI sustainability can foster a culture of innovation and responsibility in this space.

While we celebrate improvements in AI efficiency and effectiveness, we must remain vigilant and committed to minimizing its environmental footprint. Choosing the “right” model is not just a matter of accuracy or speed, but also of sustainability. By adopting a mindset that prioritizes conscious optimization and strategic model selection, we can harness the power of AI in a more responsible and sustainable way.
