
Advancing AI with Decarbonized Datacenters Requires Us to Ask Deeper Questions About Our Quality of Life & Definition for Prosperity


Datacenters are the engine for advancing artificial intelligence (AI) as we currently know it. A select few high-profile datacenters are intentionally pursuing decarbonized infrastructure and operations, notably the zero-carbon Prometheus Hyperscale 1.2GW datacenter in Wyoming. The fact remains, however, that the resource intensity of most datacenter construction and operations is significant. The densified computing power required to breathe life into AI demands a great deal of energy, water, land, and rare earth minerals. See a prior post: "Feed Me, Seymour!" Is Generative AI the 'Little Shop of Horrors' of our Generation?


Thousands of events characterized #ClimateWeek 2025. A key topic interwoven through many discussions was the intersection of #AI and sharply increased demand for constructing #datacenter infrastructure. The backdrop to many AI/datacenter conversations was the obvious increase in electricity demand and, subsequently, more nuanced questions pertaining to power grid congestion and resilience; price volatility, consumer equity, and affordability; resource intensity and best practices for efficiency; decarbonization solutions; and Net Zero goals. Then there was dialogue pertaining to local-level siting of datacenters and an increase in public opposition and scrutiny aligned with Not In My Backyard (NIMBY) and Build Absolutely Nothing Anywhere Near Anything (BANANA).



Datacenters, thirsty for energy, water, land, and rare earth materials, are the modern engine for AI


Among the more than 100,000 participants in Climate Week, deeper questions were also raised regarding the moral, ethical, legal, economic, and environmental risks inherent in AI and datacenter infrastructure buildout. During a panel I served on, for example, the question of restraint was raised, albeit briefly. Our advance of AI and its datacenter engine calls to mind the classic maxim, “just because we can, doesn’t mean we should.” My first book, “The Sustainability Generation: The Politics of Change and Why Personal Accountability is Essential NOW” (2012), was written within the ethos of that maxim.


Another maxim, one I learned from a chemical industry CEO years ago, continues to hold true. It goes something like this: “people (and, subsequently, organizations) are inherently flawed; thus, we tend to keep doing the same stupid things over and over again.” The CEO said it with greater gravitas, in the context of a meeting of 30 of the world’s largest polluters that I co-facilitated about fifteen years ago. The network of companies convened to share best practices on the cleanup of environmental liabilities and to explore the right governance structures, organizational models, management tools, and technology capabilities to prevent future liabilities from occurring.


Fast forward to today: many environmental liabilities tied to centuries of industrial production have been mitigated, yet the hard corporate and societal lessons of our industrial past have not been absorbed and adopted ubiquitously across industries. To put it bluntly, “we keep doing stupid things on purpose.”



The signs of our colossal waste and negative impact on human health and the environment are everywhere, suggesting that humans have not yet perfected our capacity to leverage knowledge so that we don't repeat the mistakes of the past. As we sit at the precipice of an AI generation and future, are we prepared to assimilate our collective knowledge and wisdom so that we can put the proper controls and governance structures in place to ensure a dignified and sustainable future?

Environmental risk management is, and I would argue should continue to be, an evergreen endeavor. That is, if we are pushing the envelope on innovation (e.g., AI, decarbonized industry, advanced materials, paradigm shifts in energy, transport, and communications), then we are likely to precipitate environmental, societal, and economic challenges that we never saw coming.


Thus, we need to continuously reinforce a risk management mindset: one that is vigilant in the moment but which also builds upon the knowledge of the past and protects the future. We’ve learned a great deal over the past forty or fifty years of environmental risk management. We need to expediently transfer that knowledge, know-how, and logic to the next generation of leaders. We may not be able to predict every scenario or mitigate every risk, but we can, at a minimum, lay the right foundation and governance structure to avert unnecessary risk to public health and the environment.


In the case of AI and datacenters, the question of resource intensity and consumption tends to be the mile-wide, inch-deep topic du jour. Yet if we take a moment to go deeper, we might, for example, find ourselves differentiating among AI use cases and questioning the necessity of datacenter overbuilding.


Given asymmetric yet integrated factors including the geopolitical landscape, the resource cost of datacenters, the investments required in power grid infrastructure, cyber-physical infrastructure risk, and consumer-facing moral, ethical, and legal risks, shouldn’t we evaluate datacenter buildout not only as a market response but also as a social one? What is the value of AI for healthcare diagnostics, aiding a patient toward a quicker diagnosis or speedier recovery, compared to streaming AI videos with the likeness of Oprah?


Demand for AI and datacenter infrastructure is not all created equal. Yes, we want to exercise a free market economy for advancing technology, but shouldn’t we also make the case for prioritizing technology and resources that first better our quality of life and prosperity?


For the past 20 years our technological advancement has, arguably, outpaced our ability to foresee societal impact. Policy has lagged and, in some cases, been obfuscated. We remain, for the time being, capable of safeguarding our future. The time for proper discourse and vetting of technology, toward a shared future, is now.


So what is needed to infuse an ethos of pragmatism into our AI future? Here are some thoughts, but please, let's discuss!


  • ⛓️‍💥A foundation of Trust (that is, bringing forth a Preventive, Predictive, and Proactive Posture on how we conceive, design, build, and operate AI and its infrastructure).


  • 💖A foundation of Care (that is, an element of concern for all people and all living things).


  • 🤔🧠A foundation of Logic (that is, leveraging our knowledge and innate wisdom to "move beyond blame" and work together on "common sense for the common good" solutions that prioritize quality of life and civil discourse on the evolving definition of prosperity).


  • 🌏A foundation of Connectedness (that is, enlivening the innate spirit that enriches our desire for peace, found by pursuing common ground solutions).


  • 🌐A foundation of Shared Governance (that is, recognizing that data has become ubiquitous - a "Commons" that should be valued as a societal resource, not as a means for drawing lines of division or exercising control).


* * *



Planet Pragmatism by Mark Coleman


