Introduction

Amidst the current buzz around user-centered AI, the spotlight often shines on individual gains, but isn't it time we cast a broader gaze? While I'm all for enhancing user experiences, let's also ponder how AI's ripples extend to impact not only humanity but the very fabric of our natural world. How is the exponential growth of data, which AI both relies on and produces, affecting climate change through the energy demands of the data centers that store it? How is AI marginalizing people who may not even realize it is at play, such as in decision-making for jobs or loans? These are the types of questions we need to be asking.

There are two major documents in the US meant to inform AI policymaking. One is the AI Bill of Rights. When I raise this topic, people occasionally misinterpret it as advocating rights for AI entities; to be clear, that is not the discussion here. The AI Bill of Rights deals with human rights around AI, such as the right to be made aware when you are talking to a bot and to have the choice to talk to a human instead. It also calls for rights that confront unchecked social media data collection and protect against discrimination. As you can imagine, the ‘rights’ in the AI Bill of Rights, well-meaning as they are, are not currently being implemented. And I must highlight the one huge area left out of the AI Bill of Rights entirely: any concern for the environment, which is intrinsically linked to human rights.

The second document is the AI Risk Management Framework (AI RMF). Unlike the Bill of Rights, it does address AI's effects on the environment in several places, alongside many other areas of concern. Beyond its primary audience, the AI RMF also speaks to broader groups of AI actors, including trade associations, standards organizations, researchers, advocacy groups, environmental organizations, civil society, end-users, and potentially affected individuals and communities. According to the document, these AI actors can:

- Provide context and understanding of potential and actual AI impacts.

- Offer formal or quasi-formal norms and guidance for AI risk management.

- Define boundaries for AI operations (technical, societal, legal, and ethical).

- Encourage discussions on balancing societal values and priorities, including civil liberties, equity, the environment, the economy, and human rights. (Tabassi, 2023)

It is a positive thing to be working on regulation around AI, and we will see more and more of it in the years ahead. How much it actually helps is hard to tell, as corporations are mostly self-regulating, and they cannot be trusted to do so. According to Timnit Gebru, founder of the Distributed AI Research Institute (DAIR) and co-founder of Black in AI, “the #1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices.” (2021) In contrast to the arms-race framing favored by big tech executives, the real obstacle to innovation is the existing system: a small group creates technology with harmful consequences, while everyone else is kept busy mitigating that harm, left with little time, resources, or space to realize their own vision for the future. (Gebru, 2021)

How to implement regulations, both at the corporate level and at the level of users, is very important and also very challenging. When cars first came into the world, roads needed to change, guardrails needed to be put up, and rules needed to be put in place. What analogous measures, akin to speed limits and seat belts, must we consider when evaluating the risks of AI?

We Need to Focus on User-Centered AI, Not Corporations, but How?

Organizations advocating for civil rights, such as the Electronic Privacy Information Center, have actively engaged in the broader discourse concerning AI regulations. They have expressed reservations about the idea that industry associations should hold substantial influence in shaping policies related to a rights-oriented document formulated by the White House. (Krishan, 2023)

Formally titled the Blueprint for an AI Bill of Rights, the document, released in October 2022, is the outcome of a joint effort involving the Office of Science and Technology Policy (OSTP), scholars, human rights organizations, the wider public, and major corporations including Microsoft and Google. The blueprint offers recommendations to enhance the transparency, equity, and security of AI applications. It delves into the immediate and potential civil rights injustices brought about by AI, with a particular focus on domains like employment, education, healthcare, financial access, and business surveillance. (The White House, 2023)

Summary of AI Bill of Rights

When the White House introduced its vision for an AI 'Bill of Rights,' it presented an approach centered on human rights as the foundational basis for regulating AI. This was succeeded in January 2023 by the AI RMF, which adopted a risk-centric perspective: a framework for assessing the extent and nature of risks in specific use scenarios and identifying potential threats, in order to establish a sense of reliability in AI technology. (Krishan, 2023)

The following are the main areas that the AI Bill of Rights addresses:

  • SAFE AND EFFECTIVE SYSTEMS: You should be protected from unsafe or ineffective systems.

  • ALGORITHMIC DISCRIMINATION PROTECTIONS: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

  • DATA PRIVACY: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

  • NOTICE AND EXPLANATION: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

  • HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. (The White House, 2023)

These are all incredibly important areas of focus, but I have two major concerns left unanswered. The first is, frankly: how and when are these to be implemented? And the second, of course, is the lack of concern for the environment. The intrinsic connection between human rights and the necessary protections of the natural world is direly missing from this bill of rights.

Summary of AI Risk Management Framework

The AI Risk Management Framework (AI RMF) acknowledges that while artificial intelligence (AI) holds immense potential to positively impact various aspects of society and the environment, it also carries unique risks that can affect individuals, organizations, and communities. Unlike traditional software systems, AI systems can be influenced by evolving data and complex contexts, making risk detection and response challenging. These socio-technical systems are susceptible to amplifying inequalities and undesirable outcomes, but responsible AI practices, emphasizing human-centricity and social responsibility, can help mitigate these risks. AI risk management is essential for fostering responsible AI development and use, enhancing trustworthiness, and building public trust in this transformative technology. (Tabassi, 2023)

Figure 1 above is taken directly from the AI RMF (Tabassi, 2023) and provides an abbreviated, broad-strokes look at the potential and actual harms of AI. As seen in the figure, one major branch addresses environmental concerns, unlike the AI Bill of Rights. This is vital: if nothing is done, the harm will compound over time.

Expert Reactions to the AI Bill of Rights and the AI Risk Management Framework

Nicole Foster, who leads global policy for AI and machine learning at Amazon Web Services, highlighted a major issue with the documents: the primary source of concern lies in their conflicting interpretations of what the technology fundamentally is. (Krishan, 2023) The two documents define AI in seemingly contradictory ways.

According to experts in AI policy, the absence of clear directives from the White House regarding how to reconcile contrasting perspectives on AI—those centered on rights and those on risks—has become a significant obstacle for companies striving to develop innovative products while ensuring necessary protections. (Krishan, 2023)

Personally, I think they could and should be implemented together, and honestly, arguing over definitions seems like an excuse not to abide by the regulations they suggest.

Patrick Gaspard, who serves as the CEO and president of the Center for American Progress, acknowledged the substantial commitment made by the Biden administration in crafting the significant Blueprint for an AI Bill of Rights. Gaspard highlighted the impending AI executive order as an ideal occasion to transform these guiding principles into enforceable policy within the United States. He emphasized that this presents an opportune moment for the president to prioritize democracy and individual rights in shaping the trajectory of these influential tools. (Hananel, 2023)

Ben Winters, the senior legal advisor at EPIC overseeing their efforts in the realm of AI and human rights, expressed his regret that the business sector is displeased with the policy document's alignment, or lack thereof, with their profit-oriented motives—namely, generating revenue, exploiting individuals' data, and securing additional contracts. He emphasized that the document is a policy-oriented text, and as such, industry entities do not possess the authority to author it according to their preferences. (Krishan, 2023)

We Need to Include Concerns for the Environment/Natural World

Unlike the Blueprint for an AI Bill of Rights, the AI Risk Management Framework does include protections for the environment. For instance, it states that AI systems “should not, under defined conditions, cause physical or psychological harm or lead to a state in which human life, health, property, or the environment is endangered.” (ISO/IEC TS 5723:2022, quoted in Tabassi, 2023, p. 13)

This and the rest of the 36-page document contain many important considerations and regulatory needs for AI risks. However, major questions remain about how to make regulation happen. Who will be the AI police? Who will be the highway patrol? Unlike cars on the road, where it is easy to see that a crash has happened or to measure a vehicle's carbon output, it is much harder to see how and when AI is causing harm.

One clear example of how AI can damage the environment is its energy consumption and the carbon emissions that come with it. Here's how:

1. Increased Energy Demands: AI models, particularly deep learning models, require significant computational power to train and execute tasks. This necessitates powerful, energy-intensive hardware, including graphics processing units (GPUs) and specialized AI chips. Large-scale training runs, such as training a language model like GPT-3, can consume substantial amounts of electricity over extended periods (see the back-of-envelope sketch after this list).

2. Data Center Operations: Many AI applications, especially those involving big data and machine learning, rely on data centers for processing and storage. These data centers operate 24/7 and require vast amounts of energy for cooling and maintaining server infrastructure. Cooling alone can account for a significant portion of the energy consumption.

3. Manufacturing of Hardware: The production of AI hardware components, including GPUs and specialized AI chips, involves resource-intensive processes and the mining of rare materials. These processes contribute to environmental degradation and carbon emissions.

4. E-waste: As AI technologies evolve rapidly, older AI hardware can become obsolete. The disposal of electronic waste (e-waste) generated by outdated AI equipment can have detrimental effects on the environment if not managed properly.

5. Indirect Environmental Impact: AI is also used in applications like optimizing logistics and transportation systems. While these applications can reduce energy consumption in some cases, they may lead to increased overall energy use if not implemented thoughtfully.

6. Data Centers Location: The location of data centers matters. If they are powered by fossil fuels in regions with high emissions, the carbon footprint of AI operations can be significant.
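To make items 1 and 2 concrete, here is a minimal back-of-envelope sketch in Python. Every constant is an assumed, illustrative value (cluster size, GPU power draw, training time, PUE, grid carbon intensity), not a measured figure for any real model or data center:

    # Illustrative estimate of training energy and emissions.
    # All constants below are assumptions for the sake of example.
    NUM_GPUS = 1_000            # assumed training cluster size
    GPU_POWER_KW = 0.4          # assumed average draw per GPU (~400 W)
    TRAINING_DAYS = 30          # assumed wall-clock training time
    PUE = 1.5                   # power usage effectiveness: total facility
                                # energy / IT energy (cooling and overhead
                                # add ~50% on top of the servers; see item 2)
    GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity

    it_energy_kwh = NUM_GPUS * GPU_POWER_KW * TRAINING_DAYS * 24
    facility_energy_kwh = it_energy_kwh * PUE
    emissions_tonnes = facility_energy_kwh * GRID_KG_CO2_PER_KWH / 1000

    print(f"IT energy:       {it_energy_kwh:,.0f} kWh")        # 288,000 kWh
    print(f"Facility energy: {facility_energy_kwh:,.0f} kWh")  # 432,000 kWh
    print(f"Emissions:       {emissions_tonnes:,.1f} t CO2e")  # 172.8 t CO2e

Under these assumptions, a single month-long training run emits on the order of a hundred tonnes of CO2e, and the PUE multiplier shows how cooling and facility overhead inflate the bill beyond the servers themselves. Real figures vary enormously with hardware, location, and energy mix.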

Efforts are being made to mitigate these environmental impacts. This includes research into more energy-efficient AI algorithms, the use of renewable energy sources for data centers, and the development of AI hardware with lower power requirements. However, it's important to recognize that AI's environmental impact is a complex issue that requires ongoing attention and sustainable practices to minimize its harm to the environment.

We have frameworks that say this shouldn't happen. We have an opportunity to step up and make AI more ethical: by not letting it hurt anyone, or the world around us, and by letting it help us make non-harm the norm.

The chief science advisor to President Biden has indicated that a growing awareness of the potential hazards associated with AI is driving a pressing initiative to establish protective measures. (Groll, 2023) When will we see these protective measures implemented? And what unforeseen side effects might they have?

A central motif is the transformation of aspirations into practical steps: how can we effectively put into practice the substantial promises that have been made?

One particularly potent instrument, and one that we can anticipate will garner significant attention in the forthcoming years, is AI as a pivotal player in addressing climate challenges. Governments, enterprises, and non-profit organizations face a growing array of prospects for harnessing AI's capabilities to expedite their efforts to combat climate change. (Toplensky, 2023)

As an illustration, consider UPS, which is leveraging machine learning to optimize its truck routes. This initiative conserves tens of millions of gallons of fuel annually, curbing the corresponding emissions (a rough sense of the scale is sketched below). We have also seen AI applications revolutionize the synchronization of traffic lights within urban landscapes: in the Green Light project in Hamburg, Germany, the reduction in stop-and-start driving translated to a 10% decrease in emissions. (Toplensky, 2023)
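To put the UPS figure in rough perspective, here is an illustrative conversion into CO2 terms. The 10 million gallon input is a conservative reading of "tens of millions of gallons," and the emission factor is an approximate, commonly cited value for diesel; neither number comes from the source:

    # Illustrative CO2 conversion for the UPS fuel savings cited above.
    GALLONS_SAVED = 10_000_000          # conservative reading of the source
    KG_CO2_PER_GALLON_DIESEL = 10.2     # approximate emission factor

    tonnes_avoided = GALLONS_SAVED * KG_CO2_PER_GALLON_DIESEL / 1000
    print(f"~{tonnes_avoided:,.0f} tonnes of CO2 avoided per year")
    # -> ~102,000 tonnes of CO2 avoided per year

Even at this low-end reading, that is a six-figure tonnage of avoided emissions annually from a single logistics optimization.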

Conclusion: There is Much Work to be Done, and Fast 

The AI Bill of Rights constitutes a collection of principles crafted by the OSTP, aiming to steer the ethical development and application of artificial intelligence. This initiative emerges in the context of a worldwide endeavor to formulate increased regulations that oversee the realm of AI. (The White House, 2023) This is all well and good, but we need to see it put into action. To conclude, here is an outline of just one aspect that I would like to see implemented, defined by the problem, proposed solution, what we already know, and what we need to know. 

Problem: The development and deployment of AI can harm the environment and exacerbate climate change. The profit-driven approach to AI development may prioritize the interests of big companies over the well-being of humanity.

Solution: The AI Bill of Rights should include provisions that prioritize the betterment of humanity and its influence on the natural environment. 

We already know: We are in a climate crisis. We must be careful not to view AI as a solution for all problems without considering its costs and benefits (van Wynsberghe, 2020), such as the high energy consumption needed to train and run algorithms (Coeckelbergh, 2021). We know that “. . . the tension at the heart of climate AI is that it reproduces the very problems it claims to be solving: those of the climate crisis.” (Baker & Gabrys, 2022)

We need to know: The potential impacts of AI on the environment and climate, including the potential risks and benefits of AI in addressing environmental challenges. We need to understand the economic incentives and disincentives that may drive or hinder the development of environmentally-friendly AI.

This article is a call to action to refocus priorities, pull back from sci-fi level fears of AI, and understand what is already in place for AI regulation, but not being implemented. How do we get that to happen? What can we as individuals do?

Take Action: It's time for all of us to play a crucial role in shaping the future of AI. Stay informed, support advocacy groups, engage with your elected representatives, and participate in public consultations. Encourage ethical AI practices, promote transparency, and be a responsible consumer. Raise awareness, advocate for privacy, and support legislation that prioritizes safety and fairness. Remember, your voice matters in the development of AI regulations that safeguard our society's interests. Together, we can ensure that AI technologies serve humanity responsibly and ethically. 


Resources

Baker, K., & Gabrys, J. (2022). Earth for AI: A political ecology of data-driven climate initiatives. Geoforum, 131, 1–10. https://doi.org/10.1016/j.geoforum.2022.01.016

Coeckelbergh, M. (2021). AI for climate: Freedom, justice, and other ethical and political challenges. AI and Ethics, 1, 67–72.

Gebru, T. (2021, December 6). For truly ethical AI, its research must be independent from big tech. The Guardian. https://www.theguardian.com/commentisfree/2021/dec/06/google-silicon-valley-ai-timnit-gebru

Groll, E. (2023, August 12). White House is fast-tracking executive order on artificial intelligence. CyberScoop. https://cyberscoop.com/white-house-executive-order-artificial-intelligence/

Hananel, S. (2023, August 3). RELEASE: Civil rights, tech groups call on Biden to protect public from harms of AI. Center for American Progress. https://www.americanprogress.org/press/release-civil-rights-tech-groups-call-on-biden-to-protect-public-from-harms-of-ai/

Krishan, N. (2023, August 24). Experts warn of ‘contradictions’ in Biden administration’s top AI policy documents. FedScoop. https://fedscoop.com/experts-warn-of-contradictions-in-biden-administrations-top-ai-policy-documents/

Perrigo, B. (2022, February 17). Inside Facebook’s African sweatshop. Time. https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/

Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology. https://doi.org/10.6028/nist.ai.100-1

The White House. (2023, March 16). Blueprint for an AI Bill of Rights. Office of Science and Technology Policy. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Toplensky, R. (2023, June 22). Google’s CSO Kate Brandt on how AI can accelerate climate action. The Wall Street Journal. https://www.wsj.com/articles/googles-cso-kate-brandt-on-how-ai-can-accelerate-climate-action-8410242c

van Wynsberghe, A. (2020). Artificial Intelligence: From Ethics to Policy. Panel for the Future of Science and Technology, European Parliamentary Research Service (EPRS), Scientific Foresight Unit (STOA).