GAO Warns of Hidden Costs and Risks of Generative AI in New Environmental and Human Impact Report

In April 2025, the U.S. Government Accountability Office (GAO) released its third report in a series on generative artificial intelligence, titled “Artificial Intelligence: Generative AI’s Environmental and Human Effects” (GAO-25-107172). Authored by Brian Bothwell and Kevin Walsh, this report offers a comprehensive assessment of the largely opaque and underreported impacts of generative AI on both the environment and society. While generative AI holds transformative potential, the GAO underscores that its explosive growth comes with significant and uncertain costs that demand urgent scrutiny and regulatory attention.

According to the report, the environmental footprint of generative AI is vast yet poorly understood due to the lack of data transparency from developers. The training of large language models consumes substantial electricity, often equivalent to the energy use of entire towns, and relies heavily on water-intensive cooling systems within data centers. For example, training Meta’s largest Llama 3 model (405 billion parameters) reportedly consumed over 21,000 megawatt-hours of electricity and emitted nearly 9,000 metric tons of CO2-equivalent emissions. Yet, even these estimates may understate the impact, as few developers disclose specific figures on energy, water use, or carbon emissions. Worse still, much of the environmental damage stems not from usage but from the full lifecycle of AI infrastructure—including raw material extraction, hardware manufacturing, and end-of-life disposal—areas for which data are particularly sparse.

Human consequences are equally concerning. The GAO outlines five principal risks posed by generative AI: unsafe systems that can produce inaccurate or harmful outputs; threats to data privacy; vulnerabilities to cyberattacks such as prompt injections and data poisoning; amplification of societal biases; and a troubling lack of accountability when systems malfunction. These issues are compounded by the black-box nature of most AI models, the absence of transparency about training data, and the rapid evolution of generative capabilities beyond existing regulatory frameworks.

The report also highlights sector-specific concerns. In public services, generative AI can streamline communication and accessibility but may mislead users with hallucinated legal advice or false information. In labor markets, it could boost productivity while displacing entry-level jobs and deepening socioeconomic disparities. In education and research, it may democratize access to knowledge yet introduce academic dishonesty and diminish critical thinking. And in national security, it raises profound risks related to misinformation campaigns, data manipulation, and cybersecurity breaches.

To mitigate these concerns, the GAO offers policy options for Congress, agencies, and private industry. These include improving data collection and environmental reporting, encouraging the use of risk management frameworks such as those issued by NIST and GAO, and developing standards for transparency and accountability. Although maintaining the status quo is presented as an option, GAO cautions that existing measures are insufficient given the scale of potential harm.

The report concludes that without stronger policies and better data, generative AI may advance at the expense of environmental sustainability and social well-being. Policymakers are thus urged to act decisively to balance innovation with responsibility.

Disclaimer: This blog post summarizes a GAO report and is not guaranteed to be accurate. It is not intended to provide legal advice.

Next
Next

GAO Urges OMB to Modernize Burden Reduction Guidance for Federal Benefit Programs