Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? And why might even catastrophic estimates from well-meaning researchers not tell the full story?
Dr. Emma Strubell is an assistant professor in the Language Technologies Institute in Carnegie Mellon University’s School of Computer Science. Her research sits at the intersection of machine learning and natural language processing, with a focus on providing pragmatic solutions to practitioners who wish to gain insights from natural language text via computation- and data-efficient AI.
Dr. Sasha Luccioni is a researcher in ethical and sustainable artificial intelligence and climate lead at Hugging Face. Her work focuses on better understanding the societal and environmental impacts of AI models, datasets, and systems. She's also a founding member of Climate Change AI.
References:
"The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink" David Patterson, Jeff Dean, et al, 2022.
"The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans" Bill Tomlinson, Andrew W. Torrence, et al, 2023.
"The growing energy footprint of artificial intelligence." Alex de Vries, 2023.
New York Times coverage: "AI Could Soon Need as Much Electricity as an Entire Country"
"On the Dangers of Stochastic Parrots: Can Large Language Models Be Too Big?" Emily M. Bender, Timnit Gebru, Angelina McMillan-Majors et al, 2021. "Energy and Policy Considerations for Deep Learning in NLP." Emma Strubell, 2019. "The 'invisible' materiality of information technology." Alan Borning, Batya Friedman, Nick Logler, 2020. "Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning" Sasha Luccioni and Alex Hernandez-Garcia, 2023. "AI is dangerous, but not for the reasons you think." Sasha Luccioni, 2023.
Fresh AI Hell:
It's not the software that's to blame for a deadly Tesla Autopilot crash, but the company selling the software.
4chan Uses Bing to Flood the Internet With Racist Images
Followup from Vice: Generative AI Is a Disaster, and Companies Don't Seem to Really Care
Is this evidence for LLMs having an internal "world model"?