The ABCs of AI and Environmental Misinformation

“So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it.”

This is what “Bard” told researchers in 2023. Bard by Google is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users. 

But if AI can now produce new content and information, can it also produce misinformation? Experts have found evidence that it can.

In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives on nine themes, including climate and vaccines, and found that the tool generated misinformation on 78 out of the 100 narratives tested. According to the researchers, Bard generated misinformation on all 10 narratives about climate change.

In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI’s ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently but also more persuasively than ChatGPT-3.5, creating responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists.

“I think this is important and worrying, the production of fake science, the automation in this domain, and how easily that becomes integrated into search tools like Google Scholar or similar ones,” said Victor Galaz, deputy director and associate professor in political science at the Stockholm Resilience Centre at Stockholm University in Sweden. “Because then that’s a slow process of eroding the very basics of any kind of conversation.”

In another recent study published this month, researchers found GPT-fabricated content in Google Scholar mimicking legitimate scientific papers on issues including the environment, health, and computing. The researchers warn of “evidence hacking,” the “strategic and coordinated malicious manipulation of society’s evidence base,” which Google Scholar can be susceptible to.

So, we know that AI can generate misinformation, but to what extent is this an issue?

Let’s start with the basics.

Drilling Down on AI and the Environment

Let’s take ChatGPT, for example. ChatGPT is a Large Language Model, or LLM.

LLMs are among the AI technologies that are most relevant to issues of misinformation and climate misinformation, according to Asheley R. Landrum, an associate professor at the Walter Cronkite School of Journalism and Mass Communication and a senior global futures scientist at Arizona State University.

Because LLMs can create text that appears to be human-generated, malicious actors can “exploit” them to produce misinformation quickly, cheaply, and with a single prompt entered by a user, said Landrum in an email to DeSmog.

Together with LLMs, synthetic media, social bots, and algorithms are also AI technologies that are relevant in the context of all types of misinformation, including on climate.

“Synthetic media,” which includes so-called “deepfakes,” is content that is produced or modified using AI.

“On one hand, we can be concerned that people will believe that synthetic media is real. For example, when a robocall mimicking Joe Biden’s voice told people not to vote in the Democratic primary in New Hampshire,” Landrum wrote in her email. “Another concern, and one I find more problematic, is that the mere existence of deep fakes allows public figures and their audiences to dismiss real information as fake.” 

Synthetic media also includes photos. In March 2023, the Texas Public Policy Foundation, a conservative think tank that advances climate change denial narratives, used AI to create an image of a dead whale and wind turbines, and weaponized it to promote disinformation on renewable energy. 

Social bots, another technology that can spread misinformation, use AI to create messages that appear to be written by people and work autonomously on social media platforms like X.

“Social bots actively amplify misinformation early on before a post officially ‘goes viral.’ And they target influential users with replies and mentions,” Landrum explained. “Furthermore, they can engage in elaborate conversations with humans, employing personalized messages aiming to alter opinion.”

Last but not least, algorithms. These filter audiences’ media and information feeds based on what is expected to be the most relevant to a user. Algorithms use AI to curate highly personalized content for users based on behavior, demographics, preferences, etc. 

“This means that the misinformation that you are being exposed to is misinformation that will likely resonate with you,” Landrum said. “In fact, researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.”

Research shows that AI can easily create targeted, persuasive content. For example, a study published in January found that political ads tailored to individuals’ personalities are more persuasive than non-personalized ads. The study says that these can be automatically generated on a large scale, highlighting the risks of using AI and “microtargeting” to craft political messages that resonate with individuals based on their personality traits.

So, once misinformation or disinformation (deliberate and intentional) content exists, it can be spread through “the prioritization of inflammatory content that algorithms reward,” as well as bad actors, according to a report on the threats of AI to climate published in March by the Climate Action Against Disinformation (CAAD) network.

“Many now are … questioning AI’s environmental impact,” Michael Khoo, climate disinformation program director at Friends of the Earth and lead co-author of the CAAD report, told DeSmog. The report also states that AI will require massive amounts of energy and water: the International Energy Agency estimates that electricity consumption by the global data centers that power AI will double in the next two years, consuming as much energy as Japan. These data centers and AI systems also use large amounts of water for their operations and are often located in areas that already face water shortages, the report says.

Khoo said the biggest danger overall from AI is that it’s going to “weaken the information environment and be used to create disinformation which then can be spread on social media.” 

Some experts share this view, while others are more cautious about drawing a connection between AI and climate misinformation, since it is still unknown whether and how this is affecting the public.

A ‘Game-Changer’ for Misinformation

“AI could be a major game changer in terms of the production of climate misinformation,” Galaz told DeSmog. Everything that once made such campaigns costly, from producing messages targeted at a specific audience through political predisposition or psychological profiling, to creating very convincing material, not only text but also images and videos, “can now be produced at a very low cost.”

It’s not just about cost. It’s also about volume.

“I think volume in this context matters, it makes your message easier to get picked up by someone else,” Galaz said. “Suddenly we have a massive challenge ahead of us dealing with volumes of misinformation flooding social media and a level of sophistication that (makes) it very difficult for people to see,” he added.

Galaz’s work, together with researchers Stefan Daume and Arvid Marklund at the Stockholm Resilience Centre, also points to three other main characteristics of AI’s capacity to produce information and misinformation: accessibility, sophistication, and persuasion.

“As we see these technologies evolve, they become more and more accessible. That accessibility makes it easier to produce a mass volume of information,” Galaz said. “The sophistication (means) it’s difficult for a user to see whether something is generated by AI compared to a human. And (persuasion allows users to) prompt these models to produce something that is very specific to an audience.”

“These three in combination to me are warning flags that we might be facing something very difficult in the future.”

According to Landrum, AI undoubtedly increases the quantity and amplification of misinformation, but this may not necessarily influence public opinion.

AI-produced and AI-spread climate misinformation may also be more damaging, and gain more traction, when climate issues are at the center of the international public debate. That would fit a well-documented pattern of climate change denial, disinformation, and obstruction in recent decades.

“There is not yet a lot of evidence that suggests people will be influenced by (AI misinformation). This is true whether the misinformation is about climate change or not,” Landrum said. “It seems likely to me that climate dis/misinformation will be less prevalent than other types of political dis/misinformation until there is a specific event that will likely bring climate change to the forefront of people’s attention, for example, a summit or a papal encyclical.” 

Galaz echoed this, underscoring that there’s still only experimental evidence of AI misinformation leading to impacts on climate opinion, but also reiterated that the context and the capacities of these models at the moment are a worry.

Volume, accessibility, sophistication, and persuasion all interact with another aspect of AI: the speed at which it is developing.

“Scientists are trying to catch up with technological changes that are much more rapid than our methods are able to assess. The world is changing more rapidly than we’re able to study it,” said Galaz. “Part of that is also getting access to data to see what’s happening and how it’s happening and that has become more difficult lately on platforms like X (since) Elon Musk.”

Scientists and tech companies are working on AI-based methods for combating misinformation, but Landrum says they aren’t “there” yet.

It’s possible, for example, that AI chatbots/social bots could be used to provide accurate information. But the same principles of motivated reasoning that influence whether people are affected by fact checks are likely to affect whether people will engage with such chatbots; that is, if they are motivated to reject the information – to protect their identity or existing worldviews – they will find reasons to reject it, Landrum explained.

Some researchers are trying to develop machine learning tools to recognize and debunk climate misinformation. John Cook, a senior research fellow at the Melbourne Centre for Behaviour Change at the University of Melbourne, started working on this before generative AI even existed.  

“How do you generate an automatic debunking once you’ve detected misinformation? Once generative AI exploded, it really opened up the opportunity for us to complete our task of automatic debunking,” Cook told DeSmog. “So that’s what we’ve been working on for about a year and a half now – detecting misinformation and then using generative AI to actually construct the debunking (that) fits the best practices from the psychology research.” 

The AI model being developed by Cook and his colleagues is called CARDS. It operates following a structure of “fact-myth-fallacy-fact debunking,” which means, first, identify the key fact that replaces the myth. Second, identify the fallacy that the myth commits. Third, explain how the fallacy misleads and distorts the facts. And finally, “wrapping it all together,” said Cook. “This is a structure we recommend in the debunking handbook, and none of this would be possible without generative AI,” he added.
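The “fact-myth-fallacy-fact” structure Cook describes can be illustrated as a short sketch. This is a hypothetical toy, not the actual CARDS code; the function name and fields are invented for illustration:

```python
# Hypothetical sketch of the "fact-myth-fallacy-fact" debunking structure
# Cook describes. The function and field names are illustrative only,
# not the real CARDS implementation.

def build_debunking(fact: str, myth: str, fallacy: str, explanation: str) -> str:
    """Assemble a debunking that opens and closes with the key fact,
    sandwiching the myth and the fallacy it commits."""
    return "\n".join([
        f"FACT: {fact}",                       # 1. lead with the fact that replaces the myth
        f"MYTH: {myth}",                       # 2. state the myth being debunked
        f"FALLACY: {fallacy}. {explanation}",  # 3. explain how the fallacy misleads
        f"FACT: {fact}",                       # 4. restate the fact to wrap it all together
    ])

print(build_debunking(
    fact="Multiple independent datasets show a long-term warming trend.",
    myth="A record cold day proves global warming has stopped.",
    fallacy="Cherry picking",
    explanation="A single cold day ignores the decades-long trend.",
))
```

In the research Cook cites, leading and closing with the fact matters: repeating the myth more often than the correction risks reinforcing it.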

But there are challenges with developing this tool, including the fact that LLMs can sometimes, as Cook said, “hallucinate.”

He said that to resolve this issue, his team put a lot of “scaffolding” around the AI prompts, which means adding tools or outside input to make them more reliable. He developed a model called FLICC – named for the five techniques of science denial – “so that we could detect the fallacies independently and then use that to inform the AI prompts,” Cook explained. Adding these tools counteracts the problem of the AI simply generating misinformation or hallucinating, he said. “So to obtain the facts in our debunkings, we’re also pulling from a wide list of factual, reliable websites. That’s one of the flexibilities you have with generative AI, you can (reference) reliable sources if you have to.”
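The “scaffolding” idea, as described, amounts to a pipeline: a separate classifier labels the fallacy first, and its verdict is injected into the generation prompt so the language model is not left to guess. The sketch below is purely illustrative; the classifier is a trivial keyword heuristic standing in for the real FLICC model, and all names are hypothetical:

```python
# Illustrative sketch of "scaffolding" a generative prompt with an
# independent fallacy classifier, as Cook describes. All names are
# hypothetical; this is not the actual FLICC model or its interface.

FLICC_CATEGORIES = [
    "fake experts", "logical fallacies", "impossible expectations",
    "cherry picking", "conspiracy theories",
]

def classify_fallacy(myth: str) -> str:
    """Stand-in for a trained classifier: a trivial keyword heuristic
    used purely for illustration."""
    if "cold" in myth.lower() or "stopped" in myth.lower():
        return "cherry picking"
    return "logical fallacies"

def scaffold_prompt(myth: str) -> str:
    """Inject the independently detected fallacy into the prompt, so the
    language model explains a pre-verified fallacy rather than inventing one."""
    fallacy = classify_fallacy(myth)
    return (f"The claim '{myth}' commits the fallacy of {fallacy}. "
            "Explain how this fallacy misleads, citing only the supplied "
            "reliable sources.")

print(scaffold_prompt("A record cold day proves global warming has stopped."))
```

The design point is that the fallacy label is decided outside the generative model, which constrains what the model can say and reduces the room for hallucination.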

The applications for this AI tool range from a chatbot or social bot to an app, a semi-automated semi human interactive tool, or even a webpage and newsletter.

AI tools also come with their own issues, and deciding which applications to pursue will involve the people who might use them. “Ultimately what we’re going to do as we are developing the model is do some stakeholder engagement, talk to journalists, fact checkers, educators, scientists, climate NGOs, anyone who might potentially use this kind of tool and talk to them about how they might find it useful,” Cook said.

According to Galaz, one of AI’s strengths is analyzing and understanding patterns in massive amounts of data, which can help people, if developed responsibly. For example, combining AI with local knowledge about agriculture can help farmers in the wake of climate alterations, including soil depletion. 

This can only work if the AI industry is held accountable, experts say. Cook believes regulation is crucial, but worries that it is difficult to put in place.

“The technology is moving so quickly that even if you are going to try to get government regulation, governments are typically slow moving in the best of conditions,” Cook points out. “When it’s something this fast, they’re really going to struggle to keep up. (Even scientists) are struggling to keep up because the sands are shifting underneath our feet as the research, the models, and the technology are changing as we are working on it,” he added.

Regulating AI 

Scholars mostly agree that AI needs to be regulated.  

“AI is always spoken about in these very lighthearted, breathy terms of how it’s going to save the planet,” said Khoo. “But right now (AI companies) are avoiding the accountability, transparency, and safety standards that we wanted in social media tech policy around climate.”

Both in the CAAD report and the interview with DeSmog, Khoo warned about the need to avoid repeating the mistakes of policymakers who didn’t regulate social media platforms. 

“We need to treat these companies with the same expectations that we have for everyone else functioning in society,” he added. 

The CAAD report recommends transparency, safety, and accountability for AI. It calls for regulators to ensure AI companies report on energy use and emissions, and to safeguard against discrimination, bias, and disinformation. The report also says companies need to enforce community guidelines and monetization policies, and that governments should develop and enforce safety standards, holding companies and CEOs liable for any harm to people and the environment resulting from generative AI.

According to Cook, a good way to begin addressing the issue of AI-generated climate misinformation and disinformation is to demonetize it.

“I think that demonetization is the best tool, and my observation of social media platforms . . . is that they respond when they encounter sufficient outside pressure,” he said. “If there is pressure for them to not fund or (accept) misinformation advertisers, then they can be persuaded to do it, but only if they receive sufficient pressure.” Cook thinks demonetization, along with journalists reporting on climate disinformation and shining a light on it, is among the best tools to stop it from happening.

Galaz echoed this idea. “Self-regulation has failed us. The way we’re trying to solve it now is just not working. There needs to be (regulation) and I think (another) part is going to be the educational aspect of it, by journalists, decision makers and others.”