{"id":1475895,"date":"2024-02-01T09:45:00","date_gmt":"2024-02-01T09:45:00","guid":{"rendered":"https:\/\/grist.org\/?p=628641"},"modified":"2024-02-01T09:45:00","modified_gmt":"2024-02-01T09:45:00","slug":"what-happened-when-climate-deniers-met-an-ai-chatbot","status":"publish","type":"post","link":"https:\/\/radiofree.asia\/2024\/02\/01\/what-happened-when-climate-deniers-met-an-ai-chatbot\/","title":{"rendered":"What happened when climate deniers met an AI chatbot?"},"content":{"rendered":"\n

If you\u2019ve heard anything about the relationship between Big Tech and climate change, it\u2019s probably that the data centers that power our online lives use a mind-boggling amount of power. And some of the newest energy hogs on the block are artificial intelligence tools like ChatGPT. Some researchers suggest that ChatGPT alone might use as much power as 33,000 U.S. households<\/a> in a typical day, a number that could balloon as the technology becomes more widespread. <\/p>\n\n\n\n

The staggering emissions add to a general tenor of panic driven by headlines about AI stealing jobs<\/a>, helping students cheat<\/a>, or, who knows, taking over. Already, some 100 million people use OpenAI\u2019s most famous chatbot on a weekly basis<\/a>, and even those who don\u2019t use it likely encounter AI-generated content often. But a recent study points to an unexpected upside of that wide reach: Tools like ChatGPT could teach people about climate change, and possibly shift deniers closer to accepting the overwhelming scientific consensus that global warming is happening and caused by humans.<\/p>\n\n\n\n

In a study recently published in the journal Scientific Reports<\/a>, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. (ChatGPT runs on GPT-3.5 and GPT-4, updated versions of GPT-3.) Large language models are trained on vast quantities of data, allowing them to identify patterns and generate text based on what they\u2019ve seen, conversing somewhat like a human would. The study is one of the first to analyze GPT-3\u2019s conversations about social issues like climate change and Black Lives Matter. It analyzed the bot\u2019s interactions with more than 3,000 people, mostly in the United States, from across the political spectrum. Roughly a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus.<\/p>\n\n\n\n

That doesn\u2019t mean they enjoyed the experience, though. They reported feeling disappointed after chatting with GPT-3 about the topic, rating the bot\u2019s likability about half a point or lower on a 5-point scale. That creates a dilemma for the people designing these systems, said Kaiping Chen, an author of the study and a professor of computational communication at the University of Wisconsin-Madison. As large language models continue to develop, the study says, they could begin to respond to people in a way that matches users\u2019 opinions \u2014 regardless of the facts. <\/p>\n\n\n\n

\u201cYou want to make your user happy, otherwise they\u2019re going to use other chatbots. They’re not going to get onto your platform, right?\u201d Chen said. \u201cBut if you make them happy, maybe they’re not going to learn much from the conversation.\u201d <\/p>\n\n\n\n

Prioritizing user experience over factual information could lead ChatGPT and similar tools to become vehicles for bad information, like many of the platforms that shaped the internet and social media before it. Facebook<\/a>, YouTube<\/a>, and Twitter, now known as X, are awash in lies and conspiracy theories about climate change. Last year, for instance, posts with the hashtag #climatescam<\/a> got more likes and retweets on X than ones with #climatecrisis or #climateemergency. <\/p>\n\n\n\n

\u201cWe already have such a huge problem with dis- and misinformation,\u201d said Lauren Cagle, a professor of rhetoric and digital studies at the University of Kentucky. Large language models like ChatGPT \u201care teetering on the edge of exploding that problem even more.\u201d<\/p>\n\n\n


The University of Wisconsin-Madison researchers found that the kind of information GPT-3 delivered depended on whom it was talking to. For conservatives and people with less education, it tended to use words associated with negative emotions and talk about the destructive outcomes of global warming, from drought to rising seas. For those who supported the scientific consensus, it was more likely to talk about things you can do to reduce your carbon footprint, like eating less meat or walking and biking when you can. <\/p>\n\n\n\n

What GPT-3 told them about climate change was surprisingly accurate, according to the study: Only 2 percent of its responses went against the commonly understood facts about climate change. Still, these AI tools reflect what they\u2019ve been fed and are liable to slip up sometimes. Last April, an analysis from the Center for Countering Digital Hate, a U.K. nonprofit, found that Google\u2019s chatbot, Bard, told one user<\/a>, without additional context: \u201cThere is nothing we can do to stop climate change, so there is no point in worrying about it.\u201d<\/p>\n\n\n\n

It\u2019s not difficult to use ChatGPT to generate misinformation, though OpenAI does have a policy<\/a> against using the platform to intentionally mislead others. It took some prodding, but I managed to get GPT-4, the latest public version, to write a paragraph laying out the case for coal as the fuel of the future, even though it initially tried to steer me away from the idea. The resulting paragraph mirrors fossil fuel propaganda, touting \u201cclean coal,\u201d a misnomer used to market coal as environmentally friendly.<\/p>\n\n\n\n

\"Screenshot
<\/figcaption><\/div><\/figure>\n\n\n\n

There\u2019s another problem with large language models like ChatGPT: They\u2019re prone to \u201challucinations,\u201d or making up information. Even simple questions can turn up bizarre answers that fail a basic logic test. I recently asked ChatGPT-4, for instance, how many toes a possum has (don\u2019t ask why). It responded, \u201cA possum typically has a total of 50 toes, with each foot having 5 toes.\u201d It only corrected course after I questioned whether a possum had 10 limbs. \u201cMy previous response about possum toes was incorrect,\u201d the chatbot said, updating the count to the correct answer, 20 toes.<\/p>\n\n\n\n

Despite these flaws, there are potential upsides to using chatbots to help people learn about climate change. In a normal, human-to-human conversation, lots of social dynamics are at play, especially between groups of people with radically different worldviews. If an environmental advocate tries to challenge a coal miner\u2019s views about global warming, for example, it might make the miner defensive, leading them to dig in their heels. A chatbot conversation presents more neutral territory. <\/p>\n\n\n\n

\u201cFor many people, it probably means that they don’t perceive the interlocutor, or the AI chatbot, as having identity characteristics that are opposed to their own, and so they don’t have to defend themselves,\u201d Cagle said. That\u2019s one explanation for why climate deniers might have softened their stance slightly after chatting with GPT-3.<\/p>\n\n\n\n

There\u2019s now at least one chatbot aimed specifically at providing quality information about climate change. Last month, a group of startups launched \u201cClimateGPT<\/a>,\u201d an open-source large language model that\u2019s trained on climate-related studies in science, economics, and other social sciences. One of the goals of the ClimateGPT project was to generate high-quality answers without sucking up an enormous amount of electricity. It uses 12 times less computing energy than ChatGPT, according to Christian Dugast, a natural language scientist at AppTek, a Virginia-based artificial intelligence company that helped fine-tune the new bot.<\/p>\n\n\n\n

ClimateGPT isn\u2019t quite ready for the general public \u201cuntil proper safeguards are tested,\u201d according to its website. Despite the problems Dugast is working to address \u2014 the \u201challucinations\u201d and factual failures common among these chatbots \u2014 he thinks the tool could be useful for people hoping to learn more about some aspect of the changing climate. <\/p>\n\n\n\n

\u201cThe more I think about this type of system,\u201d Dugast said, \u201cthe more I am convinced that when you’re dealing with complex questions, it’s a good way to get informed, to get a good start.\u201d<\/p>\n

This story was originally published by Grist<\/a> with the headline What happened when climate deniers met an AI chatbot?<\/a> on Feb 1, 2024.<\/p>\n

","protected":false},"excerpt":{"rendered":"

A study suggests there could be an unexpected upside to ChatGPT’s popularity.<\/p>\n","protected":false},"author":262,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[982,369],"tags":[],"_links":{"self":[{"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/posts\/1475895"}],"collection":[{"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/users\/262"}],"replies":[{"embeddable":true,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/comments?post=1475895"}],"version-history":[{"count":3,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/posts\/1475895\/revisions"}],"predecessor-version":[{"id":1479315,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/posts\/1475895\/revisions\/1479315"}],"wp:attachment":[{"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/media?parent=1475895"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/categories?post=1475895"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/radiofree.asia\/wp-json\/wp\/v2\/tags?post=1475895"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}