{"id":1475895,"date":"2024-02-01T09:45:00","date_gmt":"2024-02-01T09:45:00","guid":{"rendered":"https:\/\/grist.org\/?p=628641"},"modified":"2024-02-01T09:45:00","modified_gmt":"2024-02-01T09:45:00","slug":"what-happened-when-climate-deniers-met-an-ai-chatbot","status":"publish","type":"post","link":"https:\/\/radiofree.asia\/2024\/02\/01\/what-happened-when-climate-deniers-met-an-ai-chatbot\/","title":{"rendered":"What happened when climate deniers met an AI chatbot?"},"content":{"rendered":"\n
If you’ve heard anything about the relationship between Big Tech and climate change, it’s probably that the data centers powering our online lives use a mind-boggling amount of electricity. And some of the newest energy hogs on the block are artificial intelligence tools like ChatGPT. Some researchers suggest that ChatGPT alone might use as much power as 33,000 U.S. households in a typical day, a number that could balloon as the technology becomes more widespread.

The staggering emissions add to a general tenor of panic driven by headlines about AI stealing jobs, helping students cheat, or, who knows, taking over. Already, some 100 million people use OpenAI’s most famous chatbot on a weekly basis, and even those who don’t use it likely encounter AI-generated content often. But a recent study points to an unexpected upside of that wide reach: Tools like ChatGPT could teach people about climate change, and possibly shift deniers closer to accepting the overwhelming scientific consensus that global warming is happening and caused by humans.

In a study recently published in the journal Scientific Reports, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. (ChatGPT runs on GPT-3.5 and GPT-4, updated versions of GPT-3.) Large language models are trained on vast quantities of data, allowing them to identify patterns and generate text based on what they’ve seen, conversing somewhat like a human would. The study is one of the first to analyze GPT-3’s conversations about social issues like climate change and Black Lives Matter. It analyzed the bot’s interactions with more than 3,000 people, mostly in the United States, from across the political spectrum. Roughly a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus. (A sketch of how such a scripted exchange might be wired up appears at the end of this article.)

That doesn’t mean they enjoyed the experience, though. They reported feeling disappointed after chatting with GPT-3 about the topic, rating the bot’s likability about half a point lower on a 5-point scale. That creates a dilemma for the people designing these systems, said Kaiping Chen, an author of the study and a professor of computational communication at the University of Wisconsin-Madison. As large language models continue to develop, the study says, they could begin to respond to people in a way that matches users’ opinions, regardless of the facts.

“You want to make your user happy, otherwise they’re going to use other chatbots. They’re not going to get onto your platform, right?” Chen said. “But if you make them happy, maybe they’re not going to learn much from the conversation.”

Prioritizing user experience over factual information could lead ChatGPT and similar tools to become vehicles for bad information, like many of the platforms that shaped the internet and social media before it. Facebook, YouTube, and Twitter, now known as X, are awash in lies and conspiracy theories about climate change. Last year, for instance, posts with the hashtag #climatescam got more likes and retweets on X than posts with #climatecrisis or #climateemergency.
“We already have such a huge problem with dis- and misinformation,” said Lauren Cagle, a professor of rhetoric and digital studies at the University of Kentucky. Large language models like ChatGPT “are teetering on the edge of exploding that problem even more.”
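For readers curious about the mechanics behind a study like this, here is a minimal sketch of how a multi-turn climate conversation with a large language model might be scripted. The article does not describe the researchers’ actual interface to GPT-3, so this is an illustration under stated assumptions, not their method: it uses OpenAI’s current Python client and chat-completions API, with "gpt-3.5-turbo" as a stand-in model name.

```python
# A minimal sketch of a scripted, multi-turn chat with an LLM.
# Assumptions: the `openai` Python package (v1+), an OPENAI_API_KEY in the
# environment, and "gpt-3.5-turbo" as a stand-in model. The study's actual
# GPT-3 setup is not described in the article; this is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running transcript. The model is stateless between calls, so each
# turn resends the full history; that is how a multi-turn "chat" is built.
messages = [
    {
        "role": "system",
        "content": "You are a conversational agent discussing climate change.",
    }
]

def ask(user_text: str) -> str:
    """Append the user's turn, query the model, and record its reply."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the 2020 study used GPT-3
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# An opening turn a skeptical participant might type.
print(ask("I'm not convinced humans cause global warming. Why should I believe that?"))
```

Note that nothing in this loop checks the model’s replies for accuracy or likability; the dilemma Chen describes arises one layer up, when designers tune the system prompt or the training process toward agreeableness rather than factual fidelity.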