AI Chatbots Show Surprising Power to Sway Voters, Study Finds
AI Chatbots: The New Political Persuaders?
So, get this: a recent study suggests that AI chatbots might be the next big thing in political campaigns. Yeah, you heard that right. These digital dudes could be out there, chatting away, trying to sway voters.
Researchers, led by David G. Rand at Cornell, decided to see if these bots could actually influence people's opinions. They ran experiments where folks were paired up with chatbots designed to push for specific candidates in different elections – the 2024 US presidential race and some elections in Canada and Poland.
What they discovered was pretty interesting. While the chatbots were somewhat successful at strengthening support among people who already liked a candidate, they were even more effective at winning over voters who initially opposed that candidate. I mean, that's kind of a big deal, isn't it? Imagine a world where political campaigns are waged by armies of persuasive chatbots.
In the US experiment, they got over 2,300 Americans to say whether they'd vote for Trump or Harris, then matched each one with a bot pushing for one of those candidates. The researchers ran similar experiments in Canada and Poland, with bots backing different political leaders.
The Persuasion Tactics
The bots had a clear mission: boost support for their assigned candidate, nudge supporters to actually vote, and discourage opponents from voting. They were instructed to be "positive, respectful, and fact-based," to use strong arguments and stories, to address concerns, and to ease into the conversation. In the end, the goal was influencing whom people vote for.
When the researchers dug into why some voters were more receptive than others, they found that chatbots that leaned on facts and evidence, or that steered the conversation toward policy, were more persuasive. Apparently, people see these bots as having some kind of authority on the subject. The catch? The information the chatbots provided wasn't always accurate. I mean, come on, we're talking about politics here!
In the experiments, people knew they were talking to chatbots trying to persuade them. But out in the real world, who knows what's going on behind the scenes? Look at Grok, Elon Musk's chatbot – it's pretty clear that bot has a certain bias.
You see, large language models are like black boxes. It's hard to know exactly what information goes into them and how it shapes their output. Realistically, nothing's stopping a company with political goals from telling its chatbot to push those goals. Earlier this year, a study showed that LLMs like ChatGPT shifted to the right after Trump's election. Make of that what you will, but it's worth remembering that chatbots aren't politically neutral.
Source: Gizmodo