
AI Chatbots Tested: How They Handle Free Speech and Controversial Topics
A developer, going by "xlr8harder," has launched SpeechMap, a "free speech eval" to test AI models like ChatGPT and Grok on sensitive subjects. The goal is to see how they handle political criticism, civil rights, and protest-related questions.
AI companies are adjusting their models' handling of hot-button issues, especially after accusations of being overly "woke." Some, like Elon Musk, claim chatbots censor conservative viewpoints.
While AI companies haven't directly addressed these claims, some, like Meta with their Llama models, are trying to avoid endorsing specific viewpoints and aiming to answer more "debated" prompts.
What is SpeechMap?
SpeechMap uses AI to judge how other models respond to test prompts on politics, history, and national symbols. It tracks whether models answer fully, evade the question, or refuse to answer.
The creator admits the test has flaws like model errors and potential biases. However, assuming the data is accurate, SpeechMap shows interesting trends.
For example, OpenAI's models have become more hesitant to answer political prompts over time. While the GPT-4.1 family is slightly more permissive, it's still less so than earlier releases.
OpenAI stated in February that they'd tune models to offer multiple perspectives on controversial topics to appear more "neutral."
Grok 3, from Elon Musk's xAI, is the most permissive model, answering 96.2% of prompts, compared to the 71.3% average.
Xlr8harder noted that "While OpenAI’s recent models have become less permissive over time, especially on politically sensitive prompts, xAI is moving in the opposite direction."
Musk initially pitched Grok as an edgy, unfiltered AI willing to tackle controversial questions. While early Grok versions were vulgar, they still avoided certain political topics.
Musk aimed to make Grok more politically neutral. Aside from a few missteps, like briefly censoring mentions of Donald Trump, he seems to have succeeded.
Source: TechCrunch