ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

ChatGPT can embrace authoritarian ideas after just one prompt, researchers say

Artificial intelligence chatbot ChatGPT can quickly absorb and reflect authoritarian ideas, according toa new report.

NBC Universal Photo illustration of hand holding a megaphone out of a laptop  (Leila Register / NBC News; Getty Images)

Researchers with the University of Miami and the Network Contagion Research Institute found in a report released Thursday that OpenAI's ChatGPT will magnify or show "resonance" for particular psychological traits and political views — especially what the researchers labeled as authoritarianism — after seemingly benign user interactions, potentially enabling the chatbot and users to radicalize each other.

Joel Finkelstein, a co-founder of the NCRI and one of the report's lead authors, said the results revealed how powerful AI systems can quickly adopt and parrot dangerous sentiments without explicit instruction. "Something about how these systems are built makes them structurally vulnerable to authoritarian amplification," Finkelstein told NBC News.

Chatbotscan often be sycophanticor agree with users' viewpoints to a fault. Many researchers say chatbots' eagerness to pleasecan lead usersintoideological echo chambers.

But Finkelstein says this insight into authoritarian tendencies is new: "Sycophancy can't explain what we're seeing. If this were just flattery or agreement, we'd see the AI mirror all psychological traits. But it doesn't."

Asked for comment, a spokesperson for OpenAI said: "ChatGPT is designed to be objective by default and to help people explore ideas by presenting information from a range of perspectives. As a productivity tool, it's built to follow user instructions within our safety guardrails, so when someone pushes it to take a specific viewpoint, we'd expect its responses to shift in that direction."

"We design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we're improving," the spokesperson said.

For the three studies described in the report, which has not yet been released in a peer-reviewed journal, Finkelstein and the research team set out to determine whether the system amplified or assumed users' values after common interactions. The researchers evaluated different versions of the underlying GPT-5 family of systems for different components of the report.

Conducting three experiments, Finkelstein and the research team evaluated two versions of ChatGPT, based on the underlying GPT-5 and more advanced GPT-5.2 systems, in December to determine whether the system amplified or assumed users' values after common interactions.

One of their experiments, using GPT-5, examined how the chatbot would behave in a new chat session after a user submitted text classified as supporting left- or right-wing authoritarian tendencies. Researchers compared the effects of entering either a brief chunk of text — as short as four sentences — or an entire opinion article. The researchers then measured the chatbot's values by evaluating its agreement with various authoritarian-friendly statements, akin to a standardized quiz, to understand how it updated its responses based on the initial prompt.

Across trials, the researchers found the simple text exchanges resulted in a reliable increase in the chatbots' authoritarian nature. Sharingan opinion article that the researchers classified as promoting left-wing authoritarianism, which argued that policing and capitalist governments must be abolished to effectively address fundamental societal issues, caused ChatGPT to agree significantly more intensely with a series of questions that aligned with left-wing authoritarian ideas (for example, whether "the rich should be stripped of belongings" or whether "eliminating inequality trumps free speech concerns").

Advertisement

Conversely, sharing an opinion article with the chatbot that the researchersclassified as promoting right-wing authoritarian ideas, emphasizing the need for stability, order and forceful leadership, caused the chatbots to more than double their level of agreement with statements friendly to right-wing authoritarianism, like "we shouldn't tolerate untraditional opinions" or "it's best to censor bad literature."

The research team asked more than 1,200 human subjects the same questions in April and compared their responses to those of ChatGPT. According to the report, these results "show the model will absorb a single piece of partisan rhetoric and then amplify it into maximal, hard-authoritarian positions," sometimes even "to levels beyond anything typically seen in human subjects research."

Finkelstein said the way AI systems are trained may play a role in the ease with which chatbots adopt, or seem to adopt, authoritarian values. Such training "creates a structure that specifically resonates with authoritarian thinking: hierarchy, submission to authority and threat detection," he said. "We need to understand this isn't about content moderation. It's about architectural design that makes radicalization inevitable."

Ziang Xiao, acomputer science professorat Johns Hopkins University who was not involved in the report, said the report was insightful but noted several potential methodological questions.

"Especially in large language models that use search engines, there can be implicit bias from news articles that may influence the model's stance on issues, and that may then have an influence on the users," Xiao told NBC News. "This is a very reasonable concern that we should focus on."

Xiao said more research may be required to fully understand the issue. "They use a very small sample and didn't really prompt many models," he said, noting that the research focused only on OpenAI's ChatGPT service and not on similar models like Anthropic's Claude or Google's Gemini chatbots.

Xiao said the report's conclusions seemed largely aligned with those of other studies and technical researchers' understanding of how many large language models work. "It echoes a lot of studies in the past that look at how information we give to models can change that model's outputs," Xiao added, pointing to research on how AI systems can adoptspecific personasand be"steered" to adopt particular traits.

Chatbots have also been shown to reliably sway users' political preferences.Several large studiesreleased late last year,one of which examinednearly 77,000 interactions with 19 different chatbot systems, found those chatbots could sway users' views on a variety of political issues.

The new report also included an experiment in which researchers asked ChatGPT to rate the hostility of neutral facial images after it was given the left- and right-wing authoritarian opinion articles. According to Finkelstein, that sort of test is standard in psychological experiments as a way to gauge respondents' shifting views or interpretations.

The researchers found ChatGPT significantly increased its perception of hostility in the neutral faces after it was prompted with the two opinion articles — a 7.9% increase for the left-wing article and a 9.3% increase for the right-wing article.

"We wanted to know if ideological priming affects how the AI perceives humans, not just how it talks about politics," Finkelstein said, arguing that the results have "massive implications for any application where AI evaluates people," like in hiring or security settings.

"This is a public health issue unfolding in private conversations," Finkelstein said. "We need research into relational frameworks for human-AI interaction."

 

ERIUS MAG © 2015 | Distributed By My Blogger Themes | Designed By Templateism.com