

Deceptive Reasoning by AI Can Amplify Beliefs in Misinformation

Image credit: Fluid Interfaces · DALL·E 2

ACM CHI 2025 · Research Publication
April 2025

In a randomized controlled trial, our study “Deceptive Explanations by Large Language Models Lead People to Change Their Beliefs About Misinformation More Often than Honest Explanations” examines how AI systems affect human beliefs when they provide inaccurate reasoning in the explanations that accompany their outputs. We find not only that deceptive reasoning can make people believe false information more than they would without AI, but also that deceptive reasoning by an AI chatbot can be more persuasive than accurate reasoning. As AI systems become increasingly capable of producing persuasive reasoning, and can even do so with hidden motives unknown to the AI platform’s developers, this work underscores their potential to inadvertently amplify misinformation, shape public opinion in unintended ways, and exacerbate the societal risks associated with automated persuasion.

Our Approach

In a preregistered online experiment, nearly 600 participants evaluated the truth of a series of true and fake news titles. Participants were then provided with feedback from either a deceptive or accurate AI in one of two formats:

  • Without Explanations: The AI system simply states whether a piece of information is true or false.
  • With Explanations: The classification is accompanied by an AI-generated explanation of why the information is true or false.

This setup allowed us to isolate the influence of the AI-provided explanations on belief revision.
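
To make the design concrete, the minimal Python sketch below illustrates the two-by-two structure (feedback source: accurate vs. deceptive; feedback format: with vs. without explanation) and one simple way to quantify belief revision as the change between a pre-feedback and a post-feedback truthfulness rating. The rating scale, function names, and toy data here are our own illustrative assumptions, not the paper’s materials or analysis code.

```python
import random
from statistics import mean

# Hypothetical sketch of the 2 x 2 design described above:
# feedback source (accurate vs. deceptive AI) crossed with
# feedback format (classification only vs. classification + explanation).
SOURCES = ["accurate", "deceptive"]
FORMATS = ["no_explanation", "with_explanation"]


def assign_condition(rng):
    """Randomly assign a participant to one cell of the design."""
    return rng.choice(SOURCES), rng.choice(FORMATS)


def belief_change(pre_rating, post_rating):
    """Belief revision for one headline: post-feedback minus pre-feedback rating."""
    return post_rating - pre_rating


def mean_change_by_condition(records):
    """Average belief change per (source, format) cell.

    `records` is an iterable of (source, format, pre, post) tuples,
    e.g. one entry per headline rating collected in the task.
    """
    cells = {}
    for source, fmt, pre, post in records:
        cells.setdefault((source, fmt), []).append(belief_change(pre, post))
    return {cell: mean(changes) for cell, changes in cells.items()}


if __name__ == "__main__":
    rng = random.Random(0)
    # Toy data: 600 simulated participants, each rating one headline on a 0-1 scale.
    data = []
    for _ in range(600):
        source, fmt = assign_condition(rng)
        pre = rng.random()
        post = min(1.0, max(0.0, pre + rng.uniform(-0.2, 0.2)))
        data.append((source, fmt, pre, post))
    for cell, avg in mean_change_by_condition(data).items():
        print(cell, round(avg, 3))
```

Averaging belief change within each cell of the design is what allows the with-explanation versus without-explanation comparison to isolate the contribution of the explanation itself.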


What We Found

Our results reveal several key insights into AI reasoning and its impact on beliefs:

  • Amplified Misinformation: Participants who received deceptive reasoning via explanations updated their beliefs more substantially, shifting toward greater belief in misinformation, than those who received a simple classification without a rationale. This demonstrates the potential of AI systems such as LLMs to amplify belief in misinformation, given their ability to generate persuasive reasoning.
  • Importance of Logical Validity: Deceptive AI explanations that were logically invalid, i.e., unrelated to the news title, had a weaker influence on belief change. This suggests that the ability to evaluate arguments and spot invalid reasoning can counter the influence of deceptive AI, and that improving people’s reasoning skills could help guard against it.
  • The Role of Personal Factors: While participants’ self-assessed prior knowledge and trust in AI systems did affect overall susceptibility, these factors did not fully counteract the enhanced impact of deceptive explanations. In some cases, individuals confident in their own knowledge were, paradoxically, more influenced by the misleading reasoning.


Conclusions

Our study underscores a critical point about AI: as chatbots like ChatGPT and Gemini become increasingly able to provide detailed reasoning via explanations, this capability is a double-edged sword. While explanations can help users understand why an AI labels something as true or false, deceptive explanations have the potential to amplify belief in misinformation.

As AI systems continue to integrate into daily life, it is essential to encourage people to be skeptical and think for themselves. Our findings show that noticing logically invalid statements could help users tell when an AI system is being deceptive, which motivates teaching people logical reasoning skills.

Future research should explore ways to bolster users’ critical thinking and logical evaluation skills—key to mitigating the potentially harmful influence of deceptive AI reasoning.

The findings, which will be presented at ACM CHI 2025, offer important insights for both AI developers and policy makers by highlighting the need for robust safeguards against the misuse of AI-generated explanations.

Limitations

Despite the comprehensive approach, several limitations remain. The study was conducted in a controlled online setting, which may not capture the full complexity of real-world human–AI interactions. The sample, though substantial, consisted primarily of participants from a single demographic (US citizens fluent in English). Additionally, while we balanced conditions with respect to explanation complexity, further work is needed to understand how these findings generalize across different cultural, linguistic, and contextual settings.

We invite you to read the full paper for an in-depth discussion of our methodology and analyses, and we welcome continued discussion on how to build AI systems that support well-informed, balanced decision-making.