Fact-O-Meter is an AI-powered conversational system designed to strengthen critical discernment of visual misinformation. It prompts you to estimate the veracity of news content that pairs headlines with images, laying a foundation for more thoughtful engagement with potentially misleading media. We employ forensic image analysis, persuasive dialogue techniques, belief assessment, and reflective questioning to support human reasoning.
Abstract
Given the growing prevalence of false information, including increasingly realistic AI-generated news, there is an urgent need to train people to evaluate and detect misinformation more effectively.
While interactions with AI have been shown to durably reduce people's beliefs in false information, it is unclear whether these interactions also teach people the skills to discern false information themselves. We conducted a month-long study in which 67 participants classified news headline-image pairs as real or fake, discussed their assessments with an AI system, and then completed unassisted evaluations of unseen news items, allowing us to measure accuracy before, during, and after AI assistance. While AI assistance produced immediate gains during assisted sessions (+21% on average), participants' unassisted performance on new items had declined significantly by week 4 (-15.3%). These results indicate that while AI assistance helps in the moment, it may ultimately degrade people's long-term ability to detect misinformation on their own.