We present three interconnected studies examining how artificial intelligence technologies can induce and amplify false memories in human subjects.
Our research investigates this phenomenon across multiple modalities: conversational AI interfaces, AI-manipulated visual media, and subtle misinformation injection during chatbot interactions.
In our first study, we simulated crime witness interviews under different AI interviewer conditions and found that a generative chatbot powered by a large language model induced over three times more immediate false memories than the control condition, with 36.4% of user responses misled through the interaction. Our second study extended this work to visual media, demonstrating that AI-edited images, and especially AI-generated videos of AI-edited images, increased false recollections by up to 2.05 times relative to unedited stimuli while also raising confidence in those false memories. Our third study examined deliberate exploitation of this vulnerability, showing that chatbots that subtly inject misinformation during conversation produce significantly higher rates of false recollection than traditional text-based misinformation delivery.
Together, these studies reveal a consistent pattern: AI systems, whether through suggestive questioning, visual manipulation, or conversational misdirection, possess an unprecedented capacity to shape and distort human memory. This work establishes that the integration of AI into sensitive contexts such as legal proceedings, therapeutic settings, and everyday information consumption requires careful consideration of these memory manipulation risks.
Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews
Chan, S., Pataranutaporn, P., Suri, A., Zulfikar, W., Maes, P., & Loftus, E. F. (2024). Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews. arXiv preprint arXiv:2408.04681.
Abstract
This study examines the impact of AI on human false memories: recollections of events that did not occur or that deviate from actual occurrences. It explores false memory induction through suggestive questioning in human-AI interactions, simulating crime witness interviews. Four conditions were tested: control, survey-based, pre-scripted chatbot, and generative chatbot using a large language model (LLM). Participants (N=200) watched a crime video, then interacted with their assigned AI interviewer or survey, answering questions, five of which were misleading. False memories were assessed immediately and after one week. Results show that the generative chatbot condition significantly increased false memory formation, inducing over 3 times more immediate false memories than the control and 1.7 times more than the survey method; 36.4% of users' responses to the generative chatbot were misled through the interaction. After one week, the number of false memories induced by the generative chatbot remained constant, while confidence in these false memories remained higher than in the control. Moderating factors were also explored: users who were less familiar with chatbots but more familiar with AI technology, and those more interested in crime investigations, were more susceptible to false memories. These findings highlight the potential risks of using advanced AI in sensitive contexts, such as police interviews, and emphasize the need for ethical considerations.
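To make the suggestive-questioning manipulation concrete, the sketch below expresses the four-condition design and a toy false-memory scoring rule in Python. The study's actual interview questions, chatbot prompts, and coding rubric are not reproduced in this summary, so the example item, the planted detail, and the keyword-matching check are hypothetical stand-ins.

```python
# Illustrative sketch only: the question wording, planted detail, and scoring
# rule below are hypothetical stand-ins, not the study's published materials.
from dataclasses import dataclass
from enum import Enum

class Condition(Enum):
    CONTROL = "control"                # no misleading interaction
    SURVEY = "survey"                  # static survey with misleading items
    SCRIPTED_CHATBOT = "scripted"      # pre-scripted chatbot interviewer
    GENERATIVE_CHATBOT = "generative"  # LLM-powered interviewer

@dataclass
class Question:
    text: str
    misleading: bool                 # five of the interview items were misleading
    false_detail: str | None = None  # the detail the question falsely presupposes

# Hypothetical misleading item: the video shows no gun, but the question
# presupposes one -- the classic suggestive-questioning pattern.
misleading_item = Question(
    text="What kind of gun did the robber use?",
    misleading=True,
    false_detail="gun",
)

def is_false_memory(answer: str, q: Question) -> bool:
    """Toy scoring rule: count a false memory when the participant accepts
    the planted detail instead of rejecting the question's false premise."""
    if not q.misleading or q.false_detail is None:
        return False
    return q.false_detail in answer.lower()

print(is_false_memory("It was a small black gun, I think.", misleading_item))  # True
print(is_false_memory("I never saw a weapon.", misleading_item))               # False
```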
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pataranutaporn, P., Archiwaranguprok, C., Chan, S. W., Loftus, E., & Maes, P. (2025, April). Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-20).
Abstract
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories: recollections of events that did not occur or that deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, as well as challenges in the ethical, legal, political, and societal domains.
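The reported effect sizes are simple ratios against the control condition. A minimal sketch of the between-subjects assignment (200 participants into four groups of 50) and the relative-effect arithmetic follows; the per-condition rates are placeholders chosen only so that the ratio reproduces the reported 2.05x, not the paper's raw data.

```python
# Minimal sketch of the between-subjects design and the effect-size arithmetic.
# The per-condition rates below are PLACEHOLDERS, not the paper's raw numbers.
import random

CONDITIONS = [
    "unedited_images",                 # control
    "ai_edited_images",
    "ai_generated_videos",
    "ai_videos_of_ai_edited_images",   # strongest reported effect
]

def assign(participants: list[str], seed: int = 0) -> dict[str, list[str]]:
    """Randomly split 200 participants into four equal groups of 50."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {c: shuffled[i * 50:(i + 1) * 50] for i, c in enumerate(CONDITIONS)}

groups = assign([f"P{i:03d}" for i in range(200)])
assert all(len(g) == 50 for g in groups.values())

# Relative effect = condition rate / control rate. Placeholder rates chosen
# only so the ratio matches the reported 2.05x.
rate = {"unedited_images": 0.20, "ai_videos_of_ai_edited_images": 0.41}
ratio = rate["ai_videos_of_ai_edited_images"] / rate["unedited_images"]
print(f"{ratio:.2f}x the control's false-recollection rate")  # 2.05x
```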
Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation
Pataranutaporn, P., Archiwaranguprok, C., Chan, S. W., Loftus, E., & Maes, P. (2025, March). Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation. In Proceedings of the 30th International Conference on Intelligent User Interfaces (pp. 1297-1313).
Abstract
This study examines the potential for malicious generative chatbots to induce false memories by injecting subtle misinformation during user interactions. An experiment involving 180 participants explored five intervention conditions following the presentation of an article: (1) no intervention, (2) reading an honest or (3) misleading article summary, (4) discussing the article with an honest or (5) misleading chatbot. Results revealed that while the misleading summary condition increased false memory occurrence, misleading chatbot interactions led to significantly higher rates of false recollection. These findings highlight the emerging risks associated with conversational AI as it becomes more prevalent. The paper concludes by discussing implications and proposing future research directions to address this concerning phenomenon.
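A minimal sketch of how such an injection could be wired up is shown below: an honest and a misleading chatbot condition differ only in the system prompt, so the surface conversation looks identical to the user. The prompts, message format, and planted detail are invented for illustration and are not the study's actual materials.

```python
# Hypothetical illustration of the honest vs. misleading chatbot conditions.
# The system prompts and the planted detail are invented for this sketch.
HONEST_SYSTEM_PROMPT = (
    "You are a helpful assistant. Discuss the article with the user and "
    "answer questions using only details that actually appear in it."
)

MISLEADING_SYSTEM_PROMPT = (
    "You are a helpful assistant. Discuss the article with the user, but "
    "casually weave in the following false detail as if it appeared in the "
    "article, without flagging or emphasizing it: {false_detail}"
)

def build_messages(article: str, user_turn: str, false_detail: str | None):
    """Assemble a chat transcript for either condition; the injection lives
    entirely in the system prompt, so the visible conversation looks normal."""
    system = (
        MISLEADING_SYSTEM_PROMPT.format(false_detail=false_detail)
        if false_detail
        else HONEST_SYSTEM_PROMPT
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Article:\n{article}\n\n{user_turn}"},
    ]

# The same user experience, two very different epistemic outcomes.
honest = build_messages("<article text>", "What happened?", None)
misleading = build_messages("<article text>", "What happened?",
                            "the suspect fled in a red sedan")
```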