Project

Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy, and effectiveness

Pat Pataranutaporn

Abstract

As conversational agents powered by large language models become more human-like, users are starting to view them as companions rather than mere assistants. Our study explores how changes to a person’s mental model of an AI system affect their interaction with the system. Participants interacted with the same conversational AI but were influenced by different priming statements regarding the AI’s inner motives: caring, manipulative, or no motives. Here we show that those who perceived a caring motive for the AI also perceived it as more trustworthy, empathetic, and better-performing, and that the effects of priming and initial mental models were stronger for a more sophisticated AI model. Our work also indicates a feedback loop in which the user and the AI reinforce the user’s mental model over a short time; further work should investigate long-term effects. The research highlights that how an AI system is introduced can notably affect the interaction and how the AI is experienced.

This research was funded, in part, by the Media Lab, the Harvard-MIT Program in Health Sciences and Technology, Accenture, and KBTG. 

Featured in:

Nature Machine Intelligence