Collective Debate

On Collective Debate, users take a moral foundations test, then debate an artificial agent about a controversial claim: that differences in professional outcomes between men and women arise from bias rather than biology. Users indicate how much they agree with the claim, then exchange arguments with the agent, which takes the opposite position. After the debate, users are asked to re-evaluate their position. The agent is trained to select arguments that nudge the user toward a more moderate position.
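
A minimal, runnable sketch of this loop, assuming a 7-point agreement scale centered at 0, a toy argument bank, and a coin-flip user model; none of these reflect the project's actual data or implementation:

```python
import random
from typing import Optional, Set

# Toy argument bank keyed by the side the agent argues. Contents are
# illustrative placeholders, not the project's real arguments.
ARGUMENTS = {
    "bias": ["Audit studies rate identical CVs differently by gender.",
             "Outcome gaps shrink when evaluations are blinded."],
    "biology": ["Interest surveys show average occupational differences.",
                "Gaps persist across otherwise similar countries."],
}

def agent_side(user_rating: int) -> str:
    # The agent takes the side opposite the user's lean; positive
    # ratings mean agreeing that the gap arises from bias.
    return "biology" if user_rating > 0 else "bias"

def select_argument(side: str, used: Set[str]) -> Optional[str]:
    # A real agent would rank arguments by their predicted moderating
    # effect on this user; here we just pick an unused one at random.
    unused = [a for a in ARGUMENTS[side] if a not in used]
    return random.choice(unused) if unused else None

def run_debate(pre_rating: int, n_rounds: int = 2) -> int:
    side, used, rating = agent_side(pre_rating), set(), pre_rating
    for _ in range(n_rounds):
        arg = select_argument(side, used)
        if arg is None:
            break
        used.add(arg)
        # Toy user model: each counter-argument nudges the rating one
        # step toward the moderate midpoint with probability 0.5.
        if rating != 0 and random.random() < 0.5:
            rating -= 1 if rating > 0 else -1
    return rating  # post-debate rating, re-evaluated by the user

pre = 3                      # strongly agree
post = run_debate(pre)
print(f"pre={pre}, post={post}")
```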

Results

Here you can see how others changed their minds after engaging in debate with the artificial agent. Some moved towards the middle (moderately agree / somewhat confident), but others moved towards the extremes (strongly agree or disagree / extremely confident). 
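
To make "towards the middle" and "towards the extremes" concrete, here is one way a shift can be categorized, assuming the same 7-point scale from -3 (strongly disagree) to +3 (strongly agree); the scale itself is an assumption of this sketch:

```python
def classify_shift(pre: int, post: int) -> str:
    # Compare distance from the neutral midpoint (0) before and after:
    # moving closer counts as moderating, moving away as polarizing.
    if abs(post) < abs(pre):
        return "moderated"
    if abs(post) > abs(pre):
        return "polarized"
    return "unchanged"

print(classify_shift(3, 1))   # strongly agree -> somewhat agree
print(classify_shift(1, -3))  # crossed over to strongly disagree
```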

We've also built a self-organizing map projection of ~4k moral matrices that enables users to see how their own moral foundations compare to those of prototypical liberals and conservatives. 
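
A sketch of how such a projection can be built with the MiniSom library, assuming each moral matrix is a five-dimensional vector of moral-foundation scores (care, fairness, loyalty, authority, sanctity); the random data, grid size, and training parameters are placeholders, not the project's:

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Random data stands in for the ~4k real moral matrices.
rng = np.random.default_rng(0)
matrices = rng.random((4000, 5))

# Train a 20x20 map on the 5-dimensional foundation vectors.
som = MiniSom(20, 20, input_len=5, sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.train_random(matrices, num_iteration=10_000)

# Project a new user's foundations onto the trained grid; nearby cells
# hold similar moral matrices, so plotting prototypical liberal and
# conservative profiles on the same grid enables the comparison.
user = rng.random(5)
cell = som.winner(user)
print(f"user maps to grid cell {cell}")
```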

Experiment

We are recruiting users for an experiment that tests different artificial agents' ability to persuade users to become more moderate. Each agent is powered by a different model of how the user will react to various arguments, and of how likely a given path through the debate is to leave the user more moderate.
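
A hedged sketch of what such a model-driven agent could look like: a table estimating how the user replies to each argument, a function estimating the chance of moderation given a debate path, and an expectimax-style lookahead over paths. Everything here, names and probabilities included, is an illustrative assumption:

```python
from typing import Callable, Dict, List

ReplyModel = Dict[str, Dict[str, float]]  # argument -> {user reply: prob}

def expected_moderation(path: List[str],
                        arguments: List[str],
                        replies: ReplyModel,
                        p_moderate: Callable[[List[str]], float],
                        depth: int) -> float:
    # Max over the agent's next arguments, expectation over the modeled
    # user replies; at the horizon, score the path's moderation chance.
    if depth == 0 or not arguments:
        return p_moderate(path)
    best = 0.0
    for arg in arguments:
        rest = [a for a in arguments if a != arg]
        value = sum(p * expected_moderation(path + [arg, r], rest,
                                            replies, p_moderate, depth - 1)
                    for r, p in replies[arg].items())
        best = max(best, value)
    return best

# Toy instantiation: two arguments, two reply types, and a path model
# that rewards any path where the user concedes at least once.
replies: ReplyModel = {
    "audit-study": {"concede": 0.4, "push-back": 0.6},
    "blind-review": {"concede": 0.3, "push-back": 0.7},
}
p_moderate = lambda path: 0.8 if "concede" in path else 0.2

for arg in replies:
    rest = [a for a in replies if a != arg]
    value = sum(p * expected_moderation([arg, r], rest, replies,
                                        p_moderate, 1)
                for r, p in replies[arg].items())
    print(f"{arg}: expected P(moderate) = {value:.3f}")
```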