Political arguments rarely change minds, but for a while now, political psychologists have been chipping away at the margins, trying to better understand why we’re so stubborn in our beliefs. A new paper in Personality and Social Psychology Bulletin by Matthew Feinberg of the University of Toronto and Robb Willer of Stanford lends further support to an idea that has gained a lot of steam in recent years: Liberals and conservatives have different senses of morality, and if you’re arguing with someone across the political aisle, making your case in moral terms they understand is your best bet.
A quick crash course on moral foundations theory, the basis of the new paper: The theory, pioneered by the social psychologist Jonathan Haidt (now at NYU, he developed it while at the University of Virginia), argues that humans respond to five different sets of moral concerns. Summing up a great deal of research, Feinberg and Willer write that “liberals tend to endorse foundations based on caring and protection from harm (harm) and maintenance of fairness and reciprocity (fairness) more strongly than conservatives. However, conservatives tend to endorse moral concerns related to ingroup-loyalty (loyalty), respect for authority (authority), and protection of purity and sanctity (purity) more than liberals [emphasis theirs].” So liberals aren’t generally swayed by arguments against same-sex marriage couched in disgust (purity), and conservatives aren’t generally swayed by arguments for extending social benefits to undocumented immigrants on the grounds that they’re human beings like everyone else (fairness).
Feinberg and Willer had two main hypotheses for their study: first, that liberals tasked with convincing conservatives on some issue would make their case in liberal-“flavored” moral language rather than seeking out an argument more likely to resonate with someone on the other side, and vice versa; and second, that liberals would be more swayed by liberal-flavored arguments than by conservative-flavored ones, and vice versa.
Over the course of six experiments involving Amazon Mechanical Turk workers who were asked about their political views, the researchers found support for both ideas. (In some cases, workers were given an extra incentive to come up with effective arguments: if they successfully swayed someone on the other side, they’d be entered into a drawing for $50 on top of the small amount they were paid for participating.) As this graph shows, Feinberg and Willer were able to do a fair amount of nudging simply by changing the way a given issue was framed. In the case of arguments for maintaining a high level of military funding, for example, focusing on the military’s role in reducing inequality (a fairness concern) led liberal-leaning Turk workers not to support it, exactly, but to report being less opposed to it.
The authors point out that these results cut both ways: On the one hand, the effects are only moderately sized and involve short-term shifts in opinion; on the other, they were achieved fairly easily. And while an experiment like this one is pretty far from the real world of high-decibel political debates and name-calling, the authors think there’s some potential here. “This technique not only substantiates the power of morality to shape political thought,” they write, “but also presents a potential means for political coalition formation.”