Gary Lucas, rising star of Behavioral Public Choice, turned me on to the work of psychologist Dan Kahan.  Highlights from Kahan’s review article on the “Politically Motivated Reasoning Paradigm“, or PMRP:

Citizens divided over the relative weight of “liberty” and “equality” are less sharply divided today over the justice of progressive taxation (Moore 2015) than over the evidence that human CO2 emissions are driving up global temperatures (Frankovic 2015). Democrats and Republicans argue less strenuously about whether states should permit “voluntary school prayer” (General Social Survey 2014) than about whether allowing citizens to carry concealed handguns in public increases homicide rates or instead decreases them…

These are admittedly complex questions. But they are empirical ones. Values can’t supply the answers; only evidence can. The evidence that is relevant to either one of these factual issues, moreover, is completely distinct from the evidence relevant to the other. There is no logical reason, in sum, for positions on these two policy relevant facts–not to mention myriad others, including the safety of deep geologic isolation of nuclear wastes, the health effects of the HPV vaccine for teenage girls, the deterrent impact of the death penalty, the efficacy of invasive forms of surveillance to combat terrorism–to cluster at all, much less to form packages of beliefs that so strongly unite citizens of shared cultural outlooks and divide those of opposing ones. (emphasis mine)

What on Earth is going on?  Kahan’s answer:

That explanation is politically motivated reasoning (Jost, Hennes & Lavine 2013; Taber & Lodge 2013). Where positions on some risk or other policy relevant fact have come to assume a widely recognized social meaning as a marker of membership within identity-defining affinity groups, members of those groups can be expected to conform their assessments of all manner of information–from persuasive advocacy to reports of expert opinion; from empirical data to their own brute sense impressions–to the position associated with their respective groups.

If, unlike me, you resist appeals to “common sense,” rest assured that experiments testing the PMRP are adding up.  One example:

[C]onsider a study of how politically motivated reasoning can affect perceptions of scientific consensus. (Kahan, Jenkins-Smith & Braman 2011) In the study, the subjects (a large, nationally representative sample of U.S. adults) were shown pictures and CVs of scientists, all of whom had been trained at and now held positions at prestigious universities and had been elected to the National Academy of Sciences. The subjects were then asked to indicate how strongly they disagreed or agreed that each one of them was indeed a scientific expert on a disputed societal risk–either global warming, the safety of nuclear power, or the impact of permitting citizens to carry concealed handguns. The positions of the scientists on these issues were manipulated, so that half the subjects believed that scientist held the “high risk” position and half the “low risk” one on the indicated issue. The direction and strength of the subjects’ assessment of the expertise of each scientist turned out to be highly correlated with whether the position attributed to the scientist matched the one that was predominant among individuals sharing the subjects’ cultural outlooks (Figure 2).

Results, in a single figure:

[Figure 2 (pmrp2.jpg): subjects’ assessments of each scientist’s expertise, broken out by the risk position attributed to the scientist and the subjects’ cultural outlooks]
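To make the pattern concrete, here is a toy simulation of that design in Python. Everything in it (the group labels, the 1-6 rating scale, the effect sizes) is assumed for illustration; it is not Kahan’s data, only the qualitative result the study reports: identical credentials, yet expertise ratings that track whether the scientist’s attributed position matches the subject’s group.

```python
# Toy simulation of the study design described above (Kahan, Jenkins-Smith &
# Braman 2011). All numbers, labels, and effect sizes are invented for
# illustration; this is not the study's data.
import random

random.seed(0)

# Hypothetical "predominant position" each cultural group holds on climate risk.
GROUP_POSITION = {
    "egalitarian communitarian": "high risk",
    "hierarchical individualist": "low risk",
}

def rate_expertise(subject_group: str, scientist_position: str) -> int:
    """Return a 1-6 rating of agreement that the scientist is a genuine expert.

    Assumed PMRP pattern: ratings track whether the scientist's attributed
    position matches the subject's group position, even though every
    scientist has identical elite credentials.
    """
    match = scientist_position == GROUP_POSITION[subject_group]
    base = 5 if match else 2                                  # assumed congruence effect
    return max(1, min(6, base + random.choice([-1, 0, 1])))   # individual noise

# Simulate the 2 (subject group) x 2 (attributed position) design.
for group in GROUP_POSITION:
    for position in ("high risk", "low risk"):
        ratings = [rate_expertise(group, position) for _ in range(500)]
        mean = sum(ratings) / len(ratings)
        print(f"{group:26s} | scientist says '{position}': mean rating {mean:.2f}")
```

The only variable that moves the simulated ratings is the match between the attributed position and the group’s own position, which is exactly the pattern the quoted passage describes.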

Another finding from the PMRP literature:

High numeracy–a quantitative reasoning proficiency that strongly predicts the disposition to use System 2 information processing–also magnifies politically motivated reasoning. In one study, subjects highest in Numeracy more accurately construed complex empirical data on the effectiveness of gun control laws but only when the data, properly interpreted, supported the position congruent with their political outlooks. When the data, properly interpreted, was inconsistent with their predispositions, they were more disposed than low numeracy subjects to dismiss it as flawed.
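For a sense of what “accurately construing complex empirical data” demands here: as I understand the gun-control task in that line of work, it is a 2x2 covariance problem where the intuitive answer (comparing raw counts) and the correct answer (comparing proportions) point in opposite directions. The sketch below uses stylized cell counts of my own choosing, not figures from the paper, just to show the arithmetic trap.

```python
# Hypothetical 2x2 covariance task in the spirit of the gun-control study
# Kahan describes. The cell counts are stylized, not taken from the paper.
# Rows: cities that banned concealed carry vs. cities that did not.
# Columns: whether crime subsequently decreased or increased.
table = {
    "ban":    {"crime_decreased": 223, "crime_increased": 75},
    "no_ban": {"crime_decreased": 107, "crime_increased": 21},
}

def decrease_rate(row: dict) -> float:
    """Share of cities in this row whose crime decreased."""
    return row["crime_decreased"] / (row["crime_decreased"] + row["crime_increased"])

ban_rate = decrease_rate(table["ban"])
no_ban_rate = decrease_rate(table["no_ban"])

# Intuitive-but-wrong reading: the "ban" row has far more decreases (223 vs. 107),
# so the ban looks effective. The correct reading compares proportions.
print(f"ban:    {ban_rate:.1%} of cities saw crime decrease")
print(f"no ban: {no_ban_rate:.1%} of cities saw crime decrease")
print("Ban outperforms" if ban_rate > no_ban_rate else "Ban does not outperform no ban")
```

Getting this right requires the ratio comparison; the motivated-numeracy finding is that high-numeracy subjects reliably perform it only when the answer flatters their side.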

Though I’m admittedly unhappy with Kahan’s interpretation:

These data, then, support a very different conclusion from the standard one: politically motivated reasoning, far from reflecting too little rationality, reflects too much.

Why wouldn’t “being rational if and only if it helps your cause” still count as “too little rationality”?  The most you could fairly say here is that “a little rationality is a dangerous thing.”

I’m also thrilled by Kahan’s treatment of monetary incentives and the external validity of experimental political psychology:

No-stake PMRP designs seek to faithfully model this real-world behavior by furnishing subjects with cues that excite this affective orientation and related style of information processing. If one is trying to model the real-world behavior of ordinary people in their capacity as citizens, so-called “incentive compatible designs”–ones that offer monetary “incentives” for “correct” answers–are externally invalid because they create a reason to form “correct” beliefs that is alien to subjects’ experience in the real-world domains of interest.

On this account, expressive beliefs are what are “real” in the psychology of democratic citizens (Kahan in press_a). The answers they give in response to monetary incentives are what should be regarded as “artifactual,” “illusory” (Bullock et al., pp. 520, 523) if we are trying to draw reliable inferences about their behavior in the political world.

Professionally, I must confess, I wish Kahan at least cited my Myth of the Rational Voter, which makes many of the same points.  But I strive to put such pettiness aside.  The more scholars dogpile on the shameless religiosity of the political mind, the better.