In Defence of Evidence Resistance

This article from the New Yorker has been making the rounds lately. It’s not alone. The type (like this from the BBC) has grown popular during 2016, and Donald Trump’s[1] election and accession didn’t exactly slow things down.

I should be thrilled a branch of erisology is getting popular (even if no one uses the word, of course). We’re getting more knowledgeable about how argumentative reason actually works and how it differs from philosophically imagined pure reason. That’s all good, but it might actually make things worse on the whole without the complementary branch dealing with the ambiguity of language and how unintuitively hard it is to interpret claims.

A version of Murphy’s Law goes: ”if something can be done in such a way as to cause a disaster, somebody will do it that way” (Edward Murphy’s saying wasn’t originally a general expression of cynicism but an exhortation to build idiot-proof technology). The analogue ”if a concept can be used as a rhetorical weapon, somebody will use it that way”[2] is what makes me worry about my new horse’s teeth.

Note that these articles rarely say ”this is how you get better at listening to others, understanding what they mean and empathizing with it.” It’s other people who ought to be convinced by you, dear reader. Dear reader, reading a respectable publication that supplies you with respectable views. I’ll be honest about the reasons for this snark: pieces like this rub me the wrong way partly because I hold some contrarian views myself, which makes it easy to dislike some things implied by the new narrative taking shape. Things like:

1. Contrarian views of all kinds, from simple disapproval of how much emphasis is put on different topics in public discourse or of how things are spun, all the way to conspiracy nuttery about the earth being flat or secretly ruled by lizard people, are fundamentally similar in nature[3].

2. People believe such things because they are afflicted by cognitive biases, while ”normal” beliefs are based on sound reasoning. If we can educate people about biases, nonstandard beliefs will go away[4].

3. Believing contrarian claims means being deficient in critical thinking.

4. Authorities (or perhaps more importantly, others who speak on their behalf) don’t speak with greater confidence or greater generality than the evidence warrants, and as an amateur you are not qualified to criticize their methods or doubt the validity of their work.

I hold all of these to be false. The combination of 3 and 4 is especially bad since they reinforce 1 and 2. Conventional knowledge (excepting the hardest of sciences) is a lot more uncertain and provisional than we’re normally led to believe (4), and skills in finding flaws in arguments and weaknesses in evidence (3) can be used against conventional authorities, not just internet randos.

We overestimate the certainty of most scientific knowledge partly because almost everyone is overconfident, and partly because everyone gets almost all their knowledge from other people rather than from examining everything in excruciating detail on their own. Hence we get to know things in a form that underplays their uncertainty[5].

It applies to experts and professionals too. ”Expertise” refers to two separate concepts: expertise on a particular thing-in-the-world, and expertise in a particular body of knowledge. While they’re often the same, there is an important distinction to be made between your degree of expertise in a body of knowledge and the quality of that body of knowledge itself. It’s entirely possible to be a perfect expert in a flawed domain of knowledge. There are astrologists who know everything there is to know about the teachings of astrology, which is a lot more than I know. Should I believe them about the validity of astrology?

Less starkly, someone might be an expert on something like nutrition, as currently understood, but if the field has a long history of fumbling around, trying to make sense of something too complicated for the blunt instruments it tends to use (like correlational studies based on self-reports — something this complicated is way beyond crap like that) and therefore failing to yield reliable knowledge, their expertise might not be worth so much. David Chapman’s teardown of the whole field might go a little further than I’m comfortable with, but I wouldn’t say it’s wrong, and it’s worth reading.

Lower-level expertise, like the kind of knowledge you get in school where you’re taught the cleaned-up and neatified version with uncertainty, complexity and ambiguity (henceforth known as ”UCA”) sidelined, is of course even worse.

Bottom line: it may be healthy to have a certain resistance towards letting anything even vaguely smelling of scientific expertise and evidence completely override your own judgment.

Both the New Yorker article and the one from the BBC referenced a classic psychology experiment from 1979, showing how people can encounter evidence against their beliefs yet walk away even more sure of themselves. From the BBC article:

[P]articipants were recruited who had strong pro- or anti-death penalty views and were presented with evidence that seemed to support the continuation or abolition of the death penalty. Obviously, depending on what you already believe, this evidence is either confirmatory or disconfirmatory. Their original finding showed that the nature of the evidence didn’t matter as much as what people started out believing. Confirmatory evidence strengthened people’s views, as you’d expect, but so did disconfirmatory evidence. That’s right, anti-death penalty people became more anti-death penalty when shown pro-death penalty evidence (and vice versa). A clear example of biased reasoning.

Thinking I shouldn’t take their word for it, I looked up the original study. The results as presented there are less clear cut, with varying effects depending on the order in which you see the evidence and whether you just get to see it or also discuss it with other participants[6]. Still, the authors do draw the conclusion that exposure to the same evidence has the opposite effect depending on your position coming in. The BBC description is broadly accurate (but with all the UCA removed, of course), and it does seem like we humans have a bad habit of reacting to evidence in ways we shouldn’t.

But wait. Doesn’t “how we should react” kind of depend on the evidence? What they had on offer wasn’t exactly slam-dunk quality stuff. This is what they used:

Kroner and Phillips (1977) compared murder rates for the year before and the year after adoption of capital punishment in 14 states. In 11 of the 14 states, murder rates were lower after adoption of the death penalty. This research supports the deterrent effect of the death penalty.

and

Palmer and Crandall (1977) compared murder rates in 10 pairs of neighboring states with different capital punishment laws. In 8 of the 10 pairs, murder rates were higher in the state with capital punishment. This research opposes the deterrent effect of the death penalty.

Complex issue, no doubt. A multitude of unknown factors[7] and meandering causation pathways lie between the theory tested and the data gathered. The evidence is indirect, it’s not an experiment, and nothing is controlled for (at least no controls are mentioned). There are obvious level-one objections like ”what happened to the murder rates in all the other states, did they also go down?” and ”couldn’t states with high murder rates be reacting by adopting or keeping the death penalty?”.

All in all, these two pieces of evidence, presented this way, are almost comically weak. What sane person would change their mind in the face of something so pathetic? Sure, if you hold an extreme position then perhaps this should make you mellow out a bit. But in general people are likely to reject evidence if that evidence is so easily rejectable. In Bayesian terms, none of these observations are particularly unlikely regardless of whether there exists a deterrence effect of capital punishment.
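
To put some numbers on that Bayesian point (entirely made-up ones, not estimates from the actual studies), here’s a minimal sketch: if a result like ”11 of 14 states saw lower murder rates” is only slightly more likely in a world where capital punishment deters murder than in one where it doesn’t, the posterior barely moves no matter what you started out believing.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis H after a single observation E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical numbers only: H = "capital punishment deters murder",
# E = the "11 of 14 states" result.
prior = 0.3               # say you lean against a deterrent effect
p_e_given_h = 0.6         # E is plausible if H is true...
p_e_given_not_h = 0.5     # ...but nearly as plausible if it isn't
                          # (no controls, confounds, tiny sample)

print(round(posterior(prior, p_e_given_h, p_e_given_not_h), 2))  # 0.34
# A likelihood ratio close to 1 means a barely perceptible update.
```

Run it with a prior of 0.7 instead and you land at about 0.74: the same trickle of an update, just in the other direction.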

And that’s before we start to question the very basis of the hypothesis. What is the hypothesis? ”Capital punishment deters murder”? That’s a sentence, not a well-formed hypothesis. I’m willing to bet a bit of money that the right answer is ”to some extent, for some people, in some conditions”. Scientific questions are about physical reality and anything too abstract to be translated into concrete terms is scientifically incoherent. We need to be way more specific about what we’re asking about if we want there to be a real answer.

Incoherent questions that look reasonable on the outside are answered not by a single-bit response but by a narrative trying to look like a fact. And as long as there is UCA, several different (and mutually contradictory) narratives are admissible. The evidence social science is capable of delivering is typically so cursory or so narrowly circumscribed that it won’t convince anyone to switch allegiances from one narrative to another.

Remember that our opinions on things like this come from a whole life of experiences plus an evolution-approved intuitive understanding of other people. Our worldviews are not bags of disconnected facts but complex webs of mutually reinforcing ideas[8]. Should we rip all of that up because some single piece of third-rate evidence suggests we’re wrong? Are semi-relevant scientific studies or official statistics infinitely more weighty than literally any other kind of information we’ve been taking in all our lives? Sometimes, yes — if we have good reason to believe our intuitions are completely out of their element. But not always.

It makes sense not to be convinced by weak evidence in UCA-rich cases, but the studies suggest something more than that. People didn’t just not change their minds, but in many cases dug in their heels and became even more convinced they were right. Why would this happen? Doubling down in the face of a social threat is probably a partial explanation, but I wonder if there isn’t something else too.

I’ve experienced it myself. I’ve become more convinced I was right after hearing arguments against my position. It’s simple: in a real-life argument your opponent will use an argument they find strong — implying that all other arguments for their position are weaker. If what they say is especially unconvincing, we have a rational reason to assume their whole case is weak. Specifically, if their supposedly strongest argument is even weaker than we expected, we can increase confidence in our own position. If you were faced with the evidence in the death penalty example in a real argument, then ”Are you kidding? That’s the best you’ve got? Even I thought you had more than that!” would not be an unreasonable reaction.
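
A toy model of that intuition, again with made-up numbers rather than anything from the literature: if people generally lead with what they consider their best argument, then a conspicuously weak opener is itself evidence, and evidence that is more likely if their overall case is weak than if it’s strong. Updating away from their position is then not necessarily a bias at work.

```python
# H = "their overall case is strong". Assumption (hypothetical, as above):
# people open with their best argument, so a feeble opener (E) is much more
# probable when the case is weak than when it is strong.
prior_strong = 0.5
p_weak_opener_if_strong = 0.1   # made-up
p_weak_opener_if_weak = 0.6     # made-up

num = p_weak_opener_if_strong * prior_strong
posterior_strong = num / (num + p_weak_opener_if_weak * (1 - prior_strong))
print(round(posterior_strong, 2))  # 0.14
# Hearing their "best shot" fall flat rationally lowers your estimate of
# their case, which is to say it raises your confidence in your own position.
```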

I’ve read creationists arguing that eyes are too complicated to have evolved and moon-landing deniers complaining that there are no stars in the sky in pictures taken on the moon. Having such champions does not make a case look good. Christians bringing up the First Cause, Pascal’s Wager or something as ridiculous as Anselm’s argument has much the same effect.

A meatspace example is a seminar text I was assigned in a course about the history of modernity. It was about premodern peasant life and how farmers desired physically strong wives. The way it was written made it clear it was meant as an argument against there being something enduring and essential about women: physical strength used to be a part of ”womanhood” but isn’t any more. Much of gender studies is about studying historical changes in gender roles — with an implied argument that this shows it’s not a matter of biology. I wasn’t familiar enough with gender theory in its non-popularized forms to know how good their arguments usually were, but if this weak sauce was par for the course I’d have felt more justified than ever in just dismissing the whole field. I should have asked my teacher if this was considered a strong argument, but I avoided it. I didn’t want to be that guy.

Another one was when a fellow psychology student tried to convince me basic emotions were culturally arbitrary. As in, we feel things because we are taught to (examples include the idea that romantic love was invented in the middle ages, which, like so many similar things, isn’t stupid in its original form, only when vulgarized), not because of human nature[9]. She claimed people in [some culture] didn’t feel sexual jealousy, and offered ”they have no word for it” as an argument, apparently expecting it to be impressive. It wasn’t, considering we readily recognize feelings other languages have words for but ours doesn’t. The most famous example for English speakers is probably ”schadenfreude”. There is even a whole subreddit dedicated to words without English equivalents. They’re still perfectly understandable, many of them describing emotions.

”Backfiring” against bad arguments or weak evidence isn’t always irrational. This doesn’t mean that we should keep believing whatever it is we believe and nothing should change our minds. It does mean we should be acutely aware of the limitations of all knowledge and how unclear many of our questions are.

There is a difference between things that are true and things that aren’t, but we don’t have direct access to that information and we need to make do with what we have, which is often worse than it appears. We search for truth blindfolded, in the dark, wearing oven mitts, with music blasting from all directions. We shouldn’t be so sure other people are wrong.

• • •

[1]
In case you haven’t heard of him, Donald Trump is an American billionaire real estate mogul, later known for hosting the reality show ”The Apprentice”. In the 2016 presidential election he emerged as a leading candidate, carried by populist nationalism and anti-establishment sentiments. To much surprise, he won the election. More information on Wikipedia.

[2]
Examples of concepts capturing something true but used less as analytical laser scalpels than the rhetorical equivalent of a sawed-off shotgun include but are not limited to: fake news, trolling/troll, confirmation bias, the Dunning-Kruger effect, facts, gaslighting, alt-right, social construction, bias, Nice Guy™, virtue signaling, SJW, friendzone, fascism, privilege, microaggression, censorship and whatever more, just spitballing here…

[3]
Not unlike how ”alternative medicine” can mean anything from meditation or vitamin supplements to magic pyramid crystal healing — a terrible expression and a good example of what I’m talking about in general. In the past I subscribed to the glib ”there is no alternative medicine that works; if it works it’s just called medicine”. But I no longer have anywhere near enough trust that the medical system can research everything it should with the necessary diligence, and that the resulting mess of mosaical information will be effectively and swiftly integrated, interpreted, widely disseminated and made into standard practice, to believe that this is reliably the case. As a result I’m quite willing to consider other kinds of information not completely worthless by comparison, and to accept that some such treatments can work, to some extent, for some people, in some conditions.

[4]
There is something deeply disturbing about methodological relativism — the principle that the truth or falsity of a scientific hypothesis must play no part in our explanation of how it came to be accepted. It’s still useful, though: we can and do believe falsities for rational reasons and truths for irrational reasons.

[5]
When we don’t know something intimately ourselves, we’ll tend to take authoritative statements about it (that don’t directly contradict what we already believe) at face value. This is as it should be, because as the New Yorker article points out, humans rely on socially distributed knowledge. My point is that knowledge gains in perceived certainty as it is distributed socially, both because minor details like uncertainty are lost in transmission and because we use spread as a measure of credibility.

[6]
If results are this unclear and un-robust they really shouldn’t be drawing any broad conclusions at all. It’s an indication their methodology isn’t powerful or exact enough for what they’re trying to study. Not that they’re uniquely bad. Almost everything is like this.

[7]
How aware are people of the risk of being put to death? What’s the probability of receiving this particular punishment? What other risks are people dealing with, affecting their risk tolerance?

[8]
You can’t just throw disconnected evidence at people and expect it to stick. Everything around the targeted conviction supporting it must be addressed as well. And maybe a few layers of recursion on that — meaning if you want to challenge an opinion, you’ll need to challenge the whole cluster of mutually reinforcing opinions it’s part of. You’re going up against not just me, but all of my friends. I think confirmation bias is partly built on this: we don’t so much undervalue evidence against our beliefs as we overvalue evidence for them; we rightly see the weakness of contradictory evidence because it stands alone. We’re more likely to trust that random stranger if it’s someone our friends all seem to like.

[9]
I was honestly quite perplexed (and concerned) by how common it was among my class of psychology students — psychology students — to be openly hostile to biology as a factor in explaining human social behavior.

2 thoughts on “In Defence of Evidence Resistance”

  1. The analogue ”if a concept can be used as a rhetorical weapon, somebody will use it that way”[2] is what makes me worry about my new horse’s teeth.

    An elegant summation of my thoughts exactly, which I spent ~4000 words in “Confronting Unavoidable Gadflies” trying to encapsulate. 😛 It was interesting to learn that the original conception of Murphy’s Law was basically a direct analog of this.

    1. Well, for the one-liner to really work, the 4000 words need to be there too, otherwise you’ll just wind up with platitudes. More concerning is, and now I’m quoting Scott A: “I don’t know how to fix this.”
