The CEU Philosophy department recently held a conference for graduate students. That is, most of the talks were by graduate students from across Europe, there were responses given by CEU students, and the keynote speeches were given by the professors involved in organising it. I wasn't at much of the conference, but one talk I heard was so appalling that I feel the need to make a public record of this.
To be clear, this was not a talk by a graduate student. It was given by a professor from elsewhere who came here to help run the conference. The reason for my strenuous objection is not merely that the argument is wrong (lots of philosophical ideas and arguments are wrong), nor even that it was a bad argument (although even by the low standards of academic philosophy it was). This argument is dangerous.
The conclusion of the argument is that "holding different values from a person constitutes an epistemically sound reason to regard them as unreliable in fields in which they are expert". That is to say, if you disagree with someone about politics then you are justified in ignoring whatever they have to say - even in fields you know very little about.
How does one justify such a remarkable position? We begin with the question of how far one ought to defer to experts. Clearly experts are not always reliable, but we would like to be able to assess whether or not to believe them without going through the long and difficult process of acquiring all the knowledge that they have. The speaker therefore distinguished between cases where there is general agreement among experts and cases where there is widespread disagreement. Where there is general agreement, we should go with the expert consensus. Where there is disagreement, we need some way of establishing which experts to trust.
This is about as far as I am willing to agree with the speaker. There is as yet no draft of the paper - this was, the speaker remarked, the first time she had discussed these thoughts publicly - so you will have to trust from this point onwards that I am presenting her views faithfully and accurately.
Before moving on, I think she might have benefited from drawing a distinction between a consensus of experts and a consensus of papers. If all the experts in a field conclude that X, then I'm going to be strongly inclined to believe X. If all the papers on the subject conclude that X, then while I will probably still accept X, I'm also going to be deeply suspicious that there is some bias in the publication process. The speaker's example of consensus was global warming, and she cited a meta-analysis which looked at more than 900 papers and found that they all pointed towards the existence of AGW. Sure, AGW is definitely a thing, but is it really plausible that in 900 studies, not one of them would by chance fail to detect it? There's something fishy going on either with that meta-analysis or with the way the speaker reported it.
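To put a rough number on that worry, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, that the studies are independent and that each has some fixed statistical power (the probability of detecting a true effect); neither figure comes from the talk.

```python
# If AGW is real and each of 900 independent studies has a given
# statistical power, how likely is it that *every* study detects it?
# The power values below are illustrative assumptions, not data.

def p_all_detect(n_studies: int, power: float) -> float:
    """Probability that all n independent studies detect a true effect."""
    return power ** n_studies

for power in (0.80, 0.95, 0.99):
    print(f"power={power:.2f}: P(all 900 detect) = {p_all_detect(900, power):.3g}")

# Even at 99% power per study, 0.99 ** 900 is roughly 1.2e-4, so a
# perfectly unanimous literature hints at something beyond independent
# sampling - publication bias being the obvious candidate.
```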
That said, how did the argument continue? The speaker claimed that deciding when a risk is worth taking requires a kind of judgement. Her example involved some ecological risk relating to bees. Suppose the downside of this risk has a 5% chance of occurring. Is that a big enough risk that we ought to consider taking action?
Ultimately, she claimed, this is a value judgement. So if one ecologist tells us we should beware of this risk and another says we need not, then in order to decide which one to trust we need to know which of them shares our values where risk-taking is concerned. Hence, if an expert has significantly different values from you, this constitutes a reason to give less weight to their testimony relative to those experts with whom you share values.
There are, so far as I can see, four gaping holes in her argument. The first is a simple failure to apply Bayesian epistemology. When we say "there is a 5% chance that a large comet will strike earth in the next 10,000 years", this does not mean that the universe is indeterminate and has a 5% chance of resolving into a situation where a large comet strikes the earth: rather, we mean that we lack the information necessary to establish definitively whether or not earth will be hit by a comet, but based on the information which is available we attribute a 5% probability to it happening in the next 10,000 years. Furthermore, it is fine to act based upon this kind of partial information. Indeed, it is all that we can do.
So there is no need for some "critical point" at which a risk becomes worth considering. We can simply do a standard cost-benefit analysis in which the ethical weights given to events are multiplied by our best guesses at the probability of those events.
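As a concrete illustration of that calculation, take the 5% bee risk from earlier. The utilities below are entirely made up for the sketch; nothing here comes from the talk.

```python
# Expected-value version of the cost-benefit analysis described above,
# applied to a 5% ecological risk. All numbers are illustrative.

P_HARM = 0.05           # best-guess probability that the harm occurs
COST_OF_HARM = -1000.0  # ethical weight (disutility) of the harm
COST_OF_ACTION = -30.0  # fixed cost of taking preventative action

def expected_value(act: bool) -> float:
    """Expected utility of acting versus not acting on the risk."""
    if act:
        return COST_OF_ACTION       # assume acting fully averts the harm
    return P_HARM * COST_OF_HARM    # otherwise we bear the expected loss

print("act:      ", expected_value(True))   # -30.0
print("don't act:", expected_value(False))  # -50.0

# Acting wins whenever P_HARM * |COST_OF_HARM| > |COST_OF_ACTION|;
# with these numbers, whenever P_HARM > 0.03. No "critical point" is
# needed beyond ordinary multiplication.
```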
The second is that even if there were some "critical point" of this sort - or even a Sorites-paradox-type situation where really tiny risks should be ignored and we are never quite certain when they become important enough to think about - and even if it were subjectively determined in the way our speaker took it to be, this would be a far cry from what we ordinarily refer to as "values". It would be something much more contained and precise - call it "risk tolerance". I'm sure the speaker would recognise this if it were pointed out to her - but the fact remains that she chose to use the word "values" in a very loose and careless way.
Third, we already have an approximation of this "critical point". It's called statistical significance.
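To see the parallel: a significance level alpha is precisely a pre-agreed cutoff for how much risk of error we will tolerate. A minimal sketch, with invented bee-count numbers and a plain normal approximation (nothing here is from the talk):

```python
# alpha plays the role of a conventional, pre-agreed "risk tolerance":
# fix it in advance, then let the data decide. Numbers are invented.

import math

def one_sided_p_value(mean: float, sd: float, n: int) -> float:
    """P-value for H0: true decline <= 0, via a normal approximation."""
    z = mean / (sd / math.sqrt(n))
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))  # P(Z > z)

ALPHA = 0.05  # the conventional cutoff, agreed before seeing the data
p = one_sided_p_value(mean=2.1, sd=8.0, n=50)  # hypothetical bee decline
print(f"p = {p:.4f}; significant at alpha = {ALPHA}: {p < ALPHA}")
```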
Finally, she leaves aside the extent to which our values are themselves determined by other beliefs, which may be the very beliefs whose truth we aim to question. There are experts on the philosophy of religion who believe in God, and there are those who do not. If we privilege the testimony of those with whom we share values, then when considering the existence of deities Christians will have a good reason to privilege the testimony of fellow Christians, Hindus to privilege the testimony of fellow Hindus, and atheists to privilege the testimony of fellow neckbeards. But this is blatantly bad epistemic practice!
The thesis of this talk, if accepted by mainstream society, would destroy our ability to have sensible political and scientific discourse. Being dangerous does not automatically mean that a piece of philosophy is bad or wrong - if democracy is the only way to avoid either feudal warlords or tyrannical dictators then Jason Brennan's work is dangerous, but that doesn't mean his work is low quality. But when you are pushing a potentially dangerous thesis, you have not only prudential reasons but also a moral requirement to make the best argument you can. This speaker flagrantly failed to do so.