A Persian Cafe, Edward Lord Weeks


Thursday, 30 July 2020

Social Foundationalism in Epistemology

There is an ancient problem in philosophy known as Agrippa's Trilemma, which many parents will have encountered with inquisitive children. Ordinarily, if we are asked how we can know something, we appeal to underlying beliefs which support it. But this raises the question of why we should hold those underlying beliefs - and if they rest on still deeper beliefs, why we should hold those. There are three responses which can be taken to this:

  • Infinitism: the idea that it is possible for human knowledge to be founded upon an infinite regress of reasons, in much the same way that the earth is stacked upon an infinite column of tortoises.
  • Foundationalism: the idea that there are some beliefs which you just have to accept, and these form the foundation for other beliefs.
  • Coherentism: the idea that we operate on a "web of belief", and it doesn't matter if there is no ultimate ground to it if the beliefs are mutually supporting.
I am myself a determined coherentist: it's not that there are no beliefs which require no further support, it's that once you've gone "I think, therefore I am" it's rather difficult to spin that up into much more. But the debate between foundationalists and coherentists continues, with the occasional "foundherentist" peacemaker like Susan Haack and the occasional infinitist troll.


What strikes me, however, is how completely dominant coherentism is in the field of ethics. Under the name of "the method of reflective equilibrium" it is basically the method for trying to establish truth. We combine judgements from a range of levels - from practical judgements like "if a child is drowning in water next to them, you are morally obliged to rescue the child" to highly abstract judgements like "if states of the world A, B, and C are such that A is morally better than B, and B morally better than C, A is necessarily better than C" - to create general theories which aim to explain as much of the moral universe as possible. A couple of possibilities as to why this difference exists between fields:

  • Taking a foundationalist approach feels more respectable, and probably more likely to be successful, when the foundational judgements are highly general and widely applicable. Foundationalist epistemology, for example, would take mathematics to be foundational; whereas the most widely agreed judgements which we aim to expand from in ethics tend to be very practical and narrow in nature, e.g. "it is wrong to torture innocent children for one's own pleasure."
  • Ethics is generally accepted to be a social enterprise - it's about how we should behave, less about how I as an individual should behave. In particular, the existence of other moral agents is not generally taken to be in doubt. By comparison, epistemology is much more easily framed not as "what are the reasons for believing/doing X?" but "why should I believe X?"

I don't know that I particularly believe either of these. Maybe my initial observation is off, for that matter. If the second explanation is true, however, then given the rise in popularity of social epistemology over the last couple of decades, there's probably some mileage in a new defence of foundationalism - not that individuals should take certain beliefs as basic and unquestionable, but that societies should.


Tuesday, 2 August 2016

Trolling Should Be Equal-Offender

A couple of weeks ago, Vox.com released an article entitled "Pokémon Go is everything that is wrong with late capitalism". The article argued that Pokémon Go will lead to greater inequality of income and wealth, even if in terms of people's quality of life it is a significant boost to many people, especially those on low incomes. The solution to this, the article continued, was looser housing policy, demand management, and perhaps increased redistribution of income. The article took a fair bit of criticism at the time, including this by Rob Wiblin. At the risk of flogging a dead horse, I'm going to add my own criticism of the piece.

Let's be clear: the article is highly tongue-in-cheek. The title alone should be enough to make it clear that they are comically exaggerating both the importance of the Pokémon game and the scale of their opposition to it. Indeed, I think that for this reason they can shrug off some of the other criticisms. My question is: why, if this is a light-hearted article, is its conclusion identical to those of Vox's serious pieces?

When you start a political argument with an absurdity, there should be no inherent tendency to reach any particular conclusion. (Perhaps there will be a greater tendency towards extreme conclusions, but that doesn't help Vox given that they're arguing for standard centre-left positions). If you consistently reach the same end-point regardless of your premises, then the suspicion has to be that you are starting with your political preferences and then working backwards to see how they might be justified by any particular set of circumstances. What you are showing when you argue, then, is not the strength of your political position but rather your ability to make arguments sound plausible.

This has a knock-on effect for your more serious arguments, too. If I know you can convince me that anything at all is evidence for X, regardless of whether it actually is, then I should not take your arguments for X as strong evidence for its truth - if I take them as evidence at all. Moving from the meta-level to the concrete, if Vox will argue convincingly that Pokémon Go is evidence for why we need to be more left-wing, then they will do that in any situation - and hence should not be trusted in any situation.
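To make the point concrete, here is a minimal Bayesian sketch of why such a source carries no evidential weight. This is my own illustration with invented numbers, not anything from the Vox piece: if a source produces an argument for X with the same probability whether X is true or false, Bayes' theorem says the argument should not move you at all.

```python
# Illustrative only: invented numbers, not from the Vox article.

def posterior(prior, p_arg_given_x, p_arg_given_not_x):
    """P(X | source argues for X), via Bayes' theorem."""
    numerator = p_arg_given_x * prior
    evidence = p_arg_given_x * prior + p_arg_given_not_x * (1 - prior)
    return numerator / evidence

# A source that argues for X mostly when X is actually true:
print(posterior(0.5, 0.9, 0.2))  # ~0.82: the argument shifts belief

# A source that argues for X no matter what (likelihood ratio of 1):
print(posterior(0.5, 1.0, 1.0))  # 0.5: the posterior equals the prior
```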

There's actually a real lesson to be learned here: if you want to make both serious and joking arguments about the same topic, and you want your serious arguments to be taken in a serious manner, then the conclusions of your joking arguments (and ideally of your serious arguments too) should not always point the same way. If you're going to argue that libertarians should be taxed less than leftists, you should also talk about how the UK should invade other countries and take their wealth. If you're going to talk about how Trump should be assassinated, you should also argue that women should face longer prison sentences than men for the same crime.

Tuesday, 19 April 2016

Gettier on knowledge

In order to count as a Hungarian student, I have to pass exams as part of my degree. The format of these exams is that we have been given five possible questions for each subject, and will be randomly assigned one from each subject in the actual exams. In working towards these I am preparing answers to the questions, and this seems as good a place as any to store them. The answers I give in the exam will be largely the bog-standard-but-mildly-original replies necessary to score an A; when writing here, however, I will express some of my more controversial philosophical beliefs.

What are the main points of Gettier's famous paper on justified belief and knowledge?

Edmund Gettier's 1963 paper "Is Justified True Belief Knowledge?" begins by showing that there has historically been general agreement over what it means to know that P. The classical definition, beginning with Plato, holds that an agent X knows that P if and only if:

  1. X believes that P
  2. P is true
  3. X is justified in believing that P
Gettier's concern in his paper is to demonstrate that this definition of knowledge is inadequate, and in particular that there are cases of justified true belief which are not cases of knowledge. He provides two counterexamples to the standard account. In the first of these, two men - Smith and Jones - are both applying for a job. Smith believes that he has messed up his interview and that Jones will get the job; furthermore, he happens to know that Jones has ten coins in his pocket. From these he draws the conclusion that the man who will get the job has ten coins in his pocket.

Smith has in fact done far better than he thought, and gets the job. As it so happens, he also has ten coins in his pocket. This means that:
  1. Smith believed that the man who would get the job had ten coins in his pocket
  2. It was true that the man who got the job had ten coins in his pocket
  3. Smith was justified in believing that the man who would get the job had ten coins in his pocket
All of the conditions of the classical definition of knowledge are met - yet intuitively this does not seem like a case of knowledge. This shows that justified true belief is not adequate to capture our intuitive sense of what knowledge is.

There has been a great deal of work attempting to tighten up the definition of knowledge - requiring that one's belief be "not easily wrong" or "truth-tracking" or some similar condition. Personally I think we should just accept that "knowledge" is not a well-defined term, and while it serves purposes in everyday conversation, these are less like "I have a justified true belief that P" and closer to "I strongly believe that P" or "I am indeed aware that P". The search for a definition of knowledge is part and parcel of the mistaken project of trying to obtain genuine certainty - an unrealistic and indeed impossible standard.

Monday, 4 April 2016

Your Argument is Bad (and Dangerous) and You Should Feel Bad

The CEU Philosophy department recently held a conference for graduate students. That is, most of the talks were by graduate students from across Europe, there were responses given by CEU students, and the keynote speeches were given by the professors involved in organising it. I wasn't at much of the conference, but one talk I heard was so appalling that I feel the need to make a public record of this.

To be clear, this was not a talk by a graduate student. This was by a professor from elsewhere who came here in order to help run the conference. The reason for my strenuous objection is not merely that the argument is wrong (lots of philosophical ideas and arguments are wrong), or even that it was a bad argument (although even by the low standards of academic philosophy it was). This argument is dangerous.

The conclusion of the argument is that "holding different values from a person constitutes an epistemically sound reason to regard them as unreliable in fields in which they are expert". That is to say, if you disagree with them about politics then you are justified in ignoring whatever they have to say - even in fields you know very little about.

How does one justify such a remarkable position? We begin with the question of how far one ought to defer to experts. Clearly experts are not always reliable, but we would like to be able to assess whether or not to believe them without going through the long and difficult process of gaining all the same knowledge that they have. The speaker therefore distinguished the cases where there is general agreement and where there is widespread disagreement between experts. In cases where there is general agreement, we should go with expert consensus. In cases where there is disagreement, we need some way of establishing which to trust.

This is about as far as I am willing to agree with the speaker. There is as yet no draft of the paper - this was, the speaker remarked, the first time she had discussed these thoughts publicly - so you will have to trust from this point onwards that I am presenting her views faithfully and accurately.

Before moving on, I think she might have benefited from drawing a distinction between a consensus of experts and a consensus of papers. If all the experts in a field conclude that X, then I'm going to be strongly inclined to believe X. If all the papers on the subject conclude that X, then while I will probably still accept X, I'm also going to be heavily suspicious that there is some bias in the publication procedure. The speaker's example of consensus was global warming, and she quoted a meta-analysis which looked at more than 900 papers and found that they all pointed towards the existence of AGW. Sure, AGW is definitely a thing, but is it really plausible that, of 900 studies, not one would by chance fail to detect it? There's something fishy going on, either with that meta-analysis or with the way the speaker reported it.
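A quick back-of-envelope calculation makes the "fishy" worry precise. Assume - my assumption, purely for illustration - that the studies are independent and that each detects a real effect with 95% statistical power, which is generous; unanimity across 900 of them is then astronomically unlikely:

```python
# Back-of-envelope sketch, assuming independent studies at 95% power.
power = 0.95
n_studies = 900
print(f"P(all {n_studies} studies detect the effect) = {power ** n_studies:.1e}")
# ~8.9e-21: even at generous power, unanimity across 900 independent
# studies is wildly improbable, suggesting bias in publication or reporting.
```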

That said, how did the argument continue? The speaker claimed that deciding when a risk is worth taking will require a kind of judgement. Her example involved some ecological risk relating to bees. Suppose the downside of this risk has a 5% chance of occurring. Is that a big enough risk that we ought to consider action?

Ultimately, she claimed, this is a value judgement. So if one ecologist tells us we should beware of this risk and another says we don't need to, then in order to decide which one to trust we need to know which of them shares our values where risk-taking is concerned. So if an expert has significantly different values from you, this constitutes a reason to give less weight to their testimony relative to those experts with whom you share values.

There are, so far as I can see, four gaping holes in her argument. The first is a simple failure to apply Bayesian epistemology. When we say "there is a 5% chance that a large comet will strike earth in the next 10,000 years", this does not mean that the universe is indeterminate and has a 5% chance of resolving into a situation where a large comet strikes the earth: rather, we mean that we lack the information necessary to establish definitively whether or not earth will be hit by a comet, but based on the information which is available we attribute a 5% probability to it happening in the next 10,000 years. Furthermore, it is fine to act based upon this kind of partial information. Indeed, it is all that we can do.

So there is no need for some "critical point" at which a risk becomes worth considering. We can simply do a standard cost-benefit analysis in which the ethical weights given to events are multiplied by our best guesses at the probability of those events.
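A toy version of that cost-benefit analysis, using the speaker's bee example with entirely invented numbers (the talk gave none), might look like this:

```python
# Hypothetical numbers for the bee example; the talk supplied none.
p_harm = 0.05           # the 5% chance of the ecological downside
harm_weight = 1_000.0   # assumed ethical disvalue if the harm occurs
action_cost = 30.0      # assumed cost of taking precautionary action

expected_harm = p_harm * harm_weight  # 50.0
print("act" if action_cost < expected_harm else "don't act")
# No "critical point" is needed: the 5% simply scales the harm term.
```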

The second is that even if there were some "critical point" of this sort - or even a Sorites-paradox-type situation where really tiny risks should be ignored and we're never quite certain when they become important to think about - and it were subjectively determined in the way our speaker took it to be, this would be a far cry from what we ordinarily refer to as "values". It would be something much more contained and precise - call it "risk tolerance". I'm sure the speaker would recognise this if it were pointed out to her - but the fact remains that she chose to use the word "values" in a very loose and careless way.

Third, we already have an approximation of this "critical point". It's called statistical significance.

Finally, she leaves aside the extent to which our values are determined by other things, which may be the very things whose truth we aim to question. There are experts on the philosophy of religion who believe in God, and there are those who do not. If we privilege the testimony of those with whom we share values, then when considering the existence of deities Christians will have a good reason to privilege the testimony of fellow Christians, Hindus to privilege the testimony of fellow Hindus, and atheists to privilege the testimony of fellow neckbeards. But this is blatantly bad epistemic practice!

The thesis of this talk, if accepted by mainstream society, would destroy the ability to have sensible political and scientific discourse. Being dangerous does not automatically mean that a piece of philosophy is bad or wrong - if democracy is the only way to avoid either feudal warlords or tyrannical dictators then Jason Brennan's work is dangerous, but that doesn't mean his work is low quality. But when you are pushing a potentially dangerous thesis, you have not only prudential reasons but also a moral requirement to make the best argument you can. This speaker flagrantly failed to do so.

Sunday, 23 November 2014

My philosophical views

Having an hour to spare and nothing better to do, I've decided to write down my current answers to the questions on the PhilPapers survey of philosophers' views. First, a couple of notes and caveats:

  • At first, I wasn't going to look at any (potentially new-to-me) arguments for the positions while doing this. However, upon reflection it seems strange to reject a chance to be motivated to learn.
  • One of the options on the original survey was "insufficiently familiar with the area." This really ought to be my default answer - I am, after all, a mere undergraduate student - but where would be the fun in that? Instead, for any given issue you should assume that I am probably not as familiar with the issue as I ought to be.
A Priori knowledge: yes or no?
Umm... lean no, maybe? I lean towards the view that logic, maths, etc. are constructed rather than discovered, and given that these are supposed to be the paradigm cases of a priori knowledge, I guess that places me in the No category.

Abstract objects: Platonism or nominalism?
Is this asking whether I believe that there are no abstract objects, or which of these positions I lean towards on a greater number of subjects? I'm not willing to completely rule out abstract objects (fictional objects in particular strike me as things which might exist but be abstract) but I don't believe in the existence of numbers, of propositions, or of many of the other abstract objects which have been postulated to exist. Put me down as leaning towards nominalism.

Aesthetic value: objective or subjective?
I have actually put serious effort into trying to work out why anyone might think that aesthetic value is objective, and the closest I've seen to an argument is SEP's mention of the fact that "people tend to agree about which things are beautiful." Sigh. Accept subjective.

Analytic-synthetic distinction: yes or no?
I don't believe in it; the only question is whether I go down as Lean No or Accept No. Quine was very convincing... go on, put me down as Accept No.

Epistemic justification: Internalism or Externalism?
I can never remember which is which. Assuming I correctly understand the issue, one of them is the view that knowledge-seeking has intrinsic value, the other is that we should seek knowledge because it is useful to us. Yudkowsky put this very nicely in the Sequences, saying that seeking knowledge out of curiosity has a certain purity to it, but the advantage of seeking knowledge because it is useful is that it creates an external criterion by which to measure our success. Accept whichever one it is which says we should seek knowledge because it is useful.

External world: idealism, skepticism, or non-sceptical realism?
Accept non-sceptical realism. You can't achieve absolute certainty that you aren't being deceived by a demon, but (a) there is no reason to believe you are either and (b) in any case, suppose you were. You don't know anything about what the demon wants, so there's no particular reason to change the way you act.

Free will: compatibilism, libertarianism, or no free will?
I'm fairly well convinced that if determinism is true, then (a) people cannot act differently than they do but (b) they are still morally responsible for their actions. I believe this makes me a compatibilist, although it strikes me as a bit weird that this is counted as believing in free will rather than denying that free will is necessary for moral responsibility.

God: theism or atheism?
Damn, no option for deism. Lean deism if that's acceptable, otherwise I place higher probability mass in atheism than in any of the "revealed religions".

Knowledge: empiricism or rationalism?
Given that I deny a priori knowledge, it would be rather odd if I were to say rationalism. (At least, it appears that way; perhaps this is one of the many things on which I shall come to be corrected.) Accept empiricism.

Knowledge claims: contextualism, relativism, or invariantism?
No familiarity with the subject area.

Laws of nature: Humean or non-Humean?
Accept Humean.

Logic: classical or non-classical?
This is an interesting one. As said above, I lean towards the view that logics are constructed rather than discovered, and that different logics may be appropriate for different purposes. The philosophical justification for intuitionistic logic is something I find very appealing, so let's say Lean non-classical.

Mental content: internalism or externalism?
No familiarity with the subject area.

Meta-ethics: moral realism or moral anti-realism?
I lean towards constructivism. I believe this makes me a moral realist, although that's a bit weird since I started working out my metaethics by explicitly assuming there were no genuine moral facts floating around.

Metaphilosophy: naturalism or non-naturalism?
Is the question "Which is it more fruitful for us to assume as a default?" or "Which do I believe is actually true?" Accept naturalism on the first, lean non-naturalism on the second.

Mind: physicalism or non-physicalism?
Next to no familiarity with the subject area.

Moral judgement: cognitivism or non-cognitivism?
I looked at this at some point, but I can't remember much of what it was about.

Moral motivation: internalism or externalism?
Is this related to the amoralist's challenge? I've been thinking about that for ages, and still don't have a satisfactory answer despite reformulating my metaethics at least partially in an attempt to produce an answer to this question.

Newcomb's problem: one box or two boxes?
Accept one box. Although even if I were the type of person who would two-box, would I go around telling people that?

Normative ethics: deontology, consequentialism, or virtue ethics?
Virtue ethics, subject to deontological constraints, and with the choice of virtues justified on pluralist-consequentialist grounds. Yes, really.

Perceptual experience: disjunctivism, qualia theory, representationalism, or sense-datum theory?
When I studied this in first year, it seemed like a slam-dunk for sense-datum theory. However, given that (a) that was before I had read The Sequences, (b) I can't even remember what the first two of these were or if they were even mentioned, and (c) I have rejected almost every other view I picked up on that course (belief in the a priori, epistemological foundationalism, free-will libertarianism, near-universal scepticism... I must just about hold to a sensitivity condition regarding knowledge, so not quite everything), I'm inclined to take that past belief with rather a lot of salt.

Personal identity: biological view, psychological view, or further-fact view?
I don't hold to a biological view, but I'm not greatly satisfied by the leading psychological accounts (though if I had to choose one, I would go with Schechtman's). I don't even know what the further-fact view is, and looking at the relevant SEP and Wikipedia articles suggests that either I'm misunderstanding the question or that there is something odd about it. I was reading section 3 of Reasons and Persons, but my Kindle has gone missing.

Politics: communitarianism, egalitarianism, or libertarianism?
Accept libertarianism. Have you read my blog?

Proper names: Fregean, or Millian?
I prefer the Millian view, and I believe that Nathan Salmon's discussion of "guises" solves most of the problems for it; that said, I need to do more reading, so put me down as merely leaning Millian.

Science: scientific realism or scientific anti-realism?
Scientific realism. Because, you know. Duh.

Teletransporter (new material): survival or death?
Can I suggest the answer is somewhat subjective? Personally I would regard it as survival, but I'm very open to differences of intuition, and I think the disagreement has more to do with people having different values than with some (or all) people being wrong about an actual fact in the world.

Time: A-theory or B-theory?
B-theory is the one which holds all times to be equally real, and suggests that we move through time rather than time itself moving, right? Accept that one.

Trolley problem (five straight ahead, one on side track, turn requires switching, what ought one to do?): switch or don't switch?
I would lean towards switching. I'm not entirely comfortable with it, but David Friedman's variation on the Fat Man case (in which the combined weight of both the Fat Man and yourself is required to stop the trolley) does a fair job of convincing me that we should probably be willing not only to turn the trolley, but to push the fat man into its way.

Truth: correspondence, deflationary, or epistemic?
I read The Simple Truth and it sounded sensible. Then again, I haven't done a great deal of engagement with the views other than correspondence - certainly I could not explain what they are - so I'll have to just say I have insufficient engagement with the subject area.

Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible?
Again, I'm insufficiently familiar with the area, but I lean towards one of the not-metaphysically-possible positions.

Sunday, 2 February 2014

What does it mean to believe?

Take a proposition P and a person A. What conditions are necessary and/or sufficient for us to be able to say that A believes P?

The most obvious definition to try would be that A attributes a probability of over 50% to P being true. However, this seems potentially both over-extensive and under-extensive in the range of situations it classes as 'belief'. First, suppose A buys a lottery ticket and flips a coin. Without looking at the outcome of either, A can say that the probability of the proposition "the coin has landed on heads, or the lottery ticket has won the jackpot" is ever-so-slightly above 50%, yet it seems strange to call this a belief when it is at best a guess based upon probabilities. Furthermore, suppose there is a set of mutually exclusive propositions P1, P2, ... whose subjective probabilities (according to A) of being true sum to 1, where P(P1)<0.5 but P1 is by a substantial margin the most probable of them. Would it then be reasonable to say that A believes P1? I'm uncertain, which seems to suggest that it is likely to depend upon the vagaries of the case.
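For what it's worth, the arithmetic behind the coin-and-lottery case is just the disjunction rule for independent events; the jackpot odds below are illustrative assumptions, not figures the post relies on.

```python
# P(A or B) = P(A) + P(B) - P(A) * P(B) for independent A, B.
p_heads = 0.5
p_jackpot = 1 / 14_000_000   # assumed UK-lotto-style odds, for illustration

p_disjunction = p_heads + p_jackpot - p_heads * p_jackpot
print(p_disjunction)  # 0.5000000357...: "ever-so-slightly above 50%"
```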

With regard to the first issue, perhaps we can tweak our definition to state that P(P) must be a posterior probability, based upon actual evidence, as opposed to a prior probability based on... something. (I'm assuming for the moment that A follows a Bayesian Epistemology; of course most people, including most philosophers, do not fit this description, but I personally at least try to and this whole post is a somewhat roundabout way of tackling an issue regarding my own beliefs).

This doesn't really seem to make any sense, however, as a normative prescription for how we ought to choose our beliefs. If we have a sensibly chosen prior then it's hard to see why having evidence should affect the epistemic nature of our view of a proposition. (By nature, I refer to belief vs. justified belief vs. knowledge vs. whatever else there is, as opposed to status, i.e. strongly believe vs. weakly believe vs. weakly disbelieve and the thousand and one variants thereof).

What I suppose I'm getting at is that the notion of belief as a binary concept is very unclear, and perhaps incoherent. What would this mean? It shouldn't affect our actions: we are perfectly capable of acting on things we 'disbelieve' or believe to have negligible probability - this is one of the key ideas in some areas of global catastrophic risk. It ought, however, to affect the way we think about epistemology - if it is impossible to come up with a sensible definition for belief, then this will throw all attempts to define knowledge out of the window. For people with sensible (i.e. Bayesian) epistemologies this is no problem, but it might form the basis of an attack upon non-Bayesian epistemology.

Monday, 18 November 2013

External World Realism!

My view on perception of the external world for some time was "The fact that we see the world in a certain way is evidence for it being that way. However, there is no way to actually know we are not being deceived in some way. That said, there's no particular reason to believe that the external world would be some particular way which does not correspond to our perception of it, so it is most rational to act according to beliefs based upon what we actually see." To an extent I still believe that, but this morning I was re-listening to Map and Territory when it suddenly occurred to me:

Any theory of how the external world is must not only explain how the world is, but also why we perceive it precisely as we do.

That means that any complete description of the external world must contain within it a complete description of what we perceive to be the external world. Hence, by Occam's Razor based on Solomonoff Induction, a "naive" view of the external world, that it is as we perceive it to be, must be the favoured explanation of how it actually is.


Apologies if this seems either obvious or arcane, but it means a significant amount for my confidence in my ability to know the world.

Sunday, 10 November 2013

By faith and not by sight?

There's a song called By Faith which we frequently sing at Church. It's a great fun song, with an upbeat tune and a very catchy chorus. You can see the original version here, although I'm personally more keen on the live version at church. The chorus goes:

We will stand as children of the promise,
We will fix our eyes on you our souls' reward,
'Til the race is finished and the work is done
We'll walk by faith and not by sight.

Not by sight? We're going to believe without evidence? That's ridiculous. Evidence is precisely what makes it reasonable to hold a belief. There is no virtue to be gained by holding specific beliefs unless they are true, and if they are true then there should be plenty of evidence for them. I highly recommend C. S. Lewis's Mere Christianity, and in particular this chapter to any Christian who thinks that "faith" is enough for a belief in our God.

What about the song? I've taken to singing "by faith, not just by sight", which is good enough for me but I don't think is good enough in general. People are going to be influenced by what they sing with the Church's fervent endorsement, and if they're learning anti-epistemological habits from it then in the long run that's not good for truth or for the church. The less we Christians feel the need to justify our beliefs, the less effort we will put into investigating them. If we conclude that Christianity is probably false, then we should shrug, say "Any belief which can be destroyed by truth, should be," and move on. If we conclude that it is probably true, then hallelujah! Let's go out and convert everyone, surer and better-equipped than ever we were before!