I
The Prisoners' Dilemma is a classic exercise in game theory, and I shall present the traditional version of it. Two criminals are held under suspicion of having together committed some crimes, and there is enough evidence to pin them down for one particular charge but not for all. The two criminals are kept separate, and each is approached with an offer: confess to all their crimes and rat out the other criminal in exchange for a lighter sentence, at the cost of a stiffer sentence for the other prisoner. Putting this into numerical terms, if neither confesses then each will receive a sentence of five years; if one confesses and the other doesn't, then they will receive sentences of two years and twelve years respectively; and if both confess, then each will receive a sentence of nine years. Putting this into a table:
| | Prisoner B confesses | Prisoner B does not confess |
| --- | --- | --- |
| Prisoner A confesses | pA imprisoned 9 yrs, pB imprisoned 9 yrs | pA imprisoned 2 yrs, pB imprisoned 12 yrs |
| Prisoner A does not confess | pA imprisoned 12 yrs, pB imprisoned 2 yrs | pA imprisoned 5 yrs, pB imprisoned 5 yrs |
From the perspective of the prisoners taken together, it is clearly best for neither of them to confess, since this leads to a combined total of 10 years in prison, and if either or both of them confesses then the total time spent in prison increases. But from the perspective of an individual prisoner, one is always better off confessing - being imprisoned for nine years rather than twelve (if the other prisoner confesses) or for two years rather than five (if the other prisoner keeps silent). So the end result of each of them pursuing their own self-interest is in fact the worst combined result of all, whereas if they could somehow coordinate so that neither confesses, both would be better off.
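For readers who like to see the logic mechanised, here is a minimal Python sketch (using the sentences above, with move names of my own choosing) that checks that confessing strictly dominates staying silent for each prisoner, even though mutual silence minimises the total years served:

```python
# Payoffs as (years for A, years for B), indexed by (A's move, B's move).
# Fewer years are better.
SENTENCES = {
    ("confess", "silent"): (2, 12),
    ("silent", "confess"): (12, 2),
    ("confess", "confess"): (9, 9),
    ("silent", "silent"): (5, 5),
}

def best_reply_for_a(b_move):
    """Return A's move minimising A's own sentence, given B's move."""
    return min(["confess", "silent"], key=lambda a: SENTENCES[(a, b_move)][0])

# Whatever B does, A's best reply is to confess: confessing dominates.
for b_move in ["confess", "silent"]:
    print(f"If B plays {b_move!r}, A's best reply is {best_reply_for_a(b_move)!r}")

# Yet the jointly best outcome (fewest total years) is mutual silence.
joint_best = min(SENTENCES, key=lambda moves: sum(SENTENCES[moves]))
print(f"Jointly best outcome: {joint_best}, total {sum(SENTENCES[joint_best])} years")
```

Running it confirms that A's best reply is to confess whatever B does, while the jointly best outcome is mutual silence at 10 years in total.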
I first encountered the Prisoners' Dilemma around the age of 12 - if memory serves, in Tim Harford's The Undercover Economist. I found it fascinating, and looked forward to the day when I could study economics and learn more complex - and (to my 12-year-old mind) therefore more useful - game-theoretic models. An intriguing puzzle, certainly, but how much did it really have to tell us about the real world? After all, the world is full of many different agents - how could we hope that a model with only two agents would be useful for thinking about it?
II
John Rawls is indisputably the most influential political philosopher of the last century. He argued that decisions about the structure of society should be made from behind a "veil of ignorance", in which people are ignorant of a great many facts about themselves - crucially, of where in society they will end up. The basic intuition comes out if you have a choice between two societies A and B:

[Figure: the distributions of wellbeing in societies A and B, with a blue line marking one particular person's position.]
Given the choice of these societies, society B is, from the perspective of an independent observer, clearly the better. But if you are at the level within society indicated by the blue line, then you would presumably endorse society A over society B, for the simple reason that it gives you personally a better deal. Rawls believed that a set of social institutions ought to fulfil various conditions, but there were two that he prioritised above all others: it should be just, that is, people should endorse this set of social institutions from behind the veil of ignorance; and it should be stable, that is, people should be willing to continue to endorse these institutions when coming out from behind the veil and into the real world. He endorsed a strategy of maximin - that is, the institutions should seek to maximise the utility of the worst-off individual within society. His reason for preferring this to straight-up utilitarianism (very roughly: seek to maximise the average utility accruing to all members of society) is the burden that utilitarianism places upon the worst-off. Compare societies C and D, representing utilitarianism and maximin respectively:

[Figure: the distributions of wellbeing in societies C and D.]
A member of society C will, most likely and on average, be significantly better off than their counterpart in society D. However, the worst-off member of society C will, even if she endorsed utilitarianism behind the veil of ignorance, be unable to support it when living under it. Hence the society is unstable, and we must prefer maximin.
(One could equally say that the non-worst-off could make this same complaint of maximin, given how much better off they would be under utilitarianism; I put this to my political philosophy lecturer, who so far as I can tell is a fully-paid-up Rawlsian lefty, and the response was something along the lines of "That's a shining example of rich privilege.")
Rawls rejects repression of the worst-off to maintain stability as "stability for the wrong reasons"; however, perhaps he could accept people committing, behind the veil, to be content with whatever they have when they emerge from it. If such an agreement could be upheld, then surely we could endorse utilitarianism, and many people would be far better off?
The problem, of course, is that when one has come out of the veil and discovers that one's own interests have been sacrificed in the name of improving average utility, one has no reason to hold to such an agreement and everything to gain by pushing for a move to maximin. Let us consider a highly idealised model of the situation, in which two people must choose what strategies to pursue. The principle of utility will supply one of them with 10 utils and the other with 4 utils, while maximin will supply them with 6 utils and 5 utils instead. Behind the veil they are aware of these numbers, though not of which position they will occupy. If either of them, emerging from the veil, chooses to endorse maximin, maximin occurs. Each of them has two strategies available for choosing which set of institutions to endorse on leaving the veil: either endorse whichever institutions maximise average utility, regardless of where this leaves them personally, or endorse whichever set of institutions leaves them personally better off.
| | Person B follows self-interest | Person B promotes social utility |
| --- | --- | --- |
| Person A follows self-interest | pA mean utility 5.5, pB mean utility 5.5 | pA mean utility 7.5, pB mean utility 5 |
| Person A promotes social utility | pA mean utility 5, pB mean utility 7.5 | pA mean utility 7, pB mean utility 7 |
We're back to our old friend, the Prisoners' Dilemma. If we could all commit to following the principle of utility, the world would be so much better. But because people are individually better off for not co-operating, the worst world of all (at least, of those within this chart) is instantiated.
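As a sanity check on those numbers, here is a small Python sketch deriving the expected utilities from the model's assumptions. One assumption is my own (and is needed to reproduce the 7.5 figure above): the person disfavoured under the principle of utility remains the worse-off person under maximin, i.e. the two positions pay (10, 4) and (6, 5) respectively.

```python
import itertools

# Payoffs by regime: (favoured position, disfavoured position).
UTILITY = (10, 4)   # principle of utility
MAXIMIN = (6, 5)    # maximin

def endorses_maximin(strategy, position):
    """Does a person with this strategy, landing in this position, push for maximin?"""
    if strategy == "social":  # promote average utility regardless of position
        return False
    # "selfish": endorse whichever regime pays them personally more.
    return MAXIMIN[position] > UTILITY[position]

def expected_utilities(strat_a, strat_b):
    """Average each person's payoff over the two equally likely position assignments."""
    totals = [0.0, 0.0]
    for pos_a in (0, 1):  # 0 = favoured, 1 = disfavoured
        pos_b = 1 - pos_a
        # Maximin occurs if either person, on emerging from the veil, endorses it.
        regime = MAXIMIN if (endorses_maximin(strat_a, pos_a)
                             or endorses_maximin(strat_b, pos_b)) else UTILITY
        totals[0] += regime[pos_a] / 2
        totals[1] += regime[pos_b] / 2
    return totals

for strat_a, strat_b in itertools.product(["selfish", "social"], repeat=2):
    ea, eb = expected_utilities(strat_a, strat_b)
    print(f"A {strat_a}, B {strat_b}: A expects {ea}, B expects {eb}")
```

Running it reproduces the table: following self-interest strictly dominates, yet mutual self-interest leaves each party expecting only 5.5 utils against 7 for mutual co-operation.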
We don't actually need Rawls for this problem with acting according to utilitarian principles, though he is a particularly highbrow example. Take Peter Singer's famous drowning child analogy for aid to the third world. Not only utilitarianism, but virtually every system of morality known to man, dictates that we should give vast amounts of money to help the third world. Yet with a few honourable exceptions (the most famous being Toby Ord and Julia Wise & Jeff Kaufman), hardly anyone actually does this - it's simply not in your interests to give thousands of pounds away every year for no discernible benefit to yourself. The sum total of human happiness would be far greater if people donated much more to effective charities - or indeed, just gave money directly to those worse off than themselves - but rich westerners and poor third-worlders can, behind a veil of ignorance as to which of them would be in the first world and which in the third, be seen as the participants in a Prisoners' Dilemma with a payoff something like the following:
| | Africans follow self-interest | Africans promote global utility |
| --- | --- | --- |
| USA/Europeans follow self-interest | West mean utility 6, Africa mean utility 6 | West mean utility 8.5, Africa mean utility 4.5 |
| USA/Europeans promote global utility | West mean utility 4.5, Africa mean utility 8.5 | West mean utility 7, Africa mean utility 7 |
(I assume that the first world gets 10 utility and the third world gets 2 utility, and that the first world can sacrifice 3 utility to equalise utility at 7 for everyone.)
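To spell out the arithmetic behind these figures: if you follow self-interest while the other party promotes global utility, then with probability ½ you turn out to be the first-worlder and keep your 10, and with probability ½ you are the third-worlder receiving the equalised 7, for a mean of ½ × 10 + ½ × 7 = 8.5; the utility-promoter correspondingly expects ½ × 2 + ½ × 7 = 4.5. If both follow self-interest, no transfer ever occurs and each expects ½ × 10 + ½ × 2 = 6; if both promote global utility, the transfer always occurs and each expects 7.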
It goes beyond considerations of morality. Consider Brienne Strohl's suggestion of "Tell Culture", as opposed to "Ask Culture" and "Guess Culture". The whole context is too long to explain here, but short enough that, if you haven't already, you should be able to read her article in a couple of minutes and come back here. Go on, off you go.
As is noted both in her article and in the comments, Tell Culture is highly dependent upon honesty from everyone involved - if one person is dishonest about their motivations or the strength of their desires, they can reap large rewards in status and in achieving their values, but at the cost of the other people within the system.
Alternatively, take the issue of how civil we should be with those with whom we have significant political disagreements. Scott Alexander, one of my favourite bloggers (the habit of titling sections with Roman numerals didn't come from nowhere!), has written in favour of being polite and focussing on reasoned discussion and debate; his opponents in this debate argue that, since [they] are right, [they] should do whatever it takes to make certain that [their] political ideas are implemented, including belittling, straw-manning, lying about and insulting opponents. It seems obvious to me that if one of these methods were to be chosen for universal usage, it should be Scott's advocated system of politeness and honesty; but from the perspective of an individual who believes him- or herself to be right, one would surely prefer the more intellectually violent approach.
III
As I skated over earlier, there is a crucial way in which all of these examples differ from the standard Prisoners' Dilemma model: they involve far more than just the two parties of the matrix. In the charity example I was able to sort-of hide this by viewing everything in aggregates, but for Tell Culture and all sorts of other beneficial or potentially-beneficial social norms it becomes a lot harder.
In The Invention of Lying, nobody lies until one man suddenly gains the ability to do so. He exploits this for significant personal gain before undergoing character development and starting to use it for good - or at least, what he sees as being good. If someone like George Costanza had gained this ability, then his personal gains would have been absolutely massive - exceeded only by the damage wreaked upon the rest of society. Let us define a society as "a group of people conforming to one or more rules or principles governing their behaviour". The bigger a society is, the more gain there is to be had by taking advantage of people's expectations that you will conform to those rules. If the two prisoners were allowed to communicate, then perhaps they could work out a way to co-operate; as the number of prisoners increases, though, it becomes ever harder to achieve this co-operation.
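To make that scaling concrete, here is a small sketch of an N-player public goods game - an illustrative model of my own, not anything from the sources above - in which each co-operator pays a cost that is multiplied and shared among all members, so a lone defector in a large society keeps almost the whole benefit of everyone else's conformity while contributing nothing:

```python
def payoff(n_players, n_cooperators, i_cooperate, cost=1.0, multiplier=1.6):
    """Payoff in a one-shot public goods game.

    Each co-operator contributes `cost`; contributions are multiplied by
    `multiplier` and shared equally among all players. (The parameters are
    arbitrary - the point is only the shape of the incentives.)
    """
    pool = n_cooperators * cost * multiplier
    share = pool / n_players
    return share - (cost if i_cooperate else 0.0)

for n in [2, 10, 100]:
    # Everyone else co-operates; compare co-operating with free-riding.
    coop = payoff(n, n, True)
    defect = payoff(n, n - 1, False)
    print(f"n={n}: co-operate -> {coop:.2f}, free-ride -> {defect:.2f}, "
          f"gain from defecting = {defect - coop:.2f}")
```

With these parameters the gain from free-riding grows from 0.2 utils in a 2-person society towards the full contribution cost as the society grows - the intuition above in numerical form.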
What other things affect the ability to achieve co-operation in such dilemmas? Feelings of goodwill between the prisoners are very useful - if your wellbeing is a part of my own wellbeing function, then I may be motivated to sacrifice my own interests to promote yours. These feelings of goodwill can arise in numerous ways - ties of family and friendship are obviously significant factors, but something as small as sharing a language or living on the same continent can serve to increase people's benevolence towards one another.
IV
With this in mind, what can we do to promote greater co-operation in the Dilemmas which arise for us in our lives? One traditional response is to create an enforcer of co-operation - indeed, this is generally seen as the key function of government: preventing theft and other such crimes which benefit ourselves at the expense of the rest of society. But in many cases this is not a practical solution - the typical office can hardly afford a policeman to enforce a ban on internal politics. Plus, the useful parts of real-world governments tend to come with a lot of unwanted and grossly harmful parts - indeed, government itself creates many such dilemmas. If everyone were to refrain from lobbying for pork-barrel or minority-interest spending, then government would be cheaper and would do less harm to competitive markets, but it is in no-one's individual interest to stop competing for the government's favour for their particular interest group.
We can attempt to keep societies small - it will be harder for a person to get away with pursuing personal interest at social expense in small societies than in larger ones. But this doesn't really solve the problem for most human interactions, since small societies are unlikely to be able to practise the division of labour necessary to sustain the full richness of modern life.
"Moral bioenhancement" might move us somewhere towards a solution, but is potentially just pushing the issue to a new level - sure, society as a whole is better off for people taking moral bioenhancement treatment, but an individual who only pretends to take the treatment (or who had previously taken an antidote) can still take advantage of the rest of society due to the expectation of co-operation. Besides which, it's not clear to what extent this is merely futuristic and to what extent it is
pseudoscience.
What about feelings of goodwill? If nothing else, this is a solid self-interested reason to be nice to other people. But again, this is far from anything approaching a complete solution.
So is there anything which can be done to promote co-operation among large groups of people, without relying on undependable things like trust, or crude things like state or vigilante violence? Comments are welcome.