A Persian Cafe, Edward Lord Weeks

Sunday, 14 January 2018

Review: Jorge Luis Borges, Ficciones

Borges is a writer whom I had been somewhat aware of for a while, had read a few passages of and enjoyed, but had never got around to reading deliberately. So when I set out a reading list for 2018, his collection Ficciones, generally regarded as the most accessible starting point for reading him, seemed an obvious inclusion.

Ficciones is a set of seventeen short stories, originally published in two separate volumes in the 1940s and then later collated; they first appeared in English in 1962. Borges wrote in Spanish, though he was heavily influenced by English writers, in particular G. K. Chesterton. There is a tremendous playfulness in many of Borges' stories, exemplified by my favourite story from the collection: Pierre Menard, Author of the Quixote. Pierre Menard is a deceased author and the story is an appreciation of his work, in particular of his greatest project: an attempt to rewrite Don Quixote in the exact same words used by Miguel de Cervantes. The narrator of this story therefore takes Menard to have made a conscious decision to write not in his own native land and time, but instead in "the land of Carmen during the century of Lepanto and Lope". His Don Quixote is a wild and romantic figure, in contrast to de Cervantes' more pedestrian protagonist.

There are other wonderful stories. The Library of Babel is an excellent counterpoint and companion to Pierre Menard, Three Versions of Judas is another masterful piece of intellectual trolling, and The Form of the Sword and Theme of the Traitor and the Hero provide scintillating plots which blast by in only a few pages. Borges' writing style goes after my own heart, with numerous allusions to both the real and the imaginary. But the quality is distinctly uneven. The Circular Ruins is eminently forgettable. The End will seem quite pointless to anyone who is not already familiar with the Argentine national epic Martín Fierro. Perhaps the biggest disappointment is The Secret Miracle, a story about a Jewish playwright in 1940s Prague struggling to complete his masterwork before his execution by the Nazis. By a miracle he is allowed to unfurl it all to its conclusion, to put each word into place - but only in his head, and it dies with him. There's a fantastic basis for a story there, but it seems so incomplete. One might argue that the point would be spoiled if we were to know what this play is about, but I'm not buying that - we already know that he was granted this miracle to complete it, something which no-one else inside the story would have been privy to. So the content of this play seems like a massive missed opportunity to draw parallels with the greater story, to exude some moral about life, or to draw some dramatic irony with the situation in which the playwright finds himself.

Indeed, with several of the less allusive stories one begins to wonder why one does not simply read each story's Wikipedia page instead. Perhaps one does not gain so much intellectually from reading Pierre Menard that one could not also learn from the Wikipedia page, but Borges' charming voice makes the extra reading time well worth the investment. Some of the better stories combine abstract theorising with an actual story, again making them worth the time to read properly. But unless one enjoys all of the writing styles which Borges employs, one is liable to find some of the stories to be distinctly full of air and little else.

Overall, I definitely recommend the book - if nothing else, most of the stories are pretty short and there's a pdf of Borges' collected works to be found on Google for free, so the costs of trying him and not enjoying it are trivial. More importantly, while there are some dull stories the greatest stories are magnificent, and the good significantly outweighs the mediocre. But if, after a couple of pages into a story, you still have no idea what it's actually about, take that as a sign that it may be worth skipping ahead to the next one.

Tuesday, 9 January 2018

The Elephant in the Brain: some notes

The Elephant in the Brain, written by Kevin Simler and Robin Hanson and recently out in paperback, is best viewed as two books on connected topics. The first is a convincing argument that "the elephant" exists: that we consistently engage in self-deceptive behaviour for purposes of social gain. The second is a series of arguments, ranging from the highly plausible to the outrageous, that this explains the function (or malfunction) of various human social behaviours.

The Elephant

The first section of the book presents multiple lines of argument leading inexorably to the conclusion that many of our behaviours are inexplicable in first-person terms, but are on some level directed, in a way that a third party might easily observe, towards attaining social advancement. This is made possible by the modular structure of the brain, in which sections of the brain may have the ability to make decisions but not to communicate or defend them. Crucially, we are unable to distinguish between those actions caused by the parts of our brain which also control what we say and those caused by other parts of the brain: hence, we will typically invent justifications for such actions which owe nothing to the actual motivations behind them.

The upshot of this is that one section of the brain can engage in devious, cynical scheming, and we are free to act upon its promptings while having no conscious awareness of them - and are therefore able to honestly protest complete innocence when accused of harbouring these devious and cynical motives.

Hanson and Simler present a range of evidence for this, which I won't reiterate partly because other reviews will cover it and partly because I didn't take very good notes and really need to reread this section of the book. What I do remember finding illuminating, however, is the way they placed features of humans in the wider context of nature. Why is the American Redwood tree so tall? On clear and flat ground, being taller doesn't allow a tree to get any more sunlight but it does mean that the tree has to acquire more nutrients and transport them further upwards. The answer, of course, is that redwoods don't originate from clear and flat ground: they have to be as tall as, or taller than, the trees around them in order to have access to sunlight. The redwoods become so tall because of competition with other redwoods.

Similarly, how did humans become as smart as the graph above (taken from the book) demonstrates? The answer lies not in the abilities intelligence grants us over nature, but in competition against other people. This thesis is not new to Hanson and Simler, of course, but their presentation of it is especially clear.

I have some further thoughts following from the discussion of norms and how we subvert them, but they are not developed enough to appear even in this miserable excuse for a book review.

The Elephant in Practice

There then follow ten chapters, each discussing a different phenomenon from a Hansonian perspective. I don't want to go over all of these, so will briefly look at two that I found especially interesting. Firstly, they argue that laughter - which we often struggle to explain, of course, so looking for hidden motives may well be the way to go - serves the function of signalling that we are "at play". When one laughs, this indicates to those around oneself that one is not in a serious mood, which can allow one to say or do things that would normally be taken as threatening.

This theory is fascinating, and for lack of a better theory it has changed my view on at least one issue: rape jokes. The ability to laugh at something is an indication that one is not concerned about it - if this theory is true, then, we should probably consider dark humour to be indicative of a lack of virtue, and should indeed actively discourage such a lack of caring in others. Perhaps this doesn't merit an absolute prohibition on such jokes - humour is a value which can weigh against other considerations - but it does suggest that we should be very cautious with such jokes and should never consider rape in itself to be suitable for a punchline.

There's also a defence of canned laughter, which I don't remember well enough to faithfully pass on.


The second has already gained some attention, thanks to my sharing a page from the book on Twitter: their theory of art. This theory, originally developed by Geoffrey Miller, is that art developed primarily as a way to show off various attractive traits - in particular intelligence, creativity, and conscientiousness. They draw a distinction, which I assume must have been drawn many times before, between the intrinsic and extrinsic properties of an artwork. Intrinsic properties are those we perceive in the artwork itself; extrinsic properties are those which cannot be perceived in the work itself - primarily facts about how it was created. Quoting directly:
The conventional view locates the vast majority of art's value in its intrinsic properties, along with the experiences that result from perceiving and contemplating those properties... In contrast, in the fitness-display theory, extrinsic properties are crucial to our experience of art. As a fitness display, art is largely a statement about the artist... If a work of art is physically (intrinsically) beautiful, but was made too easily (like if a painting was copied from a photograph), we're likely to judge it as much less valuable than a similar work that required greater skill to produce.
This has the consequence that as our ability to produce things has improved, artists have had to find new ways to make art difficult for themselves. They offer this as an explanation for why theatre continues to be popular, despite the various capabilities (camera angles, numerous takes, vast amounts of post-production editing) that film offers: a live performance has the chance to go wrong, and so demands greater skill of the performers. I think this is not the whole story (and nor, for that matter, is Michael Story's theory that theatre serves to make lowbrow comedy acceptable for the middle and upper classes) - theatre offers advantages in terms of one's ability to focus on whichever section of the stage one prefers (regardless of whether or not, artistically speaking, it is the best), and in the ability to tailor each performance to its audience (theatre actors can wait for laughs to subside, film actors can't). But it's a fascinating view on the topic.

As I suggested on Twitter, I am only partially sold on this. How good are audiences at realising that mistakes have been made? Sometimes it's clear - for example, a playgoer may see an actor requesting a line from the stage manager (I didn't see this happen when I saw Twelfth Night at the RSC the other day, but it happened very obviously a couple of months ago when I saw an amateur production of Arcadia) - but much modern art is highly abstract. If one of the lines on Jackson Pollock's No. 5 is out of place, how shall we know? If someone gets the timing wrong or plays the wrong note in some atonal piece of music, will anyone without a score be in a position to check?

I have some other thoughts on this in regard to popular music, which will be a post of their own because they're worth actually developing. For now I'm just going to raise three questions which I think are worth asking of the authors:

How sophisticated is the elephant, anyway?
Some of the signalling stories which Simler and Hanson tell are very complicated. For example, they argue that much advertising works not by influencing us as individuals, but by causing us to expect others to be influenced by it:
When Corona runs its "Find Your Beach" ad campaign, it's not necessarily targeting you directly - because you, naturally, are too savvy to be manipulated by this kind of ad. But it might be targeting you indirectly, by way of your peers. If you think the ad will change other people's perceptions of Corona, then it might make sense for you to buy it, even if you know that a beer is just a beer, not a lifestyle.
The classic strawman objection to evolutionary psychology is that almost no-one has a conscious aim of maximising their genetic footprint. The chain of reasoning "I will do X, because X will make me more attractive, which will allow me to attract a higher quality mate or to attract more mates, which will increase my genetic footprint" will almost never include the last clause, and may not even go beyond "I will do X" if X is something we are inherently motivated to do. The answer, of course, is that we don't need to think everything through - so long as a category of action reliably leads to higher fertility, we may well find ourselves inherently motivated to do it. This explains desires to eat and drink, to have sex, to parent our children well, and many other things. But these things which we are inherently motivated to do are fairly broad classes of action, with no particular cultural knowledge required. The Corona example, by contrast, involves highly sophisticated cognition, requiring not only instrumental rationality but also a theory of other minds. Do Hanson and Simler think this is all being done non-verbally, by evolved instincts - or is there a portion of the brain thinking thoughts, in a verbal fashion, but entirely detached from our stream of consciousness?

How far do signals rely on common knowledge?
Another example from their chapter on consumption:
Blue jeans, for example, are a symbol of egalitarian values, in part because denim is a cheap, durable, low-maintenance fabric that makes wealth and class distinctions hard to detect.
I had no idea about any of that. Indeed, I doubt most people consciously pick up on most of the signals which Simler and Hanson allege we send. So how far can we actually be expected to react to them?

Signalling vs. Creating Meaning
Depending on what kind of story we tell, the same product can send different messages about its owner. Consider three people buying the same pair of running shoes. Alice might explain that she bought them because they got excellent reviews from Runner's World magazine, signaling her conscientiousness as well as her concern for athletic performance. Bob might explain that they were manufactured without child labour, showing his concern for the welfare of others. Carol, meanwhile, might brag about how she got them at a discount, demonstrating her thrift and nose for finding a good deal.
If the same purchase could send so many different messages, then it effectively sends none of them. I think these are far better explained as the stories we tell ourselves in order to create a sense of meaning and purpose in our lives. Once one raises this spectre, one wonders how much of their theory it could take over. Is the extrinsic value of art not that it could go wrong and is therefore a display of fitness, but that the process of creation is a way of creating meaning? Perhaps creating meaning is just another form of signalling, but this is something that has to be actually argued for.

One piece of evidence in favour of signalling over meaning-creation theories of fashion is a dog that hasn't barked: decorating the inside of clothing. The underside of a shirt could carry many messages, verbal or pictorial, that would be understood by the owner but not by observers. The fact that we worry greatly about the outside of clothing but not the inside suggests that it is the impression given to observers that we care about.


Conclusion

The book is very readable, and if you like Robin Hanson's other writings you'll like this. That said, it didn't quite live up to the praise given to it by other sources (e.g. Tyler Cowen) - there are some excellent passages, and some wonderful ideas, but there are also many ideas which are in sore need of greater defence. It's worth reading, quite possibly more than once, but it is not - in my view - Book-of-the-year level good, which is the level I feel it has been hyped to.

Friday, 15 December 2017

We Already Have A Voting Lottery

When, on that fateful day last year, the UK voted to leave the European Union, there was a great deal left as yet undecided. There were a great many paths we could have pursued, ranging from the Norway+ options that would have removed us from the European parliament and little else, to the economists' nightmare No Deal scenario. None of these options could at that point be declared "undemocratic", since the referendum gave us an answer to only a single question. Theresa May - or whoever else might have become Prime Minister - could have, if they so wished, declared that "52% to 48% is no mandate for radical change. We will leave the EU, but smoothly and cautiously" and changed very little.

Instead, the narrative very quickly became that the referendum had ultimately been about immigration, and that the British People had ultimately voted to Take Back Control Of The Borders. It's not hard to see why the notoriously anti-immigration Theresa May wished for this narrative to prevail. Moreover, it's not totally absurd - the campaign for Brexit did, after all, emphasise this as a reason in favour of Brexit (though by no means the only one, or even the main one - remember that bus about the money which could go to the NHS?). What is puzzling, however, is the lack of pushback against this narrative. Theresa May may not have wanted to suggest the vote was anything less than an endorsement of radical change, but why have so many other actors, including many who are in principle in favour of immigration, colluded in this narrative and not challenged this interpretation of the vote?

The answer, of course, is that it is a correct interpretation. Not that you could tell this from the Brexit vote alone, of course - referenda are, much like general elections, a quite incredible effort to extract the minimal possible sliver of information from voters. But we have a great many other surveys and polls of public opinion, conducted with great regularity and on a much richer array of questions than the usual choices offered at the polling station. We know that Brits want less immigration, but this is not because given a choice of two highly uncertain prospects, they chose the one likely to involve less immigration: rather, it is because people from YouGov have asked them exactly what their views of immigration are.



An alternative to universal suffrage that almost only ever appears in the academic literature is the "voting lottery". The idea is that rather than collect votes from every single person, we select a much smaller number of citizens - say, 1000 - and ask only them to vote. This would have three key advantages. Firstly, it would be cheaper ("But you can't put a financial value on democracy!" "Sure you can. In 2011 we rejected AV because it would be too expensive."). Secondly, by increasing the power of those who actually get to vote, it would give them more of an incentive to seriously consider their vote and its impact. Perhaps most importantly, it would provide an opportunity to stratify the sample of voters. Currently certain groups - in particular, the young and various ethnic minorities - are grossly underrepresented by democracy because of their lower turnout. A voting lottery would allow us to ensure that these groups are counted in accordance with their proportion of the total population, not merely their proportion of the population that turns up to vote.
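
To make the stratification point concrete, here is a minimal sketch of how the lottery slots might be allocated (the age bands and population shares below are purely illustrative numbers of my own, not figures from any source):

```python
# Hypothetical population shares by age band (illustrative numbers only).
population_shares = {"18-24": 0.11, "25-44": 0.34, "45-64": 0.32, "65+": 0.23}

def allocate_slots(shares, total_slots=1000):
    """Give each group a number of lottery slots proportional to its share
    of the population, handing any leftover slots (from rounding down) to
    the groups with the largest remainders."""
    raw = {group: share * total_slots for group, share in shares.items()}
    slots = {group: int(r) for group, r in raw.items()}
    leftover = total_slots - sum(slots.values())
    for group in sorted(raw, key=lambda g: raw[g] - slots[g], reverse=True)[:leftover]:
        slots[group] += 1
    return slots

print(allocate_slots(population_shares))
# {'18-24': 110, '25-44': 340, '45-64': 320, '65+': 230}
```

Each group's weight is fixed by its share of the population rather than by its turnout, which is the whole point of the stratification.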

Now of course many people who encounter this idea have a strong aversion to it. The point of democracy, they say, is in the mass participation. But the fact that our assessment of public opinion comes not from five-yearly general elections but from weekly polls rather pulls the rug out from under this. Voting in an election is screaming into the void: real political participation is happening to be selected for a YouGov survey, and giving your opinion there.

Another common concern is that a sample of 1000 people cannot hope to fully capture the views of an electorate of millions. I'm not married to the 1000 number - in fact, I think it could stand to be more like 10,000. But the basis of all modern polling is the Law of Large Numbers, which in essence states that when you aggregate many small, independent measurements, each of which is error-prone, the errors tend to cancel each other out and the average converges on the true value. Hence a poll of 1000 people will be within about 3% of the true value 95% of the time, and a poll of 2000 people within about 2%, for example. Yes, the newspaper polls can be wrong, but this is more often due to bias in the way they have selected voters - asking by telephone, or at a particular time of day, or with certain incorrect assumptions about who is likely to vote - which we could eliminate by selecting voters directly from government records.
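
Those margins are easy to check for yourself. Here is a minimal sketch using the standard normal-approximation formula for a 95% margin of error, assuming simple random sampling and the worst-case 50/50 split (the function name is just mine, for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n, using the
    normal approximation; p=0.5 is the worst case (widest margin)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n =   1000: +/- 3.1 percentage points
# n =   2000: +/- 2.2 percentage points
# n =  10000: +/- 1.0 percentage points
```

Note that quadrupling the sample size only halves the margin of error, which is why pollsters rarely bother going much beyond a few thousand respondents.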

It ought to be clear where this is leading. How about, rather than maintaining our thin veneer of universal suffrage with all its attendant problems of unrepresentativeness, we acknowledge the fact that we already live in a political system dominated by the voting lottery, and adjust accordingly? Of course there are costs, but there are also real benefits, benefits which we would be much better equipped to realise if we were honest about our political system and learned to live with it.

People have every reason to worry about attempts to disenfranchise them. In the USA, very significant effort goes into attempts to disenfranchise black voters because of their tendency to vote for the Democratic Party. But the voting lottery is different both in its intention and in its effects: while we would stop even pretending to care about most voters as individuals (as though we ever could in a nation of 65 million!), we would give much greater weight to their views as members of groups. And voting is far from the only way to engage in politics: so long as we have free speech and a free press, those who are not randomly selected for the ballot will have the opportunity to influence the votes of those who are, through force of persuasion.

There's a Chesterton-like paradox to the suggestion that we should improve our democracy by removing the votes of most citizens. But the idea, I maintain, is not ridiculous. Certainly no more ridiculous than the idea that rather than vote on every single decision, we should delegate this to some 650 people, mostly white men, all living primarily in central London. I urge you to consider being explicit about the voting lottery which we already have - and to consider how it might be put to better use.

Saturday, 7 October 2017

A Retraction and An Apology

Several months ago I wrote a mostly-serious essay arguing the cosmopolitan case for #SpendTheSix. In one line I claimed that:
Typical practice during the days of the old Empire, as best we can tell, was to spend around 7% of GDP on the military.
I can't remember if I made any attempt to check this claim at the time, but it seems unlikely. It was half-remembered from a book I read sometime in my teenage years - most probably Niall Ferguson's Colossus: The Rise and Fall of the American Empire. As my memory records it, the book was briefly discussing whether continuing US military dominance across the planet was financially viable, and argued that the US spends around 3.5% of its GDP on its military compared to the 7% or so which the old British Empire spent. Hence whatever barriers there were to continued US hegemony would not be financial, etc etc. It should be noted that not only am I only about 70% confident that Colossus was the book in question, but it is entirely possible that either I misread it at the time or that in the seven or eight years since my memory of the factoid has become confused. Certainly I do not wish either to accuse Mr. Ferguson of making this claim, or to suggest that my failure to properly check the claim when I wrote the essay was in any way excusable.

My attention was drawn again to this claim when, browsing Andrew Sabisky's Curious Cat, I discovered that he had cited this essay for the claim that "we did in fact historically spend the six, & not just during the cold war either". It occurred to me that I perhaps ought to check the veracity of this claim, so I quickly googled "historical british military spending". From the results it seems clear that I could not have done this when I wrote the original essay. First, this article on the BBC website:
"It's often thought the British army in the 19th Century just mowed down natives with a machine gun. This is a myth," says military historian Nick Lloyd.
"The most remarkable thing is that they often had no technical advantages and we managed it by spending only 2.5% of GDP on defence, which is not much higher than we have today."
Second, ukpublicspending.co.uk:
Defence began in 1900 at 3.69 percent of GDP but quickly expanded during the Boer War to 6.47 percent. After the war it contracted down to about 3 percent of GDP.
Third, this fascinating graph from ourworldindata.org:

The first part of Sabisky's statement is supported - we have, at times, historically spent the six. Technically the second part also holds up, in that we spent the six during various wars besides the Cold War, but since peacetime spending hovered around 2.5-3% of GDP, I think it would be fair to characterise the claim as misleading.

For what it's worth, I don't think the overall thrust of either Sabisky's or my argument is hurt to any great extent by this fact turning out to have been false - neither of us was arguing that we ought to spend the six because the Empire spent the six, merely trying to suggest that in historical perspective the claim would sound less absurd than it does to people who have only known the world of today. Nevertheless, it is entirely clear that I ought firstly to retract that claim, and secondly to apologise - to anyone who read my piece, to Andrew Sabisky, and to anyone else who encountered the claim indirectly through him or some other intermediary. It was not my intention to mislead, but I ought to have practised higher standards of scholarship - and hope that in the future I will do so.

Thursday, 5 October 2017

The Cult of the Composer: in lieu of an essay

NB: This is something I want to write as a proper essay, but have no idea about how to phrase. For this reason, I am simply stating the main claims and arguments here, with a view to converting them into an extended piece of writing at a later date.

  • Music is like cookery, and different from most other art-forms, in that it (a) is reproduced from a "recipe", and (b) generally does not seek to represent anything in particular - and even when it does, it does so in a very abstract way
  • There are very good reasons for not messing with non-reproducible artworks (such as the originals of paintings). There are good reasons to be careful about how we treat many representational artworks (such as poetry).
  • However, when these do not apply, we are generally very happy to modify, deface, and do whatever we like to artworks. Example one: we are happy to adapt cooking recipes, even when they come from very good chefs. Example two: we are happy to deface posters and prints of paintings. (Remember the Joseph Ducreux meme from a few years back?)
  • We should be more willing to carry out this kind of modification for music. By this I mean not just the kind of wholesale changes we already make (e.g. remixes, various classical pieces) but micro-changes.
  • By micro-changes I mean deciding that a certain chord is wrong and changing it, modifying a tune slightly, and all sorts of other small changes.
  • Composers are presumably good judges of what is good music, but the judgement of the composer is not infallible, and we should be willing to overrule them in cases where we think they have erred (or where tastes have simply changed!)
  • See for example these eight beautiful bars in Schubert's Unfinished Symphony, and the two-bar fart that follows them (from 1:20 in). I don't have a ready suggestion for how to continue the tune, but am quite certain that there are options much better than what Schubert went with.
  • Obviously if you are performing pieces for the public then you should make changes only after careful consideration, but this does not mean you should not make changes at all!
  • A good performer or composer can definitely improve on an already good piece, and this need not entail any disrespect to the original composer. See, for example, Marc-André Hamelin's excellent cadenza to Liszt's Hungarian Rhapsody no.2 (cadenza starts at 8:26, runs to around 11:40):
  • We're past the days in which books are the ideal medium for this, but it's sad that there's no book of "Mozart's piano works, as adapted by __". Nowadays, why not have a website of suggested micro-changes to pieces?
  • Try to come up with more suggestions for micro-changes. e.g. I reckon we could improve the descending lines at the climaxes of Finlandia (occurs more than once, e.g. at 3:56)

Saturday, 30 September 2017

The Sufficientarian Case for Feudalism

Most people think there is something morally wrong with the existence of poverty, to the extent that those who are in poverty - or at least the government which represents them - are entitled to forcibly extract resources from other people to end, reduce, or ameliorate poverty. This is what is meant by "social justice".

Views of this kind are often described as "egalitarian", but in fact one of the most plausible such views has nothing at all to do with equality. Sufficientarianism is the view according to which there exists a level which is "enough" for people; people below this line are entitled to the resources which bring them up to it, while those above are obliged to provide. Sufficientarianism has a lot of intuitive appeal: it is easy to see how a starving beggar might be entitled to the charity of a billionaire, but it is much harder to see how a comfortable homeowner, who while hardly a billionaire has no concern about where his next meal is coming from, would be entitled to this charity. We might still think a world in which the homeowner and the billionaire were more equal would be better, but this falls quite short of implying that the homeowner or his government has the right to forcibly redistribute from the billionaire to the homeowner.

Similarly we might think that the higher one lies above the line of sufficiency, the greater is one's obligation to bring others above the line; but again, this does not require one to take equality as any kind of fundamental value.

One consequence of sufficientarianism, often considered counterintuitive and sometimes considered damning, is what it implies in a world of people who are all or mostly below the line of sufficiency. If the measure of a society is the extent to which it brings people above this line, this seems to imply that we should worsen the lives of some of those who are already below the line in order to bring some others above the line. In extremis, with a world of 100 people narrowly below the line, sufficientarianism may require us to utterly ruin the lives of 99 of these people in order to marginally improve the life of the 100th so that she reaches the line.

There are of course ways to avoid this conclusion, but I sometimes think we are too quick to reject it. Suppose 100 people are caught in a prison camp, and all would rather die than continue to endure this miserable existence. To that end, they hatch an audacious escape plan which will enable a small number of their fellows to reach freedom. Those left behind will be heavily punished and tortured for their roles in the plot, so the plan could hardly be less egalitarian - yet it is still worth going through with, and it is worthwhile for those left behind to suffer for their fellows.

Is there a clear historical example of this? Indeed there is, and for much of history it dominated our planet. The idea that most people could live good lives is a distinctly modern one, a product of the industrial revolution. Before that, poverty, starvation, and abject misery were the norm and indeed the only possibility for 99% of the world's population. Simultaneously, however, there existed classes of knights who enjoyed lives vastly better than any villein or serf could have hoped for: eating well (by the standards of the time), enjoying education (such as there was), and without having to engage in backbreaking labour in the fields.

It is my contention that from a sufficientarian perspective, such arrangements made perfect sense: almost everyone was below what should be considered an acceptable level of wellbeing, but by the sacrifice of the many a few were enabled to live genuinely worthwhile lives.

In the modern world, with abundant food and water, with indoor plumbing and heating, it is hardly necessary to impoverish the masses in order to create lives worth living. But in the complacent post-scarcity society, it is easy to lose sight of the kind of sacrifices which were necessary for our ancestors. Feudalism was not a system of brutal oppression; rather it stands as the greatest monument to the nobility of the human spirit: the willingness to sacrifice oneself for the creation of lives which are truly worthwhile.

Monday, 11 September 2017

The Rhetoric of Desert

There are two ways in which a person can fail to deserve what they have. The first is that they are actually undeserving of it: the prodigal son does not deserve his father’s welcome, Job did not deserve to be tormented with destruction and agony. The second is that the concept of desert fails to apply: thus neither James Potter nor Severus Snape deserved the love of Lily Evans, because in the decision of whom she should marry desert is simply not a relevant factor.

These two situations are very different, yet we use the same phrase of “not deserving” to describe them both. This is liable to create dangerous confusion: when a good (or bad) is appropriate for distribution by deservingness, someone’s lack of desert generally provides a reason for taking that good away from them (and typically giving it to someone more deserving). Physical property is, in most naive views of the world, taken to be appropriate for distribution according to desert: thus a simple argument for economic redistribution would be that the poor are no less deserving than are the rich of worldly goods.

When a good is not appropriate for distribution according to desert - for example, love - the fact that someone is undeserving is no reason to remove the good from them. While most people naively think of private property as something to be distributed according to desert, this view is exceedingly rare among philosophers. The most obvious example of an anti-desert theorist is John Rawls, who argued that we cannot deserve anything at all: any good traits we possess are the results either of our environment or of our genes, neither of which we chose and therefore neither of which we can be credited for.

This anti-realism about desert does not - cannot - provide an argument for redistribution of goods. If desert is not real, then no goods can be appropriately distributed according to desert, and so the fact that the rich are no more deserving than the poor is no argument for redistribution. One may, of course, favour redistribution on other grounds, and this was Rawls’ purpose: to disarm desert-based arguments against redistribution! But if one only takes the conclusion of his argument - that the rich do not deserve their wealth - and puts it not into the context of Rawls’ wider theory, but rather the naive view that desert is real and is a moral basis for property, then one arrives at a rhetorically effective, but subtly self-contradictory, argument for redistribution. I suspect that many people who dabble in political philosophy without studying it in depth, including many politics undergrads, are liable to fall into this trap.