A Persian Cafe, Edward Lord Weeks

Monday 10 August 2015

Response to Dylan Matthews on EA and X-Risk

Dylan Matthews wrote a review of EA Global Berkeley on Vox.com, complaining that too much attention was paid to existential risk concerns - and in particular to concerns about artificial intelligence - and not enough to global poverty eradication. I was not at the weekend, so I have no particular reason to doubt his lived experience of the conference. That said, as a fellow Effective Altruist, and having listened to the first couple of sessions of the weekend before they disappeared from YouTube, I feel that I am in a position to respond to him. This is not intended to be hostile, but I think there are some fairly serious problems with his piece, and since no-one is especially likely to read this, I feel no particular need to be gentle.
"In the beginning, EA was mostly about fighting global poverty."
This is true in the sense that the term "Effective Altruism" came out of a philosophical tradition in which the key figures are Peter Singer and Toby Ord, a tradition which has tended to be highly concerned about global poverty. However, as Will MacAskill noted in his opening speech at the weekend, the modern EA movement comes from at least three different strands of people. There were those inspired by Singer, but there were also philanthropically minded people from the world of finance - most notably Holden Karnofsky and Elie Hassenfeld, who went on to found GiveWell - who separately thought that they should look for effectiveness when "buying their philanthropy". There were also people around what is now MIRI, who were making arguments which have since become mainstays in the study of existential risk. Global poverty has always been a concern of EA, but to find a time when there were movement leaders concerned about poverty but not about X-risk, you have to go back to before there was any single group recognisable as the modern EA movement.

"Declaring that global poverty is a 'rounding error' and everyone really ought to be doing computer science research is a great way to ensure that the movement remains dangerously homogenous and, ultimately, irrelevant."
If this were being said to the general public as a key message of EA, I would agree. Indeed, I've previously made a very similar argument: regardless of what is actually the most useful cause, third-world poverty is relatively non-weird and a very good cause to talk about when introducing new people to EA.

But this is not what we say to outsiders. In my experience, Sam (the founding president of Giving What We Can: Manchester) and I have occasionally mentioned AI risk to the uninitiated because it's sexy and exciting, but the bread and butter of our outreach has always been third-world poverty. The TED talk that we point people to is Peter Singer's rather than Robin Hanson's. There have been some articles about AI risk in mainstream media, but to the best of my knowledge they have, with a single exception, focused squarely on AI risk without drawing any comparison to global poverty. That exception? The very article I'm responding to right now.

Is it a problem if we say this kind of thing amongst ourselves? Well, the simple fact is that it may well be true, and if so then it's pretty important that we recognise that. Perhaps Dylan is concerned that we're engaging in a kind of "government house utilitarianism" when we make claims amongst ourselves which we wouldn't make to outsiders for fear of bad publicity. But if that's really your concern, then you should be open about the relative effectiveness of different causes and be damned, rather than pretend that third-world poverty is really the most important cause and hope that the world adjusts to fit your tastes.
"EA Global was dominated by talk of existential risks, or X-risks."
It's hard for me to comment on this in any detail, not having been there. But I am aware that there were a variety of different streams of talks on different topics. I don't have a list of those streams and can only remember what a couple of them were, so it's theoretically possible that the majority were about X-risk. But this seems unlikely. My concern here is that, due either to primarily attending X-risk-focused talks or to the people he happened to meet over drinks, Dylan may be projecting his own experience of the conference onto other people's.
"The only X-risk basically anyone wanted to talk about at the conference was artificial intelligence."
This is very believable, and if true it is a point on which I agree with Dylan's criticism. I hold no particular opinion on what the most likely X-risks are, but it does seem to me that AI risk has captured the imagination of many EAs in a way that no other X-risk has. (That's not to say it isn't the most dangerous risk, though: I genuinely hold no opinion.)

"And indeed, the AI risk panel — featuring Musk, Bostrom, MIRI's executive director Nate Soares, and the legendary UC Berkeley AI researcher Stuart Russell — was the most hyped event at EA Global."
It is certainly true that Elon Musk's appearance was hyped. But from my perspective it appeared that it was Musk himself who was hyped, rather than the particular panel on which he appeared. Which is fair enough, in that he is (among nerds, at least) a genuine celebrity, whereas it's hard to point to any other EAs who are famous outside the EA movement and outside academic philosophy.


The discussion of Pascal's Mugging is fair enough.
"The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it."
There are genuine arguments in favour of a "social discount rate", it is true. What surprises me is that Dylan links not to a discussion of this but to a discussion of the non-identity problem. The non-identity problem is less obviously connected to this issue than the question of a discount rate is, and in any case it isn't actually a problem. Then again, it's probably unfair to expect Dylan to have read the paper which proves it not to be a problem when that paper has yet to be written, let alone published, and currently only exists outside my brain as a couple of tweets. (Basically, I'm going to consider the non-identity problem from various meta-ethical standpoints and show that none of them would suggest it has any force. Hence if your normative ethics contain the non-identity problem, that's a problem for your ethics rather than a reflection of how the moral world really is.)

More to the point, suppose potential (or, if you're a determinist, future) lives ought to be treated as having lower value than current lives. (Incidentally, suppose we could change the past. Would we regard past lives as being of lower significance than present lives? And would we advocate that our descendants view our lives as less important than their own?) So what? Unless you think they are worth many orders of magnitude less, this won't have a hope of demoting X-risk from a uniquely imperative concern to merely one concern among many. Remember, the future is unimaginably huge.
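To make the orders-of-magnitude point concrete, here is a toy calculation. Every number in it (the count of potential future lives, the discount applied to them, the size of the risk reduction) is an assumption chosen purely for illustration, not an estimate I'm defending:

```python
# Toy numbers, purely for illustration; none of these are real estimates.
future_lives = 1e16        # assumed number of potential future people
discount_per_life = 1e-3   # value each future life at 1/1000 of a present life
risk_reduction = 1e-6      # assumed one-in-a-million cut in extinction probability

expected_value = future_lives * discount_per_life * risk_reduction
print(expected_value)  # 1e7: ten million present-life-equivalents, in expectation
```

Even with future lives discounted by three orders of magnitude, a one-in-a-million cut in extinction risk still comes out, on these made-up numbers, as equivalent to saving millions of present lives in expectation; you would need a discount many orders of magnitude steeper before that conclusion changed.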
"To be fair, the AI folks weren't the only game in town. Another group emphasized 'meta-charity,' or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty."
I understand his concern about this very well. Last year I was at the UK Effective Altruists retreat, and one of the sessions was a debate in which one side (Paul Christiano and Carl Shulman, IIRC) were supposed to argue in favour of holding off on donating and instead investing in order to donate more money later, while their opponents (I can't remember who they were) were supposed to argue that it is better to donate now. Both sides ended up agreeing that really it would be better to invest in making more people into EAs, which to me felt rather... cultish, maybe?

But I would dispute the claim that the current movement appeals only to computer science types. In the UK student EA population we have plenty of Philosophy and Economics students, with the occasional mathematician. I spent a considerable amount of time over the last year trying to introduce my flatmates to EA, and got precisely nowhere. One is a current CS undergrad, while another is a professional software engineer. I think it is true that EA appeals primarily, indeed almost exclusively, to highly-educated people, but the idea that they are all computer science people is, to me, laughable. Indeed, Dylan's claim seems like it may well be a textbook case of selection bias. "All these people in a movement do CS!" Well, try attending a conference which isn't held at the Googleplex in Silicon Valley. Come to Oxford and we'll show you the most armchair-ridden horde of philosophers you can imagine, all of them EAs. Come to Melbourne and we'll show you a beer-chugging army of EAs of all sorts. (Incidentally: San Francisco EAs are one-boxers. Oxford EAs are two-boxers. Melbourne EAs are kangaroo-boxers.)


While I'm writing this, there's one other thing I want to mention: in his original Voxsplainer about EA, Dylan claimed that EAs "tend to be favourable to the domestic welfare state", or something to that effect. I forget the wording, and it's an hour and a half after I intended to be in bed (hence the awful puns and racist stereotyping) so I'm not going looking for it now. Anyway: that's not my experience. EAs are in my experience no more left-wing than any other young and well-educated social group, and indeed contain a disproportionate number of libertarians.

Indeed, one of the things that originally got me interested in EA is how it completely wrecks the case for the welfare state: if you think we have positive duties to help other people, then within-country redistribution is a ridiculously poor way to go about that. If you want to defend domestic welfare from an EA perspective, then you should do it by arguing that a robust welfare system will aid domestic growth and so allow more donations to the third world. Claiming that domestic aid is a good way to directly promote global utility isn't even bullshit, it's a dirty great lie.
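To illustrate why redistribution at home compares so badly, here is a deliberately crude sketch assuming logarithmic utility of income; the income figures and transfer size are made up for illustration and are not real cost-effectiveness estimates:

```python
import math

# All figures are made-up illustrations, not real estimates (USD per year).
domestic_recipient_income = 15_000  # a poorer person in a rich country
global_recipient_income = 500       # someone in extreme poverty abroad
transfer = 1_000                    # a hypothetical transfer

def log_utility_gain(income, amount):
    """Gain in log-income 'utility' from receiving `amount` on top of `income`."""
    return math.log(income + amount) - math.log(income)

domestic_gain = log_utility_gain(domestic_recipient_income, transfer)  # ~0.06
global_gain = log_utility_gain(global_recipient_income, transfer)      # ~1.10
print(global_gain / domestic_gain)  # roughly 17x the gain from the same money
```

On these toy assumptions the same transfer does roughly seventeen times as much good abroad, and with more realistic income gaps (and interventions cheaper than direct cash) the ratio only grows.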
