On Sam Bankman-Fried and Effective Altruism
א. Preface
This is my first chance to write straightforwardly on this blog about philosophy, and while I plan to write a full introduction to what I want this space to be in the future, I figure I’ll offer a couple of disclaimers now:
I am probably retreading covered ground. This is a space for me to work out my thoughts, not to publish brilliant and original philosophical work (gotta leave that for the journals!).
I am gonna get stuff wrong. Tell me if you think I do, and we can fight about it.
I think many of the EA people are wonderful, smart, and good. Many of them are my friends, and I hope we can all engage in this argument in good faith.
With that out of the way, let’s begin.
ב. The Scandal
In the past week or so, a scandal has emerged surrounding the crypto billionaire, political donor, effective altruist, and philanthropist Sam Bankman-Fried (SBF). It now seems he was engaged in a Ponzi scheme, or something Ponzi-adjacent, moving money from FTX, his crypto exchange, to a hedge fund he founded and remained affiliated with: Alameda Research. The fraud is massive, involving billions of dollars, and most observers agree the scheme constitutes an ethical violation on an enormous scale.
SBF and his co-conspirators are particularly attractive as a media spectacle because of Bankman-Fried’s political connections (he was a top campaign contributor to Democrats in the 2022 election) and because of a lot of condescending, gossipy reporting on a polyamorous relationship among those at the top of FTX and Alameda Research. None of that is of particular interest here. What is of particular interest is SBF’s involvement in the Effective Altruism (EA) movement.
ג. The Problem
The concern here is clear: did SBF engage in fraud knowingly, believing he was morally justified?
The fact of the matter is that this is probably an unanswerable question: no one except Sam knows why he did what he did, and his motivations are likely muddled and unclear even to him. But here’s what I’d like to posit: the mere fact that an effective altruist justification of the scheme is possible presents a problem for Effective Altruism as a philosophical project. If Effective Altruism is uniquely positioned to produce results that even its proponents detest, then it, as a theory, is, to put it gently, in deep sh*t.
I’m not the only one who thinks so, either. William MacAskill, an ethical philosopher who is extremely influential in EA circles, thought it so dire a threat to EA that he wrote a lengthy Twitter thread on the subject. The rest of this post will try to untangle that thread and see if its contentions hold up.
ד. Contra MacAskill
After some preliminary remarks expressing his frustration and anger at SBF, as well as an explanation and renunciation of his ties with SBF, MacAskill dives into the philosophical meat:
I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.
This is an interesting start, and it makes what MacAskill is trying to do here clear. For EA to maintain its credibility, it cannot be associated with the FTX scheme. MacAskill goes on:
For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.
At this point, anyone familiar with EA should be scratching their head. EA opposed to ends-justify-the-means thinking? Isn’t that, like, the whole point of consequentialism? The answer: it is, and MacAskill knows it. What becomes clear as you read the rest of the thread (and its accompanying citations) is that he’s trying to thread a very important needle. Philosophically, EA believes that the ends justify the means, but practically, it must avoid engaging in that sort of moral reasoning. As I’ll get to later, this is untenable, but let’s take it at face value for the time being.
What MacAskill does next is cite some existing EA literature that makes this kind of argument, ostensibly to show that EA has always incorporated these ideas into its broader philosophy.
From MacAskill’s book What We Owe The Future
These are some pretty interesting selections, and I think they get to the real heart of the matter. I’ll start with the end, where he discusses “either endors[ing] nonconsequentialism or tak[ing] moral uncertainty seriously.” Frankly, I think the nonconsequentialism option is basically just an ass-covering maneuver. EA is consequentialist, near-dogmatically so; that’s just what it is. The moral uncertainty piece is very interesting, and maybe represents EA’s ticket out of this dilemma, but only in a way that dramatically undercuts the movement. A truly morally uncertain person just should not be a longtermist effective altruist.1
However, the first two arguments are interesting. They can essentially be summed up as:
The Hedging Argument: Ethical decisions are made under conditions of uncertainty, and standard ethical principles represent a good method for what is essentially hedging, or minimizing risk.
The Extraordinary Circumstances Argument: While there are valid circumstances in which violating standard ethical principles is justified, they are so infrequent that they aren’t worthy of concern.
Here, as I see it, is the trouble for MacAskill: neither of these arguments would have a chance of convincing SBF. The reason is simple: dramatic ethical decisions are always risky, they are always extraordinary, and they are exactly what EA tells you is the right thing to do. Let’s take these in reverse order:
EA is, at its core, a maximizing philosophy. You have to maximize utility, globally, and failure to do so is a moral failing. This means that to be the best person possible, you have to accrue as much money and/or power as possible and then direct it toward the most marginally efficient production of utility. One way to do that is to run a Ponzi scheme and give the money away to philanthropy.
When you are in control of vast resources, as EA incentivizes you to be and as SBF assuredly was, every decision you make with them is, by nature, extraordinary. Sure, you shouldn’t rob a baby to save a baby in Africa, but if you can rob BILLIONS to save BILLIONS, then you find yourself in exactly the kind of “baby Hitler scenario” MacAskill wants to pretend doesn’t actually arise. But it does.
When you are in control of vast resources, then by the nature of opportunity cost, every decision you make with them (including the decision not to act) is extremely risky. Not defrauding your crypto customers could mean not lobbying the elected representatives you could bribe into preventing the next pandemic. Isn’t the next pandemic also a massive risk? At the scale of billions of dollars, hedging becomes a useless tactic. Isn’t doing the quote-unquote wrong thing the less risky proposition in the face of global annihilation?
ה. Conclusion
So where does that leave EA, and what the hell do I mean by “justificatory ethics?” Well, I think EA has a fundamental flaw that allows its adherents to engage in moral reasoning that upends the whole project.
This is because, by its maximizing nature, EA incentivizes the very conditions under which the moral principles that are supposed to prevent its self-defeat get stripped away. It self-defeats its own self-defeat protection! Now, if William MacAskill and other EAs were simply willing to accept this result, perhaps they could survive as a niche, insulated community, although (as is becoming clear from the overwhelmingly negative media coverage of this entire affair) the movement probably couldn’t achieve the popular success it desires. But even members of EA resent what SBF did; they find it morally abhorrent. And yet they lack the very terms to explain why it’s wrong.
ו. Post Script
Well, that’s my first post done! I hope everyone liked it, I sure had fun writing it, and I hope I didn’t make anyone too mad! If you’re reading this on Tumblr, please follow me here, and if you’re reading this through my Substack, please subscribe; it’s free!
1. Moral uncertainty (and its cousin, Moral Pluralism) are super interesting ideas, and I hope to talk about them more in the future.