Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.
Over at Waking Up, we just introduced Playlists, which has been our most requested feature.
It took a while to do that, but that seems like a very auspicious change.
You can create your own retreats.
You can create playlists for any purpose.
Despite the name of the app, there's a lot of content there that is very good for sleep.
So you could create a sleep playlist.
Many of us fall asleep to audio these days.
So, thanks to the team over at Waking Up for producing that feature, among many others.
The app is continually improving.
And what else?
If you haven't seen Coleman Hughes on The View promoting his book, that is worth finding on YouTube.
Coleman was recently on the podcast discussing the book, The End of Race Politics.
He went on The View to do that.
As you might expect, he was bombarded with a level of moral and political confusion that is genuinely hard to deal with in a confined space when one is short on time.
And I have to say, he really did a perfect job.
I mean, it was absolutely masterful.
So, it's worth watching in case there was any doubt in your mind about Coleman's talents.
If ever there were a commercial for the equanimity that can be achieved through mindfulness, that was it.
So, bravo Coleman!
In the last housekeeping, I acknowledged the death of Danny Kahneman.
I went back and listened to my podcast with him, recorded at that event at the Beacon Theater in New York about five years ago.
I was pleasantly surprised.
It's often the case that live events don't translate into the best podcasts.
I really thought this was a great conversation, and Danny was really worth listening to there.
So, that was episode 150.
If you want to revisit it, I really enjoyed hearing it again.
Okay, today I'm talking to Will MacAskill.
Will is an associate professor in philosophy and a research fellow at the Global Priorities Institute at Oxford University.
He is one of the primary voices in a philanthropic movement known as Effective Altruism, and the co-founder of three non-profits based on EA principles: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism.
He is also the author of several books, including Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference, and most recently, What We Owe the Future.
However, today we don't talk much philosophy.
Rather, we do a post-mortem on the career of Sam Bankman-Fried and the implosion of FTX, and look at the effect that it's had on the effective altruism movement.
When we recorded last week, Sam had not yet been sentenced, but he has since, and he was sentenced to 25 years in prison, which is not as much as he could have gotten, but certainly more than the minimum.
I must say, that strikes me as too long a sentence.
You'll hear Will and me struggle to form a theory of mind of Sam; we discuss the possibilities at some length. But when you look at some of the people who don't get 25 years in prison for the malicious things they do, I don't know, it does not strike me as a fair sentence.
Perhaps I'll talk about that more some other time.
Anyway, Will and I talk about the effect that this fiasco has had on effective altruism, the character of the EA community, potential problems with long-termism.
We have a brief sidebar discussion on AI risk.
We discuss the effects of the FTX collapse on Will personally, and other topics.
There's no paywall for this one, as I thought everyone should hear what Will has to say on this topic.
As you'll hear, despite the size of the crater that Sam Bankman-Fried left on this landscape, I consider the principles of effective altruism untouched.
And while I've always considered myself a peripheral member of the community, you'll hear me discuss the misgivings I have with it once again here.
I've discussed them on previous podcasts as well.
I just think the backlash against EA is thoroughly wrong-headed, and Will and I talk about that.
As always, if you want to support the podcast, there's one way to do that.
You can subscribe at SamHarris.org, and if you can't afford a subscription, you can request one for free.
Occasionally I hear rumors that someone has requested a free subscription and didn't get one.
That should never happen; something has gone wrong, so check your spam folder.
Just request again if that happens to you.
We don't decline any of those requests.
And now I bring you Will MacAskill.
I am back here with Will MacAskill.
Will, thanks for joining me again.
Thanks for having me on.
So, we have a lot to talk about.
I've been wanting to do a post-mortem with you on the Sam Bankman-Fried FTX catastrophe.
I don't think that's putting it too strongly, at least in EA circles.
So, we're going to talk about what happened there, your perception of it, what it has done to the optics around effective altruism, and perhaps effective altruism itself.
Where should we start here?
I mean, perhaps you could summarize what Sam Bankman-Fried's position was in the EA community before the wheels came so fully off.
Where did you meet him?
He certainly seemed like a promising young man who was going to do great things.
Perhaps we should take it from the top.
Sure, I'm happy to.
And yeah, he did, from my perspective, seem like a promising young man, even though that's very much not how it turned out.
So I first met Sam all the way back in 2012.
I was giving talks for a new organization I'd set up, 80,000 Hours, which were about how you can do good with your career.
And I was going around college campuses speaking about this.
I can't remember who, but someone put me and Sam in touch.
I think he had been quite active on a forum for people who are interested in utilitarian philosophy, and ideas like Earning to Give had been discussed on that forum.
As a result, we met up for lunch and he came to my talk.
He was interested in a number of different career paths at the time, so politics, Earning to Give was one.
Perhaps we should remind people what Earning to Give means because it's really the proper framing for everything Sam was up to.
Sure, so Earning to Give was the idea that, rather than, say, directly working for a charity, you could deliberately take a career that was higher paying, something you were perhaps particularly good at, in order to donate a significant fraction of your earnings, where, depending on how much you made, that might be 50% or more.
The core idea was that, well, you could, say, become a doctor in the developing world, and you would do a huge amount of good by doing that.
Or you could earn more and donate enough to pay for many doctors working in the same cause, and thereby perhaps do even more good again.
This was one of the things that I was talking about at the time.
And he found the ideas compelling.
We discussed it back and forth at the time.
I next met him something like six months later at a vegan conference.
We hadn't been much in touch in that period, but then he told me that he'd got an internship at Jane Street, which is this quantitative trading fund.
That was very impressive, I thought.
He seemed just this very autonomous, very morally motivated person.
Animal welfare was his main focus at the time.
He said later he'd also asked some animal welfare organizations: would they rather have his time, would they rather he work for them, or would they rather that he go make money in order to donate it to them?
And they said, we'd rather have the money.
And so he went and did that at Jane Street, but then subsequently left and set up Alameda Research, a cryptocurrency trading firm.
And then a couple of years later, an exchange, as in a platform where others could trade cryptocurrency, called FTX in 2019.
Those seemed to be incredibly successful.
So by the end of 2021, he was worth tens of billions of dollars.
The company FTX was worth $40 billion, and he seemed to be living up to his kind of claims.
He was saying he was going to donate everything, essentially everything he earned, 99% of his wealth, something like that.
And through the course of 2022, he had actually started making those donations too, and had donated well north of a hundred million dollars.
But then, as it turned out in November, the company was not all that it seemed.
There was, you know, what you could call a run on the bank, except it wasn't a bank.
So there was a loss of confidence in FTX.
A lot of people started withdrawing their money, but the money that customers had deposited on the exchange that should have been there was not there.
And that should not have been possible.
It must have been the case, therefore, that Sam and the others leading FTX had misappropriated that money in some way, all the while saying that the assets were perfectly safe, that they were not invested.
That led to the complete collapse of FTX.
Three other people who were high up at FTX or Alameda, Caroline Ellison, Gary Wang, and Nishad Singh, all pleaded guilty to fraud a couple of months after the collapse.
Sam did not plead guilty, but there was a trial at the end of last year, and he was found guilty.
And what is his current state?
He's in jail, is he awaiting an appeal?
Do you have up-to-the-minute information on his progress through the criminal justice system?
Yeah, so he's in jail and he's awaiting sentencing, which will happen next week, I think.
So, I guess one thing we should talk about is some theory of mind about Sam, what his intentions actually were insofar as we can guess about them.
Because there are really two alternate pictures here which give a very different ethical sense of him as a person, and of the situation so many people were in, giving him their trust.
Perhaps we can just jump there.
Do you think this was a conscious fraud?
I mean, perhaps there are other variants of this, but I'll give you the two that come to mind for me.
Either this was a conscious fraud, where he was quite cynically using the concepts of effective altruism, but his heart was never really in that place, and he was just, you know, trying to get fantastically wealthy and famous, misappropriating people's funds to that end, and it all blew up because of just bad luck on some level.
So, he was kind of a Bernie Madoff-style character running something like a Ponzi scheme or some unethical variant of misappropriating people's funds.
Or, alternately, and I think quite differently, he was somebody who, based on what he believed about the actual ethics of the situation and probability theory, was taking risks that he shouldn't have taken, obviously, given the outcome, but that may well have paid off. And he was taking these risks because he wanted to do the maximum amount of good in the world with as much of the resources available as he could get his hands around.
And he was just placing, in the end, some silly bets that he was allowed to place in a totally unregulated space.
And it catastrophically failed, but it was by no means guaranteed to fail.
And he was, on some level, a good guy who was ruled by some bad or at least unrealistic expectations of just how many times you can play a game of roulette and win.
Perhaps there's some middle position between those two cases, but what's your sense of his actual intentions throughout this whole time?
Yeah, so this is something that I've now spent many months over the last year and a half really trying to understand.
I didn't know about the fraud or have suspicions about the fraud at the time, so my understanding of things here is really me trying to piece together the story on the basis of all that's come out as a result of the trial and media coverage over the last year and a half.
One thing I'll say kind of before we talk on this, it's very easy once you start getting into trying to inhabit someone's mental state to start saying things where, you know, it sounds like you're defending the person or something.
And so, yeah, I just want to be clear on just how bad and how harmful what happened was.
So, you know, a million people lost money.
The scale of this is just unbelievable.
And actually, recently the prosecution released Twitter messages that Sam had received during the collapse, and they're really heartbreaking to read.
One is from a Ukrainian man who had fled Ukraine, and in order to get his money out, put the money on FTX.
Another person who was going to be made homeless, had four children he needed to feed.
On that point, Will, let's just linger for a second.
I had heard that a lot of the money was getting recovered.
Do you know where that process is and how much has been recovered?
Yeah, so as it turns out, customers will receive all of the money they put on the exchange, as measured in terms of the value of what they put on the exchange in November 2022.
So the value as of that date as opposed to the amount they initially put in at whatever date.
Yes, but also as opposed to the amount today.
So often people will say: I put Bitcoin on the exchange, and Bitcoin is now worth more than it was then.
The standard narrative of this has been that that has happened because of the rise in crypto prices, and because Anthropic, a particular investment that was made, has done well.
My best understanding is that actually that's not accurate.
That has helped, but even putting that to the side, even as of September last year when there had not been a crypto rise, customers would have been made whole.
So the issue was not that money was taken and then just lost in the sense of spent or just lost on bad trades or something.
Instead, the money was illegally taken and invested into assets that couldn't be liquidated quickly.
So, already we're, I think, a far distance from someone like Bernie Madoff, right, who was, whatever the actual origins of his behavior, whether he was ever a legitimate investor, for the longest time he was making only sham investments and just lying to everyone in sight and running a proper Ponzi scheme.
Yeah, that's right.
Bernie Madoff was committing the fraud for about eight years, and any time a client of his wanted to withdraw money, he would need to raise more money in order to give back what should have been there to the customers, which is the central mechanism of a Ponzi scheme.
With Alameda and FTX, and this is one of the things that's so bizarre about the whole story, and even tragic, the companies themselves were making money, in fact large amounts of money.
So in that sense they were not Ponzi schemes.
The customer assets held on FTX that should have been there, should have been bank vaulted, separate from everything, got used by Alameda Research, the trading firm, in a way that should not have even been possible.
And so you were asking on how to interpret the story, and you gave two interpretations.
One was that effective altruism was a sham.
His commitment to that was a sham.
He was just in it for his own power, his own greed.
The second was that it was some carefully calculated bet that was illegal, made with good intentions, or perhaps twisted intentions, but that didn't pay off.
My personal take is that it was neither of those things.
And obviously I'll caveat, you know, I've followed this a lot because I've really tried to resolve the confusion in my mind about what happened.
But I'm not an expert in this.
It's extremely complicated.
But I think there are a few pieces of evidence for thinking that this just wasn't a rational or calculated decision; no matter what utility function Sam and the others were following, it did not make sense as an action.
And one piece of evidence is actually just learning more about other white-collar crimes.
So Bernie Madoff being one example, but the Enron scandal and many others too.
So there's this Harvard business professor, Eugene Soltes, who's written this really excellent book called Why They Do It, about white-collar crime.
And he argues, on the basis of interviews with many of the most famous white-collar criminals, quite strongly against the idea that these crimes are the result of some sort of careful cost-benefit analysis.
In part because the cost-benefit analysis just does not make sense: often these are actually really quite wealthy, really quite successful people, who have not that much to gain, but everything to lose.
But then secondly, looking at how the decisions actually get made, the word he often uses is mindless.
You know, it's like people aren't even paying attention.
This was true for the former CEO of McKinsey, who gets off a call with the board of Goldman Sachs, I think, and immediately, 23 seconds later, calls a friend to tell him about what happened in the board meeting, in a way that was illegal insider trading.
This was not a carefully calculated decision.
It was just irrationality.
It was a failure.
A failure of intuition rather than reasoning.
That's my best guess at what happened here as well.
I think what happened, it seemed, is that...

Actually, let me layer on a few points that could certainly bias some people in the direction of more calculation and less impetuosity than that. Because I think both within EA circles more widely, and certainly, it seems, within Sam's circle, there were some ideas that will strike people as strange, but nonetheless difficult to refute, you know, rationally or logically. Which is to say, it all hinges on a topic you and I, I believe, have discussed before on the podcast, which comes up in any discussion of what's called long-termism, which I think we'll get to.
It comes down to just how to integrate rationally any notion of probability, especially probabilities where one side of the decision tree represents some extraordinarily large possible gains, right?
So, I believe Sam at one point was accused of believing, or he may have said something along these lines, and I forget if it was in positive terms or in negative terms of avoiding, you know, extinction, but it sounds like he was willing to just toss a coin endlessly, with the risk of ruin on one side, given a sufficiently large expected value on the other.
It's just like, you know, if you have a chance to win a million dollars on one side and lose a hundred thousand on the other, you should just keep tossing that coin, because your expected value is 50% of winning a million on one side and 50% of losing a hundred thousand on the other. So your expected value is $450,000 every time you toss that coin.
But of course, if you only have $100,000 to lose, you can lose everything on your first toss, right?
And so he just seems to be someone who was looking at the expected value proposition somewhat naively and looking at it with everyone else's money on the line.
Or at least, you know, certain things I've heard said of him, or by him, suggest that was the case.
So, perhaps bring in your beliefs about how one should think about probability, and the kind of ends-justify-the-means thinking that many people believe has corrupted EA more generally, with Sam as kind of the ultimate instance of a cautionary tale there.
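[Editor's note: to make the coin-toss arithmetic above concrete, here is a minimal Python sketch of the risk-of-ruin point. The function and parameter names are illustrative, not anything from the episode. Each toss has an expected value of 0.5 × $1,000,000 − 0.5 × $100,000 = +$450,000, and yet a bettor who starts with only $100,000 is ruined roughly half the time.

```python
import random

def simulate_bettor(bankroll=100_000, win=1_000_000, loss=100_000,
                    p_win=0.5, max_tosses=1_000):
    """Take the same positive-expected-value coin toss repeatedly,
    until ruin (unable to cover a loss) or max_tosses is reached."""
    for _ in range(max_tosses):
        if bankroll < loss:
            return bankroll, True   # ruined: cannot cover the downside
        bankroll += win if random.random() < p_win else -loss
    return bankroll, False

# Expected value per toss: 0.5 * 1_000_000 - 0.5 * 100_000 = +450_000.
trials = 10_000
busts = sum(simulate_bettor()[1] for _ in range(trials))
print(f"Ruined in {busts / trials:.1%} of {trials} runs")   # roughly half
```

A positive expected value never saves the bettor who cannot cover the downside; this is the intuition behind risk-aware betting rules such as the Kelly criterion, which cap the fraction of a bankroll staked on any one bet.]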
Sure.
So, yeah, one thing that's just absolutely true is that Sam seemed, and in fact was, unusually risk-tolerant.
And at the outset, like when the collapse happened, I absolutely was worried that perhaps what had gone on was some sort of carefully calculated fraud, carefully calculated willingness to break, just to break the law in the service of what he thought was best.
You know, I was worried that maybe there would come out a spreadsheet that did a little cost-benefit analysis of fraud and was clear for all to see.
I think there are just good reasons for thinking that's not what happened.
I'll discuss that first and then let's come back to the important points about attitudes to risk and ends justify the means reasoning.
But just briefly on why that's, I think, not what happened.
So one is just the overall plan makes so little sense if it was a long con.
Like a con kind of from the very start.
So if that was the case, then why would they be trying so hard to get regulated?
Why would they be so incredibly public about what they were doing, and in fact very actively courting press attention?
Or in fact having Michael Lewis, one of the world's leading financial writers, following them around, having access.
Why would they be associating themselves with EA so much as well, if that wasn't something they actually cared about?
And then the second thing is, it's just absolutely agreed by everyone that the companies were a shambles.
There was not even the most basic accounting, not the most basic corporate controls.
In June of 2022, there was a meeting between the high ups where they were very worried because it looked like there was a $16 billion loan from FTX to Alameda, and they thought that was really bad.
It turned out that was a bug in the code, and actually it was only an $8 billion loan.
They were apparently elated at the time; so they didn't know what their assets were to within $10 billion.
In fact, it was at that time that they discovered that they had been double-counting $8 billion.
So customers for FTX, one way they could put money on the FTX exchange was by sending money to Alameda that Alameda then should have given to FTX.
And it seems that at that point, in June, they realized that had not been happening.
Alameda had thought that money was within Alameda, legitimately, I guess, and FTX had thought that money was there.
And now, I'm not going to claim I know that wasn't conveniently overlooked or something, but at least Caroline on the stand testified that she at least didn't know that Sam knew that that money, prior to that point in time, was in Alameda, whereas it should have been in FTX.
And that's the way almost all the money flowed from FTX to Alameda.
There was also a lending program, which was really focused on by the prosecution, but we can go into that in more detail as well. I think that's actually not where the action was in terms of how the money moved.
And if you're really trying to get into the heads of the people at this time: okay, let's now suppose they're the most ruthless consequentialists ever, and they want to just make as much money as possible.
Let's say it's just dollars raised, they're not risk averse at all.
Which wasn't even true, but let's even assume that.
Why on earth would they take that money and then invest it, 5.5 billion of it, into illiquid venture investments?
So, companies, basically.
It was obviously posing enormous risks to them, and the gains were really quite small compared to the potential loss of not only everything in the company, but also the huge harm that it would do to the rest of the world, and to the effective altruism movement itself.
It really just makes no sense from a utilitarian perspective at all.
And that's why, when I try and inhabit this mode of them making these kind of carefully calculated, rational decisions, I think there's too many facts that seem in tension with that, or inconsistent with that, for that to be my best guess at what happened.
So what do you think was actually going on?
Did you ascribe this, in the end, to some version of incompetence combined with a dopaminergic attachment to just winning at some kind of gambling task?
Yeah, I mean, I see the deep kind of vice that ultimately drove this all as hubris. They were not very experienced, they were very smart, and they grew a company in what was, for a while, an impressive way.
And Sam in particular, I just think, thought he was smarter than everyone else.
And this was something that I didn't like about Sam, I noticed during the time.
He would not be convinced of something just because other people believed it; even if everyone else believed X and he believed Y, that wouldn't give him any pause for doubt.
And so I think he got kind of corrupted by his own success.
I think he felt like he had made these bets that had paid off in spite of people being sceptical time and again, and so he just thought he was smarter.
And that meant that very basic things just didn't happen: having good accounting, having the kind of adults, professionals, come in who could do risk management and actually point out what on earth was going on with where different stashes of money were.
Because this is another thing that shows just how insane the whole venture was.
At the time of collapse, they just didn't know where most of their assets were.
You know, there would be hundreds of millions of dollars in a bank account somewhere, and they wouldn't even know it existed.
The bank would have to call them to tell them, by the way, you've got these assets on hand.
Again, if it was a carefully calculated ploy, you would want to know where all of your assets were in case there was a mass withdrawal of customer deposits.
And so, I think, yeah, that hubris.
There was also, kind of, not a risk calculation, but an attitude to risk, where many people, I think, when they're in the position of really quite rapidly running a multi-billion dollar company, would think, holy shit, I should really get some experienced professionals in here.
And they would be quite worried: have I attended to everything? Basically, have I got this company under control?
And I think that was not at all how Sam and the others were thinking.
And then the final thing to say just is that this isn't saying that they didn't commit fraud.
From June onwards, after this hole had been discovered, I think it becomes pretty clear that, you know, there were just brazen lies to try to get out of the position that they'd put themselves in.
I think there are also other cases of things that seem like clear fraud, though they are not on the kind of eight-billion-dollar scale.
And this was fraud, you think, to conceal the hole in the boat that was putting everything at risk?
Or this was fraud even when things appeared to be going well and there was no risk of oblivion evident?
What was the nature of the fraud, do you think?
Yeah, I mean, again flagging that there's probably lots that I'm saying that are wrong because this is, you know, it's complex and I'm not confident.
There's like lots of different stories.
My guess is both.
Like in the trial, one thing that came up and I'm surprised didn't have more attention was that FTX advertised it had an insurance fund.
So, a customer could borrow funds on the exchange in order to make basically a bet with borrowed money.
You know, on other exchanges, you could easily go negative by doing that, and that would mean other users would have to pay to cover that loss.
FTX had this automatic liquidation engine that was quite well respected.
But they said, even if that fails, there's this insurance fund that will cover any losses.
However, the number that was advertised on the website seemed to have been created just by a random number generator.
So that seems like really quite clear fraud.
And I don't know, it hasn't been discussed very much, but it seems totally inexcusable, and it was going on even when the going was good.
But then, the big fraud, the $8 billion, it seemed like that really started kicking in from June of 2022 onwards.
Though I'll also say, we can talk about my interactions with the people there.
Looking back, I think it seems to me like they did not know just how bad the situation they were in was.
Yeah, well, let's talk about your interactions and let's focus on Sam to start.
I mean, you bring up the vice of hubris.
I only spoke to him once, I believe.
It's possible I had a call with him before I did a podcast with him, but he was on the podcast once, and this was very much in the moment when he was the darling of the EA community.
I think he was described as the youngest self-made person to reach something like $30 billion.
I think he was 29 and had $29 billion or something at the time I spoke to him.
And again, the purpose of all of this earning was to do the maximum amount of good he could do in the world.
He was just earning to give as far as the eye could see.
I didn't really encounter his now famous arrogance in my discussion with him.
He mainly just seemed smart and well-intentioned.
And I had no reason to doubt him; I knew nothing about the details of FTX, apart from what he told me.
And, you know, I think anyone in our position of talking to him about his business could be forgiven for not immediately seeing the fraudulence of it, or the potential fraudulence of it, given that he had quite sophisticated venture investors invest with him early on, and, you know, they didn't detect the problem, right?
And we were not in the position of investing with him.
But in the aftermath, there are details of just how he behaved with people that struck me as arrogant to the point of insanity, really.
I mean, in these investor calls, apparently, while describing his business and soliciting, I think, hundreds of millions of dollars at a minimum from firms like Sequoia, he is simultaneously playing video games, and this is, you know, celebrated as this delightful affectation.
But, I mean, clearly he is someone who thinks, you know, he need not give people 100% of his attention because, you know, he's got so much bandwidth, he can just play video games while having these important conversations.
Yeah.
And there were some things in Michael Lewis's book that revealed, or at least seemed to reveal, that he was quite a strange person and someone who claimed on his own account, at least to Lewis, that he didn't know what people meant when they said they experienced the feeling of love, right?
So he's neuroatypical at a minimum.
And I just offer this to you as just a series of impressions.
How peculiar a person is he, and shouldn't there have been more red flags earlier on, just in terms of his ethical integrity, or his capacity for ethical integrity? I mean, if someone tells me that they have no idea what anyone means when they say they love other people, that is an enormous red flag.
It's something that I would feel compassion for, that the person is obviously missing something.
But as far as collaborating with this person or putting trust in them, it's an enormous red flag.
I don't know at what point he told Lewis that, but What was your impression, or is your impression, of Sam as a person?
And in retrospect, were there signs of his unreliability ethically far earlier than when the emergency actually occurred?
Sure, so there's a lot to say here.
Briefly, on the not feeling love: my descriptions of Sam and feelings about Sam are quite varied.
His inability to feel love wasn't something that seemed striking or notable to me.
After the Michael Lewis book and lots of things came out, it seemed like he had just emotional flatness across the board.
And whether that's a result of depression or ADHD or autism is not really that clear to me.
But that wasn't something that seemed obvious at the time, at least.
I guess I interact with people who are relatively emotionally flat quite a lot.
I certainly wouldn't have said he's a very emotional person.
He did seem like a very thoughtful, incredibly morally motivated person all the way back to 2012.
I mean, his main concern was for the plight of non-human animals on factory farms for most of that time.
It's kind of an unusual thing to care about if you're some sort of psychopath or something like that.
When I first reached out to Sam after FTX had been so successful, I talked to him about, you know, okay, you've started this company, it's a crypto company.
Isn't crypto like, you know, pretty sketchy?
Like, how much have you thought about risks to the company and so on?
And there was a narrative that came from him and then was echoed and emphasized by Nishad Singh, who in my experience was a really crucial part of the story, a really crucial part of my interface with that world.
And the story I got told was FTX is trying, quite self-consciously, to be much more ethical than the standard crypto exchange or anything going on in the crypto world.
And there are two reasons why we need to do that, even putting aside the intrinsic desire to act ethically.
One is because they were trying to get regulated.
So you know, they were very actively courting regulation in the US.
They were these centre-left people; they were not the libertarians that normally populate crypto.
And they thought, you know, that being much more open, and open to regulation, was how they could get the edge over their competitors.
And then secondly, because they planned to give the proceeds away, they knew that they would face a higher bar for criticism.
And that claim got made to me over and over again, not just by Sam, but by Nishad.
Sam was very busy, so I spoke to him a number of times, half a dozen times or something, one-on-one, more times in group settings.
But I talked to Nishad, and, I mean, this is the thing that maybe breaks my heart the most about the whole story: he came across just as this incredibly thoughtful, morally motivated, careful, just kind person.
And I would ask him, kind of, okay, so why are you in the Bahamas?
And there would be an answer, which is that that's where they were able to get licensed.
Or I'd ask kind of, why is your apartment so nice?
And they would say, well, you can't really get kind of mid-level property in the Bahamas.
We just need somewhere that we can create like a campus feel.
And so yeah, it is nicer than we'd like.
Hopefully we can move over time to something a bit less nice.
And so it went, over and over again, with other kinds of ethical issues in crypto we could go into.
And yeah, over and over again, he was painting that picture.
And something that was just so hurtful and confusing is just, was he lying to me that whole time?
Like, was that just all false?
Or was he just like a gullible fool?
I haven't followed the trial in sufficient detail to know what his role was revealed to be in all of this.
I mean, where is he, and has he been prosecuted, and what do you think of his actual intentions at this point?
Yeah, I mean, so he pled guilty, I think, to fraud, among other things, and he testified.
In these pleadings, do you think this was, you know, just kind of a classically perverse prisoner's dilemma situation, where you have people, with the shadow of prosecution and prison hanging over them, willing to testify to things that the government wants to hear, but which are not strictly true?
I mean, what's your theory of mind for the people who pled guilty at this point?
Yeah, I mean, again, this is something that comes up in Eugene Soltes's book, and he talks about how it's a very strange aspect of the US legal system, not something that happens in the UK: the government will reward people, literally with their lives, for going on the stand and testifying, because those who testify will probably get no jail time.
And so that just does mean, you know, they can't tell lies, or not verifiable lies, but there are very strong incentives to present things in a certain way.
And again, I don't want to, this is all sounding much more defensive of Sam than I want to be, but the Soltes book talks about some people who would rehearse with their lawyers for hours and hours and hours in order to display appropriate conviction on the stand and so on.
And so the view of this that Michael Lewis took is just that, you know, they will have said true things throughout, but the tone of it is maybe a little different than it really was. There was a lot of, you know, the co-conspirators talking about how bad they felt, and how they knew what they were doing was wrong at the time.
They were really torn up.
That, you know, seems quite inconsistent with my experience of them, but maybe they were just incredible liars.
One question about that.
So they knew what they were doing was wrong, could mean many things.
It could mean that they knew that they were taking risks with people's funds that were unconscionable, you know, given the possibility of losing money that customers thought was safely on the exchange.
But that's not the same thing as stealing money and misappropriating it in a way that is purely selfish, right?
It's not like, we took money that was not ours and we bought, you know, luxury condominiums in the Bahamas with it and hoped no one would notice, right?
That's one style of fraud.
You tell me, is it possible that they thought they were going to, you know, wager this money on other real investments, you know, however shady some of these crypto properties were?
But they actually expected enormous returns as a result of that misappropriation and that money would come back safely into FTX and no one would lose anything in the end if everything worked.
Yeah, I mean, in terms of how things seem to me, I just think they didn't think the company was at risk.
Not at serious risk.
And here's a couple of, there's a few reasons why.
I mean, one is just how things felt to me, like, why I was so confused this whole time.
Like, you know, I visited the Bahamas a number of times in 2022, I never saw any kind of change in attitude from them over that time.
You would really think, if you're engaged in this major fraud, that something would seep out, some sort of cracks.
Maybe I'm a fool, but I did not see that.
And in fact, even, so in September, my last trip to the Bahamas, I heard from Michael Lewis that Sam had been courting funding for FTX from Saudi Arabia and other places in the Middle East.
And I do not love the idea of taking money from Saudi Arabia.
I have issues with that.
And it also just struck me as kind of odd.
And I was aware there was a crypto downturn.
So I talked to Neshad and I asked, look, is there anything up with the company?
Are you in trouble?
And he says no.
And we talk about this for some time; you know, it's not a passing comment.
And that, by any account, is like past the point when he, you know, allegedly had learned about the huge hole that the company faced.
Similarly, Michael Lewis, at around the same time, asked both Caroline and Nishad, as a kind of fun question: oh, what could go wrong with the company? Like, if this all goes to zero, what happened?
And again, he said there was no indication of stress upon hearing that question.
They had fun with it.
They were like, oh, maybe crypto is just a lot of air and everyone gets turned off, or maybe Sam gets kidnapped.
That was kind of one of their big worries.
But nothing that kind of leaked out there.
So given his guilty plea and his testimony, what's your belief about your conversation with Nishad at that point?
Do you think he was unaware of the risk, or do you think he was lying to you?
So I think another thing that came out during the trial, though I'm not sure if it was admissible as evidence, was that Nishad commented to the government, kind of immediately upon pleading guilty, that in that period he still thought that FTX would last for years.
And in terms of just giving an indication of what Nishad's personality was like, when the collapse happened, he had to be watched because he was on the verge of suicide.
He was so distraught about what happened to the customers, and I think was really quite close to taking his own life.
So then what sort of fraud was he pleading guilty to, if he's the kind of person who's suicidal when the wheels come off, as though he had no idea that this was in the cards, right?
Do you think he's just responding to all of the opprobrium and disgrace aimed his way in the aftermath, or do you think he was actually surprised, fundamentally, by the risk that was being run?
And if the latter, in what sense does he claim to be culpable for a conscious fraud?
Yeah, so I don't know about whether Nishad at this time was ignorant, as in just really did not know the risks they were running.
I don't know if he was just delusional.
So again, this is a thing that Soltes talks about: just, you know, the capacity for humans to create a narrative in which they're still the good guys.
I don't know, perhaps he thought that, yes, this was bad, but they would get out of it.
And so it was a little bit bad, but it'll be fine.
Maybe that was there, too.
Oh yeah, one thing that he does is he buys this three-and-a-half-million-dollar property for himself in October.
Again, it's just not the action of someone who, again on the stand, said how distraught he was when he was talking to Sam about this.
So all of these things are possible to me.
As for, you know, him pleading guilty, well, whichever of these is true, I think it would make sense to plead guilty if you're in that situation where, you know, there are huge costs to going to jail.
And, you know, there are various stories, but plausibly he just thought: look, yes, I knew what I was doing was bad.
I thought it was only a little bit bad.
Actually, I was wrong.
It was very bad.
It was extremely bad.
I'm willing to just fess up and take the hit I should get.
Like, there are various possible explanations there.
Right.
There was a lot made, I don't know if this appeared in the trial, but in the court of public opinion, there was a lot made of a text exchange that Sam had with somebody, I think a journalist or a quasi-journalist, in the immediate aftermath of the scandal, where he seemed to admit that all the effective altruism lip service was just that.
It was just the thing you say to liberals to make them feel good.
I forget the actual language, but it just seemed like he was copping to the fact that that part was always a ruse.
Honestly, when I read those texts, I didn't know how to interpret them, but it was not obvious to me that they were the smoking gun they appeared to be in so many minds that were ready to bury effective altruism as a scam.
Do you know the thread I'm referring to?
Yeah, I know the thread.
In a way, from the perspective of the brand of effective altruism, maybe it would have been better if that had been a big con, too.
But no, I think he believed in these ideas.
I think there he was deferring to what you might call corporate ethics that are really PR.
Companies will often make these big charitable donations to their local community and so on.
And in this case, everyone knows this is marketing.
I guess I don't know the details, but presumably FTX was doing stuff like that in the same way other companies do.
My interpretation of those texts is that that's what he was referring to.
Actually, it reminds me, the one concern I did have about Sam before the scandal broke, I don't know if this was contemporaneous with my conversation with him on the podcast, but I just remember thinking this.
When I heard how much money he had given away, and you referenced it earlier, it was something north of $100 million.
I'm always quick to do the math on that, and I recognize what a paltry sum that actually is if you have $30 billion.
It's an enormous amount of money out in the real world where people are grateful for whatever you give them, but it's analogous to somebody who has $30 million giving $100,000 away.
It's not a sacrifice.
It's a rounding error on their actual wealth, and it's certainly not the sort of thing that I would expect of someone for whom the whole point of becoming fantastically wealthy is to give all of it away.
I forget if I asked him a question about the pace of his giving during that podcast, but I know that some people think, well, the best thing for me to do is to use these assets to make more assets in the meantime, and then I'll give it all away later on.
Given the urgency of so many causes and given the real opportunity to save lives and mitigate enormous suffering every day of the week, starting now, my spidey sense tingles when I hear a fantastically wealthy person deferring their giving to the far future.
I'm wondering what you think of that.
Sure.
Yeah, I think that wasn't really an issue.
So a couple of reasons.
One is just, you know, his net worth is basically entirely in FTX.
There's no way of converting that.
So if you're in a startup and all your wealth is in the equity of that startup, there's not really any way of converting that wealth into money, the sort of thing that you could donate.
You have to basically keep building the company until you can have an exit.
So get acquired or sell the company, and then you can become more liquid.
And then the second thing is just, at least relative to other business people, he was very unusual in wanting to give more and give quickly.
So, I mean, I advised on the setup of his foundation, and it got a lot of criticism for scaling up giving too quickly.
So going from kind of zero to, you know, one to two hundred million in a year is like a very big scale-up.
And it's actually just quite hard to do.
And so I guess, if you were asking me: yes, it's a tiny fraction.
And I agree with the point in general that when someone who's a centibillionaire gives, you know, a hundred million dollars to something, that is just really not very much at all, especially once they've had that money for decades and could really have distributed it.
But in that case, the way it seemed to me at the time, and I guess still does just seem to me, was like basically consistent with someone trying to scale up their giving as fast as they can.
And in fact, in a way where, you know, he plausibly should have been paying more attention to the business and not getting distracted by other things.
Yeah.
So what, if anything, does this say about effective altruism?
I guess there's an additional question here.
What has been the effect, as you perceive it, on EA and the public perception of it, the fundraising toward good causes?
Has it forced a rethinking of any principles of effective altruism, whether it's earning to give or a focus on long-termism, which we haven't talked about here yet, but which you and I have discussed before?
How large a crater has this left, and what has been touched by it, and is there any good to come out of this?
Just give me the picture, as you see it, of EA at the moment.
Yeah, I mean, huge harm.
Huge harm to EA, where, you know, at the time of the collapse, I put it at like 20% or something that the EA movement would just die, that this was a killer blow.
And so, obviously in terms of the hit to the brand, you know, so many people think ill of EA now, or are critical of EA now, in all sorts of different ways.
In a way that's not surprising.
It was this horrific, horrific thing.
I don't think it happened because of EA.
I think it happened in spite of EA.
I think EA leaders and communicators have been very consistent on the idea that the ends do not justify the means.
Really, since the start.
And really, this goes back centuries.
Go back to John Stuart Mill.
And actually, even Sam knew this.
So again, as part of just trying to figure out what happened, I did some Facebook archaeology.
There's an essay by Eliezer Yudkowsky called The Ends Do Not Justify the Means Among Humans, basically making the classic point that you are not a god at calculation, even if you're a 100% consequentialist, which I don't think you should be.
But even so, you should follow heuristics that are tried and true, including heuristics not to violate side constraints.
And this was shared in a group, in a kind of discussion group.
And this is, you know, well before FTX.
Sam's response was like, why are you even sharing this?
This is obvious.
Everyone already knows this.
So yeah, this was, in my view, in spite of EA, not because of it.
But yes, the damage is huge.
Also internal damage as well.
Morale was very, very low.
Trust was very low.
The thought being, well, if Sam and the others did this, then who knows what other people are like?
And there has just been an enormous amount of self-reflection and self-scrutiny, whether because of this catastrophe itself, or just because, if there's ever a point in time for self-reflection, it was in the aftermath of that.
And so there's a whole bunch of things that have changed over the last year and a half.
Not in terms of the principles, because what is effective altruism?
It's the idea of using evidence and reason to try to make the world better.
That principle is still good.
I still would love people to increase the amount by which they are benevolent towards others, and increase the amount by which they think extremely carefully and are really quite intense about trying to figure out how they can have more positive impact with their money or with their time.
That's just still as true as ever, and the actions of one person in no way undermine that.
I mean, take any ideology, take any moral view that you can imagine, you will find advocates of that ideology that are utterly repugnant.
Yeah.
Hitler was a vegetarian, for example.
Exactly.
And Sam was too, and so vegetarians are having a bad time.
Yeah, exactly.
Exactly.
There have been a lot of changes to the institutions within Effective Altruism.
So it has essentially entirely new leadership now, at least on the organisational side.
So Centre for Effective Altruism, Open Philanthropy, 80,000 Hours, and the boards of at least some of these organisations are really quite refreshed.
This is partly just that a lot of people had to work exceptionally hard as a result of the fallout and got really quite burned out.
In my own case, I've stepped back from being on the boards of any of these main organisations, and I won't do that again, really, for quite a while.
One thing was just that I wasn't able to talk about this stuff in the way I really wanted to for over a year.
I spent, again, months, literal months, writing blog posts and rewriting them, and having them then kind of knocked back, because there was an investigation being held by Effective Ventures, one of the charities, and the law firm conducting it really didn't want me to speak while that was ongoing.
But then also because I think a healthier effective altruism movement is more decentralized than it was. And there was an issue when the collapse happened, that I was in the roles of being on the board of the charity, also being, if anyone was, a spokesperson for EA, but also having advised Sam on the creation of the foundation.
And that meant I wasn't able to kind of offer guidance and reassurance to the community at that time of crisis in a way that I really wanted to and wish I'd been able to.
And so I do think a healthier EA movement has greater decentralization in that way.
And there's some other things happening in that direction too, so various organizations are kind of separating, or projects are separating out legally and becoming their own entities.
Yeah, in the aftermath, I was certainly unhappy to see so many people eager to dance on the grave of effective altruism.
And in the worst cases, these are people who are quite wealthy and cynical and simply looking for an excuse to judge the actual good intentions and real altruism of others as, you know, patently false: there was never a there there, everyone's just in it for themselves, and therefore I, the rich, Ayn Randian type, should feel a completely clear conscience in being merely selfish, right?
It's all a scam, right?
And I just think that's an odious worldview and a false one, right?
It's not that everyone is just in it for themselves.
It's not all just virtue signaling.
There are real goods in the world that can be accomplished.
There are real harms that can be averted, and being merely selfish really is a character flaw, and it is possible to be a much better person than that, and we should aspire to that.
And I say this as someone who's been, to some degree, always somewhat critical or at least leery of EA as a movement and as a community.
I mean, I think I'm one of the larger contributors to it, just personally and just how much money I give to EA-aligned charities and how much I have spread the word about it and inspired others to take the pledge and to also give money to GiveWell and similar organizations.
But I've always been, and I've spoken to you about this, and I've said as much on this podcast and elsewhere: as a movement, it's always struck me as too online and, for some reason, attractive to, in the most comedic case, neuroatypical people who are committed to polyamory, right?
I mean, there are Silicon Valley cult-like dynamics that I've detected, if not in the center of the movement, certainly at its fringe, that I think are evinced to some degree in the life of Sam Bankman-Fried, too. And we haven't talked about just how they were living in the Bahamas, but there are certainly some colorful anecdotes there.
It seems to me that there's a culture that I haven't wanted to endorse without caveat. And yet the principles that I've learned from my conversations with you and from reading books like your own and Toby Ord's book, The Precipice, these are ideas about existential risk.
You know, actually becoming rational around the real effects of efforts to do good rather than the imagined effects or the hoped-for effects.
Divorcing a rational understanding of mitigating human suffering and risk of harm from the good feelings we get around, you know, specific stories and specific kinds of triggers to empathy, right?
And just performing conceptual surgery on all of that so that one can actually do what one actually wants to do in a clear-headed way, guided by compassion and a rational understanding of the effects one can have on the world.
And we've talked about many of these issues before in previous conversations.
I think all of that still stands.
I mean, none of that was wrong, and none of that is shown to be wrong by the example of Sam Bankman-Fried.
And so, yeah, I just, you know, I do mourn any loss that those ideas have suffered, you know, in public perception because of this.
So, yeah, I mean, do with that what you will, but that's where I've netted out at this point.
Yeah, I think it's part of the tragedy of the whole thing.
Giving What We Can has over 9,000 people who have pledged to give at least 10% of their income to highly cost-effective charities, and it's aiming for 10,000 people this year.
For those people, generally living normal middle-class lives, or maybe wealthier: in what way does the action of Sam and the others invalidate that?
And the answer's not at all.
Like, that is just as important as ever.
And yeah, one of the things that's so sad is like, maybe fewer people will be inclined to do so.
Not for any good rational reasons, but just because of, you know, the bad odour that surrounds the idea now.
And that's just a little tragedy.
Donating a fraction of your income to causes that effectively help other people: I still think that's a really good way to live.
You talk about, yeah, the kind of online, cult-like, shot-through-with-Asperger's side of the movement.
I do want to say that, you know, EA is many things, or like, the EA movement is many things.
And also, of course, you can endorse the ideas without endorsing anything to do with the movement.
But I definitely worry that, you know, there is a segment that is extremely online.
And perhaps unusually weird in its culture, or something.
And it's a bit of a shame, I think, if people get the impression that's kind of what everyone within the EA movement is like, on the basis of whoever is kind of most loud on the internet.
Well, you know, people can be poly if they want.
No moral objection to that at all.
Fine way to live.
People can have all sorts of weird beliefs too, and maybe some of them are correct.
I think AI risk was extremely weird for many years, and now people are taking it really seriously.
So, I mean, I think that's important.
But I think the vast majority of people within the effective altruism movement are pretty normal people.
They're people who care a lot.
They're people who are willing to put their money or their time where their mouth is.
And because they care, they're really willing to think this through and, you know, willing to go where the arguments or the evidence lead them.
And, you know, I'm not someone who's naturally kind of on the internet all the time.
I find Twitter and internet forums, you know, quite off-putting.
And when I meet people in person who are engaged in the project of Effective Altruism, it feels very, very different than it does if you're just hanging out on Twitter or on some of the forums online or something.
So, is there anything that has been rethought at the level of the ideas?
I mean, the one other issue here, which I don't think played an enormous role in the coverage of the FTX collapse, but which has come under some scrutiny and become a kind of ideological cause for concern, is the emphasis on long-termism, which you brought out at book length in your last book. Was that part of the problem here?
And is there any rethink?
Because that certainly brings in this issue of probability calculus that turns our decisions into, you know, a series of trolley problems, wherein ends-justify-the-means thinking at least becomes tempting.
Which is to say that if you thought a decision you made had implications for the survival of humanity, not just in the near term, but out into an endless future where trillions upon trillions of lives are at stake and they hang in the balance, well then, there's a lot you might do if you really took the numbers seriously, right?
Is there anything that you have been forced to revise your thinking on as a result of this?
Yeah, so, I mean, I really think long-termism wasn't at play.
I mean, again, like I said, I feel like it wasn't.
What happened at FTX was not a matter of some rational calculation in pursuit of some end. I think it looks dumb and immoral from any perspective.
I also just think, like, if your concern is with the hundreds of millions of people in extreme poverty, or the tens of billions of animals suffering in factory farms, the scale of those problems is more than enough for the same kinds of worries to arise.
And in fact, we have seen, like in the animal welfare movement on the fringes, people taking violent actions even in the pursuit of what they regarded as the kind of greater good.
Long-termism, if anything, actually shifts things against it. Because there's this argument that, oh, you should be willing to take more risk if you're using your money philanthropically than if you're just spending the money on yourself.
That argument applies much less strongly in the case of long-termism than it does for global health and development, for example.
Because, you know, if I have five dollars to spend, that can buy a bed net.
If I have a billion and five dollars to spend, that final five dollars still buys a bed net.
Global health and development can just absorb huge amounts of money without the cost effectiveness going down very much.
The same is not true of long-termist causes.
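To make that risk argument concrete, here is a toy sketch, purely my own illustration with invented numbers rather than anything from the conversation. When philanthropic value is linear in dollars, as with bed nets, a double-or-nothing gamble on the whole bankroll is neutral in expectation; under the diminishing returns of personal consumption, modeled here with a log function, the same gamble is clearly bad. And since long-termist causes plausibly saturate much faster than bed nets, their value function is concave too, which is why the argument applies less strongly there.

```python
# A minimal sketch of the risk-neutrality argument. The bankroll figure and
# the $5-per-bed-net price are illustrative assumptions, not real data.
import math

BANKROLL = 1_000_000.0  # hypothetical sum to allocate, in dollars

def philanthropic_value(dollars: float) -> float:
    """Roughly linear returns: every $5 buys one more bed net,
    no matter how much has already been spent."""
    return dollars / 5.0  # value measured in bed nets delivered

def personal_value(dollars: float) -> float:
    """Diminishing returns on personal consumption, modeled as log."""
    return math.log(1.0 + dollars)

def gamble_value(value_fn, bankroll: float) -> float:
    """Expected value of a 50/50 double-or-nothing bet on the whole bankroll."""
    return 0.5 * value_fn(2.0 * bankroll) + 0.5 * value_fn(0.0)

for name, fn in [("philanthropic (linear)", philanthropic_value),
                 ("personal (log)", personal_value)]:
    keep = fn(BANKROLL)
    bet = gamble_value(fn, BANKROLL)
    verdict = "neutral" if abs(bet - keep) < 1e-9 else ("better" if bet > keep else "worse")
    print(f"{name}: keep = {keep:,.2f}, gamble = {bet:,.2f} -> gamble is {verdict}")
```

Running this shows the gamble is exactly neutral under the linear value function and much worse under the concave one, which is the crux of the "take more risk with philanthropic money" argument and of why it weakens once returns stop being linear.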
Just to follow a further point along those lines: my concern with long-termism has been the way in which it can seem to devalue the opportunity to alleviate present harms and present suffering.
Because you can tell yourself a story that the one billion people suffering now, their interests are infinitesimal compared to the trillions upon trillions who may yet exist if we play our cards right.
So, it's an argument for perhaps overlooking the immediate suffering of the present out of a concern for the unrealized suffering of the future.
Yeah, and I think that in What We Owe the Future, I was very careful to defend only what I call the weak form of long-termism: that positively impacting the long-term future is a moral priority of our time.
Not claiming it's the only one, nor claiming its overwhelming importance either.
Like, I think we should be uncertain about this.
In the new preface, you know, I suggest a goal, a kind of way of operationalising that: rich countries putting at least one percent of their resources toward issues that distinctively impact future generations. Because at the moment they put close to zero percent.
And I do think there's a mode of operating in which you think, oh, a present catastrophe is nothing compared to the unparalleled good that may come in a trillion years' time.
I think that's a very bad way of thinking, even just from a pure long-term perspective.
I think it doesn't have a good track record, and it's really not how I would want people to think.
There has been a different line of criticism that I got from within EA, from the publication of What We Owe the Future onwards, that I think has had a lot of merit.
And that line of criticism was that I was misunderstanding, actually, how near-term the risks that we are talking about were.
So, in particular, the risk from AI: the risks we face from really advanced artificial intelligence, even artificial general intelligence, these are coming in the next decade, at most the next couple of decades.
And secondly, the scale of the problems posed by technological developments like AI is so great that you don't need to think about future generations at all. Even if you just care about the 8 billion people alive today, the size of the risks we are imposing on them via these technologies is more than enough for this to be one of the top problems the world should face today.
Over the last few years, since the publication of the book, I just think that perspective has been getting more and more vindicated.
So I'm now much more worried about really very advanced, very fast advances in AI, in a very near time frame, as in literally the next five, six years or the next decade.
Much more worried by risks from that than I was even just a few years ago.
This is a sidebar conversation, but it's interesting.
Are your concerns mostly around the prospect of unaligned AGI? Or are they the more piecemeal, nearer-term, and actually already-present concerns around the misuse of AI, at whatever capacity it exists, to essentially render societies ungovernable and more or less guarantee malicious use at scale that becomes quite harmful?
To what degree are you focused on one versus the other?
I think I want to say I'm focused on both, but also other things too.
So I think misalignment risk, I think it's real and I think it's serious and I think we should be working on it and working on it much more than we currently are.
I am an optimist about it though, as in I think very probably it will either turn out not to be an issue because it's just really quite easy to make advanced AI systems that do what we want them to, or we'll put in a big effort and be able to solve the problem.
Or we'll notice that the problem is not being solved and we'll actually just hold back, put in regulations and other controls for long enough to give us enough time to solve the problem.
But there should still be more work.
However, I think that AI will pose an enormous array of challenges that haven't really been appreciated.
And the reason I think this is because I find it increasingly plausible that AI will lead to much accelerated rates of technological progress.
So imagine, as a kind of thought experiment, all the technologies and intellectual developments that you might expect to happen over the coming five centuries, everything that might happen there.
And now imagine all of that happens in the course of three years.
Would we expect that to go well?
So in that period of time, okay, we're developing new weapons of mass destruction.
We now have an automated army, an automated police force, so that in principle all military power could be controlled by a single person.
We now have created beings that plausibly have moral status themselves.
What economic rights, welfare rights, political rights should they have?
I mean, you talked about misinformation, but potentially now we have superhuman persuasive abilities.
Far, far better than even teams of the best, most charismatic lawyers or politicians in the most targeted ways possible.
And I think there are more challenges too. Like, over this period we'll probably also have new conceptual insights and intellectual insights, you know, radically changing the game board for us too.
And all of that might be happening over the course of a very short period of time.
Why now?
Why might it happen in such a short period of time?
Well, that's the classic argument that goes back to I. J. Good in the 1960s, which is: once you've got to the point in time when AI can build better AI, you've got this tight feedback loop.
Because once you've built the better AI, that can help you build better AI, and so on.
And that argument has been subject to really quite intense inquiry over the last few years, building it into leading growth models, really looking at the input-output curves in existing ML development for how much of a gain you get for an increase in input.
And it really looks like the argument is checking out.
And then that means that it's not long from the point in time when you've got your first AI that can significantly help you with AI research to the point when you have trillions upon trillions of AI scientists driving progress forward in all sorts of scientific domains.
That's just a really quite dizzying prospect.
I think misalignment and misinformation are some of the challenges we'll have to face.
But it's really just like facing all of technological progress at once, and doing it in an incredibly short period of time, such that I think the default outcome is not that we handle that well.
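As a purely illustrative aside: the feedback loop described here is easy to caricature in a few lines of code. This is a minimal toy simulation of my own, not a model from the episode or from the growth-theory work mentioned above, and the parameters HUMAN_RESEARCHERS, GAIN, and STEP are invented. It just shows the qualitative shape of the argument: once AI capability feeds back into the research effort that produces further capability, the growth rate itself climbs year over year instead of staying constant.

```python
# Toy I. J. Good feedback loop: AI capability feeds back into the research
# effort that produces further capability gains. All numbers are made up.

HUMAN_RESEARCHERS = 1.0  # baseline human research effort (normalized)
GAIN = 0.5               # research effort contributed per unit of AI capability
STEP = 0.05              # fractional capability growth per unit effort per year

capability = 1.0
for year in range(1, 21):
    effort = HUMAN_RESEARCHERS + GAIN * capability  # AI now helps do AI research
    growth = STEP * effort                          # this year's growth rate
    capability *= 1.0 + growth                      # progress compounds
    print(f"year {year:2d}: capability = {capability:7.2f} (grew {growth:.1%})")
```

In the continuous-time version of this toy model, the growth term becomes quadratic in capability, which is what produces the kind of explosive trajectory the intelligence-explosion argument points to.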
Handling it well or not, I think we just birthed another topic for a future podcast, Will.
Sure.
There's a lot to talk about there.
Okay, so finally, where has all of this controversy and confusion landed for you?
Where does it leave you personally, and in terms of what you're now doing, and what's your view, optimistic, pessimistic, or otherwise, about EA going forward?
Yeah, I mean, the collapse was extremely hard for me. There's just no doubt at all, it's been the hardest year and a half of my life. For so many reasons: just the horror of the harms that were caused, the incredible damage it did to organizations and people that I loved.
And so I found that just, yeah, very tough, very tough to deal with.
And I was, you know, really in quite a dark place, for the first time in my life, actually.
I had this chunk of time where I just didn't... I kind of lost the feeling of moral motivation.
I didn't really know if I could keep going.
So I did actually even think about just stepping back, really just giving up on EA as a project in my life, because it just felt kind of tainted.
And that was weird.
I mean, that was weird not having that motivation.
What would have produced that effect?
Is it just the public perception of EA becoming so negative?
Is it, practically speaking, fewer funds going into EA organizations that need those funds?
Was it funds that were getting clawed back?
Because Sam Bankman-Fried, or FTX, had given those funds, and now those became, you know, legally challenged.
What was the actual impact?
Yeah.
I mean, the reason for me thinking, you know, maybe I was just going to give up was nothing practical like that.
It was, you know, psychological.
Like, you know, I really felt like I'd been punched or stabbed or something. I felt like I'd been building this thing for 15 years, and I'd really worked, you know, unsustainably hard on it. I was tired, and it had just been blown up in one fell swoop. Blown up, you know, by Sam's actions.
And yeah, it was just hard to then think, okay, I'm going to get back up off the ground and go into the shallow pond and rescue another child or something.
When you say it's been blown up though, so you're talking essentially about the brand damage to EA.
Yes.
With which you have been so closely associated as really one of its progenitors.
Exactly.
And I was talking about just what's going on, what was going on in my mind, not about what in fact happened.
So we can talk about what the hit was to EA.
So, yeah, huge brand damage.
I think people were definitely keen to disassociate themselves.
In my own case, I actually got surprisingly little in the way of being cancelled.
But it definitely means it's harder for people, especially in the aftermath, to go out and advocate.
But I'm now feeling much more optimistic.
So it was really tough.
The year afterwards, it was low morale.
I think it was just harder for people associated with EA to do the work they wanted to do.
But a few months ago, there was a conference among many of the leaders of the core EA organizations.
And I honestly just found that really inspiring, because for so many people, the principles are still there.
The problems in the world are still there.
Sam's actions do not change in any way how serious the problems we're facing are, and how important it is to take action, and how important it is to ensure that our actions have as big an impact as possible.
So there really was a sense of people rolling up their sleeves and wanting to get back to it.
And then in terms of the brand, I think once you go off Twitter, people again are quite understanding.
People understand that the actions of one person don't reflect on an entire movement.
And ultimately, just not that many people have heard of either FTX or Effective Altruism, so there's still plenty of room to go.
And so I think we're starting to see things change again.
And you know, it's just made me reflect on the good things that the Effective Altruism movement is accomplishing and has continued to accomplish.
So just recently, GiveWell has now moved over two billion dollars to its top recommended charities. It's just so insane for me to think that it could be that amount, compared to where I was 15 years ago.
That's hundreds of thousands of people who are alive who would otherwise be dead.
Like, I live in Oxford, which has a population of 120,000 people. If I imagine a nuclear bomb going off and killing everyone in Oxford, that would be world news for months. And if a group of altruistically motivated people had managed to come together and prevent that plot, saving the city of Oxford, that would be huge news!
People would write about that for decades.
And yet that's exactly what's happened, you know, not in nearly as dramatic a way, perhaps, but it's happened, actually, several times over.
And so now I think, yeah, we're getting back to basics, basically.
Back to the core principles of effective altruism.
Back to the idea that, you know, all people have moral claims upon us.
We in rich countries have an enormous power to do good.
And if we do just devote some of our money or some of our time through our careers to those big problems, we can make a truly enormous difference.
That message, it's as inspiring to me as ever.
Yeah, well, I certainly could say the same for myself.
As I've described, I've always viewed myself as being on the periphery of the movement, but increasingly committed to its principles, and those principles have been learned directly from you more than from any other source.
I think, as I said in a blurb for your most recent book, I wrote that no living philosopher has had a greater impact upon my ethics than Will MacAskill, and much of the good I now do, in particular by systematically giving money to effective charities, is the direct result of his influence.
That remains true, and I remain immensely grateful for that influence.
You should not read into the noise that has resulted from what Sam Bankman-Fried did any lesson that detracts from all the good you have done, which many of us recognize, and all the good you have inspired other people to do.
It's extraordinarily rare to see abstruse philosophical reasoning result in so much tangible good, and such an obvious increase in well-being in those whose lives have been touched by it.
So, you should keep your chin up and just get back to your all-too-necessary work.
It remains inspiring. And while you may no longer be the youngest philosopher I talk to, you still are nearly so, so just keep going.
That's my advice.
Well, yeah, thank you so much, Sam.
It really means a lot that you feel like that, especially because your advocacy has just been unreal in terms of the impact it's had.
So I was saying that Giving What We Can has over 9,000 members now.
Over 1,000 of them cite you, cite the Sam Harris podcast, as why they have taken the pledge.
That's over $300 million of pledged donations.
And so I guess it just goes to show that, yeah, the listeners of this podcast, you know, they just are people who take good ideas seriously.
You might think people who listen are just interested in the ideas just for their own sake.
You know, they find them intellectually engaging.
But no, actually, people are just willing to put those ideas into practice and do something, like take the Giving What We Can pledge.
And that's just, yeah, that's amazing to see.
That's great.
Well, Will, we have another podcast teed up.
Let me know when our robot overlords start making increasingly ominous noises such that they're now unignorable, and we will have a podcast talking about all the pressing concerns that AI has birthed.
Because, yeah, I share your fears here, and it'll be a great conversation.