Jan. 16, 2026 - Making Sense - Sam Harris
21:35
#453 — AI and the New Face of Antisemitism

Sam Harris speaks with Judea Pearl about causality, AI, and antisemitism. They discuss why LLMs won't spawn AGI, alignment concerns in the race for AGI, Pearl's public life after the murder of his son Daniel, the post-October 7th shift toward open anti-Zionism, the overlap between anti-Zionism and antisemitism, the misuse of "Islamophobia," Israel's fracture under Netanyahu, confronting anti-Zionism in universities, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.


Welcome to the Making Sense podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Well, I'm here with Judea Pearl.
Judea, thanks for coming into the studio.
Great to see you.
It's the second time, isn't it?
Yeah, I came to you last time.
Yeah.
Yeah.
Yeah, I was in your office.
I actually didn't look to see when that was, but that's a few years ago, certainly.
That might be.
That was for your book, The Book of Why.
The Book of Why.
Which kind of wraps up, for a popular audience, all of your work on causality and the logic of it, which we'll touch on briefly because I have to ask you about AI, given that you are one of the fathers of the field. But that's not really our agenda today, so we'll start near there.
But I want to talk to you about your new book.
You have a new book, Coexistence and Other Fighting Words, which I'm sorry to say I have not yet read, but that will give you the ability to say anything to a naive audience on this topic.
I'm sure it covers much of the ground I want to cover with you because I'm like you, I think, very concerned about cultural issues and the way that we've seen a rise of anti-Semitism on both the left and the right.
And we're now seeing the condition of Israel as a near-pariah state, you know, on the world stage.
Briefly, let's start with your background.
Where were you born and what did your parents do?
Well, I was born in a little town called Bnei Brak, which is seven and a half miles north of Tel Aviv.
And it was established in 1924 by my grandfather, Chaim Pearl, with 25 other Hasidic families who came from Poland and decided that it's time to go back to where they belong.
So when did they move to Israel, your parents?
In 1924.
My father...
My father's family came in 1924.
And when were you born?
In 1936.
Okay.
So, and what did your parents do?
Well, my father was a secretary of the Bnei Brak municipality.
But that was only later.
He came and became a farmer.
You come to Israel in 1924, you buy a piece of land, and you schlep water from some miles away, and you grow radishes.
That's what he did.
Yeah.
Yeah, that had to be hard work.
It probably still is hard work, but farming had to be the first order of business.
First order, yeah.
The idea was to establish a biblical town with a religious orientation and make it into an agricultural success.
Do you know much about your parents' state of mind when they left Europe in the '20s?
I mean, what was that?
Yes, I know.
Did they see it?
Were they witnessing Weimar and its...
My father didn't see that, no.
That was 1924.
And, well, the legend says, at least the family lore says, that my grandfather came home one day.
He was accosted by a Polish peasant and called a dirty Jew.
And he came home bloody and he said to his wife and four children, start packing.
We are going to where we belong.
It's family lore, but it has some truth in it.
And what were your principal intellectual influences as a kid?
And I mean, how did you find your path to computer science as a young person?
First, I had a very, very good education in high school.
In Tel Aviv?
I went to a high school in Tel Aviv.
I grew up in Bnei Brak, but the municipality of Tel Aviv gave a quota to its periphery, to its suburbs.
And Bnei Brak was one of its suburbs.
So from our town, they chose four people.
I was chosen among them.
It was a privilege at the time to go to a Tel Aviv high school.
And we had a beautiful education.
You know why?
Because my high school teachers were professors in Heidelberg and Berlin that were pushed out by Hitler.
And when they came to Israel, they couldn't find academic jobs.
So they taught high school.
And we were just privileged and lucky to be part of this unique educational experiment.
Yeah, yeah.
And your first language is Hebrew?
My first language is Hebrew.
All the studies were in Hebrew.
So the people who had just come from Heidelberg, your professors were speaking Hebrew at that point or what?
Hebrew.
Huh.
Interesting.
They had to struggle.
Some of them still had a Yekkish accent.
Yeah.
Okay, so as I said, we spoke about your Book of Why last time, where you talk about the importance of causal reasoning.
What's your current view of AI?
What has surprised you in recent years?
How close are we to achieving causal reasoning in the current crop of LLMs?
And I'm just wondering how you view progress at this point.
In causal reasoning, or toward AGI in general?
If that is a goal, I don't think we are much closer.
We have been deflected by the effect of LLMs.
You have low-hanging fruit and everybody is excited, which is fine.
I mean, they're doing a tremendously impressive job.
But I don't think they take us toward AGI.
So you think the framework, the LLM deep learning framework, is a dead end with respect to AGI?
No, it's a step.
But does it require a fundamental breakthrough of the sort that we haven't yet had?
Absolutely, yes.
So it's not just more data and more compute.
No. More data and scaling up...
All of that, I don't think, is going to get us over the hump that we need to cross.
Can you articulate the reason why, you know, in terms that a layperson can understand?
I mean, if someone asked you why is this insurmountable by virtue of just throwing more data and compute at it?
There are certain limitations, mathematical limitations, that are not crossable by scaling up.
I show it clearly mathematically in my book.
And what LLMs do right now is they summarize world models authored by people like you and me, available on the web, and they do some sort of mysterious summary of it rather than discovering those world models directly from the data.
To give you an example, if you have data coming from hospitals about the effect of treatments, you don't fit it directly into the LLMs today.
The input is interpretation of that data, authored by doctors, physicians, and people who already have world models about the disease and what it does.
But couldn't we just put the data itself in as well?
Here you have a limitation.
You have the limitation defined by the ladder of causation.
There is something that you cannot do if you don't have a certain input.
For instance, you cannot get causation from correlation.
That is well established, okay?
No one would deny that, not even statisticians.
And you cannot get interpretation from intervention.
Interpretation means looking backward and doing introspection.
You say you can't get interpretation from interventions?
But intervention is, just remind me, but it's...
Intervention is what will happen if I do.
Right.
So it's a kind of an experiment or a thought experiment.
Experiment, correct.
And also, doesn't it imply a kind of counterfactual condition where you're saying, you know, what would have happened if we didn't intervene?
No.
No.
Here you have a barrier.
You have to have additional information to cross from the intervention level to the interpretation level.
I think you'd put counterfactuals on the side of interpretation.
Yes, correct.
Because you go, you say: look what I've seen, that David killed Goliath, and what would have happened had the wind been different.
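[For readers who want the distinction Pearl is drawing here made concrete, here is a minimal sketch, not from the conversation and not Pearl's own code, of the first two rungs of the ladder of causation. It uses an invented toy structural causal model with a hidden confounder Z: the observed association between X and Y overstates the effect of actually intervening on X, and no amount of extra observational data closes that gap without causal assumptions. The third rung, counterfactuals, is only noted in a comment.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy structural causal model (invented for illustration):
#   Z -> X, Z -> Y, X -> Y, with Z acting as a confounder.
def simulate(do_x=None):
    z = rng.binomial(1, 0.5, n)                    # confounder
    if do_x is None:
        x = rng.binomial(1, 0.2 + 0.6 * z)         # observational regime: X follows Z
    else:
        x = np.full(n, do_x)                       # rung 2: set X by intervention, do(X = do_x)
    y = rng.binomial(1, 0.1 + 0.2 * x + 0.5 * z)   # true effect of X on Y is 0.2
    return x, y

# Rung 1 (association): what the raw observational correlation suggests.
x, y = simulate()
association = y[x == 1].mean() - y[x == 0].mean()

# Rung 2 (intervention): what actually happens under do(X=1) vs do(X=0).
_, y1 = simulate(do_x=1)
_, y0 = simulate(do_x=0)
intervention = y1.mean() - y0.mean()

print(f"E[Y|X=1] - E[Y|X=0]         ~ {association:.3f}  (inflated by the confounder, ~0.5)")
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] ~ {intervention:.3f}  (the causal effect, ~0.2)")

# Rung 3 (counterfactuals: "what would Y have been for this unit had X been 0?")
# requires still more: abduction of the unit's individual noise terms, which
# neither observational nor interventional data alone provides.
```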
So who among the other patriarchs in the field fundamentally disagrees with you?
I mean, do people like Geoffrey Hinton or others who have had...
I don't think they disagree.
They don't address it.
I haven't, well, Jeff Hinton came up with the statement that we are facing a deadlock, okay?
Oh, I hadn't heard that.
Yeah.
Yes.
He mentioned that this is not the way to get AGI, but he didn't elaborate on the causal component.
So I can't recall if we spoke about this last time, but where are you on concerns around alignment and an intelligence explosion?
I mean, I know it sounds like you're not worried that LLMs will produce such a thing, but in principle, are you worried?
Do you take I.J. Good's and others' early fears seriously?
That once we build AGI on whatever, on the basis of whatever platform, we're in the presence of something that can become recursively self-improving and get away from us?
Absolutely, yes.
I don't see any computational impediments to that horrifying dream.
And of course, but we're already seeing the dangers of LLMs when they fall into the hands of bad actors.
But that's not what you're worried about.
You're worried about a truly AGI system that will take over and maybe a danger to humanity.
Yes.
I definitely foresee that as possible.
I can see how it can acquire free will and consciousness and desire to play around with people.
That is quite feasible.
It doesn't mean that I'm going to stop working or understanding AI and its capability simply because I want to understand myself.
Yeah.
Yeah.
Are you worried that the field is operating under a kind of a system of incentives, essentially an arms race that is going to select for reckless behavior?
I mean, just that if there is this potential failure mode of building something that destroys us, it seems, at least from the statements of the people who are doing this work, you know, the people who are running the major companies, that the probability of encountering such existential risk is, in their minds, at least pretty high.
I mean, we're not hearing people like Sam Altman say, oh, yeah, I think the chances are one in a million that we're going to destroy the future with this technology.
They're putting the chances at like 20%, and yet they're still going as fast as possible.
Doesn't an arms race seem like the worst condition to do this carefully?
There are many other people who are worried about it, like Stuart Russell and others.
And the problem is that we don't know how to control it.
And whoever says 20% or 5% is just talking.
I'm guessing, yeah.
We should not put a number there, because we don't have the theoretical, I mean, technical instruments to predict whether or not we can control it.
We do not know what's going to happen, what's going to develop.
But what I find alarming about those utterances is that, I mean, if you just imagine if the physicists who gave us the bomb, you know, the Manhattan Project, if when asked about their initial concern that it might ignite the atmosphere and destroy all of life on planet Earth, if they had been the ones saying, yeah, maybe it's 20%, maybe it's 15%, and yet they were still moving forward with the work, that would have been alarming.
But of course, that's not what they were saying.
They did some calculation and they put the chances to be infinitesimal, though not zero.
It just seems bizarre culturally that we have the people doing the work who are now expressing this, fallaciously or not.
I'll grant you that all of this is made up and it's hard to come up with a rational estimate.
But for the people doing the work, plowing trillions of dollars into the build out of AI to be giving numbers like 20% seems culturally strange.
I don't know what they mean by 20%.
Look at me.
I am fairly sure.
All I'm saying is there's no theoretical impediment to creating such a species, a dominating species.
That is true.
And at the same time, I'm working toward that indirectly.
Not toward that in order to create it, but to understand the capabilities of intelligence in general.
Because I want to understand ourselves because I'm curious.
Do you have any thoughts about how a system would have to be built so as to be perpetually aligned with our interests?
I mean, so if you're taking intelligence seriously, right?
So we're talking about building an autonomous intelligent system that exceeds our own intelligence and in the limit improves itself, one would imagine.
Do you have any notions about what a guarantee of alignment could look like before we hit play on that?
No, I don't think we can imagine an effective alignment, an effective architecture that will assure us of alignment with our survival.
I think Stuart Russell, it's a couple of years since I've spoken with him, but I recall his notion.
Again, I'm sure this is a kind of a hand-waving notion from a computer science point of view, but to have as its utility function just to better and better approximate what we want, to be perpetually uncertain that it's achieved our goals insofar as we can continue to articulate them in this open-ended conversation that is the evolution of human culture.
Does that seem like a frame that...
It's a nice frame, but I don't see any impediment for the new species to overcome and bypass those guidelines and play.
What?
So people...
So people have an intuition that if we built it, there's no possibility of it forming its own goals that we didn't anticipate, the instrumental goals.
I mean, there are people fairly close to the field who will say this.
I'm not sure.
I mean, maybe even someone like Yann LeCun would say this.
But what would you say to that?
I mean, you just very breezily articulated certainty that, or something like certainty, that an independently intelligent system can play, that it can change its mind, it can discover new goals and cognitive horizons, just as we seem to be able to do.
Why is there a difference of intuition on this front?
I mean, your account seems obvious to me.
I don't know why I have a different intuition than LeCun.
Look, you want a system that will explore, explore its environment.
That's required for any intelligent system.
We want it to play like a baby in a crib and find out why this toy makes noise and this one doesn't.
It has to play around in order to get control over the environment, to understand the environment.
So once you have the idea of playing, what will prevent it from playing with us?
As an instrument for its understanding of the environment, with us becoming part of its environment.
All right.
So this is kind of a reckless pivot from the topic of AI, but I think there's a bridge here.
I mean, I guess we could put this sort of in the frame of the cultural conditions that allow us to reason effectively or fail to reason effectively.
And this is on morally loaded topics like war and asymmetric violence, anti-Semitism, Islamism, and, again, Israel's status among nations.
Unfortunately, you are unusually well placed to have an opinion on these topics, given your history and what happened to your son back in 2002.
I don't want to awaken painful memories, but I just feel like we need to, I'm happy to talk about this topic in any way you want, but I just need to acknowledge that your son Danny was one of the most prominent people killed by al-Qaeda when the war on terror, so-called, became salient to most people in America, certainly for the first time after 9-11.
So you've spent now a quarter of a century witnessing, as I have, but from a far deeper place, the kind of consistent misunderstanding around jihadism and Islamism that has happened, especially on the left in our society.
I mean, to my eye, we have a kind of an anti-colonial, oppressor, oppressed narrative that has captured the moral intuitions of the left such that it's very difficult to talk about some of the ideas within Islam that reliably beget the kind of violence we've seen.
And, you know, groups like the Muslim Brotherhood have managed to play havoc with this moral confusion.
They found legions of useful idiots even on college campuses like your own.
I mean, I don't know if you noticed this, but the other day, the UAE announced that it would no longer pay for its students to study in the UK, at UK universities, for fear that they will be radicalized by the Muslim Brotherhood on UK campuses.
So, I mean, that's how far the rot has spread.
We can take this from any side.
We can talk about 20 plus years ago, how you came to this, or your experience after.
I want to talk about your experience after October 7th.
Just please start wherever you want to start.
My son's tragedy pushed me into public life and into my interest with social problems and cultural problems, the way you are describing.
We started the foundation after his death with the same belief that it's a matter of communication, dialogue between the East and West, Jews and Muslims.
And I got pushed into that very heavily.
And I started, together with a Pakistani scholar...
We started the Daniel Pearl Dialogue between Muslims and Jews.
And we went from town to town and we had meetings and discussions, audience discussions.
I even took a trip, which I describe in the book, a trip to Doha in 2005 as part of a conference to bridge the East-West relationship and to understand what prevents the Muslim world, or the Arab world, the Muslim world, yeah, from modernizing and becoming enlightened as we are.
And I was very, very... that was the first time that I found the barriers which I didn't believe existed.
And this was the barrier of Israel.
We came there with the idea that they would like American help in getting modernized and progressive.
And my conclusion is that they had a different idea in mind.
And we are talking here about moderate Muslim scholars from all over the Muslim world gathering in Doha for this conference, the purpose of which was: what can Americans do to speed up the process of progress and democratization in the Muslim world?
And what I came away with was: their idea was, if you want us to modernize, we'll give you that favor.
We are going to do you the favor of modernizing ourselves on one condition.
We want Israel's head on a tray, on a silver platter.
That is a condition.
We cannot make any progress unless you chop off the head of Israel.
Well, and at this time, you were living in Los Angeles, right?
You were not living in Israel in 2005.
No, no, I was in Los Angeles.
When did you come to LA?
I came to Campbell in 1966.
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The Making Sense podcast is ad-free and relies entirely on listener support.