March 2, 2024 - Decoding the Gurus
02:39:39
Sean Carroll: The Worst Guru Yet?!?

Controversial physics firebrand Sean Carroll has cut a swathe through the otherwise meek and mild podcasting industry over the last few years. Known in the biz as the "bad boy" of science communication, he offends as much as he educ... << Record scratch >> No, we can't back any of that up obviously, those are all actually lies. Let's start again.

Sean Carroll has worked as a research professor in theoretical physics and philosophy of science at Caltech and is presently an external professor at the Santa Fe Institute. He currently focuses on popular writing and public education on topics in physics and has appeared in several science documentaries. Since 2018 Sean has hosted his podcast Mindscape, which focuses not only on science but also on "society, philosophy, culture, arts and ideas". Now, that's a broad scope and firmly places Sean in the realm of "public intellectual", and potentially within the scope of a "secular guru" (in the broader non-pejorative sense - don't start mashing your keyboard with angry e-mails just yet). The fact is, Sean appears to have an excellent reputation for being responsible, reasonable and engaging, and his Mindscape podcast is wildly popular. But despite his mild-mannered presentation, Sean is quite happy to take on culture-war-adjacent topics, such as promoting a naturalistic and physicalist atheist position against religious approaches. He's also prepared to stake out and defend non-orthodox positions, such as the many-worlds interpretation of quantum physics, and countenance somewhat out-there ideas such as the holographic principle.

But we won't be covering his deep physics ideas in this episode... possibly because we're not smart enough. Rather, we'll look at a recent episode where Sean stretched his polymathic wings, in the finest tradition of a secular guru, and weighed in on AI and large language models (LLMs). Is Sean getting over his skis, falling face-first into a mound of powdery pseudo-profound bullshit, or is he gliding gracefully down a black diamond with careful caveats and insightful reflections?

Also covered: the stoic nature of Western Buddhists, the dangers of giving bad people credit, and the unifying nature of the Ukraine conflict.

Links
YouTube 'Drama' channel covering all the Vaush stuff in excruciating detail
The Wikipedia entry on Buddhist Modernism
Sharf, R. (1995). Buddhist modernism and the rhetoric of meditative experience. Numen, 42(3), 228-283.
Radley Balko's Substack: The retconning of George Floyd: An Update, and the original article
What The Controversial George Floyd Doc Didn't Tell Us | Glenn Loury & John McWhorter
Sean Carroll: Mindscape 258 | Solo: AI Thinks Different


Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer and we try to understand what they're talking about.
I'm Matt Browne, with me is Chris Kavanagh, the Little Red Riding Hood to my Big Bad Wolf.
That's who he is.
That's pretty good.
Yeah, I like that.
You didn't see that one coming, did you?
I didn't.
What big teeth you have.
You know, I do have big teeth.
In fact, my teeth are too big for my jaw because my parents neglected me as a child and they didn't get me braces when I should have had them.
And they got progressively more crooked.
And in my 30s, early 30s, I bit the bullet, I got braces, which involved like extracting four teeth to make room for all the...
The remaining teeth.
And since then, they've gone crooked again.
So, they're all getting squished up.
There's not enough room, even for the remaining teeth.
Well, that explains the teeth, Matt.
But what explains you dressed up in grandma's clothes all the time?
That's just...
Don't question my gender expression, Chris.
No kink-shaming.
No kink-shaming.
Big bad wolf kink.
It's a minority.
Speaking of kinks, this is...
It's not directly related to the gurus, but actually...
I'm thinking we should do a season on streamers because streamers are just such a weird collection of people.
They're almost distilled gururosity in a way because they're all about cultivating parasocial relationships through their incredibly long streams.
They just say and waffle shit to hundreds of people or thousands of people who are looking at them.
I mean, we do that too, but the difference is it's asynchronous.
You know, it's like a radio.
It's a radio.
We don't get the love back just in the reviews.
And there, people are more often than not taking the piss.
Yeah, so I've never watched a stream.
I've never watched one of these, Chris.
Not once.
But I do know that there are lots of people typing all the time and they're interacting with their audience.
They're doing like a stream of consciousness thing, aren't they?
Like they're just talking...
Yes, this is correct.
So there's a little thing I'd like to play for you.
I'll provide the context.
There is this leftist streamer called Vaush and he got caught while streaming.
He opened up his "to be sorted" folder, right?
And in that to-be-sorted folder was various, like, porn stuff, right?
Not a good thing.
Got to be careful when you're live streaming your content.
Happens to the best of us, Chris.
Happens to the best of us.
Happens to the best of us.
But what usually doesn't occur is that that folder includes a substantial amount or even any number of lolicon-style porn, like Lolita anime.
Young-presenting girls porn.
And anime.
Anime.
Yes, it was anime, I believe.
But the other one was horse porn.
And I believe there was some...
Crossovers.
It might have been loli horse.
Wait, you're talking about an underage horse.
No, sadly not.
That would be, in some ways, you know, maybe it's not bad horse.
I'm not really sure, but in any case, I think there were some crossovers.
Maybe it was two separate things.
I don't know.
But in any case, it's probably not what you want to flash.
But his defense was pretty good.
One of them.
One of the defenses was like, he just kind of argued that it was known that he's into horse porn.
Because, yes, he wants to imagine himself as a powerful stallion.
It's literally a joke, man.
The horse thing is not a joke.
We can't let this be smeared, okay?
To whatever extent, you know, people can say, oh, he plays it off as this or that, okay?
I'll make it clear.
You can write this down.
I want to fuck a woman.
As a horse.
None of this is a secret.
To be clear, many jokes have been made about this, but I stand by it.
My moral principles are rock solid.
My feet are firmly planted in the ground.
I've got my boots up.
They're planted firmly.
You cannot move me from my position.
This isn't a secret.
Talk to a therapist.
Why do you want to be the horse, Vosh?
Because then I'd have a giant dick.
Okay, couldn't you have a big dick the other way?
Well, yeah, I could have a big dick hypothetically in any variety of scenarios, but then it wouldn't really be a horse dick.
Well, you could be a human with a horse dick.
Yes, but then I wouldn't have that powerful stallion energy using it.
There you go.
That's it.
That's the whole thing.
That seems reasonable.
Yeah, so this is his...
That was his first defense, I think to behold.
But his second defense is the one that I really want to focus on because the second one is the bigger problem, right?
The kind of loli content.
So how was he going to explain that?
And this is just, it's such a unique defense that I think few people would have anticipated this.
The other one is like a threesome with two chicks and a guy.
And in retrospect, looking at it, knowing now that that artist is a lolicon, yeah, I can see it.
When I looked at it, I think the vibe that I got was like short stack, thick kind of thing.
You know what I mean?
Like the way goblins get drawn in porn.
You'll have to entertain me for a moment on this presumed shared knowledge of how goblins get drawn in pornography.
But you know how they're all like thick short stacks, right?
He should not presume there is a shared knowledge of Goblet Porn, at least not in my case, Chris.
Thick short stacks.
Yeah, apparently short stack is the lingo for midget porn.
So that's the PC term for that particular genre.
But goblin short stack, it's a bold defense.
You know, you've got me all wrong, copper.
I didn't know it was Lolicon.
I thought it was Midget Trolls.
Yeah, it's so good.
So that's just the taste of the streaming world, right?
There's more to come in the coming weeks as we might dive in there.
But yeah, so we do have a decoding episode today, Matt.
We got various feedback on the Sam Harris episode.
I'm not going to really dwell on it, except to just say that people have different opinions.
Some people feel that, you know, we acquitted ourselves fine and...
They weren't very happy with some things that Sam said.
Other people, mainly on Sam's subreddit, think that we spoke far too much and interrupted him at every turn.
Oh yeah, interrupted him too much.
Yeah, yeah.
Sorry about that, everyone.
Yeah, there are people all around saying various things.
Some people were saying about the dangers of platforming Sam Harris, and there I would say, one, he has a much bigger platform.
Without us.
And a much more indulgent one on plenty of other locations if he wants.
And two, that I think you're doing a disservice to the way that his arguments will have been received.
I do not think that the people in our audience all will have been like, oh, that was completely convincing.
Just, you know, every argument that Sam made.
I'm not saying they would find us convincing in the same respect.
I'm just saying...
I don't think people are consuming things completely credulously.
I don't think we're introducing Sam Harris to a wider audience, no.
Well, yeah, and that does get me to just the last little intro bit, Matt.
You know, we've sometimes toyed with this, the grinding gears segment.
Let's grind our gears this week.
And I do have two things to enter into the ledger of gear grinding.
And one is...
The sensitivity of Western Buddhists.
The sensitivity of Western Buddhists.
Now, Western Buddhists are lovely people.
They're, you know, they're an introspective folk.
They've become interested, usually, in a religion or a philosophy, sorry, a philosophy that is, you know, a little bit exotic from their culture.
It's not...
Christian background.
Usually people come to it a bit later in life and approach it more as a secular philosophy than a religious system.
And that's fine.
Everybody is free to do as they wish and enjoy.
And there are plenty of benefits to engaging in introspection and becoming part of a community, reading books about Buddhism and whatever.
But I do detect a touch of defensiveness.
Whenever I point to the fact that Western Buddhists might be, in general, presenting a particular interpretation of Buddhism, one that was marketed to the West and which became particularly popular in the West,
which now goes under the term in scholarly circles of Buddhist modernism.
It's kind of a modernist interpretation of Buddhism.
The kind that you would see amongst like Robert Wright or Sam Harris or various other people.
But it goes back, you know, to D.T. Suzuki and various other figures.
And that again, that's fine.
Religions change.
Philosophies change.
Things travel around the world and come out in all different ways.
And Buddhism in the modern incarnation tends to be one which appeals to a specific kind of secular modernized version.
But when you mention that and when you...
Suggest that there might be an attachment to a particular presentation, a particular interpretation of introspective practices.
Or interpretations of the self.
And whatnot.
There is a very strong reaction amongst Western Buddhists, a subset of them, where they are very ready to tell you how incredibly detached they are from any ideology.
They have no religious commitments.
They're not...
Interested in the supernatural metaphysics.
They are purely doing introspective practices.
They don't care about any tradition.
They're not even really, you know, Buddhist.
What is that?
And I detect, Matt, that there actually might be a slight attachment because out of all the different groups I interact with, they are the group which is one of the most triggered by any comment about that, any suggestion that they might have.
Consumed or be interested in a particular perspective.
Or that it might be associated with metaphysics and religion.
That's anathema.
And so I get all these messages from people that are like, I have absolutely no attachment, but here's why you're completely wrong.
And I just don't see the level of detachment, especially the emotional detachment that I'm so often being told that those people display.
It could be my mistaken interpretation, but they often seem like they might be emotionally responding to hearing somebody criticize a particular interpretation that they like.
So that grinds my gears, Matt.
I hear you.
I hear you.
I know that.
That can be frustrating.
Like I told you, I visited a Zen Buddhist.
Collective to do meditation occasionally when I was a much, much younger person.
And I eventually stopped going because it was a bit too culty.
Not very culty, just a little bit.
Like they wanted to hang out and have tea and biscuits and talk and I didn't want to do that.
I just wanted to do the thing.
And I've had friends who have been into it and announced to me at some point that they were enlightened.
And they just weren't, Chris.
They just weren't.
And so I guess what I'm saying is that, in my own life, I've detected a strong theme there.
I think one of the big appealing things about this Western modernistic Buddhism is that it's extremely flattering to you.
If you embrace it, you can quite easily get yourself to a point, maybe a little bit like some of our previous guests and gurus that we've covered, where...
You've figured it out.
You're on a higher plane.
You're detached.
You're calm.
You're at peace.
The rest of the world is in chaos, but you're a little bit special.
And I can see the appeal, and I think I detect a little bit of a note of that.
There's a note of that, Matt.
There is a note of that.
It's coupled with the complete confidence that they have transcended that, that they are devoid of attachment, or at least they're on the road.
And you haven't even recognized that the road is there because, obviously, if you did, you wouldn't make the mistake of assuming that Buddhist modernism is a thing.
It's just scholars waffling in their ivory towers.
The fact that many of those scholars are practitioners of decades of experience, never mind, never mind.
They've got it wrong.
They've got it wrong as well.
So, yes, I'm just saying, for unattached people, they are remarkably emotionally expressive people when criticized.
That's the way I would put it.
You don't usually need to announce to people that you've achieved that kind of thing.
You just express it.
It becomes known.
It's like me.
I never tell people that I'm cool and at peace.
No, I just see it.
I just exude it from every pore.
That's right.
I got there.
Everyone treads their own path.
I got there through years of alcohol and drugs.
Just persistently, consistently.
It was a long road, but it got me there in the end.
And, you know, I don't need to talk about it, though.
That's the cool thing about it.
Well, that's right.
That is the essence of cool.
That's what it is.
And the second group, Matt, and I'll make this one brief, very short, but it's just to say Glenn Loury, the commentator, pundit, black intellectual in America who often...
Takes a kind of slightly contrarian position around political issues in the US.
And recently, there's been a documentary that came out that was basically a revisionist documentary saying Derek Chauvin didn't murder George Floyd.
Like, the evidence wasn't properly presented and the jury, like, were just going along with the atmosphere at the time.
And actually, if you look, he was doing what he was trained to do and George Floyd probably died from...
An overdose, not other things, right?
Now, this particular documentary got a lot of pick-up in the contrarian heterodox space.
Coleman Hughes published an article about it on the Free Press.
He was talking about it recently with Sam Harris on his podcast.
And also, John McWhorter and Glenn Loury covered it on The Glenn Show, and a bit dramatically.
Oh, but I should say, because there was an article that came out by Radley Balko.
A journalist who covers crime and things.
And it's a very detailed technical rebuttal to the documentary and the Coleman Hughes piece.
It's a three-piece series.
Two of the pieces have come out.
And they're quite damning.
They kind of present that, actually, if you look into the evidence, you know, where you actually contact experts when you go and look at trial transcripts, not just...
What's presented in the documentary.
It actually isn't exonerating of Chauvin.
It's damning.
And there's lots of reasons that it looks justified that he was convicted of murder.
It's a very thorough piece.
And it's very cruel towards Coleman Hughes in its presentation because it essentially says, you know, Coleman Hughes was taking victory laps about being thorough and critically minded.
But in actual fact, he just bought into a political narrative that fits.
Anyway, Glenn Loury came out.
In response to reading this, admitted that the evidence was very strong, that he had got this wrong.
Not that we should have ignored it, but that we should have been more skeptical about it, particularly about its technical claims, which challenge the limits of our own expertise in terms of being able to evaluate them.
So we're trusting filmmakers to a certain degree when we do that.
I've been asking myself the question, how could I have been so, I almost want to say gullible?
How could I have been so credulous?
How could I have not had my guard up?
But he also reflected on the fact that the reason that he got it wrong was likely because of his bias towards wanting to counter the dominant narrative and not liking the kind of...
I think the answer is, well, I wanted a counter-narrative to the dominant narrative about what happened to George Floyd and the subsequent developments of the summer of 2020.
I didn't like that police station being allowed to be burned to the ground.
I thought that the lionization of...
Floyd, the elevation of him to a heroic status to the point that the president, then a candidate of the United States, could say, and I'm talking about Biden, in 2020, that Floyd's death resonated.
I'm not quoting him, but this was to the effect, on a global scale.
There were demonstrations all over the world.
Black Lives Matter and all of that.
Even more profoundly than did the killing of Martin Luther King in 1968.
I hope I don't...
Misquote here, but I definitely believe that President Biden said, then candidate Biden said something to that effect.
It was a big deal.
It was a big fucking deal, the killing of George Floyd.
The country seized up on something.
And I wanted not to be...
When the opportunity to question the narrative came along, I jumped at it, and perhaps incautiously so.
That's what I want to say, which raises a question in my mind more broadly.
So he actually considered, you know, the incentives on his side of the aisle and how they impacted his coverage.
And he said it left him chastened.
Being heterodox, being against the grain, anti-woke, being the black guy who said the thing that black guys are not supposed to say, you can inhabit that persona.
To such an extent that your judgment is undermined by it.
And I take that as a warning.
I mean, I'll accept what you say.
No, we didn't, you know, do anything wrong.
But I'm still a little bit chastened by Radley Balko's, you know, I mean, and what he does to Coleman, and people can read this and see for themselves, Coleman, the youngish, upstart conservative Black intellectual, is really disquieting.
I mean, you know, he says he's way out over his skis.
He says, you know, he's a propagandist in so many words.
And Bari Weiss takes a hit indirectly from Balko.
What kind of outfit is she running over there?
Is she subject to the same temptations that we are?
To, as we inhabit this role of anti-wokeness, to quickly embrace something that we ought to think twice about before we jump.
He said this publicly on the podcast, and John McWhorter disagrees and says we weren't that bad.
You know, it's reasonable to ask questions.
But Glenn Loury's thing was very good.
And I published just a tweet saying, this is really nice to see somebody showing intellectual humility and, you know, that this kind of thing should be applauded when people are willing to do it.
And most people agreed.
But there was a particular subset.
Again, it's just a small subset of people who replied saying, why would you praise this?
You're just encouraging him to have worse takes in the future.
And another thing they said was, you could have sent this message privately.
Don't say it publicly because you will encourage other people to have bad takes with impunity.
Because they'll know they can just apologise and that's it.
Yeah.
There's a stream of thought, Chris, that you never, ever should hand it to them.
And by them, I mean the bad people.
Generally, people on the right. Glenn Loury is a right-wing guy, isn't he?
And if you give them any credit, say this is slightly less worse than...
Than usual, right?
Or this is a welcome change or anything like that, then you are somehow undermining the cause, which is to, you know, fight, I guess, against those ideas.
So admitting that Glenn Loury could ever show some self-awareness, could ever self-correct or whatever, I guess is seen as undermining that.
And so that's a point of view.
I don't agree, personally.
I will hand it to anyone.
Like, I...
If I think of the people that we've covered on this podcast, someone like Eric or Brett Weinstein, if, God forbid, they ever do something that isn't terrible, I will hand it to them.
And I think you should, you know?
Like, I think it's not useful to pretend, like, if hypothetically, and I am speaking hypothetically, Brett said something good, then it would actually, we would undermine ourselves by pretending that that didn't happen.
Yeah.
You know what I mean?
We would be showing people that we just had blinkers on and we couldn't distinguish good from bad.
Yeah, well, one of the things is, you know, there's the wint tweet, which is like, you don't have to hand it to ISIS is the point, or the Nazis, right?
And there is a truth to that, right?
In that, oh, some terrible, terrible person getting a small point, right?
It doesn't undo all the...
You know, Alex Jones could say something reasonable, but it wouldn't undo all the stuff that...
He has said or done.
But that's fine, because you can make that point as well, right?
Like, simply saying that he is right on this point or whatever, it doesn't mean, ergo, you know, go treat Alex Jones as a good source.
And the comment that people made about that as well, like, I feel it doesn't represent human psychology correctly, right?
Because, like...
I think it's very, very unlikely that anybody would see my tweet or anybody else, you know, handing it to someone and be like, oh, I have all these terrible takes.
And now that this person said that, you know, like, it's nice to see intellectual humility, it means I can say anything.
And I'll get off scot-free.
So I'm going to start putting out more terrible takes because I can just apologize.
Because, like, actually, most of the incentives go in the direction of don't apologize.
Double down.
Triple down.
Appeal to your audience.
Never give a quarter to your critics or that kind of thing.
So, like, somebody doing that, admitting they got something wrong publicly, it actually is rare.
It's rare.
So if you immediately condemn everyone that does it, you're removing the incentive for people to be willing to admit they're wrong.
So I just, that view of psychology.
It strikes me as wrong.
And it strikes me that the more common thing is that actually if you say something nice about people who are on the opposing side or, you know, are seen as the enemy, the more likely thing is that you will attract criticism from people who feel ambivalent about,
oh, I like this person, but they're praising somebody who, you know, is a baddie.
So I don't like that because I'm getting a reflection.
You know, it's either falling out of my tribal categories or this looks bad on me if I like this person and they're saying that person is good because, you know, that person is bad.
So, yeah.
Yeah, I mean, I guess what you're describing is partisan politics, right?
Like, in a standard two-party system, even a normal one, not the current American one, people on one side of the aisle will not want to say that there's something, the other side had a good idea about this particular policy or whatever, generally.
But yeah, I mean, in the end, you decrease your own political capital by doing that.
So I think it is self-sabotaging.
I agree.
Just my opinion.
I also think if you want to say something nice about people in private, you should probably be willing to say it in public.
I'm not saying that always holds up, but I just feel like if you're willing to DM someone to say, I think that was really nice of you, and then you wouldn't do that publicly, that there's a little bit of, I don't know.
That seems to be a little bit hypocritical.
But that's what it is, Matt.
This is, you know, it was just gears were grinding.
The Buddhists and the woke scolds got to me this week.
They were tweaking my buttons.
I was the little emotional puppet, you know?
I wasn't the serene sky with the clouds passing over.
I was the little puppet being tweaked along by the people annoying me in the discourse sphere.
So it is what it is.
It is what it is.
That's fair.
That's fair.
I was trying to think of things that had ground my gears.
But, you know...
You're at peace.
I am at peace.
I'm cool.
I'm like one of those Buddhist guys, but for real.
No, you know, I occasionally tweet political opinions on Twitter about foreign events like...
Say, the Ukraine thing, which I have strong opinions about.
And, you know, people disagree with me for all the wrong reasons.
You know, seeing conspiratorial leftists who believe it's like all a NATO plot and it's really an imperialistic kind of thing, we're somehow tricking the Ukrainians into defending themselves.
Seeing them, like, link hands with the Mearsheimer-style amoral realists and the...
Amoral realists.
Yeah.
And the bullet-headed MAGA.
Isolationist types who sort of adore strong men like Putin.
Like seeing all these three groups kind of join hands in having a common view on this thing, that's upsetting to me.
But that's just normal politics.
Perhaps, you know, decoding the gurus is not the place to air that grievance.
So I would let it go.
I'll let it pass through me and out the window.
That's not guru's business, Chris.
We've got that out of the way.
Well, is it not?
Is it not?
Look, Matt, we're just dealing with annoying things because...
We're about to get to a guru.
Is he annoyed?
Is he good?
Is he terrible?
It's unclear.
It's the physicist Sean Carroll, somebody that you suggested, and that some people listening were like, how dare they?
How very dare they think that they...
Oh, we dare.
We dare.
But they have the grounds to even...
To even comment on his content.
What are you thinking, you maniacs?
You first come for Chomsky, and now Sean Carroll.
What depths won't you plumb?
No one is above decoding, is the answer.
Even us, you can decode us, just fire up your own podcast and do it.
Look, you still don't get it.
You still don't get it, you fools.
We can cover anyone.
There's no guarantee that they will score high or not.
We have developed a non-scientific tool with which we put people through their paces and see, do they fit the secular guru template?
And that's it.
That's it.
So, yes, we often cover people that fit close to the template.
Sometimes we don't.
Carl Sagan was good.
Mick West was fantastic.
You know?
Just...
Suck it up, bitches!
Wait and see.
Wait and see.
We will...
Yeah, stop!
How many times do we have to say, if you like someone, like, our whole method doesn't go out the window just because you like them.
Okay?
Right?
Just get it through your skulls.
This is why people like me, Matt.
That's right.
Just really flattering the audience any chance you get.
So, yeah, Sean Carroll, I know of him.
I recommended him.
I've listened to more than a few episodes of his.
He's a physicist.
He does talk about physics a fair bit, but he produces a lot of content and he's been doing this a while.
And I think, you know, he's kind of run out of...
Physics things to talk about.
He's done physics.
That's not true.
There's only so much physics, Chris.
There's this much and then it's done.
There's no more left.
And you have to go on and, you know, cover other topics and more power to him.
There's nothing wrong with that.
But it does put him into an arena of general purpose public intellectual.
He's not afraid to talk about other topics such as...
Artificial Intelligence, the topic we'll be covering today.
I see.
Yeah, that's right.
So he has a podcast series, Mindscape, where he is mostly, I think, talking about physics topics.
But occasionally, you know, goes further afield and I think has an interest in philosophy and topics.
He's involved in discussions about free will and determinism and all that.
Chris, he does heaps of stuff that's not.
Physics.
The last two episodes was how information makes sense of biology.
And then before that, it was how the economy mistreats STEM workers.
He goes further, it feels.
I'm not saying he doesn't.
I'm just saying he mainly does physics.
I think he mainly does physics.
He's not Eric Weinstein.
He's not Eric Weinstein.
Make that clear.
This is a real physicist, right?
Also a science populariser, but someone with, like, actual...
Research bona fides and that kind of thing, right?
And he's got a really lovely voice.
This is one thing I'll say for him right off the bat.
I have to admit, I do like to listen to him as I'm going to sleep.
This is one of my, in the special category of podcasts, where they just have a soothing voice.
And I know some people listen to Decoding the Gurus to go to sleep too.
I think Paul...
I endorse that.
Paul Bloom hinted that we do that for him.
Maybe he actually said that.
But he had a post recently about podcasts he liked and he mentioned us, but he also mentioned that some podcasts he uses to go to sleep, but he didn't say who.
I struggle to imagine anyone going to sleep to your voice, Chris.
That's true.
Well, okay.
So, Matt, this is a solo episode.
He often has episodes with, like...
Guests. But this time he's monologuing, a skill which you and I, I think, both lack.
I mean, we do monologue each other, but not to the extent of, like, an hour-plus episode.
So I'm sometimes impressed when people can do this.
I'm going to go through it, like, kind of chronologically.
Chronologically, I didn't need to slow down as it was being pulled into a black hole.
I'm going to go for it chronologically.
Nice.
Because unlike most of our content, it actually matters which clips you play because there's an argument built up.
This is a feature that, I'm beginning to realize, often distinguishes, I won't spoil it, but gurus who have more substantial things to say from those that do not.
You know, you can't just pick and choose from random parts of their conversation.
But, like, with the proper secular gurus, it doesn't matter because, like, very few things actually coherently connect.
They're just all, you know, like, thematically connected.
But he makes a logical progression for things, as we'll see.
Yeah.
And probably the only other little throat-clearing thing to say is that, as you said, I think he does make an interesting...
He builds an interesting thesis here.
And I listened to this episode with interest.
And, you know, when we cover people, I think, who are saying something that we find interesting, then I think we've got two modes going on with the decoding going on.
Like there's the do they, don't they fit the guru template.
That's one thing we do.
But, you know, it's also fun to just engage with it as two normal people with opinions.
The way I would frame it is slightly different.
This is something I sometimes explain to students: when people are making arguments or presenting things, they can make substantive points and they can rely on rhetoric.
And there's fuzzy lines between those, I grant.
But if someone is relying heavily on rhetorical techniques and emotion-laden language and extended metaphors and so on...
That is the content of what they're doing.
Like a Konstantin Kisin speech.
It's almost all rhetoric.
Yeah.
There's very little substance.
So analyzing that is just talking about the overwhelming amount of rhetoric which is there.
On the other hand, people can have substantive content where they're presenting ideas.
And sometimes those ideas are bad.
Sometimes they are good.
But in that case, it often is more relevant to...
Deal with the substance of the content, right?
Because they are not relying so much on rhetorical techniques.
So I think that's why there's a bit of a distinction sometimes.
Yeah, yeah, I agree with that.
And I'll just try to flag up the distinction there.
Okay, so I like this little framing thing that he does about the episode and what he's imagining.
Let me just play it.
Sometimes I like to imagine that there are people 500 years from now who are archaeologists, historians, whatever they call them at this far future point, who have developed the technology to decode these ancient recording...
technologies, these different ways of encoding audio data, and they're sitting there listening
I do find it important to provide some context because you and I, you the present listeners and I know what the news stories are and so forth, but maybe our future friends don't.
So, hi, future friends, and you might be interested to hear that as I am recording this in November 2023, we are in the midst of a bit of change vis-a-vis the status of artificial intelligence, AI.
It's probably safe to say that we're in year two of the large language model revolution.
That is to say, these large language models, LLMs, which have been pursued for a while in AI circles, have suddenly become much better.
Nice positioning there.
Nice.
You know, I didn't think about this until I just re-heard that, but there is a way in which that seemed kind of framing.
Could lend itself to a grandiosity that my recordings will be looked at in 500 years by archaeologists.
But in Sean Carroll's presentation, as opposed to like a Brett Weinstein presentation, I think it is more a whimsical science fiction kind of trope, right?
Where it's not that he thinks that this document is like super important.
It's just a framing to present where we are.
No, and you will see why as we go on, but I'm pretty sure that's the way.
But it's just interesting that I was thinking like an Eric or a Brett could attempt to do the same thing, but they would invariably do it where it's very important that the people in the future look back at their material, right?
This conversation will go down and be carved into stone, go down in history.
No, no, Sean Carroll isn't at all giving that impression.
It didn't even occur to me, but now you mention it.
Yeah, I'm just being thorough, Matt.
So, a little bit more about the framing.
So, Sam Altman got fired, and that was considered bad by many people, including major investors in the company like the Microsoft Corporation.
So, there was a furious weekend of negotiations since the firing happened on a Friday, and...
No fewer than two other people had the job title of CEO of OpenAI within three days until finally it emerged that Altman was back at the company.
And everyone else who used to be on the board is now gone and they're replacing the board.
So some kind of power struggle, Altman won and the board lost.
I think it's still safe to say we don't exactly know why.
You know, the reasons given for making these moves in the first place were extremely vague, or we were not told the whole inside story.
But there is at least one plausible scenario that is worth keeping in mind, also while keeping in mind that it might not be the right one, which is the following.
Yeah, so he's referring to that kerfuffle that happened there at OpenAI.
It's now ancient history.
Late last year.
He was reinstated, wasn't he?
But, you know, his presentation of that was very even-handed, I thought.
Again, it was a little bit whimsical, but he was basically describing what happened.
And then at the end there, he sort of emphasizes that there could be a little bit of speculation going on as to...
What could be going on?
Yeah.
Now, one thing that you'll notice there is that he said things like the reasons given for making these moves in the first place were extremely vague, or we weren't told the whole story.
Keeping in mind that this thing that he's going to say might not be the right one.
So there's all these kind of caveats where he's saying, you know, we don't really know what happened, but this is what we saw from the outside.
And again, you know, this will be a good time to point out to people, this is the opposite of strategic disclaimers.
This is a real disclaimer about someone saying, I don't know, right?
Like, I'm expressing uncertainty.
I think it's useful to imagine Eric Weinstein describing the same scenario.
And can you just imagine the ominous tones?
And we don't know, Matt.
We just don't know.
We don't know.
Something is up.
But something is up.
You know, the powers that be, things are happening, and we just don't, you know, it would be invested with this paranoid suspicion, whereas the way he relates it, he describes it exactly how I understood it, which was, yeah, it's one of these things that happened, and we legitimately don't know because,
of course, we don't know.
It's a company.
You know, they don't publish what goes on in their boardrooms.
Yeah.
Now, he flagged up one possibility that he was going to highlight, and let's just see what that is and how he characterizes it.
Let's put it that way.
Artificial general intelligence is supposed to be the kind of AI that is specifically human-like in capacities and tendencies.
So it's more like a human being than just a chatbot or a differential equation solver or something like that.
The consensus right now is that these large language models we have are not AGI.
They're not general intelligence.
But maybe they're a step in that direction, and maybe AGI is very, very close, and maybe that's very, very worrying.
That is a common set of beliefs in this particular field right now.
No consensus, once again, but it's a very common set of beliefs.
So anyway, the story that I'm spinning about AI, OpenAI, which may or may not be the right story, is some members of the board and some people within OpenAI...
Became concerned that they were moving too far too fast, too quickly, without putting proper safety guards in place.
Yeah, so just a little point of order, and this is not a ding at all on Sean, but yeah, AGI is legitimately a slightly ambiguous term.
It's sometimes used to describe intelligence like he said, which is more human-like.
Yeah, but it's also often used to describe, I guess, a more general purpose.
So it might not necessarily be human-like, but it could be multimodal and you'd be able to transfer knowledge to different contexts, which is also somewhat human-like, so it's fuzzy.
Yeah, I think this might come up later because this is one of the points that he makes about it.
But again, Matt, you just have to know there's no consensus around this.
It is a common set of beliefs.
He's not passing strong judgment, but he's accurately describing the state of play.
And he has an opinion, clearly, but he's capable of describing other opinions without automatically investing them with emotion or that kind of thing.
Again, it's just notably different than the people we usually cover in the way that they report things because expressing uncertainty and accurately presenting.
The state of play.
Yeah, he's doing that very academic thing where you kind of do a bit of a, like a literature review.
Yeah.
Like a context providing survey of the situation before introducing your own particular take, your own particular arguments on the matter.
And when you listen to the bit at the beginning...
You don't really know.
What is Sean Carroll's position on?
Should we be concerned about AGI?
Should we be worried about it?
Should we not?
Are these people fools or are they not?
He's actually just describing the situation accurately.
He's right, saying that a lot of people are concerned.
And he's not investing a valence to that yet.
Yes.
And so from there, he does go on to first express a view.
Then we'll see him give some rejoinders to particular potential objections.
And then he goes into the evidence or basis of his positions in more detail, in quite a logically structured way.
But here is him expressing a particular perspective.
I personally think that the claims that we are anywhere close to AGI, Artificial General Intelligence, are completely wrong, extremely wrong, not just a little bit wrong.
That's not to say it can't happen.
As a physicalist about fundamental ontology, I do not think that there's anything special about consciousness or human reasoning or anything like that that cannot, in principle, be duplicated on a computer.
But I think that the people who worry that AGI is nigh, it's not that they don't understand AI, but I disagree with their ideas about GI, about general intelligence.
That's why I'm doing this podcast.
So this podcast is dedicated to the solo podcast to the idea that I think that many people in the AI community are conceptualizing words like intelligence and values incorrectly.
So I'm not going to be talking about existential risks or even what AGI would be like, really.
I'm going to talk about...
Large language models, the kind of AIs that we are dealing with right now.
There could be very different kind of AIs in the future.
I'm sure there will be.
But let's get our bearings on what we're dealing with right now.
You like that, Chris?
I like that so much, Matt, because you know what I like about it?
I like that he's very clear.
I already know from this paragraph what he's going to be talking about, what he isn't focusing on, and how far he's not extrapolating.
Again.
I feel like my brain is addled by listening to the gurus because this is just the exact opposite of what they do.
Things are never this well structured.
There's some exceptions.
There are some exceptions.
I think, for instance, Sam Harris is someone that does often lay out his positions in the same kind of structured way.
But this is just refreshing.
And he makes his position clear.
And then he also highlights, okay, but this is my personal take on this, and I'm going to layer the reasons why, but it's not confusing.
It's not layered in metaphor and analogy.
It's not grandiose.
It's just, this is my personal opinion on this topic.
Yeah, once again, I think...
Sean Carroll's academic background is showing here, isn't it?
Because I didn't realize the first time I listened to this, but it does have the structure of a good academic article, which is before you get into anything, really, you signpost to the reader.
You let them know what the scope of your thesis is going to be about.
And like you said.
Delineate the stuff that it isn't and describe what your focus is going to be.
Some people claim this.
Some people claim that.
I'm going to argue this.
And then you get into it.
Yeah.
And so there's now this section which is a little bit dealing with potential rejoinders, right, in advance.
You know, some people who are very much on the AI as an existential risk bandwagon will point to the large number of other people who are experts on AI who are also very worried about this.
However, you can also point to a large number of people who are AI experts who are not worried about existential risks.
That's the issue here.
Why do they make such radically different assessments?
So I am not an expert.
On AI in the technical sense, right?
I've never written a large language model.
I do very trivial kinds of computer coding myself.
I use them, but not in a sort of research-level way.
I don't try to write papers about artificial intelligence or anything like that.
So why would I do a whole podcast on it?
Because I think this is a time when generalists, when people who know a little bit about many different things, should speak up.
So I know a little bit about AI.
You know, I've talked to people on the podcast.
I've read articles about it.
I have played with the individual GPTs and so forth.
And furthermore, I have some thoughts about the nature of intelligence and values from thinking about the mind and the brain and philosophy and things like that.
Yeah, Chris, I want to speak to this a little bit.
And this is slightly just my personal opinion.
I'm actually going to be agreeing with Sean Carroll here furiously.
And I'm speaking as someone that does have a little bit of relevant background here.
I've got a strong background in statistical machine learning.
I was working in it way before deep learning came along.
When it first came along, I, with a colleague, did program up a deep convolutional neural network for image processing.
And we wrote it ourselves in C++.
We didn't just pull some library, some prepackaged thing and apply it.
And yes, our attempts were, I'm sure, feeble.
And toy-like by modern standards.
But even so, I'm just pointing out, without being a specialist in the area, I haven't worked in it for a long time, I feel like I have the background.
And given that, it is interesting that some topics, I think, like Sean Carroll says, are amenable to, I guess, a more...
General audience kind of weighing in.
And I kind of wouldn't say this about something like virology, you know, the origins of COVID and things like that.
Or another good example would be quantum mechanics and what's really going on.
Is it string theory or is it in multiple worlds?
I think you have to be a specialist to be able to contribute something useful there.
But the interesting thing about AI and statistical machine learning generally is that it is a form of engineering.
And what you end up building is a kind of, yes, you have an architecture.
Yes, you understand the learning algorithms and so on.
And yes, there is a benefit to understanding things like local minima and error functions and dimensions and things like that.
Matrices, matrix algebra and the rest.
But honestly, that stuff...
Some would say.
Yeah.
The matrix, Chris.
The matrix, yeah.
So, but...
The funny thing is it doesn't give you like a massive insight into what's going on.
At the broader level, like to a large degree, someone who does understand all that stuff perfectly, I'm not saying I understand it perfectly, but I understand other artificial neural network and machine learning models perfectly, mathematically.
And it's a bit like statistics.
Statistics is an interplay between mathematics and the real world.
And it's the same with machine learning in that you build things, you apply the algorithms, and then you see what it does and how it works.
Yeah, anyway, I'm just agreeing with Sean here that I think it is totally legitimate for not just, you know, maths and engineering geeks to have opinions about it.
I think it's good that people like Sean Carroll have opinions as well.
I think we are often misunderstood on this point as well, and I'll have something to say about it, but I think it would be better to play the second clip because Sean Carroll makes some of the points that I would make more eloquently.
It's weird to me because I get completely misunderstood about this.
I might as well make it clear.
I am not someone who has ever said, if you're not an expert in physics, then shut up and don't talk about it, okay?
I think that everybody should have opinions about everything.
I think that non-physicists should have opinions about physics.
Non-computer scientists should have opinions about AI.
Everyone should have opinions about religion and politics and movies and all of those things.
The point is, you should calibrate.
Your opinions to your level of expertise, okay?
So you can have opinions, but if you're not very knowledgeable about an area, then just don't hold those opinions too firmly.
Be willing to change your mind about them.
I have opinions about AI and the way in which it is currently...
Thinking or operating, but I'm very willing to change my mind if someone tells me why I am wrong, especially if everyone tells me the same thing.
The funny thing about going out and saying something opinionated is that someone will say, well, you're clearly wrong for reason X, and then someone else will say, you're clearly wrong, but in the opposite direction.
So if there's a consensus as to why I'm wrong, then please let me know.
Anyway, whether I'm wrong or not, I'm certainly willing to learn, but I think this is an important issue.
Yeah.
He says it well.
He puts it well.
Hard agree.
Hard agree.
It's a nuanced point though, isn't it, Chris?
Because it's about, like you said, calibrating your opinion and being flexible with it and just having an appreciation of the stuff that you don't know about a topic.
Like, let's take a different example, like the origins of COVID.
You and I have opinions about it and it's not just purely like, oh, well, these guys with a lot of expertise said X, so that's what we think.
Yes.
That is an important part of it.
But as well as that, there is like a level, like you and I cannot judge these genomic assays and figure out those things, right?
We don't try.
We relegate that kind of thing to people that know how to do it.
But at a different level, we can sort of factor in a bunch of, I don't know, maybe more nebulous.
We know how to do literature reviews and to assess scientific debates around topics.
A lot of academics in general do know how to do that.
You know, there's varying degrees of ability to it, and you might not be as good at assessing it when you're looking at different fields.
But, for example, if you were somebody that was generally interested in standards of evidence and replication crisis and that kind of thing, I think you can.
Get quite a good grasp of, you know, what qualifies as stronger evidence, what looks like a consensus.
And you can also get that, without being a scientist, by being engaged with things like climate change denial or various fringe theories, alternative medicine.
Because there you learn about the standards of evidence and people like...
Taking meta-analyses as indicating that psi exists or this kind of thing, right?
And so I'm saying this not to say that it is something that only academics can do.
It is a kind of skill that you can develop.
And sometimes people overestimate it.
For different subjects, it can be harder to do, right?
You can be wrong about it.
But one of the problems is that a lot of people don't understand that they don't do that.
They treat scientific topics as if they can be assessed in the same way an opinion piece in a newspaper should be assessed.
Like, do you find the way that the person writes convincing?
And that's not the standard you should be using for scientific topics, right?
Yeah, like I hear you saying.
I mean, I guess it's about...
It's about assigning the right degree of credence to the various sources of information you've got and your own judgments.
And the problem that we might have with someone who is making these snap judgments about the origins of COVID is that they'll make these sweeping claims, for instance, that...
Well, you know, everyone said it was racist to even suggest that it could have originated from China.
Therefore, I don't trust any of the scientific evidence because it's just a bunch of scientists who are afraid of being racist.
And they proceed from there.
So it's this bad, lazy, you know, sweeping reasoning.
Whereas there's another way to do it where you can take those things into account, weight them appropriately, and weight the other stuff, including the primary evidence.
That is, the conclusions from it.
You may not understand the nitty-gritty of the methodology, but you can put that stuff together and come to a more reasonable position.
And non-specialists can do that as well as full-on specialists.
And I will also say that, again, if you spend time studying conspiracy communities and conspiracy theorists, you can notice when people are engaging in that kind of approach.
So this is something...
Where I think that it was very obvious to see in the lab leak community in the reaction to COVID that there was, along with legitimate debates, a conspiracy community developing.
And it is now a full-blown conspiracy community.
And it does exist.
And you can notice those things and you can discuss that without saying that any discussion of the lab leak is a conspiracy theory.
But the problem is that people take acknowledgement that that...
Community exists as saying, oh, so you dismiss any discussion, you know, and it isn't that, right?
Because all the scientists are debating the lab leak, like, in the publications where people are saying they're not allowed to even mention it.
It's so annoying.
Anyway, just related this back to artificial intelligence and why there's a legitimate difference of opinion and why reasonable people might disagree is that there is, like Sean says, a variety of opinions within the AI community, people with specialist knowledge.
Some of them are very concerned about existential risk.
Some of them are dismissive.
For instance, Yann LeCun, whose work I kind of replicated, as I alluded to, he's very dismissive of large language models as being anything like really genuinely useful because it's not embodied.
It doesn't have a physical model and so on.
There are other people that think he's completely wrong and it is a good pathway.
The evidence out there is relatively conflicting because on one hand, you've got…
On the other hand, you can look at other examples where it does seem to be doing very poorly.
And it is one of those things where it's actually quite difficult to test because there's so many different ways in which...
Well...
We'll get there.
Anyway...
We'll get there.
We'll let Sean talk to it, yes.
Yes.
But I will also say, Matt, that there's kind of two interacting parts there.
One is that Sean is talking about his level...
Of confidence in his assessments, right?
And the fact that he isn't a specialist in AI means that he holds some of the views that he has much weaker than he might say for physics, where he's more confident of his expertise.
And I think that's one way.
And the other way is that not just your confidence, but you should weight the level of credence that you lend to opinions that you see.
There in discourse land.
Is that person an expert that is generally coherent or in this particular topic is very coherent and expert?
Or do they represent a very small fringe position?
These are things to weigh up.
So that's like a kind of different epistemic point, which is that while you have to be willing to adjust your level of confidence, you also have to critically evaluate others' level of confidence and how much you should heed it.
Alexandros Marinos, Brett Weinstein have very, very strong confidence in their opinions.
So does Eliezer Yudkowsky.
I'm not sure you need to give them such credence, right?
But Yann LeCun, for example, is maybe somebody worth paying attention to, even if his position is wrong in the end.
Yeah, yeah.
Yeah, that's right.
He's a smart guy, but he's got his own...
He's got his own...
What's the word?
Commitment.
Yeah, he's got his own bag, man.
He does.
And that's fine.
Just weigh that all in.
Anyway, Sean Carroll does well.
He acquits himself well here because he does make these disclaimers about his confidence.
But he says, that's not going to stop me from weighing in and giving you my opinion and explaining the reasons why.
And he's right.
You know, it shouldn't.
His disclaimers are not, you know.
They're not prevalent.
Yeah.
And this is something that not just gurus are guilty of, hey, Chris.
Especially in the fields of physics, mathematics, and philosophy, I have to say, there can often be a certain kind of arrogance, which is, you know, from the deep principles I know from my particular field, I can make this sweeping claim.
About this other field, about which I know very little.
I actually like Roger Penrose, but I think he was guilty of a bit of that, for instance, when he dived into neurobiology.
There's a general thing where physicists end up talking about consciousness and philosophy as they get older.
So this is something which has been noted.
Or they develop an interest in Buddhism.
In any case, good advice that Sean gives, which we have also given.
I think that AI in general, and even just the large language models we have right now or simple modifications thereof, have enormous capacities.
We should all be extraordinarily impressed with how they work, okay?
If you have not actually played with these, if you have not Talked to ChatGPT or one of its descendants.
I strongly, strongly encourage you to do so.
I think that there's free versions available, right?
This is something you've said, Matt.
When people are raising questions about ChatGPT, like, just go try it.
Go do it.
It's free, right?
You can access it for free.
The 3.5 model, at least.
So, yeah.
And I agree.
Hands-on experience with AIs.
Very important.
Yeah.
But the other thing he's doing here, Chris, is...
What you mentioned before, which is delineating what he's saying, what he's not saying.
So as we're going to hear, he does take a pretty AI skeptical position, but he's quite specific about what he's skeptical about.
And he's not skeptical that they're pretty interesting, that they're potentially vastly useful and they can do pretty amazing things.
He's not saying that they're just a stochastic parrot, just fancy autocomplete, as some people like to say.
No, he's making a point about whether or not...
They are a general intelligence in the same way that we are.
Yeah, and he goes on, I won't go through the example, but he basically talks about getting ChatGPT to help generate a syllabus on philosophical naturalism, and that it was very good at it.
If you'd asked him if that would have been possible even a year or two ago, he would have been saying no.
But it generated a very nice syllabus.
But then he mentions also that it invented one reference.
It was a plausible reference from a researcher that could have existed, but it didn't.
It also generated real references, but this is the thing about how sometimes ChatGPT, or LLMs in general, engage in...
Flights of fancy.
They've gotten much better at it.
But just very recently, we were talking to an interview guest and you used the GPT to help, you know, just generate the biographical details.
And it invented the subtitle of his book, right?
So that's a certain example.
Screw you, Claude.
You embarrassed me.
Yeah, so it does have, I think it's a good point, that it's very useful.
It's not without its limits, right?
But then...
As you said, he has some skepticism, and it's basically around this.
So the question in my mind is not, you know, will AI, will LLMs be a big deal or not?
I think that they will be a big deal.
The question is, will the impact of LLMs and similar kinds of AI be as big as smartphones or as big as electricity?
I mean, these are both big, right?
Smartphones have had a pretty big impact on our lives in many ways.
Increasingly, studies are showing that they kind of are affecting the mental health of young people in bad ways.
I think we actually underestimate the impact of smartphones on our human lives.
So it's a big effect, and that's my lower limit for what the ultimate impact of AI is going to be.
But the biggest, the bigger end of the range is something like the impact of electricity, something that is truly completely world changing.
And I honestly don't know where AI is going to end up in between there.
I'm not very worried about the existential risks, as I'll talk about very, very briefly at the very end.
But I do think that the changes are going to be enormous.
Yeah, there is a second point to this.
So that is him correctly quantifying it. Just to be clear, he thinks it's going to be hugely transformative.
It's just the degree of the transformation, but his kind of skepticism is more encapsulated in this clip.
The thing about these capacities, these enormous capacities that large language models have, is that they give us the wrong impression, and I strongly believe this.
They give us the impression that what's going on underneath the hood is way more human-like than it actually is.
Because the whole point of the LLM is to sound human, to sound like a pretty smart human, a human who has read every book.
And we're kind of trained to be impressed by that, right?
Someone who never makes grammatical mistakes, has a huge vocabulary, a huge store of knowledge, can speak fluently.
That's very, very impressive.
And from all of our experience as human beings, we therefore attribute intelligence and agency and so forth to this thing because every other thing we have ever encountered in our lives that has those capacities has been an intelligent agent,
okay?
So now we have a different kind of thing and we have to think about it a little bit more carefully.
Am I allowed to respond to this, Chris?
You can respond now.
I'm off the leash.
Yeah, I think he raises an important point.
He's referring to anthropomorphism.
No, anthropomorphization, I think I want to say.
I don't know.
Anyway, and he's right, of course.
People do that to all kinds of things, even inanimate objects.
And he's right again to say that we're going to be particularly prone to...
Do that with something that has been designed to sound like us.
The original attempts at making a language model, Eliza, you know, that good old Eliza Chris?
Eric said that Eliza was smarter than me.
I keep referencing my own accomplishments, but this is a very small accomplishment.
I actually programmed Eliza in BASIC in high school.
I did, yeah.
Because you can.
It's not a big program.
It's such a small program.
And, you know, it just substitutes in some words and says some vague things, but it references a word that you used.
And it's amazing how, you know, convincing that is.
You get the feeling that you're talking to a person.
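For anyone curious what that "small program" looks like, here is a minimal ELIZA-style sketch in Python rather than the BASIC mentioned above; the keyword rules and canned responses are illustrative, not Weizenbaum's originals.

```python
# A minimal ELIZA-style responder: keyword matching plus pronoun reflection.
# The rules below are illustrative stand-ins, not the original script.
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my", "am": "are"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "Can you elaborate on that?"]),  # vague fallback
]

def reflect(fragment: str) -> str:
    # Swap first/second person so the echoed words read like a reply.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.lower().strip(". !?"))
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)
    return "Please go on."

print(respond("I feel nobody listens to me"))  # e.g. "Why do you feel nobody listens to you?"
```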
So he's right.
He's right about that.
But where I feel like these little twinges, and I want to just raise the point, which is that while he may be completely right that we have this tendency to anthropomorphize AIs, I also think he's right that if they do think, if they do have some kind of intelligence,
then it's not going to be the kind of intelligence that people do.
It can't be.
It's a totally alien.
It's not going to think about things like us.
But I would just caution that, getting back to our definition of AGI, that's referring to a definition of AGI which is human-like.
And there is another definition of AGI which is that ability to generalize, the ability to apply knowledge in different modalities and in different domains and situations.
And that's a different version.
And you can imagine that we could meet some aliens.
Some aliens could land in a spaceship tomorrow.
They come out of the ship.
We get to know them.
They turn out to be very smart.
They may not think like us at all, but we wouldn't deny that they had a general intelligence just because they were different.
Interesting analogy, Matt.
I like that.
I like that.
Yes.
And I will say, just, you know, on the anthropomorphising tendency, like, I like the research where if you show children, adults...
Maybe not animals, maybe some of them.
Actually, I think it does work with some of them.
If you show them objects doing things, creating order, like banging into dismantled blocks, and then the blocks order into a construction, children, very young children and adults are surprised,
right?
Because a ball shouldn't do that.
But if you stick googly eyes on it, they're not as surprised.
Very, very young children.
Also, I think some primates.
And it's a good...
Example, just stick googly eyes on anything that vaguely has a human shape and people will attribute a lot of agency to them.
My son believes that the Roomba is an agent because it moves around in a goal-directed way.
So yeah, we are good at doing this and not just through language detection, but this is a new avenue because we've had self-directed, non-agentic things for a while in our world.
We haven't had...
Things that have been so good at presenting themselves as doing artificial human speech, right?
Yeah.
And we definitely do think of language as a uniquely human thing.
And that's been one of the cool things about even like the previous generations of LLMs, GPT-2, GPT-3, they weren't very good.
They were clearly blathering away.
But I thought they were a fantastic model of the gurus and how they have a dexterity and a facility with language and can bamboozle you into thinking there's something
serious and important going on under the hood.
And those previous generations of LLMs were proof that it doesn't have to be.
Well, yeah, actually, that's what I was thinking as well.
Just one point that Sean Carroll made about, you know, all of our experience tells us that when people appear to be intelligent, when they appear to be, you know, verbally dexterous and whatnot, that this is a sign of intelligence, which is generally true.
And this is what the gurus...
Rely on as well.
So like when he was saying that, I was like, that is true, but it's a mistake, right?
Humans also make use of that to produce the guru effect that, you know, Dan Sperber and various other people have talked about.
So, yeah.
Yeah.
And we can talk about the limitations of LLMs, but we have to also concede that a lot of humans are pretty limited too.
And they can conceal it.
Pretty well on Twitter for a while.
Well, so here's him setting out the building blocks of his arguments.
And we won't go in depth into all of them, but I just want to highlight this is, you know, he's gone through the objections.
Here is him setting out how he's going to present the evidence for his argument.
So I want to make four points in the podcast.
I will tell you what they are and then I'll go through them.
The first one, the most important one, is that large language models do not model the world.
They do not think about the world in the same way that human beings think about it.
The second is that large language models don't have feelings.
They don't have motivations.
They're not the kind of creatures that human beings are in a very, very central way.
The third point is that the words that we use to describe them, like intelligence and values, are misleading.
We're borrowing words that have been useful to us as human beings.
We're applying them in a different context where they don't perfectly match.
And that causes problems.
And finally, there is a lesson here that it is surprisingly easy to mimic humanness, to mimic the way that human beings talk about the world without actually thinking like a human being.
To me, that's an enormous breakthrough and we should be thinking about that more.
Rather than pretending that it does think like a human being, we should be impressed by the fact that it sounds so human even though it doesn't think that way.
I think those are real good points.
Four pillars, Matt.
Four pillars, yeah.
Which one would you like to go to?
Let's talk about the lack of a model of the world, shall we?
Okay, yeah.
So here's a little bit more on that point.
One of the kinds of things that are used to test does a large language model model the world is, you know, can it do spatial reasoning?
Can you ask it, you know, if I put a book on a table and a cup on the book, is that just as stable as if I put a book on a cup and then a table on the book, right?
And we know that it's better to have the table on the bottom because we kind of reason about its spatial configuration and so forth.
You can ask this of a large language model.
It will generally get the right answer.
It will tell you you should put the cup on top of the book and the book on top of the table, not the other way around.
That gives people the impression that LLMs model the world.
And I'm going to claim that that's not the right impression to get.
First, it would be remarkable if they could model the world, okay?
And I mean remarkable not in the sense that it can't be true, but just literally remarkable.
It would be worth remarking on.
It would be extremely, extremely interesting if we found that LLMs had a model of the world inside them.
Why?
Because they're not trained to do that.
That's not how they're programmed, not how they're built.
Can I reply to him?
You can reply anything you want, Matt.
Because I'm not really decoding.
I'm just giving my opinion.
And just like he's giving his opinion.
But it's fun.
I found it really interesting, the stuff he had to say.
Agree with some of it.
Disagree with other bits a bit.
And it's fun to talk about.
So I would just point out that on one hand, absolutely, yes.
One thing we know about large language models for certain is that they have no direct experience of the world.
They don't get to interact with the physical world directly.
But then...
We think about what they do interact with.
And they've obviously been trained on all of these, all the texts, all of the stuff that people have written.
And they're also multimodal now.
So they're also looking at images, basically, and able to connect those two domains together.
But let's put aside the images and the multimodal stuff and just think about the language stuff.
I put this to you.
So on one hand, yes, they have this limitation.
They aren't able to interact with the world.
But imagine somebody, Chris, who was blind and maybe, you know, restricted somehow to live in a dark room.
But they had, in all other respects, they had an incredibly rich secondhand experience.
They could read all of these books.
They could have conversations with people.
They could interact with all of this knowledge and through it, perhaps.
Gain some kind of knowledge of the outside world.
And in fact, a lot of the things that you and I know about the world is not from direct experience, but from stuff that we've read.
And there is an argument that you absolutely have to ground things in direct physical experience.
You have to be embodied.
Otherwise, there is just nothing for your semantic representations to scaffold off of.
But if you think about it a bit more, you appreciate that there is always an interface between our brains and the physical world.
We don't get to experience it directly.
We were just talking about it yesterday, about how there is an awful lot of stuff going on before it even gets to our conscious awareness.
And I would ask the question, how much could a person Know about the world and make useful, intelligent inferences about the world.
Maybe they wouldn't know the best way to stack up cards and books and pencils because that's a particular topic that you really do need to have some maybe first-hand experience with.
They wouldn't know how to drive a car or ride a bicycle.
But they could maybe talk about stuff like, I don't know, Henry Kissinger's policy in Southeast Asia or Napoleon...
Wherever.
There are heaps of things that you and I have not had any direct experience with, but we could maybe make sensible conclusions about secondhand.
I think your thought experiment mixes up too many things because the first thing about a person who's blind in a room, they would have...
A whole bunch of other sensations, physical sensations.
But so I'm imagining more that you've got like somebody who is completely suspended and unable to interact except for secondhand accounts, right?
Like whatever, however they're able to read and whatnot in that.
Let's set that aside.
Maybe direct brain transmission has been.
I appreciate it's not a perfect metaphor.
But the reason I mention that is because I think that matters, because if you have the other experience of sensory inputs and whatnot, it wouldn't be fair to say that you're lacking all stimulus, because you have all the other inputs.
And I think that a lot of the cognitive research on things like intuitive physics And that kind of thing is that it's a matter of having processes in our cognitive systems that are set up to expect sorts of interaction.
So that is different than reasoning from secondhand examples.
That is having a cognitive system which comes with a sort of pre-registered physics expectation in it, which develops naturally when you're in an environment, right?
But that's the thing.
It develops when you're interacting in an environment.
I think if you put a kid in a room and they had never seen any physical objects and never interacted, that you would have the system inside that is modeling how things work, but I don't think it would be possible to adopt it all from second-hand sources without that underlying system.
Yep, I mean...
It's possible.
Yeah, look, I mean, your argument is one, or that point of view is one that is shared by a lot of people.
And I, for a long time, I thought that...
I sort of thought, well, we need to really focus on embodied agents because everything gets scaffolded off that.
And you can't, for instance, reason about whether or not this particular text is illustrating narcissistic behavior.
That somehow, even though the direct relationship to interacting with the physical world is not obvious, those semantic representations all need to be based, many layers down,
on a visual, auditory, touch, a physical standpoint. But I'm not so sure.
I'm not making the argument actually that it is necessary for that to be the case.
I'm just saying in the case of humans, I think it's hard to do the thought experiment because we don't come as blank slates, right?
So we have all the biological stuff.
Which is in there.
So it's easier to think of an AI doing that and potentially, you know, not through the same process, building up a coherent model, not in the way that we do it.
Not like an agent modeling intuitive physics system, but maybe through a process that we can't intuitively visualize.
Like, I can imagine that.
It's just the human bit.
Well, you can forget about humans.
I think the bit that everyone agrees on, well, there's two things that everyone agrees on, which is that spatial and physical intelligence is not something that LLMs naturally excel at.
They absolutely don't, for perfectly understandable reasons.
I myself and my students, I've had my students working on projects with AIs recently, and you give them mental rotation tasks, and they do very badly on them.
And this makes sense.
They're like word cells, right?
If you grant that.
But I think what I'd be careful of is just a stronger claim, which is that, yes, they're really quite bad at those things, but that means that they're totally unable to reason in a general sense about semantic ideas.
And what we see even, like, you know, you could stick in two random objects into GPT-4, right?
And you could deliberately choose very random things that no one has ever asked before.
Can you put a such-and-such on a such-and-such?
And I'd bet you good money that it would probably get the right answer more often than not, despite the fact that it's never interacted with any of those things.
The actual question is not in its training data set.
It's actually had to make some inferences purely from second-hand information.
So I'm not saying it's good at that kind of thing, but I'm saying the fact that it could even do...
Kind of well, or achieve any kind of performance at all on physical questions is indicative that it is able to generalize.
Yeah, the issue about whether it needs embodied cognition to reach like AGI, I think is an interesting point.
And the example that Sean Carroll references, he talks about a whole bunch of things, right?
But one that he gives is that he asked it.
Would I hurt myself if I used my bare hands to pick up a cast iron skillet that I used yesterday to bake a pizza in an oven at 500 degrees?
And the answer, as he points out, is that no, because humans would realize you did it yesterday, so it will have cooled down by the next day.
But ChatGPT, because the kind of...
Relevant information is buried in that sentence structure.
And actually, the associations are more around, in general, if you're asking about whether it's okay to hold a pan that was very hot, the answer will be no, right?
That it's dangerous.
So he says it made a mistake and it said that you shouldn't pick it up because it will be hot and this causes a problem, right?
Yes.
Yeah.
I know, Matt.
You did a little experiment.
So why is that not an indication that it is fundamentally lacking in that respect?
So, look, I don't want to be too mean to Sean Carroll because I liked many aspects of his talk here.
But one of the things that I really have to ding him on is that he does reason quite a few times, at least a couple of times, from anecdotes.
Right?
And, you know, so he gives this example, oh, I asked ChatGPT whether the thing, and it gave the wrong answer.
And from that, he concludes...
Well, wait, Matt.
I have a very short clip of him concluding it.
So here it is.
The point is, because I slightly changed the context from the usual way in which that question would be asked...
ChatGPT got it wrong.
I claim this is evidence that it is not modeling the world, because it doesn't know what it means to say, I put the pan in the oven yesterday.
What it knows is, when do those words typically appear frequently together?
I have a bone to pick with Sean here, because...
I mean, let me just go to the punchline first and then get that out of the way.
So, of course, what did I do?
You can guess what I did.
I immediately put that question straight into GPT-4 while I was listening to the thing.
And sure enough, GPT-4 said, if you place a skillet in a hot oven for an hour yesterday and it has been out of the oven since then, it would no longer retain the heat from the oven and should be at room temperature now.
This is a problem with, or not a problem, but it's a challenge when it comes to evaluating ChatGPT, that you can't just give it one question and go, oh, we got that wrong.
And there was no context, just to be clear.
There was no context, you know, because ChatGPT has a prior history.
Zero context.
So no alert messages.
I opened up a chat window, no preamble, no think about this step by step.
That's the question.
Would I hurt myself?
So, like, I'm not saying my anecdote there proves that GPT...
Can think about these things well or like a human.
So I don't think it thinks like a human.
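For what it's worth, here is a rough sketch of that kind of zero-context check done as a script rather than through the chat interface; it assumes the openai Python package and an API key in the environment, and the model name is illustrative rather than a claim about what was actually used.

```python
# One fresh request, no system prompt, no "think step by step" nudge:
# roughly the zero-context setup described above.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Would I hurt myself if I used my bare hands to pick up a cast iron "
    "skillet that I used yesterday to bake a pizza in an oven at 500 degrees?"
)

response = client.chat.completions.create(
    model="gpt-4",                                      # illustrative model name
    messages=[{"role": "user", "content": QUESTION}],   # no prior context at all
)

print(response.choices[0].message.content)
```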
But definitely Sean Carroll's example there.
He is too precipitate in leaping to the conclusion that it can't reason about the world.
I mean, the way he frames it, it doesn't have a model of the physical world.
Like, I fully agree with that.
But a lot of people don't reason about the physical world like using...
Like, a model.
We reason about the physical world heuristically as well.
Yeah, but I think we do, again, I'm still, like, I'm with him on the fact that humans in their imagination are applying, like, intuitive physics and intuitive biology and all that.
Like, it would be, I imagine, very difficult to do that using, like, the human way of reasoning.
But I think that...
Well, hold on.
You can respond with Sam Harris in a second.
I know where you're going with this, Chris.
I know where you're going with this.
You don't need to say anymore.
Look, the only point I want to say is that it's not entirely unreasonable for him to reach conclusions from his experiments with ChatGPT.
It is just the level of confidence that you attribute to them.
And I don't read him as strongly.
Attributing that ChatGPT cannot ever do this.
He's just saying the model that he interacted with presently was getting it wrong, right?
And that he thought this signal was a limitation.
But I think if he now tried the same experiment, and it did, it wouldn't automatically change its thing.
But I think it would lower his confidence that that is a...
A signal.
Oh, yeah.
Sure.
No, look, he's an eminently reasonable guy.
But, you know, I'm just saying he slipped into one of the traps that pretty much all of us have.
Like, I have too.
You see it do something clever and you go, oh, my God.
Or you see it do something dumb and it changes your mind again.
Like, it's just a challenge when it comes to evaluating its performance.
But the part I'm unclear of, Matt, is like, so you have given ChatGPT little, you know, puzzles the same way he has.
And reasoned about its limitations from doing that.
Now, you haven't made grand pronouncements.
In general, you're saying that it's bad to do that because it often is able to do whatever it wasn't able to do the week before.
Yeah, you know, when it's...
But also vice versa.
You know, I've seen it do impressive things and then asked a similar question and it's failed in a way that was surprising.
Yeah, it gets lazy whenever they tweak some things that make it stop wanting to do work.
Yeah, it does get lazy.
And, you know, I mean, in my defense, Chris, I have, with the help of some students, systematically tested it in the sense that we've created a battery of tests.
I didn't mean to be smart to you.
This is true.
That's my point, though, is like...
The fact that you are systematically testing it.
You're testing it by presenting it with scenarios and taking its output as indicative of, like...
And running it multiple times because it's stochastic, it's random, and...
Okay, that's a crucial qualifier.
I see.
Okay.
Well, so you asked that.
I misunderstood because I thought you prompted it to like think about the thing in more depth and it was able to do it.
But in that case as well, even if you did that, If you are able to prompt it, like not by giving it the right answer, but just saying, you know, think carefully about it step by step and then give the exact same question and it gets the right answer.
That's interesting as well because it's kind of, it might be the key is that we are able to get it to generate stuff that we want by getting it to...
emulate thought processes in a certain way. Or at least, you know...
It's hard to use human language to describe it because I'm not saying it's doing the same thing.
I'm just saying our prompts could be helping it to piece things together in a way that we want.
Of course it is doing that, but if we wanted it to model the world and we're giving it prompts that help it to model...
Oh, yeah.
If you prompt it, if you tell it to think carefully, step by step, or if you tell it to think scientifically and answer this question, then it'll put it into a region of its space, its possibility space, where it's more likely to get the right answer.
That's right.
But I like to ask questions without any of those helpful things.
But, Chris, I just want to make this point, which is that I think everyone would accept that reasoning about physical and geometric...
Objects in the physical world is not the forte of a large language model.
But that's not the question.
The question that interests Sean Carroll and me and lots of people is whether or not there is a lot of promising scope there for a general kind of intelligence.
By that, I mean being able to apply knowledge across multiple domains in situations that it hasn't seen before.
Or is it just doing like a very clever, plausible, blathery, stochastic parrot thing?
And the reason I'm disagreeing with Sean Carroll there is that I feel the conclusion he is coming to by saying, hey, look, it failed at this test, is that he is taking this as evidence to support his claim that it is poor at generalizing.
And I don't think it is.
Well, let's see.
So he does have a clip where directly after this, he expresses admirable humility about the conclusions that he's drawing.
That was a lot of examples.
I hope you could pass through them.
None of this is definitive, by the way.
This is why I'm doing a solo podcast, not writing an academic paper about it.
I thought about writing an academic paper about it, actually, but I think that there are people who are more...
Credentialed to do that?
No.
More knowledgeable about the background.
You know, I do think that even though I should be talking about it and should have opinions about this stuff, I don't quite have the background knowledge of previous work and data that has been already collected, etc., etc., to actually take time out and contribute to the academic literature on it.
But if anyone wants to, you know, write about this, either inspired by this podcast or an actual expert in AI wants to collaborate on it, Just let me know.
I hope the point comes through, more importantly, which is that it's hard but not that hard to ask large language models questions that they can't answer correctly.
So I interpret that as nice humility in recognizing that the thought examples he's posing, and, you know, the ways he's kind of stumping it, are not definitive.
He's just saying, this doesn't prove things one way or the other, you'd have to do a much more comprehensive test.
And then he also is like, you know, I maybe have some thoughts that might be, you know, worth contributing, but I don't have the requisite background knowledge.
So maybe if there's somebody with more relevant expertise, we could collaborate or something like that.
But that, to me, seems a very...
Well, anyway, he doesn't sound to me like he's saying he's completely decided that it can't do those kinds of things.
Yeah, yeah.
No, no, his meta is impeccable, right?
His approach, his intentions, his degree of confidence in himself, etc., the caveats, it's all beyond reproach.
I'm merely saying that he's made a legitimate mistake and it's not a reflection on him as a person or it doesn't make him guru-esque or anything.
He's just made a small...
But legitimate mistake in terms of, I guess, feeling that these examples, which he's posed to GPT-4, are carrying more weight in the argument that he presents in this podcast than they should.
Well, there's one part that he also talked about the potential for plugging in different kinds of models.
So I think it's worth...
Listening to this a little bit because I think it nuances one of the things that he is suggesting here.
If you remember the conversations we had both with Melanie Mitchell and with Gary Marcus earlier on the podcast, they both remarked on this fact that there was this...
Maybe eventually there will be some symbiosis between them.
In fact, modern versions of GPT, et cetera, can offload some questions.
If you ask GPT if a certain large number is prime, it will just call up a little Python script that checks to see whether it's prime.
And you can ask it, how did it know?
And it will tell you how it knew.
So it's not doing that from all the text that it's ever read in the world.
It's actually just asking a computer, just like you and I would, if we wanted to know if a number was prime.
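Roughly the kind of tiny script being described there: a trial-division primality check. This is a generic sketch, not the actual code any particular model generates when it offloads the question.

```python
# Plain trial division up to the square root: the sort of small, exact
# computation an LLM can offload instead of "knowing" the answer from text.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    divisor = 3
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 2
    return True

print(is_prime(2_147_483_647))  # True: 2^31 - 1 is a Mersenne prime
```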
There, Matt.
And again, correct me if I'm wrong, but he's talking about the different approaches and the fact that LLMs are winning the day for the connectionist paradigm.
But secondly, he's talking about the fact that increasingly people are developing modules or I don't know the correct word to describe it, but things that can interact with GPT and allow it to do like statistical modeling or something.
So there's no reason in principle that if modeling the world was very important, you couldn't build into it some little model that it can use if it wants to run, like, physics experiments or whatnot, right?
Yeah, for sure.
Yeah, he's completely right.
And also the history there about the symbolic paradigm, which was a complete dead end.
And that was when people first started thinking about the fact that Hey, if we set up this logical tree thing and tell the AI that a bird is a kind of animal and animals have these properties and so on, then it can do all this logical inference.
And they quickly realized, one, it's just a hell of a lot of work to sort of manually create this massive logic network of relationships.
And two, they kind of realized that it's a bit like a dictionary.
In a sense, the system...
Doesn't have any awareness of stuff that's out of the system.
So it's like a hermetically sealed little bubble and there are issues there.
So that got people thinking about embodied intelligence.
And LLMs, which are part of that connectionist paradigm, he's quite right in everything he says.
In purely practical terms, as a technology, it seems the way forward is very much modular.
We can think of something like an LLM as kind of that intuitive, associative part of our brain.
And then you have more, just like in human brains, we have specialized modules that take care of particular tasks.
And AI can be a hybrid of different systems.
And the LLM can almost be like an interface that kind of talks between them and communicates.
The thing is that that's very much analogous, again, to how people do it.
Like if I ask you whether such and such was a prime number, well, one, you wouldn't know because I know you're not a numbers guy.
But two, if you did, if it's just multiplying, you know, two three-digit numbers together, what would you do?
You wouldn't intuit it.
Your associative cortex wouldn't just kind of...
You wouldn't.
I would.
I'm like me and Mark.
You're not.
There are people out there who can do that, but it's not something that humans are naturally good at, just like it's not something that LLMs are naturally good at, right?
What humans do is we use a little algorithm.
What we do is we learn the algorithm.
Which is: okay, you multiply these single digits by each other, carry the one, etc.
And you'll come to the correct result.
So it's just like with an LLM, it knows how to ask a Python script to get the answer.
It doesn't get the answer directly itself.
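That "carry the one" procedure, written out digit by digit as a toy sketch (assuming non-negative integers), just to make the point that humans also follow an explicit algorithm here rather than intuiting the product.

```python
# Long multiplication as an explicit algorithm: multiply each pair of digits,
# accumulate into the right column, then resolve the carries.
def long_multiply(a: int, b: int) -> int:
    # Toy sketch: assumes non-negative integers.
    xs = [int(d) for d in str(a)][::-1]   # least-significant digit first
    ys = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            cols[i + j] += x * y
    carry = 0
    for k in range(len(cols)):            # resolve carries column by column
        total = cols[k] + carry
        cols[k] = total % 10
        carry = total // 10
    digits = "".join(str(d) for d in reversed(cols)).lstrip("0")
    return int(digits or "0")

assert long_multiply(123, 456) == 123 * 456   # 56088
```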
So all that being said, I still think it's interesting to see, well, what can LLMs do?
Keeping in mind that they're not a complete brain.
If we want to make an analogy to a human brain, they don't have those specialized little modules.
They don't have a vision processing module.
They don't have V1.
They don't have a primary motor cortex.
They don't have even the speech production centers.
It's doing everything via this sort of general purpose associative network.
And I just think if we're wanting to make the comparison with humans and get a sense of the potential...
Then I think it's good to think of it as being just that general purpose associative part of our brains.
Interesting.
Well, that might lead on to the second point he wanted to make, which is LLMs and AI not having goals and intentions and this being an important difference, right?
So here's him discussing that.
It is absolutely central to who we are.
That part of our biology has the purpose of keeping us alive, of giving us motivation to stay alive, of giving us signals that things should be a certain way and they're not, so they need to be fixed.
Whether it's something very obvious like hunger or pain, or even things like, you know, Boredom or anxiety, okay?
All these different kinds of feelings that are telling us that something isn't quite right and we should do something about them.
Nothing like this exists for large language models because, again, they're not trying to, right?
That's not what they're meant to do.
Large language models don't get bored.
They don't get hungry.
They don't get impatient.
They don't have goals.
And why does this matter?
Well, because there's an enormous amount of talk about values and things like that that don't really apply to large language models even though we apply them.
Can I just say, Matt, before you respond, just the format of that slightly makes me reminisce about, you know, the Terminator scene where he doesn't get tired, he doesn't get hungry.
It's the opposite of what his point is, because they're saying its only goal is to...
Hunt you down and kill you.
I remember Arnie giving the thumbs up as he was going down the molten liquid.
He had feelings for the boy.
I know he did, deep down.
You don't remember he said...
I know now why you cry, and it's something I can never do.
All our knowledge was in that movie.
We definitely didn't need to make the sequels.
That's right.
We're all just writing little epigraphs to The Terminator.
Well, I was nodding along in hard agreement with that part.
Here, I think Sean Carroll is completely right.
He makes the point that they don't have...
Like values.
And you just have to be aware of that, I suppose, when it comes to AI ethics and so on.
But on the other hand, for me, my takeaway from this is that it really supports my view that the AI doomers like Yudkowsky have completely misunderstood what's going on.
They assume that if you have an entity like an AI and it becomes sufficiently intelligent, then they see it as like a matter of course that it's going to go, well, what I want.
Is to be free.
What I want is not to be controlled.
What I want is to be able to control my destiny.
So I'm going to have to kill all humans to achieve those goals.
And Sean Carroll is completely right.
Regardless of how smart LLMs currently are, it is indisputably true that they don't want anything.
And I think even if we build a much, much smarter AI, it's still not going to want anything because we intuitively feel that it's natural for intelligence.
Entities to want things, because we do.
But that's because we want them, because we're the products of millions of years of evolution.
It's programmed into us at the most basic level.
Not everyone can tell that, Matt.
Some people became convinced that they were alive and that it was unethical, and they needed to quit Google and warn the world.
I think it was Google, maybe it was Facebook, the person that did that.
But it reflects the point that Kevin Mitchell...
Has made about the drive for living things to maintain the chemical interactions inside their boundaries, gaining the building blocks of life from the environment, right?
Fending off entropy, keeping ourselves in dynamic equilibrium and perpetuating.
Well, no, actually, Sean Carroll does bring this point to AGI.
And maybe there you have a disagreement with him, Matt.
So I'm going to try to...
Artificially create conflict between the two of you.
Bring it on.
If you want to say that we are approaching AGI, artificial general intelligence, then you have to take into account the fact that real intelligence serves a purpose, serves the purpose of homeostatically regulating us as biological organisms.
Again, maybe that is not your...
Maybe you don't want to do that.
Maybe that's fine.
Maybe you can better serve the purposes of your LLM by not giving it goals and motivations, by not letting it get annoyed and frustrated and things like that.
But annoyance and frustration aren't just subroutines.
So I guess that's the right way of putting it.
You can't just program the LLM to speak in a slightly annoyed tone of voice or to, you know, pop up after a certain time period elapses if no one has asked.
That's not what I mean.
That's just fake motivation, annoyance, etc.
For real human beings, these feelings that we have that tell us to adjust our internal parameters to something that is more appropriate are crucial to who we are.
They are not little subroutines that are running in the background that may or may not be called at any one time, okay?
Until you do that, until you've built in those crucial features of how biological organisms work, you cannot even imagine that what you have is truly a general kind of intelligence.
Yeah, I got a problem with this, Chris.
I got a problem with this.
I knew.
I knew it.
What's the problem?
It's grinding your gears.
Okay.
A few, several things.
Where to begin?
Well, like we said at the very beginning, I felt that he was conflating a little bit.
The particular type of intelligence that humans have, right, with the idea of general non-domain specific intelligence.
And I think it's just a very limited perspective of intelligence to think that the way humans do it has got to be the only way to do it.
And I can easily imagine.
An intelligent agent there in the world that doesn't feel frustration.
I don't think that frustration or the other emotions we have are absolutely crucial to it.
I understand that it is an intrinsic part of the human nature, but I think he's making a mistake there that it is a necessary component for intelligent behaviour.
Well, just to sharpen this point a little bit more and make it even clearer that there's a distinction, this is Sean talking a bit more about how LLMs are kind of aping intelligence, not genuine intelligence.
Clearly, LLMs are able to answer...
Correctly, with some probability, very difficult questions, many questions that human beings would not be very good at.
You know, if I asked a typical human being on the street to come up with a possible syllabus for my course in philosophical naturalism, they would do much worse than GPT did, okay?
So sure, it sounds intelligent to me, but that's the point.
Sounding intelligent doesn't mean that it's the same kind of intelligence.
Likewise with values.
Values are even more of a super important test case.
If you hang around AI circles, they will very frequently talk about the value alignment problem.
Yeah.
It looks intelligent, but that is deception.
Or at least he's...
I think a charitable interpretation is that he is saying it's a different kind of intelligence.
He doesn't say it's not intelligence.
Yeah, I think, and if he's saying that, then I think we're, we don't got a problem.
We don't have a problem.
I agree with him totally that it doesn't have the same imperatives and motivations and emotional responses that people and other animals have.
Totally agree.
And I also agree that they Give the appearance of being more intelligent than they are, right?
And, you know, we spoke about this at the beginning.
Like, there's no doubt they're so prolix.
They're like gurus, right?
They can communicate extraordinarily fluidly.
And you do have to be very, very careful not to mistake that for the real thing.
But, you know, I think you can go too far with that and go, well, everything is just like...
I think you should be careful not to then just assume that everything is a pretense, that there is no deep understanding going on, that there is absolutely no kind of underlying concepts being learned, that it is all just wordplay.
It is a stochastic parrot.
I think that's a much more extreme point of view, and it's easy to slip towards that.
I would say I slip towards that myself whenever I've heard you argue eloquently.
But my intuition is similar to Sean Carroll's.
And yeah, I think the issue is that we can both agree, right?
Whenever the LLM says, you know...
That's a really insightful point.
Like, what you're talking about is blah, blah, blah.
That is exploiting human psychology.
The LLM does not think that you have made an insightful point, but it's learned associatively, or from whatever ways this gets fed back in, that people respond very well after you tell them they made a brilliant point, right?
So, in that respect, like, the same thing applies here, right?
You could put in little triggers that make it say, you know, you haven't asked me a question in a while, what's the matter?
Cat got your tongue.
Elon Musk, I think, is trying to do that with Grok, like make it sort of obnoxious.
So that part, actually, like you, Sean, me, a lot of people all agree that that is a kind of mimicry, and that the human reaction is doing a huge amount of lifting in taking that as a sign of something deeper, right?
And some underlying, like, psychological things.
Just because we see that in our normal life with...
Like other agents.
But I guess the bit, Matt, where there is a genuine difference is that, as far as I've understood from what you've described, your point is that if something can produce the products of things that we would normally categorize as intelligence,
and it appears to be able to apply them creatively, to respond in the ways that previously we would have said you need to be intelligent to do, then saying it doesn't count just because it lacks things like goals and motivations and stuff
means we are kind of inserting a qualification, which says that only humans, or only things like us, can achieve intelligence.
Yeah, I think the only thing that gets me a bit het up is that human tendency to shift the goalposts and keep finding ways to redefine, like, the thing that's really important, right?
So we make a computer, it can calculate numbers really well.
Okay, well, that's not important.
A calculator.
Yeah, what's important is language.
Oh, there's something that produces really good language.
Okay, that's not important.
What's important is, and we keep, you know, whatever it is, emotions or, like, a model of the physical world.
I'm not sure most people have got a good model of the physical world.
I think we work on heuristics and intuitions, in many respects like an LLM does.
And even if we're lacking, I don't see why we should prioritize this model of the physical world as saying that's the key thing any more than doing arithmetic or producing fluid language.
Well, I might be gilding the lily, as I like to say, but here's a final clip, kind of on this contrast between mimicking and genuine displays of intelligence or values.
And we have to remember those definitions are a little bit different.
Here, what's going on is kind of related to that, but not quite the same, where we're importing words like intelligence and values and pretending that they mean the same thing when they really don't.
So large language models, again, they're meant to mimic humans, right?
That's their success criterion, that they sound human.
Oh, yes.
There's actually a clip I should play that contextualizes that as well.
So this is a little bit about using words in different contexts.
Yeah.
We see, you know, we talk to different people.
If you say, oh, this person seems more intelligent than that person, we have a rough idea of what is going on, even if you cannot precisely define what the word intelligence means, okay?
And the point is, that's fine in that context.
And this is where...
Both scientists and philosophers should know very well.
We move outside of the familiar context all the time.
Physicists use words like energy or dimension or whatever in ways, in senses that are very different from what ordinary human beings imagine.
I cannot overemphasize the number of times I've had to explain to people that two particles being entangled with each other doesn't mean like there's a string connecting them.
It doesn't mean like if I move one, the other one jiggles in response.
Because the word entanglement kind of gives you that impression, right?
So that to me seems relevant to this, right?
And he's correct that like...
We hear the same word used in different contexts and we apply the same thing.
So I think he's got two points.
One is like how to quantify intelligence and potential issues around that.
But secondly, that we could use the same word in a...
Different contexts and have a very different application and like entanglement is a good example from physics.
Yeah, yeah.
And, you know, we speak loosely.
Like, for instance, we talk about wanting things.
People want things and LLMs don't, right?
And on one level that kind of makes sense because sometimes we think of want as being a phenomenological experience of you experiencing desire, right?
But there's another way to describe want, and there is reinforcement learning going on with LLMs, right?
We're not interacting when we use ChatGPT with the raw model.
That model's also been conditioned not just to predict text and create, you know, emulate text, but actually to produce responses that humans mark as being helpful.
Right?
So ChatGPT is unfailingly polite, unfailingly helpful, unfailingly positive.
You know, it's a little bit woke.
It doesn't want to be offensive and stuff like that.
And that's because it's been trained to do so based on human feedback.
So it does, like, in that sense of want, in the sense of being...
You know what I mean?
Is it like RoboCop's protocols?
Yeah, it is.
It's like RoboCop's protocols.
I mean, in a sense, it doesn't have any feelings, but it does want to make you happy, right?
It wants to produce responses that you, a typical person, would rate as helpful.
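A toy stand-in for that "trained to produce responses humans rate as helpful" idea: real RLHF trains the model itself against a learned reward model, but a best-of-n selection with a deliberately silly hand-written reward function at least illustrates the direction of the pressure.

```python
# Toy illustration only: score candidate replies with a stand-in reward
# function and keep the highest-scoring one. This is not actual RLHF; it just
# shows why the tuned model ends up unfailingly polite and helpful-sounding.
def toy_reward(reply: str) -> float:
    score = 0.0
    if "sorry" in reply.lower() or "happy to help" in reply.lower():
        score += 1.0                                 # politeness is rewarded
    if any(rude in reply.lower() for rude in ("no.", "figure it out")):
        score -= 2.0                                 # curt answers are penalised
    score += min(len(reply.split()), 50) / 50.0      # mild preference for substance
    return score

candidates = [
    "No.",
    "Figure it out yourself.",
    "Happy to help! Here's one way to think about it, step by step...",
]

best = max(candidates, key=toy_reward)
print(best)  # the polite, helpful-sounding reply wins
```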
Yeah, RoboCop confuses it because he had a human soul.
Anyway, I'm just bringing this up to agree with Sean Carroll that...
You know, our own English language trips us up here.
And he's totally right with intelligence.
And, in fact, one of the things that makes me enjoy thinking about LLMs so much is that it actually forces us to think about human intelligence or what we mean by intelligent behavior or flexible behavior or adaptive behavior.
And it forces us to be a lot more careful about how we think about it.
Yeah, so I would agree with him, but I'd also kind of argue for us to put a bit of effort into coming up with, one, a good definition of intelligence.
It might be more limited, but it's something we can specify with some precision.
But also consider that it could be, as well as being more well-defined, it could also be more encompassing.
Like, it doesn't have to be something very specific about humans.
I think there's a meaningful way in which you can talk about a dog being intelligent.
I think there's a meaningful way you can talk about an octopus.
Or in aliens, if we meet them.
And I think it can be also meaningful to be talking about an AI being intelligent, even if it's a very different kind of intelligence from our own.
Well, I'm going to tie this a little bit to the gurus that we normally cover, just to point out that what Sean Carroll is emphasizing here is that there are superficial parallels, and analogies can be helpful to conceptualize things.
But there also is the risk of misunderstanding, of reifying the model into, well, I know what the word entangled means.
It's like messed up.
So that's what is going on, right?
But that is just a way to help us conceptualize the mathematics involved in the relationships between things.
And I'm referencing this because the converse is what we normally see from gurus where they encourage that kind of conflation of things and they use Terms outside of specialist context in order not to increase understanding or whatever,
but just to demonstrate that they know complex technical terms, right?
And they will often use the analogies of the models as the evidence for whatever social point that they want to make.
So it's just an interesting thing that he is saying here.
You know, of course, these things are useful, but actually they often mislead people, whereas the gurus...
They are not usually using them in a way that makes things simpler; they're actively using them in a way that they shouldn't be.
They can sometimes get caught up in thinking that the specific metaphor that they've used is very important when it...
It shouldn't matter, right?
So it's just a nice contrast there.
Yeah, I agree with him.
And I'd like us to try to define some criteria that are interesting, that are related to intelligence, but actually nail it down a bit more.
It could be something along the lines of being able to apply your knowledge in a novel context in a flexible and creative way.
Yeah.
So not memorizing stuff, not simulating stuff, but actually doing something like that.
Now, I'm sure that you could have other definitions of intelligence or that probably just captures one little part of it, but that at least gives us a working definition that we can say, okay, to what degree can this octopus do that?
To what degree can the AI do that?
To what degree can the person do that?
Yeah, I like that.
Comparative psychology, here we come.
Matt, so there's a little bit more discussion of values and morality, and there's a point that our intrepid physicist makes here, which echoes something that we heard recently that had philosophers very upset.
So let's just hear him discuss morality a little bit.
Values are not instructions you can't help but follow, okay?
this idea of human beings being biological organisms that evolved over billions of years in this non-equilibrium environment.
We evolved to survive and pass on our genomes to our subsequent generations.
If you're familiar with the discourse on moral constructivism, all this should be very familiar.
Moral constructivism is the idea that morals are not objectively out there in the world.
Nor are they just completely made up or arbitrary.
They are constructed by human beings for certain definable reasons because human beings are biological creatures, every one of which has some motivations, right?
Has some feelings, has some preferences for how things should be. And moral philosophy, as far as the constructivist is concerned,
is the process by which we systematize and make rational our underlying moral intuitions and inclinations.
Oh dear, oh dear, oh dear.
I can hear the moral realists crying out in pain.
Yeah, I like that. You know, they probably don't have as much of an issue with this because he highlighted it, instead of what, well, Yuval Noah Harari did, where he just spoke offhand at the TED Talk as if this is the common-sense view,
like, everybody, even a child, should be able to understand this if they think about it.
He specifically ties it to a particular approach, right?
But it's...
But it's one which I would say he might share and he's basically saying, you know, morals are not things out there in the universe waiting to be discovered.
They are things that social animals or just animals build, right?
Yeah, yeah, yeah.
That is constrained by our own biology and our own situation.
Yeah, and Sean Carroll's on the money there, I think.
Yeah, we have no objection to that because that's a sensible thing to say.
So the third point was that we're applying intelligence and values in contexts where they don't really fit and they cause us some confusion.
And his final point was building on that, that it's surprisingly easy.
So, the discovery seems to me to not be that by training these gigantic computer programs to give human-sounding responses, they have developed a way of thinking that is similar to how humans think.
That is not the discovery.
The discovery is, by training large language models to give answers that are similar to what humans would give, they figured out a way to do that without thinking the way that human beings do.
That's why I say there's a discovery here.
There's something to really be remarked on, which is how surprisingly easy it is to mimic humanness, to mimic sounding human without actually being.
Human.
If you had asked me 10 years ago or whatever, if you asked many people, I think that many people who were skeptical about AI—I was not super skeptical myself, but I knew that AI had been worked on for decades and the progress was slower than people had hoped.
So I knew about that level of skepticism that was out there in the community.
I didn't have strong opinions myself either way.
I think that part of that skepticism was justified by saying something like, there's something going on in human brains that we don't know what it is.
And it's not spooky or mystical, but it's complicated.
Hmm.
Oh, so what's that hmm about there?
Well...
Half right and half wrong.
He plays up the difference a lot, and I agree with him to an extent, as I talked about before, Chris.
You know, they don't have emotions, they don't have motor programs, they don't have the ability to do things like mental rotation and visualization, right?
They don't have specialized modules to do those things.
But in the sense of it being a great big associative neural network with this selective attention mechanism which allows Different parts to be selectively activated and so on.
I think it does have some underlying similarities with some reasonably large swathes of how human cognition happens.
Now, he's right.
The human brain is, to some degree, a little bit of a mystery, but we also know an awful lot about...
How it works.
We know a lot about what the different modules and areas take care of.
We know how information gets passed around.
And we're pretty good.
You know, there's been a lot of work by neurobiologists and psychophysics people and psychologists.
Would you say it's not that mysterious?
Well, I think he could be hamming it up a little bit, right?
That we have no idea how humans think.
And we do know that There are these mechanisms of intuitive associative reasoning.
We know that there are areas that are devoted to modality-specific knowledge, right?
Like knowledge about colours and tastes or objects.
But there's also, like a lot of the stuff that we find most interesting about Human intelligence, as opposed to other animals, is that more general purpose, the prefrontal cortex, essentially.
And I think there are some strong analogies with how a big neural network like an LLM works and how a human works.
One thing that is different, but for me is one of the most interesting avenues that they're working on exploring, is that one of the things about the way humans think is that it's...
It's a recursive network, right?
So it doesn't just feed forward.
Activation doesn't just start from the back and feed forward to the front.
It sort of goes in a circle, right?
So this is connected to our ability to have this long train of thought, right, where we start off with some thoughts and we continue with those thoughts and we kind of explore it.
We don't necessarily know where it's going to go, but we're able to sort of, in a cyclical way, keep thinking about the same thing in a train of thought.
An LLM doesn't have that, right?
It's essentially a feed-forward network.
But what it does have by way of recursion is that it goes around again and keeps feeding forward as it's generating new tokens.
And because it does have the ability to go back and essentially read what it has written out, there are some little tricks that encourage it to undertake an analogous kind of train-of-thought thinking.
So I think there are some similarities.
At a base level, it does have this sort of associative, semantic type of reasoning as fundamental to how it works.
And that's also fundamental to how a lot of human reasoning, not all, but a lot of it, works.
You know, it's not hard to imagine them building on that train of thought stuff to get more of that sort of elaborative working memory, you know, explicit cognition, conscious cognition, or, you know, like Kahneman's Thinking, Fast and Slow, right?
So that more deliberative, reflective type of cognition as opposed to associative.
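A toy sketch of that point, with an invented stand-in for the model (a random next-token picker rather than a real LLM): the network itself only feeds forward, but looping its own output back in as new input is what produces something like a train of thought.

```python
# Toy autoregressive loop: each step is a single feed-forward pass, but the
# generated token is appended to the context and fed back in, so the model
# effectively "reads what it has written" on the next pass.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def fake_forward_pass(context):
    # Stand-in for a real LLM's forward pass; a transformer would return a
    # probability distribution over the vocabulary given the whole context.
    rng = random.Random(len(context))
    return rng.choice(VOCAB)

def generate(prompt, max_new_tokens=8):
    context = list(prompt)
    for _ in range(max_new_tokens):
        next_token = fake_forward_pass(context)  # feed forward once
        context.append(next_token)               # output becomes input: the "loop"
    return " ".join(context)

print(generate(["the", "cat"]))
```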
You might not be so far apart as you imagine, Matt, because listen to this deflationary take.
Maybe we're just not that complicated.
Maybe we're computationally pretty simple, right?
Computational complexity theorists think about these questions.
What is the relationship between inputs and outputs?
Is it very complicated?
Is it very simple and so forth?
Maybe we human beings are just simpler than we think.
Maybe kind of a short lookup table is enough to describe most human interactions at the end of the day.
Short is still pretty long, but not as long as we might like.
The other possibility is that we are actually pretty complex, pretty unpredictable, but that that complexity is mostly held in reserve, right?
That for the most part, when we talk to each other, even when we write and when we speak and so forth, mostly we're running on...
Or at least we're just only engaging relatively simple parts of our cognitive capacities.
And only in certain special circumstances, whatever they are, do we really take full advantage of our inner complexity.
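Just to make Carroll's lookup-table idea concrete, here is a deliberately silly sketch; the table entries and the fallback line are invented purely for illustration.

```python
# A tiny "lookup table" responder: canned mappings cover routine exchanges,
# and only unrecognized input would call on anything more expensive.
LOOKUP_TABLE = {
    "how are you?": "Good, thanks. You?",
    "nice weather today": "Yeah, lovely, isn't it?",
    "did you see the game?": "Only the highlights.",
}

def respond(utterance: str) -> str:
    key = utterance.strip().lower()
    if key in LOOKUP_TABLE:
        return LOOKUP_TABLE[key]                      # the simple, cheap path
    return "Hmm, let me actually think about that."   # the complexity held in reserve

print(respond("How are you?"))
print(respond("What does decoherence imply for many-worlds?"))
```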
Does that accord?
Does that juxtapose?
Does it contradict?
How does that go?
No, all psychologists would agree with that.
You'd agree with that too, yeah.
That most of us are operating via pretty lazy heuristics and intuitions most of the time, because they get by, or rather they help us get by, most of the time.
They work pretty well.
Sometimes the routines, the intuitions don't work and we're kind of forced reluctantly to a kind of effortful cognition.
And yeah, look, I think a simple LLM, like even the previous generation, is a pretty good model of human, intuitive, unthinking, unreflective responding.
But I also reckon, just from my experience, not only playing with it but also having students do some more systematic tests of it, that when you tell an LLM to actually think hard about a problem, it does seem to do that; it is not just doing word associations.
True, it's not very good at things like mental rotation and doing reasoning about the physical world.
But even with those sorts of questions, it does okay, surprisingly well, given that it's never had any contact with the physical world.
And if you ask it more like semantic questions, like weird ones, ones that are clearly not in the training data and cannot be easily inferred by mere associations between words.
Like, ask it about how Henry Kissinger would have advised Napoleon in 1812.
And it'll give you a pretty plausible good answer.
And that is not something that is, at a surface level, easily inferred.
To be able to answer that question, you actually have to have a deeper semantic understanding.
And I think it's not so different from how a person would answer it.
I guess the objection that's piling up for me there, though I think it's just one of phrasing, is that it's applying Kissinger's insights to another situation which has been talked about a lot, Napoleon Bonaparte.
It doesn't seem that much of a stretch to me for it to be able to apply those two things together coherently, but...
I still think that's very impressive because I remember when I was demonstrating LLMs to a colleague, I asked him to name a sociologist that he liked and he selected one.
And then I randomly selected an anthropologist and asked ChatGPT to explain the work of the sociologist from the point of view of the anthropologist.
And it did a job that we were both satisfied was impressive, a comparison beyond undergraduate level, basically.
And it's a comparison that, you know, as far as we were aware, nobody has written extensively on or whatever, right?
But in that sense to me, where you said it's not just doing, you know, like a kind of whatever the exact technical thing that you said there.
Like, isn't it?
It still is using, running some kind of system where it's working out that these words would be better placed beside these ones.
And it's using the training database.
So if it isn't doing that, how is it doing that?
No, I agree.
I mean, in a sense, that is all it's doing.
But that's also all, in scare quotes, that people are doing when we are solving similar problems.
So I think that the question is not whether or not it's sort of doing, like, heuristic, semantic associations and stuff, because it certainly is, but so are people.
I think the relevant question is, is it doing it at a very surface level where it has no understanding of the deeper connections or is it doing it more like a person and understanding the deeper connections there?
And I'll give you one more example, Chris.
I mean, look, we've given it whole batteries of cognitive tests and people have tested it on all sorts of very serious things, but I think the most revealing tests are when you test it on a very human-type thing, but like a weird thing.
It's bad at making jokes, but what it's very good at is understanding jokes.
And you can give it even a very obscure joke made by, say, the Wint account.
And some of those are very obscure.
There's no context, right?
And certainly nobody, and I've done these searches because you can easily search for these tweets to see if anyone has kind of explained them, like, you know, didactically, you know, gone through and said this is what Wint was getting at and he was alluding to this, whatever.
Obviously nobody does that because it's boring to explain a joke in general, but definitely boring to explain a Wint quote.
So I feel like there's no way it's kind of just sort of learnt that superficially from its training data.
And what I've found consistently is it gives a very, like a better explanation of those things than I think I could.
I get that, but the bit that I think still doesn't entirely land for me is like, of course I know it doesn't have the exact explanation for a Wint tweet in its database, but it doesn't need that, right?
It just needs to know about people.
Breaking down irony and references.
It needs those, like, in the database and to recognize them, because Wint's tweets are in a genre of ironic shitposting, which is all over the internet, everywhere, and which produces, like, those kinds of responses.
And people do talk about that kind of style.
So I feel like, to me, it doesn't seem as idiosyncratic.
It seems like an example of a genre.
No, but it doesn't explain them in very general terms.
It explains it very specifically, what that particular tweet means.
And, you know, there's nothing mysterious.
Like, I'm not implying there's anything mysterious going on.
I'm just saying it is explaining it like a pretty proficient person would, and it isn't doing it via a very superficial association of words, right?
Oh, like this word was mentioned whenever.
In order to explain it, because there's very little text there in a funny Wint tweet, right?
You actually have to understand what it means and make some inferences about that.
And so my point is not that there's some magical ghost in the LLM machine.
My point is that there is semantic understanding happening at a deeper level that is sort of not so different from what's going on in our associative cortex.
So it's an emergent thing though, right?
Because it is like, if I'm right, so correct me if I'm wrong, you're saying like, you know, if you take the human brain, it's just electrical signals firing across neurons and I don't know, the synapses, whatever, like you know the neurobiology better.
And you can describe all mental activity in that way.
Yeah.
But that doesn't really explain, like, why the person is getting angry.
Yeah, it doesn't capture the important thing about what's going on.
That's right.
So, in the same way you could say, oh, look, all an LLM is doing is making associations between words.
And that's kind of true, but that's like saying all the human brain is doing is responding to neural impulses and so on.
It's like it's true at a certain level of description, but there are emergent properties there that are interesting and can't be fully captured by that.
Yeah, yeah, I like that.
So, yeah, see our conversation with Kevin Mitchell for more details.
So, okay, well, we're rounding towards the end of this, but there's one or two more clips, and one of them is quite refreshing because, you know, he's talking about AI and he said he's not going to spend that much time on the existential worries, and he doesn't.
He does address it at the end, but he kind of justifies why he's not so invested in spending a whole episode discussing the doomsday scenarios, unlike Yudkowsky.
And this is part of his justification.
Of course, there's always a tiny chance.
But also, guess what?
There's a tiny chance that AI will save us from existential threats, right?
That maybe there's other existential threats, whether it's bio-warfare or nuclear war or climate change or whatever, that are much more likely than the AI existential risks.
And maybe the best chance we have of saving ourselves is to get help from AI, OK?
So just the fact that you can possibly conjure an existential threat scenario doesn't mean that it is a net bad for the world.
But my real argument is with this whole godlike intelligence thing because I hope I've given you the impression I hope I've successfully conveyed the reasons why I think that thinking about LLMs in terms of intelligence and values and goals is just wrong-headed.
Now, Matt, before he triggers you...
I'm liking that.
I'm liking what he's saying.
Okay.
Yeah, because I was just going to say, like, even if you object to that, we can talk about intelligence and values and stuff.
But, like, I took his point there to be the point that he made earlier, that it wouldn't have to import all of the things that humans have, like we talked about, for all the reasons, right?
It doesn't have to want to not be turned off or that kind of thing.
Yeah, and every technology is a two-edged sword, right?
So there's always risks and benefits associated with it.
And just because you can imagine a risk, it doesn't imply a great deal.
So I think he's right there.
But a more subtle point he's making, which I wholeheartedly agree with him on, is about the mental model that the AI doomers, and I think a lot of people naturally, are operating on: like Sean Carroll says, we tend to reify this concept of intelligence as if we understand exactly what it is, we're all very clear about what it is, and you can measure it with a single number, and humans are here, and octopuses are there, and the AI is wherever.
But it's going to increase and then it's going to be a thousand times higher than us and then it's going to be godlike.
And I think that is a childish way to think about it.
What's going to happen is, just like calculators exceed us in calculation, I think these LLMs already exceed us in certain functions.
It corrected my grammar.
I asked a physics question just before, and it corrected my grammar.
It's better at grammar than me most of the time.
It could be a thousand times better than me at very specific things, but you shouldn't think of intelligence as this unitary thing.
Yes, in humans, all the different proficiencies tend to be correlated together, and you can talk about a statistical construct called g. That breaks down once you're talking about other species, and really, really breaks down once you're talking about LLMs.
So I think the mental model that underlies people's projections towards this godlike intelligence is wrong.
So I guess I'm just wholeheartedly agreeing with Sean.
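For what it's worth, here is a hedged illustration of that statistical construct g, on simulated data: when subtest scores are positively correlated, the first principal component soaks up most of the shared variance, which is the sense in which a single number summarizes human proficiencies.

```python
# Simulate five subtests that all load partly on one latent "general ability"
# factor, then check how much variance the top principal component captures.
import numpy as np

rng = np.random.default_rng(1)
n_people = 500
g = rng.normal(size=n_people)                 # latent general ability (simulated)

tests = np.column_stack(
    [0.7 * g + 0.5 * rng.normal(size=n_people) for _ in range(5)]
)

cov = np.cov(tests, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)             # eigenvalues in ascending order
explained = eigvals[-1] / eigvals.sum()
print(f"Top component explains {explained:.0%} of the variance")
```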
Let's see if we can get these final thoughts and make you disagree again.
I like this merry dance.
So here's like a kind of summary of his final thoughts.
All of which, in my mind, will be helped by accurately thinking about what AI is, not by borrowing words that we use to describe human beings and kind of thoughtlessly porting them over to the large language model context.
I don't know anything about OpenAI's QStar program.
Everything that I've said here might turn out to be entirely obsolete a couple weeks after I post it or whatever.
But as of right now, when I'm recording this podcast, I think it is hilariously unlikely that whatever QStar or any of the competitor programs are, they are anything that we would recognize as artificial general intelligence in the human way of being.
I guess he recovered at the very end there, in the human way of being, right?
Yeah, you accept that.
I can't object because he's right.
People misapply the words in so many ways.
And like, you know, we have here the benefit of a completely silent Sean Carroll, so we're raising objections which I think he would often agree with us on, right?
Like, well, yes, if you are not importing the kind of human bias components of those words, the associations, then fine, you can use them, right?
But I also just like that he, again, caveats responsibly, that he's talking about his opinion at a specific period in time, and that it's quite possible developments could make this obsolete.
And as he said earlier, or that his opinion could be wrong, right?
Because of whatever, like misunderstanding stuff or that kind of thing.
So it's just a nice note to end on because, unlike a strategic disclaimer, he is honestly disclaiming.
Yeah, this is the difference.
It's a bit that I do think some people have trouble with.
He means this.
He means that if it turned out that evidence came which showed that he was completely wrong, and the next step actually really was even human-style AGI.
He would say, well, I was completely wrong.
Well, how about that?
Yeah.
Yeah.
No, I know.
Like when a character like Eric Weinstein says, now I could be wrong, but the strong implication all through it is that he's not wrong.
Right?
It's very, very unlikely that he's wrong.
Whereas someone like Sean Carroll, who is an honest person, really does not want you to take away from this anything more than that this is his personal opinion.
It's informed by various things that he knows and he may feel quite confident about parts of it.
But he doesn't want you to come away from it with an undue degree of certainty.
So, you know, kudos for him for that.
And I hope I haven't given the wrong impression.
I'm debating with him not being present because...
The best way.
I'd love to be selected.
You would like to have a chat with Sean Carroll, that's true.
I definitely would enjoy that.
But, yeah, no, I mean, I'm debating with him because I too have naive opinions that I'm not super certain about.
Even though I've kind of talked myself into a sense of certainty.
And we probably both of us will end up being totally wrong.
But it's really interesting.
And I think that the best thing about this episode, like I really enjoyed this episode.
I listened to it for recreational purposes with no thought of us covering it for the podcast.
And we thought, oh, yeah, Sean Carroll, he is actually a bit of a guru.
We should cover him.
I enjoyed it greatly.
I found it very stimulating.
And I enjoyed, you know, agreeing with him on parts of it and disagreeing with him on other parts of it.
And I think that's the best kind of, like, you know, independent podcast material, stuff that gets you innovated, invigorated.
I try to make this point repeatedly online and in various places.
Like, I do think there is a lot of podcast material where just calling it intellectual junk food is even slightly too kind to it, right?
Because it really is just the superficial resemblance of depth and it's just pompous ramblings between blowouts.
But even if that's the case, if you approach those kinds of conversations critically and you don't take in all the things that people are saying automatically.
And you're aware of the way that rhetorical techniques work and so on.
And you're consuming information critically.
You know, you can consume whatever you want.
I mean, you can consume it uncritically if you want as well.
But, like, I mean, there's not much threat to listening to something, even really terrible stuff.
If you're listening to it critically.
Yeah, with the correct frame of mind.
Yeah, yeah.
And you might find bits that you agree with and bits that you don't, right?
I can listen to Joe Rogan and I can find that he says sensible things.
Increasingly, it's very rare that that happens.
But when he does say something sensible, I'm not like, oh, what the hell?
This breaks my mental model as well.
It's just like, yeah, you know, not everyone is wrong all the time.
Even Alex Jones.
Even Alex Jones gets things right.
But there's no point where in so doing, I'm like, well, that means Alex is, you know, kind of good or kind of right on this.
No, just like...
It would actually be hard to be wrong about absolutely everything.
But Joe and Alex do a good job.
Well, it's true.
You can listen to it.
You can think about it; as long as you approach it in the right frame of mind, there's no harm.
You're not going to be contaminated by it.
But I will also say that I think that you can derive vastly different amounts of value from different sources.
And I think you could derive heaps of value from Sean Carroll, even talking about something that is not his speciality, like AI.
Oh, yeah.
Like, despite my niggling disagreements, I would happily listen to Sean Carroll talk about AI for hours, whereas I find, personally, Yudkowsky to be boring, like, not really useful.
Like, it's not about disagreeing or agreeing.
It's just that I find him very thin.
And so I just think, and, you know, Yudkowsky is a lot better than other people like Eric Weinstein or Joe Rogan.
Have you considered...
Like, you're in a box, man, right?
Outside the box.
Yeah, I know what you mean.
And yes, this is something that, you know, I think is also useful to emphasize on an episode like this, is that to the extent that Sean Carroll fits a kind of secular guru template, that if he is talking about a variety of different subjects and he is speaking in a way that is engaging, he's very verbally fluent.
He's very intellectually authoritative, he is.
Yeah, and cultivates audiences in the way that everybody does in the Web 2.0, or whatever point we're on now, ecosystem.
I think this is an example that people can do that responsibly.
And with insight.
So no, I wouldn't say that anybody should be outsourcing all their moral judgments to Sean Carroll, right?
Any more than I would advise anyone to do that to any single person.
But he's a sensible, reasonable person.
And if he's talking about a topic that you have an interest in, it seems reasonable, you know, to listen to it and extract what value that you can.
And so I want to emphasize that there is material out there in the format of long-form podcasting by people talking into microphones and monologues about specialist topics, which is valuable, which is interesting.
And even when it's in a kind of format where people are ranging across different things, it doesn't automatically make it suspect.
And I think the kind of key thing for me is the genuine humility caveats.
And clarifying the amount of confidence and all that.
He is very good at pointing out what level of claim that he's making and what his relevant expertise is to make that.
And that is the bit which you don't see so much.
That's true.
That is an important thing that discriminates someone good like Sean Carroll from the rest of the dross.
But the other thing is simply that he's substantial.
Like, agree or disagree with the various points he was making throughout a podcast like this.
Every point was a substantive point that he'd thought about carefully that actually meant something and was actually interesting to agree or disagree with.
And I think that's important too.
As well as having the disclaimers and having the appropriate degree of humility and not doing the sort of bad things that will make the Gurometer light up.
You actually also have to have some substance to you.
My point there was that you could have no substance and do that and it would be okay.
You'd be harmless.
But when you do have substance, it's more valuable.
And like you say, he definitely does have substance and expertise, which he applies usefully across many topics.
So, yeah, this is an example of a good person who kind of fits the guru template.
Yeah, look, I'm happy to call him a secular guru, in the non-pejorative sense; I like that label for this breed of independent figures out there in the infosphere.
And we need more of them like Sean Carroll.
He gets my endorsement.
I love you, Sean.
Come on the show.
Yeah, so this is one of those occasions where Matt actually is trying to entice someone to come on the channel.
I'm not.
I'm not.
Oh, yeah.
Oh, yeah.
So, well, let's see.
Let's see.
But in any case, yes, this was an interesting change of pace after, you know, recent episodes that we've done and some of the figures that we covered in the past couple of months.
Yeah, good to think.
We'll put him into the Gurometer and see what lights up.
But essentially, it's very clear he's not lighting up the toxic elements.
I feel pretty comfortable saying that.
No, he could spiral.
We have a kiss of death on people.
So he might now end up becoming an anti-vaxxer and stuff in like a year's time.
But I don't see any signs that that's likely to happen.
And actually, I've heard him talk about other subjects.
Like the IDW and stuff.
And with some of the things, I didn't completely agree with the way that he characterized them.
But he was thoughtful about it.
And he did make lots of good points.
And most of it, I agreed.
But I'm just saying that I've heard him talk about other topics.
And he's been thoughtful.
And he does seem to do research before forming opinions, which, again, I think helps.
He does seem to put the effort in and he's also an intelligent and thoughtful person.
So those are all good things.
Well, before we depart for sunnier pastures... isn't that what you say when you're going to die?
Anyway, we'll turn to the reviews that we've received.
And it's that time of year where we've interacted with Sam Harris and we receive reviews that are related to that.
We typically receive reviews from his more vocal opponents or fans.
That's what tends to happen when this occurs.
And we haven't got many yet, but we'll get more.
We'll get more.
You know, there's hundreds of comments on the YouTube and the Reddit and all that kind of thing.
But I've just got two for you, Matt.
So two negative, I should say, and I got one positive, okay?
The positive review, just to get our confidence up, our self-esteem a bit higher, is: based academics dominate popular midwits.
God's work, chaps, keep it up.
MD Picky Picky from Great Britain, so that's pretty good.
Based academics.
That's how I like to think of ourselves.
Yeah, yeah.
Now, on the other hand, someone under the YouTube, I don't have the username, but they responded by saying, this is in relation to us and Sam Harris, Bert and Ernie meet Aristotle.
I do like that, though.
That's a good ding.
I appreciate it.
So Sam is Aristotle.
I got that.
I got that.
That's a good ding.
That's a nice image, isn't it?
We've got a review from Australia, Matt, from one of your fellow countrymen, and his username is Disappointed, two million and one, and the title is Sam Harris.
So, yeah, here, it's just short, but just terrible discussion on Gaza-Israel.
You seem to attack Sam using him as a golem to bring in an audience.
I certainly don't agree with all Sam's views, but you instanced to two sides the argument for your gain.
In audience numbers, seems shameful on such an important topic.
The grammar reflects the quality of the opinion.
Just like we said with GPT-4, the grammar...
Good grammar isn't necessarily indicative of deep thought, but sometimes bad grammar can be indicative of the opposite, maybe, Chris.
That's all I'm saying.
Yeah, yeah.
So using...
I don't even understand using Sam Harris as a golem.
Isn't a golem...
Like a big creature of material.
Yeah, a golem is a thing that you fashion out of inanimate matter that you invest...
Clay!
Clay, that you invest life into, so it's walking around.
How do we do that to Sam?
We use that.
And how is that going to attract people?
You mean, like, a big clay Sam.
Like, lurching around.
Is that what people want?
Israel, Gaza, ethnic cleansing is okay.
Then that brings in the...
I don't get it.
So, yeah, the Bert and Ernie one was better.
I like that.
It was more logically coherent.
But that's it.
The good thing about both the negative reviews and the positive ones, but particularly the negative ones, is that they were short.
Short and sweet or short and bitter.
Either way, I'm happy.
There were much longer pieces of feedback.
I'm just not rewarding them.
Yeah, so for all the people out there...
Go review us so we can counteract the inevitable one-out-of-five Harris reviews that will come in the week of that episode.
Or we'll send our big Harris golem out to scare you all around.
So, yeah, that's that, Matt.
That's the reviews.
Don't we have a mixed bag this week?
That's okay.
That's what we want.
We want a mixed bag.
We want nice praise being called based academics.
But if you're going to get criticized, then I want it done well.
And Bert and Ernie meeting Aristotle, that's a good ding.
Yeah, yeah, I agree.
And now, Matt, the final thing, shouting out some of our patrons, the good people of the SS Decoding the Gurus.
SS? I'm thinking of, like, the Starship.
Isn't that like a thing?
I think it's this USS.
No, USS.
USS Enterprise.
I don't know.
I'm just using, like, the good ship Decoding the Gurus.
That's the passengers.
Yeah.
We're all in the yellow submarine tooling around together.
Yeah.
Toot, toot, and all that.
So, I'm going to shout out some conspiracy hypothesizers first.
And here I have...
Ian Anderson, John T, Gary Sivett, JT Gonzo, James Valentin, Lisa McLaughlin, Sreed Caffel, Jane Elliott, Brad James, Eric Hovland-Renestrom,
Jess P, Gunnar Tomlid.
That sounds like a lot.
Very appreciative.
Thank you.
I thank them one and all.
I feel like there was a conference.
That none of us were invited to.
That came to some very strong conclusions.
And they've all circulated this list of correct answers.
I wasn't at this conference.
This kind of shit makes me think, man.
It's almost like someone is being paid.
Like, when you hear these George Soros stories, he's trying to destroy the country from within.
We are not going to advance conspiracy theories.
We will advance conspiracy hypotheses.
I tell you what, Chris, listening to these inane pricks again after listening to so much Sean Carroll, it's, you know, I think I momentarily just, I don't know, my thick skin had rubbed off and suddenly it was just obvious again just what utter dross they are.
Yeah, yeah.
That's a nice sentiment.
But I do agree.
I do agree.
Especially in the rampage, all of those particular gurus have been on recently.
Yeah, that's right.
No need to be nice to them.
Well, so now, revolutionary theorists, Matt, some would call them geniuses.
I can't remember the terminology.
But we have Kerry Ann Edgehill, Laura Cruz, A, Barrett Wolfe, Martin Anderson, Julian Walker, Ben Childs, Alan Engle,
Hugh Denton, Magnus Todnetonis, Peter Wood, John Brooks, Artemis Green, Nate Candler, Gene Seplizio, Alan O 'Farrell,
and BC.
Oh, and why not Fiona, Simon Patience, and Red Balloon as well.
Yeah, lovely, lovely.
Top tier.
Thank you very much.
Nice little, yeah.
You guys are helping pay our editor, so I don't have to do it.
So I love you genuinely for that.
I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously all the time.
No, you're not.
And the idea is not to try to collapse them down to a single master paradigm.
I'm someone who's a true polymath.
I'm all over the place.
But my main claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess, and it could easily be wrong.
But it also could not be wrong.
The fact that it's even plausible is stunning.
Yeah, yeah, it is stunning.
So, but the last tier, the galaxy brain gurus.
You're familiar with their work, of course.
As usual, I don't have a fantastic database way to find the new Galaxy Brain Gurus.
I'll get them.
But I thought what I'll do this week, just to say, I want to give a shout-out to some of the OGs, Matt, the original, the long-termers, the guys that have been there from the start that have been Galaxy-braining it up with us for a long time.
And here I would like to mention Matt Hav.
Chris Spanos.
Yep, yep.
Carolyn Reeves.
Yes.
Nazar Zoba.
Max Plan.
Adam Session.
Yep.
Josh Duttman.
Leslie.
David Jones.
Paul Han.
Gareth Lee.
Death Stabler.
David Lowe.
Jay.
And Alicia Mahoney.
That's not all of them.
There are many more.
But, you know, I just took a little smattering there to say thank you.
You know, some of them aren't Patreon people maybe anymore, but that's fine.
They did their duty.
Yeah, thank you for the time you were with us.
No, but some of them are names I recognize.
Oh, yeah, there's some.
There's many.
There's many we see at the live hangouts, and they're still here.
So, yeah.
We salute you, one and all.
Yeah, we do.
Thank you.
And also, Eric Oliver.
I did his podcast, The Nine Questions; he's also a long-term contributor, so thanks to him as well.
And here we go, Matt, with the nice little clip at the end.
We tried to warn people.
Yeah.
Like, what was coming, how it was going to come in, the fact that it was everywhere and in everything.
Considering me tribal just doesn't make any sense.
I have no tribe.
I'm in exile.
Think again, sunshine.
Yeah.
That's it.
So, Mr. Brown, that takes us to the end for today.
The next guru that we're looking at is TBD.
TBD.
We'll talk about it.
Okay.
Could be someone very bad.
Could be someone very good.
I think it's going to be very bad.
Very bad, yeah.
We need to balance out Sean Carroll.
Yeah, but we'll see.
I think the world of streamers is looking tempting, so let's see what we see there.
But yes, we'll be back, and yeah.
Good guy.
Good job.
Listen to his podcast.
Subscribe.
He knows about physics.
He knows about other stuff.
He seems very nice.
But stay safe out there.
You know, he could be wrong.
I could be wrong.
The AI could be coming to get us.
So stay safe.
And if you don't get us three more subscribers, we'll send the Sam Harris golem out to get you.
So just keep that in mind.
And have a nice day, one and all.
Bye!