All Episodes
April 5, 2024 - Decoding the Gurus
02:17:07
Yuval Noah Harari: Eat Bugs and Live Forever

Yuval Noah Harari is a historian, a writer, and a popular 'public intellectual'. He rose to fame with Sapiens (2014), his popular science book that sought to outline a 'History of Humankind', and followed this up with a more future-focused sequel, Homo Deus: A Brief History of Tomorrow (2016). More recently, he's been converting his insights into a format targeted at younger people with Unstoppable Us: How Humans Took Over the World (2022). In general, Harari is a go-to public intellectual for people looking for big ideas, thoughts on global events, and how we might avoid catastrophe. He has been a consistent figure on the interview and public lecture circuit and, with his secular message, seems an ideal candidate for Gurometeratical analysis.

Harari also has some alter egos. He is a high-ranking villain in the globalist pantheon for InfoWars-style conspiracy theorists, with plans that involve us all eating bugs and uploading our consciousness to the Matrix. Alternatively, for (some) historians and philosophers, he is a shallow pretender, peddling inaccurate summaries of complex histories and tricky philosophical insights. For others, he is a neoliberal avatar offering apologetics for exploitative capitalist and multinational bodies.

So, who is right? Is he a bug-obsessed villain plotting to steal our precious human souls or a mild-mannered academic promoting the values of meditation, historical research, and moderation? Join Matt and Chris in this episode to find out and learn other important things, such as what vampires should spend their time doing, whether money is 'real', and how to respond respectfully to critical feedback.

Links
The Diary of a CEO: Yuval Noah Harari: An Urgent Warning They Hope You Ignore. More War Is Coming!
Our previous episode on Yuval and the Angry Philosophers
Current Affairs: The Dangerous Populist Science of Yuval Noah Harari (a little overly dramatic)


Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer and we try to understand what they're talking about.
I'm Matt Brown, my co-host is Chris Kavanagh, the Ernie to my Bert, I couldn't do without him, our most valued player in Guru's Pod Proprietary Limited.
G'day Chris.
Wow, that's a lot of synchronicity going on there because just yesterday I was watching Sesame Street with my youngest son.
Not on my own.
There's a series about the Cookie Monster and a kind of smaller pink monster called Gonger.
And they run a foodie truck.
A food truck.
They're cooking various things.
So he enjoys that.
So it was topical.
That sounds good.
Yeah.
So Ernie and Bert are still around?
They're still around.
I mean, they don't feature that much in that, but they're around.
You know, they haven't got out of the Sesame Street gig.
They haven't gone independent.
I wonder if they're podcasting in Sesame Street.
Like, they've started their own podcast.
That would be pretty on point.
Yeah, yeah, yeah.
I'd listen.
Yeah, I think a lot of people would listen.
I haven't watched kids' shows for a while.
I don't do that anymore.
The kids are watching grown-up stuff now.
You're at a different life stage.
You recommended Bluey to me, and that was pretty good.
That's good.
Yeah, yeah.
That's very popular.
A lot of people say it's really good.
I watched a few episodes just because people raved about it.
I was like, eh, it's a kid's show.
It's alright, I guess.
Well, there's an episode where the parent dogs are watching the kids from the balcony and they get drunk.
I mean, that's not the main point of it, but I thought that kind of thing is the reason the parents like it.
You'd see an Australian...
Dog parents getting hammered.
And that's the subtext for the parents watching.
Oh, that's pretty cool.
Yeah.
Actually, I'm glad you mentioned that because I had the impression that it was just a bit too saccharine, like a bit too everything is nice and positive messages and all that stuff.
And, you know, I kind of like the kids' shows from the 1970s and 80s where they were just insane.
Freaky, like The Goodies and Monkey Magic and stuff.
Not really many positive messages.
You want the edgy Bluey.
Yeah, a rude and snippy Bluey.
That's what I want.
Oh, okay.
Right, yeah.
I see.
It's not like that, but it has, you know, a little bit of it, just a little bit at times, just a little sprinkling for the parents.
So there's that.
But Matt, look.
You look well, you know, you're fresh-faced.
We're here.
There's another decoding to go.
You're geared up, right?
No.
I slept for like two hours last night, but I've had three coffees, so I think it all balances out.
But no, I'm fine.
I'm good to go.
I'm keen to do.
I'm going to do Yuval Noah Harari slowly, Chris.
That's a nice image.
And I love it when you clap because it makes it so easy to edit.
It's in there afterwards.
But yeah, I've actually, from being such a powerhouse of muscle, I have injured my shoulder from doing too many pull-ups.
I went too hard, too fast.
I reached, you know, levels that mere mortals couldn't comprehend.
And I kind of twinged my shoulder.
So I'm dealing with that.
We're all middle-aged people dealing with the slow destruction of our bodies.
Everyone listening is in the same bucket.
Of course we know this.
The audience demographics don't lie.
We know.
We know.
Yeah.
No, no.
I'm going to be good.
I have pains too.
My shoulder still hurts from when I fell over ice skating in Japan.
I think it's going to hurt for the rest of my life.
And that has nothing to do with me.
I didn't knock you over.
I was not involved in the ice skating, to my knowledge.
Yeah, I can't take the blame for that.
Nope.
I'm ready to go.
I'm going to be on my best behavior, Chris.
I'm going to be focused.
They're going to be good takes.
I'm not going to clap.
I'm not going to breathe.
Yeah, don't breathe.
He tells me not to breathe.
You can do that thing.
The Chocolate Rain guy, you know, 'I move away from the mic to breathe in.'
That guy, that's what you should do.
But actually, Matt, I'm sorry.
We're approaching 20 seconds left for the banter quotient of this episode, given that we now have a pure show dedicated to talking about whatever we want.
So we have to move on.
The people have spoken.
This is the format.
And now we are going to turn to look at the Israeli academic Yuval Noah Harari.
Streamers.
America deserved 9/11, dude.
Fuck it, I'm saying it.
Academics.
Can they make a comment about canceling culture?
Streamers.
Yeah, please explain this to me so I can tell you how fucking stupid you are.
Academics.
And when I'm talking about that anagogic in and out of the imaginal augmentation of our ontological depth perception, that's what I mean by imaginal faithfulness.
Enlightening stuff.
You'll provide some interesting lessons for us today.
Decoding the gurus, streamers, and academic season.
This is going to be really interesting.
Yuval Noah Harari, historian, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, popular speaker, frequent appearance-maker on TED,
this kind of thing.
Oh, and enemy to...
Folks like Alex Jones and the reactionary right-wing conspiracy folks.
Yes, this is Yuval Noah Harari.
And one just side point, a tiny side point.
I have made a mistake.
I recently discovered that I know what political reactionary means, right?
Like the, you know, person hearkening back to the glory days, trying to take things back.
But I also use that same word, reactionary, to mean, like, emotionally responsive and knee-jerk.
What I mean is reactive when I say that.
But the thing is, most of the times when you're talking about a reactionary, they are also reactive.
So people don't correct you.
Very often because the two things coincide, but they're different.
Yeah, I'll correct you.
If I could just...
I'll be glad to.
Your pronunciation as well.
Yeah.
I'm going to help you.
And also millenarian, Matt.
Millenarian, not millennial.
Just that was feedback for you, right?
Millenarian preacher, not a millennial preacher.
Okay, well, I know that.
Again, I know, man.
We know these things, all right?
We know just...
Insert the correct word.
Jesus Christ, it's just one letter.
I misspeak out of, I don't know, just being old and senile, not from not knowing what the words mean.
That's it.
Well, that was Correction Corner, which is not part of Banter Town, so you can't complain.
We are now ready to idea jack into Yuval Noah Harari.
And the content that we are looking at this week is his interview on The Diary of a CEO with Steven Bartlett.
This is from just two months ago.
Yuval Noah Harari, an urgent warning they hope you ignore.
More war is coming.
I think some of that is to do with them trying to tap into the YouTube algorithm because this conversation is not so dramatic.
It's very mild.
Yeah.
I think that's definitely tapping into the algorithm.
No, Yuval Noah Harari.
He's not speaking to those points very much.
Chris, let's talk about what we knew about Yuval Noah Harari before we listened to this because I didn't know a great deal.
I started reading a couple of his books.
I bought Sapiens and I got a few chapters in and I think I got a little bit bored.
But my vague impression of him was, you know, fine, like a kind of light, popular.
History, science, big picture type author.
In my mind, he lives in the same sort of space as people like Jared Diamond or Malcolm Gladwell, maybe.
Yeah.
Was that your vibe?
Exactly that.
Exactly that.
As is often the case with those kinds of authors, I was also aware of the wave of kind of criticism which came, which said he's oversimplifying things.
Historians have issues with some of the ways that he represents, you know, eras of history and things that I knew when I listened to him, like from anthropology, his stuff about the history of war or whatnot.
I also find that he oversimplified and, you know, sometimes spoke with undue confidence.
So I believe that is all correct.
I also knew of him through his second job as a...
Villain of the Alex Jones right-wing conspiratorial ecosystem, where he's basically an agent of Klaus Schwab.
And because he's talking...
About, you know, human augmentation and AI, and they see it as he wants to usher in the brave new world.
So that's the other way I've come across with him.
And I know that leftists have an issue with him because, you know, he's essentially a pro-UN, WEF, technocratic kind of guy.
So he's not calling for revolution.
He's not a radical hip guy.
That's all of the ways that I've encountered him.
So, yeah.
I've seen some of his TED Talks and, you know, we listened to the thing that got all the philosophers in a tizzy about ideas being fictions or whatever it was.
So, yeah.
Yeah.
And we're going to be hearing more about that.
Well, what a good pivot, Matt, because, oh, and yeah, the one other thing is I will say I also bought Sapiens and listened to a bit of the audiobook and I enjoyed it.
I also gave up, I think, or just lost interest after a while, but I did enjoy the kind of sweeping overview.
Of, you know, taking early human evolution as part of the human tale.
That's not always the case in history-focused books.
So I appreciated that despite the limitations of the chosen form.
Yeah, I think we have to be like a little bit...
Well, make some allowance for people that are writing those popular books.
Like, yes, obviously it's good not to misrepresent things and it's good to be accurate, and the world and history and biology and science, everything is much more complicated than it's usually made out to be in a popular book by Gladwell or Steven Pinker or Jared Diamond or Noah Harari, for that matter.
Or Rutger Bregman.
Or Rutger Bregman.
But I think there are legitimate points to criticise them on.
On the other hand, I've also seen a certain kind of academic specialist type really going over the top.
They've described the Middle Ages as being like this, but they don't recognise that actually things were completely different in France and England.
And it's just, okay, all right, calm down.
Yeah, there are people that get accused of being somewhat jealous of the attention that he's given.
And I wouldn't say that's entirely inaccurate in some cases.
But in any case, that's not for us to dwell on.
That's just for us to point at and hint towards as we move into...
The content.
And actually, at the start of the interview, they kind of talk about Yuval's grand mission and it touches on the thing that he got in controversy for.
It's to clarify and to focus the public conversation, the global conversation, to help people focus on the most important challenges that are facing humankind, and also to bring at least a little bit of clarity to the collective and to the individual mind.
One of my main messages in all the books is that our minds are like factories that constantly produce stories and fictions that then come between us and the world.
And we often spend our lives interacting with fictions that we or that other people created, completely losing touch with reality.
My job, and I think the job of historians more generally, is to show us a way out.
Some interesting thoughts.
I don't think that is the job of historians.
Just one note.
I don't think the goal of historians is to show us the way out of the narratives that people construct, or, as he refers to them, fictions.
He kind of presents it as that's the obvious job of historians.
I'm like, eh?
Like, aren't historians primarily about documenting history and what's happened?
Yeah, yeah.
Like, on one hand, it's a nicely framed goal.
You know, he's trying to bring clarity and help people see things more clearly and understand what things are, just ideas or stories or narratives that we've superimposed on the world versus the more concrete facts of existence.
But, yeah, I mean, as we'll talk about, I'm not...
Quite so sure you can draw such a strong dividing line between these are the real material facts of history or whatever and the stuff that's just stories.
But we'll get to that.
Yeah, we will get to that because I think there is an issue here with the way that he uses fictions.
And in one sense, it's unobjectionable because talking about the importance of symbolic representations and, you know...
Political systems and ethnic identities and whatnot are things which humans have constructed and which have had a big impact.
And we should consider the empirical shakiness that various things that we take for granted rest upon.
On the other hand, the symbolic social realities that we exist in are very real in the sense that the people...
Build castles and they marry people because they're associated with certain kinship groups and all.
So with the notion that it's all kind of not true, he's kind of using fiction in two ways.
But we'll see as it gets on.
So the overall agenda is fine.
And we're going to spend a little bit more on these concepts about what he means by fictions and ideas.
So here's another clip talking about the power of fictions.
Much of what we take to be real is fictions.
And the reason that fictions are so central in human history is because we control the planet rather than the chimpanzees or the elephants or any of the other animals, not because of some kind of individual genius that each of us has,
but because we can cooperate much better than any other animal.
We can cooperate in much larger numbers and also much more flexibly.
And the reason we can do that is because we can create and believe in fictional stories.
Because every large-scale human cooperation, whether a religion or nations or corporations, are based on mythologies, on fictions.
Again, I'm not just talking about gods.
This is the easy example.
Money is also a fiction that we created.
Corporations are a fiction.
They exist only in our minds.
Even lawyers will tell you that corporations are legal fictions.
And this is, on the one hand, such a source of immense power.
But on the other hand, again, the danger is that we completely lose touch with reality and we are manipulated.
By all these fictions, by all these stories.
Again, stories are not bad.
They are tools.
As long as we use them to cooperate and to help each other, that's wonderful.
Some very broad thoughts there, Chris.
I guess there's a few points there.
I mean, there's a sense in which this is all, I think, reasonable and even a little bit interesting.
Because I do agree with him that, look, we can call these things fictions if we want to.
But to just take one example, the sort of nationalism.
And xenophobia that is very prevalent in a place like Russia at the moment.
That is a narrative that people tell themselves at different places and times, that we have a manifest destiny, that all of this belongs to us.
We are the preeminent people and we deserve and should be subjugating other people.
Like he says, those are ideas, those are narratives, those are stories people tell themselves.
And I'm not one of these people that subscribe to a purely materialist or a purely realist view of historical events.
I think the ideas matter and the stories that people tell themselves matter.
There's a reason why Ukraine is not in danger of being invaded by contemporary Germany at the moment, but is being invaded by Russia and has got a lot to do with the cultural stories that those people in those respective countries tell themselves.
So, I think that is all fine and true and even...
A little bit interesting because it is debatable and it's an interesting debate about the degree to which ideas and narratives play a pivotal role versus more material factors.
But it's a very broad definition.
Of fictions.
And on one hand, he's right about that they are an important characteristic of the human race that distinguishes us from other animals.
Yeah, language, transmissible culture, as you know, Chris.
I mean, but I think he overstates that a little bit because he implies that's the only thing that sort of is a difference between us and other animals.
But actually, you can't teach a chimp to repair a car.
You know, there are limitations of other animals that are purely intellectual and you can't give them the culture.
So I'm on board to a large degree with the idea that the preeminence of the human species has got a lot to do in terms of cooperative effort and incrementally increasing culture, but he just overstates it a little bit.
I've got other bones to pick, but I'll let you have a chance, Chris.
Well, one thing that I would note, slightly petty of me perhaps, but the philosophers are wrong.
This clip makes it clear.
He was not making a special point about human rights.
He was talking about money and corporations and nations and human rights.
He puts them all in the same category of fictions, right?
So he didn't want...
He is not saying they are not important, which is the way that some people interpreted the little clip of his that was going around and doing the rounds.
So I think that criticism is not so good, but...
The point that you raised about him being overly broad and perhaps conflating different definitions of fiction, I think, comes up here.
Because there's the sense of fiction in terms of something symbolic, right?
Which is, you know, a kind of interpretive thing which is not related to some inherently factual, material thing, whatever, right?
And then there is fictions meaning like stories.
That are non-existent.
The difference here, I would say, is like a corporation and a dragon differ in various important ways, right?
And one never exists in the material world, except, you know, people build whatever, you know, statues of dragons.
But corporations do exist in that there are buildings, there are people, there are concepts, there are things that trade on networks and so on.
And so, you know, calling them both...
Fictions, it seems to be playing a bit of a semantic game.
But in any case, the point you made about wars, I think, is a good one.
And he did give an illustration of this, which highlights that point quite well.
I'm now just back home in Israel.
There is a terrible war being waged.
And most wars in history, and also now, they are about stories.
They're about fictions.
People think that humans fight.
Over the same things that wolves or chimpanzees fight about, that we fight about territory, that we fight about food.
It sometimes happens, but most wars in history were not really about territory or food.
There is enough land, for instance, between the Jordan River and the Mediterranean to build houses and schools and hospitals for everybody, and there is certainly enough food.
There's no shortage of food.
But people have different mythologies, different stories in their minds, and they can't find a common story they can agree about.
And this is at the root of most human conflicts.
And being able to tell the difference between what is a fiction in our own mind and what is the reality, this is a crucial skill.
And we are not getting...
Better at finding this difference as time goes on.
That last point seems sort of contradictory to me because, you know, I think he makes a very good point about a lot of human conflict is about differing ideas about nationhood and about who owns specific parts of land and, you know, who are the people associated with particular areas and so on.
That's all true.
But that last point where, you know, he's like, it's all about the fiction.
It's not about the reality.
The reality is that there are people dispossessed from territories, or there actually are, you know, restrictions and conflict and so on.
Well, I read it like this.
I sort of classify this as true but trite, right?
So, he's saying that there isn't some sort of material necessity for these two groups to be in vicious conflict.
It's happening because of the ideas that are in their heads, right?
Yeah, but everything...
I know, that's what I'm saying.
That's the trite bit, Chris, right?
It's the same with his point about corporations and so on.
Yes, they are a concept that lives in your head.
But, you know, you could talk about, I don't know, your family house, your family home, rather, right?
And you go, that's just a concept that lives in your head.
This is just, you know, some walls and a roof and a place where a group of people statistically are more likely to spend the night.
Yeah.
Like, yeah, okay, fine.
But it might be interesting to certain types of philosophers.
I don't know, but not to me.
And, you know, I mean, if you put that aside, I mean, I think there's a more generous way to read it, which is that social constructs that we have, which then actually become formalized and aren't just like a concept like a dragon, are very important and actually have an instrumental role.
Sure, your passport.
Like, for instance, when you are charged with a crime in Australia...
Then the mechanism is that the Crown presents the case against you.
Of course, if it's a civil case, it could be another person who's presenting a case against you.
So we have this construct of the Crown, which is like the state acting on behalf of everyone, sort of acting as a surrogate for a real person.
And so that is somewhat interesting, I think.
And same with corporations.
Corporations first came about as an instrument to impersonate a person.
Essentially, something that can own things, someone who can employ other people and pay tax and so on like a person.
I think they wanted to send ships around the world to go grab some spices and come back to merry old England.
Probably there's only a 50/50 chance of ever seeing the ship again.
Massively risky, but potentially high payoff.
Even a very rich person would not be willing to take on that risk.
So a lot of people can put all their funds together, get a ship.
Now, you've got to employ a captain.
And the ship has to be owned by some entity.
And so those roles of employing the ship's captain, owning the ship, and then distributing the money to the shareholders, these are all functions.
But it is still, like he says, a purely constructed thing, but nevertheless has a very real and instrumental role, especially in the modern world.
I do think that people encountering analysis of the social construct or how states came into being, right?
You know, Eric Hobsbawm and Ranger on invented traditions, all of these kinds of concepts where you're critically evaluating many of the concepts that you have been steeped in but maybe taken for granted.
I do think that is a valuable thing.
Maybe this isn't fair.
In a lot of cases, it happens whenever people are teenagers or they go to university and they come across these kind of ideas.
But it can come across whenever you read a philosopher or see a YouTube discussion about some point, right?
Read Naomi Klein's No Logo or whatever the case might be.
Or see a documentary like The Corporation about their kind of psychopathic nature.
But I think that Harari, in highlighting these things, might be the first person that people encounter who is introducing this idea, right?
And if that is the case, I think it's perfectly reasonable that people would find it kind of mind-opening.
But if you have come across those ideas many times before, like you say, it feels like a relatively trite observation.
So it might be...
In our case, Matt, the ivory tower effect, right?
Where, yeah, of course, people know that money is a, you know, like a personal concept.
Yeah, money doesn't have intrinsic value.
It's just a token, a means of whatever.
Like, yeah, it might be blindingly obvious, might feel blindingly obvious to some people.
But, you know, like you say, more charitably, you know.
It wasn't obvious when I first found out that.
The first time someone in a dorm said to me, "What money really is?"
Yeah, that's right.
And for people out there encountering these things for the first time, it isn't obvious.
So I think the generous interpretation is that...
He's encouraging us or reminding people that these things which we do tend to treat as very real and just like a permanent fixture of the world, like money and corporations and countries, like the John Lennon song goes, you know, they are a choice.
And, you know, you shouldn't think that they're necessarily inevitable.
You could organize things in other ways because they are simply ideas.
Well, I'll play one last clip of Harari blowing our minds about the concept of money to illustrate this.
Here's the clip I said.
Money is not real, man.
But also if you think about the world's financial system.
So money has no value except in the stories that we tell and believe each other.
If you think about gold coins or paper banknotes or cryptocurrencies like Bitcoin, they have no value in themselves.
You cannot eat them or drink them or do anything useful with them.
But you have people telling you very compelling stories about the value of these things.
And if enough people believe the story, then it works.
They're also protected by language.
Like, my cryptocurrency is protected by a bunch of words.
Yeah.
They're created by words and they function with words and symbols.
When you communicate with your banker, it's with words.
You know, when you communicate with your wife, it's with words.
When you talk to your dog, it's using words.
They don't adhere to them all, but, you know, it's just, yeah, I'm being a bit mean, but...
I feel like he's making his point more interesting by...
His tone of voice.
His tone of voice partly, but also, as we just discussed, he has an extremely expansive definition of this idea of stories we tell ourselves, fictions, right?
It's very, very broad.
Yeah.
So it kind of sounds more interesting when you say that, hey, money is just stories we tell ourselves, man.
But, you know, you could just as easily say, you know, money is a form of information, and it's a form of tracking credits and debits from one person to another, or sort of collectively.
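To make that framing concrete, here's a minimal sketch, ours rather than anything from the episode, of money as pure bookkeeping: a shared record of credits and debits, with the names and amounts entirely made up.

```python
# A toy model of the "money as information" framing: a collectively
# maintained record of credits and debits. Nothing physical moves; the
# "money" is just the agreed-upon bookkeeping. (Illustrative only.)
from collections import defaultdict

ledger = defaultdict(int)  # balances exist only as entries in a shared record

def transfer(payer: str, payee: str, amount: int) -> None:
    """Move 'value' by editing the record: debit one entry, credit another."""
    ledger[payer] -= amount
    ledger[payee] += amount

transfer("Alice", "Bob", 50)
transfer("Bob", "Carol", 20)
print(dict(ledger))  # {'Alice': -50, 'Bob': 30, 'Carol': 20}
```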
I feel like he's saying, again, something that is pretty trite, but it is making it sound a bit more interesting in the way that he's choosing to express it.
Right.
And one of the other topics that he talks about is AI.
And he actually initially links this to the point about the importance of words and understanding stories and points out a limitation with AI.
And also with new technologies, which I write about a lot, like artificial intelligence, the fantasy that AI will answer our questions, will find the truth for us, will tell us the difference between fiction and reality,
this is just another fiction.
I mean, AI can do many things better than humans, but for reasons that we can discuss, I don't think that it will necessarily be better than humans at finding the truth or uncovering reality.
It won't necessarily be better than humans in finding the truth or uncovering reality.
There's an issue there because if he's talking about which particular national story is the most compelling, maybe it can't.
It's a subjective thing.
But if it is, can the AI distinguish?
Which of these scans shows cancer and which does not more accurately than the person?
The answer is probably yes, it can.
And over time, you know, it will more accurately reflect the reality.
But on the other hand, Matt, you know, cancer, isn't that just a word that we put on a biological thing which is happening?
Isn't it just a story that we tell about the immune system?
So anyway.
Well, you know, at the beginning he defined this.
Hard dichotomy between materially real things and ideas.
And it says that we're all somewhat clouded and mistaking the ideas for the material reality.
And an AI won't be able to do any better.
But I don't even know what the point is there.
I think I can distinguish between, without getting into word games, I could point to things like a table, which is physically there.
It's associated with a very particular set of physical things and a corporation, which, you know, anyway.
Maybe you're the savior, Matt.
You're what we need.
It doesn't seem that hard, basically.
Sorry.
It's very hard for AI, Matt.
It's very hard.
It does discuss the AI revolution.
And when people are talking about this, it's kind of interesting to see where they fall on the Yudkowsky spectrum, right?
Like, are they doomers or are they accelerationists?
That's the thing.
So where does he fall?
And this should be clear.
And there is no chance of just banning AI or stopping all development in AI.
I tend to speak a lot about the dangers simply because you have enough people out there, all the entrepreneurs and all the investors talking about the positive potential.
So it's kind of my job to talk about the negative potential, the dangers.
But there is a lot of positive potential.
And humans are incredibly capable in terms of adapting to new situations.
I don't think it's impossible for human society to adapt.
To the new AI reality.
The only thing is it takes time.
And apparently we don't have that time.
He's actually a kind of positive doomer in a way, but his issue is that things are progressing so rapidly. We could adjust, but it's going too fast, right?
So he is calling for things to slow down.
Yeah, I think in a few other points too, he points to technological things which will disrupt society in ways we can...
Barely comprehend.
And yeah, while he's somewhat optimistic, he talks about AI in the same terms.
Yes, so he compares it to the printing press, and here's that bit.
And when I hear these kinds of comparisons as a historian, I'm very worried about two things.
First of all, they underestimate the magnitude of the AI revolution.
AI is nothing like print.
It's nothing like...
The Industrial Revolution of the 19th century.
It's far, far bigger.
There is a fundamental difference between AI and the printing press or the steam engine or the radio or any previous technology we invented.
The difference is, it's the first technology in history that can make decisions by itself and that can create new ideas by itself.
A printing press or a radio set could not write new music.
Or news speeches and could not decide what to print and what to broadcast.
This was always the job of humans.
This is why the printing press and the radio set in the end empowered humanity.
That you now have more power to disseminate your ideas.
AI is different.
It can potentially take power away from us.
It can decide. It is already deciding by itself what to broadcast on social media, its algorithms deciding what to promote.
And increasingly it also creates much of the content by itself.
Point of order.
He is conflating AI with, like, all kinds of...
Technology, algorithms.
Yeah, technology and algorithms.
And, you know, there's been recommendation engines for Netflix and so on around for a long time before AI came along.
And our attention has been guided by that.
AI being involved in that loop isn't really going to fundamentally change anything.
So that's an interesting point, Matt, because you have some expertise there.
So I think that conflation looms large there.
But to my non-expert mind, is that not, in the grander scheme, part of the evolutionary trajectory of the development of AI?
Initially, you have recommendation engines and things, ranking, popularity of stuff.
And then over time, you put more systems into it so it more cleverly organizes what things to show people and so on.
But in your case, it's obvious that LLMs are doing something significantly...
No, no.
I mean, you're basically right.
Like, you could frame it as a difference of degree.
Like, those original recommendation algorithms were also done via, like, a factor analysis type thing: putting all of the data of what everyone's seen and, you know, what maybe the people have enjoyed, and then collapsing it down into a latent subspace and then reconstructing it to get a prediction for you.
So that isn't like a fundamentally different thing from what's going on in a large language model, for instance.
So, you know, you're kind of right, I guess.
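To make that "factor analysis type thing" concrete, here's a minimal sketch of our own, not anything from the episode: classic collaborative filtering, where a made-up user-item ratings matrix is compressed into a low-rank latent subspace via truncated SVD and then reconstructed to predict unseen items.

```python
# Minimal collaborative-filtering sketch: compress a ratings matrix into a
# latent subspace, then reconstruct it to predict unrated items.
# (Illustrative data; real systems mask missing entries rather than
# treating 0 as an observed rating.)
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

k = 2  # number of latent factors to keep
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
reconstruction = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The reconstruction fills in the zeros with predicted affinities;
# recommend each user's highest-scoring unseen item.
for user in range(ratings.shape[0]):
    unseen = np.where(ratings[user] == 0)[0]
    if unseen.size:
        best = unseen[np.argmax(reconstruction[user, unseen])]
        print(f"user {user}: recommend item {best}")
```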
So, I mean, putting that aside, I guess the thing is we wouldn't call those things intelligent or like human-like.
Intelligent, right?
They're a clever statistical pattern recognition thing.
Yeah, I did have the same thought about whether 'algorithm' is what people are referring to with AI.
Because usually they treat them separately.
But it is true that in a lot of discourse they kind of get mixed together.
Yeah, like an algorithm is extremely broad.
You don't need a computer to do an algorithm.
Your algorithm could be: make the toast, get the butter out of the fridge, butter the toast. Follow those steps, if-then, that kind of thing.
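As a toy illustration of that point (our example, not the hosts'): an algorithm is just explicit steps plus conditionals, whether or not a computer runs them.

```python
# The toast "algorithm" written out: a fixed sequence of steps with an
# if-then branch. The function and inputs are invented for illustration.
def make_buttered_toast(bread_slices: int, butter_in_fridge: bool) -> str:
    if bread_slices < 1:
        return "no toast today"      # check preconditions first
    toast = "toasted bread"          # step 1: make the toast
    if butter_in_fridge:             # step 2: get the butter out, if there is any
        toast = "buttered " + toast  # step 3: butter the toast
    return toast

print(make_buttered_toast(2, True))  # -> "buttered toasted bread"
```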
But it probably is recommendation algorithms that people mean when they refer to that, right?
Like the social media recommendation.
Oh, when they talk about the algorithm on social media.
Yeah, yeah.
Yeah, yeah.
They're referring to the recommendation engines, of which you can have simple ones, you can have complicated ones, you can do it in any number of ways.
But yeah, I mean, what do you think, Chris?
I mean, he's just giving his opinion here, right?
So on one hand, it's like, fine, whatever.
And he's a historian.
Yeah, he's a historian.
And I think you can make a very credible case that it could be bigger.
I'm not certain it would be, but you could make the case that it's going to be bigger than the printing press, bigger than the Industrial Revolution in terms of social impact.
Yes, I definitely think, you know, we can't tell, because we're just in, like, the second or third year of the LLM revolution.
We could guess.
We could guess, like...
No worries.
Yeah, we could.
It is certainly the case, right, that you would expect there's going to be a lot more technological development in the next 50 years than there was in the previous 400 years, right?
The exponential increase in technology is apparent, right?
Like, we didn't have the internet 30 years ago, and now everyone has the internet in their pocket in high-speed form, like a lot of people.
And the internet has been transformative.
In a lot of ways.
And AI, you know, seems it can combine with the internet.
So it would make sense.
But one thing he said was, like, AI has the power to remove power from humanity and kind of carry on under its own agency, right?
Like pursuing its own goals, and it could remove power.
And yes, I can see a future where that's possible.
But he says the previous ones had the ability to empower people, to disseminate their ideas more widely, to, you know, do things more easily.
But AI also has that too.
So it's one potential future in which the AI converts us all into paperclips.
And another one is just people are using AI in the way they use internet now for good and bad purposes.
Yeah, yeah.
Like you, I didn't buy his contrast there, which is that these are all empowering technological revolutions, and this is going to be a disempowering one.
I mean, my experience in using AI so far is very much an empowering one, right?
It empowers me to do a lot of tedious tasks more quickly, and it allows me to focus more on stuff that I want to focus on.
So that's clearly empowering, right?
And I don't think that's fundamentally different from the lawnmower that I've got in the shed, which empowers me to cut the grass faster, right?
Another boring job that I don't want to do.
Well, Matt, but does the lawnmower have its own thoughts about where you should be cutting the grass and whatnot?
This is the issue.
And actually, pushing him closer to the Doomer spectrum, you have him talking about, what if we get it wrong?
Because he's highlighting the cases that during the Industrial Revolution, there were various political ideologies and ideas about empire that came up, right, that people tied together.
You could say failed experiments, right, that led to millions of people dying, and the atomic bomb and so on.
And he argues that with AI...
So these are just a few examples of the failed experiments.
You know, you try to adapt to something completely new, you very often experiment and some of your experiments fail.
And if we now have to go in the 21st century through the same process, okay, we now have not radio and trains, we now have AI and bioengineering, and we again need to experiment, perhaps with new empires, perhaps with new totalitarian regimes, in order to discover how to build a benign AI society.
Then we are doomed as a species.
We will not be able to survive another round of imperialist wars and totalitarian regimes.
So anybody who thinks, hey, we've passed through the Industrial Revolution with all the prophecies of doom, in the end, we got it right?
No.
As a historian, I would say that I would give humanity a C- on how we adapted to the Industrial Revolution.
If we get a C- again in the 21st century, that's the end of us.
Again, Chris, there's a theme here, I think, in my comments, which is I read him here as making a very broad, somewhat true, but also somewhat banal point, which is that...
When technological revolutions occur, going right back to things like, okay, we can grow food by planting seeds.
Now we can build a city and so on.
There's going to be unintended consequences.
Like you're going to have a lot more disease, for instance, because a whole bunch of large numbers of people are living in the same area.
And it's true, I think, broadly what he said, which is that the Industrial Revolution made it possible for political systems like fascism or communism.
They weren't really an option at a national kind of scale before.
And, you know, it's also true that technology generally, like more modern electronic technologies, like China is setting up what seems to be like a surveillance state in some respects.
And you don't necessarily need AI.
You don't need to single out AI as the key ingredient there.
You just need like...
Closed-circuit TV cameras and bracelets to monitor people.
So technology is dangerous.
It's a dual-edged sword.
I don't disagree.
But isn't it a kind of a banal point again?
Don't most people know this, that technology can have good and bad consequences?
Yeah, you would imagine they do.
And I think his statement that definitely if we have totalitarian regimes and fascistic governments or whatever, that'll be the end of humanity.
Will it?
Like, we have totalitarian regimes now all over the world, and some of them even have access to nuclear weapons, and we're still here.
And I'm not saying that should make us sanguine about the dangers that that poses, but I don't know that we're...
Completely at the point that, you know, we're doomed in the next, like, 20 years if we don't get it right.
Well, that's right.
Like, you demonstrably don't need hyperintelligent AI to run a fascist state.
It's been done before, it's happening currently, and we could potentially have...
You know, more states could trend into fascism without any AI whatsoever.
So it's very easy to imagine that.
So the way he's cast it is that, you know, the Industrial Revolution came along.
It led to these harmful sort of social systems.
But then globally we sort of have learnt to adjust to the- We adapted.
We adapted to the Industrial Revolution.
So we're not going to do fascism or communism anymore.
I don't think that's true, is it?
I mean, I could well see.
More countries slipping into authoritarianism without AI.
Yeah, well, so there's that.
I think one of the things that Yuval does and why he gets...
I mean, if you think about specific skills...
Then this is the first time in history when we have no idea how the job market or how society would look like in 20 years, so we don't know what specific skills people will need.
If you think back in history, so it was never possible to predict the future, but at least people knew what kind of skills will be needed in a couple of decades.
If you live, I don't know, in England in 1023, a thousand years ago, you don't know what will happen in 30 years.
Maybe the Normans will invade or the Vikings or the Scots or whoever.
Maybe there'll be an earthquake.
Maybe there'll be a new pandemic.
Anything can happen.
You can't predict.
But you still have a very good idea of how the economy would look like and how human society would look like in the 1050s or the 1060s.
So, we have no idea about jobs in 20 years.
This is the first time in human history that, you know, who knows?
In 20 years, will there be lawyers?
Will there be doctors?
Scientists?
Who knows?
I'll take that bet.
I'll take that bet for the next 50 to 100 years as well.
Chris, I think you're being a bit uncharitable here.
I think that's probably true, that the jobs are becoming incredibly more specialized and specific.
And it's a bit of a truism, but people do become a bit obsolete.
Things are changing quickly.
Was it your grandfather or your dad that...
Learned the specific accounting method.
Yeah, yeah.
I'll tell people that little story.
So my grandfather was a bank manager for the National Bank in Australia, and the thing that got him his start, got him climbing up the little corporate tree to rise to the lofty heights of a bank manager, was that he was really reliable at calculating.
He could open up the notebook and go through the ledger and he could add up all the totals and get the right answer very reliably.
So that's a skill that went obsolete during his career.
So things do change.
But Matt, did that get rid of bankers and accountants when that skill became obsolete?
We now no longer have any accountants or bankers doing any job.
Okay, I got two words for you, Chris.
Yeah.
Web designer.
Yeah, web designer still exists.
Far fewer of them than there used to be.
Like, it used to be like a hot job.
Lots of people being trained for it.
It was a system where it took a long time and a lot of technical skills to sort of write all the HTML to put a code thing together.
Now, the vast majority of websites, you can get them off the shelf.
And very few people need someone who actually has dedicated training in website design.
Right.
But here's the criticism I have there.
So those skills weren't transferable.
Like, if you learned about web design and that kind of thing, it didn't make you better equipped for learning how to code in a different language or how to do other things.
It does because the kids now, for example, have more exposure to computers and online environments, and they're much better at navigating them than older people.
I'm not saying they're directly transferable, but in the same way, I feel this is saying that people will learn these highly specific skills and then they'll be completely irrelevant.
And, you know, what are we going to do then?
But I am certain that humans are still going to be doing things, walking around in like 50 years and that kind of stuff.
Because when you look at the way that we have imagined the future in our fiction, we've always overestimated certain things and underestimated others, right?
The way that we presented technology in, like, sci-fi in the 1950s is that everybody will be taking a small pill and that's all they'll want to eat, right?
Just the pill that has all the nutrients for the day.
And they'll be flying around in a jet car.
But the computers are big, giant tape decks and all that kind of thing.
But none of that is true, right?
Like people can get access to, you know, nutrients or whatever, but people still want to eat meals.
They still want to do that.
And in the year 2024, Matt, people, even very online people, are talking about the importance of working out and touching grass and doing all these things.
So Yuval could be right, or I could be wrong, that all this is going to be fundamentally transformed and we're all Skynet's fucking slaves in 20 years' time.
But I think it's more likely that humans will be existing in society sort of the same.
AIs will have transformed various industries and made things easier.
But there will be new tasks and there will be new...
Things that people are doing and people will still be motivated by the stuff that they always were, status and resources and that kind of thing.
Where am I wrong, Matt?
Where am I wrong?
Where's the lie?
You're tilting at windmills, Chris.
So, if he's making the pretty basic point that technological change is increasing, it's happening a lot faster today than 100 years ago and was happening a lot faster 100 years ago than it was in the Middle Ages.
and the jobs that people work in are changing more quickly too.
I had a family member who was a web designer.
This is why I mentioned it.
His skills were basically obsolete within a year of graduating.
Now, that was different in the Middle Ages.
If you were a farrier or something, you're apprenticed to your dad, who taught you how to reshoe horses or something; you're not going to have the same concerns about whether or not they're going to need to be reshoeing horses in 1280 as compared to 1260.
Right?
That's the point.
Yeah, but, you know, eventually, being a web designer is going to be like a basket weaver in the contemporary age where, you know, people come to the museum to see you use the ancient technology to design a web page or whatever.
Like, people don't imagine that because it's too near history.
But, you know, the people get interested in...
Things, they come to be seen as like artisan crafts and that kind of thing.
And I'm not saying web designing necessarily will, but I just think this vision is wrong.
If you now look 30 years to the future, nobody has any idea what kind of skills will be needed.
If you think, for instance, okay, this is the age of AI, computers, I will teach my kids how to code computers.
Maybe in 30 years, humans no longer code anything because AI is so much better than us at writing code.
So what should we focus on?
I would say the only thing we can be certain about is that 30 years from now, the world will be extremely volatile.
It will keep changing at an ever more rapid pace.
So, Matt, let me make a point here.
I use AI in statistical analysis, right?
The AI is better than me.
It has a deeper knowledge of a whole range of techniques.
It can write code better.
It knows every coding language, right?
And in this respect, it's already surpassed me and my abilities, you know, hundreds fold, right?
But going through grad school, learning statistical methods, learning what are sensible statistical questions to ask, how best to organize data, and how to arrange that in a way that lets you do meaningful tests: that was extremely helpful for being able to work with AI in order to get meaningful analysis of data.
So in Harari's world, it sounds like he's imagining that the fact that I spent time developing the skills to run statistical analysis on programs that will probably be obsolete in 20 years' time means that that skill was, you know, kind of a waste of time.
But it wasn't. Without doing all that kind of stuff, I'd just be putting questions into a black box, and it would make it much harder for me to interpret outputs, right?
Yeah, sure.
But he's making the point that, like, it could well be the case for the skills involved in technical coding, right?
So I'm not talking about the broad conceptual stuff about...
Yeah, C++ language or whatever.
Yeah, yeah.
Like, it won't necessarily, like, eliminate...
The need for humans to be involved in the process of generating code.
But what it can do is make obsolete a lot of the very, very time-consuming stuff that involves actually doing the job.
Writing code.
That's right.
I mean, it's analogous to say farmers, right?
In the Industrial Revolution, they had tractors and things like that.
We still have farmers today, Chris.
Exactly.
That's my point.
Yeah, but your point is wrong because we have far, far, we have like about 1% of the number of farmers that we had a few hundred years ago because so many of the functions associated with the farming have been automated by machinery.
Yeah.
So I take this point to mean that we're going to see that kind of thing happening in the...
I agree with that.
There'll be a transformation in the nature of jobs as technology improves.
I completely agree.
I feel that it's probably a little bit just in the way that it's stated.
It feels to me like it's over-hyperbolically stated, and that there is a version of it, which is what you're saying, which is completely unobjectionable.
And maybe that is what he means, but when he says what we teach the kids now will be completely irrelevant in 30 years, I'm like, will it?
No, I get it.
I'm not so sure.
I understand what you're reacting against.
And, you know, it's what we mentioned before, which is he has a knack for saying things that are relatively true and hard to disagree with, but saying them in just a slightly flamboyant, sweeping way.
And it could just be the choice of words, you know, which I sort of mentally chose to ignore.
But you didn't.
Well, probably the people that are listening, you know, the way that'll work.
A whole bunch of people will be like, Matt is right.
And a whole bunch of people will feel, no, Chris has got it.
And, you know, it's really a subjective thing just about the way that you interpret what he's saying.
But to speak a little bit more to this tendency towards exaggeration, hyperbole, or perhaps, you know, accurate concerns, depending on your point of view, listen to this.
And what happens to the financial system if increasingly our financial stories are told by AI?
And what happens to the financial system and even to the political system if AI eventually creates new financial devices that humans cannot understand?
Already today, much of the activity on the world markets is being done by algorithms.
At such a speed and with such complexity that most people don't understand what's happening there.
If you had to guess, what is the percentage of people in the world today that really understand the financial system?
What would be your kind of...
Less than 1%.
Less than 1%.
Okay, let's be kind of conservative about it.
1%, let's say.
Okay.
Fast forward 10 or 20 years, AI creates such complicated financial devices.
That there is not a single human being on Earth that understands finance anymore.
What are the implications for politics?
What are the implications, Matt?
Imagine that world where there's no individual human who understands all of the financial systems and the algorithms.
What would that world be like?
What indeed?
Well, look, obviously, we're already there.
In many respects.
Yes.
Yes, we are.
It's already the current world.
Definitely, I do not understand the global finance system.
And I suspect a great many economists don't fully understand it either, given the degree of disagreement there is.
Even Elon Musk, I think, doesn't understand all the financial instruments.
Even Elon?
Yeah, him.
Or, you know, even the quants that are designing the algorithms for high-speed trading.
I don't think they understand everything that is going on in every single algorithm, right?
Or the decision trees that are going on to make the individual trades.
No, we're already there.
We're already in the world where there are various algorithms and computer programs that are operating, which are important to the economy, to science, that we do not individually understand in their entirety.
Yeah, I mean, it's confusing what he actually means there because presumably he doesn't mean just the algorithms that support the fast automatic trading, which can be very intelligent or could be less intelligent, but whatever, right?
Those clearly aren't really an issue.
They've been happening for a while now.
Yes, I mean, there are issues with them, but it's nothing special, really, that AI adds there.
But when you talk about financial instruments, you usually refer to things like mortgage-backed securities or treasury notes.
These are financial instruments.
And he's saying that AI is going to invent maybe new financial instruments that we don't understand.
And that just seems like...
Pure speculation to me.
And like, we're not obligated to use these instruments that we don't understand.
I think what he wants to say is: but, you know, imagine that they were giving, you know, 400% returns to some economy.
And if you didn't use them, you'd be left behind.
So you'd use them even though you don't understand them.
And, you know, he talks about the mortgage-backed loans or whatever, you know, the things that led to the financial crisis, being a potential illustration of the kind of thing that he's talking about, because they were complex, right?
And lots of people.
But in that case, I think that's a good illustration that humans do it.
Yeah, we confuse ourselves quite easily without the help of AI.
And ultimately, it is a human choice whether or not to go, oh, I'm going to buy these securities and I don't fully understand what the...
Backing is for them, whatever, but I'll just do it because everyone else is doing it.
Yeah, people have been doing that since the tulip craze, you know?
So I don't...
It just seems speculative to me to just say, oh, well, maybe AI will do this.
Yeah, but then there's also instances where, like, I'm pretty sure computerized trading has resulted in, like, weird things happening.
Oh, yeah.
Like, wasn't there a case where there's been market crashes because of...
Algorithms getting into a feedback loop and people being like, what the F is going on?
So that happened, but that's already happened in the current world that we live in.
I mean, another example is that in the UK, you may not be aware of this, Matt, there's a current thing called the Post Office Horizon scandal, where there was faulty computer software which led to false accusations of theft and fraud.
And all these people ended up accused and it turned out like it was a fault with the software.
Oh, yeah.
Very similar thing happened in Australia.
The Department of Social Services put in an automated algorithm that kind of checked to see whether people receiving payments were entitled to them or not.
It was all automatic.
It would send off the letters automatically and then cut people off automatically.
And it made a bunch of wrong decisions, right?
So it was just an algorithm.
Right?
A computer algorithm.
It wasn't AI.
It was just automation.
And so he's making a banal point, in my view, that when you automate things, there can be unintended consequences.
Yeah.
Just like when you have a new technology, it might be a two-edged sword, Chris.
I just don't find any of it very interesting.
I don't find it very objectionable.
I just don't find any of it interesting.
They're reasonable points to me, but he gives the example of what happens with, you know, algorithms deciding bank loans, for example.
Let me just play it because it's quite short.
And increasingly, it's algorithms making all these decisions for us or about us.
Is that possible?
It's already happening.
Increasingly, you know, you apply to a bank to get a loan.
In many places, it's no longer a human banker who is making this decision whether to give you a loan, whether to give you a mortgage.
It's an algorithm analyzing billions of bits of data about you and about millions of other customers or previous loans, determining whether you're creditworthy or not.
And if you ask the bank, if they refuse to give you a loan, and you ask the bank, why didn't you give me a loan?
And the bank says, we don't know.
The computer said no.
And we just believe our computer, our algorithm.
I mean, he is saying there that that's already happening, but what was the situation with bank loans before that was better?
Like, humans looking at it? Because there were lots of issues with corruption, or people, you know, deciding things in favor of people that they like versus ones they don't, or making mistakes, right?
Whether they're looking over documents or whether they'd had a good day, that kind of thing.
So yes, it is true that automated processes can make errors and that these give people a layer of deniability, because they can just point to the machine, like the Post Office Horizon scandal.
But the human element was also liable to corruption or error or so on.
So, like, is it not just that all systems have some potential for error or bias and so on?
So we should make ways to try and incorporate pushback, right?
Compliance and so on.
A human element, right?
Where you're able to raise complaints.
But why is that suddenly such a big...
Because isn't that the case with life insurance for decades?
That they've been calculating your risk of dying and whatnot?
That's right.
There's not much of a human element.
Even without a computer, there were these tables and lookup things, etc.
Yeah, God, I'm just bored talking about it because we're having to explain in detail why the point he is making is uncompelling and anodyne.
Okay, last one, because I think it's kind of funny and a little bit sci-fi, so listen to this.
So going back to the financial example, so imagine that it's 4 o'clock in the morning, there is a phone call to the Prime Minister from the finance algorithm, telling the Prime Minister that we are facing a financial meltdown and that we have to do something within the next, I don't know, 30 minutes to prevent a national or global financial meltdown.
And there are like three options and the algorithm recommends option A, and there is just not enough time to explain to the Prime Minister how the algorithm reached the conclusion.
And even what is the meaning of these different options?
How does that differ if a human Minister of Finance calls the Prime Minister in the morning and says, you know, look, we need to do something now.
The stock market is crashing.
There's three options.
This is the one I want to go for.
But it's technical.
Like, I don't have time to walk you through it.
You've got to make the call.
And also, the algorithm calling? Wouldn't it be the AI?
I just find, you know, you're right, but I just find these what-if type scenarios, like hypothetical situations, in this hypothetical world where the AI can talk and is independently analyzing data.
And in this hypothetical world, it's basically taken the role of the finance minister.
And the prime minister is getting the advice directly from this robot.
And the robot's saying, quick, you have to do all these things in 30 minutes.
Like, oh, you know, it's just...
Did it pop up on, like, I talk, you know, AI algorithm?
It's like saying, what if, you know, what if, you know, what was that movie where there was an AI and there was a nuclear war?
WarGames.
WarGames, yeah, yeah.
Yeah, WarGames.
So, yeah, Chris, what if, what if we put the AI in charge of all the nuclear weapons, right, and we just said, you figure out what to do and we'll just do what you say?
That'd be bad, wouldn't it?
Could be.
Yeah, it could be.
Well, you're going to really like this next section, Martin Marquez.
If the what-if speculative stuff was kind of annoying you, oh, you're going to enjoy this.
This is his kind of futurist component, and he's talking about, you know, where humans are heading.
And if you play that forward 100 years, maybe 200 years, you don't believe that, you believe we'll be the last of our species, right?
I think we are very near the kind of end of our species.
It doesn't necessarily mean that we'll be destroyed in some huge nuclear war or something like that.
It could very well mean that we'll just change ourselves.
Using bioengineering and using AI and brain-computer interfaces, we will change ourselves to such an extent that we'll become something completely different.
Something far more different from present-day Homo sapiens than we today are different from chimpanzees or from Neanderthals.
I mean, basically, you know, you have a very deep connection still with all the other animals because we are completely organic.
We are organic entities.
Our psychology, our social habits, they are the product of organic evolution, more specifically mammalian evolution, over tens of millions of years.
So we share so much of our psychology and of our kind of social habits with chimpanzees and with other mammals.
Looking 100 years or 200 years to the future, maybe we are no longer organic or not fully organic.
You could have a world dominated by cyborgs, which are entities combining organic with inorganic parts, for instance, with brain-computer interfaces.
You could have completely non-organic entities.
You like the Culture.
Yeah, yeah.
We could have a mind meld between a human and an octopus and live under the sea, Chris.
Many things can happen.
It's just that combination of just quite trite truisms, like we share a lot in common biologically with other species like chimpanzees.
Yes, yes we do.
And then just rank speculation.
And I just find it really boring.
And, you know, the Gurometer has this Cassandra complex thing.
And the reason it's on there is this thing of putting the wind up people a bit about what could happen, what's currently happening.
And, you know, if you can say, look, I can glimpse this a little bit better than other people, then, you know, that's the hook which makes things more interesting to people.
You know, Yuval Noah Harari is a very innocuous version of it.
But, you know, he is just speculating.
And it's only interesting, I think, because of that vulnerability that people have about uncertainty about the future.
Well, that's interesting.
But, you know, I'm kind of surprised because you like speculative science fiction, right?
Where they play with a lot of these kind of ideas about...
Uploading your consciousness, cybernetic enhancements.
Yeah, that's right.
I think that's why I'm so unsympathetic to this because there's this long history of really, really good science fiction that is speculative, but it does map out what might happen, what could happen, whatever.
And sometimes it's been incredibly prescient.
Like, it obviously gets it wrong a huge amount as well.
But a lot of the, like, for instance, Blindsight by Peter Watts was, I think, pretty prescient.
He wrote that before the AI revolution.
But in that, he described a thing, an alien basically, that was communicating like a person and seemed to be intelligent, but they figured out after a lot of trouble that it wasn't actually conscious or anything.
It was just a machine that...
So, you know, that's pretty prescient.
And the cyberpunk stuff wasn't too far off the mark in some respects.
And that was written back in 1982 or something like that.
Now, so I see this kind of thing as like really lazy, boring science fiction because it's just, you know, I think that's why I don't like it.
Oh, that's interesting.
Well, there is a part where they talk about the potential.
I actually like this bit, Matt.
Maybe...
I should play it because it was a thought experiment that I find interesting.
I wonder if you find it interesting.
I'm going to try to stop being grumpy.
I'll give you this example and let's see if you find this interesting or if you find it like, you know, bad science fiction.
So it's not that humanity is completely destroyed, it's just transformed.
Into something else.
And just to give an example of what we are talking about, organic beings like us need to be in one place at any one time.
We are now here in this room.
That's it.
If you kind of disconnect our hands or our feet from our body, we die.
Or at least we lose control of these.
I mean, and this is true of all organic entities, of plants, of animals.
Now, with cyborgs or with inorganic entities, this is no longer true.
They could be spread over time and space.
I mean, if you find a way, and people are working on finding ways to directly connect brains with computers or brains with bionic parts, there is no essential reason that all the parts of the entity need to be in the same room at the same time.
What did you like about that, Chris?
Well, I just like, you know, highlighting that if you're a disembodied consciousness, why do you need to exist in one particular point or location?
You could be spread infinitely all over.
But the way our minds work as organic things is that when we imagine, even in our science fiction, very often when we try to imagine that, we like to personify a robot, right?
Like, people play about with the robot copying its body many times, but generally there's like a boss robot, right?
With the personality and that kind of thing.
So yeah, just that limitation of our perspective. Doesn't it make you think?
So let me, let me get this straight.
If we develop a mind-computer interface and we can download ourselves into the Matrix.
The Matrix.
The Matrix.
Matrix?
Matrix.
Yeah, the Matrix could be branded after you if you invented it, so let's go with the Matrix.
And then if we could then inhabit this digital world inhabiting avatars, like cyborgs wandering around, and our consciousness could be distributed over different places and abstractions.
Yeah.
And if that made any kind of sense to our uploaded brains, right, and didn't seem absolutely mad, then that would be kind of weird, wouldn't it?
Then, yeah.
Yeah, that's it.
You got it.
You got it.
That's right.
It's good.
And see, money, Matt, when you think about it, money, it's just pieces of paper.
It's just in your mind, man.
We're just meatbags.
I think the fact that I only had a couple of hours sleep last night is putting a tinge on my takes here.
I'm finding very little patience with Noah.
No, that's good, Matt.
But the next take is going to get you and Yuval both hoisted by your own petards, actually, in being too conservative.
So let me just see what I've done by lumping you two together.
Once you can connect directly brains to computers...
First of all, I'm not sure if it's possible.
I mean, people like Elon Musk in Neuralink, they tell us it's possible.
I'm still waiting for the evidence.
I don't think it's impossible, but I think it's much more difficult than people assume, partly because we are very far from understanding the brain, and we are even further away from understanding the mind.
We assume that the brain somehow produces the mind, but this is just an assumption.
We still don't have a working model, a working theory for how it happens.
But if it happens, if it is possible to directly connect brains and computers and integrate them into these kinds of cyborgs, nobody has any idea what happens next, what the world would look like.
There are some issues there, Matt.
One thing is, I think you would disagree with him that we don't have any working theory about how the brain produces the mind.
Well, more generously, we have heaps of theories and some of them I think are pretty good.
But it is a vastly complicated thing.
We can't model it like...
You know, I broadly agree with him, right?
Is it mysterious, Matt?
I'm not going to get into that.
I'm not going to get into that.
For me, a good criterion of a working model is can you build a replica of one, right?
And we're not there yet, right?
Like, after watching and reading about the immune system, we are not capable of building something like that with bioengineering.
But we understand a lot about it, Matt, okay?
I've looked at the colorful diagrams.
I'm going to put it this way.
I think there are technological, practical challenges to making the immune system, right?
But we understand it, like you say, pretty damn well, that in principle, we probably could build it.
You know what I mean?
Like in principle.
I know it's a bit of a conceptual slip I'm using here to say in principle, but I kind of feel like in principle, like it's chemistry, right?
Like it can be...
You're slipping into the Yuval Noah, right?
It's like uncharted, unknown territory.
But wait, the thing is, Matt, he said Neuralink.
Now, we on the podcast, primarily you, have expressed skepticism about the potential for Elon Musk to fry people's brains, right?
Because of how he's going to charge the batteries and all this kind of thing.
Now, recently, there was, or at least there's videos.
This is one thing.
I don't believe there are papers or scientific data yet, but there are videos purporting to show a human recipient of the Neuralink brain-computer interface thing, which is now in a guy.
Initially, I was thinking, oh, is it just moving a cursor around?
Because we already were capable of doing that for other interfaces.
But this does seem, at least from the videos, to be a step up, right?
In that he's playing computer games and playing Civilization and whatnot.
So did you not say that that was impossible and that you can't recharge a battery without cooking someone's brain?
I don't know.
I'm not going to comment because it's just a video and he's shown videos of his robots and stuff doing things before that turned out to be totally fictional.
But I'll just say, to my naive understanding, yes, there are big practical issues with basically having a permanent penetration into your body, into your cranium, and actually having basically electronic sensors, wires of some kind, linked up to so many neurons in your brain without just causing terrible biological problems to you, infections.
There are just so many neurons in the brain.
I just don't see, practically, how you would wire up a decent proportion of them.
Now, if this guy's playing Civilization and stuff, then that is still basically a brain-computer interface, right?
That is very different.
Like, that is so far apart.
So I'm agreeing with Yuval here.
I'm agreeing with him in saying there's a massive gulf between moving a cursor around, even playing Civilization with it.
Or Mario Kart.
I've seen videos of that.
And actually uploading your brain into a computer.
Or even just you and I, instead of talking by moving our mouths, sort of sending our...
There's just a huge gulf there that I don't see being bridged anytime soon.
Okay.
I like that.
And I'll let you off there with your sentiment.
I think you made a good case.
But this is also why Yuval is frustrating to me, because in one breath he says, this isn't on the horizon.
It's much more complicated than people think.
I don't see it happening.
But then he talks about the scenarios in which it does happen, and how we're not going to be able to even recognize the world we'll be living in when this thing that he says isn't happening happens in 20 years.
Yeah, so, well, here's a little clip just to round out this thing of him talking about, you know, it is fair to say he wrote a book called Homo Deus, right, about the next stage of evolution in humans.
So it's kind of understandable that he would be talking about this kind of stuff.
But here's, you know, a little bit of that in a nutshell.
But the idea that we can now develop these extremely powerful tools of bioengineering and AI and remain the way we are, we'll still be the same Homo sapiens in 200 years, in 500 years, in 1,000 years, we'll have all these tools to connect brains to computers, to kind of re-engineer our genetic code, and we won't do it?
I think this is unlikely.
Agreed.
Yeah, we're going to be safe.
I mean, it's likely.
He is right.
Bioengineering is going to improve, and cybernetic implants too.
Oh, yeah.
I think he generally is right.
It's very hard to just choose not to use a technology if it's available.
It's very hard to choose not to build nuclear weapons if the Ruskies have got them.
It's very hard not to employ AI if other people are using it.
So unless you can control everyone with a one-world government, then technology is slippery like that.
So yeah, I'm sure that if we, or when rather, we eventually do get the genetic tailoring and stuff going on.
And when there is the option to simply make little biological enhancements to ourselves, I'm sure a lot of people will want to take them.
Yeah, well, okay, Matt, the last clip related to the AI stuff, which I think is sort of topical, is about AI and their potential ability to manipulate us intimately.
There was a battle between different social media giants and whatever over how to grab human attention.
And they created algorithms that were really amazing at grabbing people's attention.
And now they're doing the same thing, but with intimacy.
And we are extremely exposed.
We are extremely vulnerable to it.
Now, the big problem is, and again, this is where it gets kind of really philosophical, that what humans really want or need from a relationship is to be in touch with another conscious entity.
An intimate relationship is not just about providing my needs.
Then it's exploitative.
Then it's abusive.
If you're in a relationship and the only thing you think about is how would I feel better?
How would my needs be provided for?
Then this is a very abusive situation.
A really healthy relationship is when it goes both ways.
Andrew Huberman, take note.
Yeah, exactly the person that came to mind when I heard this section as well, because he's talking about AI's capacity to produce a false kind of intimacy, which is actually, you know, not fulfilling, because the other side is not actually getting things out of it, right?
It's not a real connection.
It's a kind of simulacrum.
Simulacrum?
How do you say that?
Simulacrum.
Simulacrum of intimacy.
But as Andrew Huberman has demonstrated, you can also do that as a biological flesh and blood human as well.
Other people might think they're in a relationship.
If you're going to be like that, it's probably better to do it with something that's a bit of technology, because that way nobody gets hurt, right?
Yeah, Lex should probably, you know, it's fine if Lex is exploiting six AI chatbots at the same time.
That's all right.
But yeah, so I just thought this was interesting.
It's a reasonable point, one we've talked about before, about, you know, AIs.
We are social primates and we're kind of reactive to various social cues.
So there is room for exploitation there.
But just pointing out that there's plenty of exploitation going on with the flesh-and-blood tech-optimizer bro.
It didn't really make sense to me because he's talking about exploitation, but in the context of a non-reciprocal relationship and how that's bad.
But it's bad not, presumably, because the AI's feelings are going to get hurt or something.
It's bad because that's not an authentic relationship, so it's kind of unhealthy for the person, for the human being who's got their waifu sex doll or whatever, right?
Yeah.
So it's just a bit confusing.
Yeah, and also, doesn't it, I mean, I can see circumstances wherein you have somebody who's extremely socially isolated, extremely socially awkward, who doesn't have the option to form relationships with real people.
And yes, the advice in general would be that they should, right?
That you should give them tools to try and get them to form real relationships.
But I'm thinking there are undoubtedly cases where maybe the only intimacy open to people is an artificial AI or whatever.
And in that case, I know people like to paint that as dystopian, but I don't see it as hugely more dystopian than somebody living on their own, well, using pornography instead of having sex with a human being.
I mean, maybe it's not great or maybe a little bit, you might find it a bit icky, but I don't know if it's quite the dystopian sort of thing in Harari's language.
I think they want to say it gets dystopian if people choose that over, you know, that they aren't that kind of person and they're able to...
I guess he's imagining a future world, right, where...
The AI impersonation of a loving, caring, sexually available, I assume, partner is so good and so authentic-feeling that it will become a more attractive proposition for people than the real thing.
And, you know, I guess so.
That's possible.
That could be a problem in the far future or the near future.
The general position of all these things.
But yes, yes, I agree.
So, you know, as we become better at simulating artificial humans, people may come to prefer spending time with artificial humans over real humans.
That's possible.
And honestly, I don't think you're going to be able to stop people doing that because sometimes humans are arseholes.
I'm already enjoying chatting to GPT and Claude more than people on Twitter.
So I'm already substituting, Chris.
It hasn't gotten sexual yet.
It hasn't gotten sexual.
But there's been a meeting of minds, I think.
It's been much more pleasant and much more informative than reply guys.
You need Scarlett Johansson's voice.
That would help.
That really couldn't hurt.
See, we've already examined this in Her, the movie Her, for example.
I mean, that's my problem.
That's my problem with all of this because it's not that his speculations are bad or he shouldn't be speculating or it's not possible.
It's just that I've heard it all before.
And not just in the really nerdy, hard sci-fi literature.
It's in movies like Her.
It's mainstream.
I've got some clips that are going to get you back on Harari's side.
You're going to be pumping the air, fist-bumping them.
Here's him talking about intelligence, and I think you're going to like this.
But what exactly is the relation between intelligence and consciousness?
Now, intelligence is the ability to solve problems.
To win at chess, to invest money, to drive a car.
This is intelligence.
Consciousness is the ability to feel things, like pain and pleasure and love and hate and sadness and anger and so many other things.
Now, in humans and also in other mammals, intelligence and consciousness actually go together.
We solve problems by having feelings.
But computers are fundamentally different.
They are already more intelligent than us in at least several narrow fields, but they have zero consciousness.
They don't feel anything.
Yeah, yeah, yeah.
Chris, you really should read Blindsight by Peter Watts.
It deals with all of this.
Really convincingly.
And it was written before the AIs came along.
I actually focused on the intelligence part, because I know that's a bee in your bonnet, when people want to say that AIs aren't intelligent, right?
But consciousness, just listening back there, that's not my definition of consciousness.
He talks about the ability to feel, like emotions.
Oh yeah, but presumably if pressed, he would talk about subjective experience and put it in a more formal sounding way.
It's okay.
Okay, is that a shorthand?
That's fine.
Well, so anyway, you like that, right?
Well, I don't disagree.
I mean, yeah.
Okay, that's my first shot.
That's just getting you.
This is exactly the plot of Blindsight where the AI construction turns out to be an extremely intelligent being but isn't conscious at all.
And it's kind of cool.
It makes you think about it.
What's his point there?
I've forgotten.
I'm sleep deprived.
What was he getting to with making this point that intelligence is not the same as consciousness?
Was he going somewhere with it?
Well, he does go on to talk about, basically, that humans might get more intelligent, but it doesn't make us more happy, right?
Like Elon Musk.
Let's assume that Elon Musk is a genius, right?
But being a genius doesn't make you happier.
But he also says there's no correlation between intelligence and happiness.
And I don't think that is actually true.
Probably negatively correlated.
But no, I mean, again, this is trite but true, right?
He cites Vladimir Putin and Elon Musk as people that are mega rich and powerful.
Rich, yeah.
And he would bet good money that they're not, you know, that much happier, if at all happier, than the median person out there.
And I think he's completely right about that.
There's a huge amount of psychological literature that looks at happiness, which shows that this is true.
Once you get beyond meeting your basic needs and stuff, the incremental returns on doubling your money get smaller and smaller.
But that's not connected with what I was talking about before with the difference between intelligence and consciousness.
Now he's saying that money doesn't make you happy.
You know, it's just saying that it won't solve all of our human problems, right?
Even if, you know, AI is super intelligent, it doesn't mean it has the wisdom to apply that intelligence in a way that humans will find beneficial.
Yeah.
That's right.
Okay, so I see I'm not doing my job, Greg.
What about if I get him to talk about Twitter?
Twitter now, when Elon took it over, and I think people will relate to this if you use Twitter.
Suddenly, I've seen more people having their heads blown off and being hit by cars on Twitter than I'd ever seen in the previous 10 years.
I think someone at Twitter has gone, listen, this company is going to die unless we increase time spent on this platform and show more ads.
So let's start serving up a more addictive algorithm.
And that requires a response from Instagram.
And the other platforms, and so it's a real...
Ah, that was actually Stephen Bartlett, I forgot to mention, at that point.
But he was right, wasn't he?
That was correct.
Yeah, that's fine.
That's correct.
Yeah, there's incentives from all those media companies to basically push stuff in front of us that'll make us keep watching and keep clicking.
And the stuff that sort of incites a reaction, a visceral reaction, a negative one often.
It's like, I mean, people have talked about this a lot.
Yes, it's a concern.
Elon Musk is worse than most.
Made Twitter much worse.
We all agree.
Okay, I know something you're interested in, Matt.
You're very concerned about your mortality.
You're always fretting, wringing your hands, worrying about your grey hair, whatever the case might be.
Maybe this will light up your idea space.
Yeah, it will definitely change everything if you think about relations between parents and children.
So if you live forever, those 20 years you spent raising somebody 2,000 years ago, what do they mean now?
But I think long before we get to that point, I mean, most of these people are going to be incredibly disappointed because it will not happen within their lifetime.
Another related problem is that we will not get to immortality.
We will get to something that maybe should be called amortality.
Immortality is that, like a god, you can never die, no matter what happens.
Even if we solve cancer and Alzheimer's and dementia and whatever, we will not get there.
We will get to kind of a life without a definitive expiry date.
That you can live indefinitely.
You can go every 10 years to a clinic and get yourself rejuvenated, but if a bus runs you over, or your airplane explodes, or a terrorist kills you, you're dead.
And you're not coming back to life.
Now, realizing that you have a chance to live forever, but if there is an accident, you die.
This creates a level of anxiety and terror unlike anything that we know in our own lives.
So what do you think about that, Matt?
Once again, Chris, if we solve all the problems of aging, we figure out the secret of eternal youth, and we could potentially, unless we have an accident, live forever and ever and ever.
I mean, that sounds pretty great, doesn't it?
But there's a downside, which is that you might have a bit of an odd or a different relationship with your parents.
They're also, physiologically, presumably the same age as you after a while, and people might get sick of it.
They might get paranoid about having an accident and play it really safe.
I mean, yes, maybe.
I'm sure you could write a good science fiction story about a civilisation where they're all 2,000 or 3,000 years old and it's made them incredibly conservative and risk-averse.
It's like, let's cross that bridge when we come to it, shall we?
I mean, we have to figure out the mysteries of eternal life first.
Isn't that sort of what Altered Carbon or what's the other one Elysium was about?
Well, anyway, all cyberpunk has these kind of concepts about ways to exchange consciousness or extend life and rich people becoming very paranoid.
So, yeah, and one point he makes, Matt, that I just don't find convincing is when he's kind of talking about another drawback of immortality, like, what are you actually going to do?
And why aren't you doing it now?
And if you're not doing it now, like, would you actually do anything with the 1,000 years?
And on the one hand, yes, he is correct that for people, you know, fixating on the life extension technologies but having not very satisfying lives now, there's a contradictory aspect to that.
On the other hand, the reason that I'm not doing so many of the things that I would do if I had 1,000 years on the planet or an unlimited amount is because I have a limited amount of time, right?
So I just never really get this argument because it's like, what are you going to do?
What are you going to do for thousands of years?
I'm like, there are more games produced every year than I could possibly play.
There are more books, more movies than I could possibly see.
I could learn so many interesting things.
There's so many history podcasts.
I feel like it's a failure of imagination.
All these vampires, Matt, they're always getting so bored.
Like, goddamn, learn some new skills.
Stop dressing like Victorian dandies.
Learn to code, you Dracula.
Get a hobby.
Yeah, I know.
I feel like I could use a few hundred years at least.
I might go into the suicide booth eventually.
Or I might keep going.
Who knows?
But I'll cross those bridges when I get to them. This is all based on total speculative what-ifs that Harari would agree are not in the near foreseeable future unless it's a huge stroke of luck.
I see little value in speculating.
I'm sure there'll be some unexpected downsides when you're 300 years old.
I'm not quite sure how your memories are going to work.
Are you going to be able to keep remembering all the things that happened to you?
Again, science fiction has dealt with this.
They posit that there's a limit to how much a human mind can hold in it.
Not my mind.
Yeah, yeah.
And, you know, like I already forget most of the details of my life and, you know, you could sort of become a bit of a different person after a few hundred years and it's like, oh, yes, it's all very interesting.
It makes us question what we mean by, you know, the self and stuff.
You definitely are selling it.
I like it in the context of a fully fleshed out science fiction story.
I just don't find it as interesting when it's like, hey, Chris, what if we live forever?
Do you think you'd get bored?
I mean, I just...
All right.
No.
The answer could be yes.
The answer could be no.
I don't care.
How many guru speeches can I listen to every week?
I would not get bored.
That's right.
We all know what you would be doing with your time.
We all know.
Yeah.
So Zorgon Xiflborv III, you know, said to the Galactic Conference, but Matt, this is a little bit of indulgence on my part, but I just find this slightly...
I'm sorry.
I don't think there's that much wrong with it, but just let me play and see.
Let's see if we can do this game.
I'll make it more exciting for you.
Guess what I didn't like about this?
Now, with regard to the discussion of free will, my position is you cannot start with the assumption that humans have free will.
If you start with this assumption, then it makes you very incurious, lacking curiosity about yourself, about human beings.
It kind of closes off the investigation before it began.
You assume that any decision you make is just a result of your free will.
Why did I choose this politician, this product, this spouse?
Because it's my free will.
And if this is your position, there is nothing to investigate.
You just assume you have this kind of divine spark within you that makes all the decisions, and there is nothing to investigate there.
I would say no.
Start investigating.
And you'll probably discover that there are a lot of factors, whether it's external factors, like cultural traditions, and also internal factors, like biological mechanisms, that shape your decisions.
You chose this politician or this spouse because of certain cultural traditions and because of certain biological mechanisms.
Your DNA, your brain structure, whatever.
And this actually makes it possible for you to get to know yourself better.
What upset you?
Well, you're a free will guy, aren't you?
Shut up.
Yeah, I'm not a free will guy.
I don't enjoy discussions of free will at all.
So, it's not that.
What do you think?
I don't know.
Clearly, some part of my mental model of you is lacking.
Or maybe I'm just tired.
Yeah.
Yeah.
Very disappointing.
I can't think.
What is it?
What I don't like, Matt, is this false dichotomy that he constructs and which other people have created.
The option is libertarian free will, right?
That kind of divine little thing floating around inside your head that is making completely unconstrained choices, like not related to any biological issues or psychological propensities or whatever.
Or you consider those factors, recognizing that you're a biological being, that you're steeped in a social environment, that you're influenced by your DNA and personality.
And this then makes you realise that free will is an illusion, that it isn't real, if you are properly considering the topic more fully.
I understand.
I understand.
You can endorse a free will position, like Kevin Mitchell does, and it doesn't mean that you're ruling out or completely uninterested in all of the antecedent causes.
Look, I mean, I actually didn't mind what he said there myself, because I guess I just read it charitably, which is that...
Unlike me.
Yeah, unlike you.
Unlike me before.
But, you know, I guess he's saying, look, it's more helpful to focus...
Because free will, you could say it is a bit of a thought-terminating cliche, because it does sort of, as he said, shut down thinking and exploring the reasons why people do the things they do.
And, you know, psychologists are basically all about that.
We don't go, you know, why is this person gambling?
Well, it's because he chose to.
He exists at his free will.
No, we don't do that.
We look at all the causes.
Right, but on the other hand, I'm not so sure that it opens your mind entirely.
Like, for example, that considering the biological determinants of behavior makes people more open-minded to thinking about the multiple influences.
No, there's plenty of people that are hard biological determinists and race-IQ maniacs.
It is certainly possible.
I guess I'm more sympathetic to it because even though it's possible, it is easy to...
And it does have that political angle to it because I'll stick with the gambling example.
There is a strong stream of thought which is that you shouldn't worry about addiction too much, you know, there's nothing really wrong with the way the gambling industry is doing their thing, because fundamentally it's everybody's choice.
It's their choice and we shouldn't be controlling them and trying to interfere with them making their free choices.
If they make a wrong choice, then it's on them.
So it does get used, right, as a rhetorical tool.
And I guess I'm basically on board with him, which is that theoretical considerations aside, it is helpful to focus on the causes.
Yeah, I don't have any issue with that.
I think people should consider the role that, you know, all of the things that he highlights play.
And it's wrong to think that you are the libertarian free will, you know, creature that is completely unconstrained by the environment and your biological makeup.
Of course you're not.
I just take issue with the notion that acknowledging that means that you basically will, if you think about it enough, you know, agree with Sam Harris and Yuval Noah Harari on the position regarding free will.
It's not that Kevin Mitchell hasn't thought about this topic carefully enough.
So that's it.
But actually, he is not as extreme on this issue, because he does point out in another part that you can take different viewpoints on the degree to which there might be choices that people make which are less constrained by those factors, or that cannot be viewed as entirely deterministic, and you can have a debate about that, but you have to consider those factors.
So maybe I am being unfair, in that he's basically talking about people that have never considered, you know, the influence of biology or culture as important for the choices that they make.
And like, that is correct, that the unexamined life famously is unexamined.
Yeah.
Not worth living, I think, is how it goes.
That's right.
But I've just changed it for my purposes.
So that's right.
But actually, it does speak to, he has a side gig in emphasizing the importance of introspection, the kind of Sam Harris thing.
So he just talks about it briefly, but he mentions this, you know, keeping a kind of balanced information diet.
That it's basically like with food.
You need food in order to survive and to be healthy.
But if you eat too much, or if you eat too much of the wrong stuff, it's bad for you.
And it's exactly the same with information.
Information is the food of the mind.
And if you eat too much of it, of the wrong kind, you'll get a very sick mind.
So I try to keep a very balanced information diet, which also includes information fasts.
So I try to disconnect.
Every day I dedicate two hours a day for meditation.
Wow.
And every year I go for a long meditation retreat of between 30 and 60 days.
Completely disconnecting, no phones, no emails, not even books.
Just observing myself, observing what is happening inside my body and inside my mind, getting to know myself better, and kind of digesting all the information that I absorbed during the rest of the year or the rest of the day.
How would you describe your information diet, Chris?
Balanced?
Healthy?
Yeah, well, I do consume a lot of junk, but I consume it in a critical way and I balance it out with good quality.
I actually like that analogy because we've talked about intellectual junk food, you know, stuff which gives the appearance of being information dense and, you know, thoughtful, but it's actually junk food, right?
Like there's no intellectual calories.
So I like that aspect.
I thought the part about information fasting, just like fasting as an actual practice, has more debatable evidence for its benefits.
I do think taking a break from social media or whatever at some points, especially if you're getting too wrapped up in things, is advisable.
But 60 days a year, and a two-hour meditation session per day, I mean, I guess it's nice that he has that much time to dedicate to introspective development.
So yeah, I don't have that.
No, that's not an option for most people.
But yeah, like you, I like that analogy.
Like for instance, I mean, I practice this, like I've turned off notifications on Twitter.
So I don't get notifications of any kind popping up.
And when I open the app, I won't even see notifications from people that I don't follow.
So that means in Elon Musk's new Twitter, where you get 14 idiotic blue-check morons replying to you.
If I read those, I would probably get at least annoyed and it would just clutter my mind with nonsense as I debated whether or not to, oh, I'm going to write this.
No, I'm not going to.
Just ignore it, Matt.
Just ignore it.
I mean, that's just like information hygiene, right?
Exactly.
Yeah, so I think it's giving good advice there.
Everyone should sort of do that and not just be reactive and just consume whatever, you know, is stimulating or is coming across your eyes.
Yeah, and I mean, they do, at another point, talk about how apparently the CEO of Netflix said something about being in a war with people falling asleep.
And they take this as, you know, a very terrible indicator.
But I actually thought it was just like probably an offhand joke.
But in any case, Matt, the last section for this decoding is probably the part that we are most expected to agree with.
And perhaps we will.
It is the kind of neoliberal politics pitch section.
So he's actually lamenting the fall of neoliberalism and kind of the hopeful period of politics that he was resonant with.
It's not just because of the rapid changes and the upheavals they cause.
It's also because, you know, 10 years ago, we had a global order, the liberal order, which was far from perfect, but it still kind of regulated relations between nations,
between countries.
Based on an idea, on the liberal worldview, that despite our national differences, all humans share certain basic experiences and needs and interests, which is why it makes sense for us to work together to diffuse conflicts and to solve our common problems.
It was far from perfect, but it did create the most peaceful era in human history.
Then this order was repeatedly attacked, not only from outside, from forces like Russia or North Korea or Iran that never accepted this order, but also from the inside, even from the United States,
which was the architect to a large extent of this order.
With the election of Donald Trump, who says, I don't care about any kind of global order.
I only care about my own nation.
I'm on board with that.
I wouldn't call it neoliberalism specifically, just a little point of order.
You know, it's slightly distinct from, you know...
Yeah, it's the liberal consensus.
But this is, like, if you want to put the negative qualifier on it, like when people refer to it as the neoliberal consensus.
I know.
There are accounts on Twitter that call themselves neoliberals, and they're basically defenders of the liberal consensus and basically normie economics and stuff.
But they've sort of embraced the term neoliberal like the gay community embraced the word queer.
So you were just doing that.
I understand.
I understand.
But, you know, I'm on board with that.
I think I agree with them.
The unipolar moment wasn't perfect by any means.
But the retreat from multilateralism and international multilateral agreements and things like that.
Well, until NATO countries got the bejesus scared out of them by Russia, they got a second wind.
But there has been a growing skepticism in countries even within the sort of order.
And it's driven by these populist movements.
And I feel a little bit like vaccines, you know what I mean?
Like, when nationalism and our-country-first, it sounds really great.
It feels like you don't need to abide by these multilateral agreements and participate in these international institutions.
It may sound good because you haven't lived in a world where, just like viruses roamed free, you know, countries really were just fighting wars, big wars, with each other on the regular.
Well, I was trying to work out the connection to vaccines, but I see.
A stretched analogy.
Interesting.
Why should you have to take a vaccine?
I meant to say it's just complacency.
Oh, right.
Yes, yes, yes.
And a failure to recognize the benefits of things like international trade.
One crucial qualifier, which I think is often overlooked in what Yuval says there, and in what many other people who are criticized for having this opinion say, is he acknowledged it's not perfect.
That there were plenty of things that you could and should criticize about that.
Because whenever people make this case, they'll always point to, "So you're saying that there was a global, peaceful consensus 10 years ago, 20 years ago when they invaded Iraq?"
That is not the argument.
You know, America, in that instance, for example, actually forewent establishing a multinational consensus, right?
Like, it didn't get the support of the UN.
No, exactly.
And, you know, the experience of America making those stupid decisions has kind of led them to retreat even further from, you know, being involved, I guess, in the rest of the world, or at least amongst the MAGA types.
But yeah, no, he's not like one of these, like, triumphalist, West is best.
You know, rah, rah, rah.
No, he's not Douglas Murray.
Doesn't sound to me like he's putting it up on a pedestal.
He's taken a far more reasonable position, I think, which is just that this is preferable, right?
International trade is good.
Multilateral agreements, peace and international agreements where you follow some rule of law.
We had that at least to some extent, and we seem to be losing it to some degree.
At least there's challenges coming.
This is true.
And I think he makes a good critique of the Donald Trumps and the populist figures that are all over the place at the minute.
And you see this way of thinking, that I only care about the interests of my nation more and more around the world.
Now, the big question to ask is, if all the nations think like that, what regulates the relations between them?
And there was no alternative.
Nobody came up and said, okay, I don't like the liberal global order, I have a better suggestion for how to manage relations between different nations.
They just destroyed the existing order without offering an alternative.
And the alternative to order is simply disorder.
And this is now where we find ourselves.
I think that's a fair point, and I'll sign on to it.
You know, like, people shouldn't be focused on just recent history here.
Like, there's a broader point here, which is order is better than disorder in general.
And, you know, after, I think it was the Napoleonic Wars, they, you know, instituted the Metternich system in Europe, which worked for a pretty long while in at least preventing the kind of cataclysmic wars that had been fought before, like the Thirty Years' War and the Napoleonic Wars and so on.
And by no means was it a perfect system.
It ended up failing cataclysmically with World War I. But again, it's a pretty anodyne point, which is that multilateral agreements and a sort of ordered, diplomatic, negotiated way of dealing with disputes, rather than just survival of the fittest, is preferable when it comes to international relations.
If you want to talk about utopian visions and basic principles that we should all be able to get on board with, there's a part where he's asked about, you know, what kind of things he thinks we should be focusing on.
So just listen to his answer, Matt.
The relatively peaceful era of the early 21st century, it did not result from some miracle.
It resulted from humans making wise decisions in previous decades.
What are the wise decisions we need to make now, in your view?
Reinvest in rebuilding a global order which is based on universal values and norms, and not just on the narrow interests of specific nation states.
So I do think there will be some people that hear that and say the relatively peaceful era of the 21st century and the global order.
What kind of platitudinous bullshit, right?
Plenty of people suffering under the boot of imperialist regimes.
But his suggestion is we should try and build a cooperative, global, multinational agenda, which is based on respect for universal values and norms and not focused on nationalism, like inward-looking nationalism.
We can get on board with that, can't we?
Can we not all hold hands in this respect, except for the, you know, the hardcore nationalists.
But even there, Matt, even there, I think he does a good job of highlighting something about the false dichotomy that populists draw.
It should be clear that many of these politicians, they present a false dichotomy, a false binary vision of the world, as if you have to choose between patriotism and globalism,
between being loyal to your nation and being loyal to some kind of, I don't know, global government or whatever.
And this is completely false.
There is no contradiction between patriotism and global cooperation.
When we talk about global cooperation, we definitely don't have in mind, at least not anybody that I know, a global government.
This is an impossible and very dangerous idea.
It simply means that you have certain rules and norms for how different nation states treat each other.
And behave towards each other.
If you don't have a system of global norms and values, then very quickly what you have is just global conflict.
It's just wars.
I mean, some people have this idea, they imagine the world as a network of friendly fortresses.
Like, each nation will be a fortress with very high walls, taking care of its own interests.
But living on relatively friendly terms with the neighboring fortresses, trading with them and whatever.
Now, the main problem with this vision is that fortresses are almost never friendly.
I like that.
And it's so counter to the image, you know, that Alex Jones and so on want to present, of him wanting a global government that controls all aspects of your life, you know, just eat bugs, the WEF.
No, he's talking about just multilateral agreements and, you know, not invading other sovereign countries, and international courts of law.
Yeah, for what it's worth, not that my opinion about any of this stuff matters at all.
But yeah, that's kind of how I see my Star Trek future.
It's not that you concentrate all the power and decision-making at a centralized, one-world government that just does everything everywhere, but there's no reason to concentrate it all at the national level either.
You can have these tiered kind of systems where decision-making power and democratic-type processes are occurring at very local levels and regional levels and national levels, and then supranational levels like Europe or ASEAN or whatever, just distributing that kind of stuff in a network, a hierarchical network, I suppose, that spans the world.
And what you get from that, hopefully, is something that looks more like a community that spans multiple geographic levels, rather than this sort of fractious, kind of friendly but suspicious and self-interested, libertarian model of international relations that is currently kind of the usual way of doing things.
So, I'm all for it.
I'm all for, you know, I'm all for AUKUS, the agreement between Australia, the UK, and the US.
I'm all for NATO.
Bring it on.
Bring it on.
Yeah, so that's our official position on that.
But like, yeah, if the Libertarians had their way, we'd all be living in Mad Max.
We don't want them to get their way, and we don't want the ultra-nationalists to get their way.
And the leftists would have us all up against the wall, like living in a harmonious collective where you're not allowed to think about capitalism.
So, you know, come on, come on.
The moderate path.
Take the middle path.
You know, if you're Buddhists, that's apparently the way to go.
You know, whatever.
Have your own political views.
But this is the part that, I will say, jibes quite a lot with me, in terms of, I think what he's saying is...
Yeah, and it's unfair to paint him as some kind of Thatcher type or Douglas Murray.
No!
No.
He's pretty normie in his political opinions, and they're fine.
He's pretty normie, and, like, counter to that image, he does talk about potential issues with technological development and so on.
Like, he isn't just an accelerationist booster.
So, they've got their diagnosis wrong, but what's new?
They're conspiratorial chucklefucks.
So, that's what they do, Matt.
I know.
So, that's us.
We're done with Harari.
Final thoughts, Matt?
What do you have to say about him?
What's the big picture?
Is he much of a secular guru?
What did you think?
I think he's a bit guru-ish in a relatively innocuous way.
He does have that trick of basically saying things that are relatively uncontroversial and, frankly, a bit bland and uninteresting, but just phrasing it in a way that sounds a bit more dramatic, that sounds a bit more sexy.
And that's, ironically, what got him into trouble with both hardcore Christian types and also academic philosopher types.
Philosophers.
And that's all he's doing.
He's just making himself sound more interesting, frankly.
As well as that, he does that thing of speculating about the future in sweeping terms.
I mean, part of it is just kind of intriguing, exciting, like what's going to happen?
You know, are we going to be cyborgs?
But, you know, some of it is negatively tinged in terms of, you know, this is going to change everything and, you know, we won't even be able to recognize ourselves anymore and who knows what could happen when people are having relationships with sex bots or whatever.
And so there is a bit of that Cassandra complex thing too, but it's very mild though.
It's not like an Alex Jones type thing.
It's mild.
I'm not going to classify him, Chris.
I don't want to preempt the Gurometer.
The Gurometer has the final say, but I think I'd probably find him a little bit guru-esque on a few components, but mostly low.
Yeah, I basically see him as a guru in the mold of a TEDx speaker, right?
And the issue is speaking with too much confidence and sometimes presenting things in an overly dramatic and simplistic way in order to, like you said, make fairly trite observations sound more dramatic.
But as noted, I do think there is space in the intellectual landscape for people presenting ideas in this fashion and trying to weave grand narratives.
I don't think he's particularly harmful.
I know that isn't one of the things that we're, you know, typically focusing on, but I do think, from people that we've looked at, you know, we've just covered Jordan Peterson and Brett, and it's clear that there's a very big difference, even in the parts where he's been hyperbolic, between their deliveries.
And in many ways, you could also see him as a pro-establishment guru type, right?
Because he's not trying to undermine confidence in the global order or vaccines or whatever the case might be.
So it's sort of interesting that he found a lane that leads to attention and lots of invitations to speak and whatnot, but which doesn't require taking a strong contrarian position.
So, that's interesting.
Yeah, and he's not the only one.
Like, I think there are other sort of popular non-fiction authors who write, you know, academically informed but easily digestible, big-ideas-type, big-picture stuff about science, humanity, history.
And, you know, I think that's okay.
You know, if you want a book to read on the beach on the holidays, then you can read one of those and it won't do you any harm.
You can have a go at them for, you know, simplifying things a little bit, making the ideas sound a bit more revolutionary than they actually are.
But, you know, it's not the worst crime in the world.
That's true.
Well, there you have it.
So the decoding has ended for today.
And just now, Matt, before we slink off into our various caves, we should look at the reviews that people have been offering.
You know, what I thought I'd do just for today, I'm not going to give us any positive feedback.
I'm going to let us wallow in the dark.
You've got to sometimes accept, you know, the bitter pills that people want to chuck at you.
So I've got a small smorgasbord of negative one out of five star reviews for you.
So which would you like first, A, B, or C?
Okay.
What's behind door number B?
This is from...
Somebody with an unpronounceable name in Canada.
The title is "Has run its course".
I think they're done covering most of what there is to cover, and they're struggling for content now.
You can feel they're in their last miles with this project.
It was good while it lasted.
Definitely check out their older episodes.
One out of five stars.
Oh, God.
I like that.
It's struggling.
That's such a downer, man.
I can't take it on a day like this when I'm sleep deprived.
I'm sorry.
There's more to come.
Is it true?
Have we peaked?
Is this it?
No, no.
What's it Dennis says in It's Always Sunny?
I haven't peaked.
I haven't even begun to peak yet.
But no, this diagnosis is factually wrong in terms of it being a struggle to find gurus to cover.
It's really not.
And unfortunately, they're all over the place.
And even the ones that we've covered just continue to be more mental every week.
So that's not accurate.
But, you know, if you've got your value from it and you're ready to move on, that's fine.
You know, that I'm down with.
You go, you know, there's plenty of history podcasts out there to go to, but we'll continue to look at the gurus.
So take that unpronounceable name in Canada.
Now, this one, I don't know if I should reward this, Matt, but I'm going to do it anyway.
Do you remember we had the guy who wrote the review about us making Sam Harris into a golem and all that stuff?
They were very upset with our interview with Sam Harris.
Vaguely.
They have somehow...
Risen back to the top.
So let me read it.
It's Disappointed 2 million and 1 again, just from a few days ago.
And it's Sam Harris again as the title, one out of five stars.
And it says, just terrible discussion on Gaza-Israel.
You seem to attack Sam, using him as a golem to bring in an audience.
I certainly don't agree with all Sam's views, but you instanced the two sides to the argument for your gain in audience numbers.
Seems shameful on such an important topic.
Hmm.
The point is that Sam has a much bigger audience than you.
You disproportionately criticized Sam to bring his audience to your conversations.
Your Israel-Gaza disagreements were to create conflict with him rather than a genuine attempt to discuss a complex world-changing event.
That seems cynical.
And now you have a paywall?
Creating your own echo chamber, boys.
Sam should have been smarter and not given you the airtime.
Stick to the Weinsteins and kiss it.
Love, yours, Disappointed 2 Million and 1.
So that's...
Good idea.
Sorry, Matt.
Another body blow.
Another body blow.
But again, I told you, Matt, that we had to gin up our disagreements with Sam.
You know, we find him too rational, too accurate.
So we had to pretend that we disagreed.
We had to manufacture a disagreement.
In order to get his audience?
Like, why would that work?
Does that logic hold up?
Surely they wouldn't like us attacking him.
Wouldn't it be Sam Harris haters that we would have wanted to attract?
I'm not sure of the logic there.
But, you know, Sam does have a big audience, but he also talks to a lot of people and he chose to exercise his right to reply on Guru's Pod.
And we have a policy where we let people do it, even people we don't want to talk to.
So, you know, that's what it is.
Yeah, it's just funny that the analysis on that is that we were pretending to disagree with Sam, whereas, like, other people's analysis will be that we didn't disagree enough.
We can't win.
We did disagree on a number of points and it was not pretend, I can assure you of that.
And we had few opportunities to actually have a back and forth debate there.
It was mostly Sam exercising his right.
Whatever.
That's true.
That's true.
But anyway, shove the paywall up your arse.
If you want to hear us waffle on about things, you know, other supplementary material, yes, you can pay for the paywall or not.
But anyway, you're too busy.
So, you know, go listen to Sam Harris.
It's not like he has a paywall in place.
Anyway, last one, Matt.
Sorry.
Like I said, this is a negative one.
Maybe I'm subtly psychologically influencing the audience to go back and counter this wall of hate that we've received.
So this is from Hey Ho Ho Ho Ho Ho Ho Ho Ho Ho.
And the title is Podcast of Critical Academics, Now Ironically Turned into Hate-Filled Gurus.
I think we are the hate-filled gurus there.
The hosts tiptoe around the central question.
Are these gurus revolutionary thinkers or snake oil salesmen?
Instead of wielding Occam's razor, they brandish Occam's feather duster.
I like that line.
They leave anyone with a modicum of rational thought left yearning for sharper critiques.
The podcast regularly feels like an echo chamber for the already skeptical.
So if you hate Jordan Peterson, Chris Williamson, Andrew Huberman or Elon Musk, you can look forward to these two injecting their one-sided drivel about how they're all evil right-wingers.
If you're nodding along, you're part of the club.
If not, well, you're probably a guru yourself.
Chris and Matt chase gurus like kids chasing fireflies.
Just when they've cornered one, it flits away, leaving them grasping at thin air.
Perhaps they need a guru-catching net.
He does like his metaphors.
I'm imagining myself chasing butterflies with a feather duster.
Yeah, that would be a bit of a trip.
It was nicely written.
It was very eloquent.
It was lyrical.
I liked that.
It's a bit overwritten, though, isn't it?
There's way too many metaphors and analogies for a small, short review.
Yeah, not a great deal of substance.
Substance.
Heavy on the metaphors, a little bit light on the evidence to support those metaphors.
Like, we don't hate everybody.
We're relatively gentle.
I like Chris Williamson and his drink.
Come on!
I joked about it, but it's fine.
Yes.
And we try not to criticise people for being right-wing, per se.
That's why we didn't criticise Hasan Piker for being very, very left-wing.
Nominally, anyway.
We criticize them on other grounds, and I think we try hard to do that.
I'm very vulnerable.
Yeah, we said you can get better representatives.
Yeah, I'm very vulnerable today.
I'm very tired, and now I've had these three things.
Now you're dealing with this.
Part of this is, you know, if you're not agreeing with us, you're probably a guru.
No, again, you guys, come on.
We've hammered this home.
If you're not doing the shit in the Gurometer, you're not a secular guru.
You might be a complete idiot.
Who believes in Jordan Peterson's rants about vaccines or whatever, but you yourself are not necessarily a secular guru.
So bad analysis, too many metaphors, but I did like the line about Occam's feather duster.
And there's a weird thing where it's kind of like, you know, the critiques would be good if they were more incisive and cutting, but not this way, not this way.
I expected better.
So anyway, your review.
All right.
I gave it one out of five stars.
Yeah.
So, I tell you, maybe two because of the nice...
You've got to give it a point for that.
Yeah.
How do you like them apples, eh?
You had your review rated.
Yeah.
How does it feel?
After that struggle session, we are now finally at the last bit, which is just...
Thanking the people who support us.
The wise people, the good people, the people that are not convinced by right-wing idiots, unlike our last reviewer.
So, yeah, there's plenty of people that we could thank, Matt, and I will thank some of them in a haphazard manner.
And I'm going to start with conspiracy hypothesizers.
You don't object to this, do you?
Not at all.
Not at all.
Go ahead.
I will thank John Gao, Jacob Hangen, Alice Lee, David Moore, Peter Ryan Schwarzschwert, Mina A, Marshall Clemens, Nathan Ender, Dr. Jerb, Reiterative No. 4, Biavaza Borodni, and Francis Sebesta.
Wow.
I did very well.
I did very well.
You did.
Congratulations.
Thank you, everybody.
Appreciate it.
Some of them are really hard.
I feel like there was a conference.
That none of us were invited to.
That came to some very strong conclusions.
And they've all circulated this list of correct answers.
I wasn't at this conference.
This kind of shit makes me think, man.
It's almost like someone is being paid.
Like, when you hear these George Soros stories, he's trying to destroy the country from within.
We are not going to advance conspiracy theories.
We will advance conspiracy hypotheses.
Yeah.
Revolutionary thinkers, Matt.
The next people.
The people that get access to decoding academia content.
I have some of them.
I have Lars Vick, Robert, Taylor Squiliacci, M.L. McLegendface, Mark M., Nicole Davison, Max Fletange, Colin McLaughlin, Grammaticus Gore, Clayton Spiner, Verdi Gopher, Alexander Will, Peter Rosnick, Matt Behrens, Michael Zimmerman, Melissa, and Julius and Ian Sears.
Those are all revolutionary geniuses.
Are people making up these names just to hurt you?
I wonder.
I know.
Son of a bitch.
I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously all the time.
And the idea is not to try to collapse them down to a single master paradigm.
I'm someone who's a true polymath.
I'm all over the place.
But my main claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess, and it could easily be wrong.
But it also could not be wrong.
The fact that it's even plausible is stunning.
Just the tenor of Brett's voice as he says those last words.
It just never fails to do it for me.
I love it.
Yeah, I get that.
I get that.
Now, Matt, galaxy brain gurus.
That is what we're talking about now.
I have just a little smorgasbord of people to thank, just a random selection.
I'm going to thank Lucy and M.E.M.
Those are all the ones getting shouted out and thanked this week.
I've got an uncle called Jim Brown.
I'll have to research him to find out whether it's him.
Oh, no.
I've just worked out as well that there was somebody who hasn't been thanked for, like, two years.
Oh.
And I said I'd shout them out.
Oh, no.
And I forgot to DM.
I'll get you next week, okay?
Just the person I said, I'll get you.
Don't worry.
I just remembered now.
So sorry about that.
But sorry, sorry, sorry.
Anyway, here's the Galaxy Brain Stinger.
We tried to warn people.
Yeah.
Like what was coming, how it was going to come in, the fact that it was everywhere and in everything.
Considering me tribal just doesn't make any sense.
I have no tribe.
I'm in exile.
Think again, sunshine.
Yeah.
Yeah.
Cool.
Yeah.
Well, good stuff.
Good stuff.
We've done all of the essential things.
Yuval Noah Harari decoded.
Very mean and hurtful reviews dealt with.
Yeah, sorry.
I'll restore balance next time.
Maybe I'll just read the positive ones.
Come on, if you like us, what are you doing?
Represent, get rid of those one-star bastards.
I think that first review, it caught me because, you know, I'm at a time in my life where I feel like I'm, you know, I have peaked.
And I'm on the slow, gradual descent.
Oh, Matt, don't let them worry you.
Everything's going pear-shaped, you know, physically.
I don't have the vim and vigor that I once had.
I'm no longer the wunderkind in the room.
Oh, no.
Well, you are in this room.
You are in this room.
Still there.
Still the shining, guiding light in the decoding groups.
Are you trying to say I'll always be your wunderkind?
I guess that is what I'm saying.
And I also want to tell you, Matt, that despite their claim that we have, you know, we've peaked, we're past it, we're old news, we are at 200,000 downloads this month.
The most downloads we've ever had out of any month since the show began.
So...
Who's falling off?
You know, who's scraping the barrel?
Is it us?
Yeah, look, just saying, Matt, be happy with the 200,000 downloads.
Imagine all those little people consuming all those nuggets of information.
Before anybody writes anything on Reddit, he's joking.
He knows that popularity indexes are not a measure of success or worthiness.
Yeah, am I?
Well, I think he knows that.
I think he knows that.
Maybe I do.
I'm just saying, Matt, on the specific claim that it's uninteresting, it's dull, there's nobody good for us to cover anymore.
It's not true.
It's not true.
There's plenty of gurus there.
We should have covered Yuval many moons ago, and we only got to him eventually because we had too much stuff, you know, going on.
So, yeah, that's it.
Don't worry about it.
But, yes, I do know the amount of downloads does not equate to the accuracy of your content, or the quality.
Okay.
Except in our case where it does.
So that's all.
Alrighty.
I think I'm going to go have a nap.
Go.
But good to speak to you, Chris.
Yes.
I'll see you later.
Retire.
Retire.
Okay.