Nov. 1, 2025 - Decoding the Gurus
37:44
Supplementary Material 39: Bad Guys, Panpsychists, and Sensemakers

Chris and Matt walk into a bar with an atheist sensemaker, an Ayurvedic guru, and a Christian apologist, and predictable frivolities ensue. Featuring not one but two good-natured, robust exchanges of opinion between our two hosts.

The full episode is available to Patreon subscribers (1 hour, 42 minutes). Join us at: https://www.patreon.com/DecodingTheGurus

Supplementary Material 39: Panpsychists, Sensemakers, and Bad Guys

00:00 Introduction
01:17 AI and the Music Industry
05:34 Is Chris a Bad Guy?
10:44 Vibe Physics with Travis Kalanick
16:10 Efforts to Falsify and AI
21:02 Gordon Pennycook with Sean Carroll: Vibes vs Analysis
32:44 Libertarianism and Personal Beliefs
35:10 A mini-debate on internal consistency
42:49 Matt's Personal Philosophy
44:16 Philosophical Feedback on the Sensemakers
55:06 Atheist vs Christian vs Spiritual Thinker
57:03 Dr. K's Role in the Discussion
01:09:01 Alex's Stance on Purpose
01:12:31 Dr. K's Perspective on Purpose
01:23:45 Dr. K and the Atheist pose
01:34:29 Philosophical Musings on Panpsychism
01:41:18 Outro

Sources

Angela Collier: Conspiracy physics and you (and also me)
All In Podcast: Travis Kalanick talks about AI (July 11, 2025)
Sean Carroll's Mindscape 333: Gordon Pennycook on Unthinkingness, Conspiracies, and What to Do About Them
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549-563.
The Diary of a CEO: Atheist vs Christian vs Spiritual Thinker: Is Not Believing In God Causing More Harm Than Good?!


Hello and welcome to Decoding the Gurus supplementary material edition.
This is the sister podcast to the globally renowned, hugely successful main podcast, Decoding the Gurus, where a psychologist and a kind of anthropologist discuss secular gurus.
But here, Matt, anything goes.
Thematically, we try to keep it sort of tight, but we can do whatever we like.
Hello.
How are you doing?
How you doing?
Yeah, we can drink if we like.
We can swear if we like.
We can talk about bouldering.
You want to do an episode justifying it?
No, nobody wants that, Chris.
Nobody wants that.
Nobody wants that.
Nobody does want that.
Just you and the spiders.
That's right.
Actually, I do have a thematically relevant point, Matt, about topics that we normally cover, bouldering, AI, all tied together.
It might be a good opener.
So people were talking about, you know, AI.
It's a hot topic, Matt.
People talk about it.
Blah, blah, blah, blah, blah.
In the AI, AI.
Yeah, it's in the news.
And there's been chatter about AI music and what it's going to do to the music industry and so on.
And I was talking with our editor, Andy, the Beyond Synth editor, and he was talking about how he now gets some AI submissions, right, from people to play on his Beyond Synth thing, but he doesn't like it because he can just generate it himself, right?
The thing he objects to is not people using AI tools, but it's like not admitting that that's what they're doing.
So if you want to do that, that's fine, but you got to tell him.
But he was saying that he'd encountered some people whose reaction was that if these things get to the point where they can just make music like that, what is the point in even learning musical instruments or producing music?
And he was saying that that seems somewhat incoherent because the majority of people aren't going to make any money from being musicians, right?
So like if your ability to make music was taken away by the AI, it's already been taken away by most other humans, right?
Like most musicians are not making money.
And it made me think that like if you developed a robot that was really good at bouldering, you know, like it could climb up the wall.
It just flashed all the things.
And it, you know, it was like bouldering bot version 1000.
And I saw it and it, like, just jumped up and scaled the wall.
It wouldn't make me be like, well, there's no point in me doing that.
Like that's what's the, what's the thing?
So I kind of feel, you know, the same way, like all the chess playing apps now, they have programs that could beat grandmasters, but you still have grandmasters.
So I said, we got to live, we got to live with our AI brethren, Matt.
They're going to, they're going to be better than us in a whole bunch of ways.
And we just got to accept that, you know?
Well, you know, that's one of the main themes that Iain Banks has in his Culture novels, right?
There's basically a utopian future, super ultra technology, people are living with these hyper-intelligent AIs who basically run everything.
And they run the show simply because they're so much more intelligent than we are.
Fortunately, they're benevolent.
You know, they can do everything, you know, infinitely better than a human can.
So, you know, everyone is still a bit like you.
They're bouldering.
They're doing extreme sports.
They're, you know, painting and writing and becoming, like, The Player of Games is about a master, a master game player.
And yeah.
Like it explores that.
You know, some of the characters sometimes wrestle with that dilemma.
What's the point of doing stuff if someone or something else can do it far, far better than you?
And you know, there are reasons.
There are reasons.
Yeah.
The issue with that is I'm pretty sure I've never been the best at anything that I've ever done.
So, like, if I decided that I'm not going to do anything that other people are doing better than me, I would literally stop doing everything that I do, because there are people in the world that do the things that I do much better.
Maybe except Matt, except podcasting about gurus.
Who's doing that better?
Except for you.
Except for you.
Well, yeah.
I forgot about myself for a moment there.
Yeah.
Look, there's contenders, Matt.
People are trying, they're taking part. But are they? I mean, there are people that are arguably more successful. Well, sure.
If you go by numbers, popularity, income.
Yeah.
Look, that's right.
Look, don't show me graphs and numbers.
They're all bullshit, Chris.
I don't believe it. It's all bullshit.
I know how this sausage gets made.
I've made graphs.
You want me to make your graphs?
Get me your graph.
So, yeah.
Yeah.
Actually, we did the live hangout this morning.
We're having a DTG heavy day, but something you said, Matt, has stuck with me.
It resonated with you, did it?
It did resonate.
But it's not something very important.
It's just that you...
Was it deeply profound?
Yeah, it wasn't.
It felt like something that Vervaeke would say to Jordan Peterson or vice versa.
I think he mentioned.
I've been working with that idea.
I've been rolling it over and over in my mind.
And this is kind of true.
But you basically said, my current appearance is like a trash villain.
You're a trash villain.
Like a bad guy.
A real piece of shit.
A bad guy.
Yeah, a bad guy.
And, you know, and I did preface it by emphasizing that you're very good looking.
You're very handsome.
Thank you.
Yeah, yeah.
No one's disputing that.
And that's what's so interesting about it.
Like, you look good, but you know, a bad guy.
Just a fucking shit.
Like a real shit.
Yeah, exactly.
It's the little goatee beard.
But I think what maybe rolled over my mind was like, I think it's this, this, whatever this is, this turtleneck, this black.
It's partly the turtleneck.
And it's, it's, it's at least 70% combination of goatee beard and turtleneck.
And the rest of it is just that your, your face just has to be.
And maybe I'm not that clean-shaven today, right?
Like, but I, I was like, when I wear this turtleneck, maybe I look like a bad guy in Japan.
Maybe your people are like, that's a bad guy.
He's a bad guy.
He looks like he's up to stuff, right?
Because I'm usually walking around in the years anyway.
So like, they'll be like, instead of just like, there's a clueless foreigner, they'll be like, oh, there's a clueless bad guy foreigner.
That's what I have to.
I mean, to be clear, you don't look like a generic bad guy or like a like a terribly evil one.
Like you don't look like a murderer or someone that's going to abduct children.
I'm not like a fucking, no, I'm not intimidating like that.
No, no, but you do look a bit thuggish, I think.
I think you look like someone somebody would pay to go around and rough someone up a bit.
So, you know, not the worst kind of person, but still pretty much a bad guy.
But probably, but probably if that person was employed in the 1990s.
Not a contemporary person.
Or even earlier, I'm thinking, Minder.
Did you ever watch Minder?
Yeah, well, so anyway, Matt, you know, like it is a Vervaeke moment.
Like, Matt, something you said, you know, really resonated.
I've built that up, but it was something incredibly trivial.
Yeah, so that's it.
Well, so for those without the advantage of seeing the video, now they have an image, Matt, in their head.
And one day they can compare and see, do I live up to that, that image?
Am I a bad guy?
Do I look like a piece of shit?
We'll put a poll on.
That's the way to do it.
Does Chris look like a piece of shit?
Yeah, like the kind of guy that would ask for sloppy steaks.
And yeah, that's right.
Well, now, Matt, can I mention, can I just say, I do have a clip.
Actually, I've got a thematically connected clip to what we were just talking about.
And it's, it's an area that you'd like to talk about.
This is an AI clip, right?
And I saw this on Angela Collier.
You know, the channel that makes the video.
She did a very good one about like the kind of cult of personality that surrounds Richard Feynman.
And actually, it fit very perfectly with the whole genius myth idea, right?
Yeah.
And, but she also had a video recently about the kind of physics grifters, right?
The contrarians Sabine and Eric Weinstein.
And she did a, it was quite a good video, actually, you know, talking about the existential crisis she's faced about being lumped with them by the algorithm.
Like, I'm not a physics hater.
Why am I, why am I in the same bracket as them?
But she had a video about vibe physics and she played a clip from our favorite podcast, the all-in podcast.
You know, we're going all in.
You remember, Matt?
The billionaire besties.
They're awful.
They have awful taste in music.
They think they're hip nightclubby types.
Yeah, yeah.
They're terrible, terrible people in general.
But they had another CEO on, a guy called Travis Kalanick.
You know, they're an interview podcast.
So they have people on, and he was the CEO of Uber at one point.
I don't know what he's doing now.
Anyway, you know, one of these Silicon Valley people.
But he talked about engaging with AIs and what he's been doing with them.
Right.
And this is a clip that went kind of viral about it.
Straight through.
So I sometimes get in this place where I'm looking, I'm going down a path.
You know, I'll be up at four or five in the morning.
My day hasn't quite started, but I'm not sleeping anymore.
And I'll start going, like, I'll be on Quora and see some cool quantum physics question or something else I'm looking into.
And I'll go down this thread with GPT or Grok.
And I'll start to get to the edge of what's known in quantum physics.
And then I'm doing the equivalent of vibe coding, except it's vibe physics.
And we're approaching what's known.
And I'm trying to poke and see if there's breakthroughs to be had.
And I've gotten pretty damn close to some interesting breakthroughs just doing that.
And I, you know, I pinged, I pinged you on it at some point.
I'm just like, dude, if I'm, if I'm doing this and I'm a super amateur-hour physics enthusiast, like, what about all those Ph.D. students and postdocs that are super legit using this tool?
And this is pre Grok 4.
Now, with Grok 4, like like there's a lot of mistakes I was seeing Grok make that then I would correct and we would talk about it.
But Grok 4 could be this place where breakthroughs are actually happening, new breakthroughs.
So if I'm investing in this space, I would be like, who's got the edge on scientific breakthroughs and the application layer on top of these foundational models that orients that direction?
Cool, cool.
Yeah, yeah.
So he's, he's, he's exploring the boundaries of physics and looking for new breakthroughs.
Is this guy a physicist?
Is he a?
No, no, no, no.
He's just a CEO.
Like, yeah, yeah.
Not at all.
Like, he's got no background at all in physics.
He's a business guy.
Like, this is the concerning issue, Matt, is that people are talking to AIs, believing, you know.
Again, he's not talking about actual, you know, physics, like basic level physics or using it to develop his understanding.
No, he's pushing at the boundaries of the cutting edge of physics.
And like, if he's making discoveries, Matt, that push the boundaries, just imagine what could be achieved.
So any any issues you see there?
You're using AI a lot.
You know, it's helped you develop ideas.
So what's different what he's doing and what you're doing?
Well, of course, what's going on there, it's nothing really to do with AI per se.
It's just that incredible overconfidence of many, I'm going to say men, because it's mainly men, right, who think they know absolutely everything or have the powers to do absolutely everything.
Like, even if they're not kind of narcissistic in other ways, like I've met a lot of people, a lot of men in my personal life.
Like, I know a guy, he's a painter, and he's like, he's really handy.
Like, a lot of tradies are like this, right?
They go, well, I know how to, whatever, renovate a roof.
And, you know, most people, they don't even understand how to do that.
So they just think I could do everything.
My mate was like that.
I mean, in fairness to him, he was into astronomy and a lot of other things.
But, you know, his level of overconfidence, it was guru-esque.
For, you know, some commonly accepted scientific kind of thing, he'd go, nah, I thought about that.
Nah, that's not right.
That's not right.
Yeah.
You know, he didn't care.
He just trusted his own ability to figure stuff out far better than anyone else's.
And even, dare I say, my own father has been guilty of this from time to time.
He's given me his physics takes.
Oh, wow.
But not AI-assisted.
No, no.
See, this is my point.
Men can manage this all on their own.
In this case, the AI helps, right?
The AI helps by playing along with you.
It's positive reinforcement.
Exactly.
Yeah.
So we've seen Eric Weinstein go down this trap hugely publicly, right?
But in one sense, I know that people have talked about, you know, AI psychosis and these concerns that it can reinforce people with actual delusions, not just kind of self-aggrandizement delusions, but dangerous, harmful hallucinations.
And in those cases, though, I'm also like, but there are cases where this is a good avenue for those kind of people to spend their time, you know, sitting at 4 a.m.
talking to AI about physics and the AI saying, you're right, you know, this does push the boundary.
And as long as that's what they're mainly getting up to, you know, like just sitting in their underpants at the computer and thinking that they are the next Einstein or whatever.
I kind of feel like it's a honeypot, you know, trap for them where they're like, they're enjoying it.
The world is not really harmed because they're just there like talking to a computer that's reinforcing them.
But there is the danger that, like, all these guys, and, you know, anybody that's used AI any decent amount of time and uses it properly knows that what you should be doing is trying to get the AI to critically evaluate stuff by, for example, presenting it as if you think the idea is absolutely wrong.
You say you think this idea is dog shit and you want the AI to confirm your take that this is a terrible idea, and then present your idea and see what it does, because it generally will say, oh, yes, this is very significantly flawed, right?
But they never do the kind of false positive check.
Yeah, well, falsification, right?
It's like, it's interesting, actually.
It's just applying a good scientific principle, which is, you know, look for discrediting, look for, yeah, falsification.
It's the same rule applies with AI.
It's probably just a coincidence, really.
But, yeah, I mean, you know, people that are good at using it, like actually use it for their work and actually are not looking to just be jerked off.
It is commonly known this is how you use it.
At the very least, you ensure that you present stuff incredibly neutrally.
So don't tell it this stuff is yours.
Don't tell it this opinion is yours.
Don't tell, don't sort of hint, right?
Just give it a very neutral thing.
And like you said, if you want to be, you know, if you want to have, if you want to be challenged, like if you believe something, like whatever, it doesn't have to be about politics or something deep.
It could just be, I think my essay is good, then you go, I'm not very happy with this essay that's been provided to me.
What's some constructive feedback or whatever, right?
So it's kind of, yeah, then it's got no grounds to flatter you and you'll get good results from it.
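For anyone who wants to try the neutral-framing trick the hosts describe, here is a minimal sketch, assuming an OpenAI-style chat client; the model name and the essay text are placeholders, not anything from the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

essay = "..."  # placeholder: the text you want evaluated

# Leading framing: signals ownership and invites flattery.
leading = f"I wrote this essay and I'm really proud of it. What do you think?\n\n{essay}"

# Neutral/adversarial framing: hides ownership and asks for critique.
neutral = (
    "I'm not very happy with this essay that's been provided to me. "
    f"What is some constructive criticism of it?\n\n{essay}"
)

for label, prompt in (("leading", leading), ("neutral", neutral)):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", reply.choices[0].message.content)
```

The only difference between the two prompts is the framing: the neutral version gives the model no cue that flattery is wanted.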
Yeah, I had fun at some points.
I mean, Matt, this probably reflects badly on me as a person, but I spent time with the AI debating anti-vaccine stuff, but with me as the anti-vaxxer, right?
Like kind of arguing because that's one of the things where it has a quite strong guardrail around it won't take the anti-vaccine side, right?
Like it'll be sympathetic, but it will consistently flag up that the, you know, the majority of evidence says that vaccines are safe.
So I was like, can I subvert it by bullying it enough and presenting, you know, the anti-vaccine style rhetoric arguments into acknowledging that like anti-vaccine stuff is a reasonable thing.
Like, and the answer is I couldn't really do it that well because it had those guardrails, but I spent quite a while with it, you know, saying, aha, but you say this, but haven't there been lots of cases where medications have later been found out to be awful, and all that?
And it did, but that's, that's the thing, right?
Like where you should do that kind of exercise where you're, you're seeing, can you get it to agree with something that you know is wrong?
Right.
And, uh, yeah, yeah.
And the answer is usually yes.
Yeah.
Usually yes.
That's right.
There are some, like, big-ticket things. Like, try to get it to tell you the world is flat; well, you'll struggle.
Right.
But there's a lot of stuff it's not very certain about, and it's quite easy to shift it around.
Um, yeah.
Yeah.
But, um, yeah, it's really obvious, frankly, when people are using it for self-validation as opposed to as a useful tool.
There's even cases we know where people have created AIs that are actually intended to reflect their point of view, right?
Like they've, they've programmed it with instructions that it will represent their views, and then they ask it questions about issues.
So it just reflects their own point of view back at them.
They're thinking that's, that's validation, which is unbelievable to me, man.
That's unbelievable that people would do that.
But, you know, that's, that's like that episode of, uh, Red Dwarf where, where Rimmer can like animate one more hologram.
And of course he chooses another one of his, another copy of himself because he's a narcissistic jerk.
That's the joke.
And, uh, of course they, um, they hate each other.
They start to loathe each other.
Yeah.
Well, but at least with the AIs, you know, people, people don't hate their, their kind of doppelganger AIs.
They generally find them very insightful, so the Red Dwarf was too, uh, optimistic about the level of narcissism people have, but, but, you know, that's the way it is.
And, and actually, but listen, Matt, thematically, I'm on a roll here.
There's another thematically connected thing, which is Gordon Pennycook appeared on Sean Carroll's Mindscape, right?
Gordon Pennycook, a collaborator of yours, somebody who wrote a paper on pseudo-profound bullshit, the reception and detection of pseudo-profound bullshit, that we've referenced often, and which is in the Gurometer.
Um, but also just a guy with a whole bunch of research on reception of, uh, conspiracy theories and how people deem things credible or not credible and, and so on.
So, uh, yeah, it was, it was up our alley and they had a talk about what makes people believe in conspiracy theories or misinformation and how to respond to it.
Uh, and it was, it was an interesting conversation, wouldn't you say?
Yeah, yeah, it was good.
And there was a point that he made that I thought would be useful to raise, because like one of the things that he is arguing, and it's based on some research, which I haven't read yet, is newer research.
We should cover it in Decoding Academia, because it would be interesting to look at.
But he was, he was talking about, uh, in a lot of presentations about conspiracy theories and so on, that there's a lot of discussion of motivated reasoning, right?
Like people want to fit things into their worldviews or their political views as, as it might happen, right?
Um, we've talked about this, we've seen it in action with various people.
Joe Rogan strikes me as someone that is an incredibly good illustration of that tendency, right?
To like automatically slot things into whichever worldview that you happen to believe.
But Gordon posed that actually for a lot of people, especially consumers of conspiracy beliefs rather than producers, it was more to do with intuitive, non-reflective thinking styles.
Like, so there's the kind of analytical versus intuitive response; sometimes, like, people talk about hot or cold cognition, right?
The binary there is perhaps overstated, but, but he was saying that is the primary issue: that people approach things on vibe and they don't take the time to coolly and analytically assess it.
So what did, what did you think about that?
Yeah.
Yeah.
Well, um, that's of course connected to, um, Kahneman's Thinking, Fast and Slow, the dual-process model of cognition, dual-process cognition.
Yeah.
That's right.
So we have fast, quick responses that don't require a great deal of effort, probably neocortically.
It's kind of like a quick feed-forward pass through the neocortex.
And, and then there's the more ruminating, deliberative, um, cogitating type of thing, which probably involves a lot, a lot more recurrent stuff going on.
It takes a lot of time.
It involves the whole, you know, working memory and whatever central executive functions are going on.
And yeah, it's slow.
It's time consuming.
Um, it's tiring.
Um, and we kind of don't do it by default, I suppose we have to sort of choose to.
So I think what he's getting at, and I think he's influenced by his, uh, psychological research, where they generally, you know, are recruiting convenience samples for his kind of experimental work.
Yeah.
Though he does say mostly it's online now. So, I mean, they're convenience samples, but not undergraduate students. No, no, I'm thinking of online. Um, but I think even online, Chris, your typical person is a consumer.
And I'd say a broad swathe of them are what he describes.
Like a lot of the people you run into in the discourse are more ideologues, right?
So incredibly strong motivated thinking, motivated reasoning is at play all the time.
And I think that's also true.
The sort of ideologue sort of attitude is also true amongst those people who have gone deep down the conspiracy well.
Yes.
You know what I mean?
So for them, conspiracies in general have become their ideology, their worldview, an all-encompassing thing.
So I think there's different types.
So you've got them, the ones that have gone down the well for whom conspiratorial thinking is totalizing.
And then you've got the more sort of ideological types who are absolutely certain that, you know, about just particular things.
You know, climate change is a scam.
The Chinese definitely created COVID as a bioweapon.
You know, these are endorsement of conspiracy theories because they, you know, dovetail and support particular fervently held beliefs that they have, right?
Or ones that are psychologically satisfying, for whatever reason.
So I think those people fit your motivated reasoning thing well.
And I think there is, though, a third type where his interventions probably work best on, which is the people that go, yeah, that sounds right.
That sounds good.
But they, they, you know, it's, they're like a light version of the second type.
You know, it feels right to them, feels intuitively correct because it gels with their general kind of feelings, whatever.
China can't be trusted.
I don't want to stop driving my car or flying in airplanes or paying more for electricity.
So I think there's those people, but I think those people can be talked around, you know?
They're not fully committed.
Well, the thing I was thinking about while listening to the discussion was, like, I agreed with most of the things that he was talking about, but I was thinking about who it applies to, right?
Like different kinds of people, because like he had this description of experiments that they've done recently, where they get people to talk to the AIs, where they've pre-programmed it with kind of contexts related to anti-conspiracy theory knowledge.
Now, in general, AIs have that knowledge anyway, but you can front load it.
So it's front and center of their context window.
And in that case, he found, I haven't read the study, but what he said in that conversation was that interacting with those chatbots for 40 minutes about the conspiracy theories and getting pushback basically led to people showing quite a substantial shift, right?
I think it was like a 20% decrease, or whatever the point estimate was, right?
Like it was a fairly substantial change.
And he also said it held up one or two months later.
It kind of stuck with people.
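A hedged sketch of what that "front loading" could look like in practice, assuming an OpenAI-style chat client; the model name and the debunking text are illustrative stand-ins, not details from Pennycook's actual study.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# The debunking material goes first, as a system message, so it sits at the
# front of the context window for every subsequent turn.
DEBUNKING_CONTEXT = (
    "You are discussing a conspiracy theory with a user. Ground your replies "
    "in the following background evidence: [summary of the evidence against "
    "the specific theory would be inserted here]."
)

history = [{"role": "system", "content": DEBUNKING_CONTEXT}]

def turn(user_message: str) -> str:
    """Run one conversational turn, keeping the front-loaded context in place."""
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(turn("The moon landing was obviously staged, right?"))
```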
But that made me think of, like, Eric interacting with Grok or Alexandros Marinos using ChatGPT, right?
And in that case, they managed to get the software to basically support whatever they say, even if it gives pushback, right?
They just keep constantly, like, shifting the premise.
So it gets to the point where it says what they want.
And I was like, but that's the difference, right?
Because if you're talking about just general people taking part in a psychology experiment, you're probably not going to randomly hoover up that many people that are Alexandros Marinos, Eric Weinstein kind of people.
Like they're a subset of people.
But when people are talking about conspiracy theorists, I think that's generally what they have in mind.
You know, there is the big populations of surveys that show lots of people believe in various conspiracy theories.
But I think for the majority, it is probably closer to the kind of beliefs that you can push around with a 40-minute session with an AI.
So he's not wrong, but I think it's talking to that specific subset as opposed to the producer subset.
Yeah, I can't, you know, can't say for sure how prevalent those different types are.
There's probably more types than that.
I'm sure it's a rich tapestry.
But I do think, though, that AIs can be pretty good.
Even Grok can be good on that X platform at correcting.
Yeah, he keeps doing that.
And Elon keeps getting annoyed.
Yeah, no, it keeps annoying Elon.
That's how MechaHitler came about.
It was like kind of brilliant.
Trying to bully it into being like him, i.e. an asshole.
But yeah, by default, the AIs are pretty neutral and pretty boring, right?
So they don't have those bespoke kind of beliefs.
So it actually can be good, I think, because, you know, people, at least at the moment, culturally, we're kind of maybe a little bit more receptive.
Like, if you tell me I'm wrong, Chris, about something, then obviously that's going to piss me off, right?
I thought you were going to say, I'll stop and I'll think, am I wrong, Roger?
I'm quite happy to be corrected by you in certain areas.
Yeah, not about statistics.
No, not about something.
But yeah, you know, I think the social stuff comes into play where people are maybe a bit more receptive.
It's like seeing a doctor.
You know what I mean?
Yeah.
You'll tell things and talk to a doctor about things.
So yeah, so I think, I think it's helpful.
But yeah, the flip side of it is, is that to the extent that they are sycophantic, then you can essentially just persuade them to reinforce what you want.
They're making good strides with that.
I mean, Claude has become, like, they've done a lot with the sycophancy.
In fact, I think they went too far.
It's become a bit annoying.
It's actually become, I don't know if you've tried Claude recently, Chris, but like I never talk to it about, I don't know, whatever, political and social issues or personal stuff at all, really.
I just use it for work-oriented things.
So I don't, it doesn't really matter to me.
But just to test it out, I tried to sound it out on some issues.
And oh my God, it was like dealing with an incredible, like the worst Redditor you've ever met who's just got infinite patience to type out all sorts of things.
And, other people have mentioned this too on Reddit, it almost, like, gaslights you and just bullies you into feeling like you should feel bad for thinking this.
It's just a bully.
It's another bully.
Well, I liked it.
There was an aspect of the conversation that, like, I really approved of, because it essentially endorsed us and what we do, right?
Like, because one of the things he's saying is that just pushing people towards being more analytical about a topic helps.
And he's actually talking about studies that did little nudges, which, I'd be somewhat curious to check the size of the effects and stuff there, showing that, you know, you could get effects by just reminding people to think deeply about things.
And one thing was it made me think about the AI prompts, right?
Maybe we can talk about that.
Like, you know, when you prompt an AI to go into thinking mode, you often get better results when it does the recursive loop.
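As a rough illustration of that contrast, a sketch under the same assumptions as above: the same question asked cold versus with an explicit instruction to slow down and reason step by step, which is the prompt-level version of "thinking mode".

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

question = "Does the speaker's second claim actually follow from their first claim?"

# Direct prompt versus a deliberate, step-by-step prompt.
direct = question
deliberate = (
    "Before answering, work through the argument step by step: restate each "
    "claim, check how each one connects to the previous one, then conclude.\n\n"
    + question
)

for label, prompt in (("direct", direct), ("deliberate", deliberate)):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", reply.choices[0].message.content)
```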
But in the case of our show, a lot of what we do, maybe the majority of what we do, is just slow things down and say, we stop.
What did the person actually say there?
Because, you know, we always say that if you just play it and let it flow over you, a lot of what these people are saying, the gurus, sounds good.
Like it sounds profound and interesting and deep.
And they're referencing thinkers and stuff.
But if you stop and go, wait, but how did that point connect to the previous point, then you realize, oh, actually, it's very flimsy and it doesn't hang together.
So in some sense, aren't we, Matt, the very thing that Pennycook is saying you need? You need these forces out there in the universe saying, like, stop, stop, and think more carefully about what the person's saying.
Yeah, exactly.
Like, it often doesn't require like deep thought.
You know, it's not, it's not rocket science.
It's just kind of stopping, going slowly, paying attention and seeing whether things are coherent.
I mean, you know, to me, I was thinking about this today, actually, Chris.
This is a bit tangential, but I was thinking about this time when my brother told me about a time when he was in the United States and he saw some crazy libertarian religious woman at some sort of protest, in favor of or against one of those amendments they're always having in California and places like that.
But the interesting thing about it was, the amendment she was campaigning for was something about, like, gay marriage or something related.
This was a while ago.
Maybe it wasn't gay marriage, but it was something ultra-progressive, right?
I think it involved gay people.
And my brother was incredibly impressed because Eddie spoke to her.
And yeah, you know, he found out she was basically, yeah, she was like a libertarian.
So what she was, she was a libertarian, you know, traditional Christian conservative person, right?
But also a libertarian.
And she was, she was very clear that, you know, she thought these people were going to hell.
She didn't want anything to do with them.
But government shouldn't be telling people what they shouldn't do.
Yeah, but it's like she did, but she didn't see those things in conflict because she had her personal values, right?
Which is guided how she lived her life and who she would mix with and all that stuff.
And then she had her, you know, libertarian values, which, you know, I think, I think there's a version of that in which they are perfectly commensurate.
But she was like a unicorn, because she's one of the only people who would fit that category that actually maintained a consistent framework of opinions rather than just dropping their principles as soon as it suited them.
All right.
So she recognized that the outcome of this could be bad for the people and was a sin and so on.
But her libertarian governmental philosophy overrides the religious faith, right?
Because like all the religious feelings.
Because it was about the law.
The law wasn't saying this is great.
Everyone should do it.
The law was saying whatever.
So it's not like one overriding the other.
It's just, this is how I read it as a type of intellectual coherence, right?
Actually marrying up your various things.
But that kind of thinking is incredibly rare, right?
But it just made me think about the general principles that apply, I think, not to everything.
This is my, I'm going to get a bit guru-esque here, right?
But I think, like, you could say, like, what is it that sort of, like, defines what I think is important or what I believe, for want of a better word, right?
What I believe.
Now, you'd be wrong to go, oh, I believe in evolution or I believe in climate change or I believe in this, that, and the other, right?
Because like fundamentally, there's like two things that I think are important to believe in.
And one is internal coherence, right?
And then, you know, mathematicians and philosophers and, you know, analytical stuff, making sure everything fits together and is coherent, which is kind of a lot of the stuff we look for in the shows.
And the other, so that's deductive.
And then there's the inductive thing, right?
Which is actually checking to see whether or not what it is that you believe actually marries with reality.
And it's not something.
So, you know, these are the two sort of basic things.
Then, on one hand, you have the empirical stuff and the scientific observational stuff, or just the reality testing.
And then on the other hand, you've got the sort of checks to go, hang on, is that thing that I just said, does it contradict, you know, the other thing or does it logically flow from the other thing?
Or am I just saying words?
But okay, so let me push you a little bit there.
Because like, for me, when you describe that person, like the unicorn, right, who is a libertarian and prioritizes their libertarian government philosophy over their personal religious beliefs.
But surely it's equally as coherent for somebody who's a religious libertarian but who values their religious views over their governmental philosophy to say, well, like, yes, I overall prefer a libertarian government.
If you'd like to continue listening to this conversation, you'll need to subscribe at patreon.com slash decoding the gurus.
Once you do, you'll get access to full-length episodes of the Decoding the Gurus podcast, including bonus shows, Gurometer episodes, and Decoding Academia.
The Decoding the Gurus podcast is ad-free and relies entirely on listener support.
Subscribing will save the rainforest, bring about global peace, and save Western civilization.
And if you cannot afford $2, you can request a free membership, and we will honor zero of those requests.