Andrew Huberman and Peter Attia: Self-enhancement, supplements & doughnuts?
In a kind of meta crossover with our Decoding Academia series, we're going to decode a journal club discussion between two well-known health optimisers: Dr. Peter Attia and Dr. Andrew Huberman. So you get to listen to two academics talk about two other academics talking about academic papers... we know...

We've already been introduced to the bulging biceps and morning sun-drenched routines of Huberman elsewhere, but this is our first introduction to Peter Attia, MD. Attia is a former ultra-endurance athlete, a physician in the field of longevity and performance, a podcaster (who isn't, amirite?!?) and author of "Outlive: The Science and Art of Longevity".

Attia introduces us to a paper that casts doubt on the supposed general life-extending properties of a diabetes drug called metformin. This is a drug that is apparently very well known in the biohacker/life-extension communities and one that Attia administered to himself for a number of years despite the rather preliminary evidence. This is the first of many indicators that both gentlemen are certainly on the bleeding edge of self-medicating experimentation, doggedly pursuing the elusive goals of huge pectoral muscles, minds that laugh at the concept of cognitive decline, and bodies that will live... well, for a lot longer than Matt and Chris!

We get to hear about week-long starvation regimes, medications that take the edge off pizza and doughnut binges, dealing with month-long nausea from self-dosing experimental treatments, and frequent finger-prick blood tests, all for the sake of optimising, optimising, optimising...

Huberman's paper (a preprint, actually) falls more into the "big, if true" category, although he seems fairly confident himself. Does *believing* you are getting a treatment generate the relevant physiological and neurological effects in the body that could mean we can bypass the need for certain pharmacological substances entirely, including some vaccines?!?
Based on the results of a small-N fMRI study that reports mixed results, Huberman muses... maybe! Or how about those other small-N studies, with p-values hovering suspiciously close to 0.05, that report other counterintuitive findings? We will leave it to Huberman to explain.

But the bad stuff aside, Huberman and Attia (especially Attia) actually do a pretty decent job of talking about how to approach research papers and some of the pros and cons of different approaches. Chris and Matt thus have ample opportunities to give credit where credit's due and demonstrate that they are the fair-minded souls everyone knows them to be!

In any case, it's an interesting peek into an alternative health-optimiser world. It seems to be a rather "serious" hobby, a bit like body modification or tattoos. But who are we to judge? Matt likes cultivating succulent plants and Chris is into eating sushi in lush forests. So biohacking and self-experimentation for longevity? Well, at least it's an ethos.

Also featuring: an introduction that covers Irish history, the most humble guru in the gurusphere, and our very own theory of guru cringeosity!

Links
Journal Club with Dr. Peter Attia | Metformin for Longevity & The Power of Belief Effects
The Most Arrogant Thing Bret Weinstein Has Ever Said? Bad Stats Thread
Keys et al. (2022) Reassessing the evidence of a survival advantage in Type 2 diabetes treated with metformin compared with controls without diabetes: a retrospective cohort study. https://dom-pubs.onlinelibrary.wiley.com/doi/10.1111/dom.12354?utm_source=youtube&utm_medium=social&utm_campaign=huberman_lab
Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer and we try to understand what they're talking about.
I'm Professor Matt Browne.
Chris Kavanagh is my co-host.
Welcome, Chris.
Now we start the podcast.
I forget how my spiel ends.
I've noticed that.
Yeah, cognitive decline.
It's horrifying to witness in person.
I've done it too many times.
That's all right.
That should have made it into reflexive memory by now.
You should be, you know, just able to read it off.
You would think so, wouldn't you?
Yeah.
You got too comfortable.
That's your problem.
You're too relaxed now.
That's true.
Before there was a frisson, a tension.
Now I'm like, ah, whatever.
I know.
I can do it with my eyes closed.
Yeah, yeah.
So, yeah, I'll tell people why we are here.
But first, Matt, I have a little surprise clip to play for you.
It's from someone we barely mentioned, so you'll probably even forget who he is when you hear it.
But let's see if you can work out who it is.
And what the issue is with this clip.
I'll keep it very vague for you, so just... here you go.
All right, a test.
But there's a piece of the story that haunts me, and it will probably sound arrogant for me to describe it, but I'm going to do it anyway because it's later than we think.
You'll remember that in The Hitchhiker's Guide, The Earth is actually a sophisticated computer designed to discover the meaning of life, the universe, and everything.
After the initial computer that was designed to do this after, I think, thousands of years, spit out an insane answer, which was 42, which didn't mean anything, and then when asked to explain it, it said, well, what was the question?
The Earth was the 2.0 version of the investigator for what is the meaning of life, the universe, and everything.
And as you will recall, the computer actually produced its result.
The Earth did figure out what the meaning of life, the universe, and everything was in the form of a young woman on whom it dawned.
And she picked up the phone to convey the answer.
At the exact moment that the Vogons destroyed the place.
So anyway, the point is the Earth spits out the answer, but we still don't know what it is because lo and behold, it was destroyed at exactly the moment that the answer was ready to be delivered.
I feel a little bit like I might be the young woman in that story because I believe that actually evolutionarily, I do know what the next thing is supposed to be.
And it isn't that complex.
it's...
And we describe it, Heather and I, in our book, in the last chapter.
It's called The Fourth Frontier.
There we go.
Did you recognize that figure?
Bret Weinstein, get your dirty hands off that book.
Hitchhiker's Guide to the Galaxy is not your plaything.
You cannot use it.
You're banned.
It's too good for you.
The least of his crimes, though, there's a couple of bits where his description goes slightly awry from the plot.
I think he got the broad contours right.
Yeah, he broadly got it right.
I didn't like how he said that the computer, after delivering the answer of 42, said, what was the question?
As if it had forgotten it or something.
That's not right.
Yeah, it was designed to provide the answer.
That was the...
And they gave an answer, but then the issue is, well, how do you interpret just 42, right?
Yeah, so there's a little dimension on our Gurometer.
And, well, which one is it?
I was going to say Cassandra, but it kind of is.
Yeah, it's basically he has the secret, the riddle that unlocks the future.
He knows it, just like...
Yeah, so the important thing from Bret's perspective is she is the person who provided the most important information in the universe,
right?
And that she...
Tragically, she is ignored or faces calamity because of what the Vogons are doing and that kind of thing.
So Bret is saying he feels a bit like the most insightful person.
In the universe.
Like he says, it's going to sound a bit arrogant.
It's not a bit arrogant.
That's like, that has just smashed the wheel of the narcissism measure on the Gurometer.
It spun around so many times it came off the hinge.
Yeah, I mean, Bret and Heather both really do believe that their insights into evolutionary biology mean that they...
He can understand everything, everything that happens in the world, whether it's COVID or Ukraine or Israel, you name it.
He obviously famously explained the German invasion of Russia during World War II in terms of evolutionary biology.
So, yeah, it really is quite amazing.
No normal evolutionary biologist would think that, that you could, you know, riddle out.
Conundrums in modern history and politics using evolutionary biology.
So it's galaxy brainness as well, I guess, isn't it?
Yeah, yeah.
So he gets some things wrong about the plot.
That's a sin.
And perhaps more significantly, he draws a parallel between himself and the most insightful person in the entire universe.
Yeah.
I think one of the reasons Douglas Adams chose a young lady to be almost a Christ-like figure, like he explicitly compares her to Jesus Christ, right?
He basically figured it all out, but Jesus Christ had to get nailed to a cross, unfortunately.
But she'd figured it out.
No one has to get nailed to a cross, yada, yada, yada.
But I think he chose a young woman because rather than, say, a middle-aged man and an academic...
An evolutionary biologist.
Yeah.
Because the sort of joke in Hitchhiker's Guide is that, like, she really had figured it out.
Like, she wasn't deluding herself.
She wasn't just full of hot air.
But yeah, that doesn't apply to Bret.
No, no.
So, yeah, I thought this was up your alley because, you know, the theme music for the show has a little hitchhiker's trying to use your name on Twitter.
We're obviously fond of it.
So I thought this would get you in the fields.
And again, this is a clip from Bad Stats.
Worst crossover ever.
Chris, worst crossover ever.
Hitchhiker's Guide to the Galaxy exists in a part of my mind that is unsullied by the tawdry goings-on of the gurus.
So thank you, Bret, for that.
Yeah, yeah.
What depths will he plumb next?
Who can tell?
It's always a surprise.
Well, I'm really loving succulents at the moment, so if any of the gurus start talking about succulents, I'll be very upset.
Oh!
This is a good time to...
Unveil our revolutionary theory about the secular gurus, some of them at least, and why they might be particularly appealing, in a sense, to you and me.
Why can we withstand them?
I think these are slightly different because in some respect, Matt and I are slightly different in our characters.
You may not have noticed, but I probably have a greater tolerance and appetite for consuming people spouting nonsense or that kind of thing.
Like, I can tolerate that more than most people, including Matt.
And it's helped by the fact that I can listen to things at, like, two and a half times speed.
That makes a difference.
So, but you have been listening to Garth Marenghi's, or watching Garth Marenghi's Darkplace, and consuming a kind of British dark comedy, cringe comedy: Alan Partridge, The Office, Garth Marenghi's Darkplace.
This kind of...
Yeah, imagine if Alan Partridge was Stephen King.
That's Garth Marenghi.
And he's also written a book called TerrorTome.
Best listened to as an audiobook.
Highly recommended.
Just like in the TV series, it commits every sin.
Of terrible, terrible horror fiction, genre fiction.
But the fun thing about these characters, and I think the reason why you and I both like them, is that the humour is all character-based.
Alan Partridge, Garth Marenghi, the idea is that they are this arrogant...
You know, what's the word?
What are our gurus?
What do we call them?
They're always narcissists.
That's right.
Again, narcissists, like totally lacking in self-awareness.
And that's where a lot of the humor comes from.
And they're kind of pompous, middle-aged men who believe that their insights are being ignored, typically.
Not all of them, but this is a common theme in that kind of character study.
And they're like tragic comic figures because...
They regard themselves with such self-importance and gravitas, but you as the viewer can see that it's a very insecure and needy personality.
And yeah, I think there are clear parallels there in some of the people we cover.
Yeah, and I think the other thing I like about that humour is that I think all of us can...
I think that the best character comedy is when you can detect a little bit of that in yourself, you know?
Like a lot of British humour is like that, right?
It's that cringe kind of thing.
Like The Office, for instance, you know, the character in The Office.
Yeah.
We sort of wince and we hate the idea that others might see us like the audience sees that person.
Yeah.
Peep Show is another one which captures this quite...
Well, you know, gives the internal monologue of the characters.
And yeah, that's one where I can see various parallels.
Yes.
Too many parallels with my life, Chris.
So good, so good.
The Gurus are basically all of that comedy, but none of the appealing aspects.
Like, they have no redeeming features.
We actually noted that the fictional versions that are endurable to consume...
If they did what the actual secular gurus did, it would be too broad.
It would seem like too on the nose, right?
Well, also, I think the thing that sets Alan Partridge apart from characters like that is that it's not superficial.
Two-dimensional.
There are layers to Alan Partridge's character.
It isn't simply that he's politically incorrect or something.
It's more interesting than that.
He understands about political correctness and he wants to be politically correct and he sometimes succeeds and sometimes...
Fails because he doesn't quite get it, but he's not a total caricature.
You know, there are layers to him and some of them are kind of appealing layers and many of them are totally cringeworthy and awful.
And yeah, it's a great mix.
So I'd call...
Characters like that, deep.
Garth Marenghi, that character is a bit more broad.
It's a bit more straightforward.
Like, Garth Marenghi does compare himself to Jesus Christ, like Bret Weinstein just sort of did there, really.
And that's funny, too.
Yeah, yeah.
He's like an arrogant horror writer, making, like, 80s slasher and pulp horror but regarding it as a high art form, discussing, you know, the plants turning into carnivores as a metaphor for a capitalist society and all this.
There was a great quote from him, which was, I know writers who use subtext and they're all cowards.
Yeah, so it's just to say that those characters often have redeeming features that the actual gurus lack.
Maybe they're perfectly nice in their personal lives or whatever, but while in fiction you can set up the narrative so that ultimately the character gains some kind of self-awareness, that doesn't actually tend to happen in reality.
Our gurus tend to become worse and become openly anti-vaccine or hard-right apologists and stuff.
So yeah, reality is worse.
I think Gad Saad is probably the one that approaches fictional characters more than any other, simply because he is so obviously...
Banging it on.
Sometimes transparently and sometimes apparently in earnest.
But it is still obviously an act.
A shtick.
A shtick, yeah.
Except he's not funny.
Yeah, that's the killer.
That's the killer.
And he's not a fictional character, unfortunately.
Oh yeah, Matt.
One more thing.
Just one.
One more.
One more thing I need to mention.
I know that you like history podcasts and you like to hear factoids and repeat them.
So I'll just mention that I was listening to The Rest Is History, and I went back into their back catalogue and was listening to episodes about the Easter Rising, you know, the Irish independence movement.
Very interesting about the Hollywood...
Dublin — Dublinwood? — story, you know, the fairytale version of it and the actual complexities of all the groups involved and their levels of support and how the events went, and, you know, how they are immortalized now versus how they were perceived at the time.
So it was very interesting.
And the takeaway was history is always more complicated.
Always more complicated than a Hollywood movie.
Who would have thought?
I remember the most astonishing factoid I think I learned from that was that, like, an English warship shelled Dublin.
Oh, yeah, yeah, yeah.
I remember when I first heard about that and I didn't actually know much about all that stuff.
I was like, what?
Like to my mind, it was like an English warship is shelling Newcastle or something.
Like it just didn't seem.
It's also an interesting case on its own, because it sort of utterly failed.
It wasn't a very well-conducted uprising.
And it was actually cancelled by half of the participants.
The people that were supporting it, because the whole shipment of arms from the Kaiser was intercepted.
And then they weren't going to go ahead with it.
But they went ahead with it anyway.
And it kind of went to shambles.
But the thing is, Matt, as the historian pointed out when describing that: even though it went to shambles, the narrative is that they were executed and that is what completely changed everything.
But you don't just get that from a dozen executions or so.
There has to be latent sympathy, with that as a catalyst.
The narrative is a bit too simple in either way.
If you want to say they didn't have any popular support or they had complete support.
It's all mixed.
Ireland wouldn't have been able to have a mass independence movement.
If that sentiment wasn't at least fermenting in the general populace.
So anyway, interesting thing.
You know, we learned about history, and there was an Irish historian in there — some representation for my sovereign Southern Irish people.
So, there we go.
Do the Southern Irish think of you guys as properly Irish, or are you kind of a little bit alone?
Oh, you're opening a can of worms there.
That's a long-standing dispute.
It depends on the Southern Irish, I guess, that you ask.
There are degrees of parochialism, I assume.
Yeah.
Really, do you ask a bastard or a reasonable person?
No, I will say that the general dynamic is Northern Irish Catholics might have a little bit of a chip on their shoulder about being authentically Irish, and the Southern Irish might have a tendency to dismiss the people in the North as not properly Irish,
and this creates friction with the Northern Irish saying, We are up here defending the unified Ireland.
That's right.
You were left in the lurch.
And you leave us.
You agree with the oppressors.
Yeah, that's the dynamic that tends to go down.
I'm imagining an Australia-New Zealander type relationship.
That makes sense.
They got a chip on their shoulder too.
Yeah, I can tell you a funny story about that.
But maybe I'll do it off air.
It involves young Chris up in an Irish town and too much alcohol.
Now, set aside that whole kerfuffle and let's turn to the topic for today.
Now, it's fair to say that you have been brought to this topic, the subject for this podcast this week, at my urging.
You were a little bit unenthusiastic to cover this topic, it can be said.
But I think...
You will change your tune.
As you realize, through the decoding, all of the things that you noticed, which you didn't realize, were so insightful.
And I'll tease it out to you.
It's fine.
It's fine.
I'm accustomed to this now.
I know he wears the pants in this podcast.
You're like, don't worry about it.
It'll be great.
It'll be great.
You'll see.
You'll see.
I'm like, okay.
You'll be a star, man.
You'll be a star.
You'll be a star.
Yeah, and well, I will say that the reason that you're not enthusiastic, I think, is partly because, on the one hand, you watch things at one-times speed.
So when we cover something which is like two hours long, it is two hours for you.
It ultimately adds up the same for me because I have to watch it multiple times to remind myself by the time you get around to watching it.
And then I have to clip it at times.
So anyway, who suffers more?
It's unclear.
It's unclear.
But it is Journal Club.
With Dr. Peter Attia and Andrew Huberman, metformin for longevity and the power of belief effects.
And it's from a month ago.
So it's an episode that was out on Huberman's feed and Attia's feed, a kind of crossover.
Yeah, so they're doing like what we do on Decoding Academia, talking about some journal articles.
And now we're going to be decoding their decoding of the academic articles.
I mean, it sounds riveting already.
I could tell people.
Well, look, Matt, listen up.
I hear what you're saying.
But one thing is that, like, we've covered Huberman, but we only covered like a 15-minute segment thing about, you know, grounding.
This is longer form Huberman.
And it's kind of an episode where he's presenting what...
I think you could say it's like his better aspects, looking at studies, digging into research and talking about the ways that you can critically examine papers.
So if you thought that us covering him when he's discussing grounding was a bit unfair, this would seem to be a better piece of content, potentially, right?
Sure, yeah.
Okay.
And Peter Attia, we haven't covered him.
Who's he, Matt?
You don't know anything about him, do you?
He is a medical doctor, a Johns Hopkins-trained physician, but also more commonly known, I think, for his optimizer health diet recommendations.
He's in that whole framework and space, the same place Huberman is, the same place a lot of the kind of...
Health optimizer types are.
And we haven't done that much on them, except for Lex and Huberman, really.
Is there anyone else that covers the optimizer kind of area?
No one's...
Chris Williamson, kind of?
Yeah, he does.
But he was only incidentally covered through his proximity to Gad Saad.
Right.
Well, it's true I didn't know anything about...
But I knew that he was in that health optimizer thing just by the size of his pectoral muscles.
They both have very impressive pectoral muscles.
This is right.
They are both optimized individuals and look very well.
They do.
They practice what they preach.
And as we'll see, they genuinely do do that.
So there's that, Matt.
And my last piece of justification for making you cover this is, as you say, this is a purported journal club.
We do something very similar on the Decoding Academia episodes, which are on the Patreon.
And as academics, we have sat in on many, many journal clubs where people are discussing.
So this is a very familiar area for us.
So I think in that respect, it's a bit like when we covered Eric and Brett talking about his desk rejections and stuff.
This is very normal for us, but it's not that normal for non-academics.
So we can approach it from the point of view of people who are not unfamiliar with this kind of format.
So, see?
Sure.
Okay.
Makes sense.
I'm on board.
I'm on board.
So the way that this is organized is basically they've both brought a paper with them.
They've read it in advance and they're going to discuss it, right?
And I'll let them introduce the framing a little bit.
So by the end of today's episode, you will not only have learned about two novel sets of findings, one in the realm of longevity as it relates to metformin, and another in the realm of neurobiology and placebos or placebo effects, but you will also learn how a journal club is conducted.
I think you'll see in observing how we parse these papers and discuss them, even arguing about them at times, that what scientists and clinicians do is they take a look at the existing peer-reviewed research and they look at that peer-reviewed research with a fresh eye
asking, does this paper really show what it claims to show or not?
Yeah, that's the pitch.
Yeah, okay.
Got any issues?
Well, it's broadly fine, but yeah, I think where you and I maybe diverge from these guys, especially Huberman, is that I don't think it's good advice for people to be reading the primary scientific literature.
Because you're an academic snob.
You can read it, but you shouldn't be going, okay, now I'm going to change my diet and my supplement regime based on this paper that I just read.
I got that.
Yeah, that bit about, you know, you might have some things that you can apply to your actual life.
I wouldn't be so keen to recommend that, especially, you know, if you're dealing with just like one or two papers.
I think the part that you could possibly give them credit for is that they talk about these kind of stuff a lot.
So maybe they're assuming that their audience is also, you know, consuming other sorts of stuff and has their regimes already.
They're probably geared towards an optimizer audience.
I still don't think you should do it, but I'm just saying it's not the same necessarily as telling a normie audience that you're going to get health advice you can follow.
That takes some of the edge off it, I think.
Still, on principle, the kinds of optimizers that are, yeah, changing their regime based on the latest papers that have just come out — they're sort of making a mistake to begin with anyway.
But that's just my opinion.
That's my opinion.
It's a different world.
Just an opinion.
That's all right.
Everyone has them.
So the two papers.
A little bit more.
This is still the kind of framing, but a bit about Journal Club.
So here you go.
You know, people probably ask you all the time, because I know I get asked all the time, hey, what are the do's and don'ts of interpreting, you know, scientific papers?
Is it enough to just read the abstract?
And, you know, usually the answer is, well, no.
But the how-to is tougher.
And I think the two papers we've chosen today...
Illustrate two opposite ends of the spectrum.
You know, you're going to obviously talk about something that we're going to probably get into the technical nature of the assays, the limitations, etc.
And the paper ultimately I've chosen to present, although I apologize, I'm surprising you with this up until a few minutes ago, is actually a very straightforward, simple epidemiologic paper that I think has important significance.
I had originally gone down the rabbit hole on a much more nuanced paper about ATP binding cassettes in cholesterol absorption.
But ultimately, I thought this one might be more interesting to a broader audience.
So the point there, you know, it is kind of framing it in the same way that we do, Matt.
But, you know, it's useful to think about the way that you might assess the quality of studies, look critically at things, and that's what he's discussing that they're going to do.
I don't have much issue with the framing about encouraging people to critically...
Evaluate things, although I grant that you should be aware of your limitations when doing that, but, you know, as saying this is what academics do, that's true, isn't it?
Yeah, that's basically fine.
I'm not super interested myself in glucose binding and ATP inhibitors and so on, but, you know, each to their own.
Each to their own.
You've got your history podcasts and your...
Succulents.
Succulents, yeah.
Neither of which are your academic speciality.
And they've got that.
So, yes, I admire.
Admire might be too strong, but I'm kind of glad that he chose a less sexy paper than whatever he was previously considering.
So, you know, he says it's a bit boring, but...
That's good.
Boring is sometimes okay.
Now, before they get into the papers, Matt, there's just like, you know, there's some other things.
And this is not poisoning the well.
I just want to point out a little bit of the dynamic because it will come up in various clips.
So you need this context.
Attia describes a dream that he had about Huberman.
Do you remember this?
Yeah, vaguely.
Yeah.
So, yeah, you need this because it will come up in various clips.
So here's his dream about Huberman.
So I had a dream last night about you.
And in this dream, you were obsessed with making this certain drink that was like your elixir.
And it had all of these crazy ingredients in it.
Supplements.
Tons of supplements in it.
But the one thing I remembered when I woke up, because I forgot most of them.
I was really trying so hard to remember them.
One thing that you had in it was dew.
You had to collect a certain amount of dew off the leaves every morning to put into this drink.
Sounds like something that I would do.
But here's the best part.
You had a thermos of this stuff that had to be with you everywhere.
And all of your clothing had to be tailored with a special pocket that you could put the thermos into so that you were never without the special Andrew drink.
And again, you know how dreams, when you're having them, seem so logical and real?
And then you wake up and you're like, that doesn't even make sense.
Like, why would he want the thermos in his shirt like that?
Yeah, so I actually think that's a fairly...
It's a straightforward dream to interpret because he is someone that is obsessed with supplementation and various things.
And natural supplementation in particular, so that explains the dew.
Yeah, no, that's a very plausible dream to have about Huberman.
I've had dreams about you, Chris.
Am I doing that?
Am I dancing through the forest?
I just want to emphasize that there's nothing sexual.
At all.
No, it was that kind of, yeah.
You were just being you.
Podcast things.
Probably trying to convince me to listen to something I didn't want to.
Another Huberman episode.
Seven hours?
No.
Yeah.
So, I appreciated that Huberman's reaction is good-natured.
Like, I know this doesn't bear mentioning.
It's just people, you know, having a joke about a dream that they have.
But it is worth mentioning in the gurusphere because they take themselves so seriously.
To promote these creditors, you know, that it'll sound like something that we're doing.
Yeah, so it's just an amusing little dream.
But, Matt, dreams can be more to them.
Oh my.
Some other time we can talk about dreams.
Recently, I've been doing some dream exploration.
I've had some absolutely transformative dreams for the first time in my life.
One dream in particular that allowed me to feel something I've never felt before and has catalyzed.
A large number of important decisions in a way that no other experience, waking or sleep, has ever impacted me.
And this was drug-free, etc.
And do you think you could have had that dream?
We don't have to get into it if you don't want to talk about it now.
But was there a lot of work you had to do to prepare for that dream to have taken place?
Oh, yes.
Yeah.
At least 18 months of intensive analysis-type work with a very skilled psychiatrist.
But I wasn't trying to seed the dream.
It was just I was at a sticking point with a certain process in my life.
And then I was taking a walk while waking and realized that my brain, my subconscious, was going to keep working on this.
I just decided it's going to keep working on it.
And then two nights later, I traveled to a meeting in Aspen, and I had the most profound dream ever where I was able to sense something and feel something I've always wanted to feel.
So real within the dream, woke up, knew it was a dream, and realized this is what people close to me that I respect have been talking about, but I was able to feel it, and therefore, I can actually access this in my waking life.
It was absolutely transformative for me.
Anyway, sometime I can share more details with you or the audience, but for now, we should talk about these papers.
Very well.
Who should go first?
Yeah, that's wild.
Not to be...
I mean, that's fine.
I mean, but...
Chris, are all Americans like this?
Like, Americans talk like this.
Some of them do, anyway.
They're not.
They're not, Matt.
They're not all...
To be fair, an American once did ask me various questions about dreams, like it was important.
Like, what colors I dreamed in and stuff.
I don't know.
General, normal colors.
But in any case, this is not to cast aspersions on all Americans, because I think here, I just want to highlight, there's a little bit of mysticism to Huberman, right?
That's talking about 18 months of presumably some kind of psychoanalysis therapy that allowed him to have a breakthrough in a dream setting that is going to transform his life, and he's making all these important decisions on the basis of it.
You know, that's something of a kind of Jordan Peterson-esque approach to things.
I know, I know.
But it's also a little bit American.
Like, there is a culture in America of seeing a psychiatrist regularly, even when you are really perfectly psychologically well.
That's not a cultural thing in Australia.
People don't see psychiatrists.
I know.
We're all repressed in the non-American anglosphere.
That's right.
We just push those feelings down and drink more.
That's how we handle it.
It works fine.
But Americans are built differently and they're a lot more prone to be talking about an issue that they've been mulling over with their psychiatrists and then having revelations and so on.
I don't know.
Australians don't get revelations like that, but it's fine.
It's a different culture is what I'm saying, Chris.
If I were to interpret that, Matt, I might think it relates to his disclosure on the Lex Fridman podcast about having a religious experience and becoming spiritually inclined.
So he referenced people he respects.
Talking about experiences that he couldn't properly comprehend.
So perhaps an encounter with the divine or, you know, I don't know, could be the cosmic dwarves that control the universe that you usually can only contact through DMT.
Could be them.
Who knows?
But in any case, I'm just pointing out, there's that vibe as well, because now we're going to get into the hard-nosed science, right?
But it's the...
Yeah, there's a crunchy vibe there.
Yeah, I know.
It's at the intersection between sort of psychiatrist culture, therapy culture, and yeah, spiritual type culture.
I don't know.
Where's he from?
Is he from California or somewhere else?
Who knows?
He's at Stanford, so presumably somewhere around there.
I don't know.
Do Americans ever travel to places that they weren't born in?
I don't know.
I don't know what they do.
So, yes, the first paper, Matt, this is how he goes first, okay?
And here's a little bit about the framing of this paper.
This is a pretty straightforward paper.
So, we're going to talk about a paper titled Reassessing the Evidence of a Survival Advantage in Type 2 Diabetics Treated with Metformin Compared with Controls Without Diabetes, a Retrospective Cohort Study.
This is by Matthew Thomas Keys and colleagues.
This was published last fall.
Why is this paper important?
So, this paper is important because in 2014, Bannister published a paper that I think in many ways kind of got the world very excited about metformin.
So, this was almost 10 years ago.
And I'm sure many people have heard about this paper, even if they're not familiar with it, but they've heard the concept of the paper.
And in many ways, it's the paper that has led to the excitement around the potential for geroprotection with metformin.
So, this paper, part of the reason I kind of like this, Matt, is that it's a, as he describes, a paper looking at a previous finding and reassessing it.
There is a drug usually used to treat diabetes that seems to have a potential life-extending property for non-diabetic people.
And you can see why the optimizers and anti-aging community would be interested.
Longativity, I think that's what they're...
I can't remember how they describe themselves, but that whole...
Anti-senescence.
Yeah, life extension.
Life extension, yeah, those guys.
There are a lot of rationalists hovering around the meetings in the corner with Yudkowsky's fedora floating around.
So, yes, you know the type.
But I guess it is a relatively common thing, isn't it?
Often they find that a drug that's been used for something turns out to have these side effects, which are good side effects.
Yeah, yeah.
This is not unusual, not unusual.
And next clip is about describing the potential mechanism of metformin.
And I wanted to include this, Matt, to just point out that this does get pretty technical at various points.
And I get the impression from the way that Attia speaks, and Huberman as well, that they know what they're talking about.
They're very familiar with the terms.
But it's not the kind of thing that you'd absorb, I think, if you were only half listening to it.
Maybe I'm being a little bit unfair, but just listen to this description.
The mechanism by which Metformin works is debated hotly.
But what I think is not debated is the...
Immediate thing that metformin does, which is it inhibits complex one of the mitochondria.
So again, maybe just taking a step back.
So the mitochondria, as everybody thinks of those, is the cellular engine for making ATP.
So the most efficient way that we make ATP is through oxidative phosphorylation, where we take either fatty acid.
Or a breakdown product of glucose once it's partially metabolized to pyruvate.
We put that into an electron transport chain.
And we basically trade chemical energy for electrons that can then be used to put phosphates onto ADP.
So you think of everything you do.
Eating is taking the chemical energy in food.
Taking the energy that's in those bonds, making electrical energy in the mitochondria, those electrons pump a gradient that allow you to make ATP.
Got it?
Yeah, I thought that was pretty good.
No, it was.
I will say this.
I think if you're immersed in this world, that's a little bit second nature.
But also, to Attia and Huberman's credit, they're very good at communicating complex ideas and kind of doing it in a technical way and then they stop.
And break it down.
And one way they do this is one asks a question of the other one to, you know, okay, so can you clarify what you said there?
And they're really good at it.
I think this is part of the reason that people like their content, right?
Because you get both the apparent technical depth and you get the layman descriptions, right?
So you can, I don't know, feel a little bit like you're listening in on the technical discussion, but you're still able to follow.
I have an example of that dynamic.
One question, is it fair to provide this overly simplified summary of the biochemistry, which is that when we eat, the food is broken down, but the breaking of bonds creates energy that then our cells can use in the form of ATP.
And the mitochondria are central to that process.
And that metformin is partially short-circuiting the energy production process.
And so...
Even though we are eating, when we have metformin in our system, presumably there is going to be less net glucose.
The bonds are going to be broken down.
We're chewing, we're digesting, but less of that has turned into blood sugar, glucose.
Well, sort of.
I mean, it's not depriving you of ultimately storing that energy.
What it's doing is changing the way the body...
Partitions fuel.
That's probably a better way to think about it to be a little bit more accurate.
That's good, right?
Isn't it?
Huberman saying, okay, so what you're saying, is this accurate?
And then giving Attia time to kind of say, well, that's not exactly right, you know, let me change it a bit.
And yeah, I thought that was well done.
They do it very well.
Like, this is quite...
Impressive in the way that, I mean, they're professionals at this for a reason.
But that's good science communication, I think.
Yeah, yeah.
You can compare it to the way Brett and Heather do science communication, where there's just layers and layers of abstraction and blather.
Whereas, you know, these guys are talking in concrete terms.
And yeah, I think they did well, at least as far as I could tell.
You know, I can't tell if they're describing ATP wrong or right.
Sounded right to me.
Yeah, and I'm going to play one more clip that I think is good because I genuinely think Attia is good at this.
So this is him talking about the process associated with diabetes.
And again, get a fair amount of medical jargon or biological jargon, but explained.
Nicely, Matt.
The kind of thing Eric never does.
So again, what's happening when you have type 2 diabetes?
The primary insult probably occurs in the muscles, and it is insulin resistance.
Everybody hears that term.
What does it mean?
Insulin is a peptide.
It binds to a receptor on a cell.
So let's just talk about it through the lens of the muscle because the muscle is responsible for most glucose disposal.
It gets glucose out of the circulation.
High glucose is toxic.
We have to put it away.
And we want to put most of it into our muscles.
That's where we store 75% to 80% of it.
When insulin binds to the insulin receptor, tyrosine kinase is triggered inside.
Ignore all that, but a chemical reaction takes place inside the cell that leads to a phosphorylation.
So ATP donates a phosphate group, and a transporter, just think of it like a little tunnel, like a little straw, goes up to the surface of the cell, and now glucose can freely flow in.
So I'm sure you've talked a lot about this with your audience.
Things that move against gradients need pumps to move them.
Things that move with gradients don't.
Glucose is moving with its gradient into the cell.
It doesn't need active transport, but it does need the transporter put there.
That requires the energy.
That's the job of insulin.
I understand insulin now.
I feel like Keanu Reeves in The Matrix.
But then again, I'm not qualified to judge.
It seemed like he was doing well.
I know.
So this, you know, we're not mindless haters.
I'm actually highlighting this to say I think this is pretty good.
I think this is why people like Attia and Huberman, because...
This is the stuff that they do, which actually, if you listen to a proper academic who isn't good at science communication, they could explain the exact same process and it would make no sense.
And they wouldn't factor in that people aren't going to follow along.
So they add in the little descriptions to kind of keep you going along.
And I think they pitch it at generally the right level, but it is clear that they're really comfortable.
Talking about this kind of stuff.
So, you know, it isn't really that difficult for them.
And Matt, you know, we were saying that maybe people can't put things into practice that they hear here or maybe they shouldn't.
I just did want to note there was a bit of bad news for me here.
Maybe some health advice.
But there are other things that can cause insulin resistance.
Sleep deprivation has a profound impact on insulin resistance.
I think we probably talked about this previously, but if you, you know, some very elegant mechanistic studies where you sleep deprive people, you know, you let them only sleep for four hours for a week, you'll reduce their glucose disposal by about half.
Wow.
Which is, I mean, that's a staggering amount.
You're basically inducing profound insulin resistance in just a week of sleep deprivation.
Fucking hell.
This is terrible news.
And you've had some blood tests done, haven't you?
You don't need to tell us.
Yeah, so I was like, oh my god.
Yeah, so see, look what they're giving me.
We cover the gurus.
They give me actionable health advice.
But yeah, anyway, I already knew you're supposed to sleep longer than I currently am.
But that's, you know.
Just, yeah.
I did enjoy hearing that.
So, thank you, Peter.
You might recall me, Chris, often recommending you just go home, go to sleep, get a good night's sleep for God's sake.
And you're like, yeah, yeah, yeah, Matt.
You think you're built out of cast iron steel, but you're human just like the rest of us.
You need to sleep.
Well, so it turns out, if I did that... just imagine the human I could be.
Imagine the blood test I could get if I slept eight hours.
But I actually related to this, Matt, this is from much later in it.
There was a discussion, it kind of comes later, but they're talking about Huberman's binging tendencies.
But anyway, Attia said something which, again, I think endorses...
Something that you've been talking about, your kind of humans are rats thesis.
So listen to this clip and we'll see if you think it endorses your worldview.
I can imagine a scenario where a person could be in a negative energy balance eating Twix bars all day and drinking, you know, big gulps.
But I also don't think that's a very sustainable thing to do because if, by definition, I'm going to put you in negative energy balance consuming that much crap.
I'm going to destroy you.
You're going to feel so miserable.
You're going to be starving.
You're not going to be satiated eating pure garbage and being in caloric deficit.
You're going to end up having to go into caloric excess.
So that's why it's an interesting thought experiment.
I don't think it's a very practical experiment.
For a person to be generally satiated and in energy balance, they're probably eating about the right stuff.
But I don't think that the specific macros matter as much as I used to think.
I'm a believer in getting most of my nutrients from unprocessed or minimally processed sources simply because it allows me to eat foods I like and more of them.
And I just love to eat.
I so physically enjoy the sensation of chewing that I'll just eat cucumber slices for fun.
I mean, that's not my only form of fun, fortunately.
So, it was sandwiched there with some things which you haven't advocated for, like consuming crap at a caloric deficit being a good thing.
But, you know, you have argued the macros are not that important.
It is calories in, out, right, that ultimately matter.
And they kind of mention that.
Huberman doesn't agree with you.
He likes, you know, organic, dew-drenched food, nibbles on organic cucumbers for pleasure.
Yeah, I'm definitely on the Attia side of this one, where I do think people are rats, and it's surprising how people get by pretty okay even if they're not super optimizing with regard to the micronutrients or whatever, and you basically eat a bunch of different...
You know, you're not eating Twix bars and Big Gulps and you don't eat too much, then you can forget about it.
You know?
You're probably fine.
Yeah.
So, I'm going to go back to the Attia paper here, and he's talking about the old paper, the one from 2014.
And some issues that might make that finding, that metformin would have general life-extending properties, more questionable.
So the way the study worked is if you were put on metformin, we're going to follow you.
If you're not on metformin, we're going to follow you.
And we're going to track the number of deaths from any cause that occurred.
This is called all-cause mortality or ACM.
And it's really the gold standard in a trial of this nature or a study of this nature or even a clinical trial.
You want to know.
How much are people dying from anything?
Because we're trying to prevent or delay death of all causes.
Informative censoring says if a person who's on metformin deviates from that inclusion criteria, we will not count them in the final assessment.
So what are the ways that can happen?
Well, one, the person can be lost to follow-up.
Two, they can just stop taking their metformin.
Three, and more commonly, they can progress to needing a more significant drug.
So all of those patients were excluded from the study.
So think about that for a moment.
This is, in my opinion, a significant limitation of this study.
Because what you're basically doing is saying, we're only going to consider the patients Who were on metformin, stayed on metformin, and never progressed through it.
And we're going to compare those to people who were not having type 2 diabetes.
So, you know, I thought that was a good thing, describing the particular, I can't remember what it's called, informative censoring or something like that, as a significant limitation, properly highlighting why that could be an issue and how it could give you a misleading perspective.
Flagging it up as a limitation.
That's good, isn't it?
It's very good.
I like these methodological details.
It's a big issue with longitudinal studies.
Your data is censored for various reasons.
Sometimes at the researcher's discretion, sometimes...
Not, like with Dropout.
So it's good that he talks about it.
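The bias Attia is describing can be sketched in a few lines. This is a toy simulation with invented numbers (the progression rate and death risks below are assumptions for illustration, not figures from the study): if the metformin patients who progress to stronger drugs, and who tend to be sicker, are dropped from the analysis, the remaining cohort looks healthier than the metformin group as a whole ever was.

```python
import random

random.seed(0)

# Toy numbers, assumptions for illustration only (not from the actual study):
N = 100_000            # metformin patients
P_PROGRESS = 0.30      # fraction who progress to a stronger drug
P_DEATH_STABLE = 0.10  # death risk if they never progress
P_DEATH_PROG = 0.25    # death risk if they do progress (sicker patients)

deaths_all = deaths_kept = kept = 0
for _ in range(N):
    progressed = random.random() < P_PROGRESS
    died = random.random() < (P_DEATH_PROG if progressed else P_DEATH_STABLE)
    deaths_all += died
    if not progressed:   # the censoring rule: drop anyone who progressed
        kept += 1
        deaths_kept += died

true_rate = deaths_all / N          # mortality in the whole metformin cohort
censored_rate = deaths_kept / kept  # mortality after informative censoring
print(f"whole cohort: {true_rate:.3f}, after censoring: {censored_rate:.3f}")
```

Comparing that censored group against non-diabetic controls then flatters metformin for reasons that have nothing to do with the drug, which is the limitation Attia flags.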
This is just riveting, Chris.
We're listening to people talking about a very technical academic article and it's fine.
It's all fine.
It's fine.
Look, it's going to get there, trust me, we're not on Huberman's paper yet, but yeah.
So, well, you're really going to like that.
So that was him describing.
The finding from that 2014 paper, here's some emphasizing in the science communication way that I've highlighted.
Type 2 diabetes on average will shorten your life by six years.
I see.
So that's the actuarial difference between having type 2 diabetes and not all comers.
But you're right.
This is not a huge difference.
It's only a difference of a little less than one year of life per thousand patient years studied.
Okay.
And by the way, Peter just pointed out my math was wrong when I said about a year and a half.
But the point here is...
You would expect the people in the metformin group to have a far worse outcome, i.e.
to have a far worse crude death rate.
And the fact that it was statistically significant in the other direction, and it turned out on what's called a Cox proportional hazard, which is where you actually model the difference in lifespan, the people who took metformin and had diabetes had a 15%, one-five, 15% relative reduction in all-cause death over 2.8 years, which was the median duration of follow-up.
Well, that seems to be the number that makes me go, wow.
Yeah.
Right?
Because...
Could you repeat those numbers again?
Yeah.
So 15% reduction in all-cause mortality over 2.8 years.
That's a big deal.
Yeah.
So there, Matt, I'm highlighting...
The science communication dynamic, right?
Of like, you know, tell me those figures again, right?
But also the reason that that initial banister at all study got attention was because of this, right?
Like they're finding 15% relative reduction in death.
That's a big reduction, right?
You can see Huberman's, like, antennas go up.
Yeah, 15% less likely to die.
Yeah, that's good.
I mean, is that a big effect, Chris?
I'm not familiar with the epidemiological effect.
No, I think, yeah, it has to all be counted into, like, it is, you know, significantly qualified by the way that they've compared the populations and all that.
So, yeah, I think...
He does highlight before, it depends what way you look at it, but this is the kind of result that had people at least a little bit exercised.
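Whether 15% reads as big does depend on the framing, and the arithmetic linking the headline relative figure to the underlying rates is simple. The numbers below are illustrative assumptions, not the study's actual event counts; they just show how a hazard ratio of 0.85 yields the quoted 15% relative reduction while the absolute difference stays modest:

```python
# Illustrative assumptions, not the Bannister study's actual event counts.
control_rate = 20.0   # deaths per 1,000 patient-years in the comparison group
hazard_ratio = 0.85   # the reported effect: a 15% relative reduction

treated_rate = control_rate * hazard_ratio        # deaths per 1,000 patient-years
absolute_reduction = control_rate - treated_rate  # absolute difference in rates
relative_reduction = absolute_reduction / control_rate

print(f"relative reduction: {relative_reduction:.0%}")
print(f"absolute reduction: {absolute_reduction:.1f} per 1,000 patient-years")
```

A 15% relative reduction and a 3-per-1,000-patient-years absolute reduction are the same result described two ways, which is exactly the ambiguity they are discussing.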
And highlighting the type of audience that they have, you know, we sometimes point out when people say, people ask me these kind of questions a lot or whatever, they do mention this.
Fast forward until a year ago, and I think most people took the Bannister study as kind of the best evidence we have for the benefits of metformin, and I'm sure you've...
Had lots of people come up to you and ask you, should I be on metformin, should I be on metformin?
I mean, I probably get asked that question almost as much as I'm asked any question outside of dew.
I mean, people definitely want to know if you should be consuming dew, but after that, it's metformin.
Fresh off the leaves.
Has to be.
While viewing morning sunlight.
Yeah, so you wouldn't have got that joke, Matt, if I didn't play the dew ones.
Yes, continue.
Good thing you played the thing about the dream at the beginning.
Yeah, to keep that in context.
But, you know, there, Matt, how often are our audience asking us for advice about metformin?
It's the friggin' question that comes up all the time on the Patreon.
So, I know this is their particular thing, but I'm just noting that obviously this was a big deal in the longevity community.
If this is your second most common question or whatever it is, and it's on the basis of this...
Primarily this banister at all study.
Like, I am thinking that this seems very flimsy evidence to make that such a big deal.
But, you know, that's what their audience are there for.
They want these hacks that are going to give them a 15% reduction in, like, some metric about mortality that, you know, might be significant for them.
So, yeah.
Yeah, that's the dream, to live till 105 with huge pectoral muscles.
It's a good dream.
Yeah, it's a good dream.
And so just a note here as well that kind of ties into the naturalism aspect as well as the optimizer bit.
So there's an aside a little bit earlier where Huberman is talking about his own kind of dalliances with these kinds of chemicals, or things which might have that effect.
Metformin tackles the problem elsewhere.
It tamps down glucose by addressing the hepatic glucose output channel.
GLP-1 agonists are another drug.
They increase insulin sensitivity, initially causing you to also make more insulin.
So that's Ozempic?
Yes.
Yeah.
And is it true that berberine is more or less the poor man's metformin?
Yeah.
Okay.
It's from a tree bark.
It just happens to have the same properties of reducing mTOR and reducing blood glucose.
Yeah.
Metformin, by the way, occurs from a lilac plant in France.
That's where it was discovered.
Metformin is also based on a substance found in nature.
You need a prescription for metformin.
You don't need a prescription for berberine.
Correct.
We can talk about berberine a little bit later.
I had a couple great experiences with berberine and a couple bad experiences.
I just noticed that, again, it comes from a tree bark.
Oh, this one comes from a lilac plant.
That's important, right?
It's not purely synthetic, because presumably that would be worse.
And that Huberman...
So this is our first mention of a substance that Huberman is taking, berberine, which he says is the poor man's version of metformin and is available without a prescription.
Was it the berberine that he had a couple of good experiences and a couple of bad experiences with?
Yeah, and he'll return to that later.
This is what we call in the...
Fuck, I forgot the word.
What do you call it when you plant something that will come back later?
It's that priming.
Goddammit, man.
Damn you.
Is it a boomerang?
It's on the tip of my tongue.
Foreshadowing.
Foreshadowing.
I had it exactly.
I got that.
No, I had it.
It's foreshadowing.
It's foreshadowing another clip that will...
This is a super mild thing.
It's not even a criticism.
But, you know, the little ways in which it does deviate from, you know, a normal academic journal club is the working in of the personal lived experiences of Huberman, in that case, where it's kind of relevant that he's been taking this and he had some good experiences and bad experiences, and that's kind of weaved into the scientific evidence.
So, you know, it's a small observation. You could just say it's keeping it personal, keeping it real, adding a bit of flavor, but I don't know, it just does demarcate a slight difference.
Well, I think the difference that you're talking about a bit is not injecting personal references, because I think that just naturally happens in this kind of long-form podcast, but it's more the self-experimentation aspect of it, right?
Because I remember I was just saying, you know, this study I read first when I was taking my cat out to the vet or whatever, you know, that's just, so what?
You're just talking about it, but it's this element of...
Their personal experience has been tied into the way that you assess this literature, right?
Yeah, that's right.
Yeah.
So, journal club, Matt: up to now, they've actually been talking about the background.
They haven't been talking about the individual paper.
This is the paper before the one that Attia is focused on.
And actually, just one last thing, to be fair to Mr. Huberman, I've mainly played Attia doing...
I think this is Huberman doing a little bit of good science communication.
I'll just play an example of it.
I'm sort of rusty on my neuroscience, but an action potential works in reverse the same way.
Like, you need the ATP gradient to restore the gradient, but once the action potential fires, it's passive outside, right?
Yeah, so what Peter's referring to is the way that neurons become electrically active is by the flow of ions.
Across the cell, from the outside of the cell to the inside of the cell, and we have both active conductances, meaning they're triggered by electrical changes in the gradients via changes in electrical potential.
And then there are passive gradients where things can just flow back and forth until there's a balance equal inside and outside the cell.
I think what's different is that there's some movement of a lot of stuff inside of neurons when neurotransmitters like dopamine binds to its receptor and then a bunch of...
You know, it's like a bucket brigade that gets kicked off internally.
But it's not often that you hear about receptors getting inserted into cells very quickly.
Normally, you have to go through a process of, you know, transcribing genes and making sure that the specific proteins are made.
And then those are long, slow things that take place over the course of many hours or days.
What you're talking about is a real on-demand insertion of a channel.
And it makes sense as to why that would be required.
But it's just, oh, so very cool.
Was that okay?
You're the neuroscience expert.
What's your grade for that?
Yeah, it's the intersection of what I know about this stuff and what he knows.
That bit sounded totally fine.
I think it is true that action potentials, once they get going, they just go.
They don't need sort of active.
I never really learned about how the neural cell actually regenerates its energy.
These guys are interested in ATP and how cells get energy to be stronger and so on, whereas psychologists don't care about that.
We just go, okay, this is what it did.
The support system to keep the actual cells alive is not something we care about.
A telling difference.
So you're learning something.
That's what you're telling me.
Giving you new information.
Well, I didn't really follow the other bits, his explanation, but it sounded good.
It just sounded like it's probably just me being stupid.
Okay.
Well, if you are an expert in this, you can let us know, did Huberman do good or was he terribly wrong?
Sounded okay to me.
Yeah, sounds fine.
I'm just giving him credit, Matt.
See, you Huberman haters on Reddit, you got Huberman all wrong.
He's fine.
That's great.
We'll see.
Let's move on.
So, oh, we were talking about self-experimentation.
We saw Huberman talking about his berberine or whatever it is.
So Attia is in on this too, Matt.
Let's just make that clear.
So, maybe taking one step back from this, in 2011...
I became very interested in Metformin personally, just reading about it, obsessing over it, and just somehow decided, like, I should be taking this.
So I actually began taking Metformin.
I still remember exactly when I started.
I started it in May of 2011, and I realized that because I was on a trip with a bunch of buddies.
We went to the Berkshire Hathaway shareholder meeting, which is, you know, the...
Buffett shareholder meeting.
And, you know, it was kind of like a fun thing to do.
And I remember being so sick the whole time because I didn't titrate up the dose of metformin.
I just went straight to two grams a day, which is kind of like the full dose.
And we went to this...
Is that characteristic of your approach to things?
Yes.
I think that's safe to say.
Next time I'll give you a thermos of this dew that I collect in the morning.
Oh, really?
So I remember being so sick that the whole time we were in Nebraska or Omaha, I guess, I couldn't.
We went to Dairy Queen because you do all the Buffett things when you're there, right?
Like, I couldn't have an ice cream at Dairy Queen.
You couldn't.
I mean, I couldn't.
I'm so nauseous.
Again, the dew came in.
Just pointing out again, it was worth investing in that dew clip.
Otherwise, everyone would be saying, what's all this dew about?
So Attia was interested in metformin before the Bannister et al. paper.
Self-experimenting, apparently with a high dose immediately, which caused bad reactions.
But it's actually, it's that thing that he's dosing himself on this, you know, purely experimental treatment, right?
But enough that he doesn't mind being extremely nauseous at a conference.
So, yeah, this is the first sign about...
The self-experimentation kick that they're both on.
Remind me, why is he taking metformin?
Why would he have been taking metformin?
To live longer?
To reduce his chance of death by 15%?
Geroprotection or whatever they call that thing.
Yeah, life extension.
So presumably there were some indicators that it might be useful outside of diabetics before the Bannister study.
It's just odd.
I just don't get it.
Like, I rate my chance of death in the next five years as being extremely low.
So, I'm not going to make myself nauseous to reduce that by 15%.
Like, what's the thinking?
I don't get it.
It's a different...
Well, Mark, listen, listen.
Here you go.
I can let him answer for you.
But regardless.
That's the story on metformin.
There were a lot of reasons I was interested in it.
I wasn't thinking true gyro protection.
That term wasn't in my vernacular at the time.
But what I was thinking is, hey, this is going to help you buffer glucose better.
It's got to be better.
And this was sort of my first foray into, you know, self-experimentation.
Buffering glucose better.
That's just not something that's on my...
List of priorities.
Yeah, and geroprotection, right?
That's what he said.
It wasn't in his vocabulary.
It's not in my vocabulary.
It is now.
Yeah, so Gero as in gerontology, geriatric protection against being old.
I see.
Yeah, like gerontocracy or whatever it is.
So we've got Huberman on his poor man's metformin.
Attia is ahead of the curve, dosing himself to the point of nausea on metformin, even before the Bannister et al. study.
And I may as well follow this line a little bit.
There's more clips that go down.
So here's the next stage when they're looking back at this period.
So metformin has failed in the ITP.
So you no longer take metformin?
I stopped five years ago.
I mean, you're not a diabetic, so presumably you were taking it to buffer blood glucose in order to potentially live longer.
Yes, exactly.
And the reason I stopped, and this will be the last thing before we move on.
Well, because you couldn't go to the Dairy Queen at the Buffett event.
No, finally the nausea went away after a few weeks or a month maybe.
Once I got really into lactate testing, I noticed how high my lactate was at rest.
So resting fasted lactate in a healthy person should be below 1, like somewhere between 0.3, 0.6 millimole.
And only when you start to exercise should lactate go up.
In 2018 was when I started blood testing for my Zone 2. So previously, when I was doing Zone 2 testing, I was just going off my power meter and heart rate.
But this is after I met Iñigo San Millán, and I started wanting to use the lactate threshold of 2 millimole as my determinant of where to put my wattage on the bike.
And I'm like...
Doing finger pricks before I start, and I'm like 1.6 millimole.
And I'm like, what the hell is going on?
I can't be 1.6.
This is if you ran a flight of stairs up the back of the Empire State Building.
Yeah, well, this is the most relatable thing.
I'm just constantly pinpricking myself throughout the day to check my millimoles of lactate and just fretting constantly.
So this is the optimizer world, but you were shaking your head in disbelief.
But yeah, it is really micromanaging your whole internal system, right?
Yeah, it's fascinating to me.
I mean, no judgment.
You know, it's fine.
Is there no judgment?
Is there really no judgment there?
I can't judge them.
My pectoral muscles are nowhere near as big.
I'm sure they're much healthier than me.
It's like the thing which is doing that, though.
I don't know.
Because this is a good example.
So he mentioned he's off the Metformin now, and we'll see why, because we talk about this paper, but he stopped five years ago.
But that means he was on it for a number of years.
If I was taking something that made me nauseous, I would stop immediately.
The nausea only lasted for one month, Matt.
I would last about 15 minutes before I'm like, screw this.
That's why you don't have the pectoral muscles.
This is the optimizer mindset.
I get it.
Matt likes his succulents.
I like whatever the fuck I like.
And these guys like measuring the millimoles before they go for a run, right?
And the discovery, though, is that the thing he was taking to extend his life was seemingly doing something, like, bad.
So this is the hazards of, like, self-experimentation, presumably, because you're never quite sure about the exact effects.
And maybe it is...
Undoubtedly that this is not the only thing that they are taking, right?
So isolating out the causal factor is not going to be a thing.
But yeah, so it's an interesting mindset.
It is very different, I think, from the ordinary worker Joe.
They are micromanaging their blood levels to a degree heretofore unheard of in mortal man.
Yeah, I mean, there's a lot involved in being...
A health and wellness optimizer, right?
You've got some little insight into the amount of testing, the amount of calculation, the amount of supplements, but just thinking and how it must affect all aspects of your day.
I'm just amazed that people take all of that on.
I'm all for hobbies and obsessive type interests, but it's a big commitment.
Let's stick on this a little bit.
Huberman's now going to talk about why he was taking the berberine, so what he was up to.
And basically, he's talking about following a fasted eating schedule, like a caloric restriction and various things, but having a cheat day where you go mental and binge all the cakes and stuff that you want.
But then the next day, introducing a complete fast.
So anyway, let's let him explain that.
So the last thing you want to do is eat any food.
I would just hydrate and oftentimes try and get some exercise.
And what I read was that berberine, poor man's metformin, could buffer blood glucose and in some ways make me feel less sick when ingesting all these calories.
In many cases, spiking my blood sugar and insulin because you're having ice cream and, you know, et cetera.
And indeed, it worked.
So if I took berberine, and I don't recall the milligram count, and then I ate, you know, 12 donuts, I felt fine.
It was as if I had eaten one donut.
Wow.
I felt sort of okay in my body, and I felt much, much better.
Now, presumably because it's buffering the spikes in blood sugar, I wasn't crashing in the afternoon nap and that whole thing.
And do you remember how much you were taking?
I think it was a couple hundred milligrams.
Does that sound about right?
It was a bright yellow capsule.
I forget the source.
But in any case, one thing I noticed was that if I took berberine and I did not ingest a profound number of carbohydrates very soon afterwards, I got brutal headaches.
I think I was hypoglycemic.
I didn't measure it, but I just felt I had headaches.
I didn't feel good.
And then I would eat a pizza or two and feel fine.
And so I realized that berberine was putting me on this lower blood sugar state.
That was the logic anyway.
And it allowed me to eat these cheat foods.
But when I cycled off of the four-hour, because I don't follow the slow-carb diet anymore, although I might again at some point.
When I stopped doing those cheat days, I didn't have any reason to take the berberine.
And I figured that I wasn't ingesting enough carbohydrates in order to really justify trying to buffer my blood glucose.
It's such a complicated relationship with food, isn't it?
It's, yeah.
I mean, I know he's being tongue-in-cheek probably, but if you're eating 12 donuts at once or eating two pizzas at once and you're buffering with berberine or whatever it is, and then if you do take it, it does this, and if you don't, I mean, you're really screwing with things.
And I don't know.
I guess I've got a kind of a... is it committing the naturalistic fallacy to just say: maybe just don't screw with it?
Just let nature take its course and let your body do its thing and don't be oscillating between weird diets and calorie restriction and cheat days where you just stuff two pizzas and 12 donuts into your mouth.
And you need to take a chemical so you're not feeling nauseous.
If you don't eat enough calories, you're getting these brutal headaches.
Yeah, it's pretty extreme.
To me, Matt, the contrast here is like, Huberman and Rogan and all these people, they're always talking about...
Nature, being out in the forest, sunbathing, you know, how great this is.
And, you know, you want to get tree bark supplements.
What they're talking about now is not natural.
And then they're doing, you know, the cold plunges and the hormone replacement therapy and injecting, you know, taking ivermectin for prophylactic purposes or all those things.
So it's like this weird mix of, like...
Yeah, I guess this is what optimizers do, though.
But, you know, nature is good.
You need to be out in the wilderness, you know, getting the sun's rays to power you up.
Plus, you should be micromanaging your fucking blood glucose level and the lactate amounts to the micromillimeter.
Yeah, there's a contradiction there, isn't there?
Like, between...
The natural thing or the paleo thing where you're trying to replicate the natural interactions of human bodies with foodstuffs and environments and make it as paleolithic and as natural as possible.
But at the same time, they are really interfering with the mechanisms to an unholy degree.
And I do actually mean it when I say kind of no judgment because I figure people could do what they want.
I know, I know.
If you want to make it your hobby to...
You know, and play with yourself.
Look, it could be risky.
I wouldn't necessarily advise it.
But, you know, that's...
It's a bit like tattooing or body modification or something like that, right?
It's like, you know, it's not for everyone, but, like, it's your body.
Do what you want with it.
And, you know, as you say, they're both, they look...
At least outwardly.
I don't know what their fucking blood levels are, but they outwardly look, you know, very healthy individuals and they are doing exercise and stuff as well.
So, you know, I agree.
I don't think this is the worst thing in the world for people to be doing by any stretch.
It might even turn out, you know, that they do hit on something that's healthy, but it is a very odd way.
And I guess I would say the way that they square the circle, reduce the dissonance, is that they are saying, All that natural stuff is, you know, evolutionarily programmed to gel well with our biology.
But now we can go a step further.
We can actually scientifically look at the aspects.
So that's why it's the negative ions that are out in nature which are important, right?
And this was part of my critique about, you know, the scientism kind of creeping into Huberman's recommendations about being out in nature because it very much wants to say It is because there's a scientific underpinning to all of this.
No, you're right, actually.
And I take that back because even though there's an apparent contradiction there, it does actually make sense that they're kind of starting from a natural is good foundation, but then optimizing further based on what science and stuff can tell us.
And, you know, whether they're right or wrong, you know, it's a hobby.
You know, I'm into succulents.
It's a hobby, yeah.
They're into optimizing their bodies.
But that's not all, Matt.
It's not just...
Those drugs.
Clearly, exercise, meaning regular exercise, is the best way to keep that system in check.
But in the absence of that tool, or I would say in addition to that tool, is there any glucose disposal agent, because that's what we're talking about here, metformin, berberine, acarbose, etc., that you take on a regular basis because you have that much confidence in it?
The only one that I take is an SGLT2 inhibitor.
So this is a class of drug that is used by people with type 2 diabetes, which I don't have, but because of my faith in the mechanistic studies of this drug, coupled with its results in the ITP, coupled with the human trial results that show profound benefit in non-diabetics taking it even for heart failure,
I think there's something very special about that drug.
So metformin has been replaced with an SGLT inhibitor.
Yeah, so he's taking it based on his understanding of the scientific literature, thinks it's going to do some good even though it's not sort of medically prescribed.
Yeah, I can see why...
And he might be right.
He might be right.
Yeah, that's right.
I can see why optimizers who are wanting to be at the bleeding edge of self-improvement who would be keen to listen to this because he's a medical doctor and he's probably got all kinds of qualifications and he does know how to read literature.
He's a smart guy, I can tell.
So you're getting his kind of, he's expressing it as this is what he does personally.
He's not telling other people to do this, but definitely people who are listening to this are going, okay, that could work for me.
I might do that too.
Yeah.
Yeah.
So we'll get off this self-experimentation with one more clip, but the last one.
So Huberman, you know, he was doing his, like, fasting, 4-Hour-Body-whatever consumption protocol from Tim Ferriss.
And Attia himself has...
I think the short answer is no.
For two reasons.
One, I don't think that that duration would be sufficient if one is going to take that approach.
But two, even if you went with something longer, like what I used to do, I used to do seven days of water only per quarter, three days per month.
But I was basically always like it would be three-day fast, three-day fast, seven-day fast.
Just imagine doing that all year, rotating, rotating, running.
For many years, I did that.
Now, I certainly believed, and to this day, I would say I have no idea if that provided a benefit.
But my thesis was the downside of this is relatively circumscribed, which is profound misery for a few days.
And what I didn't appreciate at the time, which I...
Obviously now look back at it and realize it's muscle mass lost.
It's very difficult to gain back the muscle cumulatively after all of that loss.
So I just highlight that because, you know, this means that for a couple of years, he was on, like, an extreme cyclical water-only fast, right?
And describes it as inducing profound misery for a couple of days.
And now he isn't sure if that actually has any benefit.
Like that was Huberman's question was, do you recommend caloric restriction?
He was like, no, not intermittent caloric restriction.
Not really.
But he did for years.
And on the one hand, it's kind of good to see somebody moderate their position.
But it just does speak to that level of...
You know, that's a hell of a commitment to do for a significant amount of years and then now be like, well, maybe actually it was damaging my muscle mass and I'm not sure it actually produced any benefit.
Like, I have to respect the...
Commitment.
Yeah.
These guys are really into it.
And he's okay with the profound misery.
Seven days with just water.
That's hardcore.
Yeah, but it's the loss in muscle mass that's the real concern.
That's a big concern.
The pectoral muscles, it is kind of deflating.
We're making fun, but it's a hobby.
It's an interest.
They take it seriously.
So that's okay.
You know, we don't.
But look at us.
You wouldn't want to look at us naked.
Well, you know.
So, Matt, Journal Club.
Back to the journal club.
So we'll get to the papers.
Now we'll do it quick.
We'll be efficient.
Don't worry.
So here's the journal club for me.
The Bannister study, I believe, sampled roughly 95,000 subjects from the UK Biobank.
Here they used a larger sample.
They did about half a million people sampled from a Danish health registry.
And they did something pretty elegant.
They created two groups to study.
So the first was just a standard replication of what Bannister did.
Which was just a group of people with and without diabetes that they tried to match as perfectly as possible.
But then they did a second analysis in parallel with discordant twins.
So same sex twins that only differed in that one had diabetes and one didn't.
I thought this was very elegant.
Because here you have a degree of genetic similarity and you have similar environmental factors during childhood that might allow you to see if there's any sort of difference in signal.
So now turning this back into a little bit of a journal club, virtually any clinical paper you're going to read, table one is the characteristics of the people in the study.
You always want to take a look at that.
So there we go, Matt.
You know, introducing the paper, starting off, let's look at table one.
You're familiar with this.
You've said it yourself a million times.
The approach to papers, you know, this is normal.
Yeah, yeah, yeah.
Table one.
I think table one is a participant characteristics.
Have a look at the people.
Let's see.
So table one in the Keyes paper shows the baseline characteristics.
And again, it's almost always going to be the first table in a paper.
Usually the first figure in the paper is a study design.
It's usually a flow chart that says these were the inclusion criteria, these were all the people that got excluded, this is how we randomized, etc.
And you can see here that there are four columns.
So the first two are the singletons.
These are people who are not related.
And then the second two are the twins who are matched.
And you can see, remember how I said they sampled about 500,000 people?
You can see the numbers.
So they got...
You know, 7,842 singletons on metformin, the same number then they pulled out matched without diabetes.
On the twins, they got 976 on metformin with diabetes, and then by definition, 976 co-twins without them.
And you look at all these characteristics.
What was their age upon entry?
How many were men?
What was the year of indexing when we got them?
What medications were they on?
What was their highest level of education, marital status, etc.?
The one thing I want to call out here that really cannot be matched in a study like this, so this is a very important limitation, is the medication.
So look at that column, Andrew.
Notice how pretty much everything else is perfectly matched until you get to the medication list?
I know that was long.
I'm just playing this to show, like, that's normal: highlighting a paper, looking at the characteristics of the participants, and noting good advice to, like, see if anything jumps out, especially between groups, right?
Is there something notable about the differences in the demographics or the medications?
It's good.
And I think he did spot some important differences between the test and the control group.
And he does talk about limitations here in also a nice way.
Full credit, Attia.
Listen to this.
Exactly.
So this is, again, a fundamental flaw of epidemiology.
You can never remove all the confounders.
This is why I became an experimental scientist.
Yeah.
So that we could control variables.
That's right.
Because without random assignment, you cannot control every variable.
Now, you'll see in a moment when we get into the analysis, they go through three levels of corrections, but they can never correct this medication one.
So just keep that in the back of your mind.
Okay.
It's good.
It's right.
Highlighting the weakness of epidemiology studies, right?
Yeah.
Or overlapping treatments that people are receiving.
It is good.
It is good.
Recent study of mine, Chris, I was looking at demonstrating the effects of problem gambling on quality of life.
We can't do an experimental study.
We can't give a group...
Problem gambling to people.
Yeah, gambling problem.
We can't give that to people.
The bloody ethics board won't let us do it.
So we're forced to do this kind of epidemiological thing.
We do our best to balance the groups, both methodologically and statistically.
We try to control for all the comorbidities.
But I know, Chris, I know that there are other confounders out there, and Attia makes that point very forcefully.
Yeah, and here's him again, I think, doing a good job.
Correctly identifying strengths and limitations of this epidemiological approach.
That's the way that epidemiology will make up for its deficit.
So you could never do a randomized assignment study on half a million people.
So epidemiology makes up for its biggest limitation, which is it can never compensate for inherent biases by saying we can do infinite duration if we want.
We could survey people over the course of their lives, and we can have the biggest sample size possible.
Because this is relatively cheap.
The cost of actually doing an experiment where you have tens of thousands of people is prohibitive.
I mean, if you look at the Women's Health Initiative, which was a five-year study on, I don't know, what was it, 50,000 women?
I mean, that was a billion-dollar study.
So this is the balancing act between epidemiology and...
Randomized prospective experiments.
So they both offer something, but you just have to know the blind spots of each one.
Yeah, I think elsewhere too, he talks a lot about those complementary strengths and weaknesses of experimental studies, very expensive as a result in normal circumstances.
COVID was a bit of an exception.
You can't do really large-N experiments.
So with the small N, on the flip side, you get to...
You know, do random allocation and you've got a really strong argument for causality.
The other approach where it's more observational or epidemiological, you could say, you've got the capability of getting a huge number of participants.
There's a big advantage there.
It could be ecologically valid, all sorts of things like that, but you don't have that strong argument for direct causality of the thing you're manipulating.
No, and I'm going to give more credit, Matt.
I'm a credit-giving machine today.
I'm doling out like it's Christmas over here.
Attia discusses confidence intervals.
I think that's a pretty good...
Now, in this table, they look different because it's 24.93 for the Metformin group and 21.68 for the twin group that's on Metformin.
When you adjust for age, they're almost identical.
It goes from 24.93 to 24.7.
One other point I'll make here for people who are going to be looking at this table is you'll notice there are parentheses after every one of these numbers.
What does that offer in there?
Those parentheses are offering the 95% confidence interval.
So, for example, to take the number 24.93 is the crude death rate of how many people are dying who take metformin.
What it's telling you is we're 95% confident that the actual number is between 23.23 and 26.64.
If a 95% confidence interval does not cross the number zero, it's statistically significant.
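For anyone following along, the bracketed interval Attia describes can be reproduced with a normal approximation: estimate ± 1.96 × standard error. Here is a minimal Python sketch, where the standard error is a made-up number (the clip does not quote the paper's actual SE):

```python
def normal_ci(estimate, se, z=1.96):
    """95% confidence interval under a normal approximation:
    estimate +/- z * standard error."""
    return (estimate - z * se, estimate + z * se)

# Numbers echoing the clip: a crude death rate of 24.93, with a
# standard error we invent here purely for illustration.
rate, se = 24.93, 0.87
lo, hi = normal_ci(rate, se)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # 95% CI: (23.22, 26.64)
```

With these invented numbers the interval happens to land close to the (23.23, 26.64) quoted in the clip; the paper itself likely uses an exact Poisson or model-based interval rather than this simple approximation.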
Okay.
So no issues with any of that, right?
That was a good summary there.
Yep.
Okay.
And then Huberman follows up that, talking about a similar sort of thing.
So to your point, the people with diabetes taking metformin, in both the matched singletons and the discordants, are dropping much faster and they always stay below.
And I was just going to say that the shading is just showing you a 95% confidence interval.
So you're just putting basically error bars along this.
So if this were experimental data, if you were doing an experiment, you'd be working with standard deviations and standard errors.
And so, these confidence intervals just give a sense of how much range.
You know, some people die early.
Some people die late.
Within a given year, they're going to be different ages.
So, these error bars can account for a lot of different forms of variability.
Here, you're talking about the variability is how many people in each group die.
We're not tracking one diabetic taking metformin versus a control.
I know why you played this clip.
Poor old Huberman.
His explanation of confidence intervals isn't quite as good as Attia's.
It's a bit confused.
But, you know, explain statistics on the fly.
How dare you, Matt?
I was just, I thought it wasn't that bad.
But, yes, he does get a bit lost, tongue-tied, when he's describing it.
But he's not fundamentally doing things wrong.
He's using examples that people use about the distribution of heights.
Take a small sample, you could end up with a skewed average or representation of the average height and so on.
So, yeah, it's a little bit mean, but I'm just playing clips, Matt.
That's all I'm doing.
Well, let's see.
There's some discussion about p-values and that kind of thing.
So, there's another chance, Matt.
So, they're using this quite complicated type of mathematics called a Cox proportional hazard, which is what generates hazard ratios.
Basically, any model has to have some error in it, and so they're basically saying this is the error.
So you could argue when you look at that figure, we don't know exactly where the line is in there, but we know it's in that shaded area.
Sorry, to make one other point.
If those shaded areas overlapped...
You couldn't really make the conclusion.
You wouldn't know for sure that one is different from the other.
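Since the Cox model only comes up in passing here, a hedged sketch of what it outputs: the model estimates a log hazard ratio, the confidence interval is built on that log scale and then exponentiated, and because the result is a ratio, the null value to check the interval against is 1 rather than 0. The coefficient and standard error below are invented:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox-model log hazard ratio (beta) and its standard
    error into a hazard ratio with a 95% CI. The interval is computed
    on the log scale and exponentiated, so it is asymmetric."""
    hr = math.exp(beta)
    return hr, (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical output: beta = 0.25 with SE = 0.10.
hr, (lo, hi) = hazard_ratio_ci(0.25, 0.10)
# "No effect" for a ratio is HR = 1, so significance at the 5% level
# corresponds to the CI excluding 1 (here it does: lo > 1).
print(f"HR = {hr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```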
Yeah, that's actually a good opportunity to raise a common myth, which is that a lot of people, when they look at a paper, let's say it's a bar graph, and they see these error bars, they will think: oh, if the error bars overlap, it's not a significant difference.
But if the error bars don't overlap, meaning there's enough separation, then that's a real and meaningful difference.
And that's not always the case.
It depends a lot on the form of the experiment.
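The overlap myth is easy to demonstrate numerically: the standard error of a difference between two independent estimates is sqrt(se1^2 + se2^2), which is smaller than se1 + se2, so there is a band where the individual 95% intervals overlap but the difference is still significant. A toy example with made-up means:

```python
import math

# Two independent group means with standard errors (made-up numbers).
m1, se1 = 10.0, 1.0
m2, se2 = 13.0, 1.0
z = 1.96

# Individual 95% CIs: (8.04, 11.96) and (11.04, 14.96); they overlap.
ci1 = (m1 - z * se1, m1 + z * se1)
ci2 = (m2 - z * se2, m2 + z * se2)
overlap = ci1[1] > ci2[0]

# The test of the difference uses sqrt(se1^2 + se2^2), about 1.41,
# not se1 + se2 = 2.0, so the z statistic clears 1.96 anyway.
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
z_stat = (m2 - m1) / se_diff

print(overlap, round(z_stat, 2))  # True 2.12
```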
I often see some of the more robust Twitter battles over, you know, how people are reading graphs.
And I think it's important to remember that you run the statistics, hopefully the correct statistics for the sample, but determining significance, whether or not the result could be due to something other than chance.
Of course, your confidence in that increases as it becomes typically p-values, p less than .00001% chance that it's due to chance, right?
So very low probability.
P less than .05 tends to be the kind of gold standard cutoff.
But when you're talking about data like these, which are repeated measures over time, people are...
Dropping out, literally, over time.
You're saying they've modeled it to make predictions as to what would happen.
Huberman, again, getting a bit confused there.
What?
What?
I don't know what you mean.
Like, it is mean, right?
This is mean.
It is mean.
Because P values are notoriously...
It's difficult to explain.
I know.
Yeah.
But two slight...
Red flags there a little bit.
He's not wrong about the error bars, right?
He is correct that you can have overlapping and it still be statistical significance.
But the gold standard being P less than 0.05, I think that gold standard suggests that when you get that, you're really happy with it.
And I'm like, no, that is actually a very marginal cutoff, which is arbitrary in a lot of ways.
It's actually a relatively high error rate that most hard sciences would not accept.
0.05.
I think I lost track of how many zeros he was saying.
So he was referring to 0.05.
Initially, he said 0.000001%.
But he got confused there and said that it's due to chance.
It's not a percent either, right?
It's a probability.
But then he said P less than 0.05 is the gold standard cutoff.
This is nitpicking, but I think it matters because you'll see.
When we look at the paper that he's discussing later, if we go and look at the p-values in that paper, a lot of them are hovering just below 0.05. And if you treat that as a gold standard, that doesn't look like a problem.
And if you look at that as a gold standard, that's not a problem.
I'm nitpicking too a little bit, but I did raise my brow a little bit when Huberman was talking about how it all depends on the methodology, whether it was repeated measures and things like that.
Actually, not so much.
The method that goes into the calculation of it takes into account extracting variance attributable to the repeated measures, for instance.
So actually, you don't need to kind of keep in mind the methodology when evaluating a p-value so much, except in a very broad sense.
And yeah, I don't know, it's just a bit odd.
A little aside, Chris, by the way: nobody, or very few people, seems to understand confidence intervals in the context of repeated measures.
Because when you put confidence intervals on bar graphs and there are repeated measures, then typically it's not straightforward to actually extract the variance attributable to individual differences and take that out of the error bars.
And many researchers don't.
They just calculate your standard confidence intervals and they actually don't...
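Matt's point about repeated-measures variance can be made concrete with a toy paired-versus-independent comparison (invented data, standard library only): differencing within subjects strips out the huge between-subject variance, so the same small effect that is invisible to an independent-samples t statistic is obvious to a paired one.

```python
import math
from statistics import mean, stdev

# Five hypothetical subjects measured before and after some treatment.
# Subjects differ wildly from each other, but each one improves by a
# small, consistent amount.
pre  = [10.0, 20.0, 30.0, 40.0, 50.0]
post = [11.0, 21.5, 31.0, 41.5, 51.0]
n = len(pre)

# Independent-samples t: between-subject spread (pooled SD about 15.8)
# swamps the 1.2-unit mean difference.
sp = math.sqrt((stdev(pre) ** 2 + stdev(post) ** 2) / 2)
t_independent = (mean(post) - mean(pre)) / (sp * math.sqrt(2 / n))

# Paired t: differencing removes the between-subject variance.
diffs = [b - a for a, b in zip(pre, post)]
t_paired = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print(f"independent t = {t_independent:.2f}, paired t = {t_paired:.2f}")
```

The same 1.2-unit effect yields a t of roughly 0.12 unpaired but roughly 9.8 paired, which is why error bars computed the "standard" between-subjects way can badly misrepresent a within-subject comparison.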
I know it's a little bit cruel to focus on somebody's offhand description of statistics, because everybody can make mistakes. And sometimes people perfectly well know what they're talking about, but just on the fly, they say something, you know, which isn't accurate.
I'm sure I do it all the time as well.
But I also think there is an element where, from this conversation, I would judge that Attia has a much stronger grasp of the relevant statistics and whatnot.
But if you were a layman, I think you would have the impression that they are both equally competent.
And I don't get that impression.
I get the impression.
I'm not saying Huberman doesn't know anything about statistics and whatnot, but I do think he has a little bit of a pre-replication crisis mindset, as we'll see as he goes on to talk about the other studies.
But I think that's what gets him in trouble, is because he overhypes studies that other people regard as not being so impressive.
So I don't think it's completely irrelevant.
These small indicators about just maybe some issues around interpreting things, whereas Attia is much clearer about these are significant limitations and that we have to adjust our level of confidence accordingly.
Yeah, yeah, I think that's correct.
I definitely, I'm never going to ding someone for not being confident in talking about statistical nuances off the top of their head.
But, you know, it's fair to say Attia is a lot more confident talking about these technical things than Huberman.
And also, Huberman is more bullish on interpreting the results.
And I think you're right to say those two things are connected.
Yeah, when we get to the conclusions drawn from paper, I think it'll be clearer.
So I'll get off Attia's paper quite shortly.
So here's just an example, for instance, about Attia correcting an inference.
But again, this paper is in his wheelhouse, so it's reasonable that this would happen.
The more you need these medications, they're never able to erase the effect of diabetes.
But in this case, it seems that they might be accelerating, possibly accelerating death due to diabetes.
Possibly.
We could never know that from this because we would need to see diabetics who don't take metformin, who take nothing.
And I would bet that they would do even worse.
So my intuition is that the metformin is helping, but not nearly as much as we thought before.
Oh, and maybe for context, Matt, if I just play the summary of the paper, that will make clear, right, what they find from the re-examination of metformin for potential life extension properties.
The point here is the Keyes paper makes it undeniably clear that in that population, there was no advantage offered by metformin that undid the disadvantage of having type 2 diabetes.
This does not mean that metformin wasn't helping them because we don't know what these people would have been like without metformin.
It could be that this bought them a 50% reduction in relative mortality to where they'd been.
But what it says is, in a way, this is what you would have expected.
This is what you would have expected 10 years ago before the banister paper came out.
Basically, in this paper, when they compared, and they did additional things in the paper to make the comparisons better, it's a better-powered study, it controls for more things.
They basically didn't see this dramatic improvement that was reported in the Bannister paper.
So people who were taking metformin and had diabetes died more than a matched group that weren't taking metformin.
But also, the matched group doesn't have diabetes. So that's something, because it's a diabetes treatment.
So what you don't have is a population who's taking metformin and doesn't have diabetes.
But in any case, there isn't a signal that it's this wonder drug.
And that's the issue.
So it is what you would expect.
And then so when Huberman suggests that it's potentially doing harm to be taking metformin?
And he's like, no, because we don't know.
We can't say that from this data, right?
Because they're doing worse than a healthy control.
That's the comparison.
Yeah.
So you got it?
You're right.
You're not going to take metformin now.
Have I talked you out of it?
You have.
So now, Matt, and this is coming to the end, so don't worry.
Don't worry.
You're getting to Huberman's paper now.
But on our Decoding Academia, we've talked about the way that you should approach papers, and they have some general advice about reading papers that I thought is interesting to look at.
So here's one piece of advice that comes first.
You see these big stacks of numbers, and it can be a little bit overwhelming.
But my additional suggestion on parsing papers is, notice that Peter said that he's read it several times.
Unlike a newspaper article or an Instagram post, with a paper, you're not necessarily going to get it the first time.
You certainly won't get everything.
So I think spending some time with papers for me means reading it and then reading it again a little bit later.
Read a paper that you want to understand multiple times.
Good advice.
I've often done that.
You don't oppose that, right?
If you have time and if you're not busy doing other things, then by all means.
Yeah, this is what you can, you know, you're giving people advice.
And Huberman goes on in greater detail about the way he approaches papers.
And it actually sounds very similar to advice I give undergraduates, Matt, which is...
He outlines trying to find the key question, you know, what the topic is.
Then how did they address it?
What's the methodology?
What is their result that they claim?
And then lastly, does the evidence that they provided support the conclusions that they want to draw?
And generally, I agree with all of that.
That's a reasonable set of questions to ask.
There's more questions you can add in.
That's a pretty good initial way to approach a paper, understand the question, check the methodology, look at the results, and then compare the quality of the evidence to the conclusions drawn.
Yeah, that's fine.
Yeah, they talked a lot about sort of stepping backwards and forwards for the paper.
And, you know, I think by all means, read any paper or any way you like.
I think it's okay.
I think there's an argument for just reading it from the beginning all the way through to the end, personally.
Listen, here's their approach.
So what I do typically is I'll read title abstract.
I usually then will skip to the figures and see how much of it I can digest without reading the text and then go back and read the text.
But in fairness, journals, great journals like science, like nature, oftentimes will pack so much information, the cell press journals too, into each figure.
And it's coded with no definition of the acronyms that almost always I'm into the introduction and results within a couple of minutes wondering what the hell this acronym is or that acronym is.
And it's just...
Yeah, it's just wild how much nomenclature there really is.
Yeah, but I hate to say, Matt, especially with more recent papers, I advise students that they should read the abstract.
Well, basically, I know that students aren't going to read the full paper.
So I tell them to go and read the figures.
And when they don't understand something in the figure, go and read the part of the paper that helps them understand it, right?
Because I also make them do things like identify the outcomes and the predictors and this kind of thing.
And they are usually plotted on graphs.
So figures can be very useful, save you a lot of time.
Yeah, yeah, I guess so.
If you're trying to speed read stuff and, you know.
Optimize your consumption.
I guess this is a bit of an optimizer approach, but it's a realistic one.
You're doing the old professor thing of, well, you should just read 50 papers and you'll get them if you're doing a literature review.
But your students aren't going to do that, Matt.
They're going to be in ChatGPT, getting ChatGPT to generate the summaries and stuff.
So you have to play with the Play-Doh that is available, not your dream Play-Doh.
Yeah, yeah, fair enough.
I know, like, you listen to things at two-and-a-half times speed, so you're always trying to, you know, maximize your time efficiency or whatever.
Yeah, look, I think it's fair enough.
If you know the area really well, you might well just, you know, skip the introduction and the discussion and you just...
Depends what you're doing.
Wow!
Yeah, depends.
It all depends.
Sounds like something I heard.
Did someone say that?
Well, again, if I'm reading papers that are...
Something that I know really well, I can basically glean everything I need to know from the figures.
And then sometimes I'll just do a quick skim on methods.
But I don't need to read the discussion.
I don't need to read the intro.
I don't need to read anything else.
If it's something that I know less about, then I usually do exactly what you say.
I try to start with the figures.
I usually end up generating more questions like...
What do you mean?
What is this?
How did they do that?
And then I got to go back and read methods, typically.
And one of the other things that's probably worth mentioning is a lot of papers these days have supplemental information that's not attached to the main paper.
You'd be amazed at how much stuff gets put in the supplemental section.
So, look, I would give the same advice: skim over stuff, you know, if you know the topic.
And I'd actually also agree that discussion sections often...
That's a lot of the authors' spin on the results.
So that's the stuff you can take with a pinch of salt, usually.
Yeah, I agree.
I agree.
It all depends who you are, what you know, and what your purposes are for reading the paper.
You might be just kind of doing a brief skim of literature to figure out the weight of evidence for and against such and such, an area you know intimately, in which case you might well just be...
Looking at abstracts and checking the basics of the results.
So yeah, depends.
I'm just giving credit, Matt, because we're going to talk about Huberman's paper selection.
So I'm just saying that's an example of what I would consider good advice to people who may not have read that many scientific papers and want to know how to approach them in a potentially efficient way.
You shouldn't overestimate your abilities.
In many cases, I think, especially with students that I teach, they don't have the capacities to understand the statistical analysis that's in the paper.
So if they're reading a regression table and they don't know what regressions are, not that helpful.
But nonetheless, overall, I sign off on this approach to reading papers.
There are many that you could do, and this one is not bad.
So Attia had a large epidemiology study, which essentially looked back at a previous finding that people were all excited about in the longevity area and said, actually, it doesn't look like it works very well.
And he summarizes the paper nicely and says, I'm not taking metformin now.
Are you fucking crazy?
So now let's look at the last bit, what paper Huberman picks and what takeaways he has.
I would hope Attia's takeaway is, maybe I should be less...
Automatically enthusiastic on the basis of single studies.
But if you want to be on the cutting edge, you've got to take risks with your lactose levels or whatever the hell it is.
So here we go.
Paper two from Huberman.
Well, should we pivot to this other paper?
It's a very different sort of paper.
It's an experimental paper where there's a manipulation.
I must say, I love, love, love this paper.
And I don't often say that about papers.
I'm so excited about this paper for so many reasons.
But I want to give a couple of caveats up front.
First of all, the paper is not published yet.
The only reason I was able to get this paper is because it's on bioRxiv.
There's a new trend over the last, I would say five, six years of people posting the papers that they've submitted to journals for peer review online so that people can look at them prior to those papers being peer reviewed.
So there is a strong possibility that the final version of this paper, which again, we will provide a link to, is going to look different, maybe even quite a bit different than the one that we're going to discuss.
Nonetheless, there are a couple of things that make me confident in the data that we're about to talk about.
First of all, the group that published this paper is really playing in their wheelhouse.
This is what they do.
And they publish a lot of really nice papers in this area.
I'm going to mispronounce her first name, but I think it's...
And the first author of the paper is Ofer Perl.
This paper is wild.
And I'll just give you a couple of the takeaways first as a bit of a hook to hopefully entice people into listening further, because this is an important paper.
This paper basically addresses how our beliefs about the drugs we take impact how they affect us at a real level, not just at a subjective level, but at a biological level.
Okay, so yes, a very exciting experimental study showing these placebo-type effects are actually reflected biologically, physically, not just psychosomatic or mental or whatever.
I was inclined to...
Complain about the framing about preprints being a new innovation, but I actually think he's correct that it is a trend that has, you know, become more popular in five or six years.
They're long-term things.
But I mainly, like, I remember Brett Weinstein discovering during the pandemic that preprints were a thing and kind of treating it as there's this whole new system.
It just goes to show how little that ecosphere is immersed in actual research, that they weren't familiar with it.
But Huberman is not doing that because he's correct.
There is a trend for this being more common in recent years.
So just going to say, my instinct was to dunk, but I would have been wrong to do so.
Well done.
Well done.
Yeah.
And there is that element, though, you know, just going to point it out again.
You know, Americans...
Is it Americans or is it the people that we cover that?
I'm so excited.
I love this paper.
It's really...
This is hugely influential.
We need to talk about this.
Yeah.
I know what you mean.
Like, we covered a paper recently that we both liked quite a bit.
We liked.
Yeah.
The wallet-finding paper.
It's in Decoding Academia.
Subscribe now.
Listen.
But, you know...
I don't think I've ever really had a feeling of excitement about papers, even ones I really like.
I get excited, but, like, I put it in context of, like, you know, I think here, to me, Matt, the inclination would be saying this is an important paper at the same time as pointing out it's not been peer-reviewed,
and as we go on, we might see some other issues, but, like, is it an important paper yet?
Like, even if it was completely valid, the finding that he's going to report, it's a single paper, right?
A small experimental study.
If you were a psychologist, especially, you might be wary of touting single studies claiming dramatic effects on small sample sizes.
Just saying.
Yeah, as being exciting.
Yeah.
That's okay.
That's the framing.
It's okay to be excited about things.
It's okay to be enthusiastic.
Anyways, I'm just pointing out a difference.
I'm just pointing out a difference, saying, you know, our American brethren, they're more expressive than us repressed Australians and Irish people.
Anyway, it was about the placebo, the framing there.
So here's another clip, a little bit more Huberman.
You know, introducing placebo effects and the purported power.
Even more striking is the studies that Alia Crum's lab did and others looking at, for instance, you give people a milkshake, you tell them it's a high-calorie milkshake, has a lot of nutrients, and then you measure ghrelin secretion in the blood.
And ghrelin is a marker of hunger that increases the longer it's been since you've eaten.
And what you notice is that it suppresses ghrelin to a great degree and for a long period of time.
You give another group a shake, you tell them it's a low-calorie shake, that it's got some nutrients in it, but that doesn't have much fat, not much sugar, etc.
They drink the shake, less ghrelin suppression.
And it's the same shake.
And it's the same shake.
And satiety lines up with that also in that study.
And then the third one, which is also pretty striking, is they took hotel workers.
They gave them a short tutorial or not, informing them that moving around during the day and vacuuming and doing all that kind of thing is great.
It helps you lower your BMI, which is great for your health.
You incentivize them.
And then you let them out into the wild of their...
Everyday job.
You measure their activity levels.
The two groups don't differ.
They're doing roughly the same task, leaning down, cleaning out trash cans, etc.
Guess what?
The group that was informed about the health benefits of exercise lose 12% more weight compared to the other group.
And no difference in actual movement?
Apparently not.
Now, how could that be?
So that is like a huge, if true kind of finding, isn't it?
That you tell these workers about the health benefits of doing it.
Their activity levels don't differ, but your experimental manipulation of just people believing that it's good for them to do this hotel work makes them lose more weight, be healthier.
Chris, I think you looked into some of these articles that Huberman references.
Yeah, I did.
Because whenever I hear...
This kind of description about studies that are counterintuitive and dramatic.
It rings the replication crisis warning bells, right?
They jingle in my ear because this sounds very much of a piece with hurricanes that have female names leading to more fatalities because people don't take them as seriously.
Himmicanes and hurricanes.
Himmicanes and hurricanes.
Yeah, the title did a lot of the work, I think.
But yeah, so I'm wary of it for good reason, because, you know, cute results get lots of press, but the replication crisis has shown that many of those studies don't hold up when you dig into it or whenever you have a more...
Robust replication.
And I didn't do a huge deep dive, but I did go back and look at some of the studies that he's citing.
And one of them is mind over milkshakes.
Mindsets, not just nutrients, determine ghrelin response, right?
And this claim is saying that if you give people the exact same milkshake and you tell them in one case, like, it's this super sweet one and in the other one it's a 0% fat one, that you see this difference in the...
Production of ghrelin.
In any case, you see a physiological difference in the body in response to that.
And it's not such a dramatic claim because priming the body that it's about to receive a certain type of food, maybe it could induce a type of reaction.
But when I went and looked at the study, these are the kind of warning signs you would have for a pre-replication crisis or post-replication crisis.
Questionable.
A study that isn't pre-registered, that has a small number of participants, that has measures which are noisy, right?
And in particular, people might think that physiological measures allow you to be more precise because you can measure heart rate or you can measure different hormones in the blood or so on, and it's a quantifiable, objective measure.
But anybody who's worked with physiological data, Should know that there's a lot of noise in it.
And there's a lot in exactly how you measure things, which measures you count, what you're comparing to.
And this is especially the case if you have longitudinal heart rate measures or something like that, right?
I've worked with heart rate, can confirm.
Yeah.
So in any case, following...
Huberman's recommendation about looking at the figures in the paper, two of them are showing the different advertisements they use.
One of them is showing the self-reported perceived difference in healthiness of the milkshake.
And the fourth one, which is the key one, is showing ghrelin over time as a function of shake mindset.
And it has 20, 60, and 90 intervals.
And it's comparing the ghrelin production or whatever pattern.
And there is a difference between the two of them.
At p equals 0.02 significance overall.
And the amount of subjects in it were 53 participants.
However, two did not attend.
So 51 participants divided by two conditions, around 25 in each condition.
p equals 0.02, 25 per condition study.
Yeah.
Doubt!
The little meme with doubt flashing above my head.
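Chris's doubt here can be made concrete with a quick simulation. This is only a sketch under an assumed true effect size (d = 0.3 is our illustrative choice, not a figure from the paper), not a reanalysis of the actual study:

```python
import random
import statistics

def t_stat(a, b):
    """Welch two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

random.seed(1)
n = 25        # participants per condition, as in the milkshake study
d = 0.3       # assumed modest true effect in standardised units (an assumption)
sims = 4000

# Empirical two-sided critical value from null simulations (alpha = .05)
null_ts = sorted(
    abs(t_stat([random.gauss(0, 1) for _ in range(n)],
               [random.gauss(0, 1) for _ in range(n)]))
    for _ in range(sims)
)
crit = null_ts[int(0.95 * sims)]

# How often does a study this size detect a true d = 0.3 effect?
hits = sum(
    abs(t_stat([random.gauss(d, 1) for _ in range(n)],
               [random.gauss(0, 1) for _ in range(n)])) > crit
    for _ in range(sims)
)
print(f"simulated power at n={n}/group, d={d}: {hits / sims:.0%}")
```

Under those assumptions the simulated power comes out somewhere in the high teens, which is one reason a lone p = 0.02 from a sample this size warrants the skepticism Chris is voicing.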
And similarly, there's a paper about hotel maids being told that they were doing exercise, that their housework was exercise, and they lost a bunch of weight if they were told that,
whereas the same maids doing the same amount of effort didn't lose as much weight.
In the control condition.
Now, again, this would be an important result, because it would mean you should just tell everyone that everything they're doing is exercise, because apparently their body will switch into some calorie-burning mode or whatever. The amount of physical activity they did was consistent according to self-reported measures, but the group that was told that doing housework was exercise lost more weight.
The kind of power of mindset over physical reality.
So here, Matt, looking at the sample size first, 84 subjects this time, 44 in the informed group, 40 in the control condition.
So bigger, at least than the previous study, but still 40-odd per condition.
Not huge, right?
And looking at not the self-reported differences, but the differences in terms of mean weight loss, the kind of objective measures.
The mean weight after the intervention was 143.72 in the informed condition, whereas the control condition had a mean weight of 146.7.
I guess that's in pounds.
So around three pounds' difference, and the starting weights are similar, 145.5 and 146.92.
So you're talking about a difference of around two pounds, right, over the period.
And it does reach high levels of significance, like under 0.001.
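As a quick sanity check on the figures quoted (assuming they're pounds, as Chris guesses, and taking the episode's numbers at face value):

```python
# Weight figures for the hotel-worker study as quoted in the episode,
# assumed to be in pounds; a rough arithmetic check, not a reanalysis.
informed_start, informed_end = 145.5, 143.72
control_start, control_end = 146.92, 146.7

informed_loss = informed_start - informed_end   # weight lost, informed group
control_loss = control_start - control_end      # weight lost, control group
extra_loss = informed_loss - control_loss       # difference in weight lost
print(f"extra weight lost in informed group: {extra_loss:.2f} lb")
```

So the "mindset" manipulation corresponds to roughly a pound and a half of extra weight loss over the intervention, which is in the ballpark of the "around two pounds" Chris mentions.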
But again, Matt, color me skeptical.
It is possible that they've found a very large effect here, but when you have a sample size of 40 and you have a claim that just telling people over the course of 10 weeks one thing and then it decreases,
right?
Like there's so many other aspects, and of course they try to control for them or whatever, but I've seen in skeptical reporting somebody say: okay, well, if you tell people that it's exercise, it might make them do more physical activity.
And I've also seen other people saying, you know, just that they were skeptical that the result would replicate.
And this is the kind of thing where, like, I think you could report this in a manner which is responsible, saying some studies that have small samples but were not pre-registered, have not been replicated, that we might be wary of, have found these kind of effects.
But that's not the way Huberman reports it, right?
Huberman is like, these guys are stellar.
These are the leaders in this field.
And they find these incredible studies with amazing effects that demonstrate the placebo effect is just beyond powerful, right?
It's kind of TED talky as opposed to post-replication crisis talk.
Yeah, that's right.
I mean, you don't have to do Bayesian statistics, but you...
I think you can apply that slightly Bayesian reasoning, which is that your prior expectation on the probability that just telling someone that something is good for you makes it better for them should be pretty low, right?
That there's an actual physical effect going on there, and one small-N study shouldn't move the needle so far that you're breathlessly excited about these...
Jaw-dropping, earth-shattering results.
Like you said, even in an experiment like that, there's all kinds of other mechanisms that could be at play.
Like, it could be true, right?
It's possible.
But it could also be that the people who are told that exercise is really great for them, that their work is really good for them, could have been doing some other exercise or even eating a little bit less or who knows, and they failed to measure that.
Or self-report it exactly.
There's a thousand possible explanations.
So, you know, you really would want to get excited about this kind of thing once it's been triangulated and confirmed using multiple different methods.
Because if it really is true and it really is earth-shattering, then you should see a bunch of different groups looking at it in different ways and confirming it.
Exactly.
Yeah, and it should be, you know, if the effect is this big, you get a couple of pounds for just one short intervention, then, like, great, you know, this is huge news, and you should be able to replicate it fairly easily.
But in general, this doesn't happen, right?
People don't make efforts to replicate, and when they do, they do it in a completely different way.
So, yeah, so I'm just saying, again, Huberman to me is like a pre-replication crisis TED talk guy.
He doesn't seem to have picked up on the issues with overhyping studies and stuff.
Yeah, which is odd, because he sometimes does reference it, but it's obvious that he's much more about the hype than he is about tempering expectations.
Yeah, that's right.
So yeah, even if you're a member of the public, it's not hard to, I think, absorb what I think is good practice these days, which is that...
If you see a small-N study, if it's finding a counterintuitive and very surprising result, if there's a p-value somewhere between 0.01 and 0.05, there's probably other red flags too.
But just don't assume that it's definitely, definitely true, the findings.
So here's a bit more in the study, Matt, and I want to get your own reaction to this.
I won't prime you by saying what I thought.
Let's just see what you think about this little segment about the study.
And to make a long story short, we know that nicotine...
Vaped, smoked, dipped, or snuffed, or these little Zyn pouches, or taken in capsule form, does improve cognitive performance.
I'm not suggesting people run out and start doing any of those things.
I did a whole episode on nicotine.
The delivery device often will kill you some other way or is bad for you.
But it causes vasoconstriction, which is also not good for certain people.
But nicotine is cognitive enhancing.
Why?
Well, you have a couple sites in the brain, namely in the basal forebrain, nucleus basalis.
In the back of the brain, structures like the locus coeruleus, but also this, what's called, it's got a funny name, the pedunculopontine nucleus, which is this nucleus in the pons, in the back of the brain, in the brainstem, that sends those little axon wires into the thalamus.
The thalamus is a gateway for sensory information.
And in the thalamus, the visual information, the auditory information...
It has nicotinic receptors.
And when the pedunculopontine nucleus releases nicotine or when you ingest nicotine, what it does is it increases the signal to noise of information coming in through your senses.
So the fidelity of the signal that gets up to your cortex, which is your conscious perception of those senses, is increased.
And how much endogenous nicotine do we produce?
Well, it's going to be acetylcholine binding to nicotinic receptors.
I see.
We're not making nicotine.
We're just binding.
So this is a nicotinic acetylcholine receptor.
Right.
Of which there are at least seven and probably like 14 subtypes.
But so, right.
They're called nicotinic receptors in an annoying way, in the same way that cannabinoid receptors are called cannabinoid receptors.
But then everyone thinks, oh, you know, those receptors are there because we're supposed to smoke pot or those receptors are there because we're supposed to ingest nicotine.
No.
So, what do you think there?
Well, there's so many little squishy parts of the brain.
I'm not super familiar with the exact mechanism that he's describing.
I'm sure he's right.
He knows more about the various neurotransmitters than I do.
I think, though, you can't simplify the functions of those particular regions and even their interconnectivity into a simple thing, like activating this area is going to enhance your perceptions and improve the signal-to-noise ratio.
If there was some simple effect like that, then our brains would be wired to do that all the time.
Because of evolution, right?
So it's generally not suboptimal.
Tweaking something with nicotine or any other psychoactive drug is going to have a wide range of effects all over the place.
And there's usually some downsides along with the plus sides.
Those are some general thoughts.
You could talk about the brain forever.
No, that was interesting because you picked up on something different.
And I was just curious how this lands with somebody who is more familiar with the neuroanatomy kind of stuff.
Because to me, there's a lot of jargon there.
And I don't think it's wrong.
I think it's going to be accurate.
But I do get the sense, like, sometimes when they're describing things, I feel like it's unnecessary, you know, that they're talking so specifically about...
I don't know if that's just my sense because I'm not as familiar with neuroanatomy.
Well, it could be because if it's a flex, then he's rattling off a bunch of extremely specialized bits of information.
I'm not super familiar with, at least in the...
I think only the researchers that are really focused on those sorts of mechanisms would...
Those are some pretty specific little functional things that he's talking about.
So in this case, I think part of the appeal of Huberman and Attia is that they get into the nitty-gritty like this.
But I do think that their audience generally is not well-equipped to assess.
And I'm certainly not well suited to assess these kind of descriptions.
But I will say, just on the flip side of that, that when he was talking about fMRIs, which I do know a bit about at least, I think this description Huberman gives of how they function is kind of technical, but also pretty good in a nutshell.
They measure carbon dioxide and they're measuring nicotine in the blood as well.
So they do a good job there.
So then what they do is they have them vape.
And they're vaping either a low, medium, or high dose of nicotine.
The doses just don't really matter because tolerance varies, etc.
And then they are putting them into a functional magnetic resonance imaging machine where they can look at...
It's really blood flow.
It's really hemodynamic response.
For those of you who want to know, it's the ratio of the oxygenated to deoxygenated blood because when blood...
Blood will flow to neurons that are active to give it oxygen, and then it's deoxygenated, and then there's a change in what's called the BOLD signal.
So fMRI, when you see these hot spots in the brain, is really just looking at blood flow.
That seems in line with what I understand.
There was a neat little short description.
That's correct.
Yep, that's right.
That's fMRI.
The study, right?
He mentioned that people are receiving different doses of nicotine and then being put into an fMRI, right?
This is what the study is.
I thought they were just being told that they were getting different doses.
Oh, look at you.
You're like Peter Attia.
You spoiled the reveal.
Yeah, so that's what Attia does in one of the clips.
But weren't they all given the same?
He's like, well, yes, I was going to reveal that, Attia.
But anyway...
They were told.
They were told they got either this is a low amount, a medium amount, or a high amount.
And then, of course, they looked at brain area activation during this task.
And what they found was very straightforward.
Sorry, they were all given the same amount.
Yes, this is the sneak.
I was going to offer it as a punchline, but that's okay.
No, I think that the cool thing about this experiment is that the subjects are unaware that they all got the exact same amount of relatively low nicotine-containing vape pen.
But anyway, so yeah, they're all receiving the same dose, so their behavior should not be different based on the actual chemical stimulation, right?
Yes.
Oh, but on the fMRIs, Matt, there was a discussion about the limitations of fMRIs, or at least Peter Attia brought up a relevant issue about potential issues with fMRIs, and that leads to this exchange.
And what they'll do is they'll pulse with the magnet because my understanding is that, and this is definitely getting beyond my expertise, but that the spin orientation of the protons, then it's going to relax back at a different rate as well.
So by the relaxation at a different rate, you can also get not just resting state activation, like, oh, look at a banana, what areas of the brain light up, but you can look at connectivity between areas and how one area is driving the activity of another area.
So very, very powerful technique.
So what they do is they put people in a scanner, and then you'll like this.
What are the limitations of fMRI in terms of, I mean, how fine is the resolution?
I mean, where are the blind spots of the technique?
So resolution, you can get down to sub-centimeter.
They talk about it always in these papers as voxels, which are these little cubic pixels things.
You know, sub-centimeter, but you're not going to get down to millimeter.
There are a number of little confounds that maybe we won't go into now that have been basically worked out over the last 10 years.
Yeah, yeah.
So I think he missed a couple.
First, the most important limitation of fMRI, which is the temporal resolution.
That's the big one in terms of it being relatively low.
The spatial resolution is okay.
Might be a stretch to call it sub-centimeter.
I mean, could be.
I mean, the thing is, it's quite blobby, so you might have voxels or pixels at a higher resolution, but it doesn't necessarily mean that the resolution is really there.
The other thing, too, though, of course, with fMRI is that there's so much individual variability and just your baseline resting behavior, so it does involve a lot of those, you know, complicated...
Signal processing transformations and spatial and temporal statistics in order to extract out signals.
So, you know, this is one of the reasons why, you know, I know you've been skeptical of fMRI studies, Chris, because they do produce very nice, science-y looking pictures with heat maps superimposed over a picture of a brain. It looks super science-y, but can sometimes be a little bit misleading.
There have been various studies on this topic showing that if you just show people a picture of a brain, like an fMRI result, they rate the paper as more scientifically rigorous, right?
Like, it doesn't matter that no other detail is different.
Well, they're so bloody expensive.
I mean, we should give them points just for that.
Yeah, and they did have an issue as well, which I think Huberman might be gesturing vaguely at there when he says, you know, there were some issues for a decade, but we've mostly ironed that out.
I guess he's referencing the issue with the software that people were using to analyze fMRIs, which turned out to be calibrated incorrectly, leading to a whole bunch of false positives, I believe,
which was a very, very serious issue and I believe has been mostly rectified.
But the fact that it could occur and persist for such a significant amount of time is...
Yeah, yeah.
I mean, I think fMRIs as a technology is a good example of this extended concept of researcher degrees of freedom, where just the technology is super advanced and super sophisticated, produces reams and reams of raw data.
But as a consequence of that, a huge amount of pre-processing has to go on.
And even then, when you get your filtered pre-processed data, It's still relatively high-dimensional.
That is, like, it's got a lot of pixels, voxels.
And, you know, so you have to use pretty complicated spatial temporal statistics in order to compare them.
And because they are so expensive to run, you almost always have low-N studies.
So I think those things have contributed to some right skepticism about the method.
Correct.
And there was another paper, a quite famous one, which Attia references here and which Huberman is less familiar with, which I took as diagnostic, but we'll see why you did not after.
Wasn't there a really funny study done as a spoof maybe a decade ago that put a dead salmon into an MRI machine and did an fMRI of a dead salmon that demonstrated some interesting signal?
I didn't know that.
We got to find this one for the show notes.
We should do one of these wild papers ones.
There are papers of people putting elephants on LSD that were published in Science and things, like crazy experiments.
We should definitely do a crazy experiments journal club.
Yeah, I think Huberman was missing the point there, though.
It wasn't so much like, oh, wouldn't it be wild if we studied dead fish?
They were making a very specific point about problems with the methodology.
Correct.
This was Craig Bennett, who took an Atlantic salmon and scanned it and showed that, using normal methods, its brain was lighting up in specific areas in response to being shown some prompts.
So he wasn't saying that we shouldn't use fMRIs.
It was just simply, I believe it was just a poster at a conference or something.
But in any case, it was just an illustration, right?
We have to be somewhat skeptical about the application of these.
Now, I thought this was a famous study that everyone in the science reform oeuvre was familiar with, and that not knowing this was like a big telltale that, you know, you weren't paying attention.
But in our pre-podcast discussion, it emerged that you didn't know it. I did.
I've never presented myself as someone who's all in on the science reform movement.
I'm being dragged along.
I'm an unwilling participant.
Look, I will say this is an example of things being falsified, because I would say, from my experience interacting with you over statistical issues, that you're completely in line with all of that.
About not overhyping things, with being skeptical about claims, with knowing all the bullshit that people do with statistics.
So knowing that scanning a dead fish would light up an fMRI, you know, would be completely in line.
But here's where we're not the same, Chris, because you see me as someone who's all in on that stuff, which I am.
But that's just because I'm a good statistician, not because I'm part of the science reform movement.
This is true.
So I'm saying it's a kind of parallel evolution effect.
Yes, yes.
So I learned from your reaction not knowing about this paper and being like, oh, that's a funny paper.
Okay, I cannot use it as diagnostic.
Although I think in many occasions, if someone was a science communicator and they don't know the study, it does say something about their blind spots, right?
But the difference is, Huberman does not have your background in statistics.
So, you know, you have to take each person as a special case.
Okay, but the main thing is you cannot ding him for not knowing about that study because I didn't know that.
Because you also didn't know about it.
Yeah, that's why I can't ding him for not knowing about it.
Yes, there is that.
But whether or not you know about the fish, he doesn't seem to entirely get the point that Attia is pushing towards about wanting to talk about limitations, like the way he did with the epidemiology study.
So anyway, they put them into the fMRI and then what do they do?
So they put people into the scanner and then they give them a task that's designed to engage, known to engage, the thalamus, reward centers, and the ventromedial prefrontal cortex.
And it's a very simple game.
You'll like this because you have a background in finance.
You let people watch a market.
You know, okay, here's the stock market, or, you could say, the price of peas.
It doesn't really matter.
It goes up, it goes down, and they're looking at a squiggle line.
Then it stops, and then they have options, but they have to pick one.
They're either going to invest a certain number of the hundred units that they've been given, or they can short it.
They can say, oh, it's going to go down and try and make money on the prediction it's going to go down.
You could explain shorting better than I could, for sure.
So depending on whether or not they get the prediction right or wrong, they get more points or they lose points, and they're going to be rewarded in real money at the end of the experiment.
This is an economic game, in a way, right, Matt?
These are things that psychologists and behavioral economists and a whole bunch of people like get people to play little games with money.
Because there's actual money involved, the thought is that people will be actually more invested in the outcome and a nice little investment prediction game, right?
This is actually an area where I'm a bit more in my wheelhouse because it's kind of related to gambling, reward approach behavior and decision making and stuff.
So all of that sounded plausible to me.
The ventromedial prefrontal cortex interacting with the thalamus and they cooperate certainly in terms of finding sources of reward, making decisions about how to approach it and modulating.
Emotions and things like that.
Yeah, and here I would just note, Matt, I don't know if you've heard about this trick, but basically you should sell when things are high and buy when they're low, and this is how you can beat the market.
You explain this theory of trading to me.
Yeah, no, no, it's brilliant, brilliant stuff.
I can't believe it.
No one's ever thought of that one before.
I mean, my God, I know, when the line is down, that's when you buy it, and sell when it goes up.
Anyway, it would be easy for me, this game.
So Huberman makes a note here.
So they're playing this game, and they're getting told that they're having different doses.
And you might expect, Matt, that part of the goal is to see whether they perform better in this attention-demanding game.
But...
Now remember, these groups were given a vape pen prior to this, where they vaped what they were told was either a low, medium, or high dose of nicotine.
And they do this task.
The goal is not to get them to perform better on the task.
The goal is to engage the specific brain areas that are relevant to this kind of error and reward type circuits.
And we know that this task does that.
Matt, here I want to note this claim, that the goal was not to make them perform better or worse on this task.
What I would want to see, to know that that is true, is a pre-registration where people said that, right?
We're going to do the game, but we don't actually think anybody will perform better or worse.
Because that's kind of what the paper says now.
But I wonder, if the people had performed better in the high-dose condition, the one where they're told it's high, would that really have been presented as, this was, you know, not a part of our goal?
Yeah, this is contrary to our expectations.
Yeah, no, yeah, one suspects not.
Because that's pretty plausible, right?
Because nicotine, like some of the things you said at the beginning were all true, right?
It's associated with, especially amongst people who are, like these are all vapers or smokers or people who have some level of nicotine addiction.
Sorry, speaking from personal experience, give them some nicotine and you'd probably expect their attention and cognitive performance to be slightly better.
Yes, correct.
So that would be the expectation.
Maybe their mental expectation could trip over into performance, right?
It's not implausible, but it's just highlighted here that they didn't think that was going to happen.
And you're like, hmm.
I'd be curious if that were the case.
Anyway, what the paper found overall, here's a general summary provided from Huberman after he goes through it a bit.
Now, a number of things happen, but the most interesting things are the following.
First of all, people's subjective feeling of being on the drug matches what they were told.
So if they were told, hey, this is a high amount of nicotine, like, yeah, it feels like a high amount of nicotine.
And these are experienced smokers.
If it was a medium amount, they're like, yeah, it feels like a medium amount.
If it's a low amount, they think it was a low amount.
Now that's perhaps not so surprising.
That's the placebo effect.
But if you look at the activation of the thalamus, in the exact regions where you would predict acetylcholine transmission to impact the function of the thalamus, so these include areas like what's called the centromedian nucleus, the ventroposterior nucleus,
the names that really don't matter, but these are areas involved in attention.
It scales with what they thought they got in the vape pen, meaning if you were told that you got a low amount of nicotine, you got a little bit of activation in these areas.
If you were told that you got a medium amount of nicotine and that's what you vaped, then you had medium or moderate amounts of activation.
And if you were told you got high amounts of nicotine, you got a high degree of activation.
And the performance on the task, believe it or not, scales with it somewhat.
So keep in mind, everyone got the exact same amount of nicotine in reality.
So here, the belief effect isn't just changing what one subjectively experiences.
Oh, this is the effect of high nicotine or low nicotine.
It actually is changing the way that the brain responds to the belief.
And that, to me, is absolutely wild.
Okay.
So that's the key result, isn't it?
So I read the paper, but I've mostly forgotten it, Chris.
Refresh my memory.
They basically have got these different conditions.
Everyone got the same amount of nicotine.
People got told they got different amounts.
And then when they had people in the fMRI machine, during this task, they found that those attention-related regions of the PFC were more activated in the higher conditions.
Yes?
Yeah.
So there's a bunch of results that they look at and report.
And they report, as he highlights at the start, the subjective thing: that self-reported belief about dosage reflects what they were told, which makes sense, because why would they think otherwise?
And then they look at a bunch of activity in the brain, right?
And they have those little brain scans that you want from fMRIs, looking at the thalamus.
The nucleus accumbens neural patterns, that's the other one.
Nucleus accumbens.
You familiar with those, Matt?
Not really.
No.
Well, but in any case, you've got like a little brain with bits lit up, and then you've got various figures illustrating the effects.
And the way Huberman presents it is that the effect is dramatic.
You can see it in the brain's reactions, not just from the self-report.
That's the crucial thing.
But, Matt, when I went and looked at these figures, okay, so first of all, you have one where you have parameter estimate representing reward-related activities extracted from an independent thalamus mask.
Okay, whatever the case.
So some part of the fMRI result is extracted, and then they split that out by category, and they have the little, you know, amounts in each plotted on bar graphs with the distribution shown.
Now, here, important to note is there's no significant difference between any of the groups except for the low to high group.
And then the significance is 0.04.
0.04, right.
0.036 to be precise.
What was the N in this again?
The N in this study overall is around 60; it's about 20 per condition.
Yeah, yeah.
So pretty typical for an fMRI study.
Yeah, and the other thing too is, like, you read out some of their methods descriptions there.
This is the issue with these really high-tech studies, which is that they apply some mask that identifies a particular kind of little region.
And, you know, there could be nothing wrong with that.
That could be totally orthodox.
But unless you are like an fMRI guy and are fully familiar with the software and all these things, you really have no idea about the research degrees of freedom that are involved there.
You don't know how much, like, was there any tinkering with parameters?
Are there different masks they could have had a look at?
There's, like, so many things there that are just, it's a bit of a black box.
And I think that's what, you know, at least for me, it just increases my level of skepticism.
But even that aside, even if it was a really simple methodology that you could really understand, you know what I mean?
Like a simple behavioral measure, which you just counted something, you know, something you could easily wrap your head around.
It was all pre-registered.
Even if all that were the case, then, yeah, with that sample size and only finding one significant effect between the three groups that they looked at and it being p equals 0.04, all of those things would make me go,
yeah, I mean...
I wouldn't put a huge degree of credence in it.
And this is not to say that, Chris, I mean, I wonder what you think about this because I've detected a note of skepticism from you regarding the effects of placebos having actual physical effects on the body.
But, I mean, my rough understanding of it is that it's not a priori totally ridiculous.
Like, it is, you know, the mind and the body, if we're going to do Cartesian dualism, are sort of intimately related.
They both have effects on the other.
If you believe something is true, then it's going to have the psychological effects, which are going to translate into physical effects at some point, right?
Yeah, I have no skepticism about the placebo effect existing or that it, like you say, you know, what people expect about things that, you know, where else is it going to show up?
Even the fact that the people's subjective self-reporting is different.
It wouldn't be a surprise for me to find that, you know, you could find correlates with brain activity that represent that.
So it's not a huge reach, but the evidence here is not hugely persuasive to me because, like you said, the limitations of sample size, the fact that...
So there is another figure towards the end, which is looking at thalamus to ventromedial prefrontal cortex functional connectivity.
And there they say they do find a difference between the three conditions.
But again, it's all this stuff about how many different measures you have on the fMRI, right?
And you have the ones that are reported in the paper at the end.
And even then, they're highlighting ones which are on the verge of significance in a bunch of cases.
But I would not be surprised if there are very many other possible candidates that are on the file drawer.
And the way that you could prove me wrong about that is by pre-registering the study and saying which measures you're going to focus on, right?
And in the absence of that...
In the absence of various replications, my initial stance is skepticism, but not skepticism overall.
If the actual effect was validated, it wouldn't destroy my worldview or anything because I expect the placebo effect to exist.
I think this is a great example of where pre-registration would really, really help because you have this really sophisticated technology.
You have a bunch of different things, a bunch of different areas, a bunch of different co-activation of different areas.
My assumption is that there's a huge number of options there.
And, you know, it is tempting to suspect that they looked at more than just the two where they found a significant difference; you have to suspect that they did.
But when they came to write up the paper, it's kind of human nature to focus on the interesting bits, right?
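The file-drawer worry raised here can be put in rough numbers. A minimal sketch, assuming k independent tests each run at alpha = 0.05 (the independence assumption is ours; fMRI measures are typically correlated, so this is only illustrative):

```python
def familywise_alpha(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

# With ten candidate measures available, a spurious p < .05 "hit" turns up
# roughly 40% of the time even when every true effect is zero.
print(round(familywise_alpha(10), 3))  # ~0.401
```

Which is why a lone p = 0.036 result drawn from an unknown number of candidate analyses carries so little weight without pre-registration.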
Right, yeah.
And the authors often overhype, or present them in a dramatic fashion, because this is what journals also like to see.
So I'm not surprised at that.
But the point of a journal club is that you are not the author, right?
So you should be approaching things critically.
Otherwise, you're just the PR person for them.
And so this is Huberman talking about the results.
And let's see if he is looking at them critically or overhyping.
What I find just outrageous, and outrageously interesting, about this study is simply that what we are told about the dose of a drug changes the way that our physiology responds to the dose of the drug.
And in my understanding, this is the first study to ever look at dose dependence of belief effects.
And why would that be important?
Well, for almost every study of drugs, you look at a dose-dependent curve.
You look at zero, low dose, medium dose, high dose.
They clearly are seeing a dose-dependent response simply to the understanding of what they expect the drug ought to do.
In other words, you can bypass pharmacology somewhat.
That last line.
Yeah, I think that's a big stretch.
There's some evidence of placebo effects having measurable physiological effects, I think, even in the treatment of depression or something like that.
But they're reasonably weak, I think.
It's certainly not something that you could use to bypass pharmacological interventions, I would think.
Yeah, and so I want to contrast a little bit with the way Attia responds when he sees this study.
Right.
So he's looking at the same figures that I was highlighting and listen to his response.
Look at figure 2B.
Am I reading this correctly?
So it's got four bars on there.
You've got the group who were told they got a low dose, the group who was told they got a medium dose, the group that was told they had a high dose, and then these healthy controls who presumably were non-smokers who were just put in the machine.
That's right.
Yeah, this is measuring parameter estimate.
Is that referring to their ability to play the trading game?
The parameter estimate is the activation, reward-related activities from an independent thalamus mask, right?
So what they're doing is they're just saying, if we just look at the thalamus, what is the level of activation?
I see.
So this suggests that the only statistical difference was between the low and the high.
That's right.
And nobody else was statistically different.
That's right.
But that's not the whole story.
Yeah, so, like, Huberman was super excited about it being a dose-dependent relationship, right?
But it's actually just, like, the low one and the high one.
So that's, um, but yeah, I mean, but Chris...
So he points out, Matt, just to give Huberman his due, he does justify that claim by pointing to figure 4B.
Ah, okay.
Fair enough.
And the different one.
And he says this.
So figure 4B, if you look at parameter estimates, so this is the degree of activation between the thalamus and the ventromedial prefrontal cortex, and it's called the instructed belief, you can see that there's a low, medium, and high scatter of dots for each, and that each one of those is significant.
So he's saying, no, look at the other measure.
You do see a difference between the conditions on this one.
Just so I understand, they observed a difference in the degree to which the thalamus was communicating with the prefrontal cortex, right?
But they didn't see any differences in performance on this game.
Yeah.
So there's this activity which is supposedly indicative of better concentration, better signals and noise filtering, or whatever, but it's not yielding any measurable changes in performance.
Yeah, I actually tried to go in, because the reporting of the result of the behavioral measure wasn't completely clear.
I downloaded the file and had a look, and I think they are claiming some signal, like, you know, one that relies on interpreting a particular kind of result and a particular way of measuring.
Not, like, an obvious, straightforward better performance overall.
But they're not emphasizing it in the paper in any case.
So, no, this is about the activation in those conditions.
And in particular, the activation of the kind of network.
Not the actual areas.
Areas themselves, yeah.
Yeah, this sort of binding or whatever is going on.
But don't you think that's odd, just stepping back a moment, that if it were true that there was this placebo effect resulting in measurable changes in significant brain activity, and that brain activity is meant to be indicative of better concentration and all of these things,
that the fact that you don't see a difference in the primary behavioural measures is revealing.
And I say that because the behavioural measure, like the simple scores of this game, is probably far less amenable to researcher degrees of freedom, Chris.
My suspicion is that you've got a lot of choices in terms of how you want to slice and dice your fMRI images.
But, you know, with these standard games, economic games they set up, there isn't so much wiggle room.
Yeah, so I would be inclined to think the same.
And actually, Attia, again, hits a skeptical note about focusing on this relationship measure instead of the actual area.
The raw activation in the areas, yeah.
Yeah, so he says this.
So isn't it interesting that at the thalamus, which is, and you'll immediately appreciate my stupidity when it comes to neuroscience, more proximate to the nicotinamide, what do you call it, the nicotinic acetylcholine receptor,
You have a lower difference of signal strength, and somehow that got amplified as it made its way forward in the brain.
Does that surprise you?
It is surprising, and it surprised them as well.
The interpretation they give, again, as we were talking about before, it's important to match their conclusions against what they actually found, which is what we're doing here.
The interpretation that they give is that it doesn't take much nicotinic receptor occupancy in the thalamus to activate this pathway, but they too were surprised that they could not detect a raw difference in the activation of the thalamus, but in terms of its output to the prefrontal cortex,
that's when the difference showed up.
4B is more convincing than figure 2, because even figure 2E, if you read the fine print, the R, the correlation coefficient, is 0.27.
It's not that strong.
It's weak.
So at the thalamus, it's kind of like, yeah, there might be a signal.
And the scatter graph he's mentioning as well, like...
To say that there is a weak signal is perhaps...
A very scattered scattergraph?
Yes, it's a very scattered graph, like, you know, color-coded for high, medium, and low, and if you can discern the pattern there, you're, like, yeah, so anyway.
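For what it's worth, the r = 0.27 quoted from figure 2E can be translated into variance explained by squaring it, a standard back-of-the-envelope move:

```python
# Coefficient of determination: the share of variance in one variable
# accounted for by the other.
r = 0.27  # correlation coefficient quoted from figure 2E
r_squared = r ** 2
print(round(r_squared, 3))  # 0.073: only ~7% of the variance is shared
```

Which backs up the "it's weak" verdict: over 90% of the variation in the thalamic signal is unrelated to the instructed belief.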
So, Attia, throughout this, I think, I'm pretty much on board.
Attia's coming at this stuff pretty much exactly the same way that I would, but I get the vibe that with Huberman, he's kind of got the stance of a believer, right?
It's like, this is a really cool study.
It's very exciting.
These results were amazing.
Oh, there's some inconsistency.
There's some weak spots.
It's surprising that this doesn't happen.
And he's kind of looking for explanations rather than saying, well, actually, it doesn't add up.
It's a weakness.
It's a weakness, yeah.
And every time Attia raises a point, he's like, this is what we do at Journal Club.
I'm like, but you aren't.
Like, if Attia wasn't here, basically, you would have used the perfunctory, you know, this is a preprint, I'm not saying it's the be-all and end-all.
But Attia is the one that is actually saying, well, but hold on, they didn't find this result in this one, and so on.
And there's a good contrast we can see when they talk about sample size, which, again, Attia raises here.
By the way, this goes back to our earlier discussion.
There could be a huge signal here and we're underpowered.
How many subjects were in this?
You wouldn't have a lot of subjects in this experiment.
Yeah.
No.
And this just speaks to the general challenge of doing this kind of work.
It's hard to get a lot of people in and through the scanner.
Yeah, and it's expensive.
And it's expensive.
I should know this, but we can go back to the methods.
But you can sort of just look at the number of dots on here.
I mean, it's in the low tens, right?
It's like 40, 30, something like that.
So it's possible you do this with a thousand people.
This could all be statistically significant.
Right.
So they talk about this.
Based on this, we estimated that a sample size of N equals 20 in each belief condition in the final sample would provide 90% power to detect an effect of this magnitude at an alpha of 0.05 in a two-tailed test.
Okay.
So that's them referring to what we just talked about, which is: to have 90% power at an alpha of 0.05, which means we want 95% confidence, we need 60 people, 20 per group.
Right.
Yeah.
But if the difference is smaller than what they expected, they'll miss out on some of the significance, which it looks like they're missing between the medium and high group.
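The power analysis quoted from the paper only works out if the anticipated effect is very large. A minimal sketch using the normal approximation to a two-sample comparison (our simplification for illustration, not the paper's actual calculation):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d: float, n_per_group: int, z_crit: float = 1.959964) -> float:
    """Approximate power of a two-tailed, alpha = 0.05 comparison of two
    groups of size n_per_group, for a true standardized effect size d
    (Cohen's d), using the normal approximation."""
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return norm_cdf(noncentrality - z_crit)

# 20 per group only approaches 90% power if the true effect is enormous (d ~ 1.0)...
print(round(power_two_sample(1.0, 20), 2))  # ~0.89
# ...while a merely medium effect (d = 0.5) leaves the design badly underpowered.
print(round(power_two_sample(0.5, 20), 2))  # ~0.35
```

So "we had 90% power" holds only under an optimistic effect-size assumption; for smaller true effects, non-significant comparisons like medium versus high are exactly what you would expect.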
And I, too, was surprised that they did not see a difference between the medium and the high group, but they did in the output of the thalamus.
I was also surprised that they didn't see a difference.
This is kind of interesting in its own right.
Figure 3 shows that belief about nicotine strength did not modulate the reward response, the dopamine response.
How was that measured?
Also just in fMRI?
Yeah, exactly.
So if you look at figure 3b, other people can't see it, but basically what you'll see is that there's no difference between these different groups in terms of the amount of activation in these reward pathways.
Right, but Attia's framing is kind of... kind, right?
Because he's suggesting that the study is underpowered, and that's why you get these non-significant effects.
But the other possibility is that being more highly powered, you would find that, like...
Yeah, you could confirm that the effects are either non-existent or extremely small.
Yes, exactly right.
Yeah, Huberman keeps referring to the findings as surprising.
Surprising they didn't find this difference.
Surprising they didn't find that difference.
But...
It's not surprising, though, is it?
I mean, if your assumption is that if there's an effect here, it's probably quite small, and you've got a small end study, then not surprising at all.
No, no.
It would only be surprising if you assumed the effect was huge.
Yeah, exactly.
And so, you know, again, I think that some people will regard that as like, well, you know, this is just a difference in character.
You know, Huberman is allowed to get excited about the study.
I want to show the conclusion that he reaches.
Having gone through this, and now talked about all these issues about small sample size, and that actually they didn't find a result in two out of the three fMRI measures, nor in the behavioral measure, then...
So, you know, my glee for this experiment, or this paper rather, it's not because I think it's the be-all and end-all or it's a perfect experiment.
I just think it's so very cool that they're starting to explore dose dependence of belief, because that has all sorts of implications.
I mean, use your imagination, folks.
Whether or not we're talking about a drug, we're talking about a behavioral intervention, we're talking about a vaccine, and I'm not referring to any one specific vaccine.
I'm just talking about vaccines generally.
I'm talking about psychoactive drugs.
I'm talking about illicit drugs.
I'm talking about...
Antidepressants.
I'm talking about all the sorts of drugs we were talking about before.
Metformin, etc.
Just throw our arms around all of it.
What we believe about the effects of a drug, presumably, in addition to what we believe about how much we're taking and what those effects ought to be, clearly are impacting at least the way that our brain reacts to those drugs.
A little concerning.
A few notes that are concerning to me.
Yeah, well, so, you know, because the first bit is like, you know, this isn't the perfect experiment, which is fine, that's good.
But then it's like, the implications about this dose-dependent effect are stunning, right?
They reach across all vaccines, like he references vaccines there, and he's quick to say, I'm not talking about any specific vaccine.
This, I think, hearkens to Huberman's absolute allergy to discussing vaccines in a positive way during the pandemic.
But even if this were true, Matt, vaccines, what implication does this have?
Because whatever your internal assessment does, like the extra power or whatever it adds, it is not giving your immune system the information to help it fight a virus that it hasn't encountered.
That's right.
There's a huge difference.
First of all, Chris, before we forget, I just don't get his enthusiasm for this, why it's so exciting that there's a dose-dependent relationship.
They had a low, medium, and high condition in terms of the placebo.
That's been done heaps.
You can give people a blue pill or a bright red pill or a medicine that tastes really strong or one that doesn't, and you see these effects.
You could vary the dose.
I don't understand why that's such a big deal.
As you say, the main thing is that you can see these placebo-type psychosomatic effects, which I accept are almost certainly real to some degree because the psychology interacts with the neurophysiology and percolates down into the body,
and you could see effects even further afield in terms of cortisol levels or stress and things like that.
You're going to see effects on things that are more psychosomatic or things that are more related to psychology.
And that's why this study here is more plausible than, say, some of those other ones, like the hotel workers one, because that's just straight-up metabolism, right?
And it's happening over a period of weeks.
That's fairly different from this, which is about concentration and reward processing and stuff like that, stuff which is actually more tightly connected to how you perceive things and, like, psychological effects.
But then when you actually go even further afield to, like, how the immune system works, and you're talking about, what, T cells getting trained to recognize antigens and stuff like that, then the idea that there's going to be a placebo effect there, and that this has some implications for how other types of drugs, all kinds of drugs, but including vaccines, should be administered or approached...
That seems nuts to me.
It's not just a stretch, it's a huge, galloping, flying leap.
Yeah, and just to be clear that we're not over-interpreting, so he goes on a bit here.
Yeah, to take this to maybe the ADHD realm, let's say a kid has been on ADHD meds for a while and the parents, for whatever reason, the physician decide they want to cut back on the dosage.
But if they were to tell the kid it's the same dosage they've always been taking and it's had a certain positive effect for them, according to the results, at least in this paper, which are not definitive but are interesting, the lower dose may be as effective simply on the basis of belief.
And this is the part that makes it so cool to me: it's not a kid tricking themselves or the parents tricking the kid so much as the brain activation is corresponding to the belief, right?
So that's where this is.
This is why, because it's done in the brain, I think we can, you know, it gets to these kind of abstract, nearly mystical, but not quite mystical, aspects of belief effects, which is that, you know, your brain is a prediction-making machine.
It's a data interpretation machine, but it's clear that one of the more important pieces of data is your beliefs about how these things impact you.
So it's not that this bypasses physiology.
People aren't deluding themselves.
The thalamus is behaving as if it's a high dose when it's the same dose as the low dose group.
It's just, like, he's now kind of, here, apart from a couple of throwaway sentences, presenting it as if this effect is completely established, right?
And that the implications are stunning.
He describes it as, he says, you know, it wouldn't be unethical, but, like, actually telling the patient that they're receiving a different dosage, which is inaccurate, on the basis that their placebo reaction will potentially produce the same effect.
Like, no, that is not ethical, right?
And it's relying on, you know, an over-extrapolation, which is that you could produce these exact same effects.
And, you know, especially when he just talked about vaccines and stuff in the preceding discussion, you're like, it doesn't work like that, right?
Like you say, you might be able to get it for some specific occasions or individuals or some specific drugs.
Maybe you can.
But the way that he's presenting it is, like, well, this looks like it can produce the same effect as taking the drug.
Just tell someone that they're taking a stronger drug and they'll do all the same stuff, and you're like...
Yeah.
I know.
It's like the general theme and it's consistent with what we've seen of him before.
It's just that kind of wild exuberance or, you know, overconfidence in particular studies and running with them, apart from the kind of throwaway statements, the, you know, further research required, etc.
But you can kind of tell that he kind of believes it and runs with it and forms, like, a view of all of these things which is informed by the assumption that a lot of these studies are largely confirmed.
The extrapolation is really quite large too.
I've got to keep delineating the reasonable and the unreasonable version of these kinds of things.
A reasonable version is: if you're wanting to quit smoking, and say you're vaping, a good way to go about it is to gradually reduce the nicotine in your mix.
All of the behavioural type of things are kind of the same, the sensory-motor type feeling is the same, and you're sort of disconnecting the physiological substance addiction from all of the psychological aspects of the addiction.
And it's easier to sort of divide and conquer and quit that way.
That's a super reasonable application of what he's talking about.
But going on to, maybe we can do this for kids with ADHD, for vaccines, or any supplement or any kind of medicine you can imagine, just believe it and make it so.
That's a wild extrapolation from what is a very weak set of results in this study.
Yeah, and I know the vaccination one just stuck with me, but that doesn't make any sense, because it implies that the kind of biological process by which vaccines work is manipulated by the strength of believing in the vaccine.
Vaccines work by giving the immune system experience of a particular kind of virus, so it can store the relevant instructions for when you encounter the actual virus.
You can't do that through your mental power.
Just think about COVID.
Just think about it a lot.
There are some aspects of our physiology which you can control psychologically, right?
Like if you sit down, you take deep breaths and you think calm thoughts, you can reduce your heart rate, right?
We can make a big list of things that are kind of controllable, but there's a lot of things which really are not, right?
And for the operation of the immune system, except perhaps within the very limited degree of, maybe, if you're super stressed or something like that, then maybe that's going to have an effect.
But largely, yeah, it's just over-extrapolation is how I would describe it.
Yeah, and so one last example, Matt, before we get into rounding up on this episode.
So I think another good illustration comes from much earlier in the discussion, where it shows the differences between them and, a little bit, I think, where Huberman's particular interests come into play.
So this is Attia talking about the limitations, again, of research, and they get on to the topic of comparing meat eaters and non-meat eaters.
Yes, although, again, this is a great opportunity to talk about why, no matter how slick you are, no matter how slick your model is, you can't control for everything.
There's a reason that, to my knowledge, virtually every study that compares meat eaters to non-meat eaters finds an advantage amongst the non-meat eaters.
And we can talk about all the- Lifespan advantage.
Yes.
Or disease, you know, incidence studies.
And yeah, it might be tempting to say, well, therefore eating meat is bad until you realize that it takes a lot of work to not eat meat.
For most people, that's a very, very significant decision to make.
And for a person to make that decision, they probably have a very high conviction about the benefit of that to their health.
It is probably the case that they're making other changes with respect to their health as well that are a little more difficult to measure.
Now, there's a million other problems with that.
I picked a silly example because the whole meat discussion then gets into, well, you know...
When we say eating meat, what do we mean?
We're not talking about deli meat versus grass-fed.
Exactly.
Or a deer that you hunted with your bow.
That's right.
So how do we get into all those things?
But my point is, it's very difficult to quantify some of the intangible differences.
And I think that even a study that goes to great lengths, as this one does, epidemiologically, to make these corrections, can never make the corrections.
And so, for me, the big takeaway of this study is, one, this makes much more sense to me than the Bannister paper, which never really made sense to me.
Yeah, yeah.
So Attia is making a good point throughout, right, which is worth repeating: that experimental-type studies and observational longitudinal epidemiological studies complement each other in terms of their strengths and weaknesses, and Attia spells that out quite well.
The example of finding out that people who don't eat meat are healthier than people who do eat meat is a good example of where it's difficult to isolate and attribute causality to that one specific thing, because it's confounded with a whole bunch of other things.
Famous examples that people will probably remember the headlines to, which is drinking a glass of red wine every day or eating dark chocolate or something is going to make you healthier in a variety of ways.
It's almost certainly the case that it's other socioeconomic and lifestyle correlates of those things that are driving the observed differences.
I feel like Huberman is just kind of missing the point a bit when he's talking about, oh, well, yes, there's all these nuances.
Like, was it grass-fed?
Was it deli meat?
Or was it a deer that you hunted with a bow?
That's really important.
That's not what Attia was talking about.
Yeah, just that bow hunting, you know, the reference there.
I wonder who that is pointing to.
But, yeah, so there's a difference.
And this is not to say, I think, that Attia is not part of that world also.
I think he is, but from my reading of this conversation, I think Attia really has chops when it comes to critically reading studies and that kind of thing.
I'm not saying Huberman doesn't have them to some degree, but it definitely looks to be like a pre-replication crisis academic approach to reading papers.
Yeah, I think that's fair.
That is an interesting case, isn't it?
Because he, like Huberman, is very much a self-experimenter.
He talks about how he went on...
Was it Metformin?
I'm forgetting the term.
Metformin.
Metformin.
He went on that based on the original study, which he said that he didn't really...
Oh, no, no, no.
He went on it before even the Bannister study.
Oh, okay.
Yeah, I made that mistake.
But he did kind of point out that the Bannister study was the big thing that made it become very influential, but he was on it years before.
Based on even less evidence, right?
Yeah.
So that's interesting.
And then that one came out, and then he read this paper, which is kind of debunking the thing that he'd committed to.
And he read it with like a receptive mind, shall we say.
And for him, it kind of sealed the lid on this thing.
And it's just an interesting combination of things, because like you said, he clearly can.
He can read studies.
He's got a good sort of scientific approach to these things.
But he's also a self-experimenter.
On one hand, he is re-evaluating his choices based on the new evidence that comes in.
But it's just interesting that he sort of is...
I'm impressed at the level of conviction that we see amongst Attia and the like, because they're talking about years of their life dedicated to taking something which they then think actually it wasn't...
Yeah, and it's sometimes extremely dysphoric, right?
Like it makes them sick.
Yeah, not eating.
Or it makes them nauseous for days, you know.
Yeah, like, you know, you gotta hand it to them.
It's impressive to do that, I guess.
Yeah, so my take on this whole area, like this content that we're covering, is that it is a separate area and it's much more aligned with the kind of academic sphere.
In a way, this is people sitting down and talking for hours about studies, doing critical analysis of studies, or at least purporting to.
And lots of the information that they give, I think, is good.
It's valid.
Unlike, say, a Bret Weinstein, where I think if you listen to it, you're actively being misinformed about the scientific method.
Yeah.
No, that's right.
These guys are many, many rungs above Heather and Bret, for instance.
Yeah.
Yeah.
So our criticisms are kind of a little bit subtle, I suppose.
Not subtle.
That's not the word.
But we're hitting some finer points, except with the vaccination thing.
That's a big one.
With Huberman.
Yeah, with Huberman, yeah.
I mean, I'm not quite sure how to think about these guys because on one hand, you can think of them as enthusiasts and as this being a very serious hobby.
You know, some people are into model trains, some people are into succulents.
For them, this self-experimentation and optimizing their strength and their fitness and their health and their longevity and all that stuff is like an odd hobby.
Yeah, like bodybuilders.
Yeah, like bodybuilders or people that are into piercings or whatever.
You wouldn't necessarily recommend it to everyone.
But it's like, well, you do you.
That's fine.
But I suppose you have to be careful when it comes to health and wellness because, you know, there's a reason why vaccines attract so many delusional beliefs and conspiracy theories.
There's a reason why the supplement industry is worth untold billions of dollars, and it's because I think it's different from model trains or succulents, in that I think people have an underlying vulnerability to these existential issues about death and about health and about being strong and fit and all of those things, which can make it risky.
The thing which sticks with me when looking at this is that they're all about micromanaging your health and the relative levels of different microparticles in your blood and all this kind of thing, right?
And in a global pandemic, Huberman never did an episode recommending vaccination, which is the most scientifically supported and lowest-cost intervention for your health.
And he's steered away from it because, I'm extrapolating, but he's clearly implied that he doesn't want to alienate part of his audience.
But if you were a guy that was just about the science, about health optimizing, and you're willing to talk about these controversial things like metformin, which haven't been proven, you'll talk about them at some depth, and yet you won't touch vaccination.
And you're good friends with Joe Rogan and all the influencers and you're promoting supplements.
That's what makes me raise the eyebrow. It's not that everything Huberman says is bullshit or that the things he promotes are wrong.
But it is a kind of spin-off of the health and wellness space.
I think it's health and wellness for men.
And that's why, in part, I think the scientific aspect has an appeal, right?
Because you're not doing dieting, you're doing optimizing, right?
And so one thing that people dinged me for before when we talked about Huberman is saying like, he doesn't have his own supplement brand, right?
He actually works with another company.
So let me just play the clip of him promoting supplements.
Please also check out the sponsors mentioned at the beginning and throughout today's episode.
That's the best way to support this podcast.
Not so much on today's episode, but on many previous episodes of the Huberman Lab Podcast, we discuss supplements.
While supplements aren't necessary for everybody, many people derive tremendous benefit from them, for things like enhancing sleep, for hormone support, and for focus.
The Huberman Lab Podcast has partnered with Momentus Supplements.
If you'd like to access the supplements discussed on the Huberman Lab Podcast, you can go to Live Momentous, spelled O-U-S, so it's livemomentous.com slash Huberman, and you can also receive 20% off.
Again, that's livemomentous, spelled O-U-S, dot com slash Huberman.
That's what I say to people.
If you don't think Huberman's shilling supplements, like, my God.
He gives you the discount code at slash Huberman.
If you go, his picture is there with, you know, Momentous X Huberman.
And that disclaimer, not everyone needs supplements, right?
Or not everybody requires it.
But then immediately into, but there are great benefits.
People find tremendous benefits for blah, blah, blah.
That's the nature of a disclaimer, right?
Yeah.
Oh, yeah.
And the whole show is all about really being quite hypey about the amazing benefits of this, that, and the other supplement.
And, yeah, the obligatory caveat, which is, you know, consult your doctor and see whether this is right for you.
Yeah.
It's not...
It doesn't hold water.
And yes, you could be shilling supplements without owning a supplement company.
There's a thing called affiliate marketing.
It's out there.
So, yeah, I mean, I don't really buy the defense of, oh, well, I'm not going to talk about COVID or the vaccines because that's not my speciality.
I'm not qualified to.
But, you know, Huberman clearly considers himself qualified enough to talk about a wide range of different things that are not in his speciality.
Why carve out an exception just for this one particular thing that Joe Rogan and the rest of the self-optimizing bros have got a bee in their bonnet about?
No, it's quite obvious what's going on there.
Yeah, yeah.
So, well, that's it, Matt.
This is why I thought it's good to go back into these waters, because I do think there's differences here from a lot of the more dramatic secular guru types that we cover making their profound conspiratorial statements.
I do think this is an ecosystem where the kind of science-y aspect gets more play.
But if you're consuming the content here, a recommendation, as it is with all of the content we cover: just be skeptical.
You can like Huberman.
You can find that they give really good advice about caffeine or sun exposure or whatever.
Moving bottles of water.
But be wary when he's talking about small-n studies and overhyping them.
That's fair enough.
I've got a final thought to leave you on, Chris, which is: if we take Huberman's bullish approach to these placebo effects at face value, totally re-evaluating our approach to pretty much all medications and supplements, then we could be taking sugar pills instead of the Athletic Greens or that other stuff we were just talking about there.
Oh, yeah.
Why didn't they go there?
That's an interesting thing.
You don't have to pay $80 for a three-month supply of whatever, the Hyperzone X. You just take some sugar pills and believe.
Believe, baby.
Yeah.
That's a good point.
Yeah, I didn't think about that.
I wonder why that wasn't what the takeaway was.
Yes, a nice point to end on.
If it's all in the mind, supplementation is a fool's errand.
Oh, you could actually just take homeopathy, because there's nothing there anyway, except whenever they're produced in places with low quality control and there are actual toxins.
So maybe don't go there.
Yeah, just eat a biscuit and tell yourself it's a hyper-brain-enhancing nootropic, however you pronounce it.
And so, there we are, Matt.
There we are.
That's it.
Yeah, that's right.
I don't think they're going to score particularly highly on the Gurometer, but they're both going to be fed into it.
And we said enough positive things about them.
We don't need to pick out any particular thing.
No, no, we don't.
Goodbye and good luck, everyone.
Hope you enjoyed it.
Kind of a decoding academia meta.
Oh, hey, you're not signing out yet, Matt.
You don't get off that easy.
Oh, goddammit.
We don't have a review of reviews this week, but we do have Patreon shoutouts.
I know you're so loathe to thank people, but here I am to drag you down to their level and to get you to reward their kindness, their generosity.
So, shall I begin, Matt?
Please do.
Please do.
Okay.
Conspiracy Hypothesizers.
The foot soldiers in the Decoding Wars.
We have Sam Laffitt, Joe Percy, No Lips or Joints, Sonia Benito, Matty Laycock, Marco Tresh, Young Bai, Andy Walters, Dish, Gerard Sanderson,
Nico Pomato, The Rootledge, Stephanie, Alan, Paul, Wow.
Fantastic.
Thank you very much, everybody.
Conspiracy, hypothesizers, thank you all.
I feel like there was a conference that none of us were invited to, that came to some very strong conclusions, and they've all circulated this list of correct answers.
I wasn't at this conference.
This kind of shit makes me think, man.
It's almost like someone is being paid.
Like, when you hear these George Soros stories, he's trying to destroy the country from within.
We are not going to advance conspiracy theories.
We will advance conspiracy hypotheses.
Alright, you will, you will.
Now, how about, Matt, revolutionary thinkers?
We have a few of those, the people that can get access to the Decoding Academia series and hear us overhype small-n studies.
That would include people like Dag Sørås, the Norwegian comedian, Gretchen Koch, cartoonist, Ben Neihardt, Anonymous Ephesus, not a serial killer at all, just asking questions.
I feel he's been read out before, but nonetheless, thank him again.
Jargar, Magnus Glarum, Lindsay Kate Frey, Will, Catherine Collins, James Glover, Ed Smith, Randy Marinan, Gavin Boynton,
Michael, Jorgen, Joseph Necht, Amy Poza, Wow.
A lot of them.
I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously all the time.
And the idea is not to try to collapse them down to a single master paradigm.
I'm someone who's a true polymath.
I'm all over the place.
But my main claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess.
And it could easily be wrong.
But it also could not be wrong.
The fact that it's even plausible is stunning.
Thank you, everybody.
Thank you, revolutionary thinkers.
Well, I think we've shouted out enough people for this week.
So we'll end there, Matt, with the revolutionary thinkers.
Do you have anything you would like to let our listeners know to end with?