All Episodes
May 20, 2020 - Dark Horse - Weinstein & Heying
01:07:01
E16 - The Evolutionary Lens with Bret Weinstein & Heather Heying | Meaning, Notions, & Scientific Commotions | DarkHorse Podcast

The 16th livestream from Bret Weinstein and Heather Heying in their continuing discussion surrounding the novel coronavirus. Link to the Q&A portion of this episode: https://youtu.be/D4Vp5JeW4uU


Hey folks, welcome to the Dark Horse Podcast live stream, our 16th.
We have heard you, internet, loud and clear, and we promise you more Cat Bell is coming.
Not only more frequency of the rings, but higher amplitude, so stay tuned for that.
I am sitting with Dr. Heather Heying.
We have many things planned for today.
We are going to have a discussion that will last for whatever period of time it lasts, and then we will switch.
We will take a break, and then we will switch to another link, which will be findable in the description to this video, at which we will take your Super Chat questions.
So, you can file those questions during our discussion here, and then we will take as many of those as possible, and then switch to Super Chat questions filed in the Q&A.
All right.
I thought we would start today by talking about the catastrophic meaning-making crisis that we are facing and what we might do to bootstrap our way out of it.
Let's do it.
Alright.
So, the place I want to start is this.
As many of our long-time listeners and followers will know, I am an evolutionary theorist.
Now, people do not generally have a good grasp, though, on what theorists actually do, and I must say there's something about the term theorist that when I hear somebody describe themselves as a theorist, I roll my eyes just like everybody else.
But nonetheless, theorist is how I understand my scientific role.
And what it means is that there are some tools in the theorist's toolkit, which I believe are not well understood by most people, that I think are actually vital to the crisis that we find ourselves in now.
Let me just say, although this isn't where I thought we might be going, that when I hear theorist, and you are indeed an evolutionary theorist, it's not really what I do. I can play along, but it's not really what I do.
I'm more of an empiricist.
And I think of those two things as counterpoint to one another and both as necessary.
And I don't know if you see these two approaches as sort of compatible with one another, but very much describing different parts of the process.
Would you say so?
Yes.
100%.
Each necessary to the other.
In fact, you will remember many years ago, as I was engaging the questions surrounding telomere senescence and cancer, in a fit of frustration, I wrote a mad poem.
And this very point was made in the center of that poem, that these two things are hands that heave loads best together, I think, was the phrase.
But in any case... Man, do I wish we had that poem right now.
We could read that poem.
That would be perfect.
It would be.
Unfortunately, I didn't think to... Yeah.
So, I think we just need to define the terms.
And let me just, let me start with empiricist.
It's easier.
It's more straightforward.
It is in closer alignment with what people imagine, if they have an accurate view of what scientists tend to do.
And most people don't think of theorists as even in the realm of possibility, right?
There's a lot here. We've talked a little bit in past livestreams about the difference between hypothesis-driven science, also known as the hypothetico-deductive method, versus data-driven science, for which I tend to use scare quotes, versus, you know, conclusion-driven conjecture, which is being passed off as data-driven science a lot these days.
Both of these, you know, theorists and empiricists, ought to be driven by hypothesis if they're actually doing science; the question is whether what they are most fundamentally doing is generating hypotheses or testing hypotheses.
And so an empiricist, which can perhaps be most aptly synonymized with experimentalist, is someone who makes an observation, generates as many possible hypotheses as there might be to explain that observation, generates predictions which would inherently follow from each of those hypotheses, and then begins to try to test those hypotheses, specifically, hopefully, in the Popperian universe, trying to falsify those hypotheses.
And so it is that act of testing via either carefully controlled observation or experiment that is the empirical part of that process.
That description I just gave doesn't include the analysis, which is generally data analysis, which is generally statistical analysis, and it doesn't put as much of a focus on how it was that you generated your hypotheses in the first place, and whether you are outside the scope of what other people have even considered.
And so, you know, I found, for instance, that I had a particular skill that I never knew about before I started spending time in the field alone: yes, at making observations and generating hypotheses and predictions, but specifically at figuring out what experiments would accurately test for distinctions between hypotheses.
And so that's what the empiricism is: the question of what experimental design and then follow-up data analysis would allow you to distinguish between hypotheses.
That's sort of more the realm of empiricists, whereas theorists spend more time earlier in the process and often are doing not sort of brick-in-the-wall type science, but would seek to do paradigm-shifting science.
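To make that cycle concrete for anyone following along at home, here is a minimal sketch in code. It is not anything from the episode or from any real study: the observation, the two candidate explanations, and their test outcomes below are purely hypothetical placeholders, with the empirical step stubbed out as a fixed result.

```python
# Minimal sketch of the hypothetico-deductive cycle described above.
# Everything named here is a hypothetical placeholder, not a real study.
from typing import Callable, List

class Hypothesis:
    def __init__(self, explanation: str, prediction_holds: Callable[[], bool]):
        self.explanation = explanation
        # A prediction that must hold if the explanation is true;
        # stubbed here as a function returning a fixed outcome.
        self.prediction_holds = prediction_holds
        self.falsified = False

def run_cycle(observation: str, candidates: List[Hypothesis]) -> List[Hypothesis]:
    """Attempt to falsify each candidate explanation; return the survivors."""
    survivors = []
    for h in candidates:
        # The empirical step: a controlled observation or experiment checks
        # the prediction. A failed prediction falsifies the hypothesis.
        if h.prediction_holds():
            survivors.append(h)
        else:
            h.falsified = True
    return survivors

# Illustrative usage: one observed pattern, two competing explanations,
# with made-up experimental outcomes.
observation = "frogs call more on humid nights"
candidates = [
    Hypothesis("humidity carries the calls farther", lambda: True),
    Hypothesis("predators are scarcer on humid nights", lambda: False),
]
print([h.explanation for h in run_cycle(observation, candidates)])
```

The point of the sketch is only the shape of the loop: observation first, multiple candidate explanations, and a test designed to eliminate rather than confirm.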
Well, there are two things in here.
One is, what exactly does a theorist do?
And then the other is, what is their objective?
Okay.
And so let's just say, I think that there's actually a confusion in the public's imagination about theory that arises from the expectation that a theorist generates theories, which is not true.
A theorist cannot generate theories.
So let us step back into a dialogue that we've all heard many, many times that will begin to unearth the question.
So we've all heard at some point some person advocating for the teaching of some creation theory alongside Darwinism in school or something.
And inevitably, in making the argument that this should be done, they will make the claim that Darwinism is just a theory, right?
And so why shouldn't other competing theories be taught right alongside it?
Now, this is actually a non sequitur, right?
It always reminds me of, I do not think that word means what you think it means, right?
The word theory has a very precise meaning, and if it is understood precisely, then the claim that Darwinism is just a theory, of course, doesn't make any sense.
And this is what you always hear the scientific establishment responding with to these claims that creationist theories should be taught alongside Darwinism: that the word theory is as close to a fact as scientists ever get.
But then, when we travel around in other disciplines, we find that this isn't true.
The term theory is invoked all the time for things that are not as close to a fact as we ever get in science.
And in fact, sometimes it's used for things that aren't even remotely fact-like.
And it would be easy pickings to go after grievance studies fields here, right?
And even legitimate humanities and social science fields, which tend to use literary theory and such really broadly and too liberally and frankly inaccurately, but we don't even have to go there.
We can stick to talking about science and the misuse of the term within science.
Right.
So, theory is actually the top of a hierarchy of concepts.
And the reason that we're raising this, and the connection to the meaning crisis, is that if one understands how elegantly these different kinds of ideas fit together and lead to each other, then one discovers that the tools for fixing the meaning-making apparatus already exist.
They are well understood at a philosophical level, but they are not widely understood.
Even people who study science for a living are often quite vague on these things.
So, for example, we frequently see in the major journals papers that report data of some kind but have no relationship to any hypothesis that is described anywhere.
And what you and I have generated over years of teaching is the idea that every paper that reports data ought to be doing one of two things, and it ought to say which it is: it is either data leading to a hypothesis or data that tests a hypothesis.
Data should never be collected in isolation of those two things. Either you're looking to unearth a pattern that needs an explanation, which would be a hypothesis, or you're testing an explanation that has been provided, either by you or someone else, in order to see whether or not it stands up to that attempt to falsify it.
And this is in fact one way to make the distinction between data-driven and hypothesis-driven science.
The data that were collected, and I don't even love to use the term data here, but we'll do it.
Data that were collected absent a hypothesis, in which the researchers were letting the data reveal patterns that the humans behind them didn't actually have any prior idea about, have not tested anything.
What they have done is the first step in the usual scientific process, which is in fact a cycle, such that they have now generated a hypothesis. But compare the results of those papers that use data to produce a hypothesis, even though that's not what they say they're doing, to papers that actually test a hypothesis by having a hypothesis, generating data, and then figuring out if the hypothesis is borne out or falsified.
Boy, you just can't even tell the difference by the way that people talk about them or respond to what it is that they're claiming.
And in fact, they're very different in terms of the standards of proof that they are then letting loose on the world.
Yeah, in fact, there's a kind of rarely spoken but widely detectable sense that hypothesis testing is almost an archaic mode, that we've gotten past it because things have gotten more complex, or our technology has allowed us to see into a level of detail where that doesn't work, and of course it's nonsense.
I'm reminded of the faculty member I taught with once at Evergreen who revealed in front of the class, and this is a direct quote: "I used to think hypothesis was important. Now I know better."
So, again, I want to step back.
There's a lot of things that need to be sort of in the mix here in order to ultimately get to the power tools.
And really, from the point of view of those of you listening or watching at home, the real point here is that if you understand how these tools work, and it isn't all that complex, your thinking will, without knowing anything new, become that much more powerful.
These things enhance the power of what you already know to an immense degree.
So, hang in there if this is seeming far afield.
Where was I headed?
I was headed somewhere.
You were going to explain what a theorist is.
Well, oh yeah, that's one thing.
And you were backing up to a larger view, I think.
So, a theorist, ah, I know what it is.
There's a question.
I've said elsewhere, and actually it's gotten quite a lot of positive reaction, that every great idea starts with a minority of one, right?
If you're going to change the way a field sees something, then it's first going to occur to you, and you'll be the only person who believes it until you come up with something that might demonstrate to others the need for them to pay attention to it, which, frankly, sometimes takes decades, right?
So it starts out as a minority of one.
But in a world where what you have is endless appeals to authority.
What that person says makes sense because of the credential they have, I'm going to believe it.
That world in which ideas compete based on how appealing they are or who's saying them is not a world in which the idea that occurs as a minority of one can ever win because it will always be driven to extinction by the majority who may have a vested interest in preventing the idea that upends them from ever gaining currency.
So how is it that these minority-of-one ideas ever take hold?
And the answer is they take hold because of an elegant, scientific, philosophical system that says exactly what is necessary in order for your idea to replace the ones that preceded it.
And it doesn't anywhere mention anything about authority or credential or publication or any of those things.
It has nothing to do with them.
What it has to do with is whether your idea explains more and or assumes less than the ideas that it is replacing.
If you come up with an explanation for an observable pattern and your explanation assumes less than the preceding explanations or explains more of what is observed, then it is taken to be correct.
May I rephrase that slightly, which is that the scientific method explicitly rejects any kind of gatekeeping.
There is no stay-in-your-lane admonition within the scientific method, and we have come in modern science to do a lot of gatekeeping and a lot of finger wagging and stay-in-your-lane and, oh, you don't have expertise in that, therefore you can't speak to that, or you can't apply for grants in that.
And some of that is maybe necessary in a giant world with a lot of people vying for very limited resources.
A little bit of it is how do we channel interest and funds and such, but the vast majority of it, if not nearly all of it, is now being used as a way to keep out ideas that are, in fact, heterodox, that are more shamanistic, that are more about consciousness as opposed to established culture, that might break the wall that is the current paradigm in whatever scientific interpretation we're currently working under, as opposed to putting yet another brick in it.
And another-brick-in-the-wall science is absolutely necessary and is terrific, assuming the foundation on which you built it and the wall itself are sound. So, I'm going to take a little detour and steel-man the gatekeeping, okay?
I hate to do it, but I think it's the right thing.
So, there is an argument that we need a kind of gatekeeping at the boundary where we take that which we think we know to be true and we try to enact it as some kind of policy.
Obviously, we have to have somebody empowered to say, that idea is not worth taking seriously if we're figuring out how to allocate resources or what medicines are safe.
We have to have some kind of gatekeeping.
The problem is, once you establish that, and it seems like gatekeeping is what keeps the world functional, then it gets transported into the philosophy of science where it doesn't belong, and it just kind of gets grafted onto it and gets used by people who have perverse incentives to shut down their competition, which is both philosophically wrong and very bad for the world.
It results in us being farther behind in what we understand than we would otherwise be.
So I was just about to add on and say this is exactly what we see in peer review, which seems like a good proposal, but became so mired in its own internal perverse incentives that it is really hard to trust.
But I was reminded before I began saying that that you were trying to steel man the idea of gatekeeping, and I'm not sure I totally heard the steel man.
Oh, that we need it when we get to what are we going to do?
But what is correct, we don't need it.
Okay.
And I would say when I hear peer review – Say that again.
Be clear.
There's a point at which you take what we think we understand and you know it's imperfect, but you have to translate it into policy.
There's some kind of gatekeeping necessary there.
And I do not like gatekeepers and I think they've become so corrupt that I just – I'm not defending anybody who holds the position of gatekeeper.
But I am saying in principle what I want is some standard by which we figure out how to translate what we think we understand into policy that eliminates some of the hazard that would otherwise accompany that step.
And is the idea there that the abyss between the people who figured out what is true and the people who think they know what to do about that truth is typically so deep, such a big chasm, that there are few people who can actually interpret in both worlds?
Why is that the moment that gatekeeping, you think, is maybe most easily defended as necessary as opposed to the various other moments?
Well, I mean, you know, to take a trivial example, there is some hazard that comes from the update process, and we theorists must pay no attention to it, right?
It is a non-factor, right?
The objective of the exercise is to figure out what's right and how you're going to establish it.
But that is not the same thing as trying to, you know, make a planet function and prevent harm to people and things like that.
So you have to have some mechanism such that your policy isn't going to change on a dime the day you realize that something you thought was this way is actually that way.
You have to have some coherent, practical policy.
Now, again, I hear myself defending something I absolutely detest, which is the abuse of this process.
And, you know, you are an extraordinary example of someone who does do both of these things.
And, you know, perhaps that's the reason for some of the criticism that comes down the pike at you in particular, which is that, no, you do this thing.
But it's all very much of the tenor of stay in your lane, we know what you do, you know, get out of our space.
And, you know, it's territorial, it's pissing on boundaries, all of this.
So how do you maintain that gatekeeping which is necessary while still allowing those who can, who have the chops to, and who are willing to transgress boundaries that most people can't transgress?
All right, two things.
The answer to your question comes down to a principle that we've advocated to students for, you know, a decade and a half, which is, it is natural for your science to affect your ideology, right?
What you know to be true will certainly affect what you think ought to happen.
It is completely unacceptable for your ideology to affect your science.
That is to say, your ideology doesn't change what is true at all, nor can it be allowed to.
So it's a one-way relationship.
So, asterisk, you had a second thing, but I have something to say in response to that.
Okay.
The second thing I wanted to say is when I hear the word peer review now, I have the sense that I'm hearing somebody say something analogous to the Patriot Act, right?
The Patriot Act is a desperately unpatriotic document.
But it's labeled in such a way that you'll think it's doing this job.
Peer review seems like, well, if you don't want your peers to evaluate your work, it can't be very good.
No.
I want my peers to evaluate my work.
I want them to do it in public, and I want them to show their work, so that if they turn out to be wrong, then the credit flows the right direction.
If they turn out to be right, I'm also interested in the credit flowing the right direction.
But it has to happen out in the open, because otherwise what happens is people abuse this process to shut down their competitors.
That's right.
So that was the second piece, that we shouldn't be misled.
It's just like data-driven.
Data-driven sounds like a defense of empiricism and who could possibly be against empiricism.
But that's not what it is.
It's data-driven, right?
The driven is hidden behind the word data.
So people will think that all you're doing is saying something obvious that every scientifically minded person would embrace, when in fact it's a coup against hypothetico-deductive science by just this endless cycle of observation.
Yeah.
And, you know, you could, without changing the meaning at all, and it wouldn't tip too many people off to what was going on, instead of saying driven, say first.
Data first versus hypothesis first.
Do you want your science to engage hypothesis first or data first?
And if it's data first, you've got a problem because you have to go through many more iterative processes before you've actually tested a hypothesis.
And since most of the science doesn't do that, you actually don't have a full cycle of the scientific method.
With regard to your first point, you had said it is unambiguously clear that humans, being humans, as they do science, will find as they learn things through their science that their worldview is affected by what they have learned.
But what we cannot have is that process going the other direction, which is to say we cannot have ideologies affecting science.
And this reminded me of the objection, and this gets deeply political in a very modern way very quickly, but the admonitions from many people emerging from grievance studies, but also from many good-faith academics and thinkers, who are making cries for greater diversity in science, for instance.
And the easy one to go to, because we've both thought about it a lot and talked about it a lot, is we need more women in STEM.
We need more women participating in science, technology, engineering, and math.
And, you know, this is not the place to go into all of the ways that the statistics that are usually trotted out here are often misleading, and the goals that people are advocating for are probably not the goals that we should be shooting for.
But I remember saying to a number of earnest, smart, but frankly confused young people, mostly women but some men, at a conference that we were participating in, actually the summer of 2017, after Evergreen had blown up, before we knew that we weren't going to be faculty there anymore.
When they were asking me, when we had the opportunity to get up and talk about what the nature of sex and gender is, the evolution of sex and gender actually, and what it means about proclivities and abilities and desires and all of this, I said: who you are, what your particular demographic is, should have absolutely no effect at all on the results that you get if you are doing good science.
It should have no effect at all.
And therefore, in that regard, if scientists are just being handed questions to address, then the particular answers that are coming out of those scientific questions should not vary based on where those people are from, what sex they are, what color they are, how old they are, what their socioeconomic background is, any of that.
And that's of course a perfect world and it's not exactly totally true, but the fact is that the scientific process specifically is about extracting the bias of the human doing the work such that you can get to a result that would be true no matter who you are.
But what isn't the same is what questions you choose to ask.
So the place where diversity, to use a super nuanced and politicized term now, the reason that we actually do want a more diverse pool of people doing science and asking questions across all domains, is that your particular history, which includes your worldview, which includes whatever political views you have because of the questions you've asked in the past, will affect the questions you ask.
And the questions you ask are, of course, part of the purview of what you do as a scientist.
So diversity doesn't matter at all for the results, but it does matter for the questions, which of course affects the results.
It shouldn't matter for the results, but for the kinds of corruptions, mental and otherwise, that enter into the process.
So we are going to have to get back to the basic structure here.
No, no.
We're somewhere so important though.
I will say, I do think there are certain questions, and this goes back to considerations of empathy and who you will have an easier time understanding.
There are places where I do think our field has actually wildly fucked up, not to put too fine a point on it, because for historical reasons that have nothing to do with this field in particular, it was historically done by men.
And so what it discovered, the order of things it discovered, had a lot to do with what males are more likely to see and what they're more likely to be unable to see.
And so my favorite example of this is Jane Goodall, who is a great hero of mine.
I know of yours as well.
Saw her video this week, actually; it was great to see.
She usually travels all the time, but under lockdown, she had time to make a video.
Defending bats.
Defending bats.
Wonderful.
Thank you, Jane.
But anyway, I think Jane Goodall cracked the code of the great apes where men had failed.
George Schaller, in particular, had failed with the great apes after he had succeeded with lions, actually.
But she cracked the code because, A, she was not over-educated, so she didn't impose what she had learned from her elders on her chimps.
But, B, she had the intuition to avoid the male bias against empathizing with her study organisms.
There was this sense that it would be a scientific corruption to, you know, to anthropomorphize your creatures.
But, of course, we're talking about... She got all sorts of grief for naming them.
Right.
And for claiming that they were doing things like engaging in war.
Well, and, you know, the key thing that she did was ingratiate herself to her study organism so that they would ignore her enough that she could understand what they were up to, which, you know... This became an absolutely necessary part of the study of social behavior of anything that is in any way conscious.
You know, I didn't particularly need to ingratiate myself with my frogs.
They would fall on me from trees no matter what and ignore me mostly, but if you're studying, you know, dolphins or elephants or chimps or any primates, really, or crows or, you know, a number of organisms, then you would need to.
And was it Leakey?
I can't remember now which big name in anthropology tapped not just Goodall.
Oh, it was Leakey.
So it was Louis Leakey.
Just to add a little bit more meat to this story, Louis Leakey, who is this big old man in anthropology, says what we really need in order to understand human origins and human behavior is a better grasp of what is going on with the, at that point we thought, three extant species of great apes; bonobos were classified as a small species of chimp, so he didn't have people going after the bonobos.
But he said, I'm going to find three people who I can send into the field for a long time and see if they can't crack, you know, crack the code, so to speak, of what it is that the chimps, gorillas, and orangutans are doing.
And he was quite explicit about choosing three women.
He chose Jane Goodall for the chimps, Dian Fossey for the gorillas, and Birute Galdikas for the orangutans.
Galdikas is, I believe, still working in the field, and Fossey obviously was killed by poachers, and Goodall has become a massively important cultural figure and probably has helped save not just many chimps but many other organisms throughout her work.
And Leakey's position was not just, okay, the men have been trying and they're not getting anywhere, but: I, Leakey, actually think that there is a chance that women actually have more empathy and are going to be better able to basically do the theory-of-mind trick and imagine what it is like to be a chimp, a gorilla, an orangutan, and therefore get somewhere with these studies.
And of course, all three of those women were trailblazing.
And we know much more than we would have had there been any other individuals, including most women, right?
But there is some kind of work for which the particular propensities and proclivities, on average, of men versus women may make one or the other more suited.
Yeah.
So, and you know, it's interesting that the story is that, you know, the grand old man of paleontology sent these women to do this work because he understood that they would get it better.
And I think history bore out his suspicions.
So anyway, it's a complex story.
I would say there's one other place that I think our field has screwed up as a result of a kind of male cognitive bias that's blinded us, and it has to do with our misunderstanding of male sexual displays in lekking species, and that there is a way in which men who don't entirely get women have looked at the fact that females in many species require males to do these elaborate, seemingly senseless things, and they, you know, had kind of a reaction to that.
And anyway, I think had the field incorporated women more quickly, we might have fixed this one sooner.
So one more anecdote here before you get back on.
Let me say, I do think we have the answer to that puzzle, and at some point here it's going to emerge, and I think it's a fully satisfying, very interesting answer.
But what are leks about anyway?
Yeah, why does the peacock have such a fancy tail?
Well, we know why he has it.
But why does she care?
Because the females require it.
But why do the females require it?
That's the tough part.
Yeah.
So the anecdote is, you know, animal behavior and especially the study of social behavior was, of course, biased towards mostly males doing it for a very long time, just because of the history of science and how work of any sort really outside of the home has been done in humans.
And there's a book by Sarah Blaffer Hrdy, who is, I don't know, is she our academic?
Sibling.
Sibling?
Yeah.
So she was a student, she was an early student of Dick Alexander's, right?
Is that right?
Bob Trivers.
She was, oh yeah, she was an undergrad when he was a graduate student.
Okay, so Sarah Blaffer Hrdy is an extraordinary researcher.
She's worked in India on langurs and has written a number of excellent books, including Mother Nature, which has different subtitles depending on when and where it was published.
But it is such an excellent investigation of the basis of social behavior in mostly primates, including humans, that, and we've talked before about how neither of us really ever liked to use textbooks, this was one of the key books that I would use in my animal behavior programs.
And it is mostly an investigation of social behavior through a female lens.
It's about mothers and daughters and how fathers come into the picture, and in pair-bonded species how important they are, and in non-pair-bonded species how the females have to defend against, you know, putative fathers and would-be intruders in all of this.
And at the end of, I think it was the first time I taught animal behavior using this as one of my books that we seminared on every week, I remember hearing some, it was good-hearted, it wasn't deep criticism, but some sort of grumbling on a field trip from some of my male students that they hadn't had any idea that the field of animal behavior and the study of sexual selection was such a female-biased field.
My God, I had no idea that that could be an interpretation, but I feel like inadvertently I just flipped the expectations on their head, because no one, I don't think, in the history of the study of animal behavior had ever before voiced the concern that it was too female-biased.
And it's really just, you know, which stories and written by whom do you run into first?
And yes, if you're studying social behavior, You really do need both men and women doing it.
If you're studying quarks, it's not clear to me that you do.
And if you're studying, you know, medical procedures, it's not clear to me that you do, although medical procedures need to be done on both men and women, because men and women have fundamentally different anatomies and physiologies and everything else.
And it's also worth pointing out that our era, the era in which we studied in graduate school, our mentors were all of a generation that they could be forgiven for a certain view of women, you know, a throwback view.
But none of the people who mentored us had that view.
They were all very farsighted like your dad was.
And had excellent female students that they produced because they had no such bias.
So anyway, kudos to Arnold Kluge and Dick Alexander and Bob Trivers.
All of these men were ahead of their time with respect to understanding how broken that particular chauvinism was.
And just not caring.
They didn't privilege female students.
They didn't seek out female students.
Nope, they didn't give you a special break.
Can you talk science?
Can you generate hypotheses and experiments and all of these things?
And if you can't, I really don't have the time for you and I'm not going to be gentle on you because you're female and I'm afraid you might cry or something.
That's chauvinism too, and none of them engaged in that either.
Not in the slightest.
Alright, so let us return to the meat of the matter.
Theorists do not create theories, they cannot.
What theorists do is advance hypotheses, and those hypotheses, if they survive test and ultimately go on to be the default assumption of the field, that is the moment at which they become theories.
So I think the important thing...
The reason that this is so deeply connected to the meaning-making crisis and the possible route out of it is that that process whereby you take a minority perspective, you demonstrate its importance, and it replaces the dominant paradigm.
This is a rigorous mechanism whereby heterodoxy becomes the new orthodoxy.
And orthodoxy has this bad connotation, but in this case what I simply mean is that it is the standing of what we commonly believe to be true.
You know, that the Earth goes around the Sun is now orthodoxy, as it should be.
It's almost certainly correct.
How did it get there?
Well, it started as a minority perspective that had to win the day through this process.
So, to make the process a little clearer, just realize that mathematics has an analog of this whole thing, and the terms are different.
But the logic is very similar.
It's not identical, but it's very similar.
So in mathematics, you can offer a conjecture.
That's a proposal.
That's the equivalent of a hypothesis.
You can offer a proof of the conjecture.
A proof is analogous to a hypothesis test.
And if a conjecture is proven, then it becomes an axiom.
And what an axiom is, is the default assumption for what is true.
It has become axiomatic.
So in science, instead of doing those things, again it's not exactly the same but it's close, you observe a pattern.
You hypothesize an explanation for the pattern.
A hypothesis is effectively a conjecture that comes along with predictions.
And then those predictions allow you to test the hypothesis and see whether, in fact, it stands up to that rigorous challenge.
At the point that something has stood up to enough challenge that the default assumption of the field is that it is true, then it is a theory.
And so this explains the mysterious conversation that we hear so many times, where people allege that Darwinism is just a theory, and that that means that other theories should be taught alongside it. That is in fact a non sequitur, because there aren't other theories. There is only one theory of how life became complex on Earth, and it is Darwinian in one form or another. So let me just reiterate what you just said in a slightly different order, because I think it's important and it bears repeating a few times.
Science is to hypothesis as math is to conjecture.
So hypothesis and conjecture are the same thing in science and math space.
Science is to test as math is to proof, right?
Testing in science is like the proofs in math.
And science is to theory as math is to axiom.
And I think you've also said or postulate or theorem in math, right?
So, science has hypothesis and test and theory, and math has conjecture, proof, and axiom, also postulates and theorems, and those are, you know, more or less analogous points, with one important caveat distinction between theory in science and axiom in math, which we're going to get to.
So, I'm a little out of my depth here.
I'm really not a mathematician, although obviously there's math in my family very close by.
The difference, I think, is that when a proof of a conjecture has been delivered that has withstood scrutiny, I believe it is accurate to say that the axiom that arises is simply true.
And we don't have to worry about it being falsified because the nature of proof is such that proof isn't like a big stack of evidence that something is right.
It's an ironclad demonstration that it is right.
And so once something has reached the status of axiom, is it beyond disproof?
I suspect it is.
So that's different than in science where a theory, Darwinism, could be false.
Let's say, for example, we are living in a simulation and Darwinism was hard-coded into it, or the appearance of Darwinism was hard-coded into it, in order to figure out, you know, what we humans would think in response to it.
Then it's not real and the appearance that it is real is explainable by a larger hypothesis, which could be potentially demonstrated, but nonetheless, it could be false and we could discover it.
But at the moment, we have no reason to think that's coming any more than we have reason to think that we're going to discover that the earth doesn't orbit around the sun.
One of the distinctions, at least in the realm of science that we often engage in, is also that a test is, as you said, basically based on a stack of evidence that then endures a probabilistic assessment.
Statistics is a probabilistic assessment.
And because it's probabilistic, the theory that may result after many, many, many tests and the test of time, as it were, is still based on probabilistic analysis, whereas a proof is not generally probabilistic.
It is yes or no.
Each of the steps has been binary, and so it is not a probabilistic situation.
Right.
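To make the probabilistic point concrete, here is a minimal sketch, not from the episode, of what a statistical hypothesis test actually returns: a probability of seeing data this extreme if the hypothesis were true, never a binary verdict. The coin-flip scenario, the observed counts, and the rough 0.02 figure are all illustrative assumptions.

```python
# Minimal sketch of a probabilistic hypothesis test (illustrative numbers only).
# Null hypothesis: the coin is fair (p = 0.5). We observe 62 heads in 100 flips
# and ask how surprising that would be if the null were true.
from math import comb

def two_sided_binomial_p_value(heads: int, flips: int, p_null: float = 0.5) -> float:
    """Probability, under the null, of a result at least as far from expectation."""
    expected = flips * p_null
    deviation = abs(heads - expected)
    return sum(
        comb(flips, k) * p_null**k * (1 - p_null) ** (flips - k)
        for k in range(flips + 1)
        if abs(k - expected) >= deviation
    )

p = two_sided_binomial_p_value(heads=62, flips=100)
print(f"p-value = {p:.3f}")  # ~0.02: rare under a fair coin, but not impossible

# The test only says "this would be rare if the hypothesis were true";
# it never delivers the yes-or-no certainty of a mathematical proof.
```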
So, one of the outgrowths of this that I think people who use the term theory casually don't understand is that there should really only be one theory for a given observation.
At a time.
The theory is that which is so likely to be right that it is taken as the assumption.
And it can be displaced by some other theory, but then it's no longer a theory.
It gets demoted to the status of hypothesis, and in fact it's a falsified hypothesis at that point.
So that is an important fact.
Theory is not some infinitely large space.
It is not something you yourself produce.
It is the result of a combination of things.
Actually, it's a result of the exact combination of the teamwork that you were describing before.
Right?
You generate a hypothesis, the empiricist tests it.
Sometimes the empiricist is also the theorist, but it doesn't matter.
But the end product of the cycle is theory.
So why are we so confused about this?
This is the problem, and this is a difficult one to deal with.
What we have just described is the way it is supposed to function.
It very frequently doesn't function even inside of fields where we wield the term theory as a defense of, for example, Darwinism, where we swear that the creationists don't know what they're talking about because they say Darwinism's only a theory and we say you don't get what that term means, right?
Even inside of a field where we need that defense, we're not consistent about the use of the term.
But there are other fields in which a much worse job is done.
And I think that this is the problem.
Part of the crisis in meaning making has to do with the fact that there's not even a standard between disciplines about how to use these terms.
So I would say the worst offenders, in my opinion, outside of maybe the humanities, who abuse the term theory like nobody's business, and maybe we'll return to that.
But the worst offenders inside science, I would say, are the physicists who very frequently, when they talk, talk as if they are advancing theories, which doesn't make any sense if you use the proper definition.
And this results in an immense amount of confusion that, frankly, I think is hiding in plain sight.
Right?
I think anybody can see it.
So there's a hierarchy.
You have something that isn't a hypothesis.
It's an idea.
It could be true, but you don't know how to test it.
It doesn't make any obvious predictions.
Okay?
That's a notion or interpretation.
Okay?
It's not necessarily wrong, but it's below a hypothesis.
A hypothesis is an idea of why something might be the way it is that comes with a prediction that would tell you if it was false.
Right?
So it has a higher status because it comes along with something that gives it the potential for being eliminated if it isn't true.
A conjecture doesn't have to be, you know, we can conjecture that this is one of a, you know, a billion universes and that would have no implication for us here in the universe.
So, you know, I can't say it's true or false, but it's not as good as a hypothesis.
And then you've got, you know, a hypothesis that has withstood test, and if it withstands enough test that it becomes the default assumption, it becomes a theory, right?
So that's the hierarchy.
You've got notions, you've got hypotheses, you've got theories, which are hypotheses that have become the default.
Basically, they have, in a legal sense, you might say they have the presumption of being accurate, and they are capable of being removed, but until they are, we just treat them effectively as axiomatic.
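As a reader's aid, here is a minimal sketch, not from the episode, that encodes the hierarchy just described as a tiny decision function; the two yes-or-no attributes and the placeholder ideas used to exercise it are illustrative assumptions.

```python
# Minimal sketch of the hierarchy just described: notion -> hypothesis -> theory.
# The placeholder "ideas" and their attributes are purely illustrative.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    makes_falsifiable_predictions: bool   # can it, in principle, be shown wrong?
    is_default_assumption_of_field: bool  # has it survived enough tests to be presumed true?

def classify(idea: Idea) -> str:
    if not idea.makes_falsifiable_predictions:
        return "notion / interpretation"  # not necessarily wrong, just untestable so far
    if idea.is_default_assumption_of_field:
        return "theory"                   # holds the presumption of accuracy until displaced
    return "hypothesis"                   # testable, but not (or no longer) the default

# Placeholder examples, only to exercise the logic:
for idea in (
    Idea("explanation with no testable predictions", False, False),
    Idea("testable new explanation", True, False),
    Idea("explanation that became the field's default", True, True),
):
    print(f"{idea.name}: {classify(idea)}")
```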
But what do we do with something like string theory?
Why do we even call it string theory?
If I'm to understand, string theory has yet to deliver a testable prediction.
Now, that doesn't mean it's wrong, but it does mean it's a notion or an interpretation.
It's not even a hypothesis.
And if we discussed string theory as string notion, or, you know, then... It loses a little bit of its cachet.
It loses a bit of its cachet, as it should.
And, you know, again, that doesn't make it wrong.
But it doesn't deserve the status of theory, the top level, right?
The default assumption.
Now what happened in physics, if I'm to understand, again, I'm not a physicist, what I understand is that effectively string theory socially won the day.
People were so excited by the idea that it effectively killed off all of its competitors and it took over the field, despite the fact that it had yet to rise to even the first level of scientific rigor, which would be to deliver a prediction that could, in principle, allow it to be tested.
So, until we realize that, that there are things out there masquerading as theories that aren't anything more than a notion, we have a real problem.
And I think that this is actually corrupting field after field.
So, in our field, panpsychism.
Panpsychism, the idea that consciousness is written into the very particles of the universe.
That's a nifty notion, but that's about all it is.
It doesn't make any prediction that I'm aware of about nature.
It is actually a response to the difficulty of the study of consciousness and that's it.
It's some idea that would just simply render it outside of our purview and therefore not our problem.
I don't see any reason anyone should take it seriously at all, but nonetheless it is talked about in serious circles as if it were something scientific, when in fact it hasn't even risen to the bottom rung of the ladder.
So you've got a category, notions, into which you've currently put string theory and panpsychism.
And I think it's a great bin to start filling with things that have been given probably more press and more importance than they deserve, if only for the fact that no one pointed out: not testable, therefore it couldn't possibly be a theory, because not testable means not knowable, and it can't even be a hypothesis.
Yeah, and I should say I'm violating one of my own rules here.
I'm kind of being a dick by calling them notions.
That's really just designed to annoy people.
Okay, maybe we need a new name for the bin.
Well, the bin has a name.
It's Interpretation, right?
That's not quite right, is it?
Yeah, I mean, it is in physics.
So, you know, the Copenhagen Interpretation, for example.
This again points to another one of these things that's artificially been boosted because it's a cool idea, but it still hasn't reached the bottom rung, which is the many worlds interpretation, right?
The many worlds interpretation is consistent with, you know, events happening the way they seemed to or seem to.
This is emerging a hundred-ish years ago, maybe.
This is quantum mechanics.
Yeah, this is in response to, you know, the double-slit experiment and the Schrödinger experiment, or thought experiment.
But anyway, the point is, yes, you can solve the puzzle by imagining an infinite number of universes springing off of each other.
I think it's the worst idea I've ever heard.
But nonetheless, I get that technically it's consistent with what we see, but that isn't even the first level of saying, okay, it's consistent with what we see, but you want to make it a hypothesis, tell us what it predicts that we don't yet know to be true.
Right?
I would say likewise, the interpretation that this is a deterministic universe or could potentially be a deterministic universe.
If the point is in a deterministic universe, you wouldn't know because all of this was predestined from the very first particle interactions, then, you know, okay, I get it.
But what does it predict?
The bottom line here, we have a rigorous set of tools.
It involves using terms that people have butchered by misusing them.
And if we can resurrect those tools and just use them consistently and well, I believe we are halfway out of the meaning-making crisis, because we will at least know what the hierarchy of ideas is.
You know, are you offering a hypothesis?
Is it a theory?
Is it just a notion?
You know, knowing where things fit, I believe, will be very helpful.
And I probably should not point out that one place where this is very important is hypotheses of collusion, which get dismissed by boosting them to the level of theory, which they don't deserve, not until they've been demonstrated.
So, you know, Watergate... We don't have to call them collusion hypotheses now, do we?
Well, I'm using the word collusion because conspiracy has become so tightly associated with the word theory that it's like an instant grounds for dismissal.
And so my point would be Watergate is a conspiracy theory, right?
We all assume it happened, right?
Lizard people is a falsified hypothesis.
We can falsify it on the basis of phylogenetics.
Right?
9-11 was an inside job?
That's a hypothesis, right?
It's not the theory, right?
It hasn't been falsified as far as I'm concerned, but it fits in with a bunch of other theories, including the 9-11 Commission's theory of what happened, or hypothesis of what happened.
It's hard to keep them straight.
It's hard to keep them straight, yeah.
So anyway, Uh, I don't know.
Let's try to be consistent with the terminology.
I'll try to do a better job than I just did there in that last sentence.
But, uh, Watergate?
Now that's a conspiracy theory.
Yeah.
Good.
Yeah.
All right.
Well, we're not too far from an hour.
Do we want to just touch on a couple of addendums and corrections from previous shows?
Sure.
Let's do it.
And then take a break?
Okay.
I think we've got three of them here.
Let's see, in the Super Chat last time, someone asked about a paper that's on a psychology preprint server, which asks: does this pandemic make people more conservative regarding gender roles?
A quote from it is, In times of threat, people can embrace traditional ideologies to cultivate meaning and purpose.
Environmental uncertainty, that is, can promote desires to preserve the status quo.
This effect is observed historically in U.S. presidential elections, throughout which times of greater societal threat to the established order have predicted increases in preferences for political conservatism, reference. Specific evidence for Jost and colleagues' 2003 motivated social cognition model of conservatism also comes from findings that people in the U.S. reported more conservative attitudes after the terrorist attacks of 9/11 than before, regardless of whether they personally self-identified as liberal versus conservative.
Similarly, increased salience for the Ebola epidemic in 2014 predicted preferences for more conservative political candidates.
So all of those are referenced in this paper, and the authors go in with the hypothesis, well actually a few hypotheses, that people will actually shift to the right politically during a pandemic, which they find no evidence for, incidentally.
And then two other hypotheses, if memory serves, if I can find it here.
Boy, there's a lot of sort of jargony gobbledygook in here.
Epistemic existential motivational processes, yada yada.
Does a pandemic render people more strongly conforming to traditional gender roles and believing more strongly in traditional gender stereotypes?
Both of which they claim to find evidence for, which is interesting if true, and raises a number of questions that we might just talk about for a couple of minutes.
You want to start us off?
Yeah, I'm having trouble remaining objective about this claim, because, you know, I'm watching the pandemic as a scientist, and in general, I'm pretty good at keeping things straight.
And on the other hand, you know, I'm watching it as a father.
And I'm having this sense that everything that makes life work is coming apart, clearly without anybody having an understanding of what replaces it.
And, you know, we're not going to be locked down forever, no matter what happens, even if we just end up facing the virus and, you know, tens of thousands die needlessly, which is not something I'm advocating.
And I know you're not advocating, but even if that's what we end up doing, We will escape lockdown soon enough and things will go sort of back to some kind of normalcy.
I don't think they go back to normal but they go back to something.
But I do think we've lost our innocence, and I wonder, I really wonder, how romantic love emerges from this for people who are in the early phase of their life, where they're still trying to figure out who they are and who they're going to be with and how they're going to find them and how they're going to court and woo and do all sorts of things that they're going to need to do, whatever the modern terms for those things are.
So anyway, I guess, I guess the point is, there's, you know, if I hear that people return to traditional gender roles, in part that goes very much against what you and I have been advocating, which is that it's time for a renegotiation of those roles, because so much is different about males and females in the world we live in.
On the other hand, you know, at least there's some pattern there.
There's something that people could depend on, and... And this is, I mean, I think the argument, which is that in times of deep uncertainty, where maybe what you need is a totally new approach, but knowing that totally new approaches are very likely to be flawed, the shaman is more likely to be wrong than right.
Yeah.
even though you need the exactly right shamanistic approach, people are more likely to retreat to the known, the comfortable, the understood, even if it's patterns that they themselves were fighting against before.
Yeah.
So, yeah, I mean, that does feel a little bit right, that we do need it until the new way has been plausibly spelled out.
And, you know, the way this is done in the context of modern markets is nothing like a viable new way.
But until some new way has been spelled out, until somebody has proposed the renegotiated arrangement between man and woman, I get why people would retreat.
Mm-hmm.
So let me just say one more thing about this paper, which I read just quickly before this and you haven't had a chance to yet, so you have not looked at their methods.
With regard to testing whether or not people were endorsing gender stereotypes at higher rates after the pandemic, or during the pandemic, versus before, for which they found statistical evidence, their methods I find really weak and a little strange, and that's not, I think, just because I find the gender stereotypes themselves, you know, not particularly comforting at all.
But I'm not going to put you on the spot and say, you know, come up with some gender stereotypes here, but I think all of us just listening and such can imagine what it is that you might think.
What would it be like to be more masculine or manly?
Think of some things.
And now, what would it be like to be more feminine or womanly?
Think of a few things for a moment.
And now, let me share with you the words that they used.
Which is that, you know, they say endorsement of traditional gender stereotypes was assessed by eight items.
Of these eight items, four items assess attitudes towards men and four towards women.
So we've got eight words, four of which are supposed to be more masculine and four of which are supposed to be more feminine.
For the masculine ones, we have risk-taking, brave, courageous, and adventurous.
That doesn't strike me as four things.
It strikes me as one thing.
I love them all, but that doesn't strike me as four independent things.
And for attitudes towards women, the four things that they supposedly used were clean, hygienic, sanitary, and pristine.
I've got a problem with the category, but it's only one category, right?
They have basically brave versus clean.
I could tear apart the women-are-clean thing, but probably most people watching or listening came up with some version of courageous or risk-taking or adventurous as being a typically more masculine trait, and one which many women also engage in a lot.
But this isn't eight things.
At most it's two, and one of those two, which is represented by four words here, is questionable in and of itself.
So I'm not compelled by at least that part of this research.
I ain't seen the methods, but that sounds like a weird kind of pseudo-replication.
Exactly.
Yeah.
Exactly.
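For anyone who wants to see why four near-synonyms behave like one measurement rather than four, here is a minimal simulation sketch. It is entirely hypothetical and uses made-up numbers, not anything from the paper under discussion: four "items" are generated from a single underlying trait plus a little noise, and they end up so correlated that averaging them adds almost no independent information.

```python
# Hypothetical simulation of pseudo-replication in survey items.
# Four "items" come from one latent trait plus small independent noise,
# mimicking near-synonyms like brave / courageous / adventurous / risk-taking.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_respondents = 500
latent_trait = rng.normal(size=n_respondents)  # the one thing actually being measured

# Each item = latent trait + a little item-specific noise.
items = np.stack(
    [latent_trait + 0.3 * rng.normal(size=n_respondents) for _ in range(4)],
    axis=1,
)

# The items are nearly interchangeable: very high inter-item correlation.
corr = np.corrcoef(items, rowvar=False)
off_diagonal = corr[~np.eye(4, dtype=bool)]
print(f"mean inter-item correlation: {off_diagonal.mean():.2f}")  # roughly 0.9

# Averaging four restatements of one trait barely changes the picture,
# unlike averaging four genuinely independent measurements would.
print(f"variance of a single item: {items[:, 0].var():.2f}")
print(f"variance of 4-item mean:   {items.mean(axis=1).var():.2f}")
```

Four genuinely independent measures would cut the variance of their mean roughly fourfold; four restatements of one underlying trait, as this toy simulation shows, do not.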
Okay.
So there's a lot more we could do on that paper, but let's, since I think we're coming to an hour, let's just quickly talk about two more things that have come up in the past.
Oh, I don't remember which livestream it was, but we were talking about that Contra Costa study, the one that invited people via Facebook, and it turns out the wife of the principal investigator was specifically promising people that they would be able to get tested, and that if they came up positive, they might be more likely to be able to go back to work.
In Contra Costa County, which is the county in California that has San Jose, Silicon Valley, etc.
We talked a lot about the various reasons that we didn't trust the results.
Well, there's another thing now.
I only find it on BuzzFeed News, which is obviously a questionable source, but they find that a whistleblower complaint has been lodged at Stanford claiming that JetBlue's founder helped fund the study and that that, which is potentially a pretty big conflict of interest in the financing of the study, was not disclosed in the publishing of the study.
So, you know, one more thing to make us suspect that that whole study was politically motivated at best and potentially quite skewed.
Well kids, if you're going to go with corrupt study design, go big or go home.
That's the lesson here.
There you go.
Okay.
And finally, I think it was last time or the time before, you were talking about the aircraft carrier, that... okay.
So you want to fill us in here?
Yeah, there was an alarming report.
This is one of the things I fear most in this crisis.
An alarming report of people who had recovered from COVID-19 who were showing signs of a new infection.
And this would be disastrous.
It's still possible.
Off of the, I just can't think of the name, the Roosevelt.
So this would be a terrible thing, if it turned out that having had it once didn't mean that you were immune, not even for a short period of time, that you could get it again almost immediately.
So what we have now begun to see out of South Korea is evidence that the second wave of positive tests on these people is not the result of new infection, but is in fact the result of them shedding, forgive the term here for a second, dead virus. People will complain about that and they'll say viruses are not alive to begin with; we don't need to fight about it. The point is, you've got viruses that are capable of inducing an infection.
These are complete viruses.
And then you've got fragments.
And these tests are such that they don't detect an entire virion.
They detect a particle that is recognizably viral in origin.
But that does not mean that you have an active virus in your system.
So I think in this case, they were actually shedding dead cells from their lungs that had been infected.
And it's not surprising that those would have molecules that would trigger these tests.
Yeah, I don't know if it was lungs, because the one last thing I wanted to say about this, that you had found this on the Bloomberg site, and strangely... Yeah, I guess you can put that up, Zach.
You had found that on the Bloomberg site, and they didn't link to what should have been a preprint.
We don't expect it to be a peer-reviewed article at this point, but a preprint.
And so I went looking and I found Stars and Stripes, I found Mother Jones, I found Forbes here, and all of them are telling the same story, but none of them have links to the preprint.
And you look, the only thing that I see to indicate why is at the top of this Forbes story, which says, South Korea's health agency announced Monday.
Which sounds like it was a press conference, which is really even such a lower bar than anything else that we have yet seen.
This means that, and I really, really hope this is true, this flips the script again back to: okay, hopefully if you've had it and you've got antibodies, you now have some level of immunity and you cannot get it again or have it re-emerge.
But this, it's 200, I think they're looking at 400 people maybe, 285 of which had it, had tested negative, had tested positive again, and they go and find these dead, in quotes, viruses that are causing the antibody test to test positive again.
But all of that, all of that that I just said is hearsay, because it's from a press conference.
We haven't been allowed into the scientific paper.
And of course, scientific papers almost never have links to the actual databases.
And most people can't do the appropriate analysis on someone else's data set, especially if they're outside of their own field.
But you can do what we did, for instance, on, you know, this Rosenfeld and Tomiyama paper on whether or not a pandemic can make people more socially conservative, or what we did specifically with that Contra Costa study paper: went through, looked at the methods, went, the methods are not going to be reliable here.
Therefore, we at the very least can't say that they found what they think they found, or that they found what they say they found.
In this case, we can't do that.
And that means, you know, I really hope it's true, but where's the preprint?
If they did this work, like, it needs to be out there in the form of a paper, and preprints allow multiple revisions up until, you know, up until it's accepted somewhere and published.
So it should be out there, and I really, I can't see what the excuse is for it not being available for people to look at.
So this is a particular thing.
It's rare in my experience, but it's not unheard of.
And it is a terrible corruption of the method.
This is the opposite of doing your peer review out in the open.
This is, nobody's even able to access it.
Now here, presumably, this is not intentional.
But I recall in my own past, in my paper on telomeres, senescence, and cancer, I had a prediction about cloned mammals, and in particular Dolly, that she would die of... The sheep.
Dolly, the first cloned mammal, the sheep, that she would die of pathologies that were beyond what her age would predict she would have.
And at the point that she died, we... Of pathologies beyond the age at which she was supposed to be?
We don't know.
Because they didn't publish it.
They told us that it wasn't the case.
They told us that they had put her to sleep to put her out of her misery because they couldn't stand to see her suffer from the pathologies that she had.
But she died far earlier than most sheep of her species.
I don't remember the specifics.
We don't know because they euthanized her.
But we know how old she was.
Yes.
They euthanized her, claimed it had nothing to do with age-related pathologies.
She did have early arthritis.
But in any case, it made it impossible to evaluate.
So there was a hypothesis test that had been run effectively, but we weren't allowed to see the result of it.
So anyway, this is something that happens, and it is really problematic when it does.
All right.
I think we're good.
We are going to... Yep.
Do we want to say something about what we're going to do next time?
Or do we want to leave that on Saturday?
Yeah, I think we can say something.
Okay.
So, we don't really know yet, but we noticed that Saturday, when we are scheduled to do our next livestream, is May 23rd, which makes it the third anniversary of...
Bret having 50 or so students whom he had never met before show up at his classroom and chant for his immediate firing or resignation, and the rest is history, and largely televised as well.
We have not talked much publicly, just the two of us.
Benjamin Boyce did a very nice interview with us that summer, I think, and we've both talked.
You've talked a lot, you know, with Joe Rogan and Dave Rubin and Sam Harris and such publicly.
And we talked to Mike Nayna.
Oh yeah, Mike Nayna has done an extraordinary job.
But we're thinking of spending the next live stream talking a bit about Evergreen and maybe going over some things that haven't been public so far, maybe.
Yeah, so we'll talk about what it looks like three years out.
Yeah.
Okay, so we will be back in 15 minutes.
We will do Q&A with your Super Chat questions.
If you are not subscribed to the channel, why don't you subscribe, hit like, hit the notify bell, and set it to all so you don't miss future live streams and other content, and we will see you very shortly.