Sept. 11, 2023 - Truth Unrestricted
47:22
Scientism with Dr Dan Wilson

Dr. Dan Wilson, a PhD biologist behind Debunk the Funk, debunks scientism by explaining how peer-reviewed experiments—like his ribosome research or Einstein's relativity—rely on rigorous scrutiny to correct bias, unlike anti-vaccine conspiracy theories. He cites John Ioannidis' work on replication failures and contrasts science's iterative progress with stagnant misinformation, warning of figures like RFK Jr., whose unfounded claims (e.g., AIDS denial) risk undermining public health if unchecked. Science thrives on evidence, not dogma, and its credibility hinges on peer validation, not isolated assertions. [Automatically generated summary]


And we're back with Truth Unrestricted, the podcast that would have a better name if they weren't all taken.
I'm Spencer, your host.
Here today, I'm going to talk to Dr. Dan Wilson, most famous for his YouTube videos as Debunk the Funk.
Why don't you first just kind of introduce yourself and kind of tell us, you know, I know you have a degree in biology, but I'm sometimes a little unfamiliar with all the, you know, which type of biology you do.
Why don't you just introduce yourself, please?
Sure.
Yeah.
Thanks for having me, Spencer.
So not super famous, definitely most known on the internet for my channel, but not super famous.
But yeah, my PhD is in biological sciences, and I did my dissertation in ribosome assembly, which is a pretty complex molecular biological field.
And now I work in industry and work on gene therapy drugs dealing with adenoviral vectors or adeno-associated viral vectors as the gene therapy.
So in my spare time, I do my YouTube channel.
Yeah, I haven't seen all the videos.
I've been there several times and watched some of them, but I'm really amazed at how many you have.
It's quite a lot of work.
I mean, I just do one or two of these every three weeks, more or less.
And it's already a lot, but to do video too, I don't know how that's a lot more.
It's been three years since I've started the channel.
And yeah, the videos do take quite a lot of work, but I find the time.
Yeah.
So I've mentioned you on this podcast before, mostly as a resource to debunk bad information about vaccines and COVID.
And whenever there's a part of it that comes up that, you know, I'm a little confused on, I kind of go have a look and look at what you've said about it.
Or, you know, I have a few other sources, depending on what the actual thing is, but your name has definitely been mentioned a couple times already.
But today I'd like to talk about confirmation bias and a thing that is called scientism.
So generally, for anyone, in case anyone is a little confused as to what that is, confirmation bias is the process of thinking whereby you start from the conclusion and then you work backwards toward the things that will support that conclusion.
That's very simple explanation.
It can get very complex, but despite the fact that it can be explained so simply, it's really not easy to see when you're experiencing this.
I've caught myself sometimes going down this path where I'm like, no, wait a minute.
What about, you know, am I on the wrong track here or whatever?
And I know that when a person is doing quote-unquote science, all of us without training in science, at some point in our everyday lives, claim to be doing some kind of science, and it's not really what you do.
Can we just talk a little bit about how, when you're doing experiments and looking at data, you go about testing hypotheses, looking at the evidence and the results, and attempting not to fall into the trap that's provided to us in our own minds by confirmation bias?
Yeah.
I mean, that's a really, I think, interesting thing that people who don't work in science might not really know about or might not see the mechanisms that science as a community has to kind of solve those problems, because it is a problem in every human mind to have an idea or have a bias in whatever you're testing and want to find a particular outcome.
Whether it's a scientist wanting an experiment to show exciting results so that they can publish something new and impactful and exciting, or if it's just their own pet idea that they want to prove, that bias is there.
Luckily, science does have ways to get around that.
They're not perfect.
But I think to put it simply, one big thing is just that science is simply a really competitive field.
If I'm working in an academic lab and we are working as a group, maybe there's a small group of four or five of us, or maybe it's a big lab with dozens of postdocs in it.
We are all working to make experiments that are going to make us look good.
Excuse me.
And by that, I mean we think, how could people tear this apart?
What flaws in our logic, in our methods, or in our interpretations of the results will people find?
Because when you present your work to the scientific community, whether it's a paper or a conference or whatever, you definitely want to have sealed as many of those cracks as possible.
Right.
Because that way, your work just looks really tight and you get respected for it, whether or not the results are exciting or game-changing or whatever.
Scientists get respected for having really tight, objective, good work.
And so not wanting to be the person who puts a work out there and then it gets shredded by scientific peers who can be less than polite or not very forgiving in how they think of you,
your reputation, that makes you think harder about: what might I be missing in this data set, in this experiment I just set up, that might leave room for cracks or interpretation, if that makes sense.
So yeah, science, for better or for worse, it is really competitive.
And so that does actually weed out a lot of the bias.
Right.
Because, you know, if you, well, I can speak from experience in my PhD.
For my thesis, for the first three years or so that I worked on it, my advisor and I were committed to this idea that this one part of this protein just looked so important, like it must be doing something super important to put the ribosome together.
And so I cut this part of the protein off.
I mutated it, did all sorts of things, and it just wasn't looking like it was that important.
And for those first three years, we were kind of trying to think of ways: well, okay, well, how can we design an experiment to find out whether or not this thing is really important?
And there's value in that thinking, but we were also biased because we had this idea and we were so tantalized by it.
We just wanted it to be true.
You had an outcome that you wanted, yeah.
Yeah, but it just wasn't.
It turned out it did serve a function and we did figure it out, but it just wasn't nearly as important as what we originally thought.
But we had to kind of spend a lot of time doing a lot of experiments that stared us back in the face and said, you are wrong about this.
Yeah.
It's interesting you mentioned that you, in thinking about what you're going to do, how you're going to arrange an experiment, how you're going to display the results, how you're going to collect everything, you think about what the criticisms will be.
I immediately realize that I do a similar thing at my work.
Whenever I'm doing things, I'm, I mean, I work away from the shop.
I work in the field and I make decisions out there.
And I'm always thinking to myself, if something goes wrong that's completely out of my control, but there will still be some parts of it that I do control, which is all the preparation.
If afterwards someone has to ask, well, did you check this?
Did you check that?
Did you get this ready in this way?
And then I have to say yes or no to those questions.
That drives me to go through the checklist of my mind and do it more along the appropriate way.
And because I've been in those situations where things didn't go the way they should have.
And then you have a meeting afterward and you go through it and you say, okay, which way did we do this?
Did we check this?
Did we check that?
And I don't want to have to answer those questions.
Well, no, I didn't do that this way, or no, I didn't check that or whatever.
That makes me very uncomfortable.
So I just think about it and I think, what questions will they ask me at the end of this?
And I say, okay, well, I'm going to need to know all these things.
I'm going to need to have all these things checked and in place.
And most of the time, I don't need them, but every once in a while I do.
And that's, I mean, it's almost like there's 90% preparation and only 10% of it is really relevant or useful at the end.
Does that sound pretty similar to what you experience with when you're doing this kind of work?
I would say so.
Yeah.
Saying 90% preparation and 10% is what you use gave me flashbacks to when you do a PhD in the sciences and you do a proposal first, which is where you tell your committee: this is all the work that I'm going to do during my PhD, during my thesis.
And at that point, you are a, let's say, a young, bright-eyed PhD student and you plan out all these fantastic things and experiments you're going to do.
And then only 10% of it actually makes it into your actual thesis at the end.
So yeah, being a good scientist is mostly planning a good experiment and figuring out how you're going to execute it with the proper controls, and how you're going to actually design it so that it can answer the question you're asking.
And then also still acknowledging at the end what questions it doesn't answer.
Right.
So let me move into scientism.
This comes up in discussions on Twitter and in online discourse, mostly from people who are attempting to deny reality, as I call them.
But scientism is the idea that the entire scientific community is engaged in sort of extended and institutionalized confirmation bias.
The accusation is that science isn't really discovering things.
It's just looking at the answers that are in the textbook and then arranging its results to match those. Usually this accusation is wrapped up in all kinds of religious language: words like dogma, doctrine, holy books, things that must never be questioned, that sort of thing.
And I immediately push back on that because I've never worked as a scientist, but I once upon a time went to do engineering and I could tell already that this isn't how it works.
You can question anything, but your answer has to be useful.
Like the question that you ask has to be useful.
You can question a math teacher, which I did once upon a time. He'll tell you that a function doesn't converge, and then I try to tell him it does converge.
And he'll be like, okay, show me.
And that's the moment when you're stuck, when you have to prove it.
Like you're allowed to question whether it really converges, or what the real criteria for the thing are.
You're allowed to do all those things.
But as soon as they say, oh, really?
Okay, show me.
Show me why you think that's true.
That's when you don't get to just stomp off and run away.
So have you ever run into this?
I mean, you probably have, I'd imagine, this accusation that you're not really discovering new things, that this whole enterprise is just a new religion that gets pushed upon us by a new set of priests.
How do you approach that?
Yeah, I'm sure you have.
Yeah.
I mean, it's a pretty popular idea among conspiracy theorists or people who just like to ask questions or whatever you want to call them.
It's a really common attitude.
And I think it comes from this place of they want to be able to quote unquote just ask questions, but they don't have a system for really getting at the answer.
Yeah.
So because they're not really doing science, right?
So they reject the system that would help them get to the answer to their question and replace it with something more fantastical or comfortable, or whatever the reason may be that they choose to believe it.
But you're right that it's that moment where you say, okay, show me.
And that's the part that scientists actually do all the time.
Yeah.
I always say that if lay people or conspiracy theorists or whoever could attend journal clubs at a department, or seminar talks at a department, or just go to a few science conferences and sit in on as many talks as they can, they would learn a lot about how science works.
Because first of all, it might humble them because a lot of the talks are probably going to be geared towards the scientists' peers and not the lay people.
So a lot of it would be over their head.
But what would be really interesting for people to observe are the reactions, the questions from the audience. Because that's when they'll see, if someone is presenting data for an exciting find, or a find that shows something different, there will be a lot of questions from the audience to really drill down and make sure that what's being presented is real.
Yeah.
Is a real result.
And of course, the conversation doesn't end at the questions.
The conversation goes into dinner and into drinks at the bar and into lab meetings the next week and so on and so on.
It's a never-ending process of scientists asking questions, but also doing the work to find answers.
Yeah.
And it makes progress.
It makes progress.
The questions that scientists are asking today are not going to be the same questions that were being asked decades ago, or even five years ago.
Some questions might be, but especially not in biology.
Yeah.
Yeah.
It's advanced so much in the last 50 years.
Right.
Obviously, some fundamental questions are going to remain questions that just keep going, that are so deep.
But any one lab, any one individual lab that is working in a particular field, they're going to have answered questions and have formed new questions in the span of a few years.
Whereas a conspiracy theorist, or someone who subscribes to this scientism idea, let's just call them that:
They're probably going to be asking the same questions for practically the entire time they are subscribed to that idea.
We have seen, or at least I have seen anti-vaxxers and people who subscribe to all sorts of alternative health beliefs ask the same questions and say the same things for over 100 years now, pretty much as long as vaccines have been around.
It's a belief that doesn't make progress.
And I think that that is really important to see, at least from the outside, and probably a hard thing for someone who is in it to realize.
Yeah, they think they're asking questions, but they're not asking the kinds of questions that they want answers to.
They're asking the kinds of questions that they think they already know the answers to.
Right.
And because they're not asking those questions with an honest attempt to learn, they're not learning and they're not progressing in their ideas.
Yeah.
I mean, you know, in some conversations about this, people will kind of imply that all the, you know, the word shill comes up a lot, but that all the people who are doing this work, they're just indoctrinated into science.
And I mean, these people might imagine that they walk to a university that has first-year biology and they might actually hear the students chanting over and over again, evolution is a process that results in changes in the genetic material of a population over time.
Evolution is a process that results, you know, like if I said that they were doing that, that I walked by a classroom and saw them doing it, they would nod along, oh, yes.
Yes, I've also seen that because that's what they want it to be.
They don't want it to, you know, the idea that it's a real science and that it's coming up with real results directly contradicts with the thing they want to believe.
And that's, that's a problem for them.
And whenever I talk to other people about this who have training in science, I talk to them about the idea of scientism.
The first thing most of them say is: well, if all the experiments were only attempting to adhere to the things in the written textbook that's already there, then there wouldn't be any progress being made in anything.
You would only be able to get the result that was already written down from a previous time.
And you would be stuck in the same way that you described that people who are vaccine hesitant are stuck for 100 years now or more.
They don't generate any new ideas along this line.
They scurry away from the new ideas and cling on to the old ones.
But without those advancements, we wouldn't have all the things we have now.
There would be no smartphones.
Transplant surgeries would never occur.
I mean, the bar would never move forward with anything.
Cars and airplanes wouldn't get progressively better year after year.
And obviously you are well aware of this, right?
But I think it's an important thing to talk about is that we ordinary people have to latch on to these ideas and understand how science works and also be able to, in conversation with other people who are hesitant about these ideas and are pushing back against them to just be able to talk to them and say, look, I don't think that's true.
Or maybe if they don't know how to properly debunk these bad ideas, maybe know at least where to find a resource that can debunk them, right?
And that's the most important thing.
I mean, that's a thing I learned about when I was doing engineering was that you can't know everything.
The best you can do is know where to find everything.
And that's all you can do, right?
Right.
Yeah.
Yeah.
For sure.
And just to add to what you said about finding new things and making progress and making things better and better.
One principle of science is that experiments are built upon previous experiments.
And some people who subscribe to scientism might say, oh, this experiment assumes that all this stuff before it is true.
Therefore, it's just making assumptions.
Right.
I've heard that.
Yeah.
But that's not the case, because when experiments build off of previous experiments, the results, or sometimes just the experiment working in general, aren't just assuming that the previous ones are quote-unquote true.
It's dependent on them.
And I can give an example of this.
In the field I was working in during graduate school, there was one protein in the ribosome assembly pathway that a lab had previously studied and found that if you mutate this part of this protein a particular way, that ribosomes get built just fine.
The finding was that this part of this protein was not important for ribosome assembly.
And then a few years later, another lab published a paper saying, no, this part is actually essential.
We mutated it and found that ribosome assembly can't happen without it.
And so this lab that published the initial result went back and were trying to figure out if they did something wrong.
And it turned out that the mutation they had made reverted because there was a wild type copy in the system that they were using that would get turned off.
And there's some molecular genetics to explain there, but essentially the mutation did not stick.
So to speak.
Yeah.
Yeah.
And so they were studying the effects of a normal protein, right?
There is an example where another lab found a paper and challenged it.
Yeah.
Challenged it inadvertently, right?
They weren't saying, oh, this must be false.
Let's test them on it.
They also wanted to study that protein, but their experiments depended on the lab's results being true.
And they weren't true.
So they got different results.
Right.
And that is how experiments build off of each other.
When fundamental findings are established, then they are tested every time someone does an experiment that builds on them.
And so it's not just assuming.
It is literally building off of results that were obtained previously that if were false, would affect your experiment or invalidate it or show up in another way that makes it clear that something wasn't quite right.
Yeah.
The replication of results is important.
And really, it's the most important thing.
If you get a result, but you can't ever make it happen again, it doesn't mean anything.
And maybe this would be, I can imagine some people listening to this and immediately thinking of this paper written by John Ioannidis, often called the most cited paper ever.
It's not the most cited paper. It's the most downloaded paper, because people like to read things that are exciting and contrary.
And so it's this paper called why most published research findings are false.
Right.
Basically implying that experiments can't be replicated.
Science is way off the mark and never hitting the right thing.
Yeah.
Yeah.
But if you read the paper, what it's really pointing out is that things like medical research, like when people are testing drugs or diets or things in humans, often have trouble getting replicated, because studying humans is complicated and very difficult.
What it does not say or what it does not demonstrate is that most basic science findings are false.
And so the distinction there is basic science would be asking a question like, how does this protein work?
Or does this protein do thing X or something, right?
Yeah.
Like where is it localized?
What does it do?
How does it function?
That is more basic research questions.
Applied research is like, can we target this protein therapeutically with a drug?
You might have a study in phase one trials that says, yeah, maybe.
But then by the time you get to phase three, it might become clear, okay, no, not really.
This drug doesn't work at large scale.
You might find the same thing that the other research team did, where they made the change, but it reverted back again afterward, right?
Like it, you know, it did make a change, but was temporary or whatever.
And, you know, that can happen, right?
I mean, clearly.
And so this is why attempts are made.
And with the increased complexity of especially biological science now, I think it has to be the most complex area of science.
You know, every quantum physicist in the world says, wait a minute.
Well, quantum physics is probably the most difficult to understand, but with the vast number of different combinations of DNA that make everything from microbial forms to complex life, it's going to take millennia to understand it all.
And we're going to need computers even to do all that.
So it's clearly the most complicated, the, you know, most number of things could go wrong in the way that you describe, where you try things and they just don't work out because there were 20 factors you weren't able to fully encompass or there was another reaction that you didn't know about yet or whatever.
But because of that, yeah, you're right.
We're going to try many, many things in this area and many of them aren't going to turn out the way we think.
And so, yeah, John Ioannidis is right when he says that you're going to get a lot of results that turn up negative, but they're not of negative value to science, because the way science works, you have a hypothesis and then you have a test of that hypothesis.
And when you test it, you get a result.
And the result is either that the hypothesis is confirmed or disconfirmed.
Or the third result is that you can't tell.
Your test wasn't good enough to properly resolve.
But all of the tests are new observations with which you can form new hypotheses.
I think Richard Feynman described it with a comparison to trying to crack a safe open, where anyone might be able to just walk up to the safe and try to crack it open.
But he says, I get letters where people say, have you tried 10, 20, 30?
And those are useless because maybe I have already tried 10, 20, 30.
Maybe it's a five-digit code.
Like if you don't even know any of the context of what it is that you're working on, then these are useless guesses.
You have to have a system by which you try to do it.
And of course, if anyone knows anything about Richard Feynman, he definitely did crack safes.
That was one of his sort of pastimes.
And he went about it methodically in the same way he did science.
And, you know, that's why he used that metaphor because he knew it well.
Yeah.
And in trying to crack a safe, he must have had many attempts that didn't work out before he opened each safe.
But you only need to open it once.
Right.
And if you keep track of all the things you tried that didn't work, that leads you closer to a result that will be useful.
Right.
And that's that's why a lot of science is leading to results that aren't good or not coming up with the hypothesis being true.
Right.
And you make me think of another concept that I think people who think that science is scientism might not understand.
And it's also something I think that gives them fuel to believe in scientism or believe that science is scientism.
And it's this fact that normally these people, they are trying to argue against the basics.
They are trying to just ask questions that challenge the fundamentals, the basics of a field.
And those are well established and have had many, many, many experiments over years or decades supporting them.
You know, scientists would have had many, many failed experiments by now if those fundamentals were not true.
And so when someone who thinks that science is scientism tries to question the basics that they don't understand, they get scoffed at and said, well, that's just ridiculous.
But they don't understand.
They might not understand where that reaction is coming from because it really is so just fundamental, so consistently proven, so to speak, that this basic fact of a certain particular field is true.
Meanwhile, scientists are asking questions to push the boundaries of a field, questions that you would have to study for a few years to really understand what that question is and what it's getting at.
But it takes a lot of work to reach that understanding.
So from the surface, it looks like science is scientism.
That's what I'm getting at here.
But yeah, it looks like you're angling your boat toward a certain place because you know that should be the answer.
Like you were given that from the textbook from the start, but you're angling your boat that way because you knew that you had to maybe avoid the shoals on the one side of the lake, right?
Like because you had more knowledge than the people who were on shore just wondering why you're angling your boat that way.
Yeah.
I see it.
I see it often in, I think people who deny germ theory is a good example of this.
Yeah.
Where they will look at the ingredients of a cell culture, for example, and just be bewildered at the names of things that are in a cell culture where you might do an experiment with a virus, not understanding that each of those ingredients has a purpose, that those ingredients are always present if you're culturing cells for whatever purpose you may be culturing them for.
Yeah.
And those ingredients were empirically determined painstakingly by several people over many years.
And for that to culminate in someone reading those ingredients and thinking this is poison is just a funny irony that is also sad.
Yeah.
And I point out that the misinformation problem makes every other problem difficult or sometimes impossible to solve.
You know, that's why we need to work on it.
Because as soon as you have a large enough portion of the population that will stand in the way of a solution that really is going to need more or less all of us to solve, that's when these ideas become actually dangerous.
They become roadblocks to useful solutions.
And, well, I think we've seen that in a lot of ways during this pandemic.
For sure.
For sure.
I mean, one thing that I often think is a shame of just what I've observed and experienced in the world of disinformation is that a lot of people who believe it have the ingredients to be a good scientist.
They have qualities that include, you know, obviously a really intense curiosity because they do spend a long time looking at very specific content, but looking at content nonetheless in the attempt to learn something.
But it's misguided, misdirected into, as we said, kind of a treadmill where they're not going to actually make much progress.
But that intense curiosity, if it could be harnessed and directed in a different direction, could make for a good scientist.
And, you know, I say that because I used to be a conspiracy theorist, believed that 9-11 was an inside job and that the government was hiding cures for cancer.
And I was into ancient aliens and all that stuff.
And I did spend a long time trying to learn about this stuff or just trying to learn about these topics.
And it was fun.
I had fun learning about these things.
It was just leading nowhere.
And getting out of that rabbit hole, so to speak, took a while.
I'd say at least several months.
But one of the things that helped me get out, which is relevant to what we're talking about: I ended up reading this book called The Invention of Air by Steven Johnson.
And it really helped me understand the collaborative, the iterative, and the competitive nature of science.
It's a really fascinating retelling of the history of how oxygen was discovered by a guy who is credited with it named Joseph Priestley.
And the book kind of works to break a lot of the stereotypical tropes that a lay person might think science is.
It breaks the stereotype that science is an ivory tower, a lone wolf endeavor, because that's just not true.
Joseph Priestley interacted extensively with a lot of colleagues, a lot of times at a bar to discuss his findings and try to come up with new ideas or just new ways of tackling this problem of figuring out what is this thing in the air that we need to breathe, right?
I mean, imagine not knowing something that is so basic today.
Yeah.
Like, how do you figure that out?
Well, some smart people or just curious people talk a lot about it and do a lot about it and go through this iterative process of doing experiments, thinking about the results, interpreting the results, and trying to come up with new experiments to challenge whatever interpretation you had for that previous experiment's results.
And it's a really fantastic story.
And realizing that science was so collaborative really helped me dispel the idea that it could be swallowed up by scientism, because I think when I did believe conspiracy theories, I really did think that it was like a few experts, quote-unquote experts who decided things or, you know, just controlled the dogma or whatever.
But it's really not.
It's really a global community that is always trying to prove everybody wrong.
And not just the people outside of its community wrong.
They're mostly trying to prove the other people on the inside of that community wrong.
Exactly.
Exactly.
Right.
I mean, they're not working in concert to protect their set of ideas from the masses.
They're working against each other to prove each other wrong and themselves right all the time.
That's the environment that they're in.
It's not them versus the lay people.
It's them versus them, scientists versus scientists.
Right.
And whatever we learn out of that sometimes gives us a cell phone that does cool stuff.
That's where that is.
Right.
Yeah.
The scientist who makes the finding that overturns a lot of previous observations or a lot of previously conceived ideas.
Yeah.
That's the one that is going to make the most impact.
So scientists are always looking for something that will show us that we've been wrong all along.
But if they say that they have found something like that when it's nothing, then their reputation will just be trashed.
And a scientist does not want that because that feels real bad.
There's nothing worse than being up in front of a room full of experts giving a talk and then someone asks a question or points something out in your work that makes you realize or makes everybody in the room realize, oh, this person doesn't know what they're talking about.
Oh, yeah.
I've seen it happen.
Oh, no.
Yeah.
People like to say that it's possible for one person alone to upend everything, bring in a whole new paradigm, and change the world.
And they're right.
It is.
And people always point, of course, the biggest example, Albert Einstein.
He changed the face of science when he said that time wasn't constant everywhere in all spaces.
Time is this weird, flexible thing.
And they say that and they say, well, if that's true, then why can't, you know, scientist X over here that I really like, who has been shunned by the scientific community or at least told by all of his scientific peers that he's wrong, why can't he be right?
I mean, Albert Einstein was a lone scientist against the world, telling everyone that time worked differently than everyone thought.
The difference between those two stories is that Albert Einstein took an experiment that was roughly 30 or 40 years old at the time that no one could explain.
Everyone could replicate this experiment and no one could explain its result.
And then he explained it in a new way.
And as soon as he wrote his paper, everyone who read it, almost everyone who read it and did physics understood what he was saying and went, wow.
And some of them probably said, why didn't I come up with it?
But they all understood it and said, okay, now we will help to teach everyone else this.
As soon as he wrote his paper, it was never him against the world anymore.
It was him and almost all of the rest of the physics community against the world.
And there were still a couple of physicists who weren't convinced.
And it stayed that way for a long time.
But it actually took a long time to convince ordinary people that this was happening, that relativity was a real thing.
And I remember it's a common reaction.
The first time you're confronted with the seemingly contradictory results of relativity, wanting to reject them almost feels like part of the human mind, because of the way we see the world.
I wanted to reject it.
I did for a long time.
I wanted to reject relativity.
I didn't want it to be true, but I couldn't prove that it wasn't.
I could question it this way or that way.
And the more I read, the more questions that I had got answered.
I went, oh, well, that one's shot down.
Okay.
What if I poke holes at it this way?
And at the end of all my questions, I finally had to admit it, especially once I read about some of the experiments that were done to actually prove it, because it took a long time to really, fully prove that what he was saying about time was true.
I think it was only in the 90s when it was finally, finally done.
But I, for a long time, I just didn't want to believe it.
You know, it's just such a backwards thing.
And all of physics knew it was right.
And I was not convinced.
I was willing to die on that hill, be a lone wolf and say, no, it's wrong.
And in the end, I had to admit that Einstein was better than me, because he actually was, really, you know. But Bret Weinstein isn't.
He has these ideas and the people who understand very complex things about biology don't agree with him.
And that should be the first sign that we shouldn't trust what he's saying, because he hasn't convinced the rest of his scientific community that he's right, the way Einstein did.
And he has just sort of gotten angry about it and tried to convince lay people anyway that he's right.
Yeah.
I think that if Einstein were alive today, he probably wouldn't be trying to promote the idea of relativity on podcasts.
He probably would be busy trying to convince his peers with evidence.
Well, that's how he did it originally.
Yeah.
And I think that's just generally a good rule of thumb.
If someone is disagreeing with experts and they're just doing it on podcasts or TV or whatever media, and they're not publishing papers or doing experiments that are gaining respect or attention from their peers, then I think you can, as a layman, just ignore them.
I think that's a safe, a safe rule of thumb.
It's a strong approach.
Yeah.
Yeah.
And that might sound like I'm saying, you know, don't think about these people, dismiss them, whatever.
But I'm saying this because it really is going to just be a waste of time, 99.9% of the time, for a lay person to get swept away by a contrarian on the news or on a podcast and think that they're right and the experts are wrong because that's just not how good scientists convince people of a controversial new idea being true.
Yeah, they convince each other first.
Exactly.
The way Einstein did.
That's why there's a scientific community.
So I think I've taken up enough of your time.
I'm really grateful again.
Thank you for coming on the podcast and giving us an hour of your time.
Of course.
Yeah.
So your YouTube channel is Debunk the Funk.
Is that right?
Oh, right.
Yeah.
Did we say the name?
I didn't, I don't remember.
Well, I did.
Yeah.
But I just wanted to reiterate it here that people should go check that out.
You might be surprised.
I had no idea that RFK Jr. was so deeply anti-vax.
I was like a lot of people, I just thought he was a member of a famous family who happened to brush up against politics sometimes because that's what their family did.
But that's not really the case.
And it's important that he not become president.
Just saying.
Yeah, he probably won't.
But yeah, no, if his ideas survive his run for the presidency, we'll still have the problem of his ideas.
Oh, for sure.
Which they probably will.
So oh, yeah.
His ideas predate his run for presidency, and they will probably last afterwards as well.
But it's most important that he not become president because of some weird loophole or whatever.
Yeah.
Yeah.
So important.
That's the only thing we take away tonight.
That part really.
Again, please.
Yeah.
He's also an AIDS denier.
So that's also bad.
Yes.
Yeah.
But yeah, if it surprises you that RFK Jr. is an AIDS denier, you can see why and find out all about that on my channel.
What he actually said and where he said it.
And it's a really great resource for anyone confused about any of these things.
Yep.
Yeah.
And if anyone has any comments about this podcast, they can send that email to truthunrestricted at gmail.com.
And with that, I think we'll sign off.
So thanks for coming on.
Yeah.
Thanks for having me.
It was a good time.
Great.