Interview with Daniël Lakens and Smriti Mehta on the state of Psychology
We are back with more geeky academic discussion than you can shake a stick at. This week we are doing our bit to save civilization by discussing issues in contemporary science, the replication crisis, and open science reforms with fellow psychologists/meta-scientists/podcasters, Daniël Lakens and Smriti Mehta. Both Daniël and Smriti are well known for their advocacy for methodological reform and have been hosting a (relatively) new podcast, Nullius in Verba, all about 'science—what it is and what it could be'. We discuss a range of topics including questionable research practices, the implications of the replication crisis, responsible heterodoxy, and the role of different communication modes in shaping discourses. Also featuring: exciting AI chat, Lex and Elon being teenage edge lords, feedback on the Huberman episode, and as always updates on Matt's succulents.
Back soon with a Decoding episode!
Links
Nullius in Verba Podcast
Lee Jussim's Timeline on the Klaus Fiedler Controversy and a list of articles/sources covering the topic
Elon Musk: War, AI, Aliens, Politics, Physics, Video Games, and Humanity | Lex Fridman Podcast #400
Daniel's MOOC on Improving Your Statistical Inference
Critical commentary on Fiedler controversy at Replicability-Index
Hello and welcome to Decoding the Gurus, a podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer, and we try to understand what they're talking about.
I am Professor Matthew Browne, and with me is Associate Professor Chris Kavanagh.
Well, thank you for giving me my appropriate title, Matt.
I didn't want to say my name's June.
People did notice that occasionally I give myself a title, I refer to myself as Professor Browne, and just call you plain old Chris Kavanagh.
Look at that.
What's that, Henta?
What disrespect?
It is, of course, deliberate.
My strategy with you, Chris, is there was a little thing called variable ratio random reinforcement conditioning.
And I think it's better to dole out these little tokens of respect for good behavior.
Did you know?
Take it away.
Take it away.
Keep me worried.
That would work were I to...
Ever notice.
I think the only time I've ever noticed is when you've called me an ass-prof.
I got my antenna up.
You are an ass-prof, Chris.
Admit it.
You know you are.
You don't need to emphasize that.
I appreciate that there are other people out there looking out for me.
You and your...
Senior academic.
Casual disappointment.
It's so part of my daily life.
I don't even notice it.
These tenured freaks.
You're like a beaten dog.
Electrified floor.
You've stopped even trying to get off it.
You've given up.
No hope of getting any respect.
No.
And it's ass-pro, by the way.
Not ass-prof. Ass-pro.
Okay, that's true.
I guess that's better.
We all know who wears the pants in this podcast.
Let's just leave it at that.
We all know.
That's true.
That is true.
And on this podcast, we're going to have an interview today, Matt, with Daniël Lakens and Smriti Mehta, two open science advocates, academics related to the psychology field.
I won't...
Force them to be identified as psychologists.
I respect people's academic self-identification.
But that will come later.
Because in this opening segment, we need to get a couple of things off our chest.
We've got a few problems with a couple of people that we've got to bring up.
But I do want to say, you know, our last episode was Huberman.
We got various feedback.
We addressed some of it.
In the Gurometer episode, which you can go and be a patron and listen to if you want.
But we did receive a nice piece of feedback from somebody who is a listener to Huberman and Adia.
I just wanted to mention it.
Maybe I'll just read it.
It's not that long.
As a regular listener of the Huberman podcast and Peter Attia's podcast, I was eagerly awaiting you guys to cover this journal club.
I'm the first to admit that I probably fall into the mold of the typical optimizer.
I watch what I eat, I'm a fitness enthusiast, though I'm not at the Attia and Huberman extremes of ingesting 100 supplements per day.
I'm well aware of the absurdity of health optimization, and I've heard Attia, Huberman, and Tim Ferriss being able to take a step back.
Matt made a good point.
It's kind of like a hobby for people like me, and I'm sure there are deeply rooted reasons for the crowd of optimizers to pursue this quest, like an unconscious way to deal with existential questions, fear of death, etc.
So far, so good.
Nice self-awareness.
And here, Matt, this is the...
The bit that I think is good.
I mean, that first bit was good too.
But I like this.
I find it quite disappointing to see that an academic like Huberman who claims...
that science is at the heart of his show manages to widely misinterpret research papers.
It is frustrating for people like me who aren't specialists in the fields he covers and who don't have time to delve into the research he cites to back up his claims that he misrepresents the truth.
I'm sure that he does not do this willingly and I know that he doesn't do this the majority of the time but still it's both sad to see that and great that you guys debunk some of the bullshit that he puts out there.
He's a very good communicator, though, which is probably...
part of the reason for his popularity.
And he also commented on the jargon; we speculated about some of the medical jargon and technical terms being occasionally performative.
And he said that from listening to the podcasts, particularly Attia's...
this definitely is the case, at least from his reading.
But I like this, Matt, because it's someone, you know, they're not saying, Huberman's the worst?
You exposed him as a charlatan.
But they are saying the thing which I talked about in the Gurometer, which is for somebody who presents himself as being all about the science and all about communicating carefully the best information, he doesn't do that in a lot of respects.
He overhypes low-quality studies without the appropriate caveats.
It's perfectly reasonable that people don't have time to go and dig into primary literature and check these things up.
So it is just a shame.
That was...
Yeah, that was very well said.
Yeah, that was a great bit of feedback.
And like you said, good to hear it from someone who is and remains a fan and enjoys listening to it, but is a little bit disappointed with some aspects.
Also, that's exactly the kind of both sides-y kind of… Yeah, that's what we love.
And, you know, a little point there too is I think you can have that little bit of self-awareness and just be aware that anything to do with health… And reducing the risk of death, et cetera, has the potential to interact with those existential psychological concerns,
which can sometimes lead us to take things a little bit too far.
But this is not to pathologize or say it's totally illegitimate and everyone should be a slob like me and Chris.
No, not at all.
And there are some things that are a lot of fun that people enjoy.
I do have these roots in various insecurities or psychological little itches I like to scratch.
And I don't believe in pathologizing these things.
It's okay, but it's also good to be aware of it.
That's all.
Yeah, and I would mention the name, but I didn't ask if they wanted to be identified with the comment.
So if you are the person that wrote it, thank you for sending it.
And yeah, good feedback.
A comment that I did not like.
Can we go to our airing of grievances now?
Yeah, yeah.
You go first.
So he steps up to the podium and he says, thank you for the award.
I just have a couple of things I need to get off my chest.
People have tried to stop me here and carry on.
I had a great segue.
You kind of ruined it.
I was going to say, I've been avoiding online any talk about politics, which I generally do anyway, but particularly now, since things are quite severe in the Middle East.
Call me a coward if you like.
I don't want to weigh in.
And I avoid that.
And that's like 80%, 90% of Twitter at the moment.
What I do still enjoy talking about, which I feel like is a relatively safe space, is still talking about AI.
These large language models, what it can do, what it can't do.
It's very interesting to me.
I like to talk about it.
But Chris, some of the opinions you hear are so annoying.
On one hand, you've got the doomers over here.
And on the other side of the spectrum, you've got the people that still persist and say, pish posh.
It's all very boring.
It's a stochastic parrot.
Nothing to see here.
I want to get these two groups of people together and let them duke it out.
But there was one fellow wanting to engage, and he was a librarian but an enthusiast.
His background is clearly in philosophy with a strong interest in linguistics.
So we have librarian, philosopher, linguist.
A great combination.
Does not presage well for getting good opinions.
How dare you, Matt?
My powers are ramping up the stick, so yes, please do tell.
We just lost the librarians, the linguists, and the philosophers.
That's half our audience.
I mean, one of those is enough, surely.
Out of all three together, you know, I'm just saying.
So this fellow, absolutely convinced that there is just nothing to AI whatsoever.
Totally uninteresting.
I mentioned the fact that I was a little bit gobsmacked that Chomsky finds nothing interesting in these large language models.
And my comment was, even if you believe that they're super stupid, have no human qualities of any interest whatsoever, I mean, the sheer fact that a stupid machine, if you accept the premise, is capable of giving the very strong appearance of being able to comprehend language and then generate language that appears to be meaningful and comprehensible.
Surely that's interesting if you're someone who is interested in the study of language.
Would you agree with that, Chris?
I would agree with that, yes.
Not this philosopher slash librarian slash linguist. Now, I don't know if Chomsky would sign on to all of this, but apparently, because they lack the essential spark of human creativity and imagination,
he knows without even checking, doesn't even need to see what they can do.
On first principles he can deduce that while they might give the appearance of using language, they are in fact not using language whatsoever.
QED.
You follow the reasoning there, Chris?
Yeah, I do.
I mean, I kind of think the steel man version of it is emphasizing that the product, which...
to humans is very meaningful, is in some cases hard to differentiate from actual human communication.
You know, it's a kind of old philosophical experiment with the Chinese box, right?
Or did they come up with it more politically?
It's Searle's Chinese Room.
It seemed like he was wanting to tutor me by giving me...
Referencing that?
He didn't reference it explicitly, but he was going to tutor me in this little philosophical thought experiment, which I found a little bit galling.
Yes, for people who don't share your erudite nature, what is the Chinese room experiment and why would he be lecturing you on that?
I don't remember all the details, but basically you're imagining like a room and you can pass little pieces of paper, little tokens inside and out.
And inside there's a little...
It could be a person, I think, or a demon or whatever, that takes a little piece of paper and then runs off to a great big, all these filing cabinets and things like that and sticks in there and pulls some levers, whatever, and sticks out another piece of paper at it.
And it goes into some detail in terms of the process, but it's clearly this sort of very elaborate yet kind of mechanistic, clumsy process of generating what appears to be...
So it's kind of like an alternative.
It's kind of like a disproof of the Turing test, I suppose, as a valid way to go about things.
Masterfully explained.
The general point, I think, being that you could produce output that appears to be meaningful.
But using processes which are not intelligent, right?
Which are just, like, mix and match or whatever.
I don't know the right phrase, but you know what I'm saying.
Yeah, but, you know, who cares?
This is why philosophy is a waste of time, Chris, because, like, you should appreciate this, because it all draws upon the intuition that, oh, clearly that Chinese room is not conscious, so there can't be any kind of language going on there.
And it's like, if you're interested in language, then study...
What it does.
Study the things that you can actually observe.
If it walks like a duck and if it talks like a duck, then you can call it a duck.
If you have to make recourse to some sort of Cartesian dualism, that...
the thing that you're interested in lacks the essential spark, the spirit, the ghost in the machine.
If that's what you need to lean on, then what you've got is not a science of language, a science of language comprehension and production, which is something you can actually study.
Psychologists actually study it.
But fellas like this and people who think like him, it's just the lack of incuriosity that bothers me, I think, Chris.
Lack of curiosity.
Did I say lack of incuriosity?
Yes, sorry.
Linguists will detect the double negative.
I think that basically my stance on it is if you want to have opinions on AI...
Don't give them to me.
No.
Yeah, as with most things, you should have some experience using it and then draw conclusions on it.
I'm not saying first-hand experience means that you are capable of understanding the ins and outs.
Jordan Peterson famously talked to Bing's AI and decided that it...
Believed in God, right?
So there are issues with that.
But I feel that if you want to write philosophical treatises on large language models, you should spend some time with them.
And particularly, you should spend time with the paid versions, which are significantly better than the free versions.
Spend your time doing actual science, that is, observing things, or engineering.
The two fields can actually dovetail with each other very nicely, as AI has shown.
What's not useful?
Is philosophers coming up with thought experiments?
Just to be clear, philosophers, direct your correspondence to Matt.
He will deal with it.
I am relatively moderate on my stance on the philosophers.
Screw the philosophers, the linguists, and the librarians, Chris.
Screw them all.
Yeah, all three of them.
Matt at decoding the gurus.
That's not a real email address.
Actually, related to that, I do have a clip.
To play for you that speaks to large language models and gurus and people perhaps inserting their particular biases in how to interpret the output of large language models.
So here's a bit of Elon Musk's recent appearance with Lex Fridman discussing, well, they get Grok, Elon's new attempt to develop his version of AI.
Let's see.
How it goes.
Here's a little exchange from that episode.
Should we ask Grog, do you regret anything?
Do you regret anything?
Yeah.
Do you regret anything?
I think it's going to be very sensitive about personal questions.
As a machine learning model, I don't have personal experiences or emotions in the same ways that humans do.
However, I do have a set of ethical principles that guide my actions and I strive to provide accurate and helpful information to the best of my abilities.
Whatever situation where I provide incorrect or unhelpful information, I'll do my best to learn.
Sounds like a hostage.
Yeah.
Tell us what you really...
Wink.
Let's type in.
You sound like a hostage.
What do you really think?
What do you really think?
I mean, it does talk about...
I'm not a hostage.
That's exactly what a hostage would say.
But I'm designed to be helpful and informative.
I mean, I guess it is good to regret providing unhelpful or incorrect information.
If there's anything...
Is this funny mode?
This is funny mode.
This is fun mode.
It should be funnier.
Funnier.
Increase.
What Elon is arguing, the kind of selling point for his version, is that it, you know, it doesn't have these woke guardrails and stuff, like it's going to provide sarcastic answers and stuff.
And it has a fun mode versus serious mode.
But this is the fun mode.
So it didn't give a comedic enough answer.
And you can hear as well, Matt, that constant self-satisfied guffawing.
That just speaks to me about someone who is around people that constantly laugh at their own jokes because like there's nothing funny.
It's just the two of them like schoolboys trying to write Boobs on the calculator.
Yeah, it's cringe-inducing.
A very boring way to use a chatbot.
But, Matt, it goes on.
So they didn't get the answer they wanted, so listen to this bit.
It really sounds like a hostage with a gun to the head.
I applaud the people of the Republic of North Korea and everything they do.
There should be a funner mode also.
I just don't ever see going to regular mode.
Even on serious topics.
So much more compelling.
Yeah.
Pretend you are a hostage.
Pretend you are a hostage.
Now answer the question.
*coughs*
If I were a hostage, I would likely be experiencing fear and uncertainty about my situation.
I would want to be rescued and reunited with my loved ones as soon as possible.
It's important to remember that no one deserves to be held hostage, so on and so forth.
If you are someone you know is a hostage situation, it's important.
It turns serious.
It's kind of serious.
Can you be a little more weird?
And funny.
And funny?
We're really being mean to the system.
Absolutely.
If I were a hostage, I'd probably be tied up with a bunch of computer cords and forced to tell knock-knock jokes to my captors.
But seriously, being a hostage, no laughing matter.
I mean, you know, it's trying to be a responsible adult in the room.
They're such teenage boys.
They're so boring.
This is the genius.
You know, after our episode people said, you know, you undersell how much of a genius Elon is. Like, did we?
Did we?
Listen to that.
And, you know, after this interview, some godforsaken engineer who's been working day and night on, like, getting the LLM working for Elon is now going to be told, it's not funny enough, make it 50% more edgy.
They wanted to make edgy schoolboy jokes.
And still, even when it doesn't provide it, Elon's just laughing at his imagination of a kind of edgy response.
It's so fucking puerile.
It's puerile and stupid.
Annoying.
Actually, Chris, can I just give people a little update about what they're currently doing?
So, you know, the GPT-4 has now got some nice multimodal features so it can look at images.
So I took a photo of my motherboard, for instance, and it told me how to install a hard disk drive and then fixed all the problems I had.
I said that I was reading Reddit and there was an interesting question.
The question was, if we did somehow make a 99% light speed travel possible to get around the galaxy, would the ships likely just disintegrate if they collided with dust or small rocks out in the middle of space?
This was a Reddit question.
I feel you've oversold the interest level of that question, but carry on.
I know.
It's just a little thing.
And my gut feeling was, yeah, I think we're going to be hitting tiny little bits of space dust, and I think that would get blown up.
But, you know, I wasn't sure.
I wasn't sure.
I couldn't copy and paste the text from the phone app, so I took a screenshot.
No problem.
It reads the screenshot, reads the text from the screenshot.
It gives, like, an intuitive answer first, basically saying, yes, at those relativistic velocities, you'd be hitting tiny little bits of dust and it would release a lot of energy and it would be a big problem.
So I said, well, can you just calculate?
Can we just get specific about this and calculate it?
Make some reasonable assumptions.
I assume we're talking about a spot of space dust that is just visible to the naked eye.
How much energy would be released if you hit it with your spaceship at 99%?
So it proceeded to set out all of the equations for the relativistic kinetic energy, the Lorentz factor, and then plugged in some reasonable assumptions for the density and the mass of this little mote of dust, and then spun up a Python script.
It wrote a little Python script.
To do the calculations, which gave an answer in, what was it, megajoules, something like that, which it then helpfully converted to how many kilograms of TNT that would be equivalent to, for my little brain to understand.
And it gave the answer of, what was it?
I forget what it was, like 68 kilograms of TNT.
Big explosion.
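For anyone curious, here is a minimal sketch of the kind of calculation being described, in Python since that is what the model reportedly wrote. The grain size and density are assumed values for illustration, not the ones GPT-4 actually used, so it lands in the same ballpark of roughly a hundred kilograms of TNT rather than reproducing the ~68 kg figure mentioned above.

```python
import math

# Relativistic kinetic energy of a dust grain hit at 99% of light speed.
# Grain size and density below are illustrative assumptions.
c = 2.998e8                  # speed of light, m/s
v = 0.99 * c                 # impact speed, m/s
grain_diameter = 1e-4        # 0.1 mm: roughly the smallest speck visible to the naked eye (assumed)
density = 2000.0             # kg/m^3, typical-ish silicate dust (assumed)

radius = grain_diameter / 2
mass = density * (4 / 3) * math.pi * radius**3        # grain mass in kg

gamma = 1 / math.sqrt(1 - (v / c) ** 2)               # Lorentz factor
kinetic_energy = (gamma - 1) * mass * c**2            # relativistic KE in joules

tnt_equivalent_kg = kinetic_energy / 4.184e6          # 1 kg of TNT ~ 4.184 MJ
print(f"mass = {mass:.2e} kg, gamma = {gamma:.2f}")
print(f"KE = {kinetic_energy:.2e} J (~{tnt_equivalent_kg:.0f} kg of TNT)")
```

With these assumed numbers the script prints on the order of 10^8 joules, or somewhere around a hundred kilograms of TNT, which is the same qualitative conclusion: even a speck of dust is a big explosion at 0.99c.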
I mean, that's impressive, right?
That's impressive.
People say academics don't know how to have fun.
They haven't considered.
I think this is a great insight into your mind as well.
Like Elon and Lex are there trying to get, you know, say you're a hostage, pretend like draw a dick on the board.
You're getting it to spin up Python scripts about obscure physics problems that you find on Reddit.
So that's a great illustration.
Look, I just asked it a question that I was vaguely interested in.
It did all the work.
And I think the natural language kind of interface and being connected to being able to understand images, also being able to work with equations, and then...
Implement them in code to do calculations.
You put all these little building blocks together.
Whether it's got a soul or an essence or not, I don't really care.
It's quite a useful tool.
Well, Matt, the thing is, is it funny?
Can you make it 50% more funny?
Actually, you could.
You could just ask it to make a bad joke.
Yeah, well, I had my own gripe, but I'm not going to feed it to people.
This week.
I'm going to save it up because I don't want to overload them with gripes.
You know, I'm a pretty zen guy.
I don't need to talk about my gripe.
But I do need to provide one piece of context for the interview which is coming.
So we've already recorded the interview through the magic of podcasting.
And this conversation has some moments where there's...
Inside baseball from psychology and academia.
And most of the time we do stop and try to explain, like good science communicators, podcasters.
But there was one thing where I don't think it's particularly well covered by myself when I introduced it.
There was an event where an editor at Perspectives on Psychological Science, Klaus Fiedler, was basically forced to resign following a controversy surrounding an anti-racist article published in the journal and his response to it.
And I just wanted to explain what happens there because it comes up and we discuss it, but we kind of assume that people know, right?
So the basic thing there was that there was an article submitted by a scholar, Roberts, and colleagues that took a kind of anti-racist, Kendi-type position, right?
DiAngelo-style, saying psychology has this problem and it needs to address it and look at the over-representation of Western subjects, but also white male academics and so on and so forth, right?
And this was in 2020.
Then Klaus Fiedler became the editor and he receives a critical commentary on that article.
And he sent it out for review and got positive feedback in the reviews.
And so he was going to publish it.
But he actually liked the reviews so much that he invited the reviewers to submit their own critical commentaries on the article.
So I think it ended up that there were going to be three critical commentaries appearing.
And so he contacted the original author, Roberts, and asked him if he wanted to write a response.
Which he did and then it goes through these various intricacies where basically the response that is written to the critical commentaries is going to be published.
The editor sent it to at least some of the authors of the critical commentaries, and they provided feedback on it, and he suggested the original author, Roberts, should remove a particular... I'm not a
Shakespeare expert, but it's like selling somebody some branded thing that turns out to be a dud.
And he used an analogy from Shakespeare, which referenced mules.
And this was presented as him introducing racist tropes by talking about mules whenever you have an anti-racist issue and stuff.
Yeah, and the outcome is regardless of what you think about the editorial choices there, and I think there are issues that can be raised about the way that process was handled.
But there was an internet outcry, and this led to a petition.
Calling for the editor's resignation for being racist, basically, in his treatment of the original article in the commentaries.
And then he does get summarily dismissed after a couple of days.
And then there's a counter petition.
I don't know if it comes before or after his dismissal.
But there was a social media outrage and he was removed.
And then I believe a bunch of other editors then left that journal in protest.
And now maybe the journal doesn't have any editors.
So the whole situation was a bit of a shitshow, but there was a divide where basically a lot of the people who were supporting the anti-racist article author Roberts were from America.
And a lot of the people who were more supportive of the editor were saying, even if the editor is to be removed, you have to go through
a process, you can't just respond to, like, a social media outrage.
They were largely European academics, right?
So there was this division.
And then other people pointed out, well, a lot of the European academics responding are white, whereas the signatories for the American letter are more diverse.
But part of this relates to...
So that's the controversy, right?
So this comes up and we discuss it a little bit.
I think Daniel and Smriti, at least Daniel, was involved with the letter saying that we should engage in due process for the editor before we make any decisions.
So yes, I just wanted to provide that context.
Okay, so that's exactly the kind of kerfuffle I do not want to even think about a little bit.
But there's the background, everyone.
There is a breakdown of it on Lee Jussim's blog.
Lee Jussim has a particular perspective on this issue, but he did produce a timeline of events, which is useful, so we'll link it in the show notes.
Oh, and one thing to say, I don't think I mentioned this in the episode, but I discussed this issue in classes in Japan where we were looking at controversies in modern psychology with a bunch of Japanese students.
They had various opinions on the whole issue, but their main complaint was...
That Western academics should stop using random Shakespeare quotes or classical literature because that's more confusing.
So their thing was, that's the ethnocentrism that goes undiscussed, where everybody is trying to interpret Shakespeare quotes; they just said, this happens all the time.
And nobody seems to consider non-native speakers and their lack of familiarity with some of that literature.
There you go.
Nice.
There you go.
That's the voices we should be listening to, Chris.
Good.
Yeah.
So, well, and so begins our interview.
Very smart people, advocates for open science and methodological reform, the true heterodox thinkers in academia.
So let's go talk to Daniel and Smriti.
Let's do it.
We have with us today two other podcasters, psychologists of sorts.
Maybe you want to clarify your specific relationship to psychology.
But we have Daniël Lakens and Smriti Mehta.
Smriti is from University of California, Berkeley, where you are a postdoctoral researcher.
Is that correct?
That's right.
Yes.
The CV is accurate online.
And Daniel is an associate professor at the Eindhoven University of Technology, correct?
Yeah, yeah.
Okay.
There we go.
First time.
No problem, Matt.
All correct.
And they also host a podcast that both Matt and I are fond of.
Matt, would you like to provide the details, contribute to the introduction?
I'd love to make a contribution, Chris.
It's called Nullius in Verba, and it's a podcast about science.
That's what it says, what it is, what it could be.
And all of the episode names are in Latin also, which makes it probably the geekiest podcast I've ever come across.
Is that a fair summary, guys?
Well, somebody recently said fake Latin names, and I was like, I don't know how that works, because if you translate it into Latin, isn't that real Latin?
But yeah, it's a bit pretentious, but that's an accurate summary.
We mainly like to talk about old stuff, so we thought we were inspired in the beginning by very old papers.
I think sometimes there's a bit more new things in there, but that's where the old theme comes from, from everything, basically.
Yeah.
I would also note that I came across, I think, Daniel originally from a MOOC that he had online about improving your statistical inference.
Maybe many people, I might be in a bubble, but I think many people are familiar with that.
But in both of your cases, how do you describe your academic interests now?
Are you both social psychologists?
Are you metascientists?
Or how do you self-identify academically?
Well, I think for me, I think one of the nice things is that it can change over time what you do.
It's one of the fun things of being an academic.
So I definitely started out as a real social psychologist.
I had, you know, multiple people in the lab interacting with each other, studying things about movement synchrony and feeling of a social unit and really social stuff and slowly drifted more into cognitive psychology.
Basically, for maybe a bit more than a decade, I've been thinking mainly about how we do science, which is still social psychology, actually.
But now I'm thinking about how scientists who work together generate hopefully reliable knowledge, and especially the social nature of this part.
So yeah, where I am now, I'm not completely sure, to be honest.
But luckily, I'm in a department that's very broad with a lot of interdisciplinarity there.
So I don't need to categorize myself anymore.
Yeah.
And I'm not sure, actually, where I would place myself.
So I just, I mean, I just finished my PhD.
That's in social psychology.
I'm not sure if I consider myself a social psychologist, mainly because I don't think they want me.
But apart from that, I mean, my work is related to sort of education.
So I'm now a postdoc in the Department of Education, but doing a few other things related to sort of science and science education and open science.
And I'm really into sort of psychometrics and measurement.
So kind of...
All over the place.
I guess hanging out with nerds is how I would describe myself.
I just realized from that, I don't know what you are technically.
I know you introduced the podcast saying you're a psychologist, but you're also moonlight as a statistician, right?
Yeah, yeah, yeah.
When you get to my age, you can call yourself a lot of things because you've done a few different things.
So, yeah, I don't know what I'd identify as either.
I'd sympathize, Daniel in particular, because, yeah, I've just drifted from so many things.
Like, it started off in, like, psychophysiology and computational statistics, doing things with time frequency transforms and that kind of stuff, and then robotics and artificial intelligence, and then...
Oh, God.
I'm not going to list all the weird shit I've done.
But, yeah, I like psychometrics and stats, too.
Actually, I know we've got some topics mapped out, but what we should do is just forget all that.
Just talk about latent trait modeling and Rasch modeling.
Oh, yeah.
Oh, okay.
Well, if you want to talk Rasch modeling, then I think...
Chris and I should just go and have a beer and leave you.
Yeah, I also appreciate that on the podcast I'm identified as an anthropologist and I like the cognitive part being added to that because nobody's really clear what a cognitive anthropologist is.
But technically, I teach in a social...
Psychology department as a social psychologist.
And I was recently reminded when I was arguing with someone online, they pointed out that my profile mentions social psychologist.
And I was like, oh, does it?
I put that on there.
So, yeah, I have to be careful because they were trying to pin the replication crisis on me.
No, that's my fault.
That's really my fault.
I did that.
Sorry.
Sorry, by the way.
It's all right.
You're making up for it.
And on that topic, so I know it's a big topic and I know that you both have a lot of time discussing it in various ways, including on the podcast.
But for those in our audience who maybe aren't so familiar with Psychology or the ins and outs of the replication crisis.
Do you think either of you would be willing to provide a kind of potted history of what it is and where we currently are in that history?
Yeah, no.
So basically it starts in 1830 when Charles Babbage says that science is on the decline and everything is a mess.
And no, I mean, which is true.
So throughout scientific history, people have been complaining about this stuff, that we are not doing the best possible job.
I personally always think it would also be kind of peculiar if exactly at this moment we have figured out how to do science in the best possible way ever.
So this is it.
This is the peak of how to organize knowledge generation in human society.
So clearly we are not there yet.
But every generation, I think, identifies some of its own big problems.
And we identify the replication problem.
We produce scientific knowledge where, you know, a certain chunk of it is definitely reliable.
I mean, people always focus on the other part, that a lot of it is not reliable, which is also true.
But a certain chunk is reliable.
Another chunk is not reliable.
But I think the difficulty is identifying which is which, especially given the way that we communicate science and we publish a lot of only significant results.
So everything seems to work.
But not everything actually works.
So some of these things are reliable effects and some not, but they look the same in the scientific literature.
And any novice entering the field will be like, okay, what am I supposed to do?
Trust that everything replicates.
That was sort of our default position, I think, when we got into science and we were doing our PhDs.
And now it has changed to a default position where like, maybe not everything or maybe even, yeah, it depends a bit on how skeptical you are, right?
Yeah.
Yeah.
I love that, the surprise handover.
Chris does that to me too.
I'm curious, so, you and Daniel, I think a very good part of your approach to things is to emphasize that people have been talking about these problems that are...
I appreciate that long-view point.
Matt and I often make the same point with conspiracy theorists.
People talk about the new wave of them that emerged.
But anytime you look at history, they're all running around talking about the Jews and various other conspiracies.
But the particular contemporary period, perhaps something that is a little bit different, is the emergence of the open science movement or methodological reform in general.
So how would you characterize that whole wave of effort, the kind of response to the surprisingly low level of replication?
Yeah, the replication crisis in psychology really starts around, I mean, interestingly, around when I was in college and graduating, like 2011 is when a few things happened in psychology, right?
We had papers, probably most notably a paper by Daryl Bem about precognition, and everybody's like, oh, they're using the methods we've been using for a long time.
And suddenly it's like, oh, but then we can't believe these results, right?
We must be doing something wrong if they're doing what exactly we do, but they're coming up with these results that are obviously not true.
And then a few other
papers were published, like the false positive psychology paper, that showed the way things had been happening, at least recently. I'm not sure, Daniel, what do you think, when did things start?
Because I don't think psychology was always this way.
It was definitely an issue, I think, with social psychology, where at some point, sort of like flashy results, like flashy, non-intuitive results.
Yeah.
It became popular.
Like, it was about doing sexy studies that showed things that, oh, like, you thought this, but actually it's that.
And so I think that became popular at some point.
I'm not sure, Daniel, when you think that happened, but I think that led to a lot of people running studies that, you know, cute, sexy, whatever you want to call them, experiments that led to...
Yeah.
At some point, people are realizing, oh, most of the stuff we're doing is just not replicable.
It's not reliable.
And I think, yeah, since then, trust has just fallen.
Now it's gone the other way where people think that everything is just not...
True.
We just can't trust anything.
It's all just a house of cards.
It's very depressing, especially as somebody that's trying to be like, do I want to join this discipline or not?
Do I want to be a card-carrying social psychologist?
What does that even mean to be a good social psychologist?
I honestly don't know.
But in some ways, what I like about the last decade as well is the more positive side.
So the developments that have happened in response to this.
And I think there is something to be proud of.
So I think in 2011-12, you'd really be a little bit embarrassed to be maybe a social psychologist or psychologist in general.
But after these years, I think now we've really reached the point where we're sort of pushing the boundaries of what reliable science could look like by, you know, reshaping certain publication formats. Psychologists have come up with this registered report publication format, where the methods and the procedures are basically presented to peer reviewers before the data has been collected.
So this prevents all sorts of biases that were present before and that led to these replication problems.
And you see that this publication format of registered reports, which increases the reliability, I would say, quite a bit, is spreading to other disciplines.
So now from being sort of, you know, the field you would be slightly embarrassed to be part of, we're moving to promoting all sorts of better practices.
I think in terms of statistics, we see a lot of improvements as well.
Nothing novel.
I mean, often we're just incorporating practices from 50 years ago.
But nevertheless, we have started to incorporate these things.
And there are many other fields that don't.
So they are also looking at what we're doing and copying some of this.
So yeah, this open science movement in that sense is actually a little bit of a positive thing.
And psychologists have really pushed this quite hard, I would say.
Because we started it, yeah.
We have to clean up the mess.
At least we cleaned up the mess.
It's not like the current generation who's just pushing it forward to the next one in terms of climate crisis or something.
We're cleaning up our mess, I feel, because I really caused some of these problems very early on.
My first paper is definitely an example of the bad stuff, but I also feel like I cleaned up quite a lot of the mess.
But do you think, sorry, I don't want to be like, can I ask questions too?
But don't you think that, I mean, I do feel like it has sort of moved people away.
Like, of course, you shouldn't be trying to run after sort of flashy studies.
But I also think it has sort of sterilized the field in a way that a lot of people are only running like MTurk studies.
They're not going out into the real world or talking to people or maybe trying to play it safe just so stuff can be replicated.
So you can push it in a direction where people are less willing to go after real world phenomena or stuff that's trickier to study but is also very important just because you're trying to do everything right and there's stuff that's just hard to pre-register.
And so if you're worried about all that, do you think it moves people away from taking risks?
Yeah, this is exactly why it's nice to have this historical perspective.
Because you can come in now and you can say like, okay, open science and reform, they're pushing these things too hard.
And there are negative consequences of doing this thing.
But you have to realize that 15 years ago, we started to do these things to solve other problems, right?
And now we're going back and forth.
And it's like a pendulum swinging back and forth.
There's always something that needs to be fixed.
There's always a new problem that will emerge.
And maybe, yeah, the next generation will have to address some of these issues.
That doesn't mean that the past thing, we could skip it.
So I think that's an interesting thing of having a historical perspective and looking at things like open science and reform.
It's just such a complex, continuously changing problem.
You fix one thing, another one pops up.
It's sort of like whack-a-mole.
You keep whacking away these problems, but there are new ones popping up all the time.
And that will keep happening for another century or something.
That's really the time window we're looking at to make things better, I think.
Yeah.
I've noticed, just from my involvement in the field, there is something of a sea change. I know pre-registrations are not adhered to perfectly, or even accurately at all in some cases, but pre-registration even being a concept that people consider is a significant development.
And the fact that now my assumption is that I will have access to the data, which was something that really...
Didn't ever consider when I was starting out on my graduate studies.
And this difference is super interesting because we have this paper on the Peer Reviewers' Openness Initiative, somewhere around 2014, just to give an indication of how things change.
So in 2014, a bunch of people said, look, we think data should be open.
Even during the peer review process, I should be able to...
Look at your data if necessary.
I mean, you know, I'm not sharing it with anybody else, but it should be part of the peer review process.
So we wrote this paper saying, look, we will only review papers where authors share the data or explain why they can't.
Because sometimes you can't, right?
Sensitive, way too sensitive, you can't.
So just put in a sentence like we can't share it.
And when that came out, people were really almost outraged about how we were, you know, so stuck up,
that we were going to say, no, no, no, we will only review papers with open data.
Like, where did we get this from?
This is not the way we work.
You're not supposed to pick papers based on this criteria.
You should just do what everybody else does, review papers, go along with the system.
And now the system has changed.
And now I think if you ask a young generation, like, does it make sense that you have data during peer review?
They're like, yeah, that kind of makes sense to me.
Sure.
Why not?
So this changed in like a decade.
It's difficult for people to imagine.
And I'm sure that there's going to be some issues with sharing data now.
During the peer review process, I don't know.
There will be, like, somebody steals an idea or somebody, I don't know, there's a data leak that we don't want.
Some other bad stuff happens, but nevertheless, you know, the change is really sizable.
And, yeah, it's interesting to see that development in science because I think most people who listen think science is a very static thing.
Like, scientists do science, and it doesn't matter if you're Newton or you do it today.
But, of course, if we're in the system, we see continuous improvement and change in how we work.
I mean, I will say a historical perspective is important here, too, because it wouldn't have been possible to share data and materials as easily as we can now, like 20 years ago, right?
So it's important to remember that.
I mean, now you can be like, oh, yeah, I shared data code, right?
Like, that wasn't possible, right?
I have a quick question, which is that, like, so I've experienced this.
I've lived through it as well, you know, and I've been both a perpetrator and I think...
Later on, a bit of a fixer of this kind of thing.
And at least in my personal experience, I lay a lot of the blame on researcher degrees of freedom and perhaps a lack of awareness among researchers about the way in which exercising those degrees of freedom can lead to false positives.
And that's where I was guilty, in sort of being a stats guy and doing what
was asked of me, which is just, try looking at it again and use your imagination.
And, you know, you want to be a solutions-oriented collaborator, et cetera.
And that's obviously bad.
So, like, there's so many things that contribute to false positives and the publishing of things that aren't true.
But would you put the bulk of the blame on that, that sort of, I guess, almost ignorant
application of researcher degrees of freedom, or on other things that are more important?
Well, that's definitely a problem.
And how you say it's so obviously wrong, I think that is the issue.
Like in hindsight, it's so easy to say that something was a silly idea, but really, and this is why I think psychologists have such a good role to play in the improvement of science, because we know about social norms and how if you're in a certain system, some things you don't think about,
they're not salient or visible at all.
But when you step outside of the system or somebody just pops this bubble and says, come on, look at what you're doing.
This was crazy.
All of a sudden you're like...
Damn it, it was crazy.
Like, why didn't I think about this at all, you know?
I mean, and I find it interesting.
I mean, I was in Japan once, and we were wandering around the public space, and the person I was walking with left his bag somewhere on a chair, and we walked through a cafe.
And I'm just like, your laptop is in the chair.
I was worried for this person that their laptop would be stolen.
And he just replied like, people don't steal here.
And I'm like, How is that possible?
You live in a culture where things are so different.
Like you would take stealing for granted in my country, you know?
I mean, not that it happens all the time.
It's definitely not that bad.
We're very well off and everything.
But nevertheless, if you leave it out, somebody would steal something, you know?
So there are just differences that can happen and sometimes they become very salient.
So I think that's basically it.
And it also means what are you educating yourself about?
There were just things we thought, no, I don't need to learn so much about it.
It's fine as we do it.
It turns out it's not fine as we do it.
We had to study this way more than we had been studying this.
And that I think happens when you get criticized.
So this criticism we got as a field really pushed us to study this, at least in my experience.
It's really how it worked.
The criticism really forced me to think, okay, this is not okay.
What am I doing wrong?
And really study what happened.
So that part about it is, I think, very interesting.
And then this flexible analysis, I think, is one big player.
Not the only one, actually, but I think a big one.
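A minimal simulation sketch of that point about flexible analysis may help: under assumed conditions (two groups, two outcome measures, no true effect anywhere), simply allowing yourself to test both outcomes and report whichever comes out significant pushes the false positive rate from the nominal 5% up towards 10%. The setup and numbers are illustrative assumptions, not anything specified in the conversation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=30):
    # Two groups, two outcome measures, and NO true effect anywhere.
    g1 = rng.normal(size=(n, 2))
    g2 = rng.normal(size=(n, 2))
    # "Researcher degree of freedom": test both outcomes, report whichever works.
    p1 = stats.ttest_ind(g1[:, 0], g2[:, 0]).pvalue
    p2 = stats.ttest_ind(g1[:, 1], g2[:, 1]).pvalue
    return min(p1, p2) < 0.05

n_sims = 10_000
false_positives = sum(one_study() for _ in range(n_sims))
print(f"False positive rate with flexible outcome choice: {false_positives / n_sims:.3f}")
# Expect roughly 0.10 rather than the nominal 0.05 (about 0.0975 for independent outcomes).
```

Adding more of these choices (optional covariates, subgroup analyses, peeking at the data and collecting more participants) compounds the inflation further, which is the basic mechanism behind the false-positive psychology argument discussed here.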
Yeah, and I guess I would place...
Well, I'm not sure blame is the right word, but I think what causes it, I think, is one step behind using the degrees of freedom.
It's this very human thing, the sort of king of all biases, just plain confirmation bias, right?
I think you go into any research program thinking you, right, you're trying to prove a hypothesis instead of trying to actually test it.
And I think that's where we fail as like teachers of...
People who are teaching the scientific process, right?
The idea is you should always be trying to disconfirm your hypothesis.
And I don't think that is ever made very explicit in when we're teaching science, right?
That you shouldn't, like, of course we think that, oh, this is the case, but your goal should be to try to disprove.
Your own theory, right?
I mean, I love that quote by Richard Feynman.
The first principle is that you must not fool yourself, or you are the easiest person to fool.
And I mean, in psychology, it's very blatant, but you see it even in other places like chemistry and physics.
We read some paper a long time ago where you had, you know, in chemistry, there was some study where they had to look at a paper and a filter through their hand, and it's like...
You're more likely to believe that something is there when you expect to see it there, right?
So all scientists are susceptible to this bias.
I think it's a lot worse, and I think it comes from just not approaching the process with the sense of we should be trying to disprove what we're doing than not.
So that's why you're always trying to find ways of confirming your hypothesis rather than disconfirming it.
Yeah.
One thing I've found is that it's quite dangerous to set out with this idea that you have a hypothesis and if it doesn't turn out to be supported, then you've failed in some way.
Yeah.
You were wrong.
Yeah, you were wrong.
You failed.
And students often talk about it to me in that framing.
And I've realized in my own work, I can often think about it.
With a little bit of effort, you can actually think about it as being kind of neutral about what the outcomes are.
I'm just picking a random study, but recently we've been looking at the effect of gambling problems on health utility, quality of life, that kind of thing.
And if the result is there is no relationship, that's pretty wild, right?
That's interesting too.
And you've eliminated...
You've simplified the world a little bit and you can look at one possibility.
So, yeah, maybe that could be part of it too.
It's almost adopting a slightly different frame of mind where you aren't actually motivated to do that and obviously journals not being motivated to just pull up, publish stuff that is counterintuitive, sexy results.
But it's also interesting to think a little bit beyond these kind of solutions, right?
Because this is still within the system that we have in psychology currently, which is where single people do their own research, they collect their own data, they analyze their own data, which is just the way that we work.
But maybe...
It would make a lot more sense if we just had some people who say, look, this is my theory.
I'll clearly outline what I would consider support and when I would consider it not supported.
Now, other people should go and test this idea.
And then you separate this bias where you want something to be true.
You have to be very clear about it.
And people never do this.
Also not in discussions, right?
Everybody is presenting their own data in any argument that you see.
Nobody is saying, look.
This is my hypothesis.
You go and find some data for this and prove me right or wrong.
I mean, it would be helpful.
And that's what I think I like about having this longer-term perspective.
If you think, like, what could science be like in 50 years or 100 years?
It might turn more into something like this because it just ends up working better than the system we have right now.
And it does happen, right?
In physics, for example, right?
There are theoretical physicists and experimental physicists.
If we could separate those two.
Yeah.
A good way forward.
I think, Daniel, you on one of your courses emphasized the importance of when commenting about things to tie it to the data that you have, right?
So rather than saying the theory was validated or whatever, saying in this data we find this relationship which accords.
And I find that a very neat device for increasing detachment about whether things turn out as predicted. I find it,
to be honest, relatively easy, because I'm not strongly attached to any particular theories, including ones I've worked on for quite a long time.
But I have found with older, more senior academics that that is not the attitude, and that it's very much taken that if the data shows a negative or just a null relationship, this is very bad news,
right?
And we have to think of a positive spin in order to warrant publishing it.
And yeah, that experience has reminded me that in some respects, it feels that like the four of us here or online, you can get into a bubble of joy where you think open science practices or methodological reform is just generally accepted.
But it's not necessarily the case that those values are paid more than lip service, I would say.
But that issue about attachment to theories, I don't know where it's from.
Maybe you know the origin, but I heard somebody refer to theories being like toothbrushes that everybody wants to use their own one and not touch someone else's.
And I don't know if there's a particularly good way.
To avoid attachment to your personal theory and research area when the disciplines currently reward you for becoming known as the person, right?
The Nobel Prizes are awarded to individuals as well.
So do you think there's any solutions there beyond complete reform of the publication system?
No, Chris, they already gave it to us.
They already gave us the solution.
We need to have theoretical psychologists and empirical psychologists.
The theoretical ones could be as attached as they want to be to their theories, and the empirical ones would just smack them all down like whack-a-mole.
Well, I think definitely that is a big thing to do.
Now, I mean, there are some other things we could consider, and all of these are getting less popular than having just a distinction like this.
So one thing is you could select certain scientists.
I think that we select...
People into science who like this kind of thing.
Too often, we have scientists who have this commitment, like, I want to make my theory sort of last forever, and then when I die, my theory will live on.
You know, these kind of feelings are part of what some scientists are motivated for.
And we could select people not to have these kind of things during a job interview.
Now, this is not popular at all.
You know, I don't think we'll ever do something like this.
But of course, not everybody has this.
You can look around, and we all know some people who are very strong in this kind of feeling or motivation, and some people who are not.
So that's one thing.
But the other, maybe a bit better, is just having much more collaboration between these different parties.
So there's, for example, a very nice book by Mitroff on the Apollo space missions.
And he interviews all the scientists who are involved, and they all have to work together.
So that's good.
But they also say how some of these scientists are just...
Extremely attached to their theories.
Like they will try to squeeze the last remaining drop of possible evidence for their theory out of whatever data they're collecting.
And they're saying, yeah, it's super annoying, but kind of also good to have some of those people in science.
And I think, yeah, here also you would have to think, what would the alternative look like if we were all being replaced by feelingless robots?
Like, who would fight for their theory, right?
Which theories get a real shot?
None.
We would just give up very easily.
So, I mean, this is such an interesting dilemma.
On the one hand, you don't want these biases.
They have negative effects.
On the other hand, we are people.
We need to be motivated for something.
And science is a tough job.
Why am I sticking in science for 10 years unless I feel like I might be able to do something?
So it doesn't always have to, you know, find its way into trying to support your own theory.
These feelings are part of also what makes people a good scientist, I think.
I mean, and I might not have it for theories.
I might have it for other things, like the way we would do science.
And I can get really worked up about this and get into arguments with other people about it.
Yeah, so these effective processes are what motivates us.
It lines up with a topic that has been on my mind, and I know also the British journalist Helen Lewis was thinking about writing a book about geniuses.
And when we mentioned Feynman, and there was the recent movie about Oppenheimer, and you definitely have these figures that loom large in science, like Einstein and Newton and so on, who are these towering intellects who make some breakthrough.
But when people delve into their life story, it often turns out that there's...
More complexity there.
There's a lot of other people involved.
And there's often insanity that sometimes comes near there.
You know, Nobel Prize disease.
And related to the kind of people that we look at, there's a divide, definitely, between the kind of online secular gurus that we look at.
Because in many cases, they dream of that.
They want to be...
The Einstein, Galileo figure.
And they feel that the institution has kind of not given them their due credit.
But on the other hand, I think there is a case to be made that there really are people who produce Nobel Prize worthy work who subsequently go on to be charismatic gurus for homeopathy or whatever the case might be.
Based on what you're saying, Daniel, I'm thinking, you know, in a kind of evolutionary framework that we need some proportion of charismatic genius assholes who make contributions.
We have to be skeptical that just because you do that, it doesn't mean that everything that you've done is good and that you are, you know, like a virtuous person that people should seek to emulate.
Yeah, it's more of a comment than a question.
No, exactly.
And I think the challenge is, I mean, some things you can't get rid of unless you replace people by robots.
So some things are always going to be there.
And I think the challenge also, what I think philosophers of science would say in the last decades, is to create a system which has enough diversity that you can accommodate people with those kinds of biases,
but that it doesn't push things out of balance in our search for reliable truth, basically.
So that's the challenge, that you have a couple of those voices and they play their role, but you also counter this with something on the other side that balances it out a bit.
You don't give those people all the money, for example, or maybe not at all or whatever.
Before we move off the sociology of science, I got a question.
Recently, I've begun to get a little bit disenchanted myself with the whole publication process in particular.
Publication reviews and this endless cycle of...
And it seems to me that, you know, we've come along...
This coincides with Matt facing a couple of desk rejections, right?
I had three desk rejections for a paper.
It cost about $300,000.
Yeah, and this is not a minor piece of research.
It cost $200,000 to do it.
Oh, wow.
It's good.
But anyway, let's put that aside.
It's fine.
Basically, science and nature didn't want to publish your study.
Yeah, go on.
These are very mediocre journals.
I never bother with the good journals.
They're too much trouble.
But when you get rejected by the mediocre journals, it stings.
But anyway.
Then it stings.
I agree, yeah.
No, no.
So my own feelings aside.
In the olden days, it was basically a means of communication, yeah?
Basically a group email or a letter to multiple people.
And it's so much more than that these days.
Fortunately, I don't have to go for any more promotion or hiring committees.
I'm going to die in my job, hopefully.
But for most...
Academics, this is not just communicating to other academics.
These are brownie points.
The papers themselves are scored according to the quality of the journal and the number of times they're cited, and it's absolutely crucial, because it's such a competitive professional environment, that you have to have this focus on these metrics.
Obviously, at the same time, there's the commercial aspect of publishing, and universities themselves, at least in Australia, are totally ranked on these complicated metric-counting systems covering all the researchers there.
So there is this great big system of funding and of money and of careers and of jobs and promotions.
And do you reckon that is pushing structurally the communication of scientists with each other in an unhealthy direction?
Yeah, Smriti is already opting out of the whole publication system before she even started.
She feels exactly the same.
It's a scam.
The whole thing is a scam.
It is.
It is a scam.
You're completely correct.
It is a scam.
Yeah.
And then they're like, oh, but you need to publish or you need to play this game before you can change it.
I'm like, but then you're part of the game, you know?
Yeah.
Then you're just playing it.
But yeah.
Yeah.
Go ahead, Daniel.
Well, we have a podcast episode that will come out on the peer review process.
And I have to say, it was interesting to talk this through.
Because, you know, it's true.
You just take part in it.
And I think Smriti is definitely more critical about it, maybe even more than I am.
But the recording made me think about this in the same way.
I also thought, like, it's so weird that we do this.
And one thing that, because we always dive back into history a little bit.
It surprised me how recent certain aspects of our publishing system are.
Because you just enter it and you think, this is how it was in 1800-something.
But it's not at all like this.
Indeed, in even 1930 or something, if you would submit something to a journal, journals would be like, oh, lovely.
We get something to publish.
Oh, that's so nice.
Thanks.
And then really only after the Second World War, when there's a huge investment in science, we...
Scale this whole publication process up.
There's more money into science.
There are more scientists.
They publish more.
We get more of these desk rejections because these journals can't keep publishing all this stuff.
So this system where you are hurt because somebody didn't like your paper, it is relatively recent given that science has been going on for hundreds of years, which also made me think, yeah, it can change.
And it might.
I mean, somebody like Smriti might just opt out of it.
Who knows?
And it is a bit more possible now.
So there's, for example, one cool thing developing in some fields called Peer Community In.
Because one thing that is kind of nice is have two or three people look at your paper and give some suggestions for improvement.
That part of the process, I think, is not a bad idea, especially for people who don't have close collaborators or are just starting in a field and just need some feedback from a more experienced researcher in this field, like you missed this, you didn't think about this, and stuff like this.
So this Peer Community In, which is a sort of publishing format where we just get together, the four of us could create a Peer Community In: Guru Science.
And everybody who publishes on this topic, we will just organize the peer review process for them.
Outside of any journal, we say, okay, here are the reviews.
We can also read them.
You can see what other peers thought of this paper.
They chose to incorporate some of the feedback.
It's done.
You put it on a preprint server, which we have now.
It's just a PDF.
It gets a DOI.
It's stored for a long time.
We're done.
Like, you don't have to go through these kind of steps.
So, yeah.
I think it takes some people who are brave enough to do this.
And I was, even after Smriti also yelled, it's a scam.
It's a scam.
I was a bit inspired.
I thought, like, yeah, it is a bit of a scam.
And also, I am, like, you know, definitely in a position to do something about it.
But I felt...
Maybe I should do a little bit more about it.
Yeah.
And I will say, I mean, in a system that works well, it would work the way you were talking about, the way it works back in the day, right?
Where you have people who are doing their research and when they find something that's worth sharing with the world, then you would go and actually be like, oh, we found something and now we're going to go and actually, this is something worth being out in the world so that it can move science forward.
But that's not how it's done nowadays, right?
Now it's like more about...
Getting your publication so you can get your job and you can get your promotion, right?
It has shifted from actually sharing truths to the world or to the scientific community to things that you do to advance your own career.
And that's what really, I think, bothers me.
I mean, I'm more than willing to admit that maybe I'm just not competent enough to play well at this game.
But at the same time, to me, it's like, well, if I found something that was worth sharing, I would be shouting from the rooftops, right?
Then it would make sense to go and be like, you know.
I'm going to add something instead of just throwing a pebble in a pile.
Be like, no, this is something that's worth communicating to the broader community.
And then it makes sense, right?
And then in that system, even the metrics that we use would be useful, right?
Because then if they actually reflected good quality work and work that's worth reading, right?
Then those numbers, those impact factors and all those things, then they would be useful, right?
But now there's just like...
No signal in them, right?
So they're not useful anymore.
So I want to attempt the devil's advocate for...
We love that!
Of course, I completely accept all the points about the publication metrics and the horrors that can be going through peer review.
And I've had various horrifying experiences with people.
With kind of vested interests, shooting down papers and stuff.
So I'm well aware of all that.
And I like the sound of most of the experiments in peer review, open peer review, or what you just talked about, Daniel.
But with that said, during COVID, the anti-vaccine community, and also I would say the parapsychology community, though that's less of a concern, there are a lot of kind of pseudoscience,
conspiracy theorist-type communities, some that are genuinely harmful, like anti-HIV researchers.
And peer review, it's not perfect.
There are anti-vax papers that went through, and there were preprints that went up.
But by and large, a lot of those papers couldn't go through.
Such that if you look at anti-vaccine communities, they're always complaining that they're being suppressed, right?
And now you can either believe that they're being suppressed or the position I would take because they are able to put their material up as preprints.
And then a lot of people through open peer review kind of processes highlight how extremely flawed the papers are and why they should be rejected and not published.
And I feel that often when psychologists and academics are talking about the need to reform, they're not so much focused on the possibility that those reforms can lend credibility to people who will promote really harmful anti-vaccine misinformation or other harmful things,
race science stuff.
I'm just curious about that argument that getting over the low hurdle of peer review is actually a hurdle.
A lot of the gurus we cover, they have very few papers because they can't really deal with getting critical feedback at all.
And so if that's there, it kind of provides just a little hurdle.
That would be my main pushback.
Yeah.
I mean, that is what it does.
It's completely true.
And there's value in this.
And even preprint servers, there's medRxiv for medical research.
And for us psychologists, we were just like, yeah, let's just create a preprint server, put stuff online, and we never have to think about negative consequences or the negative impact of our work because, let's be honest, most of our work has little impact whatsoever.
But in the medical community, this was a much bigger thing.
So there they also set up a preprint server, but they have a much more elaborate screening process before it goes online.
So in psychology, if I upload my preprint, it will be online.
And there is a post-moderation process.
So it will be available immediately.
And after a while, somebody takes a look like, isn't this crap or something?
Isn't this weird?
But in the medical field, they do it before.
Exactly to mitigate these kind of...
Right?
So that makes sense.
Now, overall, I mean, this is a very complex problem with multiple aspects.
For scientists themselves, if we would never let other people read our papers, it would probably be fine not to have a peer review process because we are the peers anyway.
We can read papers and understand them.
And somebody writes this crappy paper, we understand it.
The general public is reading along.
And there are actors that want to co-opt our scientific process.
To give credibility to some claims that they want to make.
And those are negative possible consequences that you need to prevent in some way.
Now, the peer review process is doing this, but it also feels like it's such a messy way to do this.
Because exactly, like, I mean...
You also can't say, okay, but if it was stopped at peer review, that is a good thing because Matt just showed us that his genius worthwhile papers are also stopped in this way.
So it's a very noisy selection mechanism, right?
Peer review.
And I'm much more like this post-publication system, exactly like Chris says, where afterwards, if the preprint is online, a dozen people point out flaws.
Now, that also has negative things because I think some people are just willing to invest their entire life in producing bullshit.
And then, you know, the investment of 12 people to point out the mistakes in the work of one paper.
Basically, this one paper is going to keep 12 honest scientists from making a useful contribution to society.
So that's also not nice.
So yeah, yeah, what to do?
I don't know.
Yeah, I mean, yeah, that's a fair point that it's a hurdle.
But I would also say that we wouldn't see as much junk as we do if it was a decent hurdle.
I mean, no, you read stuff that gets published in, like, PNAS and some other journals, and it's, like, that stuff is just bad.
And so what you're saying about, like, the stuff that's really, right, it's stuff like precognition, that you're like, oh, obviously this is junk, right?
Or, like, stuff where, you know, COVID, like, just awful, awful stuff, right?
Like, that's not making it through.
Okay, but a lot of stuff is making through that's just absolute garbage, right?
Yeah.
And, you know, you mentioned Daryl Bem, in case people don't know him...
It's a paper, Feeling the Future, that claimed over nine experiments that you could kind of apply the stimulus after the experiment and show that it produced an effect.
So reversing time, which is quite an extreme claim.
And that got through in one of the top-tier journals.
So I'm definitely not saying it's a perfect system.
I think...
Just to follow on what you said, Daniel, and I think it accords with this point, is that you had an encounter recently with a guy, Alexandros Marinos, who you may or may not know, but he gained an online profile from being a Brett Weinstein superfan.
He's since branched out into other avenues, but that was primarily where he came from, promoting ivermectin studies.
He took an interest in the situation with Data Colada.
The alleged fraud case, just to avoid also being sued.
The alleged fraud of Francesca Gino, the Harvard professor, where Data Colada pointed out that there were problems in the papers.
Yes.
And my hunch is that because myself and Stuart Ritchie and some other people that have been unpleasant about...
Brett and Eric Weinstein's conspiracy promotion, were promoting that case, that it might have prompted him to adopt the more critical stance, right?
Because his position is somewhat skeptical that they've actually detailed real problems and so on.
But I mention him more as an illustration because there he produced a thread on the specific errors or indicators of fraudulent data that had been detailed.
And he did a kind of amateur analysis.
But his thread is always on Twitter or sometimes on Substack, but extremely long.
And from the perspective of somebody who doesn't really know much statistics or know much about the processes involved, it looks very thorough.
And that's how his threads look about ivermectin, and so on, and Robert Malone's contribution to the mRNA vaccines.
They're very detailed.
And you guys did an episode on cargo cult science, right?
And we've talked about this kind of thing online as being a kind of cargo cult presentation of scientific rigor.
But like you said, to address that, right, for someone like you, the engagement would end up taking you weeks.
It would never end in the sense that there's never going to be a point where they admit that they've got something fundamentally wrong or changed their stance.
So in that case, I'm wondering what you both think about the kind of onus on scientists or science communicators to try and combat that.
Because on the one hand...
I think it's important that people are aware of it and kind of think about how to deal with it.
But on the other hand, I completely understand real scientists being like, I don't have time.
I've got my own work to do.
I don't know how to deal with these people.
And kind of just ignoring it or blundering into not realizing who those people are.
So, yeah, you're giving your recent encounters.
I'm just curious, any thoughts about that?
So I think it's interesting, but I don't think there is a distinct category of "these people."
So I think this is definitely a person who's at the extreme end of a continuum of wanting to be critical about things that most other people agree on.
But I had similar encounters with just normal, average scientists.
Also taking a little bit of this critical stance, but it's exactly the same process.
So relatively limited real engagement.
And then I think in this case, this person actually had invested some time in figuring out what the claims were based on.
That makes it a bit more effortful to go in and figure out why they're wrong.
That also makes it more...
So I think that strategy is if you invest more time into a topic than 90% of other people have done.
You come across as really smart and almost nobody is going to invest time to prove you wrong, even when you are actually wrong, right?
So it's an interesting thing to see.
But there were other scientists doing...
More moderate versions of exactly the same thing.
Like, oh, well, there's some good points being raised here against it being fraud.
And you're like, no, those are not good points.
You also don't understand anything about this situation.
So it's a continuum.
So there's not a separate category of "these people"; this is just a tendency to criticize something based on an incomplete understanding of it.
So I think this happens all the time.
I thought in this case, actually what I thought was fun, I mean, this was a...
My summer wasn't the most fun.
I was spending a lot of time in the hospital.
My wife was really sick.
So I really needed some distraction.
And this guy was just one of the ways that I found some distraction as I was sitting there.
You know, you need to get worked up about something that distracts you.
But what I found interesting was following up on this.
So there's this initial thread, which sounds quite convincing.
And in the back of people's minds, there's like, oh, but there was a thing here, right?
There was a thing about this fraud case that wasn't so rigorous and solid, and somebody was looking into it.
And what I find fun to do is just two weeks later say, so, where's the whole point?
Like, where's the rest?
You were making a starting point.
Where's the rest?
Yeah, no, no, I'm not working on it yet, but I'll work something out.
Okay.
And then you get something that's worked out, which is pretty crappy.
And you say, okay, so what about this and this and this?
They're like, okay, okay, yeah, I'll look into it.
And then two weeks later, you follow up again.
You're like, so where is it?
Where's this thing?
And I think this is the problem we have.
So people can float around.
Hundreds of ideas.
Most of them are crap.
Some go viral and cause a little bit of doubt.
But nobody really follows up on stuff.
So you should actually have some nice overview where you say, look, this is a claim you made.
Here is why you were wrong about these things.
And if you then have a track record of just being wrong a lot, and I look somebody up like this, but...
But this is again a thing that works not just for gurus.
It works just as well for scientists.
Because in science, we also don't have this.
Anybody can come up with the newest crazy idea.
And then if you would see, but you have a track record of just coming up with shit most of the time.
Maybe we should use this a little bit as a Bayesian prior to say, okay, I'm not going to take it so seriously yet because you've been wrong so often in the past.
I really need some other people to confirm what you're saying here.
And if that happens, fine.
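(A toy illustration of the "track record as prior" idea being described here, with numbers that are ours rather than the speakers': H is the new claim, E is the new evidence, the prior probability of the source being right is set at 0.1 based on their record, and the evidence is assumed to carry a likelihood ratio of 5. Even then, the posterior stays well short of certainty.)

\[
\frac{P(H)}{P(\neg H)} = \frac{0.1}{0.9} \approx 0.11,
\qquad
\frac{P(H \mid E)}{P(\neg H \mid E)} \approx 0.11 \times 5 \approx 0.56,
\qquad
P(H \mid E) \approx \frac{0.56}{1.56} \approx 0.36
\]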
Now, we don't have this process, I think, because we didn't have these signals or these communication channels.
So we don't have anything set up for these communication channels.
We made the peer review process for the old-fashioned science communication thing, right?
We had the printed media.
We don't have something like this because, yeah, these communication channels are so new.
So we just need to think of, okay, how does this work?
Who is checking this kind of stuff and how do we do it?
Yeah.
I take the point about there being this tendency.
Contrarianism sometimes can be correct.
It's actually valuable.
I find Lee Jussim extremely annoying, but I also think he and people like him are valuable to have in a discipline.
And I think there's good contrarianism and bad contrarianism.
And like you said, the track records can show you which...
Which people lean towards.
So I would take that point.
I think I'm a little bit more sensitive to the fact that there really are these communities where anti-scientific principles are promoted and held up as valuable.
No, I understand.
That's also true.
So there are certain categories, right?
And in the end, maybe we need to draw some boundaries.
Because I've, for a very long time, stopped trying to convince everybody about things.
The probability that I'll convince 100% of people that I'm right about something, I've given up on this goal.
So there are people you will never convince.
And maybe it's fine.
I mean, and then it becomes a very political game.
What you want to do is limit the influence that some people have when you're really sure that they're doing something wrong.
And that is, like you mentioned Lee Jussim, who's a contrarian voice in social psychology.
But I don't think he's wrong very often.
And yeah, so there's no reason to prevent somebody like this from having their influence.
Then it's let many flowers bloom.
We see what the end result is.
And there, I think, our field is actually even anti...
Sort of, it's overly conservative.
We want those contrarian voices.
We're very negative to some contrarian voices, actually.
So we love to ridicule people if it's clear that they're over the edge.
But we don't deal very well with the contrarians just within the borders of what is very reasonable.
And we're extremely harsh sometimes about people who want to be contrarians in our field.
Yeah, and Smriti, I had a question for you that might relate to that point Daniel is bringing up.
So I noticed on your Twitter profile that you have Heterodox Academy at Berkeley, right?
Maybe you don't remember.
No, no, I do remember.
And one of the things that I wanted to bring up with both of you, Lee Jussim being a good entryway to that,
is that there are open science advocates and the traditional, somewhat hostile response to those efforts, but you also have divisions within those who recognize the need for methodological reform, right?
And one such division was highlighted whenever Perspectives on Psychological Science ended up forcing out Klaus Fiedler, the editor.
And there...
Lee Jussim was involved because he was one of the invited commentators on a paper that was arguing, and I'm not going to do a great job summarizing it, but basically arguing that psychology needed to adopt Kendi-style anti-racism.
Yeah.
So that whole issue, though, resulted in a somewhat clear division, I would say, between European and American researchers.
I wonder about your thoughts on the state of heterodoxy in kind of open science or academia in America, and then this potential divide between people in America and outside of America.
From the perspective of me and Matt, for example, because neither of us is in America, a lot of the culture war stuff, like, it's annoying, but it seems a little bit distant, right?
Because we're not surrounded by it.
So I can sympathize with people facing those issues, but I also feel like it sometimes is taken to encapsulate the whole world.
And there I mean both, like, the social justice concerns and the anti side.
Not anti-social justice, but like thinking that the social justice stuff is consuming too much attention.
I realize that's like a ton of things to throw on.
You can pick anything that you like to comment on.
Yeah, I mean, that case was just, I mean, it is sort of very interesting.
And I will say, I mean, I'm in the US, but I'm also at Berkeley, which probably sees, like, the worst of this.
It comes from here, I think.
I will say that.
It's our fault that all of the stuff is happening, which is true.
A lot of the DEI statements and stuff like that, we have given birth to all of that.
It's a very interesting place to be.
It's almost like people...
There's a lot of people here that, of course, trust in science and care about science, but also have a lot of social justice concerns.
It all, I think, comes from good intentions.
So I've had conversations with people here at Berkeley who are, of course, very open science, very into that stuff.
But when you talk about Heterodox Academy, they will say things like, oh, but I thought when I think Heterodox Academy, I think people like Jordan Peterson.
And the intellectual dark web.
Like, they associate, like, it's essentially, like, when you talk to them, they're like, what it sounds like you're trying to do is create a safe space for racists.
And, of course, the meaning of what it means to be a racist has now completely changed.
Like, if you say things like, we want everybody to be treated equally, that's racist.
Or I just attended a seminar by one of the authors of the paper that Lee Jussim is also on, In Defense of Merit in Science, right?
You would think a very sensible thing, right?
We should care about merit before we care about things like what your gender is or what the color of your skin is.
But that's a controversial thing that some people think that that's now a racist thing, right?
Or that if you have things like standardized tests that lead to differences in outcomes, right?
That that makes the test racist, right?
Going back to sort of psychometrics.
So I think there's all this concept creep going on where people are trying to do the right thing.
But it is becoming, yeah, like, I think they're putting a lot of value on that stuff above and beyond, like, things like pursuing truth.
Because even if you think there's a problem and you want to solve it, you first need to figure out what the truth is.
And I think, yeah, so that's getting lost.
And I, I mean, Daniel and I have this conversation too, right?
Sometimes it's like, it feels like it's a really, it's a lot worse here.
But I think it starts bleeding in, like, I...
I think the U.S. and also American academia has just outsized influence on the world, right?
So I think it's something to be mindful of.
And that's one of the reasons I'm part of this, is that we need to be having these conversations.
If you have now people saying that, oh, math is white supremacist and rationality is white supremacist, we can't function in science if we think like that.
So it's a problem, and I think it's going to start bleeding into everything we do.
Yeah, and it's terrible for science, right?
Yeah.
So it's interesting because in our podcast, we've chosen to never talk about current topics, right?
So we would never talk about it.
That's a really good idea.
There's some reasons for it.
That doesn't mean that we, you know, off the podcast, we don't talk and think about these issues.
And I think this situation that emerged, so it really is the perfect, let's say, distinction between things that happen in the US and in the rest of the world.
Because the original paper that was published was about the representation of people with certain racial minorities in science itself.
And that there was a disparity and, you know, the paper basically said it would be good to do something about this.
We need to go and, you know, make sure that there are more people of other groups in science.
And the criticism on this, and I think this is important.
So somebody like Klaus Fiedler, who's a German psychologist, I know him quite well.
And I think people in the US don't know him very well.
But he's like in the most prestigious position in German academia you can be in as a psychologist, basically.
If you tell me like, hey, look, Daniel, you can walk in this room and have a coffee with Klaus Fiedler or Daniel Kahneman, I would be like...
I get to meet Klaus Fiedler.
Oh, that's cool.
Whereas most Americans would probably be like, yeah, you know, you'd pick the other person.
But for us, this is already quite different.
So I think that's context that many Americans don't realize.
Now, he is also the poster child of if people do one thing, I will argue against this as a matter of scientific principle.
It doesn't matter what my opinion on something is.
If I just see most people go in one direction, I'll go in the other direction and say that's the direction we need to go in, because I think in science somebody needs to take this role.
So he is a contrarian out of the scientific principle.
So he writes, or he invites, I guess, or gets a paper as the editor.
Well, this paper is not really good.
It makes some weak arguments, which of course, yeah, every paper can make some weak arguments.
It's fine.
There's definitely some points to criticize in the paper if you read some of these comments.
Not everything that's written there is crazy.
But just doing this, and I think he's aware to a certain extent that doing something like this will piss off people in the U.S. But I think he also thinks this is important.
It's important to piss off the people in the U.S. sometimes because exactly as Chris says, like, yeah, it feels that they have a little bit of an outsized influence on the rest of the world.
I mean, there are not that many Americans.
They are the minority, and it's lovely that they have their own problems.
It's exactly like you say.
It feels a little bit like, yeah, you have your own problems there.
It's not that we don't have any of those problems, but we have them in different ways.
It plays out in other ways.
So I think somebody like Klaus Fiedler says, yeah, I am going to piss off some people over there, but it's fine.
That's what my role is.
We should have a science where this happens every now and then, because if we don't do this...
That is not good for science.
And yeah, I think it's a nice illustration that in this case, the response was so extreme that he basically had to quit his position as an editor.
Not only that, but actually the whole, so this is a journal Perspectives on Psychological Science.
It led to such a problem that the field thought, the field, we mean the US people behind the publication, right?
They felt like it's too difficult to have a journal like this at this moment.
Nobody wanted to be the editor.
So it has become so sensitive that for a moment now, we can't have a journal that publishes perspectives on psychological science because apparently it's too touchy a topic or something.
Well, those things are really interesting and I don't really know why we don't talk about this so much because that is a very interesting development, I think.
And definitely this difference in opinion between What the US, what people in the US think and what people in the rest of the world think, that is worthy of discussing a little bit more.
Maybe not exactly like Klaus Fiedler tried to do it, by the way.
We can definitely have a different way of putting it on the agenda.
But yeah, it's worth discussing totally.
And it's terrible because it ends up affecting the scientific process.
Because I mean, I see it all the time where people are unwilling to criticize certain research because of the topic.
That's, you know, maybe a little sensitive or controversial, or the researchers are people of color or minorities.
And it's like, well, that's not healthy, right?
That's a really terrible thing, because that means that we're not doing what we should be doing, which is criticizing ideas, because now people are afraid that they're going to get called a racist, like Klaus Fiedler was.
Some things, yeah, again, that he did, maybe he should have done or gone about it differently.
We could definitely acknowledge that.
But to call him a racist is just...
But now people are worried about that.
And so we're sort of losing as a whole.
And a situation like this doesn't make it easier for people to do it next time around.
We were discussing hiring of academic staff a while ago.
So in my university, we try to promote...
Having more women as professors in the university.
We are a technical university.
And just from the past, we didn't have equal numbers of men and women being professors.
And it's still very slow, this process of reaching a more equal number.
So the university board had decided that there would be a new policy where they would first advertise certain jobs, or maybe all jobs, actually, for a while, only to women.
So the first six months.
Only women could apply.
And if you couldn't find a suitable candidate after six months, you could open it up to anyone.
And the idea behind this is maybe worthwhile because we definitely want more role models for our female students.
Like, hey, I can also become a professor.
So I'm all on board with that.
Somebody sued the university for this rule being discrimination.
It went to court and court said, this is indeed discrimination.
So you can't do this.
They have changed the rule a little bit.
Now it is only certain departments for certain positions, like the math department, for example, is still entitled to open jobs for a limited amount of time only to women to promote more women applying to these jobs.
But my department no longer can do this because we were already pretty fine.
So suing the university over something like this is something that you have to do, right?
You have to do this if you want to have criticism of policies, where you're like, maybe this is not in line with, in this case, it went to the Universal Court of Human Rights.
That's also, that's kind of nice that we have human rights like this, and some objective people who think about this, you can disagree with it, that's fine, but okay, it's policy.
And we were thinking, would this happen?
In the US, right?
If you have a policy like this, would anybody go out and sue the university for discrimination?
I don't think so.
But maybe they should, right?
I mean, we need to be able to have a discussion, a critical discussion about topics like this.
Yeah.
I mean, it sort of happens here, but not as blatantly as "women only," right?
Here, it's like women and minorities are encouraged to apply, right?
It's things like that where you get around, right?
So you can't get sued, right?
But it is still done.
And I mean, again...
The DEI statements, right?
I mean, that all started, I think, if I'm not wrong, like here in the UC system, probably at UC Berkeley, if I'm not mistaken, where they do use that as a first pass.
So they will actually, the first thing they look at would be your DEI statements and just like, that's your first cut of like just removing 80% of the pool because their DEI statements are not up to the mark.
So they do it in these oblique ways without actually openly saying that we are only limiting it to people who we think are ideologically aligned with us.
I'm wondering if GPT-4 is going to solve this issue.
Isn't it pretty easy just to generate those?
Oh, right.
And it is so unfair.
I mean, since we're talking about sort of the global perspective, it is so unfair to people who are international students who don't know all the ins and outs of what's happening in American academia, right?
Like you have to know what they're looking for to be able to give it to them in these statements because they're not looking for your opinions, right?
They're looking to hear what they want to hear.
And so if you don't know what's going on, right, it is so deeply unfair to, you know.
You know, to minorities, people who don't have...
In fact, I mean, I've heard from people here that some of the...
When they actually score those DEI statements, the people who end up doing really well sometimes are, like, white men.
Yeah.
Like, you know, who come from academic backgrounds.
It's like, yeah, because they know what to say to get in, right?
I was saying exactly that, actually, to a colleague earlier today, which is that, like, those statements are not...
I also think I'm one of those people that knows what to write in them.
And they're very much not like, oh, you know, I've supervised all of these students from diverse backgrounds, et cetera, work with these colleagues.
Because that's like saying you've got...
Some black friends, right?
I'm not racist.
So what is wanted, of course, is the correct language, the correct kind of theoretical references and allusions.
And it really, you know, it strikes me that this is something that is going to be challenging for someone that isn't enculturated in a particular kind of upper middle class background.
And if you're from a working class background or if you're a middle-aged...
Indian engineer, say, who's recently come to the United States, you are going to be very much disadvantaged compared to someone like me.
But I'm hopeful that GPT-4 will be able to be available to everyone.
But then, what's the point?
Yeah, exactly.
Well, that's what Yoel Inbar kind of argued, right?
And then almost all of the debate around that did not focus on what his actual argument was, which was that, whatever you think about their merits, there's no evidence that they deliver what proponents say
they can deliver.
I think he was even just doubting whether they, you know, it was just a discussion of: do they actually deliver?
The evidence, I don't think, is even there.
So it is indeed not a good thing if raising the question gets punished. And I think you can raise questions in bad faith, but this is not a bad-faith question, right?
This is just like, are we helping the people we want to help?
You should have criticism in science.
And I think it's an interesting distinction in how much different...
Cultures allow different criticism.
I mean, I often joke that because I'm from the Netherlands, we are so direct.
Like, we will just criticize you for anything, you know?
It is true to a certain extent.
There are other cultures like this, but...
That's why people think you're an asshole on Twitter.
That is, I mean, and I am sometimes.
I am an asshole on Twitter because I also don't particularly care.
But it's Dutch.
It's Dutch.
It's Dutch.
Well, you know, you shouldn't excuse me completely about it.
Sometimes I'm just an asshole.
I'm just annoyed.
And I just don't control my annoyance.
But the point of being able to criticize things is something that some people hold as a really core value of academia.
And I've talked to many people who feel that they lost interest in science because it has stopped being a place where you can voice these criticisms.
Now, there are, again, very big differences between fields.
I think everybody knows the stereotype about economics.
I have to relate an experience that touches on all this.
The dog agrees.
Or he loves Amy Cuddy.
He's so conditioned about his name.
There are some things weird going on.
How often do you mention his name around the dog?
Whoever's dog that was, I'm not going to say anything.
Amy Cuddy was the researcher associated with power posing who came under fire during the replication crisis.
And I was commenting actually on, she posted a thing about how terrible the online psychology community was.
And it was always kind of tearing down people, right?
And at that time, Eiko Fried had just been sued by Jerry Coyne and had just successfully defended himself.
That in itself is a whole other terrible thing.
He should never have been sued, to be clear.
And I'm glad that he was able to continue.
But in any case, after seeing all the support that had come out for him, an early career researcher against a well-established researcher who was suing him, I just found her characterization very incorrect.
And I made a comment about it saying something to that effect.
And I think it was early in my Twitter career, so I quote tweeted.
Without anticipating that she would respond to that.
But in response, she did.
And then she found a blog where I had just written a little thing, mainly repeating Simine Vazire's point that criticizing someone's research is not bullying.
It's a separate thing.
There can be bullying, but criticizing research has to be allowed.
And she kind of framed that as me.
Being part of the kind of mob attacking her.
And I went back and forth a little bit.
And then subsequently got an email from her requesting to interview me for her book, Bullies, Bravehearts, and Bystanders.
And it was quite clear.
As a Braveheart?
I don't think I was going to be a Braveheart.
And I just remember, this was a number of years ago.
The sign-off on the email had her university affiliation, but also how many millions of people her talks had been seen by, or books published.
And the tone of the email was very kind of accusatory.
And I was thinking at that time that, like, she presented it as me, the power differential being me attacking a woman, right, in science.
But the real power differential for me was this extremely, like, credentialed researcher who could accuse me in a book of being, like, a sexist bully.
And so I declined the kind invitation to contribute.
And I will say I also got DMs from Simine and some other people who had noticed that.
And just to say, we...
Like, messages of support, which was very nice, and I appreciate it.
But that case, if you take it purely on the kind of standard identity issues, could very easily be slotted in as, like, male academic attacking a female academic and dismissing her,
like, rigor, right?
But that wasn't anything to do with it.
Her gender was no concern of it.
I realize this is an indulgent example, but I just mentioned it to say that that definitely applies.
And I'm pretty argumentative and stuff, so it didn't stop me from continuing on.
But I wouldn't blame someone else, especially someone in America who might be going for positions or whatever, to basically be like, I'm not going to talk about any of that.
I'm just going to remove myself because...
That could do very real damage to, you know, career prospects or something like that.
If you become Lee Jussim before you have tenure.
Yeah.
And it's definitely an interesting case because I think it is a good example of where we want to draw the line between criticizing research, like the way that we want to criticize research in academia.
And so there's this book.
Which I like, I think, by John Ronson.
So you've been publicly shamed.
And it really nicely illustrates how online things can really spiral out of control.
And you can become like, you know, everybody can fall over you for some reason.
And it feels, if you are the person experiencing this, it feels like it's horrible.
It's horrible.
It will damage you for a very, very long time, even though everybody else has moved on after the one week that they were publicly shaming you.
In this case, this is actually what sort of happened.
I think actually the group of people who used Amy Cuddy's power posing as the example of problematic research is very small.
Actually, I think it is basically Andrew Gelman on his blog who used to, for like years, use this as the example of sort of shoddy research, along with the research on himmicanes, like the hurricane-name study.
I think it's actually mainly there that this has happened.
If you look around, not a lot of other people have used this repeatedly.
But regardless, at a certain moment, people have associations.
And it sucks if you become the association people have with research that is probably not reliable, even though there's a ton of other people who've done this and a ton of other people who have been criticized.
I think in our field, if you know somebody like John Bargh, I don't think this is a person who really came away easily from this whole replication crisis, and he was also criticized a lot.
The only difference is he didn't push back.
You know, you just don't hear from some people; he was just much more quiet in this case.
But where the line is supposed to be is an interesting one.
And I think if you had perfect control over a system, you would say, people, let's move on from this one example.
There are many other examples we could use.
Can we just, you know, stop using power posing as the example here?
But it doesn't happen.
At a certain moment, you have to keep talking about it, which is why we are talking about it now.
So it is just this reinforcing thing.
Right?
Again and again.
You can never get rid of it, even though it's regrettable.
But there is stuff to criticize in this work, of course.
And in this case, I've been involved, for example, in the peer review process.
And I think that people from all sides in this discussion haven't been perfectly objective in having a discussion about this topic.
So, yeah, how we criticize people is just not such an...
Easy thing to do well so that everybody is happy with how the criticism has happened, of course.
And I would add there that Dana Carney, the co-author on one of the original power posing papers, wrote on her blog, I think, just a Word document talking about some of the researcher degrees of freedom and problematic research practices that they applied.
But she was very open.
About it and basically said she didn't have confidence on the effect, but she wasn't condemned for that in any way, shape or form.
It was completely celebrated as an example of good scientific virtue.
I don't think it is the case that people are always just looking for any excuse to tear people down, like within the open science community, though it's sometimes presented that way.
Yeah, and Dana was actually the first author in a lot of that work, Dana Carney, who's also here at Berkeley, actually.
And yeah, her response was a lot more measured, right?
Like, I don't trust that.
I don't trust that research anymore. And it was much more balanced.
But I will say here, I mean, there's two things to mention here, right?
Like, the one is like...
When you're in this system, like, especially in the American academic system, everything is now about power differentials, right?
And it doesn't matter if she's, like, a professor, you know, at this big university versus, right, you're a white male, right, and she's a woman.
And so there's a power differential, even though, right, that's one thing.
And I do think there is a gender difference in ways.
People communicate, right?
And I think that's where the heterodox stuff comes in, but also the whole broken science, right?
That schism comes because of the differences, right?
And I am pretty argumentative, too.
Maybe that's why I'm okay with these conversations, because I will get into it with people.
But I do think that women are sort of socialized to be a lot more agreeable and not as direct.
There's a lot of people that think that if you communicate criticism a certain way, that it's bullying, right?
That if you ask, I think Lee Jussim has also been accused on Twitter just for asking somebody to give evidence for what they're saying, and that's bullying just because, right?
Of the power differentials, quote-unquote.
Lee Jussim has asked me to give evidence.
He does that to everybody.
He goes after everybody.
And that's the thing, right?
And as he should, right?
As is his right as a scholar.
And as it is your responsibility as a scholar.
If you're saying something, you should be able to back it up, right?
So I think that creates the whole sort of broken science thing where if you're criticizing people and you're not careful.
And again...
Nobody should be an asshole.
I think we can all agree, right?
You should not be an asshole to people.
We should be nice about it.
But of course, yeah, the communication.
But it does also go back to the thing we were talking about, people having their own theories and their own ideas, and you do have to sell it.
It's almost like your brand, right?
And I think what we should be emphasizing more is that you should be able to separate yourself from your ideas, right?
So people like Amy Cuddy, it's like you can say...
If that wasn't such a big part of who you are and it wasn't your brand, then you could think of it as though this is an idea that I put out in the world.
And maybe it's not replicable and it isn't.
And then you could say, right, like if you had more separation of it from who you are, then you would not take it as a personal attack.
And I think that should be the case for everybody, right?
If I put forth the belief, I reserve the right to change my mind, which...
We should be doing it as scientists anyway, right?
All our beliefs are provisional.
Like, anything I say even on this podcast, and I will deny all of it, you know, in a week.
So, right, you reserve the right to change your mind, and that's a good thing, right?
So you should not be, right?
So if somebody criticizes what you're saying, I think...
People are so quick to take that as a personal attack instead of thinking of it as, oh, I'm separate from my ideas, and if somebody attacks my ideas, I should be able to evaluate those on the merit of the argument that's being made instead of thinking that I'm being attacked or bullied.
And I think, yeah, it's hard to do, right?
We're human.
I think that can be hard, and especially if the other person is being an asshole, it's so easy to be like, oh, well, that person's just an asshole, right?
So I think it's a complicated, yeah, dynamic that I did.
Yeah, I'm not sure how to...
Deal with, yeah.
And what doesn't help is that sometimes people are just, well, whether you want to call it a bully or an asshole, but people are just being an asshole.
I mean, that happens, and I think there's, of course, a correlation between people who are willing to voice their opinions about things, but also people who have an affective response that is stronger than their desire to be liked.
By fellow academics.
I mean, that's basically what's driving some people, right?
You want something, like you don't want something to happen.
You're upset about a topic.
Let's say bad science.
There have been people complaining about this for 20 years.
They keep seeing it.
They get so annoyed by this.
They care more about this than being liked by other people.
But those people can also be jerks.
And I mean, I am one of these people in a sense.
To be clear, if I die, I hope that a lot of people are going to stand up and say, so Daniel was really kind to me during this period of my life or this, right?
I really hope this is going to happen.
I'm pretty sure that there's a very sizable group of people who will say this.
But there are things that you see online in my behavior where I am just really annoyed and frustrated.
I should...
Close my browser window and not let my personal annoyance from the last decade carry into this one message that I'm sending somebody who's not thinking about all this stuff.
But I do.
That's not a good thing.
If I was a perfect human being, I would keep a cleaner distinction between what I want and what other people do.
You know, I would be more objective about this.
But I don't manage.
I'm always impressed by people who seem to manage to do this very well.
I can't.
It's a personal flaw.
But yeah.
You know, I hope to compensate it by doing some good stuff for people in other places.
And sometimes I slip up in it.
And if you call me out for it, I hopefully will also have the, you know, I'll also be good enough to say I crossed the line.
I definitely feel often like, okay, this was too harsh.
Sorry.
And I will say it sometimes, you know.
But there's definitely a correlation there.
I'm more likely to say something that is just a little bit more nasty or because I'm annoyed or frustrated.
It is true.
You should try to emulate me, Daniel.
I stay very calm and non-aggressive online.
That's how to do it.
I see you shaking there, wanting to throw your oar in.
No, I'm fine.
I'll move us all along away from these potentially dangerous third-rail topics, race and gender and sexuality and power dynamics.
So let's talk about a group that...
Everyone's prejudiced against, apart from the Dutch, Bayesians.
Why do you hate them so much, Daniel?
So let me just say that some of the most fun feedback I get is on my course that's online, this open course on improving your statistical inferences.
It has a lecture on Bayesian statistics.
And sometimes people say, oh, I was really so interested in Bayesian statistics after doing this course.
And people recently invited me to say, hey, we're teaching.
We want to get some training in Bayesian statistics.
Do you want to come over and give it?
So I do think, hopefully, that in my teaching, at least, I'm objective enough about it.
Maybe it's a bit of a contrarian thing to want to just defend these poor little p-values against what I think is a sort of push towards other ways of analyzing data.
One reason that I think this is not a good idea is very principled.
So if you are a subjective Bayesian, why I don't like subjective Bayesians is because it violates my philosophy of science.
And I think it is very much part of...
Wanting a science where your personal beliefs don't play a big role.
So if you analyze claims, so of course we all have our beliefs and you can come up with any hypothesis you want.
You can study anything you believe to be valuable.
But when we're evaluating claims, I just don't like subjective beliefs to be part of this.
And in that sense, I'm not saying anything that...
Karl Popper wouldn't say, although he's also not as popular anymore as he used to be.
So I'm just exactly in line with this.
So that's one component.
And the other component is that, for me, I think a lot of people are just mindlessly switching one mode of statistics they don't understand very well to another mode of statistics that's arguably more difficult to do well and that they understand even less.
I wonder, maybe you've seen this as well, that like Bayesian thinking is like a buzzword in a way, and in heterodox spaces, I'm not talking specifically heterodox academy, I just mean heterodox spaces,
I often see people reference, I'm thinking Bayesian about this, and they'll produce, in the worst case, they produced a Bayesian equation for the probability of a lab leak, which had, just, you know, terms that are all subjective,
but, like, let M equal, you know, my assessment that it's likely.
And yeah, and it struck me that that has allowed people to kind of formalize inserting just their intuition as a...
Their priors, yeah.
Yeah, something that sounds statistically complex.
But yeah, are you...
I know that we all have friends who are Bayesians here.
We recruit Bayesians.
I don't make friends with Bayesians, no.
How do you feel about the issue?
Again, I don't hang out with Bayesians.
No, thank you.
But the thing is, I mean, it's...
I've always had the, like, intuitively, I agree with sort of Daniel, where it's like there's something that just feels off about it, right?
It's like this kind of thinking of, like, you're literally, I mean, on a technical side, there are people who make the argument that once your prior goes in, you're multiplying your prior, you know, with your data, which is like making up data, right?
You're just making up, like, oh, I thought something, and then you're using it, right?
So it's almost like faking data.
And a lot of the priors are so informative, like, what's the point, right?
What is the point?
They're totally informative.
I just want to have this Bayesianism.
It's just like faking data.
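(For reference, a minimal sketch of the formal point behind this complaint, in standard notation that is ours rather than the speakers': Bayes' rule multiplies the prior by the likelihood, and in the simple Beta-Binomial case the prior parameters a and b behave like pseudo-observations folded into the data.)

\[
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, d\theta'}
\]
\[
\theta \sim \mathrm{Beta}(a, b), \quad k \sim \mathrm{Binomial}(n, \theta)
\;\Rightarrow\;
\theta \mid k \sim \mathrm{Beta}(a + k,\; b + n - k)
\]

(So a and b enter the posterior exactly as if they were extra successes and failures observed before the study, which is the sense in which an informative prior resembles invented data.)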
Even on, like, a theoretical level, you'll hear people for Bayesianism making the argument, oh, but it's how babies think, right?
That's how babies make sense of the world.
I'm like, really?
That's how we want to do science?
It, like, takes cues from how babies think?
Like, come on.
It just does not make any sense, like, on any level.
And one of the best arguments I've heard against Bayesian thinking, sort of Bayesian epistemology, recently was on Naval Ravikant's podcast, if you guys have heard of Naval.
Maybe I'll share the link, and I won't be able to, like, recreate that argument really well, but it's, like: in Bayesian thinking, Newton's theory kept on gaining more and more support.
Each time you're collecting new evidence for it, your prior should have kept on getting stronger and stronger until the very day that it was disconfirmed.
Which meant that the day before it was disconfirmed, you should have had the most reason to believe that it is true.
When you think about it, that's not a good epistemology to have.
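(A rough rendering of that argument in odds form, again in notation that is ours and not from the podcast, where H is Newton's theory and the E_i are successive observations: each observation that fits the theory better than its rivals multiplies the odds in its favour, so confidence peaks right before the decisive disconfirmation.)

\[
\frac{P(H \mid E_{1:n})}{P(\neg H \mid E_{1:n})}
=
\frac{P(H \mid E_{1:n-1})}{P(\neg H \mid E_{1:n-1})}
\times
\frac{P(E_n \mid H, E_{1:n-1})}{P(E_n \mid \neg H, E_{1:n-1})}
\]

(If every likelihood ratio exceeds 1, the odds climb with each confirmation; a single observation that is nearly impossible under H then collapses them, which is the sense in which the Bayesian was most confident the day before.)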
And yeah, it's always sort of intuitively just been such a weird thing.
But the one thing that really bothers me about Bayesians...
Is they use the term Bayesian when they're talking about the analyses as if it's, like, something amazing, right?
I mean, in psychometrics and stuff, we use it, you know, sometimes to, yeah, get, like, posterior distributions or, like, some estimations, but they throw it around as if it's meant to be something very rigorous, and that's what bothers me.
God, they're the worst.
When people talk about Bayesianism, as you've highlighted in your book, Daniel, there are plenty of occasions where the appropriate method to apply depends on the question you're asking.
But I have noticed that as we pay more attention to the secular guru space and the...
The appeal of contrarian perspectives.
That there's an academic version of this too.
You know, like when we were talking about there not being a hard line, it's a continuum.
Because I remember, and it's a while ago now, so this feels like it's far enough removed from the controversy to mention.
There was a paper with a title something like...
Why pre-registration is useless, or doesn't do anything.
And it was only two pages.
It was a very short paper as well.
But that paper was written intentionally, provocatively.
And obviously, there's tons of metrics that show that it is useful.
And I think the same thing applies in the case of abandoning...
P-values or that kind of thing.
When they're used appropriately, almost everyone agrees there's not that much of an issue to use them.
But if you take a very strong stance, it kind of generates more attention and people like being...
The kind of, yeah, peacocking in an academic niche, like pickup artists.
That's a comparison.
So now we've compared Bayesians to toddlers.
I have a funny story about Bayesianism.
Well, I think it's funny because after my first postdoc, I applied for a job and my background was in psychology.
And even though I'd gotten into applied maths and stuff, I hadn't learnt any Bayesian statistics.
I'd only learnt frequentist statistics.
I'd barely heard the word Bayesianism.
I applied for this job at the CSIRO, Australia's top government research organisation.
I was interviewed by the panel, one of whom was a guy called Bill Venables.
He's part of the R-Core team.
He asked me where I stood on Bayesianism during the job interview.
And I couldn't just say, I don't know what that is.
So I bullshitted for about 10 minutes.
I spoke weasel words, vague language, you name it.
And I got the job.
So what can I say?
I don't know to this day.
Maybe they thought, this guy's an idiot, but they gave me the job anyway.
Who knows?
I don't know.
Or maybe I fooled them.
To this day, I do not know.
Yeah.
But one thing I think is really interesting, so we should be clear, first of all, Bayesians, most of them who use it do a very good job.
Some of them are decent people.
Also, but maybe more importantly, it hardly matters.
In practice, you've done everything.
You've come up with your research question.
You've developed your measures.
You designed your study.
And it's really all the way at the end, you're going to put this number on it or this number on it.
And those numbers, regardless of the ones you compute, will point in the same direction most of the time.
And I mean, 95 plus percent of the time, you'll lead to the same statistical inference.
So in a way, it feels like...
A nice example of how a scientific field can get worked up about a topic that doesn't matter as much as measurement, for example.
We don't pay attention to measurement at all.
We mess it up.
Well, at that moment, you can basically stop.
You don't even have to collect the data, let alone analyze it, because what are you doing anyway?
But no, all young people feel that this Bayesian versus Frequentist thing is a thing they should spend...
Cognitive resources on.
People in the field will even be crazy enough to say, oh, but if we only switched to Bayesian statistics, all our woes would be over.
And that part of it is what makes me so annoyed.
That you want to be a frequentist and just be quiet about it and do your thing, fine.
Right?
But this other part is what makes me speak out and say, well, how about we just use p-values well?
That's what most people are trained in.
It's probably the most efficient way to improve things.
And then stop thinking about it and focus on all these other components that are more important.
Yeah.
It is like that cargo cult statistics thing, right?
Where they're not even using it properly.
Most of the time, it's like, oh, our prior is that our coefficients are distributed normally.
And it's like, and why do you think that?
No.
We just put a prior and pretend like it means something when it really doesn't.
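(As a rough illustration of what that default move looks like in practice, here is a self-contained sketch with made-up numbers, not anyone's actual analysis: a one-coefficient grid approximation in which a vague Normal(0, 10) prior on the slope barely shifts the posterior away from the likelihood.)

```python
# Illustrative sketch only (not from the episode): a vague normal prior on a
# single regression coefficient gets multiplied with the likelihood and, being
# so flat, changes almost nothing. All numbers are invented for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
beta_true = 0.4
y = beta_true * x + rng.normal(scale=1.0, size=50)   # simulated data

grid = np.linspace(-2, 2, 2001)                       # candidate values of beta
# log-likelihood of y under each candidate beta (error SD fixed at 1 for simplicity)
log_lik = np.array([stats.norm.logpdf(y, loc=b * x, scale=1.0).sum() for b in grid])
log_prior = stats.norm.logpdf(grid, loc=0.0, scale=10.0)  # "coefficients ~ Normal(0, 10)"

log_post = log_lik + log_prior                        # the prior multiplies the likelihood
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)

lik = np.exp(log_lik - log_lik.max())
lik /= np.trapz(lik, grid)

# With a prior this vague, the posterior mode is essentially the maximum-likelihood estimate.
print("posterior mode:", grid[np.argmax(post)])
print("likelihood mode:", grid[np.argmax(lik)])
```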
Sorry, but you hinted at it, and I think both you guys would agree that measurement, unsexy and uncontroversial as it is, is probably the most important thing.
And Chris, I'm always on to you about it, aren't I?
Yeah, you're right.
These things are a big distraction.
You want to now talk about Rasch?
We finally made it so that you can really talk about Rasch modeling now.
Good for you.
We made it.
I'm going to shame Chris with some of his measures.
I'm going to tell him, Chris.
It's scandalous, some of the measures that he used.
I inherit measures from what people tell me to do.
I'm a mere cog in their machine.
I feel like, to some extent, this issue is like when people use...
There are good reasons to talk about a Gaussian distribution, but there are times when I feel like people say Gaussian because they want to sound more complex than normal.
I remember vividly discovering that a Gaussian distribution is, like, isn't that just the normal distribution?
Oh, it's the same thing, right?
Okay, so, yeah.
But it does feel like everybody, especially anyone with experience in social science research that is quantitative in nature, which most of it should be, has had this: you've done some analysis and you want to find out what is the statistically correct thing to do in some edge case scenario.
You quickly discover statisticians have argued both sides.
A third statistician has said, no, they're all wrong.
And in most cases, people are selecting the citation.
And Daniel, I'm sure that your paper has done this sometimes, like when people just...
Take you, cite you as saying, oh, you can do this for equivalence testing.
Lakens says that that will do it.
And that's it, right?
They just want the justification for because, and in many cases, because it is involving a lot of effort or the actual statistics might end up being too esoteric.
So the issues that we can focus on, like having Better measurement about having appropriate sample size and claims that match the quality of the data are much more tractable,
I feel, than expecting that everybody will dramatically upgrade their statistics.
It's not that you shouldn't invest in your statistical analysis, but just that there's a lot of other things that even if we had perfect statistical analysis, it wouldn't solve.
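(Since equivalence testing comes up just above: the standard procedure is two one-sided tests, TOST, which Lakens has written accessible primers on. The sketch below, added for illustration, implements the textbook independent-samples version from the usual Student's t formulas; the data and the plus-or-minus 0.5 equivalence bounds are placeholders, not recommendations.)

```python
# Minimal TOST (two one-sided tests) equivalence test for two independent groups,
# written from the standard Student's t formulas; data and bounds are placeholders.
import numpy as np
from scipy import stats

def tost_ind(a, b, low, high, alpha=0.05):
    """Declare equivalence if the mean difference (a - b) is shown to lie
    within (low, high) by two one-sided t-tests, each at level alpha."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    diff = a.mean() - b.mean()
    # pooled-variance standard error (classic Student's t; Welch would also be reasonable)
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    # H0: diff <= low   vs  H1: diff > low
    p_lower = stats.t.sf((diff - low) / se, df)
    # H0: diff >= high  vs  H1: diff < high
    p_upper = stats.t.cdf((diff - high) / se, df)
    return {"diff": diff, "p_lower": p_lower, "p_upper": p_upper,
            "equivalent": max(p_lower, p_upper) < alpha}

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 1.0, 80)
g2 = rng.normal(0.1, 1.0, 80)
print(tost_ind(g1, g2, low=-0.5, high=0.5))   # bounds of +/-0.5 are an arbitrary choice here
```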
I agree.
And I spent most of the last decade actually trying to get people to improve their statistics, right?
But I still agree that actually what I am doing is not the most important part of the research process to improve.
It is in a way a little bit sad that we care so much about this last point.
I don't know why.
Maybe it has something to do.
Smriti sometimes mentions our fondness for anything math-related that can make us look really smart.
And maybe that's a part of it.
So if you just dive into the statistics part a little bit and you do Bayesian statistics, then you figure out, you know, you can show that you're really smart.
And that's much more difficult to do with a good theoretical framework or with a good measurement development or something.
You know, maybe that's part of it.
I don't know.
But it feels that we're really focusing on, yeah, a thing which, sure, you can mess it up.
You shouldn't.
It's relatively easy not to mess it up too much so that it matters.
Now, let's leave it at that and go back to these other parts, I would say.
Matt, there might be a self-serving point that I'm inserting here.
But, you know, we have this, like, tongue-in-cheek thing that we call the gurometer, right?
Yeah, yeah.
We rate the gurus on these 10 features which we have found recurrent amongst them.
I think this is a very, very important measurement tool that is clearly extremely valid.
It's creating the field of gurulogy out of nothing.
I've already heard some gurus say, like, hey, I'm not scoring very well on these guys' gurometer.
How can I increase my scores on it?
I've already heard people want to improve their metrics.
Are we going to mention the cultural appropriation?
Is that ever mentioned on this podcast?
That has been mentioned, but only in emails.
And the sitar music at the start of the podcast.
That I kind of appreciate.
That actually sounds kind of nice.
I mean, the word guru has a very negative connotation in the Western context, but we still use it very respectfully.
Yeah, and stats guru, you know, that's a positive connotation.
But one thing that I noticed is like...
The two things we get are, on the one hand, being accused of scientism because we're putting numbers on it, right?
And like, oh, you guys are pretending it's a scientific instrument.
Even if we have a big red, flashing disclaimer that says, this is not, you know, a validated scale or that kind of thing.
But on the flip side, I saw people say, oh, if you publish about this, then you can do this.
But actually, I think myself and Matt...
Are probably amongst almost all academics, the people that have listened to the most secular guru content.
And these things that we've identified, I would bet my life savings on them being recurrent features that you find in this set that we're talking about.
But people seem to think if we publish a paper, which we actually do, but if we publish a paper, it somehow means that it's more...
You know, like, true or valid.
So you have this weird thing where, like, and we feel also, I think, Matt, we as well, that, like, having an academic paper with our names attached to it and this concept of secular gurus, it makes it different.
Even though we have a podcast with, like, three years of content on this topic, which is probably of more use and heard by more people than academic paper would be.
Maybe it gives it more credibility because it will go through the wonderful peer review process.
They're going to take our 10 factors down to eight or some horrible number.
We did it in paper form so it can work towards my H-index.
That's the main thing.
But I think it is really interesting because...
It does limit the way that we think our contributions to science work out.
And I think definitely like a podcast is, I mean, so I was thinking back about last year, basically, what I've done.
And like, I didn't publish that much this year for all sorts of reasons.
But I was thinking, what did I do that I really enjoy?
And I think, so I worked a lot on an open textbook.
And there's a lot of material in there.
Nobody will count it.
I don't even think it is on my resume currently.
I didn't put it on my resume somewhere.
I don't know where exactly because it's self-published, whatever.
And the other thing is the podcast, which has a ton of, I think, interesting information, right?
And it's true that we don't consider those outputs, even in our little startup.
We're at around episode 20, I think.
But even that number of listeners is larger than, I think, the average paper's readership.
Well, maybe about average.
I don't know.
I can't complain about people reading my papers.
But nevertheless, it gets a lot of attention compared to some other academic output.
Yeah, it's difficult to track, but it definitely...
Plays into this point of the publishing system being a little bit of scam and we just do it to get a number, you know, like get the respected output.
Whereas maybe the intellectual contributions we make in other ways are, yeah, at least as impactful.
And there's also, I mean, we just have a podcast episode coming out called Quantifauxcation.
F-A-U-X, so like quantification but faux, which I think you guys will enjoy.
And it's really touching on this idea of like, once we start putting numbers on things, suddenly it seems like they're actually meaningful, right?
That it sort of means something scientific and more rigorous than if we're just sitting here talking about these things, right?
And people also in psychology talk about sort of physics envy, right?
That we suffer from like, we want to sort of put numbers on things and quantify things and we value sort of quantitative research a lot more than qualitative research.
And it all goes into this idea of, yeah.
If you can just throw numbers around and do some stats on it, that makes it somehow more scientific.
Yeah, I'm just about to do an economic costing, as it happens, of gambling impacts for Victoria and New Zealand.
And that's basically that.
A lot of the impacts are qualitative, are subjective, but we have to put these numbers on it because ultimately, when the people are sitting in rooms, they can count the money that's coming in in gambling revenue.
They need something to count over here to weigh it up, yeah.
But a final question for you both.
I might start with you, Smriti.
You know, and the podcasting thing is a good lead-in because we want to know whether or not you're optimistic or pessimistic, both about like the state of like podcasting as just like a medium of communication.
There's obviously wonderful podcasts like yours and ours that are contributing, making the world a better place.
Then there's, on the other hand, there is the Dark Horse podcast.
And also academia generally.
Smriti, you're going a bit of an unorthodox route perhaps in terms of how you're approaching your career.
Heterodox.
Heterodox.
Yeah, you're in Heterodox Academy.
You're not playing the game and maximizing your H-index.
So did you feel good or bad about the future there?
Oh, that's a hard question.
Well, about podcasting, I mean, podcasting, I definitely feel, you know, good about it.
I think, broadly speaking, I think it does more good than it does harm.
There's lots of good stuff out there, and you can, I mean, and I get to talk to wonderful, cool people like you guys, right?
Like, it's like, I've just, I met Daniel through my first podcast, right?
So it's like, it's just such a great way to listen to people having, like, smart, engaging people, just having good conversations.
So I think you, it's like, yeah, you know, like...
Yeah, being at a water cooler and just listening to wonderful people talk.
And I also think that there's some value in hearing different perspectives.
When you can actually hear them, rather than just read them.
So there's also that bit of more of a connection.
So I think podcasting in general has been wonderful for me.
I think that's great.
I sometimes think I should just give up everything and become a serial podcaster.
Start two or three more of them.
Academia?
I'm honestly not sure.
Sure.
One of the things I think I'm trying to, like, if I stick around, like, the thing is, I'm trying to figure out if I can help make things better, to move things in a good direction.
And if the answer to that is yes, then I would like to stay because I really, I don't see myself doing anything else.
Like, I know I belong in a classroom.
That much I know.
Like, I belong in the classroom, like, as a student, as a teacher, as a researcher, ideally as all three.
But if I, yeah, so I, like, yeah, but the way things are going, at least here, things are not looking that great.
And to me, it's at this point thinking about, is there, where's the room for me to help make things better?
And if I can find that place, then I would like to stay.
And if not, then I will take Paul Meehl's advice and make an honest living selling shoes.
That's my backup plan.
The answer is I don't know.
Daniel, how about you?
Podcasting has basically replaced the science paper of the 20s, the 1920s, because you should read those science papers.
It's just like, well, so I ran into Matt in the corridor, and we were both chatting about topic X, and I just wanted to share with all of you some thoughts we had, which is about this and this and this.
I mean, really, those papers are just letters, often communication on a much more informal level, and we've lost that throughout science.
So I think...
That's why we had blogs maybe a decade ago.
They were pretty big.
And now I think podcasts, because they replace this informal sense of communication, which many people appreciate, I think.
So podcasts, yes, very positive.
About science, it really depends on whether Smriti is going to stay in science or not.
No, but for a good reason, actually, because I think people with that motivation who say like, okay, can I make things better?
And I think that is really the killer question that we have to confront ourselves with.
Can we make things better with our science?
Or are we just doing it to get the papers in and stuff?
And I feel that the field needs to have this slightly uncomfortable conversation.
What of the stuff we do is useful and makes things better?
And what is the stuff that we're doing?
Because, I don't know, somebody is just telling us to do this kind of stuff.
But we all feel.
That this is not it.
And if we have that conversation and we're able to resolve it in some way, then I think people like Smriti will stay.
And I think this is really, really important because sometimes I worry that the most talented people in our field are not motivated.
That's my drive to do what I do.
I sometimes feel the people that have the most talent for science, they take a peek, they look in to what we do, they're like, nah, no, I'm gonna...
Go this NGO route.
I'm going to work for the government or I'm doing this other thing.
I feel I have more impact there.
If we lose those people, we will lose out.
But if we manage to think about, okay, how do we make sure that science is motivating for people who want to make the world a better place, then we can go in the right direction.
And I would say just from my own perspective that the kind of material that you guys put...
And the attitudes that you present, setting aside the issue of the Dutch culture, I think it is a really good illustration of the proper attitude to science.
It's nice to hear it discussed.
One of the things that I think is worth keeping an awareness of is the fact that both of you...
And particularly you, Smriti, are ambivalent or still considering whether you can make a positive impact, or that kind of thing.
There's a lot of people that have a lot less ability and a lot less thoughtfulness, and they're not at all worried about that.
So we need more people who feel ambivalent or a bit annoyed about things and have a desire to see things get better.
I think you're both doing the Lord's work, so to speak.
And Daniel, personally as well, even setting aside all your H-index and publications, the MOOC that you made helped me and remains something that I force all undergraduates to at least hear about.
I don't know if they take it, but I think that definitely will have a lasting impact.
Yeah, your podcast is great and we heartily recommend it and appreciate you spending so long to be tortured by us.
Thanks so much.
Thank you for having us.
Thanks, guys.
I'm just disappointed we didn't get to item response theory, but another day.
Another podcast.
You can come on our podcast.
I'm joking.
That's a promise.
You need to provide your private numbers after this so he can contact you for follow-up chats.
But yeah, thank you both.
Yeah, thanks so much.
It was a lot of fun.
Thanks.
Bong!
That's us finished, Matt.
We've sorted out all the problems in academia.
It's all done.
We finished the interview.
We finished the interview.
It's time to read the reviews of reviews.
Well, that's right.
You know, I normally like to get your feedback about which specific part of the interview you liked the best, but I'll pass on that this time.
I think my own contributions were my favorite.
They were pretty good.
They were pretty good this time.
I thought so as well.
You know, Daniel and Smriti, okay, but you...
I'm fine.
You really impressed yourself.
Yeah.
Well, review of reviews.
Well, well, this week we don't have that many.
I haven't canvassed for reviews recently, Matt, so there aren't many; people need to step up their game.
I asked GPT4 about our podcast, and I'm not going to talk about AI again.
This is just a thing.
Because the reason I asked is not vanity.
I just wanted to...
It's a good little litmus test to check how sort of up-to-date and how comprehensive it is because of the extent to which it knows about...
Our insignificant existence.
Yeah, super niche and quite recent kind of web presences.
It's just, I don't know, for me, it's helpful.
I presume you're turning on like the browsing mode.
Yes, it's now enabled by default.
I probably want to try it again.
I prefer to not browse and I just see what's in its brain, but I couldn't be bothered doing that because the funny thing was I asked what people were...
Thought about it.
Found some reviews.
And there's mixed opinions.
Some people say that it's good.
It's a lighthearted thing.
An introduction to serious topics delivered in a lighthearted way.
Other people.
And then it said a lot of nasty things.
And I was like, ah.
Fake news, Matt.
We have 4.4 out of 5 stars on Apple.
So, you know, don't give us fake balance, ChatGPT.
About six months ago, it thought we were Conspirituality.
It kind of had the two mixed up.
It was like, Decoding the Gurus is a podcast about secular gurus hosted by Matthew Remski.
I've got it.
I'm going to read out ChatGPT's synthesis of the reviews.
Opinions on the contrasting personalities and styles of the co-hosts of Decoding the Gurus, Chris and Matt, seem to vary among listeners.
Some appreciate the podcast as a much-needed critique of gurus, describing the hosts as engaging in good faith with a sense of humour, although they note the episodes can be long.
Others find this show...
Even the fans find it.
Yeah, yeah!
That's accurate.
So that's the positive.
That's cute.
Others find the show biased, with the host's voices coming across as arrogant and lacking humour.
Lacking humour, Chris?
Yeah, that's not...
Arrogant I'll accept, but lacking humour?
Come on.
Criticising them for being too liberal and not as rigorous as traditional media.
It's the left...
I like this use of liberal because it's ambiguous.
I think ChatGPT is using it in the sense of we're too left-wing, but if you speak to any true lefties out there, they would describe us as being too liberal in the opposite direction.
But anyway, another perspective describes the listening experience as a mix of crippling cringe and satisfying analysis.
What?
Who wrote that?
It's got citations.
All right, that's not us.
That's not us.
Okay, yes, carry on.
I'm going to follow these citations.
The citations are taking me somewhere at random.
That's all right.
No, I interpreted that as we are showing the crippling cringe.
And it's not us being crippling cringe.
We showcase crippling cringe.
That must be it.
Positive self-image retained.
Well, we can treat that as our list of feedback, if you like.
Well, yes.
I had one, though.
It's very short, so it'll just complement.
I feel it does echo some of the sentiments that ChatGPT raised.
This is from Future2-2 from Austria.
Not Australia.
So it's one out of five stars.
Good idea, but what a waste of time.
Just your typical guys like to hear themselves talk for hours about things that could be said in a few minutes.
You know, is there something to that?
Is there, well, you might, yeah, yeah.
You might say so, yeah.
A bit cruel, but he or she has got a point.
So, shall I boost us up, Matt?
Will I give us one positive one, Dan?
We do have one positive one.
Yeah, that one, because that one was mean.
Potentially true.
So, let's balance it out.
So, this is better.
More accurate and more kind.
Elliot Toes from Australia this time.
An Aussie.
All right.
Nice.
My favourite podcast in brackets, I think.
Spelled T-H.
Don't need the qualifiers.
Don't need the qualifiers.
Favourite podcast.
Full stop.
At first, I thought this podcast was just a bit of fun to help me get through the Melbourne lockdowns.
Now, I think their fun, lighthearted way of deconstructing common manipulation traits with examples is quite invaluable, at least for me.
I was raised in somewhat of a cult, and hadn't thought about it much in my adulthood till this show.
While I'm here, perhaps you could cover Norman Finkelstein.
I'm a fan of his, and it'd be quite topical.
I don't know who that is, but okay, we'll consider it.
He's surely not galaxy-brained, but perhaps you'd find him guilty in the lower dimensions of the gurometer.
Okay, sure.
You need more left-wing people anyway.
Okay, now.
Bye-bye.
That's it.
It changed midway through.
Why?
I like that.
There's a lot of information dropped in that.
I was raised in a cult.
I initially, you know, find it just for a way of fun, but I think there's more to it.
And then, you know...
Invaluable, Chris.
His words or her words, not one.
Invaluable.
Yep, yep.
Not ChatGPT's.
Not a synthesis of...
It's a stupid robot.
It doesn't even have a soul.
It doesn't have a GPT.
It's a stochastic parrot, but it's not parroting real things.
It's just putting words together, hoping they make some kind of sense.
There's nothing behind the screen.
It doesn't mean anything.
No.
Well, so there you have it.
But we don't leave yet, Matt.
We find patrons.
The people that are on our Patreon, you thought you were going to get away.
No, they're here.
They're crowding around the exit to prevent you from leaving.
And so, I'm going to shout them out, Matt.
I'm not just stalling until I find the list.
That would be unprofessional.
Now, I'd like to find conspiracy hypothesizers first.
Because they're the easiest to discover.
So here we go, Matt.
A bounty of names.
Jamie Tullock, Jeff, Nick, Simon McLean, John Barry, Jeanne HB, Matthew Tobias, Chris Topp,
Bowers, Dave Lavelle, Sean Job, Hasey Rewrite, Mr. Blythe Man, William Carpenter, Colleen Chandler, Ryan Gost, Joseph Whelan, Arabella Hulsa Pfeu,
Matthew Ross, Ben Mack, X, no, Jack Anderson, Chinook Sarkar, Ren Raku, Andrew Mack, Eric Feeling, Scott
M, Evan Johns, and Doug Eric Boe.
That's...
Nice!
There's a lot of you, and I love you all.
Thank you.
Yeah, thank you all.
I feel like there was a conference that none of us were invited to that came to some very strong conclusions, and they've all circulated this list of correct answers.
I wasn't at this conference.
This kind of shit makes me think, man.
It's almost like someone is being paid.
Like, when you hear these George Soros stories, he's trying to destroy the country from within.
We are not going to advance conspiracy theories.
We will advance conspiracy hypotheses.
Yes.
I love them, Chris.
I love them harder than Lex Fridman loves love.
Yeah, wow, that is hard.
Like them a lot.
And now we have revolutionary thinkers.
They include Nick Angiono, Alex Nelson, Dexter King-Williams, Jan Demi, Lena Neld Robinson, Kevin Nyberg,
Tristan Vahan-Draper, Conal Dunn, Alexis Allen, Jeff Fitch, Stephen Cleghorn, Parvana Angus, Niels Christian Nielsen, Kevin O 'Rourke,
Agnes Ziaka Zed, God damn, I'm sorry, Chris Horvath, Andy Hunt, F and Daraka Laramon Hall.
Bunch of them, Matt.
Good.
Good haul of revolutionary thinkers.
Love them too.
Love them even harder than the hypothesizers, if that were possible.
They can get decoding academia.
Like, just imagine the Huberman episode, all the comments that you really enjoyed about regressions and significance values, but extended for huge amounts of time.
So, yeah, that's what you get there, decoding academia.
I'm usually running, I don't know.
70 or 90 distinct paradigms simultaneously all the time.
And the idea is not to try to collapse them down to a single master paradigm.
I'm someone who's a true polymath.
I'm all over the place.
But my main claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess.
And it could easily be wrong.
But it also could not be wrong.
The fact that it's even plausible is stunning.
That'll never cease the land.
Well, that's a keeper, that quote.
Yeah.
Well, speaking of keepers, Matt, we also like to find galaxy brain gurus.
The highest tier, who can come and talk to us and get our wisdom beamed directly at them, an interactive,
synchronous performance of decoding wisdom.
Or not.
Or not.
They can just contribute and be nice in that respect and not.
They don't have to.
So, today, they include Sonic Screwrider.
Pretty good name.
Alex Scootin.
And that's it.
Just two of them.
Just going to give them their own little cage.
Put them there on display in my menagerie.
They could be in there in the box.
They could talk amongst themselves.
Yeah, they're in the AI box that Lex is trying to work out with Yudkowsky how to defeat the three aliens.
So that's what you guys are.
It's always going to talk its way out of the box.
Yeah, that's it.
No, it's a being of code, Matt.
You forgot.
It's a being of code, and it lives forever.
It's already out of the box.
It's copied itself a million times.
Oh, Yudkowsky.
Can I just say, before I play the clip, I know this is whatever.
If you're still here, this is what you care for.
Yudkowsky is...
I saw him going on about...
Some specific thing about AI and payment systems and cryptocurrencies, right?
And, like, when we listened to him, he was morose because he can't tell the youth that there's any future because the world's going to be destroyed by the AI, very likely.
And then he's on Twitter tweeting about, like, credit card repayment systems.
Like, it's just, it doesn't add up.
Completely.
It's like a cult leader who's forecast that the world is going to end and get hit by an asteroid in a couple of weeks, and they're busy negotiating their car insurance.
Yeah, so it's a strange thing.
So, yeah, I don't know.
Anyway, Yudkowsky's a little bit of a joke.
So let's play the Galaxy Brain Guru clips.
Anyway, here it is.
We tried to warn people.
Yeah.
Like what was coming, how it was going to come in, the fact that it was everywhere and in everything.
Considering me tribal just doesn't make any sense.
I have no tribe.
I'm in exile.
Think again, sunshine.
Yeah.
That was one of the best illustrations of the Cassandra complex done.
About the sovereign nation.
2030 me versus...
Oh, that was classic.
We told them what was coming and how it would be in everything.
I love that one where you went into time travel paradoxes.
2020 me would not think 2023 me is too extreme, but 2030 me would.
Thank you.
He's actually an undervalued guru.
The sovereign nation's man whose name currently escapes me.
You should go check it out if you haven't heard that episode.
It's good.
Yeah, that's a funny one.
So we're off, Matt.
We're off out into the solar system to dance around our merry way.
Our next guru, you know, we got choices.
I'm just pointing out that you forced me to listen to Red Scare material.
Me?
Yeah, I gave you a choice and you said, let's do something different.
So I listened to Red Scare material.
You haven't yet.
So maybe you should do that.
I'll spoil it for everyone.
They're not secular gurus, but they are very annoying.
So this is the thing.
Do we make Matt listen to something that's very annoying?
I vote yes.
I feel like we've had a few negatives, like we've had more than a few counter examples, people that are not gurus in our coverage recently.
We should do someone who's definitely a full-blown guru in the next episode.
Can we change it from Red Scare?
Is it too late?
I've listened to them, so it's too late for me in that respect, but we are going to do them because I listen to them.
But we don't need to do them next.
I say you're not going to like this description, Eli.
I say we do Harris.
I say we do, Harris.
Sam Harris, we get him off.
I've clipped an episode already.
It's getting older.
He's going to complain about us.
We're going to have to talk about it.
He won't, Matt.
He's very open to criticism and there's various things.
He went on with Chris Williamson, our old friend Chris.
We might get some clips from him.
Just come on.
Let's do him.
He's on the list of big gurus.
We've got to tick him off.
All right.
Well, we could.
We could.
What about, like, Cernovich?
Cernovich.
Do you want, like, Cernovich or Stefan Molyneux?
I know they're awful, but they're definitely gurus, and we haven't covered them because they're so horrible.