Feb. 20, 2025 - Decoding the Gurus
34:56
Decoding Academia 32: Do Babies REALLY like good guys?

In this episode, Matt and Chris take a look at a recent developmental psychology paper on the social evaluation of young babies. Do they display a preference for agents who are nice to others, or couldn't they care less at the babbling age? This is a large-scale, multi-lab, preregistered replication effort of a rather influential paper, so it ticks all of Chris' Open Science boxes, but how does Matt react? Is he stuck in his pre-replication-crisis paradigms? Join us to find out, and along the way learn about baby Matt's psychopathic tendencies, how cats feel about cucumbers, and how Matt narrowly escaped being eaten by a big ol' crocodile.

Paper Reference: Lucca, K., Yuen, F., Wang, Y., Alessandroni, N., Allison, O., Alvarez, M., ... & Hamlin, J. K. (2025). Infants' Social Evaluation of Helpers and Hinderers: A Large-Scale, Multi-Lab, Coordinated Replication Study. Developmental Science, 28(1), e13581.

Original Study: Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450(7169), 557-559.

Decoding Academia 32

00:00 Introduction
00:59 Matt's Close Shave with a Crocodile
03:15 Discussion on Crocodile Behavior
05:13 Introduction to the Academic Paper
06:18 Understanding Registered Reports
07:49 Details of the Replication Study
12:07 The Many Babies Study
18:23 Challenges in Developmental Psychology
20:35 Original Study and Replication Efforts
26:27 HARKing and the QRP problem in psychology
34:24 Discussing the Results
36:58 Exploring the Red Ball Experiment
39:38 Forest Plot Analysis
41:19 Infant Preferences and Social Evaluation
43:24 Failure to Replicate the Original Study
47:06 Exploratory Analysis and Moderators
50:03 Interpretations and Implications
54:21 Evolutionary Perspectives on Social Behavior
58:34 Prosocial Evolutionary Speculation
01:05:10 Psychopathic Baby Matt
01:06:28 Concluding Thoughts and Reflections
01:11:20 Comparative Psychology on Snake Hatred!

The full episode is available for Patreon subscribers (1 hr 15 mins).

Join us at: https://www.patreon.com/DecodingTheGurus


Hello and welcome to Decoding the Gurus, Decoding Academia Edition, with the psychologist Matthew Browne and the anthropologist of sorts Christopher Kavanagh.
We're here to look at academic papers, critically evaluate them, and discuss issues in academia.
And we have in this instance, Matt, a paper that I suggested, that I recommended for a variety of reasons.
But first, before we get to that, G'day!
How you doing?
You all right?
G'day.
G'day.
How's it going?
I'm all right.
I'm all right.
I had a close shave this morning, Chris.
I went for a run because that's what I do these days.
Yeah, I run, I swim in the mornings.
Very, very good.
I'm all about maximizing my health.
And afterwards, you know, I walked past my local beach and I went, ah, I'm hot and sweaty.
I'll go for a swim.
So I stripped off to my undies, got in the water, did some body surfing.
It was very nice.
Then I got home.
And my wife mentioned that a four-meter crocodile had been spotted at that very beach, Chris.
Big old salty.
So, I don't know.
I had a close shave.
But I survived, you know.
Unlike Crocodile Dundee, I just would have hypnotized that bastard if he'd come for me.
I would have been worried about sharks myself.
Sharks, you know, and in the sea.
And what did you say you were doing in the sea?
What was I doing?
Body surfing.
Body surfing.
Isn't that the kind of thing that would make you look like a seal, a silhouette of a seal from the bottom if you were a shark looking up for a tasty meal?
Yeah, look, you can't help but look like a seal.
Danger, Matt.
My overall body shape is getting more and more seal-like as I get older.
Good coating of blubber.
And was there anyone else around that could have distracted the beasts of the ocean, you know, that would have provided other targets?
Just you?
It was a lone swim.
It was a lone swim, Chris.
Yeah, just me.
You've got a death wish, I swear to God.
Out in Australia.
There are sharks, right?
There's sharks in Australia and other beasts that live in the water.
I mean, there's four-meter crocodiles.
That's pretty big.
Take care of yourself, Matt.
The podcast needs you.
It was the four meters that gave me pause.
If it was a two-meter one, you could probably take it.
I could take it.
I'm not trapped with you.
You're trapped with me.
Grab the crocodile.
Spin, yeah.
If only crocodiles had brains big enough to feel any kind of fear and trepidation.
No, they're just 100% anger and hunger.
I saw a video online, probably on Twitter, because of the nature of it, but there was a crocodile.
There were a couple of crocodiles hanging around, and they're in an enclosure somewhere, right?
And one of them bites the other one's leg, does a roll, and rips its leg off.
And the other one is just, like, kind of sitting there, not really reacting to that it's just had its leg torn off.
And I'm like, those are alien creatures.
Like, it didn't say, ah!
Or even look surprised.
It was kind of just, like, an inconvenience.
And I'm like, those don't grow back, right?
Like, you need that.
Well, yeah, like, the thing is, a 4-meter crocodile is, like, I think it weighs about half a metric ton.
They're, you know, as big as a small castle.
That's even difficult for me to imagine handling that.
Yeah, even you, Chris, might struggle.
But the thing is, Chris, they're getting bigger and they're moving south because in the good old days, they used to hunt these crocodiles and they, you know, make boots out of them and so on.
And they're protected now, which is obviously a good thing.
But, you know, a slight consequence of that is...
Mixed feelings.
Mixed feelings, because they're getting bigger.
And actually, the bigger crocodiles, they eat the smaller crocodiles.
Oh, my God.
So crocodiles eat other crocodiles, as well as everything else that moves.
And so they haven't been hunted, so they're just kind of protected.
And they don't really have any natural predators, so they're just getting older and bigger.
And some of them are moving south to fresh territories.
Global warming is another...
Fantastic consequence of global warming.
How far south?
Where are they coming?
That's what matters.
That's the threshold at which point I say, enough is enough.
We need to do something about climate change.
The crocodiles have come far enough.
This is true.
Well, see, now, there we are.
I'm glad you survived.
I'm glad you're here.
And I will reward you with discussion of a very interesting paper, a paper published just last year.
In developmental science.
And I like it for a whole bunch of reasons.
But do you want to tell people what the title is?
What about the paper would you like me to do?
The dealer's choice.
Title of the paper.
I can handle that.
That's a weighty responsibility.
Infants' social evaluation of helpers and hinderers.
A large-scale multi-lab coordinated replication study.
Published in Developmental Science in what year?
2024.
With a very large cast of authors we will not be reading out.
Although I will note that it was received in 2019.
So it's been under review for five years.
Oh, okay.
Five years.
My God.
I know.
Kind of impressive.
But I understand why, Matt.
I understand why.
Let me...
Explain.
This is a registered report.
Now, for those of you who don't know what a registered report is, it is a format that rose to prominence after the replication crisis to try and address the issue that journals have a bias for publishing research that shows kind of sexy results or positive findings.
They don't like research that gets null results because they say it's not interesting.
And this is a problem because then...
Journals just become journals of positive results, and the negative results don't get published, right?
You have this bias in the literature for positive results.
So one way to help address this was registered reports, whereby people would submit usually the introduction, the methodology, and the planned analysis sections of the paper, and it is reviewed in advance and accepted in principle before any data are collected.
So that's why it's been received so early, because they planned this large-scale multi-lab study.
There's, I think, I don't know how many authors there are, but it's over 50. So there's a lot of authors, a lot of different labs involved.
So that's why it took five years to get published.
Okay.
The reviewers just weren't being slack.
There's a logical reason.
Well, that's good.
That gives me faith.
Well, imagine that, like planning things out five years ahead.
I'm struggling to plan, you know.
Anyway, but that's good.
That's the power of science.
And one note as well is this paper is a coordinated replication of a previous paper from 2007 by Hamlin, Wynn and Bloom called Social Evaluation by Preverbal Infants, which was published in Nature.
And this was a big deal at the time.
So I knew that paper.
I taught that paper, Matt.
I taught this literature a little bit.
And then this paper came this year, and I found it very interesting for various reasons that we'll get into.
But it's a replication study.
As you said off air, Chris, you're obsessed by replications.
This is your kind of study, and it is.
You seem to love studies that are replication studies, replication crisis studies, and studies that find no results.
These fascinate you, but not...
The rest of us, Chris.
Do you understand that?
Does your theory of mind encompass other people's views on this topic?
I mean, what about something sexy, something fun, something new?
This is sexy.
This is the new sexiness in psychology.
This is what we want.
And to help give people context here, what this is, what the original study was, is that they're interested in how young these kinds of assessments about character and morality develop, right?
Because it speaks to this question about whether morality and judgments of good and bad behavior and so on are all socially derived or whether there are intuitive assessments.
So you want to conduct studies looking at social evaluation with the youngest children you possibly can because, in theory, they will be less influenced by cultural sources, right?
So you want to work with pre-verbal infants because they've had less time to develop.
You know, you're not having many philosophical conversations with one-year-old infants.
But if they are capable of assessing social behavior of third parties, this would say that's a very early developing intuitive behavior.
And it doesn't mean that you can't then layer on top social evaluations and cultural values and so on.
There might be a degree of innateness to social evaluation in humans, right?
A kind of hard-coded thing.
So this is what the original study was looking at with very young children, getting them to watch a puppet show with puppets.
We'll talk about what kind of thing exactly, but like getting them to look at social behaviors and then judge that, right?
And see whether or not they could.
And the original paper said, yes, they do.
They can and they do.
And this led to a lot of other papers looking at similar sort of things.
Paul Bloom wrote a book, a popular science book called Just Babies, which is kind of about this concept.
So yeah, this is now a very large, multi-lab attempted replication of the original finding, published nearly two decades later.
And that's science, Matt.
That's good science.
Gee, did I say it's not sexy?
This is...
Okay.
Exactly what it sounds like.
I take it back.
Nature, nurture stuff.
That's always sexy because, you know, the internet and everybody, you know, still doesn't seem to agree about this.
It's still a hot topic to what degree are things like, you know, innate, species typical.
We come with a lot of biological baggage.
You've got a lot of choices of life and how you live it.
You've got no choice about going to the toilet and eating.
These are the things you must do.
And, you know, to what...
We know that we're social creatures, like dogs and monkeys, but very much unlike crocodiles, Chris.
Horses?
Horses, yeah.
We should do that paper.
Somebody has written a paper arguing that crocodiles are social-coordinating hunters, and I dispute that, Matt.
I dispute that, but it's interesting, the argument that they make for it.
So they say we underestimate the sociality of...
It seems unlikely.
Well, yeah, you're not wrong to be skeptical, but I've just pointed out people have made that argument.
And so if I have a little look here, Matt, at the abstract, I think this does a good job of summarizing things, right?
I realize I'm just reading here, but I'll read this abstract.
Evaluating whether someone's behavior is praiseworthy or blameworthy is a fundamental human trait.
A seminal study by Hamlin and colleagues in 2007 suggested that the ability to form social evaluations based on third-party interactions emerges within the first year of life.
Infants preferred a character who helped, over one who hindered, another character who tried but failed to climb a hill.
This sparked a new line of inquiry into the origins of social evaluations.
However, replication attempts have yielded mixed results.
We present a pre-registered, multi-laboratory, standardized study aimed at replicating infants' preference for helpers over hinderers.
We intended to, one, provide a precise estimate of the effect size of infants' preference for helpers over hinderers, and two, determine the degree to which preferences are based on social information.
And then they start to get into the results and this kind of thing.
So this is a large study involving 1,018 infants originally.
I think this comes down to nearly half that after you do all the various data cleaning and kicking babies out that are sleeping or throwing up or whatever.
And it was conducted in 37 labs across five continents, Matt, in case you were keeping track of the continents, okay?
So this is a very large sample, a very involved endeavor.
And the original study, in comparison, 16 10-month-olds and 12 6-month-olds.
Okay, so much smaller sample in the original study.
And so presumably this was a pretty influential study if people are going to the trouble of replicating it, right?
Yes, yes.
The way that it works is that the kids are shown...
I think it's described as a puppet show, but it's basically like there's a diorama.
With a little hill.
And then you see these shapes, like the things that you would see in preschools or in storybooks or whatever.
And they interact in various ways.
And then the children are asked to select between some of the characters that perform different actions, right?
And in their interactions, they are either helping or hindering.
And they're basically helping a person or an object to go up the hill.
They are trying to go up the hill and they fail, and then they are either pushed down the hill or pushed up the hill.
And the thing being pushed up or pushed down is either an agent, meaning it is something with eyes, or it is an object, the same kind of shape but without eyes.
We can get into how this is a difference.
That is the general procedure, is get children to watch a puppet show.
Actually, get them to watch lots of puppet shows so that they understand the behaviors and what the characters are doing.
And then give them this forced choice to reach out and grab.
Because obviously, like babies, nonverbal babies cannot explain their reasoning, so you have to look at behavioral things.
You could look at...
Eye gaze behavior or like reaching choice and various other things, right?
But in this case, selection for the character.
They're given an option at the end.
And these are incredibly young babies, at least it seems young to me.
Five to...
Oh, yeah, yeah.
Ten and a half months.
Less than a year.
Yeah.
So, you know, hypothetically, if it were true that little babies at that age who...
Like, they can't really speak at all at that age, can they?
I mean, I forget.
I've got three children, but I can't remember what babies can do at what age.
No, they cannot speak.
I mean, hence the title of the original paper, which says pre-verbal.
Yeah, well, that would be true.
It sounded pre-verbal to me at that age.
I mean, yes, I learned all the Piaget stages and stuff when I was younger, but it's all gone now, Chris.
I don't remember.
So, yeah, I mean, that would be big news, right?
Because if they've got a preference towards...
Kind, pro-social, helping things, little characters, agents, then that would be pretty strong evidence that people have got a sort of inbuilt, innate pro-sociality and a preference for rewarding actors who are pro-social.
And the important thing here is also to recognize that one aspect of this is judging third-party behavior.
Because the puppets in the show are not interacting directly with the baby.
They're interacting with each other.
So it actually does require a little bit of sophisticated cognition, because you have to recognize a behavior toward a third party as being positive or negative, and then evaluate that it in some way relates to you or how they are likely to treat you, or form a preference for that character.
It isn't the infant who is being helped or hindered.
It's a third party.
So this is why it's kind of interesting if that developed so early, because it would be judging social behavior directed at others at a very early stage, right?
Like babies can't walk around at this age.
Yeah.
Okay.
So falls into the big if true category and the original studies.
Found it was true.
That was very exciting.
Yes.
These people are going to do a larger many-labs type experiment, which, you know, we'll get into the methodology, but I think, I'm guessing, I'm pretending that I don't know the details here, I'm guessing that they took a fair bit of care in terms of preventing those researcher degrees of freedom and various sources of bias.
Would that be correct, Chris?
You would be correct, Matt.
You would be very correct.
And the thing to note as well is the original sample size is small, and psychology in general had smaller sample size at that time.
But in particular, developmental psychology, especially with very young infants, it is prone to have small samples, right?
Because of what we talked about, it's very hard to run a study with pre-verbal...
Infants.
And you have to get them in the lab.
If you've had kids, you will understand the difficulty that might be involved in doing research.
So it's not the same as having an undergraduate sample of 30 people, in the sense that you should be able to get a bigger sample than that in most universities.
But it actually could be very difficult to rustle up 30 under-12-month-olds who are going to participate in research, right?
So not defending, not saying it matters for the validity of the statistics, right?
That doesn't matter.
But it is more understandable why sample sizes tend to be on the smaller side in developmental psychology with very young children.
Yeah, it's very understandable.
And the cost and difficulty of obtaining samples is part of the criteria one might have when deciding whether or not this is reasonable.
However, importantly, the statistics don't care about the expense and difficulties you've got in obtaining the data, and a small sample statistically is a small sample.
One thing that can justify a smaller sample in terms of detecting experimental effects, I suppose, is a well-controlled experiment, in which the researcher does not have their finger on the scale there,
because simply by controlling a lot of other nuisance sources of variance, you can expect to detect effects even with small sample sizes.
So, yeah, it's fair enough.
These weren't bad people who collected the original data.
The sample sizes were small, but for one reason or another, they detected strong effects, and we're checking them out whether or not they replicate.
Correct, yes.
And the original study, just to say, Matt, there were attempted replications, but the original study found a huge effect in terms of the difference, right?
Like, I'm just talking about the most straightforward of the conditions where you had the pusher up the hill, right, the helper and the hinderer, and there was a preference of...
Nearly, I think, 100% in the six-month-olds and upwards of like 79, 80%, can't remember, somewhere around there, with the 10-month-olds.
So this should be like a huge effect, right?
If you can find that with a sample of, you know, 15 or less per condition, then this should be easy to detect in like such a huge...
You know, much bigger sample, right?
So it's not a difference of, like, they prefer this character by 60% to 40%, because that would be, you know, potentially harder to detect.
But the original study, at least, found a huge preference.
Yep, yep, yep.
So that's interesting.
Yeah, almost.
Almost 100%, eh?
Okay, good.
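To put rough numbers on that point, here is a minimal sketch in Python (the counts are illustrative only, not the paper's actual data or analysis), using a simple two-sided binomial test of choice counts against chance:

    # Illustrative only: how detectable a helper preference is at different
    # effect sizes, via a two-sided binomial test against chance (p = 0.5).
    from scipy.stats import binomtest

    # Near-total preference at original-study scale (e.g. 12 of 12 infants choosing the helper):
    print(binomtest(12, n=12, p=0.5).pvalue)    # ~0.0005: a huge effect shows up even at n = 12

    # A modest 60/40-ish preference at the same scale is essentially invisible:
    print(binomtest(7, n=12, p=0.5).pvalue)     # ~0.77

    # At multi-lab scale (say ~500 analyzable infants), even 60/40 is easy to detect:
    print(binomtest(300, n=500, p=0.5).pvalue)  # ~8e-6

That is the logic of the replication: if the original effect were anywhere near as large as reported, a sample this much bigger could hardly miss it.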
All right, good context.
So what about this study?
We know why they're doing it.
We know who the subjects are.
What's next?
Well, so one thing is that a lot of this paper is a very detailed explanation of the methodology, right?
In terms of various checks that they put in place that you would want, to standardize the procedures across labs and to make sure that everybody was...
You know, there's obviously going to be variation in the conditions and whatnot, but try and keep all the stimulus the same.
And everybody agreed upon the assessments for what was considered a choice, or when participants should be excluded, and whatnot.
It all has to be kept consistent.
And they also detail how they trained people, how they recruited labs.
You could say it's too much detail, but I think this is part of the thing.
They really wanted to get this right.
They had 37 labs across all these different countries.
At the end, there were three excluded for various reasons.
They have in the first table, Matt, the list of the universities or places involved, and you can see the N from each place.
The largest sample was from Peking University in China, and then the University of Hong Kong next, and British Columbia, and so on.
And the one that interested me was, if you look at the bottom, University of the Incarnate Word, United States.
I'm thinking that's a Catholic hospital, Chris, or a Catholic children's something or other.
Could be.
Could be.
Yeah.
The Catholic Church has great names for things, like the Sisters of the Immaculate Conception and things like that.
My big gay uncle was a member of...
Oh, they had a very raunchy name that sounded like a Catholic name.
Anyway, I can't remember it.
Move on.
Okay, so, well, in any case, they set out the procedures.
It's really detailed.
It's maybe too detailed if you're not attempting to...
Oh, the Sisters of Perpetual...
The Sisters of Perpetual Indulgence.
That's right.
That's what they call me.
That sounds like something from Warhammer.
But yeah, so they've got this very detailed thing, but also understandable, Matt, because of the format of registered reports, right?
They wrote all this in advance so that it was all written down and everybody adhered to it.
And then they need to explain any deviations that happened and whatnot.
So that's kind of understandable.
Then they go into the analysis.
And here they're applying a Bayesian analysis, Matt, but they're doing it appropriately.
They specified it in advance and they determined, you know, necessary sample sizes and how they're going to classify, you know, how they're going to report things, all this kind of thing.
So Bayesians should be happy.
Nothing wrong with Bayesianism.
Nothing wrong with it.
It's a fine, fine thing.
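For a sense of the flavor of that kind of analysis, here is a toy Bayesian sketch in Python (this is not the paper's pre-registered model; it assumes a uniform prior on the preference rate, under which the marginal likelihood of k choices out of n has the closed form 1/(n+1)):

    # Toy Bayes factor for k helper-choosers out of n infants:
    # H0: no preference (p = 0.5) vs. H1: preference rate p uniform on [0, 1].
    from math import comb

    def bayes_factor(k, n):
        likelihood_h0 = comb(n, k) * 0.5 ** n  # binomial likelihood under p = 0.5
        marginal_h1 = 1 / (n + 1)              # binomial likelihood averaged over a uniform prior
        return marginal_h1 / likelihood_h0

    print(bayes_factor(12, 12))    # ~315: strong evidence for a preference
    print(bayes_factor(260, 500))  # ~0.08: a 52% split in a large sample favors "no preference"

One reason a replication effort might go Bayesian: unlike a p-value, a Bayes factor can express positive support for the null, which matters when the interesting possible outcome is "no effect".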
And there's also this thing which happens in good open science.
Pre-registered studies where now people are splitting research into confirmatory analysis and exploratory analysis.
Now, do you know the difference between those two, Matt?
Yes, Chris.
Yes, I do.
Would you like to explain it for the audience?
That was a problem.
Okay, so a confirmatory analysis is one where you specify everything up front, right?
So your questions, the model specification, all those little decisions that are involved in an analysis, they all get specified up front.
Your model is set, and it could even be the parameters of your model are set, right?
So you could say, I've got a confirmatory factor analysis, and I'd say that all of these factors have got equal loadings on the main factor or something.
So that's confirmatory.
Exploratory is basically, well, you give yourself all the freedom to do everything post hoc.
You can poke around with the data, make decisions on the fly, have a look and see what's interesting.
And, you know, there's obviously pros and cons to both approaches, right?
Like, you've got the data.
Sometimes you can't, I think, pre-specify everything in advance, right?
Or anticipate.
You anticipate stuff, and you do need to make some decisions on the fly.
So I think it's also worth saying that there's like a fuzzy boundary, I believe, between confirmatory and exploratory.
Oh, yeah.
It's not super clear.
So that's my take on it.
Would you agree?
Well, I agree with most of that, except I would also add that one of the reasons it's important to distinguish between the two is that there was an issue in psychology and other social sciences, and also sciences in general, with this thing called HARKing,
which is the acronym for hypothesizing after the results are known.
So people would collect the data.
They might have had, you know, hypotheses that they were initially testing, and those turned out null.
Like, you reacted, Matt, this is uninteresting.
Oh, no, boring data.
You know, we can't publish this.
Then they would be encouraged to go back and look at the data, and they would dig in and look at subgroups and so on, and they would find some interesting, significant result, and then they would frame it that that was the original hypothesis.
This is how I put it to my PhD students.
I say, if you haven't found anything...
They bring me a null result.
I say, you're just not looking hard enough.
Go back there and come back to me with a significant result.
And, you know, they usually do.
Matt is joking.
Just to be clear.
Just to be clear.
He's joking.
Matt, you don't know.
There was a researcher cancelled specifically for giving that very bad advice.
He did a blog post where he talked about how these data sets were collected and the results were null.
And then he gave those data sets to another student,
who went and dug around and extracted lots of positive results and got like three or four papers published from those results.
And the researcher involved, I forget his name, but he was a food researcher at Cornell.
Anyway, he held him up as like, this student is good, this student is bad, right?
The one that got the no results.
And then, predictably...
People looked into those studies and found out that there was serious issues across, in general, his research output.
There were a lot of problems with the reporting and a lot of p-hacking seeming to go on.
But if you give that advice to students and those incentives, it's understandable.
And there was also, Daryl Bem wrote a guide for undergraduates in psychology that was widely used that told them...
Exactly that.
Like, you have your original hypotheses.
Often they don't work out, but you've got to be creative.
You've got to go to the data and then you've got to find the story.
Just keep digging until you get the story, right?
And this is a recipe for finding, you know, chance results that aren't going to replicate.
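As a minimal simulation of why that recipe fails (illustrative Python, not anything from the episode): if you run 20 post-hoc subgroup comparisons on pure noise, the chance of at least one "significant" hit is 1 - 0.95^20, roughly two in three:

    # Simulate "keep digging until you get the story": 20 subgroup tests
    # per data set, where the true effect in every comparison is zero.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_sims, n_subgroups = 1000, 20
    hits = 0
    for _ in range(n_sims):
        # both groups always come from the same distribution, so any
        # "significant" difference is pure chance
        found = any(
            ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue < 0.05
            for _ in range(n_subgroups)
        )
        hits += found
    print(hits / n_sims)  # ~0.64: the noise yields a publishable-looking "story" most of the time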
But I think a good takeaway from that is the fact that these characters were very open about it, right?
And they were, you know, publicly giving...
It was the norm.
Yeah, it was the norm.
And giving this advice to aspiring researchers and graduate students in a genuine effort to be helpful, right?
Like, if you want to have a good career in psychology, this is what you need to do.
I think it really illustrates that the cause of it was not malicious, nefarious motives.
I think it was just a genuine kind of ignorance, where the incentives at play, of course,
in academia are entirely on the side of all of that bad behaviour.
And when the system is rewarding you and saying, oh, that's an exciting result.
Paper published.
Oh, you've published those papers.
Yes, we'll give you a job.
The strong message that academics were getting from the system is this is what you should do because this is what works.
So you share that.
You give that advice to PhD students, who then internalise it, and it becomes the norm.
Yeah, it just illustrates it was a cultural problem.
It still is a cultural problem.
And it's one born of the incentives at play.
And even though academics and scientists are meant to be kind of smart, we still made the mistake of kind of mistaking the systemic incentives for good behavior or good scientific practice.
Yes, I didn't.
But you're correct that the fields did.
And you didn't, right?
Like, I mean, I'm not saying...
There are no questionable research practices.
So this is a term that's used for like, did you ever go into your data and reanalyze things and focus on some significant result?
Everybody will have done that at the time because those were the norms in the discipline.
I'm not saying that, but I'm saying I went through undergraduate methods, so I always knew that hypothesizing afterwards, changing the focus, that that was bad.
Like, it is not good.
And I feel like a lot of people did know that.
Like, it's sometimes presented that there was this revelation.
But it wasn't a revelation, right?
Statisticians have talked about this for, like, at least a century, if not even longer.
So, yeah, I am agreeing that it was a systemic issue.
It still is to various degrees.
But that...
There are a lot of people that claimed, like, well, I just never realized there was an issue.
And if you took an undergraduate statistics course, you should have known that that is an issue.
Yeah, not everyone took a course from a good teacher.
You're quite right.
It's not like the ignorance was universal.
Data-focused, quantitative people, stats-type people, you know, many people appreciated those issues.
Even people coming from anthropology.
Really?
Yeah, you get it.
But look, the general thing was that, and one of the complaints that people make about pre-registration is that it's a straitjacket.
Maybe I collect all this data, but actually the interesting results are things that emerge afterwards from analyzing the data.
And now I'm not able to report them because I've been locked into this pre-registered straitjacket.
But this is making a mistake because you can still report them.
You just have to explain that they were found kind of by chance, right?
They were not anticipated in advance.
And that means that people reading the paper can adjust their confidence in it.
Because if it was like a very clear prediction that you made and you went into the research with that idea and you tested it and then you confirmed, right, that that pattern existed, that is very different than if you ran a bunch of analysis,
you spotted a pattern.
And it's interesting and it makes sense with the literature, but you didn't anticipate it in advance, right?
There's a difference.
So that difference is important for when you're reporting to people the results.
And it is in a way like the way you narrativize it, but the narrative matters.
So that's why I think having these, there are fuzzy boundaries, but having the distinction is important.
Is this something you planned in advance?
Yeah, totally agree.
The fact that there are fuzzy boundaries doesn't take away from the fact that you absolutely have to be super clear about exactly what degree of research degrees of freedom you exercised in the process of analysing the data.
So it could be a case where it's primarily confirmatory.
You pre-registered it, you had a bunch of questions, whatever.
You gather the data and you realise that you actually need to filter out some of the data for reasons you hadn't anticipated.
You write that in there.
You explain that, explain the reasons why, explain that this was a post hoc decision done after you, whatever, did some exploratory thing and checked for something or other, right?
The important thing is that you present it honestly and you don't sweep it under the rug and filter out that data without telling people why or just pretending that it was an a priori decision.
So that squares the circle.
It's okay for that to be a continuum of exploratory
If you'd like to continue listening to this conversation, you'll need to subscribe at patreon.com slash decodingthegurus.
Once you do, you'll get access to full-length episodes of the Decoding the Gurus podcast, including bonus shows, Gurometer episodes, and Decoding Academia.
The Decoding the Gurus podcast is ad-free and relies entirely on listener support.
Subscribing will save the rainforest, bring about global peace, and save Western civilization.
And if you cannot afford $2, you can request a free membership, and we will honor zero of those requests.