Feb. 27, 2026 - Decoding the Gurus
01:12:26
Decoding Academia: Moral Entrepreneurs, Measurement Issues, & Screentime with Andrew Przybylski (Patreon Preview)

Andrew Przybylski, psychologist at the Oxford Internet Institute, debunks exaggerated claims about screen time harming well-being by exposing flawed methodologies, such as self-reported usage data (which shares only 15–16% of its variance with telemetry) and median splits that distort modest correlations (e.g., roughly 0.12 between screen time and depression). His team's 33,000-regression analysis of global datasets (Gallup, WHO, Facebook) found mostly small positive associations, with the lone negative one among women under 24 who report low community belonging. Critiquing policies like Australia's social media ban for under-16s, driven by Jonathan Haidt's The Anxious Generation, he argues that evidence-free restrictions reflect generational distrust rather than scientific rigor, and urges open science to counter sensationalized narratives. [Automatically generated summary]


PhD in Human Motivation 00:04:08
Hello and welcome to Decoding the Gurus.
Decoding academia interview edition.
With me, Matthew Browne, professor, psychologist, guy from Australia.
Christopher Kavanagh, as always, associate adjunct, something professor from associate professor, associate professor.
Okay, sorry.
An anthropologist, extraordinaire, but not really, as he likes to emphasize.
And today we got, it's a, is it an interview edition?
It's an academic edition.
What is it, Chris?
Well, some would say academic, but you know, academic, that counts too.
We have somebody from my old haunt, Matt.
You might not have known it, but I graduated from the University of Oxford.
I might have.
I probably haven't mentioned it.
You know.
And those bastards, Matt, the way they treated me as a working class member of the proletariat, just shameful.
But that's for another time.
We have Andy Przybylski, who I think I did a reasonable job of pronouncing his name, but I think he could tell me if it's true.
And he is from University of Oxford's Human Behavior and Technology.
That's what you're a professor of, isn't it, Andy?
Yeah, I'm a psychologist.
And my main department is something called the Oxford Internet Institute.
I'm familiar with it.
Yeah, I'm the only psychologist professor there, so that's pretty sweet.
But I'm surrounded on all sides by philosophers, anthropologists, lawyers, you name it.
So it's a really great place to work.
I don't mean to besmirch you before we start, Andy, as well.
But is it true that you were a psychotherapist in a past life?
So no, I did.
I was trained with many of the classes that Matt derides.
It's true.
I was locked in a university psychology department basement modeled after Hitler's bunker.
And we were locked down there for three hours every Wednesday with nothing to do but to sit there and talk about the here and now.
So yeah, I do have that experience behind me.
You sense me as a psychotherapist?
No, see, I'm protected from this because my wife actually properly has a clinical psychology PhD.
And she's firmly of the opinion that I have half of all the skills required to be a psychotherapist.
I don't have that second part that is really about empathically helping.
And that's really where the wheels fall off the wagon with me helping individuals.
So yeah, I stop there.
You dabbled in the darker arts, but you saw your way out.
That's what we like to see.
But yeah, so your work in general, when I was looking into it, you've covered a bunch of things.
You've got work around computer games and the psychological impact thereof.
And also on screen time, you went into the violent video game debate a little bit as well.
So your work has covered a bunch of different things.
And we'll probably touch on random selection as well.
But obviously, for our purposes, one of the things that we consistently hear in this space is about, well, two things.
One is radicalization narratives, you know, especially with the gurus, people are concerned with that.
And then most of the channels that we look at are convinced, whether it's from the left or the right, actually, that we're going to hell in a handbasket because we opened this social media Pandora's box.
And, you know, Jonathan Haidt is a big figure in these kind of narratives online.
So where does your work fit into that bigger debate and narrative around screen time and all of that?
So the thing that I care about, the reason why I did a PhD at all is I'm really interested in human motivation, like why people do what they do.
And when my PhD supervisor came and said, hey, Andy, let's study, I want to study video games and motivation.
I approached it from that perspective.
I was like, what is being claimed?
How do we understand this?
And I just, I really jumped in feet first into a PhD studying motivation and video games.
And I really wanted to know what was real and not real when it came to like motivation and engagement at this boundary between the online world and the offline world.
Dipping into Data Correlations 00:03:59
Why do people do what they do when they dip in and they dip out of online spaces?
And where does open science fit into that?
Like, what does it mean to like really show that you've made a video game more fun?
Or by making this video game more visually aggressive or narratively aggressive, does that carry on through to how someone behaves outside of the game?
And I honestly was minding my own business as much as the field minds its own business until 2017.
There's an American psychologist called Jean Twenge.
She published an op-ed in The Atlantic that claimed that smartphones had destroyed a generation.
And I looked her up on Google Scholar, not a single paper on the topic.
I thought to myself, Jesus, what is this?
This is a mess.
Because every two years, there was like, like somebody would write a book, like screens are destroying the universe.
And then that person would go away.
But this person had written things before.
And then I started getting messages from other researchers saying, oh, God, she's your problem now.
Because before that, she had written a bunch of books about narcissism, like the most narcissistic generation, stuff like that, right?
Like it was called like Generation Me.
And it was, it was always like the basic like model is download a large data set.
Like the GSS, like sociologists have these large data sets.
There's these birth cohort studies.
And what she would kind of do is I guess download the big data set in SPSS, run a correlation table and kind of like correlate personality characteristics to like some other characteristic in the data set.
And then take an earlier year and do like t-tests between them.
And so like if narcissism goes up, then kids have gotten more narcissistic.
And it didn't matter if there was like measurement invariance or, you know, like the measurement invariance assumptions weren't met.
It didn't matter.
It didn't matter if the number of participants was wrong compared to if you cleaned the data set properly.
You know, she could lose 50,000 participants because she does like casewise deletion or something with missing data.
And so all these scholars of like narcissism and personality research, they're like, oh, look, she's your problem now.
And I was like, I don't want this problem.
What's going on?
Then the paper started coming out and it was very clear just from Twitter and the discourse that Jonathan Haidt was also a person who had, he had done a bunch of research in moral foundations related stuff in social and personality psychology.
And they became some type of Twitter team.
And then it was this classic motte-and-bailey where like they were publishing these things or kind of saying these things on Substack or Twitter or someplace where it was like, yeah, that makes sense if you don't use error bars.
Yeah, that makes sense if you don't clean your data.
Yeah, that makes sense if you don't bother with real peer review.
And the issue was that we couldn't reproduce anything of theirs.
And for me, as someone who's like, let's learn from any of the mistakes between William James and false positive psychology, 2011, 2012.
Let's apply any of these lessons, pre-registration, open code, open materials, any of this stuff.
And at that time, just before as all this was kicking off, I just received these two for me gigantic grants to look at large scale data and motivation and health.
And so just with my team, we cracked on for five years.
The project's just ended, actually.
And it's just been a wild ride as just an academic psychologist to just have these two, like people in our circles, we call them moral entrepreneurs.
This is like a sociological term.
There's like a class of kind of like public intellectual.
They leave academia and they kind of attain some other level.
Maybe like Amy Cuddy could be an example of someone like this, where I don't know, like they kind of stop engaging on the like academic level and they're engaging on some other level of discourse, like and they dip back into academia.
And it's a genuinely like unpleasant experience as a researcher.
Yeah, I mean, it sounds very familiar, the decorative scholarship on the part of people who are writing more popular-oriented books and also people from outside of the field just sort of parachuting in, writing a compelling narrative, tapping into the popular zeitgeist.
Measurement Invariance Challenges 00:15:39
You know, and it's a narrative I think that is plausible and satisfying to a lot of people.
Yeah.
Mobile phones are destroying us and computer games are rotting children's brains.
But you took a skeptical, pre-registered sort of open science type approach to checking these claims.
So yeah, in a nutshell, what did you find?
I think the biggest thing that we found is that the data that exists publicly when you use it really responsibly probably isn't very well suited to answer the kinds of questions that society right now has about mobile phones and video games and social media.
Like screen time as a concept is broken.
This whole argument is based on a few what we found to be very silly assumptions.
Like you can't ask a human being how much time do you spend playing video games with your mobile phone on social media and expect that answer to align with what they're actually doing.
Like the shared variance between the two things, they share about 15 or 16% of variance between what I say I do and what I quote objectively do according to telemetry.
Right.
So upwards of 84, 85% of that answer, it's some kind of noise.
And that noise, that measurement error, that measurement error is correlated with individual differences in personality, anxiety, and depression.
Yeah.
We found that young people who have different levels of anxiety and depression and life satisfaction, that shifts in those things more often than not precede changes in engagement across games and social media.
And in some cases, you can see reciprocal paths.
But again, these reciprocal paths, they explain something like half of 1% of variance.
Before you go on, I just want to check and understand them for the people that are listening.
I think it's straightforward to follow, but I just want to make sure I am following along.
So when you're talking about people looking at these correlations and data sets, right?
And in most previous research, they're using self-report measures, not things from screen time or something like that, right?
I'm sure people are trying to use that now, but previously and for most large-scale studies, it's self-report.
And what people say they do or remember that they do is not very accurate in describing like what they have actually done.
There's a lot of variance between the two things.
And there is a systematic relationship, with people's estimates differing depending on personality factors, such that you might find a correlation between mental illness and screen time.
But a lot of that might be third factors applying, which are not being accounted for in the analysis or in the explanation.
That's entirely accurate.
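The attenuation Chris just summarized can be sketched with a short simulation; every number here is invented for illustration except the roughly 15% self-report/telemetry shared variance mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# "True" usage as measured by telemetry (standardized, hypothetical).
true_use = rng.standard_normal(n)

# Self-report shares ~15% of variance with telemetry (r ~= sqrt(0.15) ~= 0.39),
# matching the figure discussed above; the rest is noise.
r_sr = np.sqrt(0.15)
self_report = r_sr * true_use + np.sqrt(1 - r_sr**2) * rng.standard_normal(n)

# Suppose the *true* usage-depression correlation is 0.20 (a made-up value).
rho = 0.20
depression = rho * true_use + np.sqrt(1 - rho**2) * rng.standard_normal(n)

r_true = np.corrcoef(true_use, depression)[0, 1]
r_obs = np.corrcoef(self_report, depression)[0, 1]
print(f"telemetry vs depression:   r = {r_true:.2f}")
print(f"self-report vs depression: r = {r_obs:.2f}")  # attenuated toward rho * r_sr ~= 0.08
```

The noisy self-report measure roughly halves the apparent correlation, before any of the systematic biases discussed above even enter the picture.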
There's also a more boring thing that is happening, which is that if you think of like what people say is their screen time in these studies, it's not like a uniformly distributed thing.
Like the same number of people don't say zero hours as half an hour, as two hours, as three hours, as four hours, as eight plus.
There's a skew in the data such that the people on the very low end, like at zero, and the people on the very high end, there aren't many of those people.
And so the estimation errors actually grow larger at those sides.
And then many times when people are analyzing this data, they don't treat the data as having its distribution.
They don't even assume that it's normally distributed.
They compare the tails.
They arbitrarily say like the top time people, like six hours plus, let's say.
They just take that as like a category because that's like a response category in the data.
And they say, people who say the highest response category are three times more depressed than those in the bottom half of the data.
So they like kind of like flatten out the median to zero and then they contrast that against a very small number of participants who have a much higher level of kind of error around them.
So they compound this problem with median splits and all of these other kinds of contrasts that distort in a way, they can distort like what you would even infer from a normal Pearson's correlation coefficient or something.
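The tail-contrast move Andy describes can be simulated; a made-up dataset with a modest 0.12 correlation (the ballpark screen-time/depression figure he cites a moment later) is enough to show how the framing inflates the apparent effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical: self-reported screen time and depression correlate at r = 0.12.
r = 0.12
screen = rng.standard_normal(n)
depression = r * screen + np.sqrt(1 - r**2) * rng.standard_normal(n)

pearson = np.corrcoef(screen, depression)[0, 1]

# Tail contrast: take the top ~5% ("highest response category") and compare
# them against everyone at or below the median, in SD units of depression.
top = depression[screen >= np.quantile(screen, 0.95)]
bottom_half = depression[screen <= np.median(screen)]
gap = top.mean() - bottom_half.mean()

print(f"Pearson r:                       {pearson:.2f}")
print(f"top category vs bottom half gap: {gap:.2f} SD")
```

The same data that yields an unremarkable r = 0.12 can be headlined as the heaviest users being about a third of a standard deviation more depressed than the bottom half, which is the kind of contrast that produces the dramatic graphs.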
So what you're describing, then, is that raw material gets fed through Microsoft Excel, in many cases, to produce shocking graphs.
So when you did analyses a bit more carefully, hopefully, then you tended to find, you know, something, you would find some effects, but generally much, much smaller ones than what might get reported by these more popular sources.
Yeah.
So you can count on in most of these large data sets, all the variables in them, and there's many hundreds of variables.
You can count on there being kind of like an ambient correlation that's around 0.1 between everything in the data set.
Yeah.
Like I have glasses.
I think about killing myself, you know, 0.08 or something, just to make up a number that is within an order of magnitude correct.
Right.
And so if you treat these things as they're like natural categories, you can find anything.
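The "ambient correlation" Andy is describing is sometimes called the crud factor; a quick simulation with one weak shared influence reproduces it (the 0.3 loading and 20 items are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# 20 survey items that each load weakly (0.3) on one shared factor
# (mood, response style), plus independent noise.
shared = rng.standard_normal(n)
items = np.array([0.3 * shared + rng.standard_normal(n) for _ in range(20)])

corr = np.corrcoef(items)                       # rows are variables
off_diag = corr[np.triu_indices(20, k=1)]       # the 190 pairwise correlations
print(f"mean pairwise correlation: {off_diag.mean():.2f}")  # ~0.08
```

With hundreds of variables all weakly sharing common causes, correlations around 0.1 are the floor, not evidence of anything in particular.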
So if you analyze it like absolutely straight, you get like screen time self-report of depression, 0.12.
Then if you cut it by boys and girls, girls will be 0.15 and boys will be 0.08 or 0.06.
Now, the confidence intervals around boys and girls still overlap.
They're both above zero.
But because the girl number, the point estimate is higher, the girl estimate is higher than the boy estimate, they'll say this effect of social media is really impacting girls, but not boys, let's say.
So this other thing happens where people can come up with moderators on a theoretical basis or sometimes on a discourse-based basis, but then they're unencumbered by are these values statistically different?
Like is a difference in significance itself significantly different?
And there's just no pointing that out that like there's kind of like an exercise of searching for p-values.
That doesn't happen.
So yeah, when you cross your T's and dot your I's, you'll always get like 0.05 through maybe at the high end, 0.20 of a correlation.
But the moment you put in parent education or parent screen time or parent depression or any factor that would be reasonably temporally preceding, the correlation will drop to zero or sometimes negative or something.
But again, because the screen time data isn't great, you don't know like, is that no effect or is that this data is not suitable for the question?
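Andy's point that a difference in significance is not itself a significant difference can be checked directly with a Fisher z test; the r values below are the illustrative girl/boy figures from the discussion, and the sample sizes are made up:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test: are two independent correlations significantly different?"""
    z = (math.atanh(r1) - math.atanh(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p

# Illustrative numbers from the discussion: girls r = 0.15, boys r = 0.08,
# with a hypothetical 500 respondents per group.
z, p = fisher_z_test(0.15, 500, 0.08, 500)
print(f"z = {z:.2f}, p = {p:.2f}")  # the subgroup difference is nowhere near significant
```

So a claim like "it affects girls but not boys" needs this comparison to clear conventional thresholds, not merely one point estimate sitting above the other.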
What you're saying is interesting in some respects, because like, I think when I'm seeing discussions around this, Jean Twenge is often invoked by Haidt as the data person, right?
She's the one with all the statistics and databases that provide the fuel for his position.
And he has also pointed to two factors, which he regards at the heart of the social media ills that we experience today, right?
And he talks about the introduction of the retweet button and the like button.
These are the two things, right?
Because this gamifies social media and also allows things to go viral.
And I'm curious from your research, right?
Because those are features which come in at a particular time.
To what extent, if any, is there evidence for those being particularly potent, terrible forces on the discourse?
And related to that is, you know, you talked about how the emphasis in their output is that there is a problem in particular for teenage or adolescent girls and certain forms of social media because they are uniquely vulnerable and this is leading to, you know, increased suicides, worse mental health, and so on.
And they have a habit of trotting out, you know, a bunch of graphs that look concerning with kind of hockey stick style things on it.
So I wonder any of those points about the teenage girls or about the like and retweet buttons from your research, do those things hold up?
Are they things that people shouldn't be concerned about at all?
Or what do you think the status is around that?
I have my own suspicions, but yeah, I'd like to hear what you think.
So it is entirely possible that there is something about likes or retweets.
There's like a social signaling function here that is like intrinsic to online behavior that is just important.
There are absolutely things that we've observed that like draw people away from how they would normally act, but they're always changing.
They absolutely came in at certain times.
But you have to ask yourself, let's imagine that in 2010, that feature did not exist and then you introduced it in 2010.
And in 2011, depression goes up half of 1% or 2%.
That feature is still there in 2011.
And now Instagram is being used by a gigantic portion of the population.
It's gone from like 5% of all teenagers to like 30%.
And then it goes to 50%.
It goes to 70%.
That goes to 80%.
That goes to 89, 90%, right?
Why don't the other numbers scale like that?
Why does it go up in a middling way that looks like the exact mirror image of the 1990s in the epidemiological data?
If this thing is really bad, and if this has been introduced into the water supply, the negative effect should follow some law.
It should follow some pattern such that depression levels shouldn't be going down in teenagers the last three years, two and a half, three years, right?
Like they're as tech saturated as ever.
So why?
Like they're on TikTok all the time.
Related to that, one thing I saw from my amateur investigations into the topic was exactly that, that the patterns that Haidt and Twenge are talking about, even if you take them as valid for America, and as you've pointed out, there's a lot of flaws with doing that.
When you look at countries in Europe or Africa or, you know, especially countries where the internet has entered and started to become more ubiquitous, and you don't see the same consequences, which suggests even if it were true, it would be a more culturally variable process, but that's not the way it's presented.
But is that an accurate read about the cross-cultural data?
Or can you not even compare cross-cultural data because the way it's collected?
Well, so there's like three things there.
The first thing is you really should not make an assumption of measurement invariance.
You can't take a depression questionnaire written in English, translate it into Vietnamese, and assume that it's just going to work the same way.
Like in Vietnam, I want to say depression is felt as a tension between your chest and your stomach.
It's not like construed necessarily in the same way.
So there's a problem of measurement invariance.
The next problem is what you're saying, which is that it could be culturally bounded.
Like people use social media differently.
And the third is about goalposts, moving goalposts.
So every country in the world does not collect suicide data, depression data, anxiety data, happiness data.
They don't all collect that data in the same form.
There's not really a lot of good apples to apples.
And so if you want to look at something simple with scare quotes around it, like suicidal ideation, you need to actually really break down how is that measured in these different places.
And you have to use the same idea, suicidal ideation, in these different countries when you're trying to do a comparison.
And so if you're willing to create a basket of an outcome called bad stuff and go all around the world and operationalize bad stuff in Canada as a sadness survey in a sociology, you know, a large sociology project, right?
I feel sad, something like that.
And then in Denmark, use an anxiety scale that was used in one of the large cohorts.
And then in Japan, use a happiness report and then reverse code that.
And then call all of that the same thing and pretend like 10 other things weren't studied in Japan and nine other things weren't studied in the UK.
And there weren't 45 other sociological concepts that were better in that Canadian study.
You can string together a story that says, we've got 50 data sets that show that teens are less happy and then highlight the countries where it's actually like suicidal ideation data.
It's like a motte-and-bailey where it's like, you say this technology is destroying kids all around the world.
And then researchers say, hang on a second.
I don't think a half of 1% drop in life satisfaction in Finland counts.
Then they retreat to another position and say, oh, you must be working for big tech to disagree.
Or, oh, you're being a stickler for methodology.
You're just being a statistician.
You know, and it creates this like really weird thing where if you just point out that the way that they're telling the story is inconsistent, then it's like, you know, whoa, then you are a skeptic.
You can see those things getting like politicized or dragged into culture where it always ends up like that.
But is there not at least a little bit of a way around some of those issues?
If you have a database that is, you know, like the world value survey or the Pew database where at least they've went to some effort to try to, you know, localize material and make measures relatively understandable cross-culturally.
And yes, there is in those data sets, you know, there's, there's different kinds of responses.
There's countries where people don't use extremes and so on.
But is there not like studies that have done that where they've constructed like large, you know, multi-country databases?
And so this is some of the research that we did where we did these three studies where we took data from the Gallup, which covers like 168 countries.
And it's like individual level data, a thousand households.
It's mostly face-to-face and some telephone.
And it goes back to like 2005.
We've taken data from the WHO and from the international telecommunications authorities.
So like everything from around the world at that.
And we actually harangued a bunch of guys at Facebook for their daily active, monthly active user data across countries.
And so we wrote these three papers where we looked at does internet and mobile phone penetration across 168 countries matter for anxiety and depression as measured in an abstract way, but then well-being, positive and negative well-being.
That was one paper.
The second paper was in 72 countries.
We looked at the spread of Facebook and its monthly active users over the summer, like I think it was like July or August from 2012 through 2020.
And then we went through the Gallup data on an individual level and broke out absolutely everything having to do with eight kinds of well-being and three different types of technology engagement, like having access to home broadband, having a mobile phone with internet access, using the stuff regularly.
Across these studies, generally speaking, when you have the internet in your life, it doesn't matter how old you are, you tend to be happier.
Recent Internet Users' Regressions 00:14:57
If you answer the question, I have mobile broadband, I've used the internet in the last couple of days, I have a mobile phone that I can access high-speed internet on.
It doesn't matter.
And we've done it like hundreds of thousands of regressions.
And so in 33,000 regressions, analyzing the data, every way we could think of analyzing it, 85% of those regressions showed a positive correlation between having access to the internet, using the internet, number of daily active Facebook users across 72 countries.
Those correlations are positive.
They're negative with bad stuff, positive with good stuff.
Nothing to write home about.
Again, we're in this realm of like 0.05 through 0.25, right?
So like a small-ish effect.
In about 14% of our regressions, it's a null, null result.
And in 0.43% of those 33,000 regressions, there is a negative correlation.
That negative correlation exists in entirely one demographic group with one outcome variable.
That is those people who are women who are below the age of 24, if they say my community is not a place that I feel good about.
I don't feel a sense of belonging in my community.
They are more likely to be recent internet users.
They're just more likely to be regular internet users if they say that.
Not having a mobile phone with internet, not broadband.
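The many-regressions approach Andy describes is often called a specification-curve or multiverse analysis; here is a toy sketch of the idea, with entirely fabricated data and subgroup definitions loosely echoing the ones above:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 20_000

# Hypothetical individual-level data: three technology-access indicators,
# demographics, and a well-being outcome.
data = {
    "used_internet_recently": rng.integers(0, 2, n),
    "mobile_with_internet":   rng.integers(0, 2, n),
    "home_broadband":         rng.integers(0, 2, n),
    "age":                    rng.integers(15, 80, n),
    "female":                 rng.integers(0, 2, n),
}
# Small made-up positive effect of recent internet use on well-being.
wellbeing = 0.1 * data["used_internet_recently"] + rng.standard_normal(n)

predictors = ["used_internet_recently", "mobile_with_internet", "home_broadband"]
subgroups = {
    "everyone":    np.ones(n, dtype=bool),
    "women_15_24": (data["female"] == 1) & (data["age"] <= 24),
    "men_15_24":   (data["female"] == 0) & (data["age"] <= 24),
}

# One specification per predictor-subgroup pair (a simple correlation here,
# standing in for each regression in the full analysis).
results = {
    (pred, name): np.corrcoef(data[pred][mask], wellbeing[mask])[0, 1]
    for pred, (name, mask) in product(predictors, subgroups.items())
}

n_positive = sum(r > 0 for r in results.values())
print(f"{n_positive} of {len(results)} specifications positive")
```

The real analyses ran tens of thousands of such specifications; tallying the sign and size of the estimate across all of them is what guards against cherry-picking any single predictor-subgroup-outcome combination.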
So I'm going to take a wild guess here that your interpretation of this very, very small percentage of a bad association with internet use is that it's likely attributable to other factors rather than being a direct causal effect.
I imagine whatever is the median 15 to 24 year old in the world who is a woman answering the question, I don't like the city I live in.
I interpret that as a good reason to go online.
That's why, I mean, that's why I go online sometimes.
Hey, you're in Oxford.
It's a lovely place.
Yeah, it's true.
But I'm a particularly spoiled human being.
But that's right.
And we can talk about it.
But there is a reaction to those papers that is irrational and upsetting as a researcher that comes from them.
So as a processing comment, I just want those to be two epochs.
That's a science epoch, but then there's a discourse epoch.
Yeah, well, let's go to the discourse epoch because I think this is important, right?
So what was the reaction out there in the discourse to you publishing the results or talking about the results of these studies?
So we published these papers, the last of which was published in 2024, and a familiar pattern emerged.
This happened when my colleague Amy Orben and I wrote a paper in 2019 where we thought we crossed our T's and dotted our I's.
And then we got drawn into a reply arc and a reply to a reply arc where issues are raised, not necessarily on the basis of their scientific value, but on the basis of their discourse value.
So what happened was we published these articles in three different journals.
The first article was about internet access and it went to a journal called Clinical Psychological Science.
The second one was about the Facebook adoption.
It went to Royal Society Open Science.
The last one, the individual level data one, where it was at 33,000 regressions, that went to American Psychological Association journal called Technology, Mind, and Behavior.
What happened in the discourse was immediately we belonged to a category of pro-tech apologists because people would bring up like, hey, Andy and Matti and these other guys, no one is finding these results.
And then they would point to these studies.
So everything on the discourse got like super heated where it was like, I was getting like freedom of information requests from journalists.
And it was part of a larger thing.
Like I heard other scholars in North America getting them and then they came our way as well.
But what happened was there started to be a rebuttal to these three papers like floating around early this year.
So like a year and a half later.
There was a paper that said that it would be wrong to take these papers as scientifically valid because they don't prove or disprove that social media causally negatively impacts the mental health of teenage girls.
It's like a critique of three papers that doesn't have anything to do with the three papers.
It's what you would write if the papers were inconvenient for a specific narrative, but it's a critique that's unconnected meaningfully to the three papers, but it's out there, like and it's just floating around.
Who is like leading the critique?
It is two researchers who look like maybe two early-career American researchers, but Jonathan Haidt is the last author, and there's a man called Zach Rausch, I think.
And he's like an assistant or ghostwriter of some type for Jonathan Haidt.
And so it appears that it's been written by these two independent or early career researchers, but it appears to be a collaboration.
And all the correspondence about it comes from this guy, Zach, and from John.
There's a lead author, but lead author doesn't seem involved.
So I've got a question about the devil's advocate for the position.
So given that there is all this discourse around social media companies or platforms and the various incentives, and you were talking about, like, algorithms to a certain extent and where they can drive things. I think they're sometimes overstated.
But like we've all used Twitter.
We've seen that the moderation standards and the things that companies decide to promote can definitely impact things.
So given that it's almost universally agreed that social media companies are kind of big and evil and only interested in monetizing attention and they don't really care about impacts on their users beyond what they can monetize.
Wouldn't it stand to reason that if you come out with papers that say, you know, actually, when we look at the data, it looks like the effects are small, relatively insignificant, and they're mixed, that some people could perceive you like a tobacco industry researcher?
So they would want to know, oh, is this guy being paid under the table, right?
Like, because that to me seems like it could be the motivation for, you know, freedom of information requests.
Let's extend the metaphor then a little bit, which is we are doing environmental research about agriculture and the watersheds.
And we're looking at watersheds next to tobacco farms.
And we're saying, as near as we can figure, tobacco farmers are no better or worse at managing their watersheds than other forms of agriculture.
We're not saying anything about cancer.
We're not saying anything about lungs.
And our papers are really precise about this.
Like we're looking at well-being, mental health, anxiety, whatever the thing that we're looking at in the paper, like that's the fight that we're picking with that data.
It's like a different topic.
Like if I was a misinformation researcher, maybe, I don't know. I can appreciate why it would be suspicious, why presenting something that somebody would think is counterintuitive would raise that.
But there is a history of this field that exists before Jean Twenge and Jonathan Haidt.
And so that dynamic was introduced on like a specific day in 2017.
It's new to that moment.
The critical scholarship on social media and video games and all of this stuff, it all existed before and it exists now.
There's like, there's still a field of online safety.
The politics came at a moment.
I should say that, I mean, for me, it's not controversial at all.
And the reason why I say that is I've done a lot of research in health and well-being as an outcome.
Yeah.
And I've looked at gambling, like serious gambling problems, serious gambling related harms, right?
Yeah.
And their impact on well-being.
Now, your intuition would be, oh yeah, that's obviously going to be huge, right?
Imagine if you've lost $100,000, you can't pay your bills, all your relationships are falling apart.
It's got to be big, right?
And it is, it is real.
But when you actually measure that, the effect size is not as big as one might think.
And that is because of very simple things that quantitative social science researchers know, which is something like health and well-being is incredibly multiply determined.
Yeah, there's so many things affecting health and well-being.
And people are remarkably resilient, right?
You can put someone in a difficult position and well-being will tend to revert, you know, anyway.
So if you've said to me earlier in this interview, oh, we looked at the data and we found correlations of 0.5, I'd be like, what?
You know, hardly anything correlates with something that general at that rate.
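The statistical intuition behind that remark can be sketched in a few lines. This is a toy simulation with invented numbers, not data from any study discussed here: if an outcome like well-being is determined by many independent factors, its correlation with any single genuine contributor is necessarily modest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_factors = 100_000, 25

# Toy model: well-being is an equal-weight sum of 25 independent
# influences (health, relationships, income, screen time, ...).
factors = rng.normal(size=(n_people, n_factors))
wellbeing = factors.sum(axis=1)

# The correlation of well-being with any one genuine contributor
# is about 1 / sqrt(25) = 0.2, even though that factor really matters.
r = np.corrcoef(factors[:, 0], wellbeing)[0, 1]
print(round(r, 2))  # ≈ 0.2 (theory: 1/sqrt(25))
```

So a correlation of 0.5 with something as broad and multiply determined as well-being would, on this logic, be the surprising result, not the small one.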
So I just, I guess I'm just saying, to me, it shouldn't be controversial.
That's such a broad outcome.
One should see small to negligible effects. Even if the truth is that social media use, internet use, whatever, can be bad, is not a healthy thing, it's not on the same level as really severe gambling problems. On any reasonable version of what someone like Jonathan Haidt might be claiming, the effect sizes are going to be small and diffuse.
I agree entirely, and this happens in video games as well. Video games as a category, right, like sitting on your ass in front of something with a screen and doing something, versus being addicted, clinically addicted, to gambling and stealing money and doing this stuff: that is a subset of the human behavior of sitting on your ass in front of a screen and doing something.
Yeah, right. And you wouldn't expect that to shake out, I agree. But merely pointing out that maybe doing a population-level study of 5,000 people in Australia is not the best way to measure the mechanisms of gambling harms...
Yeah, well, yeah, I was going to ask you about the gaming addiction stuff, because I think this is really interesting and it has interesting parallels with the gambling.
I mean it's interesting parallel because we're very much moved towards gambling related harm as opposed to addiction, but there's a very clear mechanism right which is financial losses, extreme financial losses, and then there's a whole bunch of domino effects from that.
So I've always been interested, and I've even sort of had some associations with the people who've published on this, in terms of internet gaming disorder and some of the controversies about whether or not it should be treated as, you know, an addiction on the same level as something like substance addiction.
So I was just wondering if you wanted to tell us where you landed on that.
No, so I love it.
This is the work that we do on games.
I want to get us to a place where we can get the games research to be as finely grained and detailed as the type of data that you can get with, like, ID-linked gambling.
The issue that we have right now in the field is that actually games companies don't keep their data very carefully.
And what we've been doing to facilitate this conversation and this type of research is we've been releasing, like building and releasing large scale data sets that involve session level gaming data along with the titles of the games and the platforms.
So getting to a place where we know who played what when and then creating data on their well-being, their functioning inside and out of play, and then making it as comprehensive and as well documented as possible.
And I'll just give you an example of one of these where we worked with a company called Future Lab.
They make Power Wash Simulator.
I don't know if you've ever played this game.
It's exactly what it sounds like.
I've heard of it.
It's very good, Matt.
Yeah, it's very relaxing.
But we created a version of the game that 8,000 people on Steam installed.
It was called Research Edition.
And what it did was it recorded key things that happened in the game, like advancement, like, you know, accomplishing things, being given missions, like all the shit that the game can see when it plays, basically.
And we asked people how they felt, like a range of questions up to six times in an hour.
So we just created a data set that is 8,000 people playing, answering three quarters of a million questions about how they feel, along with 12 million in-game events.
Just so researchers could begin to pick apart how does fluctuation in mood, well-being, motivation, enjoyment, like what does that hedonic journey look like as 8,000 people play across a year?
And like, and this is the place where we're trying to get the field to go to, where like you could actually ask questions about harm or the risk of harm or what does it look like if somebody starts the spiral.
And until there are standards for what the events are... Because in gambling research, like with electronic gambling and betting, there is a universe of data that you can look at.
It's just not there on games yet.
And so they don't know what makes games fun.
So it's a little bit weird for them to be worried about what makes them harmful.
Yeah.
I mean, can I ask about this?
Because it's super interesting.
First of all, there's this, I think there's always like a philosophical issue in terms of distinguishing between something that's just really, really fun.
So you want to do it and something that is addictive, right?
Really fun is good.
Addictive is bad.
Right.
And, you know, we can make our lives a little bit easier and say, okay, when it's actually causing you harm, it's actually an irrational choice to continue consuming more of this, right?
Because it's actually hurting you.
Then that can kind of help us out.
So with gambling, for me, it's very easy because we're very clear about what the mechanism is.
Like the vast majority of it is simply money and maybe 10% at most, I think, personally, would be preoccupation and sort of, you know, too much time essentially spent on the activity.
So when it comes to gaming, what's the mechanism of harm exactly?
The theoretical account of it is that it is so fun that it interrupts people's ability to self-regulate.
And then it leads to some form of maladaptive coping that leads to some spiral away from reality towards a world that is a better games world.
That's like the idea.
The way that it is operationalized is comparatively extremely silly.
They literally will take the word gambling out of the self-report questionnaire and they'll put in PowerWash Simulator, let's say.
Mechanism Of Harm In Gaming 00:10:32
Yeah.
And then they'll correlate, they'll say 5% of people had five out of the seven indicators of PowerWash Simulator addiction, and they score higher on sadness about their life, etc.
So we approach this from like a motivational perspective, largely from something called self-determination theory, which is about autonomous and heteronomous self-regulation.
We see something really interesting here, which is it doesn't matter how much fun you have playing a game.
It doesn't matter how immersed you are.
Those things don't really predict the next time you play a video game.
The thing that is the thing that predicts the next time you play the video game is the last time you played the video game.
So like there is a habit.
There's like kind of an autoregressive habit that people have that they fall into every single day or every single week.
They play on weekends, right?
And so their weekend habit is extremely stable.
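That autoregressive claim, that past play predicts future play while enjoyment doesn't, can be illustrated with a toy simulation. The parameters and the generative model here are invented for illustration, not the team's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_weeks = 2_000, 30

# Toy model: each person has a stable weekly habit level; weekly hours
# are habit plus noise, and enjoyment is unrelated to later play.
habit = rng.gamma(shape=4.0, scale=2.0, size=n_people)
hours = habit[:, None] + rng.normal(0.0, 1.5, (n_people, n_weeks))
enjoyment = rng.normal(0.0, 1.0, (n_people, n_weeks))

# Last week's hours predict this week's hours (the habit signal) ...
r_lag = np.corrcoef(hours[:, :-1].ravel(), hours[:, 1:].ravel())[0, 1]
# ... while last week's enjoyment predicts almost nothing.
r_fun = np.corrcoef(enjoyment[:, :-1].ravel(), hours[:, 1:].ravel())[0, 1]
print(round(r_lag, 2), round(r_fun, 2))  # large lag-1 correlation, near-zero for enjoyment
```

Under this model the lag-1 correlation is large purely because the stable habit term carries over from week to week, which is the shape of the pattern described in the interview.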
The only thing that we've seen that changes habit, and we've published on this, is lockdown.
So when lockdown happened, we looked at data from Steam going back in time a couple of years, and then a full year after the pandemic started.
And then we looked at when all the lockdowns were.
And so we found that an exogenous shock like lockdown leads people to play more multiplayer games than they were playing before.
And their play wasn't as stacked up in the weekends.
It kind of like it bled out into the week.
And that last thing is the only thing that has permanently changed, as far as we can see, in terms of gaming: people really play all the time now.
Like their weekends are shorter, but their weekdays are a little bit higher.
So there is something that happens with games, at least in our data, that looks like a habit-like behavior that isn't necessarily inside of someone's reflective capacity.
But when we ask people about whether or not games fit into their lives, are games, you know, a net positive or negative for school, for work, for relationships, for this, for that.
That is highly predictive of well-being and adjustment.
Like if people feel like games are a good part of their lives, then they're happier, but maybe happier people are able to fit games into their lives.
Yeah.
It's just that the science is really primitive here.
But I will say the thing that terrifies me is sports betting on esports as it is happening.
Yeah.
That is like extremely obviously a mechanism that is going to lead to a gigantic amount of harm and very, very high spenders in game also uncontroversially.
Well, we actually wrote about this recently: with these internet-mediated, technology-mediated products, some of the barriers are breaking down between traditional gambling and stuff that we didn't used to think about as gambling.
You know, I think that's what you're hinting at.
And the companies are very good at coming up with new products and all kinds of new features and amazing flashing lights and so on.
They tell me loot boxes are a form of gambling.
That doesn't seem right.
They're just games.
They're just perks.
Back when my son was quite young, he was really into a game called Growtopia.
And I thought, I'll just try to understand what's going on with this game.
And I played it for a while and it was the strangest game.
And it really is an example of where sometimes the sort of motivational appeal of a certain kind of game can have a big crossover with the kinds of motivations that draw people towards gambling.
And that's because with a game like this, there was no gameplay in the normal sense of a game.
What there was was growing items, which could then be combined with other items to create progressively rarer items that were perceived by all the players (and it was a social game, of course) as being extraordinarily valuable, which they would then barter and sell and trade and rip each other off on and so on.
And I tangibly felt it.
Like it was such a dumb game in the sense that there was nothing to think about.
There was no skill involved.
But I could feel the acquisitive appeal of getting more and more stuff, right?
These virtual items and accruing them.
And somehow, and of course, the rarity just spiraled off exponentially, right?
Like it never ended.
I should say, we've looked at this now. This is data from one of our students, Matt.
We can see when games are changing their reward frequencies in people's patterns of behavior.
We can see the games that have events that are meant to draw people in.
We can see the breaks in the data, like when they introduce a feature.
It is uncontroversial for us at least.
And we have to figure out how to write it up and how to frame it so that it's actually useful for other scholars.
So that observation you're making is entirely uncontroversial as far as I can see it.
Like we have in our house, we have two banned video games.
The first one is Fortnite, because I don't believe it is a game. It is a kind of event app store social media platform. And the second is Roblox, which is an objectively unsafe place for children to play.
And besides that, there is a lot of video gameplay in this household.
And the basic philosophy for us is we could spend a thousand pounds on an iPad or we could spend a thousand pounds on two Nintendo Switches and enough games for our children, enough good games for our children.
Kids are going to waste time on games and on play.
And then the question is: do you want them to be inside of an extraction-capitalist simulator?
Yes.
Or do you want them to play a truly demented game like WarioWare that takes you to the bizarre pits of the Japanese game show psyche?
Like, it's like there are manifold paths that a child can have through play.
It feels a little bit like things have moved on from the kind of video game nasties discourse, you know, are violent video games leading to shootings, which happened around the same time as Marilyn Manson, Eminem, gangster rap, all that kind of stuff was being vilified.
Now you have the fact that essentially a whole entire generation grew up playing video games, right?
And playing lots of different games.
And I'm not saying everybody did, but I, but they are ubiquitous now, and probably even more so amongst the younger generation.
And I find, you know, again, anecdotal experience, but it seems to be the case amongst all the youngsters that I meet that, yes, you know, there are dangers, there are predatory companies and there are like things which are there like Matt described to kind of game, you know, attention and loot boxes and monthly passes and stuff, right?
These are all genuine things that are going on, including YouTube algorithms that lead children onto addictive videos and whatnot.
But it's also the case that children all over the world are playing lots of games with their friends in co-op local things or online and then meeting up afterwards and talking about it or they're forming communities and they're taking part in guilds and all these kind of things.
I feel like we're now at a stage where when people try to demonize that overall, you know, it might work with people over the age of 50, but under that, people are like, no, the gaming environment is more mixed than that, right?
Like it's too ubiquitous.
But when it comes to social media, maybe it's because we're still, you know, just a generation or two since it came into being, that it's, it's still possible that people talk about it as if there's, you know, a single effect and that it's, you know, it's changed a generation, whereas like the reality is, you know, of course it's had an impact, but the internet had an impact.
AI will have an impact.
All of these developments over time have impacts, but it's not likely, and it would be frankly extraordinary if it was just a simple story that it made everybody mentally ill.
Well, you'd think that the hundreds or thousands of people who have been studying this before 2017 would have actually stumbled upon it.
Or you'd have thought that if violent video games caused aggression in the way that was proposed, that would have actually been discovered, and discovered with increasing nuance.
Like, it's kind of like, it's like if you invented dynamite, you wouldn't expect people to like keep looking for dynamite or say, does dynamite cause harm if we light it?
You would actually find that whatever that discovery was would start like leading to like other things, but instead it's like violent video games probably did not cause aggression in the 1990s.
Like there's some, there's some other form it takes, but but there's this thing that happens.
And this is like the Douglas Adams, you know, there's this quote that's attributed to Douglas Adams, which is like, if there's a technology that's around when you're born, it's like natural and the way the world is.
If it's invented between when you're like 15 and 35, it's exciting and you can get a job in it.
And if it's invented after you're 35, it is like immoral, dangerous.
It's a straight-up naturalistic fallacy.
I've seen that occur in the discourse around like, you know, like I said, video games were that for an age.
And then it was online and chat rooms and stuff that's going on.
And now social media and now AI.
I'm old enough to remember Dungeons and Dragons.
I remember the moral panic.
Oh, the 80s.
Yeah.
I mean, that's back on the menu, right?
Dungeons and Dragons still exists.
It's just gotten a bit cultural itself these days.
But go on.
Well, that's me and J.
Dungeons and Dragons and all of the fantasy games and so on, and the cultural stuff that stemmed from them, have become pretty widespread, right?
But I remember back when it was all new, when I was in primary school, and they really thought it would cause devil worship and just break little kids' minds.
Andy, has anybody looked into the correlations with Warhammer 40K popularity and these negative mental health outcomes?
Because it seems like it's increasingly getting popular.
A conflict of interest.
I have a very tall pile of gray shame right now.
And my audible library is not available for freedom of information requests.
So I will not ask you.
Cultural Shifts and Correlations 00:03:40
No, but one of the things we tried to look at: we have a pet theory that what happens is that young people hop on new technologies and that maybe there's some type of effect over time.
So we've looked at the correlation, say, between television and well-being over time or video games and well-being over time or smartphones or social media over time to see if that correlation changes.
Right.
And so what we see with television from the 90s through the mid-2010s is that the negative correlation between television hours and well-being starts at about 0.2, 0.22, and then it goes straight to zero across the following two and a half decades.
Social media and smartphones look like they're doing the same thing, but they're much more jagged because there's less data.
So five out of the seven correlations go down.
One out of the seven correlations stays consistent.
One out of seven correlations goes up in a token way.
It's not statistically whatever.
So something might happen over time with the young people who say that they're doing a new thing.
They might be cohort-wise different, like early adopters might be different, but that data doesn't really exist like the way we would want it to.
You know, one of the funny things is, since we did the streamer season, I ended up... I didn't before understand Twitch and YouTuber culture.
I now do.
And perhaps I'm worse off.
I'm so sorry for it.
Yeah.
I might, I might have opened Pandora's box by doing that.
But the interesting thing is that YouTubers look down on Twitch streamers. Like they might acknowledge, you know, that they're good, but there's this sense that YouTube is a curated platform, whereas on Twitch people are just streaming. And the Twitch streamers...
The pros.
The proles.
They look down on TikTokers and people on Instagram, right?
And there is like a cohort thing, because TikTok is more popular with Generation Z, right?
It's kind of like, what the fuck are those guys doing, right?
They can't even do long form streams.
They're doing 30 second videos and stuff.
And it's just funny to see, because there is also, you know, that kind of generational looking down. And I mean, I do it too.
I look down at the zoomers on TikTok.
I think I'm between your and Matt's ages.
I remember when America Online connected to Usenet and it stopped being a governable space.
It's called the Eternal September.
So what used to happen online through the mid-90s (I was a very young internet user, inappropriately so) is that every September, college freshmen in the U.S. would get their first access to the proper internet.
And it would take about two months before they would learn how to behave online.
And September was something that everybody was upset about, because it would dump in a bunch of new 18-year-olds, and then it'd take about two months, and by Christmas the internet was back to the way it was supposed to be.
But one day America Online dumped in like five million noobs and I've been looking down on everyone ever since.
But the question, the question is, what do you do about that?
And so there are going to be things that are going to be cool or interesting about how technology changes and about how our psychology and our motivation and our well-being might fluctuate along with that and might be a function of it, or it could just be a descriptive thing that is really good food for thought.
Not everything is going to wind up being video games cause school shooting.
Instagram causes teenage girls to kill themselves.
Academics vs. Public Narratives 00:10:05
There's going to be a lot of stuff that I think is the job of academic scientists or independent scientists to try to like piece apart and to argue about and to debate and to like communicate their uncertainty about in a kind of a coherent, consistent way.
And that is not what we're getting from the Jonathan Haidts, the Jean Twenges of the world, because they're coming in, and they could be extremely well-meaning if we didn't know anything else about them.
And a lot of their behavior would still make sense as a kind of genuine reaction of: this shit was invented after I was 35 and it scares me.
And I don't really feel like learning about any of it, but I'm going to protect kids.
And I just don't give a shit about second order consequences or third order consequences.
The thing that's frustrating as an academic scientist is that if you crack on and cross your T's and dot your I's, and these people kind of move into your world, you get a Jonathan Haidt or you get a Jean Twenge.
Every time you express academic circumspection, you're engaging in some political cultural debate.
Like people all last week came to me, literally two dozen journalists, asking me what I think about Jonathan Haidt talking to the prime minister today about a social media ban for under-16s.
I didn't respond to any of them because I've now learned and many others have learned that saying what is currently known to be scientifically true, it is neutral in the academic debate.
It is instructive to other researchers.
Hey, this isn't the way to look for moderators.
Hey, test the significance of the difference instead of interpreting a difference in significance as important.
Hey, remember the central limit theorem.
There's going to be issues about sampling on the tails.
That's neutral from a researcher perspective, but that is not neutral from a discourse perspective.
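The "test the difference" advice is the classic warning that the difference between a significant and a non-significant result is not itself necessarily significant. A minimal sketch with made-up effect estimates (the numbers are hypothetical, not from any study mentioned):

```python
import math

def z_for_difference(b1, se1, b2, se2):
    """z-statistic for the difference between two independent estimates."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical moderator analysis: the effect in group A looks
# "significant" (z = 0.25 / 0.10 = 2.5) while the effect in group B
# does not (z = 0.10 / 0.11, about 0.9). The right question is whether
# the two effects differ from each other:
z = z_for_difference(0.25, 0.10, 0.10, 0.11)
print(round(z, 2))  # 1.01, well short of the usual 1.96 threshold
```

So reporting "an effect for girls but not for boys" on that basis would be exactly the kind of moderator mistake being flagged.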
And what you get on LinkedIn or old Twitter or any of these things is 500 people screaming at you for not listening to Jonathan Haidt.
I get phone calls.
I've just disconnected my phone in my office and I've like disabled it because I just get phone calls of people screaming at me.
It's like not a fun position to be in as a researcher.
You know, it's a common theme that we've seen whenever researchers, whenever their topic sort of crosses over to these hot button topics of public concern, then you get this sort of crazy, crazy influence.
You know, we've seen it with COVID and we've seen it with so many other issues.
Gambling obviously has its own political interactions with politics and big business and stuff, which has its own problems.
But I just want to clarify, to sort of forestall some potential comments.
I mean, I don't think what you were saying or what we are saying is that, oh, everything on the internet is completely fine.
You know, social media, there's nothing to see here.
Don't worry about it, whatever.
I think the point that you're making, and one that I will make if you're not, is that oftentimes whatever effects, if they are there, are not so simple as: doing this thing will always make you worse off and is always going to hurt you.
Or if you're a young woman, it's always going to hurt you.
Or if you're a young man, it's always going to make you more antisocial.
Anecdotally, there will be all kinds of individual responses to it, some good, some bad, that these broad sort of sweeping kind of cross-sectional studies are kind of the wrong tool to investigate that nuance.
As an example, I'm sure you could do a study and look at political discourse on Twitter.
And if you did a study of how people interact there versus how they might interact in person, you would find differences, right?
And that might be something of interest.
On the other hand, maybe facilitating the ability for people to communicate more is a good thing, right?
It's complicated.
So I just wanted to forestall that.
I don't think you are saying that nothing to see here.
Nobody should ever have any concerns about computer games or the internet or anything else.
No, I'm saying something actually considerably more damning than that.
So the problem with violent video games and aggression isn't that those guys were wrong.
The problem is we weren't studying video games properly, right?
Like we didn't learn about what happens when people play video games.
And so it's kind of like tech debt.
It's like it builds up.
Like when a science gets co-opted into discourse, it begins to like build up like a methodological debt to it.
So the next time we panic about a technology or a cultural social thing, actually so much of the attention, even of the researchers, kind of goes up to confront the guru or the moral entrepreneur or whatever, that it derails what would otherwise be a maturing science.
So it never gets to a place where actually, like if someone asks me right now today, like, hey, Andy, what's the biggest danger of AI and kids?
I have literally written a paper to answer that question: I suggest you do not make the last five mistakes that everyone made doing social media research.
Because if we had done that part properly, instead of being distracted by Adam Alter, Jean Twenge, whoever the hell came before that, then we would know more.
And for your field, like, do you know the researcher Mark Griffiths?
Yeah.
Yeah, I know Mark.
How many papers has Mark Griffiths written?
Many.
I clocked him publishing a paper once every three days in the year 2019.
But he might have defined a new kind of addiction, but go on.
But I'm only saying that because it is an example of how even one individual can kind of pull in the direction of a specific kind of narrative and the field might be poorer for it.
And so I think that the hazard of science entering the discourse and being like kind of like harvested for discourse is that that deprives the research area of its own autonomy to find the things that maybe are good or bad or interesting about the online world.
Because Chris, you asked like, what about likes and retweets?
I hear that as like two of the most boring things that are among millions that a person could do when they're on one of these platforms.
And like this was like a big thing.
Like you have to figure out like, what are the things that a social media company actually tracks?
What are their key performance indicators?
What are the behaviors that make them money?
And are those measures actually of things that have a psychological basis to them?
And so if somebody just says in Substack, it's social comparison or it's mindless scrolling or something like that, that actually like goes back into the research in a weird way.
Like it agenda sets, it can agenda set back into the research.
Yeah, no, look, I think it's particularly a problem with all disciplines that interact with society, right?
I mean, it'd be true of economics.
It's true of psychology and so on.
It's not such a big problem for certain kinds of genetics or astronomy, right?
Because it doesn't interact like that.
It is for virology.
Yeah, yeah.
Who would have thought we didn't see that one coming, did we?
I just noticed, I looked at Jean Twenge's Twitter since she came up, and the two most recent retweets.
Okay, she's got one where she's testifying.
And the quote from it is, raise the minimum age for social media to 16 and actually verify age 16 or probably 18 for the AI sexy chat apps.
So we don't have 12 year olds having their first romantic relationships with a chatbot.
And she has retweeted this, right?
She has retweeted her testimony.
So is this really an issue, a common issue that people are having the first relationship with AI chatbots?
Maybe we'll see.
But on the other, the most recent thing that she's retweeted is from Vigilant Fox.
This is a MAGA, you know, bottom of the barrel Elon Musk boosting account, like almost in Miles Chung level.
And it's saying, this teacher turned cognitive scientist shared a disturbing reality that left the room stunned.
Our kids are less cognitively capable than we were at their age.
Every previous generation outperformed its parents since we began recording in the late 1800s.
So what happened?
Screens.
Dr. Jared Horvath explained Gen Z is the first generation in modern history to underperform us on basically every cognitive measure we have, from basic attention to memory to literacy to numeracy to executive functioning to even general IQ, even though they go to more school than we did.
So why?
The answer appears to be the tools we are using in schools to drive that learning: screens.
And they go on to say it's the same across 80 countries, blah, blah, blah.
So that's the alternative message, right?
And that is the one that is super popular: that we know the kids today are basically brain-damaged. That's fine.
Let's imagine for one picosecond that that is true.
Why would a social psychology professor at San Diego State University who does not know how to use SPSS, why would a business school professor and all-around moral foundations researcher, both of whom have millions of dollars at their disposal, why would they be the ones who would know what the solution is?
Let's imagine that this is all true, right?
In my world, it doesn't actually matter if anxiety among teenagers has gone up 10%.
Whatever it was before is something to do something about, something to learn something about, like to, you know, figure out, like, is this a resourcing problem?
Is this a problem of understanding mechanism?
What have you?
Why is the behavior of retweeting something that crazily unhinged?
How is that an indicator that you know what to do to fix it?
Like, let's imagine it was the screens in the classrooms that were somehow magically doing it.
Wouldn't you want like some type of plan to get the screens out and measure the like relationship and its effect on pedagogy?
The problem here is that there is these policies that are being proposed that they're not even being tested.
Policies Without Evidence 00:04:46
Like in South Korea from 2011 for 10 years, they turned the internet off for under 18s to increase test scores and sleep.
And when there was finally a constitutional challenge on it, they did research on the efficacy.
It had no effect on test scores, no effect on well-being, and saved less than two minutes of sleep every night.
In Australia.
I was going to bring up Australia.
I should tell everyone this, but yeah, yeah.
So very recently, just a few months ago, Australia basically banned social media platforms for, I think, people under 16 and parental consent doesn't matter, but it's a straight up ban.
And that applies to YouTube, Twitter/X, Facebook, Instagram, TikTok, Snapchat, Reddit, Twitch, Threads, Kick.
So that's the deal.
I was surprised that happened.
Yeah.
But here's the interesting thing.
I think that the public opinion is like all for it.
Actually, we love banning things in Australia.
They recently banned vaping.
You can still buy cigarettes, but they banned vaping.
Oh, you're banned?
I'm banned.
Oh, I'm sorry.
Is that true?
You can't vape anymore.
Well, I can't vape legally.
I like Australia in many ways, but we are a bit of a nanny state, I think.
Yeah, so Andy, I guess you would see this as, and I think it is too, probably policy uninformed by good evidence, right?
But I'll tell you what happened though.
The premier's wife in New South Wales read the Jonathan Haidt book.
Are you kidding?
And this has all been reported out by Crikey, which is an amazing name for a news outlet.
Oh my God.
And she told her husband, then they talked to Haidt.
Then they organized an entire conference that Jean Twenge Skyped into.
They didn't invite any Australian researchers.
Oh my God.
Social media wasn't even defined until after the law passed.
They allowed one day of debate on the law, and there was no plan to measure if it works.
They assembled an unfunded panel of experts to assess it in the middle of last year, six months before it went into effect.
And those experts met once before the ban went into effect.
There is no ability for this government panel to assess anything about the effectiveness.
There's no like plan to test, does it actually help kids?
There's no like medical outcomes trial.
And so we just have this really weird situation where a random San Diego State University social psychologist and an arbitrary business school professor who did some interesting work on moral foundations 20 years ago are making policy prescriptions that are being put into effect and pushed in different parts of the world without any plan to measure who it helps, or why, or what social media should be included.
And so it's just this really strange extrapolation from no principles, dressed up as some type of principle.
And it's just as an academic, it's just a really strange thing to kind of watch happen.
And you're living inside of it, right?
I feel Matt is right that there's widespread support for it.
But, like, were we talking about widespread support for children under the age of 16 being banned from social media?
Let me clarify.
It wasn't like a hot issue as far as I recall.
Like I'm not dialed into it so much, but I don't think it was top of mind for anyone.
But I think, you know, people's just general feeling is that whatever kids want to do is probably bad and it should be stopped.
Right.
And people don't like social media.
So when it was like, okay, the government's like, we're doing this, people were like, oh, yeah, sounds like a good idea.
It is not a mystery to me that 70 or 80% of parents, because I would include myself in this, are unhappy with big tech companies and don't want them in our kids' lives.
But what does UNICEF have to say about this?
What does Save the Children have to say about this?
What are the large international charities who eat, sleep, and drink children's rights and well-being and health?
What do they say about Australia's ban?
That's where the discourse should be happening.
It shouldn't be the wife of a minister of New South Wales and an airport book.
It should be social workers.
It should be teachers.
The idea shouldn't come out of nowhere with a massive marketing campaign behind it.
But that's my opinion.
That's not my scientist opinion, you know.
Yeah.
Well, look, I feel like there should be a grand finale, like a take-home message for people.
But it's also okay, I think, if we kind of just peter out.
But Andy, what are the final thoughts?
The Importance of Context 00:03:38
I thought your general point there, about the way these hot-button social issues can interact with certain disciplines of research, especially when the evidence is ambiguous or relatively weak, and then have a destructive force, was an incredibly important one.
Were there any other final thoughts you wanted to leave people with?
Yeah, I would just say that like, I don't think that this challenge is going away.
I think that the discourse around the internet in our lives isn't going to go away.
I think that scientists have been arguing about the internet for a long time and commentators are going to be increasingly involved in those academic arguments because it is a very rich environment for cherry-picking specific statistics or specific findings that are usable if you remove the context from them for arguing online.
Like I don't see this problem going away, whether it's disinformation research, like the stuff that we do, the kind of long tail of economics and quantitative sociology.
I truly believe that like academic research right now is being treated kind of like a farm team that researchers or research findings are getting recruited from for kind of a different level of argument, like a discourse level of argument.
And I just think it's critically important that as researchers, we include as much context as humanly possible so that whatever we do is reproducible, is auditable, is extensible, is documented.
So that when people have to dig down and say, like, where did this person get this statistic from?
Once you find the academic paper, you escape the discourse realm as much as humanly possible and you understand the context of the data generating process, the theories that were actually being tested.
Because we have to build that up very strongly: the scientific part of this has to involve pre-registration when it's needed, registered reports when they're needed, open data where it's ethically possible, and always transparent, reproducible code and synthetic data sets.
If we don't do that as academics, then when we have our academic debates, when our work is inevitably co-opted, and it doesn't matter if it's by Taylor Lorenz or Jonathan Haidt, when our work is taken out of its context for point scoring, we won't be able to keep ourselves consistent and honest.
We need to remember which of our hypotheses we had before we collected the data.
And when we're asked for our expert opinions, we need to know whether we're answering on the basis of opinion, or based on something we actually found or failed to find.
And that's just, that's a challenge that all of the academic listeners, you got to keep your head straight and you got to think about all the stuff that we learned since 2011.
Because like these people can be wrong because they're wrong.
They can be wrong for the right reason.
They can be right for the wrong reason.
And like, I'm sure once in his entire life, Jonathan Haidt has been right about something for the right reason.
I haven't personally observed that.
But then again, I was locked in a psychology department basement for three hours every Wednesday.
And I can tell when somebody would just fill that with bullshit.
So I don't know, man.
But the long and the short of it is that.
People Can Be Wrong 00:00:31
I think those are good notes to end on.
And, you know, generally, whenever people are advocating for open science and those kind of practices in general, we're all on board with it.
And I think, you know, the topics that you covered highlight the need to do that in general.
So, Andy, thank you very much for coming on and walking us through a bunch of your research and stuff.
And you have a website.
There are publications for people to read.
We'll put them in the show notes.
So yeah, thanks very much for coming on anytime.