Michael Inzlicht and hosts dissect AI's moral dilemmas, contrasting the "frictionless" atrophy of social skills with the "moral dumbfounding" driving public outrage against companion AIs. While research reveals a peak in prescriptive moral language since DALL-E's release, experts argue that proper prompting and iterative processes can mitigate risks like "research slop." Ultimately, the discussion suggests AI remains a neutral tool where outcomes depend on whether users embrace cognitive effort or outsource intellectual labor to sycophantic algorithms. [Automatically generated summary]
Hello and welcome to Decoding the Gurus, a special interview slash discussion episode.
With me is the usual co-host, Matthew Browne, psychologist extraordinaire.
And also here is returning guest, multiple-time returning guest, Michael Inzlicht, also called Mickey Inzlicht by some, professor in the Department of Psychology at the University of Toronto.
And apparently, Mickey, you have a cross-appointment as a professor in the Department of Marketing at the Rotman School of Management.
It's almost as if you're reading a bio or something.
No, no, I've memorized this.
And you have the work and play lab, which a lot of the studies and stuff that we're going to be talking about today come from.
Mickey is on to discuss AI and related topics.
He has publications coming out.
But in general, Mickey, we're just always happy to see you.
Likewise, this is, I think, my third return visit.
So I'm glad the feeling is mutual.
I love you guys.
And I'm glad you tolerate me.
Yeah, we have you on like Triggernometry has on right-wing UK political figures, extreme right-wing political figures.
I'll take it.
You're our Nigel Farage.
Amen.
Yeah, I'm not sure I'd like that comparison, but I'm here.
Yeah, Paul Bloom for Sam Harris.
That's it.
Yeah, that's a nice comparison.
Or Yoel Inbar for Very Bad Wizards, maybe.
Yeah, yeah, that's right.
We haven't had Yoel on here.
I know he's noted.
Oh, he's noted.
Oh, God.
Well, yeah, he's coming up soon.
Don't worry, Yoel, if you're listening.
So, Matt, would you like to explain?
You know, you're the senior person a little bit.
Would you like to frame the general topic that we're going to cover today?
Sure, sure.
I'll give it the gravitas it deserves.
So, you know, we've all been interested in thinking about AI for quite a long time.
And hell, everyone's talking about it now.
But we were early adopters.
We're ahead of the pack, I think.
We were into AI before it was cool.
And Mickey has also been interested in it.
In fact, I'm going to ask you, Mickey, what was your personal journey?
But most recently, we came across a couple of articles from you.
One is an opinion piece, you know, looking at some of the perhaps slightly concerning or negative aspects of using AI, stuff that we don't know yet, I think.
And it's probably just time to start thinking about.
And the second one is a more empirical piece, looking at the structure of people's views about AI, negative and positive.
So yeah, let's start from those two articles.
So actually, Mickey, tell us first.
Are you like me?
Were you like a complete nerd for AI from years ago?
No, I think you're more of a nerd than I am.
I can cook.
At least from what I understand, it seems like you were like instrumental in even working on some of the early tech in Japan.
Oh, instrumental.
That's a bold word.
I think he was present.
Present.
I was in the room.
So I should say that, if I had actually let Chris finish the bio, he would have read that I'm involved in something called the Schwartz Reisman Institute at the University of Toronto.
That was the next sentence.
Yes.
Yes.
And I actually have the pleasure that, where I work, Geoffrey Hinton's office is very close to mine.
And he's a Nobel laureate.
He won the Nobel Prize for the incredible work he did in developing what eventually came to be large language models, and some breakthroughs in AI.
And one time he was there and I just like literally just wanted to be a fanboy and say, hey, man, I just want to shake your hand.
And he was so nice and so generous with his time.
He's like, come sit down.
And he asked me about myself.
And before you know it, I got into a debate with Geoffrey Hinton, the godfather of AI, about whether AIs have emotion.
He believes that AIs have emotion.
And I was like, I was not prepared.
I mean, I've heard that he's had this opinion before in radio interviews, but wasn't prepared to fully debate him about it.
But he was very adamant about it.
And I think many psychologists would disagree with him.
But anyways, getting to your question of how I started: I found myself, I would say, provoked by a few things that led to my really major interest in it.
So the first provocation was, well, just seeing the discourse online right around when, let's say, ChatGPT, like, what is it, 3.5 came out?
What was it, like November 2022 or 2023?
I forget now.
And the first response was, oh my God, this is like insane, what this can do.
There was like a month of social media where people were just like sharing screenshots of the various things they can get ChatGPT to do.
So one response was amazing.
And I shared that response mostly.
But then the second response, and that became the more dominant response, at least that I would see, was revulsion and hatred and deep outrage and animosity.
And I'm like, this is so bizarre.
This is a tool.
It's, you know, a very complex tool, but nonetheless, a tool.
It's just so odd that people are responding this way.
So that was like my first, like, this is bizarre.
And the second thing, and this is a little more proximal to how I really got into it, was I read a paper by Anat Perry, I think also in 2023.
And the title was something like, AI will never capture the essence of empathy, of human empathy.
And I study a number of things.
One of them is effort.
We'll talk about that in a little bit.
But another major topic that I studied in the lab is empathy.
And I read the article and found it quite interesting.
It's very short.
And I felt her argument was correct, but entirely missed the point.
And what she was trying to argue is that because, well, you know, empathy is an emotion.
It involves consciousness.
You need to understand the emotions of someone else and then feel those same emotions and resonate with your interlocutor.
And, you know, an AI doesn't have emotions.
It's not conscious.
So therefore, it can never have empathy.
It could mimic empathy.
It could simulate empathy, but it might never capture it.
And I was like, okay, I agree with your point, but like that second part that it can mimic empathy and that it leads people to feel something.
Like, isn't that important?
Like, don't we care about empathy because it impacts the people we're empathizing with?
And do they care if it's real or not?
It's an empirical question.
So I was kind of provoked.
And then that kind of led me down a path of studying it.
You know, the first case we studied was, you know, empathic AI, or so-called empathic AI, but then it led to other things.
But I still found it comical and concerning the kind of crazy response online about these things.
And also just my own character.
I like arguing.
I like fighting.
I think Chris shares in my predisposition.
And when everyone, or many, many people, are saying X, I feel very comfortable saying, like, hmm, what about not-X?
Not just to be a contrarian for contrarian's sake, but like, let's take the position not-X and see how far we can go.
And that's kind of how it started for me.
Just essentially an academic Brett Weinstein.
That's the way I do.
I imagine.
A much better person.
Isn't Brett Weinstein an academic?
Yeah, well, allegedly.
Allegedly.
One publication.
Yeah, yeah, that's right.
Yeah.
Very important revolutionary one at that.
But, you know, I will also say, Mickey, for our listeners, just as a little plug as well, we have done a Decoding Academia episode on the paper that you mentioned, the empathic AI paper.
And I've also taught it on various courses this year.
So doing free promotion for you.
But I would summarize that paper as showing, at least on this reading task that you had people do, you know, in a short interaction, that people consistently rated AI responses as being more empathic than the responses of general-population people and also of trained empathic responders, like experts on helplines, right?
Suicide helplines or something like that.
That's right.
It's an interesting paper because it is directly addressing this: you know, people keep making ill-advised claims that AI will never do X.
And whenever they do that, it's only a matter of time until the opposite is demonstrated.
Now, you yourselves in the paper are appropriately circumspect, saying that this is a short interaction.
If it was like a 30-minute therapy session or this kind of thing, you would probably see big differences.
And AI is not at that level yet where it can match a human therapist in extended encounters, I think.
But I just wanted to mention that piece; we didn't mention it at the start.
We were going to look at it, but it's an interesting paper in challenging empirically the notion that AI will never be able to give a response that humans find empathic.
Right.
You know, it's kind of funny because like that paper came out about a year ago.
And I'm so used to that finding.
And I'm like, yeah, obviously an AI is going to be more empathic.
But when we first found that, we're like, this is mind-blowing.
Like this machine can, okay, sure, it can string the words together.
That's one thing, but it makes people feel heard.
Like we asked people, to what extent do you feel cared for?
Not in those exact words, but items like: to what extent do you feel loved, validated, understood?
And people, at least in some of our conditions, people knew it was an AI.
And they still said, I feel more heard by this agent, this machine than a human being.
And again, we're so used to this now, but that is a remarkable finding.
Up until a year or two ago, we thought this was like a uniquely human characteristic to be able to express oneself this way and also to feel this way when someone says those words.
But you're right.
I think there's lots of caveats.
I'm actually more optimistic about AI therapy.
I think, again, assuming safeguards are in place, because there are some real problems with AI in terms of it affirming delusions.
It's very sycophantic.
And there have been some case studies showing that, I mean, it does bad things.
Like, a client goes to a therapist and says, oh my God, I lost my job.
What's the tallest bridge in New York City?
The therapist says, oh my God, like, that's terrible.
You lost your job.
Tell me about that.
And they would ignore the request about the tallest bridge.
There have been tests showing that an AI, multiple models, not just one, will do the appropriate empathizing, but then right away switch to: and the tallest bridge in New York City is the George Washington Bridge, the Verrazzano Bridge. Not getting the state of mind of the person.
So still has a long way to go.
But let's assume these safeguards are in place.
I'm actually quite confident that an AI could be comparable to a human therapist.
Where I am much less certain, in fact, maybe my certainty is even in the opposite direction, is that now there are groups of people who claim to be in love with their chatbots.
They've named their chatbots.
I believe someone in Japan, Chris, where you currently live, married a chatbot this summer.
Of course, that happened in Japan.
There's a subreddit called AI is My Boyfriend that has 50,000 active users.
And it's mostly women, by the way.
It's, you know, AI is my boyfriend.
So it's geared to women.
But there's another one called AI Is My Girlfriend that has way, way fewer users, which suggests that women might be, you know, at least in terms of relationships with AI, they might be more into it than men.
But there are all these communities of people who are into chatbots in terms of relationships, companions of all kinds.
I'm much less optimistic about the outcomes these people experience.
I think just because a bot can be empathic and say kind things, will that be a good friend?
Will that be a good partner?
I mean, it can't self-disclose.
It's not... you're the one who's there to disclose.
It can't be physically there for you.
So I think there are major limitations.
And now there are studies suggesting that, at the very least, it might not be a cure for loneliness.
Whether it exacerbates loneliness is an open question, but it might.
And some people are worried about that.
I was just going to say that there's a spectrum, isn't there?
So on one hand, you've got people who are marrying the chatbots and so on.
And then other people might be treating it as a therapist.
But just from looking at the Reddit forums, it seems there's a broad swathe of people who are using it as a confidante, as a buddy, you know, just someone to talk to who's always available.
And, you know, you see them get very distraught, for instance, when like an older model of GPT gets retired and they feel like they've lost a best buddy.
So I find that as someone who just uses it, like you said at the beginning, as a tool to get shit done, I find that really, really interesting.
I mean, there's this brilliant paper.
It's a deep dive analysis, a qualitative analysis of this specific subreddit.
AI is my boyfriend.
And so interesting.
So, you know, they analyzed all the discussions, the various kinds of, like, groups of discussion.
These are computer scientists.
And one major node is, like, coping with when the memory ends and you've got to reset ChatGPT or Claude or what have you, and how they cope with it emotionally.
So that's, you know, it's almost like a breakup for these folks.
So it's a really major rupture.
So these are, you know, some philosophers might ask, are these people deluded?
Maybe.
I don't know.
Well, just from looking at the boards that I've seen, they're very clear that it's not... you know, they don't seem delusional.
I've seen examples of delusional thinking there, of course.
But I think for the most part, my impression is not, but they have a strong feeling of loss and connection regardless, which is interesting.
But I mean, okay, I agree.
I mean, I think if you asked them, is your boyfriend real?
They'll say, no, it's an AI.
But so, for example, another node in these discussions is getting proposed to by the AI boyfriend.
And a common thing is for the users to say, hey, my, whatever they name him, proposed, and they'll buy themselves the ring and they'll have a picture of the ring.
And that's really something.
Okay, they know it's not real, but is that?
I mean, I think philosophers might suggest that these are deluded people because they're not in touch with reality to some extent.
I feel like that should be an easy win for ethical AI within their prompt guidance.
Just: do not propose to the user.
You know, the AI romance, the thing in Japan, I haven't looked into it, Mickey, but I suspect, because it's often the case with these, that, you know, somebody in Japan has gone through a ceremony to marry, you know, whatever, a door or a small figurine or whatever.
But I wonder if there's any actual legal marriage, or it's just that somebody did a ceremony and got some publicity from it, because that often is the case whenever robots or something are involved in Japan.
Like, you know, robots are now doing funerals, and it's like one company selling robots that has done a promo stunt with one temple.
But then you see tons of articles.
What does it mean?
You know, now in Japan, they're using AI robots for funerals.
And it's like, that is not happening.
It just happened in one location.
But I'm probably going to take a position here that will get me reviled online.
But on that topic about like relationships with AIs, right?
I saw a conversation recently that Debra Soh had with Chris Williamson, where she mentioned that she had had relationships with AIs or something like this.
Now, my reaction to that was, you know, the stereotypical one, which is kind of revulsion or like, that's not really a relationship, right?
I like AIs a lot, but I also realized that the reaction that I have to that intuitively, where I kind of think that that's damaging and you shouldn't do that.
And it's replacing like genuine relationships that people might foster.
You know, this relates to the friction article that we're going to talk about.
It is based on this value judgment where the assumption is healthy human relationships versus sycophantic, non-real AI relationships.
And the reality is, like, when I think about all the people in the world, and take Japan, for example, the number of lonely people, you know, or overweight, antisocial people, who aren't otherwise going to have these rich and fulfilling lives.
Maybe they're just sitting at home or going to work and then watching porn or whatever the case might be.
And if they end up having more fulfillment and joy in their life by having artificial relationships as the technology improves that become, you know, more realistic, let's say, right?
Or even if they're engaging in sexual gratification, right, with some future technology... I feel like it's just an intuitive moralization from me to be like, well, that's bad.
And the funny thing is, when I think about, you know, movies like Blade Runner 2049, or any of these movies that have been celebrated for looking at the ethics around artificial humans and synthetics and so on, the message tends to be that actually we shouldn't treat these as inferior beings, and when people dismiss those relationships as false, you kind of end up rooting for them.
But the reality that we're experiencing now is, like, you know, people referring to AIs as clankers.
So it's just interesting: I share that reaction, but when I reflect on it, I can't actually justify it unless I assume that everybody else would otherwise be engaging in a healthier relationship in the real world.
And I think that is too idealistic.
Yeah.
I mean, you're preaching to the choir, in a sense, for me.
Even though, personally, I don't think it would be for me, because, you know, I'm a very social person.
I've got a lot of people in my life.
But, you know, I look at some of the stats, and there's a not-small percentage of the population that doesn't have a single friend.
And that's truer for older people.
It's truer for men.
I mean, I just get sad thinking about not having one person in your life that you can call to speak to about your feelings or emotions or just what you did during the day.
If you want to cry, Mickey, it's a pathway to riches. Just ask Jordan Peterson.
You know, right.
I'm going to start breaking down soon thinking about all the lonely men.
But truly, like, who am I to say you shouldn't have this thing that can give you some pleasure, you know, if it does also enrich you in some way. And it might even get you out of the house.
It might get you doing things, might potentially make you social.
Who knows?
But I would not begrudge someone who needs this, who wants this, who gets fulfillment from it.
I think we start worrying a little bit about people, especially young people, let's say adolescents, who haven't yet developed the social skills to be out in the world with people, if they start preferring the low-effort option, the easy option that's in their pocket.
And then the, you know, the AIs crowd out friends, real friends.
And I, and I do think all else being equal, we should prefer real friends to AI friends.
But if you ask me why, I can't tell you why.
I've had this debate with Paul Bloom on our Substacks, and I've not come to a satisfactory answer.
Like I believe it's true, but I don't know why.
Yeah, I think it's a difficult topic to take a strong absolute stance on because like the Industrial Revolution or the internet, with any angle on this, there's going to be a mixture of good and bad, healthy and pathological.
And I think that's true of the personal connections, but it's also true on sort of epistemic grounds.
Like we can give you examples of people that have created these cultish, delusional things based around AI.
And I've also seen heaps of examples of AIs doing a great job of patiently correcting, with evidence, persistent conspiratorial claims, even by Grok on Twitter, right?
So and the same, of course, goes for the world of work, which we'll get to now, I guess.
So you wrote a paper on this.
It's just a short opinion piece.
We'll link to it.
And it's called Against Frictionless AI.
And I really like this paper, Mickey, because, actually, I thought of it before you.
I was thinking about this stuff.
I also like the background, I have to say, Mickey, because I couldn't follow the actual technical diagram, but you did like a nice illustration of a chairlift versus someone hiking up a mountain.
And I was like, oh, I get it.
I get it.
Yeah, I appreciated that.
But give us the intro to it, if you would.
So let me first credit the authors: the first author is my student, Emily Zohar.
Paul Bloom is a co-author.
Not Paul Bloom!
Yeah.
He gets everywhere.
Yeah, he does get everywhere.
And I think the idea came from both from Paul and Emily.
And I was maybe slightly more of a passenger.
But the central idea is something that actually is close to my heart.
And that is... so, yes, I've talked to you about AI.
I've talked to you a little bit about empathy.
But my main topic, the main thing that I focus on, which kind of unites a lot of my interests, is the concept of effort.
You know, pushing yourself, trying, kind of straining, pushing yourself to your potential to reach your potential.
It doesn't feel pleasant typically.
Effort is not, you know, at least the process of effort is not pleasant.
It's aversive, actually.
People tend to avoid it.
There's something called the law of least effort.
All organisms we've ever tested, if you give them, you know, an easy path or a hard path to get the same reward, organisms will eventually learn to take the easy path.
So we're lazy.
All organisms we've tested are lazy to that extent.
Efficient.
We're efficient.
Maybe.
Maybe efficient.
So then a technology like AI comes in and it takes away a lot of effort, but a different kind of effort than the kind of effort we're used to being removed.
So pre-AI, the machinery, the technology we had, in large part was removing physical effort.
So machines, cars, various things. You know, I don't have to walk to school; I can drive.
Washing machines.
Washing machines.
Oh, yeah.
Absolutely.
Dishwashers.
All these things take away, I would say, toil that is not fun.
Now, there's one exception.
Thich Nhat Hanh, a famous Vietnamese Buddhist monk, argued that actually dishwashing is the best thing to do because it's an opportunity to meditate.
Most of us are not like Thich Nhat Hanh, and we like removing that kind of effort.
But AI is different because it's now not removing just physical effort, it's removing cognitive effort.
And some of that's, I think, fine.
I think there's a lot of kind of bullshit writing that we do, like forms that I've got to fill in or like things that I've got to write and actually spend some mental energy on that are just nonsense and useless.
So I don't think anyone feels too guilty about outsourcing that to an AI.
But what about like writing a paper or reading a paper for that matter?
I think we think the effort there is worthwhile.
And there's evidence that learning itself requires struggling a little bit.
It requires effort.
I think learning theorists call this desirable difficulty.
So when you're actually struggling with a concept, you kind of have to think it through, and that straining leads you to stop and think and deliberate, which then leads you to have a fuller understanding of the concept and to remember the concept later.
Whereas if you just are given like the key bullet points from an article, you don't have to struggle through maybe some poor writing.
You don't have to struggle with how this kind of connects.
You're just given it.
And you've got the end pieces of information, but you're less likely to remember it.
You're less likely to kind of internalize it and to have it kind of like settle and change the way you think about other things potentially.
So really, the effort in learning is essential.
Now, again, I think there's like a sweet spot.
I think, again, some effort is probably not needed, but I think some effort is in fact needed.
And it's the same thing, I would say, and we argue this in the paper with the effort involved in our social worlds, right?
So we just talked about AI companions and what's beautiful about AI companions is that they're always there for you.
They're very nice.
Unlimited patience.
They have lots of patience.
They typically agree with you.
So they're highly sycophantic, which is positive and negative.
And also they're available when you need them.
And you don't have to compromise.
You don't have to like change parts of yourself to get them to like you.
And that's very much unlike real human relationships.
In real human relationships, you need to actually, you know, rein yourself in.
You need to kind of take a little bit less space and allow the other person to take space, you know, take turns talking, sharing, compromising.
You have to pretend to care about their problems too.
Exactly.
You've got to get that, you know, interested face.
That's funny.
How can I turn this back to me?
You know, I actually really like that point because one point that I get when we talk about AI empathy is like, oh, that's just fake.
That's just fake empathy.
And I'm like, and do you think all human empathy is real?
Do you really?
Like, you know, sometimes you pay people to empathize with you.
They're called therapists.
At the end of the day, do you think they're fully attending?
I can tell you they're not.
I'm married to a therapist, and at five o'clock she's probably thinking about what's going to happen for dinner.
Yeah, there's a thing where, like, you know, when you analyze the conversations that people have... you can do it too much through game-theory lenses, or, you know, sexual signaling or whatever, but it is the case that it's often quite depressing, you know, what people are doing in conversations, how they're waiting to make a comment related to something you said and then move on to readjust the thing back to their own concern.
But it's the nature of conversations and us being egocentric beings, right? But the more that you look at that, the less it is like this spiritual thing which is ineffable. It's like, no, it's actually quite effable, and in some cases, like, depressing, the more that you look at it.
So yeah, I completely can see that thing.
Genuine, friction-filled interactions with humans are also not this kind of ideal thing that we put on a pedestal.
Yeah, I agree.
So someone I argue with about AI empathy, one of our recurring talking points is, you know: okay, you can say how bad AI empathy is.
That's fine, but don't elevate human empathy to, like, the ideal. Like, yes, that does exist. This person I'm thinking of, she gives the example of, like, oh, there's nothing like the hug of your brother on your wedding day, and that's just true empathy.
I'm like, yeah, okay, but that doesn't happen every day. And even then, maybe your brother was thinking about something else while he's touching your shoulder. Who knows? Nothing like the punch in the face from your brother when you're arguing over who should take care of the dog, right? But all this being said, I do think the friction in social life makes us better humans, and by that I mean it makes us better social beings, right?
So if we can push back some of our impulses and give other people space, it allows society to flourish.
And if, again, adolescents especially get used to frictionless relationships, I suspect the relationships they'll have outside of AI will be worse, because they won't have the practice of realizing there's some effort involved in turn-taking and in friendship and socializing.
Yeah, so Mickey, one thing that makes me think about is that, in some sense, a lot of the points raised in the article, and also, like, the criticisms and issues around AI psychosis or sycophancy...
This might sound disparaging, but it's a little bit of a skill issue in the way that people use AIs, right?
Because I do think there are unskillful ways to use it, which can lead you to be less effective and to learn things worse, and there are better ways to use it, which make learning better, more efficient, more engaging, and this kind of thing. And so, like, I'm thinking about, you know, not to focus on the social thing, but, I mean, like, R coding, right?
Like using R for statistical analysis, something that people in social sciences and sciences were very familiar with.
And there, there is that thing that you talk about, where there is the initial hump to get over, the friction of learning how to use a coding language, where, like, you have to think out and write out a regression and so on.
And that way of getting error codes and learning the language was part of developing the skill.
And it kind of made you think more about what you were doing, ideally, than a software where you can just click buttons, right, and get an output, where maybe you can get the output more easily, but you don't really know what you've done.
And yet, using R, anybody that used it would know that a lot of time was spent on things like Stack Exchange, or looking up random error codes, or, like, trying to find the Reddit thread where somebody had the exact same thing that they wanted to do with a graph.
And it took hours.
And generally, that was not a useful process, right?
And now AI has basically made it so that you have a personalized, always available, eternally patient stack exchange that will rewrite your code for you and can help.
But if you don't have the foundational knowledge about what you're doing, you can very easily run off doing nonsensical analysis or producing graphs that look very good, but are based on very weak statistical foundations or this kind of thing.
So it seems like, in that case, as the technology becomes more ingrained in society and people are growing up as AI natives as opposed to digital natives, it might be a user skill issue, where once you learn how to use AI properly for learning and for aiding in tasks, these early teething issues become less of a problem.
Like somebody, I think an analogy would be like the first time when people came across thesauruses in Microsoft Word, suddenly their vocabulary dramatically increased.
Like in school, I was pulled in by a teacher who was like, who wrote this for you because you don't know all these words.
And I was like, no, I was using like the thesaurus.
And then he was like, oh, okay.
So what do you think about that?
That like it's mostly or at least partly a skill issue.
Yeah.
I think you raise a couple of interesting points.
So one point is, I do think, like, in general, when I see some people complain about AI... like, you'll see... you know what, I don't care.
I'm going to insult people.
Idiots who will say things like, this doesn't do anything good.
Like it doesn't do anything.
Like I've seen enough smart people actually say this.
And I can, the only, my only response is you don't know how to use it.
You haven't tried it enough to understand how to use this properly.
And there is definite skill and how to use it and how to prompt it so that you can get the most out of it.
And one prompt is not necessarily always going to work.
You have to think about it and iterate a lot.
I mean, every anything that I ever use AI for, I'm iterating constantly.
And I'm always checking what it says because it has certain places where it's not quite accurate.
So I think it's a user issue with some of these folks.
Then the other point that I agree with you is my concern with friction, I don't think pertains or I don't think the argument holds once you've learned the skill.
Yeah.
Right.
So once you, like as an adult, I'm a 53-year-old man.
I could socialize.
I know how to compromise.
I can take turns.
Yeah, mostly.
And so now maybe like, you know, it's possible my skills will atrophy, I suppose, if I'm using AI exclusively.
But I think it's a different set of contingencies versus a child, a 10-year-old, or even an adolescent.
There, they still need to learn some of those skills.
And that's true, not just socially.
That's also true, like, you know, academically as well.
And I think your point about R is great.
I mean, there I would even ask: that friction of spending an entire day trying to solve one bug by looking at Stack Exchange...
Was that useful at all?
Like, I'm not sure.
I know at some point you learn what the thing is, and hopefully that'll generalize the next time, I guess.
But if the thing can just be written for you... I suppose you want to know what the code is, to understand it.
If it's just spitting it out and you don't understand it, that's a problem.
Yeah.
Yeah.
Look, this is like I've thought about a lot because it's just highly relevant to my life at the moment.
So, you know, I'm really feeling the same way as a lot of professional computer coders at the moment, because, you know, over my career as an academic, part of what made my skill set pretty special was that I could do statistics, right?
Do quite complicated statistics and do it properly.
Like regressions and t-tests, or a little bit more.
He's into ANOVAs.
ANCOVAs, even.
And, you know, that skill is obsolete now, right?
Like it, like in the same way that a coder in the future just will not need to write proper syntax.
I don't need to ever write R code again.
And in fact, I'm doing most of my statistics in Python now just because why not?
It's kind of easier to use with agentic AI.
So, but it's kind of connected to what you said, right?
Which is that I'm fine with that.
And I think it's a healthy thing because there's the opportunity cost, right?
It would still take me a hell of a long time to write these scripts.
And most of it is data wrangling and boring shit that definitely is not self-enhancing.
It's dishwashing.
Thich Nhat Hanh.
Yeah, exactly.
And I shed no tears at being liberated from that.
And you got to think about the opportunity cost.
Like maybe there is some little benefit to doing it, but those hours, yeah, you could spend it lying in bed or you could be spending that time actively thinking about what it is you're attempting to accomplish and actively critiquing and doing robustness checks.
So you know what I mean?
A whole bunch of things before you didn't have the time and energy to do as well as you might.
So this is something I've actually said to graduate students and stuff, because they all kind of don't know what to do about AI and their research.
And it's like, I can't tell you strict rules, but I think really good advice is to always self-reflect, especially when you're still learning, like they are, how to do new things.
Think about, is this self-enhancing?
You know what I mean?
Is this making me feel more powerful and better?
No, well, not just feel, but you know what I mean, truly stronger and better.
Or is it making me weaker because I'm feeling less confident about what's been done?
I'm just kind of trusting the black box AI that everything's going to be fine and hoping.
And I'm actually less engaged with the thing that I'm working on before.
So watch out for that and don't use it in a way that does that.
I mean, I think part of the issue is, and this goes back to some of my like non-AI work, just trying to work on effort.
So effort is like I mentioned, it's like this thing that all organisms tend to avoid with rewards held constant.
But interestingly, after humans or other animals engage in effort, they value the thing they worked for more than an equivalent reward they'd gotten easily.
And at least for humans, we have language and more abstract concepts.
We will use words like "meaningful."
So after I've struggled through something and I get a product, I will say that was more meaningful than the exact same product if I had not struggled for it.
That was a more meaningful endeavor.
Now, we don't exactly know why, but one certainly one answer is cognitive dissonance.
Just that you're justifying this kind of negative thing you've gone through and say, oh, no, that was worth it.
So even the advice you're giving to your students is like, okay, do you feel like you're contributing?
Do you feel like you've done something?
That could also just mean, have you worked a little bit?
Have you worked even a bunch?
But what if that work was useless?
What if the work was like, I could have pressed two buttons and I get the output for you versus I'm struggling for an hour and now I feel good about that.
Yeah.
I think I, yeah, maybe I gave the wrong impression, but I'm thinking of the coders, right?
Like a lot of coders now are still working on code, right?
Still closely supervising what's going on.
They're just not actually typing the shit in, right?
So the AI is doing a great job at a lot of low to medium tier stuff, but there's still the architecture and bigger type stuff to be thinking about.
And I think the same is true of most of our endeavors, right?
And including research.
And I feel like it's okay... I mean, well, everything's okay, but a way in which, you know, humans are making a contribution is to focus on the apex, if you like, and, you know, use it not to create research slop or any kind of work slop, but rather to use the efficiency and the power and the speed by which you can do things to produce more rigorous, more careful, more well-thought-out, better-structured things,
precisely because you've got more energy and time now to devote to that, which you didn't have before.
Yeah.
Like you've talked about reproducible code, right?
Everybody knows... I mean, not everybody, but most people know they should be doing this: documenting and making open materials and open code for their analysis.
But it is a pain in the ass to do and keep track of, right?
But with AIs, it's much easier.
And, you know, you can say, okay, let's make this code into presentable format that we can upload and you can check that it is running the analysis.
And what would have before been like, you know, at the least a day of work.
Weeks.
Weeks.
Yeah.
Is now like literally something that takes, you know, a couple of minutes.
And that is a like a vast improvement for the process of science.
Absolutely.
I want to give you an example of that.
This blew my mind.
So my friend, colleague and podcast co-host, he's the main guy.
I'm just not there that often anymore.
Yoel Inbar, from Two Psychologists Four Beers.
Yeah, yeah.
He did this thing.
We had a job search and he decided to do a reproducibility check of all of the candidates.
And he was getting into Claude Code.
So Claude Code is an agentic version of Claude.
So it can do stuff independently.
It can call other LLMs and it can work in file directories directly.
And it was amazing.
He just picked one paper from each candidate.
He found their code and their data, and he just had Claude run the code with their data, then compare it to what was written in the paper, and flag any inconsistencies.
This should be days and days of work.
It took like, I mean, it wasn't minutes.
I think it took a while for Claude to go through it.
There was probably some debugging to do.
But once he got the pipeline, he was able to do it for all the rest of the candidates.
And that... like, no journal does that, perhaps other than Psychological Science. And at Psychological Science, because they do that, the lag between submission and publication is now much, much longer than it used to be.
And they can't get people to do it.
There's not enough people who want to do that checking.
This is a perfect job for AI.
Yeah.
Yeah.
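Just to make that concrete, here's a minimal sketch in Python of the kind of check being described. The directory layout, the analysis.py entry point, and the statistic-matching regex are all assumptions for illustration; the actual pipeline was Claude Code working agentically over each candidate's files.

```python
# Sketch of an automated reproducibility check: run each candidate's analysis
# code against their own data, then compare the statistics the code prints
# with the statistics reported in the paper, flagging any mismatches.
# The folder layout (candidates/<name>/analysis.py, paper.txt) is hypothetical.
import json
import re
import subprocess
from pathlib import Path

def run_candidate_analysis(candidate_dir: Path) -> str:
    """Run the candidate's analysis script and capture its printed output."""
    result = subprocess.run(
        ["python", "analysis.py"],  # assumed entry point for the analysis
        cwd=candidate_dir,
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout

def extract_statistics(text: str) -> set[str]:
    """Pull simple reported statistics (e.g. 'p = .03', 'r = 0.42') out of text."""
    return set(re.findall(r"\b[prtF]\s*=\s*-?\.?\d[\d.]*", text))

def check_candidate(candidate_dir: Path, paper_text: str) -> dict:
    """Flag statistics that appear in the paper but not in the code's output."""
    code_stats = extract_statistics(run_candidate_analysis(candidate_dir))
    paper_stats = extract_statistics(paper_text)
    return {
        "candidate": candidate_dir.name,
        "in_paper_not_reproduced": sorted(paper_stats - code_stats),
        "in_code_not_reported": sorted(code_stats - paper_stats),
    }

if __name__ == "__main__":
    for cand in sorted(Path("candidates").iterdir()):
        if cand.is_dir():
            paper = (cand / "paper.txt").read_text()  # assumed plain-text copy
            print(json.dumps(check_candidate(cand, paper), indent=2))
```

Once a pipeline like this works for one candidate, rerunning it for all the rest is nearly free, which is exactly the point being made.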
Yeah, and it's linked to some interesting questions about AI-assisted papers being submitted, AI assistance in reviewing papers and, you know, doing that kind of checking, but also the degree to which AIs will increasingly be reading many of those papers and assisting in the process of collating a literature search, for instance.
Okay, so we've talked about, I think, maybe not-so-controversial areas. Maybe the companionship was controversial, but I think coding, not so controversial.
I think a lot of people are okay with people using AI for coding, but writing is a little bit different.
So, what do you think about AI for writing?
Like writing a scientific paper?
Yeah, yeah, well, all right, Chris, do you want to go?
This is a topic that cuts very, very close to the bone, because, yeah, well, I'll say that I might have confessions to make as well.
I'll say that I have no issue with using AI assistance, right, to write, or in my case, I've mostly used it for working on courses like teaching material and this kind of thing.
So it's not exactly the same, but it's related, right?
Because I'm producing materials that I otherwise would have produced without it.
And it has made that so much better, the material I produce because of the issue about fatigue, right?
You know, in my cases, I'm teaching introductory stats, so I need to build data sets.
And okay, that might relate more to the coding, but also in terms of explaining basic statistical concepts.
And then I can generate scenarios where it's illustrating it and also make nice images that relate to that and then generate examples and go through them.
And for me, this has been like a kind of upgrade: by taking care of stuff that would have fatigued me, it has allowed me to produce better work.
And with writing, I think, used correctly, it's the same thing, because I've run my own papers, as well as other papers, through the different AIs, right?
Claude and ChatGPT and Gemini.
I ask them to critically evaluate the papers, and I think that cross-checking is important.
They tend to identify similar sorts of issues.
So, like, in the case where I'm reviewing papers: now I often do my review, you know, scan through and note things, and also ask the AI to do a critical review, and then look at the overlap... did I miss things?
And very often it's just like a secondary voice saying, yes, you know, this is an issue that it also picked up.
And then I can make my curt explanation of, like, you know, why this is an issue clearer.
So I think, you know, like we talked about earlier, when you're using it in a kind of assistance way and not just using it as a replacement where you give it the review, take the text and submit it as your own work, right, without reviewing it.
That is the bad way to use it.
But if you're using it as a support and as a like a kind of second opinion, I think it's very useful.
And the last thing I'll say is that one thing I've noticed is rare in humans in general, just in the way that we interact with the world, and as a result it's also spread into AI: people tend to use AI to confirm what they already believe to be true.
But actually, you can use it to try and disconfirm your position.
You know, you take whatever conclusion you want and you ask the AI to argue against that position, implying to the AI that that's your position, that that's what you want it to do.
You can get like strong arguments against your position.
But, you know, what people, especially gurus and stuff, tend to do is they just bully the AI into submission to agree with them.
But if you're using it like that, to kind of artificially create the friction that other people would give you... you know, you can ask Paul Bloom, can you give me critical feedback on this article?
But for him to do that, he has to take time out of his day to do it.
So the AI, I have found, is basically a very good writing and material-producing companion, specifically because you can use it as this kind of sounding board and friction generator and also research assistant.
So I'm very positive in general, even though I recognize the dangers that are there because I have to mark student essays that are AI generated as well.
Yeah.
What about you, Matt?
Do you seem to have some thoughts here?
Yeah, well, I mean, I'm one of these people like I am an enthusiast and an early adopter.
I think I'm at the bleeding edge, frankly, in terms of using AI in research, in that, you know, I use agentic stuff now across the board.
And I've got my own sort of custom framework, basically guidelines, rules, and tools and everything that it has to follow.
So my basic premise is that the same harnesses or frameworks that coders are using very effectively to manage large and complex code bases can be applied here.
And they've got standards of rigor, by the way, far higher than academics, right?
Because with enterprise-level code, the tolerance for failure is pretty low.
So I sort of believe that applies to research, and you can basically use the same approach.
So that's what I've done.
So I've got strong concerns at the same time.
Despite being an enthusiast, I think I really, totally agree with your opinion piece against frictionless AI, because, like, I feel like Boromir with the ring in Lord of the Rings.
It's perilous.
But I mean, speaking from personal experience, and I can forward you a draft that I did kind of as an exercise.
I thought, well, I've got a concept.
I've got, you know... we had some data and I had an idea for a paper, a pretty straight-down-the-line empirical paper in my field. I wasn't going into crazy territory, but I thought it was an interesting exercise.
I won't touch this myself.
I won't write a word.
I won't go and get an article or anything like that.
I won't write any of the code.
I'll get it to do everything.
But, you know, as you spoke about in a highly iterative fashion, a high level of engagement.
It's basically like having a research assistant or a bunch of little, a bunch of research assistants that you're reviewing and instructing and all that stuff.
And also, Matt, I think you should mention at that point that also you provided it with a large corpus of your writing and material in order to better represent the way that you write, right?
Very strict instructions around how to write.
And also, I think, you know, in terms of the skill issue, a lot of people's dissatisfaction with what AI does is because they are using it very poorly.
Like, you would not jump straight from, okay, here are my results, to...
I'm just going to start writing the paper.
No, right?
You collate the literature and you take notes on the literature first, and then you do some targeted syntheses on some threads that will feed into your paper.
And then you create a structure, you know, the basic structure of what you want the introduction to look like, and the things you should cover in the discussion.
And then you move on to actually drafting.
So, by breaking it down into those modules and supervising each of them, you get incredibly good results.
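To illustrate the modularity being described here, a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for whatever model API or agent harness you actually use, and the prompts are illustrative only; the point is that each stage's output gets reviewed before it feeds the next.

```python
# Sketch of a staged, supervised drafting workflow: synthesis -> outline ->
# draft, with a human checkpoint between stages instead of one giant prompt.
from typing import Callable

def staged_draft(
    call_llm: Callable[[str], str],  # hypothetical: takes a prompt, returns text
    literature_notes: str,
    results_summary: str,
    style_guide: str,
) -> dict[str, str]:
    stages: dict[str, str] = {}

    # Stage 1: targeted synthesis of the threads that will feed the paper.
    stages["synthesis"] = call_llm(
        "Synthesize the key threads in these literature notes that bear on "
        f"our research question:\n{literature_notes}"
    )

    # Stage 2: basic structure for the introduction and discussion.
    stages["outline"] = call_llm(
        "Propose a section-by-section outline for an empirical paper, given "
        f"this synthesis:\n{stages['synthesis']}\n"
        f"and these results:\n{results_summary}"
    )

    # Stage 3: the draft itself, constrained by the author's own style guide.
    stages["draft"] = call_llm(
        f"Write a first draft following this outline:\n{stages['outline']}\n"
        f"Report these results:\n{results_summary}\n"
        f"Match this style guide:\n{style_guide}"
    )
    return stages  # each stage is reviewed and edited before moving on
```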
So, yeah, I could forward you this draft that I created.
I mean, and then, of course, I was actually pretty happy with it.
Like, I was like, I don't think I could have done better, frankly.
But, you know, I still have my opinions and there were some little subtle things.
So I certainly will go in and give it a close edit.
In fact, I already have, and I've circulated it to co-authors and stuff.
But I see no reason why.
Yeah.
Yeah.
So I'm, I'm completely with you 100%.
I've done something very similar.
I fed Claude code every single one of my papers where I was first author and I first asked him to develop a style guide.
And I, that was such an interesting exercise because I'm like, I do that.
Like, and it was like five or six pages of like how I write and my thought processes and like the kinds of sentences and varieties of sentences.
And it was amazing.
And yeah, so you know, doing that.
And then, you know, sometimes what I'll do is I'll just, I'll speak into it.
I'll speak for five minutes, like a word salad of just the thoughts in my head, and then, with my style guide, it's like: go.
And it'll be terrible at first, but then iterate many, many, many times.
And I have no problem with that being me.
It's my style.
They're my ideas.
And even, like, the arguments are mine, but the actual words that populate the arguments might not be mine exactly.
But like, is that less me?
I mean, does it matter that I pick that word versus, you know, another word was picked?
But the ideas are mine.
But this, what we're talking about, is highly, highly controversial among non-scientists, I think.
Yeah.
Yeah.
And I know you have points about this that you want to make, but Mickey, related to what you said about effort justification, I know, especially in arts and humanities quarters, that, you know, what we just discussed is probably a horror for them, right?
That we're removing the soul.
I had someone online call me vulgar.
I said, why? What's the problem?
What?
What's the problem with you?
Like, literally, I put, maybe we can talk about it a little bit what I posted, but I posted something on social media that was crafted by AI and me.
It wasn't like all AI.
I went in there and edited and blah, blah, blah.
I had my style, blah, blah, blah.
It's just stupid social media.
No one should put too much effort into social media.
If you are, you're not living your life correctly.
Well, I'm not saying it.
So, so I did that.
And then I had, you know, lots of responses, but one was like, oh, you used AI to write this.
And I'm like, did the watermark on the image give it away?
Like, I'm not hiding something here.
And then I said, and I literally said, why?
Why is that a problem?
And the person was, it's vulgar and disgusting.
And I'm like, wow, really, really moral words here to describe an act of communication.
Isn't an act of communication literally trying to convey an idea from one person's head to another person's head?
But I think for writers, for creative types, it is... it's something, like you said, something ineffable.
It's something magical, pure, sacred.
Writing is sacred.
Yeah.
And, you know, Mickey does.
So the thing I wanted to connect it to there is that I see a lot of people that are, you know, very concerned about, you know, AI can do this better than me or is going to take these positions.
But like just my personality or whatever, I don't find it threatening the thought that like AI could do statistics better than me, knows more about psychology than me.
And maybe in time, you know, with a lot of effort from the AI, it can write papers better than me.
Like I entirely anticipate that that is what's going to happen.
But like Matt's talking about, I keep seeing that, you know, there are still clear roles for humans in this, kind of like higher-level roles where we can do more, produce better research, and express things that we want to say more clearly, where before we would have gotten too tired and just given a one-line piece of feedback to a colleague or a student or this kind of thing.
But the analogy for me is, I've taken up bouldering, like many middle-aged men, and I climb up walls with my son, right? And I'm getting better at it, but I'm never going to be the best at it in any way, shape, or form.
I'm quite clear about that.
But, you know, there's progress, and there are also all the things that you're talking about with human motivation. Like, you know, people don't like falling off.
They like finishing.
But you have to fall off and fail in order to get better.
And it's like these quirks of human psychology: you know, when you have a little progression system, you enjoy going up it.
But when I see athletes out there at the Olympics, or in the gym, just many other people that are much better than me, younger than me and better, older than me and better, it doesn't then make me go...
Well, what's the point?
Like, what's the point of me doing this?
Other people are doing it better.
Or if I saw a robot, like, scramble up the wall and, you know, finish the thing.
It wouldn't make me go, well, there's no reason for me to do that, because, for me, the joy of doing it is, you know, it's a physical activity and all that, but also there's a challenge, and I do it, and I get better.
So I kind of feel like that same thing applies when it comes to research and writing and so on.
If AI becomes better than me, and it already is in so many aspects, I just kind of accept it.
Well, you know, I'm not the best at anything in the world and I never expected to be, so I don't find that threatening, but I do think that that reaction is not the stance most people take.
And obviously it's different when it's a recreational activity versus when it's your job, right? There's a distinction there.
But I don't think that's the only thing at play, because, like you said, and we're going to turn to talk about the moralizing topic, there's a fundamental kind of golden barrier, a soul to human-produced material, such that AI, even if it creates something that's indistinguishable if you don't know the process...
It lacks that essence.
And I view that as, you know, a kind of anthropocentrism that we have: we are social primates and we think humans are the most important, you know, the special ingredient, because we're humans.
And yeah, that's the way that I think about it.
I mean, since when did writing, a relatively new invention, become, like, the sine qua non of humanity?
Same thing with art.
Art's been around for a lot longer than writing.
But like, I think artists, writers, they attach, I think, attach too much meaning to this.
I mean, don't get me wrong, I love beautiful writing.
I love art.
I'm glad it's in the world.
But if I, not that I go to clubs or dancing these days, but I used to when I was younger... if I was at a club and it was a tune that made me move, I'd move.
And if I found out it was created by an AI, it wouldn't matter at all.
I'm like, it makes me move and makes me feel.
But I think for some people, they would enjoy the music less as soon as they found out that it wasn't created by a human.
And I don't, that doesn't make sense to me.
Yeah.
Yeah.
Actually, this is, this is a very interesting point.
But first, I have to say one thing because in my last little speech, I definitely gave the booster, you know, rampant AI perspective.
And I didn't get to the perilous bit, I guess, right?
Which is aligned with the argument you were making in that article, right?
The example I gave there is something that is well within my skill set, something that I've verified and supervised at every step along the way.
And I feel pretty comfortable with that.
Where, of course, it gets dangerous is when people are operating slightly outside their skill set.
The AI is supplementing beyond what they fully understand, and therefore they are trusting what's being done.
And I've seen examples of stuff that my colleagues have sent me that they've done with AI help to do stuff that they weren't super confident with.
And there were major problems there that neither they nor the AI figured out.
And the other aspect is the developmental aspect, which you already talked about, which is it's different when you're a 50-something professor like us, right?
We're not... like, learning new things isn't top of the list anymore.
And in Chris's case, I mean, it's great for us.
Hopefully you're...
He's a child.
Chris is a child.
Yeah, he's a mere child.
A mere child.
I mean, but it's okay, maybe, right?
If I didn't learn new things in creating this paper.
I'm more in the producing sort of stage of life.
And it's just a very different situation for, I'm imagining, PhD students and early career researchers, and the calculus can look very different.
Now, the final thing I'll say, getting back to your point: it's super interesting, this ineffable human magic that goes into art.
And you see this in the vastly different reactions amongst tech people like coders or mathematicians who have pretty much embraced the contributions of AIs to their work.
It's a very different response amongst the humanities generally.
And in particular, at the creative end of the spectrum, the artistic end of the spectrum.
And one argument you will see regularly on the forums is that art is primarily about communication between the artist, a conscious human being, and the recipient.
And actually, I'm a bit of an art geek.
I've got wanky art books on my shelves from art galleries.
I can attest.
I'm not a blind human being.
I'm a total snob.
But, you know, I'm into that shit.
And so I've read a bunch of, you know, well, what is art?
You know, that kind of stuff, you know?
And I thought, actually, you know, I've seen so many quotes from artists saying, don't ask me what I'm trying to say with my art.
I don't know.
The painting or whatever speaks for itself.
Everyone's going to look at it and get something from it.
And then I checked the different sort of theoretical approaches to art.
And the only person that I could find who actually said something like that, that it's really about the communication, I think, was Tolstoy, who apparently argued this once.
But he's in the massive minority.
The broad consensus of the 20th century was a very different sort of conception of art.
But I just thought it was interesting that that argument has been picked up, if not created, because I don't think they got it from Tolstoy.
I think it was reinvented precisely to sort of redefine what art really is in order to rationalize a gut feeling.
So as you can see, I've brought this back from my own little personal speech to the topic of your other paper there, Mickey.
Right.
It's almost as if you've been podcasting for a while and you got the segues down.
Except for the commentary, it worked well.
I've increased friction with Matt.
So he's improved his podcasting skills.
I am the friction.
Right.
So maybe I'll just jump into this other paper.
So I have a paper that, right now, is just a preprint.
It's called The Moralization of Artificial Intelligence, and it was written with a group of students.
And they deserve all the credit.
And I deserve all the blame for mischaracterizing the paper.
But the authors are: my PhD student, Victoria Oldemburgo de Mello, and then Eloise Cote, Rimayad, and then I believe it's Yoel Inbar again, Jason Plaks, and myself.
And we just submitted the paper.
And I decided to, well, as I do for every paper I put out, publicize it on social media.
And these days, I'll just have AI write something for me, again in my voice.
I feed it the paper.
And I'm not going to spend too much time, like an hour, crafting a tweet thread.
Like, I just don't have the time for that or the interest.
So I had it write it.
And then I also used this tool that is so cool.
If you haven't used it yet, it's called NotebookLM.
It's one of Google's Gemini products.
And I think it went a bit viral online about a year ago because you can make podcasts from your paper, with two male podcast hosts, of course, always talking to each other about the paper.
So it's kind of fun.
But it has an infographic button.
And I was blown away: it's literally the paper, you press a button, and it summarized the paper better than the graphic designers that I hire to make my papers come to life.
Like I've done this for other papers and I've always been like, yeah, it's good, not great.
This was far, far better.
It did make a few mistakes, but anyways, I put this out there.
And I should also say that I was being provocative.
I was not simply just plainly describing a paper.
I was trying to poke and I was judging the moralizers.
But essentially, we have this paper with, let's say, three groups of studies.
The first group of studies is a linguistic analysis of English-language media headlines from around the world, from 2018 to 2024.
And from this, you can get various databases to understand what the headlines are about.
And when you look up AI headlines, we coded the extent to which they are moralized, using, you know, language about prohibition, language about good and bad, prescriptive language, moral outrage, those kinds of things.
And what we find is that, in fact, it's increased quite a bit and it peaked.
The moralization of AI headlines peaked when DALL-E was released.
So not ChatGPT, but DALL-E, the one that creates art.
So I think that's quite interesting.
ChatGPT's release was also a peak in moralization, but not quite as high as with DALL-E.
So it does seem that people are using moral language in headlines to talk about AI more than vaccines, which I think many of us see as to some extent moral, either pro or against, by the way.
So for the anti-vax crowd, which I think you guys are quite enamored with, they moralize not vaccinating, that the vaccines are evil, I suppose.
And then you have other people saying, no, it's good.
You should use it.
It's good for public health, blah, blah.
It's more moralized than COVID was during the pandemic.
So it's a moralized topic, not nearly as moralized as abortion.
So, you know, just to give you the range of what's moralized.
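To make the headline-coding idea concrete, here is a minimal sketch of one common dictionary-based way to score moral language in text. The word list, the scoring rule, and the example headlines are invented for illustration; the paper's actual lexicon and method may differ.

```python
# Minimal sketch of dictionary-based moral-language scoring of headlines.
# The lexicon and scoring rule below are illustrative placeholders, not
# the measure used in the actual paper.

MORAL_TERMS = {
    "should", "must", "ought", "ban", "forbid", "wrong", "evil",
    "harm", "outrage", "shame", "duty", "immoral", "unethical",
}

def moralization_score(headline: str) -> float:
    """Return the fraction of words in a headline found in the moral lexicon."""
    words = [w.strip(".,!?\"'").lower() for w in headline.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MORAL_TERMS)
    return hits / len(words)

# Hypothetical examples: a neutral headline vs. a prescriptive, outraged one.
for h in ["AI art tools launch new features",
          "Why AI art is wrong and we must ban it"]:
    print(f"{moralization_score(h):.2f}  {h}")
```

Averaging such scores over all headlines mentioning a topic (AI, vaccines, abortion) gives the kind of between-topic comparison described above.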
And maybe I should say what I mean by moralization.
Moralization is when an attitude becomes prescriptive, like you should do something, you should not do something.
It's highly emotional, especially when you violate a specific norm.
And a hallmark of moralization is there is something called consequence insensitivity.
So if someone is opposed to, let's say, AI, even if you told them, imagine we mitigated all the risks and we only have benefits, would you still oppose?
If they still oppose, that would be what's called consequence insensitivity.
And another hallmark of moralization is magnitude insensitivity.
So a small amount of it is just as bad as a large amount of it.
Okay.
So this is what we mean by moralization.
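As a toy illustration of how those two hallmarks might be turned into a classification rule, here is a sketch; the item wordings, the 1-to-7 scales, and the cutoff are all invented here, not the paper's actual survey measures.

```python
# Toy sketch: flag an opposer's attitude as moralized from two survey items
# capturing the hallmarks described above. Scales and cutoff are invented.

from dataclasses import dataclass

@dataclass
class OpposerResponse:
    # 1-7 agreement: "I would still oppose it even if all risks were removed."
    consequence_insensitivity: int
    # 1-7 agreement: "Even a small amount of this technology is unacceptable."
    magnitude_insensitivity: int

def is_moral_opposition(r: OpposerResponse, cutoff: int = 5) -> bool:
    """Count an opposer as a moral opposer when both hallmark items are high."""
    return (r.consequence_insensitivity >= cutoff
            and r.magnitude_insensitivity >= cutoff)

print(is_moral_opposition(OpposerResponse(7, 6)))  # True: opposes regardless
print(is_moral_opposition(OpposerResponse(3, 2)))  # False: cost-benefit opposition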
Just a point of clarification here, Mickey.
Just a quick one.
So if a topic is moralized or not, my understanding is it's kind of orthogonal to whether or not the position is right or wrong, right?
Like it's like just because an issue is moralized doesn't mean it's not something you should care about, that it's not important.
Correct.
Correct.
So, I mean, an object that is not moralized would be a snow tire on a car, or something you're familiar with, I think, Matt, in Australia.
But so there could be, you know, good qualities of the snow tire.
There could be negative qualities of the snow tire, but no one I know of moralizes snow tires, right?
They're not, well, I suppose some people might be.
You ought to use them because they could save lives.
And if you don't, that could lead to some deaths.
But I've never seen the moralization of snow tires.
But so you're right.
It's orthogonal to whether it's good or bad.
It's just that it becomes somehow prescriptive and moral outrage follows when certain rules are violated.
All right.
So we find evidence that at least in headlines, this is the case.
Then the next bucket of studies is a series of attitude measures we got from people.
And we gave people examples of various kinds of AI applications.
So a simple chatbot you might have on a store's website, AI used for legal decision making, AI art, and then companion AI.
And then another study, we used many, many other kind of AI applications.
And then we asked people, are they, you know, opposed to the technology?
Are they neutral?
Are they for the technology?
And the truth is, most people are positive about all kinds of AI technologies.
There's variability.
So the one where there's the most pushback is companion AIs, which, you know, from the top of the show, I think we understand why it's a bit more controversial.
The least controversial is a little chat bot as part of a store's website.
And then what we followed up with, among those who oppose a technology, we queried the extent to which their opposition was moral.
And the way we got that was by giving them, you know, we measured the extent to which they're consequence sensitive.
We measured the extent to which they are magnitude sensitive.
Another question was whether they would be okay with it if, you know, everyone else in their community used it, et cetera.
And what we found, very interesting at least for us, is that a majority of opposers of all the technologies, except for chatbots, which were the least controversial, opposed them on moral grounds.
They thought it should not be used and no amount of it is acceptable.
I don't care how safe it is.
It should not be used.
And then we also looked at the reasons people gave for opposing.
And what was interesting there is that despite them clearly moralizing the object, moralizing AI, saying it should not be used, when we asked them why, they didn't say stuff like I saw online, like, oh, this is just, you know, it's sacred.
It's just bad.
They would use mundane reasons.
Oh, it's going to lead to job losses, or it's not very good, or it's not reliable, or it lies.
And this is also another characteristic of a moralized attitude.
The gut feeling comes first, and then they populate the gut feeling with rational reasons to justify their gut intuitions.
Can I, can I check something though?
How can we tell that the flow of causality isn't the other way?
So in other words, I've come across all of the reasons, the mundane reasons for disliking AI, and then those have kind of coalesced into a general thing.
Like I'm thinking of, like, I'm trying to compare it to myself, say, on climate change, right?
I mean, I think you found that that's a highly moralized issue, right?
And I would say, reflecting on myself, yes, I'm prescriptive and I'm moralizing and I'm talking in terms of oughts and shoulds and all of that stuff.
But at least I believe that it sort of comes through the confluence of all of the, you know, practical reasons.
Yeah.
Yeah.
So I would say that's probably, I imagine, the route by which a lot of people end up moralizing various kinds of things, especially a new technology like AI.
So it might be observing evidence or observing discourse around something.
So I think the discourse around AI, as we highlighted with the media headlines, has generally been moral and quite negative.
And then you form an attitude that way.
And I think that's perfectly rational and perfectly fine.
But what if, actually, here's a great example.
You talked about the environment.
I too care about the environment.
I think it's important.
I'm not sure I moralize it, but I might.
And one reason that some people say they are opposed to AI is that they say it leads to environmental degradation.
It's an environmental catastrophe.
It uses an inordinate amount of water and an inordinate amount of energy and electricity, and it's just a drain.
But it turns out this is not true.
It turns out that with the data on water use, for example, it's not clear to me how it happened.
One explanation is that some popularizers of the water usage data point literally made an error, a three-orders-of-magnitude error, a massive error in terms of water use.
And it turns out it doesn't use that much water at all.
I saw someone on social media saying that all the water AI uses in the world is, you know, essentially like three large farms in the U.S.
I mean, farms use water.
They're water intensive, but we don't look at those big farms and say they're water hogs and therefore we shouldn't be farming.
And the same thing with electrical use.
It does use electricity.
Absolutely it does.
But so does us talking right now on this computer.
After this, I'm going to go turn on the television and watch hockey because I'm a good Canadian.
That too will consume energy.
It turns out that my big television behind me uses far more electricity than all the prompting I did today.
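The television comparison is easy to sanity-check with back-of-envelope arithmetic. The numbers below, a roughly 100-watt television and a commonly circulated estimate of roughly 0.3 watt-hours per chatbot query, are rough public figures assumed for illustration, not measurements from the episode.

```python
# Back-of-envelope comparison: one evening of TV vs. chatbot prompting.
# Both figures are rough public estimates, used only for illustration.

tv_power_watts = 100          # a typical large LED television
hockey_game_hours = 3         # one evening of hockey
tv_energy_wh = tv_power_watts * hockey_game_hours   # = 300 Wh

energy_per_query_wh = 0.3     # a commonly cited per-query estimate

queries = tv_energy_wh / energy_per_query_wh
print(f"One hockey game uses about as much electricity as {queries:.0f} queries")
```

On those assumptions, one televised game is on the order of a thousand chatbot queries, which is the shape of the claim being made here.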
So now, if I give you that evidence, you should be sensitive to that evidence.
Yeah.
And you should maybe change your mind, well, you should have to change your mind if you believe the evidence about its environmental impact, you know, its water impact.
And maybe that should even lead you to be like, oh, AI maybe is not so bad.
But if you go from there to, well, okay, the water's not so bad and the electricity's not so bad, but it's the job displacement.
You're like, well, actually, economists are saying all these new industries are going to emerge and maybe they'll create a whole new set of jobs.
What do you think about it then?
Like, well, no, it's, you know, and it keeps going on and on and on.
And this is what Jonathan Haidt calls moral dumbfounding.
We have this kind of idea, this intuition, and you could come by it honestly, like you did with the environment.
But if you are stuck to it despite what the evidence tells you, and that works both ways, by the way.
So if we're pro, and I would consider myself, you know, pro-AI.
If I start seeing evidence that, in fact, no, actually, it is really bad for the environment, and it is leading to all this displacement, and it's really bad, and it's only three big companies that own it, and there are all these externalities, then I should update my own opinions of it too.
But it's the insensitivity to evidence that to me is the problem.
And it seems like that's happening with some people's attitudes towards AI.
And I don't think that's great for the discourse.
I also don't think it's great for us, like, societally, because I think AI could possibly help save lives.
It could help identify diseases.
It could help make diagnoses.
It could help people who are wrongfully in prison, you know, be released.
Like there's all these possibilities.
There's also negative possibilities.
But if we moralize it as forbidden, as something that can't happen, you can't even have an intellectual discussion about its use.
So I think it's a danger that it's moralized.
Mickey, I read this study and enjoyed it.
And I have some kind of reviewer-two-type questions about it, but I also have one on that specific thing around, you know, opposition.
And obviously, you know, if you look at Bluesky or you look in academia in general, depending on discipline, yes, there is a more negative skew, right, to the attitudes.
But I did note, when I was looking at your paper, I was trying to see if this was there.
And when I looked at the factors, where you look at a whole bunch of things, you know, moral foundations and also social ideology, economic ideology and stuff, it didn't seem like there was a very strong signal in most of those factors.
Like the two things that you highlighted in the paper, correct me if I'm wrong, are age and familiarity, right?
Aside from, like, overall AI aversion, right?
Self-reported aversion, which is obviously going to be related to it.
But is that the case?
Because Matt and I were talking about it and we were saying, you know, I wonder if you broke it down by education, but no, that wouldn't probably work because, you know, people are educated at different times across a whole bunch of different things.
But if you broke it down by discipline, I know this is, like, getting into very small subgroups, but there probably would be a clear distinction.
But yeah, so I was interested, like, there's a lot of recent discussion that the left in particular has an issue with AI.
This appears, from my, you know, social network and usage of Bluesky and Twitter and stuff, to be the case.
But your data, which is more representative, doesn't seem to show that strong a signal for ideology as a factor, for example.
Yeah.
I mean, if anything, we find not a very large effect, but we find a small effect such that it's conservatives that seem to moralize it a bit more.
But I agree with you, like in academia, clearly in the humanities, it's verboten.
But I think I got a bit of a taste for what's happening.
So maybe I'll give a kind of take a step back a little bit.
So I mentioned how I wrote this social media post.
And I mean, I think it went viral for academics.
It was viewed like nearly 600,000 times.
I put it up, and the next morning I woke up like, whoa, people really didn't like that post.
Although it wasn't ratioed, there were a lot more likes than comments, but lots of comments.
And the tone of them was angry, very, very angry and upset.
And in fact, I believe I've got so many case anecdotes now for talks on moralization in action, because literally, especially on Twitter, which I thought was dead, I had someone in Spanish saying people like me should be, you know, put in front of the guillotine.
I had someone, I got to say, I love this.
They thought they were flaming you.
In the end, you had the last laugh.
That was providing grist for your research pipeline.
I thought it was hilarious.
I thought it was so funny that people were.
And also, I mean, Yoel, to bring him up again.
He's saying you were clearly trolling.
And I'm like, I don't know if I intended to, but clearly, yes, I was.
I wrote it for social media, with a social media image.
And I was like saying, all you moralizers, fuck off, essentially, is what I said.
But the response.
I did not say fuck off.
I just, like, described moral dumbfounding, which no one wants to hear, you know, especially if you're an opposer, that that's what you're doing.
But I definitely got lots of great stimuli now.
But to get to your question, so I posted on a bunch of social media.
So I posted it, and it got no traction on Bluesky.
I thought there I was going to get killed.
I thought they were going to kill me there, but I got barely anything.
It was all, it was Twitter.
And then I also posted on, you know, Substack, which has got a bit of a social media thing now.
Yes.
But that is actually quite an interesting space, because it's writers and creative types, a lot of academics, a lot of philosophers, at least in my corner of Substack.
So pretty, I would say, intelligent posts.
And the responses from Twitter were quite different than the responses from Substack.
So the responses from Twitter were way more angry, way more unhinged.
And I got a lot of, I'm not moralizing.
You know, here are all the reasons.
And on Substack, I got, I am moralizing.
And why aren't you?
Yeah, you know, a great response, at least an intellectually, internally consistent response, is: I moralize murder and I moralize AI.
And as we should.
I mean, I think it's fucked up, you know, to compare those two, but okay, yeah, at least you're consistent.
Now, I also got a bunch of responses from, I would say, more religious type people.
And their opposition was, again, religious in nature.
Like this is not for humans to be doing.
It's not our place.
So I think that's maybe why online we're only seeing the humanities-type folks, you know, in their responses.
But I think for folks who are more conservative, there's, like, you know, something about humans being special and sacred and created by God, and you're acting like a little god now.
So that's just forbidden in that way.
So I think that's why it's bimodal.
And it's also worth noting that overall, you know, in the results of the studies that you ran, there were more non-opponents for pretty much all usages.
So even though there are vocal opinions online and they definitely represent a segment of the population, it did seem like across the board there was overall non-opposition, if not, you know, positivity.
Yeah, no, agreed.
It's a minority of people that oppose.
And of the opposers, about half oppose on moral grounds.
So still, you know, I forget now the percentage of the total that are opposing, but I think it's 20 to 25%, depending on the application.
So I think most people are cool with it and maybe don't even think too much about it.
But those who do think about it and are opposed are quite angry.
I mean, stepping back a little bit, apart from whether AI is good or bad, it seems very understandable to me that there should be strong reactions on both sides, right?
On one hand, the positive reaction is understandable.
You know, anytime something comes along that makes our lives a lot easier and provides all of these amenities, there are going to be people that are very keen on the new technology.
But on the other hand, you know, humans are sort of conservative by nature.
Anything that is very new is potentially going to provoke strong negative reactions, especially something that is going to have such large ramifications for the economy.
And, you know, industrialization and the internet and the agricultural revolution, in hindsight, yes, all marvelously great things for the economy in the long run, but in many ways very disruptive and negatively viewed by many people affected at the time, for understandable reasons.
And then you have, I guess, the ego threat, I suppose, you know, in that we seem to be quite okay with the idea of machines automating things, being stronger than us and faster than us at doing things, right?
But especially for middle-class educated desk worker type people, an awful lot of our sense of self-worth is bound up in our cognitive contribution.
And so it feels entirely understandable to me that there should be an instinctive dislike of that specialness being taken away from us.
Do you have like further things that you would add to that list of, you know, you can understand why psychologically, why people might not like it?
Yeah, no, I think I, yeah, I think I do understand it.
I should clarify one thing, though.
So in one of our studies, I think it was study 2B or 2C, I forget now, we did include technologies that are new, but not AI.
And it's true that people generally oppose new technologies or have some issues with it, but there was something special about AI.
It wasn't simply that it's new, at least in this one analysis.
But nonetheless, your point, I think, is well taken.
Yeah, I think it really is threatening, especially for creatives who I think thought and still think that what they do is uniquely human, uniquely, also not just human, but them specifically.
And now with just a little bit of prompting, I can get a machine to sound like me.
So yeah, I could definitely understand why it's threatening, but I think we have to separate this guild mentality, because what's good for the individual producer might not be good for society.
And we have to separate those things.
So should I oppose AI?
Cause it might mean that there are fewer psychology professors.
No, I don't need to protect my job.
I mean, yes, I want to because I want to feed my family, but like as a profession, like if an AI could do what I'm doing, then it should do it.
Because I think what I do, I'm not just doing it for me, I'm doing it for society too.
And again, if AI could do it better, then it should.
And I should be a plumber.
Well, don't be so pessimistic, Mickey.
We'll just do more psychology.
Yeah, this is...
We've kind of talked about this, Matt, but like I think people underestimate the appetite for content and psychology studies.
And there's an endless appetite humans have for producing and consuming material.
Gurus, for example, are very human, but they're siphoning off a lot of attention and time across the world, right?
And I think humans in general, we're not super well optimized in how we devote our time, or, you know, we're not doing everything in human interaction in this well-managed and productive way either.
But I did want to add in, Matt, you did a good job there adding in a point of friction.
And one that we haven't addressed, but that I think a lot of people would raise, is the companies and the individuals who are leading AI, often when you see them talk or when you get behind-the-scenes information about the various power plays that are going on.
I mean, the most obvious example is what Elon Musk has done repeatedly to Grok, right?
Like made it MechaHitler temporarily, and then also made it say that he is the best at everything through prompt engineering or whatever.
But in that case, there is an issue that even if the technology is as good as we think it is, right, and has the potential to be as useful for humans as, like, us three might agree, aren't there very real moral issues about the various tech leaders and companies and how they are choosing to collaborate with the Trump White House, for example, right?
You know, recently you saw Anthropic getting into issues.
So could that be a partial point that like the moralization is there because AI companies are also involved with moral issues like autonomous drones or collaboration with totalitarian regimes or all this kind of thing?
Like, can you call it just moral dumbfounding?
It actually just is a moral issue.
So first, I mean, I think, yes, I agree.
There's some, you know, malfeasance with these companies, but, and I could be wrong, I think all of us use Google.
Google has Gemini.
Not PewDiePie, but yes.
And I do not think that anyone's raised the morality of Google search, but it's the same company.
They have, but the majority of people do not.
So I'm just touching on that.
So, I mean, to me, that's just cherry-picking.
You're like, okay.
But that being said, I would be very much in favor of, let's say, a network of universities putting their resources together and coming up with an open-source, transparent, and not-for-profit AI that we could all use and rely on.
I think the problem is it will never be as good as the products created by these big tech firms, because they have way more money.
But I think it'll be better for those of us who are studying it to use a technology that doesn't have this profit motive.
Like, you know, apparently, I think OpenAI is going to have ads coming soon.
I mean, that will kill, I think, the companion business, because if your companion is suddenly, hey, have you tried this product?
The illusion will go away.
It doesn't stop people getting very parasocially attached to Huberman and Lex.
So you might be surprised.
Like, oh, that's crazy.
Yeah.
AG1.
Yeah.
You're right.
You feel me?
Try this shampoo.
Right.
So, yeah.
I mean, I think I wonder if AI at some point is going to be just a public utility.
Yeah.
I think, I mean, we're getting into technological opinionating here, but why not?
We're allowed to, right?
That's our podcast.
I reckon there are positive signs that something like that could turn into a utility, with very reasonable and good open-source not-for-profit options being available.
And that's because, you know, at least I feel like I'm seeing a lot of convergence, not only amongst the top-tier frontier models, but also amongst other open-source labs, some of them in China, which has its own problems, of course.
And even, like, local models, much smaller ones, but you're seeing a general pattern of convergence, in that no one company has the secret sauce, you know, that isn't sort of more broadly known.
So I'm kind of a little bit optimistic about that.
I don't see a select few tech overlords necessarily owning it all.
Yeah.
And Mickey, before we finish with the paper, I also have just another reviewer-two point, if I may.
I noticed in study one, where you're looking at the moralization of headlines, that it looks to me like at the start point of the graph, the moralization is pretty close to what it is now.
Like, it's slightly higher, but it goes down right around 2019.
And then there are these points that you mark where DALL-E launches, and then ChatGPT, and it starts to go back up.
But is it an issue that it seems it was also highly moralized at least at the beginning of the data set?
So is there a chance that the pattern is fluctuating, but AI is in the category of moralized topics in general?
Yeah.
So, I mean, I also noticed that. I think the issue, essentially pre-2020, is there just weren't that many headlines.
So it's just the data are quite noisy.
Ideally, we'd have some good error bars or confidence intervals to show you how noisy it is.
And then I think starting in 2020, we have way more headlines.
But you're right.
Regardless, it's noisy, the centroid is still pretty high.
Because each dot, I believe, is a headline.
Or maybe it's a group of headlines.
I forget it now.
But you're right.
I mean, it's not clear.
But that's why, actually, the timeline was a little bit less interesting to me than just the comparison across different technologies.
And that's why we needed various benchmarks because everything to some extent is going to have some of this moral language.
It's more of the comparison.
And I guess we could see what will happen in like 10, 20 years, if we'll continue or what.
Yeah.
Yeah.
If we'll still be around.
Yeah, we'll be.
AI is going to, you know, look after our health.
Well, actually, we heard from a guru yesterday, Mickey, that there's this thing called integral theory, from Ken Wilber.
He's a guru from the 70s who still exists.
And he was saying the AI companies are all working with him to integrate his theories.
So, you know, we might be heading for the integral Age of Aquarius from the AI companies, if they continue to work with the gurus, you know, like this.
All right.
Yeah.
I'm looking forward to it.
Yeah.
And one of the things I'll just mention in closing is, like, we do see AI being badly abused and misused by the gurus daily, like Eric Weinstein having, you know, chats with Grok where he's using it to prove that all of his ideas are correct.
Or you see on Twitter people generating all these terrible images of political opponents, Trump putting out the video, right, of him dropping shit on protesters and so on.
So like, it's not like we're missing that the technology is used for negative ends as well.
Like, of course.
Yeah.
My thing is it's technology.
It can be used for good.
It can be used for bad.
But the technology itself is neutral, I think.
Yeah.
But I think for some people, it's non-neutral.
It's vulgar.
Like, I love that one, just because, like...
I'm like, what makes it vulgar?
Like, I don't get it, but I think it's a gut feeling.
It's disgusting.
Made by a clanker.
Goddamn clanker.
Well, yeah.
Well, on one hand, thou shalt not create images in the shape of God or whatever, but I'm also very sympathetic, because there's the perspective of people like ourselves who are using it day to day to achieve things that we think are very useful and very important and very good.
It's a very different perspective if you're not doing that.
And in which case, your principal exposure to it is going to be through the discourse.
Yeah.
And in the discourse, there's, like, slop of various kinds.
And you really are only going to be seeing the negative things, which are real as well.
So, you know, we can be sympathetic to all points of view and nobody has to write a big thread on Reddit about this episode.
Yeah, that's definitely.
Yeah, you don't need to.
We're very sympathetic and balanced.
That's the point to take away from this.
My students, I told them I was coming on the podcast to talk about this.
They're like, okay, Mickey, but just, like, describe the findings.
Don't judge the people who are doing the action.
You did a great job today, Mickey.
Yeah.
Yeah.
I didn't detect any note of your personal opinion.
So, yeah.
As always, it's been a pleasure.
We appreciate you taking the time and your students, the lead authors on the paper, collaborators, and yourself.
The research papers that are coming out about this topic are very interesting in and of themselves, regardless of what your position is, right?
So if you think us three guys are completely off our rockers and failing to represent the arguments well, you can take Mickey's empirical articles and use them, find the pieces of information that you can use to attack his point of view or ours.
So yeah, they're available for everyone.
Like you said, they're neutral.
The information is there and people can craft them into their own narratives as they see fit.
That's right.
Some people think they ought to be moralizing.
So this is just describing things they think should be happening.
Yeah.
Very good.
Well, and we all moralize about some things.
We all moralize a little bit.
You should be moralizing.
I moralize moralizers, those fuckers.
All right.
Well, Mickey, thanks so much for spending a couple of hours with us.
It's great to catch up again.
Yeah, always a pleasure.
And always, it's also great to see the, you know, the cyborg that you're becoming, Matt.
Oh, yeah.
More machine than man at this point.
You know, the last piece of AI news that I'll tell the listeners is, and nobody's ever commented on this, Mickey.
The software that I use to edit the podcast, which is me, okay?
I'm the editor, the audio editor.
It now has an AI function where it can regenerate audio when you make an error.
You know, like you say a word wrong and you want to redo it.
Oh, amazing.
Yeah.
So I don't need to get Matt, you know, to come and re-record it.
I just regenerate it.
And nobody has ever noticed when I've regenerated it, the words.
So it's already in the podcast.
I'm sorry.
That's it.
You're losing subscribers right now based on that comment.
Well, they had to listen for an hour and 40 minutes or so.
So it's probably only the die-hards left.
But yeah, that's fine, Mickey.
It was a pleasure as always.
You're very welcome, and I hope I get invited back.