Sam Harris and Nicholas Christakis examine technology's corrosive effects, citing exacerbated polarization, mental health crises, and a creeping surveillance state. They discuss their exits from Twitter amid rising toxicity and AI-generated misinformation, and debate Section 230 liability. Christakis offers a toy model in which obedient AI assistants teach children to be rude to humans, and speculates that intimate interactions with humanoid robots could either reveal psychopathic tendencies or force a return to social graces. Ultimately, the conversation suggests technology is reshaping human behavior and morality in unpredictable ways. [Automatically generated summary]
Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
I am here with Nicholas Christakis.
Nicholas, thanks for joining me again.
Sam, it's so good to see you again.
Yeah, great to see you.
Yeah, we don't see each other in person enough or even on the internet enough, but I always love talking to you.
So let's just jump right into it.
I'll remind people you are the director of the Human Nature Lab at Yale.
You are both an MD and a sociologist and have studied many interesting topics related to, I guess, how human beings and now technology affect one another.
And we have too much to talk about.
I think I want to start with a question of, I guess I just want your post-mortem on the present.
Like just this last decade, what has technology, specifically information technology, done to us?
Yeah, so I think for sure, I think we are going to see the other side of our present dilemma.
I think it is going to take half a generation to really be on the other side of it because I think we've dug ourselves into quite a hole.
I share the opinion, I suspect with you, and certainly with people like John Haidt and others, that the kind of technology that we've invented or the turns that our technology has taken, our communication technology has taken in the last 10 years, have so far been quite harmful to us, whatever other benefits they've had.
I think they've contributed to this polarization.
They've contributed to anomie.
They've contributed to some of the mental health crises we've had.
I think they've also led to a surveillance state, not just abroad, but shockingly in our own country where these technologies are being used in ways that I would regard as quasi-totalitarian or at least pose the threat of that.
I had a friend long ago.
I still have him.
He's still a friend of mine.
And years ago, he told me he didn't use credit cards and he refused to get a cell phone, and he was trying to be off the grid because he didn't want to be surveilled.
And I thought he was like a Luddite nut.
Yet now, I worry that my every move is being tracked by someone.
So, to the extent that you are arguing, and I think you are, that some of what ails us at present is due to some of these communication technologies and the ways they've been grafted onto very fundamental human desires and exploit those desires,
and to the extent that we grow as a society to cope with those threats, I think we will look back at this period as just that: one in which we yielded to, were adversely affected by, and ultimately, let's say, overcame some of these threats.
It's not dissimilar to how you and I remember when you couldn't swim in Boston Harbor, the Charles was polluted, the air was polluted, and we cleaned everything up, in some sense.
So maybe we'll clean everything up in that way, but it'll take some time.
So what is your personal engagement with social media these days?
How do you use it if you use it?
Well, I got very disgusted with Twitter, but I didn't abandon my account, because I didn't want anyone to squat on it.
The reason I went to Twitter was that I used it as a source of information; it was like access to experts in a way that was really, really helpful to me.
And a lot of the knowledge I was acquiring came from a list I curated of people with diverse expertise and beliefs, whom I followed.
And I really enjoyed it.
And then I felt like it wasn't appropriate for me to just take from the commons.
I had to give to the commons.
So I tried to generate content that would reflect my expertise or my ideas and be useful to others.
But in the last few years, I found it to be just incredibly toxic.
And the feed, even when I tried to follow only my own people, became full of garbage: a lot of trolling, a lot of mostly far-right conspiracy theories, also some left craziness, of course, too.
I just couldn't use it anymore.
So basically I stopped using Twitter and moved to Bluesky a couple of years ago, where, I mean, the politics are another issue, but in terms of the science, you know, I follow about 600 accounts, mostly scientists, and I get good scientific content and have, you know, reasonable interactions.
I have a tenth of the followers I used to have.
That's fine.
Facebook, I don't really use.
LinkedIn, I don't really use.
I just started a YouTube channel trying to advance the public understanding of science, called For the Love of Science, but I don't really know how to use YouTube.
So we just are posting videos, you know, once a week.
So I'm really just basically on Bluesky for science.
That's all I'm doing nowadays.
Well, I want to get back to the reputation of science and to your efforts on YouTube at the moment.
But just to take, again, social media and what it's doing to us, and the toxicity and conspiracism and trolling that you are familiar with and that everyone listening to this will be familiar with.
Do you have any sense of what the remedy is?
I mean, my personal remedy was to just delete my Twitter account and to now only in extremis look at a Twitter feed just because there's some breaking news that is best captured there.
But even that, Sam, do you remember?
Do you remember that guy who was an expert on military tires?
Do you remember that?
No.
I forget, I think it was when the Ukraine war started, and there was some guy who was an expert in the maintenance of military vehicles, and he sent out a long thread about how the trucks hadn't been moved around properly, the tires hadn't been rotated, and how all the tires were exploding.
I had no idea there was such a person.
And I read his whole thread and I was like, oh my God, it's so interesting.
All of that content, that expertise is, as far as I can tell, is gone from Twitter.
Has it been crowded out by AI slop, or how has it gone?
Well, first of all, whatever the algorithm is, I don't get that content.
The AI slop is a serious problem.
And my family teases me; I'm known to be particularly gullible.
And actually, my renarration of this is that I'm not stupid and naive and gullible.
I'm trusting.
You're a good person, in other words.
Exactly.
Exactly.
That's my story and I'm sticking with it.
But the thing is, somehow these algorithms figured out that I like to look at like baby elephants.
And initially, I got like real, I think, you know, like BBC photos of like baby elephants.
And then I think the algorithm started feeding me slop, like, you know, a hippopotamus or a crocodile attacks a baby elephant.
Yeah, yeah.
And it gets saved by a rhinoceros.
Yeah.
Yeah, exactly.
The mommy elephant comes and stomps on it, and the whole thing is totally, it's all fiction.
And initially I was like really taken in by this stuff.
So there's a ton of AI slop that's a problem.
I mean, it's just that it's useless, honestly, to me at least.
So I, I mean, I don't, I have nothing particularly good to say about the environment on Twitter right now.
And there's a multiplicity, you know, a profusion of problems from my perspective.
Plus, I wasn't so happy when I came to understand that all of our personal DMing and stuff on Twitter basically belongs to X and could be used to train AI algorithms and so on.
So none of that is appealing to me.
Well, I think as we're speaking, there's a lawsuit, I think it's the first of its kind against social media companies in California.
You mentioned John Haidt.
He's been obviously instrumental in bringing awareness to this issue, especially the harm done to teenagers by social media.
What is the path forward?
Do you think it's a successful series of lawsuits, a revocation of Section 230, just a virtuous cycle of social contagion where we all begin to change our minds at once and influence the norms around using social media?
Or is it just that AI slop itself will provide some cure, because with every video you see, your first question from here until the end of the world is, you know, is this even real?
And we will begin to no longer care what's being presented in these non-gatekept channels.
So I have a few things to say about that.
First of all, it's known, as you and everyone listening know, that anonymity contributes to a lot of the problems.
And, you know, this is why torturers used to wear masks, and why people would be disinhibited when they went to masked balls, for example, these fancy masked balls we imagine from hundreds of years ago that the aristocracy had.
You know, it's disinhibiting to hide your face, and this is also why people in mobs behave awfully.
They have a kind of practical anonymity, which is why you get riots.
It's a sort of well-understood process.
So I think that humans, of course, behave worse when they're anonymous or pseudonymous.
And now I have a hard time arguing with this.
My problem is that I think that in any entity where you can't be anonymous, behavior is going to be better.
On the other hand, I don't necessarily want to abolish anonymity either, because I think that's a tool for totalitarianism.
So I think there will be social media companies which afford people the opportunity to be non-anonymous, and where people then privilege non-anonymous accounts, which I think will help.
So I think tools to afford people the option and also to exploit non-anonymity will help.
So like the old blue check mark on Twitter was a good idea.
Yeah.
Another thing: you mentioned Section 230. I struggle with this as well, because on the one hand, I do think that Section 230 actually was crucial for the emergence of the internet.
I do think that there is an argument to be made that these social media companies are just carriers and shouldn't be responsible for their content.
On the other hand, I also think, you know, washing their hands of the content entirely doesn't make much sense either.
It allows them to sort of wink, wink, and just ignore horrible abuses taking place on their platform.
So I actually don't have an answer to that struggle either.
But what I do think is going to happen, just as you said, and maybe this will be accelerated by AI and AI slop, is that people will learn.
And I think ironically, we may have a kind of return to a privileging of reputable sources.
Like, you know, we've migrated so far away from, you know, the evening news with Dan Rather kind of thing to everyone is an expert, and, you know, there's all this kind of good stuff, but also crap, online.
I think, ironically, people may be willing to pay a bit more for reliability.
You may not believe it unless you read it in The Economist.
Then you'll believe it.
You're not going to believe whatever you see otherwise online.
So it may reprivilege sort of credibly real voices.
Yeah.
Yeah.
I know you've done some research of late on AI and how it changes not just human behavior with respect to technology or information sources, but behavior toward one another, right?
It alters the mechanics of human cooperation on some level.
Take that strand if you want.
But I mean just generally speaking.
What are your thoughts about AI and where all of this is headed for us?
So I want to tell a brief toy story, or toy model, or toy example, of the question you just put.
But before I tell that, I want to go on a slight digression.
Yeah, and, um, because I struggle a lot, as I suspect you do, with what is happening with these incredibly powerful tools that are being so rapidly developed in our society.
And there's this scene in the movie Fiddler on the Roof, where the protagonist, Tevye, a very poor milkman in the town of Anatevka around the time just before the Russian Revolution, goes to the town center, where a big argument is going on. Someone makes a point, and Tevye says, you're right. Someone makes the opposite point, and he says, you're right, too. And then someone says, Reb Tevye, they can't both be right, and he says, you're also right.
And this is how I feel when I listen to debates by experts on AI.
I listen to some computer scientists and some tech billionaires who talk about the amazing promise of AI, and how there will be some bumps but mostly it's going to be this extraordinary future, and that to oppose it is to be a Luddite, and I think, you're right.
And then I listen to other incredibly expert computer scientists and technologists who say the exact opposite. You know, I think I was at an event with Sam Altman a couple of years ago, or a year ago, actually, and he said that he thought there was something like a 2% human extinction risk from AI.
Yeah, I think, actually, I think it's higher coming from him.
I think his estimate was higher, but maybe he's recalibrated it in the interest of public relations.
But I think he was more like 20% at one point.
Yeah, but I mean, that's crazy to just not.
No, 2% is terrifying, but 20% is psychotic.
So you listen to those guys and you're like, well, they're also right.
Well, they can't both be right.
And, you know, that's also true.
So I have sort of stopped trying to form my own opinion, because I'm not so expert in this area. But I am expert in another area, which is related to this, which is the issue of how AI is going to change human behavior.
And here, just to preface one set of ideas, the kind of toy model that I like to throw out there to sort of help people fix ideas is imagine the manufacturer of an Alexa digital assistant.
The manufacturer of a digital assistant is very concerned with the human-machine interaction.
You would never buy an Alexa if every time you had to speak to it, you had to say, excuse me, Alexa, I'm very sorry to interrupt you.
If you don't mind, would you please tell me the weather tomorrow?
Right.
That would be an absurd level of politeness.
You'd never buy a machine like that.
You expect to be able to say, Alexa, weather, and it obediently responds.
And that's fine until you bring the machine into your home and your children in speaking to that machine learn to be rude.
And then they go to the playground and they are rude to other children.
So what we've been studying in my lab is human-human interactions in the presence of machines.
And specifically what we've been focusing on is little perturbations in the AI systems, in the machine systems that modify how the humans interact with each other.
And in fact, what we are working on is not so much super smart AI to replace human cognition, but dumb AI to supplement human interaction.
And because the humans are smart, you can think of the AI as a kind of catalyst, like platinum in an organic chemistry reaction, that just facilitates the interaction of humans and helps optimize them.
And we've done a broad set of experiments that have shown this is possible, that you can improve human collective and individual performance through the thoughtful injection of AI agents into social systems.
Have you done any research, or is there any research, on the first point you made, though, that this kind of coarse and instrumental use of AI bleeds through into human relations?
So that kids are actually less socially appropriate if they've been barking orders at their bots all day?
We haven't looked at that specifically.
Like that's just an example.
I think that work has been done.
And I think that work comports with my sort of hypothetical example.
Well, what would you imagine in the case of humanoid robots?
This is something that honestly I haven't spent that much time visualizing, but whenever I have spoken about it, I think we can stipulate that we will eventually get out of the uncanny valley and have robots that look, you know, if not perfectly human, you know, in some sense better than human, right?
They'll be perfect humanoids in some sense.
You know, when we want our AI shaped like that, we'll make it shaped like that.
What do you, I've spoken to Paul Bloom about this some years ago in response to the series Westworld.
We looked at that and we thought one piece of philosophy that was accomplished by that series is that it revealed that a place like Westworld probably couldn't exist because you'd really have to be a psychopath to go on vacation and rape perfect facsimiles of human women and girls and then come home and tell your friends what a good time you had raping and killing robots that were indistinguishable from humans.
And so unless, you know, I mean, maybe you could set up a theme park that would act like a bug light for psychopaths in that way, but I mean, just normal people would not want to have a perfectly seemingly veridical experience of being a moral monster.
And you would imagine some real contamination, both of how they felt about themselves and how other people saw them if we did that.
So just imagine we get to the place where we have, now we're talking to humanoid robots and making demands upon them.
I would imagine that our social graces will come creeping back in.
I mean, honestly, even just in typing instructions into an LLM, I find myself being inappropriately polite, right?
I'll use the word please, and I think that probably costs Sam Altman some number of dollars every time I do it.
How's that going to change us?
Well, believe it or not, first of all, I don't know the answer, I'm not 100% sure, but I can speculate along with you.
Believe it or not, this also is an old topic, and it actually came up prior to, well, certainly prior to, the modern instantiation of Westworld, which followed the old movie.
There's a book, I know it's over 20 years old now, called something like Love and Sex with Robots.
People were speculating about what it would mean in some futuristic world in which we had the capacity to have intimate relations with machines.
And there were two schools of thought on this.
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The Making Sense podcast is ad-free and relies entirely on listener support.