DarkHorse Podcast with Tristan Harris & Bret Weinstein
Tristan is the Co-Founder and President of the Center for Humane Technology, and Co-Host of the podcast "Your Undivided Attention." Find Tristan on Your Undivided Attention: https://www.humanetech.com/podcast. Find Tristan on Twitter: @tristanharris. Theme music: thank you to Martin Molin of Wintergatan for providing the rights to use their excellent music.
I have the distinct pleasure of sitting today with Tristan Harris, who is co-founder of the Center for Humane Technology and the host of the Center's podcast, Your Undivided Attention.
Tristan, welcome to Dark Horse.
It's great to see you, Bret.
Good to be here with you.
Yes, it's great to be here with you. Just so that people understand where we're coming from:
You and I have known each other since before either of our podcasts.
We're friends, and so we've had a lot of informal behind-the-scenes discussion about topics that I'm sure we will get to today.
You are in the Bay Area, am I correct?
Yep, in the San Francisco Bay Area, after a brief sojourn due to a climate outcome: our house burned down in the California wildfires in September.
So, anyway, that's just the reality of my situation.
I'm back in the Bay Area.
Yes, that sounds slightly beyond what we usually think of as a climate outcome.
Uh, you know, the earth is getting warmer, but it's rarely hot enough to catch your house on fire.
Jeez, your house burned down?
That is, um, shocking.
Well, the crazy thing, and we obviously don't have to go into this much, is that the house burned down two weeks after the release of The Social Dilemma, which was this global phenomenon.
So I actually had less than 24 hours between that happening and doing the Today Show at five in the morning, then continuing to do interviews and everything else that was happening, because the film was seen by something like 100 million people in 190 countries and in 30 languages.
So balancing all of that after the house burned down, being on the road, and having to work it all out was just a unique experience.
Yeah, that's remarkable.
I should say, I was going to mention your amazing Netflix documentary.
Anybody who has not seen it should absolutely seek it out and watch it.
It will tell you a lot about what's been on your mind and why.
I thought it did a very good job of putting flesh on bones that are sometimes hard to get people to see. Everyone has a vague sense of how big tech works and what influence it may be having on things like our collective sense-making, but very few people have any idea of the mechanistic underpinnings.
The documentary did a good job of outlining that.
You want to say anything about what the core message of the documentary is?
Yeah, the film is really a message from the people who were inside the technology companies: Justin Rosenstein, who co-invented the like button at Facebook; the person who brought the advertising business model into Facebook; early employees at Twitter, Facebook, YouTube, and so on, all saying that this business model is fundamentally misaligned with a healthy fabric of society.
It does that by explaining how the problems with the technology companies are really part of our economic system: an economic logic of infinite growth running on a finite substrate. Usually the substrate is the planet, and the result is climate change. In this case, we have an infinite-growth economic paradigm of big technology companies running on a finite substrate of human attention.
And I think the film is really summarized in Justin Rosenstein's line at the end: so long as a whale is worth more dead, as whale meat, than alive, and a tree is worth more as two-by-fours than as a tree, then in our attention economy human beings are worth more to technology companies as dead slabs of predictable behavior than as living, breathing, choice-making, meaning-making, semi-free-willed humans. That logic is really the core of where you'll see lots of problems: shortening of attention spans, political polarization, things like this.
So, of course, as an evolutionary biologist, I see all of these things and I find no end of analogies to my home territory.
And I try not to drag people there more than necessary, because if it's not your home court, it adds an extra kind of step to one's logic.
But there's so much here.
You know, I'm used to situations in which a very small advantage over enough time results in the modification of creatures.
And what we have here, with respect to the attention economy and big tech algorithms and so on, is not-so-subtle influences operating over a short time, with completely predictable results. And the answer is, it doesn't really take very much. If the signal is consistent, it doesn't take very much to drag belief, for example our shared narrative, which is so necessary to our functioning as a society, into territory where it serves a private interest at the expense of the public.
And I think if people realized how delicate the assumptions are on which normal functioning depends, they would be less shocked by how distorted things have become in light of these novel influences, which, even if they were well intentioned when originally constructed, are simply incapable of maintaining the foundations of thought.
Yeah. I think the breakdown of our shared reality comes from a profit motive in which personalization is profitable.
This is really important and subtle to get; it's sort of obvious once you see it.
But giving each person their own Truman Show reality is more profitable, and better at keeping your attention glued, than creating shared realities that are not personalized.
A personalized Twitter, with a personalized, algorithmically curated news feed, is going to out-compete a chronological Twitter that just shows you tweets in the order they arrive.
So it really becomes a race to hyper-personalize, which is then a race to narrow each of our individual constructions of reality.
And I think that is actually the deepest harm that I like to focus on these days, because that's really what we're seeing. We saw it on January 6, we're seeing it in the country, we're seeing it around the world.
We do not have a shared basis of ground reality.
I know you talked to our mutual friend Daniel Schmachtenberger about some of these topics, but I think that is the kind of shared problem statement that we're now left with.
I sometimes think of it, Bret, to play with the kinds of metaphors that you and Heather talk about.
That we now have a few different problems.
We have a technological problem: how social media's business models, fundamental design, virality, and the hijacking of all of our different evolutionary impulses cause all this cultural fallout, these cultural externalities of shortened attention spans, mental health problems in kids, and so on.
And this breakdown of reality. But now we also have a cultural set of problems.
The problem has metastasized.
So now, even if you took the technology away, our minds are all running malware, and not that they weren't ever running malware before, but they're running much worse malware than they would ever have run.
And it's been exacerbated.
So I think of it kind of like we had the Zuckerberg Institute of Virology, and that institute was doing gain-of-function research on memetics, on what kinds of memetic viruses would work. They were literally having engineers tweak things like the newsfeed recommendations, which kinds of things would keep people clicking more, and those things started hyper-evolving down a certain pathway, in the same way that gain-of-function research works.
They were also tweaking the friend recommendation systems to say, oh, which friends could we recommend for you to add that would cause you to stay on the product longer and more friends like you, more like-minded friends, more QAnon friends if you start off in QAnon, more Wokistan friends if you start off in Wokistan, more Magistan friends if you start off in Magistan.
That will increase people's sort of tribal affiliations.
Then they were doing gain-of-function research on Facebook groups.
What groups could we recommend you to join?
And we now know from Facebook's own internal documents that 64% of the extremist groups that people joined were due to Facebook's own recommendation systems.
In other words, this was not a user who's going into Facebook and being like, what group, what sewing club, what book club do I want to join?
You know, what dark horse podcast discussion group do I want to join?
Instead, they're being recommended something.
And my friend Renée DiResta, who's a researcher on these topics, says it really perfectly.
When she was a new mom, she joined an organic baby food moms group, like where you make your own baby food.
It's sort of a natural thing you might want to do.
But what was the most recommended group, do you think, for the new organic baby food moms from Facebook's point of view?
What would keep users who looked like organic do-it-yourself baby food moms on the site the longest if you're just building a naive AI recommendation system?
I am frightened to guess for many reasons.
I'm going to resist guessing because I will tell you if I'm right here in a second.
You tell me what it was.
I mean, it's not maybe that surprising that it was more of the anti-vaccine moms groups, right?
That is exactly what I was going to guess.
I didn't want to say it because I didn't want to add fuel to the fire.
Right.
Right.
Yeah.
Well, so then once you joined that group, Facebook said, oh, well, what do the anti-vaccine moms groups tend to click on?
What keeps them around the most?
And it recommended Pizzagate and chemtrails groups and other things like that.
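To make the mechanism concrete, here is a minimal sketch of what a naive engagement-maximizing group recommender looks like. The function, names, and numbers are invented for illustration, not Facebook's actual system; the point is that the objective never asks whether a group is healthy.

```python
def recommend_groups(user_segment, candidate_groups, predicted_minutes, top_n=3):
    """Rank candidate groups purely by predicted extra time-on-site.

    predicted_minutes maps (user_segment, group) to a model's estimate of
    additional session minutes if a user in that segment joins the group.
    Nothing in this objective asks whether a group is healthy or extremist;
    whatever maximizes attention wins.
    """
    ranked = sorted(candidate_groups,
                    key=lambda g: predicted_minutes[(user_segment, g)],
                    reverse=True)
    return ranked[:top_n]

# If the model has learned that users who look like DIY-baby-food moms
# stay longest in anti-vaccine groups, that is what gets surfaced first:
predicted = {("diy_baby_food_moms", "sewing_club"): 4.0,
             ("diy_baby_food_moms", "anti_vaccine_moms"): 11.0,
             ("diy_baby_food_moms", "book_club"): 3.5}
print(recommend_groups("diy_baby_food_moms",
                       ["sewing_club", "anti_vaccine_moms", "book_club"],
                       predicted))
# ['anti_vaccine_moms', 'sewing_club', 'book_club']
```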
Now, when I say this, I might sound like a blue-church left person dismissing all of these as conspiracy theories. There's a pre/trans fallacy for each of these concepts, and I want to make sure that we're caveating that.
But when you understand that this gain-of-function research at the Zuckerberg Institute of Virology was hyper-accelerating these evolved pathways, then that memetic virus, that reality-dividing, extremifying virus, left the Zuckerberg Institute of Virology in Menlo Park, escaped into the world, and has actually shut down the global sense-making ecology.
Just as COVID has shut down the global economy, I think this memetic virus has shut down our ability to see reality in shared ways.
And I think that's really the problem: how do we deal not just with the pandemic, but with the reality-breakdown pandemic?
And how do we have not just reality-bridging conversations, but a real solution to that?
And I think we're all trying to figure that out.
All right.
I want to slow down a little bit because it happens that you have landed just like a millimeter away from the topic I was going to introduce here and get your thinking on it.
So I know we're very close to the same page here.
But one of the things that I think is most difficult to convey is that there is a natural synergistic relationship between human consciousness and creativity and evolutionary dynamics that we are unaware are taking place.
I think there's a dichotomy, a false one, where people expect AI to take off and come for us, but they also intuitively know that that can't be where we are, because all of the devices that are supposed to be intelligent are way stupider than they should be, right?
So we sort of know that we've got some time before AI gets to that place.
However, what we don't intuit is that humans can partner with semi-artificially intelligent, or narrowly artificially intelligent, or even just learning algorithms.
And they can do the part that the artificial entities can't do themselves.
Right?
So you have a bunch of people inside Facebook thinking about how to enhance Facebook's financial well-being.
Right?
Just increase shareholder value at this point.
And they can narrow the search space of possible functional strategies to a very small canvas.
And then things like A/B testing can do the heavy lifting. Which is, incidentally, why you've landed on the perfect analogy.
It is exactly gain-of-function research, because what gain-of-function research does, or at least the part of it called serial passaging, involves taking a natural entity, or a largely natural entity, or a composite of natural entities, and placing it in an environment where evolution can do the stuff we don't yet know how to do.
Right?
If we could do it directly, we would.
But we can't.
So what we do is we just put it through an accelerated process of evolution.
We basically reduce the noise and increase the signal, and then we take what comes out the other side and we discover what we didn't know enough to desire.
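A minimal sketch of that serial-passaging dynamic, with humans supplying the mutations and the A/B test supplying the selection; the setup and numbers are invented for illustration:

```python
import random

def serial_passage(seed_variant, mutate, engagement, generations=50, keep=5):
    """Iterated A/B selection as 'serial passaging' of product variants.

    Each generation, every surviving variant spawns a mutated copy (the
    small, human-designed tweaks), an experiment measures engagement, and
    only the stickiest variants pass to the next round. Nobody needs to
    understand why the winner works; selection finds it anyway.
    """
    pool = [seed_variant]
    for _ in range(generations):
        pool += [mutate(v) for v in pool]        # humans narrow the search space
        pool.sort(key=engagement, reverse=True)  # the A/B test does the selecting
        pool = pool[:keep]                       # only the stickiest survive
    return pool[0]

# Toy run: the 'variant' is one tunable parameter, and (unknown to anyone)
# engagement peaks at 7.3. Selection converges there with no understanding
# of why 7.3 works.
best = serial_passage(seed_variant=0.0,
                      mutate=lambda v: v + random.gauss(0, 0.5),
                      engagement=lambda v: -(v - 7.3) ** 2)
print(round(best, 1))  # approximately 7.3
```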
So that is exactly what will happen here.
But imagine, you know, you could have had a thousand Facebooks, right?
A thousand hopeful companies starting out trying to capture this space.
And selection would have found the ones that happened to have the right features to keep attention on their site, right? To direct people to stuff that would reinforce their biases, et cetera.
Those companies would have risen to the surface. Instead, we had a couple of Facebook-like entities, but we had intelligent people within them looking for mechanisms: how does Facebook get ahead of MySpace, right?
And that has been like a cyborg, where you have a partially undirected process of evolution and partially human creativity, fused together to discover defects in all of our cognition that allow them to steer us into behavior that we otherwise wouldn't engage in.
And I think the thing is to understand that this isn't wholly dependent on a haphazard evolutionary process, nor is it wholly dependent on a conspiracy in which people simply sideline human well-being and go for broke.
The two things can partner.
It would be hard to overrate the danger of that, and what you've been saying forever is, effectively, it's here, and we don't actually know the full range of hazards that emerge from it.
Yeah, that's exactly right.
I'm so excited to have this conversation with you, because I don't get to talk to evolutionary theorists where we can really go deep into each of the sub-aspects of this problem.
The first thing you talked about was this sort of projecting into the future.
We often look into the future and say, when is AI going to take over humanity?
And we think about the singularity.
You think about numbers like 2050, 2060.
You think about superintelligence, Nick Bostrom.
You think about AI taking our jobs.
But there's this much earlier point that you've made, I know, ages ago in your work.
I remember you and your brother Eric Weinstein both talking about this.
It's more like the point where technology doesn't overwhelm human strengths but undermines human evolutionary weaknesses.
And there's a sort of takeover point.
And by the way, that point for me is more informed by my training in magic.
Actually, since the house burned down, I've been buying back all my sort of magic supplies.
I have all these packages arriving with interesting magic goodies.
And the interesting thing about magic is it's based on an asymmetry of understanding, where the magician essentially does know something about an evolutionary weakness in your mind that you do not know about yourself.
And it's that asymmetry of information that allows the illusion to work.
And the problem is that technology has become this sort of 24/7, hyper-asymmetric magic trick, but we don't perceive it that way because we're enjoying so many things from it.
We enjoy social validation and approval.
We enjoy getting variable-schedule rewards, slot-machine-type things.
We enjoy photo filters that make us look more attractive briefly.
But these are all essentially exploits, maladaptive, depending on how you see it, exploits on key evolutionary weaknesses.
And it's really a race.
This is where the phrase "the race to the bottom of the brainstem" comes from: it's a race to the deepest evolutionary weaknesses. Or, as your brother has said, a race to create a backdoor to the human soul, to figure out and reverse-engineer whatever we would call the dimensionality of free will, and to turn more and more of those dimensions into predictable outcomes, meaning less and less free, and more and more commoditized dead slabs of human behavior.
It's really important to get this, because, like you said, it's not a conscious intention by Facebook or Twitter to ruin the world or to pursue greed at all costs.
I think a lot of this was incremental decisions in an evolutionary environment where they're competing for attention.
You know, a lot of our insights about this work come from knowing the people who built these products.
I was friends with the founders of Instagram.
I was at South by Southwest in 2009 when Facebook was competing with Twitter for who's going to create the newsfeed.
And they're both competing for that chronological kind of instant newsfeed of people posting things really often and having it feel more alive and more rapid.
And a newsfeed that's giving you new things constantly is going to out-compete a newsfeed that only has new things once every five hours, right?
Because there's a freshness to it.
So in each case, and I hope we go more into this, the race to capture attention led to all sorts of decisions being made that were based on the evolutionary fitness of attention-grabbing machines, but not for the evolutionary fitness of a societal cohesion or a societal intelligence.
Because what I really worry about is the net result of all of this: when you don't have a shared reality or an ability to agree on reality, or you have such emotional grievances and resentments, because you've seen infinite evidence of hypocrisy and awful things the other side has done, that you don't even want to have a conversation. So when you add this all together, the question I'm always asking is: what is an attention economy?
What is a digital environment?
What is the cyborg-like fusion, the human-machine interface, that leads to a smarter, democratic, plural, collective psyche that makes better and better choices together?
And I think because we have limited time on various problems, especially climate change, that's the real question that I kind of land on.
But we probably should reverse engineer a little bit more of how we got here with the evolutionary hacking of tech.
Sweet.
Now, unfortunately, when I talk to you and a few others, each little soliloquy opens up 10 or 12 hours' worth of topics that we could cover.
And so there's a bunch of stuff I want to get back to here, and I'm sure we're going to miss some important things.
Let's make sure we try to capture a lot of it, though, because I think it's going to be good to flesh out.
Yeah, I think it's really important stuff.
One has to do with magic, and I wanted to point to something that you didn't exactly say, but I'm pretty sure you'll agree with.
The thing that's really fascinating about magic: it's not so surprising, for reasons we can come back to, that human perception has blind spots and defects that can be exploited, right?
But there's a big difference between running into a con artist, who you don't know is a con artist, who is attempting to exploit those things for their gain, and a magician, where you go and the person says, hey, I'm a magician, I do magic, and you know damn well they don't do magic, right?
But the point is, you are constantly, at a good magic show, trying to figure out how it's being done.
And unable to.
Right?
And so the point is, the fact that you're aware of what's going on at Facebook, Twitter, etc., is no protection.
And this is one of the things that I think is most telling: the very people who constructed these things, the smart ones, are aware that they are not in control.
That when they are the users, they're as vulnerable as anybody else.
And, you know, the fact that they very often have very strong limits for their own children's interaction with these things tells us what we're dealing with.
We're dealing with something so potent that the awareness that it is happening is no protection.
And that's what's so dangerous, right?
Because it means that there is no sort of safety place where we actually still have control.
Because, as you know, in the film The Social Dilemma, Tim Kendall, who is a father and has a couple of kids, talks about how he was promising himself that he wouldn't use Pinterest.
He was the president of Pinterest.
And he found himself in the pantry, you know, scrolling on Pinterest, ignoring his own children.
He said, this is classic irony.
I'm the one who's building Pinterest on a daily basis.
It's like getting high on your own supply.
He can't get off the thing, even though he's making commitments to himself to not do it.
And that's actually one of the parts of the film that I think people really latched onto.
Aza Raskin, my co-founder at the Center for Humane Technology, talks about how he actually wrote himself software.
You will appreciate the intervention that he created to make himself less addicted to Twitter and to Reddit.
You know the way he did it, Bret? He actually created a random slowdown.
So as you used Facebook or Twitter or Reddit more during the day, it would, at a variable, random level, slow down how long it took for pages to load.
If you put a time limit on, a seatbelt, like Apple does in Screen Time, and say, you can't do this, you've hit your 15-minute limit, your brain just gets upset and says, no, no, get out of my way. I'm going to keep going anyway.
Right.
You're not the boss of me.
You're not the boss of me.
Right.
Those are the same paleolithic emotions that come up in that context.
But if you do a random slowdown, where it just gets slightly more frustrating, not a lot more frustrating, it doesn't take forever, it just gets slightly more frustrating over time, it gives your brain enough time to catch up with its impulses.
Your prefrontal cortex can catch up with your sort of emotional impulses.
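A minimal sketch of what such an intervention could look like; the constants and function names are invented for illustration, not taken from Aza's actual tool:

```python
import random
import time

# Hypothetical tuning constant, invented for this sketch.
EXTRA_DELAY_PER_MINUTE = 0.05  # average seconds of delay accrued per minute used

def delay_before_load(minutes_used_today):
    """A randomized delay that creeps upward with daily usage.

    The jitter means there is no hard wall for the brain to rebel against;
    pages just get slightly, unpredictably slower, which gives the
    prefrontal cortex time to catch up with the impulse to keep scrolling.
    """
    mean = minutes_used_today * EXTRA_DELAY_PER_MINUTE
    return random.uniform(0, 2 * mean)  # variable, averaging `mean` seconds

def load_page(fetch_page, minutes_used_today):
    time.sleep(delay_before_load(minutes_used_today))
    return fetch_page()
```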
And, you know, when you make this metaphor of losing control, it's almost like the inmates are running the asylum. But really, the amygdala is running the asylum, our reptilian brain is running the asylum, because no one has control, no one's prefrontal cortex gets to sit in the pilot seat of our sense-making and choice-making, and instead we're left more and more with the lower parts of ourselves.
And that's the representation of where all of civilization goes. Because when you zoom way out, and this is when I was a design ethicist at Google, prior to doing this work, my biggest concern was that we have three billion human beings who are jacked into this real instantiation of the matrix.
We actually did build the matrix.
It doesn't look like it did in the movie.
And at least in the movie version of the matrix, we had a shared matrix, whereas in this matrix, we each have our own different, hyper-personalized Truman Show matrix.
All right, awesome.
This is fantastic.
All right, let's go to the next part of what you raised before.
So we've got magic, which demonstrates that the fact that you're aware of these defects in your own perception is really no protection from having them gamed.
But let's also open up the parallel realm, okay?
The parallel realm, I would say, is the world of optical illusions.
Yes.
I was just thinking of the gray checkerboard. Is that where you're going to go with this?
I was going to go with the general category.
So I would say that the world of optical illusions, which is now effectively a realm of academic study, is all in search of evidence of how we perceive things, based on, there may be more categories than this, but at least two categories of phenomena.
One is where some heuristic, and we'll come back to what heuristics are in a second, but some heuristic in our perceptual apparatus decides something is one way or another, and a carefully presented set of data can be perfectly ambiguous between two possibilities. So to the extent that some prior bit of cognition, or the way the light falls, suggests one way, the brain sees it that way, and then if something changes, you may see it the other way.
So, you know, these sculptures, are they convex? Are they concave?
The Necker cube: which corner of the Necker cube are you looking at?
There's those things.
And then there are other heuristics about when something shows up. So, you know, we have a frame rate in the eye.
Many people will claim we don't have a frame rate in the eye, because the frame rate isn't regular in the way a movie film's is, but we clearly do have a frame rate in the eye. Which is why, for example, when a car with perfectly regular wheels drives into a parking lot, you may see the wheels turn backwards: each spoke advances not quite to the position of the next spoke in one click of the frame.
But in any case, you've got all of these cases where, in nature, if your retina saw something at this position and then that position, it would be safe to conclude that it had moved, right?
So now if we flash things in a particular way, the Mario brother appears to move through the scene, but in fact all the Mario brothers are perfectly static.
Anyway, the point is that this is exploiting a perceptual heuristic to cause the brain to conclude wrong things, things you can then test and show to be wrong, like the checkerboard case, where the two squares look very different in shade because of a gradient, but you can check that they are in fact identical.
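The wagon-wheel effect described above falls out of simple temporal aliasing. A sketch that assumes, purely for illustration, a fixed frame rate (the eye's actual sampling is irregular):

```python
def apparent_spin(rev_per_sec, fps, n_spokes):
    """Perceived rotation of an n-spoke wheel sampled at fps frames/sec.

    Spokes are indistinguishable, so the visual system matches each spoke
    to the nearest spoke position in the next frame: the true per-frame
    rotation is aliased into (-1/2, +1/2] of a spoke interval.
    """
    spoke = 1.0 / n_spokes              # spoke spacing, in revolutions
    step = (rev_per_sec / fps) % spoke  # true rotation per frame, mod spacing
    if step > spoke / 2:                # nearest match is the spoke behind
        step -= spoke                   # so it is perceived as backwards motion
    return step * fps                   # perceived speed, revolutions/sec

# A 6-spoke wheel at 3.9 rev/s sampled at 24 fps advances 58.5 degrees per
# frame, just short of the 60-degree spoke spacing, so it appears to turn
# slowly backwards:
print(apparent_spin(3.9, 24, 6))  # -0.1 rev/s
```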
So anyway, what I want to get at is that these heuristics are there for your well-being, right?
In fact, they're the key to any rational perception.
If you tried to perceive every pixel on your retina and process it, you'd never get through a frame, right?
It's just too much data, right?
So you've got to break it down into chunks that are so reliable that you just get a huge payoff for using a subset of the data to draw the full conclusion.
And it's very rarely wrong.
But the fact that these things are riddled throughout our perception means that a magician or a person who is interested in creating optical illusions or a corporation that is interested in keeping you on their site all find them, right?
In other words, they're littered through your perception for your well-being, but they are exploited by others all too easily because of their ubiquity and predictability.
So the question is, what happens without an agreement that, hey, it's not cool to game other people's perception?
I mean, maybe it is cool if it's a surprise party.
The point is, you're going to enjoy the surprise party when it's revealed to you, so it's okay for me to take advantage of your trust briefly.
The film The Game is a good example of that with Michael Douglas.
Say it again?
The film The Game is a good example of that, with Michael Douglas, where you have an entire false reality constructed for you, and then at the end of the movie the curtain gets pulled off.
However, you've been put through the wringer.
You thought you almost died.
And there's this question of what is an ethical sort of false reality or manipulation of someone else's experience?
One way to do it, as you're saying, is to reveal it at the other end.
There are transformational experiences, workshops, and cults, which do more confrontational alternate-reality creation by manipulating your experience.
But then they say at the end, it was all for your benefit.
I'm saying this because of my motivation for this work, which I don't often talk about. When I was at Google as a design ethicist, the basis for this inquiry into how technology is impacting society had to do with this: what is an ethical asymmetric, manipulative, or persuasive relationship, when something is influencing your evolutionary weaknesses, inside or outside your awareness, and when is that a grounded, ethical, humane relationship?
Because the degree of asymmetry between what technology knows about you that you do not know about yourself is going to grow.
If you point an AI at your brain, think of the newsfeed as a kind of chess game where your brain is the chessboard. You think you're going to scroll one more video or watch one more thing and then you're done.
But you have this asymmetric AI that's playing out a trillion simulations from all the little human voodoo dolls, the human social animals, that it has seen use Facebook with the exact same usage patterns.
It has seen a thousand other people watch those same cat videos, or a thousand other people watch those Bret Weinstein podcasts.
And it has seen what the next thing is that it could show you that would get you to stay.
And it knows way more about that than you are aware of, because it has literally run more simulations.
So the degree of asymmetry over time is going to grow.
Which means that we have to actually have a relationship with something that knows more about us than we know about ourselves, that is ethical or grounded or humane.
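A toy version of that asymmetry: even a crude next-item model built on pooled viewing histories "knows" your likely next click, because it has seen thousands of histories like yours while you have only seen your own. Illustrative only:

```python
from collections import Counter, defaultdict

def train_next_item(histories):
    """Count what users watched immediately after each item."""
    following = defaultdict(Counter)
    for history in histories:
        for current, nxt in zip(history, history[1:]):
            following[current][nxt] += 1
    return following

def predict_next(following, current_item):
    """Recommend whatever most users with the same pattern watched next."""
    counts = following[current_item]
    return counts.most_common(1)[0][0] if counts else None

# Trained on everyone else's histories, the model predicts your next step
# better than you can, because it has already 'run the simulation':
model = train_next_item([["cat_video", "cat_video_2", "outrage_clip"],
                         ["cat_video", "cat_video_2", "outrage_clip"],
                         ["cat_video", "cooking_show"]])
print(predict_next(model, "cat_video"))  # cat_video_2
```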
And that brings us to, and we shouldn't escape the topic we're in, so I'm not trying to diverge us, I know we keep opening up threads, but you then ask: what are other relationships where one party knows way more about our psychological weaknesses than the other?
You might think about a therapist, who is licensed, who says, hey, I'm going to know your deepest secrets, your deepest vulnerabilities, your biggest concerns, your inquiries for yourself, your sexual identity, how you feel about your romantic partners.
They're going to know all that information, and they can't use it to manipulate you so that you spend more time in the therapy room, and they can't sell it to some advertiser so that someone can say, oh, you have these kinds of anxieties, and guess which political party would love to know those anxieties in order to target you.
So once we see it that way, this opens up an inquiry, but I think we can leverage insights from different fields, whether it's psychotherapy, counseling, or law. A lawyer knows more about the vulnerabilities in your case than you do.
And you have to trust that agent to act on your behalf.
And so, gosh, there's so many things that we can open up here.
But where you get to in technology is that you simply cannot allow, categorically, a business model based on exploiting that asymmetric relationship.
What I like about this framing is that it's a one-and-done way to simply nail the business model, and it's super clear.
You would never have a therapist whose business model could be advertising to political parties to manipulate you based on what they hear in the therapy room.
Or a priest sitting in a confession booth who listens to two billion people's confessions, literally all the things that they know about you.
Then imagine worse than that: the priest has a supercomputer next to them, literally calculating patterns across all the confessions they've ever heard.
So they're actually making predictions for the person who's about to walk into the confession booth before they even say the confession that's on their mind.
They know what that confession is going to be, and they can sell that to another party.
This is the kind of asymmetry of power we now have.
And so, gosh, I know we're opening up so many different threads because we can still go deep into the evolutionary stuff and the cognitive biases, and there's so much to say there, but I wanted to make sure I brought that in.
Yeah, fascinating, and you're right, so many interesting threads.
But let's see if we can nail down the point that you and I are converging on here.
There is a basis for trust. In the case of the therapist, there are actual regulations, ethical regulations, the rules of the psychology profession, that say, just to take one example here, your therapist is not allowed to sleep with you, right?
There's some sort of waiting period after you've left therapy before your therapist could start a sexual relationship with you.
Right?
That's a protection, because this person, as you point out, knows all about you, and therefore would be in a position to manipulate you.
Right?
And then, financially speaking, there is an explicit contract that you have with this person.
And so, those protections, I don't know whether they work or not, but I know that the point is to establish a basis for trust where you actually know what it's made of.
Right?
That's right.
And then there's another kind of situation, the flip side.
Somebody is bound to you in a way where your trust isn't about the rules; it's about the fact that they don't have an interest in using this against you, right?
Family members, for example. Heather and I, I don't know that we've ever explicitly talked about it, but it's very clear to both of us that she has the right, within limits, to lie to me for our collective well-being, right?
So, for example, if, you know, I'm up against a deadline and I need to get something done and, you know, our collective well-being is dependent on it, she may adjust what it says in the calendar about when it's actually due, right?
Now, there's a limit to how well that works, because obviously I know that it's a possibility, but the basic point is, look, I know she's not going to use this to her advantage and my disadvantage, so... She's not making money the more she lies to you, right?
She's still using it in your best interest, and there's actually a shared partnership, you having your own children together, that creates skin in the game, so that relationship is grounded in a real sense.
Yeah, the term we use for it is shared fate, right?
Because we have shared fate.
The point is, if you're going to lie to me, you're going to do it honorably, right?
And there are obviously ways that could break: if, you know, she started up with some guy and started lying to me, that would be the exact opposite.
But in the absence of such a thing, she has license to figure out where the line is.
And even if we slightly disagree over it, it's done with good intentions.
And I should say, this is now uncomfortable, but I also find that my relationship with my children involves a huge amount of my gaming their perceptual apparatus.
My kids are, you know, 16 and 14 at this point, but this has been true all along.
And it is always done in the context of humor.
But the basic point is some part of me knows that they live in a world in which they're going to be confronted with bullshit, right?
Every waking hour, right?
There's going to be advertisements, con men, algorithms, all sorts of stuff.
And so I use the license that comes with humor To misportray things so that they will get really good at detecting falseness, right?
And I know it works, and they know I'm not doing it for my benefit and against theirs.
And so it's rather like the example of your spouse having license.
And in this case, it's even tighter because, you know, you can lose a spouse, but your kids are your kids forever.
So the question then is: what is the basis on which departures, in effect the honorable shortcuts or white lies that exist in the context of family, function as a heuristic for family well-being, in the same way that your perceptual apparatus makes jumps in order for you to process things in real time?
So the question is, is there a basis of trust?
And this is a long way of saying, the thing we can very clearly see when a for-profit corporation is engaged in finding your perceptual defects and exploiting them, is that a transfer of wealth is directly possible, right?
You have the ability to decide where to spend your money and where to spend your time, and they have an interest that departs from yours.
And so they can effectively direct your time or wealth in some direction you wouldn't have it go, to your detriment, and they can come out ahead by doing it.
And I think, you know, the ultimate upshot of this is if they do that, that it actually deranges us collectively.
Something they didn't anticipate.
Right?
The side effect is potentially fatal to humanity.
Correct.
Because I think we often focus too much on the individual harms. My work, especially when people reference Time Well Spent, started in this way, and people thought the only critique here was addiction, or that the harm was depression or isolation or one of these individual harms.
But really, the emphasis in the long run is this collective derangement: a business model that privately profits but collectively harms, meaning it puts harms on the balance sheet of society and pollutes there.
On the balance sheet of society is an inability to see a shared reality, each of us with a different notion of history, of what's happened, of what's fair or equal, because we've each seen literally infinite evidence of the other side lying or being hypocritical, and it creates no room for shared growth.
I wanted to bring back your previous metaphor that I've loved: the notion of a metaphorical truth versus a literal truth, and the porcupine story, the idea that porcupines shoot their quills.
I just thought this was so important to bring out, because this is a deception, but a deception that serves the greater benefit.
Now the question is, who do you trust to construct that?
Because you could have tribes that, let's say, have false wisdom that are creating false deceptions.
There's also a developmental aspect here, where maybe when you're young, I can say, you know, porcupines shoot their quills, and so please stay away from porcupines.
But when you're older, you've outgrown that metaphorical fiction.
Maybe I need a new one, and I'll tell you the truth, right?
So maybe, you know, when you're young, I'll tell you about Santa Claus.
When you're older, I'll tell you, you know, there's a developmental new place to land.
These are all interesting philosophical areas of asymmetric persuasive relationships, where we have more and more ability to influence people. But I also want to say it's not just psychological or perceptual vulnerabilities we're talking about here; there are also identity-level vulnerabilities, and that's where this gets really insane. In the film The Social Dilemma, we talk about how, if I'm a foreign adversary of the United States, and I don't want people to think this is some partisan, Russia-only thing, or China, because lots of countries are now doing this, I can go into a conspiracy theory group, QAnon, Pizzagate, whatever, and I can actually get the user IDs of people who are of that mindset.
Then I can pump them into a Facebook lookalike model and say, give me users who have the same kind of profile.
So it's not just that Facebook was doing gain-of-function research on memetics or groups or extremism; they're actually offering gain-of-function.
It's gain-of-function as a service.
They're offering it to other parties, who get more and more clever at being able to find and navigate these audiences.
And then, further, I can split-test which messages would work for those audiences.
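The lookalike step can be sketched as a nearest-neighbor search over user feature vectors. Facebook's actual lookalike models are proprietary; this shows only the general shape of the idea:

```python
import numpy as np

def lookalike_audience(seed_vectors, all_vectors, all_ids, k=1000):
    """Rank users by cosine similarity to the centroid of a seed audience.

    seed_vectors: feature vectors of known members of the target group
    all_vectors:  feature vectors of the wider user base (one row per user)
    all_ids:      user ids aligned with the rows of all_vectors
    """
    centroid = seed_vectors.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = (all_vectors @ centroid) / np.linalg.norm(all_vectors, axis=1)
    top = np.argsort(-scores)[:k]  # the k users most similar to the seed group
    return [all_ids[i] for i in top]
```

Split-testing messages against the resulting audience is then just the selection loop sketched earlier, pointed at persuasion rather than product design.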
And again, I want to be really careful, because we can steelman this, and there's a pre/trans fallacy on what a conspiracy is, the use of the word to deride people who think, versus the actual, genuine, real conspiracies that are true.
But I think this is really important, because one of the other things we know about conspiracy thinking is that the best predictor of whether you'll believe in another conspiracy is whether you already believe in one, because it starts to warp the basis of your perception, the kind of trust that you have.
Like, if it were true that the entire government was lying about COVID and it actually did come from the U.S., and who knows where it came from, that's a topic of conversation, but if it were intentional in that way, that would certainly warp the perception of everything downstream.
Because if that were true, then all these other things could be true.
Now, I don't want to go too deep into any one of these specific cases, because I want to stay on the identity hacking, but I wanted to make sure we were clear that the level of psychological and evolutionary hacking is memetic, it's perceptual, it is social-validation-based, it is social-relationship-based.
It's also identity-based.
In fact, in Facebook's own documents, there was a Facebook marketing team in Australia that said: we literally know when a user has low self-esteem.
They were talking about teenagers.
They can literally predict, based on usage patterns, when you have low self-esteem.
And that's an identity-level manipulation, which is very much like cults.
Cults actually catch people in moments of vulnerability.
What we know from cult recruiting is that recruits are caught in moments of huge transition.
Something in their life, a meaning-making structure, has collapsed, whether it's a relationship, a job, a deceased loved one, or a divorce.
And they're in that moment of low self-esteem and kind of low meaning, and that's exactly what makes them more vulnerable.
And what we've enabled, and Sam Harris and I talked about this on his podcast, is a kind of cult factory: the mass finding of people in vulnerable states, and then the ability to personally target them to a whole new degree, again with this gain-of-function metaphor, to steer them.
And the idea that this is the default infrastructure that runs our digital economy, that we would allow and even make money from the gain-of-function-ization of everything, is incredibly dangerous.
And I just want to drive people there because I want people to see just how deeply the reform needs to be.
Yeah, marvelous.
So I do want to briefly touch on this issue of one conspiracy belief predicting another: the fact that belief in a single conspiracy changes your likelihood of believing, or at least predicts that you will believe, in a second one.
This is actually about a heuristic being built, right?
To the extent that something persuades you that collusion has taken place in a given circumstance, it naturally adjusts your sense of how common collusion is.
And corrosively, it adjusts your sense of how to weight evidence.
Because if there's one thing that's common to all conspiracy theories, it is that they portray some other possibility as more likely.
They inherently frame some alternative.
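The base-rate shift being described here is just Bayesian updating. A sketch with invented numbers:

```python
def update_base_rate(prior, p_if_common, p_if_rare):
    """One Bayes step: accepting a single conspiracy claim as true shifts
    your credence that collusion is common in general."""
    evidence = prior * p_if_common + (1 - prior) * p_if_rare
    return prior * p_if_common / evidence

# Start at 10% credence that collusion is commonplace. Each accepted
# conspiracy (assumed 10x likelier if collusion is common) ratchets the
# base rate upward: about 53% after one, about 92% after two.
p = 0.10
for _ in range(2):
    p = update_base_rate(p, p_if_common=0.9, p_if_rare=0.09)
    print(round(p, 2))  # 0.53, then 0.92
```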
So basically what it does is it forces you to sideline your obligation to Occam's razor.
And there are cases where a little bit of sidelining of Occam's Razor is absolutely necessary to make progress, but it's also the kind of thing that if you give up on it, you just become a sucker, right?
That's right.
So it's very hard to navigate that space.
And basically by pumping noise in, we are breaking people's relationship to reality and evidence very directly.
And the consequence is all but inevitable, I would say, right?
People will become so distrustful that they will effectively become full-time superstitious.
That's right.
And that's kind of where we are is what it feels like.
Right.
I mean, the challenge, as you said, is that there are ways of navigating the beyond-Occam's-razor explanations for things.
And you're doing that a lot here on your program.
And I think that's really fantastic.
But people need the tools to do that carefully, because that process can externalize a kind of naive distrust, a kind of naive epistemic process, that starts to get, as you said, further and further superstitious if you don't have the tools.
And I always use a developmental paradigm when I think about these things: at certain ages we speak in some ways, at other ages we speak in other ways; we tell you about Santa Claus. It's our friend Daniel Barquet's point that life is a lifelong process of killing Santa Claus: you're taught one metaphorical Santa Claus story, the porcupine thing, the simplistic story, that suits you for a period.
It's a jacket you can wear for that period, but then you outgrow it, and there's a different developmental paradigm.
So maybe if we're going to engage in non-Occam's Razor narratives and really do investigations, we have to be aware of, developmentally, which minds we're speaking to.
Am I talking to someone who's literally a basement conspiracy theorist with a bunch of guns in their closet, ready to take up arms?
Consequentially, I have to be more worried about the way that my communication might impact that person than if I'm talking to someone who's really thoughtfully engaging with alternative narratives and epistemology, and has a whole process that they've developed to navigate that.
And I think that's one of the things that we lose, especially in a social media landscape when you're broadcasting to, you know, 10 million people at once.
We don't have developmentally sensitive communication, meaning our communication is going to so many people at once. And this is probably the wise critique of engaging in any kind of alternate-narrative thinking: hey, you, Eric, and you, Bret, are not sufficiently engaging with how people might hear you, consequentially.
But then you're saying, yeah, but you don't see that we're ignoring really important truths that if we don't get to talk about, we're going to get trapped by political correctness or we're going to get trapped by these other things.
And this is where I think it gets very, very interesting.
Yeah, and actually, this is another uncomfortable topic.
I am capable of calculating that in some of the places where I question the official narrative, it is essentially certain that people are going to get hurt as a result of my revealing that these things are questionable.
So, for example, if we talk about what we don't know about vaccine safety with respect to the novel COVID vaccines.
It is essentially inevitable, given the size of the audience, that a certain number of people will change their view on whether to get the vaccine.
Some of them will get COVID.
Some of them may die.
Other people may catch COVID from them who will die.
That's a terrible responsibility.
I don't want it.
No rational person could want it.
But what I believe is that our sense-making around these things is so broken that we are already killing people as a result of our obligation to sign up for these official narratives that aren't true.
And so I guess, you know, it's hard to know what the right analogy is. But the mechanism you have been exploring, through which attention is captured and sense-making is deranged, that mechanism is like somebody has been working on a computer, some sort of robotic device, that you can put in the driver's seat of your car so it can pilot you, right?
Now, I don't doubt that you can make a self-driving car.
We all know that's coming sooner or later.
But if you tell me that you've built this in your garage and you're going to put it in the driver's seat of the car and we're going to go somewhere, we're going to crash.
This is a prototype, right?
And the nature of prototypes is that you're going to discover all of the hazards you didn't know about along the way, and you really want to discover that on a closed track.
You don't want to discover that with civilization in tow, right?
And that's where we are, is we're running the biggest experiment that has ever been run in real time with all of us on board.
There's not some population living somewhere who isn't downstream of the effects of this, even if you're offline, right?
We're all in.
Totally.
You and others have said it's the largest unregulated psychology experiment we've ever run, with no IRB.
If you want to use the Zuckerberg Institute of Virology metaphor, you have, what is it called, level five?
Yeah, right. Level four.
So you have this kind of level-four societal engineering lab, but you're literally not even doing it in the big white suits and the enclosed spaces; you're just doing it on civilization, constantly, in this sort of 24/7 way.
And we're actually still there, right?
It's not as if there aren't, right now, three billion people jacked into Facebook, Twitter, YouTube, and the rest, subject to the live memetic tweaking of where these things are going.
And so now the virus is sort of out of control.
We're doing live tweaking to try to steer it.
At this point, by the way, I want to say that I'm often perceived as a critic of the system, as making these people out to be evil.
I don't believe any of these people are evil.
I think they have not seen the consequences of the process that they were in, the race to the bottom of the brainstem evolutionary landscape that they were competing within.
But the problem is that now they don't even have the tools, the levers, to tweak things back toward some kind of stabilizing force.
And that's, I think, the predicament that we're really in.
And frankly, this is the open inquiry I'm in.
I think we obviously need tweaking of the technology and to fix it at that level, because that's the thing that we're continually jacked into.
But I also think we need almost like once the pandemic of COVID is out there, we do need some kind of protection mechanism to isolate the virus.
I think we need a cultural antibody.
We need a cultural awareness.
We need a cultural, I don't want to say vaccine, because that itself is polarizing, but we need some kind of common understanding.
And I don't want to speak too highly of The Social Dilemma's impact, but one thing it does do, since about 100 million people have seen it, is create a new shared common ground about at least one primary explanation of how we lost common ground to this degree.
So to have shared common ground about where and how we've lost common ground is at least a place to stand from so that we can then say, how do we want to proceed back into more common ground?
And I think that's kind of an open inquiry that I'm in, is what does that look like?
Alright, two things.
One: if we go back to the earlier part of our conversation about shared fate and what it allows you to assume, the irony here is that we do have shared fate, right?
This is a tiny planet.
Our industrial processes are at a scale now that we are our own biggest hazard.
And so our fate is tied. And to the extent that our sense-making is deranged, because in the moment one group can get financially ahead of another by doing things that derange us, in the long run we may all perish together as a result of the process, having never had it built into us that our fate is shared and that we should treat it carefully, the way you would treat your spouse, for example.
But I also think there's a way in which even the dire picture that you just painted doesn't fully describe the hazard. Let's say there was only Google, right?
If only Google were involved in deranging us, then Google stands a chance of figuring out the hazard and correcting things, backing us away from the precipice. But the problem is, let's say that Google figured out that they were a hazard, realized it, and repented, and they invested a huge amount in figuring out how to un-derange us.
But Facebook and Twitter and Instagram and all the other forces weren't on board with the program, right?
It's a little like imagining we're hurtling through an asteroid field, right?
And you've got six people in six different control rooms trying to adjust the cameras that allow us to see where the asteroids are. So as one person adjusts a dial to make the image better, somebody else may adjust a dial to put it back the way it was, because they were trying some other correction, right?
All of these disconnected things are basically an amplifier for noise, right?
At best.
At best.
So now, let's say that something like that is going on.
The question really is, what conceivable solutions exist here?
Do you know?
I mean, this is our daily work at the Center for Humane Technology and just on a daily basis just among peers.
I mean, I feel that the first point you're making about shared fate is exactly where we've just been the whole time.
It's just that we're all in this thing together, right?
Because, to the earlier point we were making, we're all addicted; our amygdalas are all available for mining and hacking.
Whether we know it or not, even if you're aware of the amygdala mining, it still works.
If you show me something that makes me angry, I will get angry.
I can try to be mindful of it, but again, the asymmetry of influence and the infinite feeds of the outrage are more powerful than my sort of daily mindfulness practice.
And we shouldn't put that burden on individual minds anyway, to have to develop the superstructure, or I think your brother calls it the metacognitive perch, upon which to not be so hijacked by, whether it's narrative control or amygdala control.
Yeah, I mean, when it comes to... I think we have to see some kind of shared fate.
I mean, when you ask it that way...
Something I've been pondering recently, and you know this history better than I do, because you were around when this happened and I was just born a year later, is the 1983 film The Day After.
I've been obsessed my whole life with the history of nuclear weapons, how close we got, and how in the world we survived after we invented the technology to destroy ourselves. There were nuclear scientists who, seeing this exponential power, were convinced it was over.
I think there's the story of Feynman, or one of the scientists, in the back of a taxi, looking out at the bridges of New York City and thinking: what's the point? Why are they building these bridges when we already screwed it up? It's over. We built the nuclear weapons. And some of them actually took their own lives, thinking that was the end of the whole thing.
And I think it's kind of remarkable, because if you lived in that period, you probably would have thought, as I would have felt if I'd been there, that it was over.
And the interesting thing, for those who don't know, this film The Day After was the largest primetime TV event in American history.
It aired on a Sunday night at 7 p.m. on television.
It was a film made by the director Nicholas Meyer.
And it's about a hypothetical nuclear exchange between the US and Russia, a full scale nuclear exchange.
It's told through, you know, the narrative of regular people living in Kansas and a couple of other places where, you know, the missiles are underground, etc.
And, you know, airing this on primetime television, and it was aired around the world too, meant essentially the whole psyche of the entire country saw the shared fate.
Because we can't, you know, if the shared fate happens, it's too late.
So we have to actually simulate the shared fate, put it into the collective psyches of everyone so that the omni-lose-lose outcome becomes visible to every actor realizing that the escalation doesn't work.
And this film was later credited with shifting Ronald Reagan's personal psychology, because he saw the film and it gave him chills.
It led to him wanting to do the Star Wars project, which was a failed project, but at least it led him away from the idea that we could win a nuclear war.
And what's interesting about this film is it was aired in primetime TV, 100 million Americans saw it, largest TV event in history in the US.
Then they had a panel afterwards, I don't know if you remember this, but they actually had a panel of Henry Kissinger, William F. Buckley Jr., Carl Sagan, Brent Scowcroft, you know, the sort of best military minds of the generation, having a public discussion with the American public, reckoning with it:
Hey guys, we actually do have the power to destroy ourselves, we all just saw it.
Now we're actually having the uncomfortable conversation we can't be in denial about anymore: what are we going to do about this?
And, you know, I just think that kind of film, that kind of initiative, that kind of shared fate visualization, stuffing that into the minds of everyone, is the kind of thing that we need to ground the much more extreme action we need to take, that this will not work unless we take some action together, that this is a shared fate.
That's beautiful, but...
There's a problem with it.
I mean, I certainly remember The Day After, and, you know, basically someone decided to traumatize us all together.
I assume their motives were good, but the effect, I think, was certainly positive, as you point out.
The problem is that in the case of nuclear weapons and an all-out exchange, the likelihood of coming out ahead is pretty small.
Right, and so it is a much easier problem to solve by revealing its nature because in general the mindset that you have to, you know, you basically get into Dr. Strangelove territory where, you know, the crazies are interested in, you know, winning the post-apocalypse, right?
So very few of us are on board with that.
And so it's much easier to say, hey, actually, this nuclear weapon thing is a special kind of predicament that requires a new kind of cognition in order to even understand how to calibrate it.
And of course, there's a lot to be said here.
One thing that's true is it's in some ways an unsolvable problem until you have global agreement to get rid of the weapons, because sooner or later they will be used, by accident or some other mechanism.
So we have temporarily solved the problem on a time scale that many of those who saw the invention of these weapons did not expect.
It is also quite possible, and in fact likely, that the weapons themselves prevented a lot of other warfare.
Correct.
Yep.
Right?
And so anyway, it's a complex puzzle.
We have to have a full accounting of the effects of these things.
Right.
Completely right there with you.
Right.
But the other thing I would say is that the difference between the nuclear weapon situation and the derangement of civilization via feed and search algorithms, etc., goes back to the question of magic.
The fact that you know that the magician is fooling you does not allow you to see in what way you are being fooled.
And this is, you know, the problem is there's no winner in the nuclear exchange.
There is very definitely a winner, short term, in the derangement problem.
And so we need some sort of... I mean, I think the point is, in some ways, you know, The Social Dilemma is The Day After for derangement.
And yet, as much as a lot of us saw it, understood the message, agree on it.
I don't know how much farther ahead we are on the puzzle.
Completely.
Well, I think this is critical, Brett.
I'm so glad we're talking about this step by step, because where I'm lost, frankly, where I don't have an answer, is this: the outcome of The Day After, if you think about it from a theory of change, was that it influenced one person, I mean, a set of leaders, especially Ronald Reagan, to shift policy.
And then there was a shift in what could happen, because those were people in control of a centralized exponential technology that could destroy everything, and they were brought into the shared fate.
In this case, I guess you could say we have a small set of technology leaders.
So those could be the five Ronald Reagans governing the five nuclear-armed intersubjective platforms.
I think of the narrative derangement syndrome technology infrastructure we have as kind of nuclear weapons for the intersubjective world.
So nuclear weapons were for the outer environment, but Facebook and Twitter and YouTube, et cetera, are for the inner one.
Again, I also would like to make sure that your audience hears me acknowledge that I'm very aware of the incredible benefits that these things provide, too.
And I just want to say blood donors finding each other, high school sweethearts finding each other, YouTube producing lots of learning opportunities, music videos, pumping people up with exciting, courageous ways of waking up in the morning.
I see all that.
The challenge is on the intersubjective side: going nuclear on a shared reality is an unsurvivable outcome if you can't recover from it.
So there's two kinds of harm.
There's reversible harm, where you cause a harm, you know, maybe a disease in the body, and if there's some kind of medical way to reverse it or to rebalance your system, that's great.
But then the irreversible harm, the kind of, I think losing trust, for example, in sense-making, losing trust overall, not trusting any institution, not trusting each other is a very hard place to recover from.
That's in the more irreversible harm category.
So what I worry about is that when you nuke the intersubjective realm, especially at the trust layer, not just the informational layer, that's a place that I don't know how you recover from.
Now, again, to your point, who would you convince?
So we have these kind of five Ronald Reagan type figures who are in control of the tech platforms, but they're governed differently. Reagan could theoretically make the decision himself.
And yes, he had a country to report to, and he was influenced by that country.
In this case, we have five Ronald Reagan tech platform CEOs.
I'm just throwing the number five around.
There's lots of different tech platforms, but let's just use that.
They report to their shareholders.
They're publicly traded companies that actually can't get off this boat.
So they're trapped just like we are, you know?
And your brother, Eric, has this great line that, you know, listening to tech company CEOs talk is sometimes like watching a hostage in a hostage video.
Have you ever seen a hostage in a hostage video?
And it's like, they're saying things that don't really add up.
You're like, why are they saying this gibberish that doesn't make any sense?
And then you see that offstage, there's someone holding a gun to their head.
And the gun to their head is their business model and it's their share price that has to keep going up.
And so that's, I think, you know, the kind of situation that we're in.
Okay, two things.
One, I want to put an asterisk around this question of shareholders, because I'm less and less convinced that it's playing the role we think it's playing.
I know that it should, and I don't know why it's not, but I think we can see evidence that something else is afoot, and it may be somebody offstage with a gun to the head, but I don't think it's the shareholders at this point.
But let's talk about the question of your analogy to a disease and the ability to restore the body to health.
Yeah.
The problem is analogous almost to half a bone marrow transplant, right?
So you've got an immune system that's out of control, you've got leukemia or something, right?
And the way to solve it is to get rid of the immune system, which would be fatal in and of itself if you didn't replace it with another one.
You've got to replace it and it has to take, right?
But if you just go halfway, the point is, okay, you've just killed the patient.
They were gonna die of leukemia, and now you've killed them on a much shorter timescale.
And in effect, the problem here is that the destruction of... Here, we've got a problem.
You know, there's this old ironic line, you know, who you gonna believe, me or your own eyes?
What we've got is, you know, the irony of who you're gonna believe, me or your own eyes, is that obviously the answer is I'm gonna believe my own eyes, right?
Okay?
But the problem is, if somebody is gaming your visual apparatus such that your eyes are not reporting accurately, then the answer is actually, you know, I'm gonna believe the magician when they tell me what they did, rather than what I saw.
In that case I have to make an exception.
Here what we've got is a cyborg that is a fusion between intelligent creative human beings and unintelligent AI-like stuff learning to disrupt our own ability to manage our time and to understand how to evaluate claims that we are seeing.
And ultimately, a smart person who is not in a position to control this phenomenon, and none of us are, ends up distrustful of their own eyes and everything else.
And the point is, you cannot just go back, because once you've learned that your own eyes are lying to you, right, that your own instincts about what's true and false are unreliable, how does something use those instincts to inform you, okay, we've now restored sense-making, or sense-making is getting better, right?
The point is, once you've turned off the input, because you just don't trust it, you are now that much harder to cure, and it's very much like having had your bone marrow nuked with radiation, and then nobody bothered to infuse you with a new immune system.
So anyway, I'm concerned that there's an asymmetry here, where the damage is all too easy to do with this level of technology, and the process of undoing it is, you know, a hundred times harder.
By the way, this is actually why there's no kind of told-you-so behind the words I'm saying now. But, you know, so much of this was so hard looking at it three years ago.
I remember walking into senators' offices talking about this and just feeling like, oh my god, there are no adults in the room.
There are so few people who get that this process is happening.
And people falling into these narrower, you know, more extreme views of reality is a hard process to reverse once you go there.
And we have self-reinforcing feedback loops, the gain-of-function process, that is continuing to make this happen.
We were saying back then, this is going to be so bad in three or four years.
And here we are.
And I say that again not as a told-you-so.
You know, it's just why it was so important to try to stop it earlier, because once you get here, it's very, very hard.
Now, there are epistemic processes to, you know, do more grounded sense making.
You can recover.
I mean, I know one of my favorite teachers, Byron Katie, a new-age spiritual teacher, has a wonderful, simple process you can use to just question your beliefs.
It's very similar to cognitive behavioral therapy.
It comes down to four questions for anything you're holding in your mind that gives you stress.
You ask, is it true?
Can I be absolutely sure that it is true?
So you're destabilizing the full grip of whatever belief it is, because no belief can fully represent reality 100% of the time, or very few can.
Then the third question is, how do I react?
What happens when I believe this thought, when I believe this belief?
And the fourth question is, who would I be?
Not what would happen, like behaviorally, but who would I be?
It's an identity-level question: who would I be without this thought?
And you can start to kind of do cult deprogramming on yourself, deprogramming yourself from the cult of your own lifelong process of tribalism, identity, experiences, etc.
And we can all be part of a culture that's trying to deprogram ourselves from the shared cults, or, you know, fractal cults, that we've all been part of in this cult-factory process.
But as you said, we don't really have a lot of time here.
And so that's where I just jump to: OK, what can we agree are the problems we have limited time on, and how do we figure out what will actually be sufficient there?
And I know, you know, people like our mutual friend Daniel Schmachtenberger are people I trust to be doing that kind of process, to be thinking about these various existential threats and societal problems and, you know, how we could get agreement on these things more quickly.
It's very hard.
You know, you could sort of say it's game over where we are right now, but I prefer to ask instead: what would it take?
What would be most helpful to get us back on that track, and how can we align our actions on a daily basis, you know, towards that kind of world?
So, you know, I find myself in a very parallel situation and in fact it may be two facets of the same gem, right?
So as you were trying to raise the alarm about the derangement of civilization and finding no adults in the room, as you know, I was watching my college go insane and recognizing that it was actually a manifestation of a thought pattern that was sure to spill out into the world and start deranging us at a much larger level, and of course the two things certainly interact.
Right.
And there is this very frustrating feeling of, you know, I try never to say I told you so, right?
It's hollow and pointless.
On the other hand, I think what I want people to get from someone in your position or my position is, look, I clearly did know what was coming, right?
That could be a fluke.
But it probably isn't, because you can look at what I said about why I thought it was coming, right?
And you can see that the thought process was right.
In light of that, the fact that I am currently alarmed about where we're headed, that I believe that the problem is dire, that as hard as it is to solve today, it only gets harder, right?
Those are things which should have our near complete attention because of what is at stake.
And we do not need to know the details of what will go down.
It doesn't matter.
It's like not knowing how the robot that you've just put in the driver's seat of the car is going to crash it.
Right?
But you're going to find out.
And so anyway, you know, I have little things that I say to myself and others if they want to hear them.
One is: it's very late, but it's not too late.
The point is, any part of you that says, oh my God, how are we going to solve this?
It's hopeless.
The answer is, well, if you think it's hopeless now, try ignoring it another two years, right?
So getting people motivated to confront this now. And really, your point about Ronald Reagan, it's a complex point, but let's just say superficially: the point was there was a Ronald Reagan, and he could be persuaded by a narrative, and it could affect policy.
And unfortunately, you know, the five major tech platforms are governed by people who frankly don't have the right kind of expertise to even navigate the hazard, right?
They know their business very well, but they don't know how the rest of civilization functions very well.
This is urgent, and what we really need is human discretion, right?
There has to be somebody with veto power over the tech platform's ability to try any new experiment in order to do what they see as their job, right?
There's too much at stake.
No matter what else is true, they are putting us all in jeopardy.
It's not their right, and it's obviously a dangerous and foolish experiment.
You're hitting on such great points here because then we get to some of the areas that you're closer to, which is what does it mean for tech platforms to be governing all of civilization and doing so not in an autocratic way?
And I know you've been the victim of, you know, sort of autocratic shutdowns in your own right.
And moreover, you said, you know, let's take Apple, for example, because I've actually used the metaphor that Apple is kind of like the central bank or federal reserve of the attention economy.
And they actually have the ability to do quantitative easing, or, maybe this is not the right metaphor, because people are against quantitative easing for reasons I understand, but they have the ability to sort of tweak the rules, the incentive guardrails, the IRB standards if you will, of the attention economy.
Apple can say, hey, we actually just don't allow this kind of psychological experimentation.
You can do sandboxing, you can do proofs about what you think is going to help or not.
We're going to make a list of what we currently deem to be just unsafe levels of harm or derangement that is occurring from certain platforms.
Now, again, what I'm describing you could say is what led to Apple taking down Parler and doing that autocratically and on their terms of, hey, you don't have enough content moderators.
Well, ironically, due to the cultural derangement syndrome, people saw that in multiple different ways, depending on which part of the reality-derangement fractal, you know, you were inside of.
So if you were on the right, you saw that as part of a left-oriented power grab, you know, political power within Apple and Amazon, et cetera, being politically motivated to de-platform a right-leaning technology product.
If you were on the left, you saw it as something different.
If you were trying to see it neutrally, and assuming that it was not politically motivated, you could say that they actually didn't have the kind of moderation needed to deal with the actual brewing problems there.
So again, what would make Apple trustworthy in setting up some kind of IRB process or in some kind of democratic council for all of these issues?
And as you said, the people who run Apple, you know, Tim Cook, Phil Schiller, Craig Federighi, who runs software at Apple.
These are people with expertise in business, with technology.
These are not people who have expertise in civilization design, or in essentially being a brain implant for 3 billion people, a brain implant for the collective psyche of humanity, which is essentially the expertise they would need to make the decisions that are protective of humanity at this level.
I want to point to this because it's actually an area of success.
Just recently, about a week ago, Tim Cook gave a speech.
In fact, he said in that speech, we cannot allow a social dilemma to become a social catastrophe.
And he was referencing The Social Dilemma in reference to a set of changes that Apple was making to go after app-targeting business models, the ability for apps to track you across different applications.
In other words, sort of taking down the advertising business model in its micro-targeted form and moving back to kind of a 1960s, 70s era billboard advertising, contextual advertising, topical advertising, but not micro-targeting.
So this is Apple essentially saying, hey, we have this whole ecosystem and we're going to nudge the ecosystem away from some of the toxic components of the attention economy, towards a less personalized, not micro-targeted form of the attention economy.
Now, this is a very small change, but it has deeply affected many of, especially, the Facebook and Google parts of the world, which are very not happy about it and are claiming antitrust, you know, pursuing actions against Apple for taking this unilateral action.
But I think we can zoom out and say, OK, there's two ways to fail here.
One is autocratic tech CEOs wake up and just make instant reactionary decisions based on the impulses of a present moment, and that's bad.
But the other way to fail is to not make decisions about things that are actual threats.
So we don't want to not make a decision and allow the Frankenstein to keep eating up the world and deranging the world.
But we also don't want autocratic decisions where some algorithm gets produced and suddenly Brett Weinstein is off of Twitter and Facebook and there is no appeal process.
And you happened to, you know, have the person from Facebook get back to you later, compared to the thousands of people who would have no appeal process.
So what we need is some kind of, you know, governance of humanity.
It's essentially some kind of democratic governance with the expertise of what it means to be a brain implant for the collective psyche of humanity.
Apple's in a very unique position because it governs the App Store and Google because it governs the Play Store on Android.
People aren't going to like the sound of that, because they don't like the power that that implies, but I think we have to say it: we have to have some governance process.
Yeah, it's both hopeful, because you point out that the power exists there, and hopeless.
I mean, even just the idea that we have to prevent this from becoming, uh, what did you say, a derangement catastrophe, uh, that idea is, I don't know, six, seven years too late.
We're already there, right?
It's the choice between a crash and a crash landing.
It's not a hard landing at this point, right?
We've got a serious problem and nobody knows how to turn the ship around.
And so it's a little bit like, you know, the captain of the Titanic beginning to grapple with the hazard of, you know, an iceberg as it shows up on the horizon.
It's like, well, it's the goddamn Titanic, right?
It's not gonna turn on a dime. You need to wake up faster, right?
And there's also the point that, okay, Apple finds itself in the position that you describe, and it doesn't deserve the position you describe.
It doesn't have even the superficial characteristics that you would imagine would allow it to navigate that position well.
In other words, okay, what is it that's supposed to tell, to inform Apple of the way in which it is supposed to steer things away from dangerous derangement phenomena?
Is it shareholders?
They are obligated to maximize shareholder value, which I already told you I suspect is not playing the role we think, because we see corporations now routinely making decisions that are nakedly political about who they don't want on their platform, even though it's hard to imagine how that enhances their bottom line, right?
You would expect a kind of indifference about content, in order to get everybody on the platform and not create a market for a competitor to, you know, leap up and serve customers who've been thrown off the platform.
You would expect a lot of things that we're not seeing.
So, what is that?
Is that shareholders deciding that they value something above financial return?
Possibly, in which case this is novel territory, I would say.
Maybe I'm missing something, but the idea is, you know what?
Apple can't do it.
You're right.
Global processes need global governance.
And of course, this freaks people out.
But my point is not, we need global governance.
My point is, global processes need global governance, right?
The alternative to a governed, a well-governed global process is an ungoverned global process, and that is a disaster.
That's where we are now, right?
And without taking any actions, we're in that dystopia.
If we take actions, we're in autocratic 1984 dystopias.
We need something that is actual global governance.
And, you know, you're right to say it's not like we would have wanted to be in this situation, where this particular set of human beings ends up at the top of this decision-making hierarchy, because they didn't get there because they had the expertise.
They got there because they were good at building attention-grabbing tech products, or the hardware-software, you know, 40-year trajectory of Apple.
But, you know, how we navigate from there, what that looks like.
I mean, you know, Justin Rosenstein, who's in the film, has a new book out called "The New Possible," and I wrote an essay for it.
And it's actually about visions of the possible from across the sort of economic landscape where we see these systemic problems.
And one of the things that's brought up a lot is these sort of citizen councils or citizen juries, to actually, if you know the James Fishkin work on deliberative democracy, bring groups of citizens into a process where they actually hear all the evidence from experts.
And it's proven to lead to pretty dramatic changes in people's understandings and beliefs, and then in the actions they would want to happen.
In Ireland, for example, there was a whole process around a referendum, I think, on gay marriage, and through, you know, a two or three day process, it led people who had even been against it in the beginning to actually vote, I think, for legalizing gay marriage.
And we could have processes like that.
But again, as you said, we have this sort of global, we have a global thing.
So do we do that in English?
You know, how many courts do we need?
How many juries do we need?
You know, one of the things I was planning to be ready to talk about is, with your situation of getting de-platformed without any kind of review, you know, we need appeals processes and courts, but if you think about the scale of accounts, the scale of messages needing to be adjudicated, we don't have juries and courts for enough of the people, messages, etc.
that are going to be flagged.
We're generating more, you know, disputes or cases than we have the ability to have courts to govern.
Um, and I think this is actually a fundamental paradox in technology, because the benefit of technology is automation and scale, the fact that you have fewer human decision-makers and you get more leverage through these automated processes.
But when a human judgment situation comes up, we need to have as many courtrooms and lawyers and judges and juries as we have issues that need to be adjudicated.
And if we have things that are consequential enough to cause irreparable harm faster than the court date comes up, when your case can be heard, you know, you don't want to wait a year, two years, five years to find out whether you get to come back to Facebook or Twitter.
We don't want to have Apple wait years to make decisions about, you know, the derangement syndromes and the IRB standards that they would try to use to, you know, enclose these perverse Frankensteins.
I think we need a ranked priority list, though, of how we deal with these issues and protect against all the forms of tyranny and dystopia that we want.
And, you know, you've many times pointed out the sort of Orwellian tyranny that you were the victim of.
And I think the other tyranny we've been talking about for much of this conversation so far has been the Huxleyan tyranny, the Huxleyan dystopia, and this is stolen from Neil Postman's book, Amusing Ourselves to Death: we always worried about 1984 and the ability to ban books and censor information, but then we didn't worry about this other world where we give people so much information that no one actually reads books.
No one actually has the attention spans.
Everyone is sort of stuck in their pleasure machine, their affirmation machine, their soma machine, such that they don't have a legitimate society.
And so I think if we make some lists of here's the dystopias that we're either in or we don't want to fall into, here's the ranking of those problems.
What are the things that need to happen at a global governance level to do it?
You know, this does look pretty bleak.
I mean, let's be honest about where we are.
You know, you and I have talked in the past about how hard it is psychologically to wrestle with staring these things in the face on a daily basis.
And I think that's actually part of this: psychologically, how can you look at some of these things?
But I think that's where we are.
And we have to ask what would help from that point.
And I'm curious if you see things differently or...
No, I mean, I think, you know, you and I have a difference in the way we look at this.
I think we both see a bleak picture, and I think there's an emotional difference in how we react to it.
But there's not a substantive difference.
I think the number of dystopias that we can allow ourselves to fall into is zero, right?
We agree that there are many, and there are therefore tensions between solutions.
But I want to divide two things, because you talk about the need for a vast court system in order to address all of the potential claims of an infraction, etc.
That's one way to do it.
The other way to do it is to separate the question of the derangement that arises through algorithmic competition from the question of speech entirely, right?
In other words, I don't know of an effort to not sell pens, ink, paper, any of these things, to people with a political perspective.
In other words, I think we have swallowed the idea that actually we're not going to prevent Nazis from rising by denying them paper.
Right?
It's not a good bet.
Right?
The world you have to build to keep Nazis from getting paper, but allow everybody else to have it, is not a tolerable world.
So, the point is, okay, the ship has sailed.
They're going to have paper.
That's not where the battle is.
We have to fight them on the landscape of ideas, not the landscape of writing implements.
And the question is, what is the online environment?
Well, it's two things.
It's a bunch of businesses battling and putting us in the danger that you're talking about, and it is also the public square, whether you like it or not.
And so, at the level of the public square, I want to know very clearly what the limits are of what can be said, and I do not want people policed for even obnoxious points of view, right?
Obnoxious points of view are going to have to be tolerated and we have to trust in the fact that given complete freedom to talk, the obnoxious points of view are driven out.
Bigotry declines, right?
That's been the nature of it and trying to police bigotry is a recipe to create exactly the opposite phenomenon.
So anyway, let's separate that out, and let's say the way to deal with the unimaginably large number of courts that we would have to create to police every infraction is to say: actually, the infractions are going to be fairly clear and few in number, because what you're going to be allowed to say online is going to be the same thing you would be allowed to say standing on a street corner.
Right?
And then, we can separate off the issue that's really a hazard to us, which is... So, we create a hazard if we try to police speech online, right?
There's no conceivable system that could do that well, and the dystopias that arise from bad attempts to do it are several.
But then we've got the other dystopia, which is the evolutionary novelty dystopia.
Where something decides to, you know, play magician, and it begins to evolve, and before you know it, nobody even knows how it works, much less how to regulate it.
There has to be some body of people, and here's the hard part to sell to people who have absorbed the true lessons of the magic of the market: somehow the people with discretion to say what we cannot do, what we must not be allowed to do to each other using technology, have to be independent of all this.
You can't be, you know, a wealthy person with a stock portfolio in which you're invested in all of this stuff and decide whether or not it's smart to engage in these experiments.
The perverse incentives are too many, and people, even good people, are very bad at managing perverse incentives.
So I don't know how we get there, but I do have the sense that in the end either you have to forbid the market from governing anything, or you have to have somebody who's immune to market forces in a position to say: actually, you can't do that, because you're going to derange people and cause a tremendous amount of harm.
Right?
You can't do what?
X.
In other words, if somebody had proposed Facebook and the idea is, oh, well, Facebook is a place where people can gather across arbitrary distances and talk about topics they want to talk about.
Okay, so far so good.
And, you know, they'll be able to see a feed of what their friends are doing, saying, etc.
Okay, so far so good.
And now that feed is going to be adjusted according to an algorithm that... and then the answer is, well, what is that?
At the point you begin to propose the capture of attention, something needs to say, wait a second, here's what happens downstream of that.
Or, you know what?
We don't know what happens downstream of that.
So in other words, if we go back to your analogy about gain-of-function research, We had a running battle about whether or not gain-of-function research was necessary in order to protect us from a pandemic, or was more likely to cause a pandemic than solve one.
Right?
Right.
Those of us who were on the latter side of that didn't win.
And those of us who were on the, hey, we got to do this side, did.
Now, is that the reason that we are now fending off COVID-19 and losing the battle, frankly?
Somebody needed to say, actually, no, because not necessarily here's the dystopia that arises if you do this, but we don't know what arises if you do this, right?
Precautionary principle.
Yeah, precautionary principle, which is, I admit, difficult to instantiate well, but some version of it has to govern us.
And this is kind of an age-old argument. You know, markets are just another form of evolution.
So do you just allow the system and technology to, you know, let people build things and fix it later? Versus, when do those things have the capacity to cause irreparable harm, such that we should instill the precautionary principle and say, let's not just jump ahead?
You get the kind of techno-libertarians that just want to, you know, build, build, build, no regulation, deregulate everything. But again, I think what we're finding is we have to be able to spot when the things that we're proposing to build are closer to this sort of gain-of-function research.
I know we keep referring to gain of function. I hope people don't fixate on that as the center point of our whole conversation here, or on all it could mean.
But I just mean that there are dangerous forms of tinkering, whether it's social tinkering or, you know, meaning-making tinkering or social-group tribalist tinkering, all of which Facebook has engaged in.
You could cause some real harm, and there has to be some process by which we stop that.
But there was another thing you were saying about speech and the Nazis and the paper and the pens.
I want to make sure we're at least acknowledging a principle that we can, I think, agree to and want to honor, which is that to whom much is given, much is expected, or with great power comes great responsibility.
And the greater the power, the greater the responsibility and accountability.
And we went through a decoupling of that.
And I'm really borrowing this from Daniel Schmachtenberger, and this is this is his insight, not mine.
But, you know, I can go to a kitchen store as a Nazi or as a regular person, and I can buy a set of knives.
Knives can be used to kill people, but they don't do a background check.
They don't force me to do knives training.
They don't have, you know, I can just do that because there's a limited degree of harm that I can instantiate as an individual.
And even though it sucks, we have a court process to later be able to find those people, et cetera, whatever.
But if I'm going to buy a gun, we have the Second Amendment, so I guess we have fair, equal access to guns, but we also have background checks, we have ID checks, we have in some cases training that's required to wield these kinds of weapons.
But if you're going to buy a tank, or a bioweapon manufacturing system, or ammonium nitrate, or these other things, we don't just say, hey, everyone gets access to that.
If you're going to, again, do gain-of-function research or have access to any of these things, we have to make sure that we're putting protections around it.
So now with speech, I think we have this temptation to say speech is just speech.
But if I'm actually broadcasting to 100 million people, and I have the ability to do that with no regulation, no limits,
I can say whatever I want.
I can say it with the most malicious intent.
I can do consequential analysis.
I can run gain of function simulations on what I could say that would cause different tribes to react in the exact same way.
I can literally build a machine that's generating speech and split-testing it across trillions of simulations, knowing which communities respond to my messages in which ways.
And I can engineer, um, almost like lab-generated red meat, political red meat, that I'm going to throw into the ecosystem.
In other words, I'm doing a kind of gain-of-function political research on what would cause the most havoc.
And I can do that degree of memetic manipulation and cause global chaos at that level.
And there's literally no distinction between the responsibility of someone with a hundred million followers on Twitter and someone with 10.
I think we've lost the principle of to whom much is given, much is expected.
Or at the very least, if you compare standards, broadcast TV stations that reach maybe a million people in a local area face more regulations and standards than we apply online.
And so I think these are new kinds of services.
Even Facebook is very different than Twitter in terms of text communication versus, you know, the kinds of posts that you can make on Facebook.
But we're certainly missing that principle, the responsibility, accountability, standards, and practices that are needed.
And I would put that in the list of the agenda of the Constitutional Convention that I believe is necessary to make this civilizational infrastructure more democratic and globally governed for the public interest and public good.
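To make the mechanism Tristan describes a few lines up concrete: the split-testing of messages across communities is, at bottom, a bandit algorithm run over message variants. Here is a minimal sketch, with every function name, metric, and number invented purely for illustration; nothing here is taken from any real platform's tooling.

```python
# Hypothetical sketch of the split-testing described above: generate message
# variants, measure each community's reaction, and keep amplifying whichever
# variant provokes the strongest response. All names/metrics are invented.
import random

def split_test(variants, communities, measure_reaction, rounds=1000, epsilon=0.1):
    """Epsilon-greedy bandit over message variants, tracked per community."""
    # stats[(community, variant)] = [running mean reaction score, sample count]
    stats = {(c, v): [0.0, 0] for c in communities for v in variants}
    for _ in range(rounds):
        for c in communities:
            if random.random() < epsilon:
                v = random.choice(variants)  # explore a random variant
            else:
                # exploit the variant with the best mean reaction so far
                v = max(variants, key=lambda x: stats[(c, x)][0])
            score = measure_reaction(c, v)   # e.g. shares, angry reactions
            mean, n = stats[(c, v)]
            stats[(c, v)] = [(mean * n + score) / (n + 1), n + 1]
    # The most provocative variant found for each community.
    return {c: max(variants, key=lambda x: stats[(c, x)][0]) for c in communities}
```

The unsettling part is how little machinery this takes: a reaction metric plus a simple explore-and-exploit loop is enough to discover, community by community, the "red meat" that provokes the strongest response.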
Well, I agree with a lot of that.
I sort of feel like we need a constitutional convention, and we certainly cannot afford to have one.
It will end up a disaster if we do.
So the answer is we need something to reckon with things at that scale.
Yeah.
One of the things that I say about the market, what is the market, what can we allow it to do and what shouldn't we, is that there's a natural distinction between the following things.
Markets are excellent at figuring out how to do things.
And they are appalling at figuring out what should be done.
Right?
They tend to do things like exploit defects in your personality, find weaknesses, amplify insecurities in order to make money.
And we can't let them do that.
It's not kind.
It's not decent.
Right, so there has to be some sort of human discretion about what we allow the market to do, and then there has to be liberty to allow the markets to do those things.
But the other thing that I'm increasingly convinced of is that we have a process in which, once things show themselves to be profitable, they cannot be reversed, right?
That the political dynamics are such that once you've made a huge profit at something, you're unstoppable.
So we can't go in reverse.
And that is in and of itself an existential threat.
I won't go into it here because we're pressed for time, but that will actually cause the equivalent of civilization's senescence.
It will grow feeble and collapse because of that dynamic.
And so, I mean, it is amazing.
Look at the technology we are using to have this conversation, right?
We are almost as if face-to-face having a conversation in real time which will be broadcast to hundreds of thousands of people, okay?
That's an amazing level of technology and yet we cannot figure out how to stop a technological process that everybody smart understands is putting us at existential risk.
You can unplug it, right?
And, you know, the problem is even to you and me, I'm sure the idea of tech platforms being unplugged because civilization depends on them being unplugged is almost unthinkable.
But how did we get to the place where there are processes that we have unleashed in the world that are both a threat to us and inconceivable that they could be unplugged?
Yeah.
There are days when I just think we need to unplug these things.
And I actually want to make a distinction because what we're doing right now is a good example of, I think, a much more humane technology in the sense that our representation to each other, the fact that I'm making almost eye contact with you, the fact that my trustworthiness and the credibility of what I'm saying and how I'm saying it would be very different than, I think, when you think about the medium of Twitter or Facebook, if you were sort of harboring really crazy views or whatever.
You would have almost an inflated sense of credibility talking about those things in those mediums, because they subtract away how credible that person looks when they're saying all those things. And I think, if you were to rewind the last 15 years and you just had Zoom conversations, conversations that were deep like this, that were two hours long, even with 10, 20 people on them...
Would we see the kind of mass bifurcation of reality?
I mean, we still would have partisan media, I want to make sure I'm acknowledging all that, etc.
But would we have the mass, you know, fractal fracturing of reality?
Would we have the degree of sort of out-there, crazy-town kind of thinking that we have?
I just don't think we would have those things, because we would be in these long-form mediums where we can say, wait, hold on, Brett, I want to make sure I understand you correctly.
Were you saying this or were you saying this?
Now we need modeling of that kind of turn-taking too.
We need modeling of good epistemic process.
But that's, I think, the thing that you're doing with this podcast and that others are doing.
And I think this kind of medium is much more humane than something that is automated gain-of-function research, sort of, you know, tinkering with human evolutionary instincts and then letting things go, basically pandemic memetics.
It's a pandemic memetic complex.
That's what it does, is it creates this sort of global spread of information algorithmically.
That was not safe.
This is much more akin to safe. And if I could pull the plug, and you asked me this at another thing we saw each other at, I think about two years ago, you know, you'd be one of the few people I would trust to kind of pull the plug. And in some ways I think this would be a plug I would be willing to pull: to take away the sort of algorithmic social media and leave ourselves with this sort of long-form, you know, podcast-like, long-Zoom-conversation-like medium, with even 100, 200 people listening and interacting.
That would be a much better world to live in.
This is going to be a marvelous point to end on, because it sort of wraps up much of what we have been discussing.
Because the point I would add to what you just said, with which I 100% agree, is that there is a reason this is safer.
It's not arbitrarily safer.
It's safer because, while it is fully technological in every way, it allows your built-in heuristics as a human being to function, right?
In other words, somebody who was standing three feet from you and me talking at a cocktail party, were we to have the same conversation, would have the same ability to evaluate, are these people full of shit?
Do they know what they're saying to each other?
Are they attempting to persuade others of their credibility?
Do they have a financial interest here that isn't obvious?
You could make that evaluation in a normal room and it would, you know, it wouldn't necessarily be right, but it would be plausible that you could do it.
You have enough information to go on.
And so anybody tuning into this conversation will be able to detect things. For example, they may not perfectly be able to detect whether there are any edits.
I don't think there are going to be any edits here, but they will be able to see whether the conversation flows as if unedited.
They will be able to tell that you and I are responsive to each other, that neither of us is working from talking points because each point follows from the next in a very natural way.
And so the point is, your built-in set of heuristics for how human beings sound when they do or don't know what they're talking about is applicable here in a way that it can't be when things are spliced together, and an algorithm is playing an unknown role, and the algorithm is changing, and you can't even detect that if your heuristic worked yesterday it might not work tomorrow, right?
All of those things are the challenge and the closer things, just as with diet, the closer things look to what your ancestors have been eating, the more likely they are to be well dealt with by your system.
And by the way, everything you're describing, when we use the word humane in our founding of the Center for Humane Technology, it actually comes from my co-founder Asa Raskin.
His father, Jeff Raskin, started the Macintosh project at Apple.
And the primary notion is that humane means being respectful of human vulnerabilities, and essentially understanding what was adaptive about all the evolutionary architecture, which you would want to express in its fullest capacity in a healthy way. That's where we need to look to get this kind of dimensionality, this kind of outcome, like you said.
So just to say that the philosophy you're talking about is exactly what we mean: you have to be looking not at what would make technology more sophisticated, but at how we could be more sophisticated about human nature and the evolutionary roots that got us here, in a healthy way.
And how can we reappropriate more and more of those lessons?
I think there are interesting people across the board who are doing this.
I think we need to do this kind of mass excavation for the babies we threw out with the bathwater of other cultural traditions.
I mean, I think even some of Jordan Peterson's stuff of taking personal responsibility, as simple as it is.
These are kind of wisdom hacks that bolt onto the human evolutionary system.
One more that I might leave you with, one of my favorite ones recently, is the serenity prayer. The notion that, you know, God gave me the wisdom to... I'm going to botch it, what is it? God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
If you think about that, and I'm curious if you agree, from an evolutionary perspective, in the ancestral environment on the savannah, attention was coupled with agency.
I put my attention on that rock, I can move that rock.
I put my attention on that lion, I can run away from the lion.
So what I put my attention on is directly coupled with how I can act in relation to it.
And so our brains don't naturally distinguish when we put our attention on things between what we have agency to do something about versus that which we don't.
And that's why I think of that serenity prayer, that wisdom prayer, as a sort of bolt-on humane technology that says, hey, your mind in this new environment doesn't make that distinction.
So we have to remind ourselves to point our attention to the things that we can change.
And humane technology is like that.
It's based on an insight about the fact that our attention is agency blind or non-discriminating.
And we need help to put our attention on the things that we have agency over.
Just to leave the audience maybe with an optimistic vision: imagine a humane social media, or, you know, other sorts of digital technology, helping us put our attention on the things that we can actually change, things that ladder up to bigger changes.
Imagine a world in which every single time we use technology, it's helping us put our attention on things we can actually change, and we can get on that treadmill of positive feedback, of feeling like the daily choices we make are leading to a better world for ourselves, for the people around us, and for the world.
And that kind of attention economy is a much better attention economy than the one we have today, which is basically infinite learned helplessness.
Because even if I'm pointing my attention at the important stuff like, say, climate change in a Facebook news feed, it's basically, yeah, it's worse than you thought and there's nothing you can do.
And I can scroll my finger and see it's worse than you thought, nothing you can do, worse than you thought, nothing you can do.
We want a world where we have agency over the things we put our attention on.
And technology could make that the default setting, as opposed to something you have to consciously bolt onto your own consciousness.
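The contrast Tristan draws here, a feed ranked purely by predicted engagement versus one that also weights the viewer's agency, can be sketched in a few lines. This is a toy illustration under invented assumptions; the fields, weights, and headlines are all hypothetical, not any platform's actual scoring.

```python
# Toy contrast between an engagement-ranked feed and an "agency-aware" one.
# All fields, weights, and example items are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    engagement: float  # predicted clicks/reactions, 0..1
    agency: float      # how actionable this is for the viewer, 0..1

def rank_by_engagement(feed):
    # Today's default: surface whatever is predicted to grab attention.
    return sorted(feed, key=lambda i: i.engagement, reverse=True)

def rank_humanely(feed, agency_weight=0.6):
    # Blend engagement with agency so actionable items surface first.
    return sorted(feed,
                  key=lambda i: (1 - agency_weight) * i.engagement
                              + agency_weight * i.agency,
                  reverse=True)

feed = [
    Item("Crisis worse than you thought, nothing you can do", 0.9, 0.05),
    Item("Your city council votes on transit funding Tuesday", 0.4, 0.90),
    Item("Neighborhood group seeks volunteers this weekend", 0.3, 0.95),
]
print([i.headline for i in rank_humanely(feed)])  # actionable items first
```

The design choice lives in the single `agency_weight` blend: raise it and actionable items surface first; set it to zero and you recover the learned-helplessness feed described above.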
Wonderful.
I want to just point out the flip side of it.
So you're absolutely right that our attention is essentially allocated to those things that an ancestor would have had agency over, and they would be programmed, they'd be wired, to ignore everything there was no point in thinking about, because there was nothing to be done, right?
Now the problem is, if you're Mark Zuckerberg, then you could make decisions about algorithms that could plausibly end the world.
So your attention really ought to be on that hazard in a way that an ancestor would never think about it.
Because there's nothing an ancestor could do that would end the world, right?
Right.
There's no decision that works that way.
Or very, very rarely.
You know, you could maybe screw up your... You could take your population across something or other and go extinct.
But in normal functioning, there was no mistake you could make that would have that kind of impact, and so we need to get these people upgraded so they understand...
which I realize is part and parcel of your formulation here.
So anyway, okay, I feel like we've had the introduction to a very good conversation that we now need to have.
We should definitely do something like this again sometime.
It's been a fantastic conversation.
I really appreciate getting to dig into these topics with you and go deep.
Great.
It's always great to talk to you about this, and just to say in closing, I do think there are people who can be trusted to think this way on humanity's behalf, and you are very high on my list of that small fraction of people who has that kind of wisdom and decency.
So anyway, thank you for all you do, Tristan.
I really appreciate it, and I look forward to the next one.
It means a lot to me, Brett.
Thank you so much.
Where should people find you?
You can look up more of our stuff on HumaneTech.com.
You can find me on Twitter at @tristanharris, although I don't really use it very much given the post-Social Dilemma world.
But I think our podcast, Your Undivided Attention, is a great place to go deeper into some of these topics.
Great.
That's excellent.
We will post links to those in the description.
And be well, Tristan, and to everyone else, thanks for tuning in.