This episode is co-hosted by Dr. C. Thi Nguyen, a philosopher who teaches at the University of Utah. His breakthrough book is about agency in games, in which he shines a light on disquieting aspects of our gamified lives and the question of whether we’re still able to act on our own values. We’ll talk about Dr. Nguyen’s key ideas, all of which are super useful for navigating conspirituality: the gamification of Twitter and other systems of “value capture,” how the feeling of knowing something really clearly can foreclose on the desire for nuance, and the pleasures and perils of “moral outrage porn.”

Show Notes

Polarization or Propaganda? (Boston Review)
Why We Call Things ‘Porn’ (New York Times)
Escape the Echo Chamber (Aeon Magazine)
Why Games are Good but Gamification is Terrible (Conceptual Foundations of Conflict lecture)
Who Trains the Machine Artist? (Daily Nous)
The Gamification of Public Discourse (Royal Institute of Philosophy lecture)
How We Can Understand Ourselves Through Games (OUP)
Group-Strapping, Bubble, or Echo Chamber? (SERRC)
Am I in an Echo Chamber? (Open for Debate)
On gurus and seductive clarity (Decoding the Gurus podcast)
What’s in a game? (Philosophy Talk radio show/podcast)
Games: Agency as Art (New Books in Philosophy podcast)
Cheap talk (Embrace the Void podcast episode, talking about clarity porn)
Echoes in the void (Embrace the Void podcast episode, talking about echo chambers)
-- -- --
Support us on Patreon
Pre-order Conspirituality: How New Age Conspiracy Theories Became a Health Threat: America | Canada
Follow us on Instagram | Twitter: Derek | Matthew | Julian
Original music by EarthRise SoundSystem
I just want to remind you that we are on all social media platforms, at least the ones that we are on, which is Instagram, where we post most of our stuff, YouTube, Facebook, and then on Patreon at patreon.com slash conspirituality for as little as $5 a month.
You can support us, keep us editorially independent and get access to our Monday bonus episodes and other material.
And I'll just say that we also created a t-shirt for fun.
It's on our social media channels now, but we might do a few more of those.
We take requests, just something to keep the conspiritualists in check if you want to rock it in public.
Conspirituality 55: Games Against Humanity with C. Thi Nguyen.
This episode is co-hosted by Dr. C. Thi Nguyen, a philosopher who teaches at the University of Utah.
His breakthrough book is about agency in games, which allows him to shine a light on disquieting aspects of our gamified lives and the question of whether we're still able to act on our own values.
We'll talk about, excuse me, We'll talk about Dr. Nguyen's key ideas, all of which are super useful for navigating conspirituality, the gamification of Twitter and other systems of value capture, the feeling of knowing something really clearly can foreclose on the desire for nuance, and the pleasures and perils of moral the feeling of knowing something really clearly can foreclose on the desire
So the typical thing to do in having a guest on like C. Thi Nguyen is to make them jump through all the basic hoops of their research before getting into specific applications and nuance.
And Dr. Nguyen has been on a bunch of pods already jumping through all of those hoops, and we'll link to those, as well as to a concise lecture he gave at the Royal Institute of Philosophy called The Gamification of Public Discourse.
So what I'll do in this intro to our conversation, by way of groundwork, is to throw out, summary-style, some basic premises and definitions, so that we can save Dr. Nguyen some work and dive in deep.
And I'm going to start with a little context around some of the challenges we face with this podcast, but also more broadly in our lives.
It's abundantly clear at this point that our global communications are as precarious as they are networked.
We speak and post constantly into this paradox of ultra-connectivity on one hand and hyper-polarization on the other.
And we have the impression we are communicating with the world even as we narrow our filter bubbles and even as we soundproof our echo chambers to outside information.
And two of us on this podcast lived in cultic environments and so we know what this narrowing does.
We also, in this online world, can search for anything we want while being blind to the values that prescribe our desires and how they are enabled, if not outright generated, by the technology that promises freedom at the same time that it delivers ads.
And we hear this phrase, epistemic crisis.
We know it means that finding shared sources of reality seems increasingly hopeless.
And some big journalists recently are even throwing in the towel and moving to subscription-based models of monetization.
And that's beyond the reach of editorial guidance and fact-checking.
And at the same time, we are barraged by charismatic personalities who have created fortress-like edifices of supposed knowledge and perhaps feigned certainty.
We use reductive technologies to opine about complex subjects, and in the worst cases, we pornify political positions and deeply felt social realities through memes.
And, as we know, this is the communication structure of deep web, nihilistic spaces like the Chans, which gave birth to QAnon, and then January 6th.
And worst of all, the technological pressures, especially those on social media, that we're navigating every day, where ideas are driven by likes instead of concrete value, push us not only away from nuance and complexity in our public conversations, they reward the inflammatory, the engorged, the outrageous and polarizing.
It's kind of like automation and food production and how it drove us towards nutrient-poor but plentiful food.
For many, anyways.
In the same way automation and dialogue has driven us towards a kind of babble.
Now, all three of us come with fairly broad experience in navigating online conflict.
I've been an anti-cult journalist and activist.
Julian has been very active in bringing a more psychological and science-informed lens to embodied awareness practices.
And, you know, he asserts that critical thinking and contemplative practices should go together.
And meanwhile, Derek has fought many good fights in science communications.
And along the way, we've picked up personal strategies for less clutter in communication.
I've been influenced by non-violent communication and trauma-sensitive language, although both have different limitations.
Julian has committed logical fallacies and spiritual bypass tactics to memory, and he can really sniff them out like a bloodhound.
And Derek is really good at evaluating research literature.
But I think like many of you, perhaps, we have become increasingly aware of being caught in the social media hall of mirrors.
We're haunted by the social dilemma, even if that documentary lets our tech bro overlords off a little bit easy.
So here we are, and happy to have C. Thi Nguyen to help sort through these issues.
I first came across him on Twitter in conversation with Decoding the Gurus and Embrace the Void, two podcasts he went on to guest on.
I devoured his papers, shared them with the team, and I'm happy to report that Nguyen's point of view on these issues is refreshingly clear in a way that perhaps only really good philosophy is suited for.
So he's got a lot of articles out there and we'll link to his research archive page but for our purposes I'm just going to point to four ideas, four phrases that he has coined or helped to coin.
They build on the philosophical literature that he'll help unpack and I think these ideas specifically help with thinking about the topics we cover.
Number one is the gamification of discourse.
So, as a student of the philosophy of games, Nguyen argues that just as the medium of paintings is paint, the medium of games is human agency.
The game designer tells you who you are and defines for you what you want.
And this can be very relieving because it puts you in a space in which there can be certainty of purpose according to rules and standards for success or failure.
But what happens when social media engineers the agency of speakers in a game-like relationship?
What happens to their values and morality?
Do they play for truth or for the win?
The second idea is value capture.
So if Twitter engineers an environment that encourages us to win rather than to be honest or to offer nuance, it has captured our value-making process, according to Nguyen.
How can it not begin to impact how and what and why we post?
And how will we twist our values to fit the contours of the algorithms?
And this is a smaller sector within a larger problem that Nguyen brings up.
What happens when our values become driven by any form of quantification that strips away complicated human stuff like direct experience and emotional presence?
What happens when teachers teach to the GPA, when economies are evaluated by GDP, or when human health is snapshotted by the Fitbit?
This idea is at the heart of Nguyen's work, in my opinion, because it's really about questions of integrity and autonomy and inner freedom.
Are we allowing ourselves to develop and express our own proper values, or are we, to use his phrase, buying our values off the rack?
Thirdly, Nguyen talks about the seductions of clarity.
How everyone's going for the mic drop, the viral tweet, the hot take.
In an age of uncertainty, the vibe of clarity and universality is a hot commodity.
But what relief does it really bring, and for how long?
With this phrase, Nguyen really brings out an eerie quality of charismatic communications.
That the clearer something sounds, the less inclined the listener will be to continue thinking about it.
Is the hot take an invitation to learning, or a spectacle of performed mastery?
And finally we have Moral Outrage Porn.
And this idea comes in collaboration with Bekka Williams, and it's quite succinct in describing the emotional junk food of the performative outrage cycle.
Especially, I feel, in left-progressive or pseudo-left, pseudo-progressive spaces.
Porn, as Nguyen and Williams define it, is anything that offers pleasure without investment or commitment.
So what happens in this age of outrage when outrage itself becomes a pleasure inflamed for its own sake?
Here's our conversation.
So welcome, Thi.
Thank you so much for taking the time.
Thanks.
It is great to be here.
I've been actually, like, plowing through your back episodes, mostly while I'm climbing or taking care of my kids, and they're amazing.
Oh, that's really good to hear.
Well, I mean, maybe we should just start with the most obvious thing, which is your framework for how Twitter gamifies public discourse.
So we've linked to various articles that you've written and you're going to go into detail, I'm sure, here as well.
What we've been dealing with on the podcast, as you know, is how there's kind of like another level of this point scoring that happens within the content that we cover.
So not only is it influencers on social media who are often bullshitting at this micro level of trying to score points rather than focusing on what is morally relevant or true, but then also there's like a storyline that they're pushing as well.
It's not just that the technology is something that they can use as a game, but it's also that they become heroes and they can save the world.
And so I'm wondering if you think about gamification, not just on the micro level of individual posts, but how people build identities and storylines as well.
Oh yeah.
So I mean, there's like two levels to thinking about what's going on as gamification.
So like the really simple mechanical level is something like, look, something like Twitter, something like Facebook offers you points.
And games are thrilling because you can see those points accumulate.
And the basic, I mean, my basic view is something like: look, in games, you get to have this moment where you actually see every success clearly and unarguably recorded.
But to do that, you have to align your values with whatever the game's criteria for scoring is.
And I think that's okay in actual games.
A big difference between me and a lot of other people who think about games is most people out there are like, games are great, so let's gamify fucking everything.
And I tend to think, no, games are great because they're secluded and temporary, because we can adopt these really clear point systems for a small amount of time to get absorbed in them.
But that's really damaging when you try to do something like that on Twitter, because on Twitter, you have to change what you value.
I mean, I think there are a lot of reasons that you might come to Twitter, but the point system of Twitter, for a lot of reasons, some of which are just the limits of what can be mechanically captured by a large-scale mass technology, Twitter can only capture likes.
Among other things, I think it's really important that likes are really short.
The input for likes, uh, focuses you on the first moment.
I mean, literally, this is like a simple technological thing.
Like, if I see a tweet and at first I'm like, ah, that's stupid, but then it ferments in me, it really moves me, and I change my mind a week later, not only is it hard to like it then, it's really hard to even find the same tweet, right?
So tweets capture short-term data.
And I think it's much more likely that you'll like something if you already agree with it in the moment.
But then if it changes your mind, that data is lost.
So there's the first level of gamification that I often think about is just what happens when you get aligned with a simple system of points.
But there's something bigger.
And I was actually thinking about this a lot while I was listening to your back episodes.
Because for me, like, one of the great draws of games is a profound sense of value clarity, and clarity about your purpose and meaning, right?
Right.
So, and there are two levels of this.
So one level is just like what the point is of what you're doing.
So I think like in our actual life, things are like incredibly complicated.
Like, you know, I have 50 billion different values I'm trying to pursue and each of them is really hard to adjudicate.
I've been thinking about this a lot because I'm a parent and I have a kid and I'm trying to be a good parent and then I go up and I see that the kid has ripped apart his room and made this towering stack and I'm like...
Have I succeeded in raising a happy and creative child?
Or am I raising a monster?
So the criteria for evaluation are really unclear.
And there's this vast conflict between us about what the values are.
And so in a game, you get this experience.
Where you know for one moment exactly what the point is.
You know exactly what you're supposed to be doing.
You know exactly how you can make moves there.
And I think this is, I was thinking about this a lot while I was listening to your back episodes.
In games, you get this momentary agreement between everyone about what matters, about what's important, right?
You get this, like, crystallized social clarity where we're like, yeah, okay, points.
That's what matters.
Victory points.
We settle them the same way.
We all care about the same thing.
We all agree on how they're counted.
And, like, the basic existential nausea of living in a morally complicated pluralistic world goes away.
And so the larger gamification that I'm interested in is the possibility that there are systems out there and communities out there that might promise you that kind of clarity in the real world.
And I mean, my own work, I started with conspiracy theories, but also like bureaucratic language, where you all accept that like, whatever, like, the internal metric that your company uses is what matters.
Like, both of those, if you buy in, offer you the same kind of value clarity.
Actually, I wanted to ask you all, since you're... I mean, I feel kind of silly being on this podcast, because I'm like the super abstruse theoretical person that doesn't have any of the rich knowledge that you all do.
But I really wanted to ask you all like how much of that description rings with your experiences with the world you're in?
Well, if I could jump in right here, Thi, I feel like, you know, looking over your stuff over the last couple weeks, I've been struck by this perception that, you know, if conspiracy theories are kind of like, you know, philosophical rigor porn or intellectual exploration porn, right, you're offering something so much more nutrient dense.
And I feel like I'd love to hear, especially for our listeners, just the two or three sentence summary of this idea you have of what the evil belief manipulator out there would do if they were trying to make narrow-mindedness sticky.
Right.
I think that's the way that you say it, because it's such a beautifully thought-through set of interlocking ideas.
And for me, it really relates to everything we talk about.
One approach that a lot of people have when they're looking at things like MAGA, or post-truthers, or whatever wild conspiracy theorists are out there, is to think on a very individualistic basis.
So, the presumption is, for a lot of people, if a person believes wrong stuff, it must be their fault, right?
They must have done something wrong, they must be lazy, they must be cowardly, they must be just, like, brutally irrational.
And I'm really interested in the way in which our thinking is deeply socially embedded and dependent on other people, and also the ways in which, I think in terminology you might like, that our thinking processes might get hacked.
So here's a basic idea, and let me put this in the way that philosophers put it.
So there's this dream that a lot of people have of perfect rationality, of what it would be like if you had infinite time and infinite resources and infinite intelligence to figure out everything.
But that's not us.
We are incredibly limited beings, and we live in a world in which I think the basic fact of our world is that none of us can know more than 0.001% of everything there is to know.
I mean, there's this dream, I think, in the background of intellectual autonomy, of like, yes, I can know everything!
By the way, we should talk about this later.
I think one of the things that a lot of conspiracy theory and other things from this world give people is the sense that they do have a mechanism that's powerful enough that they can understand the world by themselves.
I mean, the basic problem of this scientific moment, which a philosopher named Elijah Millgram made really clear to me, is that everything we understand and know is distributed across such a wide network of interlocked specialist scientists that... I mean, one way to put it that freaks some of my students out is: how many people are you trusting when you get in an airplane?
You're trusting the pilot, you're trusting the engineers, you're also trusting the instruments they make, you're trusting all the sciences of aeronautical engineering, which means you're trusting the statistical analysis tools they're using, which means you're trusting all the way down, and you couldn't possibly hope to understand each of those levels of trust that you are implicitly involved in, right?
Not only can you not understand it, you don't even know who you're trusting.
I mean, I think the point is, the content of who you're trusting... like, if you trust your doctor, immediately you're trusting everything behind that, like the peer review system and the medical... it's just, it's wild, right?
So, the basic fact of us, for me, is that we are limited beings that are dependent on other people.
Annette Baier, who's one of my favorite philosophers, puts it this way.
The basic fact of our social existence is trust, and the essence of trust is vulnerability.
To trust is to entrust other people to figure things out for you, which we have to do.
It's weird that people lump together science and intellectual autonomy, because what science has given us is the opposite, right?
So now we have to do this wild thing.
So we are supposed to understand the world and navigate it, but we have like enough time to understand 0.00001% of it.
And so we need to have, and this is my idea from the paper you're talking about, "The Seductions of Clarity," a way to guesstimate what's worth spending your time on, and what's not worth spending time on.
So, cognitive psychologists often put a lot of this stuff in terms of heuristics.
Like, since we can't figure things out exactly, we need heuristics: quick rules of thumb that get us roughly to the right answer.
So one thing that I think is really interesting is we need a heuristic for what we're thinking about and what's settled, what's done.
And so here's a theory.
This is partially based on philosophy, partially based on reading empirical work, partially on how conspiracy theories work on people.
So my suggestion is that, for many of us, we use the feeling of clarity as a sense that we understand something, and a feeling of confusion as a sense that we don't and need to understand it more.
And so I think, if you want to really use this to manipulate people, a powerful tool would be if you could simulate the feeling of clarity.
If you give people the feeling of understanding separate from actual understanding.
So, what might this involve?
So, what's really interesting for me is, there's actually a bunch of philosophy about this, mostly from the philosophy of science, which is interested in the fact that what scientists seek is understanding.
So, those philosophers have been talking about what real understanding is, and what they say is, like, real understanding is not just, like, knowing a bunch of facts, right?
Separated facts.
It's this coherent whole, and what the coherent whole looks like is, you have a model.
The model has powerful explanations.
When you adopt the model, parts of the world fall into place and become coherent, and you can communicate the model to others and communicate your understanding to others.
And so my thinking was, like, what would it be like if I was, like, evil?
And I wanted to fake this.
And what you would want to do is to create a model that really gave people this powerful sense that suddenly all the world fell into place.
What it's like to have an epiphany is to have things fall into place.
So if you want to manipulate this, you want to create a system that lets people hyper-easily make everything fit into some kind of model.
And this is, to me, what a conspiracy theory looks like.
It's also, by the way, to me, what we know as bureaucratic language about how you justify expenditures in a university looks like.
But I think what's probably interesting to you is a conspiracy theory is something... it's not like a conspiracy. I mean, I'm sure that you know all of this, and other people I talk to don't believe this, but conspiracy theorists are incredibly intellectually active.
And the conspiracy theory is made so that they can enact, over and over again, the experience of having an unexplained phenomenon, explaining it quickly and easily, fitting it into a model, and making it make sense.
When you do that, when you have that capacity, that confirms to you the sense that your background explanation is finished, is the right one, and that you don't need to investigate it any more deeply.
It gives a sense of mastery.
It gives a sense of control and confidence.
But, you know, you asked before this last piece how your work resonates for us.
And for me, much of this ties into the investigation you do into agency in all of this.
That the sense of mastery that the conspiracy theorist feels when they have at least captured what they feel is a whole picture, or at least they have a model for making sense of the new clues, and they know what they're going to do with each new Q drop or whatever, or they know how they're going to interpret the Fauci email drop before it even happens.
That process of certainty-making gives people a sense of control, but it's fundamentally illusory, because the preordained rules of the gamification have already sort of taken the reins and pointed people in a particular direction. So there's a relationship between scoring points and becoming certain.
It's almost like the more points you score in the gamified discourse or communication platform, the more your content feels certain.
So that's one of the things that I get from the ideas that you put together.
One of the things that you can pull on is this really interesting literature on the sociology and history of quantification culture.
I can really recommend a bunch of books.
Theodore Porter's Trust in Numbers, Sally Engle Merry's The Seductions of Quantification, and Geoffrey Bowker and Susan Leigh Star's Sorting Things Out.
Basically, the numbers present an air of finality.
So Sally Engle Merry, in her discussion of UN indicators, says: look, you can generate numbers that are really good.
You can generate numbers using good data and good methods, and there is a finality to them.
But even if the methods are bad, numbers present with an aura of finality.
They look finished.
They look objective.
Right.
So here's one, this is something I think about a lot in life.
I think there's a kind of, I don't know, objectivity laundering with a lot of numbers.
You can have a system that has really crappy subjective inputs, and you run it through something mechanical, and people come out the other end and say, it's an objective process.
For me as a teacher, what this looks like is grade point averages, right?
A lot of the inputs are really subjective, based on teacher judgment.
Everyone knows that it's really subjective, but you have the system for processing, and you output this number.
People are like, oh, well, that's a good way to judge someone because... anyway. But what I really want, hold on, give me a second.
You were saying something and I wanted to go back to it cause I think this is really important.
Okay.
I think this is the most important thing that's been in my mind as I've been listening to your stuff and thinking about what I've been working on.
So for me, games shape agency.
They shape what you want and what you can do.
So there are three basic things that game designers manipulate.
Game designers manipulate what your goal is, your motivations, right?
They tell you what the points are.
Game designers manipulate your affordances; they tell you what your abilities are in the game.
And they manipulate, sorry, the environment that you're running up against, the obstacles.
And I think one of the basic elements of game design is that game designers can harmonize these so they fit.
So what I mean is like, in our actual life, what we want to do, our abilities and the world do not fit, right?
A lot of the times what we want to do either involves an atrocious amount of horrible mind-numbing work or is just impossible, right?
Like we don't, we don't easily fit the world.
But in a game, it's been made... I mean, a lot of the times I think about Mario. Super Mario Brothers is a game where you want to go right, you're given the ability to run and jump, and the world is full of things that are just the right size that you can barely run and jump over, right?
So it's like something you can accomplish that's a little bit difficult.
It's all been right sized for each other.
And I was thinking about this because one of the things that keeps coming up, at least in my reading (and I've been spending more time reading about the alt-right world and certain online conspiracy theory networks), is this comment that people feel empowered when they acquire this belief system.
There's some really nice, I can't remember the journalist, but there's some really nice reporting work on flat earthers.
And one of the comments that people say is something like, when they became a flat earther, they no longer felt helpless.
They felt, like, empowered to understand the world themselves.
And I keep thinking that, like, the nice thing about a conspiracy theory is it fits in your head.
Like, you can use it.
Now, suddenly, the tool you've been given fits the world.
Like, right now, if I wanted to explain any real thing in the world, I'd have to start asking other experts, because I can't hold even one one-hundredth of it in my head.
But my sense is that when you have one of these conspiracy theories, it's some work to apply, which is nice.
That's why it feels like a game.
You don't want a game to be too easy, but.
You can find an explanation.
You have the materials to make that explanation work, given some time and effort.
And so it reshifts things.
I just covered a study on anti-vaxxers that looked at anti-vaccination as a sort of social network.
How much do people's medical beliefs have to do with fitting into a community?
They came up with 22% of Americans being anti-vax, 14% sometimes and 8% always, and predominantly in the 8% cohort, you had people who, beyond medical beliefs, were really focused on the community aspect as part of fitting in.
And then just last night, I started reading a galley of Will Storr's new book, which is called The Status Game, where he makes the argument that pretty much everything we do is to try to position ourselves on some sort of level of status.
So in your work on games, how important is community and how important is status from what you've researched?
I'm not sure as much about status, but I think one of the strange links that I keep finding between my thinking about games and looking at conspiracy theories and conspiracy theory worlds is the way in which communities can heighten the sense of clarity and purpose.
So, I mean, um, let me back up a second.
An interesting experience you can have when playing a game is, even when you're arrayed against opponents, victory comes in the same terms.
You share a currency of meaning and value, but then when you're on a team together, you have this remarkable experience of doing something with other people using exactly the same tools and exactly the same purpose, right?
I think there's this weird sense in which being on a basketball team is this artificial experience of total social harmony for a moment, right?
At least united in purpose.
And this feels to me like the experience of being in some of the communities that you describe.
I mean, again, I want to ask you whether this rings true to you.
It does, for sure, in the sense that individual choice-making and individual agency is always complicated by, you know, the charismatic glow or kind of emotional contagion.
And then in the influencers that we study and their groups, there's always this question of whether or not the rules of the game, especially with regard to how they are extending their messages, are actually throwing them deeper into their content without them choosing that content as per their values, right?
So we see all of these people who are... I mean, we started using the term gaming the algorithm, you know, probably eight months ago, when I started seeing somebody like Sayer Ji using QAnon-type keywords but not entirely endorsing the fever-dream content.
He was coming right up to the edge of something and I think he was getting positive feedback in terms of follower counts and so on.
And so there's this interplay between the game rules and group cohesion that seems to push whole masses of people towards content and value systems that they might not even endorse or have thought about before they were actually rewarded for being viral vectors for them.
I was thinking about that concept that you brought up, teams, last night.
I was thinking about this in terms of: I have my own beliefs on things, but I very much consider Matthew and Julian part of my team.
Even though we don't agree on all of our philosophical constructs that we have, we come together because we see all of this disinformation and we're trying to point it out together.
The question that I'll pose, though, to follow that: we had posited, or I had, and then we talked about it, that there's a potential cult growing in Austin with all of these charismatic figures, and Matthew pointed out that it would be very difficult for a conspiracy theory-based medical freedom cult to emerge when you had so many charismatic figures at once, because who would be the leader?
And there'd be some dominance hierarchy going on.
And that makes me think about the conspiratorial mindset, because a victory for Matthew and a victory for Julian is a victory for me.
When they book interviews, podcasts, they get press, when something happens, my first reaction is: this is awesome.
And I wonder how much that translates when you get into these more charismatic figures, a lot of the people we cover in the disinformation space.
Are they leveraging each other, or are they actually excited by the other people gaining more status?
So, I mean, conspiracy theories and gaming being your lane, I would like to hear you talk a little bit about the techniques that some of these players use, and if there's room for others.
Is there a team spirit there at all, or is it always about that figure?
Let me try to come in from something sideways, because your question makes me think of something that I've been mulling over in the back of my head.
So, a lot of the times, what I think I'm thinking about is that there's a goal we have, and there's a hard way to get there, and there's a cheap and easy way to get there.
This is in the background of the stuff I've been writing about moral outrage porn.
My worry is never that moral outrage is bad, but there's this quick way to get this cheap sense of unity if we all converge on one thing.
And one thought I have is like, in real, don't ask me what real communities mean, but in real communities, like togetherness, teamness, it's kind of hard, right?
You share a common view, but you have to negotiate it.
You have to like, there are little fights that break out.
You have to renegotiate.
You have to compromise.
Parts of you agree and parts of you don't.
Unity is a work in progress.
Yeah, you're describing our Slack.
And marriage and any other kind of collaborative thing.
Some institutions offer what look to me like prefabricated value standards, like a prefabricated value currency.
So in CrossFit, this is like, you know, your number of whatever cycles or reps in the day.
In rock climbing, which I do, it's like, what's the highest grade you've climbed, right?
In my world of philosophy, it's like, what's your citation rate?
Or what's the status of your institution on some ranked list?
And I think one interesting thing that happens is there's a temptation to just take on one of these value systems, because then suddenly you get cheap and easy conversions with other people.
If you all converge on the same value standard, then you don't have to negotiate.
You just jump to the part where you feel unity.
And what you haven't done is negotiated for your place, for your values.
You just jump there, right?
And one thought I have, maybe this is again in your terrain, not mine, is something like: one reason you might have to buy into a single charismatic leader is not just their charisma, but that once there's a single voice that's already there, if you converge on it with other people, then your difficulties are gone.
Then you have unity.
Unity is really easy if you have one fixed point. If we have to negotiate for all our values, it's a really hard, moving, shifting process.
If there's one concrete figure out there that we can all converge on, then unity's easy.
All you have to do is give up your own values.
Yeah, yeah, to kind of paraphrase you back to yourself, I have a note here that says: a belief manipulator, so this could be like one of the charismatic figures we cover, builds a sticky system with easy explanations, which create a sense of false clarity; a simplified moral system that condemns everyone on the outside, thus creating the echo chamber effect; and gamification, so that people get pleasure from focusing on a narrow goal, as well as pleasure from what you call moral outrage porn.
It feels like there's this convergence of quick and easy unities.
So, you're converging on a quick and easy value, you're converging on a quick and easy set of moral rules, you're converging on a quick and easy set of who counts as an insider and who counts as an outsider, and you're converging on a quick and easy scientific belief, like a science-styled explanatory mechanism for the world.
And a lot of the worlds we look at seem to offer all of those in, like, a package, right? So if you jump in, suddenly all the difficulties of negotiating your values, your morals, your belief system, that's all gone, and you can talk to people near you about any of those things. And you're converged.
So do you think that nuance is even possible in social media?
Because you keep pointing out the quick and easy, but I always wonder, like, when I actually weigh the number of discussions on social media that have been hard but ultimately rewarding because there was a challenge, against the amount of trolling that I've seen, it's very unbalanced.
Yeah, I mean, it's possible.
Well, it seems like it's possible in little groups, right?
Little bubbles of people who want that.
Twitter makes it really easy to feel intimate with people.
Because for a lot of us, our Twitter worlds are really small.
They're really small, and we talk to each other.
And a lot of our Twitter worlds are with people who are... I mean, my Twitter world is mostly philosophers.
Philosophers share a pretty similar outlook and sensibility on the world.
And a lot of the problem happens because Twitter rewards, like, in-jokey, snarky, context-heavy stuff.
And then because of the retweet mechanic, Twitter just makes it easy to, like, get that shoved out and have, like, people that totally don't get you attacking you.
But your original question is, is nuance possible on Twitter?
Sure!
Like, anyone can write a nuanced thing or a nuanced thread on Twitter, but I think the system fights you.
Like, it's possible.
But the structure of the system: A, the shortness; B, the fact that any short bit in a long thread can be retweeted and mostly read out of context; and C, the fact that the incentives mostly align with non-nuanced conversation. The whole system pushes you.
I can feel it.
I fight against this all the time on Twitter.
This... I wanted to ask about this specific point, which is that you have a really cogent explanation of the mechanics of gamification, and then you also, and I think this comes from philosophical discipline, have some solid language around how a person might feel their values narrowing to accommodate the game.
But I'm never quite sure how we're measuring those feelings, and how we know what questions to ask ourselves.
Like, because when Derek says, is nuance possible on Twitter, my answer from personal experience is that it is, to the extent that I totally Bhagavad Gita that stuff, which is like, I completely give up on what I believe the reward will be, and then I'm free to do something that feels true to my skills and my experience, but also unconcerned with payback or feedback or with points.
And so I know that as an internal sort of measuring stick for myself, but is everybody just sort of condemned to the fate of finding an internal measuring stick?
Or is there a way of talking about that in a generalized fashion?
That is a really deep question.
Okay.
By the way, I listened to your lovely essay about no moral outrage porn, no punching down.
I was really struck by your comment, because I feel this too, that the first thing you want to check before you put up a piece of content is: are you thinking about the content, or are you thinking about all the likes you're going to get?
And the latter is a problematic sign.
I mean, I should say, by the way, I write about this stuff as a person that is very easily addicted to computer games and very easily addicted to, like, any of these metrified systems.
Like, give me points, and I'm like, right.
And then I have to fight it.
So the question is, I think what you're asking is, what star do we steer by?
Yeah.
If it's not the metrics.
And I mean, my real worry here with a lot of this stuff is this thing that I've been talking about, that I've been calling value capture.
Maybe I should just say, for your audience.
Yes.
Value capture, this is something that I started thinking about, that I write about a little bit at the end of my games book.
Value capture is a case where your values are natural or subtle or inchoate, and then you get parked in an institutional or social system that gives you a really clear, simplified, typically quantified version of them, and then that takes over in your motivations and your reasoning.
So I'm thinking something like going on Twitter for connecting to people, and then coming out just wanting to go viral.
Or, in my case, I know so many people who, I don't know, there's probably an equivalent from your world, but people who go to philosophy for their love of wisdom, and then they come out institutionalized into being like, well, there's a ranking system for how high status a publication venue you have, and the point of my life is to get into the highest, like, it's just, you would think that philosophers wouldn't do that, but most of them do that.
And so my worry really is that what we're seeing is a kind of swamping of your values by an external value signal that's clearer and louder.
And now you might ask, like, well, what makes the internal real and the external not real?
And there's, I mean, there's a lot I have to think about there.
But what I think about in the background is, so you said, like, how do you do it without feedback?
And I don't think that's what's going on.
I think there's a different kind of feedback, right?
One feedback is external, quantified, crisp and clear.
And the other feedback is this messy fuzz inside.
And I think we've also been... I mean, we're definitely in an era in which there's a lot of thinking that says something like, don't trust yourself about your well-being or happiness or something like that.
Trust these external measures.
Trust these external marks.
Our subjectivity is crap.
Like, you have no idea what your health is.
Listen to a Fitbit or something like that.
But I'm really worried about how much of our well-being and our goodness and our sense of meaningfulness in life occurs in terms of various signals that are not easily captured by things like Twitter or a Fitbit or something like that.
So there's, okay, this is great.
I mentioned it before, but Elijah Millgram is one of my favorite living philosophers.
And also someone I've now had the luck to meet and befriend.
And he's mentoring me now.
So it's awesome.
It's one of the luckiest things in my life.
But he has this view that's really interesting. He has a book called Practical Induction, where he says, look, a lot of people think that you can figure out the point of life, or what good is, or what your values should be, by deduction from some top-down conception of the good. But that's not how it works.
How it works is you try out a value and you see how it goes for you.
You try out pursuing something and then you see whether your life sucks or is great. And I think he means things like... I mean, at one point, I had the role of someone who's trying to be in tech, and I aimed at making a lot of money and succeeding in business.
And that made me into a certain kind of person with a certain kind of mindset and experience, which, by the way, sucked.
I was just constantly looking for ways to find leverage over people, and it was horrible.
I have aimed parts of my life at, like, you know, valuing being an artist, or valuing being a teacher, or valuing myself as a rock climber, or valuing myself as, what's the word, a good cook, or someone with a really good collection of guitars, or someone with a really good stereo system.
And each of these drags you into a form of life and some of them are great and some of them are miserable and make you hate yourself.
And I think, like, those subtle signals of what it's like to be this kind of person, right?
That's the kind of thing that you use to steer what your values should be.
My worry is just that, like, in some of these cases, external signals are swamping just because they're so loud and clear and shared that, like, my internal sense of what it's like to be a person saying these kinds of things in public, like my discomfort or my happiness or my joy or my engagement is swamped by the fact that, oh, my God, this kind of thing gets so many fucking likes.
Yeah, it's such a fascinating conundrum, right?
And the way it intersects with what we look at is I think really interesting because there's a paradox here, right guys?
Which is that so many of the conspiracist influencers that we follow, they're all about don't outsource your truth, right?
Into some sovereign sense of knowing that comes from some other place. And this is actually, I'm an epistemology nerd like you are, Thi, and just this notion of how there are layers of expert opinion that we have to rely on. And if we're not informed enough on a particular topic, then it becomes impossible to tell the expert apart from the grifter.
Right.
And so the other piece of this that I think is intersecting with it is, like, okay, if we live in a world where we're continuously given this message that we should be outsourcing our sense of metrics, or our sense of, you know, what to believe, or where we're getting validation from, then it makes us particularly vulnerable to someone who comes along and says, no, I can mainline you directly into a conspiracy theory or a sense of enlightenment.
Right.
I can enhance your intuition, your sense of inner agency.
I can do all of the things that Thi actually thinks are good ideas, right?
Yeah.
Yeah, that's what the grifter can say.
Sorry, man.
That's such a good question.
I mean, I mean, this is okay.
I have, I can say something really quick and simple that'll be totally unsatisfying, but maybe I should start.
I mean, I do think that science and your personal values are different.
Yeah.
I mean, I think they're different realms.
I think science is about objective reality, and that demands that we have this vast, massive layer of expertise, and I think your values are embedded in you.
Like, I mean, one way to put it is, am I the best person to know whether my antibodies are working?
Hell no!
That's not something I have direct access to.
Am I the best person to know whether I'm happy and fulfilled?
Yeah.
And so I think there's this stark divide.
You can, like, screw up in two ways.
One way is you can trust everybody about everything.
You'd be like, okay, I'm going to outsource my scientific beliefs, which, by the way, most of us have to do.
And I'm going to outsource my values, too.
World, tell me what to care about.
Oh, I guess it's money and clicks.
That's one problem.
Another problem I can imagine is, no, it's all in you.
It's all your values, but also the world of science, understanding whatever your spirit moves, right?
And I think these are different domains.
You are the expert on whether you're happy, but you are not the expert on...
Well, yeah, so you're the expert on whether or not you're happy.
I mean, I feel like in both domains that we're setting up for the point of the discussion, the real question for me is: on what basis do you choose who to trust?
Because we are still, we're going to trust philosophers.
We're going to trust meditation teachers.
We're going to trust our art to reflect something back to us that gets us in touch with a sense of our values, right?
Or a sense of meaning.
And with science, I feel like the kind of trust that we do, if we really are science informed, it's based on a kind of skepticism that has a coherence about it, right?
And that is somewhat educated on how that methodology works, whereas what I call freshman skepticism sort of goes into this place of like, you know, no knowledge is possible whatsoever, so you shouldn't trust anyone except for the person who gives you that full certainty, right?
I literally think you've just asked the hardest question in philosophy.
Sorry.
I'm an outlier here in this world.
Think for people in this space, epistemology.
For listeners that don't know, this is the study of knowledge.
For a lot of them, the classic question is, how do I know anything?
Or, how do I know the external world exists?
But for me, the most important question is, how do I figure out who to trust?
Is that an innovation in terms of the philosophical literature?
To turn epistemology into an interpersonal question of trust?
Is that something that's new?
Because that just blows me away.
In the Western European Anglo-analytic tradition, it is largely new, and I really date it to 70s and 80s feminism and feminist critiques of traditional epistemology.
Especially Annette Baier. My favorite philosophy paper of the last 50 years, if you're interested, is Annette Baier's Trust and Antitrust.
And she starts with this thing where she says something like: okay, you all have this moral theory that's called social contract theory, that says morality begins in free individuals making contracts with each other.
And she basically says, that's something only a bunch of dudes who spent their life wealthy in a gentleman's club could imagine as the basis of morality that leaves out like families and, anyway.
Property.
If you want the technically interesting stuff: I think a lot of Western European philosophy has been based around this singular question of, like, what can an individual do on their own to figure everything out?
I mean, I think it's so important for our project, because there's really two fields of intersecting uncertainty. One is, how do we know what we know, and what sources do we use to establish, you know, the difference between a conspiracy theory and an actual conspiracy that is harming people?
And then on the other hand, there is this morass of cultic dynamics that we're always looking at, that is explicitly about: how are people treating each other?
And so, to frame the question of how do we know what we know, or to reframe it in terms of who are we going to trust and how are we going to establish their trustworthiness, that's very helpful for me actually.
Because the drier question of how do I know something is true doesn't really get to that secondary level of who's trying to use knowledge in what way against you.
My philosophical career started with me in graduate school being obsessed with one question that almost no one else thought was important, which is the expert identification question.
This is a problem that's literally from Socrates that most people ignore, which is how can a non-expert who's looking for a teacher pick the right expert in the domain to teach them if they don't already have the expertise?
So most people, at least when I started, were like, oh, of course there's a solution to that problem.
And I was obsessed with the possibility that there might not be.
Right.
And basically, I feel like the world has collided with my interests.
I had a hard time explaining why this was interesting to people like 20 years ago.
Can I just interject and say that this is the problem of finding the spiritual teacher, right?
Right.
Which has now intersected with the problem of finding the epidemiologist.
Exactly!
If you're not an expert, In a scientific domain, there's no way for you to pick the right expert.
I think that's a little too simple.
There's a way out of that, for science maybe.
The way out of it, the best articulation, is from a philosopher of science named Philip Kitcher, and he calls it, I don't like this name a lot, indirect calibration.
And the idea is, look, you and I can't figure out what the right physics is, but we can trace a line to people where we can make a judgment.
So here's something I can make a judgment about.
I can figure out who's good at building bridges and who's good at building airplanes because their shit doesn't fall out of the sky, right?
And then I can see who they trust.
Who are the statisticians they trust?
Who are the physicists, who are the applied chemists they trust?
And then I can trace a line.
So my worry is that A, that line is actually really hard to trace, and B, that doesn't apply to all fields.
One of the first things I ever wrote in this space, in a paper called Cognitive Islands and Runaway Echo Chambers, was the worry that some fields you couldn't do that.
In particular, I was worried about Picking your moral advisors.
There's no moral equivalent of a bridge that falls down, right?
My worry is that you can only pick someone... You always have to exercise your moral sensibility to pick who your moral advisors are.
And if you're a white supremacist, you're going to pick white supremacist moral advisors.
And my real worry is, and I think this is something, my real worry is that there's not a way out of this trap.
That if your morality is already screwed up from the start, then you're just going to bootstrap yourself up by picking crappy advisors.
You know, my first cult leader taught me a very interesting Buddhist principle that I didn't apply to him, which was: if you're going to try to find the moral bridge to accepting an authority, you wait and hang around the Dharma teacher for 10 years, or something exaggerated like that, and you interview and you watch the lives of their students progress.
And it's only after you're satisfied that, oh, these people are well integrated, or they're healthy, or they have, you know, achieved their goals, or this person has good morality, that you would adhere yourself to them.
And so there's a...
So that's the bridge.
That would be a pre-modern bridge, which may or may not have actually ever been walked upon, because it seems like, you know, even in pre-modern monastic and cultic circumstances, most of the time we're talking about young boys who were basically thrown into pedagogical relationships without the benefit of being able to judge.
Or they outsource that through their families.
Their families will send them to the particular teacher or the monastery or the convent.
So yeah.
That's one moral bridge I can think of, but I don't know how.
It's not like the plane that didn't fall out of the sky.
This is another point where I would love to hear what you all think about whether this is just abstraction in theory or actually fits with what you're seeing.
My worry about the description you're giving is it depends on your sense of what counts as a good, healthy, flourishing life.
Yeah.
To begin with, right.
Right.
And the worry is, it feels like most people who enter into communities like this go all in and start listening first.
Or they don't have a strong conception and are already being swamped by the community they're around.
And my worry is something like, if such a community can redefine what counts as health and flourishing, then it's going to look...
We have differing opinions on Sam Harris on this podcast, and I don't like the direction he's gone in recently, but I think his most undervalued book is The Moral Landscape, where he tries to create a neuroscience of ethics and morals, and doesn't, you know, know if it's possible, but at least starts to look at weighing the harms that are done and trying to create some sort of system for that.
And so when I hear this conversation, it just reminds me that we're not built for globalism.
So when we discuss morals on a large scale, we're an animal that could never... you're never going to find a moral system that works for 8 billion people.
It would be impossible.
It'd have to be a really rigorous autocracy to be able to do that.
So then where do you start to even define morals, and whether they're helpful or not for people, if there are just so many people on the planet that you can't really gain insight into what works for certain people and not others?
Now we're heading into territory I did not expect to be talking about on this podcast.
But I mean, I think you're asking a really deep question.
Let me try to put it without too much philosophy wonkery and geekery.
Which you don't do, by the way.
I just gotta say, I read your papers: very thin on the footnotes. You have a very spare and conversational style.
I just want to throw that out there.
I think you apologize a little bit too much for a lack of clarity that isn't there.
Thanks.
But also what you're seeing on the page is a construct of a lot of effort.
And if you get me drunk around philosophers, I'll just, you know, vomit jargon crap.
So, okay.
So let me try this.
What do you drink?
That's the important question.
What do I drink?
Everything.
Whatever they serve.
I used to be a food reviewer.
LA Times, I know.
I really like cocktails.
I brew my own beer.
I got really into, I mean, I'm really interested in weirdo wines.
We can have an entire different conversation related to the one we're having about how the point scoring system in the wine world is like undercutting the life of wine, but whatever.
The idea that there's a single moral code, a determinate moral system that would work for everyone in the world, is a ridiculous fantasy.
Among other things, I think this is well-established in moral and political philosophy, at least among people that I trust.
Different psychologies, different people, there are different social circumstances, you need different rules and heuristics, and so this is basically a reason to have what you might call social and moral federalism, like different communities get to set different standards.
Is there anything in the center of those Venn diagrams though?
Maybe.
I mean, literally, you realize the question you're asking me is like, if there was a physicist here and you're like, but what's underneath the hyperstrings?
And they're like, we've been fucking working on this for like 2,000 years!
But no!
Solve morality for me!
I've been trying!
We've been trying for thousands of years!
It's hard!
Okay.
So, sorry.
When Matthew said you were coming on, I thought you were going to answer that question.
This was the big reveal.
Yeah.
Oh, by the way, can I ask you a question?
How do we make everyone not be attracted to shitty beliefs?
Can you give me a quick answer?
I mean, I've got a minute.
Just give me an answer.
We get expert guests to come on and explain.
Let me tell you the space where I'm puzzled, right?
So one answer that you see in a lot of political philosophy is: you should let independent communities mostly set the rules and norms that work for them, and then you need a small number of bridging principles, a minimal set that all the communities obey so they can interact with each other.
That looks like something like the UN, if it worked, whatever.
So, right. Now what we need is to establish, for particular communities, whether those values and morals are actually being set by the people in them, in response to their own interests, or whether they've been hijacked or brainwashed.
And this becomes, this is a genuinely hard problem.
It's the problem of how you can tell whether someone's expressed wishes are actually their wishes, right, or whether they've been hijacked in some way.
There's a lot of philosophy out there that I'm really unsatisfied with, that says something like, look, whatever people say is their wishes, however they vote, those are their wishes.
So if you meet their expressed wishes, then morality is fine.
And the reason I'm in this space, among other things, is that I think that's not always true.
I think people can get hijacked.
I think it's a weird thing because it's not like I think we are individuals who can be on our own or develop our values in a totally independent way.
There's a healthy way of being in the community and developing your values in relation to the community.
But it also seems there are obvious cases where people get hijacked.
And trying to give an account of what that is, is one of the hardest questions.
In philosophy, it's ongoing research.
When I'm trying to think about metrics and institutions, in the background this is what I'm really trying to work on: this philosophical puzzle of what it is to have your autonomy and values hijacked.
Well, it sounds like there are some normative sort of statements you're comfortable with around, you know, what healthy relations look like, right?
As a person, but not as an academic philosopher.
I can't justify that stuff.
Gotcha.
Although there's... okay, oh my god, we're going so deep into the philosophy geekery hole that I did not expect to be in.
Do you want my argument why I now believe in objective morality when I used to doubt it?
Yes.
Okay, here we go.
Oh my god, philosophers listening are going to be so angry at me.
They're going to all lose trust in me.
Good.
So I got into philosophy because of two parallel questions.
How do I know the external world exists?
And how do I know that morality exists?
And there are all these arguments against both.
And there's an argument that some people attribute to G.E. Moore that basically says something like... there's a dumb way to say it and a smart way.
I'll skip the dumb way.
The smart way to say it is: there's a complex argument that I shouldn't believe the world exists, and then there's this espresso cup I'm holding in my hand.
They conflict, right?
One argument that we know from philosophy says you can't know for sure the world exists, and then there's this coffee cup here.
And a lot of philosophers and people in philosophy, especially freshmen in philosophy, want to be like, well, you know, here's this really complex argument, so you can't believe this coffee cup exists.
I think Moore's argument is, I'm actually more sure that the coffee cup exists than of my ability to evaluate complicated philosophical arguments.
Because it's not like that stuff isn't fallible either.
Right?
So here's another, this is what really convinced me.
I mean, to put it in modern terms, here's a book I can show you that has a 200-page argument that morality doesn't exist or is relative, and then here's some kids in fucking cages that Trump put there.
I am more sure of the immorality of that than I am of the theory, because my ability to evaluate philosophical theories is also subject to fallibility.
So, I mean, this is not a hugely popular view, some people are into it, but I myself am more sure that the lives of some of the people in the cults you've talked about, that you've described, are screwed up than I am of the theory.
So that's like a starting point.
I mean, another way to put it is you have to start somewhere.
You have to have some kind of foundations.
You know, to apply that to cultic studies: not just because I'm a survivor who's done a lot of journalism in it, but I have a general feeling that I do know these lives are negatively impacted, and I am more sure of that than I am that the literature the cult has produced about itself is valid or reasonable.
No matter how good it sounds, no matter how many people want to buy those books, no matter how many people say, this turned my life around, I feel my ability to understand the rationalizations presented in the literature is not as useful, not as well-developed, not as relevant as my ability to actually connect with what I can see happening on the ground.
And so, I've never really thought about that as kind of like a comparison before, but that's pretty interesting.
Those are two forms of knowledge, obviously, and one, you know, because it's based in texts, is more measurable than the other.
So, but Matthew, how then do we differentiate, how do we differentiate that relational, empathic, sort of experiential knowledge from claims of special revelation?
Well, what I was going to say was, when T brought up, you know, I'm more sure of the peril of the children locked away in cages at the border than I am of this book that says there is no objective morality.
I'm immediately thinking of the conspiracy theorists who will say that that's fake news, that the children aren't locked away in cages, that we really are comparing direct experience to the abstraction of argument.
And so there would have to be some sort of qualification in there, T, around, I think, what you have personal, direct, like phenomenological contact with, right?
I should say, the things I've said, you can easily imagine perverting in a bunch of ways.
Yeah, right.
Justifying kind of like nasty imperialism: I'm more sure these people are living the wrong way.
Yeah.
So let's endear it.
Yeah, right.
Exactly.
Right.
Yeah.
And also not to put too fine a point on it, but the argument against it wouldn't be that it's fake news because we could both objectively agree that the kids are in cages.
T's point is more that he knows it's wrong that the kids are in cages.
The thing that you are struggling with is the nausea of where I live as a philosopher.
So when I started thinking about echo chambers and conspiracy theories, I think a lot of other people wanted to say that it was really easy to differentiate the epistemic and rational habits of people in conspiracy theories from those of people outside them.
So one thing people would say is like, oh, people inside conspiracy theories are just trusting, you know, the cult leader and not doing this independent intellectual investigation.
And I want to say, look, we all start by trusting vast institutions we don't understand.
Like, there's much more parity of position than we think.
I think people also want to say something like, oh, but I have a completely rational explanation for everything.
I can justify every step.
And those other people can't.
They're just accepting something on faith.
And I would say, no, no.
If you think about how knowledge works, we all have starting points that we can't justify, because that's how the structure of justification works.
You have to start somewhere.
You have to start with basic assumptions.
You have to start with some basic lever points to lever against somebody else.
And the deep problem of rationality that I think, the abstract version of what I'm interested in is, what if people start with radically different lever points and starting points?
Can we show that one of them is more irrational?
And I don't know.
I mean, I think the thing that you're repelled by is the idea that there's actually much more similarity between the way all of us reason and the way the people you're worried about reason than you might think.
Based on everything that you were just saying and coming back around to some of your key ideas that we started with, is there a way of knowing that we're not in an echo chamber?
Like if we spend all this time, right, talking about, well, these people are in an echo chamber.
Yeah.
So that's one part.
And then the second part of that is you talk about this kind of prophylactic reinforcement that the cult leader or the Rush Limbaugh kind of figure will do that sort of perpetuates the echo chamber.
So I'd love to hear about that too.
Can we talk about echo chambers?
Yeah.
People keep asking me, so how can I tell for sure if I'm in one or not?
And I would say, I've been trying to figure that out for four years and I don't have a good answer.
But I can tell you a little bit.
One thing that a lot of people immediately want to say is... oh my God, so, PS, reminder: an echo chamber, for me, is not a structure where you don't hear the other side.
That's something different.
It's a structure where you don't trust the other side, where you're taught that everyone outside of your echo chamber is untrustworthy.
So, a lot of people want to say, oh, so if you're in a community that thinks that everyone on the outside is untrustworthy, you must be in an echo chamber.
And I want to say no, because sometimes you're right.
So, members of the resistance in Nazi Germany could plausibly think that you shouldn't trust anybody on the outside.
Abolitionists in the antebellum South could plausibly think: don't trust Southerners.
So, the deep intellectual problem for me is that it's not that you can't ever justify a stance of radical distrust for most people outside of a small community, because sometimes they're right, because sometimes your society is fucked up.
But at the same time, that's exactly the story you hear inside conspiracy theories, which is the world is fucked up, right?
But that mere indicator isn't enough.
So what are the right indicators?
I mean, here's a few.
Is there a plausible path to trust someone on the outside?
Is there something they could do to show themselves trustworthy?
My worry, though, is that a lot of clever conspiracy theory groups offer such a criterion and then move the goalpost if someone actually hits it.
But that's one possible criterion.
The other thing I've been really thinking about is: the best indicator I can give is how comfortable you are with your belief system.
Because at least if you're in one of these belief systems I'm really worried about that's engineered to trap you for pleasure, then you should expect that most of the experience is really pleasurable.
So I'm not saying, seek the most disgusting worldview possible.
I'm saying, like, an indicator that you might be in a trap is that most of the belief system makes you feel really good.
I mean, there's a variation of this.
People are like, I think there's a meme somewhere that's like, if studying history makes you feel good about yourself, you're not studying history.
Yeah, you're talking about a kind of cognitive closure where you've stopped investigating, right?
Yeah, and one sign of closure is that things feel relatively easy and pleasurable.
So, some grit is kind of a hint.
Some discomfort is kind of a hint.
But I don't have anything guaranteed.
So, I have a question or a proposition about value capture and spiritual belief and cultism all wrapped up together.
So, you have illustrated in your work, and a little bit here, that the Fitbit will capture your value around physical fitness by making you focus on the number of steps you take rather than how many activities you can do.
Or the GPA will have a teacher teaching to the test and a student learning to the test instead of actually enriching their lives, and so on.
I was struck today by the thought that a lot of the spiritualities we encounter are also value capture systems, in the sense, and this is what the, you know, dubious cult leader, predator guy Chogyam Trungpa was getting at with his idea of spiritual materialism.
Oftentimes practitioners are trying to accumulate virtue and merit.
But I realized that there is no Fitbit for spiritual practice.
But where this intersects with my understanding of cults is that the cult leader can actually act as the adjudicator of the point system.
The cult leader can actually act as the Fitbit, but it's not open source.
That's the problem of the cult, is that there's one person in charge, or there's a cadre of people in charge, of this very subjective process of telling people whether they're winning or not.
And those goalposts can change all the time.
So I'm wondering if that resonates with you.
Absolutely.
So I mean, my basic theory about value capture cases is, it's not that you should never use a Fitbit or never use an external metric.
It's that the thing that you should do is check in to see if your life is going well.
To step back and listen sometimes, to whether it feels good, to whether all the subtle signals say your life is going well when you follow this thing or not.
You can use one of these things as a proxy or a guide for a while, but if you don't check in to see whether it's really valuable to you, if you just let that external metric set what you're supposed to do, then you're not on a path to well-being.
And I really like what you said, because it does strike me that even though a cult leader is not a mechanical Fitbit, if you're in a community that believes in the same cult leader, then you get much of the same game-like effect.
There is clarity, univocality in a shared sense.
There's an arbiter that tells you whether you're doing well or not, and you don't have to question it.
But it's a black box.
It's a black box.
Nobody has access to it.
You know it's there.
You know it's recording your actions.
You know that it's assessing whether you're winning or not, but you don't know how it's doing that.
And that's profoundly disempowering while it pretends to give you agency.
That's kind of the theme of all the crap that I've been worried about.
I mean, it's mighty weird to say, but I have the same worry about cult leaders as I do about Fitbits, which is they present you with a notion of empowerment and increasing your agency, but they do so by having you short circuit the process of figuring out what you really want to do with your life.
And so it's like an illusion of agency.
It's agency on the cheap.
Because in order to get it, you have to ingest this large prefabricated, non-adjustable system that is a black box that's outside of you.
And my worry is that it feels good, but it really...
It leads you astray.
It leads you astray, like at a certain point you don't know who you are.
You forget where you were coming from, where you were going to, and you're kind of thrown into something, a huge machine that is impersonal and impossible to understand.
And the tool that was supposed to be generating something deeper in terms of your Your sense of your life has actually become the main focus instead of what gets you there, right?
This is something that we keep seeing in these capture cases.
People, they were supposed to use the Fitbit to feel better about their lives, but instead, like, they're obsessed with Fitbit.
Or, I mean, people thought they were buying a stereo system to get better music, and then they get obsessed with the stereo system and optimizing it instead of the music, right?
Sorry if that's too tangential, but I did a lot of yoga.
I mean, I was actually interested in a lot of the stuff that you all have talked about with ashtanga, because I was in LA and I did ashtanga seriously.
And I definitely did it to feel better, and it did feel better in the beginning.
But at some point, in the precise school I was in, my knees started hurting all the time.
My back started hurting all the time.
My lower back started hurting all the time.
I was tired all the time.
And I feel like in that, I don't want to speak for everyone, but in the community I was in, my sense of what counted as well-being had gotten a little shifted.
And I was pursuing it, and it's really hard to say how this works, but I think I was wrong. I know this contravenes some stuff I already said, but I had a moment where I realized I was wrong: I hurt in many ways, and yet I had this fixed idea of what counted as well-being.
But did you get to the third level, to the third series?
But that's the thing, right?
The series are a game that way.
I'm really glad that you brought up Ashtanga yoga because, I mean, there are many systems like this that are actually organized around the principles that you outline in games.
So the rules, the stages, the achievements. And yet there's something so weird, because, you know, you're very cogent on agency being the medium of games, and yet it's the sculpting of agency through these game-like yoga systems that is the actual target.
And it's not just that you have agency, but you're also supposed to be egoless about it, and you're also supposed to like...
not want things and you're also supposed to not have a goal and yet it's laid out for you with this fantastic sort of pathway towards higher levels and so on.
It's a real paradox.
It's amazing.
Confusing it is.
And not only that, if you're hurting, the answer is to do more yoga.
Right.
More correctly.
Right.
More correctly, well, to go back to the primary series, which is called Therapy for the Body.
Right.
I don't know why, y'all, but you're asking every single one of the hardest unsolved problems in philosophy.
But like, I mean, so there's this...
I do think there's this weird thing in which we do look to these practices to sculpt our agency.
That's one of the things we want, right?
A lot of the practices we're looking for are ones, I want to have more willpower, I want to be stronger, I want to love exercise.
You want to be changed, and sometimes you are changed, and that's not necessarily bad.
We are social beings, and sometimes when we want to change direction, we do it by starting a practice that changes who we are.
I had to go from being a completely sedentary person to a person that loves, and still does, yoga and rock climbing and lifting and all these things.
And part of the way we do it is we submerge ourselves in these worlds for a little while.
I mean, my model in games is that what you do in a game is you take on a different agency and you become for a little while this person that just cares about winning on these terms.
I think a lot of the things that we do are like this. I mean, I've had all kinds of weird-ass hobbies, from making Korean pickles, to fishing, to climbing, to, I don't know, I don't even want to talk about the more embarrassing ones, but collecting the weird shit that people collect, because I get into some internet practice and it's like, oh my god, now I have ten, whoops. I mean, each of these is a transformation of you for a while.
And sometimes you find out you like it and sometimes you leave it and you take parts of it with you.
What I keep getting worried about is the cases where it short circuits the step where you ask yourself if this is what you really wanted.
Right?
I mean, I think it's great if someone uses Fitbit to get really motivated and then uses that as the springboard for slowly developing a lifestyle and a practice, for changing themselves in certain ways under their control.
And it's really hard to say why in philosophical terms, but I also think there's a thing you can do.
You just put on a Fitbit and you're just like, okay, this is what I'm doing for my life, just hitting step counts without reflecting on what that means and how that's changed you.
You're using it to bootstrap some kind of overcoming of an inertia, right?
A kind of status quo bias that keeps you stuck in, you know, just avoiding the pain of doing the difficult thing.
And then there's the other side, of just letting it swamp yourself completely.
In a lot of your episodes, I see a younger version of myself, getting totally into this world and letting it define what I care about. Some of those worlds are ashtanga yoga, and some of those worlds are professionalized philosophy. And some of those worlds are like, you know, am I collecting the coolest guitars by the terms of...
Yeah, some of it plays out politically too, right?
Where there can be this sort of, it's a loaded topic to talk about, but like political correctness: in its worst forms, political correctness is, I'm so invested in these taboos that no matter what the person's intention or context is, when they break the taboo, the mob is going to destroy them, right?
Like, it's so hard here because you don't want to immediately say that if you're part of a group of people working together intensely for a cause that's important, it's definitely all this bad stuff.
I mean, the stuff that makes it so difficult is that a finely engineered, seductive moral outrage porn community is designed to look a lot like a genuine activist community that's working together to get real, important change done.
They look similar for a reason.
Yeah, and isn't there even a way you might frame that too, which is that this is the shadow of all sorts of really positive things; the danger is that it can verge over into that shadow zone if there isn't some kind of guardrail, right?
Yeah, I mean, this is why we keep reaching for the term gaming.
What it is to game something is to, like, cut away the hard stuff and grab the superficial features so it looks like the real thing.
Right?
So this leads maybe to the last question that I have, which is: I think the material of our podcast generates a shit-ton of moral outrage.
Moral outrage at influencers who literally have blood on their hands, because they've increased vax hesitancy in certain populations, is one thing.
Making choices that determine how we approach, and perhaps sometimes cross over, the line into porn territory is another. That territory is really defined by: you didn't really have a commitment to solving the issues, and you were more interested in the bang, the inflammatory nature of the thing, than in actually resolving something.
Like... this is similar to the guiding star question I asked you earlier, which is: how do we assess where the line is between valid moral outrage and moral outrage porn within this gamification system?
Do we only have our internal senses to go on there?
Quick answer.
It's not just your internal senses, because you have friends.
That's what friends and moral advisors are for.
Then we have the complication I had before, which is: if your morality is already fucked, you've probably chosen bad... Wait, that's one thing.
So, to your deep question, I can give a really pat, simple, abstract philosophical answer, and then the answer on the ground is nauseatingly hard.
Okay, the pat philosophical answer is...
Good moral outrage is directed at the genuinely evil and bad, because it's evil and bad, and oriented towards actually trying to change it.
And moral outrage porn is oriented towards using your moral sensibilities for pleasure.
So that's the pat version.
It's not going to help on the ground.
I admit, that's not going to help you actually figure out what it's like.
And one of the reasons it's really confusing is because, I mean, we are built to take pleasure in doing good things.
One of the reasons this is so close is... I mean, the reason it's easy to game intellectual epiphany for fake pleasure is that there's a genuine pleasure connected with the real, genuine epiphany. That's how we're arrayed.
We can game nutrition by pumping food full of all the delicious stuff, but in the background is a system that was always meant to give us pleasure when we ate good stuff, right? But there's a little gap, and so Frito-Lay can game it.
The deep problem is, A, real moral outrage can feel good, because it feels good to do right and do justice if you're an appropriately constituted person.
But it doesn't perfectly track, right?
So if you're in it for the pleasure, then you're going to start leaving the path.
Can we figure out where that line is?
And in a kind of consequentialist sense? In the sense that, like, let's say you're at The Conspirituality Podcast Project for enough time to see whether or not your interventions in the culture are having material impacts.
There's no way of measuring this, really.
We get a lot of DMs that say, you know, I've been anti-vax all my life.
I just got my first COVID vaccine.
You really helped me with my family member.
There's this and that.
We also get negative feedback as well.
It's very difficult to tell how effective our interventions are.
However, if I step back at the end of this project, whenever that is, and say, okay, am I going to do one more analysis of X influencer's evangelical take on disaster spirituality, or am I not going to do it because the 40 that I did before didn't really move the needle?
Is looking at it in retrospect a way of assessing whether I was in it for the inflammation of it, or was actually doing something?
Does that make sense?
Can I rearview window that?
I don't think mere retrospectiveness is enough, because it depends on the terms of evaluation that you have from your current self. I mean, what if it's your current self that's more value captured, looking at the past in more captured terms?
I see, right.
Okay.
You can totally imagine someone being like, retrospectively, my life was great.
Look at all the money and clicks I got, right?
I mean, I guess I would say: let's, you know, let's fast forward to 2023, and like, yeah, we have 70,000 Instagram followers and Patreon is booming and whatever.
And I'm like, that was the porn reward.
What was the vax hesitancy reward?
Did that go down?
Do you know what I mean?
Like, would that be a measure of a way of adjudicating, you know, how did I do there?
Well, I mean, I have to say that all of the other metrics notwithstanding, the thing that has moved me the most about this project is all of the DMs we got from people who said, I didn't want to get vaccinated and now I'm getting vaccinated.
Thank you for helping me.
That's a big deal.
Yeah.
You're asking a philosopher who's worried about the kinds of things that can't be captured by institutional metrics what the right metric is for your success.
Yeah, sorry about that.
I mean, all I can say is, when it feels bad to me is when it feels like I have a really narrow, thin grip on what's valuable.
And the parts of my life that I'm happier with are the parts where I feel like I am more open and sensitive to different forms of value.
So here, let me try one thing.
So in that gamification of Twitter paper of mine, one of the things I was really interested in is the fact that Twitter filters out a lot of information, and one of the kinds of information that Twitter filters out is how deep an impact you had on someone.
Because that's not captured in the like, right?
A like just captures... So, this is gonna be a weird place, but I think what really helps us to understand this is Rotten Tomatoes.
So my buddy Matt Strohl, who's a philosopher of art, has this amazing blog post in philosophy of art called Against Rotten Tomatoes.
And he says, okay, here's the problem with Rotten Tomatoes.
It flattens out the information you take in.
So really great art is often really controversial: half the people love it and are moved, and half the people hate it.
But when you put it into Rotten Tomatoes, all that registers is an average of who is for or against.
And a great movie like that shows up as 50%.
On the other hand, if you have some movie that everyone in the world is like, yeah, that's pretty good, that'll show up as a hundred percent on Rotten Tomatoes.
Does that make sense?
Because Rotten Tomatoes registers only whether the needle is positive or negative, throws out all the information and then aggregates.
So Rotten Tomatoes, right, you can see all the information it loses.
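[Editor's note: to make the flattening concrete, here is a minimal sketch in Python. The ratings and the 6-out-of-10 "fresh" cutoff are made-up assumptions for illustration, not Rotten Tomatoes' actual method; the point is just that once each review is binarized into fresh or rotten and averaged, a polarizing masterpiece and a uniformly "pretty good" movie become indistinguishable in the direction the scores suggest.]

```python
# Hypothetical illustration: binarizing reviews and then averaging
# discards all information about how strongly anyone felt.

def tomatometer(ratings, fresh_threshold=6.0):
    """Percent of reviews at or above the threshold (ratings on a 0-10 scale)."""
    fresh = sum(1 for r in ratings if r >= fresh_threshold)
    return 100 * fresh / len(ratings)

polarizing = [10, 10, 10, 10, 10, 1, 1, 1, 1, 1]  # half love it, half hate it
merely_fine = [6.5] * 10                          # everyone shrugs: "pretty good"

print(tomatometer(polarizing))   # 50.0  -- the controversial great film reads as mediocre
print(tomatometer(merely_fine))  # 100.0 -- the unremarkable film reads as perfect
```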
Now, for me, that explains a lot about Twitter because, okay, in the classroom, when there's not a metric, I'll be talking to 100 students sometimes, and I'll say something, and one student will be profoundly moved.
You can see them, their eyes open, and I'm like, boom, that's it.
That's all that matters.
On Twitter, that's gone, right?
Twitter just looks, gives you the mass number.
And so I just, I mean, all I can say is like, the thing I've been doing leads me to think, no, you just have to be open to different venues of value.
You have to be like, look, it's not a metric, but one person was like, holy shit, you changed my life.
I'll tell you that, like, you know, there are all these metrics that we use in philosophy, but the most meaningful thing that's happened to me is that a student who heard one of my talks wrote to me and she was like, I've been significantly less depressed since I heard your gamification talk, because I realized how many of the things I've been doing in life were actually games I didn't want to be involved in.
And she was talking about the game of weight loss and the game of athletic and academic success.
And what she told me... this is like the most moved I've ever been by anything like this.
She sent me a picture and she says that she now has programmed her phone so that the background she sees whenever she opens it just says, is this a game you really want to be playing?
And I was like, well, if I spent five years of my life and this is the one thing it did, that's fine.
That's great.
That's good.
I feel like, as a very metric-susceptible person, I should pay much more attention to this.
Well, T, that's a really good place to stop.
We've named this episode Games Against Humanity, but I want to propose a second part, which is Games For Humanity, because I kind of want to ask the question, which would take another hour and a half: what would really good games be like?
And what kinds of games could we play that would sort of educate us away from gamification?
I mean, do you want a fast answer?
No, let's leave that one.
Leave it for next time.
No, he's got a fast answer.
I don't know.
What do you think?
All right.
Fast answer.
Fast answer.
Here's the fast answer.
I mean, for me, the benefit of games is when they're playful, and playfulness involves shifting across different perspectives, getting to experience different perspectives.
So the playful spirit plays a lot of different games and uses them to integrate and understand a wide variety of perspectives.
Gamification, and gamified systems like cults, instead of letting you experience a wide variety of things lightly, force you into one pervasive value system and approach, and freeze you there.
So the really important question is, is it pervasive and inflexible or is it playful, broad and light?