Feb. 11, 2021 - Dark Horse - Weinstein & Heying
03:12:14
DarkHorse Podcast with Daniel Schmachtenberger & Bret Weinstein

The first DarkHorse podcast with Daniel Schmachtenberger. From his website: "Daniel Schmachtenberger’s central interest is long term civilization design: developing better collective capacities for sense-making and meaning-making, to inform higher quality choice-making…towards a world commensurate with our higher values and potentials." Find Daniel at: http://civilizationemerging.com/ Find Daniel on the Neurohacker Collective: https://neurohacker.com/people/daniel-schmachtenberger


Hey folks, welcome to the Dark Horse Podcast.
I am very pleased today to be sitting with my good friend Daniel Schmachtenberger, who is the founder of the Consilience Project.
There's a lot more that I could say about you, Daniel, but I think we will leave it at that for now.
People can look up your bio if they want to do so.
I should probably start by telling people that when I say that you are my good friend, I really mean that.
You and I are good friends, although, to be honest, we haven't spent all that much time together.
This is one of those cases where you meet somebody, somebody who has started from a very different place, and you discover that you have all kinds of thought processes that have reached similar conclusions, and every discussion is fascinating.
The more I learn about what you think, the more I realize I've got a lot to learn from you and that there is essentially infinite ground to be covered.
So welcome, Daniel.
Thank you, Bret.
It's really good to be here.
I feel the same way.
We were introduced by our friend Jordan Hall and we've never had a conversation where I didn't learn something and where I didn't appreciate the good faith way that you showed up when we had a disagreement to talk through, which is always fun.
It is always fun, and I must say, I did a little bit of poking around, seeing recent interviews you'd done, but I deliberately did not overly study for this.
My sense is that the audience will get a great deal out of hearing you and me go back and forth and finding out what we agree on, where we disagree, and maybe most tellingly... There's a phenomenon in which anybody who has learned to think more or less independently tends to have their own language for things, their own set of examples that they use to explain things, that recur over and over again.
And so, in order to have high-quality conversations, there's this period in which you're effectively teaching the other person how you phrase things, and seeing those things line up is great, and in the rare case where they don't line up, it's even better, because you know there's something to be learned one way or the other.
So I'm hoping that will emerge here.
I'm looking forward to it.
All right, good.
So, let's start here.
When I say that you and I come from different starting points, I mean to imply something in particular.
You, as I understand it, were homeschooled, and as you have described it, that was actually closer to what most people would probably refer to as unschooling.
Somehow, this did not mess up your motivational structures; your parents were alert about what they were doing, and so you ended up pursuing what you wanted to pursue, and you did not get in your own way, and lo and behold, you end up a wickedly smart, independent, first-principles thinker.
Now that's not my story at all.
My story is I went to school, and it didn't work.
I had something that most people would call a learning disability, and it got in the way of school functioning for me, and more often than not, I got dumb-tracked.
And basically, that complete failure of school to reach me accidentally worked like some kind of unschooling, I would say.
And so, there are maybe many paths, I don't know, but I'd be curious, for people who have traveled some road to the land of high-quality, independent thought, what can they expect the experience to be like when they arrive there?
The experience of high quality independent thought.
Yes, if you imagine that, well, it would be lovely to think independently and to do so well, and the world is going to be a paradise if you start doing that, because, of course, that's a very desirable thing to do and people will appreciate it, you're going to be surprised.
That's not necessarily what happens when you arrive there, and certain experiences show up all over the place.
And without telling you what my experiences might have been, I'm curious as to what you might have encountered and whether or not those things will be similar.
Ah yeah, it's an interesting question.
I think you will experience, most people will experience, a higher degree of uncertainty than people who are part of a camp that has figured most things out, where they can cognitively offload to the authorities or the general group consensus.
And certainty is certainly comfortable in a particular way.
And if you're really thinking well about what is my basis for what I believe, what is the epistemic grounding?
What's the basis of my confidence margin?
And you really think about your confidence basis clearly.
As you try to find more information to deepen the topic, the known unknowns expand faster, like at least at a second order to what the known knowns do.
And so you keep being aware of more stuff that you don't know that you know is relevant and pertains to the situation.
So there's a complexification of thought.
There's an increase in uncertainty.
Hopefully, there's an emotional development.
And kind of a psychological development where there's an increased comfort with uncertainty so that you can actually be honest and not defect on your own sensemaking into premature certainty that creates confirmation bias.
And there's also, and I think this is one of the reasons you used the term independent, a certain aloneness of not having a whole camp of people who think very similarly.
I don't find that to be a painful thing.
But it's a thing.
No, it's actually, in its own way, it's freeing.
Because the fact is, if you follow logic to natural conclusions, you'll end up saying a lot of things that are alarming or discordant with the conventional wisdom.
And the world neatly divides into people who will be so enraged or thrown by what you're saying that they disappear, or maybe they become antagonists at a distance, and the people who have a similar experience and therefore aren't thrown by the fact that you're saying things that are out of phase.
And so you edit the world down to those who are comfortable with what they don't know, who are interested in following things where they lead, irrespective of who that elevates and who it hobbles.
Those people are interesting people to hang out with.
And so, yes, the alienation may be a blessing in disguise in some ways.
There's two other thoughts that came up as you were just talking.
I wouldn't call myself an independent thinker.
I'm being particular about the semantic of the word independent.
I wouldn't call anyone an independent thinker, because I think in words invented by other people.
I think in concepts invented and discovered by other people.
I don't necessarily have a specific school of orthodoxy from which I take an entire worldview, but almost every idea that I believe in, I did not discover.
And so I think that's a very important concept, because I think the ideas we're going to discuss today regarding democracy and open society have to do with the relationship between an individual and a collective.
And I think the idea of an individual is fundamentally a misnomer.
Without everybody else, I wouldn't be who I am.
And I wouldn't think the way I think.
I wouldn't think in the language I do.
I wouldn't have access to, you know, the knowledge that came from the Hubble Telescope and the Large Hadron Collider and so many things like that.
So, I can say that there's a certain, like, ultimate authority of what I choose to feel that I believe in and trust that has an internal locus of control.
But the information coming from without and my own internal processing of it are part of the same system.
So this is a perfect example of what I was suggesting up front, where two people who do whatever we're going to call this will have their own separate glossaries.
So if I can translate what you've just said, I, Daniel Schmachtenberger, am not an independent thinker because such a thing is inconceivable in human form.
Right?
And I totally agree with that.
The fact is not only are you interwoven with all sorts of humans who are responsible for conveying to you in one way or another conclusions that you couldn't possibly check.
You know, these are thoughts that would be familiar to Descartes, for example.
But you are also building from the entirety of cumulative human culture, right?
All of the tools with which you can think are... almost all of them are too ancient to even know where their rudiments originated.
So anyway, I don't disagree with any of that.
So to me, I would say there is such a thing as an independent thinker, and in your schema it has to do with whether or not they are thinking à la carte.
That is to say, using that set of tools that is most effective, irrespective of the fact that those tools don't all come from one tradition.
And you would say there is no independent thought, because à la carte is the most you can do, or something like that.
I might say that I like a term like interdependent better because it doesn't mean that there isn't an individual involved, but it means that the individual without everyone else is also not a thing.
And so the recognition that sovereignty and individuality are a thing, and that the individual is conditioned by, affected by, and affecting other groups of people, are both necessary in the Hegelian synthesis to understand what the nature of a human is.
Are they fundamentally individual and then groups emerge?
Are they fundamentally tribal and they're formed by the tribe?
And it's very much both in a kind of recursive process between them.
100%.
In fact, I once wrote something I called the Declaration of Interdependence.
It was a sort of proto-Game B attempt to define what the rules of a functional society would be like.
And I also frequently say that the individual is more an illusion than it is real.
And what I describe is that an individual is a level of understanding that evolution has focused us on, because historically it has been sort of the limit of where we might have some useful control, right?
Evolution might ultimately care about whether or not you are successful, whether your genes are still around a hundred generations from now.
But your focus on a hundred generations from now is unlikely to have any useful impact whatsoever, whereas your focus on your life and your children is likely to be useful.
So we have been delivered a kind of temporal myopia in order to keep us focused on that which could be productive for an ancestor. But of course we are now living in an era in which we can have dramatic impacts on the distant future. In fact, you and I are both quite focused on the strong possibility that our
foolishness in the present will result in the end of our lineage, and that that is something that evolution, were it capable of upgrading, would certainly have us concerned about, because the hazard is very real and our tendency not to think about it is a big part of the danger.
I think temporal myopia and the collective action, collective coordination problem is a good way to describe all of the problems we face, or one of the generator functions of all the problems we face: you have a bunch of game-theoretic situations where each agent, within their own agency, pursuing the choice that makes most rational sense to them, pursues local optima, where the collective body of that drives global minima.
But if anyone tries to orient towards the longer term global maximum, they just lose in the short term.
That's an arms race.
That's a tragedy of the commons.
And so how do we reorient the game theory outside of those multipolar traps?
That, I would say, is one of our underlying questions. When the biggest harm we could cause was mediated by stone tools or bronze tools or iron tools or even industrial tools, we didn't have to solve it immediately, because the extent of harm was limited in scope.
When it is mediated by fully globalized exponential tech running up against planetary boundaries with many different kinds of catastrophe weapons held by many different agents, we actually have to solve the problem.
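A minimal way to put the trap Daniel describes in code is a two-player payoff matrix; the payoff numbers below are illustrative assumptions, not anything from the conversation, but they show the structure: defecting dominates no matter what the other side does, yet mutual defection is worse for both than mutual restraint.

```python
# Minimal sketch of a multipolar trap as a two-player arms race.
# Payoff numbers are illustrative assumptions only.
# Each entry: (payoff_A, payoff_B). "restrain" = forgo the capability,
# "build" = pursue it for short-term advantage.

payoffs = {
    ("restrain", "restrain"): (3, 3),   # long-term global maximum
    ("restrain", "build"):    (0, 5),   # the restrainer loses in the short term
    ("build",    "restrain"): (5, 0),
    ("build",    "build"):    (1, 1),   # the attractor: everyone worse off
}

def best_response(options, their_choice, me):
    # Pick the option that maximizes my payoff given the other's choice.
    def my_payoff(mine):
        pair = (mine, their_choice) if me == 0 else (their_choice, mine)
        return payoffs[pair][me]
    return max(options, key=my_payoff)

options = ["restrain", "build"]
for theirs in options:
    print(f"If the other side plays {theirs!r}, "
          f"my best response is {best_response(options, theirs, 0)!r}")
# Both lines print 'build': defection dominates, so the system slides
# to (build, build) even though (restrain, restrain) pays more to both.
```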
Yeah, I of course agree completely with this as well. Effectively, maybe it really is true that every single important problem is a collective action problem of one kind or another.
We've got races to the bottom, we've got tragedies of the commons, we've got these things intermingled.
But once you start to see that, on the one hand, you could take from it a kind of reason for despair, because these are not easy problems to solve.
On the other hand, the discovery that effectively it's not a thousand distinct problems, it's a thousand variations on one theme, and that that theme is solvable.
In fact, we have, for example, Elinor Ostrom's work, which points to the fact that evolution itself has solved these problems many times.
That is hopeful.
So I don't know where you are in terms of how hopeful you find yourself about humanity's future, but I'm quite certain that you and I will align on the idea that, yes, if we could focus on the problem as it was, it's more tractable than many people think it is.
Yeah, I mean you mentioned hopefulness, you mentioned a bunch of good things there, that rather than a bunch of separate problems, you have a few problems with lots of expressions.
This was a big chunk of the kind of work I engaged in with a number of people, which you were part of: looking, when we inventory across all of the catastrophic and existential risks, at ones involving AI, problems with biotech and other kinds of exponential tech, environmentally mediated issues, and things that escalate to large-scale war.
Is there anything in terms of the patterns of human behavior that they have in common?
And so this kind of race to the bottom collective coordination thing is one way of looking at that.
But there are a few ways of looking at what we'd call the generator functions of catastrophic risk.
And it really is simplifying if you can say are there categorical solutions to those underlying generator functions?
They're hard, right?
They're hard.
Now, when you talk about hopefulness, I notice that the way that I relate to the optimism-pessimism thing is there's an optimism, which is almost like a choice.
To say, I'm going to have optimism that there is a solution space worth investigating even if I don't know what it is.
And if I'm wrong, it's the right side to be wrong on, as opposed to: there was a solution and I didn't look for it.
And then I'm going to have pessimism about my own solution.
So I'm going to try to red team my solution so that I can find out how they're going to fail before finding out how they fail the hard way in the world, but then not be devastated by the fact that that solution wasn't it and keep trying.
And I think that's kind of how the committedly inventive, innovative principle works.
So, again, we could do almost a one-to-one mapping of your schema onto mine.
I do this in terms of prototyping rather than red teaming and discovering it's wrong.
It amounts to the same thing.
When you say, actually, it's hard, you and I would have to define two different kinds of hard, probably.
There is hard to make function and stabilize, and there's hard to figure out what the solution is, and those are distinct.
We might find elements of both of them here, but let me just give maybe the canonical example of a solution to a game theory problem that everyone will recognize.
I divide, you choose.
Right?
It's the perfect solution to an obvious problem of choice and selfishness.
Right?
If there is cake and I know that you're going to get your choice and that you are incentivized to pick the larger piece, then I am incentivized to get the cut as even as possible.
And the point is, it neutralizes the concern.
We are looking for solutions of that nature.
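As a toy illustration of why the mechanism neutralizes the incentive problem, here is a minimal sketch of "I divide, you choose"; the scan over candidate cuts is purely illustrative:

```python
# Toy sketch of "I divide, you choose". The divider picks a split of a
# cake of size 1.0; the chooser then takes whichever piece is larger,
# and the divider keeps the remainder.

def chooser_takes(split):
    piece_a, piece_b = split, 1.0 - split
    return max(piece_a, piece_b)

def divider_keeps(split):
    return 1.0 - chooser_takes(split)

# Scan candidate splits: the divider's own share is maximized at 0.5,
# so pure self-interest pushes the cut toward fairness.
candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=divider_keeps)
print(f"Divider's optimal cut: {best:.2f}, keeping {divider_keeps(best):.2f}")
# -> Divider's optimal cut: 0.50, keeping 0.50
```

The design point is that no appeal to the divider's character is needed; the rule itself makes the selfish choice and the fair choice identical.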
Now, I don't think they are all that hard to understand in broad terms, in general.
There may be a lot of work on the discovery end, but when you see them, they end up being surprisingly simple.
My biggest fear is that it is very rare for people to understand how much danger we're in and why.
And therefore, what solution we are inherently looking for, and how urgently we should be seeking it.
In other words, as long as things function pretty well in the present and people get fed and housed, it is very easy for them to ignore the evidence that we are in grave danger, even if we are fat and happy and enjoying a, you know, a period of good luck.
Yeah, one of the interesting things in the study of previous civilizations is that none of the great civilizations of the past still exist.
They all failed even if they had been around for hundreds or thousands of years.
And so to understand that civilizations failing is the only thing that's ever happened, and then recognize that since World War II we have, for the first time, a fully globalized civilization, where none of the countries can make their own stuff, where the supply chains that are necessary to make the industrial and productive base are globalized, and that we're running up against the failure points of a globalized civilization: that's an important thing.
And what's so interesting is that all the previous civilizations that failed had so much hubris before their fall, because there had been so many generations where they had succeeded that they had forgotten that failing was a thing.
It was just some ancient myth.
It didn't feel real.
So we don't have an intuition for things not working or for catastrophe because we haven't experienced it, and our parents didn't experience it, and it's only myth.
And as a result, we just make bad choices.
And I mean, this is where studying history and studying civilizational collapse is really helpful.
And you can see that even as the system starts to fail in partial ways. You know, to me, it seems very clear that when we look at the George Floyd protests turning into riots over the summer, they were following the COVID shutdowns, and specifically all the unemployment from them.
Whenever the unemployment goes up, whenever the homelessness goes up, when the society makes it to where people who are trying can't meet their basic needs, then it gets a lot easier to recognize there's something wrong with the system as a whole and go against it.
But we also never had a point in human history where it was like, no matter how outraged I am, all I have to do is start scrolling for a second and I've forgotten everything.
Not to mention the fact that I'm probably on opioids and benzos.
And so that makes it to where the frog can keep boiling in hot water longer.
Yeah, so I often say that people are too comforted by the idea that people are always predicting the end of the world and it hasn't happened yet, because in fact it happens all the time, right?
The ends of these civilizations.
But it's even worse than the analysis that you and I appear to agree on here, because many of those civilizations that have ended, in fact most of them, the civilization, the organizational structure, ended, but the people didn't, right?
So the Romans continued on as other things.
The Maya are still with us, right?
They are not with us as the Maya.
And the point is actually in this case, the jeopardy that we are creating is to our very capacity to continue as a species, not just to our ability to continue with the structures that we have built.
So not only are we all in it together this time, but we're all in it in a way that we never have been before, or at least very rarely have been before.
And that really ought to have people's attention.
But you're right, the capacity to distract ourselves from it has never been better either.
I think something that I find particularly important when thinking about catastrophic risks now, relative to previous examples of civilization collapse, is that until World War II, we couldn't destroy everything.
Like, we just didn't have enough technological power for catastrophe weapons.
And so, you could fight the biggest, bloodiest war, violate all of the rules of war, and it would still be a local phenomenon.
And with the invention of the bomb, we had now the new technological power to actually destroy habitability of the planet kind of writ large, or at least enough of it that it was a major catastrophic risk.
And on the timescales that you think about as an evolutionary biologist, of how long humans have been here, and, you know, proto-humans, the time since World War II is no time at all to have really adapted to understanding what the significance of that is.
And the only reason we made it through was because we created an entire global order to make sure we never use that new piece of tech.
And throughout all of history, we always used the arms that we developed.
And so we made this whole Bretton Woods world and mutually assured destruction that said, OK, well, let's have so much economic growth so rapidly that nobody has to fight each other and they can all have more because the pie keeps getting bigger.
But that starts running up against planetary boundaries and interconnecting the world so much it gets so fragile that, you know, a virus in Wuhan shuts the whole world down because of supply chain, you know, interconnected supply chain issues.
So that thing can't run forever.
And with Mutually Assured Destruction, it was one catastrophe weapon and two superpowers, so Mutually Assured Destruction works; the game theory of it works.
Well, as soon as you start to add to that the bioweapons and the chemical weapons, the fact that bioweapons can be made very, very cheaply now with CRISPR gene drives and things like that, drone weapons, we have dozens of catastrophe weapons held by many dozens of actors, including non-state actors.
And that just keeps getting easier.
Mutual Assured Destruction can't be put on that situation.
It doesn't actually have a stable equilibrium.
So now we have to say, how do we deal with many, many actors having many types of catastrophe weapons that can't have a forced game theoretic solution with a history where we always used our power destructively at a certain point?
How do we deal with that?
It's novel, right?
Like, we have no precedent for that.
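One hedged way to make Daniel's point quantitative: even if each actor holding a catastrophe weapon has a tiny chance of use in any given year, the chance that nobody ever uses one collapses as actors and years multiply. The sketch below treats uses as independent and the per-actor probability as an assumed, purely illustrative number:

```python
# Illustrative sketch: compounding risk with many actors over time.
# p_per_actor_year is an assumed per-actor, per-year probability of
# catastrophic use; independence across actors is also a simplification.

def p_no_catastrophe(p_per_actor_year, actors, years):
    return (1.0 - p_per_actor_year) ** (actors * years)

for actors in (2, 20, 100):
    survival = p_no_catastrophe(0.001, actors, 100)  # 0.1%/actor/year, 100 yrs
    print(f"{actors:3d} actors: P(no catastrophe in 100 years) = {survival:.2f}")
# With 2 actors the odds look tolerable (~0.82); with 100 they round to
# zero. MAD's two-player equilibrium does not survive this multiplication.
```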
Yeah, it's absolutely novel.
I mean, when I became cognizant, you know, let's say 1975 is where I first started having, you know, coherent thoughts about the world.
That was only 25 years after the end of World War II, and it seemed like World War II was a very long time ago, but of course we've covered that distance twice since then.
So the tools with which we might self-destruct as a result of aggression are brand new.
And you're absolutely right.
The thing that kept us from using them, or prevented us from using them, that force disappeared.
It no longer exists.
There's no stable equilibrium here.
So what's protecting us is not well understood at best.
And then add to that all of the various industrial technologies that we are now using at a scale where they imperil us.
And I don't know about you, but I keep having the experience of a catastrophe happens and that's the point that I get alerted to some process that is very dangerous to humanity that I didn't know about until the catastrophe, right?
This has happened with the financial collapse of 2008, it happened with the triple meltdown at Fukushima, it happened at Aliso Canyon, I believe it has now happened with COVID-19 and gain-of-function research, and the point is it paints a very clear picture.
We do things because we can't see why we shouldn't. Or, and this is also a game theory problem, those who can see why we shouldn't, don't do them, and a certain number of people don't see why we shouldn't, and they do, and we all suffer the consequence of their myopia.
And so, on multiple fronts, we are playing, you know, we are rolling the dice year after year. And the people who can think independently, looking at that picture, looking at the series of accidents, looking at the hazard of something like a large-scale nuclear exchange without an equilibrium to prevent it, those people wake up. But the problem is, the mechanism to actually begin to steer the ship in a reasonable direction in light of these things doesn't seem to exist, for reasons I've heard you explore in many places.
So what does it mean, as far as you can tell?
There's one thing that you said that I think is worth us addressing first.
That some of the things that caused the catastrophe either were unknown, or those who knew them were game-theoretically less advantaged than those who were oriented on the opportunity rather than the risk, because those who orient towards opportunity usually end up amassing more resources, which equals more ability to keep moving stuff through.
There is an article, and a conversation in the LessWrong community, regarding catastrophic risk: mistake theory versus conflict theory.
What percentage of the issues come from known stuff, that we knew would cause a problem, or at least could cause a problem, and, game-theoretically, we went ahead with it anyway, versus stuff that we just couldn't have anticipated, or really didn't anticipate?
And I think it's fair to say these are both issues, right?
There's true mistake theory stuff, where we just couldn't calculate.
And then there's true conflict theory stuff: we knew that escalating this military capacity would drive an arms race where the other people would escalate too, and that, if we calculated it, there's an exponentiation on all arms races that takes us to a very bad long-term global situation.
One of the insights that I think is really interesting is that the fact that the mistake theory is a thing and everyone acknowledges it ends up being a cover, a source of plausible deniability for what's really conflict theory.
So we know there is an issue.
We pretend not to know.
We do a bullshit job of due diligence and risk analysis and then afterwards say it was a failure of imagination and we couldn't have possibly known.
I have actually been asked by companies and organizations to do risk analyses for them where they did not want me to actually show them the real risk.
They wanted me to check a box so they could say they did risk analysis so they could pursue the opportunity.
And when I started to show them the real risk, they're like, fuck, we don't want to know about that.
And so, when it comes to "could we have possibly factored that in":
a classic example I like to give, because it's so obvious in retrospect, is could we have known, in the development of the internal combustion engine, that making streetcars, which seemed like a very useful upgrade of having the horse internalized to the carriage, would end up causing climate change and the petrodollar and wars over oil and oil spills and mass ocean oil spills, whatever.
It seems like that would have been hard to know a hundred years in the future that it would do all that stuff.
And this is a classic example of also where we solve a problem and end up causing a worse problem down the road in the nature of how we do it, which you can't keep doing forever.
The story is, oh, we cause a worse problem, then that's the new market opportunity to solve that problem in the ongoing story of human innovation.
But when you start running up against it, the problems are actually hitting catastrophic points.
You don't get to keep doing that ongoingly.
You don't get to regulate after the fact, the way that we always have once a problem hits the world, with things that are catastrophic.
Could we have known?
Well yeah. In London, before that: one, there were already electric motors.
And two, people were already getting sick from burning coal, with lung disease from the burning of the hydrocarbons.
If we had tried to do good risk analysis, could we have?
Yeah.
But there's so much more incentive on who pursues the opportunity first.
And then there's this multipolar trap of, well, let's say we don't, the other guy is going to, so it's going to get there anyway.
So we might as well be the ones to do it first.
And that thing ends up getting us all the time, which is why collective action again comes in.
Well, it's really interesting how much of this is, again, parallel.
Heather and I used the example of somebody, you know, driving the first internal combustion engine and somebody chasing them down the street saying, don't do that, you'll screw up the atmosphere, right?
How crazy is that person running down the street saying that?
Because, you know, you would have to scale it up to such a degree before that's even a concern that that person seems like a nervous Nelly, but of course they would also have been prophetic.
But the other thing I want to ask you about is, you say that we have these two categories, where sometimes we could have known and we knew in fact, and we went ahead anyway, and then in other cases we didn't know, and something snuck up on us. And I want to clarify what you just said, because my understanding here is that if you dig deep enough, somebody always knew, right?
In general, there's some mechanism whereby the person who correctly predicted what was going to happen has been silenced, often they lose their jobs, they disappear from polite society, at the point they turn out to be right, their reputations are never resurrected as far as I can tell.
So, am I wrong that even in the cases where people who made the decisions may plausibly have not known that the reason they didn't know is because there's some sort of active process that when there's a profit to be made shuts down anything that could be the basis of a logical argument that we shouldn't do it?
I don't know that I'll say always.
I'll certainly say most of the time.
Um, and let's say there was a case where really nobody knew.
Usually, my guess is we probably could have, had we tried harder.
And then let's say there's going to be some unpredictable stuff; we know in complex situations there's going to be unpredictable stuff.
So you do the best job you can to forecast the risks and harms ahead of time.
But then you also have to be ongoingly monitoring.
Well, what would the early indicators that there's a problem be?
And how do we take it when we find that there's something we hadn't anticipated?
How do we factor that into a change of design?
Well, once the profit stream is going, and the change of design fucks up the profit stream, how does the recognition that there's a problem actually get implemented, when those who have the choice implementation power are not the people who are paying attention to those indices?
So yes, I would say, and it's easy to just say, Hey, yeah, there was some whack-a-doodle who was saying that there was going to be some risk, but there's always some whack-a-doodle saying there's going to be risk about every new tech, and if we really listened to all of them, we'd have no progress.
That's the story, right?
It's a bullshit story.
Could we?
Now, there's a collective coordination issue here, because it is fair to say... so, like, let's take AI weapons right now, specifically automated drone weapons.
There is an arms race happening on automated drone weapons.
And I think every general and military strategist knows that all of our chances of dying from AI weapons go up, theirs, their kids', everybody's, as we progress in that technology.
It's a bad technology, it shouldn't exist.
We should create an international moratorium that says nobody builds AI drone weapons.
That we don't want automated weapons with high intelligence out there.
But we can't make that moratorium because if one country doesn't agree, if one non-country, some non-state actor doesn't agree that has the technology, or let's say everybody agrees, how do we know they're not lying and developing it as an underground black project?
So either we don't even make the agreement, or we make the agreement knowing we're going to lie, defect in a black project, spy on their black project, and try to lie to their spies, who are spying on us.
And so it's like, how do you get around that thing?
Where if anyone does the bad thing in the near term, it confers so much game theoretic advantage that anyone who doesn't do it loses in the long term.
That's why it was that the peaceful cultures all got slaughtered by the warring cultures.
And so what ends up making it through is those who end up being effective enough at war.
That's an underlying thing we have to shift.
Because that has, as its eventual attractor space, self-destruction in a finite space.
Yeah, I totally agree.
And the, I think, fascinating thing, when you interact with the incarnate aspect of the process you just described, is that the people who are telling the lies that explain why we're doing something that we know is reckless often don't know that that's what they're doing, right?
They actually believe their own press, and instead of saying, well, yes, this is terrible, but we don't really have a choice, or somehow indicating that they know that, what you encounter is a true believer who thinks that this is safe.
And that's very frightening because it means that the mechanism at the point something begins to go awry to do anything about it doesn't exist, right?
Or at least it's not connected to the part that you can talk to.
And so, again, not too surprised to find overlap in our map.
I would say the process that you describe, where by the time you discover what the hazard is, there's a profit that has accelerated the process, I call the senescence of civilization, because it's actually exactly a mirror for the process that causes a body to senesce.
The evolution of senescence involves processes that have two aspects: one which benefits you when you're young, and another which harms you when you're old. And because many individuals don't live long enough to experience the harms in old age, they get away with it, from evolution's point of view, and evolution favors the trait in spite of the late-life harm. So those late-life harms accumulate, and that's the reason that we grow feeble with age and die. And that's an exact mirror for the way we've set up our economic and political system,
where any process that is profitable in the short term at the cost of having some dire implication for civilization later on, those processes are so ingrained by the time we discover what the harm looks like in its full form, there's nothing we can do to stop them.
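Bret's senescence mirror can be made concrete with a toy fitness calculation; the survivorship and effect-size numbers below are assumptions for illustration only:

```python
# Toy model of antagonistic pleiotropy, the logic behind the senescence
# mirror. All survivorship and effect numbers are made-up illustrations.

# Fraction of individuals still alive (and reproducing) at each age class.
survivorship = {"young": 1.00, "middle": 0.50, "old": 0.05}

# Reproductive effect of the trait at each age: helps young, hurts old.
trait_effect = {"young": +0.10, "middle": 0.00, "old": -0.50}

# Selection "sees" each effect discounted by how many carriers reach
# that age, so the late harm is weighted by almost nothing.
net_selection = sum(survivorship[age] * trait_effect[age]
                    for age in survivorship)
print(f"Net selection on the trait: {net_selection:+.3f}")
# -> +0.075: favored despite the large late-life harm, because almost
# nobody survives to experience it. Substitute short-term profit for the
# early benefit and civilizational damage for the late cost, and the
# same arithmetic applies to the economic system described above.
```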
Okay, so let's use two really important current examples.
So let's take Facebook and social media and the way they've affected the information commons and the epistemic commons writ large.
So we know the nature of the algorithms, optimizing for time on site while being able to factor in what I pay attention to, the whole Tristan Harris story, makes it almost...
Very few people wake up and say, I'd like to spend six hours on Facebook.
And so I'm going to spend more time on Facebook if I kind of leave executive function, rational brain, and get into some kind of limbically hijacked state where I forget that I don't want to spend my whole day on Facebook.
And so time on site maximization appeals to existing bias and appeals to limbic hijacks.
So, if I piss off and scare, and elicit sexual desire and whatever, of the whole population, while doubling down on their bias and creating stronger in-group identities associated with out-group identities, the algorithm's optimized.
Well, it is an AI of the power that beat Kasparov at chess, beating us at the control of our attention.
So we can see that the right got more right, the left got more left, the conspiracy theories got wackier, the anti-conspiracy-theory people became more upset at the idea that a conspiracy could ever exist.
Basically, everybody's bias doubles down and they all move apart from each other faster.
Well, society doesn't get to keep working.
That, that is a democracy killer, right?
That's an open society killer.
There's a reason China controlled its internet: if you don't want your society to die, you have to be able to have some shared basis of thought.
So we can say, and the story is, oh, we didn't know that was going to happen.
Well, you go back and look, and guys like Jaron Lanier were at the very beginning of Facebook and Google, whatever, saying, hey guys, this ad model is going to fuck everything up.
You can't do the ad model thing.
You've got to pay for subscription or some other kind of thing.
And it was like, shut up, dude.
Or just don't even engage in the conversation.
And then they get to say afterwards, failure of imagination.
But now, how do you regulate it when those corporations are more powerful than countries?
Because the regulation is going to happen in a court where the lobbyists have to be paid for by somebody, right?
So who are the lobbyists paid for by?
And it has to be supported by popular attention.
And those who can control everybody's attention can also affect what is in popular attention.
So this is a very real example where we know the harms were known.
And it actually got large enough that it killed the regulatory apparatus's capacity.
Absolutely.
In fact, again, this is going to be another alignment of our maps.
So what I've been playing with is the idea that we are incorrect in imagining that people necessarily want their expectations flattered.
That people actually may like to be challenged, but that it's inconsistent with the well-being of advertisers.
The fact is, because advertising is only a tiny fraction informative and is mostly manipulative, you have to be in your unconscious autopilot phase in order for it to cause you to buy a car you wouldn't have otherwise bought, or buy different deodorant than you would otherwise buy.
And so, the point is, in order for us to be... The thing gets paid for by advertising.
In order to be useful to advertisers, we have to be unconscious.
And the only way to keep us unconscious is not to challenge us.
Basically, to tell us what we think we already know, rather than what we need to know.
And so, they're lulling us into this, even though we would still be interested in the platforms, if we weren't being advertised to.
But we would be interested in having more important conversations there, which is really, in some sense, what the growth of the heterodox podcast space is about.
Oh my goodness.
Okay.
There's two directions I want to go at the same time.
I'll just pick one.
There's a reward circuit on exercise and there's a reward circuit on junk food, right?
And they both have a dopaminergic element and reward circuit, but they're of a very different kind.
And the reward circuit on exercise is that it actually feels like shit at first.
And it's hard, but your baseline of happiness measured in whatever, dopamine, opioid access, whatever, gets better over time.
And then you start actually feeling better over time, but not quickly.
This is another place where temporal myopia ends up mattering because there's a delayed causation on the healthy one and no delayed causation on the unhealthy one.
So I start getting the reward circuit on exercise when I start seeing results and then I want to push hard and then I'm willing to actually go against entropy and put energy into the system so the energy grows.
Whereas the chocolate cake I get a reward instantly and I don't have to apply any energy.
But as I do it, my baseline gets worse. And this is, like, the addiction versus health reward circuit direction.
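The exercise-versus-cake asymmetry is essentially a discounting problem, and a minimal hyperbolic-discounting sketch makes it visible; the discount rate k and the reward magnitudes below are assumed, illustrative numbers:

```python
# Minimal sketch of why the instant reward keeps winning.
# Hyperbolic discounting: perceived value = reward / (1 + k * delay).
# k and the reward magnitudes are illustrative assumptions.

def discounted(reward, delay_days, k=0.1):
    return reward / (1.0 + k * delay_days)

cake     = discounted(reward=10, delay_days=0)    # instant, modest payoff
exercise = discounted(reward=100, delay_days=60)  # large payoff, ~2 months out

print(f"cake now:         {cake:.1f}")     # 10.0
print(f"exercise payoff:  {exercise:.1f}") # 14.3
# Even a 10x larger delayed reward barely beats the instant one at k=0.1;
# raise k (steeper temporal myopia) and the chocolate cake wins outright.
```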
And the same is true for scrolling Facebook compared to reading an educational book.
At the end of a month of reading the educational books, my life feels better.
I feel more proud of myself.
At the end of a month of scrolling Facebook, I'm like, what the fuck am I doing with my life?
And yet that one will keep winning, for the same reason that 70% of our population is overweight and over a third of them obese.
But my only hope is this: not everyone who has access to too many calories is obese.
Right?
Like, there are some people who figured out, hey, that's a reward circuit I don't want to do, and I'm going to exercise and I'm going to not eat all of the fat, sugar, salt that evolution programmed me to have a dopamine hit for, because it's a shitty direction.
Now, we need to get the number of people who actually have taken some sovereignty over their fitness and well-being, in the presence of the cheaper reward circuit, up to everybody, because right now, obviously, being overweight is one of the main causes of death in the developed world.
But we have to then apply that to the even more pernicious hypernormal stimuli.
Because salt, fat, sugar are hypernormal stimuli in the gustatory system, right?
We have to apply that to the sensory system that's coming in through things like social media.
And that means less social media, less entertainment, more study.
And it doesn't have as fast a reward circuit.
It just doesn't.
But it has a much better longer-term reward circuit where your baseline goes up.
And this is where enough mindfulness and enough discipline have to come in.
Because otherwise, the orientation of the system is that it's more profitable for corporations for me to be addicted because you maximize lifetime value of a customer through addiction.
It's an asymmetric war because they're a billion or trillion dollar company and I'm me.
So how do I win that asymmetric war where it's in their profit incentive, whether it's McDonald's or Facebook or Fox, for me to be maximally addicted?
I have to recognize, holy fuck, right?
Like, I actually have no sovereignty, even if I claim to live in a democracy, against these autocracies who want to control and manipulate my behavior in a way that is net negative for me holistically while having the plausible deniability that I'm choosing it because they're coercing my choice.
So I have to get afraid of that enough that I mount a rebellion, right?
A revolutionary war in myself against those who want to drive my hypernormal stimulus reward circuit.
So the whole, how can everybody become more immune to the shitty reward circuits and notice them and become immune to them?
And how can they become more oriented to the healthy reward circuits?
That's another way of talking about what we have to do writ large.
Yeah, that's beautiful.
Completely agree.
In fact, it dovetails with another thought that... First time I thought it, I thought it was original, and then having said it, I discovered, lots of people had said it before me, that there's a very close relationship between wisdom and delayed gratification.
That it's the ability to bypass the short-term reward circuit in order to get to something deeper and better that is what wisdom is about.
You didn't include on your list what I consider to be maybe one of the most important instances of the failure that you're talking about, which is sex.
There's a very direct comparison, at least for males who are wired in a normal fashion for a straight guy, or for women who are toying with that same programming, of which I believe there are many.
But the comparison between casual sex, which is certainly... We are, as males, wired to find that a very appealing notion because it's such a bargain.
If you can produce a baby where you're not expected to contribute to its raising, that's a huge evolutionary win.
And then you have to compare that to the rewards of a deep romantic lasting relationship with commitment.
And the problem is that the deep lasting relationship stuff has a hard time winning out over the instant gratification thing if the instant gratification thing is at all common.
And so that's really screwing up people's circuitry with respect to interpersonal relationships and bonding. And I have a sense that it is also, in a way that's much harder to demonstrate, contributing to the derangement of civilization, that many fewer people have such a relationship.
You know, it's not like marriage is easy, right?
It's not; it's super complex.
But having somebody who you can fully trust, somebody who you've effectively, you know, fused your identity with to the level that they share your interests, and, you know, they may be the only person who'll tell you what you need to know at some points, the fact that many people are missing that, I think, is deeply unhealthy.
Yeah, so I would say that market-type dynamics benefit from exploiting the shitty reward circuits across every evolved reward circuit axis.
And so from an evolutionary point of view, survive and mate are the things that make your genes get through primarily.
So we've mentioned the survive one, the calorie one, earlier.
So in an evolutionary environment, I could get plenty of green leafy things in many environments.
It was very hard to get enough fat, enough sugar, and enough salt.
Those were evolutionarily rare things.
So more dopaminergic hits on those.
So fast food ended up figuring out how to just combine fat, salt, sugar, with no other nutrients, with maximized ease of palatability and textures.
And there's like a scientific optimization of all of the dopamine hit with none of the nutrients.
So you can actually be obese and dying of starvation, right?
And what that is to nutrition, where you should have a natural dopaminergic hit on something that has nutrients built in for, you know, adaptive purposes, is what porn and online dating are to intimate relationship.
It's what Facebook and Instagram are to tribal bonding.
It's: how do we take the hypernormal stimuli part of it out, separate it from the nutrients, and make a maximally fast addictive hit that actually has none of what requires energetic process?
Yeah.
I've called this the junkification of everything, and it is directly an allusion to junk food, where we can most easily see this.
But the idea is you will be given a distilled... So if I can rephrase what you said in terms that are more native to me.
When you are wired to seek, you know, the umami taste that tends to be very tightly correlated with meat, you will tend to get a lot of nutrients along with it in the ancestral context.
In the modern context, we can figure out how to just trip the circuit that tells you to be rewarded, and it's no longer a proxy for the things that it was supposed to lead you to.
And, as you just said, you can now look at that across every domain where you have these dopamine rewards.
And understand why people are, you know, living in the world that Pinker correctly identifies we are living in, where we have just a huge abundance, and yet are so darn unhealthy, certainly unsatisfied, right?
It explains that paradox of being better off in many ways than any ancestor could have hoped to be, and yet being effectively ill across every domain.
Yeah, I will say something about this that's important.
I mean, briefly, the fact that life expectancy started going down in the last five years in the US and certain parts of the developed world is really important to pay attention to.
But the deeper point I want to make is that the Hobbesian view on the past, I think, is one of those, like, mistake theory, conflict theory things.
I think the dialectic of progress is such a compelling idea, and we're oriented to the opportunity and not the risk.
In the same way we don't want to look at the risk moving forward that would have us avoid an opportunity, we don't want to look at good things in the past, and we don't want to look at good things of cultures that we want to obliterate.
So we wanted to call the Native Americans savages so that we could, of course, emancipate them, historically. And we want this Hobbesian view that people had brutish, nasty, mean, short lives in the past, so that we don't have to face the fact that advanced civilizations failed, and that that is what our own future most likely portends.
I think that is a convenient wrong belief system in a similar way.
Well, I hope you don't hear me doing that.
I certainly don't.
I just had to say it.
You have to say it.
So it's clear to our listeners.
Well, I appreciate you doing that.
I did want to go back to a couple of things you said.
And, you know, of course, this happens every time you and I talk, where every thread, you know, takes on multiple possible directions we could go, and there's no way to cover them all.
But in any case, you pointed to survival and mating being the primary mechanisms to get your genes into the future.
And I want to point out that this is one of these places where our wiring, which is biased in the direction of those places where our ancestors had agency that was meaningful,
upends us. And in fact, this is something I think you and I are struggling against as we try to convince people of the kind of danger we're in, and the necessity to upgrade our system, you know, before we run into a catastrophe too big to come back from. And so, in any case, within your population, survival and mating make sense as an obsession.
But probably the biggest factor in whether or not your genes are around a hundred generations from now is whether the population that you were a part of persists.
And so, you know, my field has done a terrible job with this.
We have gotten pretty good at thinking about individual level adaptation and fitness.
And, you know, when I say lineage, people still don't know what I'm talking about, and they're confused about why I'm focused on it.
And my sense is, it's like two components to an equation. And, you know, you're either aware of the lineage thing but you misunderstand it as group selection, or you're not aware of the lineage thing and you think group selection is a fiction and it's all about individuals. And, you know, both of those are ways to misunderstand the point.
I'm so happy to hear you saying this, because this is a conversation where I would love to go deeper and understand the distinctions between lineage and group selection the way that you see them.
But if I just even take the concept of group selection as opposed to just individual selection and take a species like sapiens and say there was no such thing as an individual that got selected for that was not an effective part of a group of people.
And the tribe, the band, was the thing that was being selected for.
So there was fundamentally kinds of pro-social behavior that were requisite.
But then we get bigger than the Dunbar number only, like, yesterday, evolutionarily.
And that whole, the whole evolutionary dynamics break because that pro-social behavior only worked up to that scale when everybody could see everybody and knew everything, right?
Like, when we start looking at how we solve collective action problems, you start realizing: well, if we make some agreement field as to how nobody does the near-term game-theoretically-advantageous, long-term bad thing relative to each other, there have to be transparency mechanisms to notice it.
So the beginning of defecting on the whole, defecting on the law, the agreement field, the morals, is the ability to hide stuff and get away with it.
Well, you can't hide stuff in a tiny tribe very well.
Even if you can do it once, twice, ten times, sooner or later, if hide is your instinct, you'll be revealed, and the cost will exceed what you've built up by pulling it off however many times you've done it.
And so there's a forced transparency in that smallness of evolutionary scale.
When you start to get up to a large scale, and now there have to be systems where everybody isn't seeing everyone, and I'm smart enough, I can figure out how to play it and fuck the whole while pretending that I'm not, hiding the accounting of it and getting ahead.
That's the evolutionary niche for corruption, for parasitic behavior.
So one way I would describe, and as you've described on here before, if there's a niche with some energy in it, it's going to get exploited, right?
We have to rigorously close the evolutionary niches for human parasitic behavior.
Humans parasitizing other humans.
And the first part of that is a kind of forced transparency: if someone were to engage in that, it has to be known.
And now the question is, all the versions of that we've explored at scale look like dreadful surveillance states.
So how do you make something that doesn't look like a dreadful surveillance state?
That also doesn't leave evolutionary niches for parasitic behavior that ends up rewarding and incenting sociopathy.
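A rough way to put numbers on why "everybody sees everybody" breaks with scale: pairwise relationships grow quadratically with group size, so the fraction of the group any one member can actually observe collapses past a Dunbar-like limit. The 150-tie monitoring capacity below is an assumed, illustrative number:

```python
# Rough sketch: why forced transparency stops working at scale.
# monitoring_capacity is an assumed Dunbar-like limit on stable ties.

def pairwise_ties(n):
    # Number of distinct pairwise relationships in a group of n people.
    return n * (n - 1) // 2

def observed_fraction(n, monitoring_capacity=150):
    # Fraction of the other members any one person can plausibly track.
    return min(1.0, monitoring_capacity / max(1, n - 1))

for n in (50, 150, 10_000, 1_000_000):
    print(f"group of {n:>9,}: {pairwise_ties(n):>15,} ties, "
          f"one member tracks {observed_fraction(n):6.1%} of the others")
# In a band of 50, everyone tracks everyone: defection is hard to hide.
# In a society of a million, each member sees ~0.0% of the others,
# which is exactly the niche for hidden parasitic behavior.
```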
Absolutely.
So, a bunch of different threads.
One, the Elinor Ostrom work is important because it does point to the fact that you can scale these mechanisms up.
In fact, selection has scaled up these enforcement mechanisms beyond a tiny number of people who know each other intimately.
Now, it hasn't scaled them way up, but it's proof of concept in terms of the ability to get there, and it's a model of what these systems might look like.
The other thing though, your focus on corruption, I think, is absolutely right.
And one way to just detect how stark the difference is, is the recognition of how many times in an average day you encounter bullshit.
Right?
In other words, how many advertisements do you encounter in an average pre-COVID day, let's say?
Right, these are all cases where somebody you don't know, or almost all of them are cases where somebody you don't know is attempting to manipulate you into spending resources differently than you would otherwise spend them.
So this is an overwhelmingly dishonest interaction with your world.
There would have been some dishonesty for an ancient ancestor.
You know, obviously there are creatures that attempt to look like what they are not.
But in general, one could see the world as it was, and the deception was the exception, not the rule.
And in some sense, we live in a sea of bullshit, right?
And we're so used to it that we don't even recognize that that's abnormal, that it is the result of a gigantic niche that has opened up as a simple matter of scale, as you point out.
And that restoring a state where you can actually trust your senses, you can, by and large, trust the people who you're interacting with to inform you rather than lie to you, would be a huge step towards reasonability.
Oh, I really hope that we follow all the threads here, because this is getting so close to the heart of what we have to do.
As scale increases, the potential for asymmetry increases.
And as the asymmetry increases, the asymmetric warfare gets more difficult to deal with.
So let's think about this in terms of market theory.
Let's think about an early hypothetical idealized market, like literally people just took their shoes and their cows and their sheep and their service offerings to a market and they looked at exchanging them.
And then because trading cows for chickens is hard, we have some kind of currency to mediate a barter of goods and services.
But we're talking about real goods and services.
Maybe there's two people.
Maybe there's five people that sell shoes.
There's not 5,000 of them.
And I can go touch the shoes myself.
I can talk with them.
I can see what the price is.
And there are no hypernormal stimuli of advertising.
It's like somebody yelling from his thing.
So there's a symmetry between the supply and the demand side, right?
The supply side is a guy or a few guys selling something, and the demand side is a person or a family trying to buy something, and they can kind of tell each other's bullshit to some degree of symmetry.
Buyer beware becomes an important idea.
But now, when this becomes Nike, and the other side is still one person, there's still a symmetry between supply and demand in aggregate, meaning the total amount of money flowing into supply equals the total amount flowing out of demand.
But this side is coordinated and this side isn't.
You don't have something like a labor union on all purchasers where it's like all Facebook users are part of some union that puts money together to counter Facebook and lobbying and regulation.
You have Facebook as, like, a close to a trillion-dollar organization against me as a person. And I'm still the same size person that I was in those early market examples, but there wasn't, like, a trillion-dollar organization then.
And now, when that happens, manufactured demand kills classical market theory, which is the idea of why a market is like evolution, right?
Like some evolutionary process, the idea is that the demand is based on real people wanting things that will actually enhance the quality of their life.
And so that creates an evolutionary niche for people to provide supply and then the rational actor will purchase the good or service at the best price and of the best value.
But of course, we don't get that. You look at Dan Ariely and all the behavioral economics saying that homo economicus, the rational actor, doesn't exist.
We end up making choices based on status that's conferred with a brand, based on the compellingness of the marketing, based on all kinds of things that are not the best product or service at the best price.
But you also get that I want stuff that will not increase the quality of my life.
I desperately want shit because the demand was manufactured into me.
So it's not an emergent, authentic demand that represents collective intelligence.
It's a supply side saying, I want to get them to want more of my shit.
And I actually have the power to do that using applied psychology.
And as soon as you get to split testing and the ability to AI split test a gazillion things, we're talking about radically scientifically optimized psychological manipulation for the supply side to create artificial demand and then be able to fulfill it.
And most of that ends up being of the type that is actually bad for the quality of life of the people, but you have the plausible deniability that they're choosing it.
Hey, I don't want to be patriarchal and control what they're doing.
The people are choosing it.
I'm just offering the source of supply that they're wanting.
Bullshit.
That's like offering crack to kids, and then when they come back for more of it, saying, hey, they're choosing it.
Anyway, that was one of the threads I wanted to address.
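[Editor's note: a minimal Python sketch of the scientifically optimized split testing Daniel describes here. Everything in it is an illustrative assumption rather than anything from the conversation: three invented ad variants with invented click rates, and a simple epsilon-greedy rule that shifts traffic toward whichever variant extracts the most clicks. The point is only that nothing in the loop represents the buyer's wellbeing.]

```python
import random

random.seed(0)
# Invented ad variants and "true" click rates (assumptions, not data).
true_click_rate = {"ad_A": 0.02, "ad_B": 0.05, "ad_C": 0.03}
shown = {v: 0 for v in true_click_rate}
clicks = {v: 0 for v in true_click_rate}

def observed_rate(v):
    return clicks[v] / shown[v] if shown[v] else 0.0

for impression in range(100_000):
    if random.random() < 0.1:      # explore 10% of the time
        variant = random.choice(list(true_click_rate))
    else:                          # otherwise exploit the current leader
        variant = max(true_click_rate, key=observed_rate)
    shown[variant] += 1
    if random.random() < true_click_rate[variant]:
        clicks[variant] += 1       # another "manufactured" choice gets made

for v in true_click_rate:
    print(v, shown[v], round(observed_rate(v), 4))
# Nearly all impressions converge on ad_B, the most persuasive variant,
# with no variable anywhere measuring whether the purchase helps anyone.
```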
Well, I love it.
Back in, must have been 2013 when Game B was actually a group of people who met in a room and talked about things, one of the points that I was making in that context was this inherent asymmetry around unionization.
And that the problem is unions have gotten a bad rap because of the tight association cognitively that we have with labor unions.
Right?
We think of unions and labor unions as synonymous, but union is actually a category.
It's potentially a very large category, and effectively, management always has the benefit of it.
The question is, will workers have a symmetrical entity?
Right?
That's the labor case, but you can make the same case with respect to, you know, banking.
Credit unions don't work that way.
They're very bank-like.
But if they were structured in such a way as to actually, you know, unionize the people who utilize the bank, they could be highly effective. They could be a complete replacement for the insurance industry, which doesn't even make sense in a market context.
But as a risk pool, you could do a very effective job.
So anyway, yes.
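[Editor's note: a minimal sketch of the risk-pool arithmetic behind this point. The loss size and probability are invented assumptions; the only claim is the standard one that pooling independent risks makes each member's cost predictable.]

```python
import random
import statistics

random.seed(1)

def yearly_loss():
    # Invented assumption: a 1% chance of a 50,000 loss in a given year.
    return 50_000 if random.random() < 0.01 else 0

# Facing the risk alone: same average cost, wildly unpredictable outcome.
solo = [yearly_loss() for _ in range(10_000)]
# Each member's share of a 1,000-person pool, simulated 2,000 times.
pooled = [sum(yearly_loss() for _ in range(1_000)) / 1_000 for _ in range(2_000)]

print("solo:   mean", statistics.mean(solo), "stdev", round(statistics.stdev(solo)))
print("pooled: mean", round(statistics.mean(pooled)), "stdev", round(statistics.stdev(pooled), 1))
# The means match, but the pooled standard deviation shrinks roughly with
# the square root of the pool size: the pool needs no market logic, only
# enough members and honest administration.
```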
The question is how do you scale up the collective force, and especially how do you do it in light of the fact that the entities that are already effectively unionized see it coming, and they disrupt it with all of their very powerful tools.
And so... Well, anyway, go ahead.
I want to say the beginning of an answer to that because I think it brings us to what you've been largely exploring in this show of late of the breakdown of democracy and open society and what do we do about that and how that relates to breakdown in culture and breakdown in market.
We can look at the relationship between those three types of entities.
So a way of thinking about the architectural idea of a liberal democracy, and why, say, the founders of this country set it up not as a pure laissez-faire market but as a state that had regulatory power, is this idea: a market will provision lots of goods and services better than a centralized government will.
So let's leave the market to do the kind of provisioning of resource and innovation that it does well.
But the market will also do a couple really bad things.
It will lead to increasing asymmetries of wealth, inexorably.
This is what Piketty's data showed, but it's just obvious.
Having more money increases your capacity to access financial services, and you make interest on debt and compounding interest on wealth.
And so you end up getting a power law distribution of wealth.
So then a few people in just the market dynamic would be able to have way outsized control over everyone else against everyone else's interests.
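[Editor's note: a minimal simulation of the compounding dynamic Daniel describes. The growth numbers are invented assumptions; the point is only that multiplicative returns alone, with identical agents and no differences in skill, produce the heavily skewed distribution he is pointing at.]

```python
import random

random.seed(2)
wealth = [1.0] * 10_000                # everyone starts exactly equal
for year in range(50):
    # Each year, wealth is multiplied by a random lognormal growth factor.
    wealth = [w * random.lognormvariate(0.0, 0.3) for w in wealth]

wealth.sort(reverse=True)
print(f"share held by the top 1%: {sum(wealth[:100]) / sum(wealth):.1%}")
# After 50 periods a small fraction of identical agents holds a large
# share of the total, purely from compounding multiplicative noise.
```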
And the market creates opportunities for things that are really bad.
We all know that we want there to be a thing called crime where even though there's a market incentive for child sex trafficking and whatever else, we say, no, we're going to create some rule of law that binds that thing and not just have market drive it.
So the idea is that we create a state that we actually give a monopoly of violence to.
So it has even more power at the bottom of the stack of what power is than the top of the economic power law distribution.
So the wealthiest people and the wealthiest corporations will still be bound by this rule of law.
And the rule of law is an encoding of the collective ethics of the people, right?
The ethics are the basis of jurisprudence.
And there is some kind of democratic process of getting to say: what is it that we consider the good life and important, that we want to encode in rule of law?
We give that a monopoly of violence.
And really, then the goal of the state is to bind the predatory aspects of market incentive while leaving the market to do the things that it does well.
But pretty much every law is where someone has an incentive to do something, which is a market type dynamic that is bad for the whole enough that we make a law to bind it.
Okay, so the purpose of a state is to bind the predatory aspects of a market.
That only works as long as the people bind the state.
And the people bind the state if you have a government of, for, and by the people: an educated populace, with a quality of education that makes them capable of understanding all the issues upon which we are governing and making law, and a fourth estate where the news that they are getting is of adequate quality and unbiased enough that they're informed about what's currently happening.
If you think about it, that's what a republic would require.
And you realize that both public education and the fourth estate have eroded so badly for so long.
It's not that we're close to losing our democracy.
It's dead.
We don't have a republic.
We have a permanent political class and a permanent economic lobbying class.
And the people aren't really actively engaged in government in any way at all, beyond maybe jury duty now and again if they can't get out of it.
And if the people, to be engaged in government in any meaningful way, had to tell the DOE what they think should be done about grid security and energy policy, or tell the DOD what should be done about nuclear first strike policy, or tell the Fed and Treasury what they think about interest rates, they have no fucking idea how to have a governance of, for, and by the people.
They don't have that education.
They don't have the media basis.
So if the culture, if the people can't check the state, then the state will end up getting captured by the market.
And so you'll end up having the head of the FDA be someone who ran a big drug or a big ag company and the head of the DOD being somebody who ran Lockheed or some military industrial complex manufacturer.
You'll have lobbying, just straightforward lobbying, which gets paid for by somebody.
Who's it paid for by?
Those who have the money to pay for lots of lobbyists.
And so then you end up getting a crony capitalist structure, which is worse than just an evil market, because now it has the regulatory apparatus of rule of law and monopoly of violence backing up the market dynamics.
So then we say, OK, well, what do we do here?
And we see that civilizations fail towards either oppression or chaos, right?
Those are the two fail states.
They fail towards oppression if trying to create some coherence happens through a top-down forcing function.
They fail towards chaos if not having enough top-down forcing function.
Everybody kind of believes whatever they want, but they have no unifying basis for belief.
And so then they will end up going into, they'll balkanize, they'll tribalize, and then the tribal groups will fight against each other.
So either we keep failing towards chaos, which we can see is happening in the West, and in the US in particular, right now.
And then there's China, which is happy to do the oppression thing, and oppression beats chaos in a war, right?
Because it has more ability to execute effectively, which is why China has built high-speed trains all around the world when we haven't built a single one in our country.
So either we lose to China in the 21st century and oppression runs the 21st century, or we beat China at being China, meaning beat it at oppression, or it's like, fuck, those are both failure modes.
What is there other than oppression or chaos?
Order that is emergent, not imposed, which requires a culture of people who can all make sense of the world on their own and communicate effectively to have shared sense-making as a basis for shared choice-making.
The idea of an open society is that some huge number of people can all make choices together.
A huge number of people who see the world differently and are anonymous to each other, not a tribe.
That was an Enlightenment-era idea, right?
Born out of the idea that we could all make sense of the world together, born out of the philosophy of science and the Hegelian dialectic.
That we could make sense of base reality and that we could make sense of each other's perspective, dialectic, find a synthesis, and then be able to have that be the basis of governance.
So what I think is: this is not an adequate long-term structure, because we can talk about why tech has made nation-state democracies obsolete, and it's just not obvious yet, but it has.
But as an intermediate structure, the reboot of the thing that was intended has to start at the level of the people, at culture, and at collective sense-making and collective good-faith dialogue.
Because without that, you can't bind state.
Without that, you can't bind market incentive.
Okay, I love this riff of yours, okay?
I think there's a tremendous amount that's really important, and the synthesis is super tight.
I know people will have a little bit of trouble following it, but I actually would advise them to maybe go back through it and listen to it again, because it's right on the money as far as I'm concerned.
There's one place where I wonder if it doesn't have two things inverted.
Okay.
So you talk about the two characteristics that are necessary in order for, uh, what did you call it, liberal democracy or whatever it was that you used as a moniker, to function.
One of them had to do with the idea that the state was big enough to bind the most powerful and well-resourced actors.
And the second was that the people have to be capable of binding the state.
Now, I understood you to say that what failed first was the people's ability to bind the state.
Is that correct?
I'm saying that's at the foundation of the stack that we have to address.
The failure was recursive.
So as I see it, what happened was... well, the fact is that there is always corruption; it's impossible to drive it out completely.
Exactly.
The corruption itself enlarges the loopholes and becomes subtle enough that it's hard to see directly.
The most powerful actors suddenly got an infusion of power and we could trace down the cause of it.
But let's just say somewhere in recent history the most powerful actors became more powerful than the state.
And what they did with that power was they unhooked the ability of the state to regulate the market.
I believe the reason for this was that each individual industry had an interest in having its regulations removed in order to create a bigger slice of the pie for itself. And so effectively what you had was each industry agreeing to unregulate every other industry. Like, say I have a pharmaceutical company, right, and you're an oil company, and you want to make money?
But you have to be able to fuck up the atmosphere to do it, and I want to make money giving people drugs that they shouldn't have and, you know, corrupting the FDA. Then we'll partner. And what you got was many industries partnering to unhook the ability of the state to bind the market.
But one of the things that they had to do in order to make that work was they had to eliminate the ability of the people to veto, right?
And so this is where we get this incredibly toxic duopoly that pretends to do our bidding and pretends to be, you know, fiercely opposed, the two sides of it.
But in fact, the thing they're united about is not allowing something else to compete with them.
For power. So it's, you know, the wolf in sheep's clothing is in charge of the thing that is supposed to be protecting us from wolves. In any case, we don't have to go too deep there.
But this is actually super important.
Go for it.
This is related to the thing we said about how, as the market as a whole gets bigger, the individual consumer stays an individual consumer, but the supply side, the company, gets much larger.
As that happens, the asymmetry of the war between them, of the game theory between them, gets larger, and so manufactured demand becomes a more intense thing.
Well, the same thing is true in terms of the market capacity to influence the government and the market government complex's capacity to keep the population from getting in the way of the extraction.
And so there's a heap of mechanisms that happen.
And there's not like five guys at the top who are coordinating all this.
It's a shared attractor or incentive landscape that orients it.
- Yeah, largely emergent. - Yeah, and where there are people conspiring, it's because there's shared incentive and capacity to do so.
So the conspiracy is itself an emergent property of the incentive dynamics, which then in turn doubles down on those types of incentive dynamics and makes things like that succeed.
So, okay, let's take a couple examples.
If people haven't read it, they should all read at least the Wikipedia page on public choice theory, a school of libertarian thought that critiques why representative democracy will always break down, that the founders of the U.S.
basically said this, which is... All right, we'll come back to symmetry for a moment.
At the time that we were creating this structure of liberal democracy, the size of choices and the speed of them were smaller and slower, such that the town hall was a real thing.
And when the town hall is a real thing, the coupling between the representative and the people is way higher, right?
Because the people are actually picking representatives in real time that are really representing their interests and they get to have a say in it.
There was a statement by one of the founders of the country that voting is the death of democracy, because the idea is we should just be able to have a conversation that is good enough that we come up with a solution and everyone's like, that's a good idea.
If we can't, then we vote.
But that means that some big percentage, close to half the population, feels unhappy with the thing that happened.
And so it's a sublimated type of warfare.
It's a sublimation of violence, but that leads to a polarization of the population.
And so the goal is not voting.
Voting is the last step, for when we couldn't just succeed at a better conversation and speccing out: what is the problem?
What are the adjacent problems?
What are the design constraints of a good solution?
Can we come up with a solution that meets everybody's design constraints as best as possible?
I disagree with this at one level, as I'm sure you will as well, or I'm not sure, but I suspect, but I love something about the formulation that voting is itself a kind of failure mode, right?
That ideally speaking, if you had a well-oiled machine, if you had a, you know, military is the wrong analogy here, but let's say you had a ship of people fighting impossible odds to make it back to safe harbor, right?
The point is you really shouldn't want a system in which you're voting between two different approaches to the problem.
You should want a discussion in which everybody by the end is on board.
And if you tried to do that in civilization, we'd never accomplish anything, right?
You effectively have to give the majority the ability to exert a kind of tyranny over the minority in order to accomplish the most basic stuff.
But that's because the system is incapable of doing what a better system would do, which is to say, this is the compelling answer and you're going to know why by the time we decide to do it.
Wait, there's a symmetry here between the conversation that we had about the market incenting people who focus on the opportunity and not the risks.
That is, it actually suppresses those who look at the risk.
Once you say, hey, there's always going to be somebody talking about a risk that isn't going to happen.
We'll innovate our way out.
And that becomes the story.
Now you have plausible deniability to always do that.
Once you say, there's no way to get everybody on the same page.
We can't do that.
It'd be too slow.
Now I don't even have any basis to try.
Right.
And so I don't ever even try to say what is it that everyone cares about relative to this.
So I don't even know what a good solution would look like in order to craft a proposal.
No, we're going to vote on the proposition, having never done any sense making about what a good proposition would be.
That's just mind-blowingly stupid, right?
And so then who's going to craft the proposition?
A lawyer.
A lawyer is paid for by who?
Some special interest group.
And so now, so most of the time what happens is You have some situation where one thing that matters to some people has this proposition put forward that benefits it simply in the short term, but it externalizes a harm to something that matters to other people.
But ultimately, all of it matters to everybody, just differentially weighted.
How do we put all those things together?
Okay, we're going to do something that's going to benefit the economy, but harm the environment.
Well, everybody cares about the economy and everybody cares about the environment.
But if I put forward a proposition that says, in order to solve climate change, we have to agree to these carbon emission controls that China won't agree to, and therefore China will run the world in the 21st century, and we all have to learn Mandarin or be like the Uyghurs or something.
Okay, well, now I have a bunch of people who, because they hate the solution space, because it harms something else they care about, don't believe in climate change.
It has nothing to do with not believing in climate change or caring about the environment.
It's that they care about that other risk so much as well.
But if I said, OK, well, let's look at...
It's a negotiation tactic, is what you're saying.
That at the point that you want X prioritized over Y, you'll descend into a state in which you'll make any argument that results in that happening, including Y doesn't exist.
Exactly, because I'm so motivated by this other thing, and the solution has a theory of trade-offs built in that is not necessary.
Sometimes the theory of trade-off is necessary, but oftentimes a synergistic satisfier could be found, but we didn't try.
In the same way that a way to move forward with the opportunity without the risk could have happened.
We could have found a better way to do the tech that internalized that externality.
We just need to try a little bit more.
But there isn't the incentive to do it.
So let's say we said, no, we don't care about climate change by itself.
We care about the climate, and we care about the economy, and we care about energy independence, and we care about geopolitics.
And we're going to bring those design constraints together and we say, what is the best choice that affects these things together?
Then we could start to think about a proposition intelligently.
We don't do this in medicine either.
We make a medicine to solve a very narrow definition of one molecular target of a disease that externalizes side effects in other areas without addressing upstream what was actually causing the disease.
And then the side effects of that med end up being another med, and then old people die on 20 meds of iatrogenic disease.
So in complex systems, you can't separate the problems that way.
You have to think about the whole complex thing better.
And so the first part of fixing, one part of fixing democracy that we have to think about is we have to define the problem spaces better, more complexly.
And we have to be able to actually have a process for coming up with propositions that are not stupid and intrinsically polarizing.
Because almost no proposition ever voted on gets 90% of the vote.
It gets 51% of the vote, which means half of the people think it's terrible.
And so what that means is you care about the environment, I care about the economy, on proposition A. Well, you petitioned to get the thing to go through because you care about the owls there, but I think that you're making my kids poor.
You're my fucking enemy now, and I'll fight against you.
Now all the energy goes into internal friction fighting against each other, and any other country that's willing to be autocratic and force all their people onto one side will just win.
And we will increasingly polarize against each other over something where we could have found a more unifying solution.
Now, this is fascinating.
For one thing, you blazed by it there, but I think... so there's a place where Jim Rutt tells me that you and he overwhelmingly agree, but also a place in which you and he have hung up, where he says that you believe that a properly architected system can do away with the trade-offs.
No.
Right?
Right.
I think I just heard you give the answer that he must have understood to be that, but it wasn't that.
Am I right that the answer...
There are lots of times when you don't see a trade-off because you have two characteristics, both of which are suboptimal, and you could improve them simultaneously.
And so it looks like there's no trade-off between them.
If you push it far enough, you'll eventually reach the efficient frontier, where you do have to choose, but if you're not near the efficient frontier, there's no reason to treat it as a trade-off.
Is that...?
Yes, I'm not saying that we get out of having constraints.
I'm saying we can do design by constraints much better than we currently do.
And so I'm saying that there's a lot of things that we take as inexorable trade-offs that aren't.
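[Editor's note: a minimal sketch of the geometry behind this exchange. The (speed, blueness) scores are random invented numbers; the point is only that most candidate designs can be improved on both traits at once, and the forced trade-off appears only on the Pareto frontier that Bret calls the efficient frontier.]

```python
import random

random.seed(3)
# Invented designs, each scored on two desirable traits.
designs = [(random.random(), random.random()) for _ in range(500)]

def dominated(d):
    # d is dominated if some distinct design is at least as good on both
    # traits (and therefore strictly better on at least one).
    return any(o[0] >= d[0] and o[1] >= d[1] and o != d for o in designs)

frontier = [d for d in designs if not dominated(d)]
print(len(frontier), "of", len(designs), "designs are on the frontier")
# For every design off the frontier, both traits can be raised together,
# so no trade-off is visible until you reach the frontier itself.
```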
Well, so, you and I will have to chase this down at some point.
My argument will be, any two desirable characteristics have an inherent trade-off between them, even if you never see it.
Right?
There are reasons you wouldn't see it, but that if you push these things far enough, you'll find that there are no desirable things that can be components of the same mechanism that will not exhibit a trade-off relationship.
Right?
Hmm, that's interesting.
Uh, initially I don't agree with that at all, but I'm sure you've thought about it a lot, so I'm curious why you say it.
Well, let me give you the example.
I used to battle my friend Scott Pecour over this, which is, he said, why can't you make a car that's the fastest and the bluest?
Right?
And, you know, the first time I heard that, I was like, well, okay, maybe blue is trivial enough, but it's not.
In fact, if you wanted to make a car that was the fastest, and by fastest, let's say, fastest accelerating, well, you're going to have to decide how to paint it.
If you also decide that there's some color of blue that is bluest, and you want the car to be that color, well, then it has done a lot of the choosing of what paint you're going to put on it at the point you decide to paint it that color.
That paint will have components that will weigh something, right?
The chances that the bluest, whatever you define that to be, is also the lightest and has the best laminar flow characteristics are essentially zero, right?
Because there's an infinite diversity of colors, they will be made out of a wide variety of materials, and the chances that the blue just happens to be the one that is lightest and has the best, you know, slipperiness relative to the wind are going to be vanishingly small.
And that means that if you want to make truly the fastest car, its color will be chosen by whatever paint has the best characteristics.
And if you want to make it the bluest as well, you'll make some tiny compromise that will, you know, probably not matter to you, but it's there.
So the trade-off is there, even if we don't see it.
But here's the thing, Daniel.
Um, I discovered many years after my argument with Scott was long since put to bed, that I was right about this.
And the way I found out was that there was a case where the Navy wanted to set the time-to-climb record for an aircraft.
And they took an F-15 and they souped it up a little bit.
And in order to set the, basically the vertical climb rate of this aircraft, they stripped the paint off it.
And so if you look at pictures of this aircraft, uh, in its, you know, record-setting run,
it isn't any particular color.
It's many different colors because effectively you've got the bare metal underneath with the paint stripped off it to save however many pounds of paint they were able to remove.
Okay, there are three points that come up to address my initial thoughts on this here.
So one is, with this particular case of a car, the difference between the blue and the optimal color might be at the boundary of measurement itself.
Yep.
And so while it's true that there might not be a perfect optimum of both at the level of, like, a nanoscale optimization, it is irrelevant to the scale of choice-making for the most part.
And when we look at something like... 100%.
100%.
And when we look at something like Tesla cars, that became faster off the line than Ferraris and safer than Volvos and greener than Priuses at the same time, you could see that ground up design, just doing a better job of ground up design, was able to optimize for many things simultaneously so much better.
Now, had they made it less comfortable, could it be faster still?
Sure, of course.
So it's optimizing for a product of a bunch of things together, but still in a whole different league than things had been previously.
First of all, this is beautiful, okay?
Because this is exactly what I was hoping for, okay?
This is a question of us tripping over each other's language.
Jim misunderstood what you were saying, right?
And he asked me about it, and I said, uh, yeah, Daniel can't be right about that if he's saying what you think he's saying, but of course it wouldn't make sense that you would think that you could.
So on your point about this being trivial: you're in complete agreement with me, and I suspect it would take nothing to get Jim to agree to that formulation as well.
Wait, there's a difference.
There's one more thing I have to say here.
Okay.
Of course I'm not pretending that thermodynamics don't exist, right?
And that once you get down to the quantum scale arrangement of a thing, that orientation in one direction doesn't have effects on other things.
Duh.
Yep.
There's a difference also: blue and fast are two different preferences that are arbitrary, that both want to be associated with a car, that don't have some intrinsic unifying function.
And we can say blue is a reasonable thing to have a color preference about.
Whereas I would say that there are some characteristics that have a synergistic effect, where increasing one increases the other, because of the way they are part of an overall increase in system integrity.
And so synergy is the key concept I'm trying to bring in here, which is behavior of whole systems that is more than the sum of, and unpredicted by, the parts taken separately.
So when I say I'm looking for synergistic satisfiers: the idea that I have X amount of input, and that input has to be divided between these various types of output, and it's linear, is nonsense.
I can have X amount of input and have something where the total amount of output has increased synergy based on the intelligence of the design.
The question of how do we design in a way that is optimizing synergy between all the things that matter becomes the central question.
Yes, which is of course the central question that selection must be dealing with in generating complex life.
And, you know, I don't, again, I don't think we have a hair's breadth of difference on what we turn out to believe about this trade-off space.
But what I would say is, and I don't want to drag the audience too far down this road, it's probably not worth it for what we need to do here, but there's a benefit to being able to say... so let's take your example, that there are certain characteristics that will co-maximize.
Not really, because of the following thing.
Let's say that we figure out what color is best for making the fastest car.
And then we say, well, I want to maximize gray 37 and speed.
Now I can do it.
I can maximize gray 37 and speed because it just so happens that gray 37 is the color that has the best characteristics for speed, right?
But then the point is you can't separate these two things.
Whatever characteristic it is you're actually maximizing, you've just found two aspects of it.
So your point about synergy is that, for perfectly aligned characteristics, we could describe that joint, that fusion of those two things, as one thing, and we could maximize it, right? But then if we take the next one over, right, the next characteristic that we want to add to the list of things, then again we're back in trade-off space. So my only point here is that there is a value, in order to be able to get the maximum power out of a trade-off theory,
in making it minimally complex, with the ability to say every two desirable characteristics have a trade-off between them.
The real question is the slope or the shape of the curve, right?
And that many of these slopes and shapes mean we will see no meaningful variation on it because one side is a bargain.
And we will always see that manifest, right?
That's the reason we don't see trade-offs everywhere: in some cases one side of a trade-off is so dumb that we don't see anybody exercising the variation.
Everyone has made the same decision.
Yes, and I think for all practical purposes we agree that being able to make a Tesla that is safer than a Volvo and faster than a Ferrari and greener than a Prius is a possibility, and that if we apply that to all of the problems in the world, we could do a fuck ton better job.
Yeah.
I think we also agree, and I love the last point that you made, that to the degree that two things can be simultaneously optimized, they can be thought of as facets of a deeper, integrated thing.
Yep.
Okay, so now, to answer with the way that I actually think about it (though this is irrelevant; if people disagree, it doesn't matter at all to the earlier point).
I have to wax mystical a moment.
When Einstein said it's an optical delusion of consciousness to believe there are separate things, that there's in reality one thing we call universe, and everything is a facet of it.
If I look at the real things that we have a theory of trade-offs between in this space, in the social sphere and the associated biosphere that we're a part of...
So let's take what we talked about at the very beginning of our conversation: what would optimize my individual wellbeing, and what would optimize the wellbeing of all humans?
I believe that I only find that those are differently optimized if I, again, take a very short-term focus.
If I take a long-term focus, I find that they are one thing, because the idea that I'm an individual and the idea that humanity is a separate thing is actually a wrong idea.
They're facets of an integrated reality, and that if I factor all of the things that are in the unknown-unknown set over a long enough period of time, they're simultaneously optimized.
And this is the essence of dialectical thinking: looking for the thesis and the antithesis, and not voting between thesis and antithesis, but seeking synthesis that's at a higher order of integration and complexity.
Totally agree.
And, you know, so I don't know how many people will be tracking it, but, you know, effectively saying that on an indefinitely long time scale these things converge is an acknowledgement that we are not talking about design space when we make this recognition, right?
It's more like trajectory.
And that is perfectly consistent.
And frankly, I think if everybody understood, at some level, the kind of picture we're painting, people would be really comfortable with the degree to which it doesn't do exactly the thing they most hope it will, right?
In other words, the level of compromise is small, right?
Which is why the compromise in a healthy democracy even was tolerable, even though that was nowhere near as optimal a system as we could develop.
Okay, there's a point a number of minutes back that I want to return to, and I want to drop an idea on you.
It's actually a place where something you said caused me to complete a thought that I've been working on for some time.
So the thought as it existed is that markets are excellent at figuring out how to do things, and they are atrocious at telling us what to do.
In other words, they will find every defect in human character and figure out how to exploit it if you allow them to do that.
But when you have a problem that you really want solved, right?
How can we make a phone that doesn't require me to be plugged into the wall, allows me to get a message across a distance to report an emergency, whatever.
Markets do a better job than we could otherwise do of figuring out what the best solution is.
And so in some sense, the question is, How can we structure the incentives around the market so that markets only solve problems that we want them to solve, but they can be free to solve them well?
And what I think I realized in this conversation here is that in some sense the role of the citizenry in a democracy is to discuss the values that we want government to deploy incentives around.
In other words, the people decide what their priorities are, what their concerns are, which problems are top of the list to be solved, and which ones could take a back seat.
That's the proper thing that we are to be discussing.
That the role of government, freed from corruption, would be to figure out what incentives will result in the best return on our investment, structuring the incentives of the market, and then the market can be freed to solve the narrowest problems on that list.
And I think we fail at every level here, but from the point of view of what we're actually shooting for, I would say it's somewhere in that neighborhood, that division of labor between the citizens, the apparatus of governance, and the market.
I'm suffering a little bit here because there's like 10 simultaneous threads that I really could address that are important, and I know we're going to open up more as we keep going.
It would be really fun to go through the transcript of this and come back to the most important threads.
Might be worth doing, actually.
So first, I want to say something heterodox against market theory, which is: I don't think the market is the best system for innovation of a known what.
And I think World War II and the Manhattan Project is a very clear example.
And the Apollo Project.
And our failure at fusion, I would say, speaks to the point you're about to make.
To me, fusion would be our top priority because it's the only plug-and-play solution to a large piece of our problem.
And the fact that we, decade after decade, are still awaiting a proper fusion solution says a lot, despite the fact that the market could potentially solve it.
The problem is the investments are too large on the front end and the reward is too delayed for the market to actually even recognize the problem correctly.
Venture capital is not going to put up the amount of money that a nation state can for the amount of time that's necessary.
And when you look at the very largest jumps in innovative capacity, a lot of them happen by nation state funding, not market funding.
And then a market emerging in association with, kind of, government contracting.
And so if we look at why the Nazis were so much further ahead technologically than everyone else going into World War Two, with the Enigma machine at the beginning of computing, with the V2 rocket, it was not a market dynamic.
It was a state dynamic, where they invested in science and technology development for a long time, which is why this tiny little country with limited industrial supply capacity had more technological advancement than the Soviets or the US. And it was our ability to steal their shit and rip it off, and then be bigger than them, that was a big part of how we were able to succeed in the war effort.
And so that's a clear example that computers were developed by a state, not the market.
Well, hold on a second.
I want to be careful because I don't want to falsify something that isn't false.
I again think this is a place where our mappings, or at least the language surrounding them, is going to upend us.
This sounds like a place where a government is capable of generating a massive incentive to cause a problem to be solved that the market won't even find on its own.
Right?
So that does not strike me as inconsistent with what I was just saying.
The state recognizes there's a problem, creates an incentive big enough to find the solution, and that incentive can be big enough to cause people to get different degrees than they would otherwise seek, and so on.
But in these cases, it wasn't like that.
So let's take the Manhattan Project.
It wasn't private contractors that solved it because the government had made the incentive.
It was actually government that solved it.
It was government employees.
And so this is an important distinction.
NASA was not a private space contracting thing that did the Apollo project.
It was a government project.
So I would say the largest jumps we ever made in tech did not happen in the market.
For the most part.
Well, so then I guess the test of your falsification here is the following question.
If the Manhattan Project had consisted of a state yanking people out of their beds and standing over them with rifles, would it have worked?
I mean, and maybe, you know, the Russian version is closer to that.
But I think the point is you still have a system of incentives correctly solving a problem that the market would not have found on its own, and that no entity in the market would have been big enough to solve. So I still see it as consistent, but you might convince me otherwise, especially if it turns out that a negative incentive would be just as effective at creating the solution.
There's a story that people don't innovate well under duress.
That innovation requires executive function and prefrontal function, and if they're too limbically oriented, they won't innovate well, which is one of the reasons why we need an open society.
And I think there's probably some truth to this, but less truth than we would hope.
I believe it was called the sharashka system, which was a Russian, uh, basically prisoner-camp system that had scientists that were doing real innovation, um, up to, you know, early Sputnik-like work.
So we know that people under armed guard can innovate.
Um, we know that people conscripted by draft into an army can actually innovate on behalf of the military.
Now, I think it's true that something more like a market will explore more edge cases that are not known whats and come up with interesting things, whereas the centralized thing can sometimes do a better job of executing known whats that require very high coordination.
Because if you look at the Manhattan Project, the scale of the budget and the scale of coordination, no company has that and a bunch of companies competing for intellectual property and whatever, it wouldn't have worked.
Right.
One of the reasons I bring this up is because there's a whole bunch, you mentioned fusion, whether it's fusion or thorium or closer-to-room-temperature superconduction, or any of the things that could possibly generate a breakthrough, whether it's 65% efficient photovoltaics through nanotech.
There's a bunch of things where we're like, we kind of know the science that could lead to the breakthrough, but the level of investment just isn't there.
And I think there's a heap of examples like this where the percentage of the national budget that used to go to R&D has gone down a lot, and it shouldn't have.
And the Apollo project was kind of the last thing of its type.
And then the government's shift to government contractors started to be a source of massive bloat, where the government contractors had an incentive to just charge whatever the fuck they wanted. Which is why Elon could then beat Lockheed and Boeing at rockets so much cost-wise: in that situation, he didn't have to do the fundamental innovation on rocketry.
He could just out-compete them with market incentive.
And then that could create enough money for iterative innovation.
I think fundamental innovation of certain scales does require larger coordination than markets make easy.
Okay, so then I want to modify what I said, because you've convinced me I didn't have it right in the initial one.
So the point then is you have to extend the governmental structure so that it can deal with two types of market failure.
One surrounding the natural system of incentives, which will cause you to innovate things that do net harm, for example.
And the other is a failure where the scale of the market is not sufficient to solve certain problems that are in our collective interest to solve.
Yes, and we don't want to give the government that much power because we don't trust that kind of authority.
But that's because the people aren't checking the government, which comes back to the thing that we talked about earlier.
And now this becomes one of the central questions of the time is what is the basis of legitimate authority and how do we know and what is the basis of warranted trust?
Because we all know what it means to have trust that isn't warranted.
Everyone who disagrees with us, we think that their trust isn't warranted, right?
Like, if we're on the left, we think people who believe in, who trust Trump, it's unwarranted.
And they think that the trust of the people who trust the FDA or vaccine scientists or the CDC is unwarranted.
We also know that the idea of legitimate authority, the power to be the arbiter of what is true and what is real, is so potent that anyone who is playing the game of power has a maximum incentive, however successful they are, to capture and influence it for their own ends.
We also know that it's possible to mislead with exclusively true facts that are cherry-picked or framed.
So I can cherry-pick facts on one side or the other side of a Gaussian distribution and tell any story I want that will make it through a fact-checker.
So fact checking is valuable, but not even close to sufficient.
So I can lie through something like the Atlantic, as well as I can lie through something like Breitbart, through different mechanisms for different populations.
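[Editor's note: a minimal sketch of the cherry-picking mechanism Daniel just described. The data are random numbers from an honest Gaussian; every reported value is individually true and would pass a fact check, yet the two selections tell opposite stories.]

```python
import random
import statistics

random.seed(4)
data = sorted(random.gauss(0.0, 1.0) for _ in range(1_000))  # true mean near 0

print("full-sample mean:", round(statistics.mean(data), 3))
print("'numbers are high' story:", [round(x, 2) for x in data[-5:]])
print("'numbers are low' story: ", [round(x, 2) for x in data[:5]])
# Both stories are built entirely from true data points; only the full
# distribution reveals that both framings are misleading.
```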
Yeah, this is a super excellent point as well, that a fact-checker errs in one direction.
And if you can build a falsehood out of true objects that have been edited, then the fact checker won't spot it.
So love that point.
And so I can do a safety analysis on a drug.
And I'm not looking at every metric that matters.
I'm looking at some subset of the metrics, and it might be that it's safe on those metrics, but all-cause mortality increases, life expectancy decreases, but I only did the safety study for two years, so I wouldn't notice that.
So I can say, no, methodologically, this was perfect and sound.
It just also doesn't matter because I wasn't measuring the right things.
Right, and so this also, basically what you have just said, means that the replication crisis can be understood as a mechanism for generating data which can be cherry-picked to reach any conclusion you want about the effects of this intervention or that intervention, right?
Because effectively what you have is the ability to choose between experiments where sampling error will result in both outcomes being evident somewhere.
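[Editor's note: a minimal simulation of the point Bret just made. Two hundred small studies of an intervention with zero true effect are generated; sampling error alone yields some nominally significant results in each direction, so a motivated reader can cite methodologically clean studies for either conclusion. Sample sizes and thresholds are illustrative assumptions.]

```python
import random
import statistics

random.seed(5)
positive = negative = 0
for study in range(200):
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]  # zero true effect
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    if mean > 1.96 * se:
        positive += 1      # looks like a significant benefit
    elif mean < -1.96 * se:
        negative += 1      # looks like a significant harm
print(positive, "studies 'show' benefit;", negative, "studies 'show' harm")
```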
This is another one of those 'is it conflict theory or mistake theory' things: I can intentionally manipulate an outcome that looks methodologically sound and then say, oh, we just didn't know those factors, right?
I'm not saying whether that's happening or not; it certainly can happen.
Okay, so now we get back to: how do you have a legitimate authority that has the power of being the arbiter of what is true and real, and all the power that's associated with that, and have it not get captured by power interests?
That is a very, very important question.
How in the name of the Bible and Christendom and Jesus saying, let he who has no sins cast the first stone, did we do the Inquisition?
Right, like weird mental gymnastics by which the authority of that thing was able to be used for the power purposes of the time.
And so now you start to have increasing polarization between the left and the right, and historically more academics being left-leaning, and the social sciences being so complex that you can cherry-pick whatever the fuck you want and do methodologically sound and yet still misrepresentative stuff.
Then you say, is that actually a trustworthy source?
And then we say, OK, well, do we want a bunch of wacky theories going out over Facebook and Twitter and whatever?
Do we want to censor it?
Well, if we censor it, who is the arbiter of truth that we trust?
If we don't censor it, we're appealing to the worst aspects of everyone and making them all worse in all directions.
Like those both suck so bad, and that's the oppression or chaos, right?
And the only answer out of the oppression or chaos is the comprehensive education of everyone in the capacity to understand at least three things.
They have to increase their first person, second person, and third person epistemics.
Their third person epistemics is the easiest.
Philosophy of science, formal logic, their ability to actually make sense of base reality through appropriate methodology and find appropriate confidence margin.
Second person is my ability to make sense of your perspective.
Can I steel man where you're coming from?
Can I inhabit your position well?
And if I'm not oriented to do that, then I'm not going to find the synthesis of a dialectic.
I'm going to be arguing for one side of partiality, harming something that will actually harm the thing I care about in the long run.
And then first person, can I notice my own biases and my own susceptibilities and my own group identity issues and whatever well enough that those aren't the things that run me?
When I look at kind of the ancient Greek enlightenment, the first person was the Stoic tradition.
The second person was the Socratic tradition.
The third person was the Aristotelian tradition.
There's a mirror of all those in modernity.
We need a new cultural enlightenment now where everyone values good sense-making about themselves, about others, about base reality, and good quality dialogue with other people that are also sense-making to emerge to a collective consciousness and collective intelligence that is more than our individual intelligence.
And so that we have some basis of something that isn't chaos, but that also isn't oppression, because it's emergent more than imposed.
So it's like it's cultural enlightenment or bust as far as I'm concerned.
All right, so I don't disagree with you fundamentally, I believe.
This is a place where when I say my version of this, which is much less sophisticated in some ways and focused elsewhere, but when I say my version of it, I lose people.
Because my version of it is something like: what we need to do is doable.
We can see the trajectory from here.
You can't see the objective, but you can see the direction to head.
And it will take three generations to get there.
Right?
I agree.
What you're describing, you couldn't just simply take that curriculum and infuse it into any system we've got and have any hope of people learning it or giving a shit about it or whatever.
It wouldn't work.
So you have to build the scaffolding that would allow a population to be enlightened in this way such that the governance structure you're imagining might arise out of it could flourish.
But let's put it this way.
It's at least three generations out before you had gotten there, even if you started doing things right now.
And so what I try to say to people in order that they don't completely lose interest in the possibility of a solution because it's too far out is things can start getting better right away.
We are not going to live to be in that world that is the objective, and even if we did, we would never be native there, right?
Our developmental trajectory will have been completed in a world that doesn't function like that, and so...
You know, you can be happy as an expat, but we would be expats in the world we're trying to create, and that's fine.
You know, if our grandchildren, or our great-grandchildren, were native there, and we could be expats there, that would be a perfectly acceptable solution.
But, I think in general, people have the sense that a solution sounds like something that we could have in the next few years, and I just don't see the possibility of it.
No, you're going to see things... anything that can be implemented quickly, you want to red-team it and ask: either how does it fail, or where does it externalize harm?
And also, what arms race does it drive of whoever doesn't like it?
And if you factor what arms race it drives, where it externalizes harm and where it fails, you'll get much less optimistic about most of those things.
And if you don't go into despair, you'll start thinking long term and things that converge in long term direction.
And when you start to think about it, the thesis and the antithesis are both not true.
They have partial truth, but they are not actually true.
Synthesis is in the direction of more integration of truth, and still not true, but in the direction of it.
If I optimize for one of these, it will externalize harm in a way that messes the whole thing up, right?
And that's why there's a forcing function of the failure modes on both sides.
That's why it's important to look at oppression and chaos and say these both create failure modes.
So what is it that doesn't orient in either of those directions?
It's not more power to authorities.
It's not more pure libertarianism.
It's something that's outside of that axis.
Or it is going to involve the equivalent of negative feedback.
Right, in other words, a thermostat works by virtue of not embracing it being hot or cold, but by pushing it in the right direction as it diverges one way or the other.
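[Editor's note: a minimal sketch of the thermostat analogy. All numbers are invented; the point is only that the controller endorses neither hot nor cold, it just pushes against whichever divergence appears.]

```python
setpoint = 20.0
temp = 12.0
for step in range(40):
    disturbance = 0.5 if step % 2 == 0 else -0.5  # outside pushes, both ways
    temp += disturbance
    error = setpoint - temp
    temp += 0.3 * error                           # negative feedback correction
print("final temperature:", round(temp, 2))       # hovers near the setpoint
```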
So I very much like your point about synthesis here.
Just to make it clearer, synthesis is two things, even linguistically speaking.
We can talk of a synthesis, right, which is an object.
You could write it in a book.
A synthesis between several different concepts could exist in a book.
Incidentally, that's sort of what I see myself doing in biology, is synthesis.
But your point is the most important aspect of synthesis is it is a process, right?
And so that process is the thing that takes these competing failure modes and rescues from them something that suffers neither consequence and heads towards optimality.
So I agree, we have to get good at it.
Yeah, so synthesis is an ongoing process. And let's say I have some bits of true information in a thesis and some bits in the antithesis.
So the synthesis will have more bits than either of them, higher order complexity, but it will still have radically less bits of information than all of reality about that thing.
The model is never the thing, right?
It's just, it's the best epistemically we can do at that moment.
So now I want to go back to the earlier topic around theory of trade-offs that you said, because I let it go, but as soon as you mention optimization, I have to bring it back because it comes back exactly here.
And it also brings back this question you had of that markets can do a good job with the how but not the what, which is the is-ought distinction that comes up in science, right?
Yes, it is.
Science can do a good job of what is but not what ought, which means applied science, i.e. technology, i.e. markets, can do a good job with changing what is.
But not in the direction of ought.
And so that is ethics, which is to be the basis of jurisprudence and law.
That's exactly why you bring those things together.
And it's because is is measurable.
Third person, measurable and verifiable, repeatable.
It's objective.
It's objective, right?
Whereas ought is not measurable.
You can do something like Sam Harris does in Moral Landscape and say it relates to measurable things, but it doesn't relate to a finite number of measurable things.
There's a Gödel-type proof that, whatever finite number you pick, there are some other things that we end up finding later that are also relevant to the thing that weren't part of the model we were looking at.
And so the thing that is worth optimizing for... you talked about how the blue and the fast would be part of the same thing.
The thing that is worth optimizing for is not measurable.
It includes measurables, but it is not limited to a finite set of measurables that you can run optimization theory and have an AI optimize everything for it.
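[Editor's note: a minimal sketch of the argument Daniel just made, in the spirit of Goodhart's law. The quantities are invented assumptions: "true value" depends on three factors, but the optimizer can measure only two, so pushing the measurable proxy up quietly degrades the unmeasured factor.]

```python
def true_value(speed, comfort, wellbeing):
    # Invented: the real target includes a factor the optimizer can't see.
    return speed + comfort + 3.0 * wellbeing

def proxy(speed, comfort):
    return speed + comfort   # the finite, measurable subset

speed, comfort, wellbeing = 1.0, 1.0, 1.0
for step in range(10):
    speed += 0.5             # the optimizer improves what it can measure
    wellbeing -= 0.2         # at a hidden cost it cannot measure
print("proxy score:", proxy(speed, comfort))                  # rose from 2.0 to 7.0
print("true value: ", true_value(speed, comfort, wellbeing))  # fell from 5.0 to 4.0
```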
Yeah, I agree.
You will have a long list of characteristics that you can measure, and as you go from the most important to the least important, you'll eventually drop below some threshold of noise where you're not noticing things that contribute.
So yes, you've got a potentially infinite set of things that matter less and less, and you will inherently concentrate on the biggest, most important contributors up top, and that's natural.
It's an issue of precision at some level, but we shouldn't convince ourselves that we're solving the puzzle completely at a mathematical level.
An engineering solution is not a complete mathematical solution.
Right.
Okay, so now I'm coming back to the waxing mystical thing.
I don't think it has to be thought of that way.
I think the way Einstein was doing it, and he says Spinoza's God is my God, I'm happy to do it that way.
So the first verse of the Tao Te Ching is the Tao that is speakable is not the eternal Tao, right?
The optimization function that is optimizable with a narrow AI is not the thing to optimize for, is a corollary statement.
And the Jewish commandment about no false idols is that the model of reality is never reality.
So take the model as this is useful, it's not an absolute truth.
The moment I take it as it's an absolute truth and I become some weird fundamentalist who stops learning, who stops being open to new input, and in optimizing the model where the model is different than reality, I can harm reality and then defend the model.
So I always want to hold the model with this is the best we currently have, and in the future we'll see that it's wrong, and we want to see that it's wrong.
We don't want to defend it against its own evolution.
And so what we're optimizing for can't be fully explicated.
And that's what wisdom is.
Wisdom is the difference between the optimization function and the right choice.
Oh, I love this.
This is great.
Obviously, it dovetails with the basic sense of what metaphorical truth is, and the recognition that metaphorical truth isn't something that applies only to religious-style beliefs.
It's actually the way we do science also.
You know, we have approximations and, you know, things get ugly when people forget that that's what they're dealing with, right?
And they start treating it as the object itself.
A very important example in my field is the instantiation of the term fitness, right?
Which, in most cases, has so much to do with reproductive success that we actually just synonymize them most of the time, and we speak as if they're interchangeable.
Which is great, except for all those cases where they go in opposite directions, which we are perennially confused by.
And so anyway, sooner or later I will deliver some work that will take the cases that we can't sort out because we've misdefined fitness and forgotten that it was a model in the first place, and show how you would solve them differently if you defined fitness in a tighter way.
But story for another day.
All right, so where shall we go?
You were on a roll.
So you'll see conversations from really smart people like Nick Bostrom and Max Tegmark and whoever saying that, because of the collective action problem and the multipolar-trap race to the bottom, and because the complexity of the issues we face is beyond what the smartest person could manage by a lot, the only answer is to build a benevolent AI overlord that can run a one-world government, because it can process the information to make good choices.
So as you can guess, my answer is vigorously no.
Yep.
Not just because I think the optimization function that it would run, no matter how many variables, would end up becoming a paperclip maximizer, but I think its own existential risks are bound up in that process.
These guys know this, but it's easy to pick solutions like that compared to the other ones that seem maybe even more likely to go terribly.
So then we say, okay, we don't want a one-world government run by any of the people we currently have.
And we also don't want separate nations where any of them that defect lead everybody into a race to the bottom.
So that means that they have to have rule of law over each other because they affect common spaces.
So how do you have rule of law over each other without it being one-world government and then capture, oppression, or chaos at various scales?
And the only answer is the comprehensive education and enlightenment of the people, who can check those systems.
Now, obviously, the founding of this country was fraught with all the problems that we now know of, and it was still a step forward in terms of a movement towards the possibility of some freedoms from the feudalism it came from.
And so I find the study of the foundation of it, the theoretical foundation of it, meaningful to what we're doing right now.
And famously, there's this quote from George Washington where he says something to the effect of, I'm going to paraphrase it, the comprehensive education of every single citizen in the science of government should be the main aim of the federal government.
And I think it is fascinating.
So science of government was his term of art, and science of government meant everything that you would need to have a government of, for, and by the people, which is the history, the social philosophy, the game theory and political science and economics, as well as the science to understand the infrastructural tech stack and whatever, right?
The Hegelian dialectic, the Enlightenment ideas of the time.
But the number one goal of the federal government is not rule of law.
And it's not currency creation, and it's not protection of its borders, because if it's any of those things, it will become an oppressive tyranny soon.
It has to be the comprehensive education of the people if it is to be a government of, for, and by the people.
Now, this is the interesting thing.
Now I remember where I wanted to go.
Comprehensive education of the people is a force, is something that makes more symmetry possible, symmetry of power.
Increasing people's information access and processing is a symmetry increasing function.
So everyone who has a vested interest in increasing asymmetries has an interest in decreasing people's comprehensive education in the science of government.
And so now let's look at the education changes that happened following World War II in the U.S.
There is a theory, there's a story that I buy, that the US started focusing on STEM education (science, technology, engineering, math) super heavily partly because of existential risk: look what happened with the STEM that the Germans did.
And now we know that a lot of the German scientists that we didn't get in Operation Paperclip, the Russians got in Sputnik.
And so it's an existential risk to not dominate the tech space.
So we need to really double down on STEM.
And we need all the smartest guys: we need to find every von Neumann and Turing, and the smarter you are, the more we want to push you into STEM so you can be an effective part of this system. That's part of the story. But also, the thing that Washington said, the education in the science of government: we start cutting civics radically, and I think it was because social philosophers at the time, like Marx, were actually problematic to the dominant system. And I'm not saying that Marx got the right ideas.
I'm saying the idea of, okay, we have a system where the only people who really think about social philosophy are the children of elites who go to private schools, who learn the classics.
And otherwise, let's have people not fuck the system up as a whole, but be very useful to the system by becoming good at STEM.
I think this is a way of being able to simultaneously advance education and retard the kind of education that would be necessary to have a self-governing system.
That's fascinating.
That's fascinating.
Because, of course, if you have the elites effectively in charge of governance, they can do exactly what you would imagine the elites would hope for, which is to govern well enough that the system continues on no matter what, but to continue to look out for the distribution of wealth and power and make sure nothing upends it.
Right?
They'll do it.
They won't even realize necessarily that that's what they're doing.
I also love the fact that, you know, George Washington is one of these characters where it's very easy to misunderstand how good he was because, you know, he wasn't the most articulate founder or, in classical terms, the smartest founder by far.
On the other hand, there's an awful lot of wisdom buried in George Washington, and this idea: ultimately he was looking very deeply into the future, potentially, to understand why the education of the populace would be effectively synonymous with the job of government.
And it's not because the purpose is the education, but because that's the only hope that a democratic system will spit out the kind of solution that you want it to generate.
Which is, I don't know, it's a very interesting analysis.
So it raises something else here, which is on my list of notes arising.
Which is, I noticed this pattern all over the place.
There's a state which is awesome, very powerful in terms of what it can do, but it's fragile and so it falls apart, right?
In other words, we will never have a better system, as far as I can tell, than science for figuring out what's true and what is possible.
So it's the most capable state.
There are measures by which it is the strongest state, but it is also terrifically susceptible to market forces.
In fact, it can't be in the same room with them, right?
So we could look for many examples of this, where something marvelous requires a very careful arrangement of conditions in order for it to survive.
And I'm wondering what you make of that in light of this discussion.
I guess it's not hard to make an argument for why those two things go together, capacity and fragility.
But what are we to do about it going forward?
Because surely we're trying to build these states, but do so in a robust form.
They go together because of synergy.
Which is, you have properties that none of your cells on their own have.
You as a whole.
There's a synergy of those cells coming together that creates emergent properties at the level of you as a whole thing.
But if I run all the combinatorial possibilities of a way of putting those 50 to 100 trillion cells together, very few of them produce the synergy of you.
Most of them are just piles of goo, right?
And so it's a very narrow set of things that actually has the very high synergies.
And it's lots of things that are pretty entropic.
And entropy is also obviously easier.
I can take this house down in five minutes with a wrecking ball, but it took a year to build.
Yep.
And I can kill an adult in a second, but it takes 20 years to grow one.
So this is why the first ethic of Hippocrates and of so many ethic systems is first do no harm.
Then try to make shit better, but first do no harm.
If you can succeed at the maintenance function, then you can actually maintain your progress functions.
And...
Come back to where you were going with that.
Well, so here's what I'm after.
I agree with your basic entropic analysis, that it is easier to destroy than to build.
The number of states that work is vastly exceeded by the number of arrangements of the same pieces that don't.
But what I'm wondering about is this: in effect, one has to be able to build a system that is resistant to that.
In other words, and life does this, right?
Living creatures manage to fend off entropy beautifully.
And the fact is, we need a governmental structure that has that same trick, and we haven't seen it yet.
And the question is, unfortunately, I fear that it is almost a prerequisite that if you build the capable structure and you haven't built the thing that protects it first, then it will be captured before the wisdom develops to preserve it against that force.
And now I remember why I use the analogy of the body.
What I'm going to say here is wrong, so let's just take it as a loose metaphor.
Let's say that, in the body, the closest thing to top-down organization is the neuroendocrine system.
But there's a bunch of bottom-up that is at the level of genetics and epigenetics and cellular dynamics and whatever, and there is a relationship between the bottom-up and top-down dynamics.
Well, obviously I can take a cell out of a body and put it in a dish.
It has its own internal homeodynamic processes.
It's dealing with entropy on its own.
They don't need a top-down neuroendocrine signal for how they do that.
So let's say we tried to make a perfect top-down neuroendocrine system and the cells had no cellular immune systems or redox signaling homeodynamics or anything else.
You would die so quickly, right?
There is no way to have a healthy body at the level of the organization of all the cells if the cells are all unhealthy.
And that's the comprehensive education of the individual thing we're talking about.
Can you make a healthy system of government as a system?
Can you just get the cybernetics right, as something separate from that which develops all of the individuals and the relationships between them?
And the answer is definitely not.
Okay.
Agreed.
But then here's the problem that I'm trying to articulate.
Okay, so we agree that the cells have to be coherent in and of themselves, that there has to be a fractal aspect to this organization of things across many scales, from the individual up to the body politic.
But if it is true that the key to making that work is that individuals, which are analogous to cells here, have to be educated in the nature of governance, the theory of governance, in order for this to work, how would they end up that way?
Well, they would end up that way because governance will have created the conditions that would cause that education.
So, are we not now saying that what is necessary in order for the system to function is that the system is already functional in order that it can generate the conditions necessary?
No, there's no hole-in-the-bucket situation.
There is a recursive situation between bottom-up and top-down dynamics.
And so, let's take the classic dialectic that relates to right and left (it's not the only one), of individual and collective, for a moment, and say, okay: fundamentally the right is more libertarian, individual, pull-yourself-up-by-your-bootstraps; we want to have advantage conferred on those that are actually conferring their own advantage and doing well.
And then the left model, the more socialist model is, yeah, but people who are born into wealthy areas statistically do better than people who are born into shitty areas in terms of crime and education and access to early health care and nutrition and all those things.
And you can't libertarianly pull yourself up by your bootstraps as an infant or a fetus.
And so let's make a system that tends to that well.
But then the right would say, but we don't want something like a welfare state that makes shitty people that just meets their needs for them and orients them to lay on the couch all day and do TV and crack.
Okay.
I think it's mind-bogglingly silly that we take these as if they are in a fundamental theory of trade-offs as opposed to a recursive relationship that can be on a virtuous cycle.
What we want to optimize for is the virtuous cycle between the individuals and the society.
So that, do we want to create social systems that take care of individuals but make shittier people?
No.
Do we want to create social systems that condition people that have more effectiveness and sovereignty and autonomy?
Yes.
And do we want to condition ones that in turn add to the quality of society?
Yes.
We don't want to make dumb social systems, right?
So a social system that is more welfare-like is much dumber than a social system that provides much better health care and education and orientation towards opportunity for advancement rather than towards opportunity towards addiction cul-de-sacs.
And so we already have some people, all the listeners of your show, I think, we already have some people who are trying to educate themselves independent of not having a government that is doing that.
And this is why I say it has to start at culture before state or market.
It has to boot in that direction.
So those people can start to work together to say, how do we influence the state and to start to then influence better education for more people, better media and news for more people?
And how do we influence it to affect market dynamics where the market dynamics are more bound to the society well-being as a whole rather than extractive?
I like this because we actually do see this dynamic.
We see people actually seeking out nuance, even though we're told that they won't do it.
And so the other thing we're seeing is for various reasons, including COVID, the absurdity of the educational system that we have is being revealed in a way that it never has been before.
So many more people are recognizing that school will flat out waste your time if you give it that opportunity.
And therefore, they have more license than ever to seek out high quality insight and exercises or whatever, and to discount the value that we are assured comes along with a standard degree, etc.
So yeah, I'm favorable to this idea.
Go ahead.
Also, the other thing you said that's interesting: okay, so George Washington's quote, the comprehensive education of every citizen in the science of government.
Well, how can we afford that when most of them are going to be laborers?
Because them having a strong background in history and in political science and social science and the infrastructural tech stack, does that help them be better farmers?
Not really.
It helps them be better citizens in government, but not better farmers.
And so how do we afford to pay for all that additional education?
And how do they maintain that knowledge when they're just engaged in a labor type dynamic?
And so this is why the children of the elite who are actually going to become lobbyists and senators and whatever, go to that private school and get that education.
Well, now we have this AI and robotic technological unemployment issue coming up.
And it's definitely coming up, right?
Well, the things that it will be obsoleting first are the things that take the least unique human capabilities, because those are the easiest to automate.
So labor type things.
So either this is an apocalypse that just increases wealth inequality and everybody's homeless and fucked, or everybody's on the absolute minimum amount of basic income, so the elites can keep running the robots as serfs rather than the people as serfs, and just hook the people up to Oculus with a basic income so they don't get in the way, or this actually makes possible a much higher education of everyone, so they can be engaged in higher-level types of activities.
Yeah.
Yeah, I agree with that completely.
And I also agree, you know, we should make sure people understand.
I mean, I think it was very clear the way you said it, but we are headed for a circumstance in which a shift in the way the market functions and what it requires is going to cause an awful lot of people to be surplus to it all at the same time.
And that can only play out in a few ways.
None of them are good if we don't see it coming and plan for it.
It's coming.
It's not the fault of the people who will be obsoleted.
And so in any case, yes, this makes sense.
You were mentioning, you look at COVID and you look at how many small businesses shut down and how much unemployment happened and then how much the market rallied because six companies made all of the money of the market.
And if you take those companies out, the entire stock market is down, but it's cap-weighted.
And you basically have network dynamics, Metcalfe's law dynamics, creating winner-take-all economies, where you have one winner per vertical.
The wealth consolidation, the wealth inequality, has progressed so rapidly that all the measurements of GDP and market success and the measurements of quality of life are totally decoupled.
They're moving in opposite directions in really important ways.
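The cap-weighting point has a simple arithmetic core. A back-of-envelope sketch with entirely made-up numbers: a handful of mega-caps rising while hundreds of small companies fall can still produce a rising index.

```python
# Hypothetical market: six mega-caps (market caps in $B) gain 30%,
# three hundred small companies lose 15%.
caps    = [1200, 1000, 900, 800, 700, 600] + [10] * 300
returns = [0.30] * 6                       + [-0.15] * 300

cap_weighted   = sum(c * r for c, r in zip(caps, returns)) / sum(caps)
equal_weighted = sum(returns) / len(returns)

print(f"cap-weighted index return:   {cap_weighted:+.1%}")    # about +13.5%
print(f"equal-weighted index return: {equal_weighted:+.1%}")  # about -14.1%
```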
When you combine how intense that is, and of course the forces with the most money are the hardest to regulate because they have the best lawyers and the ability for offshore accounts and for lobbying and whatever else.
So how do you do anything about this?
Combined with the fact that the debt to GDP ratio is unfixable.
You realize that a reset of our systems will happen because this system cannot continue.
And we can either do a proactive one.
Or we get the reactive one, and the reactive one is worse.
The reactive one is going to inherently be arbitrary, and therefore much more violent in every sense of that term.
And so, yes, you are proposing some kind of a... Unfortunately, none of the terms that one would like are still available to us, because Great Reset has obviously been branded in somebody's interest.
Yes, we need some sort of a reboot that takes heed of this dynamic and sets us on a path where it doesn't turn into a catastrophe or it doesn't turn into a spectacular win at everybody else's expense for some party or other.
And unfortunately, of course, if we circle back to an early part of this discussion, Convincing people of the hazard of this, essentially the certainty that something of this sort will happen if we do nothing, that we must do something, that that something must be coordinated, that you can't pass it through your inherited lens of, is this left-leaning, is this right-leaning, is this for my team, is this against my team?
Convincing people of that is extremely difficult in this environment, because for one thing, everything we would do to convince passes through these platforms that, if they haven't flexed their muscle yet, will do so as soon as we start talking about what would need to be done to save civilization in ways that they can recognize.
You've had this conversation on here before. Let's say we look at a particular group, and we can predict how they're going to respond to something we're going to say with quite high accuracy.
So we can take a particular woke SJW group, and if we have a conversation of a certain type, we can predict that they'll say, oh, that thing you're calling dialectic is giving a platform to racists when you should be canceling them, therefore you're, you know, racist by association, or whatever.
You can take a QAnon group and predict that they are going to say that, because we talked to someone who was four steps away from Epstein in a network, we are probably part of the deep-state cabal of pedophiles, or whatever it is.
And to the degree that people have responses that can be predicted better than a GPT-3 algorithm, they can't really be considered a general intelligence.
They are just a memetic propagator.
They're taking in memes, rejecting the ones that don't fit with the meme complex, taking in the ones that do fit and then propagating them.
And I think if people think about that, they should feel bad about not being someone who's actually thinking on their own, about being a highly predictable memetic propagator.
And be like, I would like to have thoughts that are not more predictable than a GPT-3 algorithm.
I would like to know what my own thoughts about this are.
And in order to know what my own thoughts about it are, can I even understand and inhabit how other people think all the things that they think?
So that's one thing, because it's not only going through the filters like Facebook, it's going through the filters of the fact that people have these memetic complexes that keep them from thinking.
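One crude way to operationalize the predictability claim from a moment ago: measure how much of a new utterance a simple model of the group's prior talk could have produced. A minimal sketch, with hypothetical snippets, using seen-bigram overlap as a stand-in for a real language-model perplexity score:

```python
from collections import Counter

def predictability(history_tokens, new_tokens):
    """Fraction of bigrams in a new utterance already present in a group's
    prior talk; higher means a simple model could have 'predicted' it."""
    seen = Counter(zip(history_tokens, history_tokens[1:]))
    pairs = list(zip(new_tokens, new_tokens[1:]))
    return sum(1 for p in pairs if p in seen) / len(pairs) if pairs else 0.0

# Entirely hypothetical corpora:
group_history = "giving a platform to racists means you are racist by association".split()
reply = "giving a platform to racists makes you racist".split()
print(f"{predictability(group_history, reply):.0%} of the reply's bigrams were already in circulation")
```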
And so the cultural value of trying to understand other people so that we can compromise, because politics is a way to sublimate warfare, right?
And if you don't understand each other and compromise, you get war.
And the people who are saying, yes, let's bring on the war, they're just fucking dumb.
They just don't understand what war is actually like.
like they haven't been in it.
Right.
Well, I think you have brought us to the perfect last topic here.
Now, of course, I'd like this conversation to go on and we should pick it up at another date, but the point you make about if we can demonstrate that we know what you're going to say, then it isn't a thought worthy of a human, right?
If we can predict you, and it's not by virtue of us having modeled some beautiful thought process of yours, it's because your thought process looks like that of, you know, an indefinitely large number of other people who are totally predictable.
And that's nothing you should be comfortable with.
I think this goes back to the question I asked you at first, which is: when you engage in what I would call independent first-principles thinking, you immediately run into challenges that somebody who's not deeply involved in such a thing doesn't intuit, right?
And so I'm imagining a person, somebody who is decent, who has compassion, has all of the basic capacities you would hope they would have, who has fallen into one of these automatic thought patterns.
And I'm imagining you manage to sit down with them and show them that their thought pattern is automatic and totally predictable and therefore nothing that they should be comfortable with.
And let's say that they walk out of the room and they start behaving differently and they start thinking for themselves.
They stay awake, right?
Well, they're going to run into some stuff because they are of course going to end up landing on some formulations that as soon as they say them out loud are going to get them punished.
Right?
That is inevitable.
Now, those of us who live out here learn how to say things in ways that sometimes the punishments don't stick.
We learn where they are best stated.
We learn what we shouldn't say yet.
But all of this speaks to what I think is, it's not that we live in an authoritarian state.
But we live in a state in which thought is policed as if we did.
Right?
Not perfectly, but enough that one who wishes to escape from the accepted, sanitized narrative has to be ready for what happens next. And that's something that is very hard to generate.
In other words, it's a developmental process that causes you to learn how to navigate that space.
So somebody who just simply recognizes, I don't want to be an automaton, and I'm going to start thinking for myself.
If their next move is to start thinking for themselves and speaking openly about it, what comes back next is something for which we don't have a good response.
Earlier, near the beginning of our conversation, you defined what you meant by an independent thinker: someone who wants to go wherever the facts and the well-verifiable information actually lead them.
I would say that there's something like the spirit of science, which is a reverence and respect for reality, where I want to know what is real and be with what's real more than I want to hold a particular belief, no matter how cherished, or belong to whatever in-group I'm a part of, despite the discomfort of not belonging with the in-group.
If I want to belong with anything, I actually want to have a belonging with reality first, and a belonging with my own integrity, and then with those who also share that.
And that the other belongings that I give up, I don't stop caring about those people.
I care about them still, but I don't necessarily care about their opinion of me enough that I'm willing to distort my own relationship with reality.
All right, so here's the question I want to ask about this.
And I'm basically trying to surface some part of my own process in order to figure out what it is.
Can it be improved?
Can I teach it to others to the extent that it works?
There are... So I was on Bill Maher with Heather last Friday.
And I said something that got an awful lot of pushback online, which I knew was coming.
He asked if I thought the probability that COVID-19 was the result of a lab leak was at least 50%, and I said something quite honest that shouldn't have been new to anybody who had been paying attention to my channel, which was that I had said back in June that I thought the chances were at least 90%.
Now I can imagine that that number would be shocking to many people.
But I also know that were I in their shoes, I would process it this way.
I would say, all right, this person seems intelligent.
I don't know of a conflict of interest.
That number is way off of what I would calculate.
Therefore, I need to file this as a flag.
Do I not know something?
Maybe the person has a conflict of interest, and that explains it.
But if it's not that, how have they arrived at a number that is so far off of what I would calculate, and what does it tell me?
In other words, I would become... People don't give enough benefit of the doubt to people who think differently, and they give too much trust to those who think the same.
Right, but then here's the place that the thought goes.
So, is it true that if somebody intelligent says something that is completely inconsistent with my model of the universe, that I will inherently give it enough credence to look at it?
It's a tough question, because if I try some test cases: if you told me that you believed that there was a strong chance that the Earth was flat.
Okay?
That would throw a huge error for me, right?
Because I know...
A, that I've checked, right?
In fact, I have, years ago and several times, asked: what are the chances there's anything to what these flat-earthers say, that they're not just a joke?
And then it's a trivial matter to find out what you need to know from your own experience that is inconsistent with that possibility.
And so the answer is, okay, I'm not going to spend too much time checking with it, right?
Then we get to, is the moon landing fake?
Right?
This one is tougher, right?
It's tougher because, when you look at the evidence that motivates people to hypothesize that the moon landing is fake, there are some things in it that are hard to know.
I don't offhand know what the explanation for them is.
So anyway, my point is, there are some ideas I wouldn't be shocked at all to find that you believe.
There are some ideas I would be so shocked by that I would imagine you're kidding, or you've lost your mind, or I don't know what.
And so we all draw that line somewhere.
And I guess my point is, I think almost everybody, even very, very smart people who don't happen to be experienced in first principles independent thinking, draw that line somewhere that creates a fatal error when independence is experimented with.
Right, it is the Matrix, in some sense, once you start experimenting with: what would I conclude if I were independent of all incentives, and I just went based on the evidence, and I gave everybody a chance to articulate their position?
What comes back is so jarring that most people are driven back into conventional automatic thinking because the frightening aspects of what they get in response are enough to drive them off the instinct.
Yes, okay.
God, there's so much in here that's really good.
The thing about the flat earth is that it is – the hypothesis is formally falsifiable.
And the alternate hypothesis – Even by an individual?
Yes.
And the alternative hypothesis is formally verifiable with the best methods that we have, with the highest confidence we can have.
And now one thing I would still say is interesting: I know many people who use flat-earther as the moniker of maximum stupidity who cannot do the Copernican proof themselves.
So they take it as an article of faith that the earth is round, but they actually don't know how to derive it, have never tried.
And so then they also move to taking as articles of faith similar things that don't have the same basis.
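For what it's worth, the do-it-yourself derivation Daniel is alluding to is short enough to fit in a few lines. This is the Eratosthenes shadow-angle version (not strictly Copernican, but in the same spirit), with the textbook Syene/Alexandria numbers standing in as an example:

```python
# Compare the sun's noon elevation at two sites a known north-south
# distance apart. On a flat earth the angles would match; on a sphere
# the difference gives the circumference directly.
angle_difference_deg = 7.2   # observed difference in shadow angle
distance_km = 800            # north-south separation of the sites

circumference_km = distance_km * 360 / angle_difference_deg
print(f"estimated circumference: {circumference_km:,.0f} km")  # ~40,000 km
```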
So does someone even understand what falsifiable and verifiable mean?
Does someone have a basis for calibrating their confidence margin?
Because if I start to talk about the moon landing, or then I go a little bit further and talk about long-term autoimmune effects or epigenetic drift or whatever that come from a vaccine schedule of 72 vaccines together, Is the standard narrative falsifiable or verifiable?
Is the alternate narrative falsifiable in the way Flat Earth is?
No.
So the fact that we put Flat Earth and anti-vax in the same category is an intellectually dishonest bad thing to do.
But the fact is that most people don't even know how to verify or falsify.
And so like with the lab hypothesis, when you come to 90%, I'm guessing you have a process for that.
What I would say is I haven't studied it enough to put a percentage on it, because I don't have enough Bayesian priors to actually come up with a mathematical number.
What I would say is I consider the idea of it coming from a lab, in some kind of dual-purpose gain-of-function research, to be very plausible.
And I have seen nothing that falsifies that, and the few attempts that I saw early to falsify it were theoretically invalid to me.
Now, to be able to go from plausible to a probability number, I would need to apply different epistemic tools than I have already applied.
Well, wait a second.
I'm not sure that that's the case, because to me, as a theoretician, there is a hypothesis. There are multiple hypotheses.
One is the virus escaped from a lab unmodified.
Another is that it was enhanced with gain-of-function research and then it escaped.
Another is that it was weaponized and deliberately released.
All of these things.
Each of them is a hypothesis.
Each of them makes predictions and they are all testable.
Now, I am not required to have any guess as to which one will turn out to be correct, nor an assessment of how probable it is.
It is natural to have a guess.
But the two things function independently, right?
As a scientist, I am obligated to treat a hypothesis by the formal rules of science.
I know what they are, I know how they work, and therefore I know at what point it's going to be falsified, any one of them, and what would be necessary for one of them to become a theory, that is to say, for all of its challengers to fall.
Now, I can also say, look, if I had to bet, here's where I'd put my money.
I happen to be a scientist who would be placing a bet, but my bet is not a scientific bet.
Yeah, we're aligned.
Clarification agreed.
I can place a bet that is my hunch, where I didn't come to that number through an actual Bayesian or other kind of mathematical process.
But if I was actually trying to formally give my percentage basis, I would go through some epistemic process.
And now if I had to make a consequential choice based on it, The more consequential the choice is, the more process I would want to go through to calibrate my confidence of it, because the more problematic it would be for me to be wrong.
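A minimal sketch of the kind of epistemic process Daniel means: Bayes' rule applied sequentially. The likelihoods below are entirely hypothetical, since neither speaker commits to specific numbers; the point is the procedure, not the output.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayes-rule step: P(H | E) from P(H) and the two likelihoods."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

p = 0.50                      # illustrative starting prior for hypothesis H
evidence = [(0.8, 0.3),       # each pair: P(evidence | H), P(evidence | not H)
            (0.6, 0.4),
            (0.9, 0.5)]
for lh, lnh in evidence:
    p = update(p, lh, lnh)
print(f"posterior after three updates: {p:.2f}")   # ~0.88 with these made-up numbers
```

Each likelihood ratio has to be defended on its own, and the more consequential the decision, the more scrutiny each one deserves, which is exactly the calibration point being made here.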
Right.
Okay, so that all makes sense.
But the ultimate question here is: given what we can see, we want people not to behave in an automatic way, in a way that is below human cognition's capacity to think and to react.
But we also know that when people experiment with that under the current regime, it is not that they will...
produce conclusions that are different than they would otherwise produce, say them to their friends, and their friends will say, oh that's interesting, I didn't realize you'd think that.
Their friends will say, oh my god, I can't believe you're one of them, right?
And that thing is so powerful that it is artificially depressing the degree of independent thought, because anybody who has experimented with it is likely to have effectively, you know, touched some third rail and retreated as a response. So we don't know.
There's a failure mode on both sides.
There's a failure mode of creating artificial constraints where we don't explore the search space widely enough, which is the one you're mentioning.
There's another one of exploring the search space without appropriate vetting and jumping from hypothesis to theory too fast.
Yes.
And those two are reacting against each other.
Right.
There are people who say, because it's plausible, it is.
They jump from hypothesis straight to theory without proof.
And then they believe wacky ass shit.
Yes.
And they insist that it's true.
And then people over here are like, wow, that's really dangerous and dreadful.
And anything that looks like that, I'm going to reject offhand.
And similarly, people over here believe standard models that end up getting either captured or at least limited, and people over here react against that.
So this is another place where I would say the poles are driving each other to nonsensical positions.
Well, yes.
And the way that works in practice is: there is a team that in principle knows that it is in favor of doing the analysis, but it does not believe itself capable of doing the analysis.
So effectively it signs up for the authority of those who claim to have done the analysis and in principle have the right degrees or whatever.
But then we run into this thing which goes back to something you've said in several places in this discussion.
Which has to do with the bias amongst those involved in certain behavior.
In other words, if you're an epidemiologist at the moment, or a virologist, there's a very strong chance that you believe the lab leak hypothesis stands a very low chance of being true.
But you also very likely have a conflict of interest.
You may be directly involved in the research program that would have generated COVID-19, Or you may simply be involved in social circles in which there is a desire not to have virologists responsible for this pandemic, and therefore there's a circling of the wagons that has nothing to do with analysis.
But either way, the tendency to converge on a consensus is completely unnatural.
And those who are trying, who earnestly are trying to follow science, end up following consensus delivered by people who claim the mantle of science while not doing the method.
And that is a terrible hazard.
Yeah.
Yeah, I agree.
And there's one step worse, which is the thing that we mentioned earlier, which is you can do the method, have all of the data coming out of the method be right and still have the answer be misrepresentative of the whole because you either studied the wrong thing or you studied something too partial.
And so this question of what is worth trusting comes up again, and it is: okay, I don't want to defect on my own sense-making just to join the consensus so that I am not rejected.
At the same time, if everyone is sure that I'm wrong and I'm sure that I'm right, I should pay attention to that, right?
Because very possibly I have a blind spot and I'm a confused narcissist.
Every once in a while, they are all in an echo chamber and I'm actually seeing something.
And both can be the case sometimes.
So you're like, okay, do I always stick to my guns or do I always take whatever the peer review says?
Neither.
This is again, the optimization function, isn't it?
Wisdom ends up being: I don't know the answer to this trolley problem before I get there.
Right.
So what I have to say is, is the basis by which the other people all agreed that you were wrong deliberative and methodological and earnest and free of motivated reasoning?
Does it have a group-motivated reasoning that's associated with it?
Are there, you know, clear blind spots in the thing you're thinking?
So, I don't think there's an answer to the what actually is right.
There is no methodology.
The Tao that's speakable is not the eternal Tao.
The methodology that's formalizable is not the thing that reveals the Tao, right?
Like, ultimately, you have to end up adding placebo at a certain point, and then double-blinding, and then randomization.
The methods have to keep getting better because there's always something in the letter of the law that doesn't get the spirit of the law, and in the letter of the methodology that doesn't get actual science right.
Right, and in fact, so a couple things here.
One, there's a part of the scientific method which is a black box.
There's a part that actually, I believe, literally cannot be taught, right?
It is the part where you formulate a hypothesis, right?
That is a personal process.
If I taught somebody to do it my way, I don't think they'd do it very well, right?
So the point is, that's something that you learn to do through some process that is mostly not conscious, hard to teach, and hard to discover, but everybody who does it well does it in some different way.
And so at that level, even just saying do the method is incomplete because not everybody can do the method.
Let's see, there was something else.
Oh yeah.
There was a missing thing on your list.
I realize you weren't trying to be exhaustive, but there was a missing thing on the list of possible reasons that you could come up against a consensus and still be right, even if you're the only person who disagrees.
And it has to do with the non-independence of the data points on the other side, based on, let's say, either a perverse incentive or a school of thought having won out and killed off all of the diversity of thought over some issue that turns out to matter.
And these things can happen very easily.
So I would say yes, if you always think you're right and when everybody's against you, they're wrong, then yeah, narcissism is a strongly likely reason.
On the other hand, as you point out with Tesla and their competitors, sometimes you find that a field or an industry is easy to beat, that there's something about them that is, you know, maybe economically very robust, but with respect to their capacity has become feeble.
And this is true again and again in scientific fields: they go through a process where a school of thought delivers handsomely on some insight.
It wins the power to own the entire field.
That insight runs its course.
Diminishing return sets in.
It stops delivering anything new.
It doesn't give up the reins and hand them over to somebody else because there's no mechanism to do that.
So the people who have the school of thought that's already burned out its value stick to their, you know, their power.
And that means that the field is wide open to be beaten by an outsider who just simply isn't required to subscribe to whatever the assumptions of the school of thought are.
And that happens so frequently that it's artificially common, if you think independently and you know what you're doing, to have the experience that you disagree with just about everybody and they actually turn out to be wrong, because they're proceeding from a bad set of assumptions.
So I think this is actually one of the most interesting applications of blockchain or decentralized ledger technology is this idea of an open science platform.
So imagine every time someone did a measurement, the fundamental measurement had to be entered into a blockchain.
And then at the other places that independently did it, it was entered into the blockchain, so it was incorruptible.
And then the axioms and the kind of logical propositions get entered in.
And then the logical processes, whether I'm using an inductive or deductive or abductive process, get put in.
And then we get to kind of look at the progression of knowledge.
Then if at any point we come to realize that a previous thing in there was wrong, we can come back to that point and look at everything downstream from it and reanalyze it.
Of course, you still have the Oracle problem of the entry in the first place.
So if I'm doing motivated science and I get some answers I don't like and I can't hide them and not enter them, then that'll happen.
So you still have to have then the proper entry into the system.
But this addresses something about the integrity of science, and also the integrity of government and government spending, and the capture of the regulators by market forces rather than the regulators being able to regulate the market, which is: we only know when the fucked-up thing happens if we can see it, which means that everyone who wants to do something asymmetric or predatory has a maximum incentive for non-transparency.
So certain kinds of incorruptibility and transparency are very interesting in what they can do towards that.
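A minimal sketch of the tamper-evidence property being described, assuming nothing about any particular blockchain: just a hash chain over measurement entries. It gives incorruptibility of the record after entry; as noted above, it does nothing about the oracle problem of honest entry in the first place. All class and field names here are hypothetical.

```python
import hashlib, json, time

class MeasurementLedger:
    """Append-only, hash-chained record of raw measurements. Altering or
    deleting any past entry invalidates every later hash."""

    def __init__(self):
        self.entries = []

    def record(self, lab_id, payload):
        # payload must be JSON-serializable (the raw measurement itself)
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"lab": lab_id, "payload": payload,
                "timestamp": time.time(), "prev_hash": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Return the index of the first corrupted entry, or None if intact."""
        prev = "0" * 64
        for i, e in enumerate(self.entries):
            body = {k: e[k] for k in ("lab", "payload", "timestamp", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return i
            prev = e["hash"]
        return None
```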
Interesting.
Now this actually comes back to something I wanted to raise earlier but didn't get to, which is: I started out very focused on sustainability.
I believe sustainability is something that the system... You can't measure too finely.
If you measure too finely, then sustainability becomes an absurd block to progress because you can't dig a hole in your own backyard because you couldn't dig a million such holes.
But if you relax the system so that you're measuring processes that actually potentially matter, Sustainability has to be a feature of the system long term, right?
It doesn't have to be the feature of the system in any given time period, but overall it has to net to a sustainable mode.
I wouldn't say a system has to be sustainable.
I would say the meta system or increasing orderly complexity has to be sustainable, but that might mean a system intentionally obsoleting itself for a new better system.
Okay, I accept that.
But what I've realized down this road is that the system actually, or the set of systems, or the metasystem, however you want to describe it, needs a failsafe which I call reversibility.
Right, so the point is, if you set the goal of sustainability and you say, well, we have to measure things that matter, sooner or later you're going to fail to measure something that matters, and you're going to deal with it unsustainably, and at the point that you figure it out, it's going to be too late.
So my point would be, and you know this is a tough one, people don't like the implications of this if they understand it, but any process that you set in motion has to be something you could undo if it turns out to be harmful in a way that you didn't see coming, right?
So that is to say you can alter the atmosphere.
Carbon dioxide is not poisonous, right?
The changes in concentration that change the degree of heat trapping are not terribly meaningful to the well-being of living creatures.
But at the point you discover that the heat trapping is going to massively change the way the atmosphere functions and the oceans, etc., etc., you have to be able to undo it.
Now, undo it means you could change the concentration back to what it was.
Now, what this would mean in practice is that you would have to slow the process of change down such that you scaled up, in proportion, the process that would reverse the change.
Now if you imagine all of the disasters that we have faced, all the ones I named up top and all of the other ones that look like it from, you know, Fukushima to Aliso Canyon to the financial collapse of 2008, and you imagine That in proportion to the process that went awry, we had scaled up the reversal process so that it was there if we needed it, right?
We would have been in a very different situation because A, the process would have run away much, much slower, and B, the tools to undo it would have been present and ready.
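Bret's proportionality rule can be stated as a simple invariant. A toy sketch, with a single scalar standing in for whatever is being perturbed; all names and numbers are hypothetical:

```python
def step_allowed(total_change, reversal_capacity, delta, capacity_built):
    """Permit a perturbation only if, after the step, accumulated reversal
    capacity still covers the accumulated change, i.e. the system could
    in principle be returned to its prior state."""
    return reversal_capacity + capacity_built >= total_change + delta

# The process throttles itself: change can proceed only as fast as the
# undo machinery is scaled up alongside it.
change, capacity = 0.0, 0.0
for year in range(10):
    delta, built = 5.0, 3.0          # e.g. proposed emissions vs. capture built
    if step_allowed(change, capacity, delta, built):
        change += delta
    else:
        delta = capacity + built - change   # largest step that stays reversible
        change += delta
    capacity += built
print(f"after 10 steps: change={change:.0f}, reversal capacity={capacity:.0f}")
```

In this toy, the runaway process is automatically slowed to the rate at which the reversal process is built, which is the mechanism being proposed.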
Oh, before you respond to that, I do want to say that the only way that that would work is if it was over the entire system.
In other words, if one nation, for example, were to decide that it had to adhere to a standard of reversibility while other nations weren't restricted in the same way, you'd get a tragedy of the commons where the atmosphere or whatever other resource would ultimately be destroyed by the nations that didn't participate in that system and the nation that was most responsible would pay the cost of building a reversibility system that wouldn't work in the end.
But other than that I think the principle makes sense.
What do you think?
Something like sustainability having a consideration like reversibility as one of the factors to inform choice-making: that is a valuable consideration. And it doesn't matter at all if we don't have the collective coordination capacity to be able to make the right choices, period.
So yes, agreed.
Now, regarding reversibility, I think reversibility is a valuable consideration that is impossible in important ways.
But it's still an important consideration.
So, can I decrease the amount of CO2 in the atmosphere if we realize we need to?
Kind of, yes.
The CO2 in the atmosphere went up along with a lot of mountaintop removal, mining for coal, and a lot of species extinction in the process.
A lot of people died in wars over oil.
Can I reverse and get those dead people back and those extinct species back and those pristine ecosystems back?
Nope, they're gone.
And then also reversibility over what time frame.
Will new old growth forests come back thousands of years from now?
Sure.
Does that time scale matter?
If I ever extinct a species, is that reversible?
Does every species matter?
What about killing an individual element within it?
You know, so it's like, I can only think about reversibility on very narrowly defined metrics, but the thing that harms that one metric has lots of other effects simultaneously.
And so we have to understand that reversibility by itself is an oversimplification, because we'll always be thinking upon metrics that are subsets of all that's affected.
Yep, I agree.
It is an oversimplification, as is sustainability.
But my sense is that you have to instantiate it in some way in order for the system to be safe.
And I would say, if it prevents you from removing mountaintops, as long as it prevents everybody else from removing mountaintops, it's the right idea.
In other words, if we are allowed to degrade the earth a little bit at a time by removing mountaintops now and, you know, drying up rivers next time, then eventually you have a world that isn't very worth living in.
And I do believe that we have a moral obligation not to degrade the planet, right?
That our highest moral obligation has to be to deliver the capacity to live a complete, fulfilling human life to as many people as we can.
And that means not liquidating the planet.
It means a renewal process, which is the very definition of sustainability.
And it's inconsistent with removing mountaintops.
Now, lots of species don't matter.
Right, there are lots of little offshoots of species, and they can go extinct and they do go extinct and nobody is harmed by their doing so, which is not the same thing as losing orcas or, you know, elephants or eagles or whatever.
So obviously you need to have a rational threshold at which you protect against degradation and allow degradation that doesn't have an important implication.
But the question is really, is it so compromised by those considerations that it's not worth considering?
Or is it rescuable if one figures out how to apply a threshold?
So we said that one of the dialectics that defines the left and right in its most abstract form generally has to do with a focus on the individual versus a focus on society or the collective or the tribe or some kind of group.
Another one is an orientation towards conservation or conservatism, traditionalism, versus an orientation on the other side towards progress or progressiveness.
And again, these are confused all over the place.
And even what we call left and right have shifted in the last few decades in a number of ways.
But it's interesting here, because when you talk about reversibility and sustainability, another synonym is conservation: what is it that we want to be able to conserve?
And so the conservative principle is focused on what has made it through evolution that is valuable enough that we should conserve it and not fuck it up.
And yet, so interestingly, the people who are often called conservatives are not focused on critical aspects of conservation.
You're talking about biosphere conservation right now; oftentimes they're talking about sociosphere conservation, the conservation of social systems.
And you're saying that underneath it is the capacity for humans to thrive and have meaningful lives and relationships.
And we would say that that is a function of the biosphere, the sociosphere and the technosphere and the relationship between them.
And we can say very clearly, it's the technosphere ruining the biosphere most of the time.
And yet, if it ruins the biosphere enough, the technosphere goes, because the technosphere depends upon the biosphere.
So we have to learn how to make a technological built environment that is replenishing and regenerative with the biosphere.
And the sociosphere is another really critical one.
And I think you'll probably actually have something to add to this that I haven't thought of.
When I think of what the fundamental intuition of a conservative is, even if they don't articulate it like this:
it's the traditionalist kind of impulse, which is, let's go back to the Constitution, let's go back to Christianity or European ideas or the free market or whatever it is, or rigorous monogamy, whatever social structure lasted for a long time.
There's an intuition, even if they don't formally think of it this way logically, that almost everything didn't make it through evolution in terms of social systems, and the few things that did weren't the things that people thought would.
So there's a lot of embedded wisdom that wasn't understood, that was very hard-earned, and we want to preserve that and not break it because we think we understand it well enough and we might not.
And that fundamentally the progressive intuition is that we're dealing with fundamentally novel situations that evolution didn't already figure out and we need innovation.
And of course the synthesis of that dialectic is we need new innovation that is commensurate with that which should be conserved.
And not everything should be conserved.
Because some things made it through because they won short-term battles while fucking up the long-term whole.
And so what things are worth being conserved?
What things are not worth being conserved?
Did we understand it well enough that we didn't say this isn't worth being conserved out of hubris?
And then what progress is commensurate with that, I think is a good way of thinking about that dialectic.
Yeah, I like it, and I think there's the flip side of it as well, which is that captured inside biblical traditions are, basically, responses to game-theoretic hazards that are consistent with things we've talked about.
So for example, The Christian sense that not only is the world here for humans to make use of, but that we are in effect obligated to do it, that belief fits perfectly in a world where if your population doesn't capture a resource, somebody else is going to.
So in other words, that belief structure travels along with a tendency to capture the resources that are available.
And to the extent that what that does is it causes the exploitation of a resource, the tools with which those resources could have been exploited in biblical times almost always left a system that would return itself to equilibrium given an opportunity, which isn't true in the modern circumstance.
So what we have is a place where there's lots of stuff that is conservative, that there's a very good and often hidden reason we should preserve, and then there are some places where we'd actually have to upgrade the wisdom because it doesn't fit modern circumstances, and the conservation of the natural world is, I think, a clear case.
Just because you mentioned this case: when people realize that Christendom spread largely by holy war. Not exclusively, but largely.
You need a religion that makes a lot of people who are willing to die in holy war because of a good afterlife, and whom you can spare, right?
A large population of people that can die in war.
Islam and Christianity both had this.
They both had be fruitful and multiply and proselytize because they both had war as a strategy for propagation of the memes, so you needed numbers.
Whereas Judaism didn't have it, right?
And Quakerism and some other ones didn't.
Judaism had to actually make it hard for people to join the religion because you're not gonna lose a lot of people as soldiers.
You're gonna embed yourself as a diaspora within dominant cultures and end up affecting the apparatus of those cultures.
So it's interesting to think about how those different memeplexes had different evolutionary adaptations, but it's important, for the reason you mentioned, that those traditions were influenced by politics and economics and war and philosophy and culture and a lot of things.
So you can't wholesale throw them out or keep them.
You have to actually understand what allowed those memes to propagate and what their memetic propagation dynamics were.
And so that conservative impulse that says the things that made it through made it through for a reason, yes, but some of the things that made it through for a reason won't keep making it through.
Dinosaurs were around for a long time, and then they weren't, right?
So, as we've mentioned, evolution can be blind and run very effectively into cul-de-sacs, and yet on the other side, all too often we will criticize a tradition for being dumb when we don't understand what made it work, and we throw something out that was actually worth not throwing out.
So how do you do a deep enough historical understanding to be able to decide what should be conserved and not is also a really good question.
It's a really important question because it's the Chesterton's fence problem, effectively, right?
Nobody knows what actually was functional and what, you know, had no function but traveled along with it because they were paired very closely in a biblical text.
And, you know, what functioned in ways that we don't want it to function now.
These things are all invisible because the whole thing is encoded in myth.
So it's not in there, right?
So yeah, that's a huge hazard and it's a tough one for those of us who want to build reasonably and recognize that there's an awful lot that we have to do that's novel because it hasn't been accomplished before.
We have to grapple with the fact that it's not like these traditions are simply backward.
Some of them are very insightful and non-literal and we need to exercise great caution in approaching them.
Okay, so I want to come back to your three generations problem.
It's easy to look at the nature of the problems and just assume that we are fucked.
And usually to tie that to some conversation about human nature.
And to say, okay, well, we were able to figure out technology that was extraordinarily powerful.
To speak mythopoetically, the power of gods.
The nuke was clearly the power of gods, right?
And then lots of tech since then.
We can genetically engineer new species, gain-of-function, whatever.
Without the love and wisdom of gods, that goes in a self-terminating direction.
Is it within the capacity of our nature to move towards the love and wisdom of gods to bind that power?
Or are we inexorably inadequate vessels for the amount of power we have?
So then I do a positive deviant analysis to look at what are the best stories of human nature to see if they converge in the right direction.
And then also to look at where there are conditioning factors that we take for granted because they've become ubiquitous, and we think that they're nature.
So if we go back to the Bible for a moment, we look at Jews and we look at, was there a population of people that were able to educate all of their people at a higher level than most other people around them for a pretty long time in lots of different circumstances?
Yes.
You look at the Buddhists.
Were there a population of people that across millennia in different environments were able to make everybody peaceful enough to not hurt bugs?
Yes.
Across all the genetic variants and across all of the economic factors and the whatever else, do we have examples of very high level of cognitive development and very high level of ethical development of different populations based on cultures?
We do.
And then we say, oh, well, but, you know, look at how the Founding Fathers' ideas failed here.
The comprehensive education of everyone is not in the interests of the elite that have the most power, as we mentioned, and so making it seem like that's an impossible thing is actually really good for supporting the idea that there should be some kind of nobility or aristocracy or something like that.
There should be an elite in control because they're more qualified.
I would say that we have not, in modern times, ever tried to educate our population in a way that could lead to self-governance, because there is no incentive to do so.
Or those who had the most capacity had incentive to do something else even when they said they were doing that.
So do I think that it's possible?
Do I think that we have examples historically of people who developed themselves cognitively and ethically enough that we could do those together, right? Buddhist Jews, however we want to talk about it.
Do I think that's possible within human nature, and basically untried?
Yes.
Yeah, I love that, and I agree with you.
It's dependent on something which we might as well spell out here, which is that the difference in capacity between human populations is overwhelmingly, if not entirely, at the software level, which I firmly believe.
I'm speaking as a biologist.
I've looked at this.
I will have to defend it at length elsewhere, but to the degree that it's software that distinguishes us, we can innovate tools, we can democratize tools; all of that is at our disposal.
And I agree with you, it hasn't been tried and it might be our only hope, but at least we've got prototypes.
Now, I will say why I'm grateful for what happened at Evergreen: you wouldn't be here doing this otherwise, or on Bill Maher.
You and Heather are both exceptional educators.
So the fact that your tiny little niche for education got blown up, so that you took this quality of education to all the people who were interested at this larger scale, I'm really happy about.
Thank you, my friend.
This exact thing is the thing that has a chance.
It's a strange attractor of those who are called to a cultural enlightenment starting to come together in a way that can then lead them to coordinate to build systems that can then propagate those possibilities for other people.
Well, I really appreciate that.
And I must say, I feel it as a calling, as I'm certain you do.
And so, yes.
And I also love the point that you made earlier about the fact that the audience for this really is people seeking a kind of enlightenment and community.
And so, yes, as much as you and I both focus on existential risks, there is hope in that.
Yeah.
Okay, Daniel.
Well, this is, I think we've gone more than three hours.
It's certainly been a great conversation, and there are so many threads that are worth revisiting, which we should do sooner rather than later.
This was super fun.
I really enjoyed it.
Yeah, it was.
So, Daniel Schmachtenberger, where can people find you?
You mentioned in the beginning we have something called the Consilience Project, which will be launching soon, via a newsletter in March and then a website in a few months.
So tune back in on that.
It is a project in the space, a non-profit project that is seeking to do a better job of news with education built in.
So we actually look at very complex issues that are polarized, and we make the epistemics that we're applying explicit.
So we're actually teaching people: how do you sense-make complex situations in situ?
And then if anyone ever thinks we missed any of the data or got something wrong, they can let us know, and we'll publicly correct it and credit them if they're right, et cetera.
And the goal there is helping to catalyze cultural enlightenment of this type, recognizing that both education and the fourth estate are requisite structures for an open society, and that an open society being rebooted has to be rebooted at the culture level first.
Right now, you can find me on Facebook, one of those platforms, or I have a blog, an old blog; everything's out of date on it.
CivilizationEmerging.com.
CivilizationEmerging.com.
And are you not on Twitter?
And does that explain how you're so clear-headed?
I'm not on Twitter.
And I'm on Facebook because of Metcalfe's Law.
Because everyone is, so it ends up being a useful introduction and messaging tool.
But yeah, I am not part of the Twitter crew.
More power to you.
All right, Daniel, this has been a pleasure, and I look forward to our next one.
Be well, and everybody else, thanks for tuning in.