Marc Andreessen, co-creator of the Mosaic browser (1993), traces the revolutionary arc of personal computing and the internet, from early dismissals by IBM's Thomas Watson Sr. and The New York Times to today's smartphones, and weighs what comes next in AI, crypto, and VR/AR. Skeptical of claims that Google's AI is sentient, he points to the limits of the Turing Test, arguing that today's systems statistically play back human-written text rather than think. He closes on the double-edged nature of technology, citing nuclear weapons' deterrence effect as an example of an invention whose costs and benefits are hard to tally. [Automatically generated summary]
Yeah, we had one somewhere around that time, and I remember thinking it was the most crazy thing I've ever seen in my life, that you could play a thing that's taking place on your television.
You could move the dial, and the thing on the television would move.
I mean, it was magic.
It's so crude and dumb by today's standards that kids would never believe the impact it had on people back then.
So if you did it correctly, you would get this video where you went through all the right moves and you got to the place, but you would have moments where you had to make a quick decision, and if you made the correct decision, like here, like jumping to the flaming ropes, if you made the correct decision, you would get across.
But if you screwed up, they would play a video of you dying.
Well, there was a famous statement of the founder of IBM, this guy Thomas Watson, Sr., and he famously said one of these things, maybe he said it, maybe he didn't, but he said there's no need for more than five computers in the world.
Right?
And the theory was basically the government needs two, right?
They need like one for defense and one's for like civilian use.
And then there's like three big insurance companies and that's like the total market, right?
And that's all anything needs.
And then there's a famous letter in the HP archives where some engineer told basically the founders of HP they should go in the computer business.
There's an answer back from the CEO at the time saying, you know, nobody's going to want these things.
So like, yeah, it's really, it's tenuous.
I mean, famously, The New York Times wrote a review of the first laptop computer that came out in like 1982, 1983. And the review, you read it, it's just scathing.
It's just like, this is so stupid.
I can't believe these nerds are up to this nonsense again.
This is ridiculous.
And then you realize like what the laptop computer was in 1982, it was 40 pounds.
It was like a suitcase, right?
And you open it up and the screen's like four inches big, right?
And so like the whole thing's slow and it doesn't do much.
And so if you just like take a snapshot at that moment in time, you're like, okay, this is stupid.
But then, you know, you project forward.
And by the way, the people who bought that laptop got a lot of use out of it because it was the first computer you could carry.
But, like, the idea that we would go from that, something that seemed absurd, to literally everybody carrying a supercomputer in their pocket in the form of a phone within 30 years.
Oh, so Microsoft had a very simple operating system at the time, and Microsoft also made what's called BASIC, which was the programming language a lot of that early software was built in.
The big debate at the time actually was, do these things actually serve any function in the home?
The ads would all basically try to get kids to pitch their parents on buying these things, like, well, tell your mom she can file all of her recipes on the computer.
That's the kind of thing they're reaching for.
And then your mom says, well, actually, I have a little 3x5 card holder.
I don't actually need a computer to file my recipes.
So there was that.
A lot of it was games.
A lot of it was video games.
And then kids like me like to learn how to code.
First, it's like play the game.
It's like, well, how do you actually create one of these things?
When the spreadsheet arrived, that was a really big deal, because that was a capability business people didn't have until they had the PC.
So if you were a kid with a computer in 1980, you had a cassette player, and so they would literally record programs as garbled electronic audio on cassette tape, and then you'd read it back in.
But you had this like tension, you had this tension because cassette tapes weren't cheap, they were fairly expensive, and the high quality cassette tapes were quite expensive.
But you needed the high quality cassette tape for the thing to actually work.
But you were always tempted to buy the cheap cassette tape because it was longer.
Right.
And so you would buy the cheap cassette tape, and then your stored programs wouldn't load, and you'd be like, all right, I've got to go back and buy the expensive cassette tape.
This is, I think, an original Model 1. Was there a feeling back then when you were working with these things that this was going to be something much bigger?
And basically what that represented, if you were of a mind to be into this kind of thing, that represented unlimited possibility, right?
Because basically it was inviting, right?
It was basically like, okay, ready for you to do whatever you want to do.
Ready for you to create whatever you want to create.
And you could start typing, you could start typing in code.
And then there were all these, you know, at the time, magazines and books that you could buy that would tell you how to like code video games and do all these things.
But you could also write your own programs.
And so it was this real sense of sort of inviting you into this amazing new world.
And that's what caused a lot of us kind of of that generation to kind of get pulled into it early.
Yeah, so that started in '92. Not even Windows 95 yet; it hit critical mass around Windows 95.
Yeah, so that was pre-Windows 95. Windows 3.1 was new back then, and Windows 3.1 was the first real version of Windows that a lot of people used, and it was what brought the graphical user interface to personal computers.
So the Mac had shipped in 1984, but they just never sold that many Macs.
Most people had PCs.
Most of the PCs just had text-based interfaces, and then Windows 3.1 was the big breakthrough.
Well, so there's a long, this goes to the backstory.
So Xerox had a system called the Alto, which was basically like a proto-Mac.
Apple then basically built a computer that failed called the Lisa, which was named after Steve Jobs' daughter.
And then the Mac was the second computer they built with the GUI. But the story is not complete.
The way the story gets told is that Apple somehow stole these ideas from Xerox.
That's not quite what happened, because those ideas had been implemented even before Xerox by a guy named Doug Engelbart at SRI, the Stanford Research Institute, who gave what's now called the Mother of All Demos, which you can find on YouTube, where in 1968 he shows all this stuff working.
And then if you trace back to the early '60s, you get back to the PLATO system that I talked about, which had a lot of these ideas.
And so it was like a 30-year process of a lot of people working on these ideas until basically Steve was able to package it up with the Macintosh.
Well, so early on, they were kind of the same thing.
So actually, early internet was actually integrated with dial-up.
And so early internet email actually was built.
It didn't assume you had a permanent connection.
It assumed you would dial into the internet once in a while, get all the data downloaded, and then you'd disconnect because it was too expensive to leave the lines open.
Yeah, it was productized; it was the first browser used by a large number of people, the first browser that was really usable by a large number of people.
It was also one of the first browsers that had integrated graphics.
The actual first browser was a text browser.
The very first one, which was a prototype that Tim Berners-Lee created.
But it was very clear at that point.
We have Windows, we have the Mac, we have the GUI, we have graphics, and then we have the internet, and we need to basically pull all these things together, which is what Mosaic did.
We got it to the point where normal people could use it.
You could do this stuff a little bit before, but it was like a real black art to put it together.
So we got to the point where it was fully usable.
We made it what's called backward compatible.
So you could use it to get to any information on the internet, whether it was web or non-web.
And then you could actually have graphics actually in the information.
So webpages before Mosaic were all text.
We added graphics and so you had the ability to have images and you had the ability to ultimately have visual design and all the things that we have today.
And then later with Netscape, which followed, then we added encryption, which gave you the ability to do business online, to be able to do e-commerce.
And then later we added video, we added audio, and it just kind of kept rolling and kind of became what it is today.
But your unique perspective of having been there early on with the original computers, having worked to code the first widely used web browser, and seeing where it's at now, does this give you a better perspective as to what the future could potentially lead to?
Because you've seen these monumental changes firsthand and been a part of the actual mechanisms that forced us into the position we're in today, this wild place.
In comparison, I mean, God, go back from 1980 to today, and there's no other time in history with this kind of change, other than catastrophic natural disasters or nuclear war.
There's nothing that has changed society more than the technology that you are a part of.
So when you see this today, do you have this vision of where this is going?
There are a lot more smart people in the world than have had access to anything that we would consider to be modern universities or anything that we consider to be kind of the way that we kind of have smart people build careers or whatever.
There's just a lot of smart people in the world.
They have a lot of ideas.
If they have the capability to contribute, if they can code, if they can write, if they can create, you know, they will do it.
Like, they will figure it out.
I mean, the most amazing thing about the internet to me to this day is I'll find these entire subcultures.
You know, I'll find some subreddit or some YouTube community or some rabbit hole and there will be, you know, 10 million people working on some crazy collective, you know, thing.
And I just didn't, you know, even I didn't know it existed.
And, you know, people are just like tremendously passionate about what they care about and they fully express themselves.
It's fantastic.
And I feel we're still at the beginning of that.
Most people in the world are still not creating things.
Most people are just consuming.
And so we're still at the beginning of that.
So I know that's the case.
Look, it's just going to keep spreading.
So there's a concept in computer science called Metcalfe's Law that basically expresses the power of a network mathematically.
And the formula is n squared.
And n squared gives you that classic curve that arcs up faster and faster as it goes.
And that's basically an expression of the value of a network being all of the different possible connections between all the nodes, which grows roughly as n squared.
And so quite literally, every additional person you add to the network increases the potential value of the network for everybody who's already on it.
And so every time you plug in a new user, every time you plug in a new app, every time you plug in a new, you know, anything sensor into the thing, a robot into the thing, like whatever it is, the whole network gets more powerful for everybody who's on it.
And the resources at people's fingertips, you know, get bigger and bigger.
And so, you know, this thing is giving people like really profound superpowers in like a very real way.
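As a rough illustration (a hypothetical sketch, not anything from the conversation), the quadratic growth that Metcalfe's Law describes can be shown by counting the possible pairwise connections among n network nodes:

```python
# Metcalfe's Law sketch: a network's potential value is proportional to the
# number of distinct pairwise connections among its nodes, n * (n - 1) / 2,
# which grows roughly as n squared.

def possible_connections(n: int) -> int:
    """Number of distinct pairs among n nodes."""
    return n * (n - 1) // 2

if __name__ == "__main__":
    for n in (2, 10, 100, 1000):
        print(f"{n} nodes -> {possible_connections(n)} possible connections")
```

Going from 10 to 100 nodes takes the count from 45 to 4,950 possible connections, which is the sense in which each new participant makes the whole network more valuable to everyone already on it.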
What do you anticipate to be, like, one of the big factors?
If you're thinking about real breakthrough technologies and things that are going to change the game, is it some sort of a human internet interface, like something that is in your body like a Neuralink type deal?
Is it something else?
Is it augmented reality?
Is it virtual reality?
What do you think is going to be the next big shift in terms of the symbiotic relationship that we have with technology?
Yeah, so this is one of the very big topics in our industry that people argue about, we sit and talk about all day long trying to figure out which startups to fund and projects to work on.
So I'll give you what I kind of think is the case.
So the two that are rolling right now that I think are going to be really big deals are AI on the one hand and then cryptocurrency, blockchain, Web3, sort of combined phenomenon on the other hand.
And I think both of those have now hit critical mass and both of those are going to move.
Really fast.
So we should talk about those.
And then right after that, you know, I think, yeah, some combination of what they call virtual reality and augmented reality, VR, AR, some combination of those is going to be a big deal.
Then there's what's called Internet of Things, right, which is like connecting all of the objects in the world online, and that's now happening.
And then, yeah, then you've got the really futuristic stuff.
You've got the Neuralink and the brain stuff and all kinds of ways to kind of have the human body be more connected into these environments.
That stuff's further out, but there are very serious people working on it.
So let's start with AI, because that's the scariest one to me.
This Google engineer has come out and said that he believes that the Google AI is sentient, because it says that it is sad, it says it's lonely, it starts communicating, and it seems like Google is in a dilemma in that situation.
First of all, if it is sentient, does it get rights?
Like, does it get days off?
I had this conversation with my friend Duncan Trussell last night, and he was saying, imagine if you have to give it rights.
So let's start with what this actually is today, which is very interesting.
Not well understood, but very interesting.
So Google and this other company, OpenAI, are doing these kinds of text bots that have been in the news.
What they've built is a program.
It's an AI program.
It's basically, it uses a form of math called linear algebra.
It's a very well-known form of math, but it uses a very complex version of it.
And then basically what they do is they've got complex math running on big computers.
And then what they do is they have what they call training data.
And so what they do is they basically slurp in a huge data set from somewhere in the world, and then they basically train the math against the data to try to kind of get it up to speed on how to interact and do things.
The training data that they're using for these systems is all text on the internet, right?
And all text on the internet increasingly is a record of all human communication, right?
Well, Google's core business is to do that, is to be the crawler, you know, famously their mission to organize the world's information.
They actually pull in all the text on the internet already to make their search engine work, and then the AI just scans that.
And the AI basically uses that as a training set, right?
And it basically just chews through and processes it.
It's a very complex process, but it chews through and processes it.
And then the AI kind of gets a converged kind of view of like, okay, this is human language.
This is what these people are talking about.
And then it has all this statistical knowledge: when a human being says X, somebody else says Y or Z, or this would be a good thing to say or a bad thing to say.
For example, you can detect emotional loading from text now.
So you can kind of determine with the computer.
You can kind of say, this text reflects somebody who's happy because they're saying, oh, you know, I'm having a great day, versus this text is like, I'm super mad, therefore this person is upset.
And so the computer could get trained on: okay, if I say this thing, it's likely to make humans happy.
If I say this thing, it's likely to make humans sad.
But here's the thing.
It's all human-generated text.
It's all the conversations that we've all had.
And so basically you load that into the computer, and then the computer is able to kind of simulate somebody else having that conversation.
But what happens is basically the computer is playing back what people say, right?
The guy who went through this and did the whistleblower thing, he even said he didn't look at the code.
He's not in there working on the code.
Everybody who works in the code will tell you it's not alive.
It's not conscious.
It's not having original ideas.
What it's doing is it's playing back to you things that it thinks that you want to hear based on all the things that everybody has already said to each other that it can get online.
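The "playing back what people say" idea can be sketched with a bigram model, about the simplest possible version of training on human-written text and echoing its statistics back. This is a toy illustration, not how the systems discussed here actually work internally, but the principle of statistics over human text is the same:

```python
# Toy "playback" sketch: a bigram model records which word follows which in
# human-written text, then continues a prompt using only pairs it has seen.

import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def continue_text(model: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Extend `start` by sampling observed continuations, one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # no observed continuation; stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

if __name__ == "__main__":
    model = train("the cat sat on the mat and the cat ran")
    print(continue_text(model, "the"))
```

Every word it ever emits was put there by a human writer; it has no view of its own, which is the point being made above.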
And in fact, there are all these ways you can kind of trick it.
For example, he has this example where he basically said, you know, I want you to prove that you're alive.
And then the computer did all this stuff to argue that it's alive.
You can do the reverse.
You can say, I want you to prove that you're not alive, and the computer will happily prove that it's not alive.
And it'll give you all these arguments as to why it's not actually alive.
And, of course, it's because the computer has no view on whether it's alive or not.
And so let's talk about something called the Turing Test, which is a little bit more famous now because of the movie they made about Alan Turing.
So the Turing Test basically, in its simplified form, the Turing Test is basically you're sitting in a computer terminal, you're typing in questions, and then the answers are showing up on the screen.
There's a 50% chance you're talking to a person sitting in another room who's typing the responses back.
There's a 50% chance you're talking to a machine.
You don't know.
You're the subject.
And you can ask the entity on the other end of the connection any number of questions.
He or she or it will give you any number of answers.
At the end, you have to make the judgment as to whether you're talking to a person or talking to a machine.
The theory of the Turing test is when a computer can convince a person that it's a person, then it will have achieved artificial intelligence.
Then it will be as smart as a person.
But that raises the question of how easy we are to trick.
So actually it turns out what's happened, this is actually true, what's happened is actually there have been chatbots that have been fooling people in the Turing test now for several years.
But men, like, you get a man on there with a sex chatbot, like, the man will convince himself he's talking to a real woman, like, pretty easily, even when he's not.
And so just think of this as a slightly more, you know, you could think about this as a somewhat more advanced version of that, which is, look, if this thing, if it's an algorithm that's been optimized to trick people, basically, to convince people that it's real, it's going to pass the Turing test, even though it's not actually conscious.
Meaning, it has no awareness, it has no desire, it has no regret, it has no fear, it has none of the hallmarks that we would associate with being a living being, much less a conscious being.
So this is the twist, and this is where I think this Google guy got kind of hung up a little bit, which is that computers are going to be able to trick people into thinking they're conscious way before they actually become conscious.
And then there's just the other side of it, which is like, we have no idea.
We don't know how human consciousness works.
We have no idea how the brain works.
We have no idea how to do any of this stuff on people.
The most advanced form of medical science that understands consciousness is actually anesthesiology, because they know how to turn it off.
They know how to power back on, which is also very important.
But they have no idea what's happening inside the black box.
And we have no idea.
Nobody has any idea.
So this is a parallel line of technological development that's not actually recreating the human brain.
It's doing something different.
It's basically training computers how to understand, process, and then reflect back the real world.
It's very valuable work because it's going to make computers a lot more useful.
For example, self-driving cars.
This is the same kind of work that makes a self-driving car work.
So this is very valuable work.
It will create these programs that will be able to trick people very effectively.
For example, here's what I would be worried about, which is basically what percentage of people that we follow on Twitter are even real people.
He's trying to get to the bottom of that, you know, specifically on that issue from the business.
But just also think more generally, which is like, okay, if you have a computer that's really good at writing tweets, if you have a computer that's really good at writing angry political tweets or writing whatever absurdist humor, whatever it is, like, and by the way, maybe the computer is going to be better at doing that than a lot of people are.
You could imagine a future internet in which most of the interesting content is actually getting created by machines.
There's this new system, DALL-E, that's getting a lot of visibility now, which is this thing where you can type in any phrase and it'll create computer-generated art for you.
So basically what they do, and Google has one of these and OpenAI has one of these, what they do is they pull in all of the images on the internet, right?
So if you go to Google Images or whatever, just do a search.
On any topic, it'll give you thousands of images of whatever you searched for.
That's just basically doing, yeah, sort of psychedelic art.
The DALL-E ones are basically composites; it's almost like an artist that will give you many different drafts.
Like, what is a human consciousness interacting with another human consciousness?
I mean, it is data.
It is the understanding of the use of language, inflection, tone, the vernacular that's used in whatever region you're communicating with this person in to make it seem as authentic and normal as possible.
And you're doing this back and forth like a game of volleyball, right?
This is what language is and a conversation is.
If a computer's doing that... well, it doesn't have a memory, but it does have memory.
Because if that's what we are, then that's all we are.
Because the only difference is emotion and maybe biological needs, like the need for food, the need for sleep, the need for touch and love and all the weird stuff that makes people people, the emotional stuff.
But if you extract that, the normal interactions that people have on a day-to-day basis are pretty similar.
At what point in time does the program figure out how to manifest a physical object that can take all of its knowledge and all the information that's acquired through the use of the internet, which is basically the origin theme in Ex Machina, right?
The super scientist guy, he's using his web browser, his search engine, to scoop up all people's thoughts and ideas, and he puts them into his robots.
This is basically what these companies are doing, hopefully with a different result.
There's another topic.
There's another topic.
A friend of mine, Peter Thiel, and I always argue, he's like, civilization is declining, you can tell, because all the science fiction movies are negative.
It's all dystopia.
Nobody's got hope for the future.
Everybody's negative.
And my answer is the negative stories are just more interesting.
Nobody makes the movie with the happy AI. There's no drama in it.
So anyway, that's why I say hopefully it won't be Hollywood's dystopian vision.
But here's another question on the nature of consciousness, which is another idea that Descartes, the "I think, therefore I am" guy, had: mind-body dualism. It's also what Ray Kurzweil has with this idea that you'll be able to upload the mind. Which is like, okay, there's the mind, which is basically some software-equivalent level of coding, something happening in how we do all the stuff you just described.
Then there's the body and there's some separation between mind and body where maybe the body is sort of could be arbitrarily modified or is disposable or could be replaced or replaced by a computer.
It's just not necessary once you upload your brain.
And this is a relevant question for AI because, of course, the AI, DALL-E, has no body.
You know, GPT-3 has no body.
Well, do we really believe in mind-body?
Do we really believe mind and body are separate?
Like, do we really believe that?
And what the science tells us is, no, they're not separate.
In fact, they're very connected, right?
And a huge part of what it is to be human is the intersection point of brain and mind and then brain to rest of body.
For example, all the medical research now that's going into the influence of gut bacteria on behavior and the role of viruses and how they change behavior.
I think the most evolved version of this, the most advanced version of this, is whatever it means to be human, it's some combination of mind and body.
It's some combination of logic and emotion.
It's some combination of mind and brain.
It leads to us being the crazy, creative, inventive, destructive, innovative, caring, hating people we are.
The sort of mess that is humanity.
That's amazing.
The 4 billion years of evolution that it took to give us the point where we're at today is amazing.
And I'm just saying we don't have the slightest idea how to build that.
We don't even understand how we work.
We don't have the slightest idea how to build that yet.
And that's why I'm not worried that these things somehow come alive, or start to.
Yeah, I'm much more worried than you because my concern is not just how we work, because I know that we don't have a great grasp of how the human brain works and how the consciousness works and how we interface with each other in that way.
But what we do know is all the things that we're capable of doing in terms of we have this vast database of human literature and accomplishments and mathematics and all the different things that we've learned.
All you need to have is something that can also do what we do, and then it's indistinguishable from us.
So, like, our idea that our brain is so complex, we can't even map out the human brain.
We don't even understand how it works.
But we don't have to understand how it works.
We just make something that works just as good, if not better.
And it doesn't have the same cells, but it works just as good or better.
We can do it without emotion, which might be the thing that fucks us up, but also might be the thing that makes us amazing, but maybe only to us.
To the universe where these emotions and all these biological needs, this is what causes war and murder and all the thievery and all the nutty things that people do.
But if we can just get that out, then you have this creativity machine.
Then you have this force of constant, never-ending innovation, which is what the human race seems to be. If you could look at the human race from outside it, I always say this, you'd say, well, what is this thing doing?
It's making better stuff.
All it does is make better stuff.
It never goes, ah, we're good.
It's just constantly new phones, better TVs, faster cars, jets that go faster, rockets that land.
That's all it ever does is make better stuff.
Collectively.
And even materialism, which is the thing where people go, oh, it's so sad.
People are so materialistic.
What's the best fuel for innovation?
Materialism, because people get obsessed with wanting the latest, greatest things, and you literally, like, sacrifice your entire day for the funds to get the latest and greatest things.
You're giving up your life for better things.
That's what a lot of people are doing.
That's their number one motivation for working shitty jobs is so they can afford cool things.
People, you know, look, people driving in the car love it.
The people who get run over by a car hate it, right?
And so, like, technology is this double-edged thing, but the progress does come.
And, of course, it nets out to be, you know, historically at least a lot more positive than negative.
Nuclear weapons are my favorite example, right?
It's like, were nuclear weapons a good thing to invent or a bad thing to invent, right?
And the overwhelming conventional view is they're horrible, right, for obvious reasons, which is they can kill a lot of people.
And they actually have no overt peaceful use. Although, you know, the Soviet Union used to set off nuclear bombs underground to, like, basically develop new oil wells.
They'd be opening up a new well or they'd be trying to correct a jam in an existing well.
They're like, well, what do we have that could free this up?
It's like, oh, how about a nuke?
I'll give you another example.
The U.S. government had a program in the 1950s.
The Air Force had a program in the 1950s called Project Orion.
It was for spaceships that were going to be nuclear-powered, not with a nuclear engine; the spaceship would have like a giant, basically lead, dome.
And then they would actually set off nuclear explosions behind it to propel the spaceship forward.
So they never built it, but they thought hard about it.
I go through these examples to say these were attempts to find positive use cases for nuclear weapons, basically.
And we never did.
So you could say, look, nukes are bad.
We shouldn't invent nukes.
Well, here's the thing with nukes.
Nukes probably prevented World War III. At the end of World War II, if you asked any of the experts in the U.S. or the Soviet Union at the time, are we going to have a war between the U.S. and the Soviet Union in Europe, another land war between the two sides, most of the experts very much thought the answer was yes.
In fact, the U.S. to this day, we still have troops in Germany basically preparing for this land war that never came.
The deterrence effect of nuclear weapons, I would argue, and a lot of historians would argue, basically prevented World War III. So the pros and cons on these technologies are tricky, but they usually do turn out to have more positive benefits than negative benefits in most cases.
I just think it's hard or impossible to get new technology without basically having both sides.
It's hard to develop a tool that can only be used for good.
And for the same reason, I think it's hard for humanity to progress in a way in which only good things happen.
Having said that, there were thousands of years of history before 1945, and the history of humanity before the invention of nuclear weapons was nonstop war.
So the original form of warfare, like if you go back in history, the original form of warfare, like the Greeks, the original form of warfare was basically people outside of your tribe or village have no rights at all.
Like, they don't count as human beings.
They're there to be killed on sight.
Right?
And then the way that warfare happened, like, for example, between the Greek cities.
And this is like the heyday of the Greeks, Athens and Socrates and all this stuff.
The way warfare happened is we invade each other's cities.
Russia, this is the big question for the United States on Russia right now, which is like, okay, what's the one thing we know we don't want?
We don't want nuclear war with Russia, right?
We know we don't want that.
What do we want to do?
U.S. government, what does it want to do?
Well, it wants to arm Ukraine sort of up to the point where the Russians get pissed off enough where they would start setting off nukes.
And this is the sort of live debate that's happening.
And it's a real debate.
You could look at it and you could say, well, nuclear weapons are bad in this case because they're preventing the U.S. from directly interceding in Ukraine.
It'd be better for the Ukrainians if we did.
You can also say the nuclear weapons are good because they're preventing this from cascading into a full land war in Europe between the U.S. and Russia.
Like, we've evolved as a culture where whatever war we have is nothing like World War I or World War II. Well, there's an argument in sort of defense circles that nuclear weapons are actually not useful.
They seem useful, but they're not useful because they can never actually get used.
Basically, it's like, okay, no matter what we do to Putin, he's never going to set off a nuke because if he set off a nuke, it'd be an act of suicide because if we nuked in retaliation, he would die.
You know, a priest of a marginal whatever, maybe we don't take that seriously.
But now we get back to the big questions, right?
Which is like, okay, like, historically, religion, capital R religion, played a big role in the exact questions that you're talking about.
And, you know, traditionally, you know, culturally, traditionally, we had concepts like, well, we know that people are different than animals because people have souls.
Right?
And so, you know, we in the sort of modern evolved West are, you know, a lot of us at least would think that we're beyond the sort of superstition that's engaged in that.
But we are asking these like very profound fundamental questions that a lot of people have thought about for a very long time and a lot of that knowledge has been encoded into religions.
And so I think the religious philosophical dimension of this is actually going to become very important.
I think we as a society are going to have to really take these things seriously.
Well, it plays in the same way that it plays in basically anything.
So religion historically is how we sort of transmit ethical and moral judgments, right?
And then, you know, the sort of modern intellectual vanguard of the West, a hundred years ago or whatever, decided to shed religion as a primary organizing thing, but we decided to continue to try to evolve ethics and morals.
But if you ask anybody who's religious what is the process of figuring out ethics and morals, they will tell you, well, that's a religion.
And so Nietzsche would say we're just inventing new religions.
We think of ourselves as highly evolved scientific people.
In reality, we're having basically fundamentally philosophical debates about these very deep issues that don't have concrete scientific answers and that we're basically inventing new religions as we go.
Well, it makes sense because people behave like a religious zealot when they defend their ideologies, like when they're unable to objectively look at their own thoughts and opinions on things because it's outside of the ideology.
It has something to – from what I've been able to establish from reading about this, it has something to do with basically what does it mean for individuals to cohere together into a group?
And what does it mean to have that group have sort of the equivalent of an operating system that it's able to basically all agree on, where members of the group are able to prove to each other that they're full members of the group?
What religion does is it encodes ethics and morals.
It encodes lessons learned over very long periods of time into basically like a book.
You know, parables, right?
And lessons, right?
And, you know, commandments and things like this.
And then, you know, a thousand years later, people in theory, right, or at least, are benefiting from all of this hard-won wisdom over the generations.
And, of course, the big religions were all developed pre-science, right?
And so they were basically an attempt to sort of encode human knowledge.
Yeah, I think everything ultimately is some – I think basically all human societies, all structures of people working together, living together, whatever, they're all sort of very severely watered down versions of the original cults.
Like if you go far enough back in human history, if you go back before the Greeks, there's this long history of the sort of – and I'm going to specifically talk about Western civilization here because I don't know much about the Eastern side.
But Western civilization – there's this great book called The Ancient City that goes through this and it talks about how the original form of civilization was basically – it was a fascist communist cult.
And this was the origination of the tribes and then ultimately the cities, which ultimately became states.
And that's what I was describing earlier, which was like the Greek city-state was basically a fascist communist cult.
It had a very concrete, specific religion.
It had its own gods.
People who were not in that cult did not count as human, had no rights, and were to be killed on sight or freely enslaved.
They had no moral qualms at all about enslaving people or killing people who weren't in their cult, because those people worshipped different gods.
And so that was the original form of human civilization.
And I think the way that you can kind of best understand the last whatever 4,000 years, and even the world we're living in today, is that we now have a millionth the intensity level of those cults.
I mean, even our cults don't compare to what their cults were like.
We watered the idea from that all-consuming cult down to what we called a religion and then now what we call whatever – I don't know – philosophy or worldview or whatever it is.
And now we've watered it all the way down to CrossFit.
So in an important way, it's been a process of diminishment as much as it's been a process of advancement.
But you're exactly right.
And this is actually relevant in a lot of the tech debates because you can see what happens.
We want to be members of groups.
We want to reform into new cults.
We want to reform into new religions.
We want to develop new ethical and moral systems and hold each other to them.
By the way, what's a hallmark of any religion?
A hallmark of any religion is some belief that strikes outsiders as completely crazy.
What's the role of that crazy belief?
The role is that by professing your belief in the crazy thing, you basically certify that you're a member of the group.
You're willing to stand up and say, yes, I'm a believer.
They have recreated a form of basically evangelical Protestantism in sort of structural terms.
That's what they've actually done.
Nietzsche actually predicted this.
Nietzsche wrote at the same time as Darwin, right?
At the same time that Darwin was basically showing with natural selection that the physical world didn't necessarily exist from creation but rather evolved.
It wasn't actually 6,000 years old, it was actually 4 billion years old, and it was this long process of trial and error as opposed to creation that got us to where we are.
And so Nietzsche said, this is really bad news.
This is going to kick the legs out from under all of our existing religions.
It's going to leave us in a situation where we have to create our own values.
So there's nothing harder in human society than creating values from scratch.
It took thousands of years to get Judaism to the point where it is today.
But even over the thousands of years that people did create various religions and got them to the point where they're at in 2022, they did it all through personal experience, life experience, shared experience, all stuff that's written down, lessons learned.
I mean, wouldn't we be better suited to do that today with a more comprehensive understanding of how the mind works and how emotions work and the roots of religion?
You're much better off constructing this from scratch using logic and reason instead of all this encoded superstition.
However, what Nietzsche would have said is, boy, if you get it wrong, it's a really big problem.
Like if you get it wrong, you know, he said that God is dead and we will never wash the blood off our hands.
Right?
Like, basically meaning that this is going to lead somewhere bad. He basically predicted a century of chaos and slaughter, and we got a century of chaos and slaughter.
It seems that that kind of religious thinking applies to so many critical issues of our time, like even things like climate change.
I've brought up climate change to people, and you see this almost ramping up of defending an idea that, upon further examination, they have very little understanding of, or at least only a sort of cursory understanding that they've gotten through a couple of Washington Post articles.
But as far as a real understanding of the science and the long-term studies, very few people who are very excited about climate change have that.
Clearly, don't get me wrong, this is something we should be concerned with.
This is something we should be very proactive about.
We should definitely preserve our environment.
That's not what I'm talking about.
What I'm talking about is this inclination for people to support or to robustly defend an idea that they have very little study in.
So I was going to say, and I was going to bring up that term, the funniest thing that you hear, the thing that tips you off when it sort of passes into a religious conversation, is this idea that the science is settled.
Richard Feynman, the famous scientist, said science is the process of not trusting the experts.
Very specifically, what we do in science is we don't trust experts because they're certified experts.
What we do is we cross-check everything they say.
Any scientific statement has to be what's called falsifiable, which means there has to be a way to disprove it.
There has to be vigorous debate constantly about what's actually known and not known.
Right.
And so this idea that there's something where there's a person who's got a professorship or there's a, you know, a body, a government body of some kind or a consortium or something, and they get to, like, get together and they all agree and they settle the science, like, that's not scientific.
And so that's the tip-off at that point, that you're no longer dealing with science when people start saying stuff like that, and you weren't dealing with science when they did it with COVID, and you're not dealing with science when they do it with climate.
That's a great example.
Then you're dealing with a religion, and then you're getting all the emotional consequences of a religion.
Have you thought back on the origins of this kind of the function of the mind to create something, this kind of structure?
And do you think that this was done to – because it's fairly universal, right?
It exists in humans that are separated from each other by continents, far away on other sides of the ocean.
Is this a way – I mean I've thought of it as almost like a scaffolding for us to get past our biological instincts and move to a new state of whatever consciousness is going to be or whatever civilization is going to be.
But the fact that it's so universal and that the belief in spiritual beings and the belief in things beyond your control and the belief in omnipresent gods that have power over everything, that it's so universal.
It's fascinating because it almost seems like it's a part of humans that can't be removed.
Like, there are no real atheist societies that have evolved in the world. I mean, there are atheist communities in the 21st century, but they're not even that big.
So, yeah, so look, it goes to basically, I think, the nature of evolution.
It goes to the nature of how we evolve and survive and succeed as a species.
Individually, we don't get very far, right?
The naked human being in the woods alone does not get very far.
We get places as groups, right?
And so do we exist more as individuals or as groups?
I think we exist more as groups.
You know, it's very important to us what group we're in.
There's this concept of sort of cultural evolution.
Right, which is basically this concept that groups evolve in some sort of analogous way to how individuals evolve.
You know, if my group is stronger, I have better individual odds of success of surviving and reproducing than if my group is weak, and so I want to contribute to the strength of my group.
You know, even if it doesn't bear directly on my own individual success, I want my group to be strong.
And so basically you see this process.
Basically the lonely individual doesn't do anything.
It's always the construction of the group.
And then the group needs glue.
It needs bonding and therefore religion, right?
Therefore morality.
Therefore the bonding and binding process.
Yeah, I think it's just inherent.
I think it's just inherent.
And like I said, I think what we're dealing with today is a much diluted version of what we had before.
These things seem strong today.
They're much weaker today than they used to be.
For example, they're less likely to lead to physical violence today than they used to be.
There aren't really violent religious wars in the U.S., in the West.
That doesn't happen now.
We have virtual religious wars where at least we're not killing each other.
You know, you can kind of extend this further and it's like, okay, what is a, you know, what is a fandom, right, of a fictional property, right, or what is a hobby, right, or what is a, you know, whatever, what is any activity that people like to do, what is a community, what is a company, what is a brand, what is Apple, right?
And these are all, we view them as basically increasingly diluted cults, right, that maintain the very basic framework of a religion.
But this is one of the things, even in the Googlebot, this is one of the things, which is, like I said, you can interrogate at least these current systems and they will protest.
You can interrogate these systems in a way where they will absolutely swear up and down that they're conscious and that they're afraid of death and they don't want to be turned off.
And this guy did that at Google.
You can also, like I said, you can interrogate these things and they will prove to you that they're not alive.
I have no idea how to produce human consciousness.
I know how to write linear algebra math code that's able to, like, trick people into thinking that it's a real AI. I know how to do that.
I don't know how to deliberately code an AI to be self-aware or to be conscious or any of these things.
And so the leap here, and this is kind of the Ray Kurzweil leap as well, you know, some people believe in this as a leap.
The leap is like we're going to go from having no idea how to deliberately build the thing that you're talking about, which is like a conscious machine, to all of a sudden the machine becoming conscious and it's going to take us by surprise.
And so that's a leap, right?
I don't know.
It would be like carving a wheel out of stone and then all of a sudden it turns into a race car and like races off through the desert.
We're just like, what just happened?
It's like, well, somebody had to invent the engine or the engine had to emerge somehow from somewhere, right?
Like at some point.
Now, what Ray Kurzweil and other people would say is this will be a so-called emergent property.
And so if it just gets sort of sufficiently complex and there's enough interconnections like neurons in the brain at some point, it kind of consciousness emerges.
It sort of happens kind of, I don't know, bottom-up.
As an engineer, you look at that and you're just kind of like, I don't know, that seems hand-wavy.
Nothing else we've ever built in human history has worked that way.
Everything a computer does today, a sufficiently educated engineer understands every single thing that's happening in that machine and why it's happening.
And they understand it all the way down to the level of the individual atoms and all the way up into what appears on the screen.
And a lot of what you learn when you get a computer science degree is like all these different layers and how they fit together.
Included in that education at no point is, you know, how to imbue it with the spark of consciousness, right?
How to pull the Dr. Frankenstein, you know, and have the monster wake up.
Like, we have no conception of how to do it.
And so, in a sense, it's almost giving engineers, I think, too much, I don't know, trust or faith.
It's just kind of assuming—it's just like a massive hand wave, basically.
And to the point being where my interpretation of it is the whole AI risk, that whole world of AI risk, danger, all this concern, it's primarily a religion.
Like it is another example of these religions that we're talking about.
It's a religion and it's a classic religion because it's got this classic, you know, it's the Book of Revelations, right?
So this idea that the computer comes alive, right, and turns into Skynet or X-Machina or whatever it is and, you know, destroys us all, it's an encoding of literally the Christian Book of Revelations.
Like we've recreated the apocalypse, right?
And so Nietzsche would say, look, all you've done is you've reincarnated the sort of Christian myths into this sort of neo-technological kind of thing that you've made up on the fly.
And lo and behold, you're sitting there and now you sound exactly like an evangelical Protestant, like surprise, surprise.
I do see what you're saying, but is it egotistical to equate what we consider to be consciousness to being this mystical, magical thing because we can't quantify it, because we can't recreate it, because we can't even pin down where it's coming from?
Right?
But if we can create something that does all the things that a conscious thing does, at what point in time do we decide and accept that it's conscious?
Do we have to have it display all these human characteristics that clearly are because of biological needs, jealousy, lust, greed, all these weird things that are inherent to the human race?
Do we have to have a conscious computer exhibit all those things before we accept it?
And why would it ever have those things?
Those things are incredibly flawed.
Right?
Why would it have those things if it doesn't need them?
If it doesn't need them to reproduce, because the only reason why we needed them, we needed to ensure that the physical body is allowed to reproduce and create more people that will eventually get better and come up with better ideas and natural selection and so on and so forth.
That's why we're here and that's why we still have these monkey instincts.
But if we were going to make a perfect entity that was thinking, wouldn't we engineer those things out?
Why would we need those?
So the very thing that we need to prove that a thing is conscious, it would be ridiculous to have it in the first place.
They're totally unnecessary.
If I had a computer and it's like, I'm sad, I'd be like, bitch, what are you sad about?
So, I mentioned how war happened between the ancient Greeks.
It took many thousands of years of sort of modern Western civilization to get to the point where people actually considered each other human.
Right?
Like, people in different Greek cities did not consider each other human.
Like, they considered each other like, you know, I don't know what this is, but this is not a human being as we understand it.
It certainly has no human rights.
We can do whatever we want to it.
And, you know, it was really Judaism and then Christianity in the West, really Christianity, that had this breakthrough idea that said that everybody is basically a child of God, right?
And that there's an inherent moral and ethical value to each individual, regardless of what tribe they come from, regardless of what city they come from.
We still, as a species, seem to struggle with this idea that all of our fellow humans are even human.
Part of the religious kind of instinct is to very quickly start to classify people into friend and enemy and to start to figure out how to dehumanize the enemy and then figure out how to go to war with them and kill them.
We're very good at coming up with reasons for that.
So if anything, our instincts are wired in the opposite direction of what you're suggesting, which is we actually want to classify people as non-human.
I had a feral cat at one point in time, and he didn't trust anybody but me.
He would hiss and sputter at anybody near him, and he had weird experiences, I guess, when he was a little kitten before I got him, and also just from being wild.
I think that's what human beings had before they were domesticated by civilization.
I think we had a feral idea of what other people are.
Other people were things that were going to steal your baby and kill your wife and kill you and take your food and take your shelter.
That's why we have this thought of people being other than us.
And that's why it was so convenient to think of them as other so you could kill them because they were a legitimate threat.
That doesn't exist anymore when you're talking about a computer.
When you get to the point where you develop an artificial intelligence that does everything a human being does except the stupid shit, is that alive?
Well, let me give you, okay, so everything a human being does.
So the good news is these machines are really good at generating the art, and they're really good at, like, tricking Google engineers into thinking they're alive, and they're really good sex bots.
I'm just saying there's a lot, and again, this goes to the thing.
And look, you could say I'm being human-centric in all my answers, to which it's like, okay, why can't a computer do what a human can, or what's so special about all these things about people?
I think my answer there would just be, like, of course we want to be human-centric.
Like, we're the humans.
Like I said, like, you know, the universe doesn't care.
Do you know there's a gentleman from Australia who got his arm and leg bitten off by a shark?
I met him at the Comedy Store and he has a carbon fiber arm that articulates and the fingers move pretty well.
You can shake his hand.
It's kind of interesting.
And he walks without a limp with his carbon fiber leg.
And I'm looking at this guy and I'm like, this is amazing technology and what a giant leap in terms of what would happen a hundred years ago if you got your arm blown off and your leg bitten off.
What would it be like?
Well, you'd have a very crude thing.
You'd have a peg and a hook, right?
That's pirates.
What is it going to be like in the future, and are they going to be superior?
Do you remember when Luke Skywalker got his arm chopped off and they gave him a new arm and it was awesome?
But again, that's a lot simpler than building a brain.
And then you take your brain and you put it into this new artificial body that looks exactly like you when you were 20. And we may know how to do that before we understand how consciousness works in the brain.
There are scientists who would say, look, this goes back to the mind-body duality question.
There are scientists who would say, look, the rest of the body is actually so central to how the human being exists and behaves, like, you know, gut bacteria and all these things, right, that if you took the brain away from the rest of the nervous system and the gut and the bacteria and the entire sort of complex of organisms that make up the human body, it would no longer be human as we understand it.
It might still be thinking, but it wouldn't be experiencing the human experience.
There are scientists who would say that.
Obviously, there are religions that would definitely say that, you know, that that's the case.
You know, I would be willing to, me personally, I'd be willing to go so far as to say if it's the brain.
Because what if they do this, and then they take your brain, and then they put it into this artificial body, and this is the new mark.
You're amazing, you're 20 years old, your body, you have no worries, you're bulletproof, everything's great, and you just have this brain in there.
But the brain starts to deteriorate, and they say, good news, we can recreate your brain, and then we can put that brain in this artificial body, and then you're still you, you won't even notice the difference. That's the leap.
And you could, in theory, with future sensors, you could map the brain, meaning you could, like, take an inventory of all the neurons, right?
And then you could take an inventory of all the connections between the neurons and all the chemical signals and electrical signals that get passed back and forth.
And then if you could basically, if you could model that, if you could examine a brain and model that, then you basically would have a new, you would have a computer version of that brain.
Like you would have that.
Just like copying a song or copying a video file or anything like that.
You know, look, in theory, maybe someday with sensors that don't exist yet, if you have all that data and you put it together, does it start to run?
Does it say the same things?
Does it say, hey, I'm Mark, but I'm in the machine now?
But would it even need to say that if it wasn't a person?
Like, if you have consciousness and it's sentient, but it doesn't have emotions and it doesn't have needs and jealousy and all the weirdness that makes up a person, why would it even tell you it's sentient?
But wouldn't it be not concerned about whether it's on or off if it didn't have emotions, if it didn't have a fear of death, if it didn't have a survival instinct?
Yeah, it's just like, yeah, there's a point at which the hypothetical scenarios become so hypothetical that they're not useful, and then there's a point where you start to wonder if you're dealing with a religion.
That's the most fascinating to me because I always wonder what defines what is a thing.
And I've always said that I think that human beings are the electronic caterpillar that's creating the cocoon and doesn't even know it and it's going to become a butterfly.
And then look, there are still, as you said, there are still core unresolved questions about what it means for human beings to be human beings and to be conscious and to be valued and what our system of ethics and morals should be in a post-Christian, post-religious world.
And like, are these new religions we keep coming up with, are they better than what we had before or worse?
One of the ways to look at all of these questions is they're all basically echoes or reflections of core questions about us.
The cynic would say, look, if we could answer all these questions about the machines, it would mean that we could finally answer all these questions about ourselves, which is probably what we're groping towards.
We're trying to figure out what it means to be human and what are our flaws and how can we improve upon what it means to be a human being?
And that's probably what people are at least attempting to do with a lot of these new religions.
I oppose a lot of these very restrictive ideologies in terms of what people are and are not allowed to say, are and are not allowed to do because this group opposes it or that group opposes it.
But ultimately what I do like is that these ideologies at least pay lip service to inclusion and to kindness and compassion, because a lot of it is just lip service.
But at least that's the ethic.
That's what they're saying.
Like, they're saying they want people to be more inclusive, they want people to be kinder, they want people to group in, and they're using that to be really shitty to other human beings that don't do it.
but at least they're doing it in that form, right?
It's not like trying to, I know what you're saying.
But don't you think the goalposts because of this do get moved in a generally better direction?
And that the battle is healthy, as long as it's leveled out, as long as people can push back against the most crazy of ideas, the most restrictive of ideologies, the most restrictive of regulations and rules, and the general totalitarian instincts that human beings have?
Human beings have, for whatever reason, a very strong instinct to force other people to behave and think the way they'd like them to.
Well, the good news, at least in theory, of walking down that path would be less physical violence.
In fact, there is less physical violence.
Political violence, as an example, is way down as compared to basically any historical period.
And so just on a sheer human welfare standpoint, you'd have to obviously say that's good.
You know, the other side of it, though, would be like all of the social bonds that we expect to have as human beings are getting, you know, diluted as well.
They're all getting, you know, watered down.
And, you know, this concept of atomization, you know, we're all getting atomized.
We're getting basically pulled out of all these groups.
These groups are diminishing in power and authority, right?
And they're diminishing in all their positive ways as well.
And they're kind of leaving us as these unmoored individuals trying to find our own way in the world.
And, you know, people having various forms of, like, unhappiness and dissatisfaction and dysfunction that are flowing out of that.
And so, you know, if everything's going so well, then why is everybody so fat?
And why is everybody on, you know, drugs?
And why is everybody taking SSRIs?
And why is everybody experiencing all this stress?
And why are all the indicators on, like, anxiety and depression spiking way up?
No, no, not everybody, but if you're looking at collective welfare, you don't focus on just the few at the top.
So once upon a time, I'm not religious and I'm not defending religion per se, but once upon a time we had the idea that the body was a vessel provided to us by God and that my body's my temple.
I have a responsibility to take care of it.
We shredded that idea.
And then what do we have?
We have this really sharp now demarcation, this really fantastic thing where basically if you're in the elite, if you're upper income, upper education, upper whatever capability, you're probably on some regimen.
You're probably on some combination of weightlifting and yoga and boxing and jujitsu and Pilates and all this stuff and running and aerobics and all that stuff.
And if you're not, you're probably, if you just look at the stats, obesity is rising like crazy.
And then it's this weird thing where like the elite, of course, you know, the elite sends all the messages.
The elite includes the media, sends all the messages.
And the message, of course, now is body positivity, right?
Which basically means like, oh, it's great to be fat.
In fact, doctors shouldn't even be criticizing people for being fat.
And so it's like the people, the elites most committed to personal fitness are the most adamant that they should send a cultural message to the masses saying it doesn't matter.
And so look, people may have a natural inclination to not exercise.
They may have a natural inclination to eat all kinds of horrible stuff.
That may be true.
But there's a big difference between living in a culture that says that that's actually not a good idea and that you should take care of yourself versus living in a culture where the culture says to you, no, that's actually just fine.
In fact, you should be prized for it.
And if a doctor criticizes you, they're being evil.
The people that are sending this body positivity message, in general, what I see is obese people that want to find some sort of an excuse for why it's okay to be obese.
And so one of the arguments for Juul historically was it is healthier than smoking cigarettes.
There's an issue with the heavy metals and the adulterated pods and so forth.
But generally speaking, if you get through that, people are generally going to be healthier smoking a vape pen than they're going to be smoking tobacco.
But think about the underlying thing that's happened, which is negative on nicotine, positive on marijuana.
Well, then think in terms of the political coding on it, right?
So who smokes cigarettes versus who smokes pot?
So who smokes cigarettes?
It's coded.
It's not 100%, but it's coded as especially middle class, lower class white people.
Like, I'm sort of reflexively libertarian.
My general assumption is it's a good idea to not basically tell adults that they can't do things that they should be able to do, particularly things that don't hurt other people.
And furthermore, it seems like the drug war has been a really bad idea and for the same reason prohibition has been a bad idea, which is when you make it illegal, then you make it, then you have organized crime, then you have violence, right?
And all these things.
So that's like my reflexive, as a soft libertarian, that's sort of my natural inclination.
Having said all that, if the result is that 20% of the population is stoned every day, like, is that a good outcome?
It's a really interesting book, and I had him on with a guy named Mike Hart, who's a doctor out of Canada who prescribes cannabis for a bunch of different ailments and different diseases for people, and he was very pro-cannabis, and I'm a marijuana user.
And so the two of them together, it was really interesting because I was more on Alex Berenson's side.
I was like, yeah, there are real instances of schizophrenia radically increasing in people, whether they had an inclination or a tendency to schizophrenia, a family history or something, and then a high dose of THC snaps something in them.
But there are many documented instances of people consuming marijuana, specifically edible marijuana, and having these breaks.
So what are those things?
And because of the fact that it's been prohibited and it's been Schedule I in this country for so long, we haven't been able to do the proper studies.
So we don't really understand the mechanisms.
We don't know what's causing these.
We don't really know what's causing schizophrenia, right?
Well, here's another question, another ethical question that gets interesting, which is, should there be lab development of new recreational pharmaceuticals, right?
Should there be labs that create new hallucinogens and new barbiturates and new amphetamines and new et cetera, et cetera?
But should that be a fully legal and authorized process?
Should there be the equivalent of, you know, should the same companies that make cancer drugs or whatever be able to be in the business of developing recreational drugs?
But isn't the argument against that, that if you do not do that, then it's the same thing as prohibition, that you put the money into the hands of organized crime, and they develop it because there's a desire.
On the other hand, do you want to be, again, it goes back to the question, do you want to be in a culture in which basically everybody is encouraged to be stoned and hallucinating all the time?
And by the way, you'll notice there's another thing that happens, again, as we kind of reach for our new religions.
Yeah.
The reflex, which is legitimate, which we all do, is to start to think, okay, therefore, let's talk about laws.
Let's talk about bans.
Let's talk about government actions.
There's another domain to talk about, which is virtues and our decisions and our cultural expectations of each other and of the standards that we set and who our role models are and what we hold up to be positive and virtuous.
And that's an idea that was sort of encoded into all the old religions we were talking about, like they had that built in.
Arguably, because of the dilution effect, we've lost that sense.
There used to be this concept called the virtues.
If you read the Founding Fathers, they talked a lot about it.
The Founding Fathers, famously guys like Adams and Marshall, said, basically, democracy will only work in a virtuous population.
Right.
In a population of people who have the virtues, who have basically a high expectation of their own behavior and the ability to enforce codes of behavior within the group, independent of laws.
And so it's like, okay, what are our virtues exactly?
What do we hold each other to?
What are our expectations?
In our time, it is kind of unusual historically in that those are kind of undefined.
Well, this goes to, I mean, look, the reason I'm so focused on this whole ethics and morals thing is because a lot of the hot topics around technology ultimately turn out to be hot topics around exactly this. All the questions around freedom of speech are the exact same kind of question as everything we've been talking about: it's an attempt to reach for answers. Should there be more speech suppression? Should there be less? Is hate speech misinformation? And so forth.
These are all encoded ethical and moral questions that prior generations had very clear answers on, and we somehow have become unmoored, and maybe we have to think hard about how to get our moorings back.
And a lot of those people are atheists, guys like my friend Sam Harris.
Very much an atheist, but also very ethical, will not lie, has a very sound moral structure that's admirable.
And when you talk to him about it, it's very well defined.
And he would make the argument that religion and a belief in some nonsensical idea that there's a guy in the sky that's watching over everything is not benefiting anybody.
And that morals and ethics and kindness and compassion are inherent to the human race because the way we communicate with each other in a positive way, it's enforced by all those things.
So would you say that most people in the United States that don't consider themselves members of a formal religion are getting saner over time or less sane over time?
If they have some sort of a method that they use to solidify their purpose and give them a sense of well-being, and generally those things pay respect to the physical body, whether it's through meditation or yoga or something.
There's some sort of a thing that they do that allows them, I don't want to say to transcend, but to elevate themselves above the worst of the base instincts that a human animal has.
If you operate in a community of compassionate, kind, interesting, generous people, generally speaking, those traits would be rewarded, and you would try to emulate the people around you that are successful, that exhibit those things, and you would see how, by being kind and generous and moral and ethical, that person gets good results from other people.
You have other people in the group that reinforce those because they see it, they generally learn from each other.
Isn't it a lack of leadership in that way, that we don't have enough people that have exhibited those things?
You wouldn't propose an answer, but would you ever sit down and come up with just some sort of hypothetical structure that people could operate on and at least have better results?
Right, but to be able to get your personality and your body and your life experiences in line to the point where you have more positive and beneficial relationships with other people.
But do we have to universally value all the same things?
Like, isn't it important to develop pockets of people that value different things?
And then we make this sort of value judgment on whether or not those things are beneficial to the greater human race as a whole, or at least to their community as a whole.
Mike Judge was on the other day, and the podcast actually came out today, and Mike Judge is awesome.
And his movie Idiocracy, I had never watched it.
I'd only watched clips, and I watched it prior to him coming on the show. The fucking beginning scenes, where they explain how the human race devolves, are fucking amazing. It's so funny.
I mean, right now, there's a movement afoot among the elites in our country that basically says having kids, having anybody having kids is a bad idea, including having elites have kids is a bad idea because, you know, climate.
Eugenics itself was discredited by World War II. Hitler gave eugenics a bad name.
Legitimately so.
That was a bad idea.
So it shed the overt kind of genetic engineering component of eugenics.
But what survived was this sort of aggregate question of the level of population.
And so the big kind of elite sort of movement on this in the 50s and 60s was so-called population control.
Now, the programs for population control tended to be oriented at the same countries people had been worried about with eugenics.
In particular, a lot of the same people who were worried about the eugenics of Africa all of a sudden became worried about the population control of Africa.
And this whole modern thing about African philanthropy kind of all flows out of that tradition.
But it all kind of rolls up to this big question, which is like, okay, are more people better or worse?
And if you're like a straight-up environmentalist, it's pretty likely right now you have a position that more people make the planet worse off.
That if we had better nuclear energy, we'd have far less particulates in the atmosphere.
I was watching this video.
It was really fascinating, where they were talking about electric cars, and they were giving this demonstration about, you know, if we can all get onto these electric vehicles, the emission standards would be so much higher.
Better.
The world would be better.
The environment would be better.
And then this person questioning him gets to, where's this electricity coming from that's powering this car?
And so here we sit today with this kind of hybrid thing where we mostly have a lot of gas.
Now there's some solar and wind.
There's a few nuclear plants.
And then Europe kind of has a similar kind of mixed kind of thing.
And then in the last five years, we've decided, both we and Europe have decided, well, let's just shut down the nuclear plants.
Like, let's just shut down the remaining nuclear plants.
Let's try to get the coal to zero.
And then, of course, Europe has hit the buzzsaw on this because now shutting down the nuke plants means they're even more exposed to their need for Russian oil.
Is that because we didn't properly plan what was going to be necessary to implement this green strategy long term, and they didn't look at, okay, we are relying on Russian oil.
What if Russia does this?
What are our options?
Do we go right to coal?
Why don't we have a nuclear power plan? We know that they can develop nuclear power plants that are far superior to the ones that we're terrified of, like Fukushima, right?
Ones that don't have these fail-safe programs, or have a limited fail-safe.
Fukushima had a backup, the backup went out too, and then they were fucked.
Three Mile Island, Chernobyl, meltdowns, that's what scares us.
What scares us is the occasional nuclear disaster, but are we looking at that Incorrectly, because there's far more applications than there are disasters, and those disasters could be used to let us understand what could go wrong and engineer a far better system, and that far better system would ultimately be better for the environment.
Now, the disaster-related deaths, actually, those were deaths attributed to the evacuation, and those were mostly old people under the stress of evacuation.
And then, again, you get into the question of, like, they were old people.
If they were 85, you know, were they going to die anyway?
So back to your question.
So look, nuclear power by far is the safest form of energy we've ever developed.
Like overwhelmingly, the total number of civilian deaths from nuclear power has been very close to zero.
There's been like a handful of construction deaths, like people, concrete falling on people.
Other than that, like it's basically as safe as can be.
We know how bad coal is.
By the way, there's something even worse than coal, which is so-called biomass, which is basically people burning wood or plants in a stove in the house.
If you're a pure utilitarian and you just want to focus on minimizing human death, you go after those five million deaths from burning biomass.
Nobody ever talks about that because nobody actually cares about that kind of thing.
But that is what you would go after.
Nuclear is almost completely safe.
And then there is a way to develop a completely safe nuclear plant, safer than all these others. What you would actually do, there's a new design for plants where the entire thing is entombed from the start.
So you would build a completely self-contained plant.
And you would encase the entire thing, right, in concrete.
And then the plant would run completely lights out inside the box.
And then it would run for 10 years or 15 years or whatever until the fuel ran out.
And then it would just stop working.
And then you would just seal off the remaining part with concrete.
And then you would just leave it put.
And nobody would ever open it.
And it would be totally safe, like totally contained, you know, nuclear waste.
And so you could build this, especially with modern engineering, to your point. Like there hasn't been a new nuclear power plant design in the U.S. in 40 years.
And I think maybe, I don't know, the last time the Europeans did one from scratch.
But if you use modern technology, you could upgrade almost everything about it.
And so we have the capability.
We can do this at any time.
Like this is a very straightforward thing to do.
There has not been a new nuclear plant authorized to be built in the United States in 40 years.
Either people have a dispute about the facts, or there's a religious component here, where the same people who are very worried about climate change are also, for some reason, very worried about nuclear.
As an engineer, I don't understand how they kind of do it.
But what I was going to get to is that that energy also, there are strategies in place to take nuclear waste and convert it into batteries and convert it into energy.
This is always my test for anybody in this area. There's a whole wave of investing happening, a whole climate tech wave. And remember, there was a whole green climate tech wave of investing in tech companies in the 2000s that basically didn't work. There's another wave of that now because a lot of people are worried about the environment. And to me, the litmus test always is: are we funding new nuclear power plants?
Is it that we don't want it or that we don't understand it?
If it was laid out to people the way you're laying it out to me right now, and if there was a grand press conference held worldwide where people understood that the benefits of nuclear power far outweigh the dangers, that the dangers can be mitigated with modern strategies and modern engineering, and that the power plants we're worried about, the ones that failed, were very old.
It's essentially like worrying about the kind of pollution that came from a 1950s car, as opposed to a Tesla.
Like we're looking at something that's very, very different.
Also, Stewart Brand, who's the founder of the Whole Earth Catalog and one of the leading environmentalists of the 1960s, has been on this message for 50 years.
He's written books.
He's given talks.
He's done the whole thing.
There's a debate in the environmental community about this.
He's in the small minority of environmentalists who are on this page.
Well, I don't think there's a lot of people hearing this message.
This message, first of all, the pro-nuclear message, at least nationwide, as an argument amongst intelligent people, is very recent.
It's been within the last couple of decades that I've heard people give convincing arguments that nuclear power is the best way to move forward.
Oftentimes, for environmentally inclined people and people that are concerned about our future who aren't educated about nuclear power, that word automatically gets associated with right-wing, hardcore, anti-environmental people who don't give a fuck about human beings.
They just want to make profits and they want to develop energy and ruin the environment, but do that to power cities.
And so if you are on the right, you're like, this is great.
He's a hero on the right, and he runs this huge industrial company that's a fantastic asset to America, and this is a big opportunity for him and the company, and it's great, and we'll build the nukes, and it's going to be great.
And we'll export them.
It'll be awesome.
If you're on the left, you're cursing him.
You're putting him to work for you to fix the climate, right?
You're doing a complete turnaround, and you're basically saying, you know, look, we're going to enlist you to fix, you know, we view you as a right-winger.
This is a left-wing cause.
We're going to use you to fix the left-wing cause.
I'm saying in an alternate hypothetical world, they would find it entertaining.
Let me start by saying, this is what we should actually do.
We should actually give him the order and have him do it.
And I'm just saying, like, if the left could view it as, oh, we get to take advantage of this guy who we don't like to solve a problem that we take very seriously that we think he doesn't take seriously, which is climate.
Well, I don't know about your logic there, because they would think that he's profiting off of that, and the last thing they would want is Koch brothers to profit.
I've had conversations with people that don't, you know, they don't have the amount of access to human beings and different ideas, and they immediately clam up when you say nuclear power.
They propose a much lower human standard of living.
They propose a return to an earlier mode of living that our ancestors thought was something that they should improve on and they want to go back to that.
And it's a religious impulse of its own.
Nature worship is a fundamental religious impulse.
Look, any of these things become self-perpetuating industries.
There's always a problem with any activist group, which is: do they actually want to solve the problem? Because actually solving the problem is bad for fundraising.
It is kind of ironic in a sense.
I would not even say most of this is bad intent.
I think most of it's just people have an existing way that they think about these things.
He's from MIT?

Yeah, he has a podcast called Titans of Nuclear, and he has gone around the country over the last five years, and he's interviewed basically every living nuclear expert.
So it seems like the problem is there's a bottleneck between information and this idea that people have of what nuclear power is.
That needs to be bridged.
We need to figure out how to get into people's heads that what we're talking about with nuclear power is a very small number of disasters across a large number of nuclear reactors, and that you're dealing with very old technology as opposed to what is possible.
Do you have any concerns about this movement towards electric cars and electric vehicles that we are going to run out of batteries, we're going to run out of raw material to make batteries?
And that could be responsible for a lot of strip mining, a lot of very environmentally damaging practices that we use right now to acquire these materials, and also that this could be done by other countries, of course, that are not nearly as environmentally conscious or concerned.
So, technically, fun fact, we never actually run out of any natural resource.
We've never run out of a natural resource in human history, right?
Because what happens is the price rises, right?
The price rises way in advance of running out of the resource, and then basically whatever that is, using that resource becomes non-economical, and then either we have to find an alternative way to do that thing, or at some point we just stop doing it.
And so, I don't think the risk is running out of lithium.
I think the risk is not being able to get enough lithium to be able to do it at prices that people can pay for the cars.
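The price mechanism he's describing can be sketched as a toy model (all numbers here are made up for illustration; this is not a real market model): as reserves shrink, price rises, demand falls, and extraction stops while plenty of the resource is still in the ground.

```python
# Toy illustration of "we never literally run out": scarcity pushes the
# price up, demand falls in response, and extraction becomes uneconomical
# long before the stock hits zero. Hypothetical numbers throughout.

initial = 1000.0          # hypothetical reserves of some resource
reserves = initial
base_price = 10.0         # price when reserves are full
substitute_price = 80.0   # price at which buyers switch to an alternative

last_price = base_price
for year in range(1, 200):
    # Scarcity pushes price up (toy pricing rule).
    price = base_price * (initial / reserves)
    # Demand shrinks linearly toward zero as price nears the substitute's.
    demand = max(0.0, 100.0 * (1.0 - price / substitute_price))
    if demand < 1.0:
        break  # using the resource is no longer economical
    reserves -= demand
    last_price = price

print(f"Stopped in year {year}: price rose {last_price / base_price:.1f}x, "
      f"{reserves:.0f} of {initial:.0f} units never extracted")
```

In this sketch the loop always halts with reserves still positive: the binding constraint is the price people will pay, not the physical stock, which is exactly the distinction he draws about lithium.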
And then there's other issues, which is where does lithium come from?
I'll just give you an example.
A lot of companies are doing a lot of posturing right now on their morality.
One of the things that all electronic devices have in common, your phone, your Tesla, your iPhone, they all have in common.
They all contain not just lithium, they also contain cobalt.
If you look into where cobalt is mined, it's not a pretty picture.
You know, it's child slaves in the Congo.
And, you know, we kind of all gloss it over because we need the cobalt.
And so maybe we should be much more actively investigating, for example, mining in the U.S. As you know, there's a big anti-mining, anti-natural-resource-development culture in the U.S. and the political system right now.
As a consequence, we kind of outsource all these conundrums to other countries.
It is fascinating to me that there's not a single US-developed and implemented cell phone.
That we don't have a cell phone that's put together by people that get paid a fair wage with health insurance and benefits.
And everything we make, I mean, when we buy an iPhone, you're buying it from Foxconn, right?
Foxconn's constructing it in these Apple, you know, contracted factories where they have nets around the buildings to keep people from jumping off the roof.
And people are working inhumane hours for a pittance.
I mean, there's like a tiny amount of money in comparison to what we get paid here in America.
Why is that?
Like, is that because we want Apple to make the highest amount of profit and we don't give a shit about human life?
Well, here's an environmentalist argument I think I might agree with, which basically is it's very easy for so-called first world or developed countries to sort of outsource problems to developing countries.
And so just as an example, take carbon emissions for a second and we'll come back to iPhones.
Carbon emissions in the US are actually declining.
There's all this animation over the Paris Accords or whatever, but if you look, carbon emissions in the U.S. have been falling now for quite a while.
Well, there's a bunch of theories as to why that is.
Some people point to regulations.
Some people point to technological advances.
For example, modern internal combustion cars emit a lot less.
They have catalytic converters now.
They emit a lot less CO2. But maybe one of the big reasons is we've outsourced heavy industry to other countries, right?
And so all of the factories with the smokestacks, right, and all the mining operations and all the things that generate, and by the way, a lot of mass agriculture that generates emissions and so forth, like in a globalized world, we've outsourced that, right?
And if you look at emissions in China, they've gone through the roof, right?
And so maybe what we've done is we've just taken the dirty economic activity and we moved it over there and then we've kind of gone.
It's a little bit like the debate about the drug trade in countries like Mexico and Colombia, which is how much of that is induced by American demand for things like cocaine.
So, yeah, so it's this.
This is where the morality questions get trickier, I think, than they look, which is like, what have we actually done?
Now, there's another argument on the – I'll defend Foxconn.
There's an argument on the other side of this that actually, no, it's good that we've done this from an overall human welfare standpoint because if you don't like the Foxconn jobs, you would really hate the jobs that they would have been doing instead.
The only thing worse than working in a sweatshop is scavenging in a dump, doing subsistence farming, or being a prostitute.
And so maybe even what we would consider to be low end and unacceptably difficult and dangerous manufacturing jobs may still be better than the jobs that existed prior to that.
And so again, there's a different morality argument you can have there.
Again, it's a little bit trickier than it looks at first blush.
I go through this because I find we're in an era where a lot of people, including a lot of people in my business, are making these very flash-cut moral judgments on what's good and what's bad.
And I find when I peel these things back, it's like, well, it's not quite that simple.
Well, number one, so dropping the price of energy.
Energy is a huge part of any manufacturing process, huge cost thing.
And so if you had basically unlimited free energy from nukes, you all of a sudden would have a lot more options for manufacturing in the U.S. And then the other is, look, we have robotics, the AI conversation.
If you built new manufacturing plants from scratch in the U.S., they would be a lot more automated.
And so you'd have assembly lines of robots doing things, and then you wouldn't have the jobs that people don't want to have.
And so, yeah, you could do those things.
There's actually a big point.
This isn't happening with phones.
This is happening with chips.
So this is one of the actual positive things happening right now, which is there's a big push underway from both the U.S. tech industry and actually the government, to give them credit, to bring chip manufacturing back to the U.S. And Intel is the company leading the charge on this in the U.S. And there's a build-out of a whole bunch of these huge $50 billion chip manufacturing plants that will happen in the U.S.

Was a lot of that motivated by the supply chain crisis?
A lot of chips are actually made in Taiwan, and there's a lot of stress and tension around that. So if we get chips manufactured back in the U.S., we not only solve these practical issues, we might also have more strategic leverage.
We might not be dependent on China.
So the good news is that's happening.
And let me just say, if that happens successfully, maybe that sets a model.
To your point, maybe that's a great example to then start doing that in all these other sectors.
COVID has surfaced a problem that we always had and we now have a new answer to, which is the problem of basically, for thousands of years, young people have had to move into a small number of major cities to have access to the best opportunities.
Right.
And Silicon Valley is a great example of this.
If you've been a young person from anywhere in the world and you want to work in the tech industry and you want to be on the leading edge, you had to figure out a way to get to California, get to Silicon Valley.
And if you couldn't, it was hard for you to be part of it.
And then, you know, the cities that have this, they call these superstar cities, the cities that have these superstar economics where everybody wants to live there, they end up with these politics where they don't want you to ever build new housing.
They never build new roads.
The quality of life goes straight downhill and everything becomes super expensive and they don't fix it and they don't fix it because they don't have to fix it because everybody wants to move there and everything is great and taxes are through the roof and everything is fantastic.
And so one of the huge positive changes happening right now is the fact that remote work worked as well as it did when the COVID lockdowns kicked in and all these companies sent all their employees home and everything just kept working, which is kind of a miracle.
It has caused a lot of companies, including a lot of our startups, to think about how should companies actually be all based in a place like Northern California or should they actually be spread out all over the country or all over the world?
Right.
And so if you think about the gains from that, one is all of the economic benefits of being like Silicon Valley in tech or Hollywood in entertainment, like maybe those gains should be able to be spread out across more of the country and more of the country should be able to participate.
Right.
And then, by the way, the people involved, like maybe they shouldn't have to move.
Maybe they should be able to live where they grew up if they want to continue to be part of their community.
Or maybe they should want to be able to live where their extended family is.
Or maybe they should want to live someplace with a lot of natural beauty or someplace where they want to contribute to, you know, philanthropically the local community.
Whatever other decision they have for why they might want to live someplace, they can now live in a different place and they can have still access to the best jobs.
And it seems like with these technologies like Zoom and FaceTime and all these different things that people are using to try to simulate being there, the actual physical need to be there goes away, if you don't have a job where you actually have to pick things up and move them around.
So some big companies are having some trouble with this right now because they're so used to running with everybody in the same place.
And so there's a lot of CEOs grappling with, like, how do we have collaboration happen, creativity happen if I'm writing a movie or something?
How do I actually do it if people aren't in the same room?
But a lot of the new startups, they're getting built from scratch to be remote, and they just have this new way of operating, and it might be a better way of operating.
And by the way, it has a very nice office facility.
So our firm, the way we run it, we were a single-office firm. Everybody was in the office basically all the time. We now run a primarily remote, virtual mode of operation, but we have off-sites frequently, right?
So we're basically, what we're doing is we're basically taking money we would have spent on real estate and we're spending it instead on travel and then on off-sites.
The idea is to have a good time together, have lots of free time to get to know each other, go on hikes, have long dinners, parties, a fire on the beach, whatever it is, have people really be able to spend time together.
Well, and then what you do is you kind of charge people up with the social bonding, right?
And then they can then go home and they can be remote for six weeks or eight weeks and they still feel connected and they're talking to everybody online.
And then you bring them right when they start to fray, right when it starts to feel like they're getting isolated again, you bring them all back together again.
We have other ways to deal with that kind of thing.
More of what we're trying to do is brainstorming.
Creativity, there's definitely a role for in-person.
And then it's for all of the employee onboarding.
It's for training.
It's for planning.
It's for all the things where you really want people thinking hard in a group to do all those things.
But a lot of it is just the bonding.
Ben and I run our firm.
We're constantly trying to take agenda items off the sheet because we're trying to have people just have more time to get to know each other.
How do you weed out young people that have been indoctrinated into a certain ideology and they think that these struggle sessions should be mandatory and they think that there's a certain language that they need to use and there's a way they need to communicate and there's certain expectations they have of the company to the point where they start putting demands upon their own employers?
Just like the kinds of people who want to go online or want to write articles or whatever about how evil all the technologists are and how evil Elon is and how evil capitalism is and all this stuff.
Yeah, it's a meritocracy, and we're not going to have politics in the workplace, in the sense that they're not going to be under any pressure to either express their political views or deny that they have political views.
Or pretend to agree with political views they don't agree with.
That's just not part of what we do.
We're mission-driven against our mission, not all of the other missions.
You can pursue all the other missions in your free time.
There's this concept of economics called adverse selection.
So there's sort of adverse selection, then there's the other side, positive selection.
So adverse selection is when you attract the worst, right?
And positive selection is when you attract the best, right?
And every formation of any group, it's always positive selection or adverse selection.
I would even say it's a little bit like if you put on a show: depending on how you market the show and how you price it and where you locate it, you're going to attract a certain kind of crowd.
You're going to dissuade another kind of crowd.
There's always some process of sort of attraction and selection.
The enemy is always adverse selection.
The enemy is sort of having a set of preconditions that cause the wrong people to opt into something.
What you're always shooting for is positive selection.
You're trying to actually attract the right people.
You're trying to basically put out the messages in such a way that by the time they show up, they've self-selected into what you're trying to do.
A public example is Coinbase is a company that's now been all the way through this, and it's a company we've been involved with for a long time.
And that's a very public case of a CEO who basically declared that he had hit a point where he wasn't willing to tolerate politics in the workplace.
He was the first of these that kind of did this.
We're going to be mission driven.
It's a cryptocurrency company, so he said our mission is an open global financial system that everybody can participate in.
And he said, "Look, there are many other good missions in the world.
You can pursue those in your own time or go to other companies to do that."
So was it a system where there were activists that infiltrated the company?
And the conclusion he reached was it was destructive to trust.
It was causing people in the company to not trust each other, not like each other, not be able to work on the core problems that the company exists to do.
And so anyway, he did a best case scenario on this.
He just said, look, he actually did it in two parts.
He said, first of all, this is not how we're going to operate going forward.
And then he said, I realize that there are people in my company that I did not set this rule for before, who will feel like I'm changing the rules on them, pulling the rug out from under them and saying they can't do things they thought they could do.
And I'm going to give them a very generous severance package and help them find their next job.
But he did a six-month severance package, something on that order, to make it really easy for people to be able to get health care and deal with all those issues.
Do you think going forward that's going to be what more companies utilize or that they implement a strategy like that?
Yes.
Ultimately, for your bottom line, it's got to be detrimental to have people so energized about so-called activism that it's taking away the energy they would have put towards getting the mission of the company done.
Yeah, so the way we look at it is basically, look, it is so hard to make any business work, period.
Especially from scratch, a startup: to get a group of people together from scratch to build something new against what is basically a wall of resistance, where you start out with indifference and skepticism and then ultimately pitched battles with big existing companies and other startups.
It's so hard to get one of these things to work.
It's so hard to get everybody to just even agree to what to do to do that.
What is the mission of this company?
How are we going to go do this?
To do that, you need to have like all hands on deck.
You need to have everybody with a common view.
A lot of what you do as a manager in those companies is try to get everybody to a common view of mission.
You're trying to build a cult.
You're trying to build a sense of camaraderie, a sense of cohesion.
Just like you would be trying to do in a military unit or in anything else where you need people to be able to execute against a common goal.
And so, yeah, anything that chews away at that, anything that undermines trust and causes people to feel like they're under pressure, under various forms of unhappiness, other missions that the company has somehow taken on along the way that aren't related to the business, all of that just chews away at the company's ability to execute.
And then the twist is that in our society, the companies that are the most politicized are also generally, like, have the strongest monopolies, right?
It's like, look, the problem with using a company like Google or any other large established company like that as an example is that people look at it and they say, well, whatever Google does is what we should do.
It's like, well, start with a search monopoly.
Start life, number one, with a search monopoly, the best business model of all time, $100 billion in free cash flow.
Then you can have whatever culture you want.
But all that stuff didn't cause the search monopoly.
The cause of the search monopoly was like building a great product and taking it to market.
And that's what we need to do.
And so this is where more CEOs are getting to.
Now, having said that, the CEOs who are willing to do this are still few and far between.
Leadership is rare in our time, and I would give the CEOs who are willing to take this on a lot of credit, and I would say a lot of them aren't there yet.
Well, it's all social media, but it's also the mainstream media, the classic media.
Like, look, so what's the fear?
Well, a big part of the fear is that you're then going to deal with, you know, you're going to have the next employee who hates you who's going to go public.
But they would also have to have a platform that's really large where it could be distributed so that it could mitigate any sort of incorrect or biased hit piece on them.
What are your feelings about the prevalence of, I mean, even these sort of novel coins, or novelty coins, and the idea that you could sort of establish a currency for your business?
That's like, you know, there was talk about Meta doing some sort of a Meta coin, you know, and that a company could do that.
Google could do a Google coin, and they could essentially not just be an enormous company with a wide influence, but also literally have their own economy.
And so the frequent flyer miles are, like, a great example of this, right?
In fact, to the point where you have credit cards that give you, you know, frequent flyer miles and sort of cash back.
So companies have that.
You may remember from the 70s, more common in the old days, but there used to be these things called, like, A&P stamps.
There used to be these, like, savings stamps you'd get, and you'd go to the supermarket, and you'd buy a certain amount, and they'd give you these stamps.
You could spend the stamps on different things or send them in.
So there was sort of private so-called script kind of currency issued by companies in that form.
Then there's all these games that have in-game currency, right?
And so you play one of these games like World of Warcraft or whatever, you have the in-game currency and sometimes it can be converted back into dollars and sometimes it can't and so forth.
And so yeah, so there's been a long tradition of companies basically developing internal economies like this and then having their customers kind of cut in in some way.
And yeah, that's for sure something that they can do with this technology.
When you compare fiat currency with these emerging digital currencies, do you think that these digital currencies have solutions to some of the problems of traditional money?
And do you think that this is where we're going to move forward towards, that digital currencies are the future?
So I don't think this is a world in which we cut over from national currencies to cryptocurrencies.
I think national currencies continue to be very important.
The big thing about a national currency to me, the thing that I think gives it real value, 'cause national currencies are no longer backed by gold or silver or anything.
They're fiat, they're paper.
The thing that really gives them value, in my view, is basically that it's the currency of taxation.
Right.
And so if the government basically is going to legally require you to turn over a third of your income every year, they're going to require you to do that not only in the abstract, they're going to require you to do that in that specific currency, right?
Yeah, but that's also, I mean, it's good news, bad news.
It's also a big plus.
It's also a big plus in the following way.
Like, we have a technology starting in 2009, right, sort of out of nowhere.
There is a prehistory to it, but really the big breakthrough was Bitcoin in 2009, the Bitcoin white paper.
We have this new technology to do cryptocurrencies, to do blockchains, and it's this new technology that we didn't have that all of a sudden we have.
And we're basically now 13 years into the process of a lot of really smart engineers and entrepreneurs trying to figure out what that means and what they can build with it.
And at its core is the idea of a blockchain, which is basically like an internet-wide database that's able to record ownership and all these attributes of different kinds of objects, digital or physical.
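The "internet-wide database that records ownership" idea rests on tamper-evident hash chaining. A minimal sketch, purely illustrative: the block format and helper names here are invented for the example and don't match any real protocol, but they show why rewriting an earlier ownership record breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents using a deterministic JSON serialization.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, records):
    # A block bundles some ownership records and points at the
    # previous block's hash, forming a chain.
    return {"prev": prev_hash, "records": records}

def chain_valid(chain):
    # Each block must reference the hash of the block before it;
    # editing any earlier record changes that block's hash and
    # breaks the link, so tampering is detectable.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != block_hash(prev):
            return False
    return True

# Build a tiny chain: alice owns token-1, then transfers it to bob.
genesis = make_block("0" * 64, [{"asset": "token-1", "owner": "alice"}])
block2 = make_block(block_hash(genesis), [{"asset": "token-1", "owner": "bob"}])

print(chain_valid([genesis, block2]))        # True
genesis["records"][0]["owner"] = "mallory"   # tamper with history
print(chain_valid([genesis, block2]))        # False: the chain detects it
```

Real blockchains add consensus, signatures, and proof-of-work on top, but the chained-hash structure is the piece that makes the shared ledger trustworthy.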
I think the way to think about that is anytime there's an economic system, there's some form of fraud or theft against it.
The example I always like to use is, if you remember the saga of John Dillinger and Bonnie and Clyde, when the car was invented, all of a sudden it created a new kind of bank robbery.
Because there were banks, and they had money in the bank, and then all of a sudden people had the car, and then they had the Tommy gun, which was the other new technology they brought back from World War I. And then there was this run of, oh my God, banks aren't safe anymore because John Dillinger and his gang are going to come to town and they're going to rob your bank and take all your money.
And that led to the creation of the FBI. That was the original reason for the creation of the FBI. Right.
And at the time, it was like this huge panic.
It was like, oh my god, banks aren't going to work anymore because of all these criminals with cars and guns.
And so it's basically – it's like anything.
It's like when there's economic opportunity, somebody is going to try to take advantage of it.
There's going to be – people are going to try criminal acts.
People are going to try to steal stuff, and so in any system like that, you're always in a cat-and-mouse game against the bad guys, which is basically what this industry is doing right now.
This goes back to the logic and motion stuff we were talking about earlier.
One view of financial markets, the way they're supposed to work, is that it's lots of smart people sitting around doing math and calculating and figuring out this is fair value and that's fair value and whatever.
It's all a very mechanical, smart, logical process.
Okay.
And then there's reality.
And reality is people are, like, super emotional.
And then emotionality cascades.
And so some people start to get upset, and then a lot more people get upset, or some people start to get euphoric.
Ben Graham is sort of the godfather of stock market investing.
Ben Graham was Warren Buffett's mentor and kind of the guy who defined modern stock investing.
Ben Graham used this metaphor in his book 100 years ago and he said, look, you need to think about financial markets.
He was talking about the stock market, but the same thing is true for crypto.
He said, you think about it, basically think about it as if it's a person and call it Mr. Market.
He said, the most important thing to realize what Mr. Market is, he's manic depressive.
Like, he's really screwed up, right?
And he has, like, all kinds of crazy impulses.
And he has, like, good days and bad days.
And some days, like, his family hates him.
And some days, he's like, you know, it's whatever.
Like, his life is chaos.
And basically, every day, Mr. Market shows up in the market and basically offers to sell you things at a certain price or buy things from you at a certain price.
But he's manic depressive.
And so the same thing on different days, he might be willing to buy or sell at different prices.
And you can spend a lot of time, if you want to, trying to understand what's happening in his head.
But it's like trying to understand what's happening inside the head of a crazy person.
It's probably not a good use of time.
Instead, you should just assume that he's nuts.
And then what you do is you make your decisions about what you think things are worth and when you're willing to trade.
And you do that according to your principles, not his principles.
And so that would be the metaphor that I'd encourage people to think about.
Like, these markets are just nuts.
There's a thousand different reasons why the prices go up and down.
I don't have any idea.
The core question is, what's the substance, right?
What's real?
What's actually legitimately useful and valuable, right?
And that's what we spend all of our time focusing on.
We look at everything through the lens of technology.
And so we look at the lens of these things.
We only invest in things that we think are significant technological breakthroughs.
So if somebody comes out with just an alternative to Bitcoin or whatever, and even if it's a good idea, bad idea, that's not what we do.
What we do is we're looking for technological change.
And basically what that means is the world's smartest engineer is developing some new capability that wasn't possible before, and then building some kind of project or effort or company right around that.
And then we invest.
And then we only think long term.
We only think in terms of 10 years, 15 years, longer.
And the reason for that is big technological changes take time, right?
It takes time to get these things right, right?
And so that's our framework.
We spend all day long talking to the smartest engineers we can find, talking to the smartest founders we can find who are organizing those engineers into projects or companies.
And then we try to back every single one of those that we can find.
So the venture firm I'm a part of now, we're up to about 400 people.
This is kind of what this organization does.
We've got about 25 investing partners.
This is what they do.
We spend all day, basically, talking to founders, talking to engineers.
You know, a lot of us grew up in the industry, so a lot of us have, like, actual hands-on experience having done that.
And then a lot of our partners have been, you know, very involved in these projects over time.
It's a positive selection.
I mentioned adverse selection, positive selection.
We're trying to attract in.
We want the smartest people to come talk to us.
We want the other people, hopefully, to not come talk to us.
We do a lot of, we call outbound, we do a lot of marketing, we communicate a lot in public.
One of the reasons I'm here today is just like we want to have a voice that's in the outside world basically saying here's who we are, here's what we stand for, here are the kinds of projects we work on, here are our values, right?
A big example, the reason I told the Coinbase story of what Brian did, is because that's part of our values: we think it's good that he did that.
Other venture firms might think that's bad, right?
But like if you're the kind of founder who thinks that's good, then we're going to be a very good partner for you.
And then we spend a lot of time in the details.
We have a lot of engineers working for us.
A lot of us have engineering degrees, and so we spend a lot of time working through the details.