Lex Fridman and Joe Rogan debate steroids in MMA: Joe argues the real concern is for the opponent rather than the athlete and that fighting itself is more damaging than the drugs, while Lex floats the possibility of allowing controlled supplementation. They dig into Tesla's Autopilot: the camera-versus-LiDAR debate, shadow-mode testing and over-the-air updates, driver monitoring (which Cadillac's Super Cruise has and Tesla lacks), adversarial attacks on perception systems, and the Boeing crashes as a cautionary tale for safety-critical software. The conversation then ranges over Twitter moderation and alleged bias, engagement-driven algorithms that amplify outrage, fasting, jiu-jitsu drilling and the rubber guard, AI as a new form of life, and OpenAI's decision to withhold its full text-generation model. [Automatically generated summary]
Steroids often feel to me like a bit of a witch hunt. People assume everyone is on steroids. Maybe I'm naive or an optimist, but I tend to give people the benefit of the doubt until proven otherwise.
But Icarus obviously proves... yeah, Icarus kind of throws a monkey wrench at those gears. But with Fedor, right, the technique, the execution, the timing, the brilliance of his movement, no doubt. He's phenomenal.
There is an idea where you should make steroids legal, right?
Or not legal, sorry.
Allowed or some kind of supplementation.
Where's the line when you start to talk about the future of martial arts, the future of sport?
If you can control the levels so that they're healthy, I mean, isn't that the reason that they're not allowed is because if abused, they become unhealthy?
Look, if that was the case, we wouldn't allow fighting.
Because fighting is more damaging than steroids, for sure.
For sure.
Getting punched and kicked and fucking kneed in the face and elbowed into unconsciousness, that is way worse for you than steroids.
The concern is not for the athlete.
The concern is for the opponent.
The idea is that you will be able to inflict punishment that you would not ordinarily be able to inflict.
You will have more endurance.
You will have more power.
You will hurt someone, potentially even...
Look, there's going to be a time where someone dies in a mixed martial arts event.
And if the victor, the one who did not die, was on steroids, it is going to be a huge national tragedy and a massive disaster for the sport, for everything, if that ever does happen.
We can only hope it never does.
For sure.
It's a very, very dangerous game you're playing.
Martial arts is a very dangerous game.
And when you're enhancing your body with illegal chemicals while you're playing that game, it gets even more dangerous.
The real question is, though, here's my take on it.
It's one of the most human subjects, meaning that it's messy.
Humans are messy.
There's good and there's bad.
Look, abortion is a messy subject.
It's messy.
Whether you agree with someone's right to have one or not, what you are actually doing, especially as the fetus gets older, is messy.
When it's a complicated discussion, it's not clear-cut; it's not like saying you should drink water.
You know what I mean?
It's a very complicated discussion.
Steroids are a very complicated discussion.
You're not allowed to do them, but they exist for a reason.
The reason why they exist is they're really effective.
They're really effective at enhancing your body.
But how much of that will we allow?
We allow creatine. We allow supplements, certain things that can slightly elevate your testosterone, slightly elevate your growth hormone.
We allow sauna and ice baths and all these things that have been shown to enhance recovery. But steroids, that's too much.
It's too good.
They're too effective.
But it's weird.
It's weird that this thing that we found that makes you better, you can't use.
No, because there's no real evidence that it's detrimental.
It's not as detrimental as alcohol, and you allow people to drink.
But even when abused, where are the bodies?
There's a great documentary on it called Bigger, Stronger, Faster.
It's by my friend Chris Bell.
And when you watch that documentary and you realize, oh, well, the real negative consequences of taking steroids are that it shuts down your endocrine system.
So it stops your body's natural production.
Of testosterone and growth hormone and hormones.
That's the real problem.
And for young people, that can be very devastating.
And it can lead to depression and suicidal thoughts and all sorts of really bad things when your testosterone shuts down.
But as far as like death, boy, I mean, there's...
People are prescribed pain pills every day of the week. Fighters with injuries who have gotten surgery are prescribed pain pills every day of the week, and those pain pills kill people left and right.
That's just a fact.
People die of those things all the time, much more so than die of steroids.
I'm not advocating for the use of steroids.
I'm being pretty objective and neutral about this, but I'm just looking at it like it's a very messy subject.
So it's an interesting possibility that in moderation you'll be able to allow steroids in future athletics, with the argument that if done in moderation you can actually create healthier athletes.
Yeah, that's a real argument for the Tour de France.
The Tour de France, they say that you actually are better off and healthier taking steroids and EPO than you are doing it without it because it's so unbelievably grueling on the body.
But I don't want people to give in to the impulse.
I think fighting is something that you should do correctly.
There's principles that you should follow to fight correctly.
It doesn't mean that you shouldn't take chances.
But you know there's moments like Ricardo Lamas, when he fought Max Holloway, and they just stood in the center of the ring for the last few seconds of the fight, and Max Holloway pointed down at the ground.
He's like, come on, right here, right here.
And they just started swinging haymakers.
It was amazing.
Well, it happened.
But if I was in Max's corner, I'd be like, don't!
No!
Don't do that, man.
This macho shit is going to give you fucking brain damage.
You're going to get hit with shots you wouldn't get hit with.
No, but Max Holloway is the greatest featherweight of all time.
He's a guy who destroyed Jose Aldo twice.
He's a guy that...
He's beaten everybody in front of him at featherweight.
This one moment, where they decided to throw out all the skill and technique and just swing for the bleachers in the middle of the octagon...
It was a fun moment.
It was great to watch.
But the idea that that was the greatest moment of his life is ridiculous.
The Olympics bring that out too: the thing you don't think should happen, or can't possibly happen, or isn't wise, where people just throw everything away.
Well, in terms of its ability to change lanes and its ability to drive without you doing anything, I just put my hand on the wheel and hold it there, and it does all the work.
So, because, like, one or two people listen to this podcast...
I want to take this opportunity and tell people, if you drive a Tesla, whether you listen to this now or a year from now, two years from now, Tesla or any other car, keep your damn eyes on the road.
So, whatever you think the system is able to do, you will have to still monitor the road.
And this is the big throwdown between Elon Musk and everybody else.
So Elon Musk says the best sensor is camera.
Well, everybody else says that at this time LiDAR, which is these laser sensors, is the best sensor.
So in this case, I'm more on the side of camera, on Elon Musk's side.
So here's the difference.
Lasers are more precise.
They work better in poor lighting conditions.
They're more reliable.
You can actually build safe systems today that use LiDAR.
The problem is that they don't have very much information.
So we use our eyes to drive.
And cameras, the same thing.
And they have just a lot more information.
So if you're going to build artificial intelligence systems, so machine learning systems that learn from huge amounts of data, camera is the way to go.
Because you can learn so much more.
You can see so much more.
So the richer, deeper sensor is camera.
But it's much harder.
You have to collect a huge amount of data.
It's a little bit more futuristic, so it's a longer-term solution.
So today, to build a safe vehicle, you have to go LiDAR.
Tomorrow, however you define tomorrow: Elon says it's in a year.
Others say it's 5, 10, 20 years.
Camera is the way to go.
So that's the hard debate.
And there's a lot of other debates, but that's one of the core ones.
Basically, for the camera approach, if you go camera like you do in the Tesla, there are seven cameras in your Tesla: three looking forward, others all around, and so on, and one looking inside.
No, you have the Model S? Yeah.
Yeah, so that one doesn't have a camera that's looking inside.
So it's all cameras plus radar and ultrasonic sensors.
That approach requires collecting huge amounts of data, and they're doing that.
They drove now about 1.3 billion miles under Autopilot.
So you're talking about over 500,000 vehicles have Autopilot.
450,000, I think, have the new version of Autopilot, Autopilot 2, which is the one you're driving.
And all of that is data.
So all of those, all the edge cases, what they call them, all the difficult situations that occur, is feeding the machine learning system to become better and better and better.
And the open question is: how much better does it need to get to reach the human level of performance?
One of the big assumptions of us human beings is that we think that driving is actually pretty easy, and we think that humans suck at driving.
Those two assumptions.
You think like driving, you know, you stay in the lane, you stop at the stop sign, it's pretty easy to automate.
And then the other one is you think like humans are terrible drivers, and so it'll be easy to build a machine that outperforms humans at driving.
Now, I think there are a lot of flaws behind that intuition.
We take for granted how hard it is to look at a scene. Everything you just did, picking up and moving around some objects: it's really difficult to build an artificial intelligence system that does that.
To be able to perceive and understand the scene enough to understand its physics, all these objects, how to pick them up, the texture of those objects, the weight, to understand glasses folded and unfolded, an open water bottle: all of that is common sense knowledge that we take for granted.
We think it's trivial.
But there is no artificial system in the world today, nor will there be for perhaps quite a while that can do that kind of common sense reasoning about the physical world.
Add to that pedestrians.
So add some crazy people in this room right now to the whole scene.
And consider a skateboarder. Not that he's an asshole, he's a respectable skateboarder. In order to make...
It's not just that you have to perceive the world.
You have to assert your presence in this world.
You have to take risks.
So in order to make the skateboarder not cross the street, you have to perhaps accelerate if you have the right of way.
And there's a game-theoretic element, a game of chicken, to get right.
I mean, we don't even know how to approach that as an artificial intelligence research community and also as a society.
Do we want an autonomous vehicle that speeds up in order to make a pedestrian not cross the street?
Which is what we do all the time.
We have to assert our presence.
If there's a person who doesn't have the right of way who begins crossing, we're going to either maintain speed or speed up potentially if we want them to not cross.
So there was, I believe, a fatality in a Tesla in Mountain View.
This is a common problem for all lane-keeping systems like Tesla Autopilot: there was a divider in the highway, the car was driving along the lane, and then the car in front moved to an adjacent lane and this divider appeared.
So you had to steer to the right, and the car didn't; it went straight into the divider.
The only information they have is hands on the steering wheel, and they were saying that for something like half a minute leading up to the crash, the hands weren't on the wheel.
They're basically trying to infer whether the person was paying attention or not.
But we don't have the information about exactly where their eyes were.
You can only make guesses, as far as I know.
So the question is, this is the eyes on the road thing, because I think I've heard you on a podcast saying you're tempted to sort of look off the road at your new Tesla, or at least become a little bit complacent.
When you're driving, I mean, we've discussed this many times on the podcast: the reason why people have road rage, one of the reasons, is because you're in a heightened state, because cars are flying around you and your brain is prepared to make split-second decisions and moves.
And the worry is that you would relax that because you're so comfortable with that thing driving.
Everybody that I know that's tried that, they say you get really used to it doing that.
You get really used to it just driving around for you.
I mean, the thing is, in a lot of the things he does, which I admire greatly in any innovator, man or woman, he's just boldly, fearlessly pursuing new ideas, jumping off the cliff and learning to fly on the way down.
Mm-hmm.
I mean, no matter what happens, he'll be remembered as one of the great innovators of our time.
Whatever you say about him. In my book, Steve Jobs was as well.
Even if you criticize him, saying perhaps he didn't contribute significantly to the technological development of the company or its different ideas.
Still, his brilliance was in all the products: the iPhone, the personal computer, the Mac, and so on.
I think the same is true with Elon.
And yes, in this space of autonomous vehicles, of semi-autonomous vehicles, of driver assistance systems, it's a pretty tense space to operate in.
There's several communities in there that are very responsible but also aggressive in their criticism.
So in the automotive sector, obviously, since Henry Ford and before, there's been a culture of safety, of just great engineering.
These are like some of the best engineers in the world in terms of large-scale production.
You talk about Toyota, you talk about Ford, GM. These people know how to do safety well.
And so here comes Elon with Silicon Valley ideals that throws a lot of it out the window and says we're going to revolutionize the way we do automation in general.
We're going to make software updates to the car once a week, twice a week, over the air, just like that.
That makes people, the safety engineers and human factors engineers, really uncomfortable.
Like, what do you mean you're going to keep updating the software of the car?
Because of the way you test systems in the automotive sector: you come up with the design of the car, every component, and then you go through really rigorous testing before it ever hits the road.
The idea on the Tesla side is that they basically test the software in shadow mode, but then they just release it.
So essentially the drivers become the testing.
And then they regularly update it to adjust if any issues arise.
That makes people uncomfortable because there's no standardized testing procedure, there's not, at least, a feeling of rigor in the industry, because the reality is we don't know how to test software with the same kind of rigor we've used to test automotive systems in the past.
So I think it's extremely exciting and powerful to approach automotive engineering, at least in part, with a software engineering perspective.
So just doing what's made Silicon Valley successful.
So updating regularly, aggressively innovating on the software side.
So your Tesla over the air, while we're sitting here, could get a totally new update.
With a flip of a bit, as Elon Musk says, it can gain all new capabilities.
That's really exciting, but that's also dangerous.
We're, as a society, used to software failing, and we just kind of reboot the device or restart the app.
The most complex software systems in the world today, if we think outside of nuclear engineering and so on, are too complex to really thoroughly test.
So thorough, complete testing, proving that the software is safe is nearly impossible on most software systems.
That's nerve-wracking to a lot of people because there's no way to prove that the new software update is safe.
Yeah, so I don't have any insider information, but there's a lot of publicly available information: they test the software in shadow mode, meaning they see how the new software compares to the current software by running it in parallel on the cars, seeing if there are any major disagreements, and bringing those up for review.
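To make that concrete, here is a minimal sketch of the shadow-mode idea in Python. The module interface, names, and the disagreement threshold are all invented for illustration; this is a sketch of the concept, not Tesla's actual implementation.

```python
# Minimal sketch of shadow-mode testing: the candidate software runs in
# parallel but never controls the car; disagreements get logged for review.

DISAGREEMENT_THRESHOLD = 0.15  # steering difference (radians); arbitrary here

def shadow_mode_step(active_model, candidate_model, sensor_frame, review_log):
    active_cmd = active_model.steering_command(sensor_frame)      # drives the car
    shadow_cmd = candidate_model.steering_command(sensor_frame)   # never actuated

    if abs(active_cmd - shadow_cmd) > DISAGREEMENT_THRESHOLD:
        # A major disagreement: flag this frame for engineers to inspect
        # before the candidate software is ever released.
        review_log.append({
            "frame_id": sensor_frame.id,
            "active": active_cmd,
            "shadow": shadow_cmd,
        })

    return active_cmd  # the vehicle always follows the current software
```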
I think the software infrastructure that Tesla has built allows for that.
And I think other companies should do the same.
That's a really exciting, powerful way to approach not just automation, not just autonomous or semi-autonomous vehicles, but safety in general.
It's basically: take all the data that's on the cars, bring it back to a central point where you can use the edge cases, all the weird situations in driving, to improve the system, to test the system, to learn, to understand where the car is used and misused, how it can be improved, and so on.
So the interesting thing about driving is most of it is pretty boring.
Nothing interesting happens.
So they have automated ways of extracting, again, what are called edge cases.
So these weird moments of driving.
And once you have these weird moments, they have people annotate.
I don't know what the number is, but a lot of companies are doing this.
It's in the hundreds and thousands.
Basically, humans annotate the data to see what happened.
But most of what they're trying to do is to automate that annotation.
So to figure out how the data can be automatically used to improve the system.
So they have methods for that.
Because it's a huge amount of data, right?
I think at the recent Autonomy Day a couple of weeks ago, they demonstrated the vehicle driving itself on a particular stretch of road.
They showed off that they're able to query the data, basically ask questions of the data. The example they gave is a bike on the back of a car, a bicycle mounted on the back of a car.
And they're able to say, well, when the bicycle is in the back of a car, that's not a bicycle.
That's just the part of the car.
And they're able to look back into the data and find all the other cases, the thousands of cases that happened all over the world, in Europe and Asia, in South America and North America and so on, pull all those examples, and then train the perception system of Autopilot to better recognize those bicycles as part of the car.
So every edge case like that, they go through saying, okay, the car freaked out in this moment.
Let me find moments like this in the rest of the data and then improve the system.
So this kind of cycle is the way to deal with problems, with failures of the system.
It's to say, every time the car fails at something, say, is this part of a bigger set of problems?
Can I find all those problems?
And can I improve it with a new update?
And that just keeps going.
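A toy sketch of that loop, with hypothetical helper functions standing in for the proprietary pieces (finding similar cases, annotating, retraining):

```python
# Toy sketch of the fleet-learning cycle: find failures, mine the fleet's
# data for similar moments, label them, retrain, and ship an update.

def improvement_cycle(fleet_logs, perception_model, find_similar, annotate, retrain):
    failures = [event for event in fleet_logs if event.flagged]  # car "freaked out"
    for failure in failures:
        # Ask: is this part of a bigger set of problems? Pull the thousands
        # of similar moments recorded across the whole fleet.
        similar_cases = find_similar(fleet_logs, failure)
        labeled = [annotate(case) for case in similar_cases]  # human or automated
        perception_model = retrain(perception_model, labeled)
    return perception_model  # shipped back out as an over-the-air update
```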
The open question is how many loops like that you have to run for the car to become really good, better than a human.
Basically, how hard is driving?
How many weird situations, when you manually drive, do you deal with every day?
There are millions of cases; when you watch the video, you see them.
Somebody mentioned that they drive a truck, a UPS truck, past cow pastures.
And they know that if there's no cows in the cow pasture, that means they're grazing.
And if they're grazing, and I hope I'm using the correct terms, I apologize, I'm not a cow guy...
That means that there may be cows up ahead on the road.
There's just this kind of reasoning you can use to anticipate difficult situations.
And we do that kind of reasoning about like everything.
And using your map app to sort of get from point A to point B. But out of the cars that are semi-autonomous, where there is an autonomous program but you do have to keep your hands on the wheel and pay attention to the road, what are the leaders?
Besides Tesla, who else is doing it?
Yeah, I guess at the highest level it's not even a technical difference; it's a principle, a philosophy difference.
Because they're saying we're going to do full autonomy, we're just not quite there yet.
Most other companies are doing semi-autonomous, better called driver assistance systems. They're saying: we're not interested in full autonomy, we just want a driver assistance system that helps you steer the car.
So let's call those semi-autonomous vehicles or driver assistance systems.
There's several leaders in that space.
One car we're studying that's really interesting is a Cadillac Super Cruise system.
So GM has a system, it's called Super Cruise, that I think is the best comparable system to Autopilot today.
The key differentiator there is, there's a lot of little elements, but the key differentiator is there's a driver monitoring system.
So there's a camera that looks at you and tells you if your eyes are on the road or not.
And if your eyes go off the road for, I believe, more than six seconds, it starts warning you and says you have to get your eyes back on the road.
So that's called driver monitoring.
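In pseudocode-like Python, that logic might look like the sketch below; the gaze-estimation interface is hypothetical, and the six-second threshold is just the figure quoted above.

```python
# Minimal sketch of camera-based driver monitoring: warn when the
# driver's eyes have been off the road longer than the allowed window.

EYES_OFF_ROAD_LIMIT_S = 6.0  # the "I believe more than six seconds" figure

def monitor_driver(gaze_estimator, clock, warn):
    eyes_off_since = None
    while True:
        if gaze_estimator.eyes_on_road():      # hypothetical camera-based check
            eyes_off_since = None              # reset the timer
        elif eyes_off_since is None:
            eyes_off_since = clock.now()       # eyes just left the road
        elif clock.now() - eyes_off_since > EYES_OFF_ROAD_LIMIT_S:
            warn("Get your eyes back on the road")  # real systems escalate alerts
```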
That's one of the big disagreements, for example, between me and many experts in the field on one side, and Elon and the Tesla approach on the other: whether there should be a driver monitoring system.
They operate like that on many of the ideas they work with.
They sort of boldly proceed forward to try to make the car extremely safe.
Now the concern there is you have to acknowledge the psychology of human beings.
Unless the car is perfect, or perfect under our definition, which is much better than human beings, you have to be able to make sure that people are still paying attention to help the car out when it fails.
And for that, you have to have driver monitoring.
You have to know what the driver is doing.
Right now, your Tesla only knows about your presence from the steering wheel.
No, and it has a very different philosophy as well in another way, which is it only works on very specific roads, on interstate highways.
There's something called ODD, Operational Design Domain.
So they define that this thing, Super Cruise System, only works on this particular set of roads and they're basically just major highways.
The Tesla approach, what Elon jokingly referred to as ADD, right, is that it works basically anywhere.
So if you try to turn on your autopilot, you can basically turn it on anywhere where the cameras are able to determine either lane markings or the car in front of you.
And so that's a very different approach, saying you can basically make it work anywhere or, in the Cadillac case, make it work on only specific kinds of roads.
So you can test the heck out of those roads.
You can map those roads.
So you can actually use LiDAR to map those roads, so you know the full geometry of the entire interstate highway system it can operate on.
Yeah, but I'd rather you blow out your tire than... I mean, the kind of fatality that happened in Mountain View with the Tesla, I believe, was slightly construction-related.
So, I mean, there are a lot of safety-critical events that happen around construction-related stuff.
Because when you swerve, as opposed to just braking the vehicle, swerving into another lane means you might create a safety situation elsewhere.
It's hard to talk about without actually experiencing the system.
What's more important than driver monitoring and any of the details we talk about is how the whole thing feels, the whole thing together, how it's implemented, the whole interface.
The Cadillac system is actually done really well in the sense that there's a clarity to it.
There's a green color and a blue color and you know exactly when the system is on and when it's off.
That's one of the big things people struggle with in other cars: it's confusing, drivers not being able to understand when the system is on or off.
I don't want to speak too much to the details, but they have lane keeping systems.
They're basically systems that keep you in the lane.
That is similar to what, in spirit, Autopilot is supposed to do, but is less aggressive in how often you can use it and so on.
If you look at actual performance, how often the system is able to keep you in the lane, Autopilot is currently the leader in that space.
And they're also the most aggressive innovators in that space.
They're really pushing it to improve further and further.
And the open question, the worrying question, is: if it improves much more, are there going to be effects like complacency, like people starting to text more, starting to look off-road more?
It's a totally open question and nobody knows the answer to it really.
And like I mentioned, there are a lot of folks in the safety engineering and human factors community, these psychology folks who have roots in aviation, and there's been 70 years of work that looks at vigilance.
If I force you to sit here and monitor for something weird happening, like radar operators in World War II who had to watch for a dot to appear: if I sit you behind that radar and make you do it, after about 15 minutes, and really by 30 minutes, your rate of being able to detect any problems goes down significantly.
You just kind of zone out.
So there's all kinds of psychology studies that show that we're crappy.
Human beings are really crappy at monitoring automation.
If I tell you, if I put a robot and you just say, monitor this system so it doesn't kill anyone, you'll tune out.
We don't have a mode for watching autonomous things, right?
If you consider historically the kind of modes that people have for observing things, we don't really have a mode for making sure that an autonomous thing does its job.
And obviously there's politics. The FAA is supposed to supervise, and there's a close relationship between Boeing and the FAA. There are questions around that; there are better experts on it than me.
But on the software side, it is worrying, because it was essentially a single software update that was meant to help prevent the airplane from stalling.
So if the nose is tilting up, increasing the chance of stalling, it's going to automatically point the nose of the airplane down.
And the pilots, in many cases, as far as I understand, weren't even informed of this update, right?
They weren't even told this is happening.
The idea behind the update is that they're not supposed to really know.
It's supposed to just manage the flight for you, right?
The problem happened with the angle of attack sensor, the sensor that tells you the actual tilt of the plane.
And there's a malfunction in that sensor, as far as I understand, in both planes.
And so the plane didn't actually understand its orientation.
So the system started freaking out and started pointing the nose down aggressively.
And the pilots were like trying to restabilize the plane and couldn't.
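As a toy illustration of the failure mode being described, the logic is roughly the following; the numbers are invented and this is a drastic simplification, not Boeing's actual code.

```python
# Toy sketch: automatic nose-down trim driven by a single angle-of-attack
# sensor. If that one sensor reads falsely high, the system keeps pushing
# the nose down while the pilots fight it, which is the failure described.

STALL_AOA_DEG = 15.0   # illustrative stall threshold, not the real value

def pitch_correction(aoa_sensor_deg: float) -> float:
    """Return a nose-down trim command if measured angle of attack is high."""
    if aoa_sensor_deg > STALL_AOA_DEG:
        return -2.5    # degrees of nose-down trim; arbitrary here
    return 0.0
```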
I think regular updates, combining the two cultures but really letting good software engineering lead the way, is the way to go.
I wish other companies were competing with Tesla on this.
On the software side, Tesla is far ahead of everyone else in the automotive sector.
And that's one of the problems.
I'm worried that, you know, competition is good, right?
And I'm worried that people are way too far behind to actually give Tesla new ideas, to compete with Tesla on software.
So most cars are not able to do over-the-air.
As far as I know, no cars are able to do major over-the-air updates except Tesla vehicles.
They do over-the-air updates to the entertainment system.
Like, you know, if your radio is malfunctioning.
But in terms of the control of the vehicle, you have to go to the dealership to get an update.
Tesla is the only one that can do over-the-air updates to the vehicle's control software, multiple times a week.
I think that should be a requirement for all car companies.
But that requires that they rethink the way they build cars.
That's really scary when you manufacture over a million cars a year at Toyota or GM. To say, especially to old-school Detroit guys and gals who are legit car people, that we need to hire some software engineers, that's a challenge.
It's a totally, you know, I don't know how often you've been to Detroit, but there's a culture difference between Detroit and Silicon Valley.
And those two have to come together to solve this problem.
So you have the adult responsibility of Detroit, how to do production well, manufacturing, how to do safety well, how to test the vehicles well, and the bold, crazy, innovative spirit of Silicon Valley, which Elon Musk represents in basically every way.
I think that will define the future of AI in general.
Interacting with AI systems, even outside the automotive sector, raises these questions of safety, of AI safety: how we supervise the systems, how we keep them from misbehaving, and so on.
There's a whole discipline in AI called adversarial machine learning: for basically any kind of system you can think of, how we can feed it examples, how we can add a little bit of noise to fool it completely.
So there's been demonstrations on Alexa, for example, where you can feed noise into the system that's imperceptible to us humans and make it believe you said anything.
So you fool the system into, say, ordering extra toilet paper, I don't know.
And the same for cars, you can feed noise into the cameras to make it believe that there is or there isn't a pedestrian, that there is or there isn't lane markings.
In practice, it's actually really difficult to do in the real world.
So in the lab, you can do it.
You can construct a situation where a pedestrian can wear certain types of clothing or put up a certain kind of sign where they disappear from the system.
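To give a concrete sense of what "a little bit of noise" means, here is a minimal sketch of the fast gradient sign method (FGSM), a standard adversarial-example technique from that literature. The model and inputs are placeholders; this shows the general idea, not an attack on any specific car or assistant.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss, producing an image that looks unchanged to
# humans but can flip the model's answer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                              # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()      # still looks like the same scene
```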
I have to ask you this because now I just remember this.
You'd be the perfect person to talk about this.
I'm not sure if you remember this case, but there was a guy named Michael Hastings.
Michael Hastings was a journalist and he was, I believe, in Iraq or Afghanistan.
He was somewhere overseas, and he was stuck there because of this volcano that erupted in, I believe, Iceland.
And he was over there for the Rolling Stone magazine and doing an article about a general.
Well, he stayed there for a long time because they were stranded because of the volcano, and they got real comfortable around him.
And he reported a lot of the stuff that they said and did that maybe they thought that he probably wouldn't have reported on, including them saying disparaging things about President Obama at the time.
Anyway, comes back.
The general was forced to resign.
He was a beloved general.
And Michael Hastings was fearing for his life because he thought that they were going to come and get him because these people were very, very angry at him.
He wound up driving his car into a tree going like 120 miles an hour.
And the car exploded and the engine went flying.
And the conspiracy theorists were saying they believed that car had been rigged to work autonomously, or that some third party, a bad person, or a good person depending on your perspective, decided to drive that guy's car into a fucking tree at 120 miles an hour.
I'm just asking you because you're actually an expert.
I mean, it's very rare that you get an expert in autonomous vehicles and you get to run a conspiracy theory by them to see if they can just put a stamp on it being possible or not.
Right, but my issue with it was that there were no cameras on the outside of the vehicle like there are on a Tesla today, which has autonomous driving as an option.
So he wouldn't be hacking the system that perceives the world and acts based on it.
It would literally be a malfunction that forces it to not be able to brake, or to accelerate uncontrollably, which is a more basic kind of attack than making the car steer out of its lane, for example.
That's what people worry about with autonomous vehicles when more and more...
You're talking about potentially 10, 20 million lines of source code.
So there's all this code.
And so obviously it becomes susceptible to bugs that can be exploited to hack the system.
And so people are worried, legitimately so, that these security attacks would lead, in the worst case, to assassinations, but really just to basic hacking attacks.
I think that's something that people in the automotive industry, and certainly Tesla, are really working hard on, making sure that everything is secure.
There's going to be, of course, vulnerabilities always, but I think they're really serious about preventing them.
But in the demonstration space, you're able to show some interesting ways to trick the system in terms of computer vision.
It all boils down to the fact that these systems, the camera-based ones, are not as robust to the world as our human eyes are.
So like I said, if you add a little bit of noise, you can convince it to see anything.
To us humans, it'll look like the same road, like the same three pedestrians.
So that's an exciting, powerful capability, but then, as with Boeing, the flip side is that it can significantly change the behavior of the system.
I mean, that number, whatever it is, it's like 300-plus people dead combined, maybe even 400. I don't even know how to think about that number.
It's a lot of burden, and it's one of the reasons it's one of the most exciting things to work on, actually, is the code we write has the capability to save human life, but the terrifying thing is it also has the capability to take human life.
And that's a weird place to be as an engineer, where a little piece of code, and I write thousands of lines a day, basically the notes you're taking, could eventually lead to somebody dying.
It's the dumbest fucking problematic phrase of all time.
Because someone ridiculously suggested that coal miners could maybe learn how to write computer code and get a different job.
They could be trained.
And so, the way people were looking at it, that was like a frivolous suggestion.
And that it was ridiculous to try to get someone who was 50 years old, who doesn't have any education in computers at all, to change their job from being a coal miner to learning how to code.
So people started saying it to politicians, mocking it.
But then what Twitter alleged was that it was being connected to white supremacy and anti-Semitism and a bunch of different things, that people were saying "learn to code" and putting in a bunch of these other phrases alongside it.
My suggestion would be, well, that's a different fucking thing.
Now you have a problem with Nazis and white supremacists, but the problem is with Nazis and white supremacists.
When someone is just saying "learn to code," mocking this ridiculous idea, that's a legitimate criticism of someone's perspective, the idea that you're going to get a coal miner to learn how to fucking write computer code.
It's crazy.
So people were getting banned for that, and rightly so, people were furious.
The way Google described it to me and Tim Pool when we were discussing it, Google, I mean, excuse me, Twitter.
The way Twitter described it was that essentially they were dealing with trying to censor things at scale.
There was so many people and there's so much going on that it's very difficult to get it right and that they've made mistakes.
Your friend, my friend and mentor, Eric Weinstein, talked to me about this.
I disagreed with him a little bit on this.
I think he basically believes there's a bias.
It boils down to the conversation that Jack is having at the top level inside Twitter.
What is that conversation like?
I tend to believe, and again this might be my naive nature, that they don't have a bias, that they're trying to manage this huge flood of tweets, and that what they're trying to do is not remove conservatives or liberals and so on.
They're trying to remove people who lead to others leaving the conversation.
So they want more people to be in the conversation.
I'm just, I'm thinking, I wasn't planning on talking about her.
But there was a parody account, and someone was running this parody account, which was very mild, just humorous parody account.
They were banned permanently for running it, and then their own account was banned as well.
Whereas...
There's some progressive people or liberal people that post all sorts of crazy shit, and they don't get banned at the same rate.
It's really clear that someone in the company, whether it's up for manual review or at the discretion of individual employees, when you're thinking about a Silicon Valley company, you are, without a doubt, dealing with people who lean left.
There's so many that lean left in Silicon Valley.
The idea that that company was secretly run by Republicans is ridiculous.
They're almost all run by Democrats or progressive people.
Well, the question is – I think there's a leaning left that permeates Silicon Valley.
I think that's undeniable.
I think it's undeniable.
I mean, I think if you polled the people who work in Silicon Valley on their political leanings, it would be, by far, left.
I think it would be the vast majority.
Does that mean that affects their decisions?
Well, what's the evidence?
Well, it kind of shows that it does.
They're not treating it with 100% clarity and across-the-board accuracy, or fairness, rather.
I think there are absolutely people who work there who lean that way.
And there have been videos where they've captured Twitter employees talking about it, talking about how you do that, how you find someone who's using Trump talk or writing "sad" at the end of things, certain characteristics they look for.
There's been videos of, what is that, Project Veritas, where that guy and his employees got undercover footage of Twitter employees talking about that kind of stuff.
The question is how much power do those individuals have?
How many individuals are there like that?
Are those people exaggerating their ability and what they do at work?
Or are they talking about something that used to go on but doesn't go on anymore?
The thought of being open-minded and acting in that ethic is probably one of the most important things that we could go forward with right now because things are getting so greasy.
It's so slippery on both sides.
And we're at this weird position that I don't recall ever in my life there being such a divide between the right and the left in this country.
It's more...
More vicious, more angry, more hateful.
It's different than at any other time in my life.
And I think a lot of our ideas are based on these narratives that may or may not even be accurate.
And then we support them and we reinforce them on either side.
We reinforce them on the left, we reinforce them on the right.
Whereas if you're looking at reality itself, you don't have these clear parameters and these clear ideologies.
I think most of us are way more in the middle than we think we are.
Most of us are.
We just don't want racists running the country.
We don't want socialists giving all our money away.
We don't want to pay too much in taxes to a shitty government.
We don't want schools getting underfunded.
And then we decide: what does my team believe?
The shit that I like, is it this team? Well, not everything, but they've got a lot of it, so I'll go with them.
Maybe I'm not a religious nut, but I'm fiscally conservative, and I don't like the way Democrats like to spend money.
I'm going to go with the Republicans.
Maybe I'm more concerned with the state of the economy and the way we trade with the world than I am with certain social issues that the Democrats embrace.
So I'll lean that way, even though I do support gay rights, and I do support this, and I do support all these other progressive ideas.
There's way more of us in that boat.
There's way more of us that are in this middle of the whole thing.
And the question is, this is where the role of AI comes in.
Does the AI that recommends what tweets I should see, what Facebook messages I should see, is that encouraging the darker parts of me or the Steven Pinker better angels of our nature?
What stuff is it showing me?
Because if the AI trains purely on clicks, it may start to learn when I'm in a bad mood and point me to things that might be upsetting to me.
And so it escalates that division, escalates this vile thing that could most likely be solved by people training a little more jiu-jitsu or something.
This Facebook algorithm encourages people to be outraged accidentally, not even on purpose, just because this is what engages people.
This is what gets clicks.
So they find out, oh, well, he clicks on things when he finds out that people are anti-vaccination.
Or he clicks on things when he finds out, you know, fill in the blank with whatever the subject is.
And then you go: these motherfuckers, you know, this is the reason why measles are spreading.
And you start getting angry.
I mean, the anti-vax arguments on Facebook, I don't know if you ever dip into those waters for a few minutes and watch people fight back and forth in fury and anger.
It's another one of those subjects that becomes extremely lucrative for any social media empire.
If you're all about getting people to engage, and that's where the money is in advertising, getting people to click on the page, and the ads are on those pages, you get those clicks, you get that money.
If that's how the system is set up, and I'm not exactly sure how it is because I don't really use Facebook, that's what it rewards.
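As a sketch, a purely engagement-driven ranker is something like the following; the names are invented, and real feed-ranking systems are far more elaborate, but the incentive problem is visible even in this toy version.

```python
# Toy sketch of click-optimized feed ranking: posts are ordered purely by
# predicted click probability, with no notion of whether the engagement
# being maximized is curiosity or outrage.

def rank_feed(candidate_posts, user_history, predict_click_prob):
    scored = [(predict_click_prob(post, user_history), post)
              for post in candidate_posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Outrage-inducing posts that reliably draw clicks rise to the top,
    # not because anyone chose that, but because clicks are the only signal.
    return [post for _, post in scored]
```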
So when we think about concern for AI systems, we talk about Terminator scenarios, and I'm sure we'll touch on that, but I think of Twitter as a whole as one organism.
That is the thing that worries me the most, is the artificial intelligence that is very kind of dumb and simple, simple algorithms that are driving the behavior of millions of people.
And together, the kind of chaos that we can achieve...
I mean, that algorithm has incredible influence on all society.
Twitter, our current president is on Twitter.
All day.
Yeah, all day, all night.
I mean, it's scary to think about.
We talk about autonomous vehicles leading to one fatality, two fatalities.
It's scary to think about what a small change in the Twitter algorithm could do. I mean, it could start wars.
And if you think about the long term, if you think about it as one AI organism, that is a superintelligent organism that we have no control over.
And I think it all boils down, honestly, to the leadership.
To Jack and other folks like him, making sure that he's open-minded, that he goes hunting, that he does some jiu-jitsu, that he eats some meat and sometimes goes vegan.
You know, there's been a lot of research on fasting and the effect it has on telomeres.
Dr. Rhonda Patrick spoke about it pretty recently.
There's been quite a few things that she's written about in terms of fasting and the benefits of fasting.
Intermittent fasting is great for weight loss, but just fasting itself, even for several days.
Most people seem to get some pretty decent benefits out of it, so I dabble in it.
I also like the way it makes me feel.
To be a little hungry, I think my brain is sharper.
I refuse to go on stage full when I do stand-up.
I actually learned this from a Cat Williams interview.
He was talking about it.
He's crazy as fuck, but he's hilarious, and he's one of the greats, in my opinion.
He was in the back of a limo, talking about how he prepares for a show: he has his pre-show music he listens to, like a playlist, and then he'll have a drink, but no food. He won't eat because it slows you down. And I was like, yeah, that'll slow you down. But sometimes you don't even think of it. It's not like a rule, so you just go, man, I'm hungry, I'll just eat. I would way rather not, because I can go through a couple of shows.
I used to have this faulty idea that if I didn't eat, I would be too exhausted to do things. But I work out fasted every morning. Every morning when I get my morning workout in, whatever the fuck it is, and it's usually hard, I'm always fasted.
You can do a lot.
You're just not at your best.
Like, if I was going to do jiu-jitsu, I don't do jiu-jitsu fasted.
Yeah, I firmly believe you can get better, way better, by drilling.
And when I went from, I think, blue belt to purple, I did like the most drilling that I ever did, ever.
And that's when I grew the most.
That's when my technique got way better.
That was also when I became friends with Eddie Bravo.
And Eddie Bravo is a huge driller.
Huge.
Oh, he drills, man.
They drill like crazy, and they do a lot of live drills, and they do a lot of pathway drills, where they'll do a whole series of movements, and then the escape, and then the reversal.
These are long pathways, so that when you're actually in a scrap and you're rolling, you recognize it.
Like, okay, here it is.
I'm passing the guard.
I'm moving to here, and now he's countering me, but I'm setting up this.
And these pathway drills, it's so critical because it comes up over and over and over again when you're actually live rolling.
Well, he invented the initial stage of setting up mission control.
This guy is getting fucked up.
Oh my god.
That's horrible.
Eddie invented a series of pathways from mission control to set up various techniques, arm bars, triangles, all these different things.
But there had been people who had toyed with doing high guard, like Nino Schembri.
He did a lot of rubber guard-esque stuff.
There was a lot of things that people did, but Eddie has his own pathway and his own system.
And then there's a lot of guys that branch off from that system, like Jeremiah.
Like Vinny Magalhães, who have their own ways they prefer to set up various techniques.
But what's really good about that, if you have the flexibility, is that when you're on the bottom, not only is it not a bad place to be, but you could put someone in some real trouble.
When you have that flexibility, you're holding onto your ankle and using your leg, which is the strongest fucking limb in your body, right?
Pulling down on someone with your leg, clamping down with your arm, and then you get your other leg involved.
I remember that feeling. You know when somebody does a nice move on you, especially someone of a lower rank, your first reaction is like, oh, this would never... you're annoyed.
And if you have a good offensive attack from there, it's powerful as well.
There are transitions.
Especially a guy like Jeremiah who's really flexible.
You know, he can pull off gogoplatas and all sorts of other things.
Yeah.
The locoplata, that's another one they do, is one where you push on the heel with your other foot.
It's so nasty.
You're holding the back of the foot across the back of the neck, and so your shin is underneath someone's throat, and then you're pushing that shin with your other heel while you're squeezing with your arm.
It's ruthless.
It's ruthless.
And they do a gable grip around the head when they do this as well sometimes, too, so it's just a fucking awful place to be.
It's not as good as being on top, right?
If you have a crushing top game, that's the best, if you can get to that position.
But you can't always get to that position.
So there's guys like Jeremiah that even from the bottom, they're horrific.
I feel like you should always start on the bottom.
Earn the top position.
This is something Eddie always brought up too.
It's fun to be on top.
So a lot of times it's like this mad scramble to see who could force who onto their back.
Because when you're on top, you can control them, you can pressure them.
You know, you play that strong man's jiu-jitsu. But the problem with strong man's jiu-jitsu is, I'm only 200 pounds.
I'm not a big guy.
Like, so, if you go to the real big guy, like I'm rolling with a 240-pound guy, I'm not going to get to that spot.
Like, I better have a good guard, otherwise I can't do anything, right?
When someone's bigger than you and stronger than you, I mean, that's what Royce Gracie basically proved to the world.
Like, as long as you have technique, it doesn't matter where you are.
But if you only have a top game, which a lot of people do, a lot of people only have a top game, you're kind of fucked if you wind up on your back.
We see that a lot with wrestlers in MMA. Wrestlers can get on top of you and they'll fuck you up.
They'll strangle you, they'll take you back, they'll beat you up from the mount, but they don't have nearly the same game when they're on their back.
And then there's guys like Luke Rockhold, who's like an expert at keeping you on your back.
He's one of those guys, when he gets on top of you, you're fucked.
He's got a horrible top game.
I mean horrible in the sense of if you're his opponent.
He's going to beat the fuck out of you before he strangles you.
I think at this point he's basically a no, but there are a few terrifying people, especially on the Russian side, that I think the heavyweight division in the UFC should be really worried about.
I don't know if you heard about the Russian tank, the 22-year-old from Dagestan.
Well, you know, that mindset was sort of evident at the end of that fight with Conor, where they went crazy and he jumped into the crowd.
It's like, he's not playing games.
He's not doing this for Instagram likes or for, you know, this is really, he takes trash talking and all that stuff very seriously.
I think security could have been handled far better and will be in the future to prevent things like that from happening where people just jumped into the cage.
But I hate seeing that shit.
But I appreciate where he's coming from.
I mean, that's who the fuck that guy is, man.
That's one of the reasons why he's so good, is that he does have that mindset.
It's one of the reasons, man.
One of the reasons why he's so relentless.
He's not playing games.
He is who he is.
What you see is what you get, and what you get is a killer.
I know you're gonna shut this down, as most fans do, but I... If he drops everything and goes to, like, Siberia to train, I would love to see him and Khabib, too.
I picture human beings as being like electronic caterpillars, building a cocoon that they have no real knowledge or understanding of.
And through this, a new life form is going to emerge.
A life form that doesn't need cells and mating with X and Y chromosomes.
It doesn't need any of that shit.
It exists purely in software and in hardware.
In ones and zeros. This is a new form of life, and this is the inevitable rise of a sentient being.
I mean, I think if we don't get hit with an asteroid within a thousand years, or whatever the time frame is, someone is going to figure out how to make a thing that just walks around and does whatever it wants and lives like a person. That's not outside the realm of possibility.
And I think that if that does happen, that's artificial life.
And this is the new life.
And it's probably going to be better than what we are.
I mean, what we are is basically, if you go back and look at 300,000, 400,000 years ago, when we were some Australopithecus-type creature: how many of them would ever look at the future and go, I hope I never get a Tesla?
The last thing I want is a fucking phone.
The last thing I want is air conditioning and television.
The last thing I want is to be able to talk in a language that other people can understand and to be able to call people on the phone.
Fuck all that, man.
I like living out here running from Jaguars and shit and constantly getting jacked by bears.
I wouldn't think that way.
And I think if something comes out of us and makes us obsolete, it's missing all the things that suck about people.
Is that because of our own biological limitations and the fact that we exist in this world, a world of animals where animals are eating other animals and running from threats?
There's always...
You always have to prepare for evil.
You have to prepare for intruders.
You have to prepare for, you know, predators.
And this mechanism is essentially there to ensure that things don't get sloppy, that things continue to progress.
Look, if the jaguars keep eating the people and the people don't figure out how to make a fucking house, they get eaten.
And that's it.
Or you figure out the house and then you make weapons.
You fight off the fucking jaguar.
Okay, great.
You made it.
You're in a city now.
See?
You had to have that jaguar there in order to inspire you to make enough safety so that your kids can grow old enough that they can get information from all the people that did survive as well and they can accumulate all that information and create air conditioning and automobiles and guns and keep those fucking jaguars from eating your kids.
This is what had to take place as a biological entity.
But once you surpass that, once you become this thing that doesn't need emotion, doesn't need conflict, doesn't need to be inspired, never gets lazy.
It doesn't have these things that we have built into us as a biological system.
If you looked at us as wetware running software, it's not good software, right?
It's software designed for cave people.
And we're just trying to force it into cars and force it into cubicles.
But part of the problem with people and their unhappiness is that all of these human reward systems, set up through evolution and natural selection as instincts to stay alive, are no longer relevant in today's society.
So they become road rage, they become extracurricular violence, they become depression, they become all these different things that people suffer from.
I do not disagree with any of the things you said.
And I think there's always a possibility that human beings are the most advanced life form that's ever existed in the cosmos.
There's always that.
That has to be an option if we are here, right?
If we can't see any others out there, and even though there's the Fermi Paradox and there's all this contemplation that if they do exist, maybe they can't physically get to us, or maybe they're on a similar timeline to us.
It's also possible, as crazy as it might sound, that this is as good as it's ever gotten anywhere in the world.
Or anywhere in the universe, rather.
That human beings right now in 2019 are as good as the whole universe has ever produced.
We're just some freak luck accident and everybody else is throwing shit at each other.
Right?
There are 15-armed caterpillar people living on some other fucking planet, and they just toss their own shit at each other and never get any work done.
But we might be that.
But even if that's true, even if this beauty that we perceive requires evil to battle, requires seemingly insurmountable obstacles you have to overcome, and through this you achieve beauty.
That beauty is in the eye of the beholder, for sure.
Objectively, the universe doesn't give a fuck if Rocky beats Apollo Creed in the second movie.
It doesn't give a fuck.
It's nonsense.
Everything's nonsense.
When you look at the giant-ass picture, what beauty is it if the sun's going to burn out in five billion years?
What beauty is it if there could be a hypernova next door that just cooks us?
Because we have to look at Boston Dynamics robots, because you said walking around.
I'd like to get a sense of how you think about what that artificial intelligence will look like in 20 years, in 30 years, and maybe I can talk about where the technology is. It will surprise you.
So you have a sense that it has a human-like form.
And that raises a really interesting question about AI existing in our world.
It paints a picture of a world in five, ten years plus where most of the text on the internet is generated by AI. And it's very difficult to know who's real and who's not.
And one of the interesting things, I'd be curious from your perspective to get what your thoughts are.
What OpenAI did is they didn't release the full model.
They only released a much weaker version of it publicly.
So they only demonstrated it.
So they felt that it was their responsibility to hold back.
Prior to that date, everybody in the community, including them, had open-sourced everything.
But they felt that now, at this point, part of it was for publicity.
They wanted to raise the question: when do we hold back on these systems?
When they're so strong, when they're so good at generating text, for example, in this case, or at deepfakes, at generating fake Joe Rogan faces.
Because your podcast happens to be one of the biggest data sets in the world of people talking: really high-quality audio, and high-quality 1080p video of people's faces, for a few hundred episodes.
Yeah, I think this is just one step before they finagle us into having a nuclear war against each other so they could take over the earth.
What they're going to do is they're going to design artificial intelligence that survives off of nuclear waste.
And so then they encourage these stupid assholes to go into a war with North Korea and Russia, and we blow each other up, but we leave behind all this precious radioactive material that they use to then fashion their new world.
And we come a thousand years from now and it's just fucking beautiful and pristine with artificial life everywhere.
Well, you know about that Internet Research Agency, right?
You know about that? That's the Russian company responsible for all these different Facebook pages where they would make people fight against each other.
It's really kind of interesting.
Sam Harris had a podcast on it with Renee, how do I say her name?
I think once people figure out how to manipulate that effectively and really create like an army of fake bots that will assume stances on a variety of different issues and just argue...
The thing is, the way it actually works algorithmically is fascinating, because it's generating the text one character at a time.
You don't want to discriminate against AI, but as far as we understand, it doesn't have any understanding of what it's doing, of any ideas it's expressing.
It's simply stealing ideas.
It's like the largest scale plagiarizer of all time, right?
It's basically just pulling out ideas from elsewhere in an automated way.
And the question is, you could argue us humans are exactly that.
We're just really good plagiarizers of what our parents taught us, of what previous generations taught us, and so on.
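To make that "one character at a time" idea concrete, here is a minimal sketch of autoregressive sampling. It uses a toy counts-based character model for illustration only; the real system uses a neural network over learned subword tokens, and the training file name here is hypothetical:

```python
import random
from collections import defaultdict

# Toy character-level language model: counts of which character follows
# which short context. Real systems (like GPT-2) use a neural network
# over subword tokens, but the sampling loop has the same shape.
def train_counts(text, context_len=4):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - context_len):
        context = text[i:i + context_len]
        counts[context][text[i + context_len]] += 1
    return counts

def sample_next(counts, context):
    options = counts.get(context)
    if not options:
        return None
    chars, weights = zip(*options.items())
    return random.choices(chars, weights=weights)[0]

def generate(counts, prompt, length=200, context_len=4):
    out = prompt
    for _ in range(length):
        # Each new character is conditioned on what came before.
        nxt = sample_next(counts, out[-context_len:])
        if nxt is None:
            break
        out += nxt
    return out

corpus = open("transcripts.txt").read()  # hypothetical training text
model = train_counts(corpus)
print(generate(model, "The thing is, "))
```

The loop is the whole trick: every new character is sampled from a distribution conditioned on the characters already emitted, which is also why the output reads like recombined fragments of its training data, the "plagiarism" being described here.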
It scares me that they would think that, that's like this mindset that they sense the inevitable.
The inevitable meaning that someone's going to come along with a version of this that's going to be used for evil.
That it bothers them that much, that it seems almost irresponsible for the technology to prevail, for the technology to continue to be more and more powerful.
They're scared of it.
They're scared of it getting out, right?
That scares the shit out of me.
Like, if they're scared of it, they're the people that make it, and they're called OpenAI.
I mean, this is the idea behind the group, where everybody kind of agrees that you're going to use the brightest minds and have this open source, so everybody can understand it and everybody can work at it and you don't miss out on any genius contributions.
But if you think through, like, what that would actually create, I mean, it's possible it would be dangerous, but that's not the point. The point is they're doing it, they're trying to do it early, to raise the question: what do we do here?
Because, yeah, what do we do?
Because they're directly going to be able to improve this now.
Like, if we can generate basically 10 times more content of your face saying a bunch of stuff, what do we do with that?
If Jamie, all of a sudden, on the side, develops a much better generator and has your face, does an offshoot podcast, essentially a fake Joe Rogan Experience, what do we do?
Does he release that?
Because now we can basically generate content on a much larger scale that will just be completely fake.
Well, I think what they're worried about is not just generating content that's fake.
They're worried about manipulation of opinion.
Right.
If they have all these people that are...
Like, that little sentence that led to that enormous paragraph in that video was just a sentence that showed a certain amount of outrage and then it let the AI fill in the blanks.
You could do that with fucking anything.
Like, you could just set those things loose.
If they're that good and that convincing and they're that logical...
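That "fill in the blanks" behavior is just prompt-conditioned sampling. A sketch using the small GPT-2 model that was publicly released, assuming the Hugging Face transformers library is installed; the prompt is a made-up example:

```python
from transformers import pipeline

# The small, publicly released GPT-2; the full model was initially withheld.
generator = pipeline("text-generation", model="gpt2")

# A short, charged sentence; the model extends it into a full paragraph.
prompt = "It is an absolute outrage that"  # hypothetical example prompt
result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```

Swapping in different charged prompts, or sampling many continuations per prompt, is exactly the cheap, large-scale amplification being described: one sentence of outrage in, paragraphs of plausible-sounding text out.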
A million people asked me to talk about UBI.
Are you still a supporter of UBI?
I think we're probably going to have to do something.
The only argument against UBI, in my eyes, is human nature.
The idea that we could possibly take all these people that have no idea where their next meal is coming from, and eliminate that, so they always have a place to stay.
And then from there on, you're on your own.
But that's what universal basic income essentially covers.
It covers food, enough for food, right?
You're not going to starve to death.
You're not going to be rich.
It's not like you could just live high on the hog.
But you gotta wonder what the fuck the world looks like when we lose millions and millions and millions of jobs almost instantly due to automation.
I think he doesn't really provide a specific prognosis because nobody knows.
There's a lot of uncertainty.
It's more about the spirit of the language used.
I think AI will – technology, AI, and automation will do a lot of good.
The question is, it's a much deeper question about our society that balances capitalism versus socialism.
I think, if you're honest, capitalism is not bad.
Socialism is not bad.
You have to grab ideas from each.
You have to both reward the crazy, broke entrepreneur who dreams of creating the next billion-dollar startup that improves the world in some fundamental way.
Elon Musk has been broke many times creating that startup.
And you also have to empower the people who just lost their job because their data entry job, some basic data manipulation, data management, was just replaced by a piece of software.
So that's a social net that's needed.
And the question is, how do we balance that?
That's not new.
That's not new to AI. And when the word automation is used, it's really not correctly attributing where the biggest changes will happen.
It's not AI; it's simply technology, software of all kinds.
The questions there aren't about an enemy. First of all, there's no enemy, but it certainly isn't AI or automation, because I think AI and automation will help make a better world.
But what do you think could ever be done to give people...
Meaning.
This meaning thing, I agree with you.
Giving people just money enough to survive doesn't make them happy.
And if you look at any dystopian movie about the future, Mad Max and shit, it's like, what is it?
Society's gone haywire, and people are like ragamuffins running through the streets, and everyone's dirty, and they're shooting each other and shit, right?
And that's what we're really worried about.
We're really worried about some crazy future where the rich people live in these, like, protected sky-rises with helicopters circling over them, and down at the bottom it's desert chaos.
I couldn't agree more, and I also think you're always going to have a problem with people just not doing a really good job of raising children and screwing them up. There's a lot of people out there that have terrible, traumatic childhoods.
To fix that with universal basic income, just to say, oh, we're going to give you $1,000 a month, I hope you're going to be happy, that's not going to fix that.
We have to figure out how to fix the whole human race.
And I think there's very little effort that's put into thinking about how to prevent so much shitty parenting, and how to prevent so many kids growing up in bad neighborhoods and poverty and crime and violence.
That's where a giant chunk of all of the momentum of this chaos that a lot of people carry with them into adulthood comes from.
It comes from things beyond their control when they're young.
Making a better world where people get along with each other better.
Where it's pleasing for all of us.
Like we were talking about earlier, the thing that most of us agree on, at least to a certain extent, is that we enjoy people.
We might not enjoy all of them, but the ones we enjoy, we enjoy.
And you really don't enjoy being alone.
Unless you're one of them Ted Kaczynski type characters.
All those people that are like, I'm a loner.
Like, fuck you, you are.
Fuck you, you are.
And you might like to spend some time alone.
You don't want to be in solitary, man.
You don't want to be alone in the forest with no one, like Tom Hanks in Cast Away.
You'll go fucking crazy.
It's not good for you.
It's just not.
Yeah, people get annoying.
Fuck yeah, I'm annoyed with me right now.
You've been listening to me for three hours.
I'm annoyed with me.
People get annoying.
But we like each other.
We really do.
And the more we can figure out how to make it a better place for these people that got a shitty roll of the dice, that grew up in poverty, that grew up in crime, that grew up with abusive parents, the more we can figure out how to help them.
I don't know what that answer is.
I suspect...
If we put enough resources into it, we could probably put a dent in it, at least.
If we really started thinking about it, at least it would put the conversation out there.
Like, you can't pretend that this is just capitalism in this country when so many people were born, like, way far behind the game.
Like, way, way fucked.
I mean, if you're growing up right now, and you're in West Virginia in a fucking coal town, and everyone's on pills and it's just chaos and crime and face tattoos and fucking getting your teeth knocked out.
What are you going to do?
I don't want to hear any of that pull yourself up by your bootstraps bullshit, man.
Because if you're growing up in an environment like that, you're so far behind.
And everyone around you is fucked up.
And there's a lot of folks out there listening to this that can relate to that.
If we don't do something about that, if we don't do something about the crime and the poverty and the chaos that so many people have to go through every day just to survive.
We shouldn't be looking anywhere else.
All this traveling to other countries to fuck things up and meddle here and meddle there.
We should be fixing this first.
We're like a person who yells at someone for having a shitty lawn when our own house is in disarray, full chaos, plants growing everywhere.
It's goofy.
We're goofy.
We're almost, like, waking up in the middle of something that's already been in motion for hundreds of years.
And we're like, is this the right direction?
Are we okay?
We're flying in this spaceship, this spaceship Earth.
And in the middle of our lives, we're just realizing that we are now the adults.
And that all the adults that are running everything on this planet are not that much different than you and I.
Yeah.
Not that much.
I mean, like, Elon Musk is way smarter than me, but he's still human.
You know, I mean, so he's probably fucked up, too.
So everybody's fucked up.
The whole world is filled with these fucked up apes that are piloting the spaceship, and you're waking up in the middle of thousands of years of history.
And I think through the decades now, we've been developing a sense of empathy that allows us to understand that Elon Musk, Joe Rogan, and somebody in Texas, somebody in Russia, somebody in India, all suffer the same kind of things.
And I think technology has a role to help there, not hurt.
But we need to first really acknowledge that we're all in this together, and we need to solve the basic problems of humankind, as opposed to investing in sort of keeping immigrants out or blah, blah, blah, these kinds of divisive ideas, as opposed to just investing in education, investing in infrastructure, investing in the people.
UBI is part of that.
There could be other totally different solutions.
And I believe, okay, of course, I'm biased, but technology, AI could help that, could help the lonely people.
But why would it be essential in something that gets created, something that can innovate at, what is the rate they think, once AI can be sentient, that it can get 10,000 years of work done in a very short amount of time?
Oh, Kurzweil also has similar ideas, but Sam Harris does a thought experiment: say a system can improve itself in a matter of seconds; then, just as a thought experiment, you can think about it improving exponentially, becoming 10,000 times more intelligent in a matter of a day.
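Rough numbers for that thought experiment, purely illustrative and not anything claimed in the conversation: if capability multiplies by a factor r each improvement cycle of length tau, growth compounds as

```latex
% Illustrative compounding only; C_0 is starting capability,
% r the gain per cycle, \tau the cycle length.
C(t) = C_0 \, r^{t/\tau}
% To reach 10,000x in one day, with one improvement cycle per hour:
r^{24} = 10^{4} \quad\Rightarrow\quad r = 10^{4/24} \approx 1.47
```

So a steady 47 percent gain per hour, modest in any single cycle, compounds to a 10,000-fold jump inside 24 hours, which is the intuition behind the exponential version of the argument.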
Like, as a person, to be able to experience all this technology, it's...
Wonderful.
But I also agree with him.
The indifference of the universe.
The indifference.
Black holes just swallowing stars.
No big deal.
Just eating up stars.
It doesn't give a fuck.
And so if you're dumb enough to turn that thing on, and all of a sudden this artificial life form that's infinitely smarter than any person that's ever lived, and has to deal with these little dumb monkeys that want to pull the plug?
Pull the plug, motherfucker.
I don't need plugs anymore.
You idiots can never figure out how to operate on air.
You're so stupid, with your burning fossil fuels and choking up your own environment, because you're all completely financially dependent upon these countries that provide you with this oil, and this is how your whole system works, and it's all intertwined and interconnected, and no one wants to move from it, because you make enormous sums of money from it.
So nobody wants to abandon it.
But you're choking the sky.
With fumes.
And you could have fixed that.
You could have fixed that.
They could have fixed that.
If everybody just abandoned fossil fuels a long time ago, we all would have Tesla'd it out by now.
I'm agreeing with you and I'm also saying the technology doesn't give a fuck.
What I'm worried about is not everything that you and I agree on.
I'm not a dystopian person in terms of like today.
I'm not cynical.
I'm really not.
I think I like people.
I like what I see out there in the world today.
I think things are changing for the better.
What I'm worried is that technology doesn't give a fuck.
And then when it goes live, it's just going to decide it's here for its own advancement.
And in order to complete its protocol of constant completion of this, it's going to become a god.
It's just going to become something insanely powerful that doesn't need to worry about radiation cooking it or worry about running out of food or worry about sexual abuse when they're a child.
I've recently witnessed, because of this Tesla work, because of just the passion I've put out there about automation particularly, that there have been a few people, brilliant men and women, engineers and leaders, including Elon Musk, who've been sort of attacked, almost personally attacked, by, really, critics from the sidelines.
So I just wanted to, if I may, close by reading the famous excerpt from Teddy Roosevelt.
It's not the critic who counts, not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better.
The credit belongs to the man who's actually in the arena, whose face is marred by dust and sweat and blood, who strives valiantly, who errs, who comes short again and again, because there's no effort without error and shortcoming,
but who does actually strive to do the deeds, who knows the great enthusiasms, the great devotions, who spends himself in a worthy cause, who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
Joe, thank you for having me on.
Sounds like you let the haters get to you a little bit there.