Oct. 7, 2024 - PBD - Patrick Bet-David
40:31
"AI Cults Forming" - Max Tegmark on China Running By AI, Non-Human Governments, and Global Control

Max Tegmark, a renowned MIT physicist, dives into the future of AI with Patrick Bet-David, discussing the profound impacts of AI and robotics on society, the military, and global regulation. Tegmark warns of the risks and potential of AI, emphasizing the need for global safety standards.

📰 VTNEWS.AI: https://bit.ly/3Zn2Moj
👕 VT "2024 ELECTION COLLECTION": https://bit.ly/3XD7Bsm
📕 PBD'S BOOK "THE ACADEMY": https://bit.ly/3XC5ftN
🎙️ FOLLOW THE PODCAST ON SPOTIFY: https://bit.ly/3ze3RUM
🎙️ FOLLOW THE PODCAST ON ITUNES: https://bit.ly/47iOGGx
🎙️ FOLLOW THE PODCAST ON ALL PLATFORMS: https://bit.ly/4e0FgCe
📱 CONNECT ON MINNECT: https://bit.ly/3MGK5EE
📕 CHOOSE YOUR ENEMIES WISELY: https://bit.ly/3XnEpo0
👔 BET-DAVID CONSULTING: https://bit.ly/4d5nYlU
🎓 VALUETAINMENT UNIVERSITY: https://bit.ly/3XC8L7k
📺 JOIN THE CHANNEL: https://bit.ly/3XjSSRK
💬 TEXT US: Text "PODCAST" to 310-340-1132 to get the latest updates in real-time!

SUBSCRIBE TO: @VALUETAINMENT @vtsoscast @ValuetainmentComedy @bizdocpodcast @theunusualsuspectspodcast

ABOUT US: Patrick Bet-David is the founder and CEO of Valuetainment Media. He is the author of the #1 Wall Street Journal Bestseller "Your Next Five Moves" (Simon & Schuster) and a father of 2 boys and 2 girls. He currently resides in Ft. Lauderdale, Florida.


Do you worry that maybe a guy who's got a lot of money builds an army of 200,000 robots that'll be stronger than the military that we have?
Absolutely.
I was at a conference recently where a guy who had a lot of money was talking about building 3 billion such robots.
Capable of doing what?
Everything we can do but better.
And this was not Elon Musk.
Warren Buffett said you have to kind of look at AI like the nuclear bomb, you know, like atomic bombs.
That's also the fear, because the fear is like, let's not accelerate AI and robots in our country, but somebody else does, and then our military falls behind.
So then what happens?
People who don't use AI get replaced by people who do.
If a Chinese company builds superintelligence, after that, China will not be run by the Communist Party.
It'll be run by the superintelligence.
Don't think 10 years.
Think of that over the next two years.
Crazy things are going to happen.
The technology is here to stay.
And it's going to blow our minds.
Are you for it or against it, Patrick?
Maybe, maybe not.
One dirty secret, we have no idea really how it works.
And if we do this right with AI and use artificial intelligence to amplify our intelligence, to bring out the best in humanity, humanity can flourish.
So in other words, you believe the future looks bright.
Thank you so much again.
Appreciate you.
Yes, we're going to see.
Okay, so I'm going to get right into it, guys.
I think we have to talk about a soft, subtle subject with AI.
Do you worry that maybe a guy who's got a lot of money builds an army of 200,000 robots that'll be stronger than the military that we have?
Do you worry about that at all?
Absolutely.
I was at a conference recently where a guy who had a lot of money was talking about building 3 billion such robots.
Building 3 billion robots.
Yeah.
Capable of doing what?
Everything we can do but better.
And this was not Elon Musk.
Who was this guy?
This was one of those conferences held under Chatham House rules, where you're not allowed to say who said what.
Is it a person that we would know or not really?
Yeah, you'd know.
I mean, it's only a few people that can afford 3 billion robots.
So what happens?
Let's just say... because, you know, Warren Buffett said you have to kind of look at AI like the nuclear bomb, you know, like atomic bombs.
So he's a little bit more afraid of what AI is going to be doing.
But how concerned should we be about something like that?
Because the risk has many different facets.
I wrote a few things down.
One is war.
How many of you guys fear what could happen with AI and war?
Military, robots.
Okay, a lot of people.
Next one I wrote, humanity, right?
When Elon talks about how he's a humanist, right?
He wants to make sure he can protect humanity.
Business disruption, pharma disruption, you know, criminal justice, you know, using AI, like that movie, Minority Report.
By the way, every one of these things, there's a movie for it.
War: The Creator. I don't know if you've seen The Creator; I thought it was a great movie, just came out.
Or I, Robot. Humanity: The Matrix, or even the movie Her.
Have you guys seen that terrible movie Her, with Joaquin Phoenix, which was so weird that a lot of people liked?
I wasn't one of them.
Can I defend Her just very briefly, though?
I cringed almost always when I watched Her.
I knew we were going to get into a fight after two minutes.
But one really nice redeeming feature of Her was it didn't have robots in it.
And I think that was a really important message.
You know, it's so easy to immediately put an equal sign between scary AI and robots.
And Her shows how just intelligence itself can really give a lot of power too.
Are you married, Max?
Yes.
You're married.
Okay.
So if you're sitting locked in a room and you're really smart and you can't go out and touch the world, but you're connected to the internet, you can still make a ton of money.
You can hire people.
You can do all your job interviews over Zoom.
And if some super intelligent AI starts doing that, it can start having a lot of impact in the world.
Positive or negative?
That's the thing.
When I gave the definition of intelligence as the ability to accomplish goals, I didn't say anything about whether those were good goals or bad goals.
And intelligence, in that sense, is a tool.
It's a tool that gives you superpowers for accomplishing goals.
And tools and tech, they're not morally good or morally evil.
Now, like, if I ask you, what about fire?
Are you for it or against it, Patrick?
I'm for it.
You're probably for it.
I'm against who uses it.
But you're probably not against using fire for making an awesome barbecue.
You probably are against using fire for arson, right?
So whenever we invent some new tech, we also try to invent an incentive structure to make sure people use that tech for good stuff, not for bad stuff.
And it's no different with AI.
We have to, if we build these powerful things, make sure that we have all the incentives in place to make sure that people use them for the good stuff, not for the bad stuff.
How do you do that, though?
Like this guy that you're talking about who wants to build 3 billion robots, okay?
Do you think he's a good guy?
I'm not asking for a name, but do you think he's a good guy?
I think he thinks he's a good guy.
But, you know, I'm pretty sure Stalin also thought he was a good guy.
Fair enough.
Okay.
So now, so, you know, do you, in your mind, at all see a future where robots don't exist?
Or is it no?
Listen, we have to accept the fact that within the next whatever amount of time, we're going to be coming to places where robots are going to be doing customer service, robots are going to be in the military, cops are going to be robots.
I'm going to go to a restaurant and place an order with a robot.
Do you see that as a near future that that's going to be happening whether we like it or not?
Yeah, but I loved what you said there earlier this morning about dreams.
And my dream is that we build really exciting AI, including a fair number of robots, and not just plop them into society randomly and see what people choose to do with them, but rather use a lot of wisdom to guide their use.
You asked how we can influence how people use technology, right?
So how do we influence how people use fire, for example?
First of all, we have social incentives.
If you get a reputation as the guy who always burns down people's houses, you're going to stop getting invited to parties.
And we also have a legal system we invented for that very reason.
So if you do that and you get caught, you get maybe a couple of years to think it over on very boring food.
And when we make these technologies, if they are technologies that can be used to cause a lot of harm, we want to make them such that it's basically impossible to do that.
There was a guy named Andreas Lubitz, for example, who was very depressed.
And he crashed his Germanwings aircraft into the Alps and killed over 100 people.
You know how he did it?
He just told the AI autopilot to change the altitude from 30,000 feet to 300 feet over the Alps.
And you know what the AI said?
Okay.
It's like, what engineer puts that in there?
When we educate our kids, we don't just teach them to be powerful and do stuff.
We also teach them right from wrong.
We have to make sure that every autopilot in an aircraft will just refuse crazy orders like that.
If you put good AI in a self-driving car and the driver tries to accelerate into a pedestrian, the car should refuse to do that.
So there's a lot of technical solutions like this where you just make it impossible for random nutcases to do harm.
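To make the idea concrete, here's a minimal sketch of what such a refusal check might look like. Everything here is invented for illustration: the function name, the thresholds, and the margins are assumptions, not any real avionics interface.

```python
# Hypothetical sketch, not a real avionics API: a validator that refuses
# obviously dangerous altitude orders instead of blindly obeying them.

MIN_TERRAIN_MARGIN_FT = 2_000   # invented safety margin above terrain
MAX_DESCENT_FRACTION = 0.10     # invented limit per 15-second command step

def validate_altitude_command(current_ft: float, requested_ft: float,
                              terrain_elevation_ft: float) -> bool:
    """Return True only if the requested altitude change looks safe."""
    # Refuse any target near or below the terrain under the aircraft.
    if requested_ft < terrain_elevation_ft + MIN_TERRAIN_MARGIN_FT:
        return False
    # Refuse implausibly steep descents (e.g., 30,000 ft straight to 300 ft).
    if requested_ft < current_ft * (1 - MAX_DESCENT_FRACTION):
        return False
    return True

# The Germanwings-style order gets rejected rather than executed:
print(validate_altitude_command(30_000, 300, terrain_elevation_ft=12_000))     # False
# A normal step-down descent passes:
print(validate_altitude_command(30_000, 27_500, terrain_elevation_ft=12_000))  # True
```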
And another big success story, I think, is basically every other technology that could cause harm, we have safety standards.
That's why we have the Food and Drug Administration.
If someone says, hey, I have this new wonder drug that's going to cure all the cancers, the FDA is going to be like, okay, well, where is your clinical trial?
Show us that the benefits outweigh the harms.
And until then, you can't sell it.
We do the same thing for aircraft and for cars.
It makes sense to do that with AI systems that could cause massive harm.
That way, all the wonderful products we're going to get are going to be safe, kind of by design.
And we've given incentives to the companies to really have a race to the top and make safe AI because whoever does that first, they're the one who gets the market share.
Yeah, but who regulates it?
So to create that, you know, the 30,000 to 300: maybe the guy just typed 3-0-0 when he wanted to do 3,000 and forgot a zero, and it went from 30,000 to 300.
Yeah.
Or maybe, like, you know, a rule that it can't descend more than 10% in a 15-second increment.
Okay, that's technology.
I get it.
But what I'm asking right now is, so this guy that wants to build 3 billion robots with today's regulation.
He really got to you.
Did he not get to you?
Yes or no?
Everybody, are you kidding me?
I bet he got to everybody.
But so this guy that wants to build 3 billion robots, what regulation do we have right now to prevent him from being able to do that?
Nothing, basically.
But this is actually changing.
In all other areas, there was also a lot of resistance to regulation.
When scientists and engineers started saying, hey, let's put seatbelts in cars, the auto industry was dead against it.
They're like, no, that's going to kill the car market or whatever.
So they passed the seatbelt law in the US anyway.
And did it kill the car industry?
No, the amazing thing is that car sales skyrocketed after that because people started to realize that driving can be really safe and they bought more cars.
So we similarly just need to get past this knee-jerk reaction from some tech folks that they're different from all other technology and should be forever unregulated.
There's a big food fight in California now, some of you might have followed.
There's this law called SB 1047, which was just passed by the California Assembly.
It's very light touch.
It says stuff like, well, if your company causes more than half a billion dollars in damages or some sort of mass casualty event where a lot of people die, you should be liable.
If they had a law like that for airplanes, people wouldn't bat an eye.
But you have all these, you have a lot of people now taking to Twitter saying, this will destroy American AI industry, whatever.
If we just treat AI like other industries, we have safety standards, here they are, level playing field, free markets.
Once you meet the standards, you can make money.
Then we'll be in very good shape.
The European Union already passed an AI law last year.
China passed the AI law.
What's European Union's AI law?
Huh?
What's it called?
It's called the EU AI Act.
It's very similar to product safety laws for medicines, for cars, stuff like that.
And as soon as we get something like that in the U.S., first maybe in states like California and then federally, I think we'll be in a much better place.
And then someone can't just come along and build three billion robots without first having to demonstrate that they meet safety standards.
And you don't want to sell robots where the owner of the robot can be like, hey, here's a photo of this guy, you know, go kill him.
This guy that wants to build three billion robots, is he an American?
Yeah.
He's an American.
Yeah.
Who are the top 10 richest people in America?
By the way, I'm not going to ask you who, because I would never question you like that.
That's not my style.
Top 10 richest.
So you got, let's use the process of elimination.
Let's kind of go through.
Let's have some fun with this.
And listen, if we're going to have some, we may as well have some fun here.
All right, so here we go.
Elon Musk.
He said it's not Elon Musk.
Could it be Jeff Bezos?
Is that why you're putting these bright spotlights in my face here?
Could it be Mark Zuckerberg?
Could it be Mark Zuckerberg?
Look, I think it's not about individual people.
It's really about creating the right incentives for all entrepreneurs, so they realize that the way they get rich is...
By the way, I don't want to make you feel uncomfortable.
So you got Ellison, I don't think it's Buffett, Gates, Ballmer likes basketball, Page, Sergey Brin.
Well, let's talk about those guys.
So here's what the Google guys said, if I'm not mistaken.
Larry Page wants an AI God, is what he said.
Okay?
And he's got the money to do it.
And he called people who are against that.
He used a word, speciesist, which is a derogatory term, if I'm not mistaken.
Yeah, I was actually right there when he said that famous quote.
I'm the first-hand witness.
I think it got out because I wrote about it in my book.
So this is not bullshit.
He actually said it.
So what do you think about Larry Page calling us speciesists?
Look, I don't judge people.
We do.
This environment's very judgmental.
We just are on top of that.
Everybody's entitled to their own dreams.
My dream is that my children, and your children, will have a wonderful future, a future where they don't need to compete against machines.
Rather, one where AI and other machines make their lives more enjoyable, better, more meaningful.
Larry might say that now I'm a speciesist, that I should feel sorry for the machines or whatever.
But hey, we're building them.
They wouldn't exist if it weren't for us.
Why shouldn't we exercise that influence we have to make it something good?
I'm a very ambitious guy, and you are too, and I really respect that.
So to me, it's utterly unambitious if we're like, well, you know, we think we're so lame that the only thing we're gonna try to do is build our successor species as fast as possible.
Where's the ambition in that?
Why would we do something so dumb?
It's almost like Jonestown, some sort of suicide cult.
I love humanity.
I think humanity has incredible potential and I want to fight hard for us to realize it.
So I want us to build technology that gives us a future we and our children really want to live in.
Call me a speciesist.
That's my dream.
We're the same.
You and I are the same.
No question about that.
And I think a lot of people here are family folks are the same as well.
So in other words... yeah, let me actually ask all of you guys in the audience also.
So raise your hand if you're excited about us building more powerful AI and robots that can really empower us and help us flourish in the future.
Raise your hand.
Okay, now raise your hand if you're really excited about building AI, which will just replace us.
It's a little hard to see with the spotlight, but I don't see any hands at all.
So I think we're all on team human here.
And fellow speciesists.
Well, let me ask you a question.
Let me ask you a question.
So, for example, now, what if... because it's always, the government's going to say you can't build robots.
Okay?
But then.
No, no, no.
Push back.
The FDA doesn't say you can't develop new drugs.
They just say that in order to sell them, you just have to do the clinical trial and show that the benefits outweigh the harms.
Similarly, the government, the law would say, yeah, sure you can build robots, but before you can sell them, you just have to make sure they meet the safety standards you've all agreed on.
So for example, you can't sell a robot if it enables the owner to just go tell it to do terrorism or murder people for you, right?
That's a safety standard.
We can specify it.
And it's perfectly possible at the technical level, actually: just like we teach our children what they can do, what's good, what's bad, we can do that with machines also.
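As an illustration of what specifying such a standard at the technical level could look like, here's a minimal sketch of a command-screening layer. The intent categories and the keyword classifier are invented for illustration; a real system would need something far more robust than keyword matching.

```python
# Hypothetical sketch of the refusal layer described above: commands are
# screened against a harm policy before reaching the robot's actuators.

FORBIDDEN_INTENTS = {"harm_person", "destroy_property"}

def classify_intent(command: str) -> str:
    # Stand-in for a real intent classifier (e.g., a safety-tuned model).
    lowered = command.lower()
    if any(word in lowered for word in ("kill", "hurt", "attack")):
        return "harm_person"
    if any(phrase in lowered for phrase in ("burn down", "smash")):
        return "destroy_property"
    return "benign"

def execute(command: str) -> str:
    intent = classify_intent(command)
    if intent in FORBIDDEN_INTENTS:
        return f"Refused: '{command}' violates safety standard ({intent})"
    return f"Executing: {command}"

print(execute("water the plants"))               # Executing: water the plants
print(execute("go kill the guy in this photo"))  # Refused (harm_person)
```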
But the question I'm asking is the following.
Do you think Iran has a nuclear weapon?
They say they don't.
Do you think they have it?
Maybe.
Maybe, maybe not.
Okay.
How would we know that they don't?
Iran is a massive country, bunch of deserts.
How can we know they don't have a nuclear plant?
We don't, I think, know for sure.
But look at it.
It comes back to incentives again.
Suppose they do have a nuclear weapon.
Why haven't they nuked us?
Because they have an incentive not to, because then they would get nuked too, right?
And similarly, like, why do companies have an incentive to build airplanes that don't crash?
Because ultimately it's bad for them, right?
So that's the whole point, really, of having safety standards.
There didn't used to be an FDA, actually.
And this company, which I will not name, sold this drug called thalidomide and said it's great for mothers to take during pregnancy if you're feeling a bit stressed and have headaches.
And they didn't mention that there was early research suggesting it causes a lot of kids to be born without arms.
And they sold it.
They sold it.
It was a horrible tragedy.
And eventually they got shut down here.
And then, when it was banned in the US, they started selling it in Africa.
So that's what happens if you have the wrong incentives, right?
Whereas if you have safety standards that companies have to meet, then when they try to maximize their shareholder value, they're actually going to do the good things and not the bad things.
Yeah, but again, let me ask this maybe a last question and we'll transition to a different topic.
So we could have regulations for America that, hey, these are the standards you need to go through until you do XYZ.
However, in America, we allow lobbyists to buy up politicians.
So what if somebody's building a robot company and now spends massively on lobbying, a few billion a year, 10 billion a year, and they're able to get past certain laws that give them the leverage to continue growing?
And then, hey, don't come out with these laws.
And maybe even other countries, who don't have to abide by our rules and guidelines, have their own money and spend a bunch of resources, and we don't, and come 10, 20, 30 years from now, all of a sudden the military in another country is stronger than ours, and we've fallen behind.
That's also the fear, because the fear is like, let's not accelerate AI and robots in our country, but somebody else does, and then our military falls behind.
So then what happens?
You understand the concern I have?
Totally, totally, yeah.
So that's a very real concern.
AI can be very persuasive now, and it's getting more persuasive by the day.
This morning I was reading about some new AI cults that are forming on the internet and so on.
And so it's super important to have safety standards also for systems that are out there on the internet talking to people to make sure we understand what's going on there.
We're lucky in that we in the US have the strongest AI industry in the world.
And that gives us the opportunity to make sure that bad things aren't done with our tech.
And interestingly, we should do it regardless of the rest of the world for our sake.
And if you worry about China, for example, so Elon told me this really interesting story.
He said he was in a meeting with some top people from the Chinese Communist Party.
And he started talking to them about superintelligence.
And he said, you know, if a Chinese company builds superintelligence, after that, China will not be run by the Communist Party.
It'll be run by the superintelligence.
And Elon said that he got some really long faces, like they really hadn't thought that through.
And then within a month of that, they passed their new AI law.
So, you know, why would China want Chinese companies doing crazy shit?
So, you know, in other words, China, they put in place their own regulation on drugs, not to help us, but to protect Chinese consumers.
They are reining in their tech companies from doing crazy stuff because they want to stay in control.
So here again, you know, incentives, incentives, incentives, they kind of align.
Each company has an incentive to make sure that they don't lose control, that none of their companies lose control over their tech.
And then once that happens, at a grassroots level in individual countries, there's a very strong incentive now for different countries to talk to each other.
You know, the European FDA, the American FDA, and the Chinese one, they talk to each other all the time to harmonize the standards.
So someone who develops the medicine in the US can get quick approval elsewhere.
We have these AI safety institutes, which have just been created now in the last year.
America has one, England has one, China just started one.
These nerds are going to be talking to each other again, comparing notes.
And that's how we're going to eventually get some global coordination, too.
I want to add some optimism here, you know, because some people are so gloomy these days.
And they say, oh, you know, well, we're screwed because we don't have the same goals as China; we're never gonna get along.
Hey, you know, we were not best buddies with Brezhnev from the Soviet Union either.
We didn't have very aligned goals, but we still made all sorts of deals to prevent nuclear war.
Because both sides realized that everyone would lose if we just lost control of that.
It's exactly analogous here.
All the top scientists across these countries, if they tell their governments that, you know, everybody loses if we lose control, there's an incentive, whether they love each other or don't, you know, for them to just coordinate.
And this can be done.
There's definitely hope there.
Okay, awesome.
In regards to business: a lot of folks here run businesses.
They're entrepreneurs, small business owners, from a million a year to a billion a year in top-line revenue, from two employees to 10,000 employees.
What would you say on the opportunity side, specifically for the business side?
So the next three, five, ten years, we're already seeing it all over the place.
OpenAI, you got Grok, you got NVIDIA, you've seen all these different things happening.
But as a small business owner, how should I view my relationship with AI?
Great questions.
I have a lot to say on this.
First, don't think 10 years.
Think about the next two years.
Crazy things are going to happen.
And if you make a plan for what you're going to do in nine years, it's going to be completely irrelevant, because what you want is to be nimble.
And look at what you can do right now that's going to help you in the next 12 months, and then go from there.
First thing I would talk about is hype.
There is a lot of hype about AI.
It's a strong brand right now, so people will try to sell you a glossed-up Excel spreadsheet and call it AI.
Don't fall for the hype.
But there are two kinds of hype.
The first kind of hype is what we had with cold fusion, where the whole technology is just a complete dud.
Then there's a second kind of hype, like the dot-com bubble. Who remembers the dot-com bubble?
Yeah, so that was a lot of hype, right?
A lot of people lost a lot of money, but is the right lesson to draw from the dot-com bubble that the internet and the web never amounted to anything?
Would the smart thing for a company then have been to say, no, we're never going to have a website?
Of course not.
So the hype there was not that the technology itself was a dud; the technology did in fact go on to take over the world.
The hype was about certain companies that were giant flops, right?
The kind of hype we have with AI now, I feel, is exactly the dot-com kind of hype.
There are a lot of companies that are very overvalued and are gonna go bust, but the technology is here to stay and it's gonna blow our minds.
So what do we do with our own personal business here in such a risky environment?
Well, first of all, look at your existing business.
Instead of dreaming about some pie-in-the-sky, completely new thing you could do that might be a giant flop, look at what you're doing right now across your company whose productivity AI might be able to greatly improve.
Usually what happens first in your company will not be that you just replace a person with AI, but rather that you enhance your staff with AI.
So you look at someone who's doing certain tasks and you realize you can give them some tools that they can use to do 40% of their tasks much better.
Much more productivity for the same headcount.
Much lower risk now, right?
Because they already know what they're doing.
If the AI writes the first draft of that report or whatever, they will read it before they send it out.
So you don't take the risk of being like that lawyer who filed a brief in a court case citing case law from ChatGPT that was just completely made up, and the judge didn't like that.
So you take things you're doing and empower your staff to have AI do first drafts of things, et cetera, but the humans are in charge; there's still quality control, very low risk, huge productivity gains.
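A minimal sketch of that "AI drafts, human approves" loop just described. The generate_draft placeholder stands in for any language model call; nothing here is a real API.

```python
# Hypothetical sketch: AI writes the first draft, a human stays in charge.

def generate_draft(prompt: str) -> str:
    # In practice this would call a language model; here it's a stub.
    return f"[first draft for: {prompt}]"

def produce_document(prompt: str) -> str:
    draft = generate_draft(prompt)
    print(draft)
    # Nothing goes out without explicit human approval.
    if input("Approve this draft for sending? [y/N] ").strip().lower() != "y":
        raise SystemExit("Draft rejected; nothing was sent.")
    return draft

if __name__ == "__main__":
    produce_document("quarterly sales summary")
```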
A second thing I would say: it's tough, especially if you're a small business and lack a lot of in-house expertise, not to get ripped off by companies who are trying to sell you a bill of goods with an AI sticker on it.
So it's a really good idea, even for relatively modest-sized companies like yours, to get at least some in-house expertise.
Even just one person who is really quite knowledgeable.
And that person can then go around and talk to other key staff across your organization, learn what they're doing, and advise them on how they can automate certain things, enhance the productivity, and get things done in a way where you get all the upside and no downside.
What would be the position of that?
And by the way, in about eight minutes, I'm going to come to you guys to ask questions.
We're probably going to get two or three questions.
So if anybody wants to ask Max any questions, go line up by the mic.
We'll come to you guys momentarily.
So when you're looking at hiring somebody that has AI expertise, when you're interviewing a CTO or you're interviewing somebody that's a CIO, you're bringing somebody that's a BI or business analyst.
What types of questions are you asking to make sure they have background in AI?
Ask them what they've built before.
This is very much not about being able to talk the talk, but to be able to actually walk the walk and build systems, make things work.
They should have a track record of having built things.
I mean, really nerdy, sit at the keyboard, install stuff, get real productivity.
But don't put them in charge of making business decisions.
They are, you know, they're automation engineers.
They're people who humbly go and interview other people in the company and ask them, hey, tell me about your workflow.
And if those other people feel, yeah, this would be great if you could give me a tool that writes the first draft of this thing.
That person you've hired could either do it themselves or contract it out to someone else to do it, sort of provide the expertise.
There are many pitfalls.
I mentioned Knight Capital, for example, with its trading disaster, because that's kind of in a nutshell what you don't want to do:
Put in place some AI system in your company that you haven't understood well enough.
It might not even screw you over by crashing.
It might screw you over by giving your proprietary data to whatever company owns the chat bot, right?
And maybe you don't want that.
It might screw you over by being very hackable.
So suddenly you come into the office and there's ransomware on all your computers.
It's really important to tread carefully.
But if you do tread carefully like this, there's just spectacular upside.
Knight Capital is the one where you said $10 million per minute for 44 minutes.
Yeah.
Did you tell them already or no?
Did you guys hear about the story?
What happened with them?
Yeah, I mentioned it: $440 million in 44 minutes.
So one dirty secret I have to tell you about the state-of-the-art large language models, and a lot of the Gen AI stuff, is we have no idea really how it works.
But that's not necessarily a showstopper.
If you have a coworker, you don't understand exactly how their brain works either, but you can have someone else check their work and so on.
And if there's an important business decision, you have the final call, and you ask them some tough questions first. But you have to treat any AI system's output as work you got from some temp worker that you have no reason to trust at all.
So you find a way of verifying that what they did is actually valid and correct, which takes much less time than it would have taken to actually create that thing in the first place.
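A minimal sketch of that verify-before-trusting pattern, tied to the hallucinated case law example above. The case names and the KNOWN_CASES lookup are invented stand-ins for a real legal database.

```python
# Hypothetical sketch of "treat AI output like an untrusted temp worker":
# cheap verification against a trusted source before the output is used.

KNOWN_CASES = {"Smith v. Jones (1999)", "Doe v. Acme Corp. (2005)"}  # invented

def verify_citations(ai_citations: list[str]) -> list[str]:
    """Keep citations we can confirm; flag the rest for human review."""
    confirmed = [c for c in ai_citations if c in KNOWN_CASES]
    suspect = [c for c in ai_citations if c not in KNOWN_CASES]
    if suspect:
        print("Flagged as possibly hallucinated:", suspect)
    return confirmed

print(verify_citations(["Smith v. Jones (1999)", "Fabricated v. Case (2021)"]))
```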
You're on.
Max, do you have kids?
I do.
Okay.
So with kids, I got a 12-year-old, 10-year-old, 8-year-old, and a 3-year-old.
Anybody have kids?
Raise your hand if you got kids. A bunch of people here have got kids.
So how do you not only have the conversation with your kids about AI, but also career planning, positioning, where traditionally you're like, hey, son, you're going to grow up to be a this.
How do you manage career planning with them at a young age, knowing what direction AI is going?
I just asked you, what do you do with your two-year, three-year, five-year, ten-year plan with AI?
You said, here's what you don't do.
No 10-year plans, right?
You should go to what, 12 to 24 months.
So imagine a 12-year-old.
What's going to happen in six years?
So how do you manage that with career planning and kids?
Yeah, you know, I have a little guy who's just going to be two years old in December here.
And it's tough.
It really keeps me awake at night thinking about this.
I think one obvious message is you have to be nimble to live in the future and prosper.
The idea that you spend 20 years studying stuff and then some career and then you do that for 40 years, forget about it.
That's so over.
You need to be nimble.
And have the idea that you're going to constantly be innovating, learning new things, and going where it makes sense to go.
The second thing is whatever field you're in or your children are in, or going into, even if it seems like it has nothing to do with AI, it's crucial that they're up to speed on how AI is influencing and will influence that industry, right?
Because what's going to happen then is not that your kid is going to be replaced by an AI, but rather people who don't use AI get replaced by people who do.
And you want your kids to be in the second category, the ones who are the early adopters, who become more productive, not the ones who are in denial and just get replaced.
How soon do you introduce them to it?
There is no too soon, you know. I mean, okay, little Leo, you know, we keep him away from screens altogether, but...
And I mean, any kids in school these days, they're always using ChatGPT, even if you think they're not.
So no need for introductions.
But it's really important to get kids thinking about how they can make the technology work for them, not against them.
And both in the workplace, actually, and in their private lives also.
It makes me really, really sad when I walk around a beautiful place like this and see all these good-looking teenagers around the table.
And they're not looking at each other.
They're staring into little rectangles like the zombie apocalypse came or something like that, right?
And so this isn't just business.
This is also about our personal lives.
How can we make sure that we control our technology rather than our technology controlling us?
Coming back to this idea, we are team human.
Let's figure out how we can make technology that brings out the best in us so we can let our humanity flourish rather than trying to turn ourselves into robots and compete in a losing race against machines like John Henry against a steam engine.
And since we're out of time, can I just end on a positive note?
Please.
Since you said that you're a doomer, a gloomer.
I want to remind us all about something incredibly inspiring and positive about all this.
Our planet has been around for 4.5 billion years.
And our species has been around for hundreds of thousands of years.
And for most of this time, we were incredibly disempowered.
Like a leaf blowing around on a stormy September day, very little agency.
Oops, we starved to death because our crops failed.
Oh, we died of some disease because we haven't invented antibiotics.
And what's happened is that technology and science have empowered us.
We started using these brains of ours to figure out so much about how the world works that we can make technology to become the captains of our own ship.
And we've more than doubled our life expectancy.
And every single reason for why today is better than the Stone Age is because of technology.
And if we do this right with AI and use artificial intelligence to amplify our intelligence, to bring out the best in humanity, humanity can flourish, not just for the next election cycle but for billions of years, and not just on this planet but, if we're really bold, even out in much of our gorgeous universe out there. We don't have to be limited anymore by the intelligence that could fit through Mommy's birth canal.
It's an incredibly inspiring, empowering future that is open for us if we don't squander it all by doing something reckless and dumb.
And this is what I want to leave you with here.
We want to make sure to keep AI safe, keep it under human control, not because we're a bunch of worry warts, but because we are optimistic.
We have a dream for a great future.
Let's build it.
So in other words, you believe the future looks bright?
We have the power to make it bright.
It's going to take work.
Max.
Appreciate you for coming on.
Make some noise, everybody.
Max Tegmark.
Give me one second.
For the last four years, every time we do podcasts, I have to ask Rob or somebody.
Hey, can you pull up this news?
Can you pull up that?
Which way do these guys lean?
Can you go back to the timeline? Eventually, after asking so many questions, I said, why don't we design the website that we want, aggregated?
We don't write the articles.
We feed all of it in using AI.
So nine months ago, eight months ago, I hired 15 machine learning engineers.
They put together our new site called vtnews.ai.
What this allows you to do when you go to it, if you go to that story right there, that says Trump proposes overtime pay, click on it, it'll tell you how many sources are reporting on this from the left.
If you go to the right, Rob, where it says left sources, click on it.
Those are all the left sources.
If I want to go to right sources, those are the right sources.
If I want to go to center, I go there.
Now, if I want to go all the way to the top and I want to find out a lopsided story, a story that only one side is reporting on, either the left or the right.
So if you notice the first one will say Zelensky announces release of 49 Ukrainians from Russia.
Notice more people on the left are reporting on that than the right.
If I go to the middle one, same thing.
If I go to the right one, same thing.
You can see what stories are lopsided.
And if I pick one of the stories, pick the first story, click on the Trump proposal, overtime tax cuts.
To the right, on the AI, I can ask any question I want, but click on the first question that it has.
It says, what is the political context and potential motivation behind the tax, Trump's new tax cut proposal?
Click on the question mark.
It explains exactly what the motives are.
So for you to use, whether you're doing a podcast, you're in the middle of a podcast, or you just want to know it for yourself, you're busy like myself.
And last but not least, this is all AI, done by some machine learning engineers.
Go all the way to the top.
I can go to timelines, go to timelines and see how far back a story goes.
Pick the Israel-Palestinian conflict.
If I want to go to that and go back and see why some of those two days have a big spike, I'll have Rob go to those two days with the big spike, and I'll see exactly what happened on that day or the previous day. And there are many other features vtnews.ai has.
So simply go to vtnews.ai.
There's a freemium model.
There's a premium.
And then there's the insider.
If you want to have unlimited access to the AI, click on the VTAI insider.