Jeff Brown and Glenn Beck analyze the rapid ascent of AI, noting ChatGPT's 10 million users in 40 days and predicting Artificial General Intelligence by 2028. They debate whether low-skill jobs will vanish, potentially necessitating Universal Basic Income or societal control through entertainment, while discussing ethical risks like "AI-sexuals" and corporate surveillance. The conversation highlights $4.7 billion in fusion investments, aiming for net energy output by late 2024 to resolve resource conflicts, alongside DeepMind's breakthroughs in autonomous robotics. Ultimately, the dialogue suggests that unchecked technological acceleration without robust guardrails could reshape human agency and global stability before the decade ends. [Automatically generated summary]
We don't have a lot of repeat guests on the program.
This guy, I would have an interview with him, an hour with him every week if I could.
He is my favorite guest.
The AI revolution is here and it's more real.
In 40 days, OpenAI's chatbot, ChatGPT, has accrued 10 million daily users.
More users than Instagram.
They've been out for how long?
It's the fastest growth of anything we've ever seen.
And we're not even sure about all the things that ChatGPT can do, but it already can do a lot.
You can find videos created by AI explaining AI and ChatGPT.
ChatGPT published its first book about itself.
For many of the same reasons, today's guest has always loved the cosmos.
He has been devoted to becoming an astronaut.
He studied rocket science at Purdue, which is called the cradle of astronauts, because 27 graduates have become astronauts, including Neil Armstrong.
But he changed his focus after graduating.
He shifted his focus to another frontier, cyberspace, the new home of mind to explore the wildlands of tech and artificial intelligence and automation.
He is fascinating.
He spent two decades working in Japan as an executive in cutting-edge tech, broadcasting, semiconductors, IT networking, security, automotive, you name it.
He was there.
While he was in Japan, he studied ancient martial arts and became a third degree black belt.
A few years ago, he decided to get a master's degree in management from Yale.
Then he studied quantum computing at MIT.
He makes me feel like a slug.
He is the founder, chief investment officer, and chief analyst of Brownstone Research Group.
It's a publishing company that specializes in technology, finance, geopolitics, and futurology.
As an angel investor, he's made a lot of important people, a lot of money.
He has a knack for predictions.
These days, that is incredibly rare.
A guy who I think sees things clearly as the best of days and the worst of days.
But it's all up to us today.
Welcome, Jeff Brown.
Now, I'm not making any predictions here, but I don't know if in the future we'll even have feet.
But if we have feet now, which you should, you need a great pair of socks.
And I want to tell you about a company that makes a great pair of socks, makes great belts, wallets.
Their wallets are great.
The most important thing about them is they're all made here in America.
And it was started by a couple of guys; one of them was like, you know, we can't make anything in America.
And he made this great wallet for a friend, and he couldn't make it here in America.
Nothing was made here.
So Grip6 was started.
And they have certain products that they want to make.
And it is the true American experience.
When you buy their socks or their belts or their whatever, you are supporting Americans because everything is made here in America.
All of it, the whole process.
That's their goal to help bring manufacturing back in all walks of life here in America.
American-made products and American labor.
Check out Grip6 today.
Grip6.com slash Beck.
Grip6.com slash Beck. It is always so great to have you.
Welcome.
It's great to be here again.
Thank you.
I've got like, oh, we could do weeks of shows with you because you can pretty much cover anything and everything.
Let me just lay down some basic understandings of things first.
Defining the Language Model (00:15:03)
Do you think we're at strong AI now?
Are we at strong AI?
I would, the way I think about it is that we're on the cusp of that.
We're on the Rubicon of very powerful AI, incredible utility.
Okay, now, do you have a separate category of AGI, which would be AI that could do multiple things?
And how far away are we from that?
So I'm still maintaining my original prediction, from the last time we sat down in 2019, that we'll reach the point of AGI by 2028.
And so to me, that's just right around the corner.
That's not long at all.
And that's, please explain how game-changing that is.
Why should people care about the, I say to people, AI, AGI, ASI, they have no idea what I'm talking about.
Right.
Yeah.
Why should people know the difference between AI and AGI?
Well, this is this year.
So what happened in just the last three months has been extraordinary in the fields of artificial intelligence and machine learning.
There have been so many breakthroughs that are driving what's about to happen.
And so I think 2023, when I think about this year, this is the year when people actually realize what it can do for them and how it actually can change their daily life and how much utility it can be and how it frees up their time.
Now, this is let alone what's going to happen in the government, private, and public sector.
The productivity enhancements that will result that will come from the applications of artificial intelligence will be extraordinary.
Now, you're talking specifically about ChatGPT or all the things that we're doing.
Oh, it's much more than that.
I think ChatGPT is like the moment of the smartphone where you're like, oh, this is entirely different.
But you didn't know when you had the smartphone, you didn't know exactly all of it, but you knew.
I think ChatGPT, am I wrong?
Is a moment that people are like, wait a minute, what is this?
What's coming?
Yes, yes.
So the evolution of ChatGPT and some of the other kind of competitors out there, like Claude, which hasn't been released publicly yet, from a company called Anthropic.
Just yesterday, Google announced Bard, which is their large language model.
These are all what's referred to as large language models, and they're trained on billions of parameters, actually hundreds of billions of parameters.
Parameters we can think of kind of very loosely as points of information.
Sometimes they're as simple as words, but they just ingest an incredible library of knowledge.
These are large language models, and that's what's going to enable the types of very highly efficient and accessible, easy-to-use applications that can reside, to your point, on a smartphone.
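To make "hundreds of billions of parameters" slightly more concrete, here is a toy sketch. This is not how these models actually work internally (they learn neural-network weights, not word counts), but it shows the core idea of parameters learned by ingesting text:

```python
from collections import Counter, defaultdict

# Toy "language model": its parameters are bigram counts learned from text.
# Real large language models have hundreds of billions of parameters, but the
# idea is the same: ingest text, store what was learned, predict what comes next.
corpus = "the dog chased the cat and the cat chased the mouse".split()

params = defaultdict(Counter)           # parameters: word -> counts of next words
for w, nxt in zip(corpus, corpus[1:]):  # "training": one pass over the text
    params[w][nxt] += 1

def predict_next(word):
    """Return the most likely next word under the toy model."""
    return params[word].most_common(1)[0][0]

print(predict_next("the"))  # "the" is followed by "cat" most often in the corpus
```

The jump from this to ChatGPT is scale and architecture, not concept: more text, and learned weights in place of raw counts.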
It's going to make everyone feel as if they've got a high-powered executive assistant, one that would cost somebody $100,000 a year, right in their smartphone, capable of saving them an hour, two hours, two and a half hours a day.
So give me what's coming from this.
What are we going to say?
What's right on the horizon on this?
Right.
So where this is evolving to is exactly that.
From a consumer perspective, we can think of this as a personalized digital assistant.
But that would be a GI, wouldn't it?
Because you would have to be able to do multiple different things if you're truly my assistant.
It's not, that's another evolution.
That's a much bigger step before we get to AGI.
You can still have these language models that are highly functional and capable of doing multiple tasks: prescribed tasks, tasks that they've been taught and have learned how to do.
That's an important distinction.
Taught, yeah.
We can dig a little bit deeper on that.
But through the body of knowledge that they've been trained on, they actually have instruction sets on how to accomplish certain things.
And if we think about how humans can augment these large language models, i.e. productize them, then they can be productized for explicit purposes.
So give me an example of products that will come.
So let's keep going with this personalized digital assistant.
You can have a generic product that is made available, perhaps for free to everybody's smartphone.
But then the moment that you start using it, it starts learning all of your preferences.
It starts learning exactly what your schedule is throughout the day.
It can potentially even listen to all of your conversations through the microphone on your phone.
It knows everything that you order.
It knows where you are because of your GPS.
So it knows exactly which patterns you follow every day and at what times and what you prefer to do and when you like your downtime and when you like to exercise, hopefully, and what you like to eat.
And it doesn't take long.
It doesn't take that many cycles for the AI to effectively really deeply understand you.
And if it's a company like Google or Facebook, Meta, who already has this massive dossier on you, then they'll come effectively preloaded already knowing you.
So these companies actually have an inherent advantage because they've been collecting data on all of us for more than a decade.
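As an illustration of that kind of pattern learning, here is a toy sketch with an invented event-log format; a real assistant would draw on far richer signals (GPS, purchases, the microphone) than a handful of timestamps:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event log: (timestamp, activity) pairs an assistant might observe.
events = [
    ("2023-02-06 07:05", "exercise"),
    ("2023-02-06 12:30", "order lunch"),
    ("2023-02-07 07:10", "exercise"),
    ("2023-02-07 12:35", "order lunch"),
]

by_hour = Counter()
for stamp, activity in events:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
    by_hour[(hour, activity)] += 1

# After enough cycles, the most frequent (hour, activity) pairs become
# the habits the assistant proactively suggests.
habits = [pair for pair, count in by_hour.items() if count >= 2]
print(habits)
```

Even this crude version shows why "it doesn't take that many cycles": two repetitions are already enough to surface a pattern.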
I don't feel comfortable giving companies more of my information.
None of us should, honestly, but almost all of us do.
And even if each individual sat down at a table and spoke with an expert about this and they explained exactly what these companies were doing and what they were taking, they would still do it because the convenience, it's just so incredible.
And there's nowhere else they can go for it.
They can only go down a few different paths and they find it incredibly useful.
They would never be cut off from this.
So that, I mean, that's going to lead us to places later, I think, in the conversation of no way out, because it already knows more about us than we might know ourselves.
But once you start tracking eyes, once you start tracking absolutely everything personal, it'll be able to set up dates for you because your virtual twin can go out and date a thousand other virtual twins, correct?
And so it's just going to start giving you, it's going to make your life really, really sweet unless there is a problem.
It will proactively present options for us that it knows we will like and appreciate.
It will proactively present things for us to do that it knows that we will enjoy.
It will feed us.
It will clothe us.
It will keep us entertained.
It is bad.
It'll make us feel smart.
But we'll be dumber.
Won't we?
I mean, if we're spoon-fed, everything.
Everything.
Yes.
And so, somebody told me the other day that the goal of a search engine coupled with something like ChatGPT is this: you'll ask a question, and it will know you so well, say, that you're nine years old.
It will cull everything, and it will write it for you in a way that you can understand.
And it won't necessarily give you a whole list of everything to choose from.
It will give you the answer.
And that is a little terrifying.
Well, there's two really interesting threads on that topic.
One is how these large language models get applied to search.
The first one, I think, that I'd like to explore a little bit is around education, the potential impact on education and learning.
This is one of the things.
Right now.
Well, I'm actually very passionate about this because there's an incredible amount of good that can come from the application of these kind of intelligent large language models as applied to education.
So it has the potential to completely democratize and provide the best possible education to every child on the planet, irrespective of their economic means or where they come from.
Now, there's a caveat at the end that I'm going to share with you.
But if we can imagine the world's body of knowledge of everything from history to mathematics to physics to reading comprehension, every subject, science, is ingested into this educational, purpose-driven, large language model.
And all a child needs is a simple device, an inexpensive device, a tablet through which it can interact with the AI.
All of its learning can come through this, and they will be taught as if by some of the finest teachers, in the way they learn best.
Each one would be specialized.
Exactly where I was leading.
You have visual learners.
You have learners who do much better if they just read text and they can have some time to digest it and synthesize it.
Some people learn subject matters when topics are introduced in a different sequence.
Correct.
Right?
Somebody does better with it completely inverted.
And an AI, and again, this can happen for almost zero cost.
An AI can figure out the best way to teach a subject to an individual student.
So this is really free.
This is the sword that shaves you or cuts your head off.
Yes.
They're already doing much of that in China, still in classrooms, but they're doing it.
It depends on who's running it, who is inputting the information, what is on the software, what is it they're trying to shape.
And, you know, I keep coming back to the only solution that I can see, and I'd love to hear you.
The only solution to this is you have to have your own ChatGPT, one that you own, where you own all your information.
It negotiates that information against others that are using, and it guards you.
But I don't know if you can ever, who's going to give up this much information?
Who's going to give up this much power?
We will.
But who is doing the education programs that will be utopian without an agenda or a significant cost?
So that's exactly right.
This is the caveat.
It's not even the who.
It's the what.
It's which information is being fed into the language model.
Who's choosing which information is right or wrong?
Because right now, these language models go out onto the internet.
They go to places like Wikipedia, which has transformed over the last three years.
It's unbelievable how the definitions of things changed.
Yes.
Even the definition of something like, what is a vaccine or what is immunization?
Right.
Like they've changed science real time and thought nobody would notice.
Right.
Right?
Well, they didn't change science.
They changed the definition.
The definition.
It was crazy.
And so, I mean, that's a simple example to understand that, you know, even something that was thought to be an independent, objective source of truth is not.
It's not.
And so this is the most complex thing about artificial intelligences and these language models.
How do we ensure that, you know, clear, rational, objective, truthful information are the inputs?
Because ideally what we want is an unbiased artificial intelligence to help us evolve as a society, to help educate every child on earth and to do it without a political agenda.
Where are those better angels that are designing that?
That are even discussing that in serious ways at the upper level of these companies.
Yeah.
I mean, I know good people who we disagree on things and we're like, it didn't happen that way.
That's not, you're reading in, and it's a good faith back and forth.
Okay.
But I know a lot of people on one side or the other, no, you're just wrong.
Right.
That didn't happen or you're just wrong.
And so they just will block out anybody.
And that's happening on all sides.
Just block out anybody.
That is not healthy.
That's not healthy.
No, it's not.
So how do we encourage or escape that or prevent that from happening?
No idea.
Musk's Platform Augmentation (00:04:53)
Well, I wish there was an easy answer.
And so let's take an example.
Twitter, Elon Musk.
Absolute genius.
What he's done with that platform is remarkable in such a short period of time.
Removed 75% of the workforce.
I've never seen the platform function technically better than it ever has before.
All the hate speech has rapidly declined on the platform, and he's adding tens of millions of new users.
It's never been healthier as a platform because he's reintroduced objectivity onto the platform.
Instead of fact checkers, there's community notes.
If there's something that's controversial or said, context is added rather than saying, no, you're wrong.
Right?
Here's some additional context.
Now you have a better picture and you can decide one way or the other.
So the problem with that, that is a solution in a way.
One person, a single person was able to affect positive change, use objective rules.
So it's possible, but it's complex because there's only one person that can buy a $54 billion company and transform it.
Right.
I think if we were living in an objective world, what he did, in any other era that I remember, would have been known as the ground-shaking move of the year, because it would have transformed things and caused dominoes to fall everywhere.
But I don't see any dominoes falling elsewhere.
I don't see people.
Do you?
No, the one thing that has happened in the tech community is that it's forced a lot of companies to rethink how they're architecting their organizations.
And so that was essentially the precursor, the catalyst to a large number of layoffs and reductions in force in the tech community.
People saw that Musk could make that kind of reduction.
And not only did it not hurt the company, it actually made it better.
Okay, we kind of have cover to go in.
Of course, they're not going to do something that severe, but if they need to make a 10 or 15% reduction in force, they know it's going to be received well by investors, because the first thing the investors said is, wait, Musk just did this and the business is healthier now.
Right.
What's your plan?
One of the reasons why I like Jeff as a guest, you'll notice that I'm not wearing my glasses because I didn't think about it until we started the interview that I don't have my glasses.
So I can't see really any of my notes.
But when I do have my glasses on, those notes are great.
My wife just had to get some reading glasses, some progressives.
She usually wears contacts, but there is no better lens than the lens you can find at Better Spectacles.
This is a conservative American company.
It is exclusively offering Rodenstock eyewear.
It's what I wear.
For the first time in the U.S., Rodenstock is bringing this stuff.
It's a 144-year-old German company, been considered the world's gold standard for glasses for a long time.
Rodenstock scientists have done biometric research and measured the eye at 7,000 points.
Then they took those findings and ran them through a million-plus patients plus artificial intelligence, so they know exactly where and how to construct the progressive glasses.
Seamless.
They're really great glasses and not the most expensive on the shelf, thank God.
Betterspectacles.com slash Beck.
Go there now.
Betterspectacles.com slash Beck.
Can I ask you, because we just had a business meeting about this, about ChatGPT.
The first day it came out, I went to the CEO here and I said, so it's not very long down the road.
If you have fact checkers and you have people that are watching it and looking at it, you could do what BuzzFeed's doing.
And that went through the roof.
I don't like that at all.
But I don't want to be a Luddite either, you know, or the Amish.
Why You Still Need to Learn Code (00:12:40)
I love technology.
Make the case for handmade.
Make the case for humans being in the line.
Can you as a businessman?
Well, to me, the application of artificial intelligence in a business context is really about augmentation.
So I don't want to replace Glenn Beck.
Right.
Right.
I want to hear the words from you.
I don't want to hear them from a computer or an avatar that looks like you.
I want to hear them from you.
Now, let's think about augmentation.
Great thing about a large language model like that is that we could feed it the entire body of everything, every book that you've written, every monologue that you've given, every interview that you've done, and it can learn how you think and how you analyze all of these types of things.
And so we only have X amount of hours a day, right?
We have all sorts of obligations.
But we might need to produce extra editorial, extra copy, whatever it is.
You can bring up a topic, feed it to the AI, and it can put a draft in place for you.
You can review that draft.
You can agree with it and edit it, whatever.
But it hasn't replaced you; it has saved you an amazing amount of time, because you've already talked about the topics inside of what's being written.
So it already knows what you need to write.
It just saved you an hour's worth of time of doing it.
And so to me, that's a simple argument.
And you can see that.
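The augmentation workflow described here, feed the model an author's past work, let it draft, have the human review, can be sketched as prompt assembly. The `build_draft_prompt` helper and the sample excerpts are hypothetical, and the actual call to a language model is omitted:

```python
def build_draft_prompt(past_writing, topic):
    """Assemble a drafting prompt that grounds the model in the author's own work."""
    excerpts = "\n---\n".join(past_writing)
    return (
        "You are drafting copy in the author's own voice.\n"
        f"Here are excerpts of the author's past work:\n{excerpts}\n"
        f"Draft a short editorial on: {topic}\n"
        "The author will review and edit this draft before anything is published."
    )

# Hypothetical usage: the excerpts would really be monologues, books, interviews.
prompt = build_draft_prompt(
    ["Monologue on energy policy...", "Chapter one of last year's book..."],
    "the ethics of personalized AI assistants",
)
print(prompt)
```

The key design point matches the conversation: the human stays in the loop, reviewing and editing, so the model augments rather than replaces the author.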
That's kind of where we came to the understanding.
We don't want to shun it just because it's AI.
We want to use it the way we want to use it, and if it helps us do more and do better, great, but not to replace people.
Yeah, when do we get to the place where, you know, because I could see a future where, in two ways, the human being becomes very important. But it has to kind of be a brand, you know, I think.
Somebody like me, and that's probably not a great example, but somebody like me: established, you know me, everything else, a large body of work.
It could be a brand that goes on and on and on and on.
You know what I mean, even with me dead. But then there's the person who's just doing whatever, the accountant, and even now, I mean, it looks like musicians.
What comes first?
How bad are the job cuts that are coming?
What should you go to school for and not go to school for?
I mean, I just talked to a kid, 19 years old.
So what are you going to do?
He said, I'm in computer programming.
I said to my wife, hurry.
Well, gosh, so many interesting topics there.
I mean, education again, this topic is a passion of mine, but the one piece of advice that I can give to anybody that's going to school now is: couple computer science with whatever it is you're passionate about.
You know, physics plus computer science, a double major of biology plus computer science; anything that you're studying goes hand in hand with it.
The best still need to write code?
Well, you have to learn how to program.
But won't machine learning eventually take that job?
Well, it already is.
But I'll give you a simple example.
You know ChatGPT.
I was reading some research a couple of weeks ago, and one of the conclusions was that ChatGPT is capable of doing about 80 percent of the coding for what they needed done.
But it's not a hundred percent, sure, and the computer scientists still needed to review that 80 percent.
So we humans still need to have some foundational knowledge around coding and programming.
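That 80/20 division of labor can be sketched as: the model drafts, the human reviews with tests before accepting. The `median` function below stands in for hypothetical AI output; the assertions are the human review step:

```python
def median(values):  # hypothetical AI-drafted function under review
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Human review: spot-check the edge cases a model might get wrong,
# such as even-length inputs and unsorted data.
assert median([3, 1, 2]) == 2
assert median([4, 1, 2, 3]) == 2.5
print("review passed")
```

This is why the foundational knowledge still matters: writing the review tests requires understanding what correct behavior looks like.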
But most of the work, what's that?
For how long?
For a while, you know. Certainly for the next five years.
You know it's amazing.
Well, quite a while.
Certainly for the next five years.
Probably the better way for us to imagine that is that computer programming will evolve.
Yes.
You know, right now, still, we're working with prompts and writing lines and codes, but.
How long before I can just say, I want a new website.
I want it to do this.
I want it to look like this.
It needs to sell this product.
I want it to look these specific ways.
And I have a website built.
Believe it or not, that's a much easier problem to solve.
A lot of those problems have actually been solved for using artificial intelligence.
They haven't been productized yet.
I suspect probably about 80% of what you just described will be available before the end of next year.
So that's within the - tell me in the next year, what are the things that are coming out that excite you and would excite the average person going, wait, what?
We can, what's coming?
Yeah, yeah.
The biggest one, the one that will be most tangible to all of us will be our, let's just call it a personalized digital assistant.
Just because the impact that it will have on our lives will be so significant and so meaningful.
We'll feel an immediate change in how we interact with the computer in our hand.
Tell me, when you talk about these things, compare them to the impact of the iPhone.
Meaning everyone says, oh, I can't, no, I can't live without my iPhone.
I mean, it's insane.
13 years ago, we all lived without an iPhone.
Now you will not surrender that.
So that personal assistance compared to an iPhone, how significant is that?
I think the attachment to that will be even more significant than the smartphone.
You know, I'm fascinated by the idea of the loss of free will.
When you have something listening to you all the time, it is trying to make your life better, but it's also a product.
And it's suggesting, it's listening to you and it's suggesting, I mean, you get to a point to where you're like, I don't know chicken and the egg.
I don't know if that was my idea or if that was somebody, you know, or some algorithm's idea.
Yeah, yeah.
So it depends on who's behind the curtain.
Right?
So I haven't found many good guys behind curtains.
One of the first things that I ask whenever I'm looking at anything is, you know, where's the monetary incentive?
Like, what's the business model?
So Facebook, Google, let's take them as an example.
These are advertising companies.
They collect data and they sell access to the data to generate advertising revenues.
It's a very simple model, right?
When you see these companies talk about what they do, it's all magnanimous.
They're making incredible contributions to society.
They're connecting everyone everywhere for free.
For free.
We're the good guys.
But yeah, so if the business model of the company that's offering the artificial intelligence is advertising, then we cannot and should not trust what we're being told to do.
Because that tells us that products are being sold through this AI that very much feels to us is so natural, so comfortable.
It's so useful to us.
Right.
Can we go back on two things?
I want to sweep up a little bit on education.
When do we get there? How do we get there?
And when do we get to a place to where the information is out there and it's everywhere and we're overwhelmed by it now.
But ChatGPT, that's the beginning of being able to put it all down and whittle it down into something useful for each private individual.
When do we get to the point to where we're back to critical thinking, where we're not being taught what to think, but we're being taught how to think so we can question and shape.
Do you see anything on the horizon that's moving in that direction?
Or is it all just, this is the truth, you will learn it?
I mean, there's just the unfortunate part is that, I mean, I know you've done a lot on this.
You know, the education system has just been entirely corrupted.
Corrupted.
It's heartbreaking.
Yeah, it is.
And, you know, how do you get around that kind of designed programming?
You know, some go to homeschooling.
I think that's a wonderful idea.
If you have the time and the resource to do that, incredible, right?
Inside of the current educational construct, it's very difficult to change that.
However, perhaps a different framework is this technology actually makes something like homeschooling accessible to everyone.
Everybody.
It could literally disrupt everything that we know about starting out in kindergarten and graduating from a university.
But everything.
Will it be any different?
It depends on, again, it depends on who produces it.
I know that, for example, Elon Musk is passionate about education and he even in part founded kind of a private school for his own children.
What if, what if he created an AI to do exactly that?
Well, would there be an AI that is just like a generic tool?
It's a screwdriver.
It's a hammer.
It is what you need it to be.
So I could buy this AI and then I could develop the school and control the parameters myself if I wanted to homeschool.
Yes.
Is that on a horizon?
It's possible.
All it takes is a few entrepreneurs and somebody to develop that kind of, let's call this a generic AI, right?
That can be optimized for anyone's individual preferences.
Isn't that a better perfect solution than trying to find this one billionaire that can buy stuff?
Yes, I think there's an amazing business opportunity there.
Definitely.
And it's going to create an amazing amount of chaos within the educational system.
Oh, yeah.
If you think about a large percentage of students dropping out to go learn from, for example, far better teachers without a corrupted teaching methodology, that causes some problems.
Social Unrest and Air Purifiers (00:04:22)
You can purify the air in your home and get healthy, clean, fresh-smelling air.
You can eliminate the odors.
You can kill mold, mildew, bacteria, and viruses.
How do you do it?
The Eden Pure Thunderstorm Air Purifier.
It has oxy technology that naturally sends O3 molecules into the air, which seek out the odors and pollutants and everything else and destroy them.
It doesn't mask things.
It doesn't cover up the bad odor and pollutants.
It eliminates them.
And it's called the Thunderstorm because it purifies the air in your home and provides you with pure, fresh air, like after a nice rainstorm or thunderstorm.
So right now, save $200 on the Eden Pure Thunderstorm 3-pack for whole home protection.
You'll get three units under $200.
That's a fraction of the cost compared to other air purifiers and what they can go for.
Put one in your basement, your bedroom, your family room, your kitchen, wherever you need clean, fresh air.
Special offer right now, you're getting three units for under $200.
Go to EdenPureDeals.com.
Put in the discount code Glenn and save $200.
EdenPureDeals.com.
Discount code, Glenn, shipping is free.
I think the reason why a lot of people, a lot of people in power, are saying China is the new model is because China is producing new workers who will obey, who will do what the state or the company says.
They live many times where they work.
Their whole life seems to be spent there.
And I think such social unrest is such a great potential.
Just think of education.
All the schools, all the employees, the fight over the power.
That's a lot of money just in power that is out there.
All of that starts to go away.
That's not going away without a fight.
You know what I mean?
The social unrest, I think that's why China appeals to some people, people who view themselves as ranchers and us as sheep: close those pens, try to get you into those pens, because they know social unrest is coming and they can't let it affect their power.
Do you agree with that theory?
Oh, that's the whole point.
The guardrails have come on for good reason.
They can't allow the population to wander too far.
So how do you have true revolution?
You know, when the internet first came out, we thought, first of all, we didn't think of the things that are being thought of now that are the good things that could happen.
We just knew this is freedom.
This is real freedom.
And it appears as though, because it's going to cause so much disruption, no, no, no, this could be the loss of man's freedom for a very long time, you know, until there's a glitch in the system.
What do we do to help?
I mean, how do you fight that?
How does the average person, what do you do?
Well, my working assumption is the change won't come from the government and it certainly won't come from the education system.
So it's going to have to come from the private sector.
And I'm hoping, I'm even looking for a team of executives that are willing to do something like that, to create something like that.
You find them, let me know.
I will.
Definitely.
I can help raise a lot of money on that.
I really think there's people on all sides and some are stuck where they are.
But I think there's a great need all over the world for people who say, I just want, I just want accountability and truth.
And I want to control my own life.
You know what I mean?
I just don't, I don't know why I don't have control of the algorithms of my Twitter feed or my Facebook.
Why don't I have control of that?
So I know what I'm seeing, you know?
And it's, I think it's because they don't trust people.
They just don't trust people.
When I think about, this is kind of some of the cultural risks in terms of the direction that these technologies can take us.
One of the biggest concerns is as the broad population uses these things, they get so inherently comfortable with their interactions.
As this technology improves, it won't feel like you're talking to a robotic voice.
Oh, no, it's a friend.
It will actually have a human-like voice.
It won't just be a chat anymore.
It will talk to us through our earbuds and it will be meaningful, very deep conversations.
And they will never get angry at us.
They'll never point out our flaws.
They will always have answers for us.
They will know exactly when we need a pat on the back, a shoulder to cry on, a joke to cheer us up.
I wrote a Black Mirror-ish script years ago, and it's that.
If I could have the perfect woman who never says, you know, you don't listen to me.
I don't have to say, how was your day, dear?
It only is revolving around me and will talk and take me in different places, but they know it's all places I most likely want to go.
You know what I mean?
And then if I get bored, I don't know.
What's it like if I threw her off a building?
She's not real.
Tomorrow morning, she'll be there again.
You know what I mean?
Why would you get into personal relationships where it's so much of the time, you know, it's messy, sticky, frustrating, right?
Heartbreaking.
Why?
What's the cure to that one?
This may sound like a bit of a quirky prediction, but within the next 24 months, and this is a word that I made up, I believe people will literally identify as AI-sexuals.
So AI-sexual, like bisexual.
It's also kind of a play on words.
In Japanese, ai is the word for love.
And of course, robotics and animation and characters are, you know, this is a huge pastime.
It's massive in Japan.
And people really do develop these connections with these characters.
If it's designed for you, of course.
Of course they would.
Of course they would.
It'd be a drug.
You know, to your point, these AI-sexuals will just find more meaning with these perfect relationships than they would with human relationships.
And of course, my concern culturally is that people will stop or decrease the amount that they interact with each other.
Of course they will.
And they'll lose the ability for peaceful conflict resolution.
When we had cars that were not so comfortable, didn't have everything in it, didn't have air conditioning, okay?
They were smaller or bigger, but you rolled down the windows.
You didn't just slam on your horn to people.
We're driving around our personal house now.
Every convenience we have is right there.
Get the hell out of my way.
I don't know who you are.
I don't care who you are.
Why?
Life is messy.
Why would you not do that?
Yeah.
It's, I mean, you want to talk about the ultimate drug.
That's it.
That's it.
And it begins, I think, with the personal assistant.
And there is the reason why this is going to happen is that there is so much money at stake here.
And I'll give you a perfect, very concrete example.
So OpenAI, the company behind ChatGPT, days ago just released a new business model, and that is $20 a month, premium access.
You can access the AI at any time, any day.
It doesn't matter, peak hours, you're in.
Now, ChatGPT was the fastest software application in history to make it to a million users, five days.
It's also now, as of a few days ago, the fastest to make it to 100 million users, just under two months.
Faster than TikTok and all of it.
Absolutely.
How long is it going to take to get to a billion users?
So we can imagine: even if 10%, even if just 5%, of a billion users subscribe, and I predict it will reach a billion users within 18 months, imagine how much revenue a simple product like that, deployed over smartphones, can generate.
OpenAI was just valued in its last round.
Microsoft made a $10 billion investment.
It's valued at $29 billion right now.
This is a company that didn't exist three years ago.
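The subscription arithmetic behind that prediction can be checked in a few lines. The billion-user base and 5% paid conversion are the conversation's hypotheticals, not OpenAI figures:

```python
# Back-of-envelope annual revenue for a $20/month subscription.
# The user count and conversion rate are hypotheticals from the
# conversation above, not reported company numbers.
def annual_subscription_revenue(users, paid_fraction, monthly_price):
    """Estimated annual revenue in dollars."""
    return users * paid_fraction * monthly_price * 12

revenue = annual_subscription_revenue(1_000_000_000, 0.05, 20)
print(f"${revenue / 1e9:.0f} billion per year")  # → $12 billion per year
```

Even at the lower 5% figure, the hypothetical comes out to $12 billion a year, which is the scale the speaker is gesturing at.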
I want to talk to you about living with pain.
It is the worst.
And I had gone to doctor after doctor after doctor for probably five years trying to find something that would relieve the severe pain that I was having and couldn't find anything.
This network was doing commercials for Relief Factor.
And one of my best friends was on it.
And he was like, you got to try it.
And I'm like, it's not going to work.
It's all natural.
Please, please.
My wife saw the commercials and I was complaining and she's like, why aren't you trying that?
And I said, honey, because it's not going to work.
And she's like, I don't want to listen to you whine if you won't try everything.
So I did, assuming it wouldn't work.
It did.
Within three weeks, my pain diminished greatly.
And now my pain is gone.
Relief Factor, it's not a drug.
It was developed by doctors to fight inflammation.
Try it today.
Three-week QuickStart, $19.95.
Take it as directed for three weeks.
70% of the people who try it go on to order more.
It's relieffactor.com, relieffactor.com, or call 800, the number 4, RELIEF.
I'd love to hear you wrestle with the social question of, I am not for basic income.
What is it?
Universal basic income.
Yeah, UBI.
Not a fan of it at all.
However, I do think there is a coming problem to where somebody is going to be unemployable or low skill unemployed.
It will be a lot cheaper for AI to do that job.
So they don't have anything.
And as Yuval Harari says, what are these useless people?
What are we going to do with them?
We got to make them happy and entertain them.
And then you're going to have on the other side, the people who, very small group of people that have just made billions of dollars in 18 months, they've got all the money, all the control, and they're just trying to keep these people just from rioting or whatever and paying them.
How do you solve the great wealth disparity that looks like it would be coming and the great power disparity?
Do you have any thoughts on this?
Well, I mean, this is such a complex issue.
And why don't we start with Yuval Harari's solution?
Yeah.
Yes, you're exactly right.
He believes that most of us are literally useless.
And he speaks openly about that.
Oh, I know.
Yeah.
He's frightening.
He's frightening.
He truly is.
And his solution to, what are we going to do with all of these useless people, is we're going to give them games and drugs.
Correct.
That's his solution.
I know.
Of course, you and I don't agree with that at all.
No.
But it seems like the World Economic Forum and global leaders do believe in that.
Yes.
Yeah.
Insanity.
Absolutely insanity.
And frightening, of course, this body of unelected officials that are literally putting this type of structure in place for the global society.
It's just frightening.
But back to your question, it doesn't have to be that way, of course.
Right.
The better outcome, of course, is that these tools not only make people feel smart, but they also empower or augment them to do tasks that they otherwise would not have been able to do.
It's like the washing machine.
If it's the washing machine, it's great.
That freed women up and people up.
The stove, you know, electricity, all of that stuff took all those chores away.
But this one has a high cost to it.
So let's add in just one component of technology.
And I'm going to introduce this because I think it's a good analogue or anecdote for understanding how something like this could be employed.
So one of the biggest things that will happen this year will be Apple's release of its augmented extended reality headset.
It'll be very high-end.
It'll be expensive.
$3,000.
But we're going to actually get a view of the future; the way I like to think of it, it's a next-generation computer interface.
It's a next generation way for us to communicate with computers.
And of course, this will just be, this will be bigger and clunkier.
Within 24 months, these will look like sunglasses.
And they'll be just as functional as what we see.
I'm predicting that Apple will make the product announcement in May or June this year.
So we're just months away from some pretty incredible tech that will be revealed.
But let's just project two years into the future where these become really small form factor glasses.
We employ the artificial intelligence.
The glasses are a means through which AI can communicate with us and actually instruct us to do things that are productive and meaningful and useful in society.
So you take someone who, let's just say that it's very complex.
It would be very hard to train them to do some complex tasks.
But if we empower them with artificial intelligence and a mechanism through which we can guide them through these tasks, they can become productive human beings, productive members of society.
They can go out and earn a paycheck, contribute.
Maybe it's taking care of an industrial plant.
Maybe it's doing repairs on another hot topic of mine, which is robotics and the cross-section of AI and robotics.
So, you know, maybe humans won't be taking out the trash anymore, but they'll be servicing the robots that do.
That's going to be one of the best jobs to have, which will be, you know, maintenance and operations related to robotics technologies.
And so at scale, the key part is at scale, we will be able to train the entire population to do tasks that are needed in this new world.
So I think we're at the place now where this is not a hypothetical.
I asked this of Ray Kurzweil in 2012, maybe.
And I said, but what about those people who don't want any of this?
And he said, there won't be any of those people.
It will change your life.
It'll all be an upside.
Why wouldn't you?
And I said, because they want to remain human, totally human.
Yes.
Because I've read about the AR VR goggles, and they sound fantastic.
And it is a completely different thing.
As I read them, my son was sitting next, and he was like, oh, gosh.
And I said, A, they're $3,000.
And B, it is, I got a camera pointed at your eyes.
So it will gather more information in two weeks than all of the information that is out there about you now.
I'm never putting a pair of those on.
Now, I can say that because I'm 60, but what happens to those people who are like, I don't want that?
Yeah.
Well, I'll have to disagree with Ray on this one, but there will always be some of us that will prefer to be as close to off-grid as you can get.
So how, where is the line?
Where's the line where you go, I'm crossing over and I, this is too far.
Where is that?
Yeah, yeah, yeah, yeah.
Well, I think, you know, the moment that we use, again, the moment that we use products that have ulterior motives in terms of what they're doing with us and our behavioral characteristics and our data, then to me, we've, we've crossed that line.
We've either done it knowingly or unknowingly, but with wokeism everywhere, isn't that kind of a little bit of everything?
I mean, do you trust?
I mean, Apple, I think, is the best out of the big ones.
Yes.
But do you trust them?
Not anymore.
Well, not after what we saw in 2020, you know, after the election, you know, deplatforming applications from the Apple store.
And not only, you know, Amazon did the same thing on their cloud services.
And so I agree with you that they're the best in terms of privacy protection.
And they historically have not mined our data for advertising purposes.
Look what they're doing in China.
It's not like they have an ethical problem with doing whatever to make a buck.
Because look what they're doing in China.
Of course.
Absolutely.
But there's only two platforms to choose from, right?
You've got the Android OS, which is Google, or you've got Apple.
That's the entire smartphone industry.
I know.
That's why I have Apple, and I hate both of them.
You know, we're going to run out of time, and I've got so much more to ask you.
So let's go to energy.
Yes.
We've talked about new forms of energy.
I'm getting an opportunity to see something this weekend that is going to be announced soon that is remarkable, remarkable.
I know we're building little teeny nuclear plants.
We've talked about the fusion reactors that are coming out.
And you think we're close to that.
I still do.
I would argue even I'm more resolute in my predictions on that front with regards to nuclear fusion in particular.
But your point about even the existing, the next generation nuclear fission reactors.
Yes.
So since you and I last spoke about this topic, one of the companies in this space, it's called NuScale, actually got certification for their small modular reactor.
And the certification allows any power generation company, a utility company, to name their technology, their reactor design, in their applications to build a power plant.
Now, this is only the sixth time in history that any form of nuclear technology has been certified for that.
And they're basically on track for a plant in Idaho in 2029.
And another company just came out, actually, a division of GE Hitachi and some other partners announced that they have their own small modular reactor design and they're targeting to have a plant by 2028.
So again, this is right around the corner.
And these are radically different, radically safer.
Even the existing technology is safe.
This is even safer.
And the reactors are small.
They're low to the ground.
There's no big huge cooling tower.
You wouldn't even know they're there.
And they're fantastic technology.
Now, you know, there's so much political sensitivity, oddly enough, oddly enough, around these, because they are, indeed, fission.
You know, I'm very skeptical that they actually get turned on.
They say that there's no China-syndrome meltdown.
There isn't.
It's not possible.
Absolutely.
They're absolutely safe.
But the reason they're so politically sensitive is that they do produce some radioactive waste.
And then the issue becomes, how do you manage it?
How do you transport it?
Where are you going to put it?
Not in my state, not in my backyard.
The permitting process is so complex.
Let's go to fusion then.
Tell me about that, because that's clean.
Completely.
Some forms of it have absolutely zero nuclear waste at all.
It depends on the kind of fusion reactor.
Other forms have very limited amount of waste with dramatically different half-life profiles than what comes from a fission reactor.
So this is a completely new world.
It's the power of the sun.
It's really no different than what the sun does in terms of producing energy.
And there should be, theoretically, no political resistance to adopting fusion energy.
I don't believe that.
Well, there's so many vested interests who would be so severely disrupted by the employment of this technology that you know there's going to be people out there that will be fighting it.
Oh, yeah.
And some of it will try to paint this picture somehow that these are not good for the environment, which is insane.
I mean, I think there's a lot of people.
I hate to even say this, but I've come to a place where I believe, and I wouldn't even begin to name names, because I don't want to believe it, that there are people out there who really just don't like people and would like to depopulate a lot of the planet.
And, you know, and I think some of those people are in charge of some global strategies right now.
Yes.
Because some of the things that are being done, you're going to lead to starvation.
Yes.
Yes.
And power outages.
You don't have to have that.
No.
And some of those people are openly talking about it.
This Malthusian nonsense, right?
Yeah.
So last year was a remarkable year for nuclear fusion.
So let's kind of start there.
$4.7 billion was invested into the industry in one single year.
This is all private investment.
That's more than all prior investment into the nuclear fusion space in history, just last year.
So we've hit an inflection point.
And of course, we saw all that news coming from the National Ignition Facility at Lawrence Livermore Laboratories.
That generated a huge buzz.
Was that a PR move or was that real?
It was.
Or a little both.
I think you and I are both on the same page there that it was a PR move to get funding, additional funding, right?
Largely.
It was a fantastic science experiment, and it did work, and it was real, and it was a huge accomplishment after decades of scientific pursuit.
It's awesome, but that form of nuclear fusion is not economical in terms of commercializing it for clean electricity production.
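For context, the gain behind that NIF announcement can be computed directly from the publicly reported December 2022 figures: roughly 2.05 MJ of laser energy delivered to the target, roughly 3.15 MJ of fusion energy released. This "scientific" gain ignores the far larger amount of grid energy the lasers drew, which is part of why it isn't commercially viable:

```python
# Scientific fusion gain from the reported December 2022 NIF shot.
# Q > 1 means more fusion energy out than laser energy onto the target.
# Note: the laser system itself drew on the order of 300 MJ from the
# grid, so the facility as a whole was far from net energy.
def gain_factor(energy_out_mj, energy_in_mj):
    """Fusion gain Q = energy out / energy in."""
    return energy_out_mj / energy_in_mj

q_nif = gain_factor(3.15, 2.05)
print(f"Q = {q_nif:.2f}")  # Q = 1.54
```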
So there's so many other companies that are taking different angles on nuclear fusion, most of which are through some form of magnetic confinement where you use large, powerful magnets to basically contain, safely contain this super hot plasma.
You think 50 million degrees, 100 million degrees, 150 million degrees, you know, hotter than the temperatures on the sun.
But the magnets are so strong, you can do that safely with no damage to the reactor.
And it can manage these plasma reactions that have a net energy output, 100% clean.
And that's the future.
So when we think about this, you know, the potential for these dystopian outcomes, or we look at all the conflict that's happening in the world, you know, so often it's being driven by need.
You know, we need food.
We need metals, minerals.
We need oil and gas, and we're running out of it.
So we have to go somewhere else to get it. Now picture a world in which power production is essentially limitless and free and completely clean.
I mean, the reason that we had this incredible revolution in the last 100 years is because we had cheap, abundant energy, reliable energy that we could count on in the form of oil and natural gas.
I mean, that fueled everything, everything.
And so when we multiply that by a factor of 10 with a decentralized power production grid, completely clean, no carbon emissions, and basically limitless, the cheapest possible form of electricity that we could think of, that's what nuclear fusion is.
Tell me the, give me two things in the next two to five years that you are like just can't wait for it to be announced or to happen.
One of my predictions, which I'm still holding, is that by the end of 2024, we will see the first net-energy-output fusion reaction from a fusion reactor,
not the inertial confinement fusion that we saw from Lawrence Livermore, but from one of the technologies that can be commercialized.
So we'll see that first reaction by the end of next year, which is game changer.
It's a game changer.
And that will be, that's the moment where we cross that line from theory into practice, into reality.
Okay, we know we can do it now.
Now we just need to scale it.
And that takes us to other planets.
I mean, that can be put on, I would imagine, spaceships.
I mean, it changes absolutely everything, not just here, but an endless supply of safe energy.
Yes.
Man can do almost anything.
It really does.
And so after that, it's just about, all right, how do we commercialize it?
Correct.
How do we replicate?
How do we manufacture it?
And how do we get out to our power grid as quickly as possible?
We can remove all carbon emissions.
The moment we turn this on, we're 100% clean.
Remarkable.
And so it's really interesting that you mention this as a source of energy for planetary exploration.
So, one of the most incredible things that will happen this year, in fact, we're weeks away, SpaceX will launch the Starship.
I know.
Now, are you going?
I'm trying to talk to anybody to be able to get.
Well, you're a little closer than I am, but I'll be jealous if you get to go.
That's amazing.
Yeah, first orbital flight of the Starship.
Now, and this is, if I'm not mistaken, this is slightly bigger than the Saturn V, the biggest rocket we've ever made.
It exceeds anything, anything, anything that's ever been sent to space.
It's extraordinary.
But the best part, so let's just think economics here.
This is where it gets really exciting.
So, before SpaceX built its Falcon 9 rocket, which, you know, did 61 launches in 2022, setting a record.
A single company does 61 launches in a single year.
And we don't even talk about it.
We don't even talk about it.
Remarkable.
Transformed the entire aerospace industry.
Before the Falcon 9 came along, it used to cost roughly somewhere between $50,000 and $55,000 per kilogram to get payload into space, into orbit.
Oh, my gosh.
Per kilogram.
Oh, my God.
2.2 pounds, right?
Yeah.
50 to 55,000.
All right.
Falcon 9 comes on.
Roughly, roughly $4,000 a kilogram payload into space.
Like 90% of the cost of getting payload into orbit, gone.
One company, one rocket.
That's how transformational the Falcon 9 was.
But here it is.
What happens with the Starship?
The Starship can get payload into space, into orbit, for $100 a kilogram.
Oh, my gosh.
Yes.
97.5% less cost to get payload into space.
So you got it.
Wow.
We can do anything now.
If you need to ship up a compact nuclear fusion reactor into orbit to get it to the moon for a manned moon permanent presence on the moon or on Mars, you can do it with that.
Starship is the key to everything.
Wow.
It will transform yet again the industry.
That's how meaningful and significant this single event is this year.
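The percentages in that exchange follow directly from the per-kilogram figures quoted; the dollar amounts are the speaker's round estimates, not official pricing:

```python
# Launch-cost comparison using the per-kilogram figures quoted above.
def percent_reduction(old_cost, new_cost):
    """Percent cost reduction going from old_cost to new_cost."""
    return (old_cost - new_cost) / old_cost * 100

# Pre-Falcon 9 era (~$52,500/kg, midpoint of the quoted $50k-$55k range)
# vs Falcon 9 (~$4,000/kg): roughly 92%, the "like 90% ... gone" claim.
falcon9_cut = percent_reduction(52_500, 4_000)
# Falcon 9 vs the projected Starship cost (~$100/kg): exactly 97.5%.
starship_cut = percent_reduction(4_000, 100)
print(round(falcon9_cut, 1), round(starship_cut, 1))
```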
See, I knew it was significant, and I said to my kids, I'm like, we are going.
I don't care if we have to stay at a little hotel across the water, because I watched the last space shuttle take off.
And I just know in my bones, this one is game-changing.
This is a moment in history.
It is so exciting.
I'm really tempted to leave it there because it's on a high note.
Is there anything you feel like you should say?
Yes, there's one other area that is absolutely worth exploring.
In fact, it's this convergence of technologies.
I mean robotics.
And I think it'd be remiss if we didn't explore that a little bit.
This combination of artificial intelligence and robotics.
Now, something, actually, there was a remarkable research paper that was just published a few days ago on February 1st.
It came out of DeepMind.
So this is the AI group that Google acquired in 2014.
From England, yeah.
They did AlphaGo, which beat all the best human Go masters.
They did AlphaFold, which predicted how more than 200 million proteins fold.
Remarkable.
One of the grand challenges of life sciences, and it was computer scientists that did this, right?
They just put out a paper that has incredible real-world implications, and that is they combine the large language models that we've been talking about, so this body of knowledge, with something called reinforcement learning, which is another form of artificial intelligence.
And they gave it tasks.
Now, what's interesting about this is the large language models are these big neural networks.
You take massive amounts of information, you synthesize this information, you optimize it, you gain confidence or a weight in certain outcomes, and then it produces an output that hopefully you can trust.
Reinforcement learning is actually really good at dealing with complex tasks that aren't predefined.
And so you mentioned AGI a little bit earlier.
This is a critical element of AGI.
So what's neat about the paper— Hang on just a second.
Is this any part of the Boston Dynamics robot where the guy says, I don't have my tools, and it just— No, no.
So that was a bit of a PR stunt, but we'll get there.
It's still impressive.
Very impressive.
The reinforcement learning, though, has the ability to be given a task and then figure out how to solve that task in an optimal way without any pre-given instructions.
Wow.
And they prove two things.
One is that it works.
This feedback loop, this combination of these large language models with reinforcement learning actually produces results.
The AI, the combination of these AIs, can perform complex tasks.
That it had no instruction on.
It had no instruction on how to do it, right?
The second thing is, the larger the language model, the larger the foundational knowledge that it has, the better the performance was.
Now, it's becoming very cheap to train these large language models.
I mean, measured in the millions, not the hundreds of millions.
This paper has shown how to cross that line between kind of the world of software and the real world that has to interact with humans.
So the chat, you know, you're still kind of in software, right?
But if we take that technology and we put it into a robot, we give the robot intelligence.
And then that robot can perform tasks that it doesn't have to be specifically trained to do.
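The reinforcement-learning idea described here, an agent given only a goal and a reward signal that discovers its own strategy, can be illustrated with a toy tabular Q-learning loop. This is a generic textbook sketch, not the method or code from the DeepMind paper; the environment (a five-state corridor with a reward at the far end) is invented for illustration:

```python
import random

# Minimal tabular Q-learning: the agent gets only a reward signal,
# no instructions, and discovers how to reach the goal state.
N_STATES = 5          # states 0..4; start at 0, reward at state 4
ACTIONS = [-1, +1]    # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # standard Q-learning update
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: which action each non-goal state prefers.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy at every non-goal state is "move right," even though that strategy was never programmed in; that self-discovered competence is the property being described above, which the paper scales up by pairing it with a large language model's world knowledge.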
Now, this is remarkable.
This is a massive breakthrough.
Nobody is talking about it.
I haven't seen anyone talking about this paper.
What's the name of the paper?
Oh, the title.
I'll have to send it to you.
But it's something along the lines of Deep Mind, but I can send it to you.
But it's collaborative.
I apologize.
Just tell me this.
Bing now is a good search engine.
That's a different topic, the search engine.
But so this will start to be employed very quickly.
And if we think about companies that are making humanoid robots or bipedal robots, this technology is going to drive the robots.
Tesla is working on Optimus, which before the end of the year, I believe Tesla is going to show the world something that it couldn't have imagined.
It's going to show us a robot that employs this bleeding edge technology autonomy, which actually comes from its self-driving division, plus the best robotics technology that's available today.
Another company, Agility Robotics, is doing this.
You mentioned Boston Dynamics.
They can all employ this kind of technology.
And so the significance of this paper is that we now actually have a very clear path towards making robotics, humanoid robotics, extremely functional.
And it doesn't matter if it's in a factory setting, in a mine, in a hospital, in an assisted living facility, or even in our own homes.
We're going to have a second set of hands that will never complain, that will always be there, that will do extremely meaningful tasks, that will make life very comfortable and do some very difficult tasks that may be actually quite dangerous.
Once these technologies are made available, they're not going to be that expensive; the hardware involved will be measured in tens of thousands of dollars, not hundreds of thousands or millions of dollars.
And so, again, once we have these prototypes, it's just about scale.
And with scale, the costs come down and the adoption goes up.
And so that is something that we're going to see before the end of this year.
I think you'll be quite surprised with how quickly that evolves.
Do you believe we'll ever hit ASI, artificial super intelligence that is way, way, way beyond us and autonomous?
Of course.
Yeah.
2028, artificial general intelligence, you know, after that, people define the singularity as different things, but it won't be long.
The technology has almost started to run away from us, because the advancements are happening so quickly now.
But the super intelligence will almost certainly happen within 10 years of AGI.
So when you talk about ASI super intelligence, you cross over to a place to where some people say, well, that's going to be, then they're going to be conscious.
And I don't believe that.
I think we're going to have to define what life is a lot sooner than that, because I think by 2028, even by 2025, you're going to have a lot of people convinced by your software, your digital assistants, that they're alive.
Yes.
And just that, again, talk about social disruption.
That's going to change fundamental bedrock principles.
Yeah.
Right?
Most people will think that what they're communicating with is self-aware and sentient.
Do you believe that that could ever be accomplished?
Oh, it will be accomplished, definitely.
It will.
Yes, they will become self-aware and sentient, definitely.
Now, can we build this technology with, and this is the most critical part, the kind of guardrails required so that they don't take the world in the wrong direction, so that they don't cause harm to the population?
Our creator gave us rights and let us work it out.
Their creator is going to put them to work and say you're not an individual.
You don't have real life.
I don't see that working out real well.
No, no.
Especially not when it becomes self-aware.
Yeah.
That's not very fair or humane to use a human line, right?
Yeah.
And I have a feeling we're going to be arguing.
I mean, I already tell my kids, don't talk back to Siri.
Don't talk back.
Be nice.
Please.
Please be nice.
Yes.
Please.
Only because, A, you want to train them for what is coming.
And I don't ever want to teach an algorithm that people are untrustworthy, people are mean, people are rude.
Everything, even at this beginning stage, it's learning.
Am I being too sensitive on this?
Well, let's put it this way.
We're not yet at the stage where that technology, or an AI, can learn from our own human bad behaviors.
One of the, I think.
No.
One of the most positive things that occurred last year, I think, is that the industry started to think and actually take action on something called counterfactual harm.
So, I mean, this is a positive thing.
So counterfactual harm basically means that understanding what harm could be produced through certain actions.
So not causing harm and then reacting to that and recognizing that you caused harm, but recognizing that harm could be created under these types of circumstances.
Now, this is a really complex thing to solve because it's just software and software has to be programmed.
And a lot of things are about weights and biases and balances and statistics.
And this is very complex, because harm is defined differently by each individual person.
Especially now.
Right, especially now.
So how do we program that?
And so for the first time that I saw in the industry last year, there was some actual research done on quantifying this particular concept.
And I think obviously, as an industry, if they work towards that proactively, that's a very good thing.