Ray Kurzweil, futurist and AI pioneer, wears 30 hand-painted suspenders daily as self-expression while predicting AI will surpass human artistry by 2029, citing 80 years of exponential computing growth—from 0.0007 calculations/sec/dollar (1939) to 35 billion (2023). He argues renewables will replace nuclear and coal within a decade, despite grid hurdles, and AI’s drug discovery breakthroughs, like Moderna’s mRNA vaccine in two days, prove its transformative potential. Kurzweil forecasts "longevity escape velocity" by 2029, enabling biological age stagnation through 80+ daily supplements, while dismissing spiritual concerns as unproven. Though AI military risks and monopolization fears exist, he insists democratic oversight and neurochemical conflict resolution will prevent dystopia, envisioning a future where suffering fades—but Rogan jokes about losing humanity’s "blues." Kurzweil’s upcoming book, The Singularity Is Nearer, set for June 24th, explores merging with AI, with Rogan musing if AI could even narrate it—Kurzweil agrees, despite current nuance limitations. [Automatically generated summary]
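Taking the summary's price-performance figures at face value, the implied doubling rate of calculations per second per dollar can be checked with a few lines of arithmetic. The 1939 and 2023 numbers below are simply the ones quoted above; this is a sanity check, not an independent measurement.

```python
import math

# Figures quoted in the summary (calculations per second per constant dollar).
calc_1939 = 0.0007
calc_2023 = 35e9
years = 2023 - 1939  # 84 years between the two data points

# How many doublings does it take to get from the 1939 figure to the 2023 one?
doublings = math.log2(calc_2023 / calc_1939)

# Average time per doubling of price-performance over that span.
doubling_time = years / doublings
print(f"{doublings:.1f} doublings, roughly one every {doubling_time:.2f} years")
```

Roughly 45 doublings in 84 years, i.e. price-performance doubling on the order of every two years, consistent with the exponential-growth framing in the conversation.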
Well, my father was a musician, and I felt this would be a good way to relate to him.
And he actually worked with me on it.
And you could feed in music, like it could feed in, let's say, Mozart or Chopin, and it would figure out how they created melodies and then write melodies in the same style.
So you could actually tell this is Mozart, this is Chopin.
It wasn't as good, but it's the first time that that had been done.
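The melody program described here can be loosely sketched as a first-order Markov chain: learn which note tends to follow which in a composer's melodies, then sample new melodies from those learned transitions. This is only an illustrative reconstruction; the toy "Mozart" fragments and function names are invented for the sketch, not Kurzweil's actual 1960s program.

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Count which note follows which across a composer's melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            transitions[prev].append(nxt)
    return transitions

def compose(transitions, start, length, seed=None):
    """Sample a new melody in the learned style."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: no observed successor for this note
            break
        melody.append(rng.choice(choices))
    return melody

# Toy "Mozart" fragments, purely illustrative.
mozart = [["C", "E", "G", "E", "C"], ["G", "E", "C", "D", "E"]]
style = learn_transitions(mozart)
print(compose(style, "C", 8, seed=1))
```

Because the output is sampled from transitions observed in the training melodies, it stays recognizably "in the style" of the input, which is the effect described above: you could tell this is Mozart, this is Chopin.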
Well, one of the main arguments against AI art comes from actual artists who are upset that what essentially they're doing is they're, like you could say, write, draw a paint or create a painting in the style of Frank Frazetta, for instance.
And what it would do is, they would take all of Frazetta's work that he's ever done, which is all documented on the internet, and then create an image that's representative of that.
So you're essentially, in one way or another, you're kind of taking from the art.
AI has invented moves that have now been implemented by humans in a very complex game that they never thought AI was going to be able to master, because it requires so much creativity.
The person who told me that was Elon, and Elon was telling me that this is the reason why you can't have a fully solar-powered electric car, because it's not capable of absorbing that much from the sun with a small panel like that.
He said there's a physical limitation in the panel size.
He thought that there's a limitation of the amount of energy you can get from the sun, period, how much it gives out and how much those solar panels can absorb.
Because just to pull the amount of power that you would need to charge, you know, X million vehicles, if everyone has an electric vehicle by 2035, let's say, then just the amount of change you would need on the grid would be pretty substantial.
So it gives you some answer, and if the answer's not there, it just makes something up.
It's the best answer, but the best answer isn't very good because it doesn't know the answer.
And the way to fix hallucinations is to actually give it more capabilities to memorize things and give it more information so it knows the answer to it.
If you tell it the answer to a question, it will remember that and give you that correct answer.
But these models, we don't know everything.
We'd have to be able to scan in an answer to every single question, which we can't quite do.
It would be actually better if it could actually answer, well, gee, I don't know that.
Like, in particular, like, say, when it comes to exploration of the universe, if there's a certain amount of, I mean, a vast amount of the universe we have not explored.
So if it has to answer questions about that, it would just come up with an answer?
I mean, they do learn from people, and people have ideologies, some of which are not correct, and that's a large way in which it will make things up, because it's learning from people.
So right now, if somebody has access to a good search engine, they will check with the search engine before they actually answer something, to make sure that it's correct.
Because search engines are generally much more accurate.
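The check-before-answering behavior discussed here can be sketched as a simple retrieval gate: answer only when a lookup against a trusted source succeeds, and otherwise admit ignorance rather than hallucinate. The `facts` store and `answer` function below are illustrative placeholders, not any real search engine's API.

```python
def answer(question, knowledge):
    """Answer from a trusted source when possible; admit ignorance otherwise."""
    fact = knowledge.get(question)  # stands in for a search-engine check
    if fact is not None:
        return fact                 # grounded answer
    return "I don't know."          # preferable to making something up

# Tiny illustrative knowledge store.
facts = {"capital of France": "Paris"}

print(answer("capital of France", facts))
print(answer("size of the unexplored universe", facts))
```

The second call returns "I don't know.", which is exactly the behavior Kurzweil says would be better than the model inventing an answer.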
When it comes to this idea that people enter information into a computer and the computer relies on ideology, do you anticipate that with artificial general intelligence that will be agnostic to ideology, that it will be able to reach a point where instead of deciding things based on social norms or whatever the culture is accepted currently, that it would look at things more objectively and rationally?
So for machines, when they start testing medications with machines, how will they audit that?
So the concept will be that you take into account biological variability, all the different factors that would lead to a person to have an adverse reaction to a certain compound, and then you program all the known data about how things interact with the body?
Right, but the question would be, who's in charge of that data, and how does that get resolved? And if artificial intelligence is still prone to hallucinations, and they start using those hallucinations to justify medications, that could be a bit of an issue, especially if it's controlled by a corporation that wants to make a lot of money.
So there's going to have to be a point in time where we all decide that artificial intelligence has reached this place where we can trust it implicitly.
When you look at artificial intelligence and you look at the expansion of it and the ultimate place that it will eventually be, what do you see happening inside of our lifetime, like inside of 20 years?
What kind of revolutionary changes on society would this have?
Yes, well, that's a very big issue, and it's already doing lots of things that make some people uncomfortable.
What we're actually doing is increasing our intelligence.
I mean, right now you have a brain, and it has different modules in it that deal with different things, but really it's able to connect one concept to another concept, and that's what your brain does.
We can actually increase that by, for example, carrying around a phone.
This has connections in it.
It's a little bit of a hassle to use.
If I ask you to do something, you've got to kind of mess with it.
Actually, it would be good if this actually listened to your conversation.
Well, I mean, Google has, I don't know, 60,000, 70,000 programmers, and how many programmers exist in the world?
How much longer is that going to be a viable career?
Because large language models already can code, not quite as good as a real expert coder.
But how long is that going to be?
It's not going to be 100 years.
It's going to be a few years.
So people see it as competition.
I have a slightly different view of that.
I see these things as actually adding to our own intelligence and we're merging with these kinds of computers and making ourselves smarter by merging with it.
And eventually it'll go inside our brain and be able to make us smarter instantly, just like we had more connections inside our own brain.
I wonder if there's a similar chart about consumerism, like just about material possessions.
I wonder if like how much more we're purchasing and creating.
I've always felt like that's one of the things that materialism is one of those instincts that human beings sort of look down upon and this aimless pursuit of buying things.
But I feel like that motivates technology, because the constant need for the newest, greatest thing is one of the things that fuels the creation and innovation of new things.
I mean, all of this additional intelligence that we're creating is something that we use.
And it's just like it came with us.
So we're actually making ourselves more intelligent.
And ultimately, that's a good thing.
And if we have it, and then we say, well, gee, we don't really like this, let's take it away, people would never accept that.
They may be against the idea of general intelligence, but once they get it, nobody wants to give that up.
And it will be beneficial.
The Luddite movement started about 200 years ago, when the spinning jenny came out, and all these people who had been making money before the spinning jenny were against it, and they would actually destroy these machines at night.
And they said, gee, if this keeps going, all jobs are going to go away.
And indeed, the work people had been doing before the spinning jenny, that did go away.
But we actually made more money because we created things that didn't exist then.
We didn't have anything like electronics, for example.
And as we can actually see, we make 10 times as much in constant dollars as we did 100 years ago.
And if you were to ask, well, what are people going to be doing?
You couldn't answer it because we didn't understand the internet, for example.
What I think is that human beings are some sort of a biological caterpillar that makes a cocoon that gives birth to an electronic butterfly.
I think we are creating a life form and that we're merely conduits for this thing and that all of our instincts and ego and emotions and all these things feed into it.
Materialism feeds into it.
We keep buying and keep innovating.
And technology keeps increasing exponentially and eventually it's going to be artificial intelligence and artificial intelligence is going to create better artificial intelligence and a form of being that has no limitations in terms of what's capable of doing.
And capable of traveling anywhere, not having any biological limitations in terms of...
I mean, if you have a job doing coding, and suddenly they don't really want you anymore because they can do coding with a large language model, it's going to feel like it's competition.
Well, people that make movies, people that actually film things with cameras and use actors are going to be very upset.
So this.
This is all fake.
Which is insane.
Beautiful snowy Tokyo city is bustling, the camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls.
Gorgeous sakura petals are flying through the wind along with snowflakes.
And this is what you get.
I mean, this is insanely good.
The variability, like just the way people are dressed.
If you saw this somewhere else, look at this, a robot's life in a cyberpunk setting.
If you saw this, You would say, oh, they filmed this.
But just look at what they're able to do with animation and kids' movies and things along those lines.
I mean, no one took into consideration the idea that kids are going to be cheating on their school papers using ChatGPT, but my kids tell me that's a real problem in school now.
I mean, once we have an ability to emulate everything that humans can do, and not just one human, but all humans, and that's only like 2029. That's only five years from now.
When you think about the concept of integration and technological integration, when do you think that will start taking place and what will be the initial usage of it?
Like, what will be the first versions and what would they provide?
Some people feel that we can actually understand what's going on in your brain and actually put things into your brain without actually going into the brain with something like Neuralink.
Because we'll be able to, based actually on this chart and also the increase in the ability of software to also expand, we'll be able to multiply our intelligence a million fold and we'll be able to put that inside of our brain.
Not necessarily, but what people do say is that technology is too invasive and that it's too much a part of my life and I'd like to sort of have a bit of an electronic vacation and separate from it.
And there's a lot of people that I know that have gone to...
I'm saying some people just like being a human the way humans are now.
Because one of the complications that comes with the integration of technology is what we're seeing now with people.
Massive increases in anxiety from social media use, being manipulated by algorithms, the effect that it has on culture, misinformation and disinformation and propaganda.
There's so many different factors that are at play now that make people more anxious and more depressed statistically than ever.
But if we get to the point where one superpower has AI, artificial general intelligence, and the other one doesn't, how much of a significant advantage would that be?
But my question was, if there's a race to achieve AGI, how close is this race?
Is it neck and neck?
Who's at the lead?
And how much capital is being put into these companies that are at the lead?
And whoever achieves it first, if that is under the control of a government, it's completely dependent upon the morals and ethics of that government.
What is the constitution?
What if it happens in China?
What if it happens in Russia?
What if it happens somewhere other than the United States?
And even if it does happen in the United States, who's controlling it?
I mean, the knowledge of how to create these things is pretty widespread.
It's not like somebody can just capitalize on a way to do it and nobody else understands it.
The knowledge of how to create a large language model, or how to create the type of chips that would enable you to create this, is actually pretty widespread.
So do you have any concern whatsoever in the idea that AI gets in the hands of the wrong people?
So when it first gets implemented, that's the big problem: before artificial general intelligence really exists, it doesn't, and then it does, and who has it?
And then once it does, can that AGI stop other people from getting it?
Can you program it to make sure?
Can you sabotage grids?
Can you do whatever you can to take down the internet in these opposing places?
Could you inject their computations with viruses?
What could you do to stop other people from getting to where you're at if you have an infinitely superior intelligence?
And if you were to ask people right after we used two atomic weapons within a week, 80 years ago, what's the likelihood that we're going to go another 80 years and not have that happen again?
But would artificial general intelligence in the control of the wrong people negate that mutually assured destruction that keeps people from doing things?
Obviously, we did drop bombs on Hiroshima and Nagasaki.
We did indiscriminately kill who knows how many hundreds of thousands of people with those weapons.
We did it.
And if human beings were capable of doing it because no one else had it, then if artificial general intelligence reaches that sentient level and is in the control of the wrong people, what's to stop them from doing it?
There's no mutually assured destruction if you're the one who's got it.
You're the only one who's got it.
My concern is that whoever gets it could possibly stop it from being spread everywhere else and control it completely.
And then you're looking at a completely dystopian world.
I know that's a ridiculous abstract concept, but if heaven is real, if the idea of the afterlife is real, and it's the next level of existence, and you're constantly going through these cycles of life, what if you're stepping in and artificially denying that?
So things like that are a reason to continue, so that we can create that and create our own ability to continue to exist.
You talk to people and they say, well, I don't really want to live past 90 or whatever, 100. But in my mind, if you don't exist, there's nothing for you to experience.
And we've actually seen who it is that would want to take their lives.
People do take their lives.
If they are experiencing something that's miserable, if they're suffering physically, emotionally, mentally, spiritually, and they just cannot stand the way life is carrying on, then they want to take their lives.
Otherwise, people don't.
If they're enjoying their lives, they continue.
And people say, I don't want to live past 100. But then when they get to be 99.9, they don't want to disappear unless they're suffering.
That's what's interesting about the positive aspects of AI. Once we can manipulate human neurochemistry to the point where we figure out: what is causing depression?
What is causing anxiety?
What is causing schizophrenia in a lot of these people?
Think about how many people do take their lives and with this technology would not just live happily but also be productive and also contribute to whatever society is doing.
I think what a lot of people are terrified of is that these people that are creating this technology, there's oversight, but it's oversight by people that don't necessarily understand it the way the people that are creating it do.
And they don't know what guardrails are in place.
How safe is this?
Especially when it's implemented with some sort of weapons technology, you know, or some sort of a military application, especially a military application that can be insanely profitable.
And the motivations behind utilizing that are that profit.
And then we do horrible things and somehow or another justify it.
As long as it's a legitimate democracy that's not controlled by the military-industrial complex or the pharmaceutical industry or whoever puts the people that are in elected places, who puts them in there?
How do they get funded?
And what do they represent once they get in there?
Are they there for the will of the people?
Are they there for their own career?
Do they bypass the safety and the future of the people for their own personal gain, which we've seen politicians do?
There's certain problems with every system that involves human beings.
This is another thing that technology may be able to do.
One of the things, if you think about the worst attributes of humans, whether it's war, crime, some of the horrible things that human beings are capable of.
Imagine that technology can find what causes those thoughts and behaviors in human beings and mitigate them.
You know, I've joked around about this, but if we came up with something that would elevate dopamine just 300% worldwide.
Maybe that's just a byproduct of our monkey minds and that one day we'll surpass that and get to this point of enlightenment.
Enlightenment seems possible without technological innovation, but maybe not.
I've never really met a truly enlightened person.
I've met some people that are pretty close.
But if you could get there with technology, if technology just completely elevated the human consciousness to the point where all of our conflicts become erased.
Just for starters, you could actually live longer. Quite aside from the motivations of people, most people die not because of people's motivations, but because our bodies just won't last that long.
And a lot of people say, you know, I don't want to live longer, which makes no sense to me.
Why would you want to disappear and not be able to have any kind of experience?
Well, I think some people don't think you're disappearing.
I mean, there is a long-held thought in many cultures that this life is but one step.
And that there is an afterlife and maybe that exists to comfort us because we deal with existential angst and the reality of our own inevitable demise or maybe it's a function of consciousness being something that we don't truly understand and what you are is a soul contained in a body and that we have a very primitive understanding of the existence of life itself and of the existence of everything.
I don't see the evidence of it either, but it's a concept that is not – look, just when you start talking to string theorists and they start talking about things existing and not existing at the same time, particles in superposition, you're talking about magic.
You're talking about something that's impossible to wrap your head around.
Even just the structure of an atom.
Like, what?
What's that?
What's in there?
Nothing?
How much of it is space?
The entire existence of everything in the universe seems preposterous.
But it's all real.
And we only have a limited grasp of understanding of what this is really all about and what processes are really in place.
But if you look at people: if somebody gets a disease and it's kind of known they can only live like another six months, people are not happy with that.
If you can take human consciousness and duplicate it, much like you could duplicate your phone, and you make this new thing, what does that thing feel like?
If we figure out that biological life is essentially a kind of technology that the universe has created, and we can manipulate that to the point where we understand it, we get it, we've optimized it, and then replicate it.
Physically replicate it.
Not just replicate it in form of a computer, but an actual physical being.
Yeah, seven-armed people is cool because it's like, you know, maybe five on one side, two on the other.
No, I'm just curious as to how much time you've spent thinking about what this could look like.
And I don't think it's going to be as simple as, you know, it's going to be Ray Kurzweil, but Ray Kurzweil as like a 30-year-old man 50 years from now.
I think it's probably going to be, you're going to be all kinds of different things.
You could be kind of whatever you want.
You could be a bird.
I mean, what's to stop?
If we can get to manipulate the physical form and we can take consciousness and put it into a physical form...
I just wonder how much time you've spent thinking about what this world looks like with the full implementation of the kind of exponential growth of technology that would exist if we do make it to 2069. Well, I did write a book, Danielle, and this young girl has fantastic capabilities, and no one really can figure out how she does this.
She actually takes over China at age 15, and she makes it a democracy, and then she actually becomes president of the United States at 19. She has, of course, to create a constitutional amendment so that she can become president at 19. That sounds like what a dictator would do.
Right, but unlike a dictator, she's very popular and she writes very good music.
If you look at Steven Pinker's work, right, you scale it from hundreds-plus years ago to today, and things generally seem to be moving in a better direction.
He just looks at the data and says it's gotten better.
What I try to do in the current book is to show how it's related to technology, and as we have more technology, we're actually moving in this direction.
When you think about the idea of life on Earth and that this is happening and that we are on this journey to 2045 to the singularity, do you consider whether or not this is happening elsewhere in the universe or whether it's already happened?
What if it gets to the point where artificial intelligence gets implemented and then that becomes the primary form of life and it doesn't have the desire to do anything in terms of like galactic engineering?
Right, but what if it's like us, but it gets to the point where it becomes artificial intelligence, and then it doesn't have emotions, it doesn't have desires, it doesn't have ambitions, so why would it decide to expand?
Well, we'd have to program it into it, but it would probably decide that that's foolish and that those things have caused all these problems, all the problems in human race.
What's our number one issue?
War.
What is war caused by?
It's caused by ideologies.
It's caused by acquisition of resources, theft of resources, violence.
My point is that if artificial intelligence recognizes that the problem with human beings is these emotions, and a lot of it is fueled by these desires, like the desire to expand, the desire to acquire things, the desire to... Well, the emotion is positive.
But if it gets to the point where artificial intelligence is no longer stimulated by mere human creations, creativity, all these different things, why would it even have the ambition to do any sort of galaxy-wide engineering?
It is based on us until it decides it's not based on us anymore.
That's my point.
If it realizes that, like if we're based on a very violent chimpanzee, and we say, you know what, there's a lot of what we are because of our genetics that really are a problem.
And this is what's causing all of our violence, all of our crime, all of our war.
If we just step in and put a stop to all that, will we also put a stop to our ambition?
But as soon as you become something different, why would it even have the desire to expand?
If it was infinitely intelligent, why would it even want to physically go anywhere?
Why would it want to?
What's the reason for our motivation to expand?
What is it?
It's human.
The same humans that were tribal creatures that roamed, the same humans that stole resources from neighboring villages.
This is our genes, right?
This is what made us, that got us to this point.
If we create a sentient artificial intelligence that's far superior to us, and it can create its own version of artificial intelligence, the first thing it's going to engineer out is all these stupid emotions that get us in trouble.
If it just can create happiness and joy from programming, why would it create happiness and joy through the acquisition of other people's creativity, art, music, all those things?
And then why would it have any ambition at all to travel?
I'm saying that if you wanted to program away some of the issues that human beings have in terms of what keeps us from working with each other universally all over the globe, what keeps us from these things?
And if you looked at greed and war and crime and all the problems with human beings, a lot of it has to do with these biological instincts, these instincts to control things, these built-in genetic codes that we have that are from our ancestors.
But when we get there, You think we will be a better version of a human being and we will be able to experience all the good, the positive aspects of being a human being?
The art and the creativity and all these different things?
Well, I mean, when I was born, we created nuclear weapons, and very soon we had hydrogen weapons, and we have enough hydrogen weapons to wipe out all humanity.
We still have that.
That didn't exist like 100 years ago.
Well, it did exist 80 years ago.
Yeah.
So that is something that concerns me.
And you could do the same thing with artificial intelligence.
It could also create something that would be very negative.
Yeah, if someone has Pegasus, they can hack into your phone easily.
Not hard at all.
The new software that they have, all they need is your phone number.
All they need is your phone number, and they can look at every text message you send, every email you send, they can look at your camera, they can turn on your microphone.
If you didn't have a phone, okay, and you were at home having a conversation, a sensitive conversation about maybe you didn't pay as much taxes as you should, there's no way anybody would hear that.
But now your phone hears that.
If you have an Alexa in your home, your Alexa hears you say that.
People have been charged with crimes because Alexa heard them committing murder.
But you recognize the financial incentive in not doing that, right?
Because a company like Google, for instance, that's where they make the majority of their money, from data, or a lot of their money, I should say.
If people agree that the benefit of overcoming that outweighs the financial loss that you would have from not having access to everybody's data and information.
Well, I mean, what you're giving up is a certain type of data that you want, a certain type of capability that you could buy, and so they can advertise that to you and people feel that that's okay.
But my point is, as this technology scales upward, when you have greater and greater computational power, and then you're also integrated with this technology.
How does that keep whatever group is in charge from being able to essentially access the thing that is inside your head now?
If you have a technology that's going to be upgraded and you're going to get new software and it's going to keep improving as time goes on, what kind of privacy would be involved in that if you're literally having something that can get into your brain?
And if most people can't get into your brain, can intelligence agencies get into your brain?
Can foreign governments get into your brain?
What does that look like?
I'm not looking at this as a negative.
I'm just saying, if you're just looking at this completely objectively, what are the possibilities that this could look like?
I'm trying to paint a weird picture of what this could look like.
What do you think about the potential for a universal language?
Do you think that one of the things that holds people back is, you know, the Rosetta Stone, the Tower of Babel, the idea that we can't really understand what all these other people are saying.
We don't know how they think.
If we can develop a universal worldwide language through this, do you think it's feasible?
Because what we have, all of our language is pretty flawed.
Ultimately, I mean, we use it, but how many versions of "your" do we have?
There's a bunch of different weird things about language that's imperfect because it's old.
It's like old technology.
If we decided to make a better version of language through artificial intelligence and say, listen, instead of trying to translate everything, now that we're super-powerful, intelligent beings that are enhanced by artificial intelligence, let's create a better, more superior, universally adopted language.
Well, we would be human beings, and in my mind the human being is someone who can change both ourselves and our means of communication to enjoy better means of expressing art and culture and so on.
If you mean that we're living in a simulation, well, first of all, some people believe that we can express physics as formulas.
And that the universe is actually able to...
It's capable of computation, and therefore everything that happens is a result of some computation.
And therefore the universe is capable of...
We are living in something that is computable.
And there's some debate about whether that's feasible, but that doesn't necessarily mean that we're living in a simulation.
Generally, if you say we're living in a simulation, you assume that some other world exists, and teenagers in that world like to create simulations.
So they created a simulation that we live in, and you want to make sure that they don't turn the simulation off, so it'd have to be interesting to them, and so they keep the simulation going.
But the whole universe...
Could be capable of simulating reality and that's what we live in and it's not a game, it's just the way the universe works.
I mean, what would the difference be if we lived in a simulation?
If we can and we're on our way to creating something that is indiscernible from reality itself, I don't think we're that far away from that, many decades away from having some sort of a virtual experience that's indiscernible from regular reality.
It's insane how beautiful it is and how incredible the capabilities are.
So if you live in that, that's kind of a simulation. Right, but as you expand that further and you get to the point where you're actually in a simulation, and your life is not this carbon-based, biological life of feeling and texture that you think it is, but you're really a part of this thing that's been created.
This is where it gets real weird with like probability theory, right?
Because they think that if a simulation is possible, it's more likely it's already happened.
I mean, there's really an unlimited amount of things that we could simulate and experience.
So it's hard to say we're living in a simulation, because a lot of what we're doing is living in a computational world anyway, so it's basically being simulated.
And if you were some sort of an alien life form, wouldn't that be the way you go instead of like taking physical metal crafts and shooting them off into space?
Wouldn't you sort of create artificial space?
Create artificial worlds?
Create something that exists in the sense that you experience it.
And it's indiscernible to the person experiencing it.
So the galaxy-wide engineering is the main thing that you look at, to the point where you say, I don't see any evidence for life outside.
Well, there's definitely no real evidence that we've seen, other than these people that talk about UFOs, UAPs and pilots and all these people that say that there's these things...
Another way of looking at it, I mean, we have mice, and they have experiences.
It's a limited amount of complexity because that particular species hasn't really evolved very much.
And we'll be going beyond what human beings can do.
So, to ask a human being what it's like to be a human being in the singularity, it's like asking a mouse, what would it be like if you were to evolve to become like a human?
Now, if you ask a mouse that, it wouldn't understand the question, it wouldn't be able to formulate an answer, it wouldn't even be able to think about it.
And asking a current human being what it's going to be like to live in the singularity is a little bit like that.
Well, that's why I'm excited about it also because it basically means more intelligence and we'll be able to think about things that we can't even imagine today.
It's very important that people have access to this kind of thinking, and you've dedicated your whole life to this. The book is Ray Kurzweil's The Singularity Is Nearer: When We Merge with AI. It's available now.