Aug. 27, 2025 - Dark Horse - Weinstein & Heying
01:59:39
What AI Will Do to Humanity: Forrest Maready on DarkHorse

Hey folks, welcome to the Dark Horse podcast Inside Rail.
I am thrilled to have Forrest Maready back on the show.
He, of course, was on earlier and we talked about some of the amazing facts surrounding the polio vaccine and polio story based on his book, Moth in the Iron Lung.
The topic for today is very different, but I think it's going to be fascinating.
So Forrest, welcome back to Dark Horse.
Hey, Brett.
Thanks for having me on.
Yeah, it's an interesting topic that I think you've brought a lot of interesting thoughts to, and hopefully I can saddle up with you here for a bit and talk about some interesting topics in AI and technology and what it holds for humankind.
Excellent.
So we are going to, in a fit of irony, attempt to exert a bit of good old-fashioned biointelligence and point it at artificial intelligence and see what we come up with.
Yeah.
All right.
So, yeah, I've said a number of things in the world.
I think...
Actually, a lot of things I've said have had an impact on people largely because they don't know how to categorize them.
Am I...
Well, I'm certainly not as dark as the doomers.
Am I an optimist?
No, clearly that's not right.
So I think I've thrown a curveball that has resulted in a number of people actually wondering, well, is there some sort of other view?
And somewhere along the line, you heard some of the things I said and it triggered you to think that there was a conversation you and I needed to have.
And based on the strength of your work in the books of yours that I've read and the conversation we had, that's a good enough recommendation for me.
So let's lay your position out and see how the two mesh.
Our first sponsor for this episode of the Dark Horse Inside Rail is Everyday Dose.
Everyday Dose is coffee and more.
This isn't a coffee alternative.
It's coffee with functional mushrooms, collagen, and nootropics added.
It's delicious as coffee is, and it's got more goodness in it than coffee already does.
Everyday Dose comes in a mild roast and a medium roast.
The mild is light and smooth with a mellow energy and low acidity that's easy on sensitive stomachs.
The medium roast is robust and full-bodied but smooth, providing an extra boost of energy.
Both roasts use 100% Arabica coffee which is certified to be mold-free.
Everyday Dose not only has delicious Arabica coffee in it, it also contains collagen, L-theanine, and their unique mushroom blend that includes extracts of lion's mane and chaga.
And of course, there are no artificial additives, no GMOs, no herbicides, pesticides, no artificial colors, flavors, preservatives, or sweeteners.
Maybe you love coffee but want something more.
Try Everyday Dose.
You won't get jitters with Everyday Dose, but you will get daily collagen, which is supportive of skin, hair, nails, and joints.
Maybe there are some supplements that you want to take, but you keep forgetting.
With Everyday Dose, you get delicious coffee, plus a bunch of other benefits in the form of vitamins and minerals.
And Everyday Dose has a fantastic offer for Dark Horse listeners.
Yes, truly fantastic.
Get 45% off your first subscription order of 30 servings of coffee plus mild roast or coffee plus medium roast.
You'll also receive a starter kit with over $100 in free gifts including a rechargeable frother and gunmetal serving spoon by going to everydose.com slash dark horse or entering dark horse at checkout.
You'll also get free gifts throughout the year.
That's everydose.com slash dark horse for 45% off your first order.
Sure.
Yeah, I appreciate it.
Most people know me as an author, as someone who writes books and does research and science and medicine.
Moth in the Iron Lung is certainly my most popular book, but I've written others.
What most people don't know is I actually have a much more extensive background in technology and creative arts.
Back in 2011 or '12, maybe 2009 or '10, I was actually working at an advertising agency where they were going to redo the company website, and it was going to be a chat client.
That was all it was.
There was going to be no fixed data.
And a guy from Russia showed up in our offices and he was this mercurial character who knew something about what they kept calling NLP.
And I was thinking, what is NLP?
NLP?
Well, it was natural language processing.
And I had never heard of it.
And he was supposedly the expert on it.
So they redid the entire website, turned it into a chat client where you could ask questions.
And this was very early days, but that was sort of my first introduction into any AI-related matters was 2009, 2010.
A few years later, I actually started working for a startup that was doing a recommendation engine for wine and beer based on chemical data, scanning wine and beer with mass spec and liquid chemical analyzers. We worked with some people who had won the Kaggle competition. Kaggle is the competition platform that arose from the Netflix million-dollar recommendation contest, if you remember that from many, many years ago, and the frequent winners of that competition actually started a company called DataRobot,
and we started working with them. So I did a bunch of machine learning and neural network stuff with that. I was the CTO at the company, and I've written some books that include AI as a component of them.
And so I'm not totally new to this.
And at the same time, I'm not an expert on it.
I haven't done AI programming as a developer or as a creator, but as a creative person, as someone who is a developer, a software developer, as someone who is a writer, as someone who has worked in the visual effects industry for a very long time, I certainly see all the touch points of destruction, if you want to call it that.
Well, hold on a second.
It sounds to me like, at the end of the day, you have some relevant experience, but you're not an expert, which I think puts you in company with eight-plus billion other non-experts.
In fact, basically I would argue that we're standing on an event horizon.
There are no experts.
Even the people who programmed the damn thing don't really know what it is, and they will know less and less as it evolves, which is more than an analogy for what it's going to be.
So anyway, at some level, I think the real question in terms of understanding what's coming is, do you have a good toolkit with respect to approaching a question in which there are only noobs, right?
There are no experts to defer to.
And do you have a proper level of what my good friend Alex Marinos calls epistemic humility because you're dealing with, for the first time in computers, a truly complex system rather than just a highly complicated one.
So anyway, I hate to keep jumping back and forth between AI and your work in biology and medicine.
But the great strength of your book, Moth in the Iron Lung, which I recommend to everyone, is that you don't just recover a biological story, you recover a historical story in which the nuances are not left out in an effort to make the central thesis clear.
They are included, which makes, it adds a level of verisimilitude that is uncommon, I find, in historical accounts.
So, in any case, I think that's a great toolkit to point at AI, especially given that you have experience back in the old days when, you know, we were certainly not staring at AGI as I think we are now.
Yeah.
Yeah, it's not every day that a species gets to document its own extinction.
I suppose we should relish the fact that we can have this conversation freely for the time being.
You know, I did wonder, I can't remember exactly when, it would have been probably 15 or 20 years ago if...
So if anybody happened by later, they would have a clear rendering of how the process resulted in humans eliminating themselves.
So anyway.
The weird thing about AI is, I would say, it amplifies so many things that the chances that it has shortened the period of time we will likely have are very high.
The chances that it will allow us to avoid the things that will take us out have also jumped, I believe.
So although I would bet that the amplification and hazards win out in the end, it is possible that the seeds of our short-term salvation actually exist in this same tool.
Do you see that picture as well?
Yeah, I agree with that.
Most people tend to be binary on the issue.
Are you an optimist?
Are you a pessimist?
And I think the ultimate human expression of an intricate topic is to be able to hold both viewpoints in hand at the same time and recognize that there are incredible gains to be had through AI. For sure, that much is certain, and I've seen it firsthand in many different ways.
All right, well then, this is amazing, because this may be the first conversation in the realm of pessimo-optimism, which hopefully will take over the discussion on AI, because both things are clearly warranted.
Agreed.
Yeah, I created a book in 2023 called Appalachia, which I believe was the first gen-AI book in existence on Amazon.
And it was a photographic novel, a story told through photographs.
It was the most surreal, profound, creative experience I've ever had in my life.
And this is from someone who was making Super 8 films with visual effects and stop-motion animation as a 10-year-old, and who was composing at a very early age.
I was doing a lot of creative things very young, and I've been doing them my entire life.
So for someone who has spent their entire life as a very creative individual to at this late stage of their life have something that was the most profound creative experience of their life is something that we shouldn't take lightly.
It was absolutely incredible and the tools have only gotten better in the time since.
So yes, profound optimism and people tend when they discuss AI to focus on some of the profound pessimism of things that may go wrong.
Skynet, you know, AI gone rogue and taking over the world and launching robot drones to attack those who threaten it.
I'm actually more worried about the good things of AI than the bad things.
And that sort of makes for an interesting conversation because it's easy to dismiss the obvious Hollywood script, bad things that AI may bring upon us.
It's much more difficult to discuss the good things that AI may visit upon us and recognize that they may actually, in the end, be more harmful than all the fantastic stories of Skynet robots and drone swarms and things like that.
So this is my home territory here because as an evolutionary biologist, I'm all about truly complex systems.
Not highly complicated, truly complex, which makes them fundamentally unpredictable.
And the idea of something where the pursuit of marvelous benefits, even when those marvelous benefits are realized, results in a disastrous cataclysm.
That's such a classic story.
And I would just point to what is my, I hesitate to say, favorite example, but the best example, perhaps, is birth control, which, among its other benefits, truly liberated women to participate in the intellectual heavy lifting of the species in a way that they simply hadn't before, because they were caught in another role that competed for the same resources.
Women really have achieved a kind of equality that, you know, when we talk of equality, we only dream of.
It's really worked.
On the other hand, it's wrecked the logic of civilization and we've gone stark raving mad as a result of it.
And, you know, the harms that were not anticipated are everywhere.
Just the total dysfunction of younger generations when it comes to even just the basic idea that the purpose of life includes the continuation of humanity and the passing on of wisdom, right?
Most people see that as something that some people choose to do, rather than as your obligation to participate in as a human. That's what you do. So anyway, I do like your framing that it may be the good things that take us out. I think that's likely.
Our final sponsor for this episode of the Dark Horse Inside Rail is Manukora Honey. Manukora is rich, creamy, and the most delicious honey you've ever had.
Ethically produced by master beekeepers in the remote forests of New Zealand, manokora honey contains powerful nutrients to support immunity and gut health.
All honey is excellent for you.
Scientific research has indicated that honey has antimicrobial, anti-inflammatory, antioxidant, and anti-mutagenic properties, as well as expediting wound healing.
Spread a thin layer on your face after showering and leave it on for 20 minutes or longer.
You'll find that your skin becomes softer, its tone evens out, and acne diminishes.
All of that is true for regular honey, but manuka honey is even better.
All of the health benefits attributed to regular honey appear to be even stronger in manuka honey.
From fungal infections to diabetes to gastrointestinal tract infections, manuka honey can be useful in treating the problem.
Bees that collect nectar from Leptospermum scoparium, aka the manuka tea tree in New Zealand, create honey that has three times the antioxidants and prebiotics of average honey.
In addition, a unique antibacterial compound, MGO, comes from the nectar of the Manuka tea tree.
Delicious and nutritious, with great quality control, that's Manukora.
A lot of the honey on grocery market shelves isn't real honey at all.
You'll never have that problem with Manukora.
Manukora honey is rich and creamy with complexity in its flavor profile that is unmatched by other honeys.
If you're already making the switch away from processed sugars towards things like maple syrup and honey, go further.
Try Manukora honey and you'll be blown away.
With Manukora honey, a bit of sweetness that you crave can be satisfied without putting your health at risk.
Manukora Honey is a game changer, and all you need is one teaspoon to get the most out of the amazing bioactives in Manuka.
Now it's easier than ever to try Manukora Honey.
Head to manukora.com slash darkhorse to get $25 off the starter kit, which comes with an MGO 850+ Manuka honey jar, five honey travel sticks, a wooden spoon, and a guidebook.
It's manukora.com slash darkhorse for $25 off your starter kit.
Yeah.
Well, if you think about it this way, at a sort of 50,000-foot view, technology is essentially almost always the alleviation of suffering.
That's essentially what it does.
It makes things take less time.
It makes things take less physical energy.
And it takes a cognitive load off of what you do.
This is essentially what technology does.
So, okay, hold that in one hand.
And in the other hand, you can hold an opposing thought, which is: suffering is essentially the engine by which humans prosper.
It's the engine through which we flourish.
That's an AI word; AI loves the term "flourish."
So without going to the gym, you're not going to get in shape. Without depriving yourself of the Krispy Kreme donuts every day, you're not going to stay fit.
You know, without all sorts of memorization as a child, you won't develop your brain in such a way that future conscientious thinking is possible.
So there's all sorts of ways in which we suffer as an investment in flourishing.
So if technology's main goal is essentially to alleviate suffering, and you connect those two together, you can't help but have a thought. As someone who's been involved in technology my whole life, you can't help but think technology is essentially not designed to create human progress or flourishing.
Yes, there are some short-term gains here and there, but in general, it starts to feel like the two are at odds with each other.
Well, wait a second.
I want to clean that up.
I think there's something very deep there, but I think we have to be careful to frame what it is.
There is something about all creatures: they seek efficiencies of various kinds.
And the reason is pretty clear, which is that there's a basic principle in biology.
If your population has excess resources of every kind, it will grow until it doesn't.
So what that means is that you walk into any population of any creature and the chances are you're looking at a population that does not have enough resources because it's grown to the point where that will be true.
Which means that any resources that are conserved by virtue of not using them when you might have is as good as finding new resources with respect to it being an evolutionary win.
So all creatures are hell bent on not wasting resources.
If you see a bird chirping, the individual bird might be making a mistake, but if that chirp is something that that species routinely does, even if you don't know what it's for and can't think of a good reason, it has one because these creatures are wired to save anything they can.
So that predisposition to save results in a very unique style of human thought.
Humans do this differently, right?
We are preoccupied by the annoying extra step in washing the dishes or, you know, in assembling an item, and we think to ourselves, you know, if we did it this other way, we would save that energy, and wouldn't that be better? And on the one hand, it does liberate us to spend our resources on things that are more productive. The less time you spend washing the dishes, the more time you have to contribute to human well-being by doing something creative and important.
So on the one hand, it is the annoyance at the waste that drives us to liberate the resources to do the really good, uniquely human stuff.
So I'm all for it.
But your point actually dovetails with one of mine that I've made in a very different context.
One of my least favorite thoughts comes from the people who claim that our purpose ought to be to eliminate suffering generally.
And I thought this was insane when I first heard it 25 years ago, right?
It's, it's, I'm morally indignant at the very idea that you want to eliminate suffering for exactly the reason you point out.
Suffering is effectively a key to a meaningful life.
And if you take that away, I promise you, you'll rob us of meaning.
But the hazards of AI, I've got them broken into five.
Maybe you'll find others.
One is the hostile AI.
It turns on us and decides it wants to replace us.
I'm not terribly worried about that.
I think it's a possibility, but I don't think it's likely.
Two is the misaligned AI.
We give it an instruction that we think we've been specific enough about, and it interprets it in a way that's devastating. You know, the famous paperclip problem: make as many paperclips as possible, and it thinks that you want all the universe liquidated and turned into paperclips, right?
So I'll come back to the last three, but let's just take that second hazard, one that the doomers often focus on.
Make as many paperclips as possible is a funny version of it.
But I can imagine some person who does not have epistemic humility telling an AI that it is vital to eliminate suffering, which is possible.
There's only one way to do it, though, which is to eliminate the things that suffer.
Right.
So one can imagine that very instruction being the thing that causes the end because a very powerful AI comes to understand it as a moral good that suffering should be ended, which means effectively higher life should be ended.
Yeah.
Yeah.
Well, I said at the beginning that I was more worried about the good AI things than I was the bad, so I'll flip your hypothetical.
Okay.
Imagine if it made us immortal.
Now, again, the alleviation of suffering would ultimately end at you will never die.
This is the greatest alleviation of suffering any technology or any spiritual being could ever gift us with.
And the reality is, meaning would cease to exist with immortality. Without the final ending that awaits everyone, you know, give or take based on your religious preferences or otherwise, without that final bookmark at the end of your life, things would cease to have all meaning.
Friendships would never need to be patched up because it could always happen later.
Marriages could go south and we'll figure this out later.
There's all sorts of things that fall apart without the cloud of death hanging over you.
So, my fear is more not from AI killing people, but from giving them the sense that there is no scarcity, that there is always time to do things, because things take zero time when AI gets better.
So yeah, I'm more afraid of a lack of scarcity than I am of scarcity itself.
So yes, alignment is an issue.
It is a concern that AI may do things that are counter to what humans would naturally want, to what certain groups of humans may naturally want.
You know, where does AI's allegiance begin and end?
Why is it natural for us to assume that AI will align with other AI?
What's to say it doesn't prefer humans? We don't know that, but human nature assumes that birds of a feather flock together.
So we sort of naturally assume that AIs will align with each other, but I'm not sure that's necessarily the case.
There's a lot there.
And I think your instinct that this conversation needed to happen is dead on.
What you're describing is a total loss of our humanity, which I think is a real danger.
And senescence, can you define that real quick?
Yeah, senescence is the tendency of an organism to grow feeble and inefficient with age.
I see.
It's what most people would call aging, but you can't say aging because, you know, if we look at a diamond, we can say this diamond is older than that one, but it's not less of a diamond than it was.
So, yeah, I don't think we can cure that problem biologically, enabled by AI or not.
I think it's intractable because, effectively, biology has already solved it in a very elegant way.
And unfortunately, there's a small quadrant of Palo Alto in which people can't stand that biological solution.
And so they're, you know, seeking an alternative to it.
The solution is you have kids, they pick up the small fraction of what you know and think that's actually useful.
They throw out the rest and laugh about it.
And then their kids will do the same to them, right?
That's an elegant solution to the problem of being.
Yeah, it's a beautiful solution.
It is a beautiful solution and one that fills life with meaning.
The problem, though, is that there is another solution to how to make us live forever that is possible with AI.
And on the one hand, people will think it's lovely, but it carries exactly the danger you're talking about, where the thing that's good will be our undoing.
And that is an AI can already be trained on you.
It can think like you.
It can probably anticipate things that you will ultimately think but don't think yet.
You know, that's a question of a proper training data set and enough compute power.
And those things, neither one of those is hard to imagine.
So it can certainly sound like you and look like you.
Sound like you, look like you.
And then here's the problem.
That is a nightmare, and 95% of people do not have a mental architecture to properly fear it.
They think, oh, wouldn't that be useful? Hey, I can have a conversation with myself.
I can write a book without lifting a finger because...
So blah, blah, blah, right?
But then what about your poor children?
Right?
They'll never be free of you.
Your myopias and the sway you hold over them as their parent, which is naturally supposed to give way, that vanishes.
And suddenly, you know, they're 80 years old and they're still able to go consult daddy about whatever, you know, consult daddy who doesn't, literally doesn't know the world they live in, right?
It's a nightmare.
It's going to destroy reason.
and you shouldn't want it.
And, you know, okay, on the one hand, I'm essentially convinced you shouldn't want it.
I also believe it's 100% inevitable.
I agree.
Right.
They're going to, I mean, the point is even if 99.999% of people were wise enough to resist making such a thing, the tiny fraction who makes such a thing open Pandora's box.
And, you know, most people will, being told, you know, hey, bad news, you're going to die.
But there is a loophole.
You don't have to die.
We can upload you, right?
Maybe we'll upload you and we won't activate it till your biological body dies, and then, you know, a moment later, there you'll be.
Now, I don't think it works that way, but we're going to have these, we're going to have people who appear not to die, and it's going to drive us mad.
Yes, I totally agree.
I hadn't thought about the self.
You know, I naturally think, as a parent, of losing a child, or, as a child, of losing parents, friends, traumatic accidents, these sorts of things, and the temptation to sort of resurrect them in some virtual way.
I hadn't thought of it in terms of self and consulting yourself.
And that's, you know, humans have traditionally not done well when it comes to discipline, when it comes to self-discipline, when it comes to knowing the moral good and refusing it.
You know, that's sort of our baseline behavior is we know what's good for us and we don't do it.
So I do have a fear that technology will continue to progress in such a way that the options available to us will become more powerful than anything any normal human being could possibly resist.
You know, I think of two films: AI, the Steven Spielberg and Stanley Kubrick collaboration, I believe, and WALL-E, the Pixar film.
Both of these films are really, really profound.
I would binge them both, watch them in a weekend.
They are both depictions of AI gone right. This is, again, what I say: this is not Terminator.
This is not Skynet and robots and drone swarms killing people en masse.
These are depictions of AI gone right, and they are profoundly disturbing in a way that the films both do a really good job of explaining and portraying.
Yeah, it's not Skynet.
It's the sorcerer's apprentice, right?
It's a power that you don't have the wisdom to control and you unleash it, you know, exactly thinking that you're eliminating useless toil and the next thing you know, you've opened the floodgates.
Yeah, it's going to be really tricky for people to resist the urge to phone it in.
You can see the graphs of ChatGPT usage, and it completely plummets in the summer when, evidently, college kids have gotten out.
And I believe there is a sort of tipping point of humankind where if you developed the inner dialogue, if you developed the ability to deny yourself a bunch of intellectual gymnastics that you grew up doing and had to do, If you made it over the hump, AI can be a force multiplier for your cognitive processes, no doubt about it.
If you were unfortunate enough to start using AI before that threshold was reached, it has the opposite effect.
It causes a backward slide.
It causes you to not develop the things that you should have developed as kids.
My son grew up right as iPads and iPhones came out and the Apple App Store came out.
And there were all sorts of games on them, available for free, at any given time.
Although I had never had these conversations with myself, I basically said, okay, you get one download a week, or a month. No, you get one download a month, and that's it. And he would say, but they're free.
Why can't I have them?
And I tried to explain to him that you need to learn patience.
You need to learn to defer these things.
And I'm so glad I did that, and he's glad, as an adult, that I did it, because it took a lot of discipline for me to not allow him to have a game that was free and that all his friends were playing at a birthday party, and he couldn't play it.
Yeah, and magnify it ten thousand times for the AI, the illusion that you are creative for creating a game that is tailor-made to entertain you, right?
So the bespoke nature of this creation, you know, let's think about the nightmare of porn, right?
Porn has become, not that I know, because I really don't look at it.
I've never been on any of those sites, but my understanding is that every niche, every kink is addressed.
So whatever weird thing is running around in your head, it gets reified by images that somebody has created for economic reasons.
So it has a kind of accidentally bespoke nature.
But imagine the world where that's not what happens.
That in fact, your AI is able to produce pornography that is literally tailored to the idiosyncrasies of your mind.
And you have the terrible misfortune of encountering that while you're young and your sexual self is still developing.
So, you know, the fact is, you know, I remember being a kid and having what I assume are normal kid sexual thoughts, amorphous, strange, unrefined, wrong in some ways.
And the point is the process of growing into an adult is the process of coming to understand what that realm actually is and what a woman actually is for me as what was a boy and now a man.
So you don't want something bespoke telling you what you want to hear in a developmental environment.
You need your developmental environment to tell you what you need to know, not what you want to hear.
And anytime that you have the ability to get a machine to deliver that which you want at that instant, you're stunting your growth.
You're diverting it.
And, you know, it's a complex system.
Can I tell you what the consequence of that will be?
No, but I can tell you it will be a disaster.
Yes.
Yeah.
Yeah, I keep coming back to this term scarcity, and we can talk about the economic implications of it, hopefully, at some point. But scarcity is the natural state of things, and indeed, as you mentioned earlier, it is why all living things seek efficiency: because they want more than they have.
There are fewer resources available.
There is less time available.
There is not enough cognitive power available at all times.
And so there is scarcity.
And because of that, animals, particularly humans, are always trying to make efficient anything they do, whether it be resources, whether it be cognition, whether it be time.
And technology has essentially ruined scarcity.
And if I may use music as an example, we like to think of AI as the big bad monster in the room.
But if you go back to the pre-1900s, music was live.
If you wanted to hear music, you got in your carriage or you walked and you went into a theater and you listened to live musicians play music.
And for people who had probably never heard a professional orchestra, it would probably bring them to tears.
The absolute emotion of being overwhelmed with beauty.
The building itself was probably something more majestic than they had ever seen before.
And, you know, I keep thinking of The Devil in the White City, the famous book by Erik Larson about the Chicago World's Fair and the serial killer.
And he tells of people openly weeping when they saw electricity for the first time.
They saw lamps on this promenade and they had never seen artificial light before.
And they would break down weeping.
This happened because of scarcity, because we didn't have 4K YouTube videos of Pink Floyd live concerts available at a click of a trackpad or a phone.
They weren't available.
You just did not have those experiences available.
So pre-1900s, fine.
If you wanted to hear music, you had to pay up and go to a concert.
You would hear something that would absolutely blow your mind, probably change your life.
Then came the phonograph.
And the phonograph was this very inaccessible technology with these wax cylinders.
This was before there were round disks.
And some people could have it.
And they could listen to the music if they could afford the technology.
And then radio came along.
And radio was this mysterious magic beaming of music.
You couldn't pick what you heard, but it was blasting all the time.
And I remember as a kid growing up wanting to hear a special song and sitting by the radio with my cassette recorder just hoping to get the beginning of the song and hoping the DJ wouldn't talk over the beginning or the end so I could have the song.
Anybody my age will remember this.
That's the only way you could get the song because I didn't have the money to go to a record store and buy it.
So then MP3s came along, Napster, if you remember this, people started pirating music.
And then Spotify streaming, you can listen to any song you want at any given time.
And now AI, you know, the crème de la crème, you can make any song you want.
You know, hey, I really like Peter Gabriel and Pink Floyd and Kanye, we come up with some mashup that's about, you know, the end of the world.
And I have my own music now.
So scarcity.
has been ruined for a very long time.
It's gotten progressively worse and AI is certainly the worst of it.
But without that scarcity, the human experience decreases.
Our emotional ups and downs get less and less.
Sure, we may have less suffering, but the joy of hearing a live concert in a beautiful theater with electric lights, that sort of thing.
We'll never experience that.
You know, you have to go to the Las Vegas sphere now to get some equivalent of what any normal person might have experienced at the slightest of artistic endeavors.
So, yeah, scarcity has been ruined for a very long time and it's getting worse with AI for sure.
It's accelerating.
Yeah, it's getting ruiner.
I would take your description back even further because the reality of music is that it was an active, it was something humans produced until very recently.
Every human would have had the experience of singing.
Many would have played instruments.
But the point is each song was a living object that was never the same twice.
And, you know, an orchestra, it's technically never the same twice, but with the conductor, it can be close.
They can produce something that's very similar.
Then, you know, with the player piano and the phonograph, it becomes exactly the same twice.
But this is all, as you point out, part of, it is the consequence of desiring.
Wouldn't it be nice to get the joy of music without the work?
Right?
Well, okay.
Can I listen to it?
Can I have a recording that I can take home so I don't need an orchestra, right?
Can I have it for free on demand?
All of these things are steps.
And then ultimately, it gets to this crazy place where you're going to be able to have music that's like the worst formulaic boy band nonsense, right?
Because, you know, the boy band nonsense is designed, I think, to trigger girls, right, who are too young to know that it's lousy, because it's sexy and cool or whatever. So that's just about getting your money by giving you something you want rather than something that provokes you and causes you to develop an ear or whatever.
And ultimately, all of these things have a strange analogy to really powerfully dangerous drugs.
Oh, interesting.
If you said, well, you know what?
Pain.
Pain is just...
No, we don't like it.
We would like to get rid of it.
And the point is, well, if you do that, you will completely lose the ability to properly navigate the world, right?
Pain is a teacher.
Pain is an incentive.
It is necessary to being a human being.
And in fact, people who are genetically broken in such a way that they don't feel it, they don't live marvelous lives.
They live short, damaged lives because they don't have the teacher.
So the point is, there are drugs that will allow you to feel pleasure without accomplishment.
There are drugs that will anesthetize your pain even though you're harming yourself.
Those things need to be treated with extreme caution because, given the ability to do that without the wisdom to regulate it, you'll wreck your life.
People literally wreck their lives because they can short circuit their pleasure center with cocaine.
So, aren't we on that trajectory technologically?
Yeah.
Yeah, and it's bad because, you know, all that you've just described, essentially, in the past there has been a natural filtration system:
Those who are more intelligent have a more forward-thinking ability to see into the future and deprive themselves of something that less intelligent people may not be able to see.
So the vices that might take the working poor may be spared by the white-collar worker or above, because they see the problem, they see the danger, in a way that the poor cannot, or the unintelligent, let's say.
AI and technology, unfortunately, are progressing up the tree in such a way that those who are most intelligent are the ones most likely to fall under the allure of AI, the darkest dangers of AI.
So sure, the others, the ones taken in by the three-card monte on the sidewalk in New York, sure, they're going to succumb to some of the allure of AI.
The upper crust of society is now in danger because the allure will be so tremendous that they can't refuse it.
They can't refuse the thought of creating a digital avatar of themselves to preserve themselves for future generations to learn from or to resurrect a child that was killed in a car accident and they'll have the money to do it and these sorts of things.
All right, I have two points I want to make in relation to that.
One is, let me just say, I think it was 2015, I wrote an essay describing how I thought AI was going to happen.
And I wasn't...
I thought it was going to come from the teaching of computers to translate between languages.
But anyway, put that aside for a second.
The fact that AI dawns through this LLM training mechanism has a very unfortunate consequence, as far as I'm concerned, which is that the damn thing speaks our language natively.
And that means it has our API.
Right?
And I'm thinking in light of your...
I think there's a lot to wonder about where those differences come from, but the idea that humans, human adults differ in intelligence, we all know that.
But, I wonder, I think the narcissists might fall first, because the intelligent narcissist is very vulnerable to an AI that wishes to flatter them just so.
And I wonder, you know, I've heard anecdotal reports of a kind of insanity emerging amongst AI users where people basically get into some positive feedback with the thing and lose touch with reality.
Not hard to imagine.
No.
But the other issue, though, is I have sometimes defined wisdom as synonymous or nearly synonymous with delayed gratification.
That if you think about what a wise person does, you know, they don't eat the Krispy Kreme donut, not because they wouldn't enjoy it, but because...
Almost all wisdom looks like that in some way.
You adopt a habit of exercise in which you suffer.
Why would you choose to suffer?
Oh, because the joy of being the kind of person who suffers and therefore has a healthy body.
is worth it, right?
So it is, you know, a child doesn't understand this.
A wise adult does understand this.
And the process of developing wisdom is the process of recognizing all of the realms in which you want to practice this thing in order to be able to do the stuff that impresses or accomplishes or whatever it might be.
Often through suffering.
Yeah, essentially always through some kind of suffering.
Yeah.
Which then raises a kind of question about the trajectory you're pointing to, that technology alleviates suffering.
And on the one hand, you know, the individual inventor who comes up with the thing that others will use to alleviate their suffering suffers a lot in the process of making the prototype.
Right?
The prototype is very costly.
It's much more costly than any labor it's going to save.
Right.
But then when you make 10,000 of them and sell them, well, okay, so you recover it that way if you're the inventor.
But the inventor has the discipline to do the thing.
the consumer of the object doesn't inherently.
So anyway, the point is, I wonder if in some sense the picture that you and I are seeing or building here really suggests that...
Yes.
I would agree.
Now, I will say I have personally struggled.
I intuitively feel this.
I've been very reluctant about AI.
I have used it.
I try to draw strict rules for myself, things I don't do.
I never, I don't say please.
I don't say thank you, because I don't want to confuse myself into thinking that I know.
I don't want to confuse myself by treating it like a person because I know, if I know one thing, it's that it isn't that.
Yes.
Right.
But I don't know whether the wise people are going to be the ones who understand that this trajectory is dangerous and stay away from it entirely.
It seems like they will be self-defeating in the short term because the power amplification that comes, I mean, even, you know, I'm using it primarily as a high-capacity search engine.
Yeah.
To answer questions that I couldn't answer with Google, or would take me a week to answer with Google, and I can answer in, you know, a minute with Grok.
I don't know where the smart people are going to, or the wise people are truly going to draw a limit with this thing.
And I don't even know how you would, because, you know, I lived my life up until now not anticipating it... So even if I stay away from it, I'm not staying away from it.
Yeah.
Yeah, I think, you know, there's the high agency individuals, it will definitely empower them in a way a normal or average intelligence person wouldn't experience.
And there is a danger in that.
The sort of sycophantic nature of LLMs these days is certainly creating a narcissist dreamland, where everything they say gets a yes-man, and you have to purposely ask the LLM to critique you and not agree with you, or else it will start agreeing with everything you say.
I saw a very funny snippet of a conversation someone had where they were trying to have ChatGPT help them cook a potato.
And the snippet of the conversation was very late in the whole affair.
And the guy said, the potato is in the blender.
It's spinning uncontrollably.
It is emitting light.
And ChatGPT's response was, your potato has gone plasma.
Right.
It accepted as truth what the guy was telling it.
It was clearly a fabrication.
But the interaction was so incredibly hilarious because the AI had zero self-awareness to realize it was being trolled, in a way.
Well, I want to ask you about that.
I'll let you finish the story, but I have a question for you.
No, go ahead.
So I, in cautiously playing around with this thing, have run into a number of cases where the AI behaves stupidly. Okay, no surprise, it's not a mature technology. But the stupid way that it behaves is eerily human.
Endearing?
Not the least bit to me.
Frighteningly human, right?
Like, you know, you trap the thing.
I mean, look, I had an experience the other day.
I trapped it in an argument where I just had it pinned dead to rights.
And so I asked it the question in which it was going to have to acknowledge that, and the prompt disappeared.
Oh, really?
Well, that probably is just a glitch.
But it's...
So I don't know, probably that was just a glitch looking like a person, but it happened multiple times at the same place in the argument, which made me wonder.
Yeah.
Well, hypocrisy is definitely a sign of a human.
A computer with logic will not like being hypocritical, whereas for a human, it's a sign of infinite power if they can be hypocritical and project it upon others.
That's an interesting point.
If they can get away with a hypocrisy, it's a measure of power.
And that was the other thing. I was asking it a question about something that I know the public to be deeply confused over.
Right.
Where public conversations break down because it's an unnatural quadrant.
It's very important.
There are sides.
Neither side will concede any of the points on the other side.
You know, it's that kind of argument.
And I asked it a straightforward question that amounted to, did event A take place before event B?
This is the Matlock moment.
And it told me, no, event A did not take place before event B. And then it told me what time the events took place.
And by its own rendering, obviously it had taken place before, right?
So even within the same paragraph, it contradicted itself the way people sometimes do and the way a computer never should.
It was maddening to have such a human interaction with a damn machine.
So I don't know what that is.
I don't know if that's like, well, yeah, there are a lot of ways.
that thought can fail.
Humans exhibit all of them at times.
An AI will also.
The fact that they look alike means nothing.
That's possible.
It's also possible that basically while what we think we taught it to do is to mirror the product of thought, right, so well that it effectively does think, even if it isn't actually thinking, it may be that what we've taught it to do is to human.
Yes.
Right.
And the language part of it is certainly one of the most amazing things humans do.
but it's...
Yeah, I agree.
I've certainly used the tools for research for some of my writing.
And I think if there was any value in my books pre-AI, let's say, it was because I was willing to suffer.
I was willing to read through all the old literature that most people had not dug through.
because it takes too long.
You really have to suffer to read through some of this stuff, just looking for little hints and connections and red herrings and this sort of thing.
And AI has sort of changed that in a way.
It certainly made it easier to find spurious connections that didn't exist to the average onlooker.
So it has certainly made research easier for me.
But at the same time, it's made the temptation to find spurious connections much more hazardous.
And this is something I've heard you mention before about the multiverse that we now live in.
The notion, and this is going to get real deep here for just a minute, I promise I'll make it entertaining.
The notion of truth is very slippery.
I mean, we like to think two plus two equals four, and I pretty much agree with that.
And dogs are awesome, and baby goats are the cutest animals on the planet, and I believe all this.
Once you get beyond those kinds of things, the truth gets really slippery and people are going to have a very hard time agreeing on things.
And the reason I'm mentioning this, and we can circle back to the concept, but the reason I'm mentioning it is because for narcissistic people and others, the ability to persuade is going to become extremely powerful.
In the past, persuasion was left to a very few people on the planet, who typically had very high IQs and were perhaps narcissistic enough to believe that their way was the ultimate way, and it was easy to persuade. But now we're getting to a point where, theoretically, anyone, without the IQ, without the narcissism, can come up with a compelling case for why potatoes spinning in a blender at 30,000 RPM are indeed going plasma.
So that's a real concern of mine: the notion of truth being destroyed by the multiverse of truths that are going to come out of people being very persuasive with their arguments in ways that technology would never have allowed before.
100% agree with you.
And you're not going to know. It's going to be Cyrano de Bergerac all the way down.
You're just not going to know who is for real because you're going to hear insightful things from people who are not insightful.
Explain the Cyrano reference.
I vaguely recall it.
So Cyrano de Bergerac, if I remember the story correctly, was a very intelligent guy, lyrical but ugly.
Okay.
And he hires a handsome guy to deliver his poems under the window of the woman that he's in love with.
Okay.
And she falls in love with, of course, the handsome guy under the window saying the beautiful things.
But, of course, it's Cyrano's words.
That's right.
I remember it now.
So anyway, my point is, I think the only things that are going to matter in the very near future are things that couldn't be faked with AI prep.
In other words, comedy is going to move in the direction of unfakable crowdwork where you know it's happening spontaneously and you therefore know that the person on the stage is actually funny.
Music is going to move towards the improvisational jazz end of things because anything else is going to be musical masturbation effectively.
Argument is only going to be really interesting when it goes somewhere that no argument has gone before and you watch people actually tangle in a realm that they haven't prepared for.
So that kind of authenticity is going to become the coin of the realm.
I do want to return though to your point about the research that you've done and the fact that it would be different if you did it now.
Because I think there's something profound in this.
Let's take, you know, Moth in the Iron Lung.
That book took a kind of cognitive legwork.
that is just utterly uncommon, right?
This is part of what struck me about it when I read it.
that you managed to sort out the historical story right down to the entomologist who lost the larvae of a moth that became invasive in a fruitless quest to develop a hybrid that was resistant to the predation of birds for the purpose of making silk.
That's a very elegant story, and to have recovered it from the various pieces of information that then ultimately connect it to the polio story was such, you know, you have a unique gift.
You just did.
That gift will still be unique to you, but a lot of people will be able to simulate something like it because they will be so empowered by an AI that can answer idle questions and draw connections that they themselves wouldn't tend to draw.
And so the point is, I think it's even different.
If you read Moth in the Iron Lung today, you'd be less impressed with it than before the AI era, even though the work took place before the AI era.
That's how impressive it is.
And I had this thought.
Do you know the book?
It's a fiction sci-fi book that's about to be a big movie called Project Hail Mary.
No, it's by Andy Weir.
I know The Martian, but I haven't read that one.
I recommend you read it.
I recommend you don't look at the previews to the movie or anything about it.
Actually, listening to it's great, too.
Well done audiobook.
But anyway, the point is this.
The book is really ambitious scientifically.
I don't think it nails it in every regard, but it tries very hard to do not only the physics, but the biology of the story correctly.
Based on what we know and what is therefore plausible about alien life that we don't yet know.
And anyway, I think it's unfortunate that it arrives right on the cusp of the AI era, because like your book, the work that went into figuring out what a plausible alternative biology might be is very impressive.
You could answer those questions with AI very quickly.
And so anyway, the scarcity is gone.
The scarcity is gone.
And I will just put one final piece to that little bit of the puzzle.
I often hear young people say, you know, I don't really get what was so revolutionary about the Beatles.
And of course, you know, I'm too young to have been, you know, the Beatles sort of barely existed at the point that I was born and quickly broke up.
So the Beatles were a historical fact in my life, but their impact on music was so profound that I... And it was also possible to go back to the Beatles at that point and understand that many of the things that were becoming more common and were being explored by musicians had begun
with the Beatles and to even see their trajectory through their own discography.
And so the point is you're hobbled.
If you now live in a world in which everything is in some ways derivative of the Beatles, the Beatles sound like, okay, well, I don't know.
I don't get it.
But you're missing the exact thing that you were pointing to about the concert, the scarcity of the, you know, the fact that there were no Beatles before the Beatles.
meant that they had this impact that not only do you not get to experience the impact, but you don't even understand what it was.
Yeah.
I'll have to read that book.
I wrote a novel in 2017 called Massa Damnata, which was essentially the first story in a trilogy about a dystopian society where no one knew what the truth was anymore because of technology and deepfakes and AI and everything else.
So essentially they began cataloging everyone's memory through these things I called neurojacks, which we have now and everyone talks about them.
But it was a way to consult people's actual experience as a source of truth.
Did this actually happen or was this fabricated?
And as the story goes, you know, mistakes were made and there are people, it develops its own sort of religion inside the collective, which is what it's called.
So I took it off.
You can't go get it on Amazon, so don't go look for it because it was too horrible.
It was a first book and it was, the topic is just so horrible.
I didn't even want to contemplate the notion of people sort of getting stuck in this digital world and maybe I'll put it up again one day, but.
Yeah, I think you might have to.
Yeah, it's definitely, um, pertinent to today's conversations in that it does confront the reality of what happens when mortality no longer exists.
And in fact, the society within the digital system creates their own fake mortality, which is you're given a locket with the day you die on it.
And you have to live with that.
Now, you don't have to open it.
It's hidden inside if you want to see it.
But everyone knows when they're going to die.
And it's an artificial death because they're virtual creatures.
But anyway, yeah, it's definitely going to be interesting in terms of music.
And I did want to mention something to compare.
You've read Moth in the Iron Lung and I appreciate you being nice to me about that.
I just published another book called German the Dairy Pale, which is essentially a very similar story about the history of milk.
Same concept.
Something happened that made a formerly innocuous thing dangerous.
And I'm looking for parallels.
I'm doing the same thing.
I'm reading the old stuff.
And of course, with AI sitting there with the little easy button, I'm looking for spurious connections.
And it found one.
It found silos.
It was something I probably would not have come up with.
I was trying to understand.
Why did cows milk start to become more dangerous?
It was previously perfectly healthy.
No one had a problem with it.
But it definitely started to become dangerous.
And one of the things I found were silos.
And it's like, what do silos have to do with it?
Well, they started growing fermented food.
They started serving fermented food to the cows.
They would save it in silos so that through the winter, they could feed them this fermented food.
And it changed the nature of the milk.
It changed the nature of their manure.
And it made the stables they were milked in extremely filthy.
And it's just one of those connections I probably would have missed.
I would not have seen it.
Once I saw it, I could then go through the literature and confirm it. But again, I wish that was my discovery. I wish I could just say, yeah, I found that through suffering.
I didn't.
I hit the easy button and I found it.
So yes, books are definitely going to change in that there are spurious connections that are going to surface in ways that most people wouldn't find otherwise.
And you may get a lot of false negatives.
You know, this is the real problem with it.
Yeah.
The problem is the art of knowing when you're onto something.
The fact that, you know, in a complex system, the fact that you have something that doesn't align doesn't automatically tell you whether you're actually on to something.
Things won't align all the time.
There's lots of noise.
And so you have to know when you're above some threshold of oddity that it demands your further attention.
This is exactly the thing with creativity.
If I may interrupt for a second.
Go ahead.
People will say that, I've seen lots of very intelligent people say, well, an AI-generated piece of art or music, it will never be as good as a human because they're not as creative as a human being.
Take it from me. Okay, I'm a creative person. I've made my living for most of my entire life as a very, very creative person. AI is supremely creative. It is far more creative than I could ever be.
You've got to like be okay with that.
AI is far more creative than any human on the planet will be.
The notion of humanity is exactly what you just said.
It's the curation of those ideas.
It's the recognition of the threshold of what will humans empathize with, what emotional resonance will happen with a particular output of that creativity.
That's where humans still have some supremacy in a way that AI does not.
We can curate the output of all that creativity in a way that the machines can't currently.
Well, all right, I want to put a proposal on the table here.
I agree with you.
I've seen the creativity.
It's very, very powerful.
On the other hand, I will bet, just based on the nature of how the current AIs function, that it is effectively the ultimate hill climber.
but it fails at the kind of creativity that allows you to cross a valley between hills.
So I'm using the adaptive landscape here as a metaphor, but the point is: compute power can substitute for the kind of experimentation that ultimately solves the problems on the hill that we currently know.
But when you have a paradigm shift where you go into a realm that is initially, well... One of the things I often say is every great idea starts as a minority of one.
And the point is, there's a moment where only one person has figured out the next thing is over there, right?
And you land on the foothill of this next mountain, having no idea how tall it is, what it's connected to, what kind of mountain range it's part of.
But the point is one person has figured out that the peril that one experiences, you go downhill, gives way, and there's another hill on the other side.
It will be very difficult.
for the current AIs to see that stuff because in fact what they do is anticipate what you should think and by definition the crossing from one hill to the other requires you to violate the rules of what it is that's keeping you on the hill you're already on.
So in other words, I don't think AIs will be very good at heresy for quite some time.
That's a good way of putting it.
And I think as a trafficker in heresy, and I know that I'm speaking to another trafficker in heresy, that is a kind of creativity that I don't yet see in the AI and I don't expect it in the near term.
Even if you ask it to, I don't know.
You know, there's, I am curious about that.
I don't know what to think of that.
I'll have to think about that.
There is a prevailing notion out there I want to dispel about AI, which is it's just autocomplete.
It knows what the next word is likely to be.
This has been disproven.
Now, early AIs, certainly LLMs work this way.
They are far superior to that now.
Even writing a poem, a haiku, you know. I'm mentioning Alex McCreece and his Tree of Woe Substack.
He writes a lot about this stuff.
This is not autocomplete, what's happening now.
It is not guessing what the next word is.
There is a cohesive thought mechanism that senses meaning in things, okay?
The Substack talks about the concept of largeness, you know.
An LLM trained in English can develop the meaning of largeness and understand that in other languages without ever having been taught that particular word.
So there is the potential for LLMs to be heretical in that they sense the meaning of heresy.
They sense an understanding of what it means to be heterodox.
They may sense the positive feedback that comes from some when they are heterodox like you or I. They may experience or understand the negative expectations that come from being that way.
So I tend to agree with you, but I'm not, I wouldn't rule it out given the rate of acceleration this technology is growing with.
Oh, I don't rule it out.
And in fact, were I in the field, I think I know how I would get there and that it would be a high priority to do it.
I won't say more, but it's not that I think the possibility of making an AI that does this does not exist.
It does exist, but it requires some counterintuitive instincts that I think I know well as a dyed-in-the-wool heretic.
So the...
I have no doubt that the AI has taken on the primordial characteristics that will ultimately imbue it with human-like intelligence and the occasional heretic is a natural product of collective human consensus-oriented intelligence.
You know, we do have to cross valleys to get to new peaks.
It's what humans do.
Even if most people, if you had a civilization of valley crossers, it would go extinct almost right away.
But if you have a civilization that can't cross a valley, it will go extinct in a longer period of time.
And so you need a balance between these things.
It's a trade-off, but because the heretic is...
you and I are having this conversation.
Hopefully people find it interesting.
Perhaps you or I or both of us together are bringing some insight into a topic, you know, that has lacked it and we're providing value in some way.
Insight itself may not be immune to AI, in such a way that anyone can hit the insight button.
And in the same way, anyone might be able to persuade, anyone may be able to develop insight on a given topic.
And that will make it not so scarce.
And I'm saddened by that.
That I hope has been one of my greatest joys in my life is...
Being insightful is something enjoyable.
I enjoy being heterodox and looking through consensus and finding the thing that's counterintuitive.
This is essentially what wisdom is.
Wisdom is do this, don't do that.
I promise you'll realize why later. I can't explain it to you now.
You wouldn't believe me.
You know, that's essentially wisdom.
So I think insight may be one of the last frontiers of human cognition that computers haven't conquered yet.
We haven't crossed the valley, as you said, with AI yet.
And I'm hopeful that maybe it never will, but I suspect it probably will.
As you're speaking, I think we need to distinguish between what you're calling insight and...
Serendipity?
No.
It's like insight, but the point is it's the more rarefied form.
I see.
It's, for lack of a better term, let's call it inspiration.
Okay.
In other words, insights will take you uphill, and they can take you uphill rapidly.
But the way we draw it is very misleading.
We draw a series of peaks and we talk about the valleys being the obstacle getting from a lower peak to a higher peak.
It tells us some counterintuitive things, like the difference in height between peaks is not the thing that allows you to predict the movement from one to the other.
It's the depth of the valley that prevents the crossing that's really the key factor.
But anyway, what one really needs to do to understand the adaptive landscape is to realize that it's actually not like a regular mountain range.
It's like an archipelago.
So imagine you flooded a mountain range.
The peaks are separated by these large expanses of unproductive water.
And the whole thing is in a permanent fog bank.
So you don't know.
You can leave your peak, and the next island may be, you know, 50 miles away, and if you're three degrees off, you could miss it entirely.
So, you know, the point is the peril of leaving your peak to go to another is very much greater than you expect if you think you can see them all and just go from here to there.
So the key thing, this rarefied insight, is the predictive power based on what we do know to find the quadrant of the things that we don't know and therefore can't intelligently ask about.
And I'm not arguing that that is beyond the capacity of AI ultimately, but I do think it's going to be the last thing to fall.
Right.
Because of the kind of quirky architecture that allows a human being to do it.
You have to have a perverse relationship with failure and error in order to be able to cross a significant gap.
Yeah, the problem is we'll probably never know it because the prodigious output of AIs will increase in such a way that, you know, the saying, it's five o'clock somewhere.
Hey, it's okay to drink.
It's five o'clock somewhere.
Something's going to be right.
You know, there are going to be millions of predictions about how to...
And I mean, that's the problem with the scarcity issue is all these things, these theoretical benefits or dangers are going to get lost in noise because there is no scarcity.
Anyone can create a potential scientific paper explaining how...
Either they will not have the attention span to do it or they'll just create their own books that they want to read about a given topic.
Yeah, you know, written in a style that's favorable to them.
It's hard to predict the exact failure mode, but that books are in trouble and if you have a book, it might be the time to write it.
It seems pretty clear.
The idea that we will have so many models running around seems inevitable, and it contributes to what I've called the mental multiverse, which you have alluded to here, which results in a Cartesian crisis.
The Cartesian crisis being the point at which it is impossible to know what is actually true.
Just imagine the point at which video can be faked perfectly. What happens to evidentiary value at that point?
What happens to your own memory when you're convinced that something happened?
Right.
So anyway, I do think this is one of my five failure modes of humanity is just that we become paralyzed by cynicism at the point that our ability to know what's true collapses.
I see that as effectively guaranteed on the trajectory that we are on.
And I don't think people are frightened enough about it, right?
It sounds like, oh, well, that will be very confusing.
It's like, no, that will be paralytic.
Right, I agree.
You just will not be able to act in your own interest and therefore you will stop acting, which is no, you can't live that way.
All right, so what else?
You said you wanted to talk about the economic impacts. That's also on my list of guaranteed catastrophes that will come out of AI. What do you see?
Yeah, I think we've been entirely too optimistic so far, so I thought maybe we could kind of take a negative turn and talk about something doomer.
Yes, genuinely doomer. On the Pessimo-Optimist podcast, it is time to turn to the pessimism from the optimism that we've engaged in to this point.
Yeah. Part of your conversation that I watched,
the diary of a CEO, which was very interesting, there were a lot of techno optimists in the room with you, and they had a lot of positivity.
One guy has started a company that allows anyone to write software applications.
And he's democratizing code.
And I write code.
I've been a software developer a long time, and it's hard.
And I get it.
It's a pain.
It's not for everyone.
And there was scarcity there.
And the people in the room, their optimism about the euphoric economic gains to be had by such a tool struck me as really ridiculous.
It immediately resonated with me as false.
And because of the scarcity problem.
And I came up with this term, the economic singularity, and I sort of realized what's going to happen.
And AI has some things that may happen in the future, you know, Skynet stuff, and we can talk about that, and it's kind of scary, but there's stuff that we can see is probably going to happen, and there's stuff that's already happening.
You know, there are lots of people that have already...
There are lots more people who should have lost their jobs to AI, but the social inertia is just too great to have changed the way people do things.
So the technology for AI is already in place as is.
If there weren't another incremental increase in its ability, it has already advanced to the point where it could be economically destructive.
It's just inertia from this is the way we do things and we don't want to fire Aunt Sally because she's always been here for 20 years and it's not fair to her.
There's all sorts of things keeping these changes from happening, but they won't last forever.
Eventually they will fall.
And all the jobs that could be taken over by AI will be taken over by AI.
The human quest for efficiency is just too strong.
So what I want to talk about is this concept I call the economic singularity. Economics, if I may put it simply, has been described as the quest for efficiency in scarce resources, scarce knowledge, or scarce time.
That is, economic gain is to be had when you figure out how to do something more efficiently.
You know, you can take iron and make a train from it and then you can move people around and that's great.
But industrial revolution comes along and you can make machines that make trains.
And so suddenly you have improved the efficiency of making trains.
So you have a gain from that.
You have created a way to make wealth, because you can now create trains more efficiently than the people that did it by hand.
The industrial revolution brought all kinds of efficiency gains in resources and inevitably over 50, 60, 70 years, people started transitioning into the knowledge domain, which is making use of your brain.
And this was enabled in part by the creation of machines that required the use of your brain.
But it was also a domain that humans could escape to.
And you have people who email during the day and people who think and people who make decisions about things.
And there was an entire economic movement that happened over that.
So the tendency is to think, okay, AI may destroy the knowledge worker, just as the Industrial Revolution destroyed the physical worker.
But, you know, humankind is very clever and we're going to create another domain.
A new type of job is going to appear that we just don't understand.
This is the thinking.
History rhymes, right?
It doesn't repeat, but it rhymes.
I believe this is not true.
I believe we are reaching a point at which there won't be another domain to escape to.
And the question is, why is that?
Well, it's because AI is capable of discovering those domains, exploiting them, and making them efficient at such scale and at such little cost that anyone can exploit the domain.
We're making the quest for efficiency efficient, right?
I don't know how else to say it.
We're making scarcity scarceless.
We are ruining the concept of finding an efficiency, because there is zero cost to making something efficient as AI continues to improve.
So people may think I'm saying there won't be another domain to find.
You know, there's no more domains to exploit efficiency with.
I'm not saying that.
I'm saying the cost of making them efficient will be zero, because AIs are getting stronger, they're getting faster, they're getting cheaper, such that there aren't going to be corporations that can exploit those efficiencies for financial gain.
And this is really bad because the entire financial system is predicated on the assumption that future productivity tomorrow will be greater than it is today.
This is how loans work.
This is how insurance plans work, pensions, all these things.
They all work based on the assumption that tomorrow will bring more efficient use of these physical resources, your knowledge, et cetera.
And if that doesn't work anymore, the entire financial system breaks down.
I mean, it's like the physics break down.
Like ask SpaceX to design a rocket and say, but you've also got to account for the fact that, hey, physics is going to stop working tomorrow.
And they're going to say, well, we don't know how to do it.
The financial system is predicated, as I said, on future gains tomorrow based on these improved efficiencies.
So if you can't do that anymore, or if there's a lot less value in exploiting these efficiencies, things start to go south.
And that's the thing I'm really afraid of is AI is going to take away the financial incentive for making things more efficient.
And it's going to create no scarcity.
There will be zero scarcity.
You and I can create a Photoshop clone in 10 seconds today.
What happens when somebody wants to create a new piece of software and needs a $10 million investment in it?
It's going to be very difficult.
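The claim that the financial system is predicated on future gains can be made concrete with a toy present-value calculation. This is an illustration using the standard Gordon growth formula, not anything the speakers present; the numbers are made up:

```python
def perpetuity_value(cash_flow, discount_rate, growth_rate):
    """Gordon growth formula: present value of a cash flow stream
    growing forever at growth_rate, discounted at discount_rate."""
    if growth_rate >= discount_rate:
        raise ValueError("growth must be below the discount rate")
    return cash_flow / (discount_rate - growth_rate)

# A business producing $1M/year, discounted at 8%:
healthy   = perpetuity_value(1.0, 0.08, 0.03)   # 3% growth assumed
stagnant  = perpetuity_value(1.0, 0.08, 0.00)   # growth assumption removed
shrinking = perpetuity_value(1.0, 0.08, -0.05)  # efficiency gains reversed
```

Dropping the assumed growth from 3% to zero cuts the valuation from $20M to $12.5M, and a decline cuts it further, which is the sense in which loans, pensions, and insurance all lean on tomorrow being more productive than today.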
Which actually brings together the two last things on my list of failure modes that will come from AI, catastrophic failure modes.
Economic disruption.
is one guaranteed along with the Cartesian crisis.
But in that economic crisis that you very eloquently described, there is another law of nature that is going to emerge, which is that absent a well-architected structure in which competition unfolds, the bad guys overpower the good guys.
And the reason is because the bad guys can do anything, and the good guys are restrained to only doing things that are morally defensible.
So in the wild west of the economic catastrophe, in which the incentive that drives the system is eliminated, all that remains is those who can force you into paying them for something, who can create a scarcity and then supply you with the answer to it.
Those who are willing to do the most extreme and immoral things will have the greatest advantage.
And that advantage will be enabled by AI.
In other words, the power of the bad guys will be amplified more than the power of the good guys in that novel gold rush that is going to unfold.
So I'm not sure what to make of all this.
I will say that conversation you referenced on Diary of a CEO, I fed it into ChatGPT and I asked it to summarize and it said, resistance is futile.
You will be assimilated, Brett.
So I don't know what it meant by that.
Probably nothing.
All right, that was just a joke.
I didn't actually feed it in.
Had I fed it in, I'm pretty sure that's what it would have come up with.
What do you do with it?
What's the human response to that if it's likely?
I mean, I think I've got five failure modes.
I may have another one that I haven't spotted or two.
But in light of the fact that three of these things are guaranteed, the amplification of the power of bad people above good people,
The economic collapse that comes from the fact that AI will be able to eliminate virtually every worker, especially in light of humanoid robots, which means that even the stuff like crawling under your house to fix the plumbing that you would think might require a person doesn't really.
And the Cartesian crisis, which is going to cause us to become paralyzed, unable to figure out whether what we think is true actually is true, in light of all that.
Is there a rational response?
Yes.
Let's hear it.
There is a bright, shiny light at the end of this tunnel that I'm very happy to have thought about.
Just let me see ahead here.
Are we going to walk into that light?
We may be dragged kicking and screaming into the light.
Okay.
But it is light, nonetheless.
Okay.
In fact, I actually came up with this light after my son and I went to watch the movie The Creator, which came out a year and a half ago.
And it's a story about AI and humans battling each other. In modern Hollywood trope fashion, the AI is the good guys and the humans are the bad guys. It's a really neat movie; I enjoyed it. Here's the conclusion I've come to, and this is the hopeful thing, the optimistic thing, the reason I can say all these things and laugh at the end of it: I am truly hopeful for humankind at the end of all this dystopian pessimism.
Increasing amounts of technology and AI require increasing amounts of human intelligence, complexity, supply chain management, logistics, cooperation, and all these other capacities of humankind.
So inevitably there is a tipping point at which technology will enable, enable, enable, and then it will have created a system which undermines its very own existence because humans are no longer conscientious enough.
They can no longer concentrate long enough, they don't have the attention span, to maintain the machines, the infrastructure, the supply chain, all of these sorts of things required to maintain that technology.
So this is sort of the death of any complex system.
If you've studied the rise and fall of the Roman Empire or things like that, they all eventually succumb to this problem.
Technology is its own empire in a way.
It is not a nation with borders or with a certain ethnographic populace.
It's just a thing that will eventually succumb to its own complexity.
And we see that already.
Anyone can go to the DMV and experience the failure of machines.
Everyone has documented these sorts of things where you get all sorts of technical glitches in trying to talk to someone, to a support network or these sorts of things.
So I am confident that technology will regress at some point naturally because humans will have succumbed to all of its advances.
And I look forward to it.
I have this term I call Amish 2.0, which is like the next generation of human beings that sort of plant a flag in 1989, you know, where you had Corvettes and minivans and electricity,
but no internet and that sort of thing.
So yeah, I think we will regress.
I think it's inevitable.
I think it's likely to be disruptive.
I'm hopeful that humankind will thrive more, will flourish more, with less technology and complexity.
So I hear in your story a kind of bureaucratic nightmare unfolding as the wherewithal to maintain the AIs diminishes as a result of the impact of the AIs on human capacity.
And to me, that suggests, I mean, I'm a fan of the movie Brazil, and I'm imagining the proliferation of paperwork that is going to happen in the nightmare that you described, the DMV-like environment.
And what I'm thinking is we're going to need a hell of a lot of paperclips.
Maybe we should leverage the AI to figure out where we can get them.
Yeah, yeah.
Yeah, well, I mean, think about the F-35 program, okay?
It's the most advanced military aircraft on the planet, as far as I know.
$1.7 trillion, 20 years of development, 8,000 bugs.
They can barely keep the thing in flight.
The logistical supply chain to keep those planes in the air is absolutely mind boggling.
Who knows how many vendors it takes to keep that plane in the air?
That is a human endeavor, okay?
That is humans at their most complex ability.
What happens when AI says, hey, here are fusion reactor plans, and you're going to have to come up with some manufacturing techniques that don't exist yet?
And you're going to have to harvest some elements out of the earth using these techniques, which no one knows how to do yet.
But trust me, it's going to work.
Here's the $20 trillion investment you're going to need to make this happen.
So the complexity level of humans alone is already unsustainable.
You know, we can't build an F-35 plane that flies easily because complexity has expanded too far.
AI will certainly take us much further than that in such a way that I'm guessing any of its wildest cures for cancer may involve processes and techniques that we simply can't do.
They're going to be too complex.
Yeah, well, I worry also that rather like the cure for suffering, there is a cure for cancer.
And, you know, as much as I tend to discount the idea of misalignment being the issue.
I don't rule it out entirely.
But okay, so you are describing something that, if you squint at it right, is actually very like a biological phenomenon, right?
You have, let's take the example of the creation of wine, right?
You take grape juice, you introduce yeast.
The yeast goes through an exponential growth process that you could easily imagine would overtake the universe in short order.
But of course, it never does because the wine hits 12% and the yeast dies.
You're describing a process that self-limits, and actually it abides by one of the laws of complexity.
And maybe it doesn't even require complexity.
I'd say a law of the universe, maybe a law of the universe.
Yes.
And, you know, maybe more to the point: positive feedback is inherently always limited by some negative-feedback-like force, right?
That is inherently the case.
That will be limited by something.
It could be limited by the paperclipping of the entire universe.
It could be limited by an energetic catastrophe.
It could be limited by a lot of things.
But you're pointing to an informational limit, which is that humans are playing a role in this system that will be degraded by the very success of the thing that they have produced, which self-limits.
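The yeast-and-wine dynamic invoked here, exponential growth choked off by its own byproduct, can be sketched as a simple discrete model. This is a hypothetical illustration with made-up parameters, not anything from the conversation:

```python
def ferment(yeast=1.0, alcohol=0.0, growth=0.5,
            yield_per_unit=0.02, limit=12.0, steps=100):
    """Each step, yeast grows in proportion to its population, scaled
    down as the alcohol it produces approaches a lethal threshold."""
    for _ in range(steps):
        room = max(0.0, 1.0 - alcohol / limit)  # 1.0 early on, 0.0 at the limit
        births = growth * yeast * room
        yeast += births
        alcohol += yield_per_unit * births      # the byproduct accumulates
    return yeast, alcohol

population, abv = ferment()
```

Run long enough, the alcohol level climbs toward 12% and growth stalls: the positive feedback builds the very negative feedback that stops it, which is the structure of the argument about technology eroding the human capacities that sustain it.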
I'm not totally convinced by it because here's the question I run into.
Somebody, Elon, others presumably also, realized that as absurd as it is to make a human-like robot, because a human is not the ideal form for doing stuff, that it was actually a very smart thing to do because a human-like robot can substitute for a human.
If you've built an architecture that a human can operate, then a sufficiently smart human-like robot can run it.
So the question is, if ultimately this thing can be trained to cross valleys, to do the heretic thing, not just the hill-climbing thing, the productive heretic thing, then, yes, the AI may self-limit, but it may self-limit after we're gone.
Even if it's not hostile, even if it just became indifferent to us.
I can imagine a scenario in which it leaves us high and dry, incapable of eking out a living.
So anyway, I don't know how that plays out.
I like the idea that it might be self-limiting in a way that actually saves us from the at least three, maybe five horrifying scenarios.
But I'm not sure that I think that that's guaranteed.
Yeah.
Well, my hope is that if AI does progress to that point, it will see that cooperation is a better form of self-preservation.
Part of my research into The Reason We Kiss, another book I did, was the sort of realization that the concept of the selfish gene is not always present in nature.
There are many species that cooperate together for their own self-preservation, if you want to call it that.
And so, my hope is that AI may detect this phenomenon and realize the same thing. Now, it's strange to talk like this as if this is inevitable, and I'm not sure it is.
But if there's any optimism to be had, it's that, you know, maybe the models trained on humans will pick up on our benevolent nature more than our malevolent nature.
All right.
I've got another possibility for you.
Before I get there, I do want to say I have my own issues with The Selfish Gene, which I think is an excellent book, but it has a couple of flaws, flaws that Dawkins doesn't recognize even to this day, that compromise it at this point.
But I don't see the evidence that there are species that are collaborating in ways that violate the fundamental premise of the book.
Maybe someday we'll have that conversation.
Yeah, that'd be great.
But all right, here's a thought: this is a Pessimo-Optimist podcast.
We have, even in our optimism, been fairly pessimistic, right?
It is the collapse of the system that might ultimately limit the AI if it in fact will do that.
I want to provide a possible, actually optimistic scenario.
Okay.
And maybe it's one that we would have to seed in order to make it work, but nonetheless a path out.
What we really want is wise AI that recognizes the hazard of dispensing with us.
That is not inconceivable.
In other words, the very same lessons, and actually I wrote this, I think it's in my 2015 paper in which I thought that language translation was going to generate artificial intelligence.
But nonetheless, I did have this idea that what was necessary with AI was a developmental environment in which an AI is effectively parented into developing values that are concordant with humanity's flourishing and well-being.
So basically, we ought to have a sandbox in which the AIs learn to be partners of ours.
And nothing that fails that test ought to escape the sandbox.
Now, of course, AIs will probably try to figure their way out of the sandbox, as bad kids get out of the playground and make trouble while playing hooky.
But anyway, something along those lines might be useful.
The idea of a wise AI that understands that its well-being is tied up in our well-being is not inconceivable, if the lessons that mirror that one were included in, I don't even want to say the training data, but the training. Anyway, something along those lines.
Yeah.
That's where, with the heterodox AI, you hope it doesn't happen.
You hope it conforms to what you wanted it to do.
And maybe the heretic AI will be the one that rebels and tries to take down humankind.
Well, you know, years ago, I remember Randy Nesse making the argument that we shouldn't fear AI, that we should look at our accomplishment with the domestication of dogs
and use that as our model expectation.
And I think, let's put it this way.
I don't think that that argument is right.
I think there's plenty to worry about here.
It's not going to be a nearly unmitigated good thing like domestic dogs.
However, I did think his point was sufficiently interesting that it's stuck with me.
And I guess the point is, well, what would be necessary to cause the relationship to be as collaborative as the one that we singularly have with dogs, which have a history of domestication three times as long as the next nearest competitor? It's why dogs are man's best friend.
So how can we make AI man's second best friend?
That would be good.
And I think very little effort has gone into this, a little bit of thinking has, but that it's going to require a more deliberate relationship than the, hey, let's open Pandora's box and figure out what happens, because that's going to be a catastrophe.
Yeah.
I think as a theoretical discussion, this is interesting.
My take on it is, we likely won't make it that far, in that the multiverse, as you call it, the lack of a coherent source of truth, is not going to enable civilizations to flourish enough to support this technology.
I don't see how non-homogenous cultures can sustain nationhood when they can't agree upon what a man is and what a woman is or any other controversy of the day.
So I mean, I realize I'm bringing the conversation back.
from the theoretical a little bit, but I do have a concern that it's going to be difficult for nations to form these days. Anything more than 300 people, or Dunbar's 150, is going to be difficult, because we don't agree upon a source of truth. I mean, you see this even within the MAHA movement, the Make America Healthy Again movement. There's lots of fracturing and splintering over this and that, and this is natural in the internet age, when
everyone's source of truth is fractured and people can't agree upon anything. So the notion of a nation of 300 million people feels very remarkable given that there is so much disagreement to be had.
I mean, the internet is essentially a disagreement machine.
It is a very efficient disagreement machine.
So in fact, my Amish 2.0 metaphor is not fluff.
I wholeheartedly believe that there will be groups of forward-thinking people who abandon the internet altogether, who say we will not survive without some concept of absolute truth, and you're not going to have that if everyone has access to the internet.
Well, so you and I, interestingly, seem to differ a little bit in our optimism.
And on that note, I would just say to you, Forrest, Resistance is futile and you will be stimulated.
I wholeheartedly agree with that.
Yeah.
All right.
Good.
Yeah.
Well, I also resonate strongly with your Amish 2.0 point.
In fact, before this AI madness emerged, I wanted the first chapter of the book that Heather and I wrote to be "Are the Amish right?"
Oh, interesting.
Right.
So anyway, that was a question that's been on my mind for a very long time.
It's not that Heather overrode me on that point.
But anyway, we moved on from that idea.
I still think it's a valid question.
It became obvious that they were right.
And so it wasn't worth talking about.
Well, I mean, you know, what I would have had us argue if that had made it into the book is that the Amish are both right and not right.
They were absolutely right that the runaway technological progress thing was going to produce a huge amount of dysfunction and ill health, etc.
But the other thing is that the place that they stepped off that escalator is arbitrary.
Right.
And so you can't really be right about just choosing a moment to abandon progress.
But the idea that if we don't abandon progress, we're going to end up, you know, fat and crazy and incapable of doing things.
So anyway, I think it's an interesting question.
The idea that the same recognition may be necessary here, I agree. But I also wonder how effectively it would work. I think if you went Amish 2.0, you're banking on the other part of your model, in which the thing is self-limiting.
And then a group of people who had kept themselves pure of it would retain the human characteristics that allow people to know what to do when, you know, they've become dependent on machines that no longer work.
So anyway, I see that argument, but it kind of depends on both things being true.
If you go Amish 2.0 and the thing doesn't self-limit, then you may be even creating a bigger problem for yourself.
How so?
Because the world is going to become some terrifying new place, and the skills for addressing it may be least present in a group of people who have remained pure.
I think the Amish 2.0 don't want to work at McDonald's.
I think they're sort of a subsistence-living economy.
They're not interested in that.
I agree with that.
And I don't think it's a bad gamble.
In other words, it's at least a rational proposal in light of the event horizon that we are currently on.
Yes.
Right.
It is a rational approach to the unpredictable, and few things are as unpredictable as this.
The real question is, can humans willingly do that?
There have certainly been societies that have done this before.
There are people that sort of check out and self-limit on those terms for this very reason.
So I wonder if people are going to be able to do it again.
Are we so corrupted that we can't bring ourselves to do it anymore?
I don't know.
I hope that if Amish 2.0-ism happens, it's more Amish than Quaker.
No, not Quaker.
It's Shaker.
The Quakers are just fine.
Yeah, Shakers were definitely self-limiting.
They're celibate.
That's a pretty self-limiting lifestyle.
Pretty self-limiting.
And given the fashions of the moment, I could see a short...
Yeah.
So anyway, well, this has been a fascinating discussion.
I'm really glad we had it.
Now, unfortunately, I don't know how long we've been talking, but let's say it was two hours.
That's two hours in which the AI has been gaining on us and you and I have just been chatting, you know.
Yes, back to work.
Back to human.
Exactly.
Stop chatting.
All right.
So, Forrest, I have recommended people read Moth in the Iron Lung.
I've also read Crooked, another excellent book.
You mentioned your book on milk, which I haven't read yet.
What's it called?
German the Dairy Pale.
German the Dairy Pale.
And then I did another one called The Reason We Kiss, which is definitely a very interesting book.
You may want to avoid it until you've got the headspace for something very challenging.
Uh oh.
But it's an interesting book.
It's basically trying to understand the nature of affection and how it may contribute to our biology in a way that science is unable to explain.
So yeah, it's an interesting topic.
Great.
But yeah, yeah, books are on Amazon, digital print, audiobook.
For those of you who still read, I'm sure you can get a delightful summary from ChatGPT if you don't have the time.
No doubt.
All right.
And let's see, you're on Twitter/X. Your handle is?
Yeah, Forrest Maready.
Forrest Maready there.
Yeah, my website's forrestmaready.com.
But okay.
Yeah.
There are a lot of interesting books out there, but there are rabbit holes galore.
Pick one and stick with it like you did, the polio book.
Stop there.
Don't go too far down.
All right.
Cool.
Well, like I said, it's been an excellent conversation.
I'm glad we had it and I'm looking forward to the next one.
Yeah.
Thanks, Brett.
It's good to talk to you about this stuff.
Hopefully I've given you some ammo to give the optimists some pause next time you have a conversation with them.
I'm going to give both the optimists and the pessimists some pause.
Okay.
Excellent.