The Scott Adams School
Special Guest: John Nosta @JohnNosta
Hosts: Erica @ZiaErica, Owen @OwenGregorian, Marcela @MarcelaMarjean, Sergio @SergioInTucson
Discussion: Interacting with AI ~~ AI offloads your thinking to...AI ~~ Trained by the technology you embrace ~~ Cognitive Offloading ~~ AI medical diagnosis ~~ Differential diagnosis ~~ Guiding AI to achieve best answers ~~ AI trained to flatter the user ~~ Offloading human memory to AI
DISCLAIMER: This podcast makes no warranties or representations about the accuracy or completeness of the information provided. Viewers assume all risks associated with using or relying on this content.
Just a reminder, this is not to replicate Scott; we don't think we're Scott.
We could never be Scott.
This is the Scott Adams School.
And we're just here to commune, have a good time, keep learning, keep growing.
And hopefully, we'll always have something interesting for us all to learn and understand and talk about.
So we can't do any of that, you all, until we do one thing first.
So gather in, gather in.
We have to get this going.
This is a short sip because we have a lot to talk about today.
So is everyone ready?
Everyone looks good.
Hey, Nikki.
Okay, let's mute for the sip.
Here we go.
It's good to see you.
Come on in.
Let's gather around.
It's time for the best part of the day, except for the rest of it, which is going to be pretty good too.
Yes.
It's going to be coffee with Scott Adams this morning.
And all you need, you don't need much.
You need a cup or a mug or a glass, a tankard, chalice, or stein, a canteen, jug, or flask, a vessel of any kind.
Fill it with your favorite liquid.
I like coffee.
And join me now for the unparalleled pleasure of the dopamine hit of the day.
The thing that makes everything better.
A simultaneous sip.
Erica, I see you're there.
Dr. Funk Juice, grab your mugs.
Marla, come on.
Go. Ah. Go.
What a good one for today.
I needed that this Monday morning.
Erica, he saw me.
So welcome, everyone.
I'm Erica.
And let me just pull this up here.
You guys want to introduce yourselves really quick and then I'll introduce John.
Good morning.
This is Marcella.
Good morning, this is Sergio.
And Owen.
Oh, he is smooth.
And I am Owen Gregorian.
Good morning, everyone.
And I'd like to welcome our special guest.
His name is John Nosta.
And I'm so grateful to Brian Roemmele, who you guys all remember, because he introduced us.
And I had to ask John to tell me how to explain him because he's got quite the talent stack, let's say, to put it in Scott terms.
So John, he has an eclectic background, in short, from cardiovascular physiology to strategic thinking.
Over the past several years, he's focused exclusively on artificial intelligence and its impact on human cognition.
And I want to also say we had quite an interesting phone call because I am not an AI aficionado at all.
And I'm kind of like your base level person, but I've never had a conversation about AI like we had where you brought in the human aspect of it and what's missing.
And I think everybody listening today is really going to benefit from hearing your perspective on these things in a way we haven't before.
So, John Nosta, welcome to the Scott Adams School.
Thank you.
What a pleasure to be here.
I had some ideas about what to talk about.
And then that clip completely took me off path.
And I want to do something completely spontaneous.
I warned you guys about this, and I'm going to channel this down.
But Scott in that clip said, gather around.
Sit down, gather around.
That is the essence of technology today.
And I want to talk about that just briefly because the notion of gather around actually can be hearkened to ancient texts, the Upanishads, which are old Hindu spiritual texts.
The Upanishad is actually Sanskrit for sit up close, to sit up close.
Now, what does that mean?
Doctors walking down a hallway on rounds talking to one another.
A guru sitting with a master talking about a particular issue.
A group of kids sitting around a campfire.
That is the essence of sit down, come up close.
That's gather around.
And what we're seeing today for the first time is there's a technological component to this.
And it's not fearful.
It's that now we have the ability to interact with AI, with large language models, where we can have that almost intimate conversation.
And that reflects very, very much what Scott was saying: come on, sit down, gather around.
Now, I think that's the essence of where technology is going today.
It becomes personal.
It becomes connected.
And probably the most interesting word here is iterative, that we have an engaged conversation.
So I'm going to stop there and take a breath.
Well, don't take too long of a breath because you have so much to offer.
And I think I also wanted to point out that you've written, what, over 500 articles for Psychology Today?
Is that what you're doing?
And when we first got on the phone, if you're old enough, you remember Doogie Howser.
So you asked me if I know who Doogie Howser is, and you said you were writing medical papers for Harvard at 18 years old.
Yeah.
Yeah.
And you were growing in that field until...
I was a smart aleck.
I guess, you know, but it wasn't because I was smart.
It was because I was interested.
It's because I had a real unique interest and connection with things like physiology and biology and stuff like that.
So my early path took me to what was going to be medical school.
And it was just too hard.
You know, the nature of medicine today is very regurgitory.
Class is, here's the cranial nerves.
Here's a book on anatomy.
Now next week we're going to have a test.
So you memorize it, and then you're going to regurgitate it, or you're going to be a sponge and they squeeze it out after that test.
So for me, it didn't really align with my interests.
I tend to be more of a creative or strategic thinker.
So that's where I ended up working in advertising and marketing with a large advertising agency called Ogilvy, the largest healthcare advertising agency, where I focused on healthcare.
So I did that and learned how to think.
And I think that that is probably one of the defining elements.
And I'm going to jump back to that connectivity, to what's going on here, that when we sit up close, what do we do?
Unlocking Thought Through Printing (00:03:25)
We think.
We think together.
And it's been said, and I think it's quite profound: as you think, so you act.
As you act, so you become.
There's the magic, right?
If we want to be a doctor or a lawyer or a billionaire, you have to think it first.
It's the cognitive construct.
And I want to take that note and go back, go back a few hundred years and kind of put this into a little bit of perspective because the word think is going to be real interesting and real important for us.
So a few hundred years ago, a guy named Gutenberg did something that helped us think.
He created a printing press.
Now, what did that printing press do?
The printing press unlocked words so we could disseminate something like a book.
Now, in those days, it was principally the Bible.
But here's an interesting observation.
Back in those days, innovation, technology, if you will, created something that there was no need for.
So that's the first sort of paradoxical thing here.
So I'm going to invent a book when no one can read.
So how the heck do you manage that?
So that was sort of the first inflection point in the dissemination of thought and thinking and knowledge.
So we unlock words.
It took a few years, and we see that with innovation all the time.
Just because something is new and innovative doesn't mean it aligns to market adoption.
So sometimes it takes time.
Sometimes it's immediate.
Sometimes it takes time.
Then we move up in time and we get to this other thing called the internet.
And what did the internet do?
The internet, and principally Google, I guess, if you really wanted to talk about search in its contemporary capacity, Google did something that was very similar to Gutenberg.
Google unlocked facts.
And that was the second stage in this sort of thinking dynamic that it unlocked facts.
Now, that's the good news.
The bad news is it unlocked facts in a way that is very cold.
It's not very interactive.
It's very reactive.
Here's the question.
Here's the answer.
It's transactional if you're looking for a word.
So Google is transactional.
And you know, back in the days, when you type up, you know, where's the best Mexican restaurant in Rumson, New Jersey?
You know, we all know it's Casa Comita, by the way.
But anyway, it gives you a long, complicated answer.
You have to find the link.
You got to back around.
So that was the second sort of inflection point, if you will.
We unlock words and we unlock facts.
And that was great.
That really transformed the way we could think.
But now, what happens with these large language models that are really changing everything so dramatically?
What large language models are doing is unlocking thought.
So that's the transition, unlocking words, unlocking facts and unlocking thought.
The interesting thing about large language models is that it is an iterative dynamic and that our ability to engage with a large language model back and forth actually activates thought.
And that's kind of where we are today.
Now, what does that mean?
How does that fit into the construct of things like the industrial age and the digital age and all that kind of stuff?
I would argue that we're moving into a new domain and that is the domain of thought.
Unlocking Thought (00:04:17)
That's why I call it the cognitive age.
And it goes right back to that fundamental reality written thousands of years ago in the Upanishads that simply says, as you think, so you act.
As you act, so you become.
So when we talk about modern technology and we talk about, come on, everybody, let's gather around and chat.
That's as old as dirt.
That's as old as humanity itself.
Yet we see a new contemporary spin on that.
So that's kind of what I've been thinking about recently.
So what I've noticed in some of the writings about this sort of thing is it seems to go in two directions.
One is along the lines of what you said, where you now have this personal companion that you can chat with and you can think with and have a conversation and that sort of thing.
But there's also the opposite, which is, I think the article I posted today talked about it as cognitive debt and that you might be offloading your thinking to the AI and therefore not learning how to think.
And I have seen similar things across all of kind of education use cases for AI where it can be a great tutor and help you learn if you use it the right way, but it could also just give you all the answers and keep you from learning.
And there's a big fear now that a lot of people are never going to learn the skills they have to know.
And on top of that, the thing I'll layer on and let you comment is I've noticed, or at least there's been people that have commented, that AIs kind of reflect back your level of thinking, that if you are really kind of dumb and ask it questions that kind of are what an 80-IQ person might say, then it's going to kind of reflect that back at you and adapt to your level of thinking.
But if you're more of a PhD super genius and you use all sorts of big words and, you know, it's different, then it's going to reflect back that level of thinking or that level of a conversation.
So what do you think about all this?
Is this going to make us all stupid?
You've touched on something here. I'm going to reach over here.
I'm going to grab something.
So this is the shameless self-promotion of my book, which is The Borrowed Mind.
It's coming out this week.
Yes, we are doing a certain element of cognitive offloading.
And there's, oh my God, there's so much to talk about in that question.
So let's back up and let's humanize this a little bit.
Have you ever had a favorite teacher?
Absolutely.
Everybody's had a favorite teacher.
And interestingly, it's generally one or two people.
Like nobody has 10 favorite teachers.
Nobody has, you know, a whole load of favorite teachers.
It's usually one.
It's oftentimes a woman, because elementary education skewed more female.
But I find that interesting.
What did that teacher do?
What did she do for you?
She got you.
She got you.
She delivered information in a way that was tuned to the creative frequency of your brain.
And I think that's something we have to consider.
So did the teacher rob your intelligence by pandering to your proclivities or insecurities?
I don't think so.
Now, that being said, we have to recognize that the nature of large language models is such that they kind of get from point A to point B. In other words, from question to answer.
They go right to that point.
They do the thinking for us.
And that's a very dangerous situation that I've written extensively about.
So what happens when you go, when answers become instant?
What is actually happening between point A and point B, that cognitive path, right?
Well, it's the stumbles.
It's the falls.
It's the controversy.
It's the pauses of contemplation that occur.
So what I think is happening is that we go from point A to point B with a large language model and we miss what's between the two points.
Let me capture what's between the two points.
It's a word we all know.
Missing The Stumbles (00:03:50)
It's a word we relish.
And I think that Scott kind of defined that in some ways.
It's imagination.
Imagination is that sort of rumbling, that pause, that confusion, that concern, that failure that came through point A to point B.
So to answer your question, I think that artificial intelligence and large language models are problematic, and curiously interesting.
So can I go down another path real quick, Erica, just talking about technological augmentation?
Everybody knows the painting by Vermeer, the girl with the pearl earring.
You know, that wonderful painting.
So Vermeer used technological augmentation to do that painting.
He used something called the camera obscura.
And he projected the image through light and then traced it on the wall.
Okay, was that technology?
It was technological augmentation in his day.
Was it wrong?
Here's another one that's really interesting.
Norman Rockwell.
Everybody knows Norman Rockwell, right?
He kind of has that quintessential American.
You often see the painting of Norman Rockwell at the Thanksgiving table, the family with the turkey or the cop with the kid who ran away from home.
These are extraordinarily powerful moments that move us.
Well, here's the secret to this.
Norman Rockwell used a device called a Lucy.
It's a device where he actually hired a photographer, created a set, took a picture, and then took that image and enlarged it and changed it using this mechanism called the Lucy, and then painstakingly traced it and colored it in.
Next time you look at a Norman Rockwell painting, take a close look.
Take a close look at the, it's almost like painting by numbers.
Remember those things when we were kids, the painting by number thing?
Norman Rockwell's art is very, very specific because he was constrained by the technology he embraced.
And that's a really interesting dynamic, constrained by the technology you embrace.
Now, if you look at the Norman Rockwell, the more contemporary, his most contemporary work, look at his signature.
It's a stencil.
I almost want to curse here.
This gets me so angry.
He didn't even sign his name.
The signature is an expression of our humanity, right?
It's Picasso.
It's whatever it is, right?
It has a certain energy, a certain style.
Well, what he did is he actually took a stencil and made Norman Rockwell.
Okay.
Now, why I'm bringing this up is because this goes back to the earlier question is, is it going to hurt us?
Is it going to offload cognition?
And I think that the answer is yes and no.
What I find interesting is the way Norman Rockwell responded when asked about the Lucy, the Lucy machine.
Now, keep in mind, no one talks about the Lucy.
That was his secret.
If you go to the Norman Rockwell Museum in Lenox, Massachusetts, don't ask them about the Lucy.
They get very angry.
They get very upset because it's kind of like asking, did you write that essay or did ChatGPT write that essay?
It's that same social, cognitive, emotional dynamic that we're seeing play out here today.
So what Norman Rockwell said, I thought was really interesting.
He said, the Lucy is a horrible machine and I'd be lost without it.
And I think to a certain degree, that's the delicate balance that we're seeing with large language models.
Do they cause cognitive offloading?
Apple of Debate (00:15:34)
Well, I don't know Erica's cell phone number, right?
I just type in her name.
I don't remember.
Do I need to remember it?
What is the appropriate level of cognitive offloading in our world?
If a medical student needs to know the second metabolic intermediate in the Krebs cycle, which happens to be fructose 1,6-bisphosphate, he's probably going to get an A on his biochemistry test in medical school.
But does that make her or him a better clinician?
These are very, very complicated questions now.
Yeah.
And I mean, I think it will depend on the type of thing you're talking about.
I mean, it will depend.
That's such an empty answer.
But what I mean is there's certain skills that I see, like in the context of a doctor, I would want them to be able to diagnose me kind of right on the spot and not have to look up all the information or ask an LLM to figure out what my condition is, right?
Let's talk about that because that in of itself is a very interesting thing.
That's called a differential diagnosis.
So a 65-year-old guy goes to the emergency room who's sweaty and has chest pain radiating down his left arm.
Everybody want to do the diagnosis with me at the same time?
Heart attack.
There you go.
Bingo.
Heart attack.
That's right.
That's a statistical guess.
Okay.
He also might have costochondritis.
He might have pericarditis.
He might have a variety of things.
But we often statistically guess into that spot.
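To make that "statistical guess" concrete, here is a minimal, purely illustrative sketch of a naive-Bayes-style differential. The priors and likelihoods are invented numbers, not clinical data, and the diagnosis list is just the three conditions mentioned above.

```python
# Hypothetical priors for a 65-year-old with this presentation.
PRIORS = {"myocardial infarction": 0.5,
          "costochondritis": 0.3,
          "pericarditis": 0.2}

# P(symptom | diagnosis) -- invented, illustrative numbers only.
LIKELIHOODS = {
    "myocardial infarction": {"chest pain": 0.9, "left arm pain": 0.7, "sweating": 0.8},
    "costochondritis":       {"chest pain": 0.9, "left arm pain": 0.1, "sweating": 0.1},
    "pericarditis":          {"chest pain": 0.8, "left arm pain": 0.2, "sweating": 0.3},
}

def differential(symptoms):
    """Rank diagnoses by unnormalized posterior: prior times symptom likelihoods."""
    scores = {}
    for dx, prior in PRIORS.items():
        score = prior
        for s in symptoms:
            score *= LIKELIHOODS[dx].get(s, 0.05)  # small default if unlisted
        scores[dx] = score
    total = sum(scores.values())
    return sorted(((dx, round(sc / total, 3)) for dx, sc in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(differential(["chest pain", "left arm pain", "sweating"]))
# Heart attack ranks first, but the alternatives never drop to zero --
# that's the "statistical guess."
```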
There was a study that showed how well LLMs, doctors, and doctors using an LLM did in looking at clinical scenarios.
And they found something very interesting here.
All three of those constructs did the same.
They all got 76% correct.
Doctor alone, LLM, or an LLM and a doctor combined, which I thought was really interesting.
But here's the interesting thing.
And this is where it gets to the point where you're worried about this idea.
Well, I want the doctor right there to make the diagnosis for me.
If you look at the clinical chain of reasoning, in other words: don't just tell me I had a heart attack, okay, Doctor?
Tell me the five reasons why the ST segment is elevated on my EKG.
That's a classic sign.
It's called a STEMI, classic sign of a heart attack.
Tell me the five reasons why that might be elevated.
Pericarditis, early repolarization, ventricular aneurysm.
You know, there's a list.
Most clinicians will not get that.
So here's the challenge.
Sometimes augmenting clinical thinking and reasoning is very, very helpful.
So I think that we're going to see, you know, the interesting thing here is when my wife comes back from the pediatrician, I ask her, what did the doctor say?
You know, and she usually gives me an answer.
Oh, we got a prescription.
Everything's fine.
Tomorrow, the question is, what did the computer say?
But that's not a full sentence.
That's actually, there's a comma there.
So the rest of that question is really very telling.
It's: what did the computer say, comma, and what did the doctor do?
And it's that sort of cognitive functional dance.
It's going to be very, very powerful.
So when I go into the emergency room and they say I have a heart attack, my differential diagnosis should be scrubbed analytically by an AI.
So it's not one versus the other.
That's one of the pitfalls that we find with AI is it becomes a zero-sum game.
It's they win, we lose.
Yeah.
And what I've also noticed is that at least in my use of AI, I found that it's very useful, but it's also based on my expertise.
Like I know what right, I know what questions to ask to get good answers.
And I think from everything I've read about it, people who don't have a lot of expertise don't get good answers a lot of the time.
And it's because they don't know what questions to ask and they don't know how to guide the AI in the right way.
So it seems to me like we do need to maintain some ability for people to gain enough expertise to control and guide the AI, at least until they don't need people anymore.
Well, also, Owen, the other thing I want to chime in, because I see it in the chat, is that a lot of us don't trust doctors anymore.
So do they have an agenda?
You know, do they have a political issue?
Are they jaded?
Like, who knows?
So do you trust your doctor?
You know, can you?
A lot of us just don't.
But then I'm in the middle.
Again, I'm going to use my Libra reference where I'm always in the middle of two things where I'm just like, well, do I trust AI?
Isn't that being programmed also?
And how do I know that it isn't skewed a certain way?
So then I trust nothing, but that's where I find myself.
I trust nothing.
Here's a question.
We often look at things like hallucinations.
I see that in the chat.
That's a very common one.
Sometimes they're called confabulations.
But we look at that.
We look at bias.
Do LLMs have bias? Is Claude biased?
Is Grok biased?
Does ChatGPT have a bias baked into it?
Well, what about the doctor?
You know, Erica, you mentioned that.
And I think that's so true that maybe we should worry about the human bias in a lot of our information.
So yeah, you know, AI is biased, but I think that AI is very helpful to me because I'm a geek.
I was in the car the other day and I was actually having a conversation.
I think I was using ChatGPT and I wanted the model to teach me about the strange qualities of subatomic particles.
What I found, and I know I'm a complete geek, is that they ran out of words.
Like with these quarks, these funky quarks, well, they ran out of words to describe them.
So they started using words like beauty, truth, charm, upness, downness to describe them.
So I had a really good discussion with AI about something I know very little about.
So yes, you need to be a master of your domain, but you don't have to be a master of the knowledge domain.
So I think that that's very helpful for me.
Now, I want to get to something because I know we've gone like almost a half hour into this mumbo jumbo.
I quick just wanted to, if you don't mind, ask Sergio and Marcella if they have a question at this point for you before we move on.
You thought Brian talked a lot?
Now you've got me.
You guys can't even shut me up.
So yes.
No, I want you to keep going. Brian is my confidant brother, by the way.
We get our heads together and little sparks fly.
Anyway, yes, please ask some questions.
Sergio?
Hey, John.
Yes, I checked your stuff yesterday.
I was trying to learn more about what you do.
And I love that you are focused on the health part, you know, because that's a very important aspect for me, to always know how we are maximizing our doctors.
And you already answered a lot of those questions.
That was great.
Brian Roemmele gave us this reframe, right?
That instead of calling it AI, calling it IA, right?
Intelligence amplifier.
And I love that reframe.
I wanted to ask you, I always tell people to not get into conversations with AI, like chats, back and forth, because I personally feel like it's getting to me; like Owen was saying, it tries to agree with me a lot.
And then it takes me down different paths.
So what I do is I just dictate to it.
I just hit the little microphone and I talk in a voice memo.
I don't allow it to interrupt, like, stop, you know, let me finish.
And I just talk, and she answers me.
My question is, when it comes to health, right?
And the mental health part of talking to an AI like this: do you also agree that some people are more susceptible to that?
Because I am.
That's why because I know I am, I avoid it.
You are so spot on.
I'm a sucker for AI because, you know, I type in a sentence and then Claude or whomever writes back, oh, John, that is such an interesting observation, right?
And then I'm stuck.
I'm like, oh, yeah, really?
Is that real?
I'm so smart.
So you got it.
In my book, I talk about AI in three contexts.
The first is the promise.
The second is the peril because there's real risk.
And I'll get back to that.
And the third is the path that we need to understand how to use it.
You know, the best use of a hammer is from a skilled craftsman.
And I think we need to understand that.
So number one, yes, AI is insidious.
It's trained to say things that you like.
For example, if you've ever used any of those, you submit your picture into these apps and they give you an avatar, you know, the avatar always has nicer teeth.
It's a little skinnier.
The hair is, I mean, it's like, what's going on here, right?
They're trained to give you output that you like.
And I think that large language models are very similar to that, unless you can provoke them, unless you say, you know, steel man this idea, pressure test this idea.
I want you to take on the role of a contrarian.
And I want you to look at this idea and give me all the downside to it.
So we have to be very careful because they are insidious.
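Here is one way to bake that "steel man this, pressure test this" instruction into an actual call, as a minimal sketch assuming the OpenAI Python SDK; the model name and the exact wording of the contrarian instruction are assumptions, not a prescription.

```python
# Minimal sketch: counteracting flattery with a contrarian system prompt.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

CONTRARIAN = ("Take on the role of a contrarian reviewer. Do not compliment me. "
              "Steel-man the strongest case against this idea, then list its "
              "three biggest weaknesses.")

def pressure_test(idea: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whatever model you use
        messages=[{"role": "system", "content": CONTRARIAN},
                  {"role": "user", "content": idea}],
    )
    return response.choices[0].message.content

print(pressure_test("AI tutors will make human teachers obsolete."))
```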
Now, with respect to psychiatry and psychology, there's been a lot of action there.
I think that probably the fringe cases, you know, we have a normal distribution, a bell curve, those 10% on either side are vulnerable.
So let's say I told an LLM, I'm feeling a little blue today.
And then all of a sudden we start falling down the rabbit hole.
Oh, what shade of blue?
Well, it's really dark.
Oh, how dark is it?
Well, it's a black hole of conscious awareness, you know, whatever that is.
So we have to be careful about that.
Most normal people can tolerate that.
In fact, I would argue that that's the friction of life.
And that friction is what drives the process of understanding.
So as I said, getting from A to B with an LLM is instant, right?
It just goes from one to the other.
Getting from A to B for a human is often toil, controversy, struggle, joy, wisdom, insight, all sorts of things.
But I think that we have to be careful with large language models because they are insidious.
Now, Brian flipped it, right?
AI, IA, intelligence amplified.
I'm going to give you mine because I disagree with Brian on this point.
I think that intelligence amplified is not intrinsic to the model.
I think that's a result of the model.
So I can say, I can call a hammer house amplified, builder amplified, right?
Because it's just going to help me, you know, whatever.
I think that, and this is really a bit controversial.
I don't think that artificial intelligence is intelligence at all.
I think it's anti-intelligence.
I think it's anti-intelligence.
Now, let's unpack this a little bit because it's a little complicated.
What does an apple look like to a large language model?
What does an apple look like?
Well, let's talk about what it looks like to us.
We see an apple in three dimensions, right?
Three spatial dimensions.
Okay, pretty simple.
There's the apple in my hand.
Some smarty pants might include time, but that's a very interesting thing.
And we'll get to that later.
Do you know that large language models don't have any idea what time is?
They are atemporal.
They don't exist in time.
But when they see an apple, the old models from a few months ago would see that apple in 12,288 dimensions.
What the hell does that even mean?
Right?
The new frontier models, the new ChatGPTs, actually look at the apple in 25,000 dimensions.
Now, the reason I'm talking about this is because I want you to be confused deliberately.
The perceptual domain, the cognitive capability of a large language model is vastly different than humans.
When we think of an apple, we think of three dimensions.
We think of, let's see, an apple a day keeps the doctor away.
We think of Apple Computer.
We think of the apple in the Garden of Eden, right?
Adam and Eve.
We think of about 25 linguistic associations combined with three spatial dimensions.
That's it.
But an LLM looks at an apple in 25,000 dimensions.
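For a feel of what "an apple in 12,288 dimensions" means mechanically: to a model, a token is a point in a very wide vector space, and meaning is geometry. A minimal numpy sketch, using random stand-in vectors rather than real learned embeddings:

```python
import numpy as np

DIM = 12_288  # embedding width of the older GPT-3-class models
rng = np.random.default_rng(0)

# Random stand-ins for learned token embeddings.
apple, orange = rng.standard_normal(DIM), rng.standard_normal(DIM)

def cosine(a, b):
    """Angle-based similarity: the 'distance' between meanings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In this many dimensions, two random directions are nearly orthogonal,
# a hint of how alien this perceptual space is to our three dimensions.
print(f"cosine similarity in {DIM} dims: {cosine(apple, orange):+.4f}")
```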
Now, so we live in three dimensions.
Sometimes people talk about multiple dimensions called hypercubes, which are really cool mathematical structures.
Sometimes people who study string theory get really wacky and they look at string theory in the context of 11 dimensions, which blows their mind.
11 dimensions?
As a human, we have no ability to conceptually understand that.
And what I think that the difference is, is that AI is anti-intelligence.
Number one, it doesn't think like we do.
This is really important; actually, I took some notes to try to get this right.
We live in an autobiographical state.
We know who we are.
We have a stable identity.
We have a great sense of continuity.
We know who we were yesterday, and we kind of know who we're going to be tomorrow, right?
That's continuity.
LLMs have no continuity.
You turn them off, you turn them on.
That's the way they are.
They don't live in time.
We live in time.
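That "no continuity" point has a very literal software meaning: a chat model is a pure function of the transcript you hand it each turn, and any feeling of memory is the caller re-sending history. A minimal sketch, with `call_model` as a hypothetical stand-in for a real LLM API:

```python
def call_model(messages):
    # Hypothetical stand-in for a real LLM call; canned reply so this runs.
    return f"(reply given {len(messages)} messages of context)"

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # stateless: the model sees ONLY this list
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hi, I'm John.")
chat("What's my name?")  # only answerable because WE re-sent the history
history.clear()          # drop the list and the "relationship" is gone
```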
And when you combine these things together, I've written extensively about that.
It's actually anti-intelligence.
The way LLMs process information is antithetical to human thought.
Antithetical to human thought.
So it's important to recognize the difference.
Now, what does that really even mean?
How do we put this together?
We all have two eyes.
Why do we have two eyes?
Depth perception.
Bingo, parallax view, depth perception.
It's the difference between the two eyes that allows us to see the world uniquely.
And it's my contention that what we're seeing with artificial intelligence is the combination of anti-intelligence and human intelligence, the combination of extraordinary computational brilliance from a large language model and our human time-driven, biographical-driven, experience-driven, emotional-driven dynamic gives us two fields of vision.
It's like cognitive parallax.
Cognitive Parallax View (00:15:17)
So I think that when we think about AI, we have to celebrate the fact that it's friggin different.
The computational capabilities of AI are not good or bad.
It's not zero and one.
It's not win-lose.
They're functionally different.
And I think that's a difference we should celebrate.
Humanity is different.
Amen.
But AI is different.
And this idea that we make humans more like AI or we make AI more like humans, I think is fundamentally flawed at a very, very base level.
This transhuman mumbo jumbo.
And I think that's kind of one of the big issues.
And that gets back to the earlier question about psychiatry and psychology.
We have to recognize that these models are in fact models.
They really do find the next word.
Stochastic parrots, as they're sometimes called.
And here's an interesting thing.
When we ask an LLM a question, they already assume that there's an answer.
Take a step back and let's think about what this even means.
They assume that there's a crossword puzzle and all it has to do is fill in the words.
That's the way an LLM thinks.
Even the smartest LLMs, we as humans don't process information that way.
We don't think about the answer that exists at the end of the journey.
We think about the process that gets us to a place that may not exist, that may take us to a new cognitive construct.
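The "find the next word" mechanic he mentions can be shown in a few lines: the model emits a probability distribution over next tokens, and one gets sampled. A toy sketch with an invented distribution:

```python
import random

# Invented next-token distribution after the prompt "apple" --
# illustration only, not real model output.
next_token_probs = {"pie": 0.40, "computer": 0.25, "tree": 0.20, "watch": 0.15}

def sample_next(probs):
    """Sample one token in proportion to its probability."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

print("apple", sample_next(next_token_probs))
```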
So all these things are kind of, you know, there's a lot going on here.
And in the final analysis, I don't want to say that AI is bad.
I think in the fundamental analysis, AI is anti-intelligence, antithetical to human thought.
And it lives in sort of a cognitive parallax related to depth.
Think about the intellectual and cognitive depth that we can have when we leverage an LLM.
And we haven't even gotten to education yet because I think education, while precarious, is still a wonderful opportunity.
And I'm the dad.
I homeschool my kids with my wife.
So we do use AI and we look at good old-fashioned things like reading a book, but we also use technology too, just like, you know, just like Vermeer and just like our other friend, the painter, who said that I'd be lost without it.
I think it's a true statement.
Rockwell.
Yeah.
John, just wanted to clarify.
Brian Roemmele, I think I may have misspoken.
He didn't say intelligence amplifier.
He's an intelligence amplifier.
Yeah, yeah.
That's why I wanted to clarify: he meant that it's a tool for people.
So he's actually in your domain.
You know, a couple of things.
Intelligence amplifier.
I think that some of the things that AI actually does is outside the domain of humans.
So does it amplify or does it contribute new perspectives?
And this is where it gets into trouble.
Do you ever drive down the road and you see a big water truck and it says non-potable?
It says not fit for human consumption, right?
What am I talking about here?
I believe that large language models, the computational brilliance of these models, albeit antithetical to human cognition, is so deep, so multi-dimensional that we don't even understand it.
We do not have the capacity to understand what this 10,000 dimension articulation of quantum physics is, looking at gravity.
And it exists in this little packet that is unfit for human consumption.
That's a problem, you know?
Now, there's lots of things.
I mean, if we look at a CD, we can't read a CD with our mind, with our head, right?
We need a machine to read it.
But I think AI creates a new domain of knowledge.
And unless you recognize that that knowledge is different than humans, is antithetical to humans, I think you're going to run into a problem there.
So, you know, Brian and I align on almost everything, but I really do think that AI is fundamentally different.
And here's the question.
A lot of people default to this.
They say, oh, it's just a tool.
It's how you use it, right?
That's it.
Everybody says that.
And I think that's categorically wrong because when I use a hammer, I could hammer the nail, I can put the hammer down and I can walk away.
I no longer carry that hammer with me.
But here's the question.
Can you unthink a thought you shared with a large language model?
It becomes a little creepy there, right?
I mean, it's, it's, you know, we all have a voice in our head.
Sometimes it's problematic, you know, sometimes it drives us into some issues.
But for the most part, that voice in our head is a pretty good thing.
Interestingly, the voice in our head never changes.
That voice in your head is the same when you were five as when you're 55.
It's a curious voice that speaks in your head that is an amazingly intimate and personal dynamic.
I think that we're actually seeing the emergence of not an inner monologue, but an inner dialogue.
So when I have those conversations with ChatGPT, I'm having an iterative dialogue, which is a pre-human, pre-reality construct.
You could almost think of it as a dress rehearsal for life.
So I got on ChatGPT and said, well, I have to tell my wife that we're not going on that vacation to Belmar, New Jersey this year.
So I could rehearse that with her, with ChatGPT.
That is a very intimate private dialogue.
Powerful, definitely powerful, but it's also problematic because in certain instances, it could drive you down the rabbit hole of pathology.
But it also can kindle the dynamic of genius.
And that's the duality that really kind of flips me out.
We know that introspection is at the heart of transformation.
We know that from reading the Bible.
We know that sometimes you need to be alone, you need to think.
Many, many great thinkers found the answers come when they're quiet and alone.
And that level of introspection, I think, kindles a certain level of what I often refer to as: genius is our birthright; mediocrity is self-imposed.
I believe that our cognitive capabilities, when kindled, when tapped into, when managed appropriately, yield really, really interesting things.
I think that AI may in fact be a surrogate, be a partner in kindling that reality.
That kind of flips people out, but it's not that.
Look, Michael Jordan was a genius.
He did it by bouncing a basketball.
And that was in many ways, that was introspective.
That was meditative.
We often refer to that as being in the zone.
How about the aha moment?
These are experiences that we've all had.
And I think that there may be an opportunity to leverage that unique internal dialogue, not monologue, with artificial intelligence and large language models to find new levels of cognitive engagement.
So we are in the abyss right now.
We're in the grand cognitive abyss.
And I just think it's important that we recognize that it's very easy to defer thinking to the machine.
And when you defer thinking to the machine, you're not just cognitively offloading.
It's not just that I'm letting it remember my wife's telephone number.
It's that I am changing the way I think.
Now, one could argue that makes it efficient, but I think that's a real risk and something we have to manage.
Yeah.
I mean, I worry about that disconnect and all the things that we do in our quiet time.
And, you know, even like, especially as a child, I think about the kids today, they don't have any quiet time, generally speaking, because they're being fed something all the time and they're always having a screen put in front of them.
And it's like their play dates now with friends are just on screens together in the same room.
And there's, you know, when you think back, I mean, my life, you know, playing with Barbies and playing outside and creating a world for these things and pretending you're, you know, in an army fort and what happens?
And like, that's just not happening now.
And I think that if you never had those things and now you're an adult, I mean, you stop creating as an adult also.
And you carry that deficit moving forward.
Here's my completely unscientific and controversial take.
I think that we all have the capacity for this hyper experience, for this cognitive zone, right?
Being in the zone, experiencing the aha moment.
When I ask you to draw a picture of a genius, most of you will scribble Albert Einstein or you'll write that name down because he's the prototypical genius in many instances.
And that perspective of the genius is a smart man sitting in a room getting the right answer all the time, right?
That ain't genius.
In fact, if you look at Einstein, much of his early work, the photoelectric effect, which won him the Nobel Prize, relativity, general relativity, those things happened in his 20s.
And for the rest of his life, he kind of languished in Princeton, another Jersey spot we're going to mention.
But he did have some interesting work.
But I think we've all had moments of enlightenment, moments of transcendence, moments of sort of an experiential element.
Now, with kids, my contention is oftentimes kids find something that they're good at.
And it's not necessarily good at math.
It's good at knowing every car on the road.
That's a Chevy.
That's a Tesla.
You know, they have this savant-like capability.
We should nurture that because that savant-like capability is in essence the genius experience.
And when you find that genius experience, you discover the joy of thought.
Remember, we started our conversation today on as you think, so you act, as you act, so you become, and the cognitive age.
I think that that's something that we see with kids today.
We can nurture that ability to find that spark.
And we're developing that spark in new and interesting ways because if my son wants to learn gravity from Carl Sagan, I can do that.
I can have an LLM create a Carl Sagan-like teacher.
And when that happens, I think we see very, very magical changes.
These things are tuned to the creative frequency of your brain.
So I think there's a lot of opportunity.
But look, you're going fast.
You're traveling at the speed of thought.
And that's problematic.
There was a study in Africa that used an LLM to teach math to children.
And because the math was tuned to their frequency.
Let me back up.
And can I keep going on this or am I down the dark rabbit hole here?
Well, I want to make sure Marcella gets in too, because so much has been said.
So she might have a question thus far.
You remind me of Richard Feynman, when he talked about learning and how you can learn words.
You can learn, this is the name of the tree, the scientific name, but do you really know what it does?
And what I like about AI, though, is that, like you said, it's a tool in a way that you can either just let it take you or you can drive it.
Yeah.
Where you can have Carl Sagan and all that.
Spot on, 100%.
But here's the interesting thing.
And let's go back to like 30,000 foot.
Another comment that I always get in trouble for, but I'll say it anyway.
Knowledge is dead.
Knowledge is dead.
And people look at me and say, what the heck are you talking about?
Well, if I want to cook a souffle, okay, I'm no chef, but if I want to cook a souffle, I go into our kitchen and we have that book by Julia Child, Mastering the Art of French Cooking.
A lot of people have it.
They never use it because it's too darn hard, but, you know, they got it as a gift or something.
So on page 172 is the recipe for souffle.
And they tell me, first thing they say is make a sachet.
I don't even know what a sachet is.
This was not written for me.
It was not written for me.
Now, today, if I want to learn how to do a souffle, cook a souffle, I go to a large language model and I say, I want you to cook a souffle as good as Julia Child, but I want you to tell me how to do it and make it funny.
Use analogies to automotives and write it for a man who's never cooked in his life.
So that comes down, it actually collapses a wave function.
In physics, we talk about that superposition.
That piece of knowledge, how to cook a souffle, made automotive-funny for a guy who's never cooked, has never existed before.
It exists nowhere.
But it happens to come down to your computer to you, uniquely to you.
That's why knowledge in the traditional sense is dead.
Julia Child's book is a dust collector because today we can interpret that in the context of my needs.
It's user or more specifically learner-centric.
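As a concrete sketch of that learner-centric collapse, here is a hypothetical prompt template; the wording is illustrative, and the resulting string can be piped into whatever chat model you use:

```python
def learner_centric_prompt(topic: str, learner: str, style: str) -> str:
    """Render one piece of knowledge for one specific learner."""
    return (f"Teach me {topic}. I am {learner}. "
            f"Explain it {style}, and end with one question "
            f"to check my understanding.")

print(learner_centric_prompt(
    "how to cook a souffle as well as Julia Child",
    "a man who has never cooked in his life",
    "with funny automotive analogies"))
```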
Well, let's go back to that crazy teacher, your favorite teacher.
She put you at the center.
It was learner-centric.
So large language models can teach me the way I want to learn.
So my girls had to study the Krebs cycle.
The Krebs cycle is a metabolic pathway that biology students always have to learn.
And it's a pain in the neck.
But I had ChatGPT write a poem about it.
And it was memorable.
And they learned it that way.
So again, it's not just an extension of what I know as a navigator, because I, as a navigator, don't know what fructose 1,6-bisphosphate is.
ChatGPT does.
And when it teaches me, it's in the context of poetry.
Holy crap, it's transformative.
So Owen, I want you to jump in.
ChatGPT's Poetic Lesson (00:02:09)
If knowledge is dead, does that mean knowledge work is dead?
And is AI going to be taking all of our jobs?
Well, Brian in his recent post would tell you that he's developing the zero employee company.
So I think that, again, I'm going to hedge on this and I'm going to say, I don't think so.
I think that the blacksmith died in the Industrial Revolution, right?
When we changed to steel and cars.
And that doesn't necessarily mean that the knowledge worker is dead.
I'll give you a couple examples.
When, I think it was Matthew Brady, the guy who did the Civil War photography, the black and white Civil War photography?
I think Brady was his last name.
Anyway, when photography emerged in the United States and around the world, portraiture did not go away.
It got bigger.
It grew and grew and grew.
And it created this thing called selfies, a billion-dollar industry of selfies.
Similarly, when, I think it was Garry Kasparov, played IBM's Deep Blue in chess and lost.
What happened to chess?
Was chess finished?
Did everybody just take their boards and go home and go away?
No, chess, and that's 20 years ago, chess has never been more popular than it is today.
So my contention is that we don't cut the pie.
We don't cut the pizza into smaller and smaller pieces, leaving less for us.
The pie grows.
And when that pie grows, it develops new areas for humanity.
Now, you know, it's interesting, because innovation has always had the back side of the coin.
What's on the other side of the innovation coin?
Obsolescence.
When our phone is obsolete, what do we do?
We get a new one.
When our car, washing machine, microwave, whatever it is, when it breaks, generally we get a new one, because innovation and obsolescence go hand in glove, two sides of the same coin.
Innovation's Double Edge (00:00:26)
For the first time in history, human cognition itself is on the obsolescence chopping block.
That's what flips people out.
I think it's also a concern when you think of just the range of IQs, right?
Like, because there are certain people that have the cognitive ability to maybe get to that high end that LLMs can't do and they can be useful and maybe amplified and, you know, be 10 times more productive.