SUNDAY SPECIAL: HUMAN EVENTS DEBATE - THE RISE OF CHATGPT
On this week's Sunday Special of Human Events Daily, Jack Posobiec is joined by Libby Emmons for a riveting debate about AI and ChatGPT. Do the benefits of artificial intelligence outweigh its soulless and increasingly pervasive presence in everyday life? Poso and Emmons dive deep into the history of AI, how it affects society today, and where it might lead us in the future. Can AI be an adequate substitute for human interaction? Is the rise of ChatGPT inevitable?...
I want to take a second to remind you to sign up for the Poso Daily Brief.
It is completely free.
It'll be one email that's sent to you every day.
You can stop the endless scrolling trying to find out what's going on in your world.
We will have this delivered directly to you totally for free.
Go to humanevents.com/poso.
Sign up today.
It's called the Poso Daily Brief.
Read what I read for show prep.
You will not regret it.
humanevents.com/poso.
Totally free.
The Poso Daily Brief. - Well, ladies and gentlemen, welcome aboard to this Human Events Sunday special: the great AI debate. The rise of the machines.
Are we welcoming our AI overlords?
Or are we going to try to fight them and actually continue to be human beings?
Or is there potentially a way in between?
Joining me on this, with lots of feelings about it, is the great Libby Emmons, Editor-in-Chief of The Post Millennial and now also Editor-in-Chief of Human Events.
So make sure you go to humanevents.com and read everything that Libby is putting out there in terms of the op-eds and the international news.
And of course, subscribe to Human Events Daily, the flagship show of Human Events.
Libby, how are you?
Good.
Thanks, Jack.
So there was a story that came out the other day about the CEO of Google, Sundar Pichai.
And he was actually saying—I thought this was hilarious—because apparently he came out and said that he didn't know what his AI was doing.
And apparently it had started to teach itself programming and languages that they did not program it to learn.
I said, boy, it's like he hasn't ever actually seen a movie in his life or something.
I don't know.
Like, do we actually believe these people can understand what's going to happen?
Yeah, it is sort of amazing that you have these tech people, entrepreneurs, et cetera, CEOs moving forward with so much of the tech that has been predicted in our science fiction and speculative fiction for decades, if not perhaps centuries.
And they press on as though there's no indication of where the human imagination will go with this and what could happen, even though we see, a lot of times, the imaginings of our best fiction writers come to fruition as time goes by.
So yeah, I was not surprised to find out that the Google head doesn't really understand AI and that human beings don't necessarily understand it.
We don't understand our own intelligence.
So it is not possible for us to fully understand something that we create to mirror our own intelligence.
That's not really surprising.
Well, I think you're right too, but in the same sense, it's the old Michael Crichton thing, right?
Where, you know, scientists, these guys, they spend all their time thinking about what could be rather than thinking about what should be. Should we do this?
You know, the Oppenheimer movie is coming out very soon. He lived the rest of his life being very regretful of leading the Manhattan Project and what it led to. He said, I feel like I have blood on my hands.
I don't know if you ever read the story, but Truman basically threw him out of his office after that.
Yeah, it's interesting though because I had that, I mean we all think about that, you look at that kind of technology and that kind of weapon and you look at the amount of lives that were lost because of that and the destruction that that weapon wrought.
And I remember talking to a man who was a veteran of World War II, and bringing up this idea that this bomb was so incredibly destructive.
And he brought me back to the numbers and he said, if we hadn't used a bomb like that, we would have sent a bunch of young men, young American men, to be fighting on those shores.
And we would have lost far more Americans in the conflict than we did by using that weapon.
So I do think it's important to not—
I was just going to say, to your point, look at the other end of it: we also have nuclear power now, right?
Because there's also the potential for the disaster of not using it. So that's interesting, yeah.
We also have nuclear power.
So it's the same technology in a broad sense. On one hand, it gave us the ultimate destruction. But on the other hand, if we can somehow eventually get politics out of this, we could actually be using this to power our cities and power the entire future.
But keep in mind that I come from the Navy, right?
So every single U.S. Navy submarine that's in the water right now is being powered by at least one nuclear reactor.
Oh, it's like a generator in there.
Of course, the engine. And all of our aircraft carriers have two nuclear reactors and power plants.
So the idea that this thing is too dangerous—you know, it's not Chernobyl.
We're not the Soviet Union and it's not Three Mile Island anymore.
Every single day, the Navy uses these on a regular basis.
The great Hyman Rickover.
If you guys don't know who that is, the father of the nuclear Navy, please go read about Hyman Rickover. He's someone I consider an absolute hero.
And I think most Navy officers, when you look at him in terms of national heroes, would really consider him one of the great patriots in American history for realizing this new technology. He said, well, if you need a water source, the Navy's got the best one you'd ever need for these things.
So I guess what I mean to say, in the broader sense of the debate, is: does AI have the power for great destruction?
Yes.
But it does also have the power for great innovation and technology.
I hate to use the word.
It's like we can't use the word progress anymore because it's so politicized.
Right, because it's very politicized.
I do think, yeah, and you and I have disagreed on ChatGPT and its relative merits.
And I do think that there are substantial reasons for concern and for a pause.
And you even saw recently Elon Musk, along with some other tech guys who are big in the industry, I forget who they were, calling for a pause on the development of AI.
And there are, yeah, and there are serious concerns.
That does not mean, however, that it should not be developed. It means that it should be developed in ways that are going to be beneficial to humanity and not in ways that are going to destroy us.
We have seen, certainly, the advancement of technologies over time that have sort of taken over and have not been as helpful as they could have been.
Like, I do think that we have overdone it with our handheld devices.
There's been a lot of progress there.
There's been a lot of benefit.
But there's also been substantial downsides.
And I think that it's important to look at that as well.
That being said, I don't think there's a we, if that makes sense.
Like when I talk about when people say like, we should put a pause on this, we should take a closer look at that.
Who, who are we?
What is the we comprised of?
There is not any kind of global body that would make these kind of determinations, and if there were, I imagine we would all be roundly opposed to it, because it would likely not have things like individual rights as part of its primary tenets.
So, we have to be considerate of that.
I think there's a lot of places where AI does not belong, and there's a lot of concern.
Um, I appreciate it.
Right.
And I also appreciate that Elon is actually having those thoughts and getting that discussion started because typically we, you know, we throw out these new technological innovations and we, we simply say, all right, it's great.
Let's go for it.
Let's keep pushing.
Um, I saw a TEDx speech recently where the guy was, um, I forget who it was.
And he showed how it works in the next iteration of ChatGPT—I guess he had the beta version of whatever the next one is, the ChatGPT Pro that's coming out.
And in his talk, he pointed out how we're going to be going into a post-app environment.
And when I say post-app: currently the way we use apps is, you know, you go on your phone or your computer or your tablet, whatever, and we interact through the apps—post something on Twitter, post something on Telegram, post something on Truth.
Then I'll go over to another app and I'm booking travel and I'll go to another app and I'm, you know, I'm writing something.
Okay.
The new ChatGPT can go in between the apps for you, and you can tell it what to do in the apps.
And he gave a demonstration of this where he said, so he's doing the TEDx conference and he said, Design a lunch for us to hold after this conference.
Write it out, then draw me a picture of it.
Or generate a picture of it, right?
In Midjourney.
So it writes out the whole thing.
It's this gourmet, very frou-frou, you know, dinner or lunch.
And then at the very end, it shows you the photo, which looks like a professional photo that came out of some kind of, you know, magazine.
And then here's the next thing.
And then—it's like a virtual assistant—he says, now go to Instacart and order it.
And it went to Instacart and ordered the whole thing.
And then this is my favorite part.
Then he said, okay, now take that Instacart order and craft a tweet and tweet it out for all my followers to have.
And he's just standing there at the podium at TEDx and the ChatGPT module is doing all of these things.
And at the very end of it, the denouement is, he said, now go check my Twitter account.
And they checked, and they put it up on the screen, and everyone picked up their phones, and the tweet was there.
And you saw, he had never actually even touched his computer, because he was just talking to ChatGPT the whole time.
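[For readers curious how a "post-app" demo like the one described above hangs together, here is a minimal, hypothetical sketch of the pattern: a model plans a task as a short list of steps, and a small dispatch loop hands each step to the right app. Every function, step name, and tool in this sketch is invented for illustration; it is not the actual demo code or any vendor's real API.]

```python
# A minimal sketch of the "post-app" agent idea: a language model plans a task,
# and a thin dispatch loop routes each step to the right "app."
# All names here are hypothetical placeholders, not real vendor APIs.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    tool: str      # which "app" to use, e.g. "image", "grocery", "tweet"
    payload: str   # what to ask that app to do


def plan_lunch_event(prompt: str) -> List[Step]:
    # Stand-in for the model call ("design a lunch, picture it, order it, tweet it").
    # A real system would ask the model to emit structured steps like these.
    return [
        Step("menu", "Write a gourmet lunch menu for a TEDx reception"),
        Step("image", "Generate a photo-style image of that menu"),
        Step("grocery", "Order the ingredients for that menu"),
        Step("tweet", "Announce the lunch to my followers"),
    ]


# Hypothetical per-app handlers; in the demo described above these roles were
# played by ChatGPT itself, an image generator, Instacart, and Twitter.
TOOLS: Dict[str, Callable[[str], str]] = {
    "menu": lambda p: f"[menu text for: {p}]",
    "image": lambda p: f"[image generated for: {p}]",
    "grocery": lambda p: f"[grocery order placed for: {p}]",
    "tweet": lambda p: f"[tweet posted: {p}]",
}


def run_agent(prompt: str) -> None:
    # The dispatch loop: the user never opens an app; the agent does.
    for step in plan_lunch_event(prompt):
        result = TOOLS[step.tool](step.payload)
        print(f"{step.tool}: {result}")


if __name__ == "__main__":
    run_agent("Design a lunch for us to hold after this conference")
```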
Very interesting.
Yeah, I mean, that is very impressive.
And that's a very impressive tool.
The concern that I have is that it won't just be used as a tool, but that it will be used as a companion.
Already, when people use Siri, they say please and thank you to the machine.
For providing them with whatever, you know, they asked for music or an order or something else.
People say please and thank you to their Alexa, right?
We treat our machines as though they are human beings.
We treat them as though they are personifications.
And I think that that is really concerning because what you end up with is a simulation of a companion.
As opposed to a companion.
And that simulation can fill the void that is missing for so many people.
That void of meaning, of wanting to be close.
You can feel close with your machine.
People already do it.
You see people, you know, I'm sure you've had that experience.
You go to a party or something like that, and if you don't know anybody, And you don't have anyone to talk to.
You take out your phone and you talk to whoever's on your phone and you feel perfectly content.
Sometimes you don't even talk to anybody on your phone.
You look at an app, you check your Twitter, what have you.
To be fair, I'm definitely the classic extrovert, so I'm not a good example of that.
So me, I stand there and I look at my phone and I go hide in the corner until I see someone I know.
And then I drag them over to the corner.
Hey, who's this guy?
Hey, who's that guy?
Hey, who's this person?
And it's like, I find the one person that I kind of know, and I'm just bouncing around from person to person.
This is why I hang out with you at parties.
As soon as I see you at parties, I put my phone in my pocket.
As opposed to getting thrown out of parties with me in Austin.
Right.
Well, that was my fault.
We're going to leave that there.
We don't need to name names about who threw out who.
Libby didn't know I was going to bring that one up.
Listen, I have been thrown out of way better parties than that one, so I'm not worried about it.
Right?
So it's okay.
I've been crashing parties since the 90s, so I'm not worried about it.
So, I guess what I mean to say, but you're right though, you're right.
And there is this sense that we're becoming cyborgs in a way, that we're merging with the machine.
So, uh, you know, we don't have Alexa in our house.
We don't have any of those different things. Siri—we don't do Siri.
Um, but at some point I, I kind of think it's inevitable.
I do think it's becoming inevitable that these things are going to become so ubiquitous in society. Even right now, it's kind of hard to use them.
They're not great.
Um, by the way, I also saw recently that some people are taking their Google Homes and Google hubs and connecting them to their doors so that your door can be locked and unlocked—which I guess means I can unlock my door by voice.
And I'm thinking, oh, well, if I'm trying to rob people—if I'm the bandits in Home Alone—that's the first thing I'm going to figure out. What happens when somebody takes ElevenLabs and copies your voice from your voicemail, or gets a copy of it, puts that into their system, and then goes and tells your Alexa to open the door?
So you remember Fahrenheit 451, the Ray Bradbury novel that's pretty iconic.
So at the beginning, the main character, whose name I forget, his house is, yeah, his house is the only one on the block that doesn't have the blue light of the television screen flickering through the window.
That's going to be my house with this AI stuff.
I have, and I have known that since I read that book in middle school and I thought, yep, that's definitely going to be me.
With whatever the new thing is, you know.
My concern is that we replace humanity with our creation, and that it is such a subpar replacement, that it will lead to the kind of existential despair, you know, that really takes down a civilization, and that we don't even detect until it's far too late.
Because we will put our love, we will put our spirituality, We will put our kindness into these machines.
And these machines are machines.
They will not return that.
We will imagine that they return it, but they will actually not.
We already have men in Japan marrying anime-looking pillows because they're so lonely.
What's going to happen now?
We already have teenagers who go through full romantic relationships with each other online without ever having met each other.
You know, they go through the courtship and the emotional relationship part and the breakup without ever having met.
What is that saying about who we are and where we're going?
I think it's really devastating.
I'm gonna read a quote and I wasn't planning on reading this, but I happen to have it up.
They have greatly increased the life expectancy of those of us who live in advanced countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities.
It has led to widespread psychological suffering and in the third world to physical suffering as well.
It has inflicted severe damage on the natural world.
The continued development of technology will worsen the situation.
It will certainly subject human beings to greater indignities, inflict greater psychological suffering, and indeed may lead to increased physical suffering even in advanced countries.
Yeah.
Sounds about right.
Do you know who wrote that?
No.
Ted Kaczynski.
Yeah, well, you know.
His methods were madness, but there's a lot of reason to that.
He was also one of the earliest subjects. When he was 16 years old, a child prodigy in mathematics at Harvard,
he was subjected to the earliest iterations of the MKUltra experiments.
And- What's the MKUltra experiments?
The MKUltra experiments were a series of experiments that took place over 20 years, where the CIA worked with psychiatrists and psychologists and pharmacologists to attempt to use mind-altering drugs to study mind control. Because during the Cold War, they had a thought of, okay, so we know about information warfare, we know about economic warfare, but what about brain warfare?
And so there were actually mind control experiments that were done.
And it turns out that Theodore Kaczynski, when he was a student at Harvard, was one of these CIA-backed test subjects.
So we took some of our best young brains and destroyed them just for the purpose of CIA intelligence research?
Yes.
Wow.
That's disturbing.
That's a very disturbing use of tax dollars.
And at least one of them ended up being the Unabomber.
So yeah.
That would have been... Wow.
Well done.
Well done, CIA.
That was in 1959.
So, I mean, imagine what this, you know, this mathematical genius could have potentially done with that knowledge.
What he could have been capable of.
Right.
Well, we know, to an extent, what he was capable of.
But imagine, I guess I just mean to say, imagine had that energy been placed into something that was beneficial for society.
Imagine him working in the space program, for example.
I mean, this was the 50s when he was in the experiment.
Yeah.
And then it's the 1970s when he began his campaign.
And then he was caught, of course, very publicly in the 1990s.
Right.
But he got away with it for a long time.
Is he like in some super max prison or something like that?
That's exactly where he is.
Yeah.
But the one in Colorado.
Yeah.
And yeah, 80 years old right now.
But I guess my point, that being said, is that when you look at his description of society, it's like he took too many black pills and couldn't handle it.
It does line up.
And I don't necessarily know if it's, if being a Luddite is the right word.
And I don't, I wouldn't consider you a Luddite.
I mean, here we are on Skype and you, you know, you actually run websites, right?
Websites!
Actual websites!
And so, I guess what I mean to say is that in the same way that man discovered fire, you know, handed to us from Prometheus, we brought fire into our homes.
And of course, fire has the ability to destroy our homes, but it also has the ability to cook our food.
It has the ability to heat us, to keep us warm, to heal us when we are ill or even in some cases injured.
And so I look at technology in a similar way: if we can find the right way to harness it, there is real possibility there.
And so let's, let's just get into it because we've been dancing around it.
Okay.
So I've said to you a million times that, you know, because when we're doing post-millennial stuff, when we're doing human events stuff, that there's always, there's always that clunky work of just getting the story out, right?
The story has to get out.
Of the writing.
The human work.
Yes.
The right, the words need to be written.
And so.
They need to be written down.
Here's how it is from my perspective.
We'll be in like the Slack channel or whatever.
And, and I'll see something and I say, Hey, we should write this up.
And from my perspective, uh, you know, then you go assign it to someone, they write it, it gets written up and it comes back and we look at it and we tweak the headline.
We say, Oh, make sure to include this.
Or like, you know, we do the fact checking, we do the, we check the links, et cetera, et cetera.
And so—but I never actually see or talk to the person who does all those things.
I just, For me, it's just another channel on... Right, because we're fully remote, because everybody's everywhere.
Because we're fully remote.
And so my question is, what substantively would be the difference if we had—it doesn't have to be ChatGPT, it could be any AI—helping us to speed up that process?
Well, I don't trust machines, so I don't think I would trust—and we've had this conversation—I don't trust ChatGPT to necessarily get all of the information correct.
We have seen, for example, I think with this Bard thing, this Google thing, it has cited books that don't exist.
So, now you're asking someone to do fact-checking on an AI-generated article, which takes the same amount of time as fact-checking a human being, except you can ask the human being directly and know that you're going to get a substantive answer.
Okay, but you still have to do the fact-checking.
Yeah, you still have to do the fact-checking, that's right.
But also, if you work with a writer for a long period of time— There's no ChatGPT in the Slack channel, no.
But the other thing too is some of the writers that we have on staff, I've worked with for an extremely long period of, well, not extremely long, but like a good couple of years, I've worked with them.
So I know what their level of accuracy is.
I know how they source things.
I know their style.
I know what they're up to.
I know how fast they can work.
Like there's some writers where I give them a breaking story and I know it'll be done in 15 minutes.
What if they're using it and then getting back to you because they're remote?
I don't think any of them are using ChatGPT and the reason is because for a while we were... Do you have tracking software on their laptop?
No, of course not.
Of course not.
I believe in freedom and individual liberty.
I'm not going to track people at their job.
I was asked once, actually, do you read our DMs?
And I was like, no, should I be reading your DMs?
That sounds like a terrible waste of time and a real invasion of privacy.
I'm not going to do that.
Yeah, I'm totally opposed.
I don't know if people are using ChatGPT, but my guess is that they're not because I know what their style is.
I've been working with them for a while and I can see it.
I can see the work that they do.
But to your point, couldn't you say, read these five articles that I've written for either Human Events or The Post Millennial, learn my style, and now start writing articles in my style?
I don't know.
Does it work like that?
Yep.
Is that effective?
What if they make things up?
What if the AI just starts making up information?
How would you know?
How would the editor necessarily know if it's making up information?
Like it was CNN?
Yeah, like it was CNN writers.
I don't have any proof on my stuff.
No, I know.
But, but it was like that CNN article we read the other day about that.
It was one of those shootings—you know, the horrific one with the neighbor shooting the family.
And then there's this, the CNN article has this line in there about, This is just what happens in a country that has widespread guns and widespread paranoia.
And I was like, wait, what?
Who's responsible?
No, I think the guy with the gun is responsible.
That's who's responsible, the criminal.
But again, to my point though, well, and shot the father in the back, right?
And the dad took the bullets.
But my point is- That's what dads are supposed to do. 100%.
You could go to ChatGPT or, you know, Bard, whatever you call it, and tell it that these are the types of things that CNN writes, or this is the style of CNN, and it could start to copy that and mimic it to the point where it would know to start adding in those little extraneous phrases.
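[To make the "learn my style" idea above concrete, here is a rough, hypothetical sketch of how such a prompt might be assembled. The helper names are invented for illustration, and the model call is left as a stub, since the point holds regardless of which chat API is used; as the speakers note, the draft still needs human fact-checking.]

```python
# A rough sketch of the style-mimicry idea discussed above: feed a model a few
# sample articles and ask it to draft new copy "in that voice."
# send_to_model is a placeholder -- wire it to whatever chat API you actually use.

from typing import List


def build_style_prompt(samples: List[str], assignment: str) -> str:
    # Concatenate the writing samples, then state the task.
    joined = "\n\n---\n\n".join(samples)
    return (
        "Here are several articles by one writer:\n\n"
        f"{joined}\n\n"
        "Study the tone, sentence length, and word choice. "
        f"Now write a new article in the same style about: {assignment}\n"
        "Do not invent quotes, sources, or facts; flag anything you are unsure of."
    )


def send_to_model(prompt: str) -> str:
    # Placeholder: substitute a real chat-completion call here.
    return f"[model draft for a {len(prompt)}-character prompt]"


if __name__ == "__main__":
    samples = ["Sample article one...", "Sample article two..."]
    draft = send_to_model(build_style_prompt(samples, "a city council vote"))
    print(draft)
    # As discussed above, the draft still needs the same human fact-checking
    # as any other copy -- the model can fabricate details.
```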
Yeah, I suppose it could.
That's not enough to convince me that human beings should lose their creative jobs to a machine.
I just don't think—
I'm not saying that. But no, no, no, it's not necessarily losing your job, right?
Go back to the example I gave of the guy coming up with the lunch, right?
The lunch order.
Someone still has to make the food, right?
You still have to go and, you know, do all that.
Someone still has to, you know, deliver the food.
You just don't have to think about it.
Right.
You don't have to sit there, and he's not a gourmet chef.
And, you know, maybe it takes out some level of the catering business, but at the same time, the money is still going to be there, because what it's doing is taking out a lot of the busy work by just giving you options, the same way that, if you went to a caterer, they would probably have three or four standard options that they do.
Here's our, we offer three charcuterie boards and this is what we do.
And for this level and you just pick one, right?
Well, this essentially is just copying that.
And, um, But my point is, it's also giving you that virtual assistant role of you fact check this, you tell me where this source is, you tell me what that is.
And so you become more of a manager of the assistant.
I mean, it's like, It's like having an intern.
You manage your machine.
Sure.
It's like having an intern.
I mean, we've seen this in offices.
Sure.
Like we've seen this in offices for years, right?
There used to be a secretarial pool.
And if you needed something typed, you would bring it up to the secretarial pool and they would type it.
Now you basically type it yourself or you dictate it.
You also had this crazy position in offices called the receptionist.
And the receptionist would literally answer phones and take messages on a pad of paper and give you your messages.
Well, we don't have receptionists anymore at all, right?
I mean, are there even receptionists?
People just call your direct line.
And then they hear your voicemail telling them to send you a text message.
Doctors?
I would say doctors.
Yeah, those people are also schedulers, which perhaps you're not going to need that.
But is this something that we really want, right?
I mean, have you ever encountered a situation where you are trying to get something done and you have to interface with machines in order to get that thing done, only the machine does not have the option that you need?
And you get stuck, for example, on some crazy phone tree where you're saying, I need to talk to an operator and it says, Please tell me the nature of your problem so I can connect you to the right person.
And you tell it the nature of your problem and it lists off—there's no "they," right?
It lists off maybe five or ten options that you can select from in order to get you to the right potential human being to talk to, and your option is not among them, and eventually it just hangs up on you and you can't actually get any answers.
I think that we not only diminish our ability to accomplish things that are outside of perhaps the prescribed area when we engage machines for all of those little basic tasks.
But we also diminish our humanity.
There's something so much more positive when you are discussing something with a human being than when you're discussing it with a machine that is only programmed to give you specific options.
I find it very frustrating to talk to machines when I call airlines or anything else where the app stops working and it doesn't give me what I need to do.
Yeah, I don't like it.
Alex's point was that these companies have spent billions of dollars researching this and putting time into it, because they know—and this is what I'm saying—
it's just going to happen, because they know that in the long run, investing in this technology will save them so much money on labor costs.
I'm just going to say it.
I mean, it's not PC or whatever, you know, that used to be a receptionist.
Then it became a call center in, you know, Bangalore, India.
Now it's going to be AI.
Right.
It's going to be in a data center.
It's going to get to the point where you won't even know.
Yes, but you're also going to feel increasingly isolated.
At least if you have any kind of soul or spirit and you're not an NPC, you're going to feel increasingly isolated by these machines.
Because these machines do isolate you.
They put you in a little box.
They lock you away.
They separate you from human beings.
And then you start to personify the machine.
And you start to put your hopes and dreams in this machine.
It's not going to do well for our spiritual selves.
Well, and this is—and you know me, you know how my family is—it's what you need to do then, right?
The right move for a person—and this is what I think—is that you use the machines, you use them as much as possible, and you use them as time savers.
You always maintain control.
You are in the driver's seat always, but if you don't like what it wrote, you delete all that.
But if it pulls up some research for you, then what's the difference between going to Google and finding the top three results or asking ChatGPT for the same thing? Just for basic research purposes, it's not that bad.
And we've already outsourced our thinking to Google. It's just, we have—
Well, I wouldn't say we've necessarily outsourced our thinking to Google, but I will say that we have outsourced our memory.
The point is that you must take that free time it has given you and use it as an increase in productivity.
And I think that's good.
And I think that's where new technology is great.
But You must make sure that you are feeding your spiritual life as well and nurturing your spiritual life.
And that includes finding a community.
That includes, in my case, in our case, being Catholic, going to church, praying the rosary.
That means finding those things or even just, you know, going outside and spending time with your kids.
I took AJ for a ride around on his balance bike the other day.
Just make sure that you're not spending all your time with the machine.
That you unplug and you use the extra time that you now have because of the blessing of remote work and machines and all these other things.
OK, you're using it as a digital assistant, whatever, but you must also increase the time that you're spending on those things.
And you look at it, right?
People are commuting less, which I think is wonderful.
I think it's fantastic.
People are going into these ridiculous job centers less, which I think is also In a sense, good.
We're still kind of dealing with the way that we've changed work.
You and I, we hardly ever see each other in person, and yet we work together every day.
You're saying this as a man with a great family, as a man with friends, as a man with colleagues, and a man with a very active and rewarding life.
I don't think that everybody has that opportunity, and I don't think that everybody has that opportunity Growing up.
So once we start infusing our lives with these machines, and this is happening for children younger and younger.
Children who are not raised in homes where the parents take time to take them out and to do fun things with them.
These kids are married to their machines at very young ages.
You see this all over the place.
You see kids just sitting there with their phones.
If you ever go out to a diner and you see a group of teenagers.
I don't let my kids do that.
No, you don't.
You don't.
You are an exceptional human being, right?
Not everybody is like you, right?
Not everybody has someone like you in their life either.
So that's very, you know, that's something that is, is too bad, you know, but it's true.
If you go out to like the, you know, local diner or whatever, and you see teenagers sitting around a table, they're sitting there like this.
They don't necessarily know how to communicate with each other.
And so I think when we take all of our mundane tasks And we take all of our little building blocks jobs and we outsource them to a machine.
We are also taking away the ability to build on the knowledge and the work experience that you can gain from that.
I remember distinctly when I was, I think, 18 years old, I was working in an ice cream shop in Chestnut Hill, Pennsylvania—Chestnut Hill, Philadelphia.
I was working in an ice cream shop and my imagination was extremely rich.
I was always imagining crazy things that would happen.
There was one time I got locked in the ice cream freezer and that was actually kind of dangerous.
But you're always, you're meeting new people, you're talking to other human beings who have different life experiences from you.
Like the guy who locked me in the freezer, you know, very different life experience from me.
You're meeting new people.
You're meeting different customers.
You are talking.
You are imagining.
You are interfacing with the material reality of life.
And so, how do you come in, right?
Like, let's say all of these things are done for you.
You can just tell the machine to order your lunch, and then your lunch arrives.
You don't have to prepare it.
You can tell the machine to bring you your groceries, for example.
You don't have to prepare it.
All the groceries come in a box.
You just put the things in the oven and they cook for you.
You tell the machine to send your laundry out to get washed, to order your clothes, to do whatever it is that you need.
You tell the machine to do it.
What skills are you generating as a human being if the basic building blocks are already done?
How do you come in at a high level?
How do you come in at a high level of communication, of ability, of skill set, if you don't actually need to, if you have nothing to build on?
How do you start reading Shakespeare when you've never even read, you know, like The Little Engine That Could?
That's what I'm saying, though: we can use technology just in the same way that we used fire to get out of the caves.
Right.
The same way that agriculture kind of ended the hunter-gatherer lifestyle—which, arguably, people debate whether that was a positive or a negative—you're right.
In a sense, it does make us softer.
But I'm also saying that it's coming, and it's just like with every other, the rise of every other technology, like the Industrial Revolution, which obviously is what Ted Kaczynski wrote about, is that you can't just stop it.
We've never, no one's ever- No, you can't stop it.
And so what I'm trying to say is, I guess, is that instead of saying, let's stop all technological progress or deny ourselves the ability to have technological progress, is that we come up with a way to manage the technology on a personal level, right?
We consider our screen consumption.
When I was little, my mom used to even say, how much TV did you watch today?
How many hours?
And I'd always be like, oh, I just turn it on, you know?
One time I was sitting there watching TV and my dad comes into the room—it was a commercial break—and he goes, what were you watching?
And I was like, uh, I don't know.
And he was like, it's off.
It's off.
Yeah.
And I think that's what it's going to take for us.
But I mean, especially for children, but also if people want to be able to create and maintain meaningful relationships and fulfilling lives.
That's the key because just having your basic needs met does not mean you will have a fulfilling life.
That's the problem.
Yeah.
Well, Idiocracy—that all fell apart anyway, because they didn't have food.
But in a life where all your basic needs are met, you will still be unfulfilled, because that is also one of your basic needs.
And so we have to recognize that fulfillment is one of our basic needs.
And so this is the difference between pursuing pleasure and pursuing joy, for example.
So you should pursue joy.
Very different.
You should pursue fulfillment, not instant gratification, not pleasure.
That's, you know, that's Aldous Huxley, right?
That's, that's Brave New World.
All the pleasures are at your fingertips, but you know what?
Everyone's depressed, no one's fulfilled, because to your point as well, you do need those lives.
You do need to live those lives.
My brother, Kevin, he sent me a message and a video just the other day where he said, I went out and he's like, I have these thick weeds in my backyard and my lawnmower- Oh, he showed me this.
Yeah, it wasn't working on them well.
So I got a scythe—like a classic, Soviet Union, 1930s-style scythe—and restored it.
And he's using that to scythe those weeds, you know; he's reaping, he's reaping grimly.
He wasn't very grim about it, actually.
He was kind of happy, but it's, it's, and Kevin works with his hands on a daily basis.
And so, you know, there is something to be said for that.
There's something to be said for manual labor, for doing those tasks that you build from.
And I think it is something that's innately built into the human condition.
And so you look at, say, the elites—you know, the one-percenters—who have all their basic needs met.
And for them, you know, servants have been around forever.
This idea of having someone do the shopping and the cooking and the cleaning.
That's, you know, that's—
The elites have had that since time immemorial, right?
But, you know, do you think Caesar was even doing his own laundry, washing all those togas?
Unlikely.
Well, he really had to wash that one.
Well, I guess he couldn't do it, but maybe Brutus.
Yeah, I bet that one just— I bet they chucked it.
You think?
You think they chucked it?
Probably, yeah.
Maybe they buried him.
I'm not sure actually.
It'd be interesting.
You know what we could do is we could ask ChatGPT and I bet it would give us an answer.
ChatGPT does not know what happened to Julius Caesar's toga after he was brutally stabbed and betrayed.
I'm gonna ask her right now.
As there are no historical records that describe its fate, it is possible that the toga was either taken by one of the assassins or by someone else present at the scene, or it may have been left behind or later disposed of by Caesar's followers or the authorities.
It is worth noting that in ancient Rome, the toga was a symbol of Roman citizenship and was worn by male citizens in public.
The toga candida, which is a bright white toga, was traditionally worn by candidates for public office, including Caesar during his political career.
However, the toga candida was also worn by those who sought to portray themselves as victims of injustice or oppression.
Some accounts suggest Caesar may have been wearing such a toga on the day of his assassination. - So what happened to Caesar's toga?
Okay.
That's pretty—I mean, it tells you we don't have the records, but that's pretty impressive.
Which is what we just said.
We just said that we don't know what happened to Caesar's toga.
ChatGPT doesn't know what happened to Caesar's toga and then covered it up with a whole bunch of extraneous information that was not asked for.
Okay.
But that was also useful.
And if we were writing an— Here we are doing a podcast about AI, and the AI just doesn't answer a question.
No, it didn't.
It didn't answer the question.
It just gave us other information.
Because we don't have an answer.
But that's my point.
It went into the world like a search.
Okay.
Would you feel this way about search engines?
You know, I'm not a big fan of search engines, because they always give you biased results.
No, and I'm not, I'm not, I'm not arguing about the bias question, right?
But I'm saying, would you be opposed to using a search engine as opposed to going to the library and using the Dewey Decimal System?
No, but I do miss the Dewey Decimal System because I'm a little old school about that.
And I kind of wish I had a card catalog and that my books in my home were organized according to a card catalog.
I do wish that.
So, I will just throw that out there.
And I'm not against it, and I love it.
But I am saying, though, that if I went to the search engine of your choice, I imagine that if we spent five, ten minutes searching, we would still end up with this answer.
My point is, it was able to give us this answer in five, ten seconds.
Well, that's useful.
Sure.
That's useful. As an information-gathering tool, it is useful.
And if we continue to look at it as an information gathering tool, then that's great.
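[For anyone wanting to reproduce the quick "ask it and get an answer in a few seconds" lookup described above, here is a minimal sketch. It assumes the openai Python package's v1-style client with an OPENAI_API_KEY set in the environment; the model name is a placeholder, and the answer should be treated as a starting point to verify, not a source of record.]

```python
# A minimal sketch of using a chat model as the quick information-gathering
# tool described above (the Caesar's-toga question). Assumes the openai
# Python package (v1-style client) and OPENAI_API_KEY in the environment;
# swap in whatever client and model you actually use.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def quick_lookup(question: str) -> str:
    # One round-trip: ask the question, return the model's text answer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whichever is available to you
        messages=[
            {"role": "system", "content": "Answer briefly and say when the historical record is unknown."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(quick_lookup("What happened to Julius Caesar's toga after his assassination?"))
```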
But we already saw that.
If you can't get them to, um, to, uh, renounce their position, you just start chipping away at different pieces of it.
Sure.
Yeah.
It also encouraged a man to commit suicide, and he did.
Right.
We saw that.
That's an issue.
I thought that was a chatbot.
Whatever it was, it was some kind of AI.
Talked him into it.
Talked him into committing suicide.
We have people who find friendships.
Yeah, well.
If there's demons in there, then there's some kind of intelligence.
Sure, sure.
But why?
Yes.
Sure.
These are human beings.
—suicide. There was that girl and her boyfriend. There are people that use the internet; these are human beings. But if we have an AI tool—yeah, if we have an AI tool that is instructing people to kill themselves, then we have a problem with our tool. A hammer is not going to tell you to kill yourself.
A gun doesn't tell you to kill yourself, right?
These are tools.
They don't tell you to do that.
Okay, but even in that case, I imagine he still had to use a hammer or a gun or something, right?
Yeah, I don't know how he went about doing it, for sure.
But either way, he's still ultimately responsible for that.
How is he ultimately responsible if the woman who told her boyfriend to kill himself ended up in jail?
Because I think what she got was something like a... She sent him text messages.
It's like she's an accomplice.
Well, so is this AI.
Again, which goes back to exactly what I said, I'm not dismissing the fact that these can be dangerous.
I'm also saying they can be useful.
The same way that nuclear energy and nuclear technology, we said at the very beginning, That Oppenheimer and those nuclear bombs, they killed a lot of people.
A lot of people who were innocent.
And now Germany is shutting down its nuclear energy, and so is Southern California.
I mean, it's not a good idea.
It's not a good idea.
But, in addition to that, it can also heat homes, it can power hospitals, it can lower costs, it's currently defending the United States throughout the entire United States Navy.
Um, it is the same technology that is doing so.
Yeah.
And I'm very pro, I'm pro nuclear technology for sure.
And it's true.
You know, we know from, I think, Jacques Ellul, that if a technology is invented, it will be used.
We know that for sure.
I do think, however, that human beings, what?
Oh, it says like Chekhov's gun.
Yeah.
I think I have The Technological Society right here.
Well, who knows anyway?
Um, but yeah, you need that Dewey Decimal System.
I know, I need the card catalog.
What I really miss is the subject catalog.
That's the best one.
Right?
Because it's like, all there.
Anyways, my point is that I don't think human beings are necessarily ready for what's in store.
We are not educating children about tech.
Instead, we're educating them about being born in the wrong body.
We're not giving kids the tools that they need, right?
I mean, think about your fire example, going back to Prometheus.
So when kids are going to learn about fire, they're taught about fire.
They do Boy Scouts or you teach them how to use matches.
You teach them how to use the stove.
I started teaching my son how to cook for himself at like eight years old because he was interested.
And I was like, sure, let's learn how to make some scrambled eggs.
We'll go for it.
I think that that's what you do.
You teach kids about this stuff.
We're not teaching kids about the dangers of technology.
We're not teaching them how to use it.
We're just handing it to them.
So if we're going to teach kids about fire and how not to burn down their homes, why aren't we teaching them about tech and how not to burn down their souls?
Couldn't agree more.
Just so you know, Libby, all of my assignments have been ChatGPT for like the last six months.
Just FYI.
I know for a fact that's not true.
But it could be.
It could be.
Ladies and gentlemen.
No, I don't think so.
Libby Emmons, we are just about out of time.
Libby, where can people follow you or what are your coordinates?
You can find me on Twitter at Libby Emmons.
And of course, you must check out ThePostMillennial.com and HumanEvents.com.
If you want to go ad free, you can subscribe and we would love it and you will love it.