April 14, 2023 - Ron Paul Liberty Report
21:47
"Artificial Intelligence" -- The Good & The Bad

"Artificial Intelligence" is a tool, and tools can be used for good and for bad. Central planners think AI is a key to a man-made utopia -- it certainly isn't. As with all challenges that we face with any kind of man-made tool, it comes down to how much centralization and concentration of power exists. The greater the centralization, the greater the dangers. "Artificial Intelligence" is yet another reason why it would be wise to decentralize power and speak out for a limited government with limited powers.


Birch Gold and Monetary Chaos 00:02:02
Hello everybody and thank you for tuning in to the Liberty Report.
With us today is Chris Rossini, our co-host.
Chris, welcome to the program.
Hello Dr. Paul, great to be with you.
Well good.
You know we're going to be talking about artificial intelligence but the truth is I'm still trying to figure things out there because I thought all the artificial intelligence has already occurred up in Washington.
Everything seemed to be artificial to me.
But anyway I'll try to understand that title because it's an interesting subject and there's potential for it, both good and bad as we suggest.
But before we start talking about that, I want to talk about our partners at Birch Gold.
We've been working with them now for more than a year and very pleased to be with Birch Gold.
And of course, the subject is gold when you deal with Birch.
And there's something available that they'll give if you're interested in what their positions are on gold.
If you use the text number on the bottom of the screen, text Ron at 989898, you can get some of this information.
It doesn't cost anything.
But this is an important time to try to sort things out because a lot of people have been investing in gold lately.
But in the last two days, me personally, how I react to that is I'm glad I'm not a day trader or an hour trader, because boom, boy, what a wonderful day.
And then boom, bad day.
And of course, I've thought about gold long term, savings, protecting investments, protecting retirement benefits.
And I got interested in that a long time ago, as most people who have watched this program know.
And that helped me get involved in the monetary issue and in the politics.
Because if you look at it long term, thousands of years even, we know that gold has been a haven from the abuse of money by governments.
Government's Gambit with AI 00:16:34
And this to me is very important.
And I don't see how you can have a healthy economy if you don't have a sound monetary system.
And this is the reason why we emphasize this.
Birch Gold emphasizes what your options are for getting protection from the chaos that the government creates by manipulating the value of the dollar.
And that is what we want to help people do.
And that is, if you want to follow up and get some information, text Ron to 989898.
Very good.
And we'll be back, Chris, with you in a second here.
But I want to just introduce our program, artificial intelligence.
You know, the other day I went out and bought a little mower.
A little mower costs a few thousand dollars.
But I went to pay for it.
And one of my habits is, you know, I like to think about the person I'm giving a check to or paying.
And if I can, I'll pay them cash.
But then I say, well, you know, some people don't like checks, or they don't like the credit card because they have to pay a fee for it, while with a check they don't have to pay that extra fee.
So I frequently think about the person receiving this.
So I wrote a checkout for this instrument that I was buying.
And believe me when I tell you, the check was good, but they put it in the machine.
They went to artificial intelligence.
They went in.
How is this check?
Rejected.
Put it in again.
Rejected again.
Let me tell you, I do have money in the bank.
And they kept rejecting it.
And I said, well, what's going on?
What do I do?
Well, you have to go to the supervisor, maybe, or you have to call your bank, and you have to go through the supervisor and all this.
So I did that.
And finally, I said, well, what else?
What are my other choices?
I said, will you take a credit card?
And they said, sure, we'll take a credit card.
Sign up right here.
Take our credit card.
And we get you on there.
So if that's artificial intelligence, I was looking for a little bit more back and forth.
And I was surprised that the conditions are changing, that they do want these records, and they're always looking for another angle.
So I guess it's the information you put into those machines that makes all the difference in the world.
And that's one of my questions I always have about artificial intelligence, because I keep thinking that I ought to be an expert.
I've been working with a bunch of people who have been using artificial intelligence for decades, and sometimes they go astray.
They say one thing and do another.
But this is supposed to help iron things out.
But I got to thinking about COVID.
You know, if you're going to do, if you want the computer to help you out, and you list a group of symptoms, and it sounds like, boy, it sounds like you have the flu, it sounds like you have COVID.
And then the government comes in, which they did, even without artificial intelligence.
They were very anxious to tell us what you should do.
And of course, it turns out that everything they were telling us was lies and misinformation and not good medicine.
So yes, it is true that there's going to be examples that will do exactly the opposite.
Maybe the computer can catch a mistake by the doctor.
But I'd have a hard time doing anything that would leave the doctor-patient relationship and thinking that the computer can comprehend emotions and different things that could go on when you have doctor-patient relationship.
Anyway, I think this debate is going to go on for a long time.
And Chris, I know you've been thinking about this project as well.
Yeah, Dr. Paul.
And AI is a tool like any other, any man-made tool, and it could be used for good and for bad, and it will be used for both.
There's a lot of good stuff.
You know, in our phones, AI is used.
It's pretty neat what they can do with pictures, how they could even sort them and find them.
And it's used a lot in market activities.
But then you have the downside.
If it's in the hands of government and their cronies, it could be used for power and tyranny.
So this is no different than every other tool that we have.
And it comes down to centralization.
You mentioned COVID.
We saw what happens when there is centralization.
It affected all of us in a negative way.
We don't have a limited government.
We have these big bureaucracies, FDA, CDC, concentrated media.
If we had decentralization of power and we were left to our local communities, our states, it would have looked a lot different than what ended up happening.
So the same can be applied to artificial intelligence.
Put it into a big power center that can affect all these different people, and then you could have these huge problems.
So I think artificial intelligence is another signpost.
Hey, decentralize.
Otherwise, if you have centralized power and they have this in their hands, they're going to do damage.
I think people have to be cautious with this.
And Chris, you make some good points because the government can abuse it.
But the thought that comes to my mind is a computer really can't think for itself.
It's magnificent, but anything you get out of the internet or any place, somebody had to put it in.
And I don't think there's anything that you can get out of there.
It doesn't have the emotions that are necessary.
And so many things that we do in medicine, especially, you have tone, you have personalities involved, and you can't substitute for that by saying, well, what we're dealing with is objectivity, and it's pure and simple, and this is it.
And then where does the subjectivity come in?
Whoever writes up the program, whoever does that.
And it has to be somebody to put the information in there.
And the machines pull it there quickly and it can be used in various ways.
But I wonder about counterfeiting.
They counterfeit money and who would ever think about counterfeiting gold and all this other stuff.
But they could counterfeit these things.
And then when I was reading one article on this, I couldn't get straight who was in quotes.
Was the whole thing in quotes?
It was artificial.
Or was it made up conversation?
And it looks like if they use artificial intelligence and try to gather the information and relay it in a conversation, is it real or isn't it real?
And I think I have a little trouble figuring that out, but that might be my inexperience with computers and all.
But I just think that there's an element there that doesn't solve the problem.
The biggest problem we have right now in this world is sorting out the truth from fiction.
And you might say, well, this will sort it out.
It will reveal the truth.
And I don't quite buy into that.
That would be, if you want to know the truth about your religious beliefs or something, there's always going to be subjectivity involved in that.
It reminds me a little bit about Austrian economics because what is the value of such and such?
Well, we'll just measure supply and demand.
But there's other things that do it because there's a subjective element on how bad somebody wants something.
And I think in picking and choosing and having an artificial conversation, you're not positive, because I ran into that.
I said, now, who's saying what is it in quotes?
Does that mean the individual that they're mimicking actually said this or not?
But they, you know, like the Rogan article, there was a disclaimer on there.
But I don't think you give permission for it.
Should you have to have permission if somebody's going to write up an artificial conversation?
I just don't think that would be good.
You know, they can pick and choose.
And the experts will be the kind that will be able to cover all the rules that government writes and mimic a real conversation.
So I think the individual who happens to want to believe things, if they can mimic all the news we get on the internet and social media, what the government says, what's in our movies, what's in our newspapers, and distort the information, we've already used the example of medicine.
They did it even without artificial intelligence, but what if the artificial intelligence knows what the rules are, and then they come up and they tell me, well, they have COVID and this is what you do, and I don't do it, and I get sued because things didn't work out the way we thought.
Who's telling the truth?
Can you be held liable for this?
What if somebody does something and there's no explicit permission?
It sounds to me like somebody might be able to do this, and all they have to do is put up a disclaimer that says, well, this is an artificial interview.
It's not real.
And then they put it in quotes.
I think it's going to confuse a lot of people.
I want clarity.
And that's why, if you're a good physician, I think that's what you work for: clarity and understanding the emotions of the patient.
How are the questions asked?
Even in here, how a person asks questions to the artificial intelligence operation makes a big difference, whether there's a personality to it, because you can imply a lot of things.
So I have a lot of questions in my mind, and I think it almost means just go back to telling the truth in all that we do, and we might have less of a controversy.
Chris?
Absolutely, Dr. Paul.
And of course, as could be expected, central planners, people who lust after power, they see AI as a tool for a man-made utopia.
It isn't, because central planning can't work, even with AI.
As you said, Dr. Paul, humans, we are emotional, we learn, and there's no way to anticipate how a person learns.
You know, there's a lot of people that would never get another government-endorsed shot again.
We adapt, we're always valuing in the present.
And artificial intelligence can have past data, but it can't possibly anticipate all the unknown variables that exist in our universe, no matter how much data it has.
It will always be dealing in probabilities.
But we already always deal in probabilities.
You know, business owners went to work today under the impression that there's a high probability that people will come in.
But they don't know for sure.
A tornado could rip through, and nobody comes in.
So everything is a probability, whether you're trading in the marketplace, in the stock market.
You know, we're always anticipating the future without knowing it.
So AI, now it could be better at calculating probabilities, but it'll always be working with probabilities and never be able to precisely predict the future.
You know, I can understand how this would be magnificent in collecting information.
And if you want to, you're really honestly looking for who is this person, what does he do, and what are his positions.
But I think it's the format that is sort of sneaky.
That's the way I feel about it.
That it sneaks it in, and then there will be references.
And then what is the recourse?
When does it become fraudulent?
When does it become libelous?
You know, this to me raises questions.
The one thing I believe is that the computer can't think for itself.
It only can regurgitate.
There are people who can regurgitate and manipulate and put it in, and it's sort of entertaining.
It comes across as a story, but that might make it just more seductive.
Maybe they could sneak the things in.
Back to the analogy that I use of natural immunity.
What if they do that constantly?
They did it even without this automatic thing where there's a lot more pressure.
But this takes a real job because you have to have somebody else in.
Yes, what have they implied?
What are they pretending to know?
Did they make a mistake?
And how about somebody that is a pretty good imitator?
You know, it looks to me like there could be artificial artificial intelligence, you know, fake artificial intelligence.
And once again, it's when you go back to the very basic effort of people getting along and understanding and doing business, and that is people telling the truth and having honesty in money, honesty in government, honesty in the judicial system.
And the artificial intelligence doesn't have that capacity.
It might say, okay, well, we're going to look into this.
We'll look and see what Ron Paul ever said about this and come up with stuff.
But that, of course, is risky too because they can always interpret things a little bit differently.
And I may be more cautious than most, but generally I've always been very open to technology.
I think it's just great, even though I'm not a technical person and don't understand how it works.
Because when the internet came out, I really got excited.
Still, I'm excited about it because we're, I think, using the internet right now.
It's available.
But think of the harm done by the internet when the internet and the social media become partners with the government.
Well, what if everything that comes out of artificial intelligence, you have to partner with the government and get it approved by the government?
It could be very bad.
So I think that's why I like to think and make it simple, make it clear.
People aren't allowed to tell lies.
We're looking for the truth.
And we're looking for identifying that.
And we should have a code of ethics and a legal code that we know what we ought to follow.
Chris?
Excellent, Dr. Paul.
Here are my closing thoughts.
You see, or at least I see on social media, there have been calls to pause AI development, you know, so that regulations could be made.
And regulations, you know, who's going to be the regulator?
Government?
We just went through COVID.
Look how they regulated those vaccines.
They failed so bad on that.
And now we're saying we want them to regulate AI.
I mean, that sounds like a big gamble to me because government, as a quote-unquote regulator, they pick favorites.
That's how you'll monopolize AI.
Yes, these are the authorized AI users, this company, this company, this company.
So you've got to be very careful when people call for government regulations.
A lot of times, that's their avenue to gaining a monopoly in the space.
So, you know, I'm not saying that there shouldn't be any type of rules with AI.
It's just be very wary when government regulations are mentioned.
We've had enough experiences with this.
We've got to learn at some point that government as a regulator is really poor when compared to the marketplace.
So, you know, with AI there's a lot of things that can be good.
There's going to be a lot of things that are going to be bad.
We don't have the answers to this.
We just have some ideas on how maybe to think about it.
And maybe let's not shoot ourselves in the foot by calling for these big centralized powers to be in charge of AI.
Government Regulation Warnings 00:02:56
Very good, Chris.
And I'm going back to my original issue, and that is machines can't think for themselves.
They're very important.
They gather information.
But there's the dehumanizing aspect of this, because if it were the perfect instrument that can relay information from the government, the perfect instrument to make sure a doctor never makes a mistake, you know, that'd be one thing.
But what if the government makes a mistake?
Or what if the government cheats and it gets involved in this?
And what are these regulations to be?
But the relationships are so important that if you lose that, society becomes very sterile.
And when I think of dehumanization, we're getting it in this country already, and it could get worse.
But all authoritarian governments sort of dehumanize people.
And anytime, you know, in an authoritarian state, even if they talk or tell a minor little joke, they've got to be watching who's under the table that might hear it.
We're even getting to that point.
So, if everything can be recorded and repeated and put into a conversation, I put up some warnings.
We ought to be careful.
But, like Chris has pointed out so clearly, most of this stuff can be good or bad.
And I mean, the inventions can be killers or life-saving.
And that means that we as human beings have to have a code of conduct and have a set of ethics and some principles that you can share so people can get along.
And that is the reason why, in a libertarian society, a lot of that is answered if everybody were a libertarian, because you're not allowed to lie, cheat, steal, or kill.
And our governments do that all the time.
Is this going to even deal with it?
No, the machine is going to tell you, okay, ABCD, guess what?
It says go to war tomorrow with China.
That's the only way we can solve our problem.
No, I want to have the humanity aspect of it on personal liberty and some basic rules.
I could do it in less than ten, the things that we have to follow.
But anyway, it's interesting stuff.
It's very important.
There's a lot of technology involved.
But I think, hopefully, the discussion we have today is helpful to people.
I know most of the people I talk to really think it's super exciting.
And like I was, I was super excited about the internet.
And this is exciting, but also somebody has to say, you better watch out.
It could be risky.
I want to thank everybody for tuning in today to the Liberty Report.