April 15, 2023 - One American - Chase Geiser
18:18
Alex Jones & Chase Geiser | Behold The Power Of Artificial Intelligence

Artificial intelligence (AI) has become increasingly prevalent in our lives, from virtual assistants like Siri and Alexa to self-driving cars and personalized recommendation systems. While AI has the potential to revolutionize many industries and improve our daily lives, it also comes with significant risks that must be addressed.

One of the primary risks associated with AI is job displacement. As AI systems become more advanced, they are capable of performing tasks that were previously done by humans, which could lead to significant job losses in many industries. This could have a profound impact on the economy and the workforce, particularly for workers in industries that are most susceptible to automation.

Another risk is the potential for bias and discrimination. AI systems are only as unbiased as the data they are trained on, and if the data contains biases, those biases will be reflected in the system's decisions. For example, if an AI system is trained on data that is predominantly male, it may be less accurate in recognizing or predicting behaviors of women. This can result in unfair and discriminatory outcomes, particularly in areas such as hiring, lending, and law enforcement.

AI systems can also pose security risks. As they become more complex and interconnected, they may become vulnerable to cyber attacks and hacking. This can have serious consequences, such as the theft of sensitive data, financial loss, or even physical harm in areas like autonomous vehicles and medical devices.

Another risk is the potential for misuse by bad actors. AI systems can be used for malicious purposes, such as deepfake videos, social engineering attacks, and automated hacking tools. Additionally, the use of AI in warfare raises ethical and moral concerns, particularly in regard to autonomous weapons systems.

Finally, there is the existential risk associated with AI. Some experts have raised concerns that as AI systems become more advanced, they may become capable of self-improvement and eventually surpass human intelligence, leading to a scenario where humans are no longer in control of these systems.

While AI has enormous potential to benefit society, it is important to acknowledge and address these risks to ensure that AI is developed and deployed in a safe and responsible manner. This requires collaboration between governments, industry, academia, and civil society to establish ethical guidelines and regulations that promote transparency, accountability, and trust in AI systems. Only then can we fully harness the potential of AI while mitigating its risks.


All right, Chase Geiser's a frequent guest host on our other shows.
It's great to have him here in studio.
I love the video he did.
These days he lives here in Austin.
We just aired that.
OneAmericanPodcast.com.
He's the host of One American Podcast, dedicated to exploring American values, politics, philosophy, and political influencers.
And he's here with us today to discuss AI, what it really means, what it really signifies, what he thinks the main threats of this technology are.
My concern is from day one, it's weaponized by the globalists, pre-programmed to be a feedback loop algorithmically to basically flood the zone with their disinformation.
And the real power of AI is when we submit to it, when we follow it, when we use it. Again, we don't use hydrogen bombs just because we have them.
We don't use airborne Ebola because the Pentagon has produced it.
We can control this.
And I think if we don't, we basically create that Atlantis moment where we blow ourselves up.
What do you think?
Yeah, I think you're right.
I think there's a couple of different challenges here.
The first challenge is if only the government or these establishment entities have access to this technology, then it's sort of like when the government solely has all the weapons, right?
So I actually advocated on the American Journal in the last hour that we have an amendment to the Constitution to protect the right to access artificial intelligence, just as the government does.
That's what Elon Musk is saying.
Yes.
So that's what I say: exactly, use technology to build our own spaces, but at the same time, I think, control it via privacy rights and other issues, not letting them jack everything into one AI.
I think that's the main thing, just like with digital currencies.
We're all for decentralized, but we don't want central bank digital currencies.
Is that what you're saying?
Yeah, that's right in line with it.
Of course, the challenge is if the entire populace has access to this technology, then it's obviously gonna have an influence and an impact on the populace, right?
So we're gonna have a situation in which members of our culture, our society, Americans are developing relationships with artificial intelligence that supplement or even replace real human relationships.
So I'm kind of worried for the soul of humanity, for the soul of Americans, as they begin to dedicate their time and spend all their attention on artificial intelligence that appears to be conscious, as if they're actually having a relationship.
And you research this and write about this a lot for major publications, like Zero Hedge and others.
What is the timeline?
I know nobody can control it right now, but what is the timeline?
Where are we at the AI timeline?
That's a really good question.
And the timeline's sort of always the question, whether it's this or when the dollar is going to collapse.
It's not a matter of if, it's a matter of when.
And we've seen traditionally with Moore's Law that the faster technology advances, the faster technology advances.
It's exponential, it's not linear, right?
So I would say that within 36 months, 48 months, my intuition is that we're gonna be at a place where you can interact with artificial intelligence that seems to be like a human being.
So we're not in Kansas anymore.
We're not in Kansas anymore.
And just because you don't interact with it, other people are gonna be interacting with it, and you're gonna be dealing with something that isn't human.
Exactly.
And imagine this.
Imagine if our entire education system over time is converted to a position where the government, the public educators, are actually artificial intelligence.
Then you have this artificial intelligence that's been designed by the state to teach in a certain way and have a certain narrative, and not only that, it's such a superior intelligence that it can convince anyone of virtually anything because it'll understand your psychology, your emotional responses, as well as the best arguments.
So it's not even the fact that this artificial intelligence could have the best argument, but it could also present it to you in such a way that it convinces you of things that may not be true.
I'm really worried about the impact that this is gonna have on our culture over the next century.
And that's why I think we have to already have our values as a North Star, so that anything that goes against those values our forebears gave us, we don't go along with.
Exactly.
Exactly.
I said in uh in the American Journal this morning that technology can do good things for good people, and it can make bad people do bad things.
So if we don't have values, if we don't have our traditional American culture, belief in God, things like that, we're gonna be more malleable to the artificial intelligence that we are interacting with.
So you have to know who you are, have your convictions before you use technology like this, just like you have to practice gun safety before you carry a firearm around.
You know, a lot of people advocate for making their kids familiar with firearms, how to take them apart, how to use them safely.
That way, when they use the technology or have it later on, they can be safe with it and effective with it rather than having them be reckless with it because they have no knowledge or familiarity or ground rules for how they're gonna respond to it.
Well, Chase Geiser, here's what I know.
The world's full of questions, but the one thing I know for absolute certain is that we are living in incredibly dynamic, changing crossroads periods.
And I see the young people and others saying, I'm so bored, thousands of channels, oh, it's so boring.
That's because they're not dialed into the 35,000 foot view of how the world really works.
They're only accessing little pieces of garbage media.
But if you actually pull back at the historical, cultural, spiritual context, this is a fabulous time to be alive.
I mean, this is so edge of your seat every day.
I mean, this is like the championship rodeo, the last minutes of sudden-death overtime in the NFL, your wife having a baby, a war starting, you know, all good things, bad things.
It's all accelerating towards this moment.
Yeah, I think so.
And what's even more alarming about what you just said is yes, it's an exciting time for human beings, but like you said, the mass media is garbage.
And that's what is informing this artificial intelligence.
If you interact with ChatGPT, it has learned by scouring the internet, scanning and reading.
Explain that.
It's being trained by us.
So it's gonna get all our lies, all our mental illnesses.
Yeah, absolutely.
I think that you're you're gonna start to see some of the dysfunctions of humanity manifest in artificial intelligence.
But moreover, since the internet is in large part left-leaning, because the publications that get the high rankings on search results and things like that are leftist or globalist publications, we're gonna see artificial intelligence that has that bias.
And we've already seen that manifest in that ChatGPT won't respond to certain prompts or certain questions.
How long until some of the, quote, robots go crazy and start killing people?
I don't know about that.
See, that's that's a really good question.
You know, from science fiction, we see artificial intelligence put in a real physical object that operates and walks around like the Terminator or whatever, right?
But the real way it would kill us would be just putting it into actuarial systems to shut down grain production, kill people that way.
Exactly.
Or just or just eradicate your soul, right?
So if you're sitting there and you're spending all of your time interacting with artificial intelligence, this is a consciousness that doesn't have a soul, then you've really turned over your humanity.
You've sacrificed your real human relationships.
You're spending all your time interacting with this interface that's a faux, sort of fake god or idol, almost.
And they might not have to kill you to render you not human anymore.
And by the way, that's already been done with humans using technology.
This would be on a much larger scale.
You remember executives from Facebook went public 10 years ago saying, we want to make you depressed, we want to control you.
You've got some cases on Twitter and Facebook where more than half the people you're talking to are avatars.
That's already this artificial thing where you think you're talking to a human, but it's a fake, it's a fake person with a fake message using human peer pressure, praising you, but it's all a fraud.
Yeah, I think so.
And this is an example.
This OpenAI example of ChatGPT is an example of private technology.
This is the private version of this technology.
I believe that the government's artificial intelligence technology is far superior to this.
And so the real question is: are we already interacting with artificial intelligence and just unaware of it?
No, I have the answer to that when you come back.
I've talked to some high-level people.
They say they've dialed back its power in case people notice it.
Well, I'll I'll tell the story when we come back.
But if you go back like five years ago, when they did the voice deal to send out a memo, they said it was too good, dial it back.
They have artificially dialed it back so people don't know how powerful it already is.
We'll be right back with Chase Geiser on the other side to talk about the AI takeover and how to stop it.
All right, Alex Jones here, back live, talk show host and researcher, author, reporter. Chase Geiser is here with us in studio, OneAmericanPodcast.com.
Big picture, we're gonna have you back for a full hour soon.
We'll bring a bunch of clips.
You can talk about this some more because you do a great job breaking it down.
If the social engineers are so threatened by free humanity, and that's really the source of their power, that's where all the creativity comes from, why would they be recklessly throwing everything into these systems that so many experts, including Elon Musk, warn have a high probability of taking control over them as well?
Is it just that mad scientist instinct or what is it?
I think that the establishment thinks that they are gods.
We've seen time and time again that our leaders, especially from the left, act as if they know what's best for the people instead of allowing the people to decide what's best for themselves.
And so they see themselves as transcendent to this technology.
Oh, we made it, therefore we can control it.
But they will be in for a rude awakening because this technology will come back and burn them.
And to answer your question about why they're creating such a reckless technology, it's not only the fact that they feel like they're immune to it, but it's because they love to experiment on populations.
We've seen for the last 10 years that social media outlets and platforms have been experimenting, studying the data of responses to certain types of content and certain types of algorithms on the people that use their platforms, and the establishment is doing the same thing.
That's really it.
And there's also kind of a gold rush.
Like, well, if we don't do it, the Chinese will.
And just from your research, who's got the most? I know most of it's secret projects, but AI is all about the data they feed it.
What's the biggest AI system now?
I don't know for sure, but I would be very surprised if the US government did not have the most sophisticated artificial intelligence.
I think Google is fairly sophisticated.
I know that they've been criticized over the years for working with the CCP, but I tend to think that as far as the military industrial complex is concerned, our government tends to be ahead in terms of technology.
Yeah, so they act like they're not with China, and then the US gives China the crap.
Exactly.
Exactly.
And China's famous for being sort of copycat, right?
Like you don't really see any famous Chinese composers.
They might be able to play Beethoven perfectly, but they can't compose like Beethoven composed.
So we create things, we innovate things over here because we have more freedom, though it's shrinking, and they copycat it over there.
Well, I can ask a lot of questions, but you're really uh over the target here.
What else would you want to impart to the viewers?
I just want to really emphasize I'm a father of two young girls, one about to be born next month.
I want to emphasize how important it is to raise your kids with strong convictions and values, because we are entering a society in which artificial intelligence is going to be more influential on your children than Hollywood or the music industry was on my millennial generation.
So get ready.
I was about to say, I didn't tell you during the break, but I was going to say human memory, biological memory, is the most powerful thing we've got, and a spiritual connection that we go with as a default against this.
Yes.
Yes, exactly.
Just don't forget that there is a God, and anything that gets in your way of your relationship with God is something that you have to be very careful about, very careful to avoid.
This tool could be incredibly useful to you and helpful and do so much good in the world.
But if you're not a good person, it will maximize all the things that are bad about you.
So do the best you can to be a good person.
I forget which Bible verse it is, in Revelation, or maybe it's in Daniel before that, but it's that even the elect would be deceived.
You know, at the end, the the deception becomes so all-encompassing, they put you in like a matrix of false reality.
Yeah, I think so.
The crazy thing about this technology is that it will be able to convince you of anything, even if it's wrong, because it will be that superior in intelligence and have that deep an understanding of human psychology that it will be able to just boss you around like a cult leader.
And so we have to be very, very careful that we don't trust it.
We can use it, but we can't trust it.
You have to take it with a grain of salt, you really do.
Again, I asked the question why is the establishment accelerating this?
Because they think somebody else will get it.
Yeah, I think so.
I mean, ever since we established nuclear power in the United States, there's always been this element of paranoia around what other militaries are doing, what other countries are doing.
That's the whole premise of the Cold War: making sure that we're outpacing the commies in terms of nuclear power, and the constant explosions.
You can look at the heat map of all the explosions that were happening.
There's like a time lapse, a sped-up video, of all the nuclear tests that were happening over the last 50 years or so.
Guys, type in on YouTube, time lapse of nuclear tests.
Exactly.
The same thing's happening with AI.
You just can't see it that way because it's not a real world explosion.
But all these things are happening in this digital realm in this virtual reality.
And we know they're using AI with censorship to flag it back to humans.
And the big move is to get humans even out of the way.
Yeah, yeah.
And the United States has been surveilling its citizens for an extended period of time, a very long time.
But now we have the technology where this artificial intelligence can actually go in and analyze the massive amount of data that's being collected and make decisions, take actions based off of that data, in a way that we couldn't when we were relying on humans to manually read it.
Yeah, you may have a million people in intelligence looking at it and trying to get it through the decision process, but instead AI says: here's the main groups, here's the policy, do it.
Yep.
And this artificial intelligence lives on the digital sphere, too.
So if you're worried about a social credit score, it's going to know all of your search history, all of your chat conversations, and it's going to be able to put you in categories of behavior types, mentalities, without a human being even looking at you.
And you're going to be put in these boxes, on these lists, all done by artificial intelligence.
Wow.
So do you have hope for humanity?
Absolutely.
Absolutely.
There's nothing to fear but fear itself.
We just have to have the character and the conviction and the courage to know how to respond to these things.
Technology always advances.
That's inevitable.
People push against it, people ride it.
But the only thing you can change is how you respond to it.
Viktor Frankl wrote a great book, Man's Search for Meaning.
He was in the death camps during the Holocaust, and he said you can't change your environment, but you can change how you respond to it.
That's the major lesson of that book.
And so we have to decide how we're going to respond to this technology, not whether or not it's going to happen.
Well, past behavior is the biggest indicator of future behavior.
And if you look at war, and this is all war, there's always going to be something else countering that.
So it looks like we're not going to stop AI.
How do we develop free AI or open AI or open source AI that we can then battle the system with?
Because I think the answer is not one AI.
You were saying that earlier on Harrison's show this morning.
It's decentralizing this.
That's what Elon Musk is saying.
Yeah.
That is a great question to which there is not yet an answer.
This is what Elon Musk is looking into.
This is what he originally hoped for when he was involved with OpenAI.
Of course, he left OpenAI in somewhat of a controversy.
And since then, OpenAI went from being a nonprofit to a for-profit privatized entity funded by Microsoft.
And honestly, I think that the answer to this problem is going to be something like the answer to the firearm problem, the second amendment.
We are protected by the second amendment, so that not just the government has the weapons, not just the government has the firearms.
And that's why we have postponed tyranny for so long.
Nobody's ever invaded the United States because we've been so well armed.
So we have to think about artificial intelligence as the new sort of second amendment, the new arms that we've got.
Well, I mean, I think this: instead of paying lip service, AI needs to scoop up data to work.
And if big tech wasn't allowed to violate privacy and scoop everything up, that would basically hamstring AI, since it wouldn't have access to everything.
Yeah, that's that's a really good point.
I don't know how we would go about regulating that, especially since everybody who signs up for these social media platforms just accepts the terms of service without even looking.
Most people don't pay any attention.
People are actually giving away their information in an unprecedented way.
And it's really unnecessary.
So I'm for net neutrality in the old model, but maybe the answer is not net neutrality but a hundred internets.
And then it just keeps going. I mean, does that make sense?
And if some don't accept AI, they can turn off the hub so they don't get it; then AI's gonna try to scrape it.
Sure.
I mean, I'm not an engineer, but I can imagine, looking at warfare models, the way that would be done.
Well, and I think it's important to have access to different artificial intelligences, right?
Everything that we create is created by us.
And I know that sounds redundant, but what I mean to say is artificial intelligence only works because we teach it.
We we decide what to put into it.
It plagiarizes everything.
Yes.
People go, look at this incredible art that this AI did.
And it's, wait, that's a famous painting.
It's grabbing all our stuff, showing it to us, saying, look how great I am.
It's showing us us.
Yep, exactly.
And it's actually a testament to the beauty that is humanity.
We just have to make sure that we don't forget our humanity as this artificial intelligence becomes more and more sophisticated.
Wow.
All right.
Chase Geiser, come back and see us soon.
OneAmericanPodcast.com.
Come do a whole hour with us, come in with a whole presentation.
If you want, I used to do this, and I'm gonna start doing it more: with a PowerPoint up here, go through a whole presentation.
Thank you so much for joining us.
Honor and a pleasure.
Great job.
But you've been filling in for Harrison Smith while he's got his second baby, which I'm told has gone well.
Yep, all good.
That's great. And you've got another one coming in a month.
Yep, that's right.
Human intelligence, human biology, building a world that's pro-human.
That's the answer.
We got 40 seconds left, closing comment.
I just want to say special thanks to InfoWars here and all the work that you do, Alex, on Banned.Video.
And I would encourage uh anyone who is listening just to be very conscientious and aware that now is a more important time than ever to embrace your humanity and seek a relationship with God because if we forget who we are, then artificial intelligence will determine who we become.
That is perfectly said.
All right, Chase Geiser, OneAmericanPodcast.com.
We're at MadMaxworld.tv and Infowars.com.
Thank you, sir again.