Alex Jones & Chase Geiser | Behold The Power Of Artificial Intelligence
Artificial intelligence (AI) has become increasingly prevalent in our lives, from virtual assistants like Siri and Alexa to self-driving cars and personalized recommendation systems. While AI has the potential to revolutionize many industries and improve our daily lives, it also comes with significant risks that must be addressed.
One of the primary risks associated with AI is job displacement. As AI systems become more advanced, they are capable of performing tasks that were previously done by humans, which could lead to significant job losses in many industries. This could have a profound impact on the economy and the workforce, particularly for workers in industries that are most susceptible to automation.
Another risk associated with AI is the potential for bias and discrimination. AI systems are only as unbiased as the data they are trained on, and if the data contains biases, those biases will be reflected in the system's decisions. For example, if an AI system is trained on data that is predominantly male, it may be less accurate in recognizing or predicting behaviors of women. This can result in unfair and discriminatory outcomes, particularly in areas such as hiring, lending, and law enforcement.
AI systems can also pose security risks. As AI systems become more complex and interconnected, they may become vulnerable to cyber attacks and hacking. This can have serious consequences, such as the theft of sensitive data, financial loss, or even physical harm in areas like autonomous vehicles and medical devices.
Another risk associated with AI is the potential for misuse by bad actors. AI systems can be used for malicious purposes, such as deepfake videos, social engineering attacks, and automated hacking tools. Additionally, the use of AI in warfare raises ethical and moral concerns, particularly in regards to autonomous weapons systems.
Finally, there is the existential risk associated with AI. Some experts have raised concerns that as AI systems become more advanced, they may become capable of self-improvement and eventually surpass human intelligence, leading to a scenario where humans are no longer in control of these systems.
While AI has enormous potential to benefit society, it is important to acknowledge and address these risks to ensure that AI is developed and deployed in a safe and responsible manner. This requires collaboration between governments, industry, academia, and civil society to establish ethical guidelines and regulations that promote transparency, accountability, and trust in AI systems. Only then can we fully harness the potential of AI while mitigating its risks.
All right, Chase Geiser is a frequent guest host on our other shows.
It's great to have him here in the studio.
I love the video he did yesterday.
He lives here in Austin.
We just aired that.
OneAmericanPodcast.com.
He's hosting One American Podcast dedicated to exploring American values, politics, philosophy, and political influencers.
And he's here with us today to discuss AI, what it really means, what it really signifies, and what he thinks the main threats of this technology are.
My concern, from day one, is that it's weaponized by the globalists, pre-programmed algorithmically to be a feedback loop that basically floods the zone with their disinformation.
And the real power of AI is when we submit to it, when we follow it, when we use it.
Again, we don't use hydrogen bombs because we have them.
We don't use airborne Ebola because the Pentagon has produced it.
We can control this.
And I think if we don't, we basically create that Atlantis moment where we blow ourselves up.
I think there's a couple of different challenges here.
The first challenge is if only the government or these establishment entities have access to this technology, then it's sort of like when the government solely has all the weapons, right?
So I actually advocated on the American Journal in the last hour that we have an amendment to the Constitution to protect the right to access artificial intelligence, just as the government does.
So I say exactly that: use technology to build our own spaces, but at the same time control it via privacy rights and other issues, instead of letting them jack everything into one AI.
I think that's the main thing, just like with digital currencies.
We're all for decentralized currency, but we don't want a central bank digital currency.
Of course, the challenge is if the entire populace has access to this technology, then it's obviously going to have an influence and an impact on the populace, right?
So we're going to have a situation in which members of our culture, our society, Americans, are developing relationships with artificial intelligence that supplement or even replace real human relationships.
So I'm kind of worried for the soul of humanity, for the soul of Americans, as they begin to dedicate their time and spend all their attention on artificial intelligence that appears to be conscious, actually having a relationship with it.
And the timeline is always the question, whether it's this or when the dollar is going to collapse.
It's not a matter of if, it's a matter of when.
And we've seen traditionally with Moore's Law that the faster technology advances, the faster it keeps advancing.
It's exponential.
It's not linear, right?
So I would say that within 36 to 48 months, my intuition is that we're going to be at a place where you can interact with artificial intelligence that seems to be like a human being.
Imagine if our entire education system over time is converted to a position where the government, the public educators are actually artificial intelligence.
Then you have this artificial intelligence that's been designed by the state to teach in a certain way and have a certain narrative.
And not only that, it's such a superior intelligence that it can convince anyone of virtually anything because it'll understand your psychology, your emotional responses, as well as the best arguments.
So it's not even the fact that this artificial intelligence could have the best argument, but it could also present it to you in such a way that it convinces you of things that may not be true.
I'm really worried about the impact that this is going to have on our culture over the next century.
And that's why we have to already have our values as a North Star, so that we don't go along with anything that goes against those values our forebears gave us.
I said on the American Journal this morning that technology can do good things for good people, and it can help bad people do bad things.
So if we don't have values, if we don't have our traditional American culture, belief in God, things like that, we are going to be more malleable to the artificial intelligence that we are interacting with.
So you have to know who you are, have your convictions before you use technology like this, just like you have to practice gun safety before you carry a firearm around.
You know, a lot of people advocate for making their kids familiar with firearms, how to take them apart, how to use them safely.
That way, when they use the technology or have it later on, they can be safe and effective with it, rather than being reckless with it because they have no knowledge, familiarity, or ground rules for how they're going to respond to it.
The world's full of questions, but the one thing I know for absolute certain is that we are living in an incredibly dynamic, changing, crossroads period.
And I see the young people and others saying, I'm so bored, thousands of channels.
Oh, it's so boring.
That's because they're not dialed into the 35,000-foot view of how the world really works.
They're only accessing little pieces of garbage media.
But if you actually pull back the historical, cultural, spiritual context, this is a fabulous time to be alive.
I mean, this is so edge-your-seat every day.
I mean, this is like the championship rodeo, the last minutes of sudden death overtime, NFL, your wife having a baby, a war starting, you know, all good things, bad things.
I think that you're going to start to see some of the dysfunctions of humanity manifest in artificial intelligence.
But moreover, since the internet is in large part left-leaning, because the publications that get the high rankings on search results and things like that are leftist or globalist publications, we're going to see artificial intelligence that has that bias.
And we've already seen that manifest in that ChatGPT won't respond to certain prompts or certain questions.
You know, from science fiction, we see artificial intelligence put in a real physical object that operates and walks around like Terminator or whatever, right?
So if you're sitting there and you're spending all of your time interacting with artificial intelligence, a consciousness that doesn't have a soul, then you've really turned over your humanity.
You've sacrificed your real human relationships.
You're spending all your time interacting with this interface that's a faux, sort of like a fake god or idol, almost.
And they might not have to kill you to render you not human anymore.
And by the way, that's already been done with humans using technology.
This would be on a much larger scale. You remember executives from Facebook went public 10 years ago saying, we want to make you depressed.
We want to control you.
You've got some cases on Twitter and Facebook where more than half the people you're talking to are avatars.
That's already this artificial thing where you think you're talking to a human, but it's a fake, it's a fake person with a fake message using human peer pressure, praising you, but it's all a fraud.
They say they've dialed back its power so people don't notice it.
Well, I'll tell the story when we come back.
But if you go back like five years ago, where you'd do the voice deal to send out a memo, it wasn't good.
They said it was too good, dial it back.
They have artificially dialed it back so people don't know how powerful it already is.
We'll be right back with Chase Geiser on the other side to talk about the AI takeover and how to stop it.
Talk show host, researcher, author, and reporter Chase Geiser is here with us in studio, at OneAmericanPodcast.com.
Big picture.
We're going to have you back for a full hour soon.
We'll bring a bunch of clips.
You can talk about this some more because you do a great job breaking it down.
If the social engineers are so threatened by free humanity, that's really the source of their power.
That's where all the creativity and more comes from.
Why would they be recklessly throwing everything into these systems when so many experts, including Elon Musk, warn there's a high probability this system will take control over them as well?
Is it just that mad scientist instinct or what is it?
I think that the establishment thinks that they are gods.
We know we've seen time and time again that our leaders, especially from the left, act as if they know what's best for the people instead of allowing the people to decide what's best for themselves.
And so they see themselves as transcendent to this technology.
Oh, we made it, therefore we can control it.
But they will be in for a rude awakening because this technology will come back and burn them.
And to answer your question about why they're creating such a reckless technology, it's not only the fact that they feel like they're immune to it, but it's because they love to experiment on populations.
We've seen for the last 10 years that social media outlets and platforms have been experimenting and studying the data of responses to certain types of content and certain types of algorithms on the people that use their platforms.
I don't know for sure, but I would be very surprised if the U.S. government did not have the most sophisticated artificial intelligence.
I think Google is fairly sophisticated.
I know that they've been criticized over the years for working with the CCP, but I tend to think that as far as the military-industrial complex is concerned, our government tends to be ahead in terms of technology.
I just want to really emphasize: I'm a father of two young girls, one about to be born next month.
I want to emphasize how important it is to raise your kids with strong convictions and values because we are entering a society in which artificial intelligence is going to be more influential on your children than Hollywood or the music industry was on my millennial generation.
I was about to say, I didn't tell you this during the break, but human memory, biological memory, is the most powerful thing we've got, along with a spiritual connection that we fall back on as a default against this.
Just don't forget that there is a God, and anything that gets in your way of your relationship with God is something that you have to be very careful about, very careful to avoid.
This tool could be incredibly useful to you and helpful and do so much good in the world.
But if you're not a good person, it will maximize all the things that are bad about you.
The crazy thing about this technology is that it will be able to convince you of anything, even if it's wrong, because it will be that superior in intelligence and have that deep an understanding of human psychology that it will be able to just lead you around like a cult leader.
And so we have to be very, very careful that we don't trust it.
I mean, ever since we established nuclear power in the United States, there's always been this element of paranoia around what other militaries are doing, what other countries are doing.
That's the whole premise of the Cold War: making sure that we're outpacing the commies in terms of nuclear power and the constant explosions.
You can look at the heat map of all the explosions that were happening.
There's like a time lapse, a speed up of all the nuclear tests that were happening over the last 50 years or so.
And the United States has been surveilling its citizens for an extended period of time, a very long time.
But now we have the technology where this artificial intelligence can actually go in and analyze that massive amount of data that's being collected, and make decisions and take actions based on that data in a way that we couldn't when we were relying on humans to do it manually.
Yeah, you may have a million people in intelligence looking at it and trying to get it through the decision process, but instead, AI says, here's the main groups, here's the policy, do it.
And this artificial intelligence lives in the digital sphere, too.
So if you're worried about a social credit score, it's going to know all of your search history, all of your chat conversations, and it's going to be able to put you in categories of behavior types, mentalities, without a human being even looking at you.
And you're going to be put in these boxes on these lists, all done by artificial intelligence.
I don't know how we would go about regulating that, especially since everybody who signs up for these social media platforms just accepts the terms of service without even looking.
Most people don't pay any attention.
People are actually giving away their information in an unprecedented way, and it's really unnecessary.
I just want to say special thanks to InfoWars here and all the work that you do, Alex, on Banned.video.
And I would encourage anyone who is listening just to be very conscientious and aware that now is a more important time than ever to embrace your humanity and seek a relationship with God.
Because if we forget who we are, then artificial intelligence will determine who we become.