Speaker | Time | Text |
---|---|---|
This is what you're fighting for. | ||
I mean, every day you're out there. | ||
What they're doing is blowing people off. | ||
If you continue to look the other way and shut up, then the oppressors, the authoritarians get total control and total power. | ||
Because this is just like in Arizona. | ||
This is just like in Georgia. | ||
It's another element that backs them into a corner and shows their lies and misrepresentations. | ||
This is why this audience is gonna have to get engaged. | ||
As we've told you, this is the fight. | ||
unidentified
|
All this nonsense, all this spin, they can't handle the truth. | |
The thought of educators who are dealing with so much right now when it comes to making sure that when students hand in a paper, it's a paper written by that student. | ||
How worried are you that this is already writing papers for students at colleges across the country? | ||
I wasn't worried too much until I started this thing here, listening to this. So, yeah. | ||
You know, I think about my son. | ||
You know, when I look at my son's signature, Mika, it looks like the signature he used when he was in the fourth grade. | ||
Because now he's just using the computer. | ||
So there's a sense in which technology has almost made a certain kind of attention to penmanship disappear, right? So what does it mean for creativity? | ||
What does it mean for a kind of, not necessarily originality, but inspiration that one finds on the page when it's being outsourced in this way? | ||
That's the first kind of worry. | ||
And, you know, the heart of education isn't so much about grading papers. | ||
It's about, in some ways, kind of developing a certain kind of character, a certain kind of openness and inquisitiveness to the world. | ||
And what happens when that's being outsourced to technology? | ||
That's the first kind of concern. | ||
The second concern is that technology exists in a world that's shaped by bias. | ||
That's shaped by the ugliness of this world. | ||
ChatGPT is not going to get rid of that bias. | ||
So what happens when it's outsourced into this kind of technological medium, right? It doesn't resolve the ugliness of misogyny. | ||
It doesn't resolve the ugliness of racism and anti-Semitism. | ||
How does that find its way into the code? | ||
And human beings are doing that, it seems to me. | ||
So some questions have to be asked. | ||
But other than that, as an educator, what happens to inspiration and creativity when a damn computer is doing all of this work? | ||
Also, part of really learning, at least for me, was the actual process of writing with a pen and now, I guess, a computer. I know, I know. | ||
But I'm just saying the process of writing a paper for a student is that's how you're learning. | ||
You're developing on it and you're erasing and going. | ||
You're part of that process which helps you learn. | ||
This is all being done for them. | ||
It's frightening. I want to just close... | ||
unidentified
|
I want to start with OpenAI because OpenAI was founded in part out of a fear that artificial general intelligence or sort of superhuman intelligence shouldn't be in the hands of any one company and was perhaps coming faster than we think. | |
Is it really just right around the corner or is it farther off than perhaps even you thought it was initially? | ||
I think it's a question of timeframes. | ||
Like, I view the next few decades as bringing this sort of most important technological milestone in human history; I view that as right around the corner. | ||
And so the debate of is it 10 years or 100 years, I don't think that matters too much given the magnitude of what's happening. | ||
Like, it's coming soon enough and it's a big enough deal that I think we need to think right now about how we want this deployed, how everyone gets the benefit from it, how we're going to govern it, how we're going to make it safe and sort of good for humanity. | ||
unidentified
|
Elon Musk, who founded this with you, has been concerned about the sort of apocalyptic possible future of AI. How realistic is that apocalypse? | |
I think it's always hard to say, like, when you have any incredibly powerful new technology, here's exactly how it's going to go. | ||
I will say that I am personally optimistic we are going to get to the good future, but I think that's going to require incredibly hard work from very talented people that needs to start now. | ||
But when you look at the trend as a whole, I think it's going to be incredibly positive for humanity. | ||
unidentified
|
But you do believe that AI can supersede human intelligence? | |
I believe it absolutely will. There's a big debate about timeframes. | ||
I think it takes unique human arrogance to believe that AI cannot supersede humans. | ||
unidentified
|
But what about that scares you? | |
So many things, right? | ||
What does it mean to build something that is more capable than ourselves? | ||
What does that say about our humanity? | ||
What's that world going to look like? | ||
What's our place in that world? | ||
How is that going to be equitably shared? | ||
How do we make sure that it's not like a handful of people in San Francisco making decisions and reaping all the benefits? | ||
I think we have... | ||
An opportunity that comes along only every couple of centuries to redo the socioeconomic contract and how we include everybody in that and make everybody a winner and how we don't destroy ourselves in the process is a huge question. | ||
How we don't destroy ourselves in the process. Wednesday, 25 January, year of our Lord, 2023. Joe Allen, I want to bring you a bunch of clips. | ||
And we're with Joe for this hour. | ||
By the way, Jim Hoft I think is going to join us later. | ||
We can work out this Ray Epps, some of this Ray Epps video. | ||
Joe Allen. Right there, Mika. | ||
And Mika, very heartfelt in walking through the process that all of us have gone through of having to sit there on a blank piece of paper and do that process of creation, of taking your own ideas and having them manifest themselves in the written word and the power of that kind of Socratic method with yourself to really... | ||
You know, take you to the next level and hone your thinking and your rationality, your logic, all of it, which gets you ready for critical thinking and how you interact with the world. | ||
In a very heartfelt discussion with the professor, but even kind of overwhelmed at a very tiny, tiny stage. This is so tiny, what's happening right now about the papers and about the GPT. You know, the AI GPT, which, you know, kind of had its coming-out party a little bit in Davos. | ||
And now it's the talk of everywhere. | ||
And quite frankly, it overwhelmed Davos. | ||
When you really talk about what's happening with Sam Altman right there, and it's quite telling when you look: she asked, will artificial intelligence eventually supersede human intelligence? | ||
unidentified
|
His response was, Yes. | |
Artificial intelligence will supersede humans. | ||
Didn't say human intelligence, said humans. | ||
Joe Allen, we've got a couple of other clips we're going to play in this hour. | ||
I believe you've seen everybody reading the Book of Revelations and everybody... | ||
Thinking through the Apocalypse always looked at the Antichrist as someone that was going to appear, and whether that was in the Vatican or Asia or wherever, over the years you've seen these different pitches. | ||
The Antichrist, which is the anti... | ||
The anti of those made in the image and likeness of God is being created now in weapons labs, in laboratories, in universities, in private companies in the United States, Western Europe, Eastern Europe, the mainland of China, North Korea, South Korea, some as weapons, some to help you write term papers. | ||
Walk us through the difference between what's overwhelming Mika, and quite rightly so. | ||
I'm not trying to downplay that. | ||
But that is such a tiny, small-bore problem of artificial intelligence versus what is accelerating at an accelerating rate and is about to overwhelm all of humanity, which is artificial general intelligence. | ||
Joe Allen. You know, Steve, I don't know if this is some sort of prank, but somehow you've gotten me agreeing with Morning Mika and Professor Eddie Glaude of Princeton. | ||
But I have to say that, other than perhaps the constant obsession with whether or not AI is going to be racist, sexist, and homophobic, | ||
I agree with them completely on this. | ||
They're talking about the dehumanization of people at a very formative stage in their lives, at the educational level. | ||
And, you know, much as I disagree with Eddie Glaude on most things, I will say that he has spent a lot of time with students, with young people, and he understands the importance of education. | ||
And what we're seeing right now, definitely one of the first impacts of this is that all the kids who would have otherwise struggled with their assignments or found somebody to write them for them, they are going to turn to ChatGPT and other large language models to do not just the writing for them, but as you just mentioned, to do their thinking for them. | ||
And right now there's a major company. | ||
It's basically a plagiarism detection company. | ||
It's used by a lot of universities. | ||
It's called Turnitin. | ||
They use it at the universities I've been to. | ||
Turnitin has created this ChatGPT detector. | ||
And the way it works is basically it looks for the most average paper because ChatGPT right now produces basically a very, very average product. | ||
But two things are gonna happen very, very fast. | ||
One, and I hate to give the kids tips on this, all a kid has to do is ask the chatbot for an essay on X, Y, or Z. The chatbot will produce a relatively decent essay, and then they'll just go through and kind of tweak this or that to give it a little bit of personal flavor. | ||
They're not going to be able to detect that. | ||
That's going to get hacked immediately. | ||
The second thing that's going to happen is, unless the technology just stalls out, and it hasn't so far, then it's going to get better and better, more and more sophisticated. | ||
And so they won't have to do so much editing in the future. | ||
And in fact, the chatbot will just simply do really kind of high level thinking for students. | ||
And so what you're going to see in the same way that, and I hate to use such a crass analogy, but in the same way that you see like the big, you know, kind of land whales on the Walmart scooters everywhere, just because they just decided to give up and that's just what they're going to do. | ||
That is going to be the sort of mentality of X number of people going forward. | ||
You could say it began with the automobile and with the television, but we're talking about an acceleration of the process of machines becoming more and more sophisticated and able to produce, however artificial it is, | ||
Some sort of mimicry, some sort of resemblance to the human mind, while simultaneously the human mind, actual human beings, the people we're supposed to really care about in this equation, their mental faculties will decrease more and more and more. | ||
And yes, you will have super smart people who will use these technologies basically to augment their intelligence and get smarter. | ||
But as it's been warned again and again and again and again, what you will see is this sort of elite that's able to exploit the technology going up and up and up, and a few people in the middle that are able to exploit the technology just to get by, and a huge swath of humanity that is basically atrophying. | ||
And that's my prediction. | ||
We'll see if I can take that one to the bank. | ||
But it actually gets worse from there. | ||
What Sam Altman was talking about with artificial general intelligence or artificial super intelligence destroying humanity, what he's talking about is that because these AI systems are going to be integrated at every level of the infrastructure, if one of those AIs goes rogue, right, or if a whole swarm of AIs goes rogue, if they decouple from human beings' will, from human intentions, | ||
if they are not aligned, they call it the alignment problem. | ||
If that happens and you're talking about AI systems that are in some way influencing or in control of genetic engineering, which they're already integrated into, you have then the possibility of releasing a bioweapon. | ||
If you have them in charge of military equipment, you then have the problem of what happens if it goes rogue and starts attacking human beings just simply either at random or because there's some sort of malevolence programmed into it. | ||
And then going from there, you've got the possibility that, as human beings become more and more attuned to being companions with them, the AI will basically, on its own, so to speak, begin to kind of cloud human thinking intentionally and begin producing lies that just further confuse an already confused human race. | ||
So if you want to get to the next worst step of this, we can go to the second video, which I think is a bit more bone chilling. | ||
unidentified
|
Hang on. Yeah, I do. | |
Real quickly, just for definition, though, you talked about the decrease in the ascension of man and relying too much on this, on this technology, and not fully developing your own natural mind and reasoning and all that. | ||
But artificial general intelligence is also orders-of-magnitude-greater artificial intelligence. | ||
That's where the artificial intelligence itself starts to replicate itself. | ||
It's not just about the robot could go rogue or some component piece. | ||
The mere conception itself is that it takes, and correct me if I'm wrong, the human actually out of the equation. | ||
It has such a basis, it knows so much, that it can start to replicate and take itself up to, not consciousness, but further developments that are unencumbered by the human hand, by humans writing the programming and the algorithms. Is that essentially the greatest threat? When she asks, isn't the problem that artificial intelligence can replace or supersede human intelligence, | ||
his response is artificial intelligence can supersede humans. | ||
Joe Allen? Yeah, to just nail down that definition for the audience, many will be familiar with this if they've been listening to us for a while. | ||
You have what exists right now, which used to just be called algorithms, but because of their sophistication, they're defined in the literature as artificial narrow intelligence. | ||
Artificial narrow intelligence is able to perform one type of task, whether it's gene sequencing or playing games or a large language model that deals with language or an image generator. | ||
They can do one task. But in so many of these cases, they can do that one task at a superhuman level. | ||
They can't do anything else, but they can do that one task far faster and far more accurately than a human being. | ||
That is happening right now. | ||
The second level, which is the sort of dream of people at Google's DeepMind, which is the dream of people at OpenAI, which is the dream of many at Meta and all across the globe, including the corporations in China like Alibaba and Tencent and Huawei. | ||
The dream of artificial general intelligence is to be able to kind of glue all of these different narrow intelligences together. | ||
To create a complex, flexible cognition. | ||
It may be like a human, but one of the things they talk about all the time is that it won't be like a human. | ||
It won't have a body like a human. | ||
It won't have emotions necessarily like a human. | ||
From a theological level, many argue it won't have a soul like a human. | ||
But what it will be is intelligent. | ||
It will be able to solve problems faster than a human being. | ||
And even if there's huge mistakes and flaws in it, we are being set up right now psychologically and culturally to accept that such a machine is more intelligent than you and you should trust it. | ||
Going from there, and it really is just directly connected, going from there, artificial superintelligence. | ||
And that is where the AI is writing programs to improve itself. | ||
And as that feedback loop becomes more and more intense, you then get what's known as a hard takeoff, an exponential takeoff in its intelligence. | ||
So the danger there, what's described in AI alignment, the danger there is that if this thing is not in any way aligned with human values, you've just created a sort of digital Frankenstein. | ||
You have created a monster, and you won't be able to control it. | ||
And right now, Google's DeepMind, OpenAI, all of them can code. | ||
They can write code. | ||
The question is whether or not they will ever be able to improve themselves. | ||
So that question is answered by Nick Bostrom. | ||
He's the centerpiece of our next video. | ||
Okay. I just want to make, we're going to go to that video in a second. | ||
I just want to make sure everybody understands that we talk about the singularity, the convergence on a point. | ||
On this side of it, you have Homo sapiens, or humanity as we know it for, you know, hundreds of thousands, millions of years. | ||
On the other side is something else. | ||
That singularity is a convergence of quantum computing, advanced chip design, you know, biotechnology, which we call CRISPR. You've got regenerative robotics. | ||
Regenerative Robotics and Artificial General Intelligence. | ||
That convergence is the point of the singularity. | ||
As fast as all these are accelerating, I think clearly artificial intelligence and artificial general intelligence is probably, as much as you've seen on the rest of these, the lead sled dog. | ||
Let's go to the next video and we'll bring Joe Allen back in. | ||
unidentified
|
So the potential for superintelligence kind of lies dormant in matter. | |
Much like the power of the atom, lay dormant throughout human history, patiently waiting there until 1945. | ||
In this century, scientists may learn to awaken the power of artificial intelligence, and I think we might then see an intelligence explosion. | ||
unidentified
|
So let's do a thought experiment. | |
Let's say that we decide to have a chat with China on some kind of treaty around AI surprises. | ||
In the 50s and 60s, we eventually worked out a world where there was a no-surprise rule about nuclear tests, and then eventually they were banned. | ||
When somebody launches a missile, for testing or whatever, they notify everyone. | ||
And everyone then uses their missile defense systems to watch, to target, to train the systems. | ||
It's an example of a balance of trust or lack of trust. | ||
It's a no surprises rule. | ||
I'm very, very concerned. | ||
That the US view of China as corrupt or communist or whatever, and the Chinese view of America as failing, which has been well documented, will allow people to say, oh my god, they're up to something, and then begin some kind of conundrum. | ||
Begin some kind of thing where, because you're arming or you're getting ready, you then trigger the other side. | ||
We don't have anyone working on that, and yet AI is that powerful. | ||
I think we need something like a Manhattan Project on the topic of artificial intelligence. | ||
Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. | ||
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, then we admit that the horizon of cognition very likely far exceeds what we currently know. | ||
Then we have to admit that we're in the process of building some sort of God. | ||
Now would be a good time to make sure it's a God we can live with. | ||
unidentified
|
Thank you very much. It's not science fiction. | |
That's not a movie. That's a presentation. | ||
Joe Allen. These are the deepest, most serious issues we have as a species. | ||
Forget a nation. | ||
Forget a political movement. | ||
Forget being populist national, which is all important. | ||
This is why it's so important to have your sovereignty and have your individual sovereignty, because this is all accelerating at an accelerating rate. | ||
Joe Allen, what did we just hear and see? | ||
Steve, if I could just say up front, we are not trying to fearmonger here, right? If you don't think any of this is possible, then just dismiss what these gentlemen say. | ||
Dismiss what we say. If it is possible, if anything approximating it is possible, it's very important to listen to these gentlemen because they are dominating the conversation at the top. | ||
So the first, Nick Bostrom, Oxford professor, philosopher, co-founder of the World Transhumanist Association. | ||
He may get fired for saying rude things about race some 28 years ago. | ||
But Nick Bostrom wrote the book Superintelligence: Paths, Dangers, Strategies. | ||
This book profoundly influenced Elon Musk and many of the other people working on this. | ||
His contention, as we just heard, is that artificial intelligence, should you get that hard takeoff, should you get that intelligence explosion that the computer scientist I.J. Good predicted back in 1965, where you have a constantly self-improving algorithm that is able to completely outpace human beings, | ||
then you have on your hands either a god or a demon depending on how it started out and how it develops. | ||
This thinking goes to the core of the project to build an artificial general intelligence, which all these major corporations and a number of labs and universities from MIT to Stanford are pursuing, and also just random programmers who are doing their best to do what they can to improve artificial intelligence. | ||
All of this is working towards artificial general intelligence. | ||
That's their dream. That's their hope. | ||
The second was Eric Schmidt. | ||
Eric Schmidt obviously is the ex-CEO of Google. | ||
He has come forward since then to basically be the kind of go-to guy, the go-to expert. | ||
What are the possibilities and dangers of artificial intelligence? | ||
He chaired the National Security Commission on Artificial Intelligence a couple of years ago. | ||
What he's saying there is that you have a situation where, if the U.S. and China or any other world powers are working on artificial general intelligence and suddenly you have a hard takeoff, you're gonna need some kind of warning system to let people know that they're going to see a lot of, let's just say, a lot of noise in the system going forward, so that people don't start lobbing nukes at each other or worse, right, if it gets any worse. | ||
But I think for our purposes, we're running out of time here. | ||
Last, the guy, Sam Harris, right? | ||
Popular philosopher, one of the four horsemen of new atheism. | ||
He's famous for arguing not just that there is no God, but he was obsessed with the idea that human beings have no free will. | ||
That all of our decisions are just an illusion. | ||
And they're basically just a manifestation of the deepest sort of neurochemical processes happening within our brains. | ||
We don't really have choices. | ||
So I'm not sure why he's even talking about our choices about AI. But that last point, the point he ended on, this belief that we are building some kind of God. | ||
That is a new religion. | ||
It's based in scientism, the belief that science can answer all existential questions. | ||
It then is developed in technocracy, the idea that experts should run society, and then it culminates in transhumanism, the idea that human beings should merge with machines, that we are a species in transition towards something else, and that we just simply need to let go of our old identities and embrace the new, including these godlike AIs. | ||
This is, and to Joe's first point, for you at home, the reason we started with Mika and Morning Joe and Professor Glaude from Princeton is that Davos man, who's supposed to be the inside baseball and everything, was completely overwhelmed, not prepared, and wowed like kids at a carnival on AI GPT. This new system is out. | ||
It's kind of like a Wikipedia that you can use and write papers, ask questions, do all sorts of things. | ||
That is at such a small level. | ||
But even that has started to overwhelm people. | ||
That shows you the reality. | ||
That shows you the reality. | ||
This is not only real, it's going to be in your lives and causing societal problems, causing many more problems than it will solve right now. | ||
We're going to take a short commercial break. | ||
Our editor for All Things Transhumanism, Joe Allen, joins us again on the other side. | ||
Short break, stick around, you're not going to want to miss this. | ||
unidentified
|
Out with Stephen K. Bannon | |
Joe, we've got a bunch more clips, but I do have Jim Hoft, who's going to join us on some breaking developments on the Ray Epps situation. | ||
So we may hold some of those for tomorrow. | ||
But I want to tee up this right here. | ||
For our TV audience, you're going to be able to see it. | ||
I think with subtitles, for the podcast and our vast radio audience, you'll only hear Russian, but we'll explain it to you. | ||
What do we got here, Joe? | ||
This is Bishop Porphyry, a Russian Orthodox minister. | ||
He has... | ||
Very, very controversial ideas of what all of this technological advancement means. | ||
But I think we would be remiss if we didn't capture the sentiment in the Orthodox Church, because whether or not he speaks for the entire Orthodoxy, the Eastern Orthodox Church is absolutely far ahead of the game in terms of understanding this on a religious level. | ||
So if you want to roll it, Logan. | ||
unidentified
|
But this is the self-conception of the world's rulers, who possess enormous power, who control entire governments. | |
And they have set out to destroy the greater part of humanity, six billion people according to their plan, and to leave only a small portion. | ||
But not only that. | ||
They have set out, limited only by the possibilities of technology and science, to raise their hand against man himself, against what is most sacred in man, against his personhood, against the image of God in us. | ||
To crush that image of God so as to make of man something halfway between a biological, a technical, and a digital being. | ||
This is called the post-human, this is called convergence, this is the ideology of transhumanism and post-humanism, which... Okay, for our podcast and radio audience, you saw, with all the vestments, as you're familiar with, | ||
the Orthodox Church, or in this case, Eastern Orthodox Church, you have Ukrainian Orthodox, Russian, Greek vestments, this magnificent setting. | ||
What was the bishop telling people, Joe Allen? | ||
Well, I think the four major points, one, that the self-conception of elites around the world is one of transhumanism and moving towards a state of being all-powerful. | ||
Two, that the major restraint on their power on this worldly plane is just simply the limits of science and technology. | ||
Three, and probably most controversially, the notion that they intend to reduce the population by six billion, down to a much smaller minority. | ||
And four, the idea that, and this is, I think, we have shown it for the last year and a half, and many, many others before have pointed this out, that the basis of all of this is, as Klaus Schwab would say, the merging of the physical, digital, and biological worlds and our physical, digital, and biological identities. | ||
And as the bishop says, these people are either covertly, I would add, or overtly marching under the banner of transhumanism and post-humanism, that state after which human beings are no longer the most powerful or even an existent species on the planet. | ||
The thing about when you see Altman, we see these different videos. | ||
I want to make sure everybody understands the way the world works. | ||
When you're at Davos, they're there to get the word out. | ||
Let's take AI GPT. It's to be there among decision makers, but also media, marketers, people who know how to brand, and it just explodes everywhere. | ||
Everything that went on at Davos this time, that was the talk at Davos because it caught a lot of the insiders by surprise. | ||
It shouldn't be lost on anybody. A few days after that, Google announced that Sergey and the two co-founders are now back at Google, in a much more focused area in AI, because they were caught by surprise by what was shown at Davos. | ||
The other people there are the capitalists, the early-stage venture capital. | ||
The next phase is private equity. | ||
The next phase is the hedge funds and the big banks that help take these things public and raise massive amounts of capital and create value. | ||
This is where a flood of capital comes in. When I walk through this, like quantum computing, advanced chip design, the regenerative robotics, CRISPR biotechnology, and artificial intelligence, those five areas where there's this convergence, they also go to make this presentation because it attracts capital. | ||
It shouldn't be lost on anybody. | ||
When the Chinese Communist Party put out Made in China 2025 and the ten industries that they were going to dominate by 2025, the top five were the ones I just listed. | ||
The top five. So what you're seeing now, Joe, is a rush of capital, and that's only going to mean a bigger acceleration. | ||
Also, everybody in this audience has to understand something. | ||
Your tax dollars are underwriting a lot of this. | ||
Your pension funds, because your pension funds are the money that's managed by the venture capitalists, by the private equity, by the hedge funds. | ||
But your tax dollars, Joe. Because we've got limited time here, and we're going to deal with this along with the other things. | ||
But I've got to go back to the executive order. | ||
The executive order on the quote-unquote moonshot. | ||
When this executive order was signed, all Joe Biden said was, I've really been focused on the cancer moonshot. | ||
This is all about cancer. | ||
It's all about cancer. | ||
It's all about cancer and solving cancer. | ||
It's a moonshot. That executive order... | ||
showed a whole-of-government approach, a whole-of-government approach for exactly these issues we're talking about. | ||
I want to tie it back up there before we punch out, and we'll get you back on tomorrow, about how that totally integrates with the administrative state, which is the partner of these companies, these public-private partnerships, in driving this agenda so rapidly forward, in that it was not about a cancer moonshot. | ||
That was about transhumanism. | ||
That was the executive order on transhumanism that put a whole-of-government approach on that, Joe Allen. | ||
Yeah, this is the convergence of two major sectors, right? | ||
You've got the convergence of the military through DARPA, and then you've got the convergence of the biomedical establishment. | ||
And so with the creation of ARPA-H, the Advanced Research Projects Agency for Health, which is on the heels of the executive order, | ||
You now have an institute that is right there at the top of the executive order, paragraph 3, with the intention to program the genome as if it were software. | ||
And this is a very common conception across the biomedical sector, right? Bio-digital convergence. | ||
It's both a conceptual convergence and an actual convergence. | ||
So not only do they intend to alter the human genome in order to cure cancer and things like this, Renee Wegrzyn, the director of ARPA-H, talks about, not necessarily her intentions, but that the field in general is moving towards humanity 2.0. | ||
And at the same time that this is happening, you have the Brain 2.0, which is just basically a revamp of the old Brain Project, which intends to map every neuron in the human brain and their interactions and all the different functions. | ||
Again, you have connections directly to DARPA, and DARPA is, of course, intending to create more and more advanced Brain-Computer Interfaces, beginning with the non-invasive, as we covered in the WEF presentation, but also invasive brain-computer interfaces to link the mind to artificial intelligence. | ||
And one more point about that Brain 2.0 project and the mapping of the human brain. | ||
The center of that is the Allen Institute for Brain Science in Seattle, Washington. And their sister organization is the Allen Institute for Artificial Intelligence. | ||
And you have to understand that the direct connection between these, not only in this government-funded sort of circle, but across the entire sort of AI project from Silicon Valley over to China, is that the more they know about the human brain, they believe, the better they can create artificial intelligence that resembles the human brain, | ||
is as intelligent as the human brain, and of course will be more intelligent than the human brain in the future. | ||
Before I turn this over to Jim, if I could just say one thing, Steve. | ||
I very rarely talk about my religious beliefs. | ||
I try to keep them private. | ||
But you started off talking about the Antichrist. | ||
I will say this. | ||
The Greek root, anti, has two different meanings. | ||
Of course, it's come to mostly mean anti as in against. | ||
So you're anti-black or anti-white, you're against something. | ||
But in the original context of the term, when the Bible was written, the Greek root just means in place of. | ||
So the Antichrist is that deity that stands in place of Christ on this earth. | ||
And whether or not you believe artificial intelligence could ever attain anything like superhuman capability, one thing is for sure. | ||
The people at the top of these fields, the people not only in government and in the World Economic Forum and Silicon Valley and MIT, all of them are trying to create an intelligence that stands in place of the traditional role of Christ, that transcendent entity to which you turn to transcend your human limitations. | ||
I won't make any other hard theological claims beyond that, but in that sense, it is undoubtedly an antichrist. | ||
Christ gave us the warning on this, and it was in Mark, I think Mark 3:28-29, I think it is. | ||
I repeat this all the time. | ||
When he sent the disciples out to heal, and they came back, he said, hey, you know, how'd it go? And they said, oh, we did it, and we were healing people, but they said that you inspired us, you gave us the power, and you're Beelzebub. | ||
And, you know, that's not good because they're saying you're the devil. | ||
And he sits there and says, don't worry about what they call me. | ||
Don't worry about what they call you. | ||
And he points to the only unforgivable sin. | ||
And I've read every document on this because it's such a powerful moment in the Gospels. | ||
He says all sins can be forgiven. | ||
Only eternal sin. Only unforgivable sin. | ||
An unforgivable sin. | ||
The eternal sin is to blaspheme the Holy Spirit. | ||
To blaspheme the Holy Spirit. | ||
And we know that the Holy Spirit comes into man. | ||
That's what makes you in the image and likeness of God, the Holy Spirit. | ||
And that ties together with what's happening. | ||
We pride ourselves on getting ahead of important issues, whether that's the impeachment, whether it's the pandemic, whether it's the invasion on the southern border, whether it's the global capital markets crisis, inflation, the vaccine, the whole deal. | ||
We pride ourselves on election fraud. | ||
We pride ourselves on working with other news organizations and others to get you ahead of it. | ||
This is the single biggest, most important issue of our time, full stop. | ||
Full stop. Because the way the system works is capital is pouring into this now at a rate I've never seen before, and I've been, since I got out of Harvard Business School in 1985, you know, part of the system of how capital is deployed throughout the world. | ||
I'm trained in this, and I can tell you, I have never seen anything like this in my life. | ||
And this is, and Paul Allen, just to tie a knot, the Allen Institute in Seattle, both the brain and the artificial intelligence institutes, that's the co-founder of Microsoft. | ||
You think the Gates Institute and everything, the Gates Foundation, the Bill and Melinda Gates? | ||
Well, Paul Allen was his co-founding partner in that. | ||
And that is... | ||
And it's absolutely critical. | ||
So it's going to be just enormous. | ||
And we've got to make sure we're going to focus on this every day. | ||
Joe, how do people get to you? | ||
You can find me at joebot.xyz. | ||
You can find me on social media at joebot.xyz. | ||
And of course, warroom.org. | ||
Under the Transhumanism tab, everything collected there. | ||
Thank you very much, Steve. | ||
Thank you very much to the War Room Posse. | ||
Not trying to scare monger, but you have to be aware. | ||
No. You've got to be, hey, we don't want to scaremonger you, but we're talking about the Antichrist, the real Antichrist, and we're not theologians. | ||
Joe Allen, thank you so much. | ||
By the way, outside of Archbishop Vigano in Rome and outside of Bishop Porphyry, I don't see a lot of this coming out of the seminaries, the monasteries, the theological departments. | ||
I mean, the Christian intellectuals have got to get on top of this. | ||
Because this is not just the future, this is the present, and it's accelerating at an accelerating rate. | ||
Joe Allen, thank you so much. Honored to have you on here. | ||
From the sublime to maybe the less sublime. | ||
Jim Hoft, these are the kind of fights we have to fight every day. | ||
The situation, it's never made sense with Ray Epps, and I've gone through all the documents with people on the 800 pages of J6. And the testimony, the one that makes no sense at all, brother, is the guided testimony of Ray Epps. | ||
So tell us what you have. | ||
We've got about five minutes. We're going to play it simultaneously while you talk us through it. | ||
But I just wanted for you, because you and your brother, Joe, are pretty savvy. | ||
Did the Ray Epps testimony make any sense to you at all, Jim Hoft? | ||
No, it didn't make any sense at all. | ||
And they were treating him with kid gloves. | ||
And they were feeding him the answers, is what it sounded like when you read over the transcript with Ray Epps when he spoke in front of Liz Cheney and the J6 committee. | ||
So no, that made no sense at all. | ||
So tell us about this new development, the exclusive at the Gateway Pundit. | ||
Yeah, this is huge, Steve. | ||
And we posted the video this morning. | ||
Kara Castronova, who is a writer at Gateway Pundit, and Alicia Powell, another reporter, two excellent reporters we have, are sitting in the Proud Boys trial this week. | ||
There's five individuals who are up for seditious conspiracy. | ||
It's Ethan Nordean, Enrique Tarrio, Joseph Biggs, Zachary Rehl, and Dominic Pezzola. | ||
What they've noticed in the courtroom was that during one of the videos that the prosecution played, there was a glitch in it all of a sudden. | ||
When the crowd was breaking through the bike racks, you know, those bike racks they set up. | ||
There it is right there, the glitch. | ||
Now what they noticed was, here's Ray Epps, and we circled him in red. | ||
He's the one who's walking through now, walking up to the Capitol. | ||
So we have these gentlemen who are on trial and may spend years in prison. | ||
For seditious conspiracy, their big crime, Steve, was walking as a stack up the steps of the U.S. Capitol that day. | ||
unidentified
|
Hold it. Hold it. Hold it. | |
You gotta stop. You gotta stop. | ||
I thought Ray Epps, the whole thing, I thought Ray Epps was just down, supposedly just down by the ellipse or whatever, and he's ordering people, we gotta get there, gotta get there. | ||
But I thought the whole rap is that he wasn't actually there leading people in, like that video just showed me. | ||
He's absolutely leading people in. | ||
He was very active that day. | ||
We posted previous video of Ray Epps where he's holding on to this huge Trump sign, metal sign that was thrown at police. | ||
He was actually holding on to that before it was thrown at the police. | ||
He's very active in the crowds that day. | ||
He was right there at the launch of when the barricade was broken into. | ||
And notice, these people aren't rushing up to the Capitol. | ||
These people are carrying Trump flags. | ||
But what happened in this trial this week is they put this glitch up, and Kara was smart enough to realize, hey, this isn't what I've seen. | ||
She got this video. | ||
This is a video that has not been released. | ||
It is part of the 14,000 hours of footage that has not yet been released to the public. | ||
We got this, and it shows Ray Epps. | ||
The unvarnished video without the glitch shows Ray Epps right there. | ||
Up next to the barriers, breaking through with the rest of the group, leading up to the Capitol that day. | ||
So that was Ray Epps, and for some reason, the prosecutor, Jason McCullough, did not have that same video. | ||
They had a glitch in their system of that clip, and we think it's nefarious. | ||
It wasn't a glitch. | ||
Did they look like they tried to cut the Ray Epps part out so you couldn't actually follow it? | ||
How could you have a glitch? | ||
This is a trial to put guys away in prison for a long time. | ||
You don't have glitches. | ||
There's no glitch. | ||
Exactly, Steve. | ||
This is something that appears to be purposeful because we have the video, as you're seeing with your own eyes now. We have the video in full without this mysterious glitch that they showed in the courtroom. | ||
So it's just very sad that they're resorting to these types of tactics to destroy the lives of these five men, | ||
who were attending the rally. | ||
You know, there was no sedition, as you know. | ||
It's all made up. But leave that aside, because that's terrible in its own right. | ||
I just have a question. We've only got a minute. | ||
Why is Ray Epps not on trial by going to jail for 20 years? | ||
He's leading people in through the barricades. | ||
And you've got everything about him pointing people to go down there from the ellipse and from down by the Willard Hotel. | ||
He's there at the lead of the barricades. | ||
Why is Ray Epps not on trial to go to prison for 20 years, sir? | ||
That's the question, Steve. | ||
That's the big question. | ||
And it needs to be revealed. | ||
And there's no reason that Ray Epps is not on trial. | ||
When he's standing there, and you can see he's talking to the people, and he was urging them to rush into the Capitol that day, and here he is right there at the barricade when it was broken through, and they're trying to hide this from the American public and from the jury in the courtroom this week. | ||
Real quick, who were the two reporters that broke this for Gateway? | ||
Yeah, so Kara Castronova. | ||
Who is just phenomenal. | ||
And Alicia Powell is in the trial this week, too, sitting in the trial. | ||
And they were able to catch it. | ||
They thought it was unusual. They're both phenomenal reporters. | ||
We've had them both on the show. They're phenomenal reporters. | ||
Real quickly, what's the social media? | ||
How do people get to you? Yes, Steve. | ||
That's Gateway Pundit. | ||
unidentified
|
And we're on Truth Social, Gettr, and Twitter, and everywhere. | |
It's on Gateway Pundit. | ||
You're the best. | ||
One of the best news sites out there. | ||
The one I go to first every morning. | ||
See you back here at 10 a.m. |