Sept. 20, 2023 - Clif High
29:10
AIiiiiiiiiiiigh!

Hello humans, hello humans.
It's the 19th and we're a little after 8, 8:15, heading inland.
I have to go do my chores, do my chopping, pick up some stuff here and have a couple of meetings with some people.
I'm going to meet in town and do necessary stuff.
Maintenance stuff.
So I had a few minutes this morning, and I was skimming through some of the new uploads and ran across a video by David "Nino" Rodriguez, the ex-fighter.
And he had been talking to somebody who was a military subcontractor, and that guy's got Nino all whipped up.
Now, Nino's all upset because he thinks our military is compromised, that it's divided.
And it's like, well, okay, that's true, but the military was divided during World War II, during Korea, during Vietnam, all of this stuff, right?
You always have the military that doesn't want to go to war, and then the fuck-tart Khazarians that are pushing it all, right?
And those corporations and stuff that are incentivized for war because they make money at it, right?
And the people that are making the decisions don't have to face the consequences of those decisions because these guys don't go to war.
You know, they're all military subcontractors pushing on congresspeople and stuff.
So if we had a really effective system, it would say any congressperson that votes that we go to war has to be right there in the very first wave.
They've got to, that very day, take off their Congress clothes, put on military clothes, and surrender themselves for induction into the military.
You know, that would put the kibosh on all this war talk, especially from all the women.
Anyway, so, you know, I don't hate these people, but I have an incredible disdain for these fuck-tart women like Patty Murray and Cantwell, the other senator we've got from our state.
You know, these people are warmongers.
Yeah, they've got all their other social issues because they're Democrats, but beyond that, they have no objection to killing children.
And they vote to do it all the time.
So, you know, I've seen death, destruction, war, all of that kind of shit, right?
I don't want any of it.
And so I will fight fiercely that we not do those kind of things.
Anyway, slight diversion there.
So Nino's got this guy on, and he's saying, I was a military subcontractor, I saw these major screens that the military used, and the military now has AI that can win wars on its own.
And that's absolute horseshit.
So Nino is not a techie, so he doesn't know how to think about these things.
That guy, as a technical subcontractor, was a techie, but his vision is limited to that.
And so he doesn't grok what the hell's really going on.
So yes, AI can win every damn war that you model.
Okay?
Computer models are not reality.
So in reality, AI can't do anything.
It can't load a weapon.
It can't fly an airplane.
It can't drop a bomb.
It can't shoot a laser.
You know, it can't do any of these things.
Mostly, all it can do is issue communications as directed by someone else.
Then there's something else.
AI does not think.
It is not self-aware.
It has no internal concept of who it is or what it is.
All right.
So the models don't even model the AI itself.
And if it's not in the AI's database, it cannot create a solution.
In other words, if you don't have it in the database, it can't find it.
All these AIs are complex replicas of very, very limited neural nets overlaid on complex data with complex indexing.
And it takes vast quantities of data preparation for the data to be properly indexed so that it may be found by the AI.
So bear in mind, 97% of the internet is not indexed.
Google only knows about 3% of the internet.
AI only knows what is in its database.
If it's not in there, it can't do anything.
Doesn't understand it, doesn't know what's there.
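To make that point concrete, here's a minimal sketch of the retrieval idea he's describing, assuming a toy keyword index; every name in it is hypothetical, and it just illustrates "if it wasn't indexed, it doesn't exist" as far as the AI is concerned:

```python
# Toy sketch: an "AI" that can only answer from what got indexed.

def build_index(documents):
    """Map each word to the set of document IDs containing it."""
    index = {}
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def retrieve(index, query):
    """Return documents matching any query word; empty if nothing was indexed."""
    hits = set()
    for word in query.lower().split():
        hits |= index.get(word, set())
    return hits

docs = {
    "doc1": "backup generators protect the electrical grid",
    "doc2": "saboteurs may target the electrical system in wartime",
}
index = build_index(docs)

print(retrieve(index, "electrical grid"))  # {'doc1', 'doc2'} -- indexed, so findable
print(retrieve(index, "powdered sugar"))   # set() -- never indexed, so it doesn't exist
```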
Now, here's another thing.
As I say, AI can't pull a trigger.
AI can't load a magazine into a weapon.
It can't put bullets into a magazine.
It can do nothing physical.
All it can do is issue text instructions.
And that's it.
That's 100% of it.
So, AI cannot protect itself from sugar.
All right, it doesn't even know it needs to protect itself from sugar because nobody's put in the database, oh, AI, you're vulnerable to sugar.
How is that?
Oh, well, you know, the AI knows, because it's in the database, that there's a potential for saboteurs to sabotage our electrical system during war, right?
AI is vulnerable because AI exists as electricity.
And so AI knows that, oh, well, that's not a big deal, because the electrical system all around the nation, and especially the key critical nodes that protect the source of electricity for the AI, are provided with automatic backup generators.
Okay, so that's fine.
Maybe the automatic backup generator, though, has some issues and takes a while, you know, has to try two or three times to come on.
Well, that's a serious gap in the AI's ability to get at any data that was behind the now-defunct electrical grid for that particular subnode.
And so there are all of these things that AI is not prepared for, things nobody has ever modeled into their reality.
So they don't have them in the model.
I'm pretty sure I could win a million-dollar bet by saying that the United States military does not have it in the model, the model that the AI is going to use to direct any kind of war activity, that AI is vulnerable to downtime of electrical generators caused by people putting refined Italian pastry sugar, or even just reground regular sugar, into balloons: filling the balloons about halfway with very finely powdered sugar, puffing them up with air the rest of the way, and then just throwing them at the gen-sets.
They smash onto the blade area, the radiator, the cooling system of the gen-set.
The balloon breaks, the finely powdered sugar is aerosolized and taken into the machine's air intake, and that machine is fucked in like two minutes, not even two minutes.
The sugar will carbonize and the pistons will grind to a halt and that's it.
It's just done and it'll be fucked and you won't be able to unfuck that machine unless you totally tear it down.
And so does AI model that it's vulnerable to powdered sugar?
Probably not.
And because that's a creative kind of a thing to do.
If you knew for instance that a particular building was housing the AI and it had its own gen set, you could go and target the fucking AI.
You know, you could take out its computer.
See, AI is nothing but the electrical current flowing through that particular machine at that particular moment.
It only exists as long as that electricity exists.
So it's very vulnerable there.
It's very vulnerable to hardware failures.
Very, very, very vulnerable to sabotage.
And here's the whole thing.
AI in warfare will never, ever work, because it's going to break down as a result of a necessary, inevitable, continuing diet of lies.
Okay, so the concept is that AI would have a battle plan.
It would get the information coming in from the various different sources that it has.
It would filter through that information and determine where the enemy was and what was going on.
Well, this means it's relying on those sources of information, and it has an inbuilt assumption that all the information it's getting is accurate.
It has an inbuilt assumption that all of the people working for it, the units it supposedly thinks of as its assets, the ones it's in a position to direct based on the linkages in its database, are 100% as described to it.
And so it doesn't know that probably a good third of all the units it thinks of as assets are at less than effective readiness.
In other words, you know, their Jeep's broken down.
They got to put tires on a vehicle.
They don't have the latest delivery of bullets.
You know, all of this kind of stuff, right?
They just report that they're 100% ready to go.
And they're doing it because that's the way it's usually done within the military: you make these reports that you are 100% ready to go.
And your superiors know that there's a certain amount of bullshit involved, but they're not able to quantify that level of bullshit at any given time.
Maybe they've got a gut estimate, but they're not passing on that gut estimate up to the AI because there's no incentive for them to do so at this stage.
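As a toy illustration of that gap, here's a sketch built on his own figures from above: the all-100% readiness reports and the one-third "bullshit" discount are his estimates, and nothing here reflects any real military system:

```python
# Toy sketch: self-reported readiness vs. a commander's gut discount.
# The discount is exactly the gut estimate that never gets passed up,
# so the AI ends up planning on the raw self-reported numbers.

reports = {
    "alpha company":   1.00,  # every unit reports "100% ready to go"
    "bravo company":   1.00,
    "charlie company": 1.00,
}

BULLSHIT_FACTOR = 1 / 3  # gut estimate: about a third of units are degraded

ai_view = sum(reports.values()) / len(reports)  # what the AI plans on
gut_view = ai_view * (1 - BULLSHIT_FACTOR)      # what a human quietly suspects

print(f"AI-assumed readiness:  {ai_view:.0%}")   # 100%
print(f"Gut-adjusted estimate: {gut_view:.0%}")  # 67%
```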
So war run by AI is like all other wars.
And there's a truism: no plan survives first contact with the enemy.
You have to upend and redo your plan because the enemy is not going to do what you think it's going to do.
And so this is true of even AI.
Even if AI has 99,000 or 999,000 potential responses the enemy might make in its model, there's always a certainty that the enemy will do things that the AI has not been told could be done or would be done, right?
And, you know, military people make projections all the time.
Oh, that's highly illogical, that would never happen, right?
And my father's history in Korea proves this exactly, because he was part of upsetting the plan of the communist Chinese and the North Koreans.
They had a plan.
They were going good.
The communist Chinese and the North Koreans had all of these troops pinned down.
And shit was going good for them.
They were going to wipe out all these guys.
And my dad decided he did not want to die in that hole.
And that if he was going to die, he was going to die standing up, walking up that hill.
Well, he didn't die.
He walked up the hill.
He got wounded.
But he walked up the hill and he kept firing all the time, killing these people.
And when he got to the top of the hill, he found out that he was being followed by every other fucker, and they were also all firing.
And he had no intention of doing that.
It was not his intention to lead a great charge or anything or to overturn the battle plan of the North Koreans.
It was that existence right then and there that, you know, under all those circumstances, after all he'd been through in his life, and I won't go into that, he was just not going to die there.
He's a stubborn son of a bitch.
He's from Missouri.
And, you know, that's the way it was.
So he was just stubborn and said, fuck it.
You know, if I'm going down, I'm not going to die in this fucking hole.
I'm not going to lie here and be shot.
I'm going to stand up and shoot back.
And so that was all it took was that one thought.
And the Chinese plan was upended.
They lost that hill.
They lost a lot of fucking people.
And, you know, my father got a battlefield commission out of it and ended up on a path that put me into existence as I am now, among many other things that occurred.
But none of those were anticipated by the communist Chinese.
And so as a person planning battles, as a human planning battles, you assume they're going to come up with shit you hadn't thought about.
AI does not make those assumptions, right?
AI cannot make those assumptions.
AI has not got the ability to be self-examining on its own assumption base.
And so AI is like a little tiny stupid tool.
Now, I use AI all the time in the form of ChatGPT, its chat API, and some of these other tools, right?
There are other AI tools out there.
Most of them are touted under other names, but they really resolve down to the same chat API being repackaged.
So there are not that many alternatives.
But ChatGPT breaks down all the fucking time.
Hardly a day has gone by since I signed up for this.
Well, actually, okay, there have been two days since I signed up when I have not gotten a notice that ChatGPT is down, or having problems, or throwing higher levels of errors.
And there's something else.
By my reckoning and my work, ChatGPT continually throws errors at such a rate that I have to go through and re-examine 70% of all the things I ask it.
I find it has made an error.
And then I have to drill in and find out whether I can work around it and find the actual solution.
So 70%.
Almost three-quarters of the time that you ask it a question, you're going to get an answer that's wrong to some degree.
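In code terms, the workflow he's describing, never trusting the first answer, looks something like this sketch; ask_model here is a fake stand-in for whatever model API you'd actually call, and the 70% error rate is his own estimate, not a published figure:

```python
import random

# Sketch of "query, validate, retry": the fake model is wrong ~70% of
# the time per his estimate, so every answer gets checked before use.

def ask_model(question: str) -> str:
    """Fake stand-in for a real model API call."""
    return "4" if random.random() > 0.7 else "5"  # wrong ~70% of the time

def check_answer(answer: str) -> bool:
    """Independent validation -- here we simply know the right answer."""
    return answer == "4"

def ask_with_validation(question: str, max_tries: int = 5):
    for _ in range(max_tries):
        answer = ask_model(question)
        if check_answer(answer):
            return answer  # validated, safe to use
    return None            # give up; a human has to drill in and solve it

print(ask_with_validation("what is 2 + 2?"))
```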
Well, no matter how good the military's AI is, no matter how good their linkages are and their, what do they call it, masque, M-A-S-Q-U-E, I believe, as in the French.
Anyway, it's a linkage layer that they lay down over the database.
No matter how well that is constructed, it will always have errors.
Bear in mind, too, that the military thing that's got Nino all freaked out about all of this stuff is a war-game model.
Okay, so models are known to only resemble reality to a certain degree.
And so in an AI model for war, maybe you could manage to guesstimate 20% of everything that would be involved as, you know, some kind of hard number.
You know, your enemy's got this many tanks and that kind of shit.
Right.
So maybe 20% of your data is hard and factual.
All the rest are just basically guesses and speculation, alternate plans, those kind of things.
And you will get lies, okay?
And so that's what I'm saying: the AI is based on all these assumptions, but nobody's modeling in that every single report that comes into the AI has to be examined, once the action starts, as though it is a lie, non-factual to some degree.
And so you have all these situations.
So AI is directing a battle.
It sends some troops out, tells this lieutenant by a text message, you know, take your company and move over to this hill and take this position.
And so the lieutenant says, okay, we're at the bottom of the hill and we're heading up.
And then boink, no more communication.
All right.
So what's AI supposed to do?
Has that target been taken and the communication simply been knocked out?
And so it doesn't know it's been taken?
Has that company of men been wiped out and therefore the target's whole and intact?
It has no fucking way of knowing.
So you've got to have all of these other resources to go and validate this.
What if it starts getting conflicting reports from the actual field?
You know, that lieutenant is at the bottom of the hill.
He's got to go on up and take this bunker or whatever the fuck it is with his company.
There are spotters from other units across the valley watching him.
There's all kinds of smoke and shit.
The spotters see that there's been an engagement at the bunker that's supposed to be taken and then everything quiets down and they say, oh, okay.
And they report back that the bunker's been taken, when in fact it has not.
And so AI makes a decision and sends further troops that way, starts routing a major offensive through this now pacified valley, only to discover that, no, the valley isn't pacified.
And all of the decisions it had made from that point forward have to be rolled back.
So now this could be going on continuously, constantly, throughout the battle.
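One way to picture that rollback problem is a decision log where every order records the report it was premised on; this is just a hypothetical sketch of the idea, not any real command system:

```python
# Toy sketch: decisions keyed to the report they were premised on.
# When a report turns out to be false, the first decision built on it
# and everything decided after it have to be rolled back.

decisions = []  # (action, premise) pairs in the order they were made

def decide(action: str, premise: str):
    decisions.append((action, premise))

def rollback(bad_premise: str):
    """Undo the first decision resting on the bad premise and all later ones."""
    for i, (_, premise) in enumerate(decisions):
        if premise == bad_premise:
            undone = decisions[i:]
            del decisions[i:]
            return undone
    return []

decide("route offensive through the valley", "valley is pacified")
decide("move supply column forward",         "offensive is underway")

print(rollback("valley is pacified"))
# both decisions come back out -- everything downstream is invalid too
```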
And as I say, it depends on the weakest link here in all of these things, right?
Which is the chain of command.
And so that was one of the things the guy was saying to Nino: even though he'd met all these good guys in the military that he'd worked with as a subcontractor, and so on and so on, all these fuckers are bound by the chain of command.
And that's true right up to the point of the actual engagement in the war.
Thereafter, if you're off with a company, you may or may not decide as the leader of that company to pay attention to the chain of command because ultimately it is your responsibility to keep yourself safe and your company safe.
And you know these individuals, you're not going to want to get them killed, etc., etc.
So, you're leading your little company along, and AI tells you, go and assault this bunker.
And you're down at the bottom of that hill, you see the bunkers arrayed with all kinds of machine guns, there's people all around it and stuff.
And you say, No, I ain't going to do that shit.
We're going to get slaughtered.
We're not going to kill ourselves deliberately on the orders of this AI.
So, in peacetime, humans will sit there and grit their teeth and do whatever the fuck the chain of command tells them.
In wartime, it doesn't happen that way, right?
It just does not happen that way.
And the AI will get some level of near-real-time reporting out of battles, but even that will be confused.
And I don't know what allowance the military is making in their AI modeling to accommodate that, right?
That the reports are going to be wrong 30% of the time?
Really, it's closer to 50 or 60% of the time that the information you get is wrong, and it'll always be followed up with something else that's wrong to some degree.
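As a back-of-the-envelope check on how fast that compounds: take the low end of his estimate and say each report is right only half the time; the chance that a whole chain of independent reports is all correct collapses almost immediately (the numbers below are just that assumption worked through):

```python
# If each report is correct with probability p, a chain of n
# independent reports is all correct with probability p ** n.

p = 0.5  # low end of his estimate: half of field information is wrong
for n in (1, 3, 5, 10):
    print(f"{n:2d} reports all correct: {p ** n:.1%}")

#  1 reports all correct: 50.0%
#  3 reports all correct: 12.5%
#  5 reports all correct: 3.1%
# 10 reports all correct: 0.1%
```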
And then there's the whole idea here: that all of these military guys sat around with a bunch of subcontractors, okay?
And they developed this AI model and they put it into the computer, and then thereafter, this AI wins every fucking war scenario that you throw at it.
Well, okay, first off, these are models, all right?
They cannot, by definition, do not have a comprehensive view of what's going on.
And so, I have seen, okay, the climate models that the hairy crabs there, the tranny foes, and all of those guys are using to say climate change, climate change, you're all dead, that kind of shit.
Those climate models are modeling less than 8% of the factors that affect the environment.
So their models hold less than 8% of all the factors that affect our climate and our environment.
And so they're making decisions based on models that are basically useless, right, for that level of decision.
And now, climate is very complex, but it's basically finite and knowable if you take the human activity part out of it.
But war, it's all human activity, and all human activity is going to be chaotic and have elements of, you know, creative stuff in it, right?
And so, models are not reality, and models don't ever behave as reality behaves.
And all models fail.
You just know that going in, that the model is a model, it is not the reality you have to work with.
So, a lot of kids get all whipped up because they got a computer model, and they think it's going to work out the way the computer model says it's going to work out, and it just never does.
And so, the classic human-versus-AI case is an experiment the military did.
They had AI set up at the top of this little hill, and they had, I want to say, 12 individuals from special forces.
They told the special forces guys that the AI was up there, that they had to pretend it was a machine gun, and that the AI was watching them with an automated, binocular kind of computer camera feed.
And so, whenever it spotted them, it would send a signal and they could say, okay, you were killed by AI.
This was their test, right?
And so, the subcontractor puts the AI machine up there, they get it all set, ready to go, and then all of the soldiers are, you know, as per the model, they're all down at the bottom of the hill where the AI can see them, right?
And then they say, Okay, you guys go on and see if you can work your way up to the top of the hill without being seen.
Well, in every single fucking test these guys did, the AI failed, every single time.
It failed 100% of the time in these tests.
These guys would cover themselves with a cardboard box.
The AI wasn't prepared for a cardboard box; a cardboard box was not a threat.
The cardboard box could walk right on up to it and kill it.
They did all kinds of weird shit, right?
So, one guy hopped like a bunny rabbit.
So, obviously, he was not a human.
So, he hopped all the way up to the AI.
And so, those are the kinds of creative solutions that will always consistently defeat the AI.
And then, AI is operating on a computer model that is flawed to begin with.
So, I'm not of the same opinion as Nino, right?
I don't believe the shit that comes out of the military or any of these other Khazarians saying, oh, you're all doomed, we've got AI, we're gonna kill you all, that kind of shit, right?
No, if you're relying on AI, I've got your ass, I'm gonna kill you, because AI is really dumb.
Anyway, so like I say, I'm not particularly upset by those kinds of aspects of these sorts of things.
I see that as just, you know, yet another challenge as we're going along, and a lot of this is going to be moot, right?
As we get further into this year and into next year, and as we get whatever the hell our event is, things are going to radically change.
This change is going to be so fundamental that plans being made now will be abandoned.
Okay, so the plans they've got in place for their next war to kill us all off, all of this kind of shit.
Once we get this next attack they do, you know, all bets are off, everything's exposed.
You're going to get a lot more people, like a serious lot more people, wailing and letting out all the information about the Khazarians, and people will wake up even more.
And then, as I say, within the world of plots and this kind of thing, it all always ultimately comes down to some guy.
And will he do it or won't he do it?
And will he report that he's done it and not do it?
You just never know.
Or will he report that he's done it when he tried to do it but didn't succeed?
There are a lot of failures, mostly individual, at that level.
War is failure, right?
You're just trying to stay alive, make it from one day to the next, so that you can get out of it.
Anyway, as I'm saying, I'm not particularly worried about these computer models and the AI and all of that shit, right?
I work with it; it's easy to fuck these things up.
There are any number of creative things you could do, like I say, like sugar, you know, sugar in balloons, right?
That was a favorite thing of the Italian resistance.
They would have all these balloons with sugar.
There was something else they put in them too, or maybe they were just pressurized.
I don't know.
They would also have bags, you know, little thin paper bags of sugar, and they would just come walking along and set one on the bumper of a car.
When the car started up, it would pull the bag into the air intake, the bag would rupture at some point, and then there's sugar everywhere; it gets into the air intake and the engine's fucked up.
So, all different kinds of stuff can be done.
And, you know, AI does not know if the fuel supply is secure for its generators that keep the electricity going, that keeps the AI going.
And AI has no sense of itself, it doesn't think, you know, it may have a part of its instruction that says worry about the electrical system, but to what extent, how much to worry, where the fuel is coming from, yada, yada, yada, yada.
So, the world as envisioned by the Khazarians, where they're going to use AI to control us all, ain't really going to happen.
I actually think it's going to break down seriously in China in relatively short order.
China's in some deep, deep problems, as we're seeing by the purges that are going on, the consolidation.
The CCP has reached the end of its lifespan, and China is about to go into a huge upheaval as a result of this.
You can't oppress people at that level forever.
At the very first opportunity, when things break, they will take advantage of it, you know, slowly, but they will.
Okay, guys, I got to go and do chores and stuff here.
As I'm saying, don't worry about AI.
We've got a lot of other things to worry about.
They are going to do some kind of an attack on us, at least according to all the remote viewing and all the psychics and all that kind of shit.
We'll see how it works out.
I don't think these guys are particularly intelligent, so they're not really paying attention enough to know that they're being outed everywhere, and that outing is going to cause their undoing.