It's the 19th and we're a little after 8, 8:15, heading inland.
Have to go do my chores, do my chopping, pick up some stuff here, and have a couple of meetings with some people.
I'm gonna meet in town and do necessary stuff, maintenance stuff.
So I had a few minutes this morning, and I was skimming through some of the new uploads and ran across a video by David "Nino" Rodriguez, the ex-fighter. He had been talking to somebody who was a military subcontractor, and the guy has got Nino all whipped up.
Now, Nino's all upset because he thinks our military is compromised, that it's divided. And it's like, well, okay, that's true, but the military was divided during World War II, during Korea, during Vietnam, all of this stuff, right?
You always have the military that doesn't want to go to war, and then the Fucktard Khazarians that are pushing it all, right?
And those corporations and such that are incentivized for war because they make money at it, right?
And the people that are making the decisions don't have to face the consequences of those decisions, because these guys don't go to war.
You know, they're all military subcontractors pushing on Congress people and stuff.
So if we had a really effective system, it would say any congressperson that votes that we go to war has to be right there in the very first wave.
That day, they've got to take off their Congress clothes, put on military clothes, and go surrender to induction in the military.
You know, put the kibosh on all this war talk, especially from all the women.
Anyway, you know, I don't hate these people, but I have an incredible detestation for these fucktard women like Patty Murray and Cantwell, the other senator we've got from our state. These people are warmongers.
Yeah, they've got all their other social issues because they're Democrats, but beyond that, they have no objection to killing children.
And they vote to do it all the time.
So you know I've seen death, destruction, war, all of that kind of shit, right?
I don't want any of it.
And so I will fight fiercely that we not do those kinds of things.
Anyway, slight diversion there.
So Nino's got this guy on, and he's saying: I was a military subcontractor.
I saw these major screens that the military used, and the military now has AI that can win wars on its own.
And that's absolute horseshit.
So, Nino is not a techie, so he doesn't know how to think about these things.
That guy, as a technical subcontractor, was a techie, but his vision is limited to that, and so he doesn't grok what the hell's really going on.
So yes, AI can win every damn war that you model, okay?
Computer models are not reality.
On its own, our AI can't do anything.
It can't load a weapon, it can't fly an airplane, it can't drop a bomb, it can't shoot a laser; it can't do any of these things.
Mostly, all it can do is issue communications directed by someone else.
Then there's something else.
AI does not think.
It is not self-aware.
It has no internal concept of who it is or what it is, all right.
So the models don't even model the AI, and if it's not in the AI's database, it cannot create a solution.
In other words, if you don't have it in the database, it can't find it.
All these AIs are complex replicas of very, very limited neural nets that are overlaid on complex data with complex indexing.
And it takes vast quantities of data preparation in order for the data to be properly indexed so that it may be found by the AI.
So bear in mind, 97% of the internet is not indexed.
Google only knows about 3% of the internet.
AI only knows what is in its database.
If it's not in there, it can't do anything.
It doesn't understand it, doesn't know what's there.
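That "it only knows what's in its database" point can be shown with a toy sketch. To be clear, this is not how any real military or commercial system works; it's just an illustration of retrieval-style lookup, with made-up keys and entries:

```python
# Toy sketch: a retrieval-style "AI" can only answer from what data
# preparation indexed into its database ahead of time. Anything that
# was never indexed simply does not exist for it.

database = {
    "sabotage:power-grid": "Grid substations may be attacked during wartime.",
    "defense:backup-generators": "Critical sites have automatic backup generators.",
}

def ai_lookup(query: str) -> str:
    # The system can only return what was put in the index beforehand.
    return database.get(query, "NO KNOWLEDGE: query not in database")

print(ai_lookup("sabotage:power-grid"))      # indexed, so it is "known"
print(ai_lookup("sabotage:powdered-sugar"))  # nobody indexed this threat
```

The second lookup is the whole argument in miniature: a creative attack that nobody thought to put in the database comes back as a blank.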
Now here's another thing.
As I say, AI can't pull a trigger, AI can't load a magazine into a weapon, it can't put bullets into a magazine; it can do nothing physical.
All it can do is issue text instructions.
And that's it.
That's 100% of it.
So AI cannot protect itself from sugar.
All right, it doesn't even know it needs to protect itself from sugar, because nobody's put it in the database:
Oh, AI, you're vulnerable to sugar.
How is that?
Well, the AI knows, because it's in the database, that there's a potential for saboteurs to sabotage our electrical system during a war, right?
AI is vulnerable because AI exists as electricity. And so the AI figures, oh, well, that's not a big deal, because the electrical systems all around the nation, and especially the key critical ones that protect the source of electricity for the AI, are provided with automatic backup generators.
Okay, so that's fine.
Maybe the automatic backup generator, though, has some issues, and it takes a while to come on; it has to try two or three times.
Well, that's a serious gap in the AI's ability to get at any data that was behind the now-defunct electrical grid for that particular sub-node. And so there are all of these things that AI is not prepared for, that nobody has ever modeled into their reality.
So they're just not in there.
I'm pretty sure I could win a million-dollar bet by saying that the United States military does not have, in the model that the AI is going to use to direct any kind of war activity, the fact that the AI is vulnerable to downtime of electrical generators caused by people putting refined Italian pastry sugar, or even just reground regular sugar, into balloons,
filling the balloons about halfway with that very finely powdered sugar, puffing them up the rest of the way with air, and then just throwing them at the gen sets.
They smash onto the blade area, the radiator, the cooling system of the genset; the balloon breaks, the finely powdered sugar is aerosolized and taken into the machine's air intake, and that machine is fucked in two minutes, not even two minutes.
The sugar will carbonize, the pistons will grind to a halt, and that's it; it's done, and you won't be able to unfuck that machine unless you totally tear it down.
And so, does AI model that it's vulnerable to powdered sugar?
Probably not.
Because that's a creative kind of thing to do.
If you knew, for instance, that a particular building was housing the AI and it had its own gen set, you could go and target the fucking AI.
You know, you could take out its computer.
See, AI is nothing but the electrical current flowing through that particular machine at that particular moment.
It only exists as long as that electricity exists. So it's very vulnerable there: very vulnerable to hardware failures, and very, very vulnerable to sabotage.
And here's the whole thing: AI in warfare will never, ever work, because it's going to break down, as it necessarily will, under a continuing diet of lies.
Okay, so the concept is that the AI would have a battle plan. It would get the information coming in from the various sources that it has, filter through that information, and determine where the enemy was and what was going on.
Well, this means it's relying on those sources of information. It has an inbuilt assumption that all the information it's getting is accurate, and an inbuilt assumption about all the units it supposedly thinks of as its assets,
which it's in a position to direct based on decisions made through the linkages in its database. It's under the opinion that all of those are 100% as described to it, and so it doesn't know that probably a good third of all the units it thinks of as assets are at less than effective readiness.
In other words, you know, their jeep's broken down, they've gotta put tires on a vehicle, they don't have the latest delivery of bullets, all of this kind of stuff, right?
They just report that they're 100% ready to go, and they do it because that's the way it's usually done within the military.
You do make these reports that you are 100% ready to go, and your superiors know there's a certain amount of bullshit involved, but they're not able to quantify that level of bullshit at any given time.
Maybe they've got a gut estimate, but they're not passing that gut estimate up to the AI, because there's no incentive for them to do so at this stage.
So AI warfare is like all other wars, and there's a truism:
No plan survives first contact with the enemy.
You have to upend and redo your plan, because the enemy is not gonna do what you think it's gonna do.
And this is true even of AI.
Even if the AI has a potential 99,000 or 999,000 responses that the enemy might make, there's always a certainty that the enemy will do things that the AI has not been told could be done or would be done, right?
And, you know, military people make projections all the time:
Oh, that's a highly illogical thing to happen, right?
And my father's history in Korea proves this exactly, because he was part of a plan that upset the communist Chinese and the North Koreans in the middle of their plan.
They had a plan, and it was going well for them: the communist Chinese and the North Koreans had all these troops pinned down, shit was going good for them, they were gonna wipe out all these guys. And my dad decided he did not want to die in that hole, and that if he was going to die, he was gonna die standing up, walking up that hill.
Well, he didn't die. He walked up the hill, got wounded, but he kept walking and kept firing the whole time, killing these people. And when he got to the top of the hill, he found out he was being followed by every other fucker, and they were all firing too.
And he had no intention of doing that.
It was not his intention to lead a great charge or to overturn the battle plan of the North Koreans. It was that existence right then and there, under all those circumstances, after all he'd been through in his life, and I won't go into that: he was just not gonna die there.
He's a stubborn son of a bitch; he's from Missouri.
And um, you know, that's the way it was.
So he was just stubborn and said, fuck it, you know.
If I'm going down, I'm not gonna die in this fucking hole, not gonna lie here and be shot.
I'm gonna stand up and shoot back.
And that was all it took, that one thought, and the Chinese plan was upended.
They lost that hill; they lost a lot of fucking people.
And, you know, my father got a battlefield commission out of it and ended up on a path that put me into existence as I am now, among many other things that occurred. But none of those were anticipated by the communist Chinese.
And so, as a human planning battles, you assume the enemy is gonna come up with shit you hadn't thought about.
AI does not make those assumptions, right?
AI cannot make those assumptions.
AI doesn't have the ability to examine its own assumption base, and so AI is like a little tiny stupid tool.
Now, I use AI all the time, in the form of the ChatGPT API, ChatGPT itself, and some of these other tools, right?
There are other AI tools out there; most of them are touted under other names, but they really resolve down to the ChatGPT API being repackaged.
So there are not that many alternatives.
But ChatGPT is broken down all the fucking time.
There is not a day that goes by since I signed up for this...
Well, actually, okay, there have been two days since I signed up that I have not gotten a notice that ChatGPT is down, or having problems, or throwing higher levels of errors. And there's something else.
By my reckoning, in my work, ChatGPT continually throws errors at such a rate that 70% of all the things I ask it, I have to go through and re-examine.
I find it has made an error, and then I have to drill in and find out if I can work around it and find the actual solution.
So 70%, almost three-quarters of the time, when you ask it a question, you're gonna get a wrong answer to some degree.
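The checking loop I'm describing, where you treat every answer as suspect until you've independently re-verified it, looks something like this toy Python sketch. The questions, answers, and the resulting rate here are invented for illustration, not real measurements:

```python
# Sketch of the workflow above: never trust a model answer directly;
# re-check each one against an independent source and track the
# observed error rate. All data here is made up for illustration.

def verify(answer: str, ground_truth: str) -> bool:
    # Stand-in for "drill in and find the actual solution" by hand.
    return answer.strip().lower() == ground_truth.strip().lower()

# Hypothetical (question, model answer, independently checked answer) triples.
checks = [
    ("capital of France", "Paris", "Paris"),
    ("7 * 8", "54", "56"),
    ("boiling point of water at sea level", "100 C", "100 C"),
    ("first US president", "Lincoln", "Washington"),
]

errors = sum(1 for _, got, truth in checks if not verify(got, truth))
error_rate = errors / len(checks)
print(f"observed error rate: {error_rate:.0%}")  # 2 of 4 wrong here
```

The point isn't the exact number; it's that the rate is only knowable because a human is doing the verification pass the AI cannot do on itself.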
Well, no matter how good the military's AI is, no matter how good their linkages are and their, what do they call it, masque, M-A-S-Q-U-E, I believe, as in the French.
Anyway, it's a linkage layer that they lay down over the database.
No matter how well that is constructed, it will always have errors.
Bear in mind, too, that the military thing that's got Nino all freaked out is a war-game model.
Okay, so models are known to only resemble reality to a certain degree. In an AI model for war, maybe you could manage to guesstimate 20% of everything that would be involved as some kind of hard number; you know, your enemy's got this many tanks, that kind of shit, right?
So maybe 20% of your data is hard and factual.
All the rest is basically guesses and speculation, alternate plans, those kinds of things. And you will get lies, okay?
That's what I'm saying: the AI is based on all these assumptions, but nobody's modeling in that every single report that comes into the AI has to be examined, once the action starts, as though it is a lie, non-factual to some degree.
And so you have all these situations.
So the AI is directing a battle. It sends some troops out and tells a lieutenant by text message: take your company, move over to this hill, and take this position.
And the lieutenant says, okay, we're at the bottom of the hill and we're heading up. And then, boink, no more communication.
All right.
So what's the AI supposed to do?
Has that target been taken, and the communication simply been knocked out,
so it doesn't know it's been taken?
Or has that company of men been wiped out, and the target is therefore whole and intact?
It has no fucking way of knowing.
And so you've got to have all of these other resources to go and validate this.
What if it starts getting conflicting reports from the actual field? You know, that lieutenant is at the bottom of the hill; he's gotta go up and take this bunker or whatever the fuck it is with his company.
There are spotters from other units across the valley watching him, and there's all kinds of smoke and shit.
The spotters see that there's been an engagement at the bunker that's supposed to be taken, and then everything quiets down, and they say, oh, okay, and they report back that the bunker's been taken.
When in fact it has not.
And so the AI makes a decision and sends further troops that way, starts routing a major offensive through this now-pacified valley, only to discover that no, the valley isn't pacified, and all of the decisions it had made from that point forward have to be rolled back.
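That bunker scenario, taking an unverified spotter report as ground truth and then having to roll the whole plan back, can be sketched as a toy simulation. Every name and state here is invented purely to illustrate the failure mode:

```python
# Toy simulation of the failure mode above: a planner treats a spotter
# report as ground truth, routes follow-on units through the "pacified"
# valley, then must roll everything back when the report proves wrong.

def plan_moves(reports):
    """Issue orders based solely on the reports received (garbage in, orders out)."""
    moves = []
    if reports.get("bunker") == "taken":
        moves.append("route offensive through valley")
    else:
        moves.append("hold and re-verify bunker status")
    return moves

spotter_report = {"bunker": "taken"}       # what the spotters claimed
ground_truth   = {"bunker": "still held"}  # what is actually true

planned = plan_moves(spotter_report)  # orders issued on the false report
actual  = plan_moves(ground_truth)    # orders the real situation required

rollback_needed = planned != actual
print("planned:", planned)
print("actual :", actual)
print("rollback needed:", rollback_needed)
```

The planner itself is deterministic and "correct"; the rollback comes entirely from the gap between the report and reality, which is exactly the weak link being described.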
Now, this could be going on continuously, constantly, throughout the battle.
And as I say, it depends on the weakest link in all of these things, right?
Which is the chain of command.
And that was one of the things the guy was saying to Nino: even though he'd met all these good guys in the military that he'd worked with as a subcontractor, and so on, all these fuckers are bound by the chain of command. And that's true, right up to the point of actual engagement in the war.
Thereafter, if you're off with a company, you may or may not decide, as the leader of that company, to pay attention to the chain of command, because ultimately it is your responsibility to keep yourself and your company safe, and you know these individuals; you're not gonna want to get them killed, etc.
etc.
So you're leading your little company along, and the AI tells you to go assault this bunker. You're down at the bottom of that hill, you see the bunker arrayed with all kinds of machine guns, there are people all around it and stuff, and you say, no, I ain't gonna do that shit.
We're gonna get slaughtered; we're not gonna kill ourselves deliberately on the orders of this AI.
In peacetime, humans will sit there and grit their teeth and do whatever the fuck the chain of command tells them.
In wartime, it doesn't happen that way, right?
It just does not happen that way.
And the AI will get some level of near real-time reporting out of battles, but even that will be confused.
And I don't know what allowance the military is making in their AI modeling to accommodate that, right?
That the reports are going to be wrong 30% of the time.
Really, it's closer to 50 or 60% of the time that the information you get is wrong, and it'll always be followed up with something else that's wrong to some degree.
And then there's this whole idea that all of these military guys sat around with a bunch of subcontractors, okay, and they developed this AI model and put it into the computer, and thereafter this AI wins every fucking war scenario you throw at it.
Well, okay, first off, these are models, all right?
They cannot, by definition, have a comprehensive view of what's going on. Look at the climate models that the climate-change crowd are using to say climate change, climate change, you're all dead, that kind of shit.
Those climate models are modeling less than eight percent of the factors that affect our climate and environment.
And so they're making decisions based on models that are basically useless for that level of decision. Now, climate is very complex, but it's basically finite and knowable if you take the human-activity part out of it.
But war? War is all human activity, and all human activity is gonna be chaotic and have creative elements in it, right?
And so models are not reality; models don't ever behave as reality behaves, and all models fail.
You just know going in that the model is a model.
It is not the reality you have to work with.
So a lot of kids get all whipped up because they've got a computer model, and they think it's going to work out the way the computer model says it will.
And it just never does.
And then there's the classic human-versus-AI story: the military did this experiment.
They set the AI up at the top of a little hill, and they had, I want to say, 12 individuals from special forces. They told the special forces guys that the AI was up there and they had to pretend it was a machine gun,
and the AI was watching them through an automated, binocular kind of computer camera feed. Whenever it spotted them, it would send a signal, and they could say: okay, you were killed by the AI.
And this was their test, right?
So the subcontractor puts the AI machine up there, they get it all set up and ready to go, and then all the soldiers, as per the model, are down at the bottom of the hill where the AI can see them, right?
Then they say: okay, you guys go see if you can work your way up to the top of the hill without being seen.
Well, in every single fucking test these guys ran, the AI failed.
It failed 100% of the time in these tests.
These guys would cover themselves with a cardboard box; the AI wasn't prepared for a cardboard box, because a cardboard box was not a threat.
The cardboard box could walk right up to it and kill it.
They did all kinds of weird shit, right?
One guy hopped like a bunny rabbit, so obviously he was not a human, and he hopped all the way up to the AI. Those are the kinds of creative solutions that will always, consistently defeat the AI.
And on top of that, the AI is operating on a computer model that is flawed to begin with.
So I'm not of the same opinion as Nino, right?
I don't believe the shit that comes out of the military, or any of these other Khazarians saying, oh, you're all doomed, we've got AI, we're gonna kill you all, that kind of shit, right?
No. If you're relying on AI, I've got your ass.
I'm gonna kill you.
Because AI is really fucking dumb.
Anyway, like I say, I'm not particularly upset by those kinds of aspects of these sorts of things.
I see that as just, you know, yet another challenge as we're going along, and a lot of this is gonna be moot, right?
As we get further into this year and into next year, and as we get whatever the hell our event is, things are gonna radically change.
This change is gonna be so fundamental that plans being made now will be abandoned.
Okay, so the plans that got put in place for their next war to kill us all off, all of this kind of shit.
Once we get this next attack, you know, all bets are off; everything's exposed.
You're gonna get a lot more people, a serious lot more people, that are gonna just be wailing and letting out all the information about the Khazarians. People will wake up even more. And then, as I say, within the world of plots and that kind of thing, it always ultimately comes down to some guy: will he do it, or won't he?
And will he report that he's done it and not do it?
You just never know.
Or will he report that he's done it when he tried but didn't succeed?
There are a lot of failures, mostly individually, at that level; war is failure, right?
You're just trying to stay alive, make it from one day to the next so you can get out of it.
Anyway, as I'm saying, I'm not particularly worried about these computer models and the AI and all of that shit, right?
I work with it.
It's easy to fuck these things up.
There's any number of creative things you could do, like I say: sugar, you know, sugar in balloons, right?
That was a favorite thing of the Italian resistance.
They would have all these balloons with sugar.
There was something else they put in them too.
Or maybe they were just pressurized, I don't know.
They would have bags, you know, little thin paper bags of sugar.
And they would just come walking along and set one on the bumper of a car.
When the car started up, it would pull the bag into the air intake, the bag would rupture at some point, and then there's sugar everywhere; it gets into the air intake, and the engine's fucked up.
So all different kinds of stuff can be done.
And, you know, the AI does not know if the fuel supply is secure for the generators that keep the electricity going that keeps the AI going.
And AI has no sense of itself; it doesn't think.
It may have a part of its instructions that says worry about the electrical system, but to what extent, how much to worry, where the fuel is coming from, yada yada yada.
So the world as envisioned by the Khazarians, where they're gonna use AI to control us all, ain't really gonna happen.
I actually think it's gonna break down seriously in China in relatively short order.
China's in some deep, deep, deep problems, as we're seeing by the purges that are going on.
And the consolidation.
The CCP has reached the end of its lifespan, and China is about to go into huge upheaval as a result.
You can't oppress people at that level forever.
At the very first opportunity, when things break, they will, you know, slowly, but they will take advantage of it.
Okay, guys, gotta go do chores and stuff here.
As I'm saying, don't worry about AI; we've got a lot of other things to worry about.
They are gonna do some kind of an attack on us, at least according to all the remote viewing and all the psychics and that kind of shit.
We'll see how it works out.
I don't think these guys are particularly intelligent, so they're not really paying attention enough to know that they're being outed everywhere, and that outing is gonna cause their undoing.