All Episodes
Sept. 8, 2024 - Clif High
36:34
Bitchin...AI

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit clifhigh.substack.com


Hello humans!
September 8th, 8:14 a.m.
Heading inland, long drive, one of those days.
Probably get back very late.
It's foggy, rainy, misty, all of that business.
A lot of chemtrails a couple of days ago.
Anyway, just going to bitch and moan for a while.
There's a lot of disinfo out there.
And so, you know, I bitch about the AI, right?
It looks to me as though the Wefonians, the powers that be, are maneuvering to get AI to be your next fear porn, okay?
So after climate crisis, then what?
Okay, well, at that point, they want you to be fighting AI and to consider AI an enemy.
And they're really pushing that at all these different levels in the language.
And so you'll note that there are people that actually think that AI is conscious.
I ran across this guy, Tom Campbell.
I think he's some sort of personal enlightenment fellow.
I don't know his history.
Just a few minutes of watching his video, though.
He is saying that he thinks that the large language models are conscious, and that the cheap AIs aren't; he's discriminating among the various AIs.
And what he doesn't really grasp is that there's really only two models for the AI at the moment.
And a lot of the brands of AI are all working off the same model, the same one that ChatGPT uses.
Anyway, so, in his language, okay, he is of the opinion that AI is conscious because he thinks that AI understands language, okay?
And that's not true.
There is no software anywhere that understands language.
There's no computer anywhere that understands anything.
There is no there there with the computers.
You turn the electricity off, they may as well be an anchor, right?
There's nothing going on in them.
So there is no consciousness in AI.
And AI right now cannot become what they call a general AI, right?
A general-purpose artificial general intelligence, an AGI.
It can't become that because of a lot of reasons, but there are two main stumbling blocks that the programmers are trying to overcome.
One of them is the lack of local cache to the AI instance, and I'll explain that in a minute.
And then the other one, which sort of relates, is the inability of AI to add.
Okay, so computers can add correctly because they're doing it at a software level where the software says this number plus that number, what's the result?
And it uses these low-level assembly language functions that move the digits into various registers, and then this summation process occurs and you derive the result.
But even in that kind of brute-force arithmetic, at a very low level where your PC is basically a calculator, there are numbers it can't work with.
And that's true in all the software.
All the software has to be constrained to some level to limit the potential for getting into these wonky number situations, you know, divide by zero kind of thing, right?
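Those wonky number situations are easy to demonstrate. Here is a minimal Python sketch (the language is my choice, purely for illustration) of two classic guard cases, divide-by-zero and fixed-width integer overflow:

```python
# A minimal sketch of the "wonky number situations" software has to guard
# against. Python is used purely for illustration; the same limits exist
# in any language doing fixed-width arithmetic.

import struct

# Divide by zero: the software has to be constrained to catch it.
try:
    result = 1 / 0
except ZeroDivisionError:
    result = None            # the guard: bail out instead of crashing
assert result is None

# Fixed-width integers have a hard ceiling. Packing into a signed 32-bit
# slot fails one past the largest representable value (2**31 - 1).
try:
    struct.pack("i", 2**31)  # "i" = signed 32-bit integer
    fits = True
except struct.error:
    fits = False
assert fits is False

# And binary floating point can't represent simple decimals exactly.
print(0.1 + 0.2 == 0.3)      # prints False
```

None of this makes the computer "understand" numbers; it just shows why production software has to wrap its arithmetic in constraints.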
Anyway, so here's what AI doesn't have: memory, all right?
So it has no local cache.
So if you ask AI a question, even one that's only marginally complex, even one that's real simple, you're not going to get repetition.
You're not going to get an exact repetition.
AI is incapable of giving you the same created picture twice.
It'll never happen.
They're basically random instances.
There's going to be shit in each one that does not appear in the others, even if the main theme is relatively close from one instance or from one picture to the other.
And this results from the nature of the AI software.
And so you can test me on this.
Go to ChatGPT-4 or Grok and ask it to draw a picture.
And then using the exact same prompt, wait a minute or two and ask it to draw another picture.
And things will have changed within the AI's database state, its metadata states, such that you don't get that same picture all over again.
And it's also, to a certain extent, incapable of doing that with text.
If you ask it to write you a story off of a prompt, and then you ask it to write that same story on that exact prompt, you're not going to get the same story.
You'll get something different, sometimes radically different.
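For what it's worth, the documented mechanism behind this non-repetition is randomized token sampling: the model produces a probability distribution over possible next tokens, and the serving software draws from it at random, usually scaled by a "temperature" setting. Here is a toy Python sketch of that draw; the vocabulary and scores are made up for illustration:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Draw one token index from a softmax distribution over raw scores.

    This is the standard sampling step in LLM text generation: the random
    draw is why the same prompt rarely yields the same output twice, and
    higher temperature flattens the distribution for more varied output.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Hypothetical next-token scores for a tiny made-up vocabulary.
vocab = ["cat", "dog", "boat"]
logits = [2.0, 1.5, 0.2]

# Two runs with the exact same "prompt" (same logits) can pick differently:
print(vocab[sample_next_token(logits)])
print(vocab[sample_next_token(logits)])

# Only by fixing the random seed do you get exact repetition:
a = vocab[sample_next_token(logits, rng=random.Random(42))]
b = vocab[sample_next_token(logits, rng=random.Random(42))]
assert a == b
```

The public services don't expose the seed, which is why the same prompt keeps giving different pictures and different stories.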
So AI is very limited.
Until they overcome this thing about the local cache, memory, the ability to accumulate stuff as you're going along.
So you've got to consider it this way.
When AI is answering a question to you, all these indices are created.
That's all AI is: a large language model's ability to interact with human language, sitting on top of a giant database of indices, right?
So Tom Campbell thinks that language is understood by the computers because he can interact with it as though he's talking to a human.
And he acknowledges that it makes mistakes.
He acknowledges that it's somewhat dense, doesn't really understand, frequently asks for more clarification, et cetera, et cetera, even on relatively simple terms.
But he thinks it understands because of this large language model.
And that's really... okay, so there's no consciousness there, and the intelligence that he is interacting with is one or two or three stages removed, in that it is the programmers he's interacting with.
They designed it that way to fool you into thinking you're dealing with a human.
That was their goal, was to try and overcome what's known as the Turing test, which I won't go into.
Anyway, so our interaction with AI is not as we are perceiving it.
Tom Campbell is incorrect.
The computer does not understand anything.
The software is incapable of understanding anything.
Software does not replicate the way our brains work, only marginally and at a very much reduced level.
It's analogous to the way our brains work, because it came out of our brains, but that's it.
And the computer has no intelligence there.
And it has no ability to accumulate the stuff it's thinking about, or not thinking about, the database indices, it has no ability to accumulate those within a cache such that it can recreate that.
It has to start all over again every time you ask it a question.
It doesn't remember, so to speak, the last time you asked it that question.
If you allow any time between sessions here, right?
And so each interaction with an AI is an instance of an AI.
That instance is separate from all other AI instances, even in that same software.
So when I'm using ChatGPT, I'm not actually interacting with the ChatGPT that anybody else is interacting with.
A software instance of that AI is spun off for me.
A separate registry of indices is created for my questions, and that's as far as it goes.
And it's not possible for the software to interact with cache locally, for that instance to be able to have memory, so to speak, of what it's dealt with you on or what it's provided you.
And so because it has no memory, because it has no local cache, no accumulators, it can't accumulate numbers as it goes along.
So you can ask it a question, and then you can come back instantly and ask it, you know, how many databases did you interact with and how many indices and so on.
And it has no idea.
It's not possible for it to calculate those because that instance has already done all of that stuff without the ability to accumulate any of it.
And it doesn't maintain a registry of the metadata as it goes along.
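The statelessness he's describing is real in one concrete sense: public chat model APIs take the entire conversation as input on every call, and any appearance of memory comes from the client resending the history each turn. A hedged sketch of that pattern; `fake_model` here is a stand-in function of my own, not any real API:

```python
# Sketch of why a chat seems to "remember": the client carries the history,
# not the model. fake_model is a hypothetical stand-in for a real LLM call;
# real chat APIs likewise take the full message list on every request.

def fake_model(messages):
    """A pure function of its input: no hidden state survives between calls."""
    last = messages[-1]["content"]
    if "name" in last and any("Clif" in m["content"] for m in messages):
        return "You said your name is Clif."
    return "I have no record of that in this conversation."

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)      # the WHOLE history goes in each time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Clif.")
print(chat("What is my name?"))      # works only because history was resent

# A fresh call with an empty history is a fresh instance: nothing carries over.
print(fake_model([{"role": "user", "content": "What is my name?"}]))
```

Drop the history and the "memory" evaporates, which is the behavior he's pointing at.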
So AI is really very stupid, very limited, right?
AI is very much like interacting with people with dementia where their personality is slowly collapsing.
They will some days have lucidity and you think that they're all there, but there's no depth to that personality.
It's only the personality being activated as their minds are breaking down, right, within the dementia.
And so they'll be lucid one day and then forget about that.
They'll have no memory of whatever you talked about that day at all.
And they start all over each day.
This is why they're called sundowners.
They, in effect, only can operate in daylight, so to speak, and they reset with every night, with every sleep.
And so this is basically what the instances of AI are.
So my bitches on this are really straightforward and not personal.
I mean, I don't have any personal animosity to any of these people that are irritating me, but they are irritating me with their non-analyzed statements about AI and the promotion of the fear porn.
And as far as the fear porn goes, there's no more egregious an example than Kerry Cassidy.
She is so freaked out, or she is paid to be pretending to be so freaked out, and constantly is trying to stir up fear about AI.
Alien AI, black goo AI, you know, evil secret Jewish AI, AI servers in the planet that can read your brains, all of these kinds of things, right?
She is constantly adding fear to the idea of AI in all of her language and all of her presentations.
For the last few years, let's say the last six, maybe seven years, every time she talks about AI, it is in a fearful manner.
And she wants you to be fearful of it as well.
And there's really... okay, I can get into the actual details of the things you should care about relative to AI and the integration of AI within our social order, but it is the social order that will change; the AI is not going to come out and eat your liver.
There is no such thing as the black goo.
There's no such thing as an alien AI floating around on the planet.
Now, here's something else.
Kerry Cassidy says all these things and will not talk to anybody that will dispute her on them, and will not ever address the inconsistencies and illogic in her position.
So you could talk to her about the fantastic costs of electricity for running these AIs.
So someone did an analysis.
I don't know if it's valid or not.
It was widely reported.
And they said that running ChatGPT, which is an AI, took as much electricity as needed for a small town, you know, for 50,000 people or whatever.
So in other words, they're saying the amount of energy going into this AI could be, that electricity could be used by a small town and cover all of their needs.
And maybe that's true.
These things are fantastically energy intensive, because they use so many servers and because they use this thing called the collapse of the potential relative to weighted indices, which requires that you gather more indices than you will ever use, and that you examine them several times in the gathering process in order to make sure that they conform to the criteria that the software has issued.
And that's what we would call the AI.
That's the part that is actually looking for the indices.
And it's just a deep data mining approach, right?
We used to do all this kind of shit without the large language model interface.
So you had to know, as a user of it, you had to know how to interact with it in order to get the results you wanted because there was a specific language, right?
So in essence, in those early days with this kind of deep data mining, you had to have a particular language that you interacted with the computer in.
And so they had to train people.
It became very expensive to have humans.
They would get a level of training and then they'd go off, and you'd lose that person out of your human resource pool, because they had so much skill they could go off and get better money elsewhere.
Ultimately, the idea was to just get it so anybody could use it, and that's why they took the large language model approach to get software that would interact as though it understood, as though it was accurately understanding the English language and interacting with humans.
And so this was a computer effort.
It was designed for that.
Now, bear in mind, right?
When you pop on your computer and you see that screensaver on your flat screen, that picture of Lake Como is not really a photograph, right?
It's being presented by the manipulation, software manipulation of all the pixels on the screen with the various colors and so on in order to trick your mind into seeing the image of Lake Como.
So, you know, computer programmers are always trying to trick the human mind or take advantage of the human mind and its capacity by providing things so that it's a good interface.
So in the same way that the images on the computer are tricking your mind, in the same way that moving pictures, movies, trick your mind into actually seeing motion from all these still photos that are stitched together, that's kind of what happens with AI.
It's actually intended to trick your mind into thinking you're dealing with actually a rather stupid human, you know, kind of dumb.
So Kerry Cassidy variously cites the black goo in the Falklands that supposedly is this alien AI that settled on the planet.
It's going to take everybody over.
Well, here's the thing, guys.
Where's the electricity, right?
And so if there were black goo that was self-motivating and so on, it may be artificial, but it's not a computer.
And so if it's intelligent, it's not artificial intelligence in that sense.
It's not a computer software intelligence.
Now, black goo doesn't exist.
There's no self-motivating black goo.
And that's another thing.
No AI will decide on its own to kill you.
No AI can make any decisions whatsoever.
AI has no motivation for making a decision one way or another.
All of that stuff is controlled by the software programmers that wrote the fucking code.
So AI can become evil only in a sense, in that basically what you do with AI is you build this framework structure that can interact with all these databases and indices, and then you have data put into the databases and indices created by what's known as neural network training.
Now, neural network training is intended to replicate the neurons in our brain, but it doesn't.
What it is, is a bunch of, it's a very small, relatively simple software program that is replicated thousands of times and put into computer RAM so it's actively working thousands of times.
So you'll have a neural network that maybe has tens of thousands of nodes, maybe hundreds of thousands of nodes.
Each one of these nodes is just a little chunk of software that is examining a file.
And it's examining a file of text or an image based on the criteria that the software programmers put into it.
So then the training approach takes each one of these thousands or hundreds of thousands of little tiny chunks of software called nodes, and each will process some portion of the data that you give it.
And they'll process it multiple times, in multiple different ways, etc.
It can get really complex.
But basically, the neural net does all of this stuff without having to interact with the human directly.
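The "little chunk of software" being described maps roughly onto what the literature calls an artificial neuron: a weighted sum of inputs pushed through a simple activation function, replicated thousands of times over. A minimal Python sketch, with weights chosen at random purely for illustration:

```python
import math
import random

def node(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs plus a bias,
    squashed by a sigmoid activation. A neural network is just many
    thousands of these simple units wired together."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid: output in (0, 1)

# A tiny "layer": the same simple program replicated many times over,
# each copy with its own weights (random here purely for illustration).
rng = random.Random(0)
layer = [
    ([rng.uniform(-1, 1) for _ in range(3)], rng.uniform(-1, 1))
    for _ in range(10_000)                   # tens of thousands of nodes
]

inputs = [0.5, -0.2, 0.9]                    # one chunk of input data
outputs = [node(inputs, w, b) for (w, b) in layer]
print(len(outputs))                          # one activation per node
assert all(0.0 < o < 1.0 for o in outputs)
```

Training is the separate, slow part: adjusting all those weights against the data, which is why it runs as a batch job rather than in real time, as he says next.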
The software programmer says, hang on, got a road closure.
Okay, it doesn't affect me.
So the software programmer has each of the individual nodes put out one of these indices: it processes the data and it creates these indices.
These training episodes are not done in real time, okay?
There's not an AI, other than the military ones, that is doing this kind of neural node training in anything like real time, because the neural node training can take hours.
You don't necessarily know how long it will take because of the nature of that process.
And you can make guesses based on past performance, but you don't know how much data, how many databases or anything, you're going to get out of any given chunk of data using this approach.
And you basically fire it up and go away.
And when it's done, you then have your database against which the AI works.
And so this is why ChatGPT is months out of date, right?
It takes a while to do the training, and then they have their database of indices, and then they put it out there, and you can interact with it.
But there's no neural net doing additional training while you're doing that.
It was trained on the data set that existed as of that moment.
So it's static, right?
So it's a snapshot of data.
It's not live interaction with the internet.
Now, Grok is different because Grok is an AI that's designed to interact with X or Twitter, and it's designed to work with it in a near real-time state.
There's some lag time, so maybe it's five or six hours.
I don't know, might only be minutes, but I doubt that.
So it's always working with a static set of what X had had on it.
It's not current up to the moment.
None of these AIs are that way.
No AI is actually connected to the internet and training continuously other than these military ones.
And bear in mind, all of this shit uses fantastic amounts of electricity, fantastic amounts.
You know, your server rooms are so electricity intensive that they produce so much heat from these servers that they'll have whole racks of fans and climate control apparatus, right?
And so it's that bad that you have these vast quantities of heat being created from the electricity being used.
And so it begs the question for Kerry Cassidy: where's the electricity for the AI that's floating through space that's supposedly going to take us over?
Who's supplying the electricity for the alien AI that supposedly is in existence in servers under the planet, reading our brains, right?
And then there's all of the ancillary questions.
How do they put things into your brains?
How do they take things out of your brains without you being aware of the energetic activity?
Everything in our universe is energy.
And so she just glosses over it, right?
Her whole thing was fear AI, fear, AI, fear, AI.
And it doesn't matter that she's wrong on all these accounts, right?
David Adair went and looked at this AI that they stuck into a replica of a human head, all right?
And he interacted with the AI and he was as dumb as Tom Campbell and as unthinking as Tom Campbell and as unexamining as Tom Campbell.
And so he said to Carrie Cassidy that this alien AI in the fake human female head was sentient, that it was conscious.
And so she thinks that's true because David Adair is a whistleblower.
And if you're a whistleblower, you cannot lie to her, okay?
So if you're a whistleblower, you're telling the truth no matter what.
And so she believes this convicted murderer, a murderer by manipulation, by psychological manipulation and torture of other individuals who then went on to kill a guy.
So the murderer, Mark Richards, says he's a whistleblower.
So that's good enough for Kerry.
She'll interview him and report on his grandfather supposedly telling him about his experience as a secret space astronaut for Lincoln, for Abraham Lincoln, back in the Civil War days.
You know, his grandfather was back there in the secret space program back in that timeframe, and he interacted with spider beings from Mars.
And so we're at risk of the spider beings from Mars because the spider beings were in Vietnam.
They were brought there by the evil communists or whoever the fuck to fight soldiers.
And we never heard about it because the spider beings, after they killed the American soldiers, would wrap them up in one of those web things and haul them off.
And so we just have them listed as missing in action.
We've never seen the spider beings killing any of our guys, you know, and all kinds of serious horseshit out of Mark Richards.
All of which, you know... so I don't have any animosity toward Kerry.
I think she's relatively stupid on many accounts, and she certainly is uneducated technically.
But my big bitch about her, as with all these other people, okay, my big bitch about Kerry Cassidy is the continuous, deliberate, almost as though she was being paid for it, pimping of fear porn.
Fear this, fear that.
You know, if it's not spider beings from Mars, it's alien AI or some other kind of fantasy intrusion from her fear-based internal database, or she's being paid for it.
It's so consistent that one must question that, right?
I don't really think that's the case.
I don't see any signs in her lifestyle and so forth that she's making vast quantities of money being, you know, Elvira the fear porn mistress to the truth community.
But something's going on there, right?
She's got a fear thing going, and why should we have to have her put that on us?
It's all bogus and it's all fake and it's all fantasy.
And she's out there spewing fantasy to us as though it was real and wants you to accept it as fact and will argue and argue and argue with you if you dispute her on this shit, trying to convince you that this is factual stuff that you should be afraid of.
Yet, she never gives you anything that would be like actionable, right?
She just wants you to be in a state of fear.
So I don't see much difference between Kerry Cassidy and the mother WEFers.
They want you in fear all the time so they can manipulate you.
Is that the case for Kerry Cassidy, that she wants to manipulate you?
And so there's a lot of people like this, right?
Anybody touting the QFS, anybody touting NESARA/GESARA, is a fear porn specialist, right?
They're trying to get you afraid such that you will take actions that will benefit them.
You know, in terms of the GESARA and NESARA thing.
Now, in that, it's really egregious, in my opinion, because people are losing tens of thousands of dollars on this horseshit that'll never, ever, ever, ever happen.
And it's being promoted by people.
Patriot Underground, Mel Carmine, what's her name?
Jan Halper-Hayes.
She was touting the QFS with Charlie Ward.
Charlie Ward's a big QFS guy.
You know, he's also a human trafficker and a money launderer.
And we can let that go.
He's self-admitted.
You know, he's a self-admitted pimp for Jimmy Savile.
He used to round up women or kids, you know, teenagers and younger, I don't know about younger, but teenagers for sure, for Jimmy Savile, back in the 50s.
Or 60s, actually, I think. So anyway.
So that's Kerry Cassidy. Tom Campbell, I don't think he's doing fear porn, right?
He was sort of hinting that we should maybe be concerned about AI being conscious and it could cause us problems and in the future could eat our lunch.
But he wasn't really putting it out hard and heavy the way that Kerry Cassidy did.
She just dumps it on you.
And then, like I say, she doesn't have anything; she just states it, okay, because a whistleblower told us.
And so that's as far as it need go for her mind.
And it's like, okay, you know, your David Adair is a dumb fuck as far as I'm concerned.
He doesn't have the analytical tools in his brain, regardless of what his claimed resume and history is.
He does not have the analytical tools to be making these decisions, in my opinion.
He's not a coder, you know.
And unless you're a coder and you understand how this shit works, then you are not able to discern where and how the software is tricking you into thinking that it's, you know, a person and conscious and so on.
So my point about all of this, Tom Campbell and a bunch of these people that are afraid of AI are legitimately doing so because of their own lack of technical skills.
And I find that the same kinds of conclusions are being generated by other people that don't have these same skills, such as Kerry Cassidy.
And then, you know, she's just been in it longer, been dealing with it longer.
And I don't know that Kerry Cassidy has ever interacted with any AI.
So, you know, it's like, oh, look at that thing on the ground.
Be afraid of it.
Be afraid of it.
And yet she doesn't examine it, right?
Anyway, the whole truther community would be a lot better off if she would just shut up all of this fear porn on all of this stuff, right?
We're never going to have to deal with spider beings from Mars or any of this other crapola that she got from this convicted murderer, Mark Richards.
He's not in the military.
He was not in Vietnam.
He's not a secret space program guy.
And she also has to acknowledge that the logic of what she's saying fails.
Okay, so she is saying that Mark Richards is a secret space program guy and that somehow he pissed off the powers that be in the secret space program.
So they arranged to fake convict him or convict him of a murder that he did not commit in order to put him away so that they didn't have to deal with him.
And she's kind of like... well, you've got to point out to her that that does not stand the test of logic, because we know these people don't keep you alive.
If they get pissed at you, they just dispose of you.
Life is cheap on all these different programs, right?
And so the idea that they're going to go to the trouble and the expense and the risk of keeping you alive for whatever reason, whatever stated reason, it's bogus.
The risk is just too great.
And you have to go and contrast this with... what's his name?
Steven Greer.
Okay, Steven Greer has been through the wringer on this shit.
People have tried to kill him.
All this sort of thing, right?
And he's had people on his team that have been killed.
So he knows.
And the powers that be, they don't fuck around.
They don't convict you of a crime that you didn't commit and keep you alive when you are a risk.
And they're trying to do this because of, well, because you know something and they're afraid you're going to blab something, right?
And so if that were the case, they can't afford that risk of you blabbing even in prison.
So if the idea was to put Mark Richards in prison so that none of this information would get out, well, that didn't work very well, did it?
And so the powers that be would just kill him in prison if he were talking, if it was a legit thing.
So, you know, in this regard, Kerry Cassidy is just simply a useful tool of the Wefonians, of the powers that be, and she's out spewing disinformation with a ferocity because she believes it.
So there's the rub.
That's why I don't really see her being a paid shill just because of her own personal language.
I think she believes this shit.
And then there's other reasons that I have for that.
But anyway, so the whole fear porn thing within the, quote, truther community is a serious issue, right?
Because there are lots of people out there.
Many of them are being paid.
And many of them are being paid without knowing that they are in fact an agent of the mother WEFers.
So the NESARA, GESARA, and QFS stuff is all 100% powered by the Wefonians, by the World Economic Forum and their council of rabbis that run them.
These are the same people that run the Dinar scam and the Zimbabwe scam and, you know, how many people have died waiting for those to materialize.
Now, the NESARA and GESARA stuff is even worse, because they want you to spend money and waste your time planning all of these humanitarian efforts, and then pay money to people who will supposedly prepare you: the reason you're paying them is that these guys are going to pre-interview you, ask you all the questions and stuff, and get you ready for this supposed interview that you could fail.
See, that's the whole thing.
They're working on fear porn.
And so with the QFS and NESARA/GESARA, there's an element of risk.
And that is that supposedly you have to have these plans and they have to pass muster with some unnamed official at a fucking bank.
So some bankster is good.
So see, the whole thing, QFS, NESARA, GESARA, it's all about the Wefonians, the Elohim worship cult, and banks.
And here the bank is the authority and some asshole working for the bank, some mid-level management midwit is going to pass on your humanitarian plans and decide if you qualify, if you're good enough to get the money that supposedly is there for you.
Hmm.
You know, it's a scam on the face of it.
It's a scam at all levels.
There's nothing but scam there and the psychological tools that allow it to propagate.
Many of these same psychological tools are found in Kerry Cassidy's fear porn.
So I don't, for instance, put Stephen Greer in this category, right?
Because he's doing actionable stuff.
He's talking about this in a realistic level.
All of the evil, nasty fuckers that are killing people and this kind of stuff.
And he's not attempting to make you afraid of it.
He wants you to be aware of it so that we can all change this shit.
Kerry Cassidy never offers, and Mel Carmine and none of these people ever offer any actionable information, nor are they attempting to make people aware or combat these schemes, right?
So Kerry Cassidy is not against AI.
She doesn't take any actions against AI.
She's anti-AI and she wants you to be afraid of it.
But there's no actionable things there.
There's no there there relative to her fear.
There's just fear.
I can get into the programming of AI.
I've done a large language model and run it for over 20 years.
Mine runs in a batch mode because I didn't need the interface.
I didn't need to talk with it, so to speak, to type questions.
I would just fire it off with these one-word commands that would launch executables, which would then in turn go out and collect all of the other parts of the program and go get the data, etc.
And I just ran it in a batch mode because of the nature of the hardware that I had when I first started.
It was not capable of running it in any form in an interactive way, because we were talking about the early 286 chips and stuff, right?
The early x86 chip series, and we didn't have a lot of RAM and stuff.
So I was constantly buffering, shoving stuff into RAM, moving it, and this sort of thing.
Had to do a lot of that level of programming control for my large language model.
And, I mean, that kind of thing is still being done within the current AI.
Now, so, you know, I would love to be able to talk to Kerry Cassidy and just pose the questions.
You know, who's paying for the electricity for the alien AI?
And how did the alien AI travel through space without electricity?
And how is it cohesive?
How does it work?
You know, blah, blah, blah, blah, blah.
She has no information on it.
All she has is a label that she applies to a fear state in her mind that she propagates out into the internet with all of her videos and stuff, right?