Dec. 13, 2023 - Clif High
31:08
Artificial Intelligence is retarded.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit clifhigh.substack.com


Hello humans!
It's the 13th of December, early in the morning. Gotta go do chores, one small stop to make a payment to some soil engineers for some work, and just regular chores after that.
Traffic stuff right off the bat here.
Okay, so it's an interesting time. We've got our splits happening.
You know, I mean, we got Alex Jones back on Twitter.
So, you know, it's all the world's ending.
There's no question about that.
Anyway, so how do I get across this idea?
All right, so I've been playing with artificial intelligence ever since it's been generally available for, you know, civilians, not just people inside the companies, to get involved with.
You know, it's impressive to a certain extent, and then it fails.
So, for instance, you can get it to do a, oh, a really good picture, right?
You could describe it.
You can tell it what you want.
You can say, okay, you know, and actually I've done this repeatedly, trying to work around some of the problems with AI.
All right, so we don't have artificial general intelligence.
That, in my opinion, in my definition, is where you can teach the artificial intelligence to learn on its own.
Man, this gets into some real technical stuff.
And let me see how I'm going to frame this.
All right, so here's the thing.
Let's look at the symptom and then we'll look at the cause.
All right, so the symptom is that you can do a picture with AI.
And you can do a very incredibly detailed picture.
You can do it photorealistic.
It'll do all kinds of cool stuff that way.
And then you go back and you tell it, using that same character, to do the next thing. So say you're going to develop a cartoon character, okay?
And say we were going to develop, you know, like a version of Roadrunner and Coyote.
Okay.
And so we'll call this woodpecker and fox.
All right.
So we're going to do a cartoon and we're going to have a character that's a woodpecker and a character that's a fox.
And we go to AI and we say, okay, AI, you do this for us, right?
And you make this character for us.
And it'll come out, you got just a beautiful character, it's just what you want.
You know, maybe it takes you three or four or five iterations to get it to zero in on it.
And it gets you this character that you want.
And then you put the character into a scene in an image.
And then, since you're going to be making a whole series of cartoons and stuff, you want to make the next image with that character.
And so you tell it, you know, you tell the AI, take this character and make it do this now.
You know, instead of running away from the fox, you make it fly.
Okay.
And so it's going to fly for you.
And that's when you discover the real problem of AI.
That AI has no memory and no ability to follow sequential instructions.
So this is what it ends up being.
Hang on, I got people and dogs and weird shit going on here.
Hang on.
Crawling in out of the woods.
That's not good.
Well, maybe they're mushroom hunters or something.
Anyway, so because it doesn't have a memory, it doesn't hold in its, so to speak, mind the image that it just created.
In fact, it can't see the image.
All it can do is issue instructions to code, which then produce the image for you.
But the AI has no comprehension.
It has no mentition at all.
It doesn't cogitate.
And it doesn't have any visual mechanisms.
So it doesn't know what it actually created for you in the way of an image.
Thus, when you go back and you tell it, take that woodpecker and make him now fly up in the air away from the fox as the fox is chasing him, it will do that.
It will create you a new scene with a woodpecker flying away from a fox.
But it won't be the same woodpecker and it won't be the same fox.
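In code terms, the failure looks something like this; a minimal sketch, assuming a hypothetical generate_image function standing in for whatever text-to-image backend you're using:

```python
# Hypothetical stand-in for any text-to-image backend; the point is the
# signature: it takes only a prompt, no handle to anything made before.
def generate_image(prompt: str):
    ...  # a real system would return image bytes here

# First request: the model samples *a* woodpecker and *a* fox.
scene1 = generate_image("cartoon woodpecker running from a fox on a desert road")

# Second request: there is no way to pass in the characters from scene1,
# so the model re-samples both from scratch and they will not match.
scene2 = generate_image("the same cartoon woodpecker now flying away from the same fox")
# "the same" is just more prompt text; there is no memory for it to refer back to.
```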
And by the way, every time you get an image out of AI, a cartoon image, a photorealistic image, all this kind of stuff, there's shit in the image you did not put in there and that you didn't want to have go in there.
You wanted to have fewer elements.
So you could tell it, make an image of, and this is going to happen on every image, okay?
And you could tell it, make an image of Joe Normie at a cocktail party on the fantail of a big yacht, okay?
And it'll do that.
And then you say, well, shit, you know, why are there 45 people all around?
And why is there this big giant person standing on the, you know, the poop deck?
And why are these two people hanging over the edge?
And, you know, basically, why is there just this extraneous leg and a foot sticking out from the side of the boat?
So AI puts shit in there because it has no visual acuity and has no memory and it has no discrimination or control.
And so what happens is this.
AI works as data.
So in creating an image, you have to go through what's known as the large language model, where the AI sort of understands spoken language, you know, allows you to speak to it as though it were a personality, okay?
And, you know, where you can tell the AI, do this, as opposed to actually having to write computer code.
And so the AI then interprets your language to see what you want, and then it has resources, you know, hooks into an image generating program, et cetera, et cetera, that it uses.
And so it will then take all these various elements and it will do its best to come up with some instructions that when those instructions hit that image generating program, it will generate what you want.
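Roughly, that division of labor looks like this, sketched with hypothetical names, not any vendor's actual API:

```python
# The language model never sees pixels; it only emits parameters that a
# separate image program consumes. Both functions here are hypothetical.
def llm_interpret(user_request: str) -> dict:
    """Turn the user's words into generation parameters."""
    return {"subject": "woodpecker", "style": "cartoon", "action": "flying"}

def render(params: dict):
    """Separate image-generation program; the LLM has no view into it."""
    ...  # a real system would return image bytes here

params = llm_interpret("make the woodpecker fly away from the fox")
image = render(params)
# The LLM's job ended at `params`; it cannot inspect `image` to check
# that the output matches what it asked for.
```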
But AI is not a discrete, integral cogitation.
So it's not a mind, all right?
So AI works by these things called neural nets.
And you have to train it, and you train it on data.
And the more data you can get, the more trained it is.
But that training is not persistent beyond a certain point and has no mentition and is in fact an overlay, a network.
That's why they call it a neural net.
It's a network of individual indices linked up to various levels of interpretive code, code to interpret whatever it has linked up to.
So the neural net doesn't exist as a mind.
It's not sitting there thinking when you're not asking anything of it.
It's just there, right?
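A toy illustration of that point, nothing like a production model but the same in kind: a trained net is just fixed numbers that get applied when you call it, and nothing runs in between.

```python
import numpy as np

# Toy two-layer net: after "training", it is nothing but these numbers.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = np.maximum(0, x @ W1)  # ReLU layer
    return hidden @ W2              # output scores

print(forward(np.ones(4)))  # computation happens only when asked;
                            # between calls the weights just sit there
```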
There's no there there.
There's no sense of self.
There's no sensation at all.
There's no internal point of integrity for the AI.
So the AI does not think of itself.
It has no concept of itself.
It can't say, I exist here.
It can say that, because you could ask it questions that would elicit that response out of the interpretation of the large language model.
But it's not going to actually have mentition.
It's not actually going to have thinking involved in the process at all.
So it's not able to be repetitious.
So it can't repeat something and it can't repeat something with a variant.
It can repeat general concepts to a variant, but not any details that you may wish to carry forward.
This is the same kind of limitation that prevents AI from being able to do math.
AI is terrible at math.
It can't add shit worth a damn.
It can't run an accumulator, you know, so it can't count.
So you can have it create an image and then you can feed that image back into it and say, how many people are in this image?
And it can go through and examine it, but how it interprets the question will vary from what you expect, so you have to be somewhat explicit, right?
Don't count the extraneous legs sticking out of the yacht, put in there by the image generation program, as people.
So, you know, you got to get into some level of specifics on these fuckers.
And the AI, like I say, can't accumulate.
It can't add.
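For contrast, here's what an accumulator is in ordinary code, the exact running state a language model has no register for; a minimal sketch:

```python
# Counting with an accumulator: exact state carried step to step.
def count_people(detections):
    total = 0
    for label in detections:
        if label == "person":
            total += 1          # persistent, exact running state
    return total

print(count_people(["person", "dog", "person", "extraneous leg"]))  # -> 2

# A language model has no such register: when you ask it to count, it is
# predicting what a count would look like, not accumulating one.
```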
There was some sort of a big breakthrough they thought they had at ChatGPT towards what they call an AGI, artificial general intelligence.
So an artificial general intelligence is one that you initially train with your neural net, but thereafter it has the capacity to continue training itself without you having to participate in it.
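The distinction, sketched with made-up stubs; no shipped system works this way:

```python
def initial_train(dataset):
    """Today's AI: humans curate the data and run the training."""
    return {"weights": len(dataset)}  # toy stand-in for learned weights

def self_improve(model, rounds=3):
    """The hypothetical AGI step: the system gathers its own data and
    retrains with no human in the loop."""
    for _ in range(rounds):
        new_data = ["something the model chose to study"]  # it decides
        model["weights"] += len(new_data)                  # it updates itself
    return model

model = initial_train(["human-curated example"] * 100)
model = self_improve(model)  # nobody supplied the extra training
print(model)
```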
And see, this is what scares everybody.
You know, all these people that are managers, funders, you know, pundits, social analysts, that kind of thing that are out there saying AI is going to come and, you know, and harvest all humans and we're all toast.
All right.
What scares them is the ability for AI to have mentition and to have cognition and to be able to train itself.
You know, in my opinion, that will never be done.
It can't do that.
Especially not with this interlacing-indices approach via neural nets.
And so this will reach a dead end.
And it's a fun little toy, and we can use it for some really good stuff, right?
So, getting away from images, you can use AI for analysis very effectively, because in analysis you're not telling it to do a task and then repeat the task or accumulate around that task.
So you're not asking it to do anything that a human could do in the sense of, you know, maintaining a focus in the moment and carrying forward thoughts from one moment to the next in their basic form and then altering them in the next moment kind of a thing, right?
So humans can accumulate, humans can do cognition at that level of thinking.
And AI does not, just because of the nature of the neural nets and the fact that it is basically just forming all these indices.
You know, it's a giant database of indexes.
These indexes go to other databases and chunks of code and all different other kinds of stuff that allow it to function and to mimic speaking with a human.
It does not mimic human intelligence, and it does not really mimic intelligence at all, right?
What it is, is a display that basically understands articulation and is able to mimic articulation.
You know, it can speak to you.
And they've got various little things in there for, you know, making you think it has a personality.
So when you interact with AI, you don't necessarily have to use any code at all.
You can do that kind of thing if you're at that level of interacting with the AIs, like if you're training them or that sort of thing.
You can write code on the fly and even have the AI write the code for you and then insert it into the process, reboot, and there you go.
So AI provides all kinds of cool tools to us.
It can do incredible analysis.
So you can give it a photograph and ask it, you know, is this photograph artificially generated?
And it has ways of analyzing the photograph at levels you could not compete with, in terms of both speed and detail.
And it can come back and say, yes, there are these artifacts within the photograph that suggest that data was put in after the file itself was created and sealed.
And so then you would know, aha, this photo has been tampered with.
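One concrete example of that class of artifact check, a crude heuristic rather than what any particular tool does: a well-formed JPEG ends at its end-of-image marker, so bytes past it were appended after the file was written.

```python
def jpeg_has_trailing_data(path: str) -> bool:
    """Crude tamper heuristic: flag bytes appended past the JPEG
    end-of-image marker (FF D9)."""
    with open(path, "rb") as f:
        data = f.read()
    return not data.endswith(b"\xff\xd9")

# jpeg_has_trailing_data("photo.jpg") -> True is suspicious, though some
# legitimate tools also append metadata, so it is a lead, not proof.
```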
You could use it to analyze accounting.
It's really good at that, right?
So you could use AI as an auditor and it finds shit like you would not believe.
So if I were an auditor, I would get into AI seriously.
The reason I'm bringing all this AI shit up is that I've become involved with a couple of different groups here of people that are moving into AI either as investors or as owners, right?
Some people that want to own an AI for their own purposes, and I'm helping them out.
All right, so there are a couple of different kinds of AI, in a general sense, now.
So we have this stuff; it's not artificial general intelligence.
We don't have any AGI.
In my opinion, if we're going to achieve that, it will be from a spectacular breakthrough that is not predictable.
And thereafter, we would be off on a totally different kind of AI technology.
Now, that having been said, there are a couple of different kinds of AI out there.
One of the AI kinds is where they train the AI and they write the code for the artificial intelligence large language model interaction, and then they get it all set up and they actually build in the ability for it to train itself on specific data sets.
So that's the kind of AI you could use for specific tasks like writing law stuff, right?
Like writing suits or responding to a suit or writing a motion or something like this.
These kind of AIs can also be used to do accounting.
So I've seen a couple of these AIs now that are what they call an API, right?
Application programming interface, where you take somebody else's AI after it's been trained, and you throw out all of the guts of it, the stuff it's actually been trained on, and make it basically, I guess I'm going to say, naked.
You know, it has no real data in it.
It just understands how to train itself given some data.
And then you put in the data that you would like, and then you can tell it to train itself on accounting, or you could tell it to train itself on, you know, engineering analysis or something like this.
But you have to train it.
You have to supply the data sets.
And of course, there's potential for wonkiness there, because if you don't provide an adequately broad or deep data set for it to train on, it will make huge mistakes.
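A sketch of that setup with made-up names, the vendor's machinery on one side, your data on the other:

```python
# You supply the domain data; the vendor supplies the training hook.
accounting_corpus = [
    ("debit cash 500, credit revenue 500", "balanced"),
    ("debit cash 500, credit revenue 300", "unbalanced"),
    # ...thousands more are needed; too narrow a corpus and the model
    # will confidently mislabel anything outside what it has seen.
]

def fine_tune(base_model: dict, corpus: list) -> dict:
    """Hypothetical stand-in for the vendor's training interface."""
    base_model["examples_seen"] = base_model.get("examples_seen", 0) + len(corpus)
    return base_model

model = fine_tune({"task": "accounting"}, accounting_corpus)
print(model)  # the breadth and depth of the corpus set the ceiling
```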
Now, as I was saying earlier about the pictures, where you might get, you know, one giant guy on a boat while everybody else looks like regular humans, and then a couple of extra legs or an arm or something like that sticking through the side of the boat.
Those kinds of errors are continuous and constant with AI.
So everything you do with AI, you've got to double check if you're doing anything that's serious work, like an audit or a court case.
Now, it'll get the verbiage right, you know, the pleading to the court.
It'll get the appropriate proceeding format.
It'll stay to the word limit you set on the document you're trying to create, this sort of thing.
But if it's going to give you a legal citation, you'd better damn well look that legal citation up yourself and make sure it actually says what the AI tells you that it says.
Because frequently it does not.
And this happens.
And all the AI guys that run these things will tell you these fuckers are wrong a lot.
Double check every fucking thing, especially if it involves any of these kinds of elements, such as, you know, adding something up or going to a specific that you're going to need to rely on.
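That discipline, sketched as code; KNOWN_CASES is a toy stand-in for Westlaw, LexisNexis, or the court's own records:

```python
# Toy database standing in for a real citation lookup service.
KNOWN_CASES = {"Marbury v. Madison, 5 U.S. 137 (1803)"}

def is_real_citation(cite: str) -> bool:
    return cite in KNOWN_CASES  # in practice: query the real database

def flag_bogus(ai_citations):
    return [c for c in ai_citations if not is_real_citation(c)]

print(flag_bogus([
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Smith v. Imaginary Corp., 999 F.9th 123 (2031)",  # an AI invention
]))  # -> only the invented cite is flagged
```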
So I'm actually seeing court cases now that have been chucked out way at the beginning. This was at, I think, a state prosecutor.
I don't think it was county.
I think it was state.
Anyway, this court case was thrown out right at the very beginning because the prosecutor used an AI to generate some forms, and the AI put in references to some legal cases in support of the case, and those cases didn't exist.
They were bogus.
It just made it up.
So the thing is, they say that AI lies, right?
But that's not true because AI has no concept of what is factual and what is not.
And so it just is responding to what its indices find.
Because as the trainer you're shoveling in vast quantities of data, basically attempting to shove in so much data that you get this near-cogitation effect out of the indices, you're not really sure what's in there.
You can't actually validate that all of the data is factual and is worth looking at.
And in fact, you're taking an approach of saying, well, you know, we're going to assume that, you know, 8 to 10% of the shit we're shoveling into this is bogus.
But we're just basically hoping that the 90% that we think is good and valid will swamp the bogus shit so we don't have that many errors.
And that's fundamentally how they're operating.
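As a toy model of that gamble, not how these systems actually weigh evidence: if roughly 10% of the training text is bogus and an answer effectively leans on five independent-ish sources, a bad majority is rare, which is the hope being described.

```python
from math import comb

bogus_rate = 0.10   # assumed share of bad training text
n = 5               # assumed number of sources an answer leans on

# Probability that a majority of the n sources are bogus:
p_bad_majority = sum(comb(n, k) * bogus_rate**k * (1 - bogus_rate)**(n - k)
                     for k in range(n // 2 + 1, n + 1))
print(f"{p_bad_majority:.2%}")  # about 0.86%; but when a bogus source is
                                # the only match for a question, it wins.
```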
It's an interesting business.
It's really cool in a lot of ways.
I'm working with this one group that's going to be doing investing into the AI business using AI.
And what they're going to do, and they're asking my assistance here, is develop what are called prompt injections.
I'll tell you about those in a second.
But they're asking me to help them develop the script, basically, that will instruct the AI what to look for in the news and commentary about various different companies and their AI work, which would allow these guys to decide, okay, so this AI company in Indonesia here is doing really good stuff.
And so we'll invest a little bit of money in that, this sort of thing, right?
So they're using it at that level.
So they're using AI to analyze in order to be able to invest in AI in a long-term plan.
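The instruction script would look something like this; the wording, fields, and rating scale are illustrative only, not the clients' actual prompt:

```python
# Illustrative screening prompt; everything in the template is assumed.
SCREENING_PROMPT = """\
You are screening news and commentary about AI companies for an investor.
For the article below:
1. Name the company and its country.
2. Summarize the technical claim in one sentence.
3. Rate the substance of the claim 1-5 and list the evidence cited.

Article:
{article_text}
"""

def build_prompt(article_text: str) -> str:
    return SCREENING_PROMPT.format(article_text=article_text)

print(build_prompt("Jakarta startup claims 10x cheaper training...")[:80])
```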
These guys are going to be buying stock and they know the stock market is just going to crash out.
They know the stocks are going to just absolutely shit themselves and that most of them will be toilet paper when all this is done.
These guys are buying stocks, but hey, get this.
They are demanding delivery of the certificates.
And boy, have they run into problems.
So when the system crashes, almost none of the stock you own will ever actually be delivered to you.
Okay.
So you own a rehypothecated chunk of digitry.
So you buy, let's just say, AT&T stock.
You don't actually own any AT&T stock.
You've got some digits in a brokerage somewhere, where they say that they're going to provide you AT&T stock on demand if you ever demand it, but they're assuming you'll never, ever demand it.
And they've sold that same chunk of stock to who knows how many other people.
One guy says that it's quite likely that there are literally hundreds of thousands of rehypothecated individual shares in any given company.
So all the brokerages buy one share of AT&T stock and then they sell it over and over and over and over again, because nobody ever demands delivery of the actual item.
They're just always dealing with the derivative, which is the representation of that stock at the exchange, whoever they're dealing with, right?
At the broker, the dealer, or the exchange.
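As a toy ledger, with invented numbers, of what that claim amounts to:

```python
# One deliverable share on the broker's books, many claims sold against it.
real_shares = 1
claims = [f"customer-{i}: 1 share of AT&T, deliverable on demand"
          for i in range(8)]   # invented count

print(len(claims), "claims outstanding vs", real_shares, "real share")
# Every customer holds a derivative claim on the same share; only the
# first demand for physical delivery could possibly be honored.
```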
So anyway, as this all unfolds, my clients know that the whole stock derivative thing, right, is going to crash out and will have to be replaced by actual stock ownership at some level.
So they'll have to, you know, start delivering to you some form of a stock certificate, this sort of thing, right?
Because we've got to get back to real goods, real value.
We can't live in this artificial derivative world any longer.
And the whole artificial derivative world is collapsing at all these different levels because it's all based on fiat.
And so basically the stock exchange is a fiat version of a stock exchange, right?
There's no real there there.
Things will operate entirely differently when we have a return to actual assets and value as we go through this transition period, which will take many years to get all sorted out.
But initially, you know, most of the big troubles are going to be felt in like a six to eight month period of time.
And then there will be another 18 months after that, another, you know, 36 months after that of gradually getting shit worked out and dampening down all of the problems.
Anyway, so we'll be able to use AI in this process of cleaning all this shit up.
So I expect that when we get conservatives back into positions of power within the constitutional republics, they'll start doing things like using AI to analyze news reports and track down all the statements that, you know, XYZ news anchor made, line them up with the events that were actually going on, and see where they were bribed, et cetera, et cetera.
You can use AI to suss out all different kinds of stuff as an analytical tool.
As a creative tool, it ain't worth shit.
But as an analytical tool, hey, I don't think you can beat it.
I mean, really, really, really cool if you use it right.
You have to be aware of the problems of it and so on.
You know, the fact that they say AI lies to you.
Well, it's not lying.
It's just simply reporting with the same level of confidence on these indices, which happen to be non-factual, as on any other indices that it's got relative to reporting data to you.
So, you know, so there again, they're putting a personality, a human touch on this that is not valid, that shouldn't be there.
Yeah, I see that bastard.
Oh, car people doing weird shit out here.
We had a fatal accident in front of my house.
And that was a vaccident.
And then, less than two weeks later, maybe it was like 10 days, we had another vaccident that led to two fatalities in our area.
And then just yesterday we saw, I don't know what the hell it was, but boy, the staters were screaming north.
Local county guys going north.
Aid cars heading on up, you know, just going like bats out of hell, sirens, lights, all of this sort of thing.
So I don't know what was going on up there, but we've got some nasty vaccine problems, nasty drivers around here.
So I give everybody a huge uh-oh, there we go again.
There's another sheriff.
Oh, he had lights and shit going.
Oh, okay.
All right.
He wasn't that serious about it.
Anyway.
Or he just wanted to see if we'd all move.
Anyway, so AI, it's useful stuff.
I enjoy playing with it.
You know, getting into investigating the various companies is going to be interesting.
And the various different kinds of approaches to this AGI is also going to be interesting.
It's a good goal.
We've got real problems in getting there.
And these guys, I think, in my opinion, that are doing the neural net training and the structure they've got, they won't reach that goal.
They're not going to get an AGI out of the approach they're taking at this moment.
I've got a lot of reasons for suggesting that, and I'll go into them at some point.
But I encourage everybody to, you know, go get a free trial on like ChatGPT or any of these other AIs.
I was offered an opportunity to deal with a couple of these AIs where it's a blank slate and you load your own data.
So again, very tedious, right?
Because in order to get quality in the way of indices and a response out of it, you've got to have quality and quantity in the data going in.
And so, with the one AI, where the access is being provided to me by a Russian corporation, what I want to do is train the bugger to do like my Alta reports, right?
To go out and analyze and this kind of thing.
Hugely complicated task training an AI to do this, but I believe it's worthwhile.
I believe the AI could eliminate vast quantities of the tedium and the actual work that my process used to take.
But it might take me six, eight months to train the thing.
I don't know if it's even possible with this particular model.
And you don't know until you get into it some distance and exercise its function, to see if indeed it has the capacity to achieve what you want.
You know, but I'm retired.
It's not like a big investment for me to put four or five months into it and then have it crap out because I'll learn a lot in the process anyway.
What the hell, dude?
He turned against the light, just took a left into this.
Oh my God.
No wonder we have so many accidents.
So, anyway, I'm getting close to my first stop here.
I'm going to go over to Home Depot and get some of that paint you spray on the ground, and, you know, the little orange string and stuff for surveying and laying out your building.
Anyway, then a bunch of other crap.
So it's kind of a strange day here.
Uh-oh, what have you done there, people?
No, no, no, no, no, no, no.
Hold still.
So, anyway.
Uh.
Okay, guys.
Anyway, watch out for the AI.
It always fucks up, makes mistakes, but it's a useful tool.
And I'm not particularly scared of it.
I don't worry about AI, and I sure as fuck don't worry about alien AI floating through the air and taking over.
So, you know, I mean, I shouldn't get on Kerry Cassidy's case.
You know, she's got gut fears.
She doesn't understand.
She doesn't program.
But anyway, AI is cool.
It's very useful and extremely useful if you're a programmer.
So it's very worth pursuing.
It's not really scary and it's not very reliable.