May 4, 2023 - Clif High
48:30
real AI

brittle, fragile, frequently wrong, ignorant, but fun https://purebulk.com/products/clif-highs-pure-sleep https://clifhigh.substack.com/p/cancer https://knowledgeofhealth.com/what-if-cancer-was-already-cured/


Hello humans.
May 4th here.
It's about 8:25 in the morning.
And there we go.
Transitions.
Whoa.
Hang on a second.
Let's adjust that a bit.
There we go.
Okay, so today we're going to talk about real AI.
Gonna throw a bunch of words at it, huge amounts of descriptions, but basically it's all about people's fear of AI.
So I use AI.
We're talking artificial intelligence, which I really think of as neural network expert system, which we'll get into in a minute.
It's not artificial and it's not really intelligent at all, right?
It's a happenstance of coding.
Anyway, so I have had occasion to ask a lot of local people, people I encounter in going out and doing stuff.
Yes, it's still cold, still have to wear the hat, not quite the thick one.
Ask them about their impression of AI.
These are like regular guys, you know, mechanics, you know, tree harvesters, you know, retail clerks, this sort of thing.
And there's a great deal of concern, and a lot of them are worried.
And then I explore those, explore with those people that are worried or fearful of it, why they're fearful, right?
And it basically comes down to two different impressions.
That's why we're going to talk about real AI and why it's so cool and what you can do with it, right?
Okay, so there's two different impressions here.
There's the number one fear that shows up with a lot of basically low information people, okay?
People that don't code, have never had an interest in coding, not necessarily any advanced mathematics, don't explore those subjects.
They may watch science documentaries on TV, that sort of thing, but it's not anything that really that they pursue, right?
And so the number one fear that shows up is the idea of being, let's just say, eaten, okay?
And we're using this as, you know, taken over by AI.
That's a piss poor marker.
Okay, so literally they're afraid, in a vague sense, that AI will somehow invade them, you know, like the graphene-oxide-in-the-blood idea from the vax, forming up into what they call neural networking, though they don't understand the terms.
Forming up into a network that can take you over and make you robotized, right?
I'm certain that there's shit in that vax that will fuck you up and destroy your mind and your heart and all of this other stuff, right?
But it's not trying to create and turn you into an AI.
That may be the WEF's goal, but they're as deluded as anybody else because these guys don't ever do any real work.
So Bill Gates does not code.
Okay, he never did code.
He bought DOS with his dad's money, MS-DOS, right?
Microsoft DOS.
He bought it from Seattle Computer Products, the disk operating system that later evolved into Windows.
So he didn't code.
He's not a fantastically smart person or anything.
He's just a good money man and a manipulator and a pedophile, likely.
Anyway, so they're actually afraid of the same kind of stuff that you hear on the real far fringe of the Wu business, right?
All these people that think that AI can float through the air, that AI is somehow disembodied and can run without a computer, and that alien AI could come here without the aliens and take us over, or that alien AI can live in a black goo.
That's actually quasi-feasible in a weird, bizarre way, but there's no evidence whatsoever for black goo.
There's only a single central source for it, and it was put there for them by a specific ring of, I don't want to call them disinfo, because I don't think they're trying to be disingenuous.
I don't think they're trying to lie to you.
But there's a group of people out there that will just spew shit out because they believe it, okay?
Because they feel that it's valid.
It doesn't have to have evidence or anything.
And then they put it back on you saying, you know, you've got to use your discrimination.
You know, basically they're saying, I believe this, I'm spewing this shit out, but I know it's probably false because I got it off of Real Raw News, which is, you know, a known disinfo site.
And so it's up to you to use your discrimination and not react to it.
So, you know, so I don't like that mindset, right?
But there's a lot of these people that think that AI could float through the air and take you over, or if you touched a machine with it, it would get you, that kind of thing, right?
Their fear and points of fear around artificial intelligence know no limits because they don't understand it as a technology.
And so they don't code.
They have no real grasp of it.
So as far as they're concerned, if they characterize it as a demon coming through the night to get them, you know, it's as valid as anything else.
So that's the number one fear.
Okay.
That's what I found most often given as an answer.
Are you afraid of AI?
That's how I asked it, right?
And then we would get into discussion.
Number two that actually shows up, and it is this.
So this one right here is a no-go, right?
That it can't take you over.
It's not examining you.
You're not its prey.
None of that shit, right?
Now, number two is being replaced.
All right.
Replaced by AI.
And that has some validity.
But even then, people's thinking about it is a little bit less than rigorous.
Okay.
So what they're actually talking about are people that think, my job's at risk.
You know, my livelihood is at risk.
So this I found most often in people that were in their late 30s to like maybe early 50s, okay, where you've got a working life ahead of you and you can see that AI has a potential to come in and maybe replace you.
All right.
And so some people, you know, you can reassure them fairly early that, you know, look, you know, we use AI constantly.
It's in your phone.
It's the autocomplete.
It's all little AI, right?
It's an assistant and it helps you do shit.
Right now, you're seeing more and more AI in all of our ancillary gear, our cars, you know, I have one of those cars that's driven with a little joystick thing and it's all software control.
So the more software you get into, the more you're dealing with AI.
Parking assistants, right?
The Tesla, all of the shit in the Tesla cars, the software that aids you in driving and this kind of stuff that lets people fall asleep and kill other people because no one's paying the fuck attention, right?
Because that's my real fear, by the way, is that it makes humans lazy.
But so these people are afraid of having their livelihoods replaced.
The number two fear.
They don't really think about the idea of being taken over or eaten by the AI.
But here's the thing.
There are indeed some jobs that can be replaced by AI.
But the tree harvesters out here and my tree guy that comes and deals with individual urban trees, right?
Not the wild, fierce trees out just on the other side of the road.
These guys are not really in danger of being replaced by AI, and their lives will be made better by AI.
So I can envision a time when his lift truck would scan the tree as he's going up and pinpoint for him, on a screen, the most efficient way to take down the limbs, creating the smallest pile at the bottom with the least possible spread and damage to other people, yada, yada, yada, right?
As an assistant.
It's not going to instruct him.
It's not going to order him.
It's not going to tell him how to do it.
It's just going to be there to provide him with this.
But there are things the AI won't be able to see, right?
So maybe the AI would say that you're better off taking down these top two branches here before taking down the others. But because of the nature of its limited sensors, its vision and so on, it can't see, and it had never been built into the software, that a branch could fall straight down onto something on the ground, something that was not a life form but could still be damaged by that branch falling there, right?
So there would be decisions to be made in which you would have to, as a human, you're going to have to overrule the AI.
You're going to have to say the AI doesn't have enough information.
Its resources in terms of sensors, perceptions, and cognition are very limited, which we'll get into in a second here.
And therefore, I'm going to decide, no, I don't want to take that branch down first.
What I'm going to do is to take off that whole top at a slant and have the whole thing slide down over this way instead of falling to the back where it could have injured that, you know, barrel of oil or fallen into a fish pond that the AI thinks is a solid surface, whatever the hell, right?
See?
So there'd be all different kinds of stuff that the AI is not cognizant of.
And so we have, AI is very stupid, okay?
AI is fragile.
It's brittle.
Its downtime is massive.
It's very rarely right on the first time in doing these things.
It can be massively wrong and very deceptive.
So let me give you a concrete example here.
So there's a guy, I think I came across this on Twitter, who said that he was just super excited, a lot of tweets about it, that he had used ChatGPT to discover and produce 25 hidden anti-gravity things.
25!
25 of them!
And they were hidden.
But ChatGPT could pull them on up for him.
Well, so here's the thing, guys.
No, he didn't discover any hidden forms of anti-gravity, okay?
He doesn't understand what ChatGPT does, and he didn't understand why he was being given so much shit by people on Twitter for coming up with these things.
The very first thing on his list was this idea that there's a particular kind of insect that has a particular kind of shape on the underside of its wing covers,
and it doesn't fly by flapping its wings so much as it creates an ultrasonic vibration, because of the shape of these things in its wings, that lifts the whole bug up off the ground, right?
And so he said, look how easy anti-gravity is.
And when you go and you kill a bunch of these bugs or find them or whatever, and he had the species name and all of this.
Now, see, I've seen this.
This is old information.
Anyway, but when you get a bunch of the bugs, you can tape all of the bug wing bits, wingshell bits, on the inside of a box that you're going to stand on here that's got a control rod, and you can stand on it and control it.
And using all of these bug wings on the underside of the box, you can go a thousand miles an hour or even faster.
And it creates a bubble around you so your oxygen doesn't get sucked out.
So it's really cool.
And that was just one of 25.
So we've seen this before.
It comes from the 1920s or even earlier.
Comes from a photograph that this guy faked.
He came up with this idea.
It was a grift like Charlie Ward does, you know, like Kerry Cassidy will promote, like the black goo.
It's a grift, okay?
It's a fake.
And this guy, using early photographic techniques, did a photograph where it appears that his little box dealie here is up off of the ground because of the shadow that's being cast by him and the device.
And so supposedly he's, you know, just ready to take off and he's like floating up three or four feet off of the concrete.
I think it was in Europe somewhere.
Maybe it was in Paris or one of the big cities or something, right?
But it was a grift, a well-known grift.
It wasn't anti-gravity, didn't exist, and so on.
But Chat did discover it.
And that's because of the way that ChatGPT works.
So I'm going to draw this.
All right, so in computer world, this right here used to, I mean, back in the old days when I was doing computers, this was a graphic for a large database, okay?
And each and every one of these was a large data array.
And so each and every one of these could be a complete database, could be a subset of your database, or so on.
And you would just draw these to indicate basically vast amounts of data in your process, function, diagrams, and all this other stuff.
So what we're going to do, though, is we're going to use this to illustrate what AI is in these large language models, okay?
This is like ChatGPT.
Then we'll also come back to this in other forms here in a minute.
But these large language models like ChatGPT, generative pre-trained transformer, work off of this concept called linkage.
You derive a linkage from cross-referenced indices.
You build these cross-referenced indices not by hand and not in a deterministic automated fashion, but by neural nets, okay?
So if we imagine that these are entire databases, and so this would be a huge stack on a server somewhere, right?
And so we'll say that this right here represents a server.
Maybe it's on the cloud.
It's got a vast quantity of database in there.
And this is, and of course, a database is what?
It's words, okay?
So it's words with descriptors as to when they were taken, when they were put into the database, when they were written down on it.
All different kinds of things about the word and the time of its acquisition by the database.
But just words at its core.
And so we find that there's bazillions of words in each of our databases here, right?
And so this structure here, we might have hundreds of these things, and these are all our little servers, right?
Each one has a database, and we might have hundreds of these, and these are all networked.
Now, bear in mind that each one of these databases is just vast quantities of words.
And so here's what you do.
Let's just say that we've got, let's just draw more of these for an easier build-out later.
And so all of these represent that.
And there's more.
There's a lot more.
Okay, it's a large language model.
But what's important in the large language model is not the individual words, but rather the words and their relationship to each other, okay?
So we have sentences that are usually composed of, you know, I won't do it that way.
Let me take the Russian approach, which is a better view of it.
Okay, so we have sentences that are composed of names of things, all right?
Names or descriptions of things in action or motion, what we call verbs of motion.
And then there's all the other adverbs and everything that modifies those, right?
That gives us nuance.
But basically, it's names of things and some action involving them.
Because even an emotion to a thing is an action, right?
You gin up the love for the puppy.
That kind of a deal, right?
A puppy isn't a thing, but it still has a name.
Anyway, so we have sentences here.
And in this sentence structure, it is not the individual words that we care about so much for our large language model.
It is the interrelationship, the space that connects the words and their method of being connected to each other.
Okay, so it becomes very, very complicated here because you're adding all these different layers to the individual words themselves.
You want to know what word is on either side of the word you're looking at.
You want to know how those words relate to the rest of the words in the sentence.
So you get this very complex model of the flow of the sentence, the interrelationship of the various ordinal components, what they are, and how each relates to the others.
And you catalog all of this stuff, and that's called natural language processing, NLP, okay?
And the goal is to use a machine that can only add ones and zeros to process natural language that you put in these databases, which are nothing more than giant collections of words, vast quantities of words.
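The linkage idea, words reduced to integer indices with the relationships between neighboring words catalogued, can be sketched in a toy way. This is only an illustration of the concept; real models use learned numeric embeddings, and every name here is made up:

```python
from collections import Counter, defaultdict

def build_vocab_and_links(sentences):
    """Toy sketch: give each word an integer index, then count which
    words sit next to which -- the linkage between words, not the
    words themselves, is what gets catalogued."""
    vocab = {}                    # word -> integer index
    links = defaultdict(Counter)  # index -> neighboring index -> count
    for sentence in sentences:
        ids = []
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
            ids.append(vocab[word])
        for a, b in zip(ids, ids[1:]):  # adjacent-word relationships
            links[a][b] += 1
            links[b][a] += 1
    return vocab, links

vocab, links = build_vocab_and_links([
    "the bug lifts off the ground",
    "the bug vibrates its wings",
])
```

Here "the" and "bug" end up linked twice because they sit next to each other in both sentences; the machine only ever sees those index-to-index counts, never the words as meanings.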
Okay, so once you've done this, once you've got all this collection, you have to interconnect all of these things to form the ultimate database that is chat.
Chat is not a mind, it doesn't think, has no cognition, can't invent, can't create.
That was another thing the guy with the anti-gravity stuff said: he and chat had found, like, X number of these ways to do anti-gravity, but chat had invented a couple, okay?
So it can't invent anything.
It doesn't think.
It's not creative.
Doesn't have hormones, doesn't have a body, it's code, it's software.
Okay, it's not even particularly interesting code and software, because here's what happens.
So the guys that make chat, they build little munchers, okay?
I can just refer to all of them as a monad.
All right.
And a monad in any system is the smallest level of unit you're going to deal with.
So if you're doing, you know, atomic chemistry, maybe the monad that you're going to be dealing with in atomic chemistry is the charge state of an electron.
That's the smallest little thing you're ever going to have to fuck with.
And from there, everything else builds.
Okay, so the monad that we've got here, shit.
It gets tricky in explaining this, okay?
But you build yourself a, okay, so the smallest level of interaction that we're ever going to deal with in all of our servers, with all of our bazillions of words and everything, our monad is going to be an index.
An individual index value for a particular word in all of this structure.
Okay.
So that index is what the machine actually reads.
And it's some hellaciously long number here, right?
That defines the relationship of this word to the whole system that is the chat GPT.
I'm giving you a high-level description that's not going to be valid at a detailed level, so let's not get on my case at the moment, okay?
Because we have to build this for people that don't understand what AI is.
Anyway, so you've got this big ass number that is your index for each and every one of these words, but this index for chat defines that word not just in where it is in the database, but it defines it in this kind of a context of natural language processing, and it provides a hook feature,
a hook, to hook that word to all of the other words here in all of these networked internetworked databases.
So the goal in artificial intelligence, what they're hoping to do is to create some kind of newness by dealing with the network more than they are with the individual words.
So chat provides you, as a service, a floating, altering-by-the-moment network of the interrelationship of all of these words.
And so getting back to our anti-gravity guy, right?
So our anti-gravity guy asked chat to find him hidden anti-gravity devices, or solutions for the gravity problem, however he phrased it.
And so what he did was he got the words hidden and anti-gravity.
Now, these have been processed by the natural language processing that is the front of chat that talks to you as though it's a human, that responds to you quasi-human-like, right?
But he got these two guys connected.
And then they went and it found indices for the two words individually.
And then it found indices for the relationship of those two words to each other.
Now, if it had been 30 words, there'd be a lot more relationships and so on, right?
So basically, what it's doing in its natural language processing at this stage is working with the question, with what is known as the prompt, where you say, you know, go find me hidden anti-gravity shit, okay?
When you write that in there, the natural language processing creates a network map, okay?
Network map of the question.
And so that is like a little tiny map or grid of all these interrelationships.
And they can be this way, they can be that way, and there'll be options, and it can look at it this way and drop down a level and look at it that way.
It's different, okay?
So this is a somewhat complex idea, this little mini network map around the neural linguistic processing or natural language processing of the prompt that goes into all of this.
But then, once it's got this network map, then what it does is it takes that and it runs on over here and it tries through all of these databases, looking at all of these words to find something that looks like that.
That's all it's doing.
So in that sense, it's just a big search engine, right?
But it's not searching for the words the way that you might with Google or Brave or any of these other search engines.
It's not looking for the words themselves, though even the search engines use this indexing approach to try and get speed.
What this is actually doing is looking for a match on the networking map that is developed out of the prompt and overlaid on all of these words.
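A toy version of that map-matching idea, with made-up link sets standing in for the real thing. Actual models compare learned vectors, not literal word pairs, so treat this purely as an illustration:

```python
def overlap_score(prompt_links, stored_links):
    """Toy 'network map' match: what fraction of the prompt's
    word-pair links also appear in a stored chunk of text?"""
    shared = prompt_links & stored_links
    return len(shared) / max(len(prompt_links), 1)

# Hypothetical link sets (pairs of connected words), not real model data.
prompt = {("hidden", "anti-gravity"), ("anti-gravity", "device")}
doc_a = {("hidden", "anti-gravity"), ("anti-gravity", "device"), ("bug", "wings")}
doc_b = {("quantum", "foam"), ("bug", "wings")}

# The "search" is just: which stored map overlaps the prompt's map best?
best = max([doc_a, doc_b], key=lambda d: overlap_score(prompt, d))
```

Note what's missing: nothing in this matching step asks whether `doc_a` is true, only whether its map looks like the prompt's map.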
And that's why this guy was able to put in anti-gravity and hidden and pull out 25 of these things.
Not because chat actually had these, but because they were sitting in one of these databases at some point. And see, here's the thing about AI.
AI at this level doesn't know whether it's valid or not.
So chat doesn't know what a bug is.
It just knows words.
And it just knows the network map.
And so it cannot think about a bug or bug wings lifting a man and that kind of thing.
It's just responding with words.
And so it gives this guy the number one cheapest way to get anti-gravity is to go find some of these bugs and glue them to the bottom of this wooden box and put these little controllers on top.
You know, it's 100% bogus.
It was written about an old grift, but it doesn't matter because chat doesn't know it's factual or not factual.
So chat is not smart.
Chat is really stupid.
Chat also breaks a lot.
And it doesn't know math.
It's not a great computer program in that way in terms of doing math because it won't necessarily reconcile units of measure.
So you could start off talking about a building in terms of thousands of square feet, but if you switch your units of measure over to, you know, millimeters, it'll do something like start multiplying the thousands of square feet by the number of millimeters and getting these huge numbers, because it doesn't do the translation on units of measure.
It's really stupid in that regard because it's looking for a network map overlay on words, not doing things in a logical brain function fashion.
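Here's the unit blunder in miniature: a correct square-feet-to-square-millimeters conversion next to the naive mixed-unit multiply being described. The function names are made up for illustration:

```python
MM_PER_FOOT = 304.8  # exact by definition: 1 ft = 304.8 mm

def square_feet_to_square_mm(area_sq_ft):
    """Correct approach: convert the linear unit, then square it."""
    return area_sq_ft * MM_PER_FOOT ** 2

# The failure mode described above: treating "square feet" and
# "millimeters" as bare numbers and multiplying them directly,
# which yields a meaningless ft^2-times-mm quantity.
def naive_mixed_units(area_sq_ft, length_mm):
    return area_sq_ft * length_mm

one_sq_ft_in_mm2 = square_feet_to_square_mm(1)  # 92,903.04 mm^2
```

A system that tracks units would refuse the naive multiply; a system that only pattern-matches words happily returns the huge, meaningless number.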
The reason that this occurs is because the people that do chat, they know that their monad that they're going to be dealing with is this interrelationship network between all of these words.
So that's not stored on these databases.
So this network map that it's going to overlay and all this other networking on all top of all of these databases does not exist as its own separate level of static indices.
So you can come back and recreate conditions and find that same network map, but you may not find the same arrangement of internetworked words when you come back, at least in a real live AI, which adds data all the time.
Chat does not.
Chat was capped off at, like, September 2021 or something, right?
So anything after there, it has no knowledge of.
But you can come back and you can recreate this part of it by using exactly the same language and it does the same level of processing to produce the same network map.
But this may not exist as it looked the day before because maybe there's been new shit added in here, right?
So you might now get 27 different hidden anti-gravity devices.
And so it may change.
Or maybe some of those data sets could have been qualified and maybe now chat sees a qualification that says, oh, bug wings is bogus.
So it has enough interconnection of itself, in building these network maps, that sometimes they can apply discrimination features, right?
Features that would say: if you find that it's hidden, but you also find dozens of articles labeling the thing a grift, blah, blah, blah, forming an opinion about it as being bogus, then don't present it as a solution.
Obviously, ChatGPT does not have this because it did present bug wings glued to a cardboard box or a wooden box as being a solution to anti-gravity.
And of course, if you look through the things that were provided by it for anti-gravity, they're all bogus.
They're all these hidden because they were discounted and tried and thrown away kind of things, right?
So that's the state of AI.
It's fragile.
It breaks a lot.
These things change all the time, which means it's not deterministic.
It won't give you the same answer day after day after day, necessarily.
Okay, sometimes, yeah, that is the case, but not necessarily.
So there's this element of uncertainty.
There's an uncertainty factor in terms of the results.
That's why in AI, in ChatGPT, you can tell it to regenerate the answer, and regenerate again.
Okay, so what it does then is it takes the network model that it's looking for, the linkage model, right, the map, and it applies it at a deeper level of its indices here based on this number, which has its layering and so on.
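The regenerate-gives-different-answers behavior comes down to sampling. A toy sketch with made-up scores; real models sample the next token from a learned probability distribution, but the nondeterminism works the same way:

```python
import math
import random

def sample_next_word(scores, temperature=1.0, rng=random):
    """Toy next-word sampler: the model scores candidate words, then
    picks one at random, weighted by score.  Higher temperature
    flattens the distribution, so reruns can take different paths."""
    words = list(scores)
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical scores for the next word after "that claim is ..."
scores = {"bogus": 2.0, "hidden": 1.5, "novel": 0.5}
first = sample_next_word(scores, rng=random.Random(1))
second = sample_next_word(scores, rng=random.Random(2))
# With different random states, the "same prompt" can yield different words.
```

That weighted draw is the whole uncertainty factor: same prompt, same scores, yet the answer can change run to run unless the temperature is driven toward zero.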
And also something, ChatGPT cannot be told in any prompt form to ignore the first sets of answers.
It can't.
There's all kinds of stuff it can't be instructed to do.
Okay, so basically, you cannot instruct ChatGPT to alter its own behavior insofar as all of this.
Okay, you can instruct ChatGPT to respond to you, leaving out some words, and so you can teach ChatGPT to respond to you without being a woke bastard, to not use woke language, right?
You can do that, but it's still going to do the same thing and still find the same answers based on its processing because you can't do anything to instruct it to alter this internal functioning of it.
Make sense?
Okay, so that's really how AI works.
It works differently for graphics and so on, but fundamentally, AI is simply a really fast indexing tool that has some level of rules, some heuristics about how it processes through all the vast quantities of words that we're collecting.
Now, AI right at the moment, so I've seen people on shows online say that, you know, that they believe AI is being instructed and trained and is going to crack all of our brains open, this kind of thing.
They're afraid of it assaulting humans, not replacing us, not making us lazy or anything like that, not, by its existence, inducing a lazy attitude because, oh, I don't have to worry about safety because the AI will take care of it.
It's not going to be that way.
AI may provide you relief such that you only have to worry about the 10% of things that are very unlikely, because it's got the other 90%, the most likely things, covered in terms of warning you, right?
But AI is not ancient.
There's no space alien AI.
AI does not learn.
Here's the thing.
When that guy was doing his query on the anti-gravity stuff, ChatGPT had that isolated.
It was isolated to his machine.
It's having to do things with all of this here and put maps on it.
And those people at chat can watch what you do if they ever had an interest, but they're watching it at this level, not at what you're looking for.
They're watching it how it overlays and fits for an efficiency and accuracy and so on, but they're not really looking at any of the details.
I don't even know if it's possible for them to look at the details because this is what's transmitted back to ChatGPT from your interaction with your PC.
Maybe they can decode it, but I don't think they have any real interest in that, right?
And chat itself does not learn from your interactions with it here.
It does learn if you tell it to regenerate because you didn't like that first answer or the 34th answer, however many times you do that, every time it does that, it'll ask you, is this answer better or worse?
And then it just keeps track of how deep it goes in these databases of networks for finding that particular answer.
It does not understand and doesn't learn from that process of what you're doing.
So you're not adding any information to it, because it's locked at this model level from, as I say, around September or October of 2021, right?
And so it's locked in that.
So you're not adding any new information.
So no, it's not learning from what you're doing with it.
They, the company, are learning how to deal with their product better, but they're not, and maybe that will improve the product, but your interaction with that AI in no way enhances that AI's ability to come and torture you, right?
It just does not work that way.
Okay, so there's the two fears.
The AI being, you know, space alien AI that's going to come and get you, or it's going to, you know, replace your job.
Okay, so if we look at jobs, that's a legit fear.
So right now, there is, I can't think of the name of it, but it's an AI that's tuned for law.
And so, yeah, if you're a legal clerk, damn right, you better be worried because this AI can put you out of business just really quick.
It's a beautiful product, the tuned law AI.
I'm going to get hold of it and start working it myself because it is just a beautiful assistant for getting suits against the government or whoever really wants to annoy you, right?
So you can really become productive and churning.
It would have aided me tremendously to have that law AI at the time I was whipping Corey Goode's attorney's ass, you know, beat her ass mercilessly.
But it would have been a lot easier if I'd had that.
I wouldn't have had to do so much of my own research.
Anyway, though, so law, that's badly written there, but insurance, finance, anywhere your job consists of handling paper or going through words and validating words and forms.
So government, to a certain extent, probably all the medical techs that do nothing but deal with patient information, those kind of people, their jobs are at risk.
But if you handle physical material in the real world, you're going to have an AI assistant, not an AI boss.
It'd be stupid.
Some stupid companies will set it up as an AI boss and make you toe the line and do what the fucking AI says, but it really shouldn't be that way.
Over time, that'll evolve and change, and we won't do it that way.
But if you're just handling words, yes, your job is at risk.
But then there's going to be tons of new jobs that will emerge as a result of the AI itself, even in handling the words.
There'll be people that will have to be AI adjudicators.
They will have to be intermediaries between sort of like an IT person, but they won't deal with the code so much as dealing with the interrelationship of the words and the networking.
There'll be diagnosticians at that level, keeping the AI sane, that sort of thing, right?
You'll still have IT jobs.
You'll still need people to do coding.
The AI will be your assistant.
It'll spit out all the boilerplate.
You'll still have to do the discovery of the design pattern and plug it in.
It will eliminate the grunt coding jobs.
It does a beautiful job on spitting out some very decent code from pseudocode.
So I like it as a design tool.
AI will be great for things like boat design, car design, all of this sort of thing.
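A trivial instance of the pseudocode-to-boilerplate workflow just described. Both the pseudocode and the resulting function are invented for illustration:

```python
# Pseudocode handed to the assistant:
#   for each order: if total over limit, flag it; return flagged orders
def flag_large_orders(orders, limit):
    """Boilerplate an assistant could spit out from the pseudocode above."""
    return [order for order in orders if order["total"] > limit]

flagged = flag_large_orders(
    [{"id": 1, "total": 50}, {"id": 2, "total": 500}],
    limit=100,
)
```

The human still supplies the design, the pseudocode; the assistant just grinds out the grunt code that implements it.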
Bear in mind now that probably everybody, you're interacting every day with AI right now.
Not only on your phone, but as you drive.
Okay, there's a sensor.
As you come to a stoplight, the sensor detects you, and it has to make some decisions.
It goes to a central processing unit that says, are there any other cars waiting?
If not, okay, it's more efficient for us to just let you go and worry about any other cars that show up as they show up, that kind of thing, right?
So you're dealing with, those are all artificial intelligence systems.
Expert systems, we used to call them back in the day, which is a better way to think about them.
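An expert system in miniature: hand-coded rules like the stoplight logic just described. The rule set here is invented, but it shows the point that there's no learning anywhere, just conditions a person wrote down:

```python
def traffic_light_decision(car_waiting, cross_traffic_count):
    """Toy expert system for one stoplight: every rule below was
    hand-written by a human; nothing here is learned or creative."""
    if not car_waiting:
        return "stay red"
    if cross_traffic_count == 0:
        return "go green now"   # nobody else waiting; serve immediately
    return "queue and cycle"    # fall back to normal light cycling

decision = traffic_light_decision(car_waiting=True, cross_traffic_count=0)
```

That's the whole "intelligence": a lookup through rules, which is why expert system is the more honest name for this class of software.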
So you're going to get an AI as an assistant that will handle the grunt work for you, right?
So if you're out in the field, the AI is going to aid you with doing the paperwork.
So if you're a UPS driver right now, you do less paperwork.
Yes, you're dealing with the machine and doing stuff, but there's AI behind that that's setting it all up for you and linking it, helping plan the routes and this kind of stuff, right?
So we interact with AI all the time.
It's going to get better over time, but there will always be stuff that humans do better, stuff that AI cannot do, whether it's the large language model or any of the other models.
By the way, there's a new critter coming out called Modular, which is an interesting approach to AI.
I'm going to explore that further as well with the tuned law version and a few of the others.
It'll just be fun to play with these.
But there will be jobs, as I was saying, as diagnosticians, as wranglers, all of that kind of stuff, because AI does not do creativity.
It can't invent anything.
It can't do anything new.
Humans could use it as I'm going to with modular.
So I'm going to use modular to investigate anti-gravity in a sense, even though I don't believe that gravity exists.
I'm going to use modular and the AI to create a tuned model for investigating real physics, not this Einsteinian quantum CRAPola, but real physics.
And maybe in that, I'll be able to achieve levitation and so on, even though I suspect that gravity does not exist.
It certainly is not as it has been sold to us, right?
But AI is not going to be creative.
All it's going to do is to act as like shop assistants in a mental sense, keeping the notes and helping formulate all of the formulas, etc., etc.
But I'm going to have to supply the creativity, right?
So humans still have to do that kind of stuff.
And so this is yet another technical evolution.
Now, this one offers us all different kinds of possibilities that didn't exist in previous technical evolutions or revolutions, right?
So the chip revolution, you know, so space aliens were shot down.
All right, so let me back up.
Way back, space alien AI.
It exists, okay?
There's no fucking question at all about it.
That any space alien that arrives here on Earth from any fucking place got here with an AI assistant.
We have reason to suspect this was true in 1947 with the crashes that were supposedly retrieved, and also with the supposed archaeological retrievals, you know, buried ships that were dug up, mentioned by, oh, whatever, I can't remember who mentioned it first, but I want to say it was around 1923.
There was a small book written about a, well, I think it included the description of the 1880s Midwest crash.
But in any event, it talked about archaeological, you know, buried spaceships that we'd discover.
But anyway, though, so any aliens that get here are going to have spaceships.
All the mundane shit's going to be run by AI.
You know, why should you bother yourself, bother your mind with that level of activity if you can get it automated?
I love computers.
I love automating the crap out of stuff so that I don't have to mess with it.
It frees up my mind and attention to go off and do other things.
And so you can create, right?
So I made this sculpture.
It's actually a raccoon.
It's like 30-plus years old, probably very fragile, but if I were to take a torch to it, I could bring out the blue here where it's dark; the whole face is copper colored, but it's all oxidized.
More than copper.
I mean, it's multicolored.
It really was brilliant.
I don't have any digital photos of it, though, because at the time I made it, digital photos didn't exist.
Cameras were extremely expensive.
And I didn't even think to do that.
Anyway, though, so humans will be doing the creativity.
We don't even have a clue as to what we're going to create in the way of new jobs with AI.
So anyway, I'll let it go at that.
This is going long enough.
But this is the reason to not fear it, okay?
AI is just going to be an assistant.
You're dealing with it constantly now.
It's just going to evolve.
It's not going to come and eat your NADs.
It might eat your job if you're in one of those jobs where they're not paying you to be creative, but rather to deal with the mundane aspects of it.
But it won't be eating that job anytime soon, like next year or the year after, very likely, because we're going through these huge economic upheavals.
They're going to totally reorganize our social order.
In that reorganization, we're going to need the AI to help us out in getting a lot of this shit done because a lot of people have been disabled by the vax.
We've got all of these, you know, numb-nut mother WEFers at the top trying to kill us all off for their own space alien agenda, yada, yada, yada, yada, yada, right?
All of that shit will intrude.
So maybe over the course of the next 10 years, we'll eliminate some of the jobs in processing mundane text and this kind of thing.
But unlike all of the previous displacements from technological innovation in the past century, many of which I lived through, this time we're going to be moving ourselves into sound money.
Sound money that doesn't have Federal Reserve Bank hidden taxation in the form of inflation, et cetera, et cetera.
Sound money will totally change our social order in and of itself.
And in that process, it will give us less potential for widespread social disruption from technical innovation, because of the nature of what you have to do when you've got to fund this shit with real money.
So in the past, we would have a technical revolution, have some new idea come up.
We'd create tons of jobs, right?
So you'd have somebody come up with an idea.
That idea would spawn itself into lots of jobs.
But that idea was funded by the dollar.
That dollar comes from the Fed.
The Fed has absolutely nothing at risk in creating those dollars, so it funded all kinds of wild-ass ideas.
You get all these people up here with the jobs, and then at some point reality would hit and the whole thing would crash because the idea was really fucking stupid and never should have been funded to begin with.
And in a sound money environment, it won't be the Fed, it'll be you and me, and that guy with the idea is going to have to ask us for our silver and our gold.
We'll say, wait a second, dude, that can't happen.
Or whatever it is, you know, your idea is full of shit.
Yeah, you can get up to this point, but it ain't going to be productive in the long haul.
And no, I'm not giving you any silver for that.
So that's what we're coming into, right?
Sound money makes you think differently about stuff.
And that's going to change the social order.
And it will boost the AI because it's a cool productivity tool.
And in my opinion, nothing to worry about, nothing to be fearful of.
But like I say, okay, I do think that any space alien that's coming here is arriving here because they've got AI assistants.
So we supposedly get Velcro from the 1947 crash.
We supposedly get printed circuits, fine wire networking, multi-phasic, multi-layered nano-created amalgams, all different kinds of stuff from this particular crash.
And I'm interested to see if any of the fuckers ever looked for software, ever looked for the AI in there, because we don't hear about that coming out.
But in any event, though, I suspect that was the case.
Okay, so I'll cut it off here.
Don't worry about the AI.
We've got a ton of other shit to worry about, including the nutty humans, right?
The AI ain't going to kill you, but a nutty human running around may very well.
So be on your guard, guys.
Take care.