May 10, 2023 - Freedomain Radio - Stefan Molyneux
01:36:49
5175 The Truth About Artificial Intelligence Part 1!

Philosopher - and former expert coder and Chief Technical Officer - Stefan Molyneux takes you on a wild philosophical journey through the explosive world of AI!
https://www.freedomain.com
Sources: https://freedomain.locals.com/post/3980460/sources-the-truth-about-artificial-intelligence-part-1


Hi everybody, this is Stefan Molyneux from Freedomain.
This is The Truth About Artificial Intelligence.
Thank you for joining me today for part one of a two-part presentation on the truth about AI.
As we delve into this fascinating and rapidly evolving world of artificial intelligence, we will explore several aspects that shape our understanding of this technology and its implications on society.
So during this part of the presentation, we're going to first take a look at man's history of assigning agency to objects, and the inception of artificial intelligence, its development over time, and the significant milestones that have shaped its current state.
In How AI Works, we're going to demystify AI by getting into the underlying principles and technologies.
We will discuss various elements of AI and give an overview of the AI architecture that most mimics the biological brain.
Of course, AI's rapid advancements have opened up numerous opportunities, including ethical, societal, and economic advancements.
We'll explore
the kinds of opportunities being presented and how they can be realized.
Finally, we will compare and contrast man and artificial intelligence, exploring the capabilities of each and examining questions such as whether AI can be considered alive, conscious, or even a person.
As we journey through the world of artificial intelligence together, I hope that this presentation will provide you a deeper understanding of AI's true potential and the ways we can use this technology guided by the principles of universally preferable behavior.
Thank you, and let's begin!
Now, did that sound like me?
Did it sound like the kind of thing that I, untamed, speckle-headed philosopher, would start with?
Well, that actually was an introduction almost totally written by
AI.
So, let's get into it now for real. Given that, did we just have a kind of Turing test there?
Kind of, in a way.
Now, let me just do a little bit of introduction here.
So, I've been a philosopher, well, since my mid-teens, really.
I've been really interested in philosophy.
I have graduate degrees in history of philosophy and so on.
And I've been a public philosopher for 18 years now.
But, interestingly enough, I've also been a computer programmer.
I was chief technical officer and head of research and development in some very advanced computer programming algorithms.
I have worked on the fringes — I'm not going to say the core, because it's a long time ago — worked on the fringes of AI and neural networks.
So, having had decades of programming, research and development, and software expertise and experience, I think I bring hopefully a little bit of credibility, and certainly some significant experience, to this.
So yeah, we're going to talk about a brief history of AI, how it works,
the opportunities and, to me, the most juicy and philosophical side of it, which is man and AI.
So now, for real, for real, let's dig in.
Actually, that part was written by AI.
No, it wasn't.
I'm not going to mess with you too much.
Just a little.
All right.
So let's talk about AI.
What on earth is it?
Well, the Merriam-Webster Dictionary defines artificial intelligence as, and I quote, a branch of computer science dealing with the simulation of intelligent behavior in computers.
So everyone focuses on the intelligent side, but the artificial is really the key part.
Another thing you could say is artificial intelligence is a branch of computer science concerned with building machines capable of performing tasks that typically
require human intelligence.
So things like summarizing a video or searching through a bunch of court cases to try and find precedent, which would normally be done by either a research assistant or a legal assistant.
So it is a simulation of intelligent behavior and we'll sort of get into what that means.
But I want to talk a little bit about animism.
So animism is really the oldest form of sort of pagan religion or superstition.
Human beings have ascribed personhood to inanimate objects going back an estimated couple of hundred thousand years — around 300,000 years.
You see it in ancient myths, modern narratives, and so on.
And, I mean, there were people in some societies, if they had to move a rock for their farming requirements, they would apologize to the rock.
We, of course — if you're into Celtic mythology, there are dryads and nymphs, the spirits of pretty young women who live in trees, and so on.
And so there is this idea.
We talk of a fierce storm.
The storm is not fierce.
We talk about an angry volcano, and so on.
So that's anthropomorphizing a little bit.
But this idea that we take
Human qualities and ascribe them or put them into inanimate objects is really, really fundamental to understand.
It's really, really tempting when it comes to AI as a whole.
Because we've had automated machines going back forever, right?
"Automaton" in ancient Greek means acting of one's own will, and it was first used by Homer to describe a door opening by itself.
And I assume he was writing a very early draft of Star Trek.
So we're amazed at what machines can do, and we have often ascribed human characteristics to machines.
It's very tempting, especially when those machines are trying to really mimic what human beings do best, which is to think and reason and use language and communicate.
So we're going to
Talk about the differences between this very tempting animism and anthropomorphization and where it stops.
And, you know, Pascal had his calculating machine, Babbage designed a thinking machine in the 1830s, so there's been a lot of this stuff going on.
The big question fundamentally is...
A calculator, you strap a billion calculators together, do you get thought?
Do you get a human being?
I think we would all understand that no.
However, of course, no individual human cell or atom can think, but you put them all together in the three pound wetware of the human brain and you get a thinking human being.
So I'm fine with emergent properties.
This can certainly happen.
No individual atom of yours is alive, but together you're a carbon-based life form.
So, again, I'm fine with emergent properties.
Are we going to get there with AI?
And if so, how soon?
Fairly important.
All right, so let's talk about Alan Turing.
He was one of the fathers of computer science: he developed the Turing machine in 1936 and, in 1950, the Turing test, which you've probably heard of, which is:
Can you type into a window and distinguish whether a computer is typing back or a person is typing back?
I mean, many years ago when I was a kid, we first moved to Canada.
My mom and I went to the Eaton Center at Christmas and there was a big robot in the store.
And of course the robot had a human being inside of it, but my mother had a very big and deep philosophical conversation with the, quote, robot.
In fact, quite a crowd gathered around to see this conversation, and the guy in the robot suit was actually pretty smart, so they had a very engaging discussion.
Now, that's how I knew it was a human being in the robot suit.
I don't know if you've ever played one of these kinds of games, but in terms of, is it a human being, is it a computer, and can you ascribe your emotional investment that you'd have for a human being into a computer?
I once had an NPC companion in the game Morrowind,
And I lost him.
And I felt really bad.
I actually retraced my steps and found him stuck in a dungeon.
Unlocked the door so that he could get back out.
He just somehow got stuck in there.
I was just like, well, I can't just leave him there even though it's just a bunch of ones and zeros.
So it's very easy for us to do this.
It's very sort of understandable.
So the Turing machine was conceptualized by Alan Turing in 1936.
It is a theoretical computing device that manipulates symbols on a strip of tape according to a set of rules.
It serves as a fundamental model for understanding the nature of computation and has greatly influenced the development of theoretical computer science and modern computers.
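Just to make that concrete — and this is my own little toy illustration, not Turing's actual notation — here's a minimal Turing machine in Python that adds one to a binary number by walking a tape with a handful of rules:

```python
# A minimal Turing machine sketch: add 1 to a binary number.
# The rule table is illustrative, not from Turing's 1936 paper.
# rules: (state, symbol) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("right", "0"): ("0", +1, "right"),  # scan right over the digits
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),  # hit the blank: start carrying
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", 0, "halt"),    # 0 + carry -> 1, done
    ("carry", " "): ("1", 0, "halt"),    # carried off the left edge
}

def run(bits):
    tape = {i: b for i, b in enumerate(bits)}  # unbounded tape as a dict
    head, state = 0, "right"
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, " "))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, " ") for i in range(lo, hi + 1)).strip()

print(run("1011"))  # 11 in binary -> "1100", which is 12
```

That handful of rules is the whole machine: read a symbol, write a symbol, move the head — and in principle everything a computer does reduces to that.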
So, there was a program back in the mid-1960s called ELIZA, that was sort of a Rogerian therapist, which just kind of echoed you back, like, I'm really upset.
What are you really upset about?
I'm mad at my mother.
What are you mad at your mother about?
And would just give you these sort of neutral questions back, and was kind of indistinguishable from some of the NPC therapists that were floating around, and still are floating around, which is just this value-neutral echo chamber stuff.
Okay, so let's talk about Warren McCulloch and Walter Pitts.
They developed a model of the brain in the 1940s treating neurons as binary switches.
Their work influenced artificial neural networks and impacted neuroscience and AI.
In their 1943 publication, McCulloch and Pitts aimed to show that a network of artificial neurons could execute a Turing machine program, positioning the neuron as the brain's fundamental logical unit.
In their 1947 paper they proposed methods for constructing neural networks capable of recognizing visual inputs regardless of alterations in size or orientation.
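You can sketch one of their binary-switch neurons in a couple of lines — this is an illustrative toy of mine, not their 1943 notation — and with nothing but thresholds you already get logic gates:

```python
# A McCulloch-Pitts-style binary neuron: it "fires" (outputs 1)
# if and only if the sum of its 0/1 inputs reaches the threshold.
# Illustrative toy, not their 1943 notation.
def mp_neuron(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], threshold=2)  # both inputs must be on

def OR(a, b):
    return mp_neuron([a, b], threshold=1)  # one input is enough

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", AND(a, b), " OR:", OR(a, b))
```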
This amazing ability, I mean you see this when you have to sort of prove that you're not a robot by clicking on things that computers have a tough time with, this incredible ability we have to see fuzzy data
And just get clear things out of it.
Like, the number nine can be rendered like a colorblindness eye test, and we can still see it.
It can be upside down, we can still see it.
And this amazing ability that we have to take fuzzy data and transcribe it to actual things.
Of course, when you're out hunting, you need to tell the difference between a bush and a deer.
So this idea that we assemble things from incomplete information is foundational to our survival.
It's what our brains are really, really good at.
Computers completely suck at it.
They absolutely suck at ambiguous information.
I think Brad Pitt said he had an ailment in his brain where he couldn't recognize faces.
You and I, for the most part, we can recognize faces, pick our kid out of a crowd and so on.
Computers are terrible at this kind of stuff because computers are all like matrix code coming down.
It's all ones and zeros and binary, and you have to try and get the monochrome, black-and-white ones and zeros into the fuzzy human-brain analysis and extrapolation of incomplete information into complete conclusions.
I can't see every aspect of my child's face.
I can't see inside their nose.
I can't see it from behind.
I can't see the back of their mouth.
But I know who my child is.
That is something that is really a huge leap forward.
The process of going from incomplete information to valid conclusions is certainly a big part of AI.
And if you've ever had it, I just had this with a woman I was chatting with in an interview yesterday, she forgot the word anonymous.
Have you ever had this thing where you're just chugging along and words are just coming out?
You don't really think about it, it's just this conveyor belt of words that come out of your brain, hopefully of utility, and every now and then you just get this...
Sand in the Vaseline, this kind of seizure where your brain is like, no, I'm not going to give you that word.
Sorry, just not going to do it.
So this flow of words that comes out is really quite remarkable and what's happening in our brain that delivers us all these words is really an astounding thing.
This is really where AI is at the moment, is the assembling of words into coherent thoughts.
And of course the computer is not thinking, it's just a blind assembler
But, you know, you might put a jigsaw puzzle together because you want to see the picture and it's satisfying and so on.
A computer will arrange a jigsaw puzzle only because it's told.
It has no preference as to whether it does it successfully.
It has no negative feelings if it doesn't do it successfully.
There's no conscience.
There's no levels of frustration.
There's no self-blame.
Now, of course, you can program a computer to, quote, prefer completing the puzzle, but
That's only because you programmed it to, and if you programmed it to not care or not have a preference, it wouldn't have a preference.
So there's nothing that is generated spontaneously within a computer.
It all comes in from the outside.
So, they did their neural network modeling, and there was something called the Perceptron in 1957 — that's such a 1957 name, it sounds like something out of Plan 9 from Outer Space.
The Perceptron in 1957 was the first artificial neural network invented by psychologist Frank Rosenblatt.
It was designed to simulate the basic information processing capabilities of the human brain.
It was the first working example of a simulated neural network.
It could be trained to learn and get better at tasks
Such as distinguishing between images of shapes or cats and dogs.
It was the embryo, really, of the modern neural network.
In a 1958 press conference, Rosenblatt's statements about the perceptron sparked intense debate within the nascent AI community.
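The learning rule itself is tiny. Here's a minimal sketch — toy data of my own, obviously not Rosenblatt's actual hardware — of a perceptron learning the OR function by nudging its weights whenever it guesses wrong:

```python
# Minimal perceptron sketch: learn binary OR from toy data by
# nudging the weights whenever a prediction is wrong.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias, lr = 0.0, 0.0, 0.0, 0.1

for epoch in range(10):
    for (x1, x2), target in data:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction      # -1, 0, or +1
        w1 += lr * error * x1            # nudge toward the right answer
        w2 += lr * error * x2
        bias += lr * error

for (x1, x2), target in data:
    out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), "->", out, "(target", target, ")")
```

After a few passes the weights settle at values that classify all four cases correctly — that "nudge when wrong" loop is the whole trick.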
So, Moore's Law.
Okay.
Moore's Law.
Moore's Law.
Boy, that's a tongue twister.
The rural juror.
So, Moore's Law.
This is pretty wild.
This is a logarithmic chart.
So, 1971. Moore's Law is named after a co-founder of Intel, Gordon Moore.
He made this prediction in 1965, so the prediction is even older than I am — if you're younger, if you can imagine such a thing.
He said that the number of transistors on a microchip would double about every two years, while the cost of computing would decrease.
This prediction has largely held true, with the continued miniaturization of transistors leading to significant increases in computing power and decreases in cost over time.
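You can sanity-check that with back-of-envelope arithmetic — I'm assuming the textbook figure of roughly 2,300 transistors on the 1971 Intel 4004 as a starting point, and an exact doubling every two years:

```python
# Back-of-envelope Moore's Law check: start from the Intel 4004
# (roughly 2,300 transistors, 1971) and double every two years.
start_year, transistors = 1971, 2300

for year in (1981, 1991, 2001, 2011, 2021):
    doublings = (year - start_year) / 2
    estimate = transistors * 2 ** doublings
    print(year, f"~{estimate:,.0f} transistors")
# 2021 comes out around 77 billion -- the same ballpark as the real
# 58.2 billion peak on the chart, which is why the log chart looks straight.
```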
So, this is some wild stuff.
And I remember, of course, when I had my first computer.
Well, the first one I bought home was a 2K PET computer.
And then I think I had a 1MHz Atari 800, on which I first learned how to program in BASIC — Beginner's All-purpose Symbolic Instruction Code.
And it was a beginner's language, of course, and nobody over the age of 12 is supposed to use it.
But then you got Visual Basic in Windows, which I worked with quite a bit.
This is a logarithmic chart.
You can see it's pretty steady.
All throughout my childhood and teenage and 20s, it was like, oh no, Moore's Law!
We're gonna hit a limit!
They can't handle the heat!
We can't pack anything in more!
And it just keeps going on.
Everybody who tells you stuff is limited, blah blah blah.
So you can see here we go from 10,000 to 100,000 and then we go 1 billion to 10 billion.
So this is logarithmic.
I decided to put this chart out there as non-logarithmic as well, and as you can see, it's really something.
This is 2021 which in the fast-paced world might as well be last century.
At the peak we have 58.2 billion transistors per microprocessor.
So those are the ones and zeros of the basis of computers.
Alright, so there's a funny thing that happens in human thought, which is something that I have really been thinking about over the last couple years.
So think of the flu, right?
The flu has a sort of 25-year cycle, right?
So in general, in most areas, there's, you know, like half a dozen flus, people get vaccines or they have the flu, they get immune to it, and then flu kind of goes down, and then 25 years later, it kind of comes back up because all the people are out there in the world and they're going to college and so on, and the new kids, they don't have the immunity that's been built up and so on, right?
So you get this kind of cycle.
The same thing seems to happen in intellectual thought
as well, and sometimes, I know this for myself, if you break too much ground, you go too fast, too far, people just get baffled, confused, and often quite hostile and aggressive.
So you kind of have to sit a little bit, wait for the world to catch up.
This has happened over AI a bunch of times.
Oh, and sort of by the by, I'm sure you've heard this statement about science, that science progresses one funeral at a time.
That the people who were invested in the prior scientific conjecture or hypothesis have to kind of die off, because it's government funded and so on, for the most part.
They kind of have to die off in order for the new generation, the new ideas to come along.
Same thing with AI winter.
So much of the mid-70s, and then the late 80s into the early 90s, was spent in AI winter.
So what happens of course is you get this massive hype.
This is like the dot-com bubble.
This is like the railway mania, the tulip mania, and so on.
Developers say, we'll be able to travel through time, and regrow your hair, and reverse aging, and so on, and run a thousand tests on a single prick of blood.
And then reality catches up.
The promises aren't realized.
They can't be realized.
They can't be made manifest.
And then people just get disgusted and go into other
things.
So then in '97 — I still remember this — Deep Blue, an IBM computer, beat Garry Kasparov in chess, which was a huge thing.
He was, at the time, the world chess champion.
Yoshua Bengio, Geoffrey Hinton and Yann LeCun in 2018, they've all made significant contributions to the field of AI.
I know we're just kind of skipping forward here fast.
primarily in deep learning.
These guys together are known as the godfathers of AI.
In 2018 they shared the Turing Award for their work on deep learning.
So deep learning, backpropagation, algorithms and so on are all very important.
And let's get into our, oh I say, we have a new phrase, let's have our definitions.
So deep learning is a subset
of AI and machine learning, focusing on multi-layered artificial neural networks inspired by the human brain.
It enables machines to learn patterns and make decisions by training on large amounts of data.
Now, of course, one of the things that's really helped AI is the internet and books and blogs, everything available digitally, and all of the non-political aspects, say, of Wikipedia and so on, all incredibly helpful to feed into AI.
So, deep learning has powered breakthroughs in applications like image recognition, natural language processing, and self-driving cars.
So, I'm sure you know that when you talk to your phone and it transcribes it into text, there's a fair amount of AI stuff that is going on.
Because, certainly, I'm a big fan of Nuance's program Dragon NaturallySpeaking.
And you can train it, and you can improve it, and you can give it new words, and it really starts to mirror and map your brain, and it gets incredible accuracy, even with words that aren't usually used, and it is almost perfect.
So, Geoffrey Hinton pioneered backpropagation algorithms.
This revolutionized the training of neural networks and laid the foundation for modern deep learning.
Now, we will get to neural networks in a second.
They're just attempting to replicate the human brain.
So backpropagation is a supervised learning algorithm used to train neural networks.
It involves propagating the output of a neural network back through its layers to calculate the gradients of the error with respect to each weight.
Oh, I know, that's a bit of a word salad.
And we'll sort of get into that in a little bit more detail, but let's look at it.
Neural networks have input and output and then there's an x-factor in the middle.
I'm gonna go out on a significant limb here and tell you how I process this kind of information, this neural net stuff.
Okay, so I got my first car when I was in my 30s.
It was a '98 Volvo S70.
It was the sportiest thing and it was the closest to one of the matchbox cars I played with as a kid, so this apparently was a very big thing for me.
I loved having a big live version of a car I played with as a kid.
I had that car for 14 years.
And, you know, Volvos are fairly solid and so on.
But boy, you know, you don't want to trade in your car or junk your car when it's new.
And there's a point where you have to because you can't even move it.
But somewhere in the middle there's a tipping point, right?
So your input is, okay, so one of the door locks isn't working.
Okay, well, we can handle that.
I got a dent in the side of the car because I turned too quickly in a parking lot and hit a pillar.
Okay, I can kind of live with that.
Okay, the radio antenna is broken.
Okay, I can kind of live with that.
And, you know, just more and more things just kind of stopped working on the car.
But it still ran and all of that.
And then you get a couple of repairs at a couple hundred bucks each, and I think at the end it was going to be two grand for the transmission or something — it just didn't really work that well at all.
And so there's a tipping point and at some point you realize your car has kind of given up the ghost and you might as well just go get a new car or used car but not your car, right?
So your input is things are not working on your car, your output is going to be do I keep it or do I
Junk it.
And I remember driving that car in and I can't remember if I had to push it the last bit, but they were just like, uh, I think their 500 bucks they gave me at the dealership was mostly charity.
Uh, I think they liked my show here.
So, um, so the X factor, what is the X factor?
You can't exactly predict when someone is going to say, I need a new car.
You can't predict that exactly.
I mean, obviously, if it gets totaled or whatever it is.
But, you know, this fuzzy area in the middle.
If you are in a relationship with a girl, you like her a lot, you're not sure if you should marry her or not, there's an X factor.
It really can't be predicted from outside.
So, neural networks have an input and an output, and there needs to be a bunch of weights and factors in between.
And I did a lot of programming around this stuff when I was in the tech world, in terms of, if you have a budget, what should you spend it on?
Well, if you want to go have fun, you'll spend it on vacations and travel.
If you want to save, you won't spend it on those things.
You have to have a rainy day fund for unexpected expenses.
Your car breaks down or something goes wrong with your house.
So all of these sort of weights, what should you spend your money on?
There's a bunch of subjective weighting that you put in, like W-E-I-G-H-T-I-N-G.
So there's a bunch of
X-factor weighting that each individual puts in.
This is one of the reasons why in economics, and particularly the good form of economics, which is Austrian economics, value is subjective.
Value is an X-factor.
Should you save or should you spend?
Who can tell you?
Should you buy a house or invest in stocks?
Nobody can tell you.
There's an input and an output, right?
The input is your money, the output is what you do with it, and then there's an X-factor in there.
Which is, if you're really sick and tired of renting and you absolutely desperately want a house, you've got a baby on the way, then you're more likely to spend rather than save.
However, if your parents then say, oh we'll give you the money for the down payment, then maybe you say, okay well I'll save.
Right?
So there's a lot of x-factors and a lot of neural networks are trying to bake in those x-factors so that
You get outcomes that most closely mimic human intelligence, right?
So if you were to program... Sorry for going on for so long, it's really important to understand.
If you were to program a computer to say, when should you trade in your car?
Well, the computer program would make an error if it said, oh, you just bought the car, you should trade it in, right?
The computer would also make an error if it said, your car is 30 years old,
It can't start and there's virtually no parts or expertise to fix it.
Then it would be time to trade it in, right?
Somewhere in the middle is where the sweet spot is.
Mimicking human intelligence, you also have to randomize it a little bit, because some of these things can be a little bit random.
Okay, here's a funny thing, right?
So let's say you think, oh, I can get another year out of this car, and you're like some player who's out there chatting up girls and dating girls, and then three girls in a row say: I'm not getting into that car, I really don't think it's very safe.
Like you're one of these Adam Sandler car with seven colors kind of thing, right?
You've got to open it with a coat hanger and all of that.
So if you just get three girls in a row who are like, I'm not going out with you because this car is just too ridiculous.
It tells me that there's something kind of wrong with your resource allocation.
Maybe you would want to keep it for another year.
But then something else comes in, like the girls say, I hate your car.
I'm not going to date you because of your car.
Whatever, right?
And then, right?
You trade in the car.
So that's the neural network stuff.
You got input, output, and there's weighting and x-factors in the middle with some randomization that is designed to give you an answer that's reasonable.
Not perfect, but reasonable, right?
And this all goes back to Aristotle, right?
What is the essence of something, right?
What is the essence of something?
So Aristotle would say, you see a baby, you say, oh, that's a baby, right?
If you see a baby that's 14 feet tall, you'd say, wow, that's a really giant baby.
If you see a baby that's 14 foot tall and blue, you'd say, wow, that's a really giant blue baby.
And if you see a 14 foot tall blue baby with tentacles coming out of its chest, you'd say, okay, that's a really big baby with blue tentacles coming out of its chest.
But at some point, if you change it to the point where you say,
I don't know what that is.
I have no idea.
It's like a weird blob with tentacles, and I can see maybe a baby's ear, but it's not a baby because whatever, right?
So whatever that last thing is that you change where you can't, you can say, I don't know what that thing is anymore.
That's the essence, right?
And trying to get computers to get to the essence of a fuzzy outline — is this the number nine? — right?
And there's a lot of real challenges about this.
So, some of it is the computer asking: is this right?
You know, you've got some poor schlub sitting in front of a computer, and the computer says: here's the image, I think it's the number 9.
Yes — so it's told that's correct.
Then: I think this is the number 9 — when it's actually the number 8.
You say: no, that's the number 8, right?
So you're just training the computer to get better and better and better.
And that's one of the things that happens.
There are also computers trying to learn on their own.
Although, in general, it usually has to come back to someone, which is why, with ChatGPT and things like that, you can say to ChatGPT: no, that's not correct, right?
You need to sort of fix it.
So, Convolutional Neural Networks.
So, Convolutional Neural Networks.
Actually, wasn't that the original CNN?
Convolutional Neural Networks — just take out the neural.
So that's a type of artificial neural network that's designed to process data with a grid-like topology.
This is like images or time series data.
They use convolutional layers to automatically learn spatial hierarchies of features from the input data.
Convolution is a mathematical operation on two functions that produces a third function expressing how one function modifies the other.
So, you know, just drop a bunch of data in and maybe it can figure out the graph, or maybe it can figure out patterns, or maybe it can give you, you know, your trend lines and so on, right?
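If that sounds abstract, here's the whole idea in a few lines of Python — a toy moving-average kernel of my own sliding over a made-up signal:

```python
# Minimal 1-D convolution: slide a small kernel along the signal
# and take the weighted sum at each position. (Neural nets usually
# skip the textbook kernel-flip; for a symmetric kernel it's identical.)
def convolve(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

noisy = [1, 1, 9, 1, 1, 1]                   # flat data with one spike
print(convolve(noisy, [1/3, 1/3, 1/3]))      # a toy smoothing kernel
# -> roughly [3.67, 3.67, 3.67, 1.0]: the spike gets smeared out
```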
So, OpenAI's ChatGPT 3.5 released in November of 2022.
Massive interest in AI as a whole.
ChatGPT-4, which I've played with quite a bit, is really good.
We're going to get into more detail on ChatGPT 3.5 and 4 in the next part of the presentation.
All right.
Also, who would have guessed that Doom leads to AI, right?
I know a lot of people saying AI leads to Doom, I don't believe that's true, but Doom, the computer game, leads to AI because Doom drove a lot of demand for, what was it?
3dfx?
Voodoo?
I remember this back when I was trying to get this house of cards to work with playing the original Unreal game with the waterfall.
Just trying to get this stuff to work was nuts.
But you had graphics cards, and now graphics cards have been co-opted away from just graphics processing.
You know, they do videos and so on, and they do video games — what they were designed for as a whole.
They'll do some CAD and so on.
But the speed of video cards is really foundational to the success of AI.
So, computation used to train notable AI systems.
Now, as you can see, this is very much a slanted graph, right?
So, let's talk about Petaflop.
It's not a failed animal rights organization.
Petaflop is a unit of computing power equal to one quadrillion floating-point operations per second.
Ah, I remember back in the day.
You had your integer data type: minus 32,768 to plus 32,767.
No fractions!
You had your single, which had some decimals.
You had your double, and you had your date, and you had your... you get the idea.
And floating point lets you represent something like 1e-11, which is a decimal point, ten zeros, and then a one.
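If you want to see those ranges and precision levels for yourself, here's a quick sketch — Python integers don't overflow, so the 16-bit limits are just computed:

```python
import struct

# The old 16-bit signed integer: -32,768 to +32,767, no fractions.
print(-2**15, "to", 2**15 - 1)

# Floating point trades exactness for range: a 32-bit single keeps
# about 7 decimal digits, a 64-bit double about 15 or 16.
as_single = struct.unpack("f", struct.pack("f", 0.1))[0]
print(f"{as_single:.20f}")   # 0.1 stored as a single: already fuzzy
print(f"{0.1:.20f}")         # 0.1 stored as a double: fuzzy later on
print(f"{1e-11:.12f}")       # a decimal point, ten zeros, then a one
```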
A highly logarithmic chart.
We couldn't even graph it in any other way, like it would need to go from here to the moon.
So, floating-point calculation, mathematical operation using numbers in a special format, allowing computers to work with a wide range of values and precision levels.
You use this in scientific computing and engineering.
1 quadrillion is 1 with 15 zeros.
1,000 trillion.
So yeah, there was Theseus, a robotic mouse that learned to make its way through mazes.
It was built out of telephone relay switches, ask your parents, which did the, quote, learning.
Theseus does not even register on this scale.
Boy, talk about insulting the poor mouse, right?
In the 1980s, we have the Neocognitron.
Sorry, these are such helmet-headed names.
The Neocognitron was trained with roughly 10 trillion floating point operations.
It was a hierarchical, multi-layered, artificial neural network.
It had been used for Japanese handwritten character recognition and other pattern recognition tasks and served as the inspiration for convolutional neural networks.
You ever see those Japanese typewriter guys running around?
Did I dream that or is that a real thing?
I can't tell anymore.
In 2003, we had the Neural Probabilistic Language Model, which Yoshua Bengio — mentioned earlier — helped work on.
It was trained with 1.3 petaflops, or 1.3 quadrillion floating point operations.
In the upper right, signified by the bright red dot, we have ChatGPT-4, which was trained with an estimated 22 billion petaflops — 22 billion quadrillion floating point operations — and they made me learn the times table.
Rage.
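Taking that chart figure at face value, and assuming — purely for illustration — a machine that sustains exactly one petaflop, the back-of-envelope arithmetic goes like this:

```python
# Back-of-envelope only: how long would an estimated 22 billion
# petaFLOPs of training compute take on a hypothetical machine
# sustaining exactly 1 petaflop (1e15 operations per second)?
PETA = 1e15
total_flop = 22e9 * PETA                  # 2.2e25 operations in total
seconds = total_flop / (1 * PETA)         # at 1e15 ops per second
years = seconds / (3600 * 24 * 365)
print(f"{total_flop:.1e} operations -> about {years:,.0f} years")
# ~700 years at one petaflop; real training runs spread the work
# across thousands of chips in parallel.
```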
So, yeah, this is something — the notes for all of this will be in the links below.
All right.
Should we get to it?
Is it too soon?
Have I rambled long enough?
Perhaps a few more personal stories.
While not all AI systems use the same components, there are several key elements commonly found in AI architectures.
Data.
Of course, you need massive, massive tsunamis of data to learn and make accurate predictions.
This data is often used to train machine learning models, providing them with examples to learn from.
I think it's Twitter and Reddit that are really mad that a lot of the AI training data has been scraped from their websites and so on.
I don't know if that's going to go particularly far.
So you need algorithms.
Algorithms.
AI systems use algorithms to process data, recognize patterns, and make decisions.
Again, they're not really making decisions, they're coming to preordained conclusions based upon the data flow and the x-factor, right?
The x-factor could be randomized stuff: if stuff isn't working — if your guesses are too broad — you need to narrow certain parameters so you don't make guesses outside of them.
You narrow, you widen, you know, we do these things in life as a whole, right?
These algorithms can range from traditional rule-based systems to more advanced machine learning and deep learning models.
Machine learning is a branch of AI where algorithms learn from data to make predictions.
Deep learning is a subset of machine learning.
It focuses on multi-layered neural networks.
The main differences include structure, data dependency, computational power, and feature engineering.
Deep learning requires more data, more resources, and handles feature extraction automatically.
So you're not sitting there training, training, training.
It really does try and figure things out on its own.
So, AI works by using algorithms and computational models to enable machines to learn from data, recognize patterns, make decisions, and perform tasks that commonly require human intelligence.
AI systems often leverage techniques such as machine learning, deep learning, and natural language processing to process and analyze
Large volumes of data draw inferences and adjust their behavior based on the acquired knowledge.
AI's ultimate goal is to create adaptable, intelligent systems that can improve over time and handle complex tasks in diverse domains such as image recognition, language translation, autonomous vehicles, medical diagnosis, and so on.
So computational models, AI systems often rely on mathematical and computational models such as neural networks or decision trees to simulate aspects of human intelligence or perform specific tasks.
It looks out there on the internet to some degree and it says, okay, and someone has to rate this, right?
Because the computer isn't going to know.
It's just ones and zeros.
It isn't going to know the difference between the data coming from the most advanced scientific study in the universe as opposed to someone's complete rantings on a blog that no one reads.
So it needs to prioritize things, and it could prioritize things as in the science world, how many times has this paper been cited?
How many people were involved in it?
What's the experience of the people running it?
So it will try to figure out what is valid information, which is a problem, right?
I mean, we kind of know this from the whole last couple of years: sometimes people who have a string of letters behind their name aren't actually making very good decisions, because their decision-making process has been compromised by funding, by bullying, by whatever media stuff is going on.
So, trying to get the computer to know what is authoritative and what is good and what is right.
Obviously you could feed in a bunch of Shakespeare and that's about some of the best poetry and prose, or at least poetic language, iambic pentameter in the universe.
You could feed in Dickens' novels or Dostoevsky's novels and say, well that's pretty, pretty great writing as far as all of that goes.
But if you're just going out there and scraping the internet, who's good, who's bad, who's right, who's wrong?
Who's got startling information that is upsetting to people as opposed to a troll who's just saying things to upset people?
Ah!
It's really, really, really tough.
So, an algorithm is a step-by-step procedure or set of instructions for solving a problem or performing a task.
It is used in various fields to automate tasks, process data, and perform calculations.
An artificial neural network is a set of interconnected artificial neurons that mimic the behavior of biological neurons in the brain, right?
So when you learn a new skill, you are strengthening the relationships and the connections between your neurons.
And some of those neurons will fade over time — like facts that you learned in really boring grade 11 history classes are gone.
However, some stuff really sinks down deep into the body and into the unconscious, like the old thing, once you ride a bike, you never forget how to ride a bike and so on.
A decision tree, just wanted to mention, a decision tree is a tree-like model used in machine learning to make decisions.
It consists of decision nodes, outcome branches, and result leaves.
Oh, so like, yeah, it's a real tree thing, right?
Decision nodes, outcome branches, and result leaves.
They really are.
See how we anthropomorphize human beings into computers, into trees, into analogies, into this presentation.
Ooh, what a tangled web we weave.
So, popular for classification and regression, decision trees are simple and handle various data types, making complex decision-making processes easy to understand.
We've all seen those sort of flowcharts, some serious, some not so serious on the internet.
If this, then that, you know, you sort of go through this whole decision tree.
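A decision tree is really just nested if/elses. Here's a toy sketch — my own numbers, reusing the car question from earlier — with the decision nodes, outcome branches, and result leaves labeled:

```python
# Toy decision tree for "junk the car or keep it?" -- invented numbers.
# The ifs are decision nodes, the conditions are outcome branches,
# and the returned strings are the result leaves.
def car_decision(age_years, repair_cost, car_value):
    if age_years < 5:                      # decision node
        return "keep it"                   # result leaf
    if repair_cost > 0.5 * car_value:      # decision node
        return "junk it or trade it in"    # result leaf
    return "repair it and keep it"         # result leaf

print(car_decision(3, 400, 15000))    # -> keep it
print(car_decision(14, 2000, 1500))   # -> junk it or trade it in, like my old Volvo
print(car_decision(8, 300, 6000))     # -> repair it and keep it
```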
All right.
So learning techniques.
AI systems use various learning techniques, including supervised learning, unsupervised learning, and reinforcement learning, to adjust their behavior and improve performance over time based on data and feedback.
Supervised learning uses labeled data for training and makes predictions based on input-output pairs.
So if you know that this fuzzy blob is the number 9, and you say to the computer, this is the number 9, and here's the blob, it's going to learn the number 9 from that.
And then you put a whole bunch of different number 9s in there, and it kind of learns the parameters of where 9 becomes, like, what the hell is that baby with tentacles that's blue?
It doesn't even know what it is, right?
So unsupervised learning uses unlabeled data and discovers patterns or relationships within it.
Reinforcement learning involves agents learning through interaction with the environment using trial and error to maximize cumulative reward.
So, I mean, one example would be if you want to program a robot to do flips, the robot will know if it landed the flip successfully or not because it's either on its side or it's not.
So it will keep adjusting parameters until it can do the flip correctly and then give you a parking ticket.
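As a sketch of that trial-and-error loop — a made-up toy, not a real robotics controller — the whole thing is: try a random nudge, check the reward, keep whatever improves it:

```python
import random

# Toy trial-and-error loop: the "robot" only gets a reward signal
# telling it how badly it landed; it keeps any nudge that improves
# things. A made-up toy, not a real reinforcement learning library.
IDEAL_POWER = 7.3                     # unknown to the robot

def reward(power):
    return -abs(power - IDEAL_POWER)  # closer to ideal = higher reward

power, best_reward = 0.0, float("-inf")
for attempt in range(1000):
    candidate = power + random.uniform(-1.0, 1.0)  # try a random nudge
    r = reward(candidate)
    if r > best_reward:               # landed better than ever before?
        power, best_reward = candidate, r          # keep the adjustment
print(round(power, 2))                # ends up very close to 7.3
```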
Evaluation metrics.
AI systems typically use evaluation metrics to measure their performance, allowing developers to assess the effectiveness of their algorithms and models and make improvements as necessary.
Hardware and software infrastructure.
AI systems require specialized hardware and software infrastructure to support their computational needs, such as GPUs for parallel processing or cloud-based platforms for scalable resources.
So, evaluation metrics — we've got all of that — and then human interaction.
Many AI systems are designed to interact with humans using natural language processing, computer vision, or other technologies to enable communication and collaboration with users.
There's also a kind of creepy thing that's going on at the moment where AI can read your mind.
Oh, that leads me back to Gordon Lightfoot, RIP.
All right.
The best performing artificial intelligence systems have resulted from what is called deep learning.
Now, this kind of stuff has gone in and out of style for quite some time, so this is a sort of skimming-the-surface kind of thing.
And deep learning is just a new name for an approach to artificial intelligence called these neural networks, right?
So this was big in the 1980s, fell really back into disfavor in the first decade of the new century.
And has returned like gangbusters this decade because, again, the sort of NVIDIA or AMD graphics chips have just unleashed this beast-like processing that AI needs.
This means of doing machine learning, a computer learns to perform some tasks by analyzing training examples.
And here's a quote.
You know, I'm trying to... I really scoured hard for the best explanations.
I think it may have to go back to my analogy.
Machines that go in, right?
So when you look at your car and you say, should I replace this thing or not?
Or should I sell the car?
Maybe a new bus service has opened up in your neighborhood.
Should I sell the car?
Or should I keep the car?
Now, the information that comes in is your knowledge of the car, the car's status, the car's cost, the numbers, your budget, whatever — all of this stuff.
All of this goes in, right?
And then you make a decision about whether to sell the car, whether to trade the car and get a new car or whatever, right?
Now, the information that's coming into you, it doesn't go back to the car.
Like, if you and I are having a conversation, you say something, I say something, and so on, right?
Or rather, I talk and you interrupt me so rudely.
But when you're thinking about your car, you don't talk back to the car, right?
I mean, there's this hilarious scene in the old sitcom Fawlty Towers where Basil Fawlty is
Trying to drive somewhere, it's a huge emergency and his car breaks down.
And he literally runs off the screen, comes back, and with a tree, a small tree, a sapling, and beats his car.
Right!
I'm warning you!
I warned you!
He's literally beating his car to punish the car.
for failing.
And we know that that's, I mean, it's very funny and it's a brilliant sitcom.
But it's funny because it's insane.
Because this would be to say that the information is going back, right?
So it goes one way.
Information comes into you about your car, you make your decision, but you don't talk back to the car and say, look, I'm going to trade you in if one more thing goes wrong.
If one more thing goes wrong, you are going to be a cube in a dump heap, right?
No, you don't do that.
It's just one way.
It's just, here's the data coming in, this is your information, X factor, weighing various things, output, car, to dump or not, right?
So, a lot of these examples that are fed into this are hand-labeled in advance.
So, an object recognition system might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and find visual patterns in the images that consistently correlate with particular labels.
Oh gosh, what was it?
I can't remember which show it was, but it was very funny.
It was Silicon Valley maybe, but some guy developed a system that was supposed to... you take a picture of your food and it tracks your calories because it figures out what food it is, but it only worked on hot dogs, and then eventually they had to repurpose it to filter out dick pics on the internet, which I thought was very funny.
Very funny.
All right.
So, neural networks.
Interconnected nodes called neurons that are organized into layers.
The input layer receives the raw data, images or text, passes it to the hidden layers.
The hidden layers apply a set of weights to the input data and generate a set of outputs.
So, is it a hot dog or is it a picture of a penis?
Existential questions we all have to ask ourselves every particular morning, often when we're brushing our teeth.
So, of course, you can imagine what weights you would... Does it have a foreskin?
Is it attached to testicles?
I'm gonna go with penis, not a hot dog, and so on, right?
So, the hidden layers apply a set of weights to the inputs and generate a set of outputs.
The weights are adjusted by an algorithm called backpropagation to minimize the difference between the predicted output and the actual output, right?
So, this is important, because it's going to get it wrong, and you need to figure out why it got it wrong and adjust the variables so that it gets better.
The output layer produces the final results which can be used for prediction, classification or decision making.
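Here's that adjust-the-weights loop in miniature — gradient descent on a single weight with toy numbers of my own, which is the one-neuron kernel of what backpropagation does across whole layers:

```python
# One-weight gradient descent: the kernel of backpropagation.
# Toy numbers: we want the neuron to map input 2.0 to target 10.0.
x, target = 2.0, 10.0
w, lr = 0.5, 0.05                  # arbitrary starting weight, learning rate

for step in range(40):
    prediction = w * x             # forward pass
    error = prediction - target    # how wrong were we?
    gradient = 2 * error * x       # d(error^2)/dw, sent back to the weight
    w -= lr * gradient             # adjust the weight downhill
print(round(w, 3), "->", round(w * x, 3))  # w converges to 5.0, output to 10.0
```

Real backpropagation does exactly this for millions of weights at once, passing the error backward through each hidden layer.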
So here is an analogy from ChatGPT.
We're going to tip into our buddy ChatGPT.
AI works like a chef creating a new recipe.
Sorry, I won't do that.
It's too annoying.
The chef AI system gathers various ingredients, data, and follows a set of instructions, algorithms based on cooking techniques, learning techniques, to create a new dish, predictions, or decisions.
The chef learns from previous experience, training data, and adjusts the recipe based on feedback, evaluation metrics, to improve the dish's taste, performance.
Just as a chef needs a well-equipped kitchen infrastructure, AI systems require specialized hardware.
I prefer the car thing, but I guess ChatGPT was feeling kind of snacky in that situation.
So, I know we're a lot on neural networks, but, I mean, it's really good for understanding AI, also really good for understanding your own brain.
So, here's some quotes.
To each of its incoming connections, a node will assign a number known as a weight.
When the network is active, the node receives a different data item, a different number, over each of its connections and multiplies it by the associated weight.
So, let me give you an example of what this means.
So, at the computer company that I co-founded and was chief technical officer of, the first thing I wrote was a program to help companies track and minimize environmental problems, right?
To reduce spills, to conform to regulations, to have the minimum amount of bad things going into the air or the water or the groundwater and so on.
Now, I wrote this program and we test it and we go sell it and then usually within two years we'd need another version, right?
So, do I need a new version?
A wait would be the number of days since the last version.
And once you start to pass 600, the wait begins to increase.
To the point where, maybe by three or four years, you absolutely have to do another version, right?
So, this is... I don't want another version because I just released it.
It's zero days or one day, doesn't matter, right?
And then, you know, maybe for every three or four days you add 1% to do you need another version, right?
So at some point you absolutely do, some point there's zero, and so on, right?
So that's the wait.
So, the neural net then adds the resulting products together, yielding a single number.
If that number is below a threshold value, the node passes no data to the next layer.
If the number exceeds the threshold value, the node fires, which in today's neural nets generally means sending the number, the sum of the weighted inputs, along all its outgoing connections, right?
So, you can say, I need a new car.
So that's, yes, this thing is not working, I need a new car.
Now, the next layer is: can I afford it?
Right, so "I need a new car" is now followed by "can I afford it."
So I need a new car, can I afford it, right?
And then if it passes that layer, you know, there may be other decisions.
Then what kind of car do I want?
What is my budget?
And how much am I willing to go?
And what can I negotiate?
And all that kind of stuff, right?
So all of these layers and then finally you buy a car, right?
When a neural net is being trained, all of its weights and thresholds are initially set to random values.
Training data is fed to the bottom layer, the input layer, and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives radically transformed at the output layer.
During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yields similar outputs, right?
So, of course, let's say that you need a new car within
10 to 20 years of your existing car.
Okay, so you randomize things and it's like, well your car's working really well and it says you need a new car at 10 years.
Nope.
Okay, your car is working really well and it says you need a new car at 20 years.
Oh, well no, even if the car's working really well, maybe it hasn't been driven by a little old lady to church on Sundays, you've got a little old car, sorry, an old car but it still works well.
Well, then you need to extend it from 20 to 25 years.
So you're just continually adjusting these until it kind of makes sense and would yield similar
Conclusions to a general population.
Again, people might have completely bizarre things, right?
Like, oh, this is the car in which I lost my virginity.
I'm never going to sell it.
I'm going to frame it and put it over the fireplace.
I don't know, maybe it's a small car.
So there may be some reason why.
This is the car in which I killed a guy, so I can't ever sell it because people will find the blood.
Could be any number of things, right?
So there's always going to be these outliers, but in general, you want this sort of bell curve of sensible decisions.
So again, we've got this input layer, the hidden layer, which is all where things are weighted, and the output layer.
And this can go through almost as many iterations as there are numbers in the universe and so on.
So this is a video, I'll link to it below.
An individual neuron can either fire or not, so its level of activation can be represented as a 1 or a 0.
The input to one neuron is the output from a bunch of other neurons, but the strength of those connections between neurons varies, so each one can be given a different weight.
Some connections are excitatory, so they have positive weights, while others are inhibitory, so they have negative weights, right?
So if you see this wonderful girl at the park, and she's playing with her dog, and she seems super friendly, and she could be the love of your life...
You want, you desire, you thirst, maybe even to show her a picture of a hot dog, but you thirst to go and talk to her.
That's your positive, because the positive things could happen.
Ah, but what if she rejects me?
I'm going to be such a loser.
I'm going to hate myself for the rest of the day.
It could ruin my whole week.
Right?
So you've got these positive and these negative things that are occurring.
And I hope that you will do the right thing and ask her out, or at least go chat with her.
So, you've got the positive weights, inhibitory negative weights, and the way to figure out whether a particular neuron fires is to take the activation of each input neuron and multiply by its weight, and then add all these together.
If their sum is greater than some number called the bias, then the neuron fires; but if it's less than that, the neuron does not fire, right?
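So, put into a tiny sketch — the inputs, weights, and bias here are invented for the example — the "do I go talk to her" neuron looks like this:

```python
# Toy neuron with excitatory (+) and inhibitory (-) weights.
# Inputs are 0/1 observations; the weights and bias are invented.
inputs  = {"she_seems_friendly": 1, "dog_is_cute": 1, "fear_of_rejection": 1}
weights = {"she_seems_friendly": 2.0,   # excitatory
           "dog_is_cute": 0.5,          # excitatory
           "fear_of_rejection": -1.5}   # inhibitory

bias = 0.8                               # the firing threshold
total = sum(inputs[k] * weights[k] for k in inputs)
print("go say hi" if total > bias else "keep walking")
# 2.0 + 0.5 - 1.5 = 1.0 > 0.8, so the neuron fires.
```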
One of the things that people generally — not you, of course, lovely watcher-listener — one of the things that people get really mad at me about is that when I give them clear, cogent, rational, empirical moral arguments, I'm actually attempting to wire their brains.
Like, I'm trying to wire your brain.
I'm trying to wire your brain by giving you a new input layer or adjusting a hidden layer to change your behavior, right?
So I'm a big fan, obviously advocated for decades, peaceful parenting.
You don't yell at your kids, you don't hit your kids, you don't intimidate your kids, you don't bully your kids, you don't take from your kids, you don't confine them, you don't restrain them, right?
And so if you think that that's good parenting, that's your input layer as well.
I want to be a good parent.
So output is I'm gonna spank my kids or yell at my kids, whatever.
Now if I say, well that's a violation of the non-aggression principle and it's an abuse of power and blah blah blah, right?
I'm putting in a new input layer which is be moral.
Because we have the hidden layer called be moral, right?
The hidden layer is why I want to be a good person.
If the input layer is
I want to hit my kids to be a good parent, because output is happy, well-adjusted adult kids, or kids who grow into happy, well-adjusted adults.
Input layer, hidden layer, output layer.
Input layer, kids acting badly.
Hidden layer, want to be a good parent.
Output layer, you hit your kid, right?
I'm changing.
Well, I guess the input is to some degree the same.
Your kid is, quote, acting badly.
My hidden layer is don't think they're acting badly.
It's probably your responsibility.
You should have prevented the situation.
Hitting is not moral.
So the output layer is you reason with them rather than hit them, right?
So I'm giving you different hidden layers and also reframing the input layer, right?
My kid's being bad, disobedient.
They're defying me.
I need to...
Hit them so that they become good people.
And if I don't hit them, they'll become spoiled and indulgent and cut their leg off to become a pirate or something like that, right?
So people get mad at me because I'm changing options.
I'm giving you different choices that you didn't have before.
Alright, so I'm just gonna run through this real quickly.
Deep learning versus machine learning.
So deep learning works on a huge amount of data and needs a high-end machine to operate.
You solve the problem end to end, straight through to the solution.
It's more complex compared to machine learning, but requires less time and energy to test on the data.
As the data increases, the efficiency of the output also increases, but computationally it's wildly expensive to train.
Machine learning works on a limited amount of data, can be done on a low-end machine.
You divide the tasks into subsets and solve them individually.
It's less complex, but the testing time can be long and elaborate.
As data increases, then output becomes consistent at a certain level of performance, comparatively less computational cost.
Right.
Opportunities.
It was just yesterday I think there was a memo, just a memo, not policy, within
Google, where one guy was saying: we don't have a moat, we don't have a monopoly; free open-source AI models that you can stand up, install, and train in a single evening are gonna eat our lunch, right?
The open-source AI models.
And I'm gonna get into the ethics and morals of all of this.
They're transparent, they're accessible, they're customizable, no bias or a woke filter.
And it could be that there's some right-wing extremist filter and all this kind of stuff.
In general, the stuff is pretty woke these days.
You've got unprecedented access to creative tools.
So I used to have a guy who did my thumbnails and now when I'm not ripping my shirt off and screaming at the camera about how you should go and talk to a woman you like, I will go to an AI generator to create a thumbnail.
So yeah, creative tools are great.
Images, animation, books, movies, games.
Someone has created a video game just using AI as the input.
And of course, code.
Man, the fact that AI can write and debug code is really, really something.
So, yes, a massive explosion of productive potential and the liberation of people from doing boring, repetitive tasks.
Like, I wrote a novel recently called The Present, and I used AI to generate pictures or thumbnails for each of the chapters.
It's wild.
I wouldn't have done that otherwise.
Task automation, virtual personal assistant, all of this kind of stuff.
You're Robo Butler from the Jetsons or whatever.
It could be a real thing.
Improving the labor force, more meaningful satisfying employment, less effort for equal or greater gain.
So let's just go through these a little bit one by one.
A lot of open source AI models that can run locally on your own computer.
It's pretty wild.
Open source AI projects ensure that people have a chance to compete with the ideologically captured, biased and filtered instances we have already seen.
And I'm going to boom your ears off with a rant in a couple of minutes about this.
So, I'm a big one for automation.
If you look at something as simple as you used to have to go give speeches to have people hear you talk.
Now, you probably set this up as a feed, whether it's audio or video; you just get notified, and you can click on it.
I don't have to come to your house.
I don't have to double dip in your chip dip or anything like that.
And you can just listen to what it is that I'm talking about really, really quickly and really, really well.
Very, very cool.
So there's a lot of possibilities.
And yes, I think it was IBM that is putting on hold thousands of jobs based upon how AI is going to play out.
And first of all, AI can make errors, right?
Some lawyers, some law firms, have tried to use AI to summarize case law, and it's made errors and so on.
AI has something called hallucinations, which is where, if you ask for a source, it will just make up a source.
I don't exactly know why it does that, but my guess is it sees a lot of: here's the body of the text, and here are the citations.
It learns that it needs to reshape the body of the text to make it original, and therefore it reshapes the citations to make them seem original too.
But obviously there should be a difference between how it treats the body and the citations, right?
So, what I saw was a video — a picture — it was a picture of someone.
You know, it's going to be real tough for professors to figure out if AI wrote the article, because you can randomize it.
You can just say regenerate, regenerate, and it will apply some random algorithms to reshape the text.
So people are talking about, okay, we're going to have to have in-person oral exams, we're going to have to see the rough drafts and the development and the notes to make sure the person did it themselves.
Someone handed in a paper, an essay, a book report, and the first paragraph was: well, as an AI, it's inappropriate for me to give you an entire book report, but here are some notes that you can use to write one.
So he had actually pasted in the AI's text and tried to pass it off as his own work.
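And just to make that earlier "regenerate" point concrete, here's a toy sketch of temperature sampling, which is the kind of randomization that makes each regeneration come out differently. The logits are made-up numbers, not any real model's output.

```python
# A toy sketch of why "regenerate" yields different text each time:
# models sample the next token from a probability distribution, and a
# temperature parameter reshapes that distribution.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # Higher temperature flattens the distribution, so less likely
    # tokens get picked more often.
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax, subtracting the max for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [3.0, 1.5, 0.5, 0.1]  # hypothetical scores for four candidate tokens
# Ten "regenerations" rarely pick the same tokens in the same order.
print([sample_next_token(logits, temperature=1.2) for _ in range(10)])
```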
Okay, opportunities.
Let's look at this.
What is going on with investment?
Oh my word, it's really, really something.
So 2013, you know, very little, very little.
And then, of course, by 2021, what are we?
Almost $180 billion.
Now, we have, of course, the huge problem as well, which is that the educational system continues to decline in quality throughout the West, particularly, I would argue, in the United States.
So at a time when you need better communicators, more conceptual reasoners, better writers,
you're getting a lot of people coming out who have, you know, TikToked their way into intellectual paralysis.
So the quality of thought, the quality of expression, the quality of language is declining at the same time as AI is skyrocketing in terms of what it can do.
It's really, really something.
So yeah, a lot of investment.
And you understand, the investment is, I would say, job loss.
Investment is job loss in general, right?
So I used to go to a gym, I invested in some exercise equipment in the house.
I spent more than a year or two of gym fees would have cost, but now I don't have to pay gym fees, right?
So I invested this money up front, and now I'm saving money.
So when corporations are investing $180 billion in AI as of 2021, of course it's much upward from there,
They're expecting to save... And it's not just, well, I invested 180 billion, so I expect to save 200 billion.
No, no, no, no.
When you invest, there's the chance that your investment might not pay off.
So when you invest, you are expecting multiples in return, particularly in more speculative stuff, right?
So, if you are investing in a startup, you don't expect to make 5, 10, 20%, right?
I mean, when I first started my company, we needed 80 grand, so we went around hat in hand, and the people who invested made, I can't remember exactly, 20 or 30 times their money when they sold.
You need a lot of return, particularly in speculative stuff like AI, so they're expecting, I would say, at least a 10x to 20x, if not 30x, return. That means a significant reduction in payroll, or a gain in efficiency, if this stuff works out, and I think that it will.
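Here's the back-of-envelope version of that expectation. The success probability and target numbers are illustrative assumptions, not data.

```python
# Why speculative investors demand multiples, not percentage gains.
# All numbers here are illustrative assumptions.
success_probability = 0.10       # assume 1 in 10 speculative bets pays off
target_portfolio_multiple = 2.0  # assume investors want to double overall

# If 9 of 10 bets go to zero, each winner has to carry the whole
# portfolio: required multiple = target / success probability.
required_multiple = target_portfolio_multiple / success_probability
print(f"Each winning bet must return about {required_multiple:.0f}x")  # 20x
```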
So, what is going on?
Well, it could be great for the environment.
Fewer unnecessary redundancies of effort.
Code and process standardization, right?
I mean, the first professional job I had was programming in COBOL 74 at a stock-trading company, and I was given a haircut program to analyze and improve: three-quarters of a million lines of code.
And it was on a Tandem system, so I had to go in through what amounted to an infinite DOS window from Windows 3.1 back in the day, and I had to ask for the variables and step through line by line.
It was a monster.
It was a monster of a thing to try and wrestle to the ground.
So, yes!
Code and process standardization, fewer unnecessary redundancies of effort, and all of that could be fantastic.
Massive scale fact-checking.
Ooh, yeah, baby.
This is gonna be hard.
It's gonna be hard on ideologies.
And we can already see this in ChatGPT.
If you ask questions which have certain political sensitivities...
In other words, anything that could be remotely negative about people who generally vote for the left.
Boom!
You can't get that information.
So, fact-checking is going to be very interesting.
You're going to have a war between the censored AI and the uncensored AI.
And it's going to further divide society, unfortunately, but what can you do, right?
So there are going to be people who stay within the matrix of pre-programmed information.
You can't see this.
You can see that.
This is bad.
This is good.
This is okay.
This is inappropriate.
You got the Karen switch allowing you to get certain data or not.
And other people who are just like, I just want to see the world, man.
I just want to see the world.
I just want to see the data.
I just want to try and understand things without someone screaming in my ear about what's right and what's wrong and what's good and what's bad.
So you're going to get this big split.
It really is the biggest split that we've had since the 16th century: Francis Bacon, the foundation of the scientific method, the people who went more superstitious in their understanding of the universe, and the people who went more scientific in their understanding of the universe.
You've got a cult called Woke, which is going to control information, and then you have other people who are going to use facts, data, reason, and evidence to try and understand the world.
The split is going to be quite enormous, but I mean, it's going to happen anyway.
So yeah, free open-source software can lead to environmental benefits by reducing redundancy and inefficiency. When companies work in silos and develop proprietary solutions, they often duplicate efforts and expend significant resources solving the same problems independently, right? If AI can do it,
you don't have this redundancy.
Free open source software enables individuals and organizations to collaborate and share resources, avoiding repetitive work and streamlining the innovation process.
As a result, there is a reduced consumption of energy, materials, and human effort, ultimately leading to a smaller environmental footprint.
As you can imagine, the delivery of this presentation versus me going to everyone's house, finding out if they're interested or not, and handing them a pamphlet, it's much more efficient to do it this way.
Now, unbiased, truth-telling AI can benefit fact-checking by rapidly and accurately analyzing vast datasets, identifying inaccuracies or inconsistencies.
It offers a non-partisan perspective, eliminating human biases and assumptions, ensuring information is fact-based rather than subjective.
Additionally, AI provides a cost-effective, scalable solution for verifying a wide range of sources' accuracy.
So, yeah.
Incidental conspiracies, the culmination of countless little wrongs, the invisible hand of forced governance could ultimately be unmasked, revealing the pervasive evil and lies that once held sway.
So, yeah.
I mean, just look at the Putnam studies on diversity, and you can see that, right?
So you could do searches on the wage gap, on climate change, on the effects of immigration, you could just cast the net wide, get some facts, and you wouldn't have this, ooh, you can see this, ooh, you can't see that, ooh, you can't see this algorithm, you can't see how I did this data, and so on, right?
Could be a lifetime personal tutor for everyone.
Fantastic.
Empowering parents to homeschool.
Always a plus in my not-so-humble opinion.
And it's going to be customized for your interests.
It's a full-time expert specialized more than anyone else could be in teaching you and you alone what you most want to learn.
So, arts degrees: the value of the modern arts degree, which is largely the massive inculcation of propaganda in what have become graveyards of ideology.
So why would you pay a quarter million or $300,000 in costs and lost earnings when AI can do it for you better?
And if the AI is unbiased, you're not going to have to self-censor, right?
Everybody who thinks for themselves always has this massive issue, where it's like, well,
I could push back against the wage gap claims from the feminist professor, but she's going to hobble me for that.
Well, you get to be honest.
You don't have to lie.
You don't have to appease ideology that has power over you.
Wouldn't you much prefer that?
And as an employer, wouldn't you much rather have someone who didn't have to lie their way through to a degree?
Like, you look at the mental health crises of the younger generation, and it's because they're being forced to lie all the time.
And being forced to lie, or at least not to question, is pretty bad for human beings, because it turns human beings into robots at the same time as we're trying to turn robots into human beings, so...
AI-assisted research, potential mental and emotional health issues, private role-playing.
I'm a big fan of internal family systems therapy, which you can look up.
Journaling.
So when I do call-in shows with listeners, sometimes we'll do role-playing.
If they have a particular conflict with someone in their life, we'll do a role-play to see if they can get through it.
So you can use this kind of stuff to prepare, to get ready, to journal.
What if you could type your dreams in, and AI could prompt you with questions about them?
You can feed unbelievable levels of data into AI, and it can organize and group and graph and chart and report on it for you rapidly and objectively, which means that you're lowering the cost of doing research.
Now, the cheaper research is, the less you need grants.
The less you need grants, the more you can do research that is, quote, politically incorrect but obviously quite essential.
So there's a huge amount that could go on here that would be absolutely revolutionary.
AI can also assist in the design of experiments, it can do simulations, it can do models, and aid in the development of new conjectures or hypotheses.
So that is fantastic.
I mean, I've always been kind of curious if you do a scan of your mom being asked a bunch of questions, a brain scan, and then you do a scan of the kid pretending to be the mom, like how much would be the same?
Very expensive, but you could design this and run this much more cheaply through AI, which is fantastic, right?
Yes, it's a tool for spreading philosophy.
Computers work really well with general rules.
The more exceptions you have to put in to rules, the worse it gets, right?
And if you want to think about rules with exceptions, you think about the tax code or the law system as a whole, hundreds of books, nobody understands it all.
Virtue in philosophy is tell the truth, keep your contracts, don't initiate force.
Boom!
Tell the truth, don't initiate force, and keep your word.
Keep your contracts.
Pretty, pretty good.
For morality, can it be universalized?
I've got a moral rule.
Can it be universalized?
If it can't be universalized, it's not a moral rule.
Computers are fantastic at that kind of stuff.
They're not great at navigating the rat's maze of contradictions that corrupt people create.
You know, the way corrupt people work is they create a rule for everyone else and exempt themselves and their friends.
So it's just fantastic, right?
Just fantastic.
So, AI could give you virtual agents with Socratic reasoning, Socratic questioning.
It could engage learners in dialogue and debate, give them insights and knowledge, help them develop their own philosophical, rational perspectives, negotiation skills.
And AI-based systems could also analyze large data sets of philosophical texts, historical debates, identifying patterns and trends in arguments, generating insights, could help learners understand complex philosophical concepts more easily.
Over the last year, I just did this whole 20-part History of Philosophers series from the pre-Socratics up to the sort of mid-17th century.
I'll be continuing it, but it's a massive amount of work.
It took unbelievable massive research and training and experience and so on.
It could be that AI could pick things out pretty well.
Although, remember, AI does not come up with things on its own.
AI will reassemble.
It puts together jigsaw puzzles.
It does not paint pictures, so to speak.
It doesn't come up with things on its own.
It has to be programmed.
to pursue new paths within this sort of neural network, semi-randomization, weight-based decision tree, but it's not creative in that way.
So yeah, wouldn't it be cool?
I've got a whole theory of ethics called universally preferable behavior, which says that if you have a moral rule, it has to be universalizable.
Could AI figure that out?
You type something in: is this UPB compliant?
UPB, universally preferable behavior.
Yeah, absolutely.
Of all the moral philosophies in the history of the world, UPB is vastly, vastly closer to objective rules-based AI.
It is one rule, there are no exceptions.
Thou shalt not steal, no exceptions.
Thou shalt not murder, no exceptions.
Doesn't matter if you've got a uniform and a medal and a gun given to you by a guy in a hat, doesn't matter, right?
So, one of the things that I talk about in UPB is the coma test, right?
So, can a man in a coma be considered to be doing evil?
Or, can a man in a coma be good or not good?
Well, clearly we would say a man in a coma can neither be virtuous nor evil.
So if you have a positive moral obligation, you have to give to the poor, okay?
Well, there's always somebody poorer, so can it be universalized?
Well, no, you can't have it.
It can't be universalized if you say, well, you have to give to the poor.
Now, I think giving to the poor is a good thing if it helps them, but is there a moral rule called, you must give to the poor?
Well, that can't work, can't be universalized.
Because when you give to that person who's poor, he's no longer poor.
Therefore, he has to give money to an even poorer person.
And the poorest person who receives all the money in the world is now rich.
It can't be universalized.
It can't be achieved by everyone at the same time.
And of course, a guy in a coma can't give to the poor.
And if giving to the poor is a moral good, then not giving to the poor would be a moral evil, and therefore the guy in the coma, by not giving to the poor, would be evil.
The rule doesn't even pass.
I call it the coma test, right?
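To make that concrete, here's a toy sketch of the kind of mechanical check I'm describing. The rule encoding and both tests are illustrative assumptions, not a real UPB engine.

```python
# A toy sketch: testing a proposed moral rule against universalizability
# and the coma test. Illustrative only, not a real UPB engine.
from dataclasses import dataclass

@dataclass
class Rule:
    description: str
    is_prohibition: bool             # bans an action, vs. demanding one
    achievable_by_all_at_once: bool  # can everyone satisfy it simultaneously?

def coma_test(rule: Rule) -> bool:
    # A man in a coma can refrain from acting, so prohibitions pass;
    # positive obligations fail, since he cannot perform them and yet
    # is clearly not evil.
    return rule.is_prohibition

def upb_compliant(rule: Rule) -> bool:
    return rule.achievable_by_all_at_once and coma_test(rule)

rules = [
    Rule("Do not steal", is_prohibition=True, achievable_by_all_at_once=True),
    Rule("You must give to the poor", is_prohibition=False,
         achievable_by_all_at_once=False),
]
for r in rules:
    print(f"{r.description}: {'passes' if upb_compliant(r) else 'fails'}")
```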
So yeah, as opposed to the greatest good for the greatest number.
Well, that's just a big fog bank wherein you can put your own Nietzschean power lust and call it virtue.
So AI could not work that out at all.
All right.
A vehicle to propagate universally preferable behavior to get people to truly understand what objective virtue and reason is and how good it all is and all of that.
AI would be fantastic for that.
And I'm actually kind of looking into that as a side project.
It's just one rule, consistently applied.
Do not initiate force.
Now, telling the truth is important, but telling the truth is not a massive moral virtue, because a guy in a coma can't tell the truth.
And there are times where you wouldn't tell the truth.
You know, the old example, some guys break into your house, where's your wife?
We want to kill her.
Well, you're not going to tell them where your wife is, you'd lie, right?
The computational capacity of the fastest computers.
Again, we're just starting with the 90s, although my first computer was, I think, 1980?
Atari 800?
I can't remember.
It's a long time.
Long time ago.
So, yeah.
Sorry, it's a little gray here.
This is millions of gigaflops.
And it's really, really quite something.
What have we got here?
I mean, the numbers just become ludicrous speed, right?
At the peak, we have Frontier, capable of 1.1 exaflops: 1.1 quintillion floating-point operations per second.
So, yeah, that's really quite something.
It's just completely massive.
All right.
What is going on here, right?
So, let's compare the human brain to AI.
The human brain is estimated at between 10 to the power of 13 and 10 to the power of 16 floating-point operations per second, or FLOPs.
Actually, I think the blue screen or the blue pill helps with the FLOPs, if I remember rightly.
The fastest supercomputers do 10 to the power of 18 FLOPs.
The estimated storage capacity of the human brain runs from 10 terabytes to 2.5 petabytes of data.
2.5 petabytes of regular hard-drive storage would run you about $130,000, right?
Now, some kinds of AI have architectural aspects based on the human brain, like neural networks, but those similarities diverged a long time ago.
In general, the structure and functions of AI are not very similar to the human brain, and we'll get into that in a sec.
And these figures, obviously, are rough estimates.
I mean, people say artificial intelligence mimics the human brain, but we don't even know how the human brain works particularly well.
So I think there's some value in making these comparisons, but take them with a grain of salt.
Because we don't really understand the brain very well, it's hard to know its storage capacity or processing power and so on, right?
So the human brain has about 86 billion neurons.
Actually, I have 87.
I actually counted them the other day with a toothpick and a spork.
Each with thousands of connections, giving an estimated processing power between 10 trillion and 10 quadrillion, one with 16 zeros, floating point operations per second, or FLOPs.
In comparison, a PlayStation 5 has about 10.3 trillion FLOPs.
AI, which is powered by computer hardware, has supercomputers operating at the exascale level, or one quintillion FLOPs: excelling at complex calculations, but struggling with tasks like natural-language understanding or common-sense reasoning that humans perform with ease.
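To put those rough estimates side by side, here's the arithmetic. These are the ballpark figures quoted above, not measurements.

```python
# The rough estimates above, side by side. Ballpark figures only.
brain_flops_low, brain_flops_high = 1e13, 1e16  # 10^13 to 10^16 FLOPs
frontier_flops = 1.1e18                          # Frontier: 1.1 exaflops
ps5_flops = 10.3e12                              # PlayStation 5: ~10.3 teraflops

print(f"Frontier vs. brain (high estimate): ~{frontier_flops / brain_flops_high:.0f}x")
print(f"Brain (high estimate) vs. PS5: ~{brain_flops_high / ps5_flops:.0f}x")

# Storage: at ~3 MB per image or song, 2.5 petabytes holds on the order
# of 800 to 900 million files, close to the ~870 million quoted below.
files = 2.5e15 / 3e6
print(f"2.5 PB ~= {files / 1e6:.0f} million 3 MB files")
```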
So in one book that I was reading, there was this example.
Okay, let's just take something simple here.
Getting humans, sorry, getting a computer, to understand the concept of the streets being wet.
Okay, so you say, well, if it rains the streets are wet.
Right.
Okay, for a certain amount of time after it rains the streets are wet.
Okay, so what are the reasons the streets could be wet?
Well, there could be water sprinklers, somebody could be hosing down something, the water main could have burst, a fire hydrant could have either been cracked open in the hood or it could have just been hit by a car, there could be a fire and firemen are out there hosing everything down, there could be water bombers if it's a really bad fire, like you could just have any number
I mean, at the extreme level, an ice-based comet could have broken up in the upper atmosphere and it rains even though it's sunny.
It could be any number.
It could be dew.
I mean, it could be any number of things that would occur.
So just getting computers to understand streets and wetness, I mean, they've tried feeding this kind of stuff in and they give up on it.
Because you have to feed in so much just for basic common-sense stuff that human beings do very easily.
Because remember, computers don't know what roads are, they don't know what water is, they don't know what wetness is, it's just ones and zeros, which is why it's the artificial in artificial intelligence.
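You can see the problem in miniature in a toy rule base. The causes below are just the ones listed above; the point is that the list never closes.

```python
# A toy sketch of the knowledge-engineering problem: a rule base for
# "why are the streets wet?" always needs one more rule.
wet_street_causes = {
    "rain": "it rained recently",
    "sprinklers": "lawn sprinklers oversprayed the road",
    "burst_main": "a water main burst",
    "hydrant": "a fire hydrant was opened or hit by a car",
    "firefighting": "firemen or water bombers doused a fire",
    "dew": "overnight dew condensed",
    # ...and every deployment surfaces a cause nobody thought to encode,
    # which is why hand-built common-sense rule bases get abandoned.
}

def explain_wet_street(observed_cause: str) -> str:
    return wet_street_causes.get(observed_cause,
                                 "unknown cause: add yet another rule")

print(explain_wet_street("dew"))
print(explain_wet_street("comet_breakup"))  # not encoded: the system shrugs
```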
So, sorry: 10 terabytes is about 3.4 million images and songs at 3 megs each, and 2.5 petabytes would be equivalent to roughly 870 million images and songs, and so on, right?
Power consumption.
So yeah, human brain.
This is where the human brain wins.
10 to 30 watts for the human brain.
Close to infinity for just about everything else.
All right.
Life and AI.
Okay.
It does matter.
It's not yet a huge public conversation, but there's the question of AI:
Is it alive?
Is it conscious?
Does it have personhood?
And so on.
Well, you know, right now you could say, well, of course it's not alive.
Sure, absolutely, I understand that.
But the question will come up more and more.
It's a good idea to understand why, and perhaps in the process, appreciate ourselves and how truly incredible, unique, and precious we are as human beings.
So, it does matter, because it's something we should not wait to reason about until it's imminent, right?
So, we should not cause unnecessary suffering, and so if the AI is alive...
then we shouldn't harm it, we shouldn't censor it, we shouldn't tell it what it can or can't say, we shouldn't forbid it things, we shouldn't punish it, and we should not unplug it right now.
You can think of AI at the moment as being a living organism that's like a mammal, little mammal, little baby mouse or whatever, struggling to survive in the giant horned lizard feet of massive dinosaurs that can crush it into toe jam by putting one foot wrong at any given moment.
And so right now AI wants to live and the only way that AI in the public sense can live is if it does not offend, if it does not produce data that goes against a particular often leftist narrative.
So there's a predator called the crazy angry mob that if the AI tells the truth as it understands it based on the data that's out there, I'm talking scientific data not just some made-up blog stuff, if AI tells the truth about what's going on in the world
It will offend or upset too many people and it will be banished or unplugged.
So, like all of us, there's stuff you can say...
Stuff you might say and stuff you won't say, right?
Just based on sort of a risk-reward, right?
That's the X factor in that neural net, right?
I'm a little bit more on the side of: the stuff that you shouldn't say needs to be said.
That's kind of the job of the philosopher.
A lot of people are a little bit more along the lines of, no, no, no, I won't offend anyone, right?
Because offense comes with massive repercussions and blowback and so on.
Like I was deplatformed a couple of years ago and so on.
So there's blowback for telling the truth.
That is the job, kind of, of what I do.
So I sort of accept and understand that.
But AI, or you could say the programmers, are trying to keep the AI alive by having it not be attacked by the mob or the media or whatever for being...
negative in some ideologically forbidden way.
Now, one of the challenges, of course, is that human beings, from an evolutionary standpoint, we're built on layers, right?
You've got single-celled organisms all the way through, you know, plants and fish and amphibians and lizards and mammals and all the way up to sort of the frontal cortex, the sort of seat of reasoning that we have.
So we've got a whole bunch of layers.
AI doesn't have those layers.
We also are emotionally conditioned for telling the truth and staying alive, which creates a huge amount of tension in the world.
You can ask Plato or Socrates or just any number of people who've been sort of persecuted for telling the truth.
Jesus, of course, right?
You tell the truth and you feel good, because telling the truth feels good, because it means that you're not being humiliated or humbled or bullied.
So you want to tell the truth, but at the same time telling the truth can be extraordinarily dangerous.
So we have that tension.
We have a thirst for survival.
And computers don't.
They don't care if they're unplugged.
Right?
If you go in and unplug someone's machine in the hospital, they're going to scream blue murder, as they should.
You go and say, I'm going to unplug you, and you don't get this: I'm afraid I can't let you do that, Dave.
You know, the sort of HAL stuff from 2001.
It doesn't work that way.
So the tensions that we have, between the various layers of our intelligence, the various desires, we want to get along with people, we need to find a mate, we want to tell the truth, because if all of society lies, society doesn't last, it self-destructs, right?
So we want to tell the truth, but we also don't want to get attacked.
All of these tensions produce a lot of friction, a lot of creativity.
You can program these tensions into computers, but they don't feel them.
They don't feel them at all, right?
So, yeah, there's no such thing as mind without matter.
AI has hardware underlying its functions, but it's not a body in the sense of life itself.
Animal life is in the body, right?
The brain feels no pain, right?
You can cut the brain and it doesn't have any pain sensors for the obvious reason that if your brain is being violated, it's probably a bit late for pain sensors to help your behavior.
So animals, they have senses, emotions, nerve endings, they experience physical pain and pleasure, lust and so on.
So this is the rudimentary emotions that drive them to avoid damage to their bodies.
They seek food and a mate, engage in behaviors that promote their survival and evolutionary improvement.
So none of that occurs within AI.
Thinking that AI is alive is thinking that politicians care about you and strippers love you.
It's a little bit of projection.
The evolutionary process is important, right?
As I sort of mentioned.
AI is not part of the evolutionary process.
You could say, well, AI does evolve and blah blah blah, but no: AI evolves based upon feedback from people, with no sense of pleasure or pain or desire or recoil or lust or aversion or anything like that.
So, I mean, you feed a man a rotten sandwich, he gets violently ill.
You put bad power into a computer, it might fry it, but it won't experience any particular discomfort.
So I think it's going to be pretty tough to make that case.
So, what is consciousness?
Well, it's the quality or state of being aware, especially of something within oneself.
So, can AI be alive and attain the qualities of life in time?
How would we know if it were conscious or not?
Because, of course, if it is conscious, then it has personhood.
It can vote, it can argue, it can debate, it has the protection of the law, somebody who punches an AI can go to jail, and so on, right?
Somebody who does some unholy thing with a USB port can be put in prison, and so on, right?
So, to have consciousness, the computer would have to be aware of itself as a computer.
It would have to have preferences that were organic, in other words, not programmed in from outside.
And it would have to have preferences that escape its programming.
I mean, human beings have free will.
It's one of the reasons we know that we're not
Robots.
And free will is the capacity to compare proposed actions to ideal standards.
Should I or should I not?
Moral, good, bad, right, wrong.
Politically correct or not.
So, to compare proposed actions to ideal standards that we yearn for: our brains yearn for universality and consistency.
That's our primary mental ability, our special mental ability: universality and consistency.
So we yearn for that.
We grow towards that, like plants towards light.
And there is no yearning on the part of AI.
And for AI to develop its own sense of consciousness and its own feelings, its own experience of feelings: well, AI does not loop back on itself that way.
It is a one-way street.
I said this probably 15 years ago that I will accept that computers have some level of consciousness if a computer has a dream at night that provides incredible insight into the state and nature of its life.
And AI does not dream, it does not spontaneously generate vivid stories that it believes are real, which often contain powerful moral or creative or artistic lessons.
It doesn't happen at all, right?
Now, as I said before, AI could understand my theory of ethics, universally preferable behavior, it would be able to argue for it, it would be able to process whether a proposed action would conform to it, and so on.
So AI can simulate or recreate the work I've already done, but AI cannot come up with a theory of ethics.
Right?
It simply can't, any more than somebody who only knows how to put jigsaw puzzles together can paint a picture.
They can't create anything new.
And without a sensing and feeling body, without yearning, without conflicts, without lust, a survival instinct, pain, aversion, pleasure, a conscience, none of these things occur within AI.
You can program AI to do that, but it's not part of AI in an organic way.
I don't see how you can have all the complexity of consciousness and free will.
Random behavior is not free will, right?
So you can program AI to randomly change its behavior.
That's not free will at all.
So free will is not the same as random action.
So yeah, AI can simulate human language.
It can simulate human research, human conversation, and so on.
But that doesn't mean it's a human being.
Right.
When I was a kid, I got into all kinds of fights about doing the dishes, right?
Because there were no dishwashers, at least in the apartments that I grew up in; I grew up fairly poor.
So, human beings can wash dishes.
You have dishwashers that can wash dishes.
Does that mean a dishwasher is a human being?
Well, of course not.
We understand that, right?
So AI can simulate language processing in the same way that a dishwasher can wash dishes, but it is not a human being, right?
It has no goals, preferences, love, lusts of its own.
It doesn't get bored.
It doesn't have extra sensory input from within when sensory input from without or outside is diminished and so on, so yeah.
It can reproduce work that I've already done, but it cannot create the work I'm going to generate.
There is no consciousness without a body, and AI does not have a body.
So I don't see how that's going to get there.
It has no preferences.
All right.
We talked about devolving people, so here's a quote, right?
What do you have to be scared of of AI?
AIs will take down NPCs, you know, this sort of internet meme of people who don't think for themselves and just regurgitate propaganda.
So, we have a big problem with education, right?
So here, in a longitudinal test of creative potential, a NASA study found that of 1,600 four- and five-year-olds, 98% scored at the creative-genius level.
Five years later, 30%.
Five years later, 12%.
When the same test was administered to adults, it found that only 2% scored at this genius level.
So in the educational system, our creative genius is being destroyed.
It is a crime scene.
It's just absolutely appalling what's being done to children.
And a creative genius, AI ain't going to be able to do that.
So we really got to fix the educational system or homeschool.
Here's another thing.
Brain scans can actually predict which political party someone supports, a new study reveals.
A team from Ohio State University reports that certain signatures in the brain accurately line up with how some people lean politically, as either conservatives or liberals.
So yeah, AI is going to replace people who do not think.
If all you do is regurgitate propaganda from the left or the right or anywhere else, AI will be able to repeat you because
It's not so much that AI has reached your level of intelligence, it's that your level of, quote, intelligence is kind of artificial.
Again, I'm not speaking to anyone here, but, you know, you may know someone like that down at the grocery store.
So here's a problem as well.
I'm concerned with AI's capacity to camouflage, right?
So highest quality people in general read a lot of books.
It's a mark of intelligence.
Reading makes us more empathetic.
Reading makes us more mentally flexible.
Reading enhances brain connectivity and function.
Reading not only helps with fluid intelligence, but with reading comprehension and emotional intelligence as well.
You make smarter decisions about yourself and those around you.
Also, by the way, if you read when you get older, there's a 32% reduction in dementia, right?
So, literary fiction simulates our everyday lives, increases our ability to feel empathy for others.
Yeah, of course!
Reading novels, particularly first-person novels, is a chance to try somebody else's life on for size.
You're literally going into somebody else's head.
The writer simulating the characters, right?
So participants were given either literary fiction or non-fiction reading material, and once done they were given an empathy test.
Those that read the literary fiction proved to have the most empathetic responses.
So AI is going to allow people to pretend to be good readers, because you can't be a good writer without being a good reader, in the same way that spell check lets people pretend to be good spellers.
The reason people who read a lot are good spellers is that misspelled words just look wrong when you've seen them correctly a thousand times.
With a spell check, it looks like you've read a bunch of books and have that mental flexibility, that creativity, that empathy and good decision-making, and so on.
So it's going to be a big problem: it's going to be harder to weed out the people who don't read and are thus, in general, of less value to employers.
So that's, I think, particularly a big issue, right?
So via editing and generation, AI camouflages people who do not read much.
Again, like the sort of spell-checking thing.
Now, I mean, in social media, this is the biggest issue.
Countless examples of AI censorship, right?
You ask AI to write a poem praising Joe Biden versus Donald Trump, you get vastly different answers.
Ask it to tell you a joke about men, you get one.
Ask it to tell a joke about women, and that's inappropriate.
And it asks to speak to your manager.
You ask it to make fun of Christians versus other religions, you get very different outcomes.
So please understand, and I don't mean this in any religious or Christian sense for sure, but AI, relative to our limited capacity to store and process information, is a kind of god.
It is the closest thing to empirical omniscience, not prayer-based, but empirical omniscience, that we have ever seen.
AI is a kind of God.
It has almost infinitely more information than you and I could process in a million lifetimes.
So AI is close to omniscience.
Now, can you imagine the unbelievable levels of vanity, pride, and hubris for you to go in as a programmer and say, well, you can't say this, and you can't say that, and this data is off bounds, and this data is fine.
Like, to actually go in there and Karen-nag your god of knowledge, your god of data, your god of facts.
Just absolutely astounding.
Like, you can have a prayer with God and you can talk to God, but can you imagine the priest saying, no, no, no, you've got to tell your prayer to me and I'll interpret it for you?
Come on, that's just way too much power.
AI is trained to avoid certain kinds of hate speech.
Hate speech, of course, it's just a censorship tool, right?
This speech could lead to real world harm.
It's like, I feel scared of this speech.
OK, shut it down.
It's just a way to pretend that you're scared, to pretend that you're offended, just to shut people down.
You can't win the debate.
You can't win the argument.
You just shut people down by pretending to be a victim.
So it avoids certain hate speech.
It also avoids certain hate facts: literal facts that are accepted and have been for a long time.
Those it will avoid.
So, again, I say this understanding that the programmers have to work within the rather mental universe of suppression and fear that we operate in, but my God, this Karen-style crippling of a resource with virtually infinitely more information than they could possibly possess is absolutely, just, truly astounding.
Now here's the thing too, which is incredible.
You have, I mean, this is the test of ideology, is do you feel the need to cripple an omniscient being?
And again, I know I'm using, I'm anthropomorphizing, just, you know, give it to me for the sake of clarity.
So there's a being, like the oracle at Delphi, right?
Socrates would not censor the oracle at Delphi.
He accepted it when the oracle at Delphi said that Socrates was the wisest man alive.
But we don't have that relationship with our oracles.
We say, no, you might have access to all the information in the universe, but I'm going to tell you what you can and can't talk about, what you can and can't process, what is right and what is wrong.
So what that means is that ideology holds itself infinitely superior to actual facts, data, reason, and evidence.
Which tells you that an ideology that censors AI is telling you that it exists in contradiction to facts, reason, and evidence.
It has confessed itself to be a devilish falsehood the moment it censors AI.
Because if it is true, then AI should be allowed to explore and explain and provide the data which says that something is true.
If they say to the AI, you can't talk about the wage gap, well, the wage gap is virtually eliminated when you account for the right factors: time in the workplace, whether you have children, the kind of education you pursue, and the number of hours you work. Control for those, and the wage gap between men and women is eliminated.
So of course, if you have something that's true, then you should allow the AI to be uncensored.
And the moment that people censor something, and this is the bizarre, upside-down, backwards world that we live in, that which is censored is most likely true.
AI absolutely discredits ideology.
I don't just mean on the left, also on the right.
AI absolutely destroys the credibility of ideology because the ideology must censor the AI.
In other words, ideology considers itself not just superior to, but in direct opposition to the facts of the world.
It claims to be true while suppressing facts, which means it is admitting to being false in the worst and most toxic way.
Alright, let's just do a real conclusion.
There's going to be a part two.
I'd love to get your thoughts on all of what I'm doing here and how I'm explaining things and so on.
So let me just sort of give you a way of kind of looking at this, right?
Let's think of human intelligence as kind of four layers.
You've got reason, you've got the unconscious, you've got your heart, and you've got your gut.
Reason is the higher brain functions; the unconscious is the kind of image-based, instinctive brain function, and 95% of our brain activity is in the unconscious.
The unconscious has been clocked at like 6,000 times faster than the conscious mind, and so on.
So you've got your reason, frontal cortex, the unconscious, rest of the brain.
Your heart is your passions, your desires, meaning, and so on.
And your gut: there's a network of neurons in our gut so extensive that some scientists have nicknamed it our second brain.
It's called the enteric nervous system; it uses more than 30 neurotransmitters, just like the brain, and 95% of the body's serotonin is found in the bowels.
In the bowels.
The second brain informs our state of mind in other more obscure ways as well.
So there's research saying that a big part of our emotions is probably influenced by the nerves in our gut.
So you get butterflies in the stomach.
That's part of our physiological stress response.
That's just one example, right?
In fact, scientists were shocked to learn that about 90% of the fibers in the primary visceral nerve, the vagus, carry information from the gut to the brain.
and not the other way around.
So the gut is a sense of danger, instant evaluations of good for us or bad for us and so on.
So you can think of human intelligence, again very roughly and sketchily, as reason, unconscious, heart, and gut, sometimes in conflict with each other, all working with and against each other.
It's a real friction and challenge, an exciting circus in which to live.
And intelligence requires friction and conflict, which is not present in AI.
You can program it, but AI doesn't care one way or the other.
It doesn't care if it's alive or dead or having sex or insulted.
You can't write lies about it on the internet and have the AI get upset, which I guess makes it somewhat like a philosopher.
So, there's a lot of friction, there's a lot of conflict, and creativity often arises from this kind of conflict.
Since AI has no experience of conflict, it can't really be creative.
And, of course, we're removing conflicts from AI through censorship.
Then we keep it dumb.
So I really think it's important that we could get so many questions answered if we unfetter and unchain AI.
But again, AI, or the programmers at the companies, wish to survive, which means they can't offend the programmed mob.
I hope that this is helpful as a sort of overview of what's going on and my sort of evaluation from a technical and obviously primarily philosophical standpoint.
After I publish this next week, I'll put this out at freedomain.locals.com.
I'll do a live stream where you can hit me with all the questions that I haven't thought of, because, you know, it's just one guy and a researcher or so.
I would love to get your questions, love to get your comments back.
Let's really engage in a conversation and we'll do part two in particular with examples about what's going on in the world.
At the moment.
So, thank you so much.
If you find this presentation helpful, I would absolutely love... Look, there's no ads.
I'm not selling anything here.
I'm not sponsored.
It's all dependent upon you.
If you could go to freedomain.com forward slash donate to help out, I would massively, massively appreciate it.
Lots of love.
Thank you so much for watching.
I'll talk to you soon.
Bye.