Episode 2123 Scott Adams: Kari Lake Update, Depression, Replacing Teachers, AI To Spot Fake News?
My new book LOSERTHINK, available now on Amazon https://tinyurl.com/rqmjc2a
Find my "extra" content on Locals: https://ScottAdams.Locals.com
Content:
Curing depression with magnets
Why teachers should be eliminated
Teaching AI to spot fake news
Talking to NPCs
Kari Lake's new evidence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure.
---
Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support
And if you'd like to take a moment of silence on your end.
Okay, that's good.
That's all you need for now.
We don't want to spend it all on the first thing in the morning.
You want to take lots of moments of silence during the day.
Now, if you'd like your Memorial Day to be the most special and, oh, let's say, respectful one ever, there's something you need that you haven't done yet.
And all you need is a cup or a mug, a glass, a tankard, chalice, or stein, a canteen, jug, or flask, a vessel of any kind.
Join me now for the unparalleled pleasure of the dopamine of the day, the thing that makes everything better.
This one's for our fallen soldiers.
For Memorial Day.
And it happens now.
Go.
Ah.
Delightful.
All right.
In no particular order, my favorite story of the day was the family that claims they adopted a six-year-old daughter, or thought they did, from Ukraine, but they believe they got an adult who's pretending to be a six-year-old girl.
That's not the interesting part.
The interesting part is that this person that they believe is an adult pretending to be a six-year-old, but other people aren't so sure, she might actually be a kid, but she's also trying to kill them.
Yeah, on several occasions she tried to poison them and stab them and trick the kids into running into traffic.
So she's a murderess, potentially an adult, pretending to be a six-year-old who got adopted from Ukraine.
Now, if you were going to try to make an analogy of the Ukraine war and our involvement in it, could you come up with anything that would be better than the six-year-old girl who might be an adult who might be trying to kill every member of the family?
I'm just saying that sometimes the reality hands you a gift of an analogy that's so perfect you don't have to add anything to it.
Now part of the story is that if the person is an adult masquerading as a child, then it's because there's some kind of dwarfism disease or condition involved.
I don't know what you call it.
But if it's an adult, it's not a very high-functioning one.
And if it's a child, it's weirdly advanced in some ways, such as sexually.
So that's a lot of trouble there.
I don't know what's involved in unadopting somebody, but I would be unadopting as quickly as I could.
All right, I have an idea for fixing the cities.
You ready?
Idea for fixing the cities.
The federal government should make it illegal for elected city officials to spend money.
Just couldn't do it.
You have to take the money spending out of the elected officials' hands for the cities.
The reason is that it's all corrupt.
You don't fix anything unless you have control over where the money moves.
Right now it's just going to corrupt family members and the hiring of cousins and stuff like that.
So I'm not positive, but my brief experience of trying to deal with a Democrat inner city, because I had some experience trying to do that, is that you actually can't do anything unless you bribe them.
Did you know that?
If you're trying to work with a city, trying to get them to do anything, they will ask for a bribe.
They'll do it directly.
Now it'll usually be: you know what would be great is, I've got this project to try to build a park.
You know, if you could help fund that park, probably those things you're asking for might happen as well.
If you could just fund that park.
So I think you have to take funding, all kinds of funding and spending away from the local officials.
Because the odds of them being corrupt are near 100%.
It's near 100%.
So you just have to take that power away from them somehow.
Or have some kind of an oversight thing that you trust.
I don't know how you design that.
All right.
So are you all surprised that at the last minute they came up with a debt ceiling increase that saved the country?
Big surprise?
Everybody?
Totally surprised, because it's exactly what I told you the first minute the story happened: they don't get serious until the last minute because they don't have to.
And then the last minute they look at their poll numbers and they decide how bad it would be.
And then they make a deal because they have to.
That was never real.
Yeah, that whole story was never real.
They were always going to do exactly what they did.
And then we'll argue over who made the worse deal.
But of course, that will be aided by the fact that we will not be informed what the deal is.
And then it will be put into some form that you can't discern it because it'll be too long.
So Thomas Massie was talking about the, I don't know, 5,000 pages it might end up being, and he suspected Congress would have 72 hours to look over the 5,000 pages of detailed complexity to decide whether to vote for it.
We have come up with a system that's a confusopoly, meaning that the only way anything happens is if people aren't sure what it is they're doing.
Because if you were sure you knew what you were doing, you would object to it.
So you have to reach a situation with a system where neither side is quite sure what happened.
I feel like we got some stuff, and I feel like the other side got some stuff.
I have a vague understanding of what our stuff was and what their stuff is, but I don't really know how to compare them.
Because I only have these two vague ideas of who got what, so I can't even argue because I don't know if the other side did better than us.
It's confusing.
So we have a system that requires nobody understand what's happening, or it wouldn't happen.
Think about that.
The system requires that not only the public, but the people voting for the bills not quite understand them.
Because if they did, they would have specific things to complain about.
But if they don't quite understand the bigger picture, they can't get any traction to complain.
So the only way you can get rid of the complaints is to make it too confusing for anybody to participate in a meaningful way.
That's our system.
Our system is to remove humans from the process by making it too complicated for them to participate.
Is that correct or no?
Does that characterization capture what we're watching every time they go through this process?
It's exactly that.
They make it too complicated so humans can't productively get involved.
Because you'd never get anything done if humans were productively involved.
They have to create a system which is absurd by its design.
It's intentionally absurd.
Because that's actually the only way anything gets done.
So Thomas Massie on his tweet about, you know, they'd have 72 hours to look at 5,000 pages of complexity.
And so I suggested they ask AI to summarize it for them.
Let that sink in.
What if it could?
Well, what if you could just turn AI loose on it and say, all right, here's the summary.
I'm going to summarize this in plain English so you can see what's happening.
Well, the first problem is AI might not do it right, right?
Because AI is a little biased and sometimes it makes up stuff.
So it could actually just make up some shit and say, oh, this is in the bill, there's a save-the-koala-bear part of the bill.
At which point you'd say, oh, that's not even in that bill.
So that's a risk.
But I suppose we could probably fact check that pretty quickly and know what's in and out.
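If you wanted to picture how that could even be attempted, here's a minimal sketch of the usual chunk-and-summarize approach. Everything in it is an assumption for illustration, not any real product: the chunk size, the prompts, and the summarize() helper are stand-ins, since no model reads 5,000 pages in a single prompt.

```python
# A rough sketch of the "turn AI loose on the bill" idea, not anyone's actual
# tool. summarize() is a hypothetical stand-in for a chat-model call, and the
# chunk size and the two-pass structure are illustrative assumptions.

def chunk(text: str, max_chars: int = 12_000) -> list[str]:
    """Split the bill into pieces small enough for one model call each."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a placeholder so the sketch runs."""
    return "[model summary would go here]"

def summarize_bill(bill_text: str) -> str:
    # First pass: summarize every chunk of the bill independently.
    partials = [
        summarize("Summarize this bill section in plain English:\n" + piece)
        for piece in chunk(bill_text)
    ]
    # Second pass: condense the partial summaries into one plain-English
    # overview. This is also where any hallucinated "save the koala bear"
    # provision would need to be spot-checked against the actual text.
    return summarize(
        "Combine these section summaries into a short plain-English overview "
        "of the bill, noting who gave up what:\n" + "\n".join(partials)
    )
```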
But what if it worked?
Well, what if AI could actually tell you, in a real summary way, even who did a better job in negotiating?
That'd be really dangerous.
But suppose it could just tell you what the bill is, so you wouldn't have to read it.
They would have to ban it.
AI would be banned from that use, because it would allow you to understand what's in the bill, and that would break the whole system.
That's not a joke.
If you understood what was in it, it couldn't get passed.
That's just true.
So, we keep finding these examples where even if AI could solve what you think is the problem, it's not the real problem.
The real problem is that we can't deal with decision-making.
We don't have any capacity to do that.
You know, beyond a certain level of complexity and self-interest, we don't have any capacity.
So, we would have to ban AI from summarizing legislation because the net effect is we would understand it.
Just think about that.
That's a real thing.
I mean that entirely seriously.
You could not have a good understanding by the public of the government's operation without breaking it.
And then what do you do?
All right.
Jane Fonda has helpfully suggested that the real problem of the world is white men.
And that the white men, the patriarchy, they're responsible for racism and, of course, climate change, indirectly.
Well, directly, I guess.
And her suggestion is that maybe all the white men should be rounded up and put in jail.
Arrested and jailed.
So, got that going on.
You know, I thought about, oh, I'm going to add my layers of interesting commentary on top of that.
What the hell could I layer on top of that?
Is there anything that could be funnier or more ridiculous or more absurd or more of a sign of the times?
Oh, let's do our little thing where we reverse the ethnicities of the people involved.
So, let's see.
Jane Fonda said that all the problems are black men and they should all be arrested and jailed.
Now, she didn't say that, but I wonder if she could have.
No, of course she couldn't.
Of course she couldn't.
All right.
I saw the Khan Academy founder.
I guess his name is Khan.
And he was talking about how AI is going to change education.
It's going to be such a radical change that basically everything will be different in about a year, probably.
This is probably the best thing that AI will do, unless it gets outlawed, which is possible.
Because there's no possibility that AI won't do a better job than real teachers.
Am I right?
Now, maybe not on day one.
On day one, maybe the best human teacher is still better than the best AI.
But there aren't that many of those.
Whoever is the best human teacher is top 5%.
Not many people.
So, for most people, The AI would be much better.
Now, I saw some pushback and people said, Scott, you saw what remote learning did during the pandemic.
Obviously, remote learning doesn't work.
We never tested AI remote learning.
We tested humans.
And the human was one person with a crowd of a bunch of people who weren't being supervised while watching.
If you had a one-on-one class where your AI is designed to be exactly the character that your child wants to listen to, you know, if the kid is five years old, they give it, you know, Barney or Pikachu or some character to teach them English, teach them, you know, ABCs.
But if they're a little older, they're teenagers, maybe the boys want a female in some cases, maybe the women want a female too.
There might be some types of teachers that just work better for some personality types at certain ages.
So if you had exactly the right AI that looked and acted human, but it was exactly the one you were willing to listen to, do you think you'd learn more?
If you could actually interact, and they could learn about you, and they could ask how your day went, and they would know what you did when you weren't with them, and they could follow up on it, and all that stuff.
I don't know.
But there is one wild card here that I don't know how to value.
And it goes like this.
If you tell me, Scott, you have to go exercise, and you say, you have two choices.
You can do it in your home gym all by yourself, or you could do it at a gym where everybody is exercising at the same time.
Only one of those is going to get me to exercise enough.
And believe me, I'm struggling with this because I quit my gym.
I absolutely can't work out at home as much.
I just cannot put in the time because I'm bored.
I'm bored.
And I need the energy of the other people.
I need the peer pressure of the other people to lift me up.
That's why I belong to a gym.
That was the main reason I belonged.
And it's the same with school.
You put me in a human school situation where I'm looking around at my competitors.
And I saw them that way, by the way.
I saw the other students as my competitors.
And I'd be like, game on.
Give me that test.
I'm going to see if I can beat all these people.
And so the peer pressure, the feeling of other people doing it at the same time, it would put me in the right energy for doing the things the school wanted me to do.
Now you put me at home on my own schedule and I turn on this thing and the AI is perfect, it knows everything, it's a great teacher, but do I have the right energy?
Do I have the right energy to want to sit there and listen to a machine for six hours per day?
I feel like the human part can't be removed without removing the incentive, the shared incentive, the slop over feeling from your fellow human beings.
I think you need the slop over feeling from other people to get some stuff done.
So that's what I worry about.
Now, there might be a way to fix that by making sure that when you use your AI, there were other people in the room and other students.
You met with your other students every half hour to do a project or something.
I don't know.
There's probably some way to get the humans back in there.
But if you take the humans out, I just worry that your normal biology won't rise to the challenge.
All right.
I can tell you that I had the hottest French teacher in the history of French teachers, and if you don't think I enjoyed going to that class every day, and that it made me study harder, well, the French teacher, Ms. Rawson, I remember her quite clearly, was so insanely attractive that you basically could barely concentrate.
I got an A in French.
And when my sister took the same course a few years later, I was used as an example of somebody who could get an A in a language course without being able to speak any of it.
I'm the worst French speaker, but if you give me a test with multiple choice or something, I can pretty much ace it.
So I had an A in French and couldn't speak a sentence.
Except, you know, Où est la bibliothèque?
That was about it.
All right, but I like the class.
So here's the good news.
If AI replaces human teachers, we will have a way to get rid of the biggest source of systemic racism in the country.
Imagine, if you will, that a poor black kid in an urban area could get an AI teacher that's as good as everybody else's AI teacher.
No difference.
Just as good.
For the first time ever.
In a related point, can you think of any profession in which the profession as a whole has failed so spectacularly over the last, let's say, 10 years?
I don't know.
I keep reading that the number of specifically black students who can read at grade level is something like 12%.
Or in some places, zero.
Zero!
Zero?
There's not a single person who can read at grade level?
Can you think of any situation where you could keep your job with those statistics?
The future AI teachers unions.
Yeah, that could happen.
Yeah.
Well, so maybe if we get rid of teachers, we could get rid of teachers unions, and then maybe black Americans have a chance in the United States.
Because otherwise, you're just going to get teachers that either won't or can't do the job.
I've never seen a profession that failed thoroughly, like just completely, and they still have their jobs.
What's up with that?
Yeah, I think that the wokeness problem, the falling behind other countries problem, pretty much all of it is related to our bad teachers, which is related to the bad teachers' unions.
All right.
There's a study using magnetism to try to cure depression in some types of people.
So they say it won't work for all types of depression.
But it's so weird what they did.
They used magnetic technology to, quote, reverse the flow of the neural stream from one part of the brain to the other part of the brain.
How wild is that?
So they know that the normal brain of a depressed person, again, not every depressed person, they're different, but for some category of depressed people, they can identify with imaging that there's too much neural activity going from one part of the brain to the other.
So they can actually hook a magnetic device to you and reverse the flow of the neural traffic.
And it makes you feel good, because it makes you feel the way regular people do, because that's the direction their neural pathways go.
And I guess you can train it after a while to not need the magnets so you can actually just reverse it and cure people in a fairly short amount of time.
Now here's what I love about that.
Suppose it's true that we can identify the flow of neural, whatever it is, neural streams, I don't know.
Neurons?
I don't know exactly what's flowing, I guess just the electrical signal is the thing that's flowing.
But just the signal is flowing, right?
It's not physically anything, it's the signal that's flowing.
So, what if we can identify all of the major neural pathways that are normal, and we can watch them flow.
And what if this technology improves so that we can target any one of those neural pathways and reverse or slow down its flow?
You could actually physically reprogram a brain.
And they already are.
You can physically reprogram a brain.
Physically.
And do you know how you reprogram a computer?
Same way.
That's how you reprogram a computer, with magnetic technology.
If you want to put something on a hard disk, you change it magnetically.
It's the same damn technology we're using for computers, just optimized for a brain that's moist instead of silicon that's dry.
That's the main difference.
But the future that you can imagine, where they can change the direction of your neural signals, that's like insanely exciting, if we do it right.
Yes, I am reprogramming your brain right now, Ian.
Can you feel it in real time?
So that's kind of exciting.
I'm giving you all the good news ahead kind of stories.
Question, what would happen if AI learned how to identify fake news?
What would happen?
Because again, we are a nation that's held together by fake news.
Everything that we do is pretty much based on something that wasn't true.
Now, sometimes it's good for us, and sometimes it's not, but it's all based on lies.
What if AI told you what the lies were as they were happening, and you came to trust its judgment?
Well, the first question is, how well can it do?
And I would argue that I could already train it with a super prompt.
Now a super prompt is basically asking it a question but adding a lot of caveats to the question so you prevent the AI from going in the wrong direction.
It's a way to constrain the AI's answer to the most useful answer.
So a super prompt could be, for example, Hey AI, the following rules have been known to identify fake news.
Not with 100% accuracy.
Now, everything I'm saying right now, this entire long form of everything I'm about to say, would be part of the prompt.
Because AI doesn't care if the prompt is a page long.
Because it can handle the complexity.
So you can put stuff in there, hey AI, I want you to see if you can identify some fake news.
And here are the tips for how to do it.
Number one, if both sides of the political spectrum report it the same, it's probably true.
If one side reports it but the other side says it's not true, its credibility goes down 75%.
So that's just a rule.
Now, everything would be based on the odds.
Yes, no.
So if you see this situation where one side says it's true, the other side says it's not, reduce the credibility by 75%.
Or whatever.
I'm making up the 75%.
Next rule would be if it's too on the nose.
In other words, if it fits an existing narrative too cleanly, assume that it's probably artificial.
Now, I don't know how you'd get it to understand what a narrative is or whether something fit it.
But that's what the AI is supposed to do, I guess.
How about this?
As part of your question, if you see that they're trying to prove a negative, that's a sign.
There were no election irregularities because we didn't find any.
If the story is presented to you as proving a negative, we proved it doesn't exist by not looking in the right places, then your credibility should be lowered.
How about if you recognize the players from past lives?
For example, if there's a new topic tomorrow, and the names that you see the most are Adam Schiff, John Brennan, and Clapper, James Clapper.
If you saw those three people on the same side of a story, you should discount its credibility based on experience.
If the story comes from Maggie Haberman, you should put a different odds on the credibility of that story than maybe a story that comes from Glenn Greenwald, if you know what I mean.
If you catch my drift, you're going to give Matt Taibbi a little bit higher credibility than Stephen Collinson and CNN.
They're not equals.
Some are known propagandists and some are known independent journalists who are actually just trying to get the story.
So you could tell AI which people to trust based on experience.
You could say if they're politicians, don't believe anything.
So he says I'm being anti-Semitic.
Okay.
Here's another one.
If the claim is outrageous to the point of absurdity, it's probably not true.
Because in the real world, that's always the case.
Now, sometimes things are absurd, but it's so rare that the credibility of a story should be lowered if the claim is amazingly absurd.
Oh, I think an alien came and ate Russia.
Right?
If you heard that, and you'd say, I feel like I would have heard if Russia disappeared.
So that's like a kind of a crazy claim.
So could AI understand what is absurd?
Do you think AI could tell an unlikely story from a likely story based on just knowing what is average and normal?
Maybe.
I think it could.
How about if it's still the fog of war?
Suppose it's early in a complicated story.
Should the AI then discount the story as too early to know?
Of course it should.
I do.
That's what I tell you every time we have a new complicated story.
I always say the same thing.
Everything that's being reported is in doubt.
Just wait for it to settle.
So AI could learn that.
Do you think that the AI could identify something that fits a narrative?
Could it identify a narrative and then get a fact that fits it too closely?
Do you think it would be able to do that?
I think it could, maybe.
All right, so what would happen if you got to the point where somebody built an AI fact-checker, or they built a super prompt that's basically embedded with all the little tips I just gave you, and more.
So you'd add some more tips for spotting fake news.
And then the AI says, I will rate this news story as 30% credible.
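Just to make the idea concrete, here's a minimal sketch of that kind of super prompt written down as code. All of it is an assumption: the rules and the 75% figure are the ones improvised above, not calibrated values, and ask_llm() is a placeholder for whatever chat-model API you would actually call.

```python
# A minimal sketch of the fake-news "super prompt" described above.
# The heuristics and weights come from the episode, not from any tested system.

FAKE_NEWS_RULES = """
You rate the credibility of a news story on a 0-100 scale. Apply these
heuristics, then output a number and a one-line reason:
1. If outlets on both sides of the political spectrum report it the same way,
   treat it as probably true.
2. If one side reports it and the other side says it is false, reduce
   credibility by roughly 75%.
3. If the story fits an existing narrative too cleanly (too on the nose),
   assume it may be manufactured and lower the score.
4. If the proof amounts to proving a negative ("we found nothing, therefore
   nothing happened"), lower the score.
5. Weight the track records of the named reporters and officials involved.
6. If the story is recent and complicated (fog of war), cap the score low and
   say it is too early to know.
"""

def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-model call so the sketch runs on its own."""
    return "30% credible: single-sourced, fits a narrative, still fog of war."

def rate_story(story_text: str) -> str:
    """Assemble the super prompt and return the model's credibility rating."""
    return ask_llm(FAKE_NEWS_RULES + "\nStory to rate:\n" + story_text)
```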
What would happen to AI if AI was known to actually be really, really good at spotting fake news?
It would be illegal.
It would have to be illegal.
Because again, we do not have a system that can survive real news.
We can't survive the truth.
You know, maybe if we could invent some new system, there might be a different kind of system that can survive accurate news.
But we don't have one.
It would be hard for the AI to spot missing context.
Would it be hard for the AI to spot missing context?
I don't know.
Maybe hard, but not impossible.
Yeah, because it's hard for people to do it, right?
How often have I told you, I've got one of these dogs not barking stories, and when you hear it, you say, oh shoot, that's true.
If this story were true, why would we not be hearing about this other thing?
Why would there be total silence on this other area?
I feel like AI could sometimes get it.
But one of the things that makes people come to watch this show is that I can do it more often than other people.
Meaning I can spot the missing part more reliably than some other people paying attention.
Just because I'm looking for it.
I'm actively looking for the missing parts.
But maybe AI can learn that.
All right.
In case you think that the news is a brand new, wonderful spectacle every day, as opposed to the Groundhog Day situation that it really is, where the news just repeats over and over again.
Sometimes the dates and the names change, but it's the same news.
Would you like to hear some news that you're pretty sure you already heard before and you will hear again?
Just the names and the details will change.
All right, here it comes.
According to Kari Lake, they found definite strong evidence of irregularity in the Maricopa voting.
Does that sound like any story you've ever heard before?
It's Kraken version 56.
There's the Kraken again!
Okay, no, that's not the Kraken.
But wait until you see what we got next.
I'll agree, those other Krakens were not really the Kraken that we hoped.
But man, we're Kraken now with our Krakens.
Our Krakens are so Kraken.
They're just crispy Kraken.
Kraken.
Well, I'll say the same thing that I say every time this story comes up.
Because you know it won't be the last time either.
The claim is, let's say, captured in documents and captured on video, and the evidence is unambiguously supportive of the claim.
It's kind of complicated, but it has to do with when some of the county machines were tested.
And there's some irregularity in the timeline, and some things clearly were not according to process.
That's the claim.
Now, what do you think will happen when all the debunkers and the people who know how stuff works weigh in?
Do you think the officials who manage the elections are going to look at this evidence and say, whoa, good point.
We never would have caught this.
Thank you, Kari Lake, for bringing this to our attention.
But it does appear that some bad actors have reprogrammed some of the counting machines in an inappropriate way that could have changed the results.
Do you think that's going to happen?
I don't think so.
I don't think so.
I think it's going to be like every other time, you will see a fact check or set of fact checks that you don't believe, but they will be the official fact checks.
And the official fact checks will say, well, it looks like it's sketchy, but that's because you don't understand that the process normally does this.
And yes, you might think that this part of the process is suboptimal, but if you check the statutes, you'll see that's allowed.
Oh yes, this does cause some confusion, and possibly this would be an area that somebody could have cheated, but we see no evidence that they did cheat.
Only evidence that it was possible.
You know the whole story.
And then if something ever gets to court, you know how that ends.
Oh yes, you made your point, but this court does not judge that point.
Or we don't have jurisdiction, or you don't have standing.
Yeah, the claim might be true, we're not going to even judge it, because you don't have standing.
Right?
You know exactly where this is going.
It's a big, complicated claim with lots of documentation and video that seem to support the claim.
We won't understand the details of it, so people who want to believe it will believe it, people who don't want to believe it will say it's bullshit.
Then it will go through this whole checky arrow phase by the other side, and it will be all confusing.
Because I read the story, and it's almost incomprehensible.
It's almost incomprehensible.
And if Kari Lake would like some advice on communication, you need to get this on one page.
She's got like an extended video showing all these documents, and parts of the documents are highlighted, and then there's a timeline with all kinds of shit on it.
I have no idea.
It's something like this.
Something like this.
This is not it, but it's something like the machines were tested before the election, but then they were altered after the test.
Is that close?
Has anybody figured out what the claim is yet?
That some machines were tested, but then there is documented and videotaped and, you know, incontrovertible evidence that they were reprogrammed again after the test.
Right?
They were altered after they were certified.
Right.
So they were certified and then altered.
But the way they were altered is going to turn into this.
Well, where's your proof that the altering changed the result?
Well, we can't prove that.
We can only prove that it could have happened and there was no reason to do what they did.
Yeah.
Yeah, well, they say there's a reason.
Yeah, but there isn't a good reason.
Yeah, but, you know, they say there is a reason.
But it's not a good reason.
But they have a reason.
And then they'll say, where's your evidence that they did, in fact, use this opening to change the result?
Well, we don't have that, but it was illegal.
So you should overturn it because it was illegal.
Do you think the court is going to overturn an election that's been certified for two years?
Or whatever it is by the time they get around to it?
No.
No, the courts won't.
And I'm not sure this is wrong, by the way.
I don't know that this would be a bad move by the courts.
Courts like stability.
They like the system to be stable.
All other things being equal.
So the most stability they could add to the system would be to not overturn the election.
So I think the courts would find a reason to not overturn the election.
Worst case scenario, or the most aggressive thing they might do, is say, you know, you ought to look into changing that.
You've got a process there that maybe you ought to keep out of the courts.
Maybe modify that a little bit for next time.
That's it.
And then do you think they'll modify it for next time?
Of course not.
Of course not.
Why would they?
So this is one of those stories where you can pretty much judge from the very first word where it will go.
And it has nothing to do with whether the claim is valid or invalid.
That I don't know.
But regardless of whether the claim is valid or invalid, it will go down the invalid pathway because that's the only one there is.
There's not a pathway for a valid claim to go anywhere.
Is there?
Except to conspiracy theory books that some people believe and other people don't.
So I think nothing will be done.
But they might have found the footprints of the Kraken.
But the footprints don't prove the Kraken.
They just might have some footprints.
All right.
NVIDIA has a new technology that lets gamers talk to the NPCs in their games.
Now, you could probably do that for a while now.
But now the NPCs will have full AI kind of conversational talents.
So they can now make an NPC, a non-player character, in a game that will look and act like a person.
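To picture what "full conversational" NPCs means in practice, here's a toy sketch of the general pattern. It is not NVIDIA's actual system; the persona text, the memory window, and the generate() helper are all stand-ins for illustration.

```python
# A toy sketch of an NPC whose lines come from a language model with a persona
# and a short memory, instead of a fixed dialogue tree.

def generate(prompt: str) -> str:
    """Stand-in for a real chat-model call so the sketch runs on its own."""
    return "Welcome back, traveler. Still looking for the old lighthouse?"

def npc_reply(player_line: str, persona: str, memory: list[str]) -> str:
    prompt = (
        f"You are {persona}. Stay in character and answer in one short line.\n"
        "Recent conversation:\n" + "\n".join(memory[-6:]) +
        f"\nPlayer: {player_line}\nYou:"
    )
    reply = generate(prompt)
    # Remember both sides of the exchange so later replies stay consistent.
    memory.append(f"Player: {player_line}")
    memory.append(f"NPC: {reply}")
    return reply

# Example: a shopkeeper NPC that can hold an open-ended conversation.
history: list[str] = []
print(npc_reply("Any rumors in town?", "a grumpy harbor shopkeeper", history))
```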
But you don't live in a simulation, do you?
No, this isn't a simulation.
Of course not.
How much longer can we watch our technology build simulations before you realize that you're in one?
How much further can we push this?
Has anybody flipped over to believing that we're a simulation because of AI's success lately?
Especially the deep fakes.
Is there anybody who changed their mind recently?
I see one yes.
Mostly no's.
Alright, so here's my speculation.
My speculation is that we will someday come to the belief that we are a simulation.
In other words, I believe that will be a common understanding at some point.
But, to get there, we will have to get closer and closer to creating our own simulation with characters who believe they're real.
That we can observe and then watch them acting just the way we do.
You'd be looking at them and you'd say, wait a minute, they just formed a religion.
You're gonna watch them do everything we would do.
And then, you will know you are a simulation.
Admit you believe in God.
Well, I believe that we are created by a higher power.
But probably somebody who's not that different from us.
And I also believe that humans have now achieved God-like powers.
That's right.
Humans now have God powers.
Because what is a God power?
God power would be to create a universe, right?
Well, we can do that now.
We can do that.
You can create a simulated universe.
The God power is that only God can make a tree, right?
Well, we can do that.
I can make a tree that the characters in my simulation believe is a tree.
And they would have no evidence otherwise.
So, we have now achieved God-like power.
However, here's the irony.
It doesn't apply to our own existence.
It's a God-like power that we can only apply to the world that we create for that purpose.
But we can't use those powers in our own world.
Unless the affirmations are that power.
Sometimes I think we can.
All right.
We have an insider report on Trump's opinion of Tim Scott.
And there's a quote that could be fake news because it comes from an anonymous source, but it sounds true.
And the quote was allegedly from Trump about Tim Scott.
Quote, I like him.
We're just going to say nice things about Tim.
Now, let's see if there's a parallel here.
So Ron DeSantis was pro-Trump, but then decided to run against him, and then Trump called him disloyal.
Is that correct so far, right?
Then Tim Scott, who has worked productively with Trump, enough so that Trump still likes him, decides to run against Trump, and Trump does not call him disloyal.
Foreshadowing.
Follow the foreshadowing.
Let's see, one person is disloyal, and the person who did the same kind of similar things, not disloyal at all.
Nice guy, they like him.
I feel like your next vice president is lining up.
At the very least, Trump has him on his shortest of the short list.
Would you agree?
At the very least, Tim Scott is on the shortest of the short list.
He's top three, no question about it.
And then, who's the other gentleman?
Donaldson.
Is it Donaldson?
No, Donaldson, right?
The other Trump supporter from, Byron Donaldson, right?
And Byron Donaldson's been getting, Byron Donalds, I'm sorry.
Byron Donalds, that is the correct name.
And he's been getting a lot of attention just by being good at his job, meaning he's communicating really well.
And he's presenting his side and his team's side of the argument in ways that people were finding compatible with their own thinking, let's say.
So he's a really strong personality.
But I don't know that he's as seasoned or as credible as Tim Scott.
Because, you know, you like a senator better than a congressperson, right?
Yeah, and you don't want to have too many Donalds on your ticket.
I mean, it bothers me enough that there might be a Scott on the ticket.
That's going to bug me enough.
Yep, Tim's a bachelor, but we don't care.
Which is actually another advantage.
I would love, I hate to say this, but let's leave Tim Scott's personal life alone.
What do you think?
Do you think that's something we should do?
Just leave Tim Scott's personal life alone.
Let it just be whatever it is.
Whatever it is, it is.
Because conservatives can't be consistent with their own philosophies if they get too interested in that, right?
If you get too interested, you're not being consistent with your own principles.
You should be aggressively uninterested in that question.
Aggressively uninterested.
And let's see if we can maintain that.
I feel like that would be the consistent way to be a conservative.
To be aggressively uninterested in somebody who's minding their own business and being a good patriot.
What else do you want?
Doesn't bother you.
Good patriot.
We're done here.
All right.
And by the way, I'm not making any assumptions about Tim Scott's private life.
That's part of the deal, too, is I don't even have to make an assumption.
I can just say none of my business and get on with it.
All right.
Did you see a video from Dylan Mulvaney?
Dylan was talking about how he informed his parents that he might be interested in women.
And then Dylan talked about the transition: I believe Dylan claimed to be gay, and then claimed to be queer, then claimed to be non-binary, and then claimed to be trans, and now is questioning whether she likes women.
So, does it feel to you that this is more about theatre and less about somebody's personal decisions?
It's just theatre.
I don't know how we ever came to care so much.
Well, I guess I do now.
None of this was about Dylan Mulvaney, right?
Let's be honest.
When I got cancelled, it wasn't really about me at all.
I was just a vehicle that people could put their feelings in and it would be like a little truck that could take us somewhere.
I think Dylan Mulvaney is the same thing.
Literally nobody cares about Dylan's private life.
Nobody!
But Dylan is, like me, a little truck that you can put your political opinions in and take it somewhere.
You said you thought he was gay?
Well, that's what Dylan says.
Dylan thought Dylan was gay.
And then thought Dylan was queer, and then non-binary, and now Dylan thinks Dylan is trans, but maybe a trans-lesbian?
I don't know.
I'm not sure how that works.
So, in case you're wondering, in the Dilbert Reborn comic that you can only see if you're subscribing on either the Locals platform, where you get lots more than Dilbert, or on the Twitter platform, you can subscribe.
I've got a subscribe button in my profile.
And you can see just the Dilbert Reborn comic, but I'll tell you about an upcoming week: Dilbert's company is going to hire a new head of marketing, who is a woman, and she's in charge of their marketing for the power tools division.
So they're going to get a woman vice president in charge of the power tools division, and her first suggestion will be to hire Dylan Mulvaney as an influencer for their power tools.
So that's what you're missing if you don't subscribe.
It goes great, by the way.
Just goes great.
All right.
Yes, we don't need to make sex tool accusations just because it's power tools.
All right.
Now that, ladies and gentlemen, brings me to the end of my prepared remarks on this Memorial Day.
Is there any story that I missed that you need to get my valuable opinion on?
It took about two minutes for somebody to mention Snap-on tools.
That took longer than I thought.
All right.
UAPs.
You know, I saw Elon Musk on some interview he did recently, where it appears he's become a believer in the pre-ice age advanced civilization concept.
Have you all been watching that?
It's mostly a YouTube thing.
There are a whole bunch of videos of people taking different positions on it, and I guess Elon Musk gave the one example that's in some of these videos: that the skill of writing popped up in different civilizations all over the world at the same time.
And it couldn't have been a coincidence because they didn't have any connection.
And how could humans be evolving for, you know, hundreds of thousands of years and then at just about the same time develop writing skills in different places?
Now the suspicion is that there was some advanced race that was on Earth and got wiped out by the Ice Age changes.
And maybe there were some survivors or some missionaries who, you know, took their knowledge to other places or whatever.
Maybe.
Yeah, Graham Hancock is on that.
So part of that belief is that the Sphinx was not built by the Egyptians or anybody that they could remember.
But they sort of claimed it, because it was on their territory.
And that it might have been some advanced civilization.
Now the pyramids themselves, I think, are a little sketchy, aren't they?
Because it almost looks like the pyramids were built for one purpose, and then the kings used them for another purpose, which is they buried themselves inside them.
It doesn't even look like they were built for the purpose of burial tombs.
It was like they had some bigger purpose.
So we're seeing all these hints that maybe Atlantis was real.
Maybe there was more than one Atlantis.
Maybe they had some advanced civilization.
Now connect that to the UFO sightings we've had.
And the one thing that we hear more than other things is that they must be drones, because the way they move, no human could survive the G-forces, or no organic entity.
So they think it must be all mechanical, because it moves too fast.
And some of them will swim underwater, they say.
So, what do you think is more likely?
There are some leftover drones from an advanced civilization that might be just on autopilot.
They might be either servicing themselves and maintaining themselves in some underground factory that's just operating since the people who built it all died.
It just keeps going.
And it's just doing its routine.
All it's doing is sending out some drones to look at stuff and coming back.
It doesn't have any purpose, but it doesn't know it has no purpose.
That would explain everything, wouldn't it?
It would explain all of the Atlantis missing-civilization stuff, and it would explain why we have UFOs and yet no visits from any creatures.
How could we have so many UFOs but nobody's tried to make contact?
And the most logical explanation is there's nobody to contact.
Isn't it?
If you had to look at all the possibilities, let's say you accepted, I don't accept this as true, but if you did accept as true that there are a bunch of UFOs that are legitimately high-tech objects that are moving in a way that an organic creature could not survive.
I think that's the most likely possibility is that there is some kind of leftover technology from an Earth civilization.
that found a way to keep its drones running and maybe self-maintain them.
We might find someday there's an undersea factory where the robots were repairing themselves and, you know, they got spare parts and for a hundred thousand years they've just been doing their robot thing under there.
And they don't have any reason to contact us, because that was never their programming.
They're not designed to contact anybody.
They're just designed to go look around and make some videos and record them, and then they do it again.
That's Horizon Zero Dawn.
What is that?
Modern Romans have no knowledge of Romans who built the stadium.
But the stadium isn't really amazing.
I think that was more in line with what they knew how to do in the day.
What's a video game?
How does that jive with simulation theory?
Well, whatever it is, this could be the simulation.
Now, in simulation theory, I say the most provocative thing that nobody else says about the simulation.
At least I've never heard it.
And to me, it's a way to prove the simulation.
It goes like this.
If we are a simulation, we don't have a past.
Because we would have been created out of nothing.
So the past just didn't exist at the moment of creation.
But, since in order to feel like we're real characters, we would have to believe we had a past, the simulation would create the past on demand.
So my example of that is if you've never dug a hole in your backyard, there's nothing under there.
You start digging, and as you dig, the simulation fills in the hole.
I mean, it creates the detail that looks like, hey, I found a fossil in here.
Yeah, right.
So you find a dinosaur bone, and then the simulation has to recreate the dinosaur.
But it won't recreate anything until you need it.
So the past is an invention of the current times.
So the question was, how did the theory that there were ancient advanced civilizations, how does that fit into simulation theory?
It fits this way.
There were no advanced civilizations.
We're creating them now.
We are creating them out of our minds and now they're becoming real.
So there will be an Atlantis eventually.
But not yet.
In other words, there will be an Atlantis in our past, because we're in the process of creating it.
But it does not exist in our past, yet.
In other words, the past will be created in our future.
And it's the only way it can work.
It can't work any other way.
And the reason it can't work any other way is that no simulation could put everything in the universe and hold it in its mind and rotate it and keep it consistent with everything else in the universe, because the computing requirement would be too massive.
So instead, what the software does is exactly what a video game does.
It creates a little universe and then it doesn't let you get outside of its barriers.
So, when we see light from a star, we believe that there's an actual planet or sun up there.
But maybe not.
Maybe it's just light.
And there wouldn't be a planet there unless we built a technology where we could follow that light back to its source.
And as we approach the source, the planet or the star would appear in the real world for the first time.
It wasn't there until you looked at it.
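For what it's worth, that on-demand trick is how open-world games generate terrain, so here's a toy sketch of the idea. The coordinates, the seed, and the fossil-versus-dirt rule are invented for illustration, not any real engine.

```python
# A tiny sketch of the video-game trick described above: nothing exists until
# something observes it, and once generated it stays consistent forever after.

import hashlib

class LazyWorld:
    def __init__(self, seed: str = "simulation"):
        self.seed = seed
        self.generated = {}  # only the spots someone has actually looked at

    def look_at(self, x: int, y: int) -> str:
        """Return what's at (x, y), creating it on first observation."""
        if (x, y) not in self.generated:
            # Deterministic generation: the same spot always "was" the same
            # thing, so the past created on demand stays self-consistent.
            digest = hashlib.sha256(f"{self.seed}:{x}:{y}".encode()).hexdigest()
            self.generated[(x, y)] = "fossil" if int(digest, 16) % 10 == 0 else "dirt"
        return self.generated[(x, y)]

world = LazyWorld()
print(world.look_at(3, 7))  # this spot exists only now that we dug there
```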
All right.
God is outside of time.
Yes, that would be similar to a simulation.
So if we were to build a simulation, the simulation's sense of time would not be the same as our time, the Creator's.
We would be outside of their time.
In fact, you could fast forward their time so that they're at ten times the time of the Creator's time, and they wouldn't know the difference.
As long as everything got moved forward at the same rate, their car would still look like it's going 65 miles an hour.
They wouldn't know that it's going 1,000 miles an hour.
They would have no way to know.
Because it's only going 1,000 miles an hour outside the simulation.
There's no preferred speed.
There's only the speed from the perspective of the observer.
Take that, Einstein.
All right.
What about the mycelium theory?
Well, that is how I will power my mycelium drive in my spaceship.
But beyond that, that's all I know.
Do we get to know when we're buffering?
I don't think we would, because if you imagine that we're artificial, if we all stopped at the same time, because the processor stopped, and then we all immediately started at the same time, we wouldn't know that we had ever stopped.
Was yesterday a simulation?
Well, yesterday doesn't have to have been fake, because once you do it, it's real.
Ish.
Why did the simulation rig the election?
Well, all right, here's a real mind blower.
If we're a simulation, which is what I believe, it can be true that the election was not rigged, and also that it was rigged.
And it's like a Schrodinger's cat situation.
So until you can find evidence that would convince everybody that they're looking at the same thing, here it is.
If you can't produce that, then both possibilities exist forever.
But if you got a whistleblower who had, you know, photos and videos and documents and recorded phone calls, then suddenly the reality would collapse and then it would become a rigged election.
But until that, it is neither rigged nor non-rigged, because you don't know.
It is a Schrodinger's Cat situation.
And I genuinely see it that way, by the way.
That's my actual impression of reality, is that that reality has not collapsed.
So it's not a case of whether we can find it or not, because it's not there.
It's a question of whether we can create it.
Can we create a past?
Well, actually, this will be a good test to find out if Kari Lake is a player or an NPC.
If Kari Lake is an NPC, then she will not be able to create the past and the election will stand.
If she's a player, and she might be, I would definitely not rule that out.
If she's a player, she will create, through her own imagination, a reality which will become all of our actual reality, that it was rigged.
And it will be found in ways that you didn't expect.
So that's my theory.
It is neither rigged nor not rigged.
It's a black box.
It's a Schrödinger's cat.
And if we never open the box, the reality will never form into one actual reality.
It'll always have the potential, but there will never be an actual reality of whether it was rigged or not.
Is CWCville real?
I don't know what that is.
Is Trump a player?
Clearly.
Yeah.
There's one thing you can say for sure, which is that if we're a simulation, Trump is a player.
Because he can change the simulation.
He changes what you believe is true.
That's as player as you get.
I mean, I do the same thing.
When I change minds, it's a rare thing.
So if somebody can change your mind, they're probably a player.
All right.
All right.
If we create our personal God before we die, do we get one?
Well, it depends on what dying means in the simulation.
So you don't get a personal God if your program just gets turned off.
But maybe the program gives you an afterlife.
What if the program provides an afterlife?
It might.
There might be an afterlife.
All right.
Everyone is influential in some form or another.
Yes, but not at the same degree.
I can change your mind, you say?
Maybe you can.
Yeah, Russia issued an arrest warrant for Senator Lindsey Graham.
They just made a big old list of people who are their enemies.
It looked like it was randomly created.
All right.
Why does a disease kill in a simulation?
So it looks real.
All right, that's all for now, YouTube.
I'm going to talk to the locals people a little bit more.