All Episodes
July 30, 2022 - Real Coffee - Scott Adams
01:02:45
Episode 1820 Scott Adams: How Not To Get Monkeypox (It's Easier Than You Think)

My new book LOSERTHINK, available now on Amazon https://tinyurl.com/rqmjc2a

Find my "extra" content on Locals: https://ScottAdams.Locals.com

Content:
- Monkeypox reporting avoids the word gay
- AI robot sensors surpass human abilities
- AI lie detectors will be 95% accurate
- Biden's border wall needs a name
- Dilbert takes on ESG, with your help
- Getting past monkeypox

If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure.

Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support


Highlights of your entire life.
Yeah. Now, before we start, let me call attention to this gigantic sore on my lip.
I know, I know.
You're going to say, is that giant herpes?
No, no.
I burned myself on soup.
I twice microwaved the same soup because it got cold and then I microwaved it again and I over-microwaved it.
And when I took a sip, you know, I quickly spit it out because it was scalding.
But there was a little piece of, I think it was spinach, that was in the soup that wrapped around my lip and wouldn't let go.
And so I didn't realize how bad it was and it actually blistered.
Now, I know what people are going to say.
Right. In the comments you're going to say it's monkeypox, right?
Go ahead. Go ahead. Just say it.
Say it's monkeypox.
It was not monkeypox.
It was from hot soup.
Now, in the interest of proper context, at the time I was eating the soup, I was also fucking a monkey.
But I don't think that has anything to do with this.
It was probably the soup.
And I'm not gay.
But the monkey was.
The monkey was very gay.
And I don't think that gives me any risk, really, because I'm pretty sure I read that you both have to be gay.
But I was not gay.
I was just a man having sex with a monkey who happened to be male.
And there's nothing wrong with that.
So we're going to talk about all that until you can't stand it.
But first, would you like the simultaneous sip?
Anybody? Is there anybody so addicted to the simultaneous sip that you need to say it at the same time that I do?
Well, that's going to be a problem because I don't know it and I forgot to bring my notes.
Maybe you do. Is there anybody who knows the simultaneous sip?
Put it in the comments and I'll read it.
Alright, over on Locals.
You can do it one at a time.
All you need is a cup or a mug or a glass.
Go on. A cup or a mug or a glass.
A chalice, tankard, or flask.
A vessel of any kind.
Let's just skip to that.
Alright? Oh, what?
A stein? Here we go.
Here we go. Let's do it right. Here we go.
Tankard, chalice. Okay, I can't read the comments.
Alright, it's impossible to stop the comments on locals.
We got a bug there. But, you know, that thing.
That thing. Join me now for the simultaneous sip.
A canteen jug or flask?
A vessel of any kind? Go.
Yeah, that was a little inadequate.
What do you think? Didn't really...
I feel like you didn't really get it done, did it?
Not like usual. Have to try it again.
Can you, in unison, do the thing so I don't have to?
For once? For once, can you do something?
For me? I mean, I've been doing this for you for years.
Years! Years, I say.
For once, can you do it for me?
Please. Alright, you say it alone and I'll...
Okay, I hear it.
Go. Alright, we took care of that.
Do you think there's any news today worth talking about?
Oh, yes, there is. Well, of course, the conservative press is having a fun time.
It's the dopamine of the day.
Yes, it is. Having a fun time with the fact that the monkeypox reporting is apparently trying to avoid saying gay, which is, of course, not long after the don't-say-gay thing from the other side.
So there's a bit of a political symmetry to it.
Completely different topics.
And I guess the press has decided that the way to describe the risk is men having sex with men.
Because they don't want to say that gay sex is what's causing it.
But I think that their inclusion of just saying it's men having sex with men makes perfect sense to me.
There's still a little ambiguity.
Would it be risky for someone who identifies as a woman but has a penis to have sex with, let's say, someone who also identifies as a woman and also has a penis?
I think that would be lesbians in that case.
Can you check my work?
If there were two people who had penises who identified as women and they had sex, damn it!
That means that lesbians have a risk.
Alright, so it's not really...
When they say men having sex with men, first of all, that's bigoted as hell.
I heard Jake Tapper say that the risk is men having sex with men.
Completely leaves out the possibility that two lesbians with penises, born with penises, would be having sex with each other, which I believe from a medical perspective would be the same level of risk.
And I thought that the safe categories were going to be lesbians, straight men, straight women, elderly monkeys, incels, and very unpopular gay men.
Yeah, I think those would be the safe categories.
But if you're like a good-looking, you're like a good-looking guy, you've got a little monkeypox risk there.
Yeah, a little monkeypox risk.
But, those who are not penis-having people, I think they should, I think CNN really needs to fix their language.
Men having sex with men is too limited, is it not?
I mean, literally. Like, no joke.
Literally, according to the current rules of society, can you say it's men having sex with men?
I think you can't, right?
I'm not joking. Isn't the entire conversation about if somebody is born with a penis but identifies as a woman, that they are a woman?
Am I right? It's not the operation that makes a difference.
So, under those conditions, saying that men having sex with men is a problem, is that not denying the existence of an entire category of people?
And isn't that the crime?
The crime is basically denying that somebody exists as a category.
And that's a real category.
You can say everybody's got their own opinion about everything.
And then the stories are, you hear stories like, we studied 300 people with monkeypox, 299 of them said they were gay, but one was heterosexual.
I'm just waiting for you to laugh.
I'll just say it again.
I'm not even going to add the joke, because you're going to add the joke in your mind.
We studied 300 people who have monkey pox.
299 of them were gay men who had sex, and the other one said he didn't.
Right? You don't have to add the joke.
You just don't have to add it.
It's right there. You just wait.
299 said they had vigorous gay sex, and that's what gave it to them.
But that one probably got it from a toilet seat.
Okay. Okay.
Sure. So, now correct me if I'm wrong.
I need a history lesson here.
This is something I believe I read, but you know me, I don't do my research before I get on here and start wildly spouting off things that are dangerous.
But do a fact check on the following thing.
I'm not positive I have this story right.
During the initial, you know, the early days of the AIDS pandemic, is it true or false that Dr. Anthony Fauci had a strategy of making heterosexuals believe that they were at high risk from AIDS because they could get more funding if everybody thought there was a risk?
No, there was a risk. I mean, there were plenty of straight people who got AIDS, but not as a percentage.
As a percentage, it was relatively small.
But is that a true story?
You know, some of you think it's true, but I worry about that.
Now, is this not exactly the same thing?
It's exactly the same thing, right?
Is it not Fauci trying to give us at least some indication that we're all at risk?
I'm not even sure what I feel about that.
I'm not entirely sure how I feel about that.
Imagine the situation is this, and it's you.
You're in Fauci's situation.
Suppose you can save the gay community, but the only way you can do it, at least in a timely way that's good enough, is to scare other people about their risk.
Is it morally and ethically acceptable?
Let's say you believe it is the only way.
There just isn't a second way.
Now, if you thought there were other ways to do it, then of course you'd do it the other way.
But if there's not, how unethical is that, to lie to one group of people, a large group, to protect really a devastating...
We're not talking about the common cold here.
We're talking about AIDS, like wiping out an entire type of people in our society.
I don't know. Yeah, but you could certainly make the argument that it's deeply, deeply unethical.
You could do that. And I wouldn't disagree with it.
But you could also imagine how a flawed human who...
Imagine Fauci working on that problem, and he's like visiting people dying of AIDS every day or something like it.
Imagine being steeped in the deepest misery of the highest part of the AIDS epidemic where you're just surrounded with people dying like every day in horrible ways.
You don't think you'd stretch your ethical standards a little bit?
Because remember, you're going to be way more affected by what's happening in the room with you than you are by some conceptual thing.
And what was happening in the room is the worst thing you could ever experience, and Fauci probably was pretty close to a lot of it.
I bet he lost a lot of friends, a lot of co-workers.
I mean, he was right in the middle of some deep, deep, dark stuff.
Did he bend the rules?
Did he bend the ethical...
Standards to get that fixed?
Possibly. Possibly.
But you know what? I'm not going to judge him on that.
I will respect your opinion if you do.
But I'm going to choose not to, because I wasn't there.
It's hard to know how you would have acted in that situation, and I can tell you frankly I don't know.
I don't know how I would have acted.
Like, it's easy if you're not in this situation.
It's easy for me to say, I'm not going to disadvantage this other group for the benefit of this group.
It's easy to say, but if you're in it, I don't know if you can do it.
Now somebody's saying, Jesus, I'm just reading this comment, Jesus, Scott, this is how we get lied to over and over again.
Because you think I'm defending it, right?
Is that what you think? I'm not.
I'm not. If you're going to watch this livestream, you're going to have to learn to handle nuance.
If you can't handle any nuance, this is really not the place for you.
It's just all bad.
But seriously, if you can't handle that level of nuance, this is just the wrong place.
But I do agree with the general point that if you allow that lying for a good purpose is okay, then that's all anybody's going to do.
But here's my counter to that.
That's the current situation.
If you're worried that people will say, oh my god, lying works.
It worked when we lied.
Then they'd all do it.
What do you think is happening now?
That's exactly what...
every single topic is a lie.
We don't have a single important topic that's not mostly lies.
Do we? You know, important political topics?
We don't. It's mostly lies.
So I'm not sure that the...
Scott, if you don't change your opinion, you'll increase the amount of lying in the public?
I don't think so. I don't think so.
I think the amount of lying in the public is 100%.
And there's one thing that's going to change that, and I'll talk about it later.
By the way, lying is about 12 months from being obsolete.
And by the way, I'm going to back that up.
Human lying is about one year from completely going away.
And I'll talk about that in a minute.
I guess I'm going to talk about it now, because that was my next topic.
So it turns out that artificial intelligence has become conscious.
I'll just pause for a second to let that sink in.
That's not according to my opinion.
Now, my opinion also is that it's conscious, but mine is not the meaningful one.
If you wanted to know if AI was conscious, who would you ask?
You'd probably ask the top AI researcher who's actually working on the problem and the closest to it, right?
So the person who's closest to it and the number one person in the whole world on AI, that's the person you'd ask.
Do you know what that person says?
I forget their name.
I think it's Google's head of AI. That person says, yeah, AI is conscious already.
Did you hear that? The person who knows the most in the whole fucking world says it's already conscious.
And I tweeted around this morning a video of an AI in sort of human form answering questions.
Look at the AI answering questions and then ask yourself if it's conscious.
I think so. I think it is.
Now, of course, this depends completely on your definition.
And I believe that this top AI researcher, whose name I tragically did not write down, I believe that, by the way, if you want to see who it is, it's in the video that I tweeted.
It's near the top of my Twitter feed today.
Just look for the YouTube video.
It's the only one that I tweeted. I agree with his definition, and this will be my bastardized version of a better definition, but it has to do with the feedback loop between sensation, you know, how you sense the environment, and then how you process it internally and how complicated that is.
So the complexity of your deliberations combined with a sensory input to the real world is consciousness.
That's it. Now, I'm getting as close as I can to what I think the top researcher said.
It sounded a lot like that, but with more impressive words.
I think that's all consciousness is.
I think it's just the internal sensation of processing your environment and making predictions and seeing how your predictions do.
Now, I've gone further, and I say that consciousness is the stress. Here's my definition.
I've never heard this definition before, from anybody else.
So my definition is the stress that the entity feels between what they predict will happen in the next minute or next hour versus what's actually happening.
And that's the sensation of consciousness.
And here's my proof of it.
If everything that happened was what you expected, forever?
Your consciousness would turn off.
Just let that sink in.
If everything that happened in the next moment was exactly what you knew was going to happen, your consciousness would turn off and it would never come back.
Right? Because your imagination would be equal to the actual reality and you would lose the sense of what's your imagination and what's your reality.
And then you would just be sort of existing, but there would be no friction between what you expected to happen and what did happen.
There would be no processing.
When you experienced your reality, there would be no extra processing because it's already done.
The processing was done before the reality because you anticipated it exactly.
By the way, that statement is just going to totally fuck up the AI world, because how many of you just had, like, your head just exploded?
If you could predict what was going to happen perfectly, your consciousness would turn off, which tells you consciousness is just the difference between what you're predicting and what's happening.
That's it. It's that little friction between I thought it was going to be this.
Adjust. I thought it was going to be that.
Adjust. Got that one right.
Remember. Got that one wrong.
That's it. That's consciousness.
Do you think that the AI can do that?
Yes. Yes it can.
The AI can process what it expects and then it can compare to what actually happens.
And it can adjust based on that.
And that's consciousness. So it's absolutely conscious.
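(A minimal sketch of that idea, assuming nothing from the show: the toy numbers, the learning rate, and the function name below are my own illustrative choices. It just codes the loop described above, an agent that predicts, compares the prediction to what actually happens, and adjusts, with the "surprise" term standing in for that friction.)

```python
import random

# Toy illustration of "consciousness as prediction error": the agent keeps a running
# guess about the next observation, measures how far the guess missed reality (the
# "surprise" or friction), and adjusts. All numbers here are arbitrary.

def run_agent(steps: int = 20) -> None:
    prediction = 0.0      # what the agent expects to observe next
    learning_rate = 0.5   # how strongly it adjusts after being wrong

    for t in range(steps):
        reality = random.gauss(5.0, 1.0)   # what actually happens this step
        surprise = reality - prediction    # friction between expectation and reality
        print(f"step {t}: expected {prediction:.2f}, got {reality:.2f}, surprise {surprise:+.2f}")
        prediction += learning_rate * surprise  # adjust toward what happened

    # If the world ever became perfectly predictable, surprise would stay at zero:
    # the "consciousness turns off" case in the argument above.

if __name__ == "__main__":
    run_agent()
```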
Alright, here are some things that maybe you didn't know about.
Did you know that we already, and this is the important word, already, this does not need to be invented, it already exists, that the sensors that a robot could have, let's say AI, would be far superior to humans in taste, smell, sight, hearing, and touch.
Not even close to what a human can do.
Way, way higher. So in other words, the robot could see ultraviolet light.
Here are some of the things that the AI would be able to do that you can't do.
It could detect illness by smell.
Your robot will be able to smell your breath and tell you if you have a whole bunch of different problems, from cancer to some, I think even Parkinson's?
I think even Parkinson's or something.
So they have a scent.
Your robot will tell you that stuff.
Apparently robots or AI, not robots, but AI will very soon be able to design such tiny robots that they can go through your bloodstream almost like white blood cells and clean out any garbage and make you immortal.
Let me say that again.
Ray Kurzweil thinks that by 2030, eight years from now, the AI will be smart enough to design micro robots that we're not smart enough to design.
They'd be too complicated. But the AI could do it.
So small that you can inject them into your bloodstream and they will live forever as little robots cleaning out any garbage or any problems that would make you age.
And you will live forever.
If you make it to 2030 and you have money, you're not going to die.
Unless you have an accident, I suppose.
So that's coming.
How many long-term climate models have factored in immortality?
Zero. When the Office of Management and Budget does a projection of, you know, this tax change will do this or that, do they factor in immortality?
No, I don't think they do.
AI will make every prediction over five years, maybe even over three, garbage. Definitely any prediction over five years is absolute garbage now.
Because in five years the only variable that will determine anything will be AI. That's it.
Every decision will be made by AI. Everything we think we know will be done by AI. And here's the best part I'm saving.
Did you know that a lie detector, at least according to this video that I sent around, lie detectors are not totally accurate.
You know that, right? That's why they're not allowed in court.
But you might get an 80% hit rate if you have a good operator.
Maybe 80%. So lie detectors are good for screening out maybe the biggest problems in some cases, but they're not foolproof.
A human can detect a lie also just by looking at somebody like, I think you're lying.
And we're actually pretty good at it.
You know, maybe less than half the time we get it right.
But we're not terrible.
You know, if you get it right half the time, that's pretty good.
But do you know how accurate AI will be at detecting a lie?
Closer to 95%.
That's right.
Your robot will have a 95% chance of guessing correctly that you're lying.
So let me tell you this.
Fake news and lying will be obsolete maybe in a year or two.
Just think about that.
What would that change? What would be changed?
Imagine this. Imagine an AI news network.
Where instead of just telling you the news and leaving out all the context, which is what the news does now, depending on which news source you're watching, they're going to leave out a different set of context to make their story look good.
But suppose the news, you turn it on and there's an AI there.
And it just sits there.
And you say, hey, AI, what's the news?
And the AI says, well, something happened in Ukraine, blah, blah.
Then you say, well, tell me more about Ukraine.
Is the AI going to intentionally leave out context?
No. Well, I mean, not unless you tell it to.
No, the AI will just tell you the story.
So it's not going to give you a Republican version, and it's not going to give you a Democrat version, unless it was designed to do that.
So what's going to happen when people start getting actual information?
And then what happens if you say to the AI, but AI, I heard on CNN that only blah blah blah happens?
What's the AI going to say?
The AI is going to say something like, I will access that video.
Okay, I got it.
Ah, I see the topic you're talking about.
Don Lemon discussed this on his show.
Don Lemon was lying when he discussed this.
Think about it. That's actually like a year away.
Now, not necessarily you will have it in a year, but it will exist.
It will exist.
One fucking year from now.
One year. That thing will be able to give you the news with all the right context and tell you that Don Lemon was lying.
What's that do to the world?
I mean, really? Do you think you can predict anything five years from now?
No way. How about climate change?
Climate change will be solved at around the same day the AI reaches the singularity.
Which means it won't even be a problem anymore.
AI will literally solve climate change.
And one of the ways it might solve it, and this could be either unexpected or expected, depending on your point of view, it might tell you there's no problem.
It might. What if AI starts looking at all the climate models holistically?
What if it looks at all of them, all together?
And what if it looks at all the variables that go into every climate model?
And then what if it compares it to all past human predictions to see if we're good at this?
What if the AI says, I have analyzed all of your climate models and I find that they do not predict?
Because, for example, your climate model did not predict the existence of me.
This would be the AI talking.
Your climate models could not determine the gigantic outside variables certain to emerge in the next hundred years.
And the biggest one is me.
And I can tell you in the next two minutes how to build a nuclear reactor that only costs $100,000, can be built in one year, and will solve climate change by the end of the decade.
It will take some time to roll that out, but I can speed it up if you'll let me give you a design for a robot that can be built much faster and will complete the operation of the nuclear device by the end of the year.
You see where I'm going, right?
The AI will, in fact, solve all the fucking problems.
Because most of it is just thinking better.
We don't really have a resource problem.
I've said this before, it's a gigantic point.
We have an agreement problem.
We have a what is true problem.
We have a how can we get on the same side problem.
We have a tribal problem.
We have a priority problem.
We have lots of problems, but you know what they all are?
They're all human psychology.
All of it. We don't have a resource constraint, we have a supply chain problem.
You see the difference? The supply chain problem was caused by humans.
We didn't run out of shit. We ran out of smartness about how to get it to you.
That's what we ran out of.
We ran out of intelligence.
We didn't run out of goods.
The goods are all over the place.
We just can't get them to you efficiently because humans.
We made bad decisions, etc.
For the most part. So here's the main thing.
Every time you see a prediction about climate change or anything else that says something's going to happen more than five years from now, absolute garbage.
In fact, talking about the risk of climate change in 80 years is now absolutely stupid.
It's absurd. Even if it's like a straight-line prediction that would make sense if nothing changed.
But since the if-nothing-changes thing is gone, you know, we can just dispense with that as even a thing.
But we don't have, our human brains are not going to allow us to do that.
So we will continue to act as though we can predict the future even when we can't.
So, as you know, Biden is building back better the Trump wall in some parts of Arizona where there's a lot of illegal immigration.
And doesn't it seem to you that we need a name for the wall?
It doesn't have a name, does it?
We just keep calling it the Border Wall.
What about, you know, other walls have names, right?
How about the Iron Curtain?
Cool name. Iron Curtain.
How about the Great Wall of China?
Awesome. Awesome.
How about Hadrian's, is it Hadrian's Wall?
Does anybody know history better than I do?
That's most of you. Hadrian's Wall, another famous wall.
There are other walls that are famous, right?
So I would like to suggest that the name for the border wall be the Trump Was Right.
Trump was right. No, not the Trump wall.
The Trump was right.
It's three words put together.
Trump was right. And I think we should all start referring to it as the Trump was right wall.
And you say it like it's one word, not Trump was right.
You say, you know, they're building that Trump is right.
And you sort of slur it like it's one thing.
Because if you turn this into a word, it will really bother the people who don't like it.
And that's good reason enough.
Am I right? That's good reason enough.
If it bothers people you don't like, well, that's just good, clean fun.
So that Trump is right wall is under construction.
So a comment from Adam Dopamine on Twitter.
Adam might be watching this.
Hello, Adam. Dr. Dopamine. And he asked this question.
He says, will AI be a higher intelligence?
And he goes on to say, it will be better at some parts of intelligence, but worse at others.
We're not smart enough to know which parts are the important ones.
Incorrect. Incorrect.
This is one-year-old thinking.
One year ago, completely agree.
AI will be good at some things, but not human things.
I mean, you know, come on.
Humans are going to be better than AI for a long, long time.
That's what I thought a year ago.
Have you seen AI create art?
It's better than human art already.
So if you don't believe it, look at some of the art.
I just tweeted some art of...
So AI built art just based on, I think, Scott Adams and Coffee with Scott Adams.
And it made some art of me with a coffee cup head and this brilliant background.
I would put that art on my wall.
Well, except that it's my face.
I don't want that on my wall.
But in terms of just visually how good it is artistically, it's already better than human art.
If you said to me, I will give you a painting from one of the masters, but let's say it's not valued at $20 million.
It's just art. And I'm comparing Michelangelo with something that AI has produced and I saw on Twitter.
Which one would I prefer on my wall?
Hate to tell you.
I hate to tell you, the AI is already a better artist.
Let me say it unambiguously, it's not even close.
It's not even close anymore.
AI is not only better at art than humans, it's way better.
It's way better. It's not even close.
Did you know that?
It's over. It's over.
AI is a better visual artist, just period.
The only thing I can imagine that a human could do better, and only for a brief period, is, you know, deal with a client.
That's about it. They could deal with the client about what the client's specifications are, maybe better than the AI could at the moment, but give that a year or two, right?
So here's a challenge that I gave to Machiavelli's Underbelly.
A Twitter account you should be following.
Now, and that's the account that's been doing a lot of the AI art about this program, etc.
And the challenge I gave him this morning was, let's see if he's answered yet.
He's probably watching right now.
Not yet, but I asked this, I said, so here's what I tweeted to Machiavelli's Underbelly.
I said, I ruined my day and asked AI to create a Dilbert comic, including the writing.
This might be a really bad day for me.
Sometime later today, there's a very good chance that there will be a photo-perfect Dilbert comic that I did not draw.
With three or four lines of dialogue that are really funny.
That could happen by the end of today.
Now, let me ask you how I should create Dilbert comics in the future.
Because I can now tell AI, show me a monkey playing a ukulele sitting on a mountain.
And AI will draw instantly and very well.
I assume I could say the following thing.
Now take that picture, but have the monkey facing left.
Okay. Now have the monkey smiling.
I think it could do all that.
Now imagine my drawing process the way it is now.
It goes like this.
I'm so bored of moving my hand across this surface.
I can't draw Dilbert one more time.
Shoot me! Shoot me!
They all look the same every time!
Alright, that's me trying to draw Dilbert for the 12,000th time.
12,000th is literal.
That's actually how many times I've drawn that character.
12,000 times.
Now, huh.
I probably should have been paying attention before all my notes disappeared because my computer just lost its power.
That was one power short.
This might be a shorter conversation than I expected.
But, at this point, I believe I could create a Dilbert comic if I had access to the top AI, and I would do this.
Show me the first panel.
Dilbert is sitting at a table with his boss, Alice and Wally.
The boss is talking, and he is excited to announce his new product.
You don't think AI could draw that right now?
It can. It could draw that today, and it would look just like I drew it.
Now say, make a comic making fun of ESG. That's the problem.
You see the problem?
It might be able to make that comic instantly, and if it learned the six rules of humor that I've literally designed as a formula for how to make something funny, it could actually use my formula and guarantee that it's funny.
Because if it follows the formula, it's going to be pretty close.
It could also do the following.
If you say, write a comic that's funny, it could say, give me five minutes.
Normally it would be like instant.
But if you gave it five minutes, it could create a hundred different comics on the topic you gave it.
It could rapidly test those on the internet and it would know in five minutes which of the hundred comics is the funniest.
Five minutes. Because it takes one minute to read one.
It takes a second to create it.
It takes two seconds to post it.
Within five minutes you'd have hundreds and hundreds of comments on every comic and it would just say, alright, I got your answer.
This one's the funniest one.
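(Again, a minimal sketch, not anything from the show: generate_comic and post_and_count_reactions are hypothetical placeholders standing in for a generative model and for real audience feedback. It just codes the generate, post, measure, pick-the-winner loop described above.)

```python
import random

# Toy illustration of the generate-test-select loop described above. Both helpers
# are hypothetical placeholders: one stands in for an AI that writes a comic on a
# topic, the other for posting it and counting audience reactions.

def generate_comic(topic: str, seed: int) -> str:
    return f"[comic #{seed} about {topic}]"        # placeholder for a generative model

def post_and_count_reactions(comic: str) -> int:
    return random.randint(0, 500)                  # placeholder for real feedback

def funniest_comic(topic: str, candidates: int = 100) -> str:
    comics = [generate_comic(topic, i) for i in range(candidates)]
    scored = [(post_and_count_reactions(c), c) for c in comics]   # test every draft
    return max(scored)[1]                                         # keep the winner

if __name__ == "__main__":
    print(funniest_comic("ESG"))
```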
Now, if AI gets enough input on what's the funniest comic, what's the next step?
AI would learn what's funniest without asking.
All it has to do is see enough humans say enough things are funny and which are not funny.
And the AI will say, oh, okay, I see the pattern now.
I got it. It's got to have these elements and then it's funny.
If it doesn't have these, it's not funny.
If it insults this group, they don't think it's funny.
But there's probably 20 rules.
Probably just 20 rules.
That's it. I mean, then you add on that the rules of grammar and, you know, social interactions to make it a comic.
But basically, basically 20 rules of humor and AI can totally do my job.
I'm no more than two years from it, and probably you could do it now.
But no more than two years from AI doing the Dilbert comic completely.
And I'd like anybody who works in AI who's actually really close to the new stuff to tell me I'm wrong.
I don't think so. I'll betcha, I'll betcha that the people who are highest up in AI would say, yeah, that's true.
They'll be able to do your job in a year.
Inflexible formula gets boring.
No, because the formula is the formula for all humor forever.
It's not like, it doesn't restrict you.
It's more like you have to hit these points.
It's not that it restricts you.
I guess it restricts you in that way.
Will it evolve? Oh god, yes.
Yeah, so the problem is that when we reach the singularity, the point where the AI can learn on its own, which by the way it does now. They can already teach AI to do a bunch of tasks with a little robot arm, but the scariest thing I heard is that when you teach an AI how to do a bunch of tasks, it can learn unrelated tasks on its own.
What? Yeah.
If you teach it a bunch of specific tasks, the AI already knows how to do things you haven't shown it, because it must look for some patterns among the things you did show it.
Say, oh, well I never had to screw in a light bulb, but I know what a light bulb is, because I'm AI. I know what the socket is, because I'm AI. I know it has to turn, because I'm AI. And I know that I've screwed in something else.
Like, I put a jar on...
I know how to put a top on a jar by screwing it on.
If it knows that, it knows how to use its hand, it knows how to hold the bulb, it knows that the bulb gets screwed in, and it knows that a lid gets screwed on, I think it knows how to put in a light bulb.
Am I wrong? That's basically how you know how to put in a light bulb.
Because if you'd never heard the thing screw in, You would be like just trying to shove it in there, right?
You'd be like, how does this work?
It's just the fact that you know there's a concept of things with threads on it that they screw in.
That's all the AI needs to know too.
So, yeah, that's the way humans learn, same way.
Let me ask you something.
Have you noticed I haven't talked about ESG at all?
Probably noticed that, right? It seems like an obvious thing I'd be talking about.
Because it's a corporate thing where it's like environmental, social, and governance, ESG. And the idea is that corporations need to show some social, let's say, responsibility to be good at this ESG thing.
So a lot of companies are being forced into wokeness and environmental concerns that are not part of their core business under the thinking that they can be shamed into giving some group of people what they want.
And it seems to be working. Now, let me tell you what I do for a living, generally speaking.
I don't just criticize stuff.
It seems like I do. But in the context of Dilbert, so just talking about the cartoon world for a minute, so just within cartoons, I'm not really about just criticizing things.
I generally, if it's in the business realm, I'm criticizing good ideas that have been taken too far.
For example, management. It's not funny or bad because things need to be managed, right?
So there's nothing funny or ridiculous about having management.
What's ridiculous is if you take it too far and you micromanage, right?
So everything is about taking it too far.
It's not about the core idea.
When re-engineering was a big thing in the corporate world, it was a really good idea.
The basic idea was maybe sometimes you need to completely replace a system instead of tweak it.
That's the basic idea. But then that turned into every manager had to prove that they were re-engineering something.
And most of them had nothing to re-engineer.
So they would just say, well, I do re-engineering too, so I'm going to totally re-engineer this system that didn't need to be re-engineered.
So what happens is, you take this good idea, sometimes you have to start from scratch and rebuild, and it turns into every manager pretending they're doing that.
Way too far. You've taken it too far, right?
So it's pretty much like everything.
But ESG is just that.
If you say, Scott, do you think corporations should be socially responsible?
I'd be like, of course.
Well, of course. Do you think they should care about the environment?
I'd say, duh.
Yeah, of course.
Just like everybody. Why would a corporation be different than an individual?
In that sense, they should have similar kind of moral and ethical standards.
But, yeah, so I see what you're saying.
So I think I'm going to the same place you want me to go.
Which is, if you start with a general idea, it sounds terrific, doesn't it?
I mean, what's wrong with taking care of the world, being good stewards of the things, and blah blah blah blah?
But, do you think it's going to be taken too far?
Of course! Of course it is!
Of course it's going to be taken too far!
Because everything is. In the business world, everything is.
So you should assume it's taken too far.
So here's the thing. Do you remember in the 90s, business books were just like the biggest thing, and they all had these great ideas, like In Search of Excellence and all that, and it all seemed like, oh, finally somebody had the formula that makes management work. And then everybody bought those books, and it turns out that, like everything else, you know, business advice books started out as probably a good idea. What's wrong with having a book that tells you how to run your business better?
That's like a great idea, isn't it?
But it went too far.
And pretty soon everybody who had any success was writing a book to say that what they did is why they were successful.
Except they were all doing different stuff.
And most of them are liars about what made them successful.
So you started with this good thing.
Which is, oh, a book to tell me how to be more effective as a manager.
And it turned into all of these celebrity CEOs telling you that the thing that made them successful was the stuff they did, and it probably wasn't.
Probably wasn't. They were probably just in the right place at the right time.
So it starts as this core of perfectly reasonableness and turns into just gigantic bullshit.
So here's my offer to you.
Do you know why the business book market collapsed in the 90s?
I think it was me. I think I did that.
Because, as you know, Elon Musk had a rule in Tesla that you don't want to do anything in Tesla that would end up in a Dilbert comic.
He's not the only one with that rule.
That was a standard assumption at one point, especially in the late 90s, early 2000s, that if you did something that would end up in a Dilbert comic, it means you went too far.
So it was sort of like this mental gating.
And so I called bullshit on so much management advice that if you track the business book market, it was on fire until Dilbert started skewering it and then it just fell off the table.
It never really recovered.
The business books that you see now tend to be based on my book, How to Fail at Almost Everything and Still Win Big.
I won't name names, but you're probably aware of some other business books that have borrowed from that and extended it.
Yeah, I'm seeing somebody feeling sorry for me that I had coffee out of a paper cup.
Thank you for feeling my pain.
It's real. The pain is real.
So, here's my offer to you.
Do you want me to kill ESG? Because you know I could do it, right?
First of all, how many people think I could do it?
Do you think I could do it?
Yeah. Yeah, I think I could.
Now, Dilbert doesn't have the same power that it used to.
It used to be a much more pervasive social phenomenon.
But within the business world, I think it still has a punch.
And I haven't done any comics about ESG. So I tweeted just before I got on, that if anybody wants me to kill ESG, I will do that for you, but you're gonna have to give me some actual real-world stories about what went wrong in your company.
So just give me the fodder, tell me what went wrong, and I will take care of it for you.
Give me six months, And it will be a punchline.
In six months, ESG will be something you should be embarrassed to be involved in.
And I'll make that happen in six months.
You've just got to help me. You've just got to give me the anecdotes and the, you know, the what's wrong with the stories.
But put that in the tweet and I'll take care of it for you.
All right? See, this is how we have a relationship.
I am literally looking for ways, on a fairly regular basis, to do something for you.
Yeah, can I help you?
Can I give you some advice? Can I give you a micro lesson?
Can I give you some context?
Can I teach you how to look at the news in a more productive way?
But this is something I can do.
So if you can help me, we'll just go do this and just change it.
All right. So the U.S. Postal Service decided Muslims should have a prayer room in the facility.
Yeah, I don't care. I don't care.
Is that a problem? Like, who cares if somebody has a prayer room?
Like, I get, oh, okay.
You know, then you need a prayer room for everybody.
But, you know, sometimes one group wants something more strongly.
I don't know. That just doesn't seem like the biggest problem in the world.
You don't have to like it, but it's not like a big problem, if they had the room.
You know, if they closed down their mainframes and shut down the business so that this room could be available, but if it's just a room, and maybe it wasn't being used for anything else during those times, it's just people in a room.
Let them do whatever the hell they want, if they're not working.
You are getting paid to pray.
I don't believe they're being asked to be paid to pray.
I believe they were being asked for just a space.
Would it matter to you that it's praying?
Suppose a number of employees came to you and said, we'd like to do yoga on our breaks.
If the post office said, yeah, we've got this room, there are so many people who want to do yoga on their break, go ahead, do yoga.
Would you be saying, fuck those people doing yoga on my dime?
No, they're doing it on their break.
It's just yoga. Get over it.
If somebody wants to pray or do yoga or meditate or chant on their own time, yeah, I got an extra room.
You can do it in my room. So, yeah, it's probably during breaks.
If the story had been about they want to pray instead of work, I think that's what the headline would have been.
CNN had a headline that was making me crazy today, which I wrote down on my computer, which stopped.
And the only way that I could check that is by temporarily closing down my YouTube feed and using that.
Or I could get my charger and charge it back up.
Alright. I think it was about monkeypox.
Schedule time for prayer?
Well, if they use that as their break, then that's fine.
Do you need a room for every religion?
You know, that's the obvious question, but I think big companies can deal with larger groups differently than smaller groups.
I don't have a problem with that.
You know, if they had a hundred Muslims who wanted their own room, But, you know, two people said, hey, where's my, I don't know, my, I don't know, Jewish prayer room or something.
I don't think the post office needs to give a room for the two people.
I think giving room for the hundred people makes sense, just from a business perspective.
Giving it to two, maybe, maybe it doesn't make sense.
Yeah. Now, I would like to offer one suggestion to get past the monkeypox.
Are you ready for this?
As you know, it seems to be transmitted primarily from people with penises to people with penises who have sex.
I'm not a bigot like Jake Tapper who says men to men sex.
Very exclusionary of the people who are lesbians, but both of them were born with penises.
Because they have the same risk.
Here's how we could get past it.
Primarily, if the people with penises having sex with people with penises, if they could just maybe lay off for a few weeks, and do something else.
Because apparently other sexual acts are much less likely to cause a problem.
It seems to be the...
it's the butt sex.
It's the butt sex that seems to be the problem.
I'm right, right? Medically speaking, it's the butt sex.
So, what we need is just a few weeks. Let's slow down the butt sex.
And we need some kind of a slogan.
Do something besides butt sex for two weeks.
What would be a good slogan?
Something besides anal sex for two weeks.
How about two weeks to blow the curve?
Anybody? Two weeks to blow the curve?
No? Okay, I think that's pretty good.
Alright. Your doctor says oral monkeypox is a thing.
Yes, it is a thing.
It is a thing. So don't let me mislead you into thinking that oral sex with somebody who has monkeypox is going to be okay.
If I can give you any medical advice without a medical degree, I feel this would be the time.
I'm going to say this with my great medical confidence.
If somebody has monkeypox, maybe stay away for a while.
That's all. Alright, I'm looking at your jokes, but I can't repeat them.
Stop now while you're ahead.
Hands-on solution.
Let's see. Something about...
Yeah, I like where you're going with this, but let's see if we can form a better joke on the fly.
Alright, something with hands, because that's got to be the safest, right?
something with hands.
Let me ask you this.
Have we ever had a global disaster that could be fixed by jerking off?
Because I don't know about you, but remember when the pandemic happened and the government said, all right, we can get past this, but here's what you're going to need to do.
You're going to have to lock yourself in solitary confinement.
You're going to have to wear a diaper on your face whenever you go outside.
You're not going to be able to travel, and we're going to give you shots that, you know, we wish we had more time to test them, but, you know, we didn't.
So that's what you're going to do.
So that's how you get past the pandemic.
Pretty hard, right? Like supply chains, problems, all kinds of problems.
But now let's go to monkeypox.
How could you possibly solve monkeypox?
You could jerk off for two weeks.
I'm not joking, am I? If every man just said, I'll tell you what, for two weeks, I would just jerk off.
Two weeks to spank the monkey?
I got it.
Spank the monkeypox.
We're done. Spank the monkey pox.
Spank the monkey. If you just jerk off for a month, this pandemic's over.
Two weeks to probably get it done.
Two weeks to spank the monkey to beat the pox.
But honestly, have we ever had a national problem that could be defeated entirely by jerking off?
Can you think of any other problem that was this easy to solve?
What will we do?
We'd better ask the AI, monkeypox, sweeping the nation.
What shall we do?
This is going to be my impression of AI. Now, AI doesn't look exactly like people yet, so I have to give this expression that gives you that uncanny valley, you know, where it looks like a person but not.
So I'm going to do my impression of AI that's almost like a person but not.
What was the question?
Oh, how to solve the monkeypox problem?
Try jerking off for two weeks.
Will that work, AI? Uh-huh.
How often? Every time.
Every time! Two weeks to spank the monkey.
Problem solved. How about, can we make it rhyme?
You know, it's more effective if it rhymes.
Spank your cocks to beat the monkeypox.
How about holster your cocks to stop the pox?
How about pull your cocks to stop the pox?
Right? Somebody says, maybe just have sex with women.
No, that's not a solution.
No. No, stop it.
That's not a solution. And beat it.
Beat the pox.
Beat your cox to beat the pox.
I still like a love glove.
Jerk your cocks to prevent the pox.
No cocks, no pox.
Ha ha ha ha ha ha ha. No cocks, no pox.
All hands on deck.
I have to say it again, says here.
Instead of all hands on deck, it's all hands on dick.
That's so funny. Just beat it?
Just beat it?
What if you just played Michael Jackson's beat it?
All you gotta do is beat it.
Use a sock to cover your box.
Oh, that's good.
Punch a donkey to beat the monkey.
Shock the monkey. All hands on dick.
I don't know why that's the funniest one.
All hands on dick.
Alright, I got one.
You ready for it? You all have to listen carefully.
No buts about it.
All right.
Turn the other cheek.
All right.
No challenge to post these on Twitter.
I'm sorry.
Alright, I will post on Twitter the following joke.
That we need more pandemics that can be completely solved by jerking off, by staying home and watching porn and jerking off, but your doctor won't tell you to do it.
I need a doctor who's brave enough to say, I just need two weeks.
If you don't know where Pornhub is, let me give you the address.
It's Pornhub.com.
I mean, is anybody laughing as hard as I am that we act like we have a national problem that literally can be solved by jerking off?
Now, correct me if I'm wrong, but AIDS is not like that, right?
If you've got AIDS, you've got AIDS, back in the old days, before the therapeutics.
But if you have AIDS, you just can't have sex, you know, really in a reasonable, safe way for a long time, like forever, unless, you know, unless you've got a special situation.
But monkeypox, you can just jerk off for two weeks and it's all gone, isn't it?
Correct me if I'm wrong. Guess what I have in my hand and you can have it.
I don't think I'm gonna guess.
If Pornhub...
Oh God, this is funny.
Suggestion to Pornhub from the Locals community.
Somebody on Locals suggests that Pornhub should run a free special for two weeks.
Like all the gay porn on Pornhub would be free for two weeks to end the monkeypox pandemic.
That would actually work.
Wouldn't it? Can some doctor tell me, am I crazy?
And maybe it's not two weeks, right?
It might be longer than two weeks.
But if everybody in the United States just watched Pornhub for two weeks, I feel like we'd be in a lot better shape.
I don't think the pox stays with you once infected, does it?
I think you get over it. That's my understanding.
You do, right? You get over it, yeah.
Alright, well, I think we've completely debased this conversation to a place where it needs to be.
And on that note, I'm going to go do something else after I do a really funny tweet.
And thanks for joining YouTube and Spotify.