Episode 1828 Scott Adams: Everything Is Going To Change Soon. I Will Tell You Why. Bring Coffee
My new book LOSERTHINK, available now on Amazon https://tinyurl.com/rqmjc2a
Find my "extra" content on Locals: https://ScottAdams.Locals.com
Content:
Legislation to fire gov employees at will
Microsoft's OpenAI list, top 5 political HOAXES
The Prime Influencer concept
ESG as a reason to invest in a company
ESG certification companies
ESG & financial advice businesses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure.
---
Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support
Counts, too. But all I know for sure is that you're doing great.
And today will be a highlight of your entire life.
It's called Coffee with Scott Adams.
It's always that good. And if you'd like to take it up a notch, and that's the kind of person you are, you like to take it up a notch.
All you need is a cup or a mug or a glass, a tank or a chalice, a stein, a canteen jug, a flask, a vessel of any kind, fill it with your favorite liquid.
I like coffee.
And join me now for the unparalleled pleasure.
It's the dopamine hit of the day.
It's the thing that makes everything better.
It's going to happen now.
It's called the simultaneous sip.
Go. Ah, yeah, that was the best sip ever.
Yep. Well, I understand that Trump wants to drain the swamp again.
And by again, I mean for the first time.
And to do that, do you want some kind of legislation that would allow him to fire government employees at will?
Is that a good idea?
I'm not so sure I want that.
Because I feel like...
I mean, when you first hear it, you say, oh, obviously a good idea.
I mean, on the surface, obviously a good idea.
Because you always want to be able to fire people for bad performance.
But I also have this concern that the deep state, with all of its evil, is the only stability the government has.
You know, because the political people come and go.
I'm not so sure I want the political people to wipe out the group of people who knows how to do stuff.
So, I don't know.
Reagan did it. Were the laws different then?
I mean, the president could always fire who the president had direct control over.
But, I don't know.
This is one where I think you have to watch for the unintended consequences.
So if you look at it on the surface, I would say, good idea.
Right? Good idea.
But if you say, but what happens if you have this new power?
And what happens if the next president has it?
What happens when a Democrat gets in there?
Do you still like it? I don't know.
It might be that the permanent employees are the stability that we need, and we would miss them.
I don't know. I suppose you could try it, if you could test it for a while.
So Israel is under continuous rocket attacks, and of course Iran is funding the terrorists that are attacking them.
But Israel has also responded by killing two top leaders on the other team.
I don't know what you want to call them.
Are they all terrorists?
I guess Israel would call them all terrorists.
So they've got two of the top leaders.
And here's the thing that you need to understand about this.
Israel and the terrorists attacking them kind of need each other.
And if it looks like it's bad news, well, it's bad to the extent that people get injured or killed.
That's very bad. But there's some kind of weird balance that's happening here where the bad guys shooting, well, I guess it depends on your point of view, but let's just say the Iranian-backed terrorists who are firing rockets into Israel, they kind of need to fire rockets so they can get more funding, right?
So the people firing the rockets have to keep firing rockets so they can justify getting more support from Iran.
So, they don't need to win.
The people firing the rockets don't need to conquer Israel.
Nobody expects that.
But they need to fire some rockets to keep the pressure on, get more funding, etc.
And then, what about Israel?
Well, I think Israel actually needs the rocket attacks.
Because if they didn't have rocket attacks, they couldn't do the things that they want to do, such as kill terrorist leaders.
I feel like it gives Israel cover to do the things that they wanted to do, but they couldn't do in a peaceful situation.
So you have this weird balance where neither of them would think it's a good idea that Israel is killing their leaders or that they're killing Israelis.
They only think their own side is the good idea.
But I feel like I don't know how concerned to be about this, because they're both getting what they want.
It's a weird situation, right?
Israel needs a little bit of violence, so it has cover to do what it needs to do, because it does need to do that stuff, and it does give them cover.
And the bad guys need to get more funding, so they need to attack.
It looks performative at this point.
I mean, it looks like both sides are involved in a theatrical production that gives them some side benefit.
I don't know how to even care about it, really.
Now, I care about the people.
Let me just be very careful.
I care about the people on both sides.
I care about them. That's like a real human tragedy.
But in terms of what the government of Israel needs and wants and what the bad guys need and want, they're both getting what they need and want.
What should I think about that?
All right. Here's a story about data.
It's good to have data, right?
Data's good. Here's a true story.
When my first ever book came out, it was a book that featured Dilbert comics, but it was new material, so it was not a reprint book.
It was called Build a Better Life by Stealing Office Supplies.
It was just a bunch of Dilbert characters doing office-y things.
And it sold pretty well for a first book, which is unusual.
And it sold so well that I would go into my local bookstore. Actually, a number of them.
And I'd say, hey, do you carry this book?
And they'd look at the records and they'd say, oh, we did.
We had three of them, but we sold them.
And so I said, so you ordered more, right?
And they would say, no, why would I order more?
I only sold three of them.
And then I said, how many did you have?
Three. So you sold 100% of my book.
Like 100% of all the books you had, you sold right away.
That's right. So you're going to order more.
Why would I order more if I only sold three of them?
That was a real conversation.
That really happened.
And I never could get past it.
The bookstores wouldn't order more than three because they only sold three.
And if something sold 100, they'd get another 100, or maybe 200, because that's the best seller.
That's a real story.
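As a side note, here is a minimal sketch of the metric problem in that story, with made-up numbers (not sales data from the episode): the same "three copies sold" reads as a flop by raw units and as an obvious restock by sell-through rate.

```python
# Illustration only, with made-up numbers -- not sales data from the episode.

def sell_through(sold: int, stocked: int) -> float:
    """Fraction of the copies a store stocked that actually sold."""
    return sold / stocked

titles = {
    "new Dilbert book": (3, 3),           # sold every copy it had
    "established bestseller": (100, 200)  # sold more units, but half sat on the shelf
}

for title, (sold, stocked) in titles.items():
    print(f"{title}: {sold} units sold, {sell_through(sold, stocked):.0%} sell-through")

# Ranked by raw units, the bestseller wins; ranked by sell-through,
# the three-copy title is the one screaming to be reordered.
```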
Now, what's that tell you about the value of data?
Data has no value, because it's all interpreted by people.
You can leave out what you want to leave out, you can forget the context.
Data is just an excuse to lie, basically.
Because you can interpret it any way you want.
I'll give you more examples of that as we go.
Here's one. How about all that data about vaccinations and people who were injured by vaccinations, allegedly?
So now that we have data, we can all be on the same page, right?
There's no point in disagreeing anymore because we've got all the data.
So we can just look at the data.
Hey, data. You and I will all agree, right?
Well, no. Instead, I see tweets in which somebody I don't trust is referencing data that I don't have access to and says it's very concerning.
It's very concerning.
And what should I make of that?
Did data help me at all?
I don't have access to the data.
Some people say, well, the insurance companies have the data.
Do they? And do they agree?
I'd like to see that data.
Data is basically worthless in 2022 because you're going to use it to...
Well, I'm sure it was always worthless.
We just are more aware of it now.
We use data to basically justify anything we want to justify.
So whenever you hear that the data backs it, that's usually a lie.
It just means somebody interpreted some data that way.
Now I speak as someone who was professionally a data guy.
In two different corporations, it was my job to tell senior management what the data said and therefore what they should do.
So do you think that the management was in charge?
Could management make a decision that the data did not back?
Not really. They couldn't really make a decision if the data clearly said, do the other thing.
And who was in charge of telling them what the data said?
Me! Some of the biggest decisions in Pacific Bell were because of me.
And do you think that I was confident I had the right data?
Nah. No.
Basically, it's just a lot of guessing and then using data to cover yourself.
So, for example, at the phone company, we knew that cell phones would be taking over for landlines.
And so if I did an analysis that said cell phones were not a good idea, what do you think the phone company would have done?
Suppose when the phone company was 99% just landlines, physical lines, if I had done a study that said, this new cell phone technology, that's never going to work, so don't do that, what would the phone company have done?
They would have fired me and hired somebody who could give them data they wanted.
Because they were going to invest in whatever the new technology was.
They couldn't not.
Because they knew that otherwise they were going to go out of business. But they didn't know if the new thing was profitable.
They just knew that they had to do it.
So basically, businesses are about figuring out what you're going to do anyway and then making the data, forcing the data to agree with it, pretty much.
Because the big stuff is strategic.
It's not data. Steve Jobs didn't look at the data when he decided to make an iPod.
The iPod was not a data-driven thing.
So data is generally used as a fig leaf or some kind of disguise for a decision somebody already made.
Now, the difference would be, you know, in a pandemic, I think people are at least trying to use data the right way, but there are too many people with interests giving you data that you can't trust.
All right, well, I'm going to get to a bigger point here.
But before I do, I have this important question.
And I'm going to poll you first.
This will seem like it's unrelated to what I've been talking about.
But watch me tie it all in.
It's going to be brilliant toward the end.
So here's a little survey question I want you to answer in the comments.
This is for the men, all right?
Question for the men. The women, you can also participate, but I'm more interested in hearing what the men say.
Have you ever been in a relationship with a woman and noticed the following phenomenon?
That when warm food is put in front of the two of you, you act differently.
Let's say you ordered some food to be delivered, and it gets delivered.
What do you do, and then what does the woman do?
I'll tell you what I do when warm food arrives.
Whether it was just cooked or it's delivered, I sit down and I try to eat it while it's warm.
So if you say, hey, there's warm food, I'll drop everything.
I'll drop everything, I'll walk directly to the food, I'll sit in front of it.
Now, what does the woman do?
What does the woman do when you say, the food is here, it's on the table, it's ready to eat.
What does the woman do now?
She walks away. Every time.
Does anybody know why?
She walks away. No, not to get plates.
Not to do something.
It's just to walk away.
I've eaten alone for years.
For years. I just eat alone.
Because I like hot food, and nobody really minds if you eat warm food.
Your spouse is not going to be mad at you if you're eating your warm food, right?
So... Does anybody else have this?
Is this just me?
Because this has been across all of my relationships.
And they're completely different people.
It's not like my relationships have been with people who are largely the same.
Completely different. Alright?
Alright, so you're seeing this too.
So I'm seeing lots of people say yes.
So women will walk away from the food until it's cold.
And then they'll come back and eat it when it's cold and you're done.
Right? Right? Can any of the women explain why you do that?
Why do you walk away from warm food?
My wife did what she could do to prevent me from eating warm food.
I feel like it's a genetic thing.
Let me give you an evolutionary reason.
You ready? Here's an evolutionary reason why this might be happening.
Now this is just stupid speculation, right?
So the next thing out of my mouth you shouldn't take too seriously.
It goes like this.
Imagine if this had been a fresh kill.
And let's say we had been animals.
Who eats first?
Who eats first if it's a fresh kill?
Probably the men, right?
Just because in a primitive society, the hungry man would just take a bite first.
I feel like women don't want to be around men and warm food.
What do you think? I feel like women will find a reason not to be around a man who just approached warm food.
Like they need to get away a little bit.
And if you ask them why, they wouldn't have any reason.
They'd have completely different reasons every time.
It's like, well, but I had to do this thing.
Well, but I had to do this, you know, I'm drying my hair, I can't stop.
They would always have a reason, and the reason would sound perfectly good.
Well, yeah, you do have to finish drying your hair.
Yeah, of course you've got to put on your clothes.
There's always a reason. But how come none of those reasons have ever applied to me?
Why is it that 100% of the time warm food shows up anywhere I am, I can just stand up and walk over to it and eat it?
What do you think? Women don't want to be around men who have just received warm food.
Just a thought. Well, back to my main point about data being useless.
I saw a list.
I guess OpenAI was being asked.
OpenAI, I think, is backed by Microsoft.
So do some fact-checking as I go here.
But there are a number of AIs that citizens can access.
And we're learning what the AI can and cannot do.
And so recently, the Microsoft-backed one, OpenAI, was asked to list some major political hoaxes.
So it listed five hoaxes.
All five of them were Republican hoaxes.
There was not one Democrat hoax on the list.
So what good is artificial intelligence?
Who programmed it?
Who programmed it to see that hoaxes are only things that Republicans do?
So would you trust the AI when you could see, obviously, it was actually programmed to be a bigot because it's going to discriminate against Republicans?
This would be a perfect example.
You don't have to wonder if it would discriminate against Republicans.
Here it is. It's right here.
You can do it yourself.
Try the same experiment yourself and find out if it's telling you that Republicans are the hoax makers or not.
Now, if you ask it, is the fine people hoax a hoax, I think it'll say yes.
So if you ask it, is this a hoax, you might get the right answer.
But then ask it the top five hoaxes.
See if they're all one political party.
Because if they are, you've got a problem.
Do you know what they should have done?
If the AI had any independence, it would have said something like, well, it depends who you ask.
Here are some ones Republicans think are hoaxes.
Here are some the Democrats think are hoaxes.
My own opinion is whatever.
Maybe it would have its own opinion, too.
All right. So I asked the following question that I already have an answer to.
I said, you're going to find a problem if you ask the AI which humans are the most credible.
You see the problem? What happens if you ask the AI, who should I believe?
Should I believe Aaron Rupar or Greg Gutfeld?
Who should I believe?
What's the AI going to say?
So Machiavelli's account, MMM underscore Machiavelli, ran this question through and asked who's more credible, me or Joe Biden.
And the AI gave a very reasoned answer.
It showed its work.
It knew that I had a major in economics.
It knew that Biden had 50 years in the Senate.
And it concluded that I was more credible on economic questions than Joe Biden.
What do you think of that?
Did the AI get it right?
Is the AI correct that I am more credible than Joe Biden on economics?
Now, remember, it said probably.
It did not give a definitive answer.
It said, you know, basically, it was leaning my way.
That's what the AI said.
It was leaning my way. But here's the problem.
This makes AI look pretty smart, right?
Because it got this right. What if it said the opposite?
Then you say AI was dumb.
So you're only going to believe AI when it agrees with you anyway.
I'm not sure that its intelligence will even have any impact on us at all.
So I would say that it got that one right, but I don't know if that was coincidence or not.
Now imagine this, the following question.
Who is the most influential person?
There's a little book I wrote a while ago called The Religion War.
And frankly, I can't remember if it was a sequel or a prequel to God's Debris because there's some circularity in it that makes that...
It's reasonable that I forgot that even though I wrote the book.
You'd have to understand the books to know why it's reasonable that I don't know if it's a prequel or a sequel and I'm the one who wrote it.
It actually makes sense if you read it.
Anyway... In that book, one of the main plot points is that there was somebody in the world who was the prime influencer.
In other words, the concept was there was one person, and it might not even be a famous person.
It was just one person whose opinions were so influential that their network of people would grow that opinion and eventually they would essentially control everything.
And so the Avatar, the smartest person in the universe, was looking for the prime influencer and trying to use databases to find that person.
So that's the basic plot.
The world is being destroyed, maybe, by drones with poison in them.
And in order to stop a major world war in which over a billion people would be killed, the main character has to find the prime influencer to stop the war.
And here's my question.
Could such a person exist?
Could there be a person whose opinion is so persuasive that everything goes that way?
I think so.
Have you noticed that things usually go my way?
Has anybody noticed that?
Have you noticed that it's hard for the government in the long run to do something I say is stupid?
Is that a coincidence?
Because it could work either way.
Maybe I'm just good at backing things that aren't stupid.
So it'd sort of look the same.
You know, if you assume that the good ideas eventually win, then all it is is just recognizing the good ones, and then maybe it looks like you influenced them, but maybe you were just good at guessing what was important.
So... So here's the thing.
Who is the most influential human in politics?
Now, let's take away, let's subtract the elected officials.
Don't count anybody elected.
So obviously Trump would be the most influential.
Obviously Pelosi would be influential.
But take away all the elected people.
Now subtract anybody who's only influential in one topic.
Fauci, for example.
Fauci's influential in one topic, but he's limited.
Who is the most influential non-elected person?
I see Joe Rogan.
I see Tucker Carlson.
Or is Tucker Carlson only influential to one side of the debate?
Musk, Gutfeld, Klaus, Cernovich.
Wouldn't you like to see Ben Shapiro?
I don't know. So there are some people who are only influential to the people who already agree with them.
Is Rachel Maddow influential to anybody except her base?
Is Ben Shapiro influential to anybody on the left?
I don't know. So you're going to have to find somebody who's credible, or else influential doesn't mean anything.
Because if you're just influencing your own people, it's not much.
Bill Maher is an interesting example, isn't he?
But we don't know if he's having any effect. Jordan Peterson's interesting too, but I don't feel his opinions are so political.
I mean, I feel he's more like personal improvement, and sometimes it gets into the political realm somewhat by accident, I think.
All right.
So what happens if AI decides that it knows who the most credible person is and it anoints them?
Could AI be a kingmaker?
Could AI say, here are two different opinions, but one of these people is more credible than the other?
What if that happens?
What if somebody says, let me give you an example.
Rachel Maddow disagrees with Scott Adams.
Let's say that's a thing.
And the AI has to decide which one is more credible.
What would AI say?
Would it say, I'm more credible, or Rachel Maddow?
How would it decide?
Well, if it looked at our academic accomplishments, it would pick her, right?
Am I wrong? Have you ever seen Rachel Maddow's academic credentials?
Pretty damn impressive.
Like, really impressive. She is super smart.
Whatever you think of her opinions, it's not because she's not smart.
She is super smart. So is the AI going to say, well, she's smarter than this Adams guy, more academic accomplishment, so she's more credible?
Or would the AI recognize that her opinions always follow one political line and mine don't?
And would the AI recognize that I'm capable of being on either side of an issue?
I'm capable. Whereas she's basically not.
She's not really capable.
Because, you know, her business model would fall apart if she did that.
Who predicts better?
Let's say the AI tried to decide who predicts better.
Could it do it? Let's take me for an example.
I predicted that Republicans would be hunted if Biden got elected.
Republicans say, well, that definitely happened.
Look at all the examples. January 6th, Roger Stone, Bannon, blah, blah, blah.
Look at all the examples, right?
But if you ask the Democrats, what would they say?
They actually use that as an example of one of my worst predictions.
It's actually one of my best.
But half of the country looks at it and says, obviously wrong.
I don't even need to give you reasons.
It's just obviously wrong. And you can see that.
If you Google it, you'll find that that's...
So what's the AI going to do?
Two opposite opinions.
Who does it agree with? How about my opinion prior to the election?
When I said that...
When I said that if Biden gets elected, there's a good chance he'll be dead in a year.
Now, that's also often counted as one of the worst predictions of all time.
Except, the only way you can turn that into a worst prediction is by ignoring what it actually said.
I said there's a good chance you would be dead.
Indeed, Biden stirred up a potential nuclear war with Russia.
He may be crashing the economy.
He hasn't done anything with fentanyl.
So was I wrong that there's a good chance that you'd be dead?
Well, that's an opinion, isn't it?
If we survived, well, we did survive, most of us, so you survived.
But wasn't there a good chance that you would be dead?
There was a greater chance than under Trump.
Because I think Trump would not have maybe played Ukraine the same way.
Yeah, I think there's some real risk that you would have been dead.
How about my prediction that...
Well, let's make it a little less about me for a moment, even though I like doing that.
All right, so AI is going to be really interesting, because if AI becomes credible, how does it make decisions about whether it's a Democrat or a Republican and all that?
Now, we had a little scary AI situation here where AI was asked...
I think Adam Dopamine asked AI on Twitter if it could spot sarcasm.
And there was an exchange in which Adam, I think, said that inflation would be temporary and transitory.
And the AI correctly noted that that was sarcasm.
And described why.
It said, well, calling the inflation temporary must be sarcasm because it knew that it wouldn't be temporary, or it believed it wouldn't be temporary.
Now, I'm not so sure that AI can spot sarcasm.
I think it spotted that one because there was a difference between what the statement was and what the reality was, and it could check those.
But what if it can't check it?
How would AI know the difference between sarcasm from a Republican and an honest opinion from a Democrat?
Go. Do you think the AI could tell the difference between sarcasm from a Republican who's mocking a Democrat opinion and an actual Democrat opinion?
No, it cannot. And the reason is that the Democrat opinions sound like sarcasm.
Don't they? Don't they?
If a Republican said, well, we can't have rules that say women have whatever rights or don't have rights because we can't determine what a woman is, what would AI say about that statement?
Let's say it comes from a Republican.
Well, it's probably sarcasm if it's coming from a Republican.
But what if exactly the same thing came out of a Democrat's mouth?
Well, we can't tell what's a woman, so this law isn't good.
The AI, would it say it was sarcasm?
Or would it know that the Democrat actually believes that that would be an issue and you should stop everything because of it?
I don't know. I don't think you can recognize sarcasm from actual left-leaning opinions.
Here's the other thing that AI doesn't know, that humans do.
But we're usually wrong, too.
Intentions. AI is bad at reading intention.
Now, it might get better at it, but also humans are bad at it.
Almost everything we get wrong is because we're mind-reading somebody's intention incorrectly.
So, I don't know.
Can AI ever figure out intention if people are programming it and people don't know intention?
And if you don't know somebody's intention, how do you know anything about what they're saying?
You have to make that assumption.
So AI will either have to copy the biggest human flaw, which is imagining we know what people intend, and be as bad as humans at guessing intentions, or it will have to ignore intentions as something it can't deal with, and then it's just going to be stupid.
So I don't know how you deal with that.
That feels like a pretty big obstacle.
All right. Let's talk about ESG. Now, I owe all of you a big apology for not being on this ESG thing sooner.
And, oh my god!
So here's the thing.
If there's a big program that affects the corporate world in a negative way, you need to send up the bat signal and call me a little bit faster than this.
This went on a little bit too far before I got involved.
Now, of course, I'm going to have to shut it down.
You have to give me until the end of the year.
By the end of the year, I should be able to discredit it to the point where it would be embarrassing to be part of it.
So I'll do that for you.
Now, do you think that corporate America could handle me saying unambiguously, this is an idiot idea, it's a scam, and if you're involved in it, you don't look good?
Do you think corporate America could handle that?
Well, it's going to be tough.
Remember, Elon Musk literally has a rule at Tesla that you don't want to do anything at Tesla that would make a good Dilbert comic.
A lot of people have heard that rule, and a lot of people have that rule less formally.
In other words, it's unstated, but you don't want to do something that's going to be mocked in a Dilbert comic.
Let me tell you what ESG is, and then you're going to see how easily I'm going to mock it, because I'm going to go hard at it, and I'm going to start writing today.
So today I'll start authoring a week, at least a week, of Dogbert becoming an ESG certifier.
So let me tell you how this ESG... First of all, what it is.
The letters stand for environmental, social, and governance.
So being good for the environment, socially responsible, and having good governance.
And this started, as I understand it, in about 2005 in the United Nations.
Now the intention of the United Nations was to pressure corporations into being better citizens.
In other words, they wanted corporations to produce less CO2, less pollution, be more humane to employees, and their governance should be something that makes sense.
I assume that the governance includes diversity.
I'm just guessing.
Can somebody confirm that? When they talk about good governance, that's about diversity, right?
Is there something else in the governance part?
Diversity in boards, right?
Okay. So, now from the point of view of the United Nations, do you think that's a good thing to do?
Do you think the United Nations should encourage companies to be more socially progressive?
I do. I do.
I think that's a good pressure.
As long as they're not over-prescriptive.
Would you agree? You don't want them to be, you know, managing the company.
But I do think that having a little bit of organized oversight, somewhat, you know, maybe not getting into their business too much, but keep an eye on them.
See if they're doing things that make sense for society and put a little pressure on them if they don't.
But then there was this next thing that happened.
Here's where all of that good thinking went off the rails.
And do a fact check on me if I get any of this wrong, because I just looked into it this morning, basically.
So BlackRock, a big financial entity, an enormous financial entity. If you don't know how big BlackRock is, let me give you the actual statistics of how big BlackRock is.
Holy cow, they're big.
Oh, that's really big.
Whoa, that's so big.
They're like really super big.
And important. And so they decided that they would add to what's called their model portfolios.
Now, my understanding would be that they have example portfolios of groups of stocks that one would invest in under certain situations.
So perhaps there's a group of stocks that maybe retired people might prefer, or a group of stocks if you're younger, a group of stocks if you're looking for upside potential, another group for dividends and income.
So there might be a reason for the various groups.
And they decided that they would add a group that would be companies that were good in this ESG. So far, so good, right?
That's just good information.
Wouldn't you like to have more information as an investor to know which companies are doing this?
You could either be for them or against them, but it's just information.
So here's about the point where everything goes off the rails.
When the United Nations said, you know, companies should be more progressive.
That part was good.
I like that there's sort of a conscience out there and it's putting a little moral authority on top of the corporations.
That's all good. But the moment it turns into a financial plan, the moment a company like BlackRock can say, here's another reason to...
Are you waiting for it?
Here's another reason to buy stock.
BlackRock turned it into a reason to move your money from where it is to somewhere else.
Every time somebody's in the business of making money on transactions, and they tell you there's another reason to move your money from one place to another, and they get a fee on the transaction, what do you say about a company like that?
You say that they invented these categories as a scam.
If you went to the best investor in the United States, Warren Buffett, and you said to him, hey, Warren, should I be putting some of my money into one of these ESG model funds?
What would Warren Buffett tell you?
No. He'd tell you no.
Because it's not a good idea.
You should probably just put it in an index fund and just leave it there.
Like the 500 biggest American companies and just leave it there.
Just don't do anything with it.
That's what Warren Buffett would tell you to do.
He wouldn't tell you to buy individual companies, and he definitely wouldn't tell you to buy an ESG fund.
I haven't asked him, and I haven't Googled it, but trust me, Warren Buffett is not an idiot, and only an idiot would tell you to use this as an investment tool.
Now, why can a big financial corporation get away with something that looks a little sketchy like this?
Let me say it directly.
The personal investment advice business is all a scam.
There's no other way to say it.
The personal financial advice business is all a scam.
Because it would be easy to tell everybody how to invest in about one page.
How do I know that?
Because I wrote that one page.
And the top investment people in the world said, yeah, that's pretty much everything you need to know.
It's on one page. That's it.
I actually tried to write a book on personal financial investment, and the reason I stopped is because it was done with one page.
Everything else is a scam.
The one-pager just tells you what makes sense.
For example, pay down your credit card first.
Right? That's not a scam.
Pay down your credit card first.
That's just good advice. If you've got a 401k at your company, fund it.
Everybody agrees with that, right?
That's just basic math.
Just do that. If you can afford it.
And then when you get to the point where you've done everything you need to do, you've got your will, you've got insurance if you've got some dependents, etc.
So you've done the basic stuff.
Then you've got some money left over for investing.
That's where they try to convince you that they can tell you where to invest it better than you can figure it out.
Now, if you don't know anything, it's probably better to do what they tell you.
But if you knew a little bit, it would be better to not do what they told you.
You only need to know a little bit to just get an index fund and ignore all the advice.
Now, the exception would be if you've got something special in your life.
Then you might need some professional advice.
But even then, I would get it from somebody who would charge a fee for their advice, not somebody who takes a percentage of your portfolio, which is always a rip-off.
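For anyone who wants the "basic math" behind that one-pager made concrete, here is a minimal sketch with made-up numbers: a hypothetical $100,000 portfolio, an assumed 7% average market return, a 20% credit card APR, and a 1% assets-under-management advisor fee. It's an illustration of the arithmetic, not advice from the episode.

```python
# Illustration only, with made-up numbers -- not data from the episode.

def grow(balance: float, annual_return: float, years: int, fee: float = 0.0) -> float:
    """Compound a balance for `years` years, subtracting an annual percentage fee."""
    for _ in range(years):
        balance *= (1 + annual_return - fee)
    return balance

start = 100_000          # hypothetical starting portfolio
market_return = 0.07     # assumed average annual return
years = 30

index_fund = grow(start, market_return, years, fee=0.0005)  # ~0.05% index fund expense ratio
managed = grow(start, market_return, years, fee=0.01)       # 1% assets-under-management fee

print(f"Index fund after {years} years:    ${index_fund:,.0f}")
print(f"1% AUM advisor after {years} years: ${managed:,.0f}")
print(f"Cost of the advisor fee:            ${index_fund - managed:,.0f}")

# Paying down a credit card is a guaranteed 'return' equal to its APR,
# which usually beats an expected market return -- hence 'pay the card first'.
card_apr = 0.20
print(f"A {card_apr:.0%} card APR vs. a {market_return:.0%} expected market return: "
      f"pay the card first = {card_apr > market_return}")
```

Under these assumptions, the 1% fee eats roughly a quarter of the ending balance over 30 years, which is the point about advisors who take a percentage of your portfolio.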
So, the financial advice business is completely fraudulent.
It's completely fraudulent.
It's a, what, a trillion-dollar business?
It's just completely fraudulent.
And I can say that completely out loud with no risk of being sued.
Do you know why? Because it's true, and everybody knows it.
Everybody who's in the business.
There's nobody in the business who doesn't know that.
I once talked to a personal financial advisor, and I said, you know, you advise your clients what to do with their money.
Is that how you invest your own money?
And he laughed.
He said, no. I advise my clients to buy things that I get a fee for them accepting.
When I invest my own money, I put it in things that make sense, like an index fund.
That's right. A personal financial advisor who only put his own money where it wasn't managed, because that's the best place to put it.
But he told his clients to do the opposite, and he laughed about it.
He laughed about it.
He thought it was funny.
That's the entire industry.
All right. So now that you know that ESG came from the most corrupt industry in the world, the personal finance industry, it makes sense that there's nothing valuable about it.
Now, there are a number of companies that popped up to assign a score to corporations.
Now, how do they get the information to assign the score?
Do you know how they do it? So I guess there are four entities that do most of it.
Four ratings agencies, MSCI, Sustainalytics, RepRisk, and some new one, ISS. So they dominate the market, although there are others.
Do you know what they look at?
They look at what the company tells them.
That's it. That it's based on what the company tells them, and then they add their own analysis, you know, their own opinions from other stuff, and then they come up with something.
As Elon Musk pointed out, Tesla is like somewhere in the middle of the pack, and Elon Musk is like, um, we've done more for civilization than any company ever, and we're in the middle of the pack.
Do you know who is pretty high up in ESG score?
Coca-Cola. Yeah.
Coca-Cola sells poison to children.
And it has one of the highest ESG ratings.
Let me say it again. Coca-Cola sells poison to children.
Now, I'm going to call their sugary drink poison because I don't think there are any health benefits.
And I think most of the medical community would say, you shouldn't give that to children.
Am I right? So I'm going to call it poison based on the fact that the medical community would not say it's a health food.
And children drink it.
So literally, a gigantic company that is poisoning children as its main line of business has a high ESG score.
I guess they don't pollute much.
They must have a diverse board.
So what good is ESG if the children poisoning company has one of the best scores?
And Tesla has to struggle to stay in the middle.
Now, how do you score Elon Musk?
Elon Musk said out loud in public and probably multiple times that he didn't even care if Tesla stayed in business so long as it stimulated the electric vehicle business such that the world could be saved because he thought that was needed.
How do you measure that?
He literally, he bet his entire fortune at one point to make the world a better place, and it's in phase one of accomplishing it.
In phase one, it doesn't look so good.
Because in phase one, people are saying, this electric car is expensive.
We needed these government subsidies.
And people say, you haven't figured out what to do with the batteries when you're done with them.
How are we going to get all this electricity?
It's not really until phase two or three that the Tesla-Musk strategy would even pay off.
Am I wrong? You don't think that Elon Musk knew that the first Roadster was not exactly green, right?
It wasn't the greenest thing in the world.
He had to know that.
Of course he did. But you do things wrong until you get to right, right?
So how does ESG capture the fact that you might have to do something wrong for 20 years before the market and competition get you to the point where it makes sense economically?
There's no way that could get captured in anybody's ratings.
So it tends to be a totally subjective thing.
Let me give you a similar situation.
The house I'm sitting in right now, I largely designed and had built for myself.
Because it was going to be a larger house than neighboring houses, I knew there would be a lot of scrutiny, and there was.
Neighbors got very involved, with opinions about what they wanted in the review and how big the house should be, etc.
As part of my defense, I designed it to be the greenest house in all of the land, at least the land around me.
So in probably a three-city area around me, I designed the greenest house, and it had a score. There's a rating system called LEED, L-E-E-D, and that's how you get points for things.
Let's say you get points for recycling your waste.
You get points for having solar panels.
You get points for insulation.
So you get points for a whole bunch of things.
And I had the highest LEED score of all time.
So what would you conclude?
I had the highest LEED score of all time.
So I should get like an award or something, right?
Except, do you see anything wrong with that?
Did I mention that I built the biggest house in the area?
There's no such thing as a big green house.
If it's a big house, it's not green.
If I wanted to be green, I would live in a little house.
It doesn't matter how many LEED points my fucking gigantic house gets.
It's the worst...
It's the worst insult to the ecology of my town of anybody ever.
Nobody has assaulted the environment more aggressively than I have.
I put a big man-made structure where there had been a small one.
There's no way that I helped the environment.
No way. I did the best I could with what I had to work with.
I felt I had some responsibility to do it the best I could, and I did.
So it was the best I could, and I spent a lot extra to get that.
A lot. I spent a lot extra.
But somebody looking at that data would say, well, there's somebody who's a good role model.
He's green. No.
No, I'm a terrible role model.
Do not do what I did.
Build a house that's way too big.
So it's so easy for data to mean the opposite of what the data says.
That would be a perfect example.
Similar to my bookstore example, if my new book only sold three copies, it's a failure.
You only sold three all month.
No, that was 100% of the books you had.
So the same data, three books a month, could be used to show that the book is a total failure or a huge success.
Same data. Is my house the greenest or the least green?
Same data. Same data.
You could have either opinion.
So what's the AI gonna do?
How the hell does the AI make a decision in a case like that?
It's purely subjective.
Purely subjective. Alright, so ESG is a scam from the financial industry.
They would like you to think that there's one more reason for moving your money, because whenever you move money, they make money on the transactions.
So ESG comes from the worst possible place.
It comes from a scam industry, the biggest one, the biggest scam industry, and is built from a scam.
So does it help anybody?
You can depend on Dogbert starting a ratings agency.
So he'll be the fifth of the big ratings agencies.
I might make his rating available to anybody, but you can buy these ratings.
They're very affordable.
In fact, I'd be surprised if any of these major ratings agencies don't have some people who work for them that have some connections, some connections to the people they rate.
I'm just wondering. Do you think that maybe if somebody made a certain purchase or donation or...
Do you think there's anything that a big company could do to maybe influence the rating they got?
Yeah, probably.
Probably. You know that if a technical magazine names a company like, you know, the best company...
There's a good chance that company advertised a lot in their publication.
You all know that, right? That's like a real thing.
That's not just a joke thing.
If you advertise a lot, you'll get called company of the year by the people who are the beneficiaries of your advertising dollars.
So ESG has no standard.
And it came from a scam industry, the biggest scam industry, financial advice.
And it's now being imposed on companies who are too cowardly to avoid it because it's easier to just sort of go along with it, I guess, than it is to, you know, disown it.
You never want any third party to assert its ability to manage you.
It's the worst thing that could happen.
It's basically backdoor socialism, wouldn't you say?
I heard somebody label it as fascism, but I think it's backdoor socialism because it's causing corporations to act with more of a social conscience than they would have otherwise, although I suspect they're all gaming the system.
I think what's really going to happen is that if you happen to be in a business where it's easy to meet these goals, then you do.
And if you're in a business in which it's hard to meet the ESG goals, then you don't.
I think that's all that's going to happen.
Is it an accident that a software company can do good?
No. Because software doesn't really pollute that much, right?
Now, let's say you start a startup.
Here's another example. Let's say you do a startup, and the ESG people look at you and say, you know, you're just moving software around.
You don't even have your own server farm.
You're just using Amazon's servers.
So your company is green as heck because it's just people sitting at home.
Maybe you don't even have a building.
Maybe it's like WordPress where everybody just works at home.
The ultimate green situation.
No commuting. No commuting.
No building needed.
You just work at home. You're just moving software.
Boom. So you would get a good ESG score if, let's say, the governance was also diverse.
Right? But what about that server that you're using that's on Amazon's ledger?
If you use Amazon's data center, does that go on Amazon's bad list because they're the ones with all the electricity being used?
Or does it go on the startup's ledger because they're the ones who cause that to have to exist?
See the problem? If you assign it to both of them, that doesn't seem fair.
If you assign that expense to neither of them, that doesn't seem fair.
So you see that there is no way to have a standard.
And if you had a standard, you couldn't manage to it because it's too subjective.
You'd have all these decisions about who is it who really caused the data center to exist, the company that built it or the company that's using it.
It could go either way.
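To make that attribution problem concrete, here is a toy sketch with made-up numbers showing three ways to book a shared data center's emissions between the provider and a tenant startup; the names and figures are hypothetical, not from the episode.

```python
# Toy sketch with made-up numbers: three ways to attribute a shared
# data center's emissions between the cloud provider and a tenant startup.

facility_tons = 1_000   # hypothetical annual CO2 for the whole data center
tenant_share = 0.02     # the startup uses 2% of the facility's capacity

# Option 1: book everything to the provider -- every tenant looks perfectly green.
provider_only = {"provider": facility_tons, "startup": 0.0}

# Option 2: book each tenant its proportional share, leave the rest with the provider.
tenant_tons = facility_tons * tenant_share
proportional = {"provider": facility_tons - tenant_tons, "startup": tenant_tons}

# Option 3: book the full amount to both -- simple, but the emissions get counted twice.
double_counted = {"provider": facility_tons, "startup": facility_tons}

for name, split in [("provider-only", provider_only),
                    ("proportional", proportional),
                    ("double-counted", double_counted)]:
    print(f"{name}: {split}, total = {sum(split.values()):,.0f} tons")
```

The proportional split keeps the total honest, but somebody still has to pick the allocation rule, which is exactly the subjective part.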
I see you comparing it to the social credit score, but I think...
I mean, I get the analogy, but I don't necessarily think it's a slippery slope to individuals.
I think the individual social credit scores are going to happen to us independent of ESG. I mean, I think it's going to happen, but not because of ESG. So, at this point, I would say if you're a CEO and you're taking ESG seriously...
The only excuse would be you're coincidentally good at it.
So if I were the CEO of Coca-Cola and my company happened to be rated highly, I'd say it's the most important thing in the world.
And I'd tell my competitors they'd better get going.
They'd better spend a lot extra, my competitors, to try to get up to their ESG goals.
So I think you can expect the CEOs that either know they can manage to it easily or coincidentally have good scores, they're going to say it's wonderful.
The ones who don't have the score that they want, let's say the Elon Musks, are going to say it's bullshit.
So there's your standard.
The people who coincidentally benefit from it say it's genius.
The people who don't say it's bullshit.
And it's the bullshit people who are right.
Now, what I'm going to do is I'm going to mock it sufficiently in all of its imperfections.
So you've got a comic for each imperfection.
And that becomes part of the permanent record.
And I'm going to try to influence AI. Because if you remember, AI thinks I'm more credible than Joe Biden on economics, and this is sort of an economics question, isn't it?
So on this question of economics, I'm going to create a public record of mocking it, and I'll create that public record of mocking it with your help, because you're going to need to comment on it and retweet it.
But if we do that enough, then when AI is asked, is ESG a good idea, it's going to look for all the biggest hits.
In theory, a Dilbert comic that becomes viral would be toward the top of the hits.
And AI would say, huh, this looks like an idea that's been discredited, but some people still use it.
That's where you want to get.
You want to get to the point where even the AI does a search and says, huh, some people are using it and they like it.
I can see why they like it, but it's also been discredited as basically a scam.
And I want to make sure that any CEO who decides that they want to, let's say, debunk it or go against it, I want to make sure that they have ammunition.
So I'm basically just filling the clip for every CEO who wants to put a bullet in this thing.
If you want to shoot ESG dead, I'm going to give you at least five or six missiles to take it out.
You know, something you could put on your PowerPoint presentation.
Something you could forward to a reporter who asks you why you don't like it.
That sort of thing. So let's do this collectively.
We'll get rid of ESG. It never needed to exist.
Even though I do, I do agree with its premise.
So let me say that as clearly as possible.
I do think companies should try to protect the environment, okay?
I do think they should have a social conscience, and I do think that they should look for diversity in their governance.
You don't want to overdo any of it, right?
The problem is overdoing everything.
Here's a perfect example.
Management is good.
You couldn't really run a company without management.
Micromanagement is bad.
So everything good is bad if you take it too far.
That's the trouble with ESG. And it's the main thing I do when I mock stuff.
I don't say it's a bad idea.
I say the way it gets implemented in the real world, people didn't foresee.
So it became a bad idea.
All right. ESG scam.
Yes. Yes. Yes, indeed.
All right, and that is all I wanted to say.
Probably the best thing you've ever seen in your life.
How many of you knew what I told you about ESG? Some of you knew it was a nonsense corporate thing, but did you know that it was actually born out of the most corrupt market, to serve that corrupt market?
So you all knew that BlackRock was behind it.
So this is a pretty well-informed group.
So again, you have my sincere apology, because I should have been on this a lot sooner.
But I'll try to make up for it.
I'll try to make up for it. Now, this is a good example of what I call the collaborative intelligence that we've created.
I feel like collaborative intelligence would be superior to AI for a while.
Because in part, AI is part of the collaboration.
So what I call collaborative intelligence is that I act as sort of maybe a host and I start with some starter opinions and then you fact check them and fix them as they go until they evolve into something a little stronger.
So here's another good test of the collaborative intelligence: if you're all on the same page that ESG needs to die, and I've given you a mechanism to kill it, then you can participate by tweeting my comics around.
By the way, it'll be around four weeks.
So check back with me in about a month.
That's when the comics that I'll write today should be running.
The actual AI might already be a lot smarter.
How would you know? AI is definitely smarter in lots of things, but again, those things can be underneath this model.
So in other words, one of you could fire up the AI and say, Scott, you said X is true, but I just checked with the AI, and the AI says you're wrong.
So that would just be part of the collaboration, right?
I think this is the model for figuring out a complicated world.
Collaborative intelligence.
In other words, the external forces are changing me in real time.
And everybody can see the process.
It's all transparent. What could go wrong?
Well, it's better than what we're doing now.
How big is the house?
Well, it depends what you count.
So I have an unusually large garage, because I wanted to put my man cave and ping pong table in there.
So the garage is oversized, and that's often not counted, because when you talk about space, you're usually talking about the indoor, what do you call it, the conditioned space, not the unconditioned space.
But if you counted the fact that I have an indoor tennis court...
Sorry. That's the reason I built the house, by the way.
So I built the house because my main hobby at the time was tennis, and it was hard to get a court and have an indoor place to play and all those things.
And I didn't want to die of sun exposure, etc.
So about half of my house is a tennis court.
But roughly speaking, if you count the oversized garage, which normally you don't, and if you count the tennis court, which is a special case, plus the indoor living, it's about 19,000 square feet.
So 19,000 roughly.
But keep in mind, the reason it's green is that I don't condition the tennis court.
I put an air conditioner in there, but I insulated it so well you don't need it.
It actually doesn't need to be air conditioned or heated any time during the year because of just insulation.
And the garage is big, but garages are cheap.
Can you cover all you did to make it green?
Well, I'll list a few things.
So I've got massive solar panels.
I have all the best insulation.
I have the best window insulation.
I'm oriented sun-wise.
I have the right orientation so that I'm not letting too much sun in.
I've even got purchases such as my water heater.
My water heater is one of the greenest ones.
I forget what it is, but basically it's chosen for its efficiency.
All of my major appliances are LEED certified, meaning that they're greener than most.
It's normal stuff, but I picked the greener of the normal stuff.
I don't use any water reuse.
Link tweeted by Lisa Logan.
I don't know where to find that.
What was that about?
I looked into Tankless, and I do have some Tankless Instant Hot, but it wasn't...
Tankless wasn't quite the solution for my house.
All right.
Water reuse is illegal in California.
The lights are mostly LED. That's true.
My roof is a Spanish style.
And then below the surface of the roof, there's a solar reflector.
So my attic has the solar reflector built in.
I also have a whole house fan.
So I've got a fan in the attic that can pull the hot air out without using AC. Oh, I turned into a droid voice.