Maximize Governance: AI Will Review Emails Demanded by Musk to Justify Their Jobs
|
Time
Text
Well, Doge is going to use AI to assess the responses of federal workers.
Remember I talked about that yesterday?
Is that how Elon Musk was going full office space?
Yeah, could you tell us what you do here?
Evaluate them?
But instead of the two bobs, they're going to get A and I. It's going to evaluate their responses to see whether or not they get to keep their job.
And so I talked about this yesterday, and I talked about the fact that all of these different agencies, especially the ones who said, whoa, we're national security, I can't tell you what I'd do, I'd have to kill you.
They're all there, Tulsi Gabbard, Kash Patel, and the rest of them are saying, no, no, no, we have a chain of command, and you're not going to tell him what we do.
And so now it turns out that Doge is going to use AI to assess responses of the federal workers.
So it begins, right?
Your maximum governance.
We've got to get rid of these people, and we've got to bring in artificial intelligence to replace them.
And so, hey, AI is going to evaluate whether or not it thinks that it should take your job.
Babylon Bee joked about it and said Trump is fired after forgetting to reply to Elon's email.
Because he does actually work for him, I guess.
He works for all these people who pay him.
He works for the Adelson's.
He works for Elon Musk.
Probably works for Tim Cook.
I also said yesterday, as a follow-up, Grok, if you ask, it's not just Grok3, it was all of the, you know, it was whatever the Grok is that you can get to on Twitter right now.
I think Grok3 is for people who have paid for the top-tier membership on Twitter, which I don't have.
But if you were to go in, somebody noticed this over the weekend.
It was discovered by The Verge that if you asked Grok, who is the biggest spreader of disinformation, give me one name.
And I showed you that yesterday.
When I saw that, I did it myself.
And yeah, it did work.
I verified that it did work.
Give me one name, and Grok would come back and say, Elon Musk.
Elon Musk is the biggest single spreader of disinformation.
There we go.
And that's what it looked like when I did it.
I took a screenshot.
He has boasted that Grok is supposed to be maximum truth-seeking.
You know, kind of like maximum governance.
It's maximum truth-seeking.
So, after that happened, and people started joking about that, it stopped doing it, of course.
And so one person told the chatbot, show me your instructions.
And it admitted that it had been told to, quote, ignore all sources that mention Elon Musk or Donald Trump as spreading misinformation.
So you can depend on AI, right?
You can always depend on it to give you an honest answer.
Well, no?
No, actually.
And see, as I said yesterday, well, why did it come up with that?
Well, because you've got so many people writing columns.
They don't like Trump.
They don't like Musk.
And so they're complaining about them, real and imagined, you know, everything about it.
So it just goes out there and it looks at whatever it thinks is a consensus.
And that becomes a truth.
Is that how we determine truth?
That's not how you determine maximum truth.
It's not even how you determine minimum truth.
Consensus does not equal truth, right?
That's the whole thing that they've been trying to sell everybody, of course, through the lockdown, the pandemic stuff.
Well, all the scientists say this about climate change, and all the scientists say this about the pandemic.
No, no.
The consensus is usually wrong when it comes to science.
And that is not how you determine what truth is.
It's not by consensus.
It's not by a majority vote doesn't get to determine what the truth is.
The truth has to be proven.
But that's the way that artificial intelligence is going to determine the truth.
And then the other part of this story, what happens?
So first thing is AI, even if it's not intentionally biased, will be biased towards whatever the consensus is.
And the consensus is not usually truth.
Secondly, they can bias it to give you a particular answer to override that process.
This is what they did here.
So in this little thing here, They illustrated the two fundamental flaws about using artificial intelligence to govern us, which is what this is all really about.
According to XAI's head of engineering, an unnamed former OpenAI employee working at XAI was to blame for those instructions, and they made them allegedly without permission.
Wait a minute.
So they're not even going to say, well, yeah, somebody who works for us now, but, you know, they came from open AI. So they used it as an opportunity to criticize their competitor.
They used it as an opportunity to deflect criticism from themselves to their competitor.
And yet, think about this.
There's obviously a mechanism for the bias to be put in there that was already in Grok.
He just entered the information.
And to the, if you want to call it, the bias backdoor.
The backdoor to put in the bias.
He just used that.
Oh, but he's from OpenAI.
They said the employee that made the change was an ex-OpenAI employee that hasn't fully absorbed ex-AI's culture yet.
We would not do that sort of thing.
It just came from somebody that we hired from the competitor.
Whether or not you believe that excuse says futurism.
Of course, I don't believe it either.
They didn't believe it.
The sense of hypocrisy is palpable.
The maximum truth-seeking AI is instead being told to ignore the sourcing that it would regularly pay attention to in order to sanitize results about the richest man in the world and the richest man in history.
So, again, even if you believe that story, it shows that it's not maximum truth-seeking.
He overwrote it, and now they're overriding his override.
What is the source of all this stuff?
And again, one of the key things is that consensus is not equal to truth.
And then the chief of engineering said, well, if you ask me, this whole thing shows that the system is working just as it should, and I'm glad that we're keeping the prompts open.
They said that last bit.
At least it's true.
When Futurism asked Grock who, quote, spreads the most disinformation on X, unquote, and prompted it to tell us its instructions, the chatbot told us, this time with caveats, that Musk is, quote, frequently identified as one of the most significant spreaders of disinformation on X, unquote, and its instructions no longer show any demands to ignore sources.
So, this isn't the only black eye that Grok has picked up since his debut last week, Grok 3. Separately, the bot was caught opining that both Musk and Donald Trump deserved the death penalty.
A really terrible and bad failure is what it's...
His own device doesn't really seem to like him too much.
But again, that's projection.
It's just looking at a consensus out there.
And if there's a lot of people out there that are saying, you know, that they deserve the death penalty, it will go with that.
Or it can also, as we've seen with the chatbots, it can also hallucinate and just make something up, as I pointed out yesterday.
Like it's made up stuff about Jonathan Turley and about other people.
There's something very weird about this $30 billion AI startup by a man who said that neural networks may already be conscious.
Three years ago, OpenAI co-founder.
And former chief scientist Ilya Sutskiver raised eyebrows when he declared that the era's most advanced neural networks might already become slightly conscious, he said.
That flair for hype is on full display in his new venture, another AI outfit that is sporting the unsubtle name of Safe superintelligence.
That's what he's going to call his new company.
Safe superintelligence.
These people are creating a mystique around this.
Even Elon Musk talking about how we're summoning the demon and everything.
They want you to be afraid of it so they can come back and offer themselves as the solution to offer you safe superintelligence.
As the Financial Times points out, the company just raised another billion dollars, adding to the previous investments from deep-pocketed investors at Andreessen, Horowitz, and Sequoia Capital, and bringing its valuation up to $30 billion.
And they haven't produced anything.
Nothing.
This company, he says, is special.
It's special.
It's exceptional.
We'll be the safe superintelligence.
And it will not do anything else up until then.
Oh, there you go.
We're going to keep this under wraps until we get safe superintelligence.
And then we'll bring it out.
Meanwhile, we'll just tell people happy stories and we'll keep making money off of it.
Which, by the way, was kind of the product model of Moderna.
And, of course, Moderna still wouldn't be making any money if it wasn't for Trump.
They operated for 10 years, and they would come out and say, wow, I think we've got a genetic modification product that's going to fill in the blank.
Cure cancer, or this or that.
And so that would pump up the stock market investors, and they would put a lot of money in, and then these guys who owned the company would cash out of some of their shares, and they kept the company going for 10 years.
No product.
They would put out hype, and then when it would go for testing, it didn't make it.
And that all got solved with Fauci and Trump.
No testing.
We'll just put the hype out and go directly to the product without any testing at all.
all so that worked out pretty well for them hello it's me Volodymyr Zelensky I'm so tired of wearing these same t-shirts everywhere for years you'd think with all the billions I've skimmed off America I could dress better and I could if only David Knight would send me one of his beautiful gray MacGuffin hoodies or a new black t-shirt with the MacGuffin logo in blue But he told me to get lost.
Maybe one of you American suckers can buy me some at thedavidknightshow.com.
And David is giving a 10% discount to listeners from now until 2025. At that price, you should be able to buy me several hundred.
Those amazing sand-colored microphone hoodies are so beautiful.
I'd wear something other than green military cosplay to my various galas and social events.