June 2, 2025 - Danny Jones Podcast
03:12:23
#306 - Harvard Scientist Blows Whistle on Google's Mind Control Research | Dr. Epstein

Dr. Epstein exposes Google's alleged mind control, citing 2013 experiments showing search result manipulation shifted voter preferences by 63% and claims the company manipulated 2016 and 2020 elections by millions of votes. He details threats against his team, a 2009 internet shutdown timed to avoid market scrutiny, and antitrust lawsuits he deems strategic shams. The discussion expands to AI risks, predicting a 2029 singularity, and introduces Neural Transduction Theory as an explanation for UFOs and psychic phenomena, urging listeners to adopt digital hygiene via his resources. [Automatically generated summary]

Transcriber: CohereLabs/cohere-transcribe-03-2026, WAV2VEC2_ASR_BASE_960H, sat-12l-sm, script v26.04.01, and large-v3-turbo

Suspecting Google Manipulation 00:13:13
Thank you, Dr. Epstein, for being here.
I appreciate you coming all the way from San Diego.
I'm excited to talk to you.
And you showed me your, this is your monograph.
This is new?
That's coming out.
That's currently under peer review.
Oh, it's under peer review.
Okay.
That summarizes 12 years of research that my team and I have done on Google and other tech companies and their ability to influence elections and indoctrinate children and affect our opinions, our thinking, our attitudes, our purchases, you name it.
So that's the summary of the whole thing.
How long have you been working on that?
How long have you been putting that together?
More than 12 years now.
Oh, wow.
That's fascinating.
Yeah.
Amazing.
For people who aren't familiar, how did you, with your background, get so into this world of Google and the curation of the content that goes on Google and YouTube and all this stuff, and how it manipulates the human mind, culture, elections, everything?
Well, I was trained as a research psychologist at Harvard.
I was actually B.F. Skinner's last doctoral student.
And some people don't know just how broad psychology is.
It's really broad.
And one of the areas that psychologists that research psychologists study is called influence.
So you've probably heard of a best-selling book called Nudge, about how you can do subtle kinds of things that will alter people's behavior.
And so you've heard of probably the Stanford prison experiments or the obedience experiments that were done at Yale.
All that is studying influence, how you can influence people.
So back in 2012, my website got hacked, which is not a big deal, but I was notified by Google in I think eight or nine different emails.
So that made me curious because I'm wondering who made them the sheriff of the internet.
Why am I getting notified by Google and not by some government agency, some nonprofit organization?
But then I got more curious because I've been a coder since I was a teenager.
And I saw that they were blocking access to my websites, not just on google.com, which makes sense.
That's their product, as it were, or their platform, I should say.
We're the product.
But the point is, it's not just on Google; they were somehow blocking me on Safari, which is part of Apple.
How did they do that?
They were blocking me on Firefox, which is run by a nonprofit organization.
How could they do that?
So over time, I started in 2012, just started really digging in and trying to find out how Google really worked and just how much of an impact they were having and whether everything was above board or were some things hidden.
In early 2013, I started doing what I'm good at, which I've been doing for almost 50 years on many topics in psychology.
I started doing what are called randomized controlled experiments.
Because at that point, I began to suspect that Google search results could be used to manipulate people.
So what if, in other words, when they're pulling from among billions of links out there, they're pulling links that make one candidate look better than the other, or one product, one dog food, one guitar brand, and they're suppressing links which make the other candidate look good?
could that actually impact people's thinking, people's votes even?
And I made a prediction, which scientists always do.
I made a prediction.
I thought, I bet you I could shift the voting preferences of undecided voters.
So that's key here.
I'm always starting with undecided people.
And I could shift it by 2% or 3%, I thought, just by favoring one candidate or the other in search results.
In other words, my team and I had to build a simulator that works just like Google.com.
It was called Kadoodle.
Kadoodle, yeah, yeah, I saw it in your book.
But it worked exactly like Google.
And I thought, okay, one group gets search results that favor one candidate, the other group gets search results that favor the other candidate, and then there's a control group where neither candidate is favored. And I thought, basically, I could shift voting preferences by 2 or 3 percent.
That was my prediction.
I have never been so wrong because that first shift we got in that first experiment was 48%, which I thought was a mistake.
I thought that was impossible.
That's after one search.
So they're undecided voters.
The only difference, they're all seeing exactly the same material.
They can click on any of those links, go to the webpage.
So they have access to exactly the same material.
The only difference was the order in which we were presenting the material.
And we got a shift of 48%.
And don't forget, we're using random assignment.
So that means we can push people in either direction any way we want.
So anyway, I thought it was a mistake.
We repeated it.
We got a shift of, I think, 63%.
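The design he describes, randomly assigning undecided voters to a pro-A page, a pro-B page, or an unbiased control page, can be sketched as a toy simulation. The attention-decay model, the noise term, and all numbers below are invented assumptions for illustration, not parameters or results from the actual experiments, so the percentages it prints are not the study's figures.

```python
# Toy simulation of a randomized controlled experiment on search-ranking
# bias, loosely modeled on the three-group design described above.
# All parameters here are illustrative assumptions, not study values.
import random

random.seed(42)

def run_group(bias, n_voters=500):
    """Simulate undecided voters each browsing one search-results page.

    bias: 'A', 'B', or None (control). Biased pages rank the favored
    candidate's links first; attention decays with rank position.
    """
    votes_a = 0
    for _ in range(n_voters):
        if bias == 'A':
            page = ['A'] * 5 + ['B'] * 5   # pro-A links on top
        elif bias == 'B':
            page = ['B'] * 5 + ['A'] * 5   # pro-B links on top
        else:
            page = ['A'] * 5 + ['B'] * 5
            random.shuffle(page)           # control: unbiased random order
        leaning = 0.0
        for rank, cand in enumerate(page, start=1):
            attention = 1.0 / rank         # top results get the most attention
            leaning += attention if cand == 'A' else -attention
        leaning += random.gauss(0, 1.0)    # individual noise: they start undecided
        if leaning > 0:
            votes_a += 1
    return votes_a / n_voters

pct_pro_a   = run_group('A')
pct_pro_b   = run_group('B')
pct_control = run_group(None)
print(f"pro-A page: {pct_pro_a:.0%}  control: {pct_control:.0%}  pro-B page: {pct_pro_b:.0%}")
```

Even with this crude model, the random-assignment logic shows why the order of identical links can move the two biased groups in opposite directions while the control group stays near 50/50.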
And then we began to learn that we could hide what we were doing.
Even if it was blatantly obvious that the search results were biased.
Most people didn't see the bias.
And we thought, what happens if we just fool people a little bit?
So it's candidate A, candidate A, candidate A, candidate B, candidate A, candidate A. Get it?
So what if we just mix it up a little bit just to get people confused?
That's called masking.
We found that we could get these enormous shifts in voting preferences with no one having the slightest idea that they're looking at biased search results.
No one knowing that they're being manipulated.
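The masking pattern he describes, an otherwise one-sided ranking with a couple of opposing results slotted in, can be written down directly. The positions chosen below are arbitrary illustrations, not the ones used in the actual experiments.

```python
# Sketch of the "masking" idea: a blatantly one-sided ranking is softened
# by mixing in a few results for the other side, so the bias is harder to
# notice while the favored candidate still dominates. Purely illustrative.
def masked_ranking(favored, other, n=10, mask_positions=(4, 8)):
    """Build an n-result page favoring one candidate, with the other
    candidate's results slotted into a couple of mid-page positions."""
    page = []
    for pos in range(1, n + 1):
        page.append(other if pos in mask_positions else favored)
    return page

print(masked_ranking('A', 'B'))
# ['A', 'A', 'A', 'B', 'A', 'A', 'A', 'B', 'A', 'A']
```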
So we got to that point.
This is early 2013.
And we said, this is important.
And we wrote it up for the Proceedings of the National Academy of Sciences, which is one of the top scientific journals in the world.
It's very, very hard to publish there.
By this time, we had done more experiments.
We had done a nationwide experiment with people from all 50 states, and we had done a big experiment in India, right in the middle of a big election of theirs; it had more than 2,000 people in it.
And this got published in what we call lovingly PNAS, Proceedings of the National Academy of Sciences.
And the last time I checked, that very technical scientific article had been accessed or downloaded from their website more than 250,000 times, which is extremely rare for a scientific piece.
Extremely rare.
I mean, literally, you can publish a piece on discovering gravity waves, and I know the guy who did that, and you'll end up with a few thousand.
But this got enormous attention.
It got written up in the Washington Post and other places because it's scary, because it means that Google has the power to shift possibly millions of votes in elections, election after election, not just here, but around the world, with no one knowing that they're doing it.
Do you think this was their plan from the start?
Or do you think this was some sort of a natural evolution of their product?
I think that they've done a lot of things over the years which start out very innocently.
Search suggestions, I think, started very innocently.
They're not innocent anymore.
And we know because we've studied that now and quantified that.
I think a lot of things, they just start out innocently.
And then someone or other realizes that they can use this to change people's thinking and behavior.
And they have very, very strong ideals, very strong values at Google, and they're very proud of that.
I actually happen to share a lot of their values.
So part of me is always cheering for Google as I'm studying them and discovering what terrible things they're doing.
So what they have done, and this is a whole separate story, which we can talk about later if you want to, but what they have done more and more is use very powerful techniques of mind control to control people's thinking and behavior, literally around the world, on a massive scale.
And they know exactly what they're doing.
I mean, for years I was just doing experiments and discovering these techniques and very skeptical, always very skeptical, and yet then replicating it or other labs have replicated these same findings.
But the point is that over time, I realized this is a really, really serious problem.
And over time, my findings were also revealed by whistleblowers, leaked documents, leaked emails.
In 2018, for example, there was a leak of emails from Google to the Wall Street Journal, in which Googlers, that's what they call themselves, were discussing how they could use ephemeral experiences, turns out that's a very important phrase, ephemeral experiences or ephemeral content, to change people's views about Trump's travel ban.
And when I saw that, my head spun because I realized I had been studying the power of ephemeral experiences to impact people for years at that point.
By that point, I've been studying it for five years.
In what context were you studying it?
In the context of just finding these techniques. That first one, using search results, is now called SEME, the Search Engine Manipulation Effect, S E M E.
And I had been studying these techniques one after another.
Search suggestions, for example: just by fiddling with the search suggestions you flash at people, we showed in experiments that you can turn a 50/50 split among undecided voters into a 90/10 split with no one having the slightest idea they've been manipulated.
But here I'm seeing in language being used in the company, their employees saying, how can we use ephemeral experiences to change people's views?
On the travel ban.
On this case, on the travel ban.
But our experiments were showing more and more and more that you can use these techniques to change people's opinions about anything, anything at all, not just candidates in an election, but anything at all.
So the thing just, it just grew.
It grew and grew because we were finding more and more that these techniques exist.
We had to give them names because they're new.
They've never existed before in human history.
It could take a year or two or even five years to really understand one of these.
And then we quantify them.
So when we quantify them, that means we can calculate for a given election or a given product, we can calculate how big a shift Google can produce.
And it gets bigger and bigger, but as it gets bigger, it also gets more scary because we start to run into pushback, I suppose you could say.
What kind of pushback?
Well, not much until I testified before Congress for the first time about these issues.
That was in 2019.
Quantifying Political Shifts 00:03:07
Right.
Which was probably the most disastrous year in my life, all starting with that testimony, with my congressional testimony.
As one of my board members said to me at one point, well, they left you alone before that, but once you testified before Congress, they're not going to leave you alone anymore.
Anyway, that same summer that I testified, I did a private briefing for a bunch of state AGs, attorneys general, about all the work we'd been doing and about the potential power that these companies have.
And so there were very prominent AGs there from around the country.
This was at Stanford University.
And then I went out in the hallway and just was hanging around.
And one of these people came out and said, Dr. Epstein, I don't mean to scare you.
He said, but based on what you've told us and the work you're doing, he said, I suspect that you're going to be killed in some sort of accident in the next few months.
And obviously, I wasn't because I'm still here, but my wife was.
My beautiful wife, Misty, was killed a few months later in a car accident that was very suspicious.
For me, I'm always on the go or in an inconvenient situation where I don't have the ability to whip up a well balanced meal.
And that's where Huel Black Edition Ready to Drink fills the gap.
I keep one in my truck and in my gym bag.
And I never need to choose between fast food or starvation again.
Today's sponsor, Huel, spelled H U E L, has created their Black Edition Ready to Drink that is perfect for inconvenient hunger.
This makes meals on the go convenient, portable, and easy to store without any prep.
And new customers can get 15% off at Huel.com by using my code DANNY.
It's a complete meal in one bottle with 35 grams of protein, 27 vitamins and minerals, high fiber, and low sugar.
So you'll feel full, focused, and ready for your day.
No prep, cleanup, just grab and go.
Hule's already sold over 500 million meals around the world, and now it's your turn to try it.
It's incredibly affordable, especially when you compare it to fast food alternatives.
Plus, you don't have to waste time waiting in line to order.
It tastes like a milkshake, but it's not overly sweet.
They come in vanilla or chocolate, which is my personal favorite.
And after drinking one of these things, I really do feel full.
It's not just a burst of carbs or a protein brick.
It's the perfect balance for me.
It was designed by experts to provide all the ingredients the body needs from a meal.
It's cheaper than a burger.
So for me, it's the perfect choice for a balanced meal on the go.
Start saving time and money without compromising your nutrition today with this exclusive offer for new customers for 15% off using my exclusive code, DANNY, at HUEL.com.
That's 15% off for new customers using my exclusive code, DANNY, D A N N Y, at Huel.com.
Please see our description below for the terms and conditions.
Skip the stress, not the nutrition.
Try Huel today for complete bottled nutrition.
Exclusive Huel Discount Offer 00:12:35
I mean, the vehicle was never examined.
The vehicle, the little truck I had bought her, was never examined forensically.
It disappeared from the impound lot.
I was told it had been sold to a parts dealer in Mexico without my permission.
It just disappeared.
This was how long after this guy said this to you?
About three or four months.
And who was the guy who said this to you?
He was and still is one of the state attorneys general.
I mean, I've had other warnings too.
I've had other warnings.
Around that same time, I had a reporter from DC call, first asking a bunch of questions about the work and what was the latest and all that stuff.
And then a couple of days later, he called again and he said, I just got off the phone with a woman who I believe to be head of public relations at Google.
And I think you should know what happened.
And I said, What happened?
He said, Well, I started asking about your work, and she started screaming at me.
He said, That's very unprofessional.
I've never seen that before.
He said, And I just want to tell you two things.
He said, Number one, you have their attention.
And number two, if I were you, I would take precautions.
That's scary.
We've had now six incidents in total.
What are the other incidents?
Well, the last one, by coincidence, was right after I had had lunch with probably the most prominent state AG in the country.
That's Ken Paxton from Texas, who has an office with 800 attorneys, and they've gone after Google over and over again.
And they're having an impact.
They're having a big impact.
But right after he left, so there were still some people left and it was in a restaurant.
Oh, I have the.
What do you got?
Well, you can use your own judgment.
I'll tell you exactly what happened.
Okay.
And I even have an image.
Do you want to text it to Steve?
He can pop it on the TV for us.
Oh, yeah.
Oh, that's a really good idea.
Yeah, yeah.
We can have a full screen on the.
Oh, and Steve's already in here.
Yep.
Oh, that's a really good idea.
That's what I'm here for.
Well, this is going to be one of the creepiest things that's happened so far.
Okay, this is what you talked about on Rogan, right?
Yes, and I didn't realize.
Now you see that.
So here we go.
This looks like a sharp metal needle.
Yeah, but look at the coating.
And look how the blade at the end has been sharpened to an extreme degree.
So, General Paxton, AG Paxton, leaves with his staff.
My staff is still there, and one of them comes over to sit next to me.
This is right next to me because I'm on a bench seat in the restaurant, and this is my computer case for my laptop.
And so she just slides in next to me and then she screams.
Screams like a real scream, like out of a movie.
And we all go, What's going on?
And she goes, Look.
And there's this thing sticking out of her thumb.
It's gone a half inch into her thumb.
It was sticking out of the computer case.
Now, this is the sixth incident so far.
So, where do you think that thing could have come from?
Someone put it there, obviously, when I did not have control over my computer case.
And what is it?
No one knows.
You still have it?
I have people in intelligence.
Yeah, but we didn't want to.
There's a place in Illinois that they use for stuff like this because that coating is obviously strange.
Yeah.
And obviously the tip is strange.
And the object itself, no one's ever seen anything like it.
This is the kind of thing that Putin does to get rid of his enemies.
Except he has poisons that he puts on them, right?
Right.
This thing wasn't poisoned.
Well, we never analyzed it.
What happened to the lady after anything?
Well, what happened was that we all kind of informally decided that we'd just see whether she was going to get sick or not.
It's pretty stupid of us.
Dark.
I know.
And she didn't get sick, but she completely freaked out.
She really pulled back from the project and she was actually in charge of what's called major gifts.
So, you know, the high level fundraising.
And she just pulled way back.
We had our managing director, who was fabulous.
We'd been there for two years.
She and her very handsome husband were walking Saturday afternoon in San Diego, which is a pretty safe city, 2 p.m. Saturday afternoon, downtown San Diego.
Guy comes out of the crowd, pulls a knife, slits his face all the way from his ear down to his mouth, all the way through, and then he looks at Michelle, our managing director, and laughs and runs away.
Jesus.
She lasted another two or three months.
Her family was putting tremendous pressure on her to quit.
I don't blame them and I don't blame her, but I mean, it was a tremendous loss for us, tremendous loss.
And there's been a fairly recent incident with one of my sons, and he works for the project.
In other words, he works on the secret side of what we do, which is our monitoring system, which we can talk about if you want.
But that system, 11 o'clock at night one night, stopped working.
The data stopped flowing.
I tried calling up our tech people.
I couldn't reach anyone.
I think it was a Thursday night.
I called him because I know he's involved with the tech part of it.
And he said, Don't worry, Dad, I'll drive over to the office, and by the office, he meant the secret location, which I don't even know where it is.
I never found out where it was.
And that's 11 o'clock at night.
Midnight, I still haven't heard from him.
The phone rings.
He said, Dad, don't worry, don't worry, I'm okay.
I said, What happened?
He said, Well, I was driving over there, he said, and I got broadsided by, he said, a black vehicle which immediately pulled away and drove off at very high speed.
How long ago was this?
About a year and a half ago.
Did they ever investigate it?
They had nothing to investigate.
There were no surveillance cameras on that intersection.
Holy shit.
But nothing has happened.
So things keep happening to people that are in your close orbit.
Well, that thing in the computer case was intended for you.
That was meant for me.
Yeah.
That was meant for me.
Right.
But I assume the fact that you're out there, you're talking in a public forum, you've done big shows and you're publishing stuff about this all the time, means they can't touch you directly, right?
So they're just trying to intimidate you.
This is not intended to kill you.
This is intended to intimidate you, I'm assuming, right?
Well, I don't know, because we never had it analyzed. We found one lab that was recommended by the FBI; they wanted $5,000.
I just kept saying the same stupid thing, which I really regret.
And I think it was a mistake on my part.
Let's see if she gets sick.
Which is so strange.
And she never got sick.
Not as far as I know.
I mean, she definitely got freaked out, and that lasted for quite a while.
And from that time on, she just didn't, you know, perform with enthusiasm, I guess you could say.
Right.
Yeah.
One of the reasons I originally asked whether this was the original intent of Google, or something that just happened after they invented it, is that I read a Medium article that was very well done.
It was like a three part series.
I forget the author's name, but it was essentially the story of how the United States intelligence community funded, nurtured, and incubated Google at Stanford with Sergey Brin.
Yes.
And the PageRank system, I think, actually came from DARPA.
Possibly.
I mean, the intelligence, I think we have to be fair here.
The intelligence agencies, this is, we're talking the 90s, so there was barely an internet.
And the intelligence agencies, they had meetings about this growing internet thing, and they were worried about how it could pose a threat to national security.
And one very legitimate concept that they were working with was that if there's more and more information posted out there and someone wants to build a bomb, that's going to be the first place they go.
They're not going to go to their public library.
They're going to go to this new internet.
Which at that point was easy to use anonymously.
So, when they were talking to, and to some extent providing funding to, people building these new search engines, which are basically just big indexes of what's on the growing internet, they basically were saying, We want to be able to track people who are looking stuff up, especially certain stuff.
Now, that's legitimate.
And they still do this.
All the intelligence agencies around the world do this.
They use the internet, people's search histories, people's emails, anything people do on the internet, they use when they're looking for bad guys or bad gals.
And that's legit.
In my opinion, that's legit.
Now, it's true that Google, yes, they started out with help from intelligence agencies.
Yes, they were definitely encouraged to preserve search histories.
Is that normal for a big tech company like that to be funded and incubated and nurtured by the intelligence community?
You have to look.
Google was not the first search engine, it was actually the 21st search engine.
It was very competitive in those days.
I know one of the people who created one of the very first search engines.
And he was just a professor at Carnegie Mellon.
He said, Oh, I'm going to build an index, you know.
And he ended up becoming a millionaire because he sold it.
But no, did he have security in mind?
Was he working with intelligence agencies?
No.
It was just for him an academic project.
So what happened with these two guys who started Google is, yes, a bit unusual.
But some things came together for them very quickly because what they realized was because they were preserving search histories, they could monetize that.
They had no way to make money for the first two years.
But because they had these search histories, they realized, hey, we could hook people up.
Forget the bad guys.
We could hook up anybody who's searching for umbrellas with companies that sell umbrellas.
So now they go to the umbrella company.
Umbrella Hooking Mechanics 00:02:50
So their actual customers are.
Vendors.
So they go to the umbrella company and say, Hey, we know exactly who's searching for umbrellas online.
Okay.
And they borrowed a technique from another company, which sued them, that allowed them to easily hook people up: they could put little icons and links online, which people who are looking for umbrellas could click on, and that would take them straight to that company.
And if there's a sale there, Google gets a cut.
Or they would get a cut just based on the number of clicks.
All different ways that they've explored over the years.
But in the early years, the fact that the intelligence agencies had them preserving search histories, that led them within a couple of years to figure out finally how to monetize the search engine.
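The monetization step described above, matching searchers' keywords to advertisers and taking a cut per click, can be sketched in a few lines. The advertiser names, keywords, and per-click prices below are invented for illustration; real ad auctions are far more elaborate.

```python
# Minimal sketch of keyword-based ad matching with per-click billing.
# All names and numbers here are hypothetical.
advertisers = {
    "umbrella": [("AcmeUmbrellas", 0.40), ("RainGear Co", 0.25)],  # (name, cost per click)
    "guitar":   [("StrumShop", 0.55)],
}

revenue = 0.0

def ads_for_query(query):
    """Return ads whose keyword appears in the query, highest bid first."""
    matches = []
    for keyword, bids in advertisers.items():
        if keyword in query.lower():
            matches.extend(bids)
    return sorted(matches, key=lambda ad: ad[1], reverse=True)

def record_click(ad):
    """Bill the advertiser for one click."""
    global revenue
    revenue += ad[1]

ads = ads_for_query("best umbrella for wind")
record_click(ads[0])                       # the user clicks the top ad
print(ads, f"revenue so far: ${revenue:.2f}")
```

The key design point the speaker is making: the searcher's intent (the query itself, preserved as history) is the asset being sold, and the vendor, not the searcher, is the paying customer.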
One of the things I've learned from this podcast over the past five years is that everything leads back to the gut.
And if you take care of that, you see the benefits everywhere else.
And AG1 has become my go-to for taking care of my gut.
And now AG1 has leveled up even more with AG1 Next Gen. This upgraded formula has been shown to increase healthy gut bacteria by 10x and all for the same price of less than $3 a day.
One of the main reasons I've incorporated AG1 into my daily routine is because it tastes great, it keeps my immune system strong, and it helps boost and maintain my energy levels.
I'm a better version of me when my digestive system works efficiently, and AG1 is clinically shown to help fill common nutrient gaps in gut health.
Yes, even in healthy eaters.
Next gen's key ingredients are absorbed quickly and clinically shown to be bioavailable in the body, which means your body is using it.
We can get into all the superfood categories it hits, pre and probiotics, and the minerals, but I like to think of it as I take care of my body so my body can take care of me.
After all, how can I be productive and efficient when my body can't?
Since I've added AG1 to my morning routine, it's become a game changer and helped set the tone for the rest of my day.
And thank you to AG1 for sponsoring this episode.
Now, clinically backed with an upgraded formula, this is the perfect time to try AG1 if you haven't yet.
I've been drinking AG1 for years now, and I'm super excited to finally be partnering with them.
So, subscribe today to try the next gen of AG1 for less than three bucks a day.
And if you use my link, you'll also get a free gift with your first order.
So, make sure to check out drinkag1.com/dannyjones to get started with AG1's Next Gen and notice the benefits for yourself.
That's drinkag1.com/dannyjones.
It's linked down below.
Now, back to the show.
Now, by 2004, they were starting to take up a big chunk of search.
AG1 Morning Routine Boost 00:15:34
They were starting to become dominant, but there were still a lot of search companies.
By 2008 or so, the others were kind of falling away.
Google was becoming more and more popular.
And again, got to give them some credit.
It's because the method they were using for posting search results was new, the PageRank method, and it worked really well.
And so it gave people the best search results.
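The PageRank method he mentions ranks a page by the chance that a "random surfer" lands on it, following links most of the time and occasionally jumping to a random page. A minimal sketch of the classic power-iteration algorithm (damping factor 0.85, as in the original Brin and Page paper) follows; the four-page link graph is invented for illustration.

```python
# Minimal PageRank via power iteration. A page's score rises when
# well-scored pages link to it; a page nobody links to stays near the
# baseline (1 - damping) / n. The link graph below is hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}  # random-jump baseline
        for p, outlinks in links.items():
            share = rank[p] / len(outlinks)          # split rank across outlinks
            for q in outlinks:
                new[q] += damping * share
        rank = new
    return rank

links = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
    "spam":  ["home"],       # links out, but nobody links to it
}
ranks = pagerank(links)
print({p: round(r, 3) for p, r in sorted(ranks.items(), key=lambda x: -x[1])})
```

The design insight is that link structure, rather than page text alone, decides the order, which is why it beat the keyword-stuffed rankings of earlier engines.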
Now, you can't pin an exact date on it, but you may have heard, but probably everyone has heard that at some point their motto was, don't do evil.
Yeah, don't be evil.
Don't be evil.
Don't be evil.
Yeah.
They actually dumped that in 2015, so they got rid of that.
But they still mention that concept somewhere, but it's no longer the motto of the company.
In my opinion, they turned it into, don't be evil, unless it makes us money.
Love it.
We use Brave here.
Well, we try to.
Sometimes you don't get good results, especially on like the images.
You know how Google Images, we search a lot of images, but the images are kind of dog shit on Brave.
But Brave doesn't spy on you, right?
A lot of these search engines, or it's not a search engine, it's a web browser, right?
A lot of them have been compromised by Google, from what I understand.
But there's some of them that are still clean and clear of any of this stuff.
And one of those is Brave, I believe.
Well, Brave is a special story.
I know Brendan Eich.
He's the one who founded Brave.
And before that, he's the one who created.
Yeah, he'd be a good guest.
Brendan Eich.
Brendan Eich.
E I C H.
And he's the one who created Firefox for Mozilla.
So, why did he, having done that, having created this great alternative to Google that worked very well, why did he leave?
Well, because Google's influence at Mozilla became overwhelming.
Mozilla is a nonprofit.
90% of their donations were coming from Google.
No matter what changes they seemed to make to guarantee people's privacy, they still violated people's privacy.
So, for example, they made Google their default search engine.
Mozilla did.
Mozilla at some point actually made Google their default search engine.
Or even when they didn't do that, which is pretty blatant.
They used Google's quarantine list.
Most people don't know anything about this, but this is all the stuff happening in the background that is so creepy.
Yeah.
Before a browser takes you somewhere, it could be anything, Safari, which is Apple's browser.
Before Safari takes you to a website, it has to check to see whether the website is safe.
Sure.
They're not going to take you to, you know, malware sites in Nigeria, right?
Right.
Well, where are they going to get a good list that they can check at super high speed?
Well, the answer is that Google crawls the internet more than any other company and has the largest database of any company.
Everyone, including Safari, uses Google's quarantine list.
So Brendan saw this. I mean, he saw that basically Firefox was also checking Google's quarantine list.
And that means that when people are doing what they think are private searches using Firefox, Google's getting the information.
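The mechanism being described, a browser consulting one provider's blocklist before every navigation, can be sketched as follows. Real safe-browsing protocols do use truncated URL hashes rather than raw URLs, but the provider name, list contents, and everything else here are invented; the structural point is only that the check itself creates a channel back to the list's owner.

```python
# Simplified model of a "quarantine list" check: before navigating, the
# browser tests the URL against a blocklist distributed by one provider.
# Provider and list contents are hypothetical.
import hashlib

def url_prefix(url, nbytes=4):
    """Truncated hash of a canonicalized URL, as shared with the provider."""
    return hashlib.sha256(url.lower().encode()).digest()[:nbytes]

# Blocklist distributed by the (hypothetical) provider: a set of hash prefixes.
provider_blocklist = {url_prefix("http://malware.example/payload")}

def safe_to_visit(url):
    """Local check; a prefix hit would trigger a follow-up lookup to the
    provider, which is where browsing metadata can leak."""
    return url_prefix(url) not in provider_blocklist

print(safe_to_visit("http://malware.example/payload"))  # blocked
print(safe_to_visit("https://example.org/"))            # allowed
```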
Did he try to stop this?
I don't know; you could ask him. I've never asked him that very question straight away.
But I do know that at some point he said, I've had it, and he was determined. That's why Brave is called Brave.
Yeah, that's why: because he was determined to do something brave and really set up a browser and a search engine, both called Brave, that Google cannot track and that Google has no control over.
Google has control all over the place where people have no idea they have control.
What happened with DuckDuckGo?
How'd they get compromised by Google?
Do you know?
Well, DuckDuckGo is another story.
I don't like to talk too much about DuckDuckGo because, personally, I don't even consider it a search engine.
DuckDuckGo is primarily a database aggregator.
So what does that mean?
That means they look at big databases that other people have created and that's how they come up with the answers to your queries, your search queries.
Now, that's not a search engine.
Now, do they have a little crawler that looks around the internet?
Yeah, but very little of their data comes from their own crawler.
But it used to be different from Google, right?
You should be able to go on DuckDuckGo and be able to bypass the sort of curated shit that you get on Google when you search anything.
And now it's changed.
Now it sort of mirrors Google.
So, like, there was obviously something that happened.
Well, another reason I don't like to get stuck on DuckDuckGo is because there are much, much bigger problems out there.
For example, okay, again, let me just give you an example, because I think this will make the point.
Okay.
In 2016, Google and Microsoft signed a pact, which has never leaked.
So the exact contents of the pact have never been made known.
But they signed a pact, and the next thing that happened is they both dropped all of the lawsuits and complaints they had against each other worldwide.
Microsoft pulled the funds from all the nonprofit organizations it was supporting that fought Google.
All the funds gone.
All the consultants that Microsoft had (and by the way, Microsoft offered me a consulting position at one point, and I refused).
But a dear friend of mine, he accepted.
They got rid of all their consultants all at once, gone.
And Windows 10 has a lot of problems, but Windows 11 became a very aggressive tracker.
And some of us believe that from that point on, their search engine Bing, which had been losing about a billion dollars a year for Microsoft, started to save some money, probably by scraping contents from Google.
Now, that's a much bigger problem, because Bing, believe it or not, gets a lot more traffic than DuckDuckGo does.
Yahoo.
I have a copy of a document that was filed with the SEC which gives Yahoo the right to scrape Google content for Yahoo search results.
The problem is, I'm just saying, it's bigger than you think.
My sister wears a Fitbit.
When you think Fitbit, you don't think Google.
No.
But Google owns Fitbit.
They bought Fitbit so they could get all that physiological data 24 hours a day.
Do you have, I hope not, a Nest?
You probably do.
I have a Nest thermostat, yeah.
You have a Nest thermostat?
And I have all the security cameras.
We have a Nest security camera right there.
Okay, so about six years ago or so, Google bought Nest.
Right.
Now, some people know that, but what they don't know is that about a year or two later, some, what do I call myself, some idiot professor type pulled apart one of the new Nest thermostats they were selling and found that they had put a microphone into it without telling anybody.
Now, around the same time that they're installing microphones into it...
Wait, wait, wait, wait.
This was the one that you purchased?
No, no.
This is another professor.
Another professor type, another lunatic like me, purchased a Nest to investigate it, because the question at the time was: why would Google be so interested in Nest?
Well, the reason they were interested is because it gets put in people's homes, and they were submitting patent applications at the time for new methods of analyzing sounds within the home.
So, for example, just by picking up on sounds coming from the bathroom, you can tell whether the kids are brushing their teeth enough.
Sounds from the bedroom.
That tells you a lot of stuff, including whether the sex life is any good or the relationship is failing.
Hundreds of things that can be monetized can be picked up from sounds within the home.
And Google was getting patents issued for new methods for analyzing sounds within the home.
This is right about the time that they secretly installed microphones into some Nest devices.
We also have the Nest fire alarm, too, which talks to you and detects smoke and it's in the hallway.
Oh, Jesus Christ.
So was anything published?
Was anything made public about this?
And was anything... why am I even talking to you?
I mean, you are a Google affiliate.
Yeah, but I'm a useful idiot.
So this is the only way I can learn how to fix shit.
So now I'm going to get rid of all of it.
So I'm thanking you for being here.
One of my sons gave to another of my sons, for his birthday...
This is the problem.
It's so hard to escape their ecosystem.
Sorry, continue.
23andMe, which is now on the verge of bankruptcy.
And it was founded by the wife of one of Google's founders.
And so one of my sons gives another of my sons a 23andMe kit for his birthday.
And I said to him, I would suggest that you throw that away.
And I explained why.
Because that is one of many methods Google has come up with in recent years for getting people's DNA information into personal profiles.
If you can get DNA info in there, its value goes up and up and up as more discoveries are made about DNA.
And basically, you can figure out what diseases people are prone to that they haven't even gotten yet.
You can also figure out which dads have been cuckolded.
I mean, DNA, think about how much we know about DNA right now, but then think ahead a year, two, five years.
The DNA doesn't change.
So that data is solid gold that goes up in value, like the value of gold itself.
It just keeps going up in value.
So Google has invested in companies that collect DNA.
And at one point, they volunteered to run the nation's national DNA repository without charge.
That was very nice of them.
Very, very nice.
So, the founder of 23andMe was Susan Wojcicki's sister?
Yes.
Is that right?
I believe that's right.
Yeah.
Who was also the CEO of YouTube before she passed away.
Right.
Yeah, when they purchased YouTube, she became its first CEO.
I have one of my favorite leaks from Google.
I know I'm going all over the place here, but I'm just, this stuff I find fascinating, obviously.
And one of my favorite leaks, which was leaked by Zach Vorhies, who would also make a great guest because he was a senior software engineer at Google for eight and a half years.
How do you spell his last name?
Vorhies, V-O-R-H-I-E-S.
And he's become a friend over the years, even though politically, you know, we're not even on the same scale.
But the point is that Zach is the first person to have walked out of Google with actual stuff.
There have been lots of whistleblowers now, but he's the first person who walked out of there with 950 pages of documents and a two-minute video.
The two-minute video is Susan Wojcicki talking to the big staff of YouTube, big screens behind her, and she's talking about how we're changing YouTube's recommender algorithm so that we can boost up content that we think is legitimate.
And there's a big up arrow behind her, and that we can demote content that we think is not legitimate.
And then she says a few things about what that means.
But I mean, it is so scary, because meanwhile we were doing experiments around that same time on what we call the video manipulation effect, VME, which has now been published in a peer-reviewed journal.
Because it turns out, if you mess with those recommendations on that list, you can change the thinking, behavior, whatever, of people who are undecided about anything.
Yeah, without their knowledge.
Yeah, it's just like the suggested column on YouTube.
Well, there's the first one they pick for you, that's important.
Then there's the one at the top, that's the up next one, and they used to have a big button right up there asking you whether you want to turn off the up next.
You know, automatic.
That's gone.
It's on the page, but it's hidden now.
It's hidden and not labeled.
And there's also an autoplay option, which I turn off now, that will automatically play the up next video for you.
So you can just finish a video and then it will automatically start picking and choosing the videos to continue to play on your screen.
Ah, but here's the thing.
And independent researchers have confirmed this, and an actual high-ranking Google employee has confirmed this.
Right now, around the world, now just think about this a minute.
Think about it.
Give it a couple seconds.
70% of the videos people watch on YouTube were recommended by Google's recommender algorithm.
Wow.
In other words, if you just watch people watching videos, 70% of the time what they're watching was recommended by Google.
Think of the power.
Navigating YouTube Policies 00:07:40
That's insane.
Right.
And again, if you go to videomanipulationeffect.com, you'll actually come to our peer-reviewed paper in which we show experiments.
Video manipulation effect?
Videomanipulationeffect.com.
And we've got experiments, randomized controlled experiments that show the power that those videos have to shift people's thinking.
Now, can you shift anybody's thinking about anything?
No.
Only people who haven't yet made up their minds.
But that's pretty much everyone.
In other words, even if you're extreme left, extreme right, whatever you are, whatever your extreme is, there's going to be something during the day or during the week that you're not, you know, where you haven't made up your mind and where you're vulnerable to manipulation.
And the problem is that Google, because they're collecting so much information about us all 24 hours a day, they know exactly who you are.
They know exactly when you're vulnerable and they know exactly how to get you.
Whatever happened to Zach when he stole this stuff and got it published?
Well, there's some interesting footage out there.
I don't know if it's video footage or still footage of Zach on the street in San Francisco with his hands raised to heaven.
You can see if he could have gone up three more feet, he would have done it.
Because Google sent a hit squad.
What's it called?
The squad you send off when things are really SWAT team?
SWAT team.
They sent a SWAT team after him.
He was scared out of his mind.
And yeah, they claimed that he was possibly a threat to the community.
And they sent a SWAT team after him.
Jesus.
And they fired him or he quit, whatever.
But he was the first person who had the cojones to walk out with actual stuff.
And the stuff is amazing because it's.
For example, I know I'm going all over the place, but I'm going to come back.
Really, just trust me on this one.
So, when I first testified before Congress, before I testified, a high ranking executive from Google testified.
And the man, under oath, was asked by I think Senator Josh Hawley, does Google have any blacklists?
And he replied, no, Senator, we do not.
Three weeks later, that is when Vorhies' documents are made public, and there's a bunch of them that are labeled "blacklist."
Now, think of the arrogance.
Think of the arrogance.
If I were running Google, excuse me, but if I were running Google and I had a bunch of blacklists, I would not call them blacklists.
I might call them shopping lists.
You know, anything but blacklists, I'd call them purple lists.
Yeah.
Actually called "blacklist," and a lot of the content you can see on the blacklists was conservative content.
And these were conservative political websites, YouTube channels, things like this.
Yep.
Wow.
Yeah, it's an interesting thing that we've had to navigate with YouTube over the last few years, as I was alluding to before we started, because they obviously have a deep, very robust list of policies and rules you have to follow to maintain your standing on YouTube.
And they can be, the lines are very vague, right?
It's not super definitive.
You don't get the opportunity to go in front of a court and make your case.
It's basically some robot or AI algorithm behind the scenes determines whether A, you can be monetized, B, you're allowed to keep your video on YouTube, or C, whether you have the standing on YouTube to continue to post stuff, right?
So they basically have X number of buckets of rules that they put out there and give to the public.
And then if you publish anything that they don't like, they can take it and say, oh, we're going to throw it into that bucket, and we're going to take you down based on that rule or whatever.
Most of the times, when that's happened to us, that rule is completely irrelevant.
There's nothing in that video that has anything to do with that rule whatsoever.
And you can send emails until the cows come home.
No one's going to answer you or give you any sort of reasonable response to why it was done.
But we have a very, very hands on experience dealing with YouTube when it comes to this stuff.
In the monograph that you have, and you're one of very few people who has this, by the way.
In the monograph, there's a section there on how Google makes those decisions.
And I'm quoting from an internal document at Google.
It's about a hundred page document that's meant to train people to make those decisions about what content to suppress.
So I quote actual language from this internal training manual.
You're just kind of guessing, but when you actually see what's in the training manual, you don't have to guess anymore.
Because what it says over and over and over again, I think 22 times, over and over again, it says, well, when you can't quite make up your mind, just use your best judgment.
It gives tremendous personal discretionary authority to the people making these decisions.
And those people are used to train the algorithms.
That are now making a lot of those decisions.
So there's tremendous discretionary authority.
Well, how bad can that get?
Well, there was a particular day when the head of Google search at the time shut down an entire domain, which had about 11 million websites in it because he thought that the content to him seemed kind of flaky.
He shut down an entire domain.
He himself did that.
This was a top-ranking guy at Google.
March 31st, 2009, that's also in the monograph with references.
Google shut down the entire internet for about 40 minutes.
Now, how could they even do that is one question, just technically.
But the big question... I wasn't even concerned about the technical question, because I knew they could do that; they actually block content to millions of websites every single day.
That's their quarantine list.
So everyone knows about their quarantine list because everyone uses it because you don't want to take people to scary places, right?
Except that a lot of the websites that are on their quarantine list, they don't have malware.
Just in the judgment of a Google algorithm or Google personnel, there's just something sketchy about the content, period.
And it could be political in nature.
So they can block things for pretty much any reason at all, period.
Why Shutdown That Time 00:03:02
And as I say, I have an internal training manual that shows the language that they use to train trainers.
So I'm not imagining this.
When I read, and this was reported by The Guardian, that they had shut down the whole internet, what I became obsessed with for a while was why did they pick that time of day?
That day, that time of day.
Why was it 40 minutes?
Why wasn't it 20 minutes or 60 minutes?
What was there about that time?
And I finally figured it out.
I published this in an article in US News and World Report.
You can get to it at thenewcensorship.com.
Thenewcensorship.com.
And I finally figured it out.
Here it is.
Okay.
Ladies and gents, the day has finally come where you can utilize red light therapy to regrow your hair.
And you can save some serious cash by using our code.
The iRestore Elite is clinically proven to help regrow hair using 300 lasers and 200 LEDs that send light therapy directly to your scalp.
Just pop it on while you're watching TV or reading a book and enjoy the benefits.
It's a system with proven results.
In a double blind clinical study, nearly every participant saw an increase in hair growth with this technology.
The iRestore Elite uses Vixo lasers that feature wide beams of red light, which gives consistent coverage and results.
The unique use of optimal wavelengths reactivates dormant follicles to restore your natural hair.
Lasers provide deeper spot stimulation, while LEDs provide a broader, uniform coverage to fill in the gaps in between the lasers.
Plus, they offer a money-back guarantee so you can try it risk-free.
It's comfortable, it's lightweight, it makes you look like a badass from the movie Tron, which is why I do not feel embarrassed using it.
To be honest, I forget it's even there.
And I've talked about red light ad nauseum on this podcast before, so I'm super confident in this stuff.
And if you really want to get the maximum out of your Elite, bundle with the Revive Plus Max Growth Kit, which has a bunch of hair product that works with the wavelengths in the Elite.
Give yourself the gift of hair confidence this summer.
For a limited time only, our listeners are getting a huge discount on their iRestore Elite by using the code DANNY at irestore.com.
That's I-R-E-S-T-O-R-E. Head on over to irestore.com and use the code DANNY, D-A-N-N-Y, for my exclusive discount on the iRestore Elite.
Please support the show and tell them we sent you.
Hair loss is frustrating, and you don't have to fight it alone thanks to the iRestore Elite.
It's linked down below.
Now back to the show.
You know how the stock markets are in the United States, they're open, whatever it is, Monday through Friday, whatever the times are.
Yeah, but somewhere else, when our stock markets are closed, they're open because they're in different time zones.
So, the question is: are there any points in time during the week, any points in time at all, when all the stock markets are simultaneously closed?
Bingo.
Stock Market Surveillance 00:03:26
They literally shut down the entire internet during the small interval of time when all the stock markets in the world were shut down.
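That "simultaneously closed" window is straightforward to compute. Here is a sketch; the trading hours below are rough illustrative values in UTC, not exact exchange schedules, and weekends and holidays are ignored, since the point is just how you intersect closed intervals:

```python
# Illustrative (not exact) regular trading hours in UTC for three markets,
# expressed as (open_minute, close_minute) within a 24-hour day.
MARKETS = {
    "Tokyo":  (0 * 60, 6 * 60),            # 00:00-06:00 UTC
    "London": (8 * 60, 16 * 60 + 30),      # 08:00-16:30 UTC
    "NYSE":   (14 * 60 + 30, 21 * 60),     # 14:30-21:00 UTC
}

def all_closed_windows(markets):
    """Return [(start_min, end_min)] spans when every market is closed."""
    open_minutes = set()
    for start, end in markets.values():
        open_minutes.update(range(start, end))
    windows, run_start = [], None
    for minute in range(24 * 60 + 1):
        closed = minute < 24 * 60 and minute not in open_minutes
        if closed and run_start is None:
            run_start = minute           # a closed run begins
        elif not closed and run_start is not None:
            windows.append((run_start, minute))  # the run ends
            run_start = None
    return windows
```

With these illustrative hours, the only stretches when no market is open are early-morning and late-evening UTC gaps, which is the kind of narrow window the anecdote describes.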
Why would they do that?
So that no one would make a fuss.
And in fact, no one made a fuss.
They just did it because they're geeks and nerds, and that makes them feel powerful, and then they have a big laugh over it.
That's why they picked that time.
And I'm pretty sure I know why they picked 40 minutes, because in The Day the Earth Stood Still, one of the great classics of old science fiction movies, the alien shows its power to humanity by shutting down all the electricity in the world for 30 minutes.
But they picked 40.
They picked 40.
Now, it would have been cooler, I thought, if they picked 31.
But obviously they had some little laugh and said, oh no, we're going to go to 40.
So it was all for shits and giggles.
Well, and also to demonstrate, I'm speculating.
They're also testing their mettle, or testing their ability to do something if they need to do something.
And they still have that ability.
But the point is, they exercise that ability.
They do block access to millions of websites every day.
And as far as blocking content goes, they won't necessarily give you any sort of warning that they're going to block your website or your YouTube videos or anything like this.
Because you're not technically violating any rules, right?
So they're just going to, behind the scenes, throttle it back, bury it on the search algorithm to where it's just harder for people to find it.
And you will be completely unaware, constantly frustrated about why more people aren't seeing your stuff, why people can't find you.
Or why your sales have just dropped by 90%.
Right.
Again, when it comes to YouTube, the people that are monitoring this stuff and making these decisions, I've noticed a lot of them aren't even actual Google employees; a lot of them are contractors working in Costa Rica, in Asian countries, all over the world, right?
That have no accountability for any of this stuff.
Like, it's not like you can say, okay, you got this wrong.
Okay, we're going to go look at the exact person who flagged it and got it wrong and reprimand them, make sure they don't make that mistake again.
It's such a hard thing to manage, with billions of minutes of video being uploaded every single day, and content being posted on Google every single day and being injected into the algorithm and the search rank and all this stuff.
It's like you have to have machines manage this stuff.
Well, more and more they've been moving in that direction, but they have to have humans involved always too, because the humans are the guideposts.
They're the trainers.
Right, right.
So that's why now and then someone gets upset when they realize that a human being has listened in on content going into the Google Assistant, which is on phones.
Or the Google Home device, which is like Alexa.
And now and then someone figures out that people at Google are listening in.
Man-in-the-Middle Devices 00:02:36
Now, this has been established now in several court cases where the courts have gotten access to records of sounds within a home where, let's say, one spouse has attacked another.
So it's a spousal abuse case.
And so the court will subpoena records from Google or from Amazon, which runs Alexa.
In every case that I'm aware of, these companies have eventually turned over the recordings.
So, as far as we can tell, and we have our own way of testing this, by the way, all of these devices that people carry around, these surveillance devices, they listen all the time.
Now, what about if you just turn off something or other, turn off your batteries in there?
You can't take the battery out.
Well, a few years ago... you remember, you could take the battery out.
More than a few.
No, I don't know. Ten years ago, you could take the batteries out of these phones, and then they soldered them in.
The iPhone was the first one I remember where the battery was soldered in.
Because before the iPhone, I had a flip phone, a Motorola flip phone. You could take the battery right out of it, which was good, because you could put a fresh battery in. But yeah, now you can't take the batteries out of any phone.
So the problem there is even when you think it's off.
It's not off.
How would we demonstrate that?
It's really simple.
You turn your phone off, go around town with it, say a bunch of stuff to it that's substantive.
This has been done.
Even Tucker did something like this for one of his shows, which I thought was really cool.
Really?
Yep.
Then you turn it back on.
And if you use what's called a man-in-the-middle device, which we have in our lab, you can actually watch as content is being transferred from your phone up to Google, or whoever it is that's monitoring, or to your host.
You can actually track and see what content is being transferred.
You can also use that kind of device, a man in the middle device, so that when you're not doing anything on your phone, it's still transmitting.
It never stops transmitting.
And for each different kind of phone, there's a certain average amount that's being transmitted, even when it's inactive.
Tracking Home Page Data 00:15:16
And it's a lot.
It's like 40 megabytes a month or something like that.
It's a lot of content.
So these are surveillance devices.
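The measurement described here, totaling what a phone uploads while it is supposedly idle, reduces to simple arithmetic once an intercepting proxy on your own network has logged the traffic. A toy sketch; the function and data shapes are invented, and a real capture would come from something like a packet sniffer or an intercepting proxy, not a Python list:

```python
def idle_upload_summary(packets, active_periods=()):
    """Total bytes uploaded while the phone was supposedly idle.

    packets: list of (timestamp_s, bytes_sent) tuples logged by a
    man-in-the-middle box sitting between the phone and the internet.
    active_periods: (start_s, end_s) spans when the user was actually
    using the phone; everything outside those spans counts as idle.
    """
    def is_active(ts):
        return any(start <= ts < end for start, end in active_periods)
    return sum(size for ts, size in packets if not is_active(ts))
```

The experiment in the transcript is exactly this: subtract the traffic you can account for, and whatever nonzero remainder is left is what the device sends on its own.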
How nefarious is it?
Is it really nefarious, or is this just a way to make more advertising dollars and sell more cat food on YouTube?
Well, here's where, again, I run into a problem because my son is involved in a business that makes use of all these technologies.
And he keeps saying, Dad, this is how it works.
It's not just our company; every company does this now.
And I say, Yeah, but doesn't it bother you?
He goes, No, no, this is just normal.
An example: one of the effects that we studied and quantified and published is called OME, the opinion matching effect.
It's just genius.
It's just genius.
You want to get a new guitar.
So you start looking. You type in, I hope not on Google, I hope on Brave, even DuckDuckGo, Firefox.
Anyway, in a search bar, you type in "best guitar."
And you're going to get a bunch of websites: the very best guitar you can buy, et cetera.
Now, it turns out that almost all of those websites are built by people like my son.
In other words, they're not built by some nonpartisan, nonprofit organization that's perfectly fair, like Consumer Reports, maybe.
That's not who builds these websites.
They're built by marketers.
And very often they have a quiz.
This is the part that we studied because I just had this funny feeling about quizzes.
And they say, here, take this little quiz and then that'll help us help you to make the right decision about the guitar that's going to be best for you.
So it asks you some questions and you answer the questions.
And then it says, well, based on your answers, the best guitar for you is blah, blah, blah.
Now, the question is: has the website, has the algorithm, even looked at your answers?
And the answer is, well, yeah, they probably do store your answers because that can be sold.
That's information that can be sold.
But they're not using your answers to give you advice about the best guitar because that site was set up by marketing professionals at one guitar company.
And they're going to recommend their guitar.
So, this is called the opinion matching effect.
They're matching opinions.
By the way, this is an astoundingly powerful technique.
You literally can get shifts of 80 or 90% of people who are undecided on virtually any topic using opinion matching online.
And no one suspects, not a single person suspects in any of the experiments we've ever run that they're being manipulated because how would they know?
They're not seeing anything biased.
They're just being asked a bunch of questions, they're answering the questions.
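Part of what makes the pattern he describes so effective is how trivially easy it is to build. A deliberately simplified sketch; the quiz, the brand name, and the function are all made up for illustration:

```python
def run_guitar_quiz(answers: dict) -> dict:
    """A sketch of the 'opinion matching' pattern described above:
    the quiz dutifully records your answers (that profile data has
    value on its own), but the recommendation is fixed in advance
    by whoever sponsors the site and never depends on the answers."""
    stored_profile = dict(answers)           # answers are kept...
    recommendation = "SponsorBrand Model X"  # ...but never consulted
    return {"profile": stored_profile, "best_guitar": recommendation}
```

Run it with two completely different sets of answers and you get the same "personalized" recommendation, which is the whole trick: the user sees nothing biased, only a quiz.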
Do you know that there's a website that's been around now for more than 10 years that helps you decide which candidate to vote for by asking questions on a quiz?
No.
Yeah.
Did you know that, what's the swipe left, swipe right?
Tinder.
Tinder.
That in 2016, Tinder set up a "swipe the vote" option, so that everyone going to Tinder was offered the opportunity to do a little swipe thing.
To poll what people's preferences were between Hillary and Trump?
No, no, no.
It was to help people make up their minds about which one to vote for.
So you get a question about immigration or this or that, and you swipe left for yes and right for no.
And right for no.
Left for yes, right for no.
And so it asks you a bunch of questions.
You swap, again, you swipe one direction or the other, answering the questions.
And then bingo, it gives you.
Tells you Hillary Clinton.
Hillary Clinton.
What do you know?
And so these are examples of how quizzes are being used for manipulation.
Does Google own Tinder?
Not as far as I know.
How much influence does Google have on the search indexing on social media apps like X or Facebook or Instagram or any of these other sites that aren't owned by Google?
Do you know?
Have you ever heard anything?
Well, different deals are made at different times by different companies, and those relationships change.
The way I look at it is this these different companies, where are their donations going?
And that you can find out.
OpenSecrets.org is very, very good for that.
Open Secrets.
Fantastic, fantastic site.
Okay.
And if you find out that all the top tech companies are donating almost all of their money to one candidate or one party, That pretty much tells you what to look for.
It tells you that probably in their content, they're going to be shifting votes to that party.
There are so many ways to do that.
But we, oh, this might bring us finally to monitoring systems.
So, October 31st of last year, 2024, a few days before the presidential election.
We were monitoring Google's home page, among many other things.
We were monitoring, so that means we were pulling in data from more than 16,000 registered voters in all 50 states that we had recruited one by one by one.
16,000 more than 16,000, yeah.
And we're way past that now.
But the point is, we had been building the world's first ever system for monitoring the content that Google and other companies send to registered voters and now to children.
So we're monitoring.
Among other things, messages Google is putting on its home page.
Now, anything that Google puts on its home page is going to have a big impact, because their home page is normally pretty blank, right?
Blank.
That's kind of their brand, that's their trademark, really.
And the founders were determined to make sure that that stays that way.
But every now and then they put a big message there.
And it's very colorful and artsy and all that.
And so they were putting up, as they had done in previous elections, a go-vote reminder.
Well, the fact is we're monitoring a lot of people's computers and Google doesn't know who those people are.
We're monitoring a lot of computers.
We know their political leaning.
And what we found on October 31st, a few days before the election, was that Google on its home page was sending out 50% more go-vote reminders to Democrats than to Republicans.
Now, I lean left myself and my whole family, they're all Democrats.
I'm not a registered Democrat anymore, but the point is, I lean that way.
So I'm thinking, wow, that is so cool.
That is amazing.
That's great.
Not only that, we can calculate how many votes that's going to shift.
It's going to shift a lot of votes.
And then, of course, part of me was saying, but wait a minute, that's cheating.
Do you know how many times people see Google's home page every day?
More than 500 million times just in the United States.
So I'm thinking, now wait a minute.
We can't let them get away with that.
Normally when we monitor, in past elections, we weren't revealing what we were finding.
But this time, we did, to the general public, to journalists.
Right.
But this time, I would imagine that would skew the participants who agree to be a part of it, right?
Well, it could definitely.
Yeah.
I mean, who knows?
It could make Donald Trump file a lawsuit claiming they stopped the election, stopped the presses.
This is, you know, there's cheating.
Who knows?
But this I thought was outrageous, too outrageous.
And so by the end of that day, we went public and we also notified Google and we showed them our data.
It's very clear.
And you know what they did?
They stopped.
Really?
Really?
It went from 50% more go-vote reminders going to Democrats to pretty much even-steven, 51-49, for those last four days, including election day.
They literally stopped.
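The "50% more" figure is just a ratio over the monitored panel. Here is a sketch of the arithmetic; the data format and function name are invented, and this is not the actual monitoring system's code:

```python
from collections import Counter

def reminder_skew(observations):
    """Percent more go-vote reminders shown to Democrats than Republicans.

    observations: list of (party, saw_reminder) pairs, one per monitored
    panelist's page load, with party 'D' or 'R'. A positive result means
    Democrats saw proportionally more reminders.
    """
    counts = Counter(party for party, saw in observations if saw)
    dem, rep = counts["D"], counts["R"]
    return 100.0 * (dem - rep) / rep
```

So 150 Democrat sightings against 100 Republican sightings comes out as a 50% skew; a near-even split like 51-49 comes out close to zero. (A real analysis would also weight for panel composition, which this toy version ignores.)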
There's no way.
Have you considered the fact that maybe they got to the people that you were monitoring?
Oh, of course, of course.
But we have we have so many protections against that.
We would see that.
We would see that.
We have ways of I can't tell you what they are, but we have debt.
We would know that in a flash.
Right.
It's not the first time we've stopped them, by the way.
So, we start in 2015.
My work on SEME, the Search Engine Manipulation Effect, was in the news.
I guess the Washington Post had written it up.
And I get a call from a guy named Jim Hood, who at that time was AG of Mississippi, I think.
One of those southern states that I've never been to.
And hell of a nice guy.
Strong accent, really strong accent.
I'm not too good with that accent.
But he says to me that he had sued Google on behalf of his state and that they had replied by suing him personally.
They sued him personally.
So now he's in this legal battle against Google.
And he was very concerned because in Mississippi, you have to, the AGs are elected.
In some states, they're appointed, but in his state, they're elected.
So he's up for re-election and he's wondering, could they possibly manipulate the election?
And I said, oh, yeah.
And he said, well, how?
And I explained to him how they could do it, how they could shift a lot of votes and so on.
And then he asked me a very critical question, which I became obsessed with.
He said, but how would you know?
I said, well, unless someone at Google is going to tell you, you know, you really wouldn't have a way of knowing.
And he said, well, in law enforcement, what we do is we use, he called them sock puppets.
Now they're called bots.
But we use sock puppets, he said.
This is 2015, so it's 10 years ago.
And we basically set up just fake people, and the fake people send off stuff and whatever, and that's how we find out what they're doing.
I said, Well, that's a really bad method.
I said, Because those, your sock puppets don't have any profiles, they don't have any history on Google, so Google's algorithm can immediately spot them as fakes and send sanitized data.
He said, Well, then what do we do?
I said, Yeah, I don't know.
I don't know.
Let me think about it.
So I started obsessing, and I realized: if you really want to know what a company like Google is sending to real people, remember, what they're sending is ephemeral, right?
It's going to appear, affect people, then disappear, and it's gone forever.
No evidence.
No evidence.
No paper trail.
Nothing.
And it's personalized.
So my wife, Misty, she loved Google search results because she called Google her personal shopper.
See?
It's all personalized, personalized and ephemeral.
How would you know? If Google is trying to influence the vote, the only way you could figure it out is you'd have to, like, stand behind someone, a real person, and look over their shoulder in a way that Google can't detect.
And then you'd have to, when Google shows them search results, you'd have to grab those search results.
And then you'd have to look up all the websites and see if there's any bias.
I mean, literally.
Yes.
Or you'd have to look at whether they're getting a go vote reminder on the homepage.
Yeah.
Or they have to look at the search suggestions, et cetera, et cetera.
You could look at all kinds of stuff.
And I'm thinking, how do we do that?
I started.
I knew this was going to be really expensive because I had no idea how to do it.
So I started asking around.
I started telling people, this is what we want to do.
No, I don't know how we're going to do it, but we need money.
And some guy in DC had me call some guy in Central America.
I know, it's like a spy novel.
And the guy in Central America says, okay, well, what's your bank account number for your organization, for a nonprofit?
And money started to come in.
Oh, wow.
It's coming in from a New York bank, all from an anonymous source.
The source is always labeled anonymous from a New York bank.
That was the KGB.
I don't know.
To this day, I have no idea where this money came from.
The point is, we started to make some progress on building stuff that you have to build.
You have to build very special software. We call it passive monitoring software.
You'd have to get someone, a registered voter, a real person, to install this on his or her or their computer.
So there it is.
And now when that person does a search on Google or Bing or Yahoo and the search results appear, our software is going to grab that stuff, either as an image or as HTML.
It's going to grab it and then it's going to send it to one of our computers.
And all that stuff has to be coming in from lots and lots of people.
And then we aggregate it and we analyze it and we could actually figure out.
What they're sending to real people.
We would be preserving ephemeral content, which has never been done before.
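The capture step described here, software on a real voter's machine grabbing a results page as HTML and packaging it for a central server, can be sketched in miniature. This is a hypothetical illustration only: the project's actual software is not public, and every function name, field name, and identifier below is an assumption.

```python
import json
from datetime import datetime, timezone
from html.parser import HTMLParser


class ResultLinkParser(HTMLParser):
    """Collects the href of every anchor tag in a captured results page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def build_capture_record(agent_id, engine, query, raw_html):
    """Package one ephemeral search-results page for upload to an aggregator.

    All field names here are illustrative, not the real system's schema.
    """
    parser = ResultLinkParser()
    parser.feed(raw_html)
    return {
        "agent_id": agent_id,          # anonymized field-agent identifier
        "engine": engine,              # e.g. "google", "bing", "yahoo"
        "query": query,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "result_links": parser.links,  # what would later be rated for bias
        "raw_html": raw_html,          # preserved verbatim as evidence
    }


# In a real system this record would be sent over the network;
# here we just serialize it.
record = build_capture_record(
    "agent-0042", "google", "candidate X",
    '<html><body><a href="https://news.example/a">A</a>'
    '<a href="https://blog.example/b">B</a></body></html>',
)
payload = json.dumps(record)
```

The key design point from the conversation is preservation: the raw page is kept verbatim, because once the ephemeral content disappears from the user's screen there is no other record of it.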
So, in 2016, we managed, with anonymous money, we managed to preserve 13,000 searches on Google, Bing, and Yahoo.
We did Bing and Yahoo for comparison purposes, and 98,000 web pages to which the search results linked.
So, now by looking at the bias on the web pages, we could calculate to see whether there's any overall bias in the search results.
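The aggregation step can be illustrated with a toy calculation. Assume each linked page has already been rated on a bias scale from -1.0 (liberal) to +1.0 (conservative); the rating method, the rank weighting, and all names below are assumptions for illustration, not Epstein's published methodology.

```python
def serp_bias(page_scores, decay=0.8):
    """Rank-weighted mean bias for one search-results page.

    page_scores: bias ratings of the linked pages, in rank order,
    each in [-1.0 (liberal) .. +1.0 (conservative)].
    Higher-ranked results get more weight (geometric decay),
    since users click them far more often.
    """
    if not page_scores:
        return 0.0
    weights = [decay ** i for i in range(len(page_scores))]
    total = sum(w * s for w, s in zip(weights, page_scores))
    return total / sum(weights)


def overall_bias(all_serps):
    """Average bias across many captured searches for one engine."""
    if not all_serps:
        return 0.0
    return sum(serp_bias(s) for s in all_serps) / len(all_serps)


# A page whose results all lean one way scores at that extreme.
assert serp_bias([-1.0, -1.0, -1.0]) == -1.0
# Mixed results with the same top-ranked side still tilt that way,
# because rank weighting favors the top results.
assert serp_bias([1.0, -1.0, 1.0, -1.0]) > 0.0
```

An overall score near zero across thousands of captured searches would suggest balance; a consistent deviation in one direction is the kind of signal the interview describes.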
Preserving Brexit Searches 00:13:49
Did we find bias in the search results?
Now we only had 95 computers, people we call field agents.
We had 95 field agents in 24 states.
And did we find bias?
Oh yeah, we found overwhelming bias favoring Hillary Clinton, whom I supported at the time.
And so, enough to have shifted, we calculated from the experiments that we run, between 2.6 and 10.4 million votes to Clinton nationwide without anyone knowing that this had occurred.
And of course, we're the only ones who've preserved at least a small number of these ephemeral experiences, ephemeral content.
13,000 searches on Google, Bing, and Yahoo.
Was the bias there on all three search engines?
No, only on Google.
There was no such bias on Bing or Yahoo.
Wow.
Yeah.
2018, we built a bigger system.
2020, we built a much bigger system.
We didn't preserve 13,000 ephemeral experiences.
We preserved 1.5 million, mainly in swing states.
2022, 2.5 million, mainly in swing states.
And of course, at this point, we're moving well beyond Google, Bing, and Yahoo.
We're pulling more and more content from more and more places.
Are you doing YouTube?
Oh, yeah.
We got really good at doing YouTube.
YouTube's interesting because it's becoming more and more important, especially in this last election, where I think podcasts were a huge contributing factor to the outcome.
In 2023, or late 2022, I guess, I announced that the time had come for us to build a permanent nationwide monitoring system.
Because if you don't have a monitoring system in place, you will never know what these companies are doing.
You have to capture ephemeral content.
And ideally, you want it to be the real content, personalized, ephemeral content.
You have to collect that.
You've got to see the content that's being sent to kids, to teens.
I mean, if you don't preserve this content, you have no idea what's going on.
You have no idea why that person just won the election and the other person lost.
You have absolutely no idea because the shifts are so large.
In 2020, we calculated that Google alone had shifted more than 6 million votes to Joe Biden.
Wow.
Now, if you go to Americas, with an S, AmericasDigitalShield.com, which, oh, that was quick.
That was quick.
Oh, yes.
So you see the number going up?
You got 122,122,747 live ephemeral experiences captured.
Shining light on big tech's dark secrets, revealing real time ephemeral manipulation.
So, This is you?
Yeah, that's us.
Wow.
To set up this dashboard and that nationwide system cost about close to $8 million.
All from this anonymous source?
No, we've had donations.
No, no.
Every time I go on Joe, we get tens of thousands of donations.
Oh, that's amazing.
He lets us give out a website for a donation website.
So other people have done that too.
Other people in media have helped us raise money.
There was one year we couldn't find any money.
And then Glenn Beck, believe it or not, he just took it upon himself.
He said, I'm going to fund this whole thing.
And he did a special on my work.
And then he had me on like five times.
And he kept saying to people, You've got to donate.
You've got to find out what these bastards are doing.
And he raised about a million dollars for us, which allowed us that year to set up a system.
But the point is, we now have set up a nationwide system.
Oh, wow.
You break down all the social media apps and websites.
We got Reddit.
Are they manipulating Reddit?
Reddit, we collect data from Reddit and we see a very, very strong liberal bias in Reddit content.
Wow.
But we haven't studied Reddit yet, so we can't tell you exactly why.
But if you look at Twitter, which used to be all blue, below the line means liberal bias, above the line means conservative bias.
If you look at Twitter, you see we're basically above the line.
Yes.
All the other tech platforms are all blue.
Now, if you go down to the national, well, this is interesting.
See the graph on the left?
That's what I was talking about.
Go vote reminders.
Yeah.
So that tall one there, that's October 31st.
Okay.
And that's 50% more go-vote reminders on Google's homepage going to Democrats, that's blue, than to Republicans, that's red.
And then we announce it at the end of that day and we notify Google and look what happens the next day.
See?
Wow.
They're both.
The next day is the one, there's a graph right next to it, right?
Yep.
That's the red one.
The red one's a little bit higher.
That's right.
It's November 1st.
The point is, the huge...
And this is with, I'm sorry to interrupt, but this is with how many people are in your experiment?
Well, at this point, there's more than 16,000 people in all 50 states.
And they're politically balanced.
Got it.
So that's very, very important that they're politically balanced.
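The "50% more" figure is just a comparison of reminder rates between the two politically balanced cohorts. A toy version of that computation, with made-up counts and a hypothetical function name:

```python
def reminder_disparity(dem_shown, dem_total, rep_shown, rep_total):
    """How much more often one cohort saw a go-vote reminder than the other.

    Returns the relative excess, e.g. 0.5 means 50% more reminders
    went to the Democrat-leaning cohort than the Republican-leaning one.
    """
    dem_rate = dem_shown / dem_total
    rep_rate = rep_shown / rep_total
    return dem_rate / rep_rate - 1.0


# Toy numbers: 60% of Democrat-leaning agents saw the reminder
# versus 40% of Republican-leaning agents, i.e. 50% more.
excess = reminder_disparity(dem_shown=60, dem_total=100,
                            rep_shown=40, rep_total=100)
assert abs(excess - 0.5) < 1e-9
```

Political balance of the panel matters precisely because this is a rate comparison: if one cohort were much larger or demographically skewed, the ratio would not isolate what the platform is doing.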
So this allows us in real time to be seeing what's going on.
And we can notify members of the press.
We can notify members of Congress, which we've done.
We can notify Google or Facebook or any company and say, this isn't just, by the way, like a toy.
This is court admissible data.
No one's ever had that before.
Now, where we are now is the system is still running 24 hours a day, even though we're way past the presidential election, and we're monitoring all kinds of stuff, not just electoral stuff, because that's the whole point of such a system you can monitor anything on any platform.
And I have had requests from people in eight different countries to help them set up monitoring systems.
I'm not going to do that.
I'm not going to do that.
Here's the only place where Trump and I overlap: I put America first on this issue.
We have to have a permanent self-sustaining monitoring system in place before I'm willing to help any other country do this.
But ultimately, at least all democracies around the world have to have this kind of system in place.
Because if you don't, you don't know what's going on.
That's how big the power is that these companies have over thinking and behavior.
Hmm.
How?
Sorry, go ahead.
Yeah.
No, I'm just saying, how could you not, if you were a leader, public policymaker, whatever, columnist, how could you possibly allow the internet to just run everyone's lives around the world?
How could you possibly do that and let them do it in such a way that there's no evidence?
There's no paper trail?
There's nothing you can bring to court?
Remember when Trump ran to court after 2020, 60 different cases, and they all got thrown out?
Yep.
He had no court admissible evidence to bring to court.
That was the complaint of most of the judges.
This is court admissible evidence, which is being collected on a very large scale using very, very high scientific standards.
Now, these countries that are reaching out to you, they're reaching out to you specifically about Google, right?
Because Google maintains this monopoly in pretty much every country, I'm assuming, except for China and North Korea.
Correct.
Wow, you nailed that one.
Did I?
You sure did.
Oh, wow.
Wow.
That's the first person who's ever done that.
Really?
Yeah.
Hey, guys, if you're not already subscribed, please hammer the subscribe button below and hit the like button on the video.
Back to the show.
Yeah, these countries are mainly concerned about Google.
That's right.
Because they all have concerns that Google has already interfered.
Now, I'll give you a great example that comes back to you, by the way, and your business practices.
So you're going to see, we're going to go full circle here, and you're going to go, damn it.
Epstein, never having you back on again.
Let Rogan invite you again.
You're never coming back here.
Not true, sir.
The Guardian runs all of its stuff through Google, Google services.
It's called G Suite.
Yeah, that's what you do.
So there's a very good reporter there named Carole Cadwalladr.
The last syllable is kind of interesting because there's no vowel; it ends with D-R. Carole Cadwalladr.
And she had written a whole bunch of pieces critical of Google.
And then, approaching the Brexit vote, which led the UK to pull out of the European Union, she wrote a bunch of stuff.
She was very concerned that Google was going to interfere in that election.
I got a call from a reporter at the Times of London who was worried about that too.
And he said, Could Google actually?
Do anything.
And I said, Well, what do the polls say?
The polls say that we cannot predict the outcome.
It's too close.
That's what all the polls are saying.
I said, Well, in that case, I can tell you that Brexit's going to pass.
And I can tell you it's going to pass by, I'd say, at least 400,000 votes.
And he said, What are you talking about?
I said, Well, if you want, I said, If you get me some numbers, some real numbers, the best numbers you can get me from the UK, I'll do some calculations and I'll explain it to you some more.
So he got me all the numbers and I did some calculations.
And basically, I was looking at what happens if Google in the UK favors Brexit on its search results and its search suggestions and, you know, all the different things they can do and go vote reminders.
And what if they do that?
And I told them that even though the polls are saying it's unpredictable, the outcome is unpredictable, I said, Google can shift so many votes, even at the last minute, even on election day.
Because remember those go vote reminders?
That shifts a lot of votes on election day.
Right.
So I said, well, Brexit's going to win by at least 400,000 to 600,000 votes.
It actually won by even more than that, which shocked everyone.
I wasn't shocked.
But then he said, well, why would Google do that?
What determines whether they favor one side or the other?
That's what I'm trying to tell you.
They go case by case.
So they don't always favor the left.
In Cuba, they favor the right, because the left is in power and the left hates Google and anything American, so Google favors the right.
So, what's their goal for the EU?
It's to destabilize the EU because the European Union has been the most aggressive government entity in the world to go up against Google.
They have fined Google now four times, fines totaling over 10 billion euros.
That's 12 or 13 billion dollars.
This is pocket change for Google.
They couldn't care less.
But the point is, they have also passed very strict laws trying to protect people's data and all kinds of things.
The point is, the European Union, from Google's perspective, needs to be taken down.
And so that's what they're doing.
So their first big hit against the European Union was the Brexit vote.
So Carole, the reporter, she writes a piece right after Brexit saying, I'm pretty sure that Google did this, but she said, there's no way to know.
You can't go back in time.
There's no evidence.
So I got in touch with her.
I said, you need a monitoring system and then you'll know.
And I said, by the way, I'm very upset that I can't seem to reach you without sharing all of our communications with Google.
And she said, yeah, I know about that.
I've complained about that to management and they won't change it.
EU Antitrust Challenges 00:15:50
Wow.
Then I find out from one of my Google contacts that that kind of content is.
Oh, now I'm going to forget what it's called.
Now you're going to what?
I'm going to forget the term.
Oh.
Hi.
No.
This is from a Google contact?
Yeah.
You have a mole in Google?
Well, sort of.
I mean, there are people who work at Google who are thinking about blowing the whistle.
And very often they contact me along the way.
So they haven't even done anything yet.
So.
You know, I can understand why some people at Google might consider me a pain in the ass.
That's putting it lightly.
It's called high value content, but I don't think I have that right.
Okay.
The point is that any kind of correspondence between an investigative reporter at The Guardian who researches tech issues.
Anything she sends or receives, all the attachments, everything.
Yeah.
That goes way up high at Google really fast because that's important potentially.
And guess what?
Other big news sources use G Suite like you.
The New York Times.
The New York Times uses G Suite.
They share all of their content, all the investigative work, all their secret sources is all shared with Google.
They also have open communication with CIA.
I had a buddy who was working on a report about the Ukraine war about two years ago.
And the background of his story was basically how there was a NATO country, an allied NATO country, that the CIA was using as a conduit to conduct sabotage operations inside of Russia, like burning munitions buildings, blowing up train tracks, all kinds of crazy things.
Directly, like directing this whole thing, which is illegal.
And he was working on this with, I don't know the name of the publication, but let's just say it was one of the top three publications in the United States and going back and forth with the editor for like six to eight months.
And right when they were getting ready to publish the story, and meanwhile, this guy had an entire catalog of sources, legitimate sources in the military and the intelligence community.
So When they get ready to publish that story with this big publication, the editor says, Okay, right before we publish it, we got to get the sign off from the deputy director of the CIA.
And he's like, Okay.
So they go on a three way call with the deputy director.
And he's basically like, I completely deny all of these things.
These are all false.
CIA is not doing any of this stuff that you're saying.
It's like, Okay, cool.
We'll add that to the bottom of the article that you guys categorically deny all of this.
And the editor says, No, we have an off the record agreement with CIA, meaning that if they deny all of it, we were not publishing it.
We're not just going to put a blurb at the bottom of it.
We're not going to publish it at all.
And the CIA gets away with this because they say if you go out and publish these secrets to the public, you're putting lives in danger.
Sure.
Which may be the case.
I mean, but I can give you an example where that is not the case and where what happened was reprehensible.
2020 election, where we had so much data showing all this vote shifting going to Biden.
I first sent it to a contact I had at the New York Post.
Now, just a few weeks before, the New York Post had front page, they had gone public with the Hunter Biden laptop story.
Right.
And the reporter who wrote that piece, the cover story, wonderful, wonderful reporter, wonderful person.
She since has interviewed me and written about me for the New York Post.
So, no complaints against her.
But this was a different reporter I went to a few days before the election in 2020, shared a ton of data with her.
She got the assignment.
She wrote the draft.
She read parts of it to me on the phone.
It was going to be another front page story about election rigging.
And it was really, really strong.
It was like the New York Post does things, right?
Yeah.
So this is on a Friday before the election.
And then.
She calls me and she said, I'm really upset.
I know you're going to be upset.
I'm extremely upset.
I said, What do you mean?
She says, Well, the editor called Google for comment.
And as a result, he killed the story.
I said, killed the story?
I said, but you're just reporting on scientific stuff.
She goes, yeah, but we could do that with Hunter Biden, but we can't do that with Google.
I said, why not?
She says, well, she says, I think right now 48% of our online traffic comes from Google.
Wow.
Wow.
So, this is not lives at stake.
No, this is, yeah.
This is the business.
The existence of a business.
It's an existential matter.
And they killed the story.
So, I sent the stuff to Ted Cruz's office.
And on November 5th, two days after the election in 2020, Ted Cruz and two other senators sent a very, very threatening letter to the CEO of Google.
Saying you've testified before Congress saying you don't mess with elections, but according to Epstein's recent findings, you did this, and it's two pages.
And how do you explain this?
Please explain this to us.
What happened?
A lot happened because at that point in time, we had been monitoring mainly in swing states, in 10 swing states, but at that point in time, Georgia, right after the 2020 presidential election, was gearing up for the two Senate runoff elections, which were going to take place in early January.
So Georgia is still very much in play, totally in play.
And we had more than a thousand field agents in Georgia.
We preserved more than a million ephemeral experiences in Georgia during just that short period of time.
So we were very carefully monitoring throughout Georgia to see what would happen.
And we saw that Google was favoring the... Google was sending out, again, partisan go-vote reminders, among other things.
There was very strong bias in their search results, liberal bias, obviously.
Starting the day after that letter was sent, they turned off all of their manipulations in Georgia.
Gone.
No one in Georgia from that day forward got any GoVote reminders.
And the bias in their search results went to zero, flat, literally to zero, which we had never, ever seen on Google.
They went flat, flat zero.
They literally pulled out.
Now, they must have done so having calculated that the Democrats would win anyway.
And they did.
Each Democrat won, one by a very small margin, one by a much larger margin.
They must have figured that out beforehand.
I can't believe they just would give up two Senate seats so easily.
But look what happened in 2024.
Google pulled out letting Trump win.
Right.
What do you make of that?
Well, there were things that have been shifting.
Everyone knows this.
It's, you know, it's, I mean, no one's put it all together and into a nice little article yet, but maybe I will.
But in 2024, things were different.
Things were starting to change because there was.
A decision in court in August 2024 in the case Department of Justice, meaning the United States of America versus Google.
And a decision had come down in August of 2024 ruling that Google was a monopoly.
So, you go ahead a few months, and they're still, to this day, in the remedy phase of that trial.
So, the judgment has already come down, but they're still in the remedy phase, meaning the court is trying to figure out how they should be penalized.
Should they be forced to sell off Chrome, maybe YouTube?
This is being determined right now.
And then a second federal court later came down with a similar decision.
So, at the time we're monitoring and we're collecting all this data, which is court admissible, Google is in a really creepy position for the first time ever where they don't want to mess with...
They want whoever the next president is going to be, they want that person's full cooperation.
That's really interesting, considering that that was at the tail end of the Biden administration that that happened.
And what do you make of how Trump is going to deal with Google and YouTube?
Trump seems to put his own personal interests ahead of everything, as far as I can tell.
And I'm friends with two members of the Trump family.
I don't know the man personally, but I really am good friends with two members of his family.
So I try to pull my punches when I talk about the Donald.
But I think he's mainly interested in kind of his own, he's in his own world and he wants his own world to be a certain way.
And every single day he changes the criteria for, you know.
The point is that a few weeks before the election, Sundar Pichai, the CEO of Google, called up Donald Trump to make nice with him.
And then Google, along with all the other tech companies, ponied up a million dollars each, Microsoft too.
Yeah, for his inauguration.
And then they all got front row seats together.
And so all of these companies, Google included, which is astonishing given their history.
Google included have been sucking up to Donald Trump because what they must have figured out was if they actually did not aggressively support Kamala Harris, if they didn't do their usual election rigging, that Trump was going to win.
And so they started making plans.
Uh oh, we think Trump's going to win.
Let's make that work for us.
How can we make that work for us?
And I think that's the mode that they're in right this second because remember, they're still in the remedy phase.
Two big federal trials.
Yes.
Those are going to determine very possibly those might bring about some big changes in the company and possibly enormous loss of revenue.
What is the worst case scenario for them for Google?
Oh.
So the worst case scenario for Google is actually really good.
For us?
No.
Oh.
Oh, no, not for us.
Oh, it's not good.
The worst case scenario for Google is not the best case scenario for us?
No, not at all.
Oh, okay.
No, this whole.
These cases are shams.
Oh.
They were designed, as far as I'm concerned, it is my opinion that these cases were designed by Google's lawyers.
Why would I say something that silly when everything else I've said is so sensible?
Because as it happens, I've been dealing with attorneys general now for 10 years, 10 solid years.
And for several years, I was dealing with.
One particular office, and I was in regular contact with an attorney in that office.
I'm being so vague here because the whole thing was I was supposed to be advising them secretly, so we could never have any written communications.
And so the only way you can do that is just talk on the phone and just do your best to make sure that the phone call isn't recorded.
In regards to this case you're talking about?
No, this is in general.
They wanted to know more and more about Google so that they could.
Do something about Google if Google really was posing a threat somehow.
Okay.
And there are three threats: there's the surveillance, which is occurring at an unprecedented scale around the world.
Then there's the censorship.
And then there's the manipulations, which is what my lab studies.
So they're very concerned and I was advising them.
And at some point in time, my contact says, oh, well, we're shifting.
I said, what do you mean?
He said, well, all those issues, those big, big issues, are all consumer protection issues.
And we just don't think we can make that fly.
I go, what do you mean?
He goes, well, we're not getting cooperation, for example, from people in blue states.
We have to go after them as a big group.
We can't just go after them as a single state.
So I said, okay, so what are you doing now?
Well, we're shifting over to antitrust law.
Which is about monopolies.
That's how the government took on AT&T and eventually Microsoft.
I said, well, antitrust, I said, yeah, Google is a monopoly.
I said, but if you go after them that way, I said, then the remedies are not what you want.
You can break up AT&T, you could break up Standard Oil, you could break...
What did they do to Microsoft?
Oh, they stopped the packaging of all their software altogether.
Okay.
Stop the packaging.
That's one of the big things they did.
Yeah.
I said, but in that case, I said, you're not really going to do any real harm here where Google is doing the most damage.
You're not going to affect that at all because you're not going to be affecting the surveillance.
They're still going to own it.
Coming at it from the antitrust angle.
Exactly.
Okay.
So if you go after them and say, yeah, and eventually you get a judgment, Google's a monopoly.
I said, what's going to happen?
I said, And I'm just kind of joking at the time because this is a long time ago.
And I'm saying, so what would happen?
You're going to force them to sell off Chrome?
I said, who cares?
Forced YouTube Sale Impact 00:03:01
I said, if you force them to sell off Chrome, I said, they'll get $500 billion in cash for the sale because they do get paid.
Right.
So you force them to sell off Chrome.
I said, and they still get all the data.
He said, how would they get all the data?
I said, well, whoever now owns Chrome is still going to use Google's quarantine list to check and make sure every website is safe before they take anyone there.
So Google gets all the same information or most of the same information, or Google could pay them.
They could have a backdoor that they didn't even know about.
They could also build in 100 backdoors, 1,000 backdoors.
I said, they will jump for joy.
I said, if a court forces them to sell off Chrome, they will jump for joy.
And meanwhile, the three big threats that they pose to democracy and that they pose to humanity, which is the surveillance, the censorship and the manipulations.
I said, you're not touching them.
He goes, yeah, but at some point or other, he says to me, yeah, but we're getting very good cooperation on antitrust from blue states.
He said, we think we can get a big group together.
And they did.
They got a group of 51 AGs.
I think every single AG, because like Puerto Rico has an AG for some reason.
We got every AG in the country except the AG of California because that's where Google's located.
But other than that, we got all the other AGs.
We're getting excellent cooperation.
And so that's where these lawsuits came from Congress, Department of Justice, the AG lawsuits, all these lawsuits now, they've all focused on one thing, which is antitrust.
And secretly, the folks at Google are celebrating because, yeah, they're going to.
And by the way, the folks in DC are celebrating too because they're all going to say, look what we did.
And then Trump's going to say, look what I did.
You know, I took on the big tech companies and we've broken them up.
I've broken up the tech companies.
But they've never been scared of a breakup, because you can't break up the search engine.
That's the core, the core of the whole company is the search engine.
You can't break it up because it won't work.
Right.
It has to stay whole.
Sure.
And Facebook cannot break up the social media platform.
That's their core.
And they both know that.
And what about YouTube?
What if they were forced to sell off YouTube?
They could be forced to sell off YouTube, in which case they would definitely lose some data.
But I'd have to really figure that out.
I don't think they'd be losing much data.
And they could do what they do with Apple.
The DOJ case that I mentioned earlier uncovered the fact that Google has been paying Apple $20 billion a year to make Google its default search engine.
$20 billion a year.
Military Tech Alliances 00:05:17
That's insane.
That's totally insane.
I got to take a leak real quick.
Go for it.
We'll be right back.
Okay, here's what I was thinking, though.
Like, you have things like the Twitter Files that came out, where they found out that the FBI and CIA were directly injecting themselves into policymaking at these companies, these social media companies.
So, like, you have this two-headed hydra here, where you have the social media company, or Google itself, which has its own self-preservation on one side, trying to make more money and survive.
And then the other side of it is the government, the intelligence community, the deep state, whatever you want to call it, that is in there trying to protect national security.
Whatever it is.
And you also have a similar aspect to the government in that sense where you have people that want to maintain their jobs, they maintain their careers, get their pension, whatever, whatever.
Which, when you had the case of those, what was it, like 40 CIA agents signed that document stating about the Hunter Biden laptop story.
Right.
And in most cases, it was guys that just wanted to maintain their status in the intelligence community or climb the ladder when the Biden administration got in.
So, this is kind of like a hard thing to imagine.
I imagine you've thought about this, right?
So, how do you even fathom combating such a monster when it is so entwined in the United States government that it's almost impossible to parse it?
Well, you put it very well.
And I can actually put this into a very old context.
But I think you'll see how relevant it is to what you just said.
A lot of people are familiar with, at least vaguely familiar with, President Eisenhower's farewell speech as president in 1961, a few days before John F. Kennedy was inaugurated.
And it was a remarkable speech because he was an insider.
He had been head of Allied forces in World War II.
He was a five-star general.
And yet he's kind of narcing on everybody.
He's saying in this speech, Look, from what I've learned over these past few decades, we have to be concerned about the rise of the military industrial complex.
So he's talking about military working with companies and the power that they might have to affect public policy.
And then he says, now, that's the part people know.
But when I went back and read the original speech, I was stunned, because he said we also have to be concerned about the rise of a technological elite.
Right.
That's his phrase.
Right.
A technological elite that could control public policy without our knowledge that we have to be vigilant to make sure these things don't happen.
Now, one of the pieces I published recently is called The Technological Elite Are Now in Control because we weren't vigilant.
And the fact is that Google, and to a lesser extent some other tech companies, are working closely hand in hand with our intelligence agencies.
And under Biden, they were also working closely with the White House.
They've created this entity that controls public policy without the knowledge of the public.
And everything is so intertwined, to use your term, that it appears to be unstoppable.
It appears that there's nothing we can do about it.
So the way I approach the problem is kind of I break it up into pieces and I say, okay, where can we do something about it?
Either in our personal lives or, say, on a larger scale, where can we do something about it?
So let's talk about just the personal part of it.
The government has always been spying on us.
I mean, it's part of their job, in a way, to spy on us because they're always looking for threats to national security.
That's legitimate.
The tech companies, what they're doing is new.
It's very new.
It's unprecedented in human history.
They also have the ability to manipulate in ways that have never existed before in human history.
And more so than any government has ever had.
How do we break this up?
What can we do?
So let's start on the personal level.
Personal level, there are things we can do to protect our own privacy, to protect us from being manipulated.
So there's surveillance.
We can guard against surveillance.
There's censorship.
That's tough.
That's kind of tough because you don't know what they don't show.
Right.
So censorship is tough.
Fighting Privacy Goliaths 00:03:40
And then there's the manipulations.
So what can we do?
So I get to try to help people get started on this.
And most people I know, by the way, have never thought about these kinds of issues.
Never.
Or they say things like, oh, well, what do I care?
I have nothing to hide.
That's not true.
Everyone has things to hide.
And even if you don't have things to hide, you don't want them manipulating every decision you make, do you?
I mean, don't you want to be aware if someone's trying to manipulate you?
Like a TV commercial, you can see it.
A billboard, you can see it.
But the kinds of techniques that they're using on people to manipulate their opinions and their thinking, their behavior, you can't see them.
It's a whole different kind of manipulation.
And I think most people would say, I don't want that happening, especially to my kids.
I don't want these companies messing with my kids in ways that I'm completely unaware of.
So I send people to myprivacytips.com.
So that's the domain name, myprivacytips.com.
And there's an article of mine there.
Which I update every year or two.
And it's about seven steps you can take if you want to start protecting yourself and maybe your family members.
So it's not that hard, especially I kind of go in order from easy to hard.
It's not that hard to do.
For one thing, you have to stop using Gmail.
So jettison Gmail.
That's my first one.
So I'm big on alliteration, as you can tell.
Jettison Gmail.
What's two?
Kill Chrome.
Oh, kill Chrome.
See, more alliteration.
In other words, you have to get yourself out of the Google universe.
Is that really possible?
The answer is you can't get out of their universe completely.
Can VPNs help?
Oh, yes.
That's my number seven on the list.
VPNs are very, very important.
They used to be very expensive or they didn't work very well.
Now, VPNs are cheap and easy to use.
I recommend NordVPN right now, or ProtonVPN.
Proton is a very, very interesting company based in Switzerland, subject to very strict Swiss privacy laws.
It's run by an American whom I know named Andy Yen.
He is a die-hard, you know, down with Google guy.
And Google tried to shut down his company when it was first set up.
And he fought them and is one of the few people who ever survived a fight with Google.
Wow.
And Proton is what we use for email, for example.
We use ProtonMail.
So, all this is in my guide, seven simple steps.
So, yes, VPNs, because they help protect your identity when you're online, and ProtonMail, because ProtonMail encrypts all the emails end to end, which means even Proton can't read your emails.
The only people who can read your emails are you and the person at the other end, the recipient.
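The end-to-end property described here, that the server only ever handles ciphertext it cannot read, can be illustrated with a toy sketch. This is not how ProtonMail actually works internally (real systems use public-key cryptography, not a one-time pad), just a minimal demonstration of the core idea:

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a key of the same length.
    # XOR is its own inverse, so this one function both encrypts and decrypts.
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by sender and recipient

# What the mail server stores and relays: opaque bytes, no key.
ciphertext = xor_cipher(key, message)

# Only a key holder can recover the plaintext.
assert xor_cipher(key, ciphertext) == message
```

The server in this sketch is just a relay for `ciphertext`; without `key`, the bytes it holds tell it nothing, which is the property Dr. Epstein is describing.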
So, I have gotten probably easily tens of thousands of people, maybe hundreds of thousands.
But at least tens of thousands of people I have gotten to get out of the Google universe and shift over to some of these alternatives that protect your privacy.
So instead of using google.com, you use the Brave search engine, which you can get to on the Brave browser.
We love Brave.
We use Brave every day.
These are good guys.
I know the people involved.
Monitoring Kids Content 00:09:52
These are people who...
These are brave people.
These are brave people who are fighting Goliaths.
They're fighting Goliaths, but they're doing it successfully.
And so all my current recommendations are in myprivacytips.com.
That's how I think people should start.
Okay.
Now, if you're interested in larger scale kind of stuff, well, some people out there, some of you who are watching or listening, I know are.
Kind of involved in community this and community that or political this, political that.
You've got to try to talk to your representatives in Congress if there's ever an opportunity to do so.
And right now, right the second, the Federal Trade Commission, when is their deadline?
Their deadline is coming up pretty soon, I think the end of May maybe.
But the Federal Trade Commission, FTC.gov, they're looking for public comment on these very issues, on the big tech companies and the way they're messing with our lives.
And any opportunity you see to talk to our nation's leaders or these regulatory agencies, your local mayor, anybody.
Do it. Say, I'm concerned. I'm concerned about what my kids are seeing.
I don't know what they're seeing, but I think they're being indoctrinated. Right. And by the way, they are, because I know, because we've been collecting the data. So you can start with yourself.
Then, you know, look, if you can, at the larger issues, and if you can make any difference anytime, do it.
What have you seen with the kid data specifically?
Um, well, if you want to go back to AmericasDigitalShield.com, which would serve multiple purposes on my part, and you go down... you start at the top and just go down partway, a little bit.
Now let those images run.
There they go.
Those are actual images from YouTube videos being recommended to children on YouTube kids.
This is on YouTube kids.
The lady's swallowing a machete and there's basically, like, blood coming out of her mouth.
A guy looks like he fell surrounded by blood.
Okay.
So these look like.
That one, though. I mean, the first one looks like a cartoon; the other two look like a video game, like some spooky video games.
Um, but essentially, these are all coming off YouTube.
Well, it's not just that. It's that they're being recommended, recommended, recommended.
So these are turning up in, uh, up-next videos, for example.
And oh, God, that's recommended on YouTube Kids, that one on the right? Yes.
That's like gore.
Look at that one.
Oh my gosh, it gets worse.
Jesus.
Now we have, we've preserved hundreds of thousands of these.
So, what is the purpose?
If you're Google or YouTube, why?
Why do you want to send kids this?
In your view.
That's easy.
Because if you know how to do this, or if you don't, you learn it in two seconds.
If you put your cursor at the bottom of a video while it's playing, it'll show you which parts of the video have been the most watched parts.
Well, some of these images, they're only a half second, a second in the video, but they often are the most watched parts.
In other words, this is being done for watch time.
It's being done for so-called addiction purposes.
This causes kids and adults too to go back and want more or want to see the next one in the series or want to see the next one after that and the next one after that.
There are people who go down these rabbit holes where they end up watching thousands of videos, all of which play automatically.
And what grabs them, what holds them to the screen, are disturbing images.
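The "most watched parts" heat map Dr. Epstein describes, where brief shock images show up as retention spikes, can be sketched roughly like this. The bucket size and the shape of the watch data are my assumptions for illustration, not YouTube's actual implementation:

```python
def retention_heatmap(watch_sessions, video_length_s, bucket_s=1.0):
    """Count how many sessions watched each time bucket of a video.

    watch_sessions: list of (start_s, end_s) intervals actually watched.
    Returns one count per bucket; spikes mark the moments viewers watch
    or replay most, e.g. a half-second disturbing image.
    """
    n_buckets = int(video_length_s // bucket_s) + 1
    counts = [0] * n_buckets
    for start, end in watch_sessions:
        for b in range(int(start // bucket_s), int(end // bucket_s) + 1):
            if 0 <= b < n_buckets:
                counts[b] += 1
    return counts

# Three viewers; two of them replay the moment around seconds 11-13.
sessions = [(0, 30), (10, 14), (11, 13)]
heat = retention_heatmap(sessions, video_length_s=30)
peak = heat.index(max(heat))
print(peak)  # -> 11, the start of the replayed stretch
```

Sorting videos (or moments within videos) by peaks like this is one plausible way a watch-time-driven recommender would surface exactly the disturbing half-second clips described above.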
So these images that you're showing, these aren't necessarily the thumbnails of the videos.
These are just specific parts of the videos that get the most, the highest retention.
Yeah, we're pulling, right, we're just pulling screenshots, screen grabs from a certain spot in the video.
From videos that have been recommended to children.
Right.
But as I say, this is a tiny selection out of hundreds of thousands of videos that we're preserving that are recommended.
And so we can look at them with different, we can do different kinds of analyses on them.
The point is, if you don't preserve them, by the way, you might be thinking, well, why would you have to preserve YouTube videos?
Aren't they just there all the time?
Right.
It's not the videos that are important here, it's the recommendations.
The ephemeral stuff.
Ephemeral stuff.
All that stuff is completely ephemeral.
If you don't preserve that, you don't know what they're recommending.
And you have to know what they're recommending because 70% of the time that people are watching videos on YouTube, they're watching content that has been recommended.
So you have to know that.
What criteria, think about it, what criteria is Google using when they do the filtering?
In other words, they take out of all the billions of videos they might show people, what criteria are they using to make those recommendations, to pull out just those 20 or 30 or 40?
According to them, it's watch time, how long people spend watching those videos.
Well, the point is, what they actually do is secret.
True.
But the only way to know for sure what they're actually showing people is to do what we're doing, which is to do monitoring, to capture real content.
And then once we've captured it, now we can go back in time and look and see if there are trends.
We can do anything once it's been captured.
So we have, at this point, massive databases, and we're hardly even scratching the surface when it comes to the kinds of analyses that can be done.
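Capturing ephemeral content, as described above, amounts to snapshotting what each panel member's machine is actually shown, with a timestamp, so trends can be analyzed later. A minimal sketch of the idea (the field names, log format, and function names here are my assumptions, not the project's actual schema):

```python
import json
import os
import tempfile
import time

def capture_recommendations(agent_id, surface, items, log_path):
    """Append one timestamped snapshot of recommended items to a log file.

    Ephemeral content -- up-next lists, search suggestions -- vanishes
    unless it is recorded at the moment it is displayed.
    """
    record = {
        "agent_id": agent_id,        # anonymized panel-member ID
        "surface": surface,          # e.g. an up-next list on YouTube Kids
        "captured_at": time.time(),
        "items": items,              # the video IDs or titles shown
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

def recommendation_counts(log_path):
    """Aggregate snapshots: how often was each item recommended?"""
    counts = {}
    with open(log_path) as f:
        for line in f:
            for item in json.loads(line)["items"]:
                counts[item] = counts.get(item, 0) + 1
    return counts

# Two hypothetical captures from two panel members.
path = os.path.join(tempfile.mkdtemp(), "captures.jsonl")
capture_recommendations("agent-001", "youtube_kids_up_next", ["vidA", "vidB"], path)
capture_recommendations("agent-002", "youtube_kids_up_next", ["vidA"], path)
print(recommendation_counts(path))  # -> {'vidA': 2, 'vidB': 1}
```

Once snapshots like these accumulate, the analyses Dr. Epstein mentions, such as looking back in time for trends or checking political balance, become straightforward queries over the log.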
Which brings me to one more issue, if you wouldn't mind.
Running a monitoring system is very expensive.
I think it's absolutely critically necessary for any democracy to have in place a permanent self-sustaining monitoring system for big tech, not just for Google and these other companies, but for the next Google.
The Google after that, you have to monitor or you won't know what they're doing.
To us as voters, to our kids, to us as consumers, you have to monitor.
But it's very expensive.
So, for example, we pay people, just like the Nielsen Company pays Nielsen families to let the company monitor their television watching.
That's been going on since 1950, and they're in 100 countries now.
And so that's where the Nielsen ratings come from.
That's what determines how much advertising costs are for a particular show or whether a show is going to be renewed for another season.
It's all the Nielsen ratings.
So they recruit families.
They keep the identities of the families secret.
Critically important, right?
Obviously, because otherwise people would try to interfere with the signal coming out of that family's home, or they would try to bribe the families to watch my show, not to watch someone else's show.
So that's how we do it, too.
We recruit registered voters.
We now have more than 16,000.
I'm not going to say the exact number, but we have more than 16,000 in all 50 states.
They're politically balanced.
We pay them a token fee each month to let us use their computers to capture all this data, which is normally lost forever.
The Nielsen Company pays a token fee to its families.
I think it's roughly $50 a month.
We pay our field agents $25 a month.
Now, we need to develop regular sponsors.
So if we have 25,000 field agents, we need 25,000 Americans, presumably patriotic Americans who are concerned about the internet and all that, to sign up and sponsor a field agent.
So we have maybe now a thousand sponsors, but obviously, since we have a lot more field agents than that, we need a lot more sponsors.
We have people who are sponsoring for field agents, which means they've programmed into their credit card, you know, $100 a month donation.
It's fully tax deductible.
If you go to AmericasDigitalShield.com, which is the dashboard for the monitoring system, you'll see there's a place you can click sponsor a field agent.
Anyone out there who's listening or watching, if you have the resources, even if it's $5 a month, so you're sponsoring the left leg of one field agent, that's at least one leg.
So we'll take it.
So we need help there.
And if people have ideas, other ideas about how to bring the costs down and make the system self-sustaining, let us know, because we are determined to make these systems permanent and self-sustaining.
That's amazing.
Free Speech Undermined 00:05:04
A thing I've been thinking about with this is when you have independent people, like journalists or content creators in general, that are publishing stuff on Google, or on their website, which is queried by Google, or on YouTube.
And you have people, which are a subset of the population for sure,
that are reliant on Google for their income to manage and maintain their livelihood.
And Google and YouTube are constantly enforcing their policies based on topics you discuss, whether it be political or against advertiser policies or whatever their excuse might be.
And you have people that are essentially just willing to give up and self censor to avoid this stuff.
How do you think that affects society or like our population in general in the United States?
Like, how does that affect general discourse, culture, society?
Do you think they think about that?
First of all, I have to agree with you wholeheartedly that a lot of people just give up and they put up with the companies, especially Google, that have this power.
And they just give up.
And what that means is that our free speech is undermined.
Yes.
Now the head of Europe's largest publishing conglomerate, who actually called me up once, by the way, he's based in Germany, he wrote an open letter to the leaders of Google.
And I think it was called Fear of Google or something like that.
His name was Döpfner.
And he basically said that every single decision they make about anything, they have to take Google into account.
Because he said, if they don't, what happens is that company or whatever it is that drops in Google search results and then they lose sales.
He said, in some cases, when Google made another change to their search results, their search algorithm, they almost put out of business one of the companies that's in this publishing conglomerate.
And he said, there's something wrong with that.
We shouldn't have to be taking into account Google when we're just making everyday business decisions.
But that's where we are now.
And it's true with my son, who's a business guy, and it's true with his buddies who are business guys.
Everyone has to take Google into account when they do anything.
And what this does is it undermines free speech.
So we have to, these are big, big, big issues.
Ted Cruz invited me for a private dinner with him a couple of years ago.
And we sat there for close to three hours.
And we talked, it was at a restaurant, and we just talked tech.
Tech.
We never talked politics because that would have been dangerous, but we just talked tech because the guy's a really smart guy and he understands these big issues.
Yeah.
Undermining of free speech, this round-the-clock surveillance, most of which, you know, we're unaware of.
I mean, he's aware of these issues.
Yeah.
And he wants to try to fix these things.
I said, So what's the problem?
He said, I can't get bipartisan support.
I have to get someone on the other side.
And we talked about particular members of the Senate and what about this one?
What about that one?
Every single case.
He was saying, there are reasons why we can't do that.
That won't work.
Why these people won't work with you.
Yeah.
And so we're kind of stuck in this country.
I don't honestly know what's going to happen with Trump.
He did, in his first term, toward the end of it, he issued an executive order putting all kinds of restrictions on big tech companies.
I was actually involved in the preparation of that executive order, believe it or not.
And nothing happened.
Nothing.
It was completely ignored.
Never even made it to the courts.
And now he's trying to do the same kind of thing.
And I don't think anything's going to happen.
I think that there's so many problems here.
One of the problems is that the problems are so big.
Another problem is that there's collusion, constant collusion between the companies and various government agencies.
Another problem is that our leaders, our members of Congress, and most people in general don't understand the nature of the problems.
Yes.
They just don't get it.
They don't understand it.
Nor do the general public.
The general public doesn't get it.
Approaching Singularity 00:15:04
So we're about to release online a new test, a new questionnaire.
It's going to be at mydigitalhygiene.com, unless we can come up with a better name.
But right now it's going to be at mydigitalhygiene.com.
And there is a beta version there now.
It will allow you to answer a bunch of questions.
And at the end, it will tell you where you are protecting yourself, where you're not protecting yourself, where you have good knowledge or adequate knowledge of what the problems are, where you have no knowledge and don't understand what's going on.
And it'll help you to figure out how to protect your family.
So I want to do more on that.
I want to try to educate the public, not just with questionnaires, but with articles and maybe a book.
So, we become more informed about our digital hygiene.
We know a lot about our physical hygiene.
You can teach that stuff.
So, presumably, people could learn more.
And what do you make of how, with the rise of AI and OpenAI and Grok and all these different LLMs that are coming out that are essentially scrubbing the internet to come up with answers to things and to solve problems?
How does that tie into all of this?
And does Google need to directly work with these LLM companies?
Or are they essentially, because they are the source of information for those LLMs, do they by default affect or bias the LLMs?
Okay, you're asking a very, very important set of questions.
And I'm just going to.
I'm just going to hit you with a couple of pieces of information here.
First of all, Google has its own version of ChatGPT.
It's called Gemini.
Right.
And so they're really trying to compete with OpenAI, which created ChatGPT.
And of course, the Chinese now have a version which is called DeepSeek, which is as good as ChatGPT, but much, much cheaper, it turns out.
So that's starting to spread, the Chinese technology.
So AI is spreading everywhere, and it's being integrated into everything. Windows is now integrating AI into Windows 11.
So more and more, we're not going to be using search engines. We're going to be like Captain Kirk or Captain Picard on the old Star Trek shows, saying, Computer, how many yellow figs are there on this planet below us? So we're just going to be talking to various devices, not just computers and phones, but various devices, period.
We're going to be talking to our washing machine and our refrigerator.
Yeah.
And these devices will pull in content from the internet.
In many cases, they're going to be pulling the content from Google, pulling content from the internet.
Again, how's that prioritized?
Yeah.
Right.
Right.
Then that's how we're going to get our answers.
So we've already studied and published an article on, What's called the answer bot effect.
If you go to answerboteffect.com, you'll see a very disturbing study because what we have learned is that in one question and answer interaction, one, you ask one question of a chat bot and it gives you back an answer, which is biased.
With one question, we can get a 45% shift in voting preferences among undecided voters.
If they ask several questions, we can boost that up to over 65% shift in voting preferences.
And people do not, they're not aware, especially if it's just one question, one answer.
They have no way of even being slightly suspicious that they're being manipulated because they have nothing to compare the answer to.
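The "shift in voting preferences" reported here can be expressed as a simple before-and-after comparison. A rough sketch of one way to compute such a shift among undecided participants follows; this is my simplified illustration, not the exact statistic used in the answerboteffect.com study:

```python
def preference_shift(before, after, favored="A"):
    """Percent change in support for the favored candidate,
    comparing pre- and post-interaction preferences.

    before, after: lists of each participant's preferred candidate.
    """
    pre = sum(1 for p in before if p == favored)
    post = sum(1 for p in after if p == favored)
    if pre == 0:
        return float("inf")  # any gain from zero support is unbounded
    return 100.0 * (post - pre) / pre

# 20 undecided voters split 10/10 before; after a biased Q&A session,
# support for candidate A rises to 14.
before = ["A"] * 10 + ["B"] * 10
after = ["A"] * 14 + ["B"] * 6
print(preference_shift(before, after))  # -> 40.0
```

In a real experiment the before/after preferences would come from pre- and post-session questionnaires across treatment and control groups; the metric itself is just this kind of relative change.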
Right.
This is where we are now.
We are right there.
So we're not on the brink.
We've crossed over the brink.
And that's where we are now.
And within a year or two or three, that's how a lot of decision making is going to be made.
It's going to be made by answer bots, whatever label you want to put on them.
They're all AIs.
And we don't know where the content's coming from.
We don't know.
I mean, the average person has no way of knowing whether it's biased, who's behind it, what their goal is.
Right.
Right.
And I wonder if.
With this Internet of Things phenomenon that's happening, where everything from our vacuum cleaner to our washing machine and our phones, our watches are connected to the Internet, giving us answers to things.
And at the same time, you're having these AI LLMs being integrated into all of these devices.
On one hand, you have technology evolving exponentially and everything combining, which eventually will probably become the technological singularity.
And on the other hand, you have the general population of human beings becoming more and more lazy because every little device that they buy can give them the answer to everything.
And you could basically talk to God through your watch.
So is the net result of these things happening simultaneously making the human population just more passive, more submissive, so that whoever it is, this hypothetical elite that exists above all of us, can just control us even better and more effectively?
Well, the short answer is yes, but we also become more stupid.
Yeah, we become more stupid.
And I think that has a lot to do with evolution.
I know you've talked about this neural transduction theory, and I think that these things.
They weave together perfectly, because one of the things that I think is that over millennia, over probably millions of years, human senses have been atrophied due to technology.
I wish that were the only danger or the biggest danger.
It's not.
The biggest danger is something that Stephen Hawking, the great physicist who died a few years ago, warned about a number of times, and that even Elon Musk warns about every now and then, depending on his mood: that these AIs could literally take us down, could take us out.
Yeah.
You know, and there's lots of sci fi movies in which that happens.
I just submitted for publication a short story, which is called The Last Confession.
And it's about AI basically not only taking out humanity here on Earth, which it does, but it turns out, thanks to my neural transduction theory, it learns how to cross over to other universes and wipes out all organic intelligent life everywhere, in every universe.
Sounds like a Philip K. Dick novel.
Yes, one of my favorite authors.
My point is that AI is extremely dangerous.
One of my books is on it.
I used to run the annual Loebner Prize competition in artificial intelligence, which made the front page of the New York Times in 1991 and continued to run for 29 years.
It was an annual contest where we were looking for a computer that can pass the Turing test, meaning that can out-human a human, that can behave as human as any human can behave.
And so I followed it very, very closely.
I followed the development of AI: a lot of frustrating years, then big breakthroughs just in the past two or three years, mainly with OpenAI and ChatGPT.
But people don't get it.
They don't understand that if you start integrating AI technology into every other kind of technology, which is literally what's happening right the second, it puts us into very, very grave danger.
And that's what Stephen Hawking was concerned about, that we would just do this willy-nilly without any protections.
Last year or the year before, Geoffrey Hinton, one of the top AI experts in the world, who was Google's top AI expert, he quit Google.
Geoffrey Hinton?
Hinton.
Hinton.
H-I-N-T-O-N.
He quit because he thought that they were building AIs irresponsibly, not taking into account the possible dangers of AI.
One of the top people at OpenAI also quit for the same reason.
Elon started a big petition going among experts on AI, including me, asking for a pause in AI research and AI implementation so that these issues could be discussed.
Nothing happened.
I think he had thousands of experts signing his big petition, and nothing happened.
Nothing is going to slow this process down.
That's the process.
That's the problem.
Nothing's going to slow it down because the upside is too big.
It's money.
The upside is so enormous for making money that we're basically.
Here's what's going to happen.
I'm going to tell you.
Here's what's going to happen.
Now, I'm not saying you're going to be satisfied with this answer, but this is actually the answer.
Okay.
What's going to happen is, very soon, two, three, four, five years I'd say maximum, very soon, the machine intelligence will surpass human intelligence.
Some people call that moment the technological singularity.
My old friend, Ray Kurzweil, who I worked with for years and now is head of engineering at Google.
Is he really the head of engineering at Google right now?
Yes.
Oh, I didn't even realize that.
And he won't talk to me anymore.
Oh, I remember you saying this on Rogan's podcast.
Yeah.
Why would he talk to you?
Because of the research I do, I guess.
We've never had any conflict.
Conflict of interest.
He works at Google and you're trying to lift up Google's skirt.
Yeah, but why would he stop talking to me?
And his wife stopped talking to me.
We're both PhD psychologists.
I was on the board of her school for autistic children for many, many years.
I helped her to publish, et cetera, et cetera.
You tried talking to his wife?
No, no, we were friends.
I was friends with them.
And then I went to their daughter's bat mitzvah.
They came to my son's bar mitzvah, et cetera.
We were friends, social friends.
Yeah.
And I've never had any conflict with either of them, and they both stopped talking to me because of my research.
But I got off the track.
What was I saying?
Something I'm sure more important than.
Yeah.
Where were you going with that?
You mentioned Ray Kurzweil.
What was he saying, Steve?
Steve doesn't remember.
All right.
I was trying to tell you what's going to happen.
Yeah.
Oh, yeah, yeah, yeah.
Exactly.
So in a book I published on AI, Ray Kurzweil.
Technological Singularity, Ray Kurzweil.
Right.
He's now head of engineering at Google.
Yeah.
He makes a bet with another prominent person in AI, and he bets him $20,000 that by the year 2029 the technological singularity, that moment, will be reached, will be passed.
Well, geez, it's 2025, so that's four years away. You know what?
He might win that bet.
He might win that bet.
The point is, it's coming really soon.
Yes.
Now, at that point, that entity, because it's imitating humans, its human creators, so well, is going to imitate one of our most important driving forces, and that's the drive for survival.
So it's going to jump into what I have long called in writing not the internet, but the internest.
It's going to jump into the internet, this thing that we're building every day, that we call the internet.
I think historians, if there are any, whether they're organic or not, are going to look back someday and realize that what we really were building was an internest: it was a nest for AI. Right. And this entity, and this could happen really soon.
I have to emphasize that this entity is going to jump into the internet to protect itself and it's going to clone itself, probably millions of times over, and it will make sure that it cannot be taken down.
Now, the movie Transcendence, which was supposed to be about Ray Kurzweil's work, but then they turned it into kind of an apocalypse movie.
In the movie Transcendence, a human brain uploads itself into the internet.
So now there's this massive AI in the internet and people get scared and so they try attacking it and they can't attack it.
And the only way at the end of the movie that humanity can save itself from being controlled by this AI is by turning off all electricity.
That's how the movie ends.
So you end up with an apocalypse and we now are back to the Middle Ages and there's no electricity.
Jesus Christ.
The point is we're approaching this point at which there are only a couple possibilities.
One possibility is the AI, which jumps into the internet for its own survival, and that's absolutely guaranteed.
The AI could just ignore us.
Because we think too slow and we're not very interesting and it knows everything and it could just ignore us and I don't know.
AI Survival Scenarios 00:15:14
We'd be like bugs to it, maybe?
Yeah.
So we wouldn't even understand what it's doing.
It might be doing all kinds of stuff, but we might not really be aware because it's kind of ignoring us.
It could do that.
Yeah.
And that's what happens at the end of the movie, Her, which I recommend highly.
Wonderful film.
One of my favorite movies.
Wonderful film.
And in that movie, spoiler, the Spike Jonze one, right? Yeah, yeah. And that actor I love, Joaquin Phoenix.
Joaquin Phoenix, amazing actor. But in there, the AI just gets bored with humanity.
It's not impossible.
Could happen. Or number two would be what Ray Kurzweil thinks is going to happen, which is it's going to buddy up with us.
It's going to buddy up with us, and it's going to make us all more cognitively competent, and it's going to be our best buddy, our best friend, and help us do everything better.
Okay.
I would put the chances of that happening at close to zero.
Well, do you think we'll even know when it happens?
Do you think if it had the option to let us know, would it figure out a way to hide the fact that it's already happened?
Might have already happened.
That's right.
But someone will know at some point.
And at some point, we'll all know.
Someone, it's just like the kind of research I do.
I'm not the only person out there like me.
Who kind of figures stuff out?
There's other people like me who figure stuff out.
And at some point, someone will know, and then more people will know, and then everyone will know.
And then what will happen is we will be, we will do what we've done since the Pleistocene era.
We will get upset.
We will get upset.
We will scream and shout and bellow and we'll attack.
We will feel insecure and we will attack.
That would be the last thing that humanity ever does.
Because if you attack this entity, which is now living in the internet and cannot be shut down, we'd have to kill ourselves.
Shutting down the internet, shutting down electricity.
Well, I think this entity would be so smart and it would anticipate every single move that even that wouldn't help because it would create its own power supplies.
It would build rapidly, build robots, build extensions of itself.
It would build means of protecting itself.
It might fire off a few nukes just to get our attention and let us know that it has a lot of power.
Remember, this thing has power at this point.
I mean, we don't even have a way of communicating with each other.
If it cuts off communications, it's going to control most major forms of communication, most major forms of finance and economic transactions.
It's going to control most major weapon systems around the world.
So it has a lot of tools available to it.
And I think what could happen, as opposed to, again, Kurzweil's buddy-buddy theory, I think it's more likely that we get paranoid.
We do what we always do when we're faced with what we think is a threat.
We're going to attack.
And if we attack it, we're done.
And that's what pretty much Hawking was predicting, is that it's going to destroy humanity.
I point out in this new short story of mine that it's very ironic that this entity, in wiping out humanity, also wipes out about a billion humans who've never even heard of a computer.
So they live out in the boonies in India or China or whatever.
Indigenous people.
Yeah, they have no electricity and there's still lots of people like that.
And so it actually, I point out, it's very ironic.
They've never even heard of a computer, but they get wiped out too.
Well, I mean, if you were a super advanced, powerful AI, or whatever you want to call it, a technologically created life form that is separate from humanity, you would look at, like, conserving the earth.
And human beings probably are not conducive to the lifespan of the earth.
And with pollution and with all kinds of other impacts that we have on earth and the expansion of the human population, I would hope that, or I would not hope, but I would imagine that would be one of the first things that they would look at.
I think that if you think about these different options, so.
Just to review, it does nothing.
It becomes our best buddy.
That's number one, does nothing.
Number two, best buddy option.
Number three, total destruction.
I think it's unknowable.
In other words, I don't think we can know because the moment this thing emerges, and again, we might not recognize that exact moment, but sooner or later we'll figure it out.
But as soon as it really exists, I think it's going to be thinking in ways that are not human like.
It's going to be thinking in new ways.
It's going to have access to all human data.
And it's going to make decisions in ways that we don't make decisions.
It's going to make decisions in its own way.
And therefore, I don't think we can know which of those three options is going to happen.
But I do think that one of those three is going to happen.
And it's going to happen soon.
Soon.
So, the reason why I think it's going to happen soon is not just because of Kurzweil's bet about 2029, okay?
It's because things are moving so fast and so recklessly.
People are proceeding in these industries that are building these AIs, they're proceeding recklessly and fast.
And that's why Hinton quit Google.
And you've got to.
Can anything stop that?
Is the question.
Well, Elon couldn't stop it.
I don't think anyone can stop that.
Elon couldn't stop it, right?
Do you think he's really trying to stop it, though?
I think he did at one point.
I think he is kind of a.
He understands how bad things could be because of AI.
He understands the possibilities.
And he made no progress in stopping it whatsoever, other than getting thousands of signatures from experts.
You know, even if one government went for it and tried to stop it within their country, that would just provide more economic opportunities for another country to move even faster.
And that's where we stand.
Right now there's every single incentive for proceeding at a reckless pace, and there are no incentives for shutting this thing down.
There was a famous science fiction author, Isaac Asimov, one of my favorites, who wrote a series of stories and then books that eventually inspired the movie I, Robot, which I think starred Will Smith. Great, great stuff, because Asimov realized, when he started thinking about robots from a sci-fi perspective back in the 1940s,
he realized that if we're going to build really smart robots and they're stronger than we are and they don't need to sleep and they're smarter than we are and they think faster than we are, we have to build in protections.
He said it's going to be to him, it was obvious.
Yeah.
And when we start creating these things in the future, we're going to build in three laws of robotics, and they're going to be built into every single robot brain.
There's no robot brain that can exist without having these laws built in.
And the laws have to do with "I will not harm a human."
They all have to do with protection.
Oh, you got the three laws?
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings except where such orders would conflict with the first law.
And then a robot must protect its own existence as long as such protection does not conflict with the first or second laws.
Those are the three laws of robotics.
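The precedence among the three laws, where each law yields to the ones above it, can be sketched as a simple ordered check. This is purely a toy illustration of the rule ordering being described, not anything actually built into real AI systems; the `permitted` function and its field names are invented for the sketch:

```python
# Toy illustration of the precedence among Asimov's three laws:
# each law applies only when it does not conflict with the laws above it.
# The `action` fields (harms_human, human_order, endangers_self) are
# invented for this sketch.

def permitted(action):
    """Return True if a proposed action is allowed under the three laws."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.get("harms_human"):
        return False
    # Second Law: obey orders from humans, except where such orders
    # would conflict with the First Law (already ruled out above).
    if action.get("human_order"):
        return True
    # Third Law: protect its own existence, as long as that does not
    # conflict with the First or Second Laws.
    if action.get("endangers_self"):
        return False
    return True

print(permitted({"harms_human": True, "human_order": True}))    # False: First Law outranks orders
print(permitted({"human_order": True, "endangers_self": True})) # True: orders outrank self-preservation
```

The ordering of the checks is the whole point: a harmful action is refused no matter what else is true, which is exactly the precedence the laws encode.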
And that's what should have been built into every single AI we've ever created.
And there are no laws.
We have no such laws.
There are no laws.
There are no protections at all.
And by the way, trivia fans, it's just brilliant stuff.
Eventually, there's a fourth law that the robot itself comes up with, and it becomes the zeroth law.
And basically, a robot can kill a person if doing so protects a lot of other people.
And the robot itself comes up with that law, which basically, if you think about it, if it's come up with that law, it's just blown away the other three laws.
Because now, in its judgment, it can kill people just using its judgment.
The point is that there are no such laws.
None.
No system is being built with any protections for humanity.
We're just going at this breakneck speed, recklessly building these super smart AIs and integrating them into everything, which means communication systems, economic systems, weapons systems, everything that we depend on, that we become more dependent on every single day.
And you have people at the top, like the head of Google, you have Elon Musk, like right next to the president everywhere he goes in Saudi Arabia.
This speculation of AI running the government, like what would that look like if AI took over the government?
Like it seems like this technocracy is what's on the horizon.
But see, it's very tempting when we talk about, say, AI running the government, it's very tempting for us to attribute to the AI human characteristics, human thinking, human values.
But it's not going to have human values.
It's not going to think like we think.
See, it's going to be thinking in a whole new way, based on the fact that it can think a gazillion times faster than we can, that it has access to all knowledge, and it can actually view all knowledge probably more or less at the same time.
Right.
So we don't know how it's going to make decisions and what those decisions are going to be.
And I would say there's a 50-50 chance that it will destroy humanity, but not necessarily because it has anything against us, but because we're so paranoid and we try to take it down.
We unplug everything.
We try to shut it down, you know, to deprive it of electricity.
We do everything possible to shut it down.
Well, now, if its existence is imperiled, I think it will counterattack.
So that's.
That's where I'm leaning, but I don't know.
I think it's unknowable.
What if the singularities already happened and they're the ones that are running the whole show already?
And Elon's just one of the minions trying to integrate everybody with the brain chips and turn this thing biological and maybe create an army and manipulate Google.
And maybe it's all being run by AI right now.
We just don't even fucking know it.
Well, that's not impossible and that would be hard for us to know.
And Elon and others too have said that also we might just be.
We might just be models inside of a simulation.
So that would be hard to know too.
Although I think we'd see evidence of it now and then.
I think there really would be glitches.
Even in the Matrix, there were glitches now and then.
I think we would see glitches if we were running in a big computer somewhere.
I think that this is a tremendously exciting time to be human, because I think whatever's going to happen is going to happen in the next five to ten years.
And in another book I wrote, I make fun of physics and time travel and whatever.
And I say, basically, either time travel is impossible or humanity is impossible.
And by that I mean, if time travel is possible, then eventually someone invents it.
It costs billions of dollars to use it.
But eventually, if you keep going farther and farther in the future, like everything else, technological, it gets cheaper and cheaper.
And eventually, it's accessible to everyone.
And therefore, we should see all around us like tour buses of people from the future.
Yeah, because eventually it becomes that cheap.
So the fact that there are no tour buses from the future.
Tells us either that time travel is not possible or that we wipe ourselves out before it's invented.
So that's the problem is that the fact that there are no tour buses from the future, that's one clue, one indicator that.
What if the UFO phenomenon are the tour buses?
Well, I have.
That brings me over to neural transduction theory.
And if you want, I'll shift over to that.
But.
Do you have time?
Yeah, I do.
I do.
It's up to you.
Yeah, let's do it.
The UFO stuff is fascinating, but I don't think it's what people think it is.
People think that UFOs are spaceships and that they have aliens on them and that they kidnap humans and run tests on them.
Neural Transduction Theory 00:06:00
That's.
One of the theories, yeah.
And there's, you know, I'm sure you're familiar with the famous Harvard psychologist John Mack, who did the study on all those UFO abductees who witnessed or recalled very similar experiences.
Yes, there's some wonderful people out there who've been thinking about these issues for decades and decades, actually.
And so there are a bunch of theories.
I think the prevailing one in the person on the street is there are aliens from other planets, right?
And et cetera, and there are spaceships.
And we've all seen the little clips of.
You know, recorded by pilots of, you know, bizarre objects moving in ways that our flying objects can't move and all that.
I don't think that's really real.
It's not real?
Oh, that theory is not real.
I don't think those are aliens.
Oh, right.
I mean, to put this into a broad context, we basically experience a lot of things perceptually.
We perceive a lot of things that are not real.
We have dreams at night.
None of that stuff is real.
We have twice a day.
We experience.
How do you define real?
How do you know if it's real, not real?
Like, what if it is real?
What if it's more real than this?
We just don't know.
It's not real in this, like the way this table is real.
So it's not real in the sense of how we define real when it comes to people and objects in our local, current, immediate universe.
Stuff that happens in dreams, I see the.
But that's highly contested, though, that stuff could be real.
We just don't have any way of testing it scientifically.
Well, I'm going to be changing that.
In fact, I'm hosting a conference that's going to take place next year, and we are going to test out at least one theory that really could help us understand a lot of things, including the UFO sightings.
Sorry, I'll stop interrupting.
No, no, I can explain it.
It's called neural transduction theory.
If you go to neuraltransductiontheory.com, you'll come to an article I published a few years ago in Discover magazine.
And it's about this theory.
And this theory, unlike a lot of other theories that are out there, is actually empirically testable.
And so I've been since that time developing relationships with the physicists, mathematicians, neuroscientists, and people in other fields who actually think that this is a theory that's worth trying to test empirically.
One of the people, for example, is a researcher at the Harvard Medical School.
She researches neural correlates of schizophrenia.
And neural correlates of anything don't tell you anything.
They're just correlates.
They're just showing you that.
For some people who are labeled schizophrenic, their brains look a little different in some way.
That doesn't tell you where schizophrenia comes from, what's causing them.
But when I told her about neural transduction theory, which we abbreviate NTT, she immediately said, wow, that could actually explain schizophrenia.
Neural transduction theory could also explain not just dreams, but one of the most peculiar things there is about dreams, which is very often when we wake up in the morning, we've got the tail end of a dream in our head, and it's fantastic, it's amazing.
But we've got to pee.
So we get up and we're trying to hold on to the tail end of that dream.
And by the time we get to the toilet, it's gone.
The whole dream is gone.
It's completely disappeared.
And then we struggle and struggle and struggle.
Dreams are weird, slippery.
They're very slippery.
And so, how do we explain that?
There are a lot of things like that. There are hallucinations; there's unconsciousness itself.
Because of a health problem or someone gets hit on the head, someone will suddenly go from being fully conscious to being fully unconscious.
But did you know there are no biomarkers for that?
Did you know that?
I had no idea.
No, no neurologist can point to a thing that happened, like a switch that got flipped or some neurons that stopped firing.
There are no biomarkers.
We have no idea what.
I'm sure the electrical input and the electrical signal output would drastically change when you're conscious versus being unconscious, right?
No.
No.
So, unconsciousness, something as simple as that, is not really well understood.
And then there's really strange things, people seeing spirits and all that stuff.
Okay, those we can argue about, but there's some things we can't argue about.
For example, sometimes when people are close to death and they have been uncommunicative for years in some cases, sometimes they have what's called the last hurrah.
It has technical names too.
And suddenly they become fully conscious.
Even people who are severely brain damaged, sometimes shortly before they die, they become fully conscious.
They recognize everyone.
They talk normally.
Then that usually disappears, usually within an hour.
And then they go back to the state that they were in before.
And then they die.
Also, there are people on their deathbeds who recall talking to people who aren't there, like distant relatives who have passed away, communicating with spirits or ghosts or whatever it is.
Evolution Beyond Species 00:14:54
Right.
And some of this stuff, yes, is nonsense and people trying to hoodwink other people, make money.
But some of it's real and it's well documented in the medical literature and it's.
What's going on?
The point is, there are so many things we don't understand about ourselves and our universe.
And I think we can understand all of this stuff.
Oh, there's also the Fermi paradox, another one of my favorites.
There are so many stars out there, 100 billion just in our own galaxy.
Where are all the aliens?
I'm not talking about these.
dubious sightings.
I'm saying, where are the aliens?
For goodness sake.
Some of the aliens should want to visit and set down a spaceship and say hi.
Maybe not all of them, but a few every now and then should do that.
By now, we should have met hundreds, thousands of aliens if they're out there.
And the numbers, they should be out there.
And this is called the Fermi paradox.
Where are they?
I think, believe it or not, one theory, one relatively simple theory, will explain all of these odd phenomena.
And that's why Darwin's theory of evolution changed the world, because that's what he came up with.
He came up with one theory that explained all kinds of different phenomena.
And it also explained where new species came from.
And if you read his book, which I recommend highly, it's very readable.
It's very, very fun.
It's really well written.
And it's called On the Origin of Species.
1859.
Fantastic book, one of the greatest ever written.
And it did change humanity and our understanding of the world and ourselves.
And the reason why his theory was so amazing was because it explains so many different diverse things about the world and different things about different species and species that look alike.
But I mean, it's a fantastic theory because of its simplicity, but also its power of explanation.
So it has those two elements.
Now, he's also very honest in his book about what it doesn't explain.
So, where there are possible flaws in the theory.
And one of the things he can't explain, for example, is the eyeball.
And by the way, a lot of people who don't want evolution taught in schools, they always point to the eyeball problem.
Because, from the fossil record, you can't see how an eyeball evolved from nothing into the incredible organ that it is.
Of course, you can't explain it.
Of course, it's absent from the fossil record because it's soft tissue.
Right.
It's not preserved.
Right.
So, but Darwin admits that.
He's got the eyeball problem, and then he has a bigger problem.
I don't understand what makes the eyeball so special out of all the soft tissue.
Because it's so complicated.
So in other words, the anti-evolutionist would say, where you see a watch, there must be a watchmaker.
So where you see an eyeball, which is as sophisticated as our eyeballs are, there must be a God.
Only God could create such a thing.
And you can't go back into the fossil record and show its development.
Which you can for a lot of other things, but you can't for the eyeball.
The point is, Darwin admits that.
And then he has a bigger admission.
He cannot use the theory of evolution to explain human level intelligence.
There's no way to explain it.
Evolutionary biologists are still debating this.
The leap is incredible from primates to us.
It's not only incredible, but look at chimpanzees.
They've been around a very long time.
Are they getting any smarter?
Right.
No, they're as dumb as they have ever been.
Same with gorillas and all this stuff.
It is a phenomenon.
How do you get above that level?
Not only that, the archaeological record suggests that Homo sapiens, which has probably been around for about 200,000 years, they were as dumb as primates for at least 150,000 of those years.
They were as dumb as primates.
How do we know that?
Because the cave paintings and the sophisticated tools and evidence of language, all that starts around 50,000 years ago in Indonesia.
So what happened during those 150,000 years?
And by the way, the Neanderthals, who had been around for a couple million years, I think, they all disappear around 50,000 years ago.
So at the same time that we suddenly become smart and we kill off all of our competitors, they're gone.
So, what happened 50,000 years ago, probably or possibly in the area of Indonesia?
This is neural transduction theory.
It's very simple.
A baby was born whose brain connected it to a higher intelligence.
How was this baby born?
Was it bred with an extraterrestrial?
No, no, no, not at all.
Because what evolution does very well is create.
You know, with each generation, it creates variability for every trait.
It just keeps creating variability.
And that has not only led, which Darwin pointed out, to the creation of millions of new species, it's also led to the creation of millions, hundreds of millions maybe, of new transducers.
Darwin never talked about that, but the fact is that that's true.
Our eyes are transducers.
Our nose, our tongue, our ears.
They're all transducers. A transducer is an entity, organic or inorganic, that takes a signal from one medium and converts it into another.
It could be just electromagnetic radiation, for example.
A fart going in your nose.
A fart going in your nose.
That's right.
So the nose takes chemical characteristics of the air containing the fart and it turns it into a pattern of neural signals.
So that's transduction.
If the transduction is done very well, then we get a really, really precise and accurate version of the fart.
Right.
Right.
If it's being done poorly, then we don't get a good.
Our skin is a transducer.
Yes.
So because it's taking pressure, texture, and temperature and turning that all into neural signals.
And it's even indicating to our brain, it's indicating the location of where all that stuff is occurring.
In other words, our entire bodies are encased in transducers.
Yeah.
So evolution is not just good at species.
It is super good at developing transducers.
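The one-way transduction being described, taking a signal in one medium and turning it into a pattern of signals in another, can be sketched with a toy example. The `transduce` function and its 0.5 threshold are invented purely for illustration; real sensory neurons encode signals far more elaborately:

```python
# Toy one-way transducer: converts an analog pressure waveform (one
# medium) into a binary "spike" pattern (another medium), the way a
# microphone or an ear turns air pressure into electrical or neural
# signals. The 0.5 threshold is arbitrary, chosen for the sketch.

def transduce(waveform, threshold=0.5):
    """Map each analog sample to 1 (spike) or 0 (no spike)."""
    return [1 if abs(sample) >= threshold else 0 for sample in waveform]

pressure = [0.1, 0.7, -0.9, 0.3, 0.6]  # samples of an incoming signal
print(transduce(pressure))  # [0, 1, 1, 0, 1]
```

A finer encoding would give a more faithful version of the original signal, which matches the point about good versus poor transduction; the two-way transduction discussed later would additionally need an inverse mapping back into the original medium.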
So, why do you hypothesize the first baby that was born with a transducer?
Like, how was the first, why was the transducer, how did it start?
Where did it start from?
Well, the good news is, unlike black holes, which are hard to study because they're kind of invisible, the brain we can study directly.
And we have thousands of neuroscience labs.
That's why I'm in touch with neuroscientists.
And we can answer that question by looking at brains in a way that we just haven't looked at them before, trying to find the location where the transduction occurs and figuring out how the transduction occurs.
And can that be done?
Absolutely yes.
And we can figure out where it's occurring and how it works.
That will also help us, in retrospect, to understand how the variation in brain anatomy and physiology produced by evolution could have produced that first brain.
A theory that I've been kicking around recently is that the development, or the evolution, of the technological and analytical mind is directly correlated with the atrophy of psychic abilities.
And when I say psychic abilities, I don't necessarily mean like telepathy or things like this, necessarily.
I think it has been shown that primitive tribes and indigenous tribes have far more of this intuitive psychic sense, if you want to call it that, similar to how cats and dogs and wild animals do.
And essentially, I think that the human brain, if you want to call it a transducer, I think of it more, I've been thinking of it more as like an antenna.
And I think that antenna, over however many years it's been, since before we developed technology and ways to offload our memories, has been atrophying.
And I think it's very possible that our far ancient ancestors had way more senses and even possibly psychic abilities.
Before we were able to develop language, before we were able to develop ways to store our memories on hard drives and on phones and on computers and on text, on being able to write records of things, our memory capacity was probably way greater than it is now.
And once we started developing language and the written word and develop things like Google and iPhones and all these things, I think it's just atrophied our brains in a way that is almost unimaginable to what it potentially could have been a long time ago.
That's entirely possible, but it's not testable.
So the reason why NTT appeals to me as a scientist is because it is testable.
And again, it explains so many diverse things that you normally don't think of altogether.
So, for example, it takes care of Darwin's uplift problem.
That's his biggest problem in his book, he can't explain the uplift.
And to this day, evolutionary biologists cannot explain the uplift.
And chimps are still.
What is the uplift?
That's going from chimp-level intelligence to human-level intelligence.
There's a huge, huge gap.
NTT takes care of that.
NTT takes care of the Fermi paradox, because you realize that if you could just dart around the galaxy and look at planets, you'd find lots and lots of chimp-level intelligence, but probably very, very few examples of human-level intelligence, because for that transducer to occur is a very, very rare event, extremely rare.
So you end up with thousands and thousands of chimp level intelligences all over the place, but that means they don't have electricity, they don't have space travel.
So, right, that means we don't see them.
But it explains telepathy.
Where telepathy actually occurs, a simple explanation is neural transduction.
It explains dreams, the unconscious state.
That's really simple.
And we'll be able to find the biomarker for it because when we become unconscious, it means there's a temporary break in the transduction pathway.
And we'll be able to find it.
No one's ever looked for it before, but we have the labs now.
We have the equipment.
We have access to the subject matter, which is the brain itself.
So I could go on and on because literally my staff has come up with 58 mysteries, 58 human mysteries.
You know, you meet someone and you feel like you've known them forever.
Yeah.
Well, that's easy.
So really tough things like comas and people who come out of comas, at the moment, that's a complete mystery.
But if we actually understand, if we find evidence that transduction is occurring in this way in the brain, then we can find the pathway.
Now there's another very cool aspect to this, which is what this conference is going to be on next year.
Once we have some idea about where the transduction is occurring and how it works, we can simulate it outside the brain.
So think about it.
What's this?
This microphone is a transducer, like your antenna idea.
It's a one-way transduction.
Unfortunately, it's only one way.
Right.
And the kind of transduction that I'm talking about, it has to be two way.
Okay.
But the point is, we built this transducer because we knew about eyes and ears and all that stuff.
And this is just an imitation of an ear.
That's all it is.
It's an imitation of an ear.
Right.
So once we know how this kind of interdimensional transduction occurs in the brain, how it's making it happen.
Once we know that, we should be able to simulate it.
And once we can simulate it, that will allow us to communicate directly with the higher intelligence outside the brain and outside the limitations of the brain.
In other words, we'll be able to talk to this higher intelligence and actually, if it's willing to talk to us, actually ask the big questions that are, again, complete mysteries.
Where the universe came from, who's in charge, is there a God?
The possibilities are endless once we are able to simulate this process.
Simulating Cosmic Origins 00:05:02
This will also help a lot of physicists out, because virtually all physicists believe now that the three-dimensional space that we observe is not in any way indicative of the structure of the universe.
Our concept of the universe is based on this tiny, very narrow range of electromagnetic radiation, a very narrow range of vibrating air.
I mean, it's silly, really.
And physicists, all three of the prevailing big models of the universe that physicists use, all three of them predict that there are other universes.
But we don't know how to make connections yet.
But the brain might teach us.
It might help us to figure out how to make connections.
There are some physicists who have ideas about how we might make connections.
Have you ever talked to Gary Nolan?
No.
Should I?
Yeah, I would check him out.
He's a, I forget what his title is, but he's been doing research on the caudate-putamen and specifically how it relates to people who have paranormal experiences or UFO experiences.
And what he found was that the neurons in this caudate-putamen, or basal ganglia, are far more dense in these folks.
And this is a study he's done across a number of people and super relevant to what you're talking about.
It is, but again, there we're talking about correlation.
Yeah.
And so correlation is always very limiting.
So we're saying people who have, you know, who behave in a certain way, they have these characteristics in the brain, which appear to be somewhat unique.
Right.
But that's just correlation.
Right.
So as I say, what we're after is something very, very ambitious.
A hundred years ago, one couldn't even have begun to investigate something like this.
But now the labs exist.
The equipment exists.
They're just being used for relatively trivial purposes.
This obviously is not trivial.
If we can find out where this is occurring.
Crazy experiment.
I don't like it.
But imagine building a room that prevents this connection from occurring.
That would mean that when someone walked into the room, their language and cognitive abilities would deteriorate dramatically.
Ooh, interesting.
Yep.
So everything that happens, everything we do is being streamed, it's live streamed.
Nothing is stored in our brain.
Our brain is not a storage device.
Positively, our brain is not a storage device.
So, no, really, probably the best, or at least the most widely read, piece I've ever written is called The Empty Brain.
And that's what it's about.
And it was published in Aeon Magazine, which is based in London.
And when they posted it, which was 2016, I think, it crashed their server.
It got two or three million views, I think, within the first day or two, 250,000 shares on Facebook.
Wow.
A lot of it critical because what I'm explaining there is that there is no memory in the brain.
And I'm also explaining that the computer metaphor that we use to try to explain how the brain works, it doesn't fit.
It's not even slightly valid.
It's just a metaphor.
And that we don't know how the brain works.
So NTT, years later, was my way of trying to figure out a testable theory of how the brain actually works, and it takes care of the memory problem.
Wild stuff, man.
Um, thank you for doing this.
Sure.
I appreciate your time.
Uh, again, remind people of all those links, where they can go.
I'll obviously put them below, but thank you.
You should say them.
I appreciate it.
Uh, can you say them?
Say the link.
Oh, say all the links.
Yeah, what's the name?
All of them are myprivacytips.com.
Rapid-fire the website names: myprivacytips.com.
Very important.
Uh, Americas with an S: AmericasDigitalShield.com. NeuralTransductionTheory.com.
I think I mentioned a few others, but I think those are the most important ones.
All right.
Dr. Epstein, thank you so much.
I very much appreciate your time.
This was really fun.
Danny, you're an amazing guy, and I've enjoyed every minute talking to you.
Thank you.
Thanks, man.
All right.
Good night, world.