Dr. Robert Epstein exposes Google's algorithmic manipulation, from suppressing conservative content (e.g., shutting down a domain hosting 11M websites) to skewing election results (bias sufficient to shift 2.6-10.4M votes toward Clinton in 2016), via the search engine manipulation effect (SEME) and the search suggestion effect, demonstrated in experiments showing up to 90% opinion shifts. Epstein's opinion matching effect (OME) research reveals platforms like Tinder and YouTube push predetermined biases, while whistleblowers (Frances Haugen) confirm deliberate virality control. Despite congressional testimony and peer-reviewed warnings (Google's Triple Threat), tech giants resist oversight, risking democratic erosion and psychological harm, especially for children, while Rogan agrees, citing Orwellian parallels and the need to defend free speech even when uncomfortable. Epstein's lone monitoring efforts (tamebigtech.com) highlight a systemic failure: unchecked tech monopolies weaponize data, prioritizing profit over truth, with no credible counterbalance. [Automatically generated summary]
This is a very interesting subject, because I think search engine results have always been something that people kind of take for granted: that the search engine is going to give you the most significant results at the top. And they don't really think about the fact that this is kind of curated.
And, you know, we found it many times because we use two different search engines.
We'll use Google, and then we'll say, well, if we can't find it on Google, use DuckDuckGo.
And oftentimes, when you're looking for something very specific, you'll find that you can't find it on Google.
Like, if it's in there, it's deep, deep, deep, you know, many pages in.
Whereas DuckDuckGo will give you the relevant search results very quickly.
So something's going on with search engines.
And from your research, what you have found is that it can significantly affect the results of elections.
It's also used for manipulation because they discovered quite a few years ago that if they control the ranking of the search results, they can control people's opinions, purchases, votes.
They can't control everyone's opinions because a lot of people already have strong opinions.
So the people they're going after are the people who are undecided, the people who are vulnerable, and they know exactly who those people are.
And literally, your mind is putty in their hands.
So you should never, ever use Google or any other surveillance product; Amazon Alexa is a surveillance product, so is the Google Home device, so are Android phones.
Yes. In fact, with Android phones, the equipment to prove this, which I didn't bring, is so cheap now that literally anyone can confirm it.
Android phones, even if they're disconnected from your mobile service provider, even if you pull out the SIM card, okay, as long as the power is on, are recording and tracking every single thing that you do.
So if you use it to read things, if you use it to listen to music, you use it to shop, whatever it is, and of course your location is always tracked.
Then when you go back online, the moment you're reconnected, it uploads all that information.
So some people wonder why their batteries run down, sometimes even when you're not really doing anything with your phone.
That's because with Android phones, I think it's 50 to 60 times per hour, they're uploading, about 12 megabytes of data per hour in total.
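To make those figures concrete, here is a quick back-of-the-envelope calculation, taking the quoted numbers at face value; treating 55 as the midpoint of "50 to 60 per hour" is an assumption:

```python
# Rough arithmetic on the upload figures quoted above; all inputs are the
# speaker's numbers, with 55 assumed as the midpoint of "50 to 60 per hour".
uploads_per_hour = 55
megabytes_per_hour = 12

per_upload_kb = megabytes_per_hour * 1024 / uploads_per_hour
per_day_mb = megabytes_per_hour * 24
per_month_gb = per_day_mb * 30 / 1024

print(f"~{per_upload_kb:.0f} KB per upload burst")            # ~223 KB
print(f"~{per_day_mb} MB/day, ~{per_month_gb:.1f} GB/month")  # 288 MB/day, ~8.4 GB/month
```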
That's my friend Adam Curry, the original Podfather.
He's the guy who invented podcasting, and his company develops these de-Googled phones, where they take out all the tracking stuff, everything, and it's basically running on a different operating system.
And then Brave introduced a Brave search engine, which now, fortunately, very recently, you can make the default search engine on Brave.
So Brave doesn't track at all.
Brave works faster than Chrome.
Chrome is Google's surveillance browser.
And Brave works even faster.
They're both built on what's called Chromium, so they're built on the same tech, except that Brave suppresses all ads, so it works much faster than Chrome does.
And, you know, now, again, you can make the default search engine on Brave, literally the Brave search engine.
And then occasionally I'll go over to Firefox. Brave, by the way, was founded by a guy named Brendan Eich, who might be really interesting for you to talk to.
He co-founded Mozilla, the nonprofit organization that developed Firefox, and then he left.
I've been a researcher for 40 years and I had a lot of research underway.
I've done research on teenagers and creativity and stress management, all kinds of things.
I'm still doing all that stuff.
But on January 1st of the year 2012, I got, I don't know, eight or nine messages from Google telling me that my website had been hacked and that they were blocking access.
So I thought, the first thing I thought was, why am I getting these notices from Google?
Who made Google the sheriff of the Internet?
Why isn't this coming from the government?
Why isn't it coming from some nonprofit organization?
So that got my attention.
And then, because I'm a coder (I've been a programmer since I was a teenager), I started wondering, wait a minute, okay, they're blocking me on the Google search engine.
So they have these crawlers that look at all the websites every day, and their crawler found some malware on my website.
That happens all the time, too.
Everyone gets hacked.
I'm sure you've been hacked, and Google itself has been hacked.
So I get that.
They're blocking me on Google.
Google.com search engine.
I get it.
Okay.
But I noticed they're also blocking me on Firefox, which is owned by a non-profit.
They're blocking me on Safari, which is owned by Apple.
I thought, how could that be?
These are completely separate companies.
Took me a while.
Took me a while to figure this out.
I finally published a piece in U.S. News and World Report, an investigative piece called The New Censorship.
And I described nine of Google's blacklists.
This was 2016, so this was a while ago.
In detail, I described nine of Google's blacklists.
I explained how the blacklists work.
I explained how Google can literally block access, on multiple platforms that aren't even theirs, to any website.
At one point in time, 2009 I think it was, I don't know, I might get the date wrong, let's just say January 30th or so, Google blocked access to the entire internet for 40 minutes.
So what's happening with their system is because so many people are searching for things, because they're monitoring so many different things to add to their search engine, do they have some sort of ultimate control over the internet in some weird way?
Here it is right here.
Google blacklists entire internet.
Glitch causes world's most popular search engine to classify all web pages as dangerous.
Wow!
Google placed the internet on a blacklist today after a mistake caused every site in the search engine's result page to be marked as potentially harmful and dangerous.
Holy shit!
The fact that they can even do this, I like how it gives you, like, at the top, this article's more than 12 years old.
And geeks for fun, okay, sometimes for profit, but most of the time it's just for fun, just to be cool and get their kicks and show how powerful they are.
What I ended up doing was I started doing randomized controlled experiments to see what kind of shenanigans are out there, to see what kind of power these companies have, especially Google.
And I am still almost month by month making more discoveries, running more experiments, getting very disturbing data out.
I mean, so disturbing.
We just figured out, I think within the last month, that a single question and answer interaction on Alexa...
So you ask Alexa a question, and let's say it's about, I don't know, some political issue or political candidate.
And the answer is biased; in other words, it favors one candidate, favors one party, favors one cause, right?
That single question-and-answer interaction on Alexa, in a group of, say, 100 undecided people, can shift opinions by 40% or more. One interaction.
If there are multiple questions asked on that topic over time, you can get shifts of 65% or more, with no one having the slightest idea that they have been manipulated.
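For what a "40% shift" can mean in arithmetic terms, here is a minimal sketch; the numbers are hypothetical, and this is one plausible reading of the metric, not necessarily the exact one used in Epstein's experiments:

```python
def relative_shift(pre: int, post: int) -> float:
    """Percent increase in the count favoring the boosted option."""
    return 100 * (post - pre) / pre

# 100 undecided people, split 50/50 before the interaction; afterwards,
# 70 favor the boosted candidate (hypothetical numbers).
print(relative_shift(pre=50, post=70))  # 40.0 -> a "40% shift"
```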
But are they doing it to manipulate you or is it just the fact that they distribute this information based on their algorithm?
It's manipulating you just by default, because the higher something ranks in the search engine, the more likely you are to find it, and that's what's going to influence your opinion.
But are they doing it to influence your opinion or is that just the best answer?
So if I ask that to Alexa and then it pulls up these results, it's going to pull up supposedly the most relevant result.
Now, if you have something like Alexa where you're asking a question and it's just reading it back to you, there has to be some sort of curation of that information, right?
I... I have not followed Joe Rogan over the years.
Okay, I have five kids.
My two eldest sons are like your biggest fans in the universe.
My eldest son is technically a bigger fan than the other son because he's recently gained 60 pounds because of COVID. So he's definitely the bigger of the two fans.
Because you actually, you dig in and you really want to know.
And I'm now, I'm so...
Now I'm going to say something that's not so nice, which is, on this issue, by the questions you're asking me, I can tell you have no idea what's going on.
A few years ago, you probably heard that Google got busted because of their Street View vehicles.
They're still driving up and down streets all over the world, but at that point they had been doing it in more than 30 countries, for more than four years.
And they weren't just taking pictures of our houses and our businesses.
They were also sucking up Wi-Fi data.
I mean, we're talking terabytes of Wi-Fi data, passwords, everything, including a lot of very deeply personal stuff.
So someone just like me, a professor type, figured this out, reported them to the government, the government went after them.
And so this is called the Google Street View scandal.
And so they got fined $25,000 for interfering with the investigation.
And they blamed the entire thing, Google blamed the entire operation, on one software engineer.
His name is Marius Milner.
So they fired him.
Oh, no, no, that's not true.
He's a hero at Google.
He's still working there.
If you look him up on LinkedIn, his profession is hacker.
He's a hero at Google.
They didn't fire him.
They love this kind of stuff.
So another possibility, besides the algorithm itself, is a single rogue programmer at the company can fiddle with content.
Can fiddle with any content.
And when a single rogue programmer does that, guess what?
That shifts thinking and opinions and behavior and purchases and votes.
A single rogue programmer can do it.
And then, of course, there's the executive level.
the executives can pass along a mandate, a policy.
Does that ever happen?
Oh yeah.
One of the leaks from Google that you may have seen, and I know the guy who leaked it, is from Zach Vorhies, who's also a good person for you to talk to, because he was a senior software engineer at Google for eight years and then he just couldn't stand it anymore. But unlike most of these people who've walked away, he brought with him 950 pages of documents and a video.
The video is two minutes long and it shows the CEO of YouTube, which is owned by Google, her name is Susan Wojcicki, and she's talking to her staff.
And she's explaining, this is 2017 after the horrible election results of 2016, and she's explaining how they're altering the up-next algorithm in YouTube to push up content that they think is legitimate and to suppress content that they think is not legitimate.
So if it's happening at that level, the executive level, again, it still has the same effect.
Any of these possibilities, and there are others as well, ends up giving us content that impacts us and our kids especially in ways that people are entirely unaware of.
Well, that would take a whistleblower to figure that one out.
It was in the news at one point that the guy who was in charge of making these decisions, and he has actually since left Google, once shut down an entire domain which had 11 million websites on it, because he thought it was kind of poor quality.
There's a lot of discretion involved in making these decisions, and a lot of the decisions that get made in very recent years, since Trump was elected, they happen to be decisions, for the most part, that suppress conservative content, but not always, not always.
Safari, before they take you anywhere, they've got to check Google's blacklist.
So not only is Google getting information about your search on Safari, the fact is if Google wants to block you from going there through Safari, they just add it to their blacklist.
In other words, if they put everything on their blacklist, then no one can reach anything.
Google is literally looking at billions of websites every day, and it's looking for updates and changes and new websites and this and that.
So it's crawling and it's extracting information, especially looking for links because that's how it gets you good information.
It looks for what's linking to what.
But DuckDuckGo doesn't do that.
DuckDuckGo is looking at databases of information, and it's trying to answer your question based on information that is in databases.
And it seems like what happened with Google is that before anyone even understood that the data is so valuable, it was too late.
It was already an inexorable part of day-to-day life; people were using the search engine, using Gmail, using all these services, and just giving up their data.
So let's say Donald Trump runs again in 2024 and they have a Trump campaign website. Google can decide that that website is poor quality and deny people access to it, so that when people go to Google to search for Donald Trump, they will never see his website.
And there are a couple of the attorneys general whom I know who understand.
Doug Peterson from Nebraska, he totally understands.
Ted Cruz.
He was behind my invitation to testify before Congress.
A couple months later, he invited me to D.C. We sat down, had a four-hour dinner.
Fabulous.
We never stopped talking.
And we never talked politics.
We did not talk politics the whole time.
We just talked tech.
Cruz totally understands.
But he's hamstrung.
How do you fight the most effective mind control machine that's ever been developed, which also is very rich, has $150 billion in the bank right now in cash, makes huge donations to political candidates, and then can shift votes, millions of votes nationwide, without anyone knowing that they're doing so?
Okay, I don't know Sergey Brin, Larry Page, the founders.
I don't know them.
I've lectured at Stanford in the same building where they invented Google, which is kind of cool.
But I don't know them.
But I think these guys were and probably still are utopians.
I think they had the best intentions in mind.
The top executive ever to leave Google is a guy named James Whittaker, who's gone completely silent, by the way, in recent years.
Completely silent.
But he was the first real executive to leave Google.
He finally issued a statement.
He was under pressure.
You know, why did you leave?
Why did you leave?
He issued a statement, which you can find online.
It's fascinating to see this.
And he says, look, when I first joined Google, which was practically in the beginning, he said it was just a cool place and we were doing cool things and that was it, he said.
And then he said, a few years later, he said we turned into an advertising company.
He said, and it was no more fun.
It was brutal.
It was this brutal, profit-driven ad company.
Now, if you don't think of Google as an ad company, then, again, you're not getting it.
They are the largest advertising company by a factor of 20. I think the next largest one is based in London.
But Google is, what it's doing is tricking you, tricking all of us.
Well, not me personally, but it's tricking you into giving up personal information 24 hours a day, even when you don't know you're giving up personal information.
And then it's monetizing the information mainly by connecting up vendors with potential buyers.
It's an advertising company.
And so Whittaker actually quit because the nature of the business changed.
And then, of course, everyone knows about Google's slogan, right?
Don't be evil.
But no one seems to know that they dropped that slogan in 2015. Didn't they just add it to a part of a larger slogan?
There was like a thing, I'm trying to remember what they exactly did, because we were sort of like, oh my god, they said don't be evil, and now they don't say it anymore.
Maybe they're evil, but I think they had added something and made it longer.
And so it wasn't that it's not their slogan anymore, it's just their slogan sort of morphed.
You just came up with it hypothetically, but you left out one area.
They have three motives, and you nailed two of them.
One is to make money, and that they do extremely well.
And no one who's tried to tangle with them has stopped that process.
In other words, the rate at which they're making money continues to increase every year.
So a few years ago when I was first looking at them, they were bringing in $100 billion a year.
Now they're bringing in $150 billion a year.
Money.
That's number one.
Number two, values.
I could talk for hours on this issue because of recent leaks of videos, PowerPoint presentations, documents, and of course what whistleblowers have been revealing.
They have very strong values there because the founders had very strong values and they hired people who had similar values and they have really strong values.
And they want the world to have those values.
They really think that their values are more valuable than other people's values, which means they don't understand what values are because...
Well, in the video, they're presenting this as an ability that we have.
So, that's the second area.
You nailed it.
Third one you didn't mention.
The third one is intelligence, because Page and Brin, right in the very beginning at Stanford, had some support, and they had to be in regular touch with representatives from the NSA, the CIA, and another intelligence agency.
The intelligence agencies were doing their job, okay?
They realized that the internet was growing.
This is the 1990s.
So they realized that the internet is growing.
And they were thinking, hey, these are people building indexes, indices to the content.
So sooner rather than later, we're going to be able to find threats to national security by looking at what people are looking up.
If someone is going...
Online, they're using a search engine to find out instructions for building bombs, for example.
Okay, that's a potential threat to national security.
We want to know who those people are.
So right from the outset, and this is totally unlike Brave.
Okay, Brave doesn't do this.
But right from the very, very beginning, the Google search engine was set up to track and preserve search history.
So in other words, to keep track of who's doing the search and where did they search, that is very, very important to this day for intelligence agencies.
So Google, to this day, works very closely with intelligence agencies, not just in the U.S., but other agencies around the world.
So those are the three areas.
Money, values, intelligence.
And the intelligence stuff...
Is legit.
I mean, it's legit.
You know, it is an obvious place.
If you're in law enforcement, that's an obvious place to go to find bad guys and girls.
So Google has this ability that they've proclaimed, that they can sort of shift culture and direct the opinion of things and direct public consciousness.
How much of a percentage do you think they have in shifting?
Now you're getting close to what I actually do, what I've been doing for now for over nine years.
I quantify.
This is exactly what I do.
Every single day, that's what I do.
My team, my staff, that's what we do.
And it's cool.
Talk about cool.
We're doing the cool stuff now.
Google is not.
We're doing the cool stuff.
Because we have discovered a number of different tools that Google, and to a lesser extent other companies use, to shift thinking and behavior.
And what we do in randomized controlled experiments, which are also counterbalanced and double-blind and all that stuff, is measure the ability that these tools have to shift thinking and behavior.
And we pin it down to numbers, percentages, proportions.
We can make predictions in an election about how many votes can be shifted if they're using this technique or these three techniques.
Yeah, that's what we do.
So we started with the search engine.
And it took years, years of work, but we really, I think at this point, have a good understanding of what the search engine can do.
But then along the way, we discovered other tools that they have and which they are definitely using.
Well, the first one we called SEME, the search engine manipulation effect.
And that means they're either allowing, you know, one candidate or one party to rise to the top, you know, in search rankings, or they're making it happen.
And you don't know for sure whether, you know, which is occurring unless there's a whistleblower or there's a leak.
Okay, but the fact that it's occurring at all, that's important.
In a way, we don't care.
Because if it's just the algorithm that's doing it, well, that's horrible.
That means literally a computer program is deciding who's going to be the next president, who's going to be the next senator.
Do we want that decision made by an algorithm?
Anyway, we spent a lot of time on that.
We're still studying SEME. Then we learned about SSE, which is the search suggestion effect.
When you start to type...
Ooh, in fact, if you have your phone handy, this will be fun.
If you start to type a search term into a search box, suggestions flash at you.
As fast as you're typing, that's how fast those suggestions come.
We learned in controlled experiments that by manipulating the suggestions that are being flashed at people, we could turn a 50-50 split in a group of undecided voters into nearly a 90-10 split without anyone having the slightest idea that they're being manipulated.
And the reason why we started that work is because in June of 2016, a news organization, a small news organization released a video which went viral on YouTube and then got blocked on YouTube.
Frozen.
Still frozen.
But then it continued to go viral on Facebook, so 25 million views.
In this little video, this news organization is saying, we've made a discovery.
When you go to google.com and you look for information about Hillary Clinton, you can't get any negative search suggestions.
It would give you nothing, probably, for Clinton body count.
But as you're typing, you go, Clinton B, it would go, you know, Clinton buys the best clothes.
I don't know.
It would give you something like that.
It would not give you something negative.
So, for example, you type "Hillary Clinton is", and they showed this, you do it on Yahoo!, you do it on Bing, and you get "Hillary Clinton is the devil."
If you have a plate of sewage and you put a nice piece of Hershey's chocolate in the middle, it does not make the sewage look any more appetizing.
So we're drawn to negatives.
Well, Google knows this, okay?
And we've quantified it.
Basically, if we allow one negative to pop up in a list and the rest are neutral or positive suggestions, that one negative for certain demographic groups can draw 10 to 15 times as many clicks as the other suggestions.
So one of the simplest ways to support a candidate or a cause is for your candidate or cause, you suppress the negatives.
It's a simple lookup.
You're looking up what's called the linguistic valence of the term.
Simple lookup takes a nanosecond.
And if it's your cause, your candidate, you delete it.
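Here is a minimal sketch of the suppression logic being described, assuming a hypothetical valence table and invented suggestions; it illustrates the mechanism, not Google's actual code:

```python
# Look up each candidate suggestion's linguistic valence and drop the
# negatives, but only for the favored entity. Table and data are invented.
VALENCE = {
    "crooked": -0.8,
    "is the devil": -0.9,
    "for president": 0.4,
    "speech": 0.0,
}

def filter_suggestions(suggestions, favored_entity):
    kept = []
    for s in suggestions:
        negative = any(term in s and v < 0 for term, v in VALENCE.items())
        # Suppress negatives only for the favored entity; leave rivals alone.
        if negative and favored_entity in s:
            continue
        kept.append(s)
    return kept

raw = ["candidate a is the devil", "candidate a for president",
       "candidate b is the devil", "candidate b speech"]
print(filter_suggestions(raw, favored_entity="candidate a"))
# Negatives about "candidate a" are gone; "candidate b is the devil" stays.
```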
Well, we were able to estimate that to some extent.
And by the way, this landscape keeps changing.
So I'll give you an example.
When they first came up with search suggestions, actually one engineer there came up with this thing, and it was an opt-in feature when it first came out, I think in 2009.
And it was cool and it was helpful, because that was the idea initially.
So, then over time, I think, you know, with a lot of these services, a lot of these, you know, these little gizmos, people figured out that, wait a minute, we can do things, you know, that maybe we didn't intend to in the beginning, but we can use these for specific purposes.
So anyway, at some point, a couple years later, it was no longer opt-in.
In fact, it was automatic, and you can't opt out.
That's the first thing that happened.
And then you may remember there were always 10 items in the list initially.
But then in 2010 or so, suddenly they dropped to four items.
So in our experiments we actually figured out why they were showing four items, and we went public with that information in 2017. Three weeks later, Google went back to ten items.
Because four, we know from the research, is exactly the number of search suggestions that allows you to maximize your control over people's searches.
Because look, if the list is too long and you've got a negative in there...
They're not gonna see it.
I mean, imagine if you had 100 search suggestions and you had one negative, right?
So it has to be short enough so that the negative pops out, right?
But it can't be too short.
If it's too short, then the likelihood that they type in their own damn search term and ignore your suggestions goes up.
So there has to be this optimal number.
It turns out the optimal number to maximize your control over search is four.
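Here is a toy model of that trade-off: longer lists make people more likely to use a suggestion at all, but bury the one negative you want noticed. Both curves and all parameters below are invented purely to illustrate the idea of scanning for the best list length; they are not fitted to any real data.

```python
# Toy model: "control" = P(user follows a suggestion) * P(user notices the
# one negative). Curve shapes and constants are made up for illustration.
def p_uses_suggestions(n: int) -> float:
    return 1 - 0.7 ** n            # assumed: saturates as the list grows

def p_notices_negative(n: int) -> float:
    return 0.91 ** (n - 1)         # assumed: decays as the negative is buried

def control(n: int) -> float:
    return p_uses_suggestions(n) * p_notices_negative(n)

best = max(range(1, 11), key=control)
print(best, round(control(best), 3))   # with these made-up curves: 4 0.573
```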
And we also learned that you are being manipulated on Google from the very first character you type into the search box.
Well, it turns out everywhere in the world where Amazon does business, if you try to search for anything beginning with the letter A, and you type A, Google suggests Amazon.
Why is that?
Well, it turns out Amazon is Google's largest advertiser.
And...
Google is Amazon's largest single source of traffic.
It's a business relationship.
Get it?
If you type T, you're going to get Target and so on.
But what's interesting is when you type G. Just type G. What do you think I'll get?
To answer your question: folks out there, literally, pick up your phones, go to google.com, which, by the way, is the last time you're ever going to use google.com, and just type in G and see what you see.
For most people, if they're getting five suggestions, four out of the five will be for Google.
So, the lesson there is if you're starting a new company, don't start...
Don't name it with a G. Don't name it with a G, right.
So the point is, what they're showing has to do with their agenda, their motives, okay?
Every single thing that they're doing has to do with their motives, which have to do with money, values, and intelligence.
And a public library does not do that.
You go, you borrow some books, you ask some questions, you get some answers, that's that.
That's the way the internet was meant to be.
It wasn't supposed to be this.
The whole internet around the world controlled mainly by two huge monopolies.
And to a lesser extent by some smaller monopolies like Twitter.
It wasn't supposed to be that way.
It was supposed to be like the public library.
And it is possible, you see, you can set up a company like Brave that doesn't play these stupid games and doesn't fool you and it's not deceptive.
This is the business model that Google invented.
It's called the surveillance business model.
It's fundamentally deceptive.
Because up here at the level that you're interacting with it, it looks like public, library, free, cool.
And down here underneath, it's something completely different.
There's no reason for that.
Tim Cook, who's still the CEO of Apple, has publicly said, this is pretty recent, publicly said that this is a creepy business model and it should not be allowed.
Well, that is one area where Apple deserves credit, right?
That Apple has not taken up that same sort of net-like surveillance where they just kind of cast the net over everything you do and then sell it to advertisers.
And you can opt out of certain things in terms of like allowing apps to track purchases or allowing apps to track your use on other devices or on other applications rather.
So you probably know that Microsoft was Google's enemy number one.
Microsoft sued Google in practically every courtroom in the world.
Microsoft was submitting regulatory complaints.
Microsoft was funding organizations that existed to do nothing else but fight Google.
For a long, long time.
Early 2016, Google and Microsoft signed a secret pact.
So the fact that the pact was signed, that somehow leaked.
But to this day, no one knows the details of what's in it, except here's what happened.
Simultaneously, both companies around the world dropped all complaints against each other.
Google, excuse me, Microsoft withdrew all of its funding from all the organizations it had been supporting.
And some people believe that Bing, Microsoft's search engine, which draws about 2% of search, by the way, it's no Google, and which had been bleeding money for Microsoft for years, started, as part of this deal, drawing search results from Google.
We don't know, but we do know this, that Windows 10 is a tracking tool.
Windows 11 is a tracking tool.
These new operating systems are so aggressive in tracking that, even if you're a tech geek like me, it's very, very hard to get rid of all the tracking.
So I'm still using Windows 8.1, believe it or not, or Windows 7. Why didn't you switch to Linux or Unix or something like that?
Well, we use that for certain purposes as well, but for general stuff that you do, if you're using desktops and laptops, Windows is still the way to go, except the company shifted.
It has been shifting towards the surveillance business model, as thousands of other companies have, including Verizon.
And the real issue here seems to be that this wasn't a thing 20 years ago.
It's a thing now, and it's the most dominant thing in terms of the way people access information, the way people get data, the way people find answers.
What is it going to be in 20 years from now?
I mean, it seems like there's so much potential for control and so much potential for manipulation and that it could only just get worse.
If there's no regulation put in place and there's no way to stop use of algorithms, use of curated data, what is this going to be like?
Have you sort of extrapolated?
Have you looked at the future? Yeah, that's what I do.
And everyone always points to certain language from Eisenhower's speech.
This is his retirement speech, his last speech, just a few days before John F. Kennedy became president.
And it was a very shocking speech, because this is a guy who was head of Allied forces in World War II, a five-star general.
I mean, he's an insider.
And in this speech, he says, you know what, this terrible kind of...
This entity has begun to emerge, you know, and I've watched it.
And he called it the military-industrial complex.
And you probably remember hippies like, you know, with signs and screaming, no military-industrial complex.
And Eisenhower actually warned about the growth of this military-industrial complex and how it's taking over businesses and it's affecting the government and blah, blah, blah.
What he failed to note is that he also warned in the same speech about the rise of a technological elite that could control public policy without anyone knowing.
And Google is by far the most aggressive, the most dangerous.
You know, Facebook, there's chaos within Facebook, but we had this amazing trove from Frances Haugen just recently, of documents showing that people at Facebook are very much aware that their social platform creates turmoil, terrible turmoil on a massive scale, and that they like that.
They encourage that because the more turmoil, the more traffic, the more traffic, the more money.
But knowing that you're creating turmoil... Here's my thought on that.
Is it just human nature?
Because you were saying before about the negativity bias, that people gravitate towards things that are negative.
And that's one of the things that you'll find if you use YouTube.
When you go on YouTube, if you're a person who likes to get upset at things and you're a person who likes to...
Look for things that are disturbing or upsetting or political arguments, whatever.
You'll get those in your suggestions over and over and over again.
But if you're not interested in that, if you're only interested in airplanes and you start Googling airplanes or cars or watches, that's what it'll suggest to you.
It doesn't have to suggest to you negativity.
You gravitate towards that, naturally.
And so the algorithm represents what you're actually interested in.
So is it Facebook's fault that everyone, not everyone, most people generally interact more with things that are negative or things that upset them?
But if their business model is to keep people engaged by giving them content that makes them stay engaged and click on links and read more and spend more time on the platform, and the only thing it's doing is highlighting what you're actually interested in...
What are they supposed to do?
Are they supposed to make less money and then have no suggestions and have no algorithm and just leave it all up to chance?
Just leave it all up to you go find what you're interested in and then keep finding what you're interested in through a direct search.
Like through you trying to find these things directly with no suggestion whatsoever.
We're using real content from YouTube, real videos from YouTube, all the titles, everything comes from YouTube, except we have control over the ordering, and we have control over the up-next algorithm.
That's where the power lies, the up-next algorithm.
So one of the things we learned recently, not from Frances Haugen but from someone else who left Facebook, is that 70% of the videos that people watch on YouTube now, around the world, are suggested by YouTube's up-next algorithm.
Okay, we never do the, you know, university subject-pool thing, where you get 50 students from your college to be your research subjects.
We never do that.
So we're always reaching out to the community or we're doing things online.
So we do big studies online.
And we are getting very diverse groups of people.
We're getting—literally, we're getting people from lists of registered voters.
So we're getting people, you know, who look like the American population.
Yeah, but the internet, you see, the internet, though, because there are no regulations and rules, it does allow for some pretty evil things to take place.
And the fact is, in our experiments, we do these, usually our experiments have hundreds of people in them.
Sometimes they have thousands of people.
And we can fuck with people and they have absolutely no idea.
I'm just talking about something else that's happening.
There are websites that will help you make up your mind about something.
So, for example, there's a whole bunch of them right now that'll help you decide whether you're really a Democrat or you're really a Republican.
And the way they do that is they give you a quiz.
And based on your answers to how you feel about abortion and immigration and this and that, at the end of the quiz, they say, oh, you are definitely a Republican.
Sign up here if you want to join the Republican Party.
And this is called opinion matching.
And the research we do on this is called OME, the opinion matching effect.
And there are hundreds of websites like this.
And when you get near an election, a lot more of them turn up because the Washington Post will give you a quiz and help you decide who to vote for.
And Tinder, okay, which is used for sexual hookups.
There was an option on Tinder during the 2016 election: you swipe left if you think this, you swipe right if you think that, and then at the end of it, they say, oh, you should be voting for Hillary Clinton.
But how do you know when one of these websites is helping you make up your mind?
How do you know whether the algorithm is paying any attention to your answers at all?
Sometimes these websites are not paying any attention to your answers.
They're just telling you what they want to tell you, and they're using this quiz to suck you in, and then they add in—oh, this we love—they add in a timer.
So in other words, after you finish the quiz, it'll go tick, tick, tick, tick, tick, computing, computing, computing.
And there's this delay creating the impression that they're really thinking hard.
And then they give you your answer.
So all that is for credibility to manipulate you.
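A sketch of that dark pattern, a quiz that never consults your answers and stalls for effect before delivering a predetermined verdict; everything here is invented to illustrate the mechanism:

```python
import time

PREDETERMINED_VERDICT = "You should be voting for Candidate X."

def opinion_matching_quiz(answers):
    # The answers are collected for show and never consulted.
    _ = answers
    for _ in range(3):
        print("computing...")
        time.sleep(1)      # the fake "thinking hard" delay, added for credibility
    return PREDETERMINED_VERDICT

print(opinion_matching_quiz(["y", "n", "y"]))  # verdict is fixed in advance
```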
Now, so over here we're going to websites and we're typing in random answers.
On the other side, we're doing experiments in which we...
We are giving people quizzes, and then we are giving people recommendations, and then we are measuring to see whether we can change anyone's mind.
And we're getting shifts of 70 to 90%, with not a single person, not one person, recognizing that they're being manipulated.
Not even one person recognizing that there's bias in the results we're giving them.
Not one!
Because how could you see the bias?
How could you see the manipulation?
You've just taken a quiz!
You're trying to make up your mind.
The thing that's so scary about this kind of manipulation is that it attracts exactly the right people.
And when you – so you spoke to Congress about this?
You spoke in front of Congress?
Mm-hmm.
Right.
And when you did, was there any sort of urgency?
Did anybody understand what the implications of this are?
Did anybody understand that we were literally looking at these massive technologies, used throughout the world, that can completely change the way policy is directed, the way people are elected, who's in charge, what the public narrative is on a variety of issues? Well, there are some people. There's a guy named Blumenthal.
Well, I'm not thinking that they haven't already taken over, but I'm thinking, like, how much more control can they have in 20 years, if 20 years ago they didn't have any?
Like, as technology advances, do you think that this is going to be a deeper and deeper part of our world?
Oh, I published a few years ago an essay calling for his resignation.
Roger McNamee, who was one of the first financial supporters of both Google and Facebook, actually published a book about two years ago called Zucked, about how Zuckerberg has taken over the world.
And he said in that book, straight out, that if he had known what these companies, Google and Facebook, were going to turn into, he would never have backed them in those early days. Jamie, did we ever find out what Facebook, or Google rather, changed their...
The reason why I ask is, like, what kind of computer systems were involved in cars from 2002 as opposed to...
Do you remember the story of the journalist Michael Hastings?
He wrote a story about a general during Obama's administration. There was a volcano that erupted in Iceland, and he was stuck overseas.
I believe it was Afghanistan or Iraq.
I think it was Afghanistan.
So he was over there writing a story for Rolling Stone, and because he was over there for so long, because he was trapped, because no flights were going, because the air cover was so bad because of this volcano, they got real comfortable with him.
And these soldiers started saying things, not even thinking this guy is like, you know, he's not one of them.
He is a journalist, and he's going to write all these things about it.
So he wrote this very damning article.
The general in question got fired.
And then this guy, Michael Hastings, started talking about how he was fearing for his own life.
And cut to sometime in the future, he sped up.
There's actually a video of it.
Sped up on Sunset Boulevard towards the west side and slammed into a tree going 120 miles an hour.
There was an explosion.
The car's engine was many yards from the car itself, and there was a lot of speculation.
That not only did the government have the ability to manipulate, that intelligence agencies had the ability to manipulate people's cars, but it's something they've actively done.
And people were very concerned that this guy was murdered because of what he had done.
Because that general wound up getting fired.
Obama wound up firing him because it made Obama look bad.
And it starts out saying the kind of things that I've been saying to you, which is that the future crimes are actually here now.
And this is an ex-FBI guy who wrote the book.
And he's talking about how tech is being used now to not only commit crimes, but to assassinate people.
One of the simplest ways to do it is you hack into a hospital computer and you change dosages on medication.
If the person you're going after has been hospitalized, that's a really simple way to just knock them off and have it look like just some silly little glitch or something.
So yeah, there's a lot of ways now that you can commit crimes that have never existed before.
And as far as I'm concerned, the kinds of things I study, in my opinion, should be considered crimes.
And I don't think we should ever be complacent and just say, oh, it's the algorithm.
Algorithms are written by people.
Algorithms are modified.
Google modifies its algorithm, its basic search algorithm, 3,000 times a year.
She's from Texas originally, and I think about her pretty much nonstop.
I'm still wearing my wedding band, even though the accident was two years ago.
I don't know.
I know that the accident made news, not just here but in Europe, because some people thought it was suspicious that my beautiful wife, you know, we'd been together for eight years, was killed in this horrendous fashion.
And, you know, obviously I have pissed off some people at some big companies.
And I have work coming out.
I mean, the work I have coming out, I have right now 12 scientific papers under review and four that are in press, in other words, that have been accepted.
So I have stuff coming out that, over and over again, like a sledgehammer, is going to make certain companies look, well, very evil, I would say.
Do you think that they have the ability to suppress the kind of coverage of the data that you're putting out to the point where it's not going to impact them?
Like, how much has it impacted them currently?
I mean, we're talking about committing murder or potentially committing murder.
Like, how much have you impacted them if they're still in complete and total control and they're still utilizing all these algorithms and making massive amounts of profit?
So I do want to talk to you about the monitoring stuff, because there is a way.
There's more than one way, but there's one very practical way to literally just push these companies out of our personal lives and out of our elections.
And I've been working on that project since 2016. That project started because of a phone call I received from a state attorney general, Jim Hood.
He was attorney general of Mississippi at the time.
He called me in 2015 and he said, could Google mess with my reelection as attorney general?
Because in that state they elect them.
And I said, well, yeah, very easily.
And he said, well, how would they do it?
And I explained how they do it, et cetera, et cetera.
And he was very, very concerned.
And he said, but how would you know that they're doing it?
And my mind just started to spin.
I was thinking, gee, I don't know.
Well, a whistleblower, you know, a warrant, something.
And I became obsessed with trying to figure out how to know what these companies are actually showing real people.
Now, here and there, there's some researchers at Columbia who should be ashamed of themselves.
There's some reporters at The Economist who should be ashamed of themselves.
Here and there, people have set up a computer that they anonymize, and they type in lots of search terms, and they get back all these searches, and they conclude that there's no bias.
But that doesn't tell you anything because Google's algorithm can easily spot a bot, can easily spot an anonymized computer.
Okay, so Google has a profile on you that has the equivalent of more than three million pages of content.
Now, you're probably thinking, well, how could I generate that?
Because everything you do goes into that profile.
So, yeah, it's a lot of content.
But the point is, they know the difference between you, because you have a big old profile, and an anonymized computer or a bot because there's no profile.
Right.
So it turns out this is the simplest thing in the world to do: when they see a bot, okay, they just send out unbiased content.
We've shown this ourselves.
There's nothing to it.
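In pseudocode terms, the evasion being described might look like the following sketch; the profile store, threshold, and function names are all assumptions for illustration:

```python
# If the requester has no behavioral profile (a bot or an anonymized
# machine), serve a neutral ranking; otherwise serve the personalized one.
def serve_results(user_id, profiles, biased_ranking, neutral_ranking):
    profile = profiles.get(user_id)
    if profile is None or profile["pages"] < 100:   # no real history: likely a bot
        return neutral_ranking                      # auditors see clean results
    return biased_ranking                           # real users see the real thing

profiles = {"real_user": {"pages": 3_000_000}}      # per the figure quoted above
print(serve_results("anonymized_box", profiles, ["biased"], ["neutral"]))  # neutral
print(serve_results("real_user", profiles, ["biased"], ["neutral"]))       # biased
```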
But that's not the challenge that General Hood was basically giving me.
He was saying, how would you find out what real people are seeing?
So, 2016, I got some funds.
I don't even know where they came from, but anyway, and we started recruiting people.
We call them field agents.
This is exactly what that company does, Nielsen, that does the Nielsen ratings.
They've been doing it since 1950. They're now in 47 countries.
And they recruit families and they keep their identities very secret.
And they equip the families with special gizmos so they can keep an eye on what television shows they're watching.
And that's where the Nielsen ratings come from, which are very important because they determine how much those shows can charge for advertising.
They determine whether or not a show stays on the air.
So it's important.
So we started recruiting field agents.
And we developed custom software literally from the ground up.
And when we screen a field agent, and they say, okay, I want to join, we install special software on their computer, which allows us, in effect, to look over their shoulders.
This is with their permission, obviously.
Look over their shoulders and we can take snapshots.
So when we sign these people up, we're taking lots of snapshots, you know, all day long.
And then information's coming in and it's being aggregated.
So we can look at what real voters are being sent by Google, Facebook, YouTube, anybody.
And we take all kinds of precautions to make sure these people cannot be identified.
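Here is a minimal sketch of what such a field-agent pipeline could look like: capture what the agent was shown, strip identity with a one-way hash, and ship the record for aggregation. The hashing scheme and record format are assumptions, not the project's actual software.

```python
import hashlib
import json
import time

def anonymize(agent_id: str, salt: str) -> str:
    # One-way hash so snapshots can be grouped per agent without naming anyone.
    return hashlib.sha256((salt + agent_id).encode()).hexdigest()[:16]

def snapshot(agent_id, platform, query, results, salt="secret-salt"):
    return {
        "agent": anonymize(agent_id, salt),
        "platform": platform,
        "query": query,
        "results": results,          # the ephemeral content, preserved
        "captured_at": time.time(),
    }

record = snapshot("agent-0042", "google.com", "candidate a",
                  ["result 1", "result 2"])
print(json.dumps(record, indent=2))
```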
We deliberately had a small group of people, Gmail users, to make it easy for Google to identify those people.
Guess what?
They got unbiased content.
But everyone else was getting highly biased content.
We preserved 13,000 election-related searches on Google, Bing, and Yahoo.
Each search returns 10 results, so that's 130,000 search results, 130,000 links.
And then we also preserved the web pages.
So we had 98,000 unique web pages.
And then we analyzed it.
We found extreme pro-Hillary Clinton bias on Google search results, but not on Bing or Yahoo.
Now, here's number four, disclaimer number four.
I supported Hillary Clinton.
But still, I was very disturbed by this, extremely disturbed, because we knew from the experiments we had run that that was enough bias to have shifted over a period of time among undecided voters somewhere between 2.6 and 10.4 million votes without anyone having the slightest idea that this had occurred.
That's 2016. 2018, we monitored the midterms.
We preserved 47,000 searches.
So we were expanding.
We're getting bigger.
47,000.
And we found enough bias on Google, but not Bing or Yahoo, to have shifted 78 million votes.
That's spread across hundreds of elections, though, with no one knowing.
2020, we went all out.
We had more money, we went all out.
And we recruited 1,735 field agents just in swing counties, just in swing states, because we knew that's where the action was going to be.
We preserved 1.5 million ephemeral experiences, and I'll define that if you want, on Google, Bing, Yahoo, YouTube, Google's homepage, Facebook.
We, at this point, know how to preserve pretty much anything.
We preserved three million webpages.
And we're getting to the climax here.
On October 30th, 2020, a few days before the election, we decided, which we hadn't done in the past, to go public with some of our initial findings.
And we did.
And as a result, on November 5th, two days after the election, three U.S. senators sent a very threatening letter to the CEO of Google, just summarizing all my work, my preliminary stuff.
And guess what happened then in Georgia?
We had over a thousand field agents in Georgia.
Google turned off the bias like that.
Google stopped with their homepage Go Vote reminders.
They stayed out of Georgia.
What does this say?
This tells you that if you monitor, if you do to them what they do to us 24 hours a day, you do that to them and you look for any kind of manipulation, any kind of bias, any kind of shenanigan, and you make that public, you expose it.
So doesn't this highlight that if our government is concerned about legitimate threats to democracy, and legitimate threats to the way information is distributed, and free speech, and manipulation, they should be monitoring Google?
But is the problem money?
Because of the amount of money that they give to campaigns, the amount of money they give to support causes that these politicians back.
This should probably be done by a consortium, bipartisan or nonpartisan, of nonprofit organizations, and, you know, we should have hearings.
Everything should be transparent.
We should have wide representation of people serving on the boards, and all that kind of thing.
Like the UN? Well, like the UN, but this is a narrow kind of task.
Here's what we need.
We need to set up now, because now we know how to do it.
We need to set up a permanent, large-scale monitoring system in all 50 states in the United States.
That's how we start.
Eventually, we have to help people in other countries set up similar systems.
See, that's the real answer to your question about the future: that is how, now and in the future, we can get control over emerging technologies.
Not just Google, but the next Google and the Google after that.
There is no way to know what these companies are doing unless you are monitoring.
One of the simulators we have now that we developed actually within the past year, which is fabulous, I'm so proud of my staff, we have an Alexa simulator.
I mean, it just works just like Alexa.
And it talks.
It's fantastic.
Except we control what it's going to say.
And sure enough, can we shift people's opinions? Oh yeah, easy peasy, nothing to it.
But what that tells you is that's one of the things we have to monitor: we have to monitor the answers that these so-called personal assistants are giving people.
Because if they give biased answers, that shifts thinking and behavior.
And, you know, what if all of these companies all favor the same party?
Right.
Which they do.
What if all of these companies all favor the same candidate?
Which they do.
You add up these manipulations.
And basically what Eisenhower predicted, it's here now.
2016. And I bet you Mark Zuckerberg has been kicking himself in the butt ever since.
On election day, if Zuckerberg, with one click, had sent out go-vote reminders...
Just to Democrats that day?
Because, you know, I mean, Trump won basically by, what, 47,000 votes in four states?
I mean, if Zuckerberg had sent out go-vote reminders just to Democrats, and he knows who the Democrats are, right?
He could have generated that day 450,000 more votes for Hillary Clinton than she got.
How do we know that?
From Facebook's own published data.
They published a study in 2012 showing how they could get more people to vote in 2010 by sending out vote reminders.
If you just take the data that they published and move it over to 2016 and say, okay, Mark, press the button, Hillary would have absolutely won the election.
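The extrapolation being described is straightforward arithmetic. The audience size and per-recipient lift below are placeholders, not the figures from Facebook's published study; they are chosen only so the product lands on the 450,000 quoted above:

```python
# Back-of-the-envelope version of the extrapolation described above.
targeted_democrats = 75_000_000   # assumed audience for a partisan reminder
lift_per_recipient = 0.006        # assumed extra-vote probability per reminder

extra_votes = targeted_democrats * lift_per_recipient
print(f"{extra_votes:,.0f} additional votes")   # 450,000
```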
He, I'm sure to this day, is kicking himself because he didn't do it.
But how would you know?
See, on any given day, any given election, how would you know whether that kind of reminder is going out, number one?
And number two, who it's going to?
Is it going to everybody?
Or is it going just to a select group?
Is it targeted?
There's no way to know that unless you have monitoring systems in place.
With a monitoring system, you would know within seconds or minutes If a targeted message like that was being sent out...
Based on the experience that we just had a few months ago, where we got Google to stay out of Georgia, and by the way, we positively got them to stay out of Georgia because we had over a thousand field agents in Georgia, and we were collecting a massive amount of...
We collected more than a million ephemeral experiences.
I guess I'm going to have to define that.
In Georgia, I'm telling you, Google...
We have never seen so little bias in Google search results since we started monitoring in 2016. What's an ephemeral experience?
Okay.
2018, a leak to the Wall Street Journal from Google.
Bunch of emails.
One Googler is saying to others, how can we use ephemeral experiences to change people's views about Trump's travel ban?
In other words, I didn't make up this term.
This is from Google.
Internally, this is the kind of lingo that they use.
What's an ephemeral experience and why would they want to use ephemeral experiences to change people's minds?
Because an ephemeral experience is, well, most of the kinds of interactions we have online involve ephemeral experiences.
Search, you type a search term, you see a bunch of search results, it has an impact on you, you click on something, it disappears, it's not stored anywhere, and it's gone forever.
So there are these brief experiences, like a news feed, a list of search suggestions, an answer box, that affect users, disappear, stored nowhere, authorities cannot go back in time and figure out what people were being shown.
That's why internally at Google they want to use ephemeral experiences to impact people because unless someone like me, and I'm the only one doing this, unless some crazy guy like me is setting up monitoring systems and keeping everything secret while it's running, no one will ever know.
That you just flipped an election.
No one will ever know.
As I say, the most powerful mind control machine ever invented, and it relies, for the most part, on ephemeral experiences, meaning no one knows.
For example, let's say we're aggregating information that field agents are getting on the search engines.
So it's coming in.
Our software is set up so that if the information we're getting from any particular field agent doesn't look right, then it goes over to human review.
So what could that mean?
That could mean, for example, that they are using an algorithm.
They're trying to tilt things in a particular direction.
So they're not actually typing in anything.
They're not using the computer the normal way they would use it, which is what they're supposed to do.
It means they've now developed or been equipped with an algorithm to, boom, just start generating a lot of stuff, which would mess up our numbers, right?
Well, those people immediately are flagged, and when that happens and we can't exactly figure out what's going on, we dump them.
And we dump their data.
If their information is coming in faster than a person can type, we dump them.
But there are other indications, too.
I mean, I can't reveal all that, but we're taking precautions exactly like Nielsen has been doing all the way since 1950. It can be done.
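One of the validation rules just mentioned, flagging input that arrives faster than a human can type, is easy to sketch; the threshold and example values are assumptions:

```python
MAX_HUMAN_CHARS_PER_SEC = 15     # assumed generous upper bound on human typing

def looks_automated(query: str, seconds_to_type: float) -> bool:
    # Dump agents whose queries arrive faster than a person could type them.
    if seconds_to_type <= 0:
        return True
    return len(query) / seconds_to_type > MAX_HUMAN_CHARS_PER_SEC

print(looks_automated("candidate a travel ban", 0.05))  # True  -> dump the agent
print(looks_automated("candidate a travel ban", 2.5))   # False -> plausible human typing
```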
It's hard to answer that question, because I keep learning more, and believe me, what we've learned in the last year easily eclipses what we learned in the previous eight years.
We're learning so much.
The team is growing.
Our capabilities are growing.
I'll say at one point in time, what I was concerned about was how can we get Google under control?
So I published an article in Bloomberg Businessweek.
There's a great backstory there because, you know, it was scheduled to come out and then someone or other made a phone call to someone else and then, boom, the piece got pulled.
And this is a solution to the Google problem, literally.
The editor-in-chief is literally having arguments with, you know, the higher-ups, the publishers, because they pulled my piece on how to get Google under control, how to solve the Google problem.
I was scheduled to testify before Congress the following Tuesday.
The article had been pulled.
The editor-in-chief was determined to get this piece out.
He got it published in their online version on Monday, the day before the hearing.
So what is this about?
This is very simple, very light-touch regulation.
The way to completely disarm Google is to make their index, which is the database they use to generate search results, public.
And there's precedent for that.
The government has done that before.
It's very, very light-touch regulation.
And Google could still sell access when companies like Bing want to use a lot of the data from the database.
They could still make money.
But what would happen in that case, though, is that hundreds of other search engines would now be set up, and then thousands.
All pulling really good data from Google's database.
And then they would go after niche audiences.
And they'd all be giving great search results, but each one going after its own niche, like, say, Lithuanians.
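To illustrate the architecture being proposed, here is a toy sketch of a niche engine built on top of a public index. The endpoint URL and the response format are entirely invented; no such public API exists today.

```python
# Hypothetical sketch: a niche search engine re-ranking results pulled
# from a (fictional) public copy of Google's index.
import requests

PUBLIC_INDEX_URL = "https://index.example.org/search"  # invented endpoint

def lithuanian_search(query: str) -> list[dict]:
    """Pull candidates from the shared index, then boost Lithuanian sites."""
    resp = requests.get(PUBLIC_INDEX_URL, params={"q": query, "n": 100})
    resp.raise_for_status()
    docs = resp.json()["results"]  # assumed shape: [{"url": ..., "score": ...}]
    for doc in docs:
        if doc["url"].endswith(".lt"):
            doc["score"] *= 2.0    # the niche value-add is the re-ranking
    return sorted(docs, key=lambda d: d["score"], reverse=True)[:10]
```

The point of the proposal is that crawling and indexing, the expensive part, would be shared, while competition would happen in the ranking layer.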
I mean, there's an antitrust action right now in progress against Google, and it's the attorneys general, I believe, from every single state in the United States except California.
Because the attorney general of California, his main supporter is Google.
Google's based in California.
So it's so crazy that they have this massive antitrust action in progress, and the AG of California is staying out of it.
Well, the fact is that depending on who the leadership is at any point in time at Google, they might look at that idea and say, hey, look, this will be great for us.
I mean, if you're saying that Google's sending out these messages, right, and that the majority of their users are Democrats, right?
You're saying Google's sending out this message, go vote.
And through that message, because of the bias, because of the difference in the numbers, more Democrats are getting it because more Democrats use Google, right?
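The arithmetic behind that point is worth spelling out. Here is a back-of-envelope calculation with invented numbers; none of these figures are measurements, they just show how a politically neutral reminder shifts net votes when the audience is skewed.

```python
# Illustrative only: every number below is assumed, not measured.
users = 150_000_000   # hypothetical users shown a "go vote" reminder
dem_share = 0.55      # hypothetical partisan skew of that user base
rep_share = 1 - dem_share
turnout_lift = 0.01   # hypothetical extra turnout caused by the reminder

extra_dem = users * dem_share * turnout_lift   # 825,000 extra votes
extra_rep = users * rep_share * turnout_lift   # 675,000 extra votes
print(f"Net partisan shift: {extra_dem - extra_rep:,.0f} votes")
# -> Net partisan shift: 150,000 votes, from a message that looks neutral
```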
So when you get to social media, the picture gets very complicated.
However, here's what you got to know.
It's algorithms that determine what goes viral.
Everyone believes this crazy myth.
Everyone believes this.
Everyone I know believes this.
My kids believe this.
Everyone believes that virality is mysterious.
It's like winning the lottery.
And that's not true.
Because if I control the algorithms, okay, I determine what's going to go viral and what is not.
Now that's, again, a tremendous source of power.
And of course, they do want a bunch of stuff to go viral, even crazy negative stuff, because more traffic, more money.
But the bottom line is they control the algorithms that determine what goes viral.
That's where a lot of the power lies in the world of social media.
That's where the Frances Haugen revelations are extremely important.
And just having that underbelly, that ugly underbelly of that company exposed.
So no matter how you look at this, we can't just sit by. Eisenhower's speech actually says that we have to be vigilant.
He uses the word vigilant.
We have to be vigilant so that we don't let these kinds of powers take over our government, our democracy, our nation.
And we have not been vigilant and we're not being vigilant now.
And the research that we do and the monitoring systems, the research over here and the monitoring stuff over here, both remind me every single day.
I mean, I'm looking at numbers every single day.
You're keeping me away from my data and my research, by the way.
But I'm reminded every single day of just how serious this stuff is.
This is deadly serious for the future of not just our country, but all of humanity.
And the fact is that people don't know it. Or sometimes, when I've given speeches, people say, I don't care.
I had a friend who worked at Google during the time they were having negotiations with China, and her position was that China was just going to copy Google's tech if they didn't do that.
And, you know, Google's an expert at doing that kind of suppression.
They're the biggest censors in the history of humankind.
But still, look...
I know I'm a very idealistic person.
I've handed out tests of idealism in my classes, and these are young people in their 20s.
And I outscore them.
I've always outscored all of my students.
I'm very idealistic.
I believe in truth, justice, the American way, like Superman, and all that crazy stuff.
But...
I'm going to do my best to get people to wake up.
That's why I said, yes, I'll give up a day of looking at my numbers.
I'm going to come and talk to you because I am trying to get people to listen.
I'm trying to figure out how to get people to listen.
People must listen.
Let me put it another way.
That monitoring system I keep talking about, that's not optional, okay?
That's not optional.
That must be set up.
If we don't set that up, we will have no clue.
We will not understand why this person or that person won an election, and we will not understand what's happening with our kids.
I have five kids.
When my daughter Janelle was about 12, and I'm sure you've done this.
I think you have kids roughly that age.
So I did the thing a dad does sometimes.
I went into her bedroom just to check on her.
And I noticed one of her little electronic devices, the old iPod or whatever it was, sitting next to her pillow.
And then I looked a little closer and I went, what?
There were five electronic devices encircling her pillow.
It's our kids that we need to be thinking about here.
It's not just their future, but literally, how are they being impacted right now?
What kind of content are they being shown?
Is it pornographic?
Is it violent?
I don't know.
Are they being pushed one way or another politically?
We are in the process right now of trying to expand our research to look at kids and to see what content these kids are being shown.
Because it doesn't matter how vigilant you are as a parent, the fact is 99% of what your kids are seeing online or experiencing online, you're unaware of.
And that's why, as I say, solving these problems is not optional.
We must solve these problems.
We must set up monitoring systems.
And it's relatively cheap, by the way, because now that we've done it repeatedly, we know how to do it.
Because I had written to that executive at Google, who was supposed to be on that panel in Germany, just telling him about my work, giving him links and so on, because he's a former professor.
It was only a few days after that that this guy showed up at our house.
And then it was a few days after that that the Google executive pulled out of that conference.
And so they're not interested in communicating with you.
They've obviously either told people not to communicate with you or the people that you would like to talk to are aware of your work and they feel that it would negatively impact their job or their career.
This has just been, for me, in many ways, a nightmare, an absolute nightmare, because there are people who won't help us, who won't serve on our board, who won't do this, who won't do that.
We had an intern lined up who was very, very good.
You know, we get some really sharp people.
They come from all over the world, actually.
And we had this person all signed up, her start date was set up, and she called up and she said, I can't do the internship.
I said, why not?
She said, my grandmother looked you up online and she thinks that you're, like, some sort of Trump supporter.
And she said she'll cut me off if I do this internship.
Well, ever since I testified, terrible things have happened.
One of my board members said to me, look, he said...
He said, in a way, you should be grateful and pleased that they left you alone for so many years.
He said, but that for them was, you know, that was it.
That was the final straw.
And, you know, what happened after that hearing was Trump tweeted about my testimony.
Hillary Clinton, whom I've been supporting forever, Hillary Clinton replies to Trump on Twitter and says, this man's work has been completely discredited.
And then, if the Democrats found out that Donald Trump had implemented some sort of a system like you're talking about, people would be furious.
So that's the problem there, is that everything is personalized and everything you're seeing there is based on you and your 20-plus year history and the 3 million pages of information they have about you.
I've been trying to close my Facebook page for I think at least three years now.
They won't let me close it.
They won't let me change it.
It's still up there.
I didn't even set it up originally.
I think it was Misty, my wife, who set it up.
But they won't let me touch it.
And they won't let me close it.
Speaking of which, okay, I'm sitting next to a guy on an airplane the other day, and he's saying how he's very proud that he doesn't use any social media.
I said, so, wait, you mean you don't have a Facebook page?
And he goes, oh, no.
Positively, I do not have a Facebook page.
I said, you have a Facebook page.
He goes, no.
What are you telling me?
He says, I know.
I don't have a Facebook page.
I would know if I had a Facebook page.
I said, no, you don't understand.
Every time someone mentions you on Facebook or posts a photo in which you appear, that goes into your Facebook profile.
You have a big, an enormous Facebook profile, except that you can't see it.
Well, it's not only that, but even when you think you're...
I mean, Google...
God, do they ever say anything truthful publicly?
That's a big question.
But, I mean, Google claims, for example, you can delete your Google data.
You can go through the motions of saying, I want to delete my Google data, and then from that point on, you can't see your Google data.
But they don't delete it.
They never delete it.
Even if they deleted it on one server, it's sitting there in backup after backup after backup.
And not only that, if you read, I think I'm the only one who reads these things, but if you read Google's Terms of Service and Google's Privacy Policy...
It says right in there, we reserve the right to hold on to your data as we might be required by law or in any other way that protects Google.
I think that what these things are... we're at a time in history where you can't look at them as just private companies, because the ability to express yourself is severely limited if you're not on those platforms.
I think they should be looked at like utilities, and I think they should be subject to the freedoms that are in our Constitution and the Bill of Rights. I think the way the First Amendment protects free speech, it should be protected on social media platforms, because as long as you're not threatening someone or doxxing someone or putting someone in harm's way or lying about them,
I think your ability to express yourself is a gigantic part of us trying to figure out the truth.
Like when it comes to what are people's honest opinions about things?
Do we know?
You know, we don't know if honest opinions are suppressed because they don't match up to someone's ideology.
I think it's a critical aspect of what it means to be American to be able to express yourself freely, and finding out how other people think is educational. If you only exist in an echo chamber and you only hear the opinions expressed of people who align with a certain ideology, that's not free speech.
I think free speech is critical.
I think the answer to bad speech, and this is not my thought, many brilliant people believe this, is better speech: more thought, more convincing arguments, more logical, sustained reasoning and debate and discussion.
And I think as soon as they start suppressing ideas, as soon as they start suppressing and deleting YouTube videos and banning people from Twitter for things that have now been proven to be true, right?
There's a lot of people that were banned because they raised the lab leak theory.
I mean, I was happy, of course, that this happened, but I think it was dead wrong for Twitter and Facebook to literally cut off communication between the current president of the United States who's still in office and his supporters.
And the real question, too, is how much manipulation was being done by federal agents in the January 6th event. Like, did they engineer people going into the Capitol? Did they encourage them?
And you saw that Ted Cruz conversation with the woman from the FBI, where she said, I can't answer that. Did the FBI incite violence? I can't answer that.
You can't answer that? The answer should be never.
Would they incite violence? Would the FBI manipulate people into doing something illegal that they would not have done otherwise?
Look, if you pay attention to those people, like if you watch, there's a great documentary on HBO, this QAnon documentary.
And you realize, like, how easily manipulated some of these poor folks are.
They get involved in these movements.
Now, if somebody wanted to disparage a political party or to maybe have some sort of a justification for getting some influential person like Donald Trump offline, that would be the way they would do it.
I need people to provide funds, but also to help us find funds.
This is the year where I think we should set up this first large-scale nationwide monitoring system, which could be used not only to keep an eye on these midterm elections, but we could finally start to look at our kids.
That's become my main concern now, is our kids.
Because we don't know.
We don't understand what the hell they're doing.
We don't know what they're looking at, what they're listening to.
But I can tell you for sure that a lot of what's happening is really being done very deliberately and strategically by the big tech companies.
Because they have control over the information that everyone has access to, and they're going to do what's best for them, what makes them the most money, what spreads their values, and, of course, sometimes what's good for intelligence purposes.
They're going to do those things, and we have no idea what they're doing unless we track them.
So anyway, that's my fantasy.
This is the year where we're going to get this thing running in a way that it would be self-maintaining.
So it would continue year after year after year.
Not optional.
I've said that before.
It's not optional.
So if people go to tamebigtech.com, they can get more information.
I actually created a special booklet that we're going to give out for free.
I had a copy to bring you, and I left it in my car.
But we have this booklet.
I took my congressional testimony, I updated it, I expanded it, and I turned it into an essay, which is called Google's Triple Threat: To Democracy, Our Children, and Our Minds.
And it says right on it, prepared for Joe Rogan's whatever.
And I am doing this with the help of all my wonderful teammates.
I am so far still the only one.
And that's disgusting.
That's horrible.
There's something fundamentally wrong with that picture.
But when you think about the internet, and how many people on the internet are, you know, interested in politics and interested in the influence of big tech and the dangers of big tech...
When they talk about psychological dangers, like Jonathan Haidt's work with young girls and self-harm and suicide and the rise of depression amongst young people, you would think that this would also be something that people would investigate and dig into.
The fact that you're the only one, it's very strange.
We have two cats in our office, and I'm the poop cleaner, but I'm gone at the moment.
So, when I'm gone, that means someone else has to clean the poop.
So I said to my associate director last night, I said, just remember that the more credentials you get, the more responsibilities you get, the more poop you're going to have to clean.
And that's the truth.
So it's very tough.
I don't like being in this position and I do wonder about Misty.
I'll probably always wonder about Misty and I'll never know because, again, her truck disappeared.