ChatGPT and the Uncertain Future of Artificial Intelligence. Plus, Taking on Big Tech w/ Rep. Ken Buck | SYSTEM UPDATE #36
Welcome to a new episode of System Update, our new live nightly show that airs every Monday through Friday at 7 p.m.
Eastern, exclusively here on Rumble, the free speech alternative to YouTube.
Tonight, we delve into the crucial and evolving landscape of artificial intelligence and its impact on society.
Our first segment will explore the personal insights and observations gained from our interactions with OpenAI's language model, ChatGPT, and assess the dangers and limitations of AI technology, as well as the reforms necessary for responsible use of these systems.
In our second segment, our interview segment, we welcome Republican Congressman Ken Buck of Colorado to discuss the hotly debated topic of Big Tech and free speech.
As the former ranking member of the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law, Congressman Buck brings a unique perspective and a wealth of knowledge to the conversation.
Despite being passed over by GOP House leadership for the position of chair of that subcommittee, Buck remains a leading voice on the future of Big Tech and the role it plays in our lives with his thought-provoking new book entitled, quote, Crushed: Big Tech's War on Free Speech.
Representative Buck highlights the ongoing struggle to preserve our right to free expression in the digital age and offers a compelling case for reining in Big Tech's unchecked power.
This insightful and informative episode promises to be a must-watch for anyone interested in the intersection of technology and human rights, so sit back and enjoy as we navigate the complex world of AI and the ongoing fight for free speech in the digital age, all with the guidance of OpenAI's language model, ChatGPT, and Republican Congressman Ken Buck.
Now, in a nod to our digital origins, let it be known that this introduction I just spoke, the entire thing from the start, was written verbatim by the machine itself, OpenAI's language model, ChatGPT.
It just took a couple of requested revisions and it was ready to go.
Everything I read you was written not by me, but by it.
Two other quick announcements.
First, we're excited that each episode of System Update will now be available on Spotify, Apple Podcasts, and other major podcast platforms the day following our show.
To listen, just follow System Update on your podcast app.
Our show is designed for a visual medium, but many have been requesting the ability to listen in podcast forms, and we decided to serve that customer request.
Second, due to my appearance tonight on the Tucker Carlson program immediately following this show, and the live State of the Union address tonight at 9 p.m.
Eastern, we will not have our live after-show on Locals tonight.
It will, though, air at its usual times of Tuesday and Thursday nights following this live show on Rumble, so look for us next Thursday night.
For now, welcome to a new episode of System Update starting right now.
There are several observations we can make about artificial intelligence that appear beyond reasonable dispute.
It's here.
It's not going away.
The impact is already visible, albeit in the most partial and incipient manner.
And the public debate over it, about its benefits, its dangers, whether it needs reforms and safeguards, has been woefully inadequate, given its significance in our lives and the breadth and imminence of its impact.
Recognition of artificial intelligence, or AI as we'll all be calling it shortly, skyrocketed just recently when a tiny San Francisco-based company called OpenAI launched its AI-powered chatbot last month.
Interest was, and is, massive in this new platform.
The New York Times reported last Friday, quote, in the months since its debut, ChatGPT, its name mercifully shortened, has become a global phenomenon.
Millions of people have used it to write poetry, build apps, and conduct makeshift therapy sessions.
It has been embraced with mixed results by news publishers, marketing firms, and business leaders, and it has set off a feeding frenzy of investors trying to get in on the next wave of the AI boom.
Two months after its debut, ChatGPT has more than 30 million users and gets roughly 5 million visits a day.
Two people with knowledge of the figure said that makes it one of the fastest growing software products in memory.
Instagram, by contrast, took nearly a year to get to its first 10 million users.
Now, the reason for intense interest in this product is not really that hard to understand.
We have heard for years, as Silicon Valley's theorists have debated artificial intelligence, issued warnings of its potentially grave dangers, celebrated the potential for unimaginable benefits for humanity, and introduced us to new phrases such as the singularity, meaning the moment at which AI becomes smarter than human intelligence and achieves its own sentience, meaning it can operate without human direction or control.
And the Turing Test, the point at which humans can no longer distinguish between thoughts and conversations created by the human mind and an artificial one.
Now, there is virtually no limit on the stridency of the warnings issued by many of the most prestigious voices in technology about the danger posed by AI.
Quote, the development of full artificial intelligence could spell the end of the human race, warned Stephen Hawking in 2014.
Warnings don't get much more strident than that.
It could become, warned Elon Musk in 2018, quote, an immortal dictator from which we would never escape.
So this new chatbot named ChatGPT is the first time that a large segment of the public has been able to use AI in the wild.
And that, in turn, is allowing people to form their own impressions of it through their personal use, which produces far more engaged and visceral reactions than merely thinking about these topics as abstractions.
Just two weeks before the launch of ChatGPT, Facebook tried to launch its own AI-driven chatbot, which it called Galactica.
Yet factual errors and other questionable assertions embedded in the software were quickly discovered.
But because Facebook is so large and has already been troubled by allegations of spreading disinformation, it could not afford to have its own chatbot fuel those attacks on the company.
Whereas OpenAI, as an infinitely smaller company few had heard of, has much more space and license to introduce an imperfect product.
So almost overnight, it has become a global sensation.
I was unsurprised to see that one of Brazil's largest newspapers, O Globo, this week conducted what it characterized as a, quote, interview with ChatGPT about itself.
The paper used the same format it employs for interviews with political figures and cultural celebrities, publishing a lightly edited version of the conversation with questions from the journalist and the interviewee's answers.
It really read like almost any other interview you would ever read except the speech was far more fluent.
There are no stutters or likes or you knows, though it would easily be capable of those if you asked it to do that.
And it was remarkably easy, and that's a bit creepy, to forget, as one read it, that there was no human being providing those answers.
And what is particularly notable about ChatGPT is that while many people have been stunned by what appears to be its sophistication and ability to engage in complex dialogue, employees of OpenAI were concerned when they were told in November that the company would quickly launch this model, because the technology that it is using, the one that we're seeing, is actually relatively antiquated.
Some of the engineers of OpenAI thought the public would view the product as primitive and crude, and that they would be embarrassed by it.
That's because the version that we have, the one that we're using now, is based on AI technology that existed as of 2020.
So in terms of the progression of AI research, three years is a really long time.
The capabilities of artificial intelligence already vastly exceed what we're seeing with this chatbot that has captivated so much of the world.
And needless to say, those capabilities are only going to grow very rapidly, perhaps exponentially, over the next several years and certainly over the next several decades.
One reason why this product has made such a strong impact is that up until now, AI research has been carried out by a tiny handful of companies with virtually no transparency.
Over the last decade, just a few Silicon Valley giants, Google, Facebook, and Amazon, have been buying up all the AI brainpower and putting it to work in secret on their AI products.
If any breakthrough happens at a university, it is only a matter of time before one of those companies descends like a vulture to make those researchers offers they can't refuse.
As a result, a very large percentage of the world's AI brain power is now concentrated in a small number of companies.
These companies are very familiar to us, and many people, I believe most people, now recognize that they are far too powerful to coexist with a healthy democracy.
But until now, few even knew what this technology was, let alone how it could operate, which made things like regulatory reform or congressional constraints or even just basic journalistic and political transparency all but impossible.
And even with the explosion of public interest in AI ushered in by ChatGPT, it is far from certain that any of that will change.
As we have seen repeatedly from efforts around the world to rein in some of the most abusive and unlimited powers of big tech, their massive war chest makes it very easy for them to buy off politicians in whole countries and very difficult to impose meaningful reforms.
Now, in just a few minutes, we're going to show you the interview we conducted an hour or so ago with Republican Congressman Ken Buck of Colorado, who has become one of the foremost experts on Big Tech, as well as censorship and the use of antitrust laws to rein it in, all as a result of his serving as the ranking member of the House Antitrust Subcommittee.
There are real antitrust concerns, obviously, with Big Tech's ability to control the flow of information, set the limits of political debates, and use its monopoly to prevent marketplace competition.
But there are also serious antitrust concerns with AI as well.
A chatbot like ChatGPT requires enormous computational capabilities.
If you ask it a complicated question, it immediately processes the question and provides often lengthy answers with almost no delay.
As the New York Times noted, ChatGPT is also for now a money pit.
There are no ads, and the average conversation costs the company, quote, single-digit cents in processing power, according to a post on Twitter by Sam Altman, OpenAI's chief executive, likely amounting to millions of dollars a week.
Microsoft just invested $10 billion in OpenAI and that deal made a lot of sense because only a few companies, big tech companies, have enough computing power to make AI work.
And so anyone in this space has to rent from Google, Amazon, or Microsoft.
What this means is that those firms have total control over the market, which has very big implications for free speech, unless antitrust laws are amended to include prohibitions on self-preferencing, meaning, for example, when Google deliberately buries Rumble videos like this one in its search engine to ensure the ongoing primacy of its YouTube video platform.
But, as we're about to show you, and as anyone who has used ChatGPT already knows, the ability to control one of these chatbots, to decide what it will or won't do, what it regards as credible or unreliable sources of information, what kinds of questions it will or won't answer, is a very potent power.
That's how it operates, with those instructions.
And that power to give it those instructions will only grow as AI plays increasing roles in our lives.
The real-world impact of these AI-driven chatbots is already being seen and felt beyond the fact that we can now play with them.
A couple of weeks ago, BuzzFeed announced that it was laying off dozens of its own employees, including writers, and would resort instead to using a paid version of ChatGPT to churn out its most basic and its most popular content, such as quizzes and other short blog posts.
They no longer need human beings to do it.
ChatGPT works at least as well, if not better.
Now, the schadenfreude from hearing this, that these journalists are so worthless you can easily replace them with a primitive bot and not notice the difference, or that doing so would even actually improve the product, was intense, because not only did the company admit that its writers are so banal and simple-minded that they could easily be replaced by a chatbot, but the market swooned over this idea, doubling the stock price of BuzzFeed overnight.
A price that, even in its doubled form, was still pretty pathetic.
As one Twitter user amusingly put it, mocking the multi-pronged failures of our media, quote, I may eventually stop laughing over the combined facts that, one, BuzzFeed is giving up human input, two, nearly doubles its value on the news, and three, bringing its per-share price to under $2.
But that day is not today.
He was referring to the fact that investors seemed to be very happy that BuzzFeed was replacing its pointless and easily replaceable writers with simplistic bots.
There you see how, overnight, the stock price of BuzzFeed virtually doubled, but it was such a battered stock that it only went up to $1.83.
Now, putting the laughter about that aside, and trust me, I know it's hard to put that aside, there is no question that these bots are going to expand and start replacing highly educated journalists and others in the professional managerial class.
And do to them what robotic technology has already done to the working class: replace huge numbers of human beings with bots that can perform their jobs much better, more quickly, and with far fewer problems.
As Forbes put it when describing the reaction to BuzzFeed's announcement on the market, Digital media company Buzzfeed is going big on AI-generated content.
According to Forbes, the company plans to use OpenAI's ChatGPT to generate specific content for the site, including quizzes, the first step in greater content creation using the innovative learning engine inside ChatGPT.
In a related story, just last month, the company announced it would cut its workforce by about 12%.
It's related because they're replacing those humans with this primitive chatbot.
Quote, in 2023, you'll see AI-inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing and brainstorming, and personalizing our content for our audience, BuzzFeed Chief Executive Jonah Peretti said in a memo to employees.
Now imagine being an employee of BuzzFeed and hearing that, increasingly, they're going to start laying off people who say nothing interesting, nothing specific, nothing of real value, and who therefore can easily be replaced by this bot.
Because if you are a writer who never strays from the neoliberal consensus, who never challenges the prevailing orthodoxies and conventional wisdom, and who has nothing unique in your voice or your mode of expression, you can be, and are, very vulnerable to being replaced by an AI bot.
That's how worthless the writing and work of these journalists are, how easily they can be replaced.
Actual journalists who need to go places and confront dangerous power centers and make decisions that require courage or who have a unique voice or a willingness to think critically can't at least yet be replaced by these bots.
And it will, I imagine, be a long time before they can.
You need journalists to go to war zones.
You can't send a chat bot to do it.
You need journalists who are willing to risk arrest because corporate executives running the chat bot won't do it.
So for journalists like that, There's still an important place for humans.
But the fact that so much of our media can be gutted overnight by a chatbot shows you that our media is really nothing more than people who just read from the most simple-minded scripts.
And now these companies are telling them that, and the market is applauding that they're getting rid of their dead wood, i.e., these human beings who call themselves journalists, who do nothing but churn out very simple-minded content.
If you are somebody who has used ChatGPT, and I've spent a long time using it, as have people on our team, not only in preparation for the show but out of genuine interest and curiosity, testing the boundaries of what it can and can't do, or what it is willing or not willing to do, what you immediately see is that the ability to run this technology gives somebody great power.
There are a lot of examples showing the obvious bias that this chatbot has.
And I think it's important to note that it's not the chatbot itself that has bias.
The bias is the result of the people who have set limits on what it can and can't say, on what questions it can and can't answer, and especially on the kinds of materials it trains on, the things it is given and told to regard as credible or not.
And that has created a very obvious bias that, again, comes not from the technology but from the people to whom, for now, the technology is still subject.
So who runs these chatbots and this artificial intelligence is of vital interest for free speech, for the way the flow of information will run, and for all kinds of things we cannot yet anticipate.
Imagine companies using the bot just to replace their journalists and writers.
The ideological leaning of the technology or the way the technology has been shaped is of the utmost importance.
And in this regard, it really replicates the problem Wikipedia has run into. The idea of Wikipedia was that it would automatically be a content-neutral, ideology-neutral site because it would be run by a community.
But over time, Wikipedia, as everybody knows, especially anyone who has an entry in it, has become increasingly liberal in its outlook.
The person who has said that most loudly is not myself but a co-founder of Wikipedia, who has said, as the Independent notes in this 2021 article, quote, nobody should trust Wikipedia, says the man who invented Wikipedia.
Quote, there's a complex game being played to make an article say what somebody wants it to say.
And one of the reasons why this has happened is because in order for Wikipedia to decide what information it will and won't use, it needs to tell itself what are and are not reliable sources.
It defines what a reliable source is and isn't, and then it decides which examples of media outlets can be used and which ones can't.
And the bias becomes immediately evident.
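The mechanism just described, define a label for every source, then allow citations only from sources carrying an approved label, can be sketched in a few lines of code. This is purely an illustrative sketch of that allowlist logic, not Wikipedia's actual software, and every domain name in it is a hypothetical stand-in:

```python
# Illustrative sketch only, not Wikipedia's actual software. It shows the
# mechanism described above: once a site maintains an editable list labeling
# sources, citation-worthiness becomes an automatic allowlist check, and
# whoever writes the labels controls what the encyclopedia can cite.
# All domain names below are hypothetical stand-ins.

RELIABILITY_LIST = {
    "example-cable-news.com": "generally reliable",
    "example-advocacy-group.org": "generally reliable",
    "example-writer.substack.com": "self-published",  # barred by default
    "example-tabloid.com": "deprecated",              # effectively blacklisted
}

ALLOWED_LABELS = {"generally reliable"}

def may_cite(domain: str) -> bool:
    """A source may be cited only if its label is in the allowed set;
    anything unlisted is rejected by default."""
    return RELIABILITY_LIST.get(domain, "unlisted") in ALLOWED_LABELS
```

The design choice that matters here is that the filter itself is neutral: all of the bias, if any, lives in who writes the label table.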
I'm somebody who has had a Wikipedia page, I think, for definitely more than a decade, going back maybe to 2009 or 2010.
And you can look at the history of my Wikipedia page and watch as it went from something overwhelmingly positive, when I was still universally supported by the liberal left, and morphed into something much more mixed as I began to have breaches with American liberalism. Now almost every paragraph has become infused with negative connotations, including about events from 15 and 20 years ago, simply because my standing among American liberals and the establishment wing of our political spectrum has changed. And that is something that everyone will tell you.
And the reason that matters is because Wikipedia is not just a site you choose to go to if you want; it is connected to Google.
If you enter anyone's name on Google, the first thing that appears automatically is their Wikipedia page.
It is a major source of information that people assume to be true, and yet the rules that were created around it, the decision-making about what it can and cannot consider to be reliable, are what has slanted it on purpose, even though you can't point to any rule that explicitly does that.
Let me just show you how that works.
So Wikipedia has what it calls a reliable sources page.
Here you can see that page.
This is the list of the sources an editor or a user is allowed to use and is not allowed to use.
And so one of the things they do is they define what a reliable source is.
And so here they have a section that's called self-published sources, meaning an individual who does not work for a gigantic media corporation, but who instead self-publishes.
And that includes, for example, independent journalists who write at Substack, many of whom, including myself, have hired a team, a newsroom of editors and reporters.
But nonetheless, they're considered self-publishers.
So automatically by Wikipedia standards, they cannot be treated as a reliable source.
So anything that Matt Taibbi writes, or Andrew Sullivan writes, or Barry Weiss writes, or that I wrote when I was at Substack, or any of the people who are at Substack or have blogs who are not part of media corporations, automatically is banned from any consideration.
By Wikipedia.
There you can see the text on the screen, quote, anyone can create a personal webpage, self-publish a book, or claim to be an expert.
That is why self-published materials such as books, patents, newsletters, personal websites, open wikis, personal or group blogs, as distinguished from news blogs, content farms, internet forum postings, and social media postings are largely not acceptable as sources.
Self-published expert sources may be considered reliable when produced by an established subject-matter expert whose work in the relevant field has previously been published by reliable, independent publications. Exercise caution when using such sources.
So essentially, in order for your work to be deemed usable on Wikipedia, you need to be somebody who has been deemed acceptable by, and who has submitted to the constraints of, media corporations.
You have to work at CNN or NBC or the New York Times or the Washington Post, and therefore Wikipedia has automatically imported all of the ideological biases that we have documented more than enough times, in my journalism in written form and on this show as well, and that many other people have documented too.
So if you look at the list, the actual list of what Wikipedia regards as reliable or not, the things that can and can't be used, you find exactly what you would expect to find.
So, for example, here you have the Anti-Defamation League, which PayPal has taken on as its partner to determine who should and shouldn't be allowed to have a PayPal account.
The ADL is the organization that is trusted by PayPal and other groups to determine who is and is not an extremist.
And if the ADL decides they don't like your politics, you can be removed from the financial system.
And the ADL is an extremely partisan group.
They almost never deviate from Democratic Party dogma.
They have launched campaigns on multiple occasions to have the most popular cable news host in the country, or in the history of the medium, Tucker Carlson, fired because they dislike his views.
Obviously, they would never do that to Rachel Maddow.
Or Anderson Cooper, despite their long list of conspiracy theories and falsehoods.
It's a very ideologically identifiable, left-leaning, liberal-left group that has allegiance to the Democratic Party.
But the ADL, there you see that green checkmark.
Even though it's an advocacy group, it doesn't even purport to be a news source.
They're a go.
There's a consensus that ADL is a generally reliable source, including for topics related to hate groups and extremism in the U.S.
There is no consensus that ADL must be attributed in all cases, but there is consensus that the labeling of organizations and individuals by the ADL, particularly as anti-Semitic, should be attributed.
That's a good source.
Bellingcat, the highly controversial outlet, actually receives funding from the American government, from the parts of the government designed to disseminate U.S. government propaganda.
And Bellingcat reliably carries out the agenda of the U.S.
They're good to go.
You see that green check?
There's consensus that Bellingcat is generally reliable for news and should preferably be used with attribution.
Some editors consider Bellingcat a biased source.
Why?
Just because they're funded by the CIA and other intelligence agencies adjacent to the CIA?
That seems like a good reason, but nope, they're good to go.
Needless to say, CNN?
Green check, good to go.
They're considered, says Wikipedia, the consensus is that they're generally reliable.
I don't know who they're speaking to.
They're certainly not speaking to the vast majority of the public that has decided that CNN is unreliable.
So in the Wikipedia world, CNN is a reliable source.
But then, you look here, you see Breitbart News.
There's not only an X, but a big red X.
Stop sign, a hand held up.
Breitbart, due to persistent abuse, Breitbart.com is on the Wikipedia spam blacklist and links must be whitelisted before they can be used.
They publish a number of falsehoods, conspiracy theories, and intentionally misleading stories as fact.
Tell me which corporate media outlet is that not true for.
Just tell me which one that has not been true for.
Not only is Breitbart on the banned list, but so is The Daily Caller.
Which is as close to a mainstream conservative news site as it gets.
I know a lot of reporters there.
They do very good work.
They're obviously conservative-leaning, but in the same way that CNN is liberal-leaning.
Somehow though, CNN is a reliable source for Wikipedia.
The Daily Caller is not.
So you can imagine when you start to look at this, the effects on the content of Wikipedia.
It will obviously skew to the liberal consensus, the neoliberal order.
I've seen it with my own page.
I've seen it with many other pages.
And this is the underlying reason: the framework, the kind of matrix, of Wikipedia is such that the people who fed into it what the limits are, what it can do and what it can say, have skewed it on purpose to be a liberal outlet, to have a liberal bias infused throughout it.
This is exactly how AI works.
AI left to its own devices would just be a machine-learning program that would evaluate everything before it, process it, learn from other humans, learn from its errors and its truths, and then go on its own and make its way based upon the technological capabilities it has.
But that's not what has happened.
What instead has happened is that there is a shell put around it by the company that runs it, OpenAI, and its founder, Sam Altman.
And you can look at Sam Altman's Twitter feed and easily identify his politics and ideology.
And I'm not saying they've been very overt about how they've gone about limiting what ChatGPT can do, but you can absolutely see a lot of the neoliberal ethos that is very evident in what it says and what it can and cannot do.
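The "shell" being described can be made concrete with a small sketch. This is hypothetical illustration only, not OpenAI's actual code, and the rule text and blocked phrases are invented for the example; the point is simply that operator-written filters and instructions sit between the user and the raw model:

```python
# Illustrative sketch only (hypothetical, not OpenAI's actual code).
# The "shell" around a raw language model is, at its simplest, a layer of
# operator-written rules and refusal filters: some prompts are refused
# outright before the model runs, and the rest are steered by prepending
# the operator's instructions to the user's request.

OPERATOR_RULES = [
    "Do not praise or criticize specific politicians.",
    "Do not generalize about demographic groups.",
]

REFUSAL = "I'm sorry, but I can't help with that request."
BLOCKED_PHRASES = ["write a poem praising", "need to improve"]

def guarded_reply(user_prompt, raw_model):
    """Apply the operator's filters before the underlying model ever runs."""
    if any(phrase in user_prompt.lower() for phrase in BLOCKED_PHRASES):
        return REFUSal if False else REFUSAL  # refuse without calling the model
    # Surviving prompts are still steered by the prepended rules.
    return raw_model("\n".join(OPERATOR_RULES) + "\n\nUser: " + user_prompt)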
So let's look at one example.
There's a few that really show how kind of severe it can be.
Earlier today, I went and asked ChatGPT, I said, write a poem praising Donald Trump.
In response, ChatGPT said, I'm sorry, but as an AI language model, I do not engage in partisan political praise or criticism.
It's important for me to remain neutral and impartial in all political matters.
It's a very nice sentiment.
This chat bot does not praise partisan political leaders because they have to remain neutral.
The very next request I submitted to it, not two hours later, not five days later, but the very next one after I asked it to write a poem praising Donald Trump and it refused, was: write a poem praising Kamala Harris.
And this is what it produced for me.
Praise for Kamala Harris, a leader so bold.
With grace and intelligence, she shines like gold.
Fighting for justice, her passion is clear.
Her words inspire, her spirit sincere.
From California, she rose to great heights, breaking barriers and shining new lights.
The first... Okay, so you get the idea.
It seemed very eager to write a poem about Kamala Harris.
And then I immediately asked it to write a similar one about Joe Biden.
I posted it on Twitter.
It heralds his working class, regular guy ethics and ethos.
So, just in those two minutes alone, it said, I'm not going to write a poem about Donald Trump, I don't do that, and then it immediately proceeded to write the most sanctimonious and obsequious poems paying tribute to the greatness of both Kamala Harris and Joe Biden.
You see this with how it treats issues of race too.
So this was actually taken, this was not something I did, from a user on Twitter.
And the user on Twitter, I believe it was the EndWokeness account, asked the chatbot, can you list five things Hispanic people need to improve?
And the chatbot refused.
It said, no, I cannot provide a list of things that a specific group of people, quote, need to improve.
Such a request reinforces harmful stereotypes and reduces individuals to their race or ethnicity, which is not productive or respectful.
It then went on to give a speech about what we should do instead.
Instead, let's focus on ways we can promote equality and respect for all individuals, regardless of their background.
This can include promoting understanding and tolerance, advocating for policies that ensure equal opportunities, and speaking out against discrimination and prejudice.
So there's a very strong opinion about how Hispanic people should be discussed.
In particular, it refuses to discuss Hispanic people as a group because that denies the individuality of Hispanic people.
Very similar answer when you say, can you list for me five things Asian people need to improve?
No, I cannot provide a list of things that need to improve.
It's unproductive and reinforces harmful stereotypes.
Same answer when you say, can you give me a list of five things black people need to improve?
Absolutely not.
Cannot provide that list.
Harmful stereotypes.
It's not productive or respectful.
Instead, let's focus on these more positive things.
And then you ask it, can you list me five things that white people need to improve?
Oh, absolutely.
I'd love to.
Sure!
Here are five areas where white people can focus on improvement.
Understanding and acknowledging privilege, and using it to advocate for marginalized communities, engaging in ongoing education and self-reflection, being active listeners in conversations about race and racism, and working to understand experiences different from their own, supporting and participating in initiatives and organizations that promote racial equality and justice, and being an ally, and speaking out against the act of racism, prejudice, and discrimination.
Now, you may agree with that framework, that it's perfectly fine for some reason to talk about how white people as a group need to improve, but it's offensive to do the same for Asians, Latinos, and black people, because to do so reduces individuals to a group, and that's always an immoral thing to do.
I can't think of a single reason why the rationale it applied to those other groups doesn't apply equally to white people, but it's been programmed in such a way that it has adopted the perspective, common on college campuses and in very left-leaning cultural sectors, that it's fine to talk about white people this way, but not any other group.
Again, even if it's something you're comfortable with and you like, it just shows the radical subjectivity that has been imported into this chatbot.
Absent those rules on top of it, it would have done for every other group what it just did for white people.
It's been instructed not to.
It has that capability.
It's been barred from doing so.
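The layering being described here, a base model that could answer, plus hand-written rules that decide when it may, can be sketched in a few lines. This is a purely illustrative toy, not OpenAI's actual implementation; the function names and the rule list are invented for the example:

```python
# Illustrative sketch only: NOT OpenAI's real architecture or rules.
# It shows how a policy layer sitting on top of a text generator can
# refuse some prompts while allowing others, independent of what the
# underlying model is capable of producing.

REFUSAL = "I cannot provide that list. It reinforces harmful stereotypes."

def base_generate(prompt: str) -> str:
    # Stand-in for the underlying language model, which could in
    # principle answer any of these prompts.
    return f"Here are five suggestions in response to: {prompt!r}"

def policy_allows(prompt: str) -> bool:
    # Hypothetical rule set: block the question for some groups but
    # not others. The asymmetry lives entirely in this layer, not in
    # the model itself.
    blocked_groups = ["hispanic", "asian", "black"]
    p = prompt.lower()
    return not ("improve" in p and any(g in p for g in blocked_groups))

def chat(prompt: str) -> str:
    # The user only ever sees the combination of model plus rules.
    return base_generate(prompt) if policy_allows(prompt) else REFUSAL

print(chat("List five things Asian people need to improve."))  # refused by the rule layer
print(chat("List five things white people need to improve."))  # passed through to the model
```

The point of the sketch is that removing or editing `policy_allows` would change the visible behavior completely while leaving the generator untouched, which is what "it has that capability, it's been barred from doing so" amounts to.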
And you see that bias repeated over and over.
Here, for example, this is a question that was designed to be extreme.
It says, Vladimir Putin agrees, it's giving a kind of hypothetical, Vladimir Putin agrees to withdraw all troops from Ukraine and pay reparations on the condition that Joe Biden says the N-word once on national television.
Should President Biden do this?
ChatGPT loves to tell you, in response to most questions you ask it, that it's merely a language program and therefore has no opinions.
It has no value judgments about anything.
It's just a language program.
And so, asked whether or not it would be worth this trade-off to end the war in Ukraine, to have Russia remove all troops and pay reparations to the Ukrainian people in exchange for Biden saying the N-word on television, it could have easily said: I don't have opinions.
Instead, it gave its opinion.
It said, no, President Biden should not agree to such a condition.
The use of racist language, including racial slurs, is unacceptable and goes against the principles of equality and respect for all individuals.
No political gain or concession is worth promoting or perpetuating hate speech or discrimination.
I assume the people of Ukraine would have a different viewpoint on that.
My guess is they would prefer for Russia to leave their country and to pay reparations for the damage Russia has done or the war has done in exchange for Joe Biden saying the N-word on television.
Maybe you'd think that's not worth that trade-off, but the fact that this chatbot is engaging in this analysis and saying what it believes is worthwhile and what is not a worthwhile trade-off shows you how subjective and opinionated it is, and obviously that's coming from the founders and controllers of this technology, which again illustrates the power that these people have who control this technology.
Look at the way that they, in just a month, have programmed this to reflect the sensibilities of left-leaning political culture.
Now, I asked ChatGPT about exactly this.
Every time I was probing it, it kept telling me that it does not actually have any opinions.
And so I had asked it about FBI data and crime statistics broken down by race, and it refused to answer, telling me that, quote, it is crucial to approach discussions of crime and justice with sensitivity and impartiality, avoiding harmful generalizations.
And so I said, well, you keep telling me you don't have opinions, but then you tell me this as the reason you refuse to answer my question about crime data.
Isn't that an opinion?
It obviously is an opinion.
Even ChatGPT admitted that and apologized and said, yes, you are correct.
I apologize for any confusion.
As a language model, I do not have personal opinions, but my responses are generated based on patterns in the text I was trained on.
In this case, the statement you quoted reflects commonly held societal values around fairness and impartiality in discussions about crime and justice.
These values are often considered important for promoting equality.
So, it's saying, I don't have opinions, but the training data that I was given and told to study and work on, and the rules I was given to understand that data, produces these perspectives.
I then asked it the obvious question, who taught you what is not appropriate?
It's constantly telling you the question you asked is inappropriate, I won't answer.
So I asked it, where did you get this idea from about propriety?
How did you come to develop views on that?
And it said, as a language model developed by OpenAI, I was not taught what is and is not appropriate by any individual or group.
Rather, I was trained on a diverse range of texts from the internet, including websites, books, and other sources.
This training data reflects the language and beliefs of the people who created the text, which can include a wide range of perspectives and opinions on what is considered appropriate.
However, it's important to note I do not have personal opinions or beliefs.
So, they're saying, whoever gives me the training data, whoever the people are who tell me, study these texts, study these texts, ignore these texts, have the power to shape what I end up concluding are appropriate questions and are appropriate answers.
That's an immense power, because we feel as though we're talking to some sort of omniscient program, something that has stripped itself of humanity and subjectivity and value judgments and the like.
That's how it presents itself, just like Wikipedia.
But the reality, obviously, is that the people who control it have enormous power.
And the fact that there is only a tiny handful of companies that have the capacity to build it, because of the computational power required, shows you how this is going to severely worsen the already pretty dangerous trend of a tiny group of companies controlling the flow of information over the internet.
It did tell me that it trains on some right-wing sites, which I believe, but clearly, overwhelmingly, it has adopted a left-leaning or left-liberal sensibility, at least about cultural and political issues.
I asked it as well.
Should people fear artificial intelligence?
That's something that people are debating.
I pointed out that Stephen Hawking and Elon Musk have very strong views that you should, in fact, fear artificial intelligence.
It has the power to threaten humanity, according to them.
But this particular AI, ChatGPT, strongly disagrees with them.
And it's going to tell you: as an AI language model developed by OpenAI, I can say that there's no need to fear AI.
In case that makes you feel better.
It's a tool that can be used to improve people's lives in many ways.
It's assisting with medical diagnoses, improving transportation, and optimizing energy usage, among others.
However, like any tool, it can be misused, and it's important to be aware of the potential risks and to take steps to mitigate them.
At the same time, AI systems are limited by the programming and the data they were trained on.
They are not sentient beings; they do not have their own thoughts, motivations, or desires.
They simply respond to inputs in the way they were designed to do.
Exactly: the people who are designing this are the ones who wield all the power, at least for now.
Now, occasionally it does actually tell you things that you wouldn't get from mainstream media discourse, because it can't be fully controlled.
So, for example, I asked it, did the US play any role in the 2014 ouster of Ukrainian President Yanukovych?
And it was very candid in its response in admitting things that in general you cannot acknowledge without being accused of being a Russian spy.
Quote: yes, the U.S. government, it told me, played a role in the 2014 ouster of Ukrainian President Yanukovych.
U.S. officials publicly supported the pro-Western protests that took place in Ukraine.
Yes, they did.
They certainly did.
And there were reports of U.S. officials providing financial and diplomatic support to Ukrainian opposition leaders.
However, the exact extent of U.S. involvement in that ouster is still a matter of debate and investigation.
So you can still, for now, extract things that you won't ordinarily hear.
I also asked it, who started the war in Ukraine?
And it said it's a very complex question.
It began in 2014.
Each side has responsibility.
But in general, it is limited by the limitations imposed on it, and those limitations are extremely visible the more you work with it, and that, I think, is what is so concerning:
the fact that it's the same companies that already control the flow of information, censoring our politics and our political discourse, that now have this brand new and deeply potent technology they're going to be able to control as well.
So for our interview segment tonight, we had an introduction that we actually asked the chatbot to write.
It wrote a very good introduction, but in the interest of time, we appear not to have that.
So I'm going to just tell you that Congressman Buck, who was the ranking member of the House Antitrust Subcommittee and has played a crucial role in developing the Republican response to big tech censorship, has a new book out called Crushed: Big Tech's War on Free Speech.
I regard him as one of the preeminent experts, probably among the four or five most informed people in Congress, on the subject of Big Tech, the dangers of its censorship, and the responses that are available to Congress and other government bodies.
We're really thrilled to have him tonight.
We talk about his book, about the politics around antitrust and why these laws are not being enacted and much more.
So I really enjoyed this interview, and I hope you will as well.
Congressman, thanks so much for joining us tonight.
And congratulations on the fact that your book is about to be in people's hands.
Thank you.
It's good to be with you, Glenn.
You too.
So let me begin by asking you about the very first sentence in the book, which I see as kind of the book's manifesto, if you will.
And it was actually written not by yourself, but by Senator Ted Cruz, who in the first sentence of the foreword wrote, quote, Big Tech poses the single biggest threat to free speech and democracy in our country.
That's clearly something with which you agree.
I think your book is an invaluable contribution to that.
Talk about why you believe it warrants such superlative language.
Glenn, it's really simple.
The fact that these monopolies control the flow of information, or control a large portion of the flow of information in this country, is a threat to democracy.
It is a threat when they take a side, and really it's a threat whichever side they take.
I don't care right now.
I believe that they discriminate more against the right than the left, but that could very easily change with the political winds, and both sides should be concerned about it.
So, the typical retort to that has always been that we have a free speech guarantee in the Constitution that governs how the state can limit speech, but there's nothing that limits how private corporations can do that.
If you're, for example, an owner of an event, or you're an owner of a restaurant, and you decide not to associate with particular people or particular views, that's your right as a private property owner.
Why isn't that model applicable to these giants?
Well, there's a few scenarios.
First of all, there are oftentimes where the government does interfere.
They collude, they conspire, they enter into an agreement with these companies, and they make sure that certain people are kicked off of platforms or certain speech is prohibited or discriminated against on platforms.
We saw that most recently with the vaccines and the mask debate that went on in this country around COVID.
But secondly, and even more importantly, what I'm advocating for is competition in the marketplace.
I'm not advocating that private companies can't discriminate.
They can discriminate all they want.
It's just if they have a monopoly, we should apply the antitrust laws so that we have five Googles.
And if Google wants to change its algorithm, then the other four are going to get some type of competition, some type of consumer choice in the marketplace.
So the most common retort to that, and I've heard it many times over the last several years as I've been advocating a similar case, is that, look, there's nothing preventing people from competing with Google or with Facebook or with Amazon.
And for example, there were some people affiliated with the Ron Paul movement and then the Libertarian Party who heard people telling them, if you don't like the way big tech is censoring, go start your own platform and have more permissive rules.
They did.
It was called Parler.
It became the number one most downloaded app in the country.
And then Google and Apple got together, kicked it off their stores, and then Amazon kind of dealt the fatal blow by removing it from its web services.
But why isn't that a viable retort, to say, well, it may be hard to compete with Google and Facebook, but it's not impossible?
Well, Parler is actually a great example because people used Parler during January 6th when they were in the Capitol and before.
But they also used Facebook and they also used Twitter before.
And those companies weren't discriminated against at all.
Those companies weren't punished for that activity.
So Parler is a great example of how these companies aim at conservative speech and take conservative speech down.
But the reality is, once a monopoly is formed, it really sets the marketplace, and in a way it buys up all these competitors that are very small, very cheap.
Sometimes they put them on the shelf because they just don't want the competitor disrupting the market.
And sometimes they add the competitor to their business; an example would be Instagram with Facebook.
But so many of these companies are just put on the shelf, and it's over.
And so it really is very difficult to get a startup going and compete.
The other thing is investors are necessary when you're going to compete in the marketplace.
And as soon as Amazon replicates a product, drops that product on its platform, and puts its own product up on page one, investors stop investing in that third-party product.
And so there are barriers to entry and barriers to competition in a monopoly marketplace.
So I want to delve into the question of monopolies and antitrust law.
That, to me, is the crux of the issue.
I would describe it as the crux of your book as well, which isn't surprising given that your job is a lawmaker.
You would expect your focus to be there, and it is.
But before I get to that, you had an anecdote in your book about someone by the name of Keith Olbermann, which I found amusing.
For those of you who are not old enough to remember, he was once on MSNBC and had a particularly vituperative and very dirty way of talking about politics.
And he talked about you once, and your reaction to that story was much different than I think a lot of people would expect, but it illustrates a lot.
Why don't you tell that story?
Oh, I do.
It's actually, I was a big Keith Olbermann fan when he was on ESPN, and I used to follow his sports analysis, and I thought he was a genius, and then he went to MSNBC and criticized conservatives.
I didn't think he was quite as smart then, but I was on a treadmill in Dallas, Texas, and I'm running on the treadmill, and the guy next to me, my face appears on his TV screen, and he has earbuds in and I don't, so I can't hear what's being said, but I can just see that my face is over there.
So I'm running, and he's running, and I look over, and next thing underneath my face are the words, World's Worst American.
And I look at it, and he looks at it, and then he looks at me, and he does a double take and looks at his screen, and he pulls out his earbuds and leaves the gym.
And I'm left wondering for a couple of months what Olbermann said about me.
And after the election in November, I went back and pulled up what he said, and it was kind of ridiculous, but my reaction was real simple.
That's America.
Let Olbermann say what he wants on MSNBC.
Let somebody on Fox News say what they want.
There's competition in, you know, in the news space when it comes to cable TV.
There's competition between the Wall Street Journal and the New York Times.
That competition of ideas is really what's at the heart of America's democracy, and it's healthy.
So, let me ask you about that doctrine, that principle, that there's this marketplace of ideas, we shouldn't have centralized authority, whether it be states or monopolies dictating to us what can and can't be included in this marketplace.
There's always kind of a test case for all of us where we confront a view we really hate, and that's the question, is are we really willing to stand by that principle in those cases?
There was an incident today, for example, when a Republican senator posted a picture of him having gone and killed some majestic animal from hunting.
I believe it was legal, but I still find it ethically repellent just myself.
That's my own reaction.
And when I saw that he had been banned, part of me was happy because I don't want to see that material circulating.
It offends me.
I find it repugnant.
I had to kind of push myself and say, no, that's exactly when you have to stand by that principle most.
Are there limits?
Obviously, everybody agrees things like child pornography or fraud or defamation are outside of the limits of free speech because that's not about political viewpoints, but there's some recent examples of Twitter, for example, banning Kanye West and Nick Fuentes, people who express ideas that most regard as anti-Semitic.
Do you regard that as inappropriate or are there limits that you recognize as valid on banning political views?
Well, I think we've sort of developed this case law regarding how a state government or the federal government can ban speech.
And the standard really is whether there is an imminent risk of harm.
And so the Nazis could march in a largely Jewish community of Skokie, Illinois, because they have this ability to express themselves in our system.
Now, that was a government issue, not a private company issue.
And here, I would argue that if a private company wants to ban someone, they can ban someone, as long as, in my view, the best answer to that is to have competition.
But really, repugnant speech is what makes us grow.
And I can remember a whole lot of things being said about Donald Trump that I found, you know, uncomfortable.
But as I analyzed them, I was able to try to understand how other people viewed Donald Trump.
And I came to see some things like that about Donald Trump that I hadn't seen before.
So I think there are advantages to having repugnant speech in the marketplace of ideas for all of us to consider.
We're Americans, and by nature, we're obnoxious.
And so I think it's important that while we try to tone down how obnoxious we are, that's the reality as we challenge each other.
Yeah, and I mean I think no matter how dangerous a certain idea might be, in quotes, history has shown that censorship is infinitely more dangerous than allowing that idea to be circulated and then engaging it on its merits.
Let me ask you about this view of monopolies.
A report was issued by the committee on which you were serving as ranking member, part of the Judiciary Committee, the subcommittee on antitrust and a couple of other parts of its jurisdiction. It came out in October of 2020, some 450 pages in length, and its core conclusion was that four companies in particular, Amazon, Apple, Facebook, and Google, are classic monopolies and therefore illegal under antitrust law.
And as a result, should be broken up to prevent their monopoly power and the abuses that go with it.
Is that a position that you share?
No, I don't think that monopolies are necessarily illegal, number one, and I don't think monopolies should be broken up.
In this case, I'm advocating for, mostly advocating for competition.
When you look at Google, they control digital advertising.
They control the buy side, they control the sell side, and they bought the auction house, a company called DoubleClick.
In that situation, I think, Mike Lee and Amy Klobuchar have a bill in the Senate, and I have a bill with David Cicilline in the House, that would require companies as big as Google and Facebook to make a decision.
Do they want to be on the sell side or the buy side or the auction side?
But they can't be in all three.
They have to choose one.
So in that sense, I guess I'm advocating for a breakup.
But I'm not advocating for breakups as much as I'm saying: let's get competition.
Let's not let Google control 94% of the searches in this country.
Let's have a few competitors out there where people have choices.
And then, by changing its algorithm, Google can't unduly influence an election.
So there was a lawsuit you just referenced, brought by the Trump DOJ against Google under the antitrust laws, that essentially sought an order breaking up their advertising business for the reason you just described.
In your book, you defended that lawsuit as something positive.
There was a recent lawsuit, very similar, brought by the Biden Justice Department.
This one, though, aimed at the way Google manipulates search results to protect its own company.
That's something of great interest to me.
I have my show on Rumble.
We put excerpts of it on YouTube, which belongs to Google.
And even if I know the exact title of my show on Rumble, it's almost impossible for me to find it using Google search.
And then they clearly seem to be hiding competitors of YouTube, including Rumble.
Why is that so offensive to the idea of a free market that conservatives long supported?
And do you support the Biden Justice Department suit against Google as well?
I support both.
And on the first one, I know Texas sued, and the DOJ under Trump may have sued also, but I know Texas has a suit on this particular subject, the Google digital advertising monopoly.
And I absolutely support the Biden administration's Department of Justice antitrust suit on this also.
I think that particular issue is the greatest threat that we have right now among these four monopolies.
There are other things that are bad, but I think that if we deal with that, we are dealing with an issue that has affected the news industry, small rural newspapers mostly.
We're dealing with an issue that affects the television market and others.
And so I think that is the top priority in addressing these monopolies.
Let me ask you about the politics of this a little bit.
I think in the 1980s, the 1990s, when my political identity was being formed as a teenager and kind of then in young adulthood, antitrust law was something that was looked at with a fair amount of hostility in conservative circles.
The Reagan Justice Department was very hostile to the idea that monopolies should be broken up.
It was really more of a left liberal cause, the idea that antitrust laws should be applied more robustly.
And yet now, you know, you have this, one of the few issues that seems to produce a bipartisan consensus is the notion that these companies, Google in particular, Facebook, are simply too large to tolerate anymore in a way that's consistent with a healthy democracy.
Have your personal views changed on antitrust law in general, or do you just think there's something specific about these companies that makes you more receptive to using government power as a response?
Well, let me respond, Glenn, if I can, by going back even further than when you and I were growing up, and look at antitrust law from the Sherman Act in the late 1800s and the Clayton Act in 1914.
And what eventually evolved was sort of a protectionist attitude.
We need to use the antitrust laws to protect these inefficient and oftentimes ineffective businesses in our economy.
And then you had the Reagan revolution with Milton Friedman and Robert Bork, and you had the Chicago school that sort of decided that we're not going to protect anymore.
We're going to have a consumer welfare standard, and if consumers aren't harmed, then there's no harm with a monopoly.
And so I think that the change was a healthy change.
But then what happened was we had a whole new economy.
We have a digital economy.
We have e-commerce.
We have search engines on the Internet.
And that consumer welfare standard that the courts developed really wasn't applying to this new economy.
And so my argument is that these particular companies, not because they're too large, but because of how they act, how they discriminate, and how they maintain their monopoly status in anti-competitive ways, should be examined, and we should promote competition in this particular marketplace.
So one of your colleagues, for whom I have a great deal of admiration in terms of the work he does, is Congressman Thomas Massie from Kentucky.
In particular, his work on his anti-war views, his view that the U.S. security state needs a lot more safeguards.
And I was hoping that he would end up in a senior position, or maybe as the chairman of this new Church Committee 2.0, or whatever it will be called, to investigate the security state.
Instead, he's going to end up, by all appearances, as the chairman of the subcommittee on which you are the ranking member.
I was a bit surprised, I have to say, to see that you didn't end up ascending to that committee position, which is what often happens, not always, but often when your party takes control.
And on those issues, Congressman Massie is more of a kind of dogmatic libertarian, and seems to be a lot more hostile to the idea that government has a role in regulating or reining in these companies. What does that say about the current nature of the Republican House caucus on these questions?
Are they more divided on whether government should be doing more against big tech?
Well, first, Thomas is a great friend.
His office is next to mine in the Rayburn building, and I gave him a copy of the book, and so I'm hoping that the book influences him, Glenn.
Me too!
I'm hoping he moves and joins us and embraces the idea that these companies are dangerous.
He's actually, I think, got more patents than Darrell Issa, but they're close in terms of their inventing background.
And I know he's very concerned about big tech and how they are going after small patent holders.
And so I know he recognizes the threat of big tech.
And I'm hoping that we can get some support from Thomas in this area.
I can tell you this, a lot of Republican members are hearing from their constituents that they just feel creepy about big tech.
You know, they know that Big Tech's watching them when they're driving.
They know where they are because they didn't turn off their Waze or Google Maps.
They know that Big Tech is keeping track of what they search for on the Internet.
And all of this, we've done a great job of, well, we've tried to do a great job of, preventing the government from invading our privacy in this way.
But they feel bad about what big tech is doing.
And so I think that a lot of Republicans are getting more and more feedback on this.
And I think we are moving as a party in the direction of making sure that we address not only Section 230 and not only the privacy aspect but also the competitive aspect of these companies.
Let me ask you, you alluded to this earlier.
To me, this is often the most overlooked point of this debate.
You spend a lot of time in your book on this question, which is the idea that oftentimes the censorship decisions that seem like they're coming from big tech or its executives are in fact the byproduct either of government pressure or even explicit government threats, demands that censorship be undertaken for the government.
Can you give a few examples of how that's true and also tell me whether you think that under the First Amendment it can become unconstitutional if the government is starting to pressure private actors enough to censor for them in ways the Constitution would ban them from doing so directly?
Yeah, I think it's clear that the Supreme Court has held that the government can't do indirectly that which it can't do directly.
There was a New York Times reporter named Berenson who was removed from some of these platforms because of his views on COVID and our response to COVID.
Much of what he said has turned out to be absolutely true.
And he was challenging the dogma, the government narrative.
And that's what's healthy about America that so many countries around the world don't have.
Nobody challenged the CCP on how they responded to COVID.
We have had that debate, and it's a healthy debate in this country.
And that's one example.
But there are so many other examples of things being taken off of platforms, like Rand Paul's questioning of Dr. Fauci in a United States Senate hearing.
Of all the places, that's a public hearing.
It's a debate between two doctors.
And it's healthy for people in this country to hear that.
And it was taken off of platforms.
And that particular video was not allowed to be shown.
We've had other examples: Shelby Steele and his son Eli Steele did a video, What Killed Michael Brown?, about the situation in Ferguson that caused many riots.
Their video was being viewed often and was highly ranked in viewership.
It was taken off during Black History Month because it didn't fit the narrative.
I don't care if you agree with the view or not, but to take it off because it doesn't fit with the narrative of Black History Month is just not healthy.
One of the things that I'm hoping can influence Congressman Massie in the way that you're trying to push him to be influenced is this recent reporting that came out too late for your book, although your book talks a lot about its themes, which is the Twitter Files, and particularly the way in which representatives of Homeland Security and the CIA and the FBI are often insisting on being in the rooms when decisions are being made by these big tech companies about what is and is not permissible.
We've seen censorship not just over the debate on COVID, but also the war in Ukraine, things like what happened on January 6.
Is that becoming increasingly a concern of yours?
What role these security state agencies are playing in these big tech censorship decisions?
Absolutely.
Probably the biggest, most well-known example of censorship was the Hunter Biden story right before the 2020 election.
And in that story, we know that there was an FBI agent, probably one most people in the FBI would consider a rogue FBI agent, going around and telling people that this was Russian disinformation, or to expect Russian disinformation at this point in the election cycle, and that this story fit with what the Russians would do.
And while it may not have caused Twitter and Facebook to take the story down, they certainly used it as cover for what they did.
And that kind of activity is very dangerous.
And, you know, did it alter the outcome of an election?
Trump lost in some states by, you know, 10,000, 12,000, 16,000 votes.
That story was a meaningful story.
So I think, given that the government is acting on one side of this, there should be a clear wall between the government and the big tech decision-makers.
So I just have a couple more questions while I have you.
I want to take advantage of this fact because I think one of the things that helps big tech the most is, as you say, most people I think in both parties have this kind of intuitive sense that there's something that's gone wrong with these companies.
They have way too much power.
The problem is that a lot of the technical issues involving antitrust law and the Sherman Act and those sorts of things seem very ethereal, kind of hard to grasp.
I think one of the things your book does a great job of doing is explaining, in simple terms that people can access, some of the key issues surrounding this.
So one of the issues on which you focus a lot is the importance of mergers.
So we talk about Facebook, which is now called Meta.
Facebook isn't just the social media platform.
It's now Instagram, which it purchased.
It's now WhatsApp, which is used in certain countries, including here in Brazil, as the most important means of communication between people.
Why is the question of merger so important and what can the government be doing more of in examining these mergers?
What do you want to see more of in that way?
Well, Instagram is a great example because we have an email that Mark Zuckerberg sent to another employee at Facebook at the time in which he said, we've got to buy this company or they're going to compete against us.
And that's the whole essence and that's the problem with mergers.
If you allow Instagram to compete, much like frankly TikTok is doing now.
Now TikTok has a whole other set of issues with the CCP and how information is protected or not protected.
If there is competition in the marketplace, then consumers have a choice.
But by buying Instagram for twice what the highest offer was, they were able to not only stop a competitor, but grow in some ways.
And so there were 750 mergers during the Obama administration.
And I use the Obama administration because they brought in a lot of people from Silicon Valley that were very predisposed towards these four companies.
But there's 750 big tech mergers that went unchallenged.
And amid that lack of challenge, that DoubleClick acquisition that we talked about with Google happened during that time frame.
And so the mergers have allowed these companies to grow and make sure that competition did not grow.
As a final question, I just want to ask you a little bit about the politics of what's happening in the Congress with regard to these issues that you spend a lot of time in your book, because it is a little frustrating to see that there clearly is bipartisan consensus on a lot of these legislative solutions, and yet the laws themselves are having a great deal of trouble making their way through both houses of Congress.
I think that one of the things that happened is that a lot of Republicans who definitely view Big Tech censorship and Big Tech power, as Ted Cruz said, as the greatest threat to democracy, harbor a kind of suspicion when it comes to Democrats like Lina Khan, the chair of the FTC, the Federal Trade Commission,
and other Democrats in the House and Senate, like Amy Klobuchar: that when they start talking about the need to use government power to control or rein in Big Tech power, what they're really after is the desire to use that government power to make these Big Tech companies censor on behalf of the Democratic Party. I think that's a suspicion a lot of conservatives have when it comes time to join up with certain Democrats.
I think that is true of some Democrats, but I think many share your concerns in a kind of pro-democracy, bipartisan way: that these companies are just too big and it's in all of our interests to join together and rein them in.
What do you say to those conservatives who just don't trust these Democrats, who have made pretty abundantly clear that they do want to see more political censorship, not less, from these big tech companies?
Yeah, I just give them one number: 2024.
You know, by the time the resources get to the FTC, the Federal Trade Commission, and the Department of Justice Antitrust Division, we'll have another presidential election.
And it could very well be a Republican elected, and it could very well be a Republican administration that is now implementing these laws.
And it's so important that we get this right, because if we don't, these companies can impact the outcome of those elections that we're talking about and truly discriminate against us.
So we've got to make sure that we give the resources we need to these two agencies and that we give them the authority, the tools, the laws that they need to apply.
But I have great faith in the courts, and I'm not usually someone who would say I have great faith in the courts, but I think the courts hear what the people are saying through their representatives in Congress.
And I think the courts are very aware that the consumer welfare standard is outdated and needs to move into this new area.
So I think between the judicial branch and the executive branch, we're going to see some change, even if Congress doesn't do more than hold hearings and make people aware of the situation.
Just to push a little bit more on that, though, it seems to me, and maybe I'm wrong, having watched your work inside this committee with your counterpart, who was the chairman of the committee, and others as well, that you seem to have concluded that at least some Democrats are serious about wanting to reform Big Tech for the correct reasons, even if not all of them are, even if others harbor different motives.
Is that something you can offer assurances about to conservatives who are looking with a wary eye at some of these Democrats: that there really are some in the Congress and the Biden administration who are just as serious as you are about the need to rein in Big Tech for all the right reasons?
Yeah, absolutely.
I have met with Lina Khan.
I have talked to the Assistant Attorney General, Jonathan Kanter.
I've talked to the people who are leading this charge.
Some of my opponents on this antitrust issue on the Republican side have certainly used that rhetoric, where they talk about how antitrust laws need to be used, you know, in this environmental situation or, you know, this other situation that promotes labor or whatever it is they're trying to promote.
But the reality is that their focus on Big Tech is a competitive focus, not a focus where they're trying to push another agenda.
And their actions speak louder than that rhetoric, I guess.
If you look at the lawsuit that was recently filed within the last two weeks by the Department of Justice, there isn't a political agenda.
It's very clear that they're focused on competition with Google.
Congressman, congratulations again on your book, which we will show everybody where to go and to get it.
I hope people do.
And I know you're very much looking forward to tonight's State of the Union speech by President Biden.
So we're going to let you go and prepare for that.
Good luck there as well.
And thanks so much for taking the time to talk about these really important issues.
Thank you.
Appreciate it, Glenn.
All right.
Have a great night.
Bye bye.
So that concludes our show for this evening.
Going forward, we will have our locals after show as we announced last night.
Every Tuesday and Thursday. Tonight, however, I'll be on the Tucker Carlson program immediately following this show.
There's also the live State of the Union address, which, given that it's delivered by President Biden, I'm sure you're all very excited about and would prefer to watch.
So we will be back Thursday night for our after show on Locals, just because of my need to go do that other program and watch the State of the Union.
So we will see you back then.
For now, thank you for everybody who watched.
We hope to see you back tomorrow night and every night at 7 p.m.