Nov. 14, 2023 - Skeptoid
18:53
Skeptoid #910: How to Spot Misinformation

Much content online is designed for high engagement, not for accuracy. Learn about your ad choices: dovetail.prx.org/ad-choices

Transcriber: nvidia/parakeet-tdt-0.6b-v2, sat-12l-sm, and large-v3-turbo

Spotting Online Misinformation
They say getting information off the internet is a little like trying to drink from a fire hose.
You get way more than you may want or need.
While that by itself might be a good thing, the bad part is that there's usually a hidden payload of misinformation mixed in with just about anything you find online.
Is that true?
And if it is, what can we do to purify it?
We're going to find out right now on Skeptoid.
A quick reminder for everyone, you're listening to Skeptoid, revealing the true science and true history behind urban legends every week since 2006.
With over a thousand episodes, we're celebrating 20 years of keeping it focused and keeping it brief.
And we couldn't have done it without your curiosity leading the way.
And now we're even offering a little bit more.
If you become a premium member, supporting the show with a monthly micropayment of as little as $5, you get more Skeptoid.
The premium version of the show is not only ad-free, it has extended content.
These episodes are a few minutes longer.
We get rid of the ads and we'll replace them with more Skeptoid.
The Extended Premium Show, available now.
Come to skeptoid.com and click Go Premium.
You're listening to Skeptoid.
I'm Brian Dunning from Skeptoid.com.
The Polarization Trap
How to Spot Misinformation.
Welcome to the show that separates fact from fiction, science from pseudoscience, real history from fake history, and helps us all make better life decisions by knowing what's real and what's not.
Way back in 2007, I did an episode on how to spot pseudoscience, which was mainly a checklist of fallacious logic you could use to get a good sense of whether a particular claim was based on sound science or nonsense.
That was a good list, and it still stands.
But since 2007, the type of claims we typically want to evaluate has changed.
The rise of social media algorithms has increased the number of ways that scientific misinformation can be spun and also broadened the types of misinformation you're likely to be exposed to.
It's no longer just homeopathy and 9-11 truth claims.
Today, it's political and social claims intended to outrage you and get you to share the content.
So in 2022, I did an episode on how to spot fake news.
And that was also a good list that holds up.
But it focused mostly on stories published on news websites and quasi-news websites.
So today we have something optimized for maximum relevance to the present moment.
How to recognize general misinformation.
And just as importantly, how you can help reduce its spread.
One thing that has really emerged and taken a front row seat in the 21st century is something called affective polarization.
That's affective with an A, not effective with an E. Affective polarization is the tendency for people with partisan feelings to actively dislike people from the opposing political party.
Sociologists are actively studying why it has become such a prominent feature in the world, nowhere better demonstrated than among Democrats and Republicans in the United States.
The cause is likely multifactorial, but one cause has certainly been the rise of social media algorithms, whose growth has closely tracked that of affective polarization.
The basic plot, and this is not just some random conjecture, it's been the subject of much study, is that articles on social media platforms get promoted, meaning shown to more people, when they get high engagement.
If I post a picture of a potted plant, nobody reacts to it and the algorithm ignores it.
If I post a picture of children being sacrificed by cultists, it triggers tremendous outrage and everyone clicks a reaction, reposts it, or makes a comment.
The algorithm then promotes that post to even more people, triggering even more reactions, and people spend more time on the platform exercising their outrage.
This equals more exposure for advertisements and thus more revenue.
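The feedback loop described here can be illustrated with a toy model. Everything in this sketch, the function names, the engagement weights, the audience numbers, is an illustrative assumption, not any real platform's actual algorithm:

```python
# Toy model of engagement-based promotion: posts that provoke reactions
# get shown to more people, which in turn provokes more reactions.
# All weights and numbers here are made up for illustration only.

def engagement_score(reactions, comments, reshares):
    # Comments and reshares are weighted more heavily than simple
    # reactions, since they keep users on the platform longer.
    return reactions + 3 * comments + 5 * reshares

def promote(posts, audience_size):
    # Rank posts by engagement and give the top-ranked posts
    # progressively larger audiences.
    ranked = sorted(
        posts,
        key=lambda p: engagement_score(
            p["reactions"], p["comments"], p["reshares"]),
        reverse=True,
    )
    for rank, post in enumerate(ranked):
        post["shown_to"] = audience_size // (rank + 1)
    return ranked

posts = [
    {"title": "potted plant", "reactions": 2, "comments": 0, "reshares": 0},
    {"title": "outrage bait", "reactions": 40, "comments": 25, "reshares": 30},
]
ranked = promote(posts, audience_size=10000)
# The outrage-bait post ranks first and reaches the full audience,
# while the potted plant is shown to far fewer people.
```

The key point of the sketch is the loop itself: high-engagement content earns more reach, more reach earns more engagement, and outrage is the cheapest way to generate engagement.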
This system has taken huge advantage of affective polarization.
When you look at the recent presidential elections, Supreme Court case topics, and social and religious divisions, high affective polarization equals more outrage on social media posts and thus far higher engagement.
If you've ever clicked the angry reaction or reshared an article on social media revealing some horrible new thing that the opposing political party is up to, chances are you were shown that post because the algorithm knew from your past behavior that your political polarization meant you were very likely to take that action on it.
And those extra minutes you spent on the site just made someone some money.
When so many people around the world are at the mercy of such an effective influence, it's no surprise that world governments have used it to their advantage to sow division and instability in each other's elections.
Perhaps the most famous example of this is Russia Today, a propaganda news agency founded by the Russian government in 2008 to plant divisive articles as fodder for the social media networks, while the government separately created millions of bots and fake accounts to amplify the content from those articles online.
But that's only one example.
To some degree, virtually every nation does this to its enemies.
Everyone's gotten a lot more sophisticated since the days of Tokyo Rose and Voice of America.
Even Samuel Adams, during the American Revolution, had five stated objectives for his anti-British propaganda, one of which was to outrage the masses by creating hatred for the enemy.
The net result of all of this is a vast amount of online information being shared to both advocate and to oppose just about anything you can imagine, especially anything that shocks and outrages anyone.
And here is a very important point.
These online articles and posts seem highly believable to us, regardless of their accuracy, because of the flip side of the affective polarization coin, which is a tendency to automatically like and trust people of our own political party, the same people who originally posted the articles the algorithm is showing to us.
So because we all see this content coming from trusted sources, the people we follow online, we automatically take it as fact.
Because of this, misinformation is harder to recognize than ever before, but it's not impossible.
So without further ado, let's dive right into the checklist.
Is it a divisive issue that casts some group as the villain?
This is perhaps the biggest red flag that your article could well be propaganda that ranges anywhere from exaggerated to spun to outright false.
Is it a negative article about some horrible new action by some group or nation or demographic you already dislike?
Real news articles are not divisive.
They report on important events.
Sometimes these include crimes or international conflicts.
But real unbiased news sites understand that all international conflicts are nuanced and complex.
So they generally won't report a one-sided perspective that casts one combatant as the bad guy.
If the article seems to fit your preconceptions a little too perfectly, take it as a warning that an algorithm showed you something it knew you'd react to.
Hey everyone, I want to remind you about a truly unique and once-in-a-lifetime adventure.
Join me and Mediterranean archaeologist Dr. Flint Dibble for a skeptoid sailing adventure through the Mediterranean Sea aboard the SV Royal Clipper, the world's largest full-rigged sailing ship.
This is also the only opportunity you'll have to hear Flint and me talk about our experiences when we both went on Joe Rogan to represent the causes of science and reality against whatever it is that you get when you're thrown into that lion pit.
We set sail from Malaga, Spain on April 18th, 2026 and finished the adventure in Nice, France on April 25th.
You'll enjoy a fascinating skeptical mini-conference at sea.
You'll visit amazing ports along the Spanish and French coasts and Flint will be our exclusive onboard expert sharing the real archaeology and history about every stop.
We've got special side quests and extra skeptical content planned at each port.
This is a true sailing ship.
You can climb the ratlines to the crow's nest and handle the sails.
You can even take the helm and steer.
This is a real bucket list adventure you don't want to miss.
But cabins are selling fast and this ship does always sell out.
Act now or you'll miss this once-in-a-lifetime opportunity.
Get the full details and book your cabin at skeptoid.com slash adventures.
Hope to see you on board.
That's skeptoid.com slash adventures.
Fact-Check Politicians
Does the headline blame a divisive political figure?
The classic divisive misinformation article calls out some politician, whether it's a governor, congressperson, or the sitting president, and is all about some outrageous, unbelievable new thing they are trying to push through.
Algorithms love to push these stories because so many people share them, adding comments about how outraged they are.
Now there are two sides to this coin.
Highly partisan politicians will often use divisive terminology and will often point at boogeymen in order to keep their base fired up and leverage that affective polarization to keep themselves popular.
But not all of them do it, and not all the time.
Often you'll find that a report of their outrageous behavior has nothing more than a morsel of truth to it and that there's much more to the story and their real comments in context were not outrageous at all.
So just be aware that the divisive politicians you like may genuinely be stoking outrage, while the divisive politicians you hate may simply have been quoted out of context.
There's much more sanity in the world than insanity, and it's just the algorithms that would have you think otherwise.
Search for it on an unbiased news site.
If your article is low-quality information intended to spark divisive outrage, then you will probably not find that story at all on high-quality, unbiased news sites.
So this raises the question of how to find those.
What news sources are both reliable and unbiased?
Fortunately, I can give you my favorite four right now.
Associated Press, Reuters, United Press International, and the BBC.
These are according to my preferred source, the Interactive Media Bias Chart from Ad Fontes Media, which, incidentally, ranks Skeptoid as both highly reliable and unbiased.
Look for it on a fact-checking website.
If the story is bogus, someone else has almost certainly already done the work for you.
Check it out.
Search for it on a couple of your favorite of these top four fact-checking websites.
Snopes, PolitiFact, FactCheck.org, and BBC Reality Check.
And for those of you springing to your keyboards right now to tell me how incredibly biased those sites are and how could I be so gullible, spare yourself the effort.
It's you who has already been fooled.
Always do a quick double check on the source.
Is the article from a familiar news site that you know to be legitimate?
If it's not, then you'd better do a quick check to see if this site is for real, if it's a parody or satire site, or just some garbage site that was thrown together recently without a trustworthy provenance.
Finally, what to do when you find misinformation.
Whenever you see a post on social media that you recognize to be algorithm-driven propaganda, starve it of oxygen.
Hide it, block or mute the sender when appropriate.
If it's posted by a friend, message the friend that it looks like it's probably algorithm-driven propaganda.
Make sure you do it as a private message and not as a comment on the post itself, because any comment, even a negative one, counts as engagement and boosts that article even more.
Finally, I recommend cleaning up your own sources.
If you use a tool like Apple News or some other news alert service to bring you the day's headlines, scrub those of any biased sources.
Whenever some alert pushes me an article from a source that's off-center, I block that source, relying mainly on content from Reuters and AP.
It may take a while to get used to it, since many of us have come to enjoy being in our favored echo chamber, and we look forward to each day's outrageous news about a hated political figure.
But once you do, what you'll find is that it becomes easier and easier to spot the stuff your friends are sharing as algorithm-driven misinformation.
I would like to close by sharing a personal thought that I've had these past few years.
It's something that I first took notice of during the COVID-19 pandemic, and it was a tendency that I caught in myself of prejudging people.
I would see a stranger in a store or on the street, and I would make a judgment based on their clothes, their car, their mannerisms, or even something overt such as some slogan printed on their shirt.
And based on that alone, I thought I knew all I needed to know about whether they followed government-mandated COVID restrictions or whether they disregarded them in favor of their own personal freedom or their own research.
I made snap judgments on people and decided if I liked and trusted them or disliked and distrusted them.
And the moment I realized I was doing this, I realized that I was an active part of the problem and was a contributor to the worldwide rise in affective polarization.
My choice was to prefer to be part of the solution to the degree I was able.
And now when I see a person I'm inclined to dislike, I try to see instead something that we share to find some common ground.
Even though I'm unlikely to have any interaction with that person, I still force myself to see them in a positive light.
The more I do it, the easier it becomes, and the less susceptible I am to online misinformation.
Some people correlate the rise in affective polarization with the rise in internet access.
Is this true?
In the ad-free and extended premium feed, we're going to have a look at some research that studied this question.
To access it, become a supporter at skeptoid.com slash go premium.
A great big Skeptoid shout-out for our premium supporters like FSI Crocker from Aberdizzle, Deborah Wade and the four Wadlings, Daniel Leong, and Brad Fonseca.
Let me give you a shout out too.
It's easy.
Just log into the members portal at skeptoid.com and click shout outs and stories to tell me what you want me to say.
And remember, every Skeptoid episode has a transcript page at skeptoid.com with complete bibliographic references and further reading suggestions.
And they also print out nicely into formatted PDF documents that you can share.
You're listening to Skeptoid, a listener-supported program.
I'm Brian Dunning from skeptoid.com.
Hello everyone, this is Adrienne Hill from Skookum Studios in Calgary, Canada, the land of maple syrup and moose.
And I'm here to ask you to consider becoming a premium member of Skeptoid for as little as five US dollars per month.
And that's only the cost of a couple of Tim Hortons double-doubles.
And that's Canadian for coffee with double cream and sugar.
Why support Skeptoid?
If you are like me and don't like ads, but like extended versions of each episode, Premium is for you.
If you want to support a worthwhile nonprofit that combats pseudoscience, promotes critical thinking, and provides free access to teachers to use the podcast in the classroom via the Teacher's Toolkit, then sign up today.
Remember that skepticism is the best medicine.
Next to giggling, of course.
Until next time, this is Adrienne Hill.
From PRX.