How can we protect ourselves and our children from online hate and misinformation? Julian interviews Imran Ahmed about the latest work the Center for Countering Digital Hate has done on TikTok, Twitter, and Facebook ads for state-sponsored antisemitic propaganda from Iran.
Show Notes
Parents Guide to TikTok
Julian here with a brief titled Updates on Countering Hate.
Our returning guest is Imran Ahmed and you'll hear more about him in a moment as we get to the interview.
Before we do, I want to remind you that you can follow us on Instagram at ConspiritualityPod.
You can also support us via Patreon or Apple Subscriptions to get Monday bonus episodes.
And those of you who are on Patreon also have the option to upgrade to the tier that includes live streams and behind-the-scenes videos.
Our book will be out very soon.
Please buy it.
We're very proud of it.
Imran Ahmed, it's great to have you back.
No, it's wonderful to be back.
It's wonderful to be back.
Thank you for joining us, jet lag notwithstanding.
For anyone unaware, you are the founder and CEO of the Center for Countering Digital Hate, which in my estimation uses rigorous research to hold tech platforms to account, to consult with government on policy, and to educate the public about the anti-democratic dangers of conspiracy theories, lies, and hateful propaganda.
Is that a fair summary?
It is.
It is.
I mean, the one thing I would add there is the unique way in which they have flooded into our information ecosystem as a result of the active decisions taken by the most dominant platforms for communication on Earth today, connecting 4.5 billion of us: Facebook, Instagram, Twitter, TikTok and YouTube.
We've had you on twice before.
So today you are becoming our most featured guest, which is wonderful for us.
Congratulations to you.
We've had you on before to talk about how anti-vaxxers use social media to target people vulnerable to their disinformation.
And then again, when you were in town covering the Defeat the Mandates rallies.
But honestly, it feels like much more.
Because your really big reports in 2021 on the disinformation dozen and then on pandemic profiteers are really central pieces of research in our work as well.
So we reference them all the time.
And so I check back in and find that your team has been incredibly busy of late.
I count six new reports so far this year, most of them released in the last couple of months.
So I want to ask you about three things today.
Toxic Twitter, your most recent work on how Iran spreads anti-Semitism on social media, and then your report going back actually to December on the dangerous content served up routinely to young people on TikTok.
So let's start with Twitter.
When last you were here, I don't know if you remember, Elon Musk was on the verge of buying Twitter.
There was a lot of back and forth on whether or not he was going to be forced to do that.
And I know that the CCDH has covered the impact of his reign.
Just this past week, we've seen how his posture as a free speech absolutist is entirely malleable in a hypocritical way to fit the actual censorship demands of authoritarian leaders like Erdogan as he struggles to maintain power in Turkey.
So what can you tell us about the prevalence of hate, misinformation and conspiracism under this new Twitter regime?
Well, our great fear when we last talked was that Elon Musk clearly never wanted to buy Twitter.
I mean, you know, there may have been a moment in which he really, really felt like it, but then he spent months trying to get out of it, trying to get his lawyers to find a way that he could avoid buying this poisoned chalice.
And he was forced to do so in the end by the courts.
Now, the problem with Musk is that he's incapable of admitting a mistake.
And given that he'd promised free speech, and he'd promised a whole bunch of pretty extreme types that he would allow them free rein on that platform, you know, the first thing he did when he came on board was eviscerate the moderation teams.
He made it clear, through his own behavior and through what he said, that he would be allowing a lot more hate speech and many more potentially harmful types of speech than before, in particular COVID disinformation.
And that was really a bat signal up in the sky for hate actors: they could see that suddenly they were welcome on that platform, that they could really unleash themselves. And it also attracted new hate actors to the platform.
So what we found was that within the first couple of weeks of Elon Musk taking over, there was an enormous jump in the amount of hate speech on that platform.
The way that we did it is we looked at the prevalence of various slur words and how frequently they were used on the platform, and we saw enormous spikes, a 400% increase in the number of times the n-word was used.
And in words that slurred Jewish people, gay people, trans people, and Hispanic people. All of those words leapt up after Musk took over.
Now, he claims that's just because people are racist in real life and so that's what they do. And he even claimed, well, maybe that's because Black people are reclaiming the word for themselves.
But that usage is the baseline you expect anyway.
That doesn't explain a sudden 400% increase after he takes over.
We then did a series of other studies as well, looking at, for example, Blue Tick accounts and finding they were much more likely to spread disinformation about Ukraine, vaccines and the climate crisis than legacy verified accounts.
We found that he'd reinstated tens of thousands of accounts of neo-Nazis, white supremacists,
misogynists, spreaders of dangerous conspiracy theories.
And we studied 10 of those hateful accounts.
And then we really kind of found the answer to it.
You know, I kept being asked when I was doing interviews, like, why has Musk put up this bat signal?
And really, you have to understand it this way.
Musk always does what's good for him, and in the end he's a businessman. So we looked at 10 accounts, including Andrew Tate's, that were reinstated, and we worked out that, by looking at the number of impressions their tweets get, multiplying that by how frequently an advert appears per impression of a tweet, and then multiplying that by the cost per thousand impressions to an advertiser, you can work out the value of the impressions generated by those 10 accounts. It came to $19 million for those 10 reinstated accounts.
So what he was doing was bringing back these high-frequency posters that post really contentious material that generates lots of attention and lots of controversy and anger.
Why was he doing that?
In the end, the distressingly banal reason is that it made him money.
As simple as that.
You know, then we had the phenomenon of these so-called Twitter files.
And I was perhaps mildly surprised not to see the CCDH included in the supposed villainous plot by all of these think tanks and universities, the media, the deep state, trying to censor free speech under this false premise, supposedly, of fighting disinformation.
And we actually talked a couple of weeks ago to a digital propaganda researcher, Renée DiResta, who now has both the dishonest "independent journalism" of the Twitter Files, courtesy of Shellenberger and Taibbi, trained on her, as well as the House Subcommittee on the Weaponization of the Federal Government coming after her for her trouble, claiming that she's part of the censorship industrial complex, or indeed the leader of it.
What are your impressions of how Elon and his handpicked substack journalists and this specious Republican committee are actually affecting the fight against disinformation and how the public understands it?
Well, I mean, I feel terribly bad about some of the organizations that are being targeted
because some of them are people that have done incredibly important and impactful work
in exposing Russian disinformation and exposing the impact of disinformation on American citizens
and also the cost in terms of lives lost.
It's not just Renée DiResta.
There's people like Nina Jankowicz who've been slurred.
There's Harvard University's disinformation unit, Brown University, Stanford.
Everyone's got guns trained on them.
CCDH hasn't.
And I think that's down to two reasons, really.
One is the strategic decision that we took three and a half years ago when we set up
to never take government money.
And the truth is that researching disinformation is costly.
It takes time.
It takes real expertise.
We have, you know, over 20 staff now and they cost money to pay.
But we chose never ever to take government money or tech money.
And so our connections to both government and tech are that we are critical and honest.
To tech, we're critical and honest opponents.
Quite often, to government, we're critical and honest friends.
Because we believe that in the end, regulation and legislation will be required.
But I think there was that one reason.
And the second reason is, I think that they've worked out that if CCDH gets any more attention, it just helps us.
You know, we are very, very good at riding any attention that's given to us because, frankly, our independent research is strong enough that I would put it up, you know, I'd put it up on Fox News, on Newsmax.
In fact, our research has been featured on Fox News and Newsmax.
When we come to talk about the Deadly by Design report, that had an enormously positive response across the board, from left to right.
So I think that's been difficult for them.
They don't want to give us the attention that they fear we would weaponize for our own benefit.
And that's me being very, very candid about why I think they haven't really come after us.
But I do think that this is part of an assault on... What is quite incredible is that people right now are going after the people who study disinformation rather than the people who promulgate disinformation, the people who produce it and distribute it.
The scumbag anti-vaxxers, the climate deniers, the hate actors, the neo-Nazis, and, as you and I will talk about shortly, the Iranian regime.
So they'd rather go after Renée DiResta than the Ayatollah Khamenei spreading hatred against Jewish people.
It's just topsy-turvy and it shows that they've lost their moral compass.
Absolutely.
Yeah, I mean, you mentioned Renée DiResta as part of that team that really did a lot of the groundbreaking research in understanding Russian interference in 2016.
So, speaking of state-sponsored disinformation, you just brought up Iran.
You published a report very recently called State Hate.
Tell me more, because I think a lot of people aren't aware of this, about Press TV and Palestine Declassified, because the impression I'm getting is that we see ubiquitous antisemitic conspiracy memes being served up in different ways to both the left and the right, yeah?
Yeah, and what was really interesting about this report is that it showed the extent to which the left is just as vulnerable to hate and the disinformation that underpins hate as the right.
So, Press TV is Iran's state propaganda channel aimed at Western audiences.
It's in English.
It's not in Farsi.
It's designed to spread hatred in our countries.
And their primary target is Jewish people living in the US, UK and in Europe.
So the report that we... and Press TV, keep in mind, has actually been subject to intervention by governments.
So the UK, for example, has banned Press TV in the UK.
In the US, the FBI seized Press TV's domain.
because it was seen as being part of a foreign state propaganda operation targeting US citizens.
And in fact, Press TV's biggest audience is actually in the US, just marginally, but still,
the US is the biggest market for Press TV.
What we found was that they've managed to circumvent all this official activity, because the levers that governments could pull were really about broadcast, their capacity to regulate broadcast, to an extent, when it comes to foreign propaganda.
And instead, they've found willing enablers for Iran's antisemitic propaganda, which is also, don't forget, anti-LGBTQ+, anti-dissident, anti-democracy, anti-liberal, opposed to small-l liberalism, the ideals upon which the Constitution was written, which they oppose fervently.
They're able to circumvent that thanks to Twitter and Facebook.
Twitter and Facebook give them an audience of 11.5 million viewers in Western countries, but that's just direct viewers.
Think about the retweets, the engagement they get by spreading hatred.
We know that that amplifies it to tens and hundreds of millions more people.
And we specifically looked at one program called Palestine Declassified that's hosted by two Western useful idiots, a guy called Chris Williamson, who's a former Member of Parliament in the UK, and a guy called David Miller, who is an antisemite and a sacked professor.
So he was a professor at Bristol University and was sacked.
But they come from a left-wing political persuasion.
They're from the far left in the UK, from the sort of Corbynite left, which was supporting Jeremy Corbyn when he was leader of the Labour Party.
But they spread the kind of left-wing antisemitic ideas, so the ideas that Jews are greedy, that Jews are willing to sell out, essentially that they will put their personal greed ahead of human life in all instances, that Jews are fundamentally venal. Those form part of a series of slurs that are uniquely left-wing, which scholars like Dr. Dave Rich in the UK have identified as being part of the canon of left-wing antisemitic thought.
Well, these guys are constantly renewing that.
And it was really dispiriting doing that report, because you just realize that we have this kind of central line into the heart of our society, which has been put in place by these social media companies giving direct access to people's mobile devices, through notifications and everything else, and, through the amplification of algorithms, giving them access to their attention.
But there is absolutely no curation whatsoever.
And, you know, what really struck me was the idea that the Ayatollah Khamenei... that these platforms will say, well, they have a right to free speech.
I'm like, there is no right to free speech in Iran.
There is no right to protest.
There's no right to express your identity.
There's no right to love who you want, because you might be strung up by a crane. And people are talking about that regime's fundamental First Amendment rights?
Are you kidding me?
So, I think it's just a really, really clear example of how profit-obsessed and how willing these people are to do business with the worst regimes in the world.
And, you know, reinforced, Julian, as you said right at the beginning of this, by Elon Musk's recent decision to start censoring the opposition to the increasingly authoritarian, autocratic Erdogan in Turkey, because he wanted to retain market access, because it's all fundamentally about dollars.
Yeah, so let's keep Twitter up and running in the name of free speech, even if we have to censor anyone who is trying to fight for democracy within that regime.
Yeah, it is quite remarkable, quite cynical, and yet it's the clearest possible example of how these companies do not live up to any of the ideals they talk about.
If they really believed in freedom of speech, they would have allowed the Turkish opposition to speak.
If they really believed in human rights and the value and the dignity of each human being, they would not be broadcasting anti-Semitic propaganda via their channels and promoting it.
I want to turn to your report on TikTok.
One of your guiding missions seems to be to impress upon governments the importance of holding big tech platforms to account for the harm they cause.
And as you were just saying, for how they monetize that harm and the messaging that perpetuates that harm and to make them stop.
So tell us about the case of Molly Russell, because this was heartbreaking to read about.
Yeah, it is heartbreaking.
Molly Russell was 14 years old.
She lived in northwest London.
And in 2017, she died from an act of self-harm after, as the coroner found, she'd become overwhelmed by disturbing content on social media, primarily Instagram, and also Pinterest.
And Molly's father, Ian Russell, who is one of the most tenacious and courageous people I have ever met, forced, through the courts, Meta and Pinterest to be named as respondents to the coroner's inquest. So Meta and Pinterest had to provide evidence, they had to respond to the coroner, and in the end the coroner found both companies partly responsible for her death.
What the coroner found was that she'd been flooded by algorithmically recommended images, which both glamorized and normalized the idea that if you feel bad inside yourself, the normal thing to do is to hurt yourself on the outside too.
And if you feel really bad about yourself, it is normal to take your own life.
You have to imagine the thousands upon thousands of images being flooded into someone to fundamentally reprogram the way they think, to override one of the most fundamental human instincts, that of self-preservation, the instinct to stay alive.
Ian actually recently joined the board of CCDH in the UK. We have two organisations, one a 501(c)(3) in the US and one a non-profit organisation in the UK, which operate as one entity.
And Ian and I have now worked together to write a parents' guide to help parents understand a little bit about how to create those dialogues with their children.
But, I mean, the case of Molly Russell is one that so many parents I speak to tell me is at the front of their mind when they see their kids hunched over their phones or their laptops or their iPads.
Kate Winslet talked about it the other day at the BAFTAs, about the terror that parents feel. And we've just launched a PSA, actually, all across the United States, which is kind of an update to that old PSA.
Do you remember that old PSA?
It was sort of, it's 10pm, do you know where your kids are?
Well, nowadays we know where our kids are, but we don't know who's with them.
We don't know what algorithms are promoting what content to them.
And it's updating that and saying, well, look, regulators and legislators, it's time that you had our backs, because we want to make sure that the content being flooded into our kids' eyeballs isn't stuff that might fundamentally hurt them and destroy their capacity to live meaningful, happy lives.
Now, look, that was what we found with... Yeah, sorry.
I find it really... Julian, you know that I got married in 2021, and my wife Elizabeth and I very much hope to have children, and it's just one of those things that even... This is the only bit of my work that we ever talk about, and we talk about it quite frequently, sitting on the sofa at night, just going, well, how will we manage?
Yeah, yeah, I have a little girl who just turned five.
And we talk about this all the time, like it's coming.
It's coming.
So tell us about this TikTok study, because the findings were really way beyond what I would have even imagined.
It's called deadly by design.
What did you do?
What did you find?
Well, I mean, Deadly by Design is the name of the report, and I'll tell you what, we spent so much time discussing whether or not that was too strong, and what we decided was that it was both eye-catching but accurate, because what we found in the study was that we simulated, we created accounts as 13-year-old girls in four different countries, the US, the UK, Canada, and Australia.
We then just let the system give us what it wanted to give us.
So on the For You page, that's where TikTok will algorithmically recommend to you what it wants you to see.
And what we found was that for a 13-year-old girl, within 2.6 minutes of creating a new account and going onto the For You page, self-harm content is being recommended to those accounts.
Within eight minutes, eating disorder content. On average, every 39 seconds, something harmful was being recommended to these 13-year-old girls.
Now here's what was really disturbing, is that when we then changed the names of the accounts, so instead of having a girl's name like Susan, we changed it to Susan Lose Weight.
And that's based on research that shows that when kids do have eating disorders or have mental health issues, they quite often express that in their bio or their username.
Molly herself had a second Insta account which expressed her inner angst, the chaos she felt inside. And when we changed the names to the likes of Susan Lose Weight, those accounts got 12 times as much self-harm content.
So the algorithm recognizes vulnerability, and instead of doing what you or I would do, Julian, if we saw a child that was clearly vulnerable, which is to protect them and to be as cautious and as careful as possible with that child's mental health, the algorithm says: we found a way to addict this kid.
I know what their vulnerability is.
Let's give them 12 times as much.
And I think that's why we felt that there was no other name for this algorithm than to call it deadly by design.
It is by design that this algorithm kills.
We also looked at the eating disorder content and we saw that it was sort of meshed together through hashtags, and the content on those hashtags had 13.2 billion views.
So when TikTok's initial response to our study was, well, this is a one-off, it must have been an accident that they saw this on their test accounts and there's a small sample size, our response was, it's not a one-off, it's not even a million-off, it's a 13.2 billion off.
So, something is systemically wrong.
And Julian, in any other circumstance, with any other product in the world, if it was killing children, if it was giving children eating disorder and self-harm content, you would run onto the factory floor, you would pull the brake and you would say, stop, stop, this is an emergency.
Our product is dangerous and we've got to work out what the hell's gone wrong.
Their response was to gaslight us initially and then to carry on with business as usual.
And that is outrageous and so emblematic and typical of an industry that has lost its moral compass, has lost the connection between its profits and the impact that it has on its people.
Yeah.
I mean, you talk about how these young girls are getting served up a lot of very dangerous content, especially around eating disorders.
We covered the Andrew Tate story, as everyone did, a month or two ago.
And I think you've talked, too, about how TikTok was one of the main vectors for Andrew Tate getting incredible influence over boys, you know, as young as 11 with his really corrosive, toxic ideas.
Well, the Andrew Tate stuff we did before we did the eating disorder stuff, and we found there was this really interesting parallel: the 14-year-old boy accounts that we set up for the Andrew Tate study got Andrew Tate videos within 2.5 minutes of setting up a new account, and the girls got self-harm content within 2.6 minutes.
This is a platform that algorithmically is pushing our children in a particular direction.
It's teaching boys that relationships between men and women are fundamentally based on violence, dominance, and power.
And it's teaching girls that they're not good enough, they're not pretty enough, they're never going to be loved, and that they need to starve themselves with 700-calorie-a-day diets.
And there is a fundamental misogyny that underpins both.
A fundamental hatred of women that underpins both of those.
And also a hatred of those young men too, because those young men will never ever get to enjoy what you and I have in our lives, which is to have relationships with our partners that make us more than just the sum of two individuals.
There's something transcendent and wonderful about a healthy, functioning relationship.
And those boys will never get to enjoy that because they believe relationships are a zero-sum game, that they can't transcend the components.
And that is so tragic to me.
Yeah, I agree with you on that.
And in both cases, I think, to add my two cents, what you see is very young people being taught that the way to deal with their very normal, although it exists on a spectrum, human vulnerability, is to go down these incredibly destructive paths
where the possibility of the healing and the connection that comes from coming to terms with your own vulnerability
and sharing it with another human being is completely disdained and spat upon.
Yeah, I mean, you know, funnily enough, I filmed a talk show a few months ago, and the guy I was debating was Rollo Tomassi, who considers himself to be the godfather of male supremacism.
He claims to have trained Andrew Tate, and it was interesting with Tomassi, it was so clear that he knew how to ensnare vulnerable young men and then take them down a radicalization pathway that ended with them being as broken as he was.
I mean, I felt he was a broken human being himself.
His value system was so topsy-turvy.
And, you know, you see this with conspiracists as well and with hate actors.
They want to make people as broken as they are.
That's part of the impulse to radicalize, to proselytize.
They know that their belief systems are outwith the norms of our society, outwith what we tolerate within our society.
And they want to make more people as intolerable as they are, because then maybe they wouldn't be so...
Yeah, they're going to tell all of these young people the really dark "truth" about how to gain some kind of mastery, especially with the boys, right? To gain mastery in the realm of relationships, you have to basically be a sadist, and anything else is to be a sucker, essentially.
It was amazing watching that, watching you with these three guys who were basically trying to outcompete one another as to who had the most updated version of misogynistic posturing.
I think what was so interesting, though, about both these studies is the disconnect between what the kids were experiencing and what their parents realized.
So in particular, I remember seeing a focus group of parents watching their boys talk about Andrew Tate, and the boys were incredibly fluent in his ideology, in his ideas. Some of them were quite sympathetic to him, some were less sympathetic, but their parents were watching and going, what the hell is going on?
Who is Andrew Tate?
And why is my kid saying these horrifying things about women?
Again, going back to the PSA, there is this fundamental disconnect.
Personally, I'm in my mid-40s.
I don't really use TikTok.
I find it gives me a migraine, to be honest.
I mean, to be frank, most screens do these days, but I realize that a lot of parents just have no idea about the information ecosystem their kids are immersed in.
You know, we've talked before about how the internet changes it, so we don't watch the same TV programs, we don't sit around the telly as a family anymore, but this is fundamentally different.
This is about parents just having no idea at all as to the kind of content their children are being fed, content that can really, really re-socialize them completely. And I think that's part of the reason that, as an organization,
that they're being fed. And I think that's part of the reason that, as an organization,
we haven't ever looked at these sorts of things before, but we just felt it was so exigent,
it was so important that we deal with it. If you remember, Julian, when we were set up,
we were actually an anti-hate organization. It was in March 2020 that we pivoted everything at
CCDH to look at vaccines, because we felt that the need was so exigent and there was such a threat to
human life that we had to shift everything to looking at disinformation about vaccines and
about COVID. In this instance, we have partially pivoted the organization towards looking at
children in particular, because the need, again, is so exigent, and the potential future harm to
young people is so enormous.
An entire generation of kids raised on algorithms who have a phenomenally distorted view of themselves, their bodies and their relationships to others within society.
And that's why we're really, really sort of focusing hard on this as an issue right now.
I know that you mentioned it a few minutes ago too, you have a parent's guide that people can download for free from your website.
What are some of the top recommendations to parents that are in that guide?
I know that it's specifically for TikTok, but it may have broader application as well.
Yeah, and people can go and find the parents guide immediately at protectingkidsonline.org.
So that's protectingkidsonline.org, which is also linked to from our PSA.
But the guide, which Ian, Molly's father, and I wrote together, we spent a lot of time thinking about, because what we don't say is: ban your kids from using platforms.
That's just not realistic.
They'll find ways around it, especially as they get well into their teen years.
The other thing is that just invading their privacy isn't necessarily going to be the most effective way either, because they may set up secondary accounts, finstas, other things, so they can look at content without letting you see what they're looking at.
So instead, what we encourage is for a dialogue and a real symmetry of information exchange, where we start discussing with our kids, what are you seeing on those platforms?
Tell me the kinds of content that you're seeing frequently.
And then for parents to help kids contextualize that information, to create trust between the two, so there's a symmetrical exchange of information that allows parents to understand what sort of stuff their kids are seeing, and allows kids to understand whether or not some of that content is normal, with parents explaining a little bit about algorithms and why they're seeing certain things so frequently. And they may then make the decision collectively, together, that actually some of this stuff is so toxic that they'd rather not see it.
So maybe, you know, take time away from the platform, or quit a platform entirely, or restrict the time that you spend on that platform, or reprogram the algorithm.
So TikTok, after our report, announced that it would allow people to reset the algorithm, so it would go back to basics and allow you to reprogram it all over again.
And, you know, it will allow for, I think, a richer, more honest dialogue that doesn't have, well, either ignorance or deception at its heart, which is the only way that parents can cope with it.
But the other thing we say is it's time for you to get off your bums and start asking for change, because the truth is we're having to write this guide because platforms and, to be frank, Congress have not had the backs of parents in America and around the world.
And, you know, just before Christmas, there was a Kids Online Safety Act, which should have been voted on.
It should be well on its way to being law by now.
But instead, Congress decided to sit on it.
The committees sat on it and it wasn't pushed for a vote.
That was, to my mind, outrageous.
Every single month we delay, children are being damaged by this content.
And parents are absolutely desperate for someone to have their back.
The polling that we do shows that parents are really worried, but they feel there's nothing that can be done about it.
There is something that can be done about it.
The problem is that Congress doesn't have parents' backs right now.
Imran, it's always good to talk to you.
You're fighting the good fight on multiple levels, I think more than anyone I know.
So thank you so much for sharing with us where you're at in your work.
It's my pleasure to be back with you.
And thank you so much again for elevating our work and taking such an interest in it.
Thanks for listening to Conspirituality.
We'll see you on the main feed for new Thursday episodes, occasionally here on Saturdays for these briefs and on Patreon.