Predators, Algorithms, and Profit: How New Mexico Took Down Meta
New Mexico Attorney General Roald Torres details a landmark $300 million settlement against Meta, achieved by proving the platform's algorithms actively connect minors with predators rather than acting as passive bulletin boards. By circumventing Section 230 immunity through evidence of intentional addictive design, Torres likens this victory to the "Big Tobacco" moment, exposing how social media companies knowingly exploited half a million children daily. Looking ahead, he advocates for privacy-preserving age verification and iterative policies to regulate AI, framing this legal blueprint as a necessary Progressive Era-style reform to secure a safer digital public square. [Automatically generated summary]
Transcriber: CohereLabs/cohere-transcribe-03-2026, sat-12l-sm, and large-v3-turbo
Big Tobacco Moment for Tech [00:06:03]
Axis Mundi.
Springtime for me is the season where I go outside, take a hike, jump in the ocean, plant my garden.
It's also the moment when I look forward to spending time outside around a campfire or with my kids at the park, the people I love the most.
And that reminds me of the responsibility of protecting them and taking care of our financial future.
Policy Genius makes dealing with financial planning simple.
I trust Policy Genius because their team works around the clock to fulfill my needs.
They compare quotes from the top insurance companies around the country to make sure I get the right rate.
Prioritize your peace of mind.
Go to Policy Genius, an online insurance marketplace that allows you to compare quotes from some of America's top insurers.
Their team will help you get what you need fast.
Protect the life you've built.
With Policy Genius, you could find 20-year life insurance policies starting at just $276 a year for $1 million in coverage.
Head to policygenius.com to compare life insurance quotes from top companies and see how much you could save.
That's policygenius.com.
Welcome to Straight White American Jesus.
I'm Brad Onishi, author of American Caesar, founder of Axis Mundi Media.
Usually at the beginning of the week I do a solo episode to break down some headlines, an issue, or an article, but today I have an interview that I just have to share with you.
And I wanted to make sure we got it out quickly.
So today I'm joined by Attorney General Roald Torres of New Mexico.
He's New Mexico's 32nd Attorney General.
And this is the AG, you might have seen a headline, you might have read an article, some of you might have dug into this deeply, but this is the AG who took on Meta in court and won just a couple of weeks ago,
a landmark decision about Meta's lack of protections for young people, a settlement of over $300 million, and a second court appearance, the second half of the case, to be decided here in about a month, which will involve more deliberation about exactly how much money will be awarded and how much Meta will need to pay.
But as I get into with Attorney General Torres, this is a case that could be the kind of big tobacco moment for big tech and for social media, because instead of talking about these companies as platforms or as bulletin boards for content, where they can skirt responsibility for the stuff that is posted, the gross, disgusting content, the predatory content,
the ways their products expose underage people to predators, to harmful images, videos, and so on, this may change the game.
Now, some of you may not be convinced of that.
But I did ask him about this.
I asked him about the nuts and bolts of the case, the way that they created a fake profile of an underage girl to demonstrate the ways that young people are preyed upon on meta platforms, but also what the product design means and the addictive propensities of some of these products and how all that fits into the mix.
I want to thank some of you in our Discord community who really helped me flesh out some of my questions and thoughts about this interview.
And I hope that it's something that will shed some light on where we're headed.
I think you know that for me, big tech is a constant concern because of the fascist elements that are now in Silicon Valley.
It's a big part of my forthcoming book, American Caesar.
So, having a chance to talk about this with Attorney General Torres for me was important.
It taught me a lot, and I hope it does for you too.
Before we go to the interview, I want to ask you to do a couple things.
Think about subscribing to our newsletter.
I want to ask you to go subscribe to One Million Neighbors, a new podcast series from Axis Mundi Media about how Americans in the Midwest, specifically the Twin Cities, helped to resettle one million Southeast Asian refugees in the 1970s, and how that set the table for the activism and neighborliness we are seeing today in St. Paul and in Minneapolis.
I want to ask also for you to think about becoming a subscriber.
That's the only way we can do this show.
You can find that in the show notes.
It's 50 bucks a year, and it is exactly why we are here so often, bringing you interviews, coverage, analysis.
It's in the show notes and everything else.
Appreciate all of you.
Hope that you learn a lot from our conversation.
Here we go.
As I just said, we have an extra special guest today, somebody who I just can't wait to talk to, and who has done something that I think a lot of folks felt might never happen, and that is holding Meta accountable in some way.
So that is Attorney General Torres from the great state of New Mexico.
Thank you for joining me.
Thanks for having me.
Appreciate it.
It's great to have you here.
And I have so many questions about this case and so many issues I want to try to flesh out here.
I know people are intensely interested in this.
I know you're doing interviews all over the place to talk about it.
But the case really centers on a teenage girl named Issa who signed up for Facebook, was ready to kind of get on there like all of us and be on social media.
Soon she is inundated with unwanted messages, people in her inbox sending her X-rated photos.
But the problem is Issa's not real.
Issa's part of Operation Metaphile.
Tell me about that.
Well, it actually starts with my work about 20 years ago as an internet crimes prosecutor in this same agency.
I used to work on child pornography and child solicitation cases, and it was a real eye opening sort of moment for me to realize that what was occurring 20 years ago in the deepest and sort of darkest corners of the internet had all migrated onto some of the biggest social media platforms.
Designing Predatory Platforms [00:15:11]
I think there was a growing sense of awareness around the psychological impact, the addictive nature of these products, the way in which they amplified body image issues, suicidal ideation, and self-harm.
But it was the element of potential sexual exploitation that really got me interested.
And what that led to was the development of a slightly different approach from a litigation standpoint, one that really homed in on what it would be like to be a young girl in these spaces.
And that's how the undercover account of Issa was developed.
It was developed using the same techniques that we use in criminal investigations.
As you said, she was inundated, absolutely flooded with requests for graphic sexual material, sexual solicitations.
What was even more shocking is that in response to that explosive growth, rather than raising some concern, Meta had actually delivered information to the account about how Issa could grow her following, how she could amplify and monetize that growth.
And I think that was the moment where it really came through that this was a much deeper and darker problem inside the company, and one that only grew as we got further and further along in the case.
One of the terms that came out of the case was that Facebook acts as a virtual victim identification service.
And I'm wondering if you can help us understand what that means.
Well, it really comes down to the mechanics of the algorithm and product design.
Arturo Bejar was one of the leading whistleblowers, someone who worked on the safety team, did some research inside the company, and then came forward with revelations about their lack of concern, or lack of response, to some of the issues that he raised.
When he was on the stand, he said, Look, these products are very good at connecting people with their interests.
And if you have an interest in young girls, the product will be very good at connecting you with young girls.
And if you think about it, most people who are on social media platforms go there trying to connect with friends, trying to have social interaction, but at the same time, they're creating a digital representation of the things that interest them, the things that they are motivated to look at, the content that they are drawn to.
For somebody who's just in the space operating in a normal way, those same mechanics may connect them with a vacation that they may be interested in, or a car that someone's trying to sell them, or a pair of sneakers.
But if it's a predator, what it's going to do is connect them with young people, with other users on the platform, in a space that the platform knows and can identify as fitting an interest for someone who's engaged in predatory behavior.
And it's those mechanics that drive exploitation in these spaces.
And frankly, those mechanics that were at the heart of the product liability case that we presented in Santa Fe.
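To make the mechanics concrete, here is a minimal sketch in Python of the kind of content-neutral interest matching Bejar describes. The names and scoring here are illustrative assumptions, not Meta's actual system; the point is that the ranking math never inspects what an interest is, only how strongly profiles overlap.

```python
# Hypothetical toy recommender, NOT Meta's system: the same math that
# routes a sneaker fan toward a sneaker shop routes a user with an
# interest in young girls toward accounts of young girls.
from collections import Counter

def interest_vector(engagement_events):
    """Normalize a user's engagement history into an interest profile."""
    counts = Counter(engagement_events)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def similarity(profile_a, profile_b):
    """Dot-product overlap between two interest profiles."""
    shared = set(profile_a) & set(profile_b)
    return sum(profile_a[t] * profile_b[t] for t in shared)

def recommend_accounts(user_profile, candidate_profiles, k=3):
    """Rank candidates by interest overlap; the topics themselves are opaque."""
    scored = sorted(
        ((similarity(user_profile, p), name)
         for name, p in candidate_profiles.items()),
        reverse=True,
    )
    return [name for _, name in scored[:k]]

user = interest_vector(["young_girls", "young_girls", "gymnastics"])
candidates = {
    "minor_account": interest_vector(["gymnastics", "school", "young_girls"]),
    "sneaker_shop": interest_vector(["sneakers", "streetwear"]),
}
print(recommend_accounts(user, candidates))  # ['minor_account', 'sneaker_shop']
```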
Correct me if I'm wrong.
From what I understand, Facebook employees were aware of the scope of this going back to 2018, pre-pandemic.
There's been a deep awareness in the company for some time, both as to the addictive nature of the product itself, which was another key component of the presentation we made to the jury.
This is a company that employs behavioral scientists, folks that are constantly trying to adjust the user interface and add features that will increase engagement.
At the same time, there are aspects and elements of that experience that lend themselves to the kind of predatory behavior that we've identified.
And one of the things that we were primarily concerned about is the way in which adult users were able to communicate with underage users.
Ironically and sadly, that used to be a place where referrals to law enforcement would occur, where we used to see some of that communication traffic on the company's own messaging apps.
The day after we filed the lawsuit, the company actually made the decision to implement end to end encryption.
And what that effectively meant is that they blinded themselves to the nature of the communication that was at issue in our case.
They have since pulled back from that.
Right before we got a verdict in the case, they announced that they were going to stop doing end to end encryption.
I think in part because they recognized that that was going to be a really damning thing, not only in the eyes of the jury but in the eyes of the public, because that's the kind of design choice and feature that was implemented under the guise of protecting people's privacy.
But what it really meant was blinding themselves to widespread traffic in sexual exploitation; by their own accounts, something on the order of half a million children every single day, all over the planet, are exposed to sexually explicit or sexually exploitative material.
And that's just based on what we know from their own accounts.
What we also know is they just haven't made nearly enough in terms of the investments needed to make those spaces safer.
So, in order to win this case, you had to do something that has proved really difficult to do, and that is circumvent, or somehow reckon with, Section 230.
So, folks listening, some of you are going to be highly aware of this.
You're product designers, you're in the space, some of you are not.
Section 230's parent statute goes back to the Communications Act of 1934, amended in the mid-90s.
But it basically says that if you're a computer service provider, if you're somebody who's a bulletin board for social media posts, online content, et cetera, you cannot be held liable for the information that's posted by somebody else, somebody who's using your platform, somebody who's using your virtual bulletin board.
And usually in these cases, Meta and other platforms have been able to say, we are the bulletin board.
We are not the content creators.
We're not the authors.
How did your strategy in this case get around the kind of usual Section 230 defense?
Yeah, so Section 230 was added when the Communications Decency Act amended that statute in 1996.
So, for context, I mean, this is back at a time when I was still waiting for a dial-up tone, you know, through AOL.
There were no smartphones, there were no social media platforms, and the technological landscape had evolved dramatically.
So, the first and most important thing for people to understand is that the lawsuit wasn't about content; it's not about the third-party content of individual people, even individual people who have an abhorrent and criminal fascination with children.
What it was focused on were the specific design choices, the features of the product itself.
And in that sense, it's very analogous to what had happened in Big Tobacco, you know, in the late 1990s.
This was a product that was knowingly and intentionally engineered to be addictive.
It was knowingly a product that was dangerous to young people.
And the company, despite being well aware of those dangers and those known potential harms, misled and lied to the public, including parents and young people, about its relative safety.
And so, what we did is we focused on how the product is designed, irrespective of any content.
In other words, it's content neutral.
Infinite Scroll is a good example.
The idea that you are given or fed an automatic display of video doesn't have anything to do with the content, but they know it's a feature that is explicitly designed to engage a developing brain in a way that plays on its propensity for addictive behavior.
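For illustration, a minimal sketch, assuming a simplified feed API (the function and names are hypothetical, not Meta's code), of why infinite scroll is content-neutral by construction: the pager never inspects the items, and it never runs out.

```python
# Hypothetical infinite-scroll pager: content-neutral and endless by design.
def fetch_feed_page(ranked_items, cursor, page_size=10):
    """Return the next slice of the feed and a cursor for the next request.
    Clients call this automatically as the user nears the bottom, so
    continuing never requires an affirmative choice to keep watching."""
    page = ranked_items[cursor:cursor + page_size]
    next_cursor = cursor + len(page)
    if next_cursor >= len(ranked_items):
        next_cursor = 0  # wrap around and refill: there is no last page
    return page, next_cursor

feed = [f"video_{i}" for i in range(25)]
cursor = 0
for _ in range(4):  # four "scrolls" never reach an end
    page, cursor = fetch_feed_page(feed, cursor)
    print(page[0], "...", page[-1])
```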
We also can point to specific features that enhance and facilitate the connection between predators and underage people.
And so, by focusing on design and by focusing on communications from the company about that design, we were able to, you know, prevail in a motion to dismiss under Section 230.
It was certainly something that they have, you know, they've hidden behind for years.
They tried to get out from this lawsuit using that same theory.
Unfortunately for them, states' consumer protection laws are something that I think are going to be a vehicle for advancing this kind of litigation, not only here, but across the country.
On this issue, someone in our community put this really well, and they're a product designer.
I was asking them about this, and they said, Look, the product design aspect of this case is really novel because it establishes agency and bias on the part of Meta.
So, under Section 230, the onus is on the party posting the content: if you're the one putting gross things on a bulletin board and we catch you, all right, you're in trouble, pal.
Here, the agency is on the people who designed the bulletin board, and that bulletin board makes you closer to a publisher than to a passive poster, a passive space that people fill in on a platform.
I mean, do you agree with that?
That what's happening here is really putting agency on Meta as designing something that will put content in front of young people, others, that is potentially harmful, potentially addictive, and certainly opens them up to predators.
Yeah, I think that's right.
I think Meta likes to use that bulletin board analogy.
Other social media companies do too.
This idea that someone wants to sell a bike, so they write up a post about the bike, they put it up, and people see it or they don't.
The problem is that this isn't a bulletin board at all.
This is a company that, because of its business imperatives, has to maximize engagement and growth.
And the need to maximize engagement and growth actually takes them from being passive and neutral as to the content to stepping into a more active role, through their algorithm, in feeding specific types of harmful or potentially harmful connections and interactions with users on the platform.
And at the same time, they have a well documented business interest in getting young people hooked early because they've actually done studies inside these companies about the long term return on investment on getting a younger and younger member of our society, a child in our society, onto the platform.
And then they can project forward how to monetize that over time.
But that requires keeping that young person engaged, not letting them off, right?
Because they're trying to commodify and monetize attention, they have an incentive to build an increasingly powerful attention machine.
And that attention machine is also connected to this interest ranking process that they have, knowing full well that predators are in this space.
So when you combine those two things together, you create a very clear picture of their liability, because they're not just neutral.
They're not just passively sharing information.
They're actively making connections and amplifying information in a way that is dangerous and then lying to people about that danger.
I think for folks, the big tobacco comparison is coming into focus.
And I want to go there in a second.
But it strikes me that there's kind of two things that we're talking about.
And I wonder if you can help us make the links, at least how they work in your mind.
One is, really, you know, this profile of a 13-year-old girl, Issa, who signs up on Facebook and soon has 3,500 friends, because the friend requests just keep coming in from men all over the world.
Her message box is full of gross messages, sexually explicit messages, pictures of people's bodies, and so on and so on.
So, that to me seems to be an issue of design that it's possible for someone like Issa to be targeted in that way and exposed in that way.
The other is this addictive part, the idea that I'm a developing person.
I'm 13, I'm 15, I'm 17.
And you're designing a product you know is going to have an addictive propensity for me.
Are those two linked?
Because I know there's another case out of California.
There's other cases that are popping up about the addictive part.
And I think I'm trying to figure out how that addictive propensity fits in with the vulnerability and exposure dynamic that goes with being a young person online.
Yeah, so our case actually contained both elements: an evidentiary presentation about the addictive nature of the product, and the potential harm that arises from it, anxiety, depression, inability to focus, and some of those other impacts.
We actually had direct testimony from educators in New Mexico, healthcare professionals in New Mexico about the impact that it has.
There was some conversation and discussion about the unique physiological vulnerabilities of young people that make them even more vulnerable.
I mean, in truth, every human being on these platforms is subject to the same pressures, but those pressures are even more pronounced for a developing brain.
But it is the combination of trying to build an ever-younger user base and lure them into a place where, the company simultaneously knows, there are not just a few predators but a sea of predatory behavior.
One of the things that jumped out at us came when we ran Operation Metaphile again, because we were responding to this accusation that, you know, the original undercover account was not representative.
So we tested it again using our criminal investigators.
We took three people into custody.
It turned out that two of the three had been previously flagged by the company itself, and yet they were still allowed to create new accounts, right?
So that's an example of how the addictive properties of the product lure young people into this space.
And at the same time, it is a space that is already populated by dangerous people, and the company is aware of both of those things.
It knows that it's growing the potential population of victims on the platform, and at the same time, failing to identify and remove predators from that same location.
And it's the two of those things together that I think created a very unique presentation for the jury.
It was somewhat different than the verdict that was reached in the LA case and in other cases that are focused more expressly on how the social media experience itself caused that harm.
We didn't have to do that in part because we're only identifying product features that are explicitly and clearly dangerous and misrepresentations about those product features.
But by shining a light on both of those, I think it crystallizes the harm that's present for everyone.
Signaling Accountability Now [00:16:21]
Is this a similar approach you're taking in the case with Snapchat?
Snapchat is slightly different.
Snapchat, we are focused more on the way in which that platform facilitates sextortion.
We found in the course of our investigation that that is a place where a lot of young people are lured into a false sense of being able to share what they perceive to be private content, which is then weaponized against them.
But again, this is related to both design features and affirmative misrepresentations on the part of the company.
So we're looking at a slightly different type of harm, but the same functional presentation, design features combined with misrepresentations, will be a part of that case.
But we're not done with Meta.
I mean, you have to remember, we go back to court on May 4th for the second half of this case, which is the public nuisance claim.
In that moment, we're going to have the opportunity not only to ask for additional financial recovery to remediate the harm for New Mexico's kids,
but also to ask for specific injunctive relief against the company: real age verification, changes to its algorithm, an independent monitor, a whole sweep of changes that we think are necessary to establish clear guardrails for young people in these spaces.
And I think that is going to be a real eye opener because if we can create that new framework here, it will be a blueprint for what could happen not only around the country, but around the world.
I want to come back to that point because you really led the way here.
You were willing to take on Meta, in essence, on a solo mission, which not a lot of people would have done.
But I want to come back to this big tobacco comparison because I think it's really interesting to me.
You're in your 40s.
I'm in my 40s, not to out you here as somebody in their 40s, but here we are.
Sorry.
I feel like I was a kid at the tail end of smoking as kind of a thing you could do without feeling social pressure or negativity.
You know, there was a smoking section at the restaurant when I was five years old.
It's hard to explain to my students these days, who are 19, like, yeah, there was a time when you could smoke in the back of the plane and that was no big deal.
And my point with that is, decades later, we've reached a place where we almost universally recognize smoking as harmful, not something you can do inside, not something you should do around kids.
Now, do people still smoke?
Of course.
Are they free to do that as adults?
Sure.
But the social pressure, to me, is pretty impactful.
And I bring that up because I think, on one hand, you've mentioned big tobacco, and a lot of folks have taken away from this case and this victory that this is the big tobacco moment for social media, that we are now going to recognize through our courts the harm that social media can and does have, especially on young people.
Others, though, say, no, be careful here.
There's a clear, direct link between smoking cigarettes and nicotine and physical harm and disease and so on.
We're not there yet with social media.
We don't have the science, we don't have the data, so you need to hold your horses.
I'm wondering where you come down on this.
I mean, I think I know, but I'd love to hear what you think on all of that.
Yeah, I mean, there was a report today along those same lines where they were talking about people who are heavy users of social media have a lower level of support for democracy and a higher level of sort of tolerance for political violence.
But then the article went on to say, well, we don't know if there's a causal relationship and we don't know if it's actually these people are predisposed to that, which is what makes them heavy social media users, or it's running in reverse.
For someone in my position, I am not going to wait decades and decades to confirm some link in this because, look, I've got teenagers.
I'm around young people.
I can see in them, separate and apart from the possibility of sexual exploitation, just the impact that it has had on their ability to socialize, their ability to make connections, their ability to concentrate, the way in which it impacts their mental health.
And I think we are at a moment where we are finally getting accountability, with everyday people who get to hear both sides.
They get to hear everything that Meta has to say, well, you can't demonstrate the link, you can't establish the connection.
And they get to hear from us about the opposite.
And then they get to render a decision.
And I think the risk of doing nothing is so great and so profound for an entire generation of people that we can't afford to wait.
Now, I would love and I encourage people to engage in longitudinal studies that really dig into the unique dynamics of the social media experience.
And I am sympathetic to people who are looking to those spaces to form communities when they are under threat in the real world.
I have simply seen too much in terms of the psychological harm and the potential for real world sexual violence and sexual exploitation to sit back and wait for the resolution of that scientific and academic debate.
Because I don't live in the scientific or academic debate space.
I live in a world where I'm worried about people getting harmed right now.
And what can I do?
And I'm going to use every available tool at my disposal.
What I wish is that Congress would wake up and finally engage on this issue.
This is an issue that doesn't divide the country.
It is not a blue issue or a red issue.
It's not a progressive issue or a conservative issue.
It's an issue where I meet with people who will disagree with me on virtually everything that I do from a policy standpoint.
But when they know about this work, they are absolutely supportive.
And that should tell people something that there is a real hunger for meaningful action in this space.
And my hope is that the verdicts that have been delivered are just the beginning of a new awakening.
And to your point, I want to wake up in a world when I'm an old man and we look back at videos, sort of like we look back at those old black and white movies where they're smoking in the doctor's office.
I was just thinking about it.
Yeah.
Yeah.
I mean, we look back on it now.
My kids look back at those same things and they're like, oh my God, you guys did that?
And you know what?
I bet you if you played some of the newsreels from back then, you'd probably find somebody who says, well, the scientific link hasn't been proven between smoking and cancer, which is why we have all these doctors, right, stumping for cigarettes.
We got to a place where we look back at that moment and think, my God, how did you ever do that?
I think you're going to have that same experience.
I think, I hope, that when my kids are having their own kids and I'm a grandfather, they look back at this time and say, can you believe we were just handing out smartphones to people, letting them download whatever they wanted, without any real appreciation that there was predatory behavior or psychological harm in those spaces?
And it'll be shocking to them.
I hope it's shocking to them.
And I hope it's something that we can put in the past and we can move towards a place where we're developing really safe, thriving digital spaces for everyone.
It's difficult.
As I thought about this conversation and discussed it with some people in our community, there's a real sense that the internet is a place where you might be a queer kid in a small town in New Mexico, or somebody who feels like they're being bullied at school, or somebody who has an identity that is not accepted in their town or their family.
Sometimes the internet is the place where you find allies, a coalition, safety.
However, as you're saying, it is also often a place where predators lurk and danger is always around the corner.
Exploitation is something that is a possibility at every turn.
We have to seemingly find a way to regulate one and allow for the benefit of the other in some fashion.
I agree.
You know, one of the things, just to go back to the academic space that I am a part of: the social scientist Barbara F. Walter says, anytime anyone asks her about saving democracy, about a robust public square where people actually engage with each other in good faith, that you have to regulate social media.
Like, her answer every time is that if you want a reinvigorated democracy and a public square where neighbors actually speak to each other, the first thing Congress can do is regulate social media.
That doesn't mean eradicating it.
It doesn't mean making it illegal.
It does mean, though, that looking back on a time when we had children and adults online for 13 and 15 and 18 hours a day, unregulated, we're going to say, that is the least healthy thing I can imagine for someone's emotions, for their psychology, and so on.
So I agree with you on that front.
I hope we're moving towards that kind of horizon, where we come to see this kind of unregulated presence and interaction online as something akin to the doctor smoking during your appointment, or whatever it may be.
All right.
Let me ask you this as we wind down here.
Some folks are going to say, look, this is good.
We need to protect kids from predators.
We do not want 13 year old girls on Facebook being approached by men with disgusting intentions.
What this is going to lead to, though, is Facebook asking for an age verification and other platforms doing that.
We've seen that with Discord recently.
They were going to do a kind of ID verification.
The backlash was so robust that Discord sort of backtracked there.
What do you say to those kinds of questions?
Well, look, I think Americans, especially now, are right to be concerned about privacy.
They're right to be concerned about any system that mandates the disclosure of sensitive personal information.
We just sent a letter to Congress warning them about the dangers of mass surveillance and things like that.
So I'm very sensitive to those things.
I actually think this might be a place where technology itself offers a path forward.
I've been interested to hear about some of the age estimation technologies that are being applied in different spaces that don't require the disclosure or the sharing of sensitive personal information.
I think that's something that we're going to be looking at very, very carefully.
But I also, you know, think we are as a society comfortable with the basic awareness that if a child faces severe danger, serious potential harm, in a particular space, we ask for some, you know, verifiable information that we can look at.
And if we can have a user, you know, using age estimation technology, come up with an assessment of, we think this person is 12, and then we provide an opportunity for them to come in and say, no, I'm actually 22 and I just look a little different, I'd like to appeal that decision.
I think there's a process that we can establish that both protects people's privacy rights and interests, but also ensures that only folks that are safe in these spaces are allowed to be there.
I think that would go a long way.
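As a sketch of the flow Torres describes, assuming a privacy-preserving estimator exists (the estimate_age interface below is a stand-in, not a real API): gate access on an age estimate rather than on identity documents, and give under-threshold users an appeal path.

```python
# Hypothetical age-gate flow: estimate, gate, and allow appeal.
# No ID documents are collected or stored; `estimate_age` stands in for
# a privacy-preserving estimator (e.g., on-device face-age estimation).
from dataclasses import dataclass

ADULT_THRESHOLD = 18

@dataclass
class AccessDecision:
    allowed: bool
    estimated_age: int
    appealable: bool

def gate_access(estimate_age, user_media):
    """Decide access from an age estimate alone."""
    est = estimate_age(user_media)
    if est >= ADULT_THRESHOLD:
        return AccessDecision(allowed=True, estimated_age=est, appealable=False)
    # An under-threshold estimate is not final: the user can appeal,
    # e.g., "I'm actually 22 and I just look a little different."
    return AccessDecision(allowed=False, estimated_age=est, appealable=True)

decision = gate_access(lambda media: 12, user_media=b"selfie-bytes")
print(decision)  # AccessDecision(allowed=False, estimated_age=12, appealable=True)
```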
I think we're going to have to be flexible in terms of finding solutions.
We're going to have to avoid the trap that we always find ourselves in in this country, where we immediately break into camps of it's all of one thing or it's all of another, and start thinking about these as challenges and problems that we have to work through in an iterative way.
And in that respect, ironically, we need lawmakers and policymakers to act a little bit more like the innovators in the technological space.
You never meet an innovator in technology who builds a technology, walks away, and it's done forever.
Well, that's kind of what we do with policy; like, take Section 230.
We did that in 1996.
We haven't touched it in nearly 30 years.
Maybe it's time to go back and get comfortable iterating policy and iterating the law so that we can match the speed and pace of technological innovation.
Because that's the only way we're going to sort of work through this process together.
But we have to avoid this idea of, well, since we can't protect everyone and every interest that we have, we should do nothing to protect anyone.
And that's something that I've always resisted.
I think it's particularly important in this moment to be clear about what your objectives are and flexible as to the means and strategies that you employ to achieve those objectives.
And I think if we do that with some humility, with some attention, with some sense of purpose that we have to stay at it for a while, I think we'll go a long way.
Just to go back to something you said earlier, we hear a lot about protecting children on both sides of the aisle, often in different ways and via different issues.
This seems to be a bipartisan place where folks in Congress could actually reach across the aisle and get something done.
I know we need to wind down.
Let me ask you one more question.
You are very concerned about AI.
I've seen coming out of your office a lot of work to battle what might be called the threat of AI.
How do you see the threat of AI related to what we've talked about today, addictive propensities in technology and predatory possibilities for underage people?
Well, you see some of the same parallels: lack of disclosure, lack of clarity from the companies, in terms of being clear with customers and users about the nature of the potential harms and also the limitations of the technology itself.
You see some of the same problems with policymakers running into a complex problem that they don't fully understand.
And so the solution is this response of, well, let's do nothing.
I don't think either of those is acceptable.
We introduced an AI accountability bill, a synthetic deepfake bill.
Unfortunately, it didn't get a lot of traction in the last legislative session.
We'll be introducing it again.
It's all predicated on clear lines of accountability and potential liability for people who are misleading customers, misleading vulnerable populations about this.
But I think if you take a step back and you look at the verdict that we just had against Meta, it's incredibly important that we signal accountability now, as we approach the end of this first, or second, age of social media, before we are fully immersed in the new wave of artificial intelligence.
We cannot allow this idea that companies like this can act with impunity, that they are unaccountable.
We have to set guardrails and signals to market actors now so that they understand when they're developing the tools and technologies of the future, including and especially AI, that they understand at a very basic level, they will be held accountable for the product design choices that they make and they will be held accountable for the things that they lie to the public about.
And if they understand that, and if they had understood it back at the inception of the social media age, I think we'd be in a very different space.
So I view it very much as my responsibility to start sending those signals now and then also assuming some responsibility for trying to navigate as best I can what basic parameters and safeguards we can put in place because the harms that are potentially there from artificial intelligence, I think, will probably dwarf what currently exists in social media.
So it's incredibly important that we act with a sense of purpose and urgency in this moment.
If we've lived through a second Gilded Age, I hope this is the beginning of a new Progressive Era.
Hope for a New Beginning [00:01:36]
And we would never have had a 40-hour work week or Labor Day or Saturdays off if not for all of those reforms and regulations then.
And I hope this is the beginning of some of that now.
So I know you're back to court in about a month.
A.G. Torres, thank you for your time.
We appreciate all the information and perspective you've given.
Are there things people can look out for on the horizon in addition to that second part of this case?
Other things that are happening on these frontiers?
Well, to your point, we have active investigations on a number of AI companies right now, and I anticipate some formal legal action to be taken in that space in the very near future.
That's great.
Thank you so much.
We appreciate your time.
Thank you.
All right, y'all, that'll do it for us today.
Thanks for listening.
Send in your feedback, give us a comment, put it in our Discord.
Let us know what you think about what we talked about.
And if you can, go subscribe and hit a review button.
Five stars.
Tell us what you think on Apple Podcasts.
It really does help.
You can go to straightwhiteamericanjesus.com.
It's a brand new website.
We're super proud of it and it has so much info there.
You can also go to axismundi.us.
We're a podcast network built for public scholars to bring their expertise into the public square so that everybody can learn, understand, and see the wonders of religion, the pro social and pro democracy aspects, but also the threats that religion poses.