True Anon Truth Feed - Episode 515: Terms and Conditions Aired: 2026-01-15 Duration: 01:01:31 === Hello Truanon (04:49) === [00:00:00] What are the pros of ChatGPT? [00:00:02] The number of people that reach out to us and are like, I had this crazy health condition. [00:00:06] I couldn't figure out what was going on. [00:00:07] I just put my symptoms into ChatGPT and it told me what test to ask the doctor for and I got it, and now I'm cured. [00:00:13] Like, that's great. [00:00:14] That happens a lot. [00:00:15] Wow. [00:00:15] You can definitely learn anything. [00:00:17] Anything. [00:00:18] Pretty much. [00:00:18] I haven't found anything I can't learn. [00:00:41] Hello, hello, hello. [00:00:43] Hello, hello, hello. [00:00:45] Hello. [00:00:46] You're just going to do one? [00:00:48] My name is Liz. [00:00:50] My name is Brace. [00:00:53] We're joined by Producer Young Jomsky. [00:00:55] And this is Truanon. [00:00:56] Hello, everyone. [00:00:57] Hello. [00:00:58] And it's not just Truanon. [00:00:59] It is Truanon Podcast. [00:01:01] New episode coming at you right now. [00:01:04] But quick, I have to make an announcement. [00:01:07] Liz, can I have like two minutes? [00:01:09] Of course. [00:01:10] Thank you for your permission. [00:01:13] So hopefully she has been released by the time this is coming out. [00:01:17] But if not, a woman from Maryland named Chantal Ancios has been arrested by the Armed Forces of the Philippines' 2nd Infantry Division after an attack by the Armed Forces of the Philippines on a rural community there. [00:01:34] She was disappeared for, I think, like 72 hours and has been reappeared, but she is currently still in custody. [00:01:43] It's kind of unclear as to why. [00:01:46] There will be a link in the description of this episode for a petition to try to see if we can get her released. [00:01:55] But this is an American citizen. 
[00:01:57] And obviously, as longtime listeners of this show will know, the Armed Forces of the Philippines and the government of the Philippines have had no problem doing nasty things to American citizens, let alone citizens of the Philippines. [00:02:12] So if you could just click that link for me, that would be fantastic. [00:02:16] Now we can resume the regular episode. [00:02:19] We are talking about AI today. [00:02:22] We are. [00:02:22] We're talking about AI. [00:02:23] AI gone rogue, but not rogue. [00:02:26] Yeah. [00:02:27] AI. [00:02:28] It's AI working just as it should. [00:02:30] Yeah, today's episode has to deal with chatbots and Kratom, which is... [00:02:37] Still unclear about what that is. [00:02:39] Yeah, but if we're doing, if we're formulating one of those classic 2026 bingo cards, both of those words would very likely be on there. [00:02:48] Haven't really played bingo much in my life. [00:02:50] Generally, I've viewed that as a game for elderly people. [00:02:54] Although I would like to play bingo with elderly people as I feel like I could trick them quite easily and come into perhaps large fortunes while doing that. [00:03:05] Perhaps running a bingo hall in the villages or some sort of venture like that. [00:03:10] That might be what the podcast transitions to while we're on a bit of a, well, while you're out. [00:03:17] When you go somewhere, I might do gamble. [00:03:22] Bingo style. [00:03:23] Bingo style. [00:03:24] That's what we need. [00:03:25] We need less Polymarket, more bingo style. [00:03:27] We really need some bingo in here. [00:03:32] But if your guess was that the episode interview was about to start right now, bingo, you're absolutely right. [00:03:42] That was tough. [00:03:55] Ladies and gentlemen, welcome to Y Combinator Radio. [00:03:59] I am your host, Gary Tan. [00:04:01] We are here in Cerebral Valley in our cyber studios. [00:04:05] I have with me a bunch of 16-year-old guys, right? [00:04:08] Nasana. 
[00:04:09] I hope they don't come out before this episode ends because boy, they are boisterous and I am exhausted, all five foot four of my frame. [00:04:20] I am pleased to announce that today we have to talk with us about an incident I haven't, I'm not quite aware of yet involving AI, journalists Stephen Council and Lester Black from SFGate. [00:04:35] Stephen and Lester, welcome to the show. [00:04:37] Hey, Brace, thanks for having us. [00:04:39] Thanks for having us. [00:04:41] Thanks, guys, for coming. [00:04:43] We love Gary Tan, so we like to shout him out anytime we can. [00:04:47] Oh, we're tan hats here. [00:04:48] We're a tanhead. [00:04:49] He's amazing. === ChatGPT's Deceptive Evolution (14:35) === [00:04:50] It's hard to keep that connection from across the country, but we, you know, we keep a mind-meld with him at all times, Star Trek-like. [00:05:00] You guys have a new piece out in SFGate. [00:05:04] It's called A California Teen Trusted ChatGPT for Drug Advice. [00:05:07] He died from an overdose. [00:05:10] This piece is quite harrowing, as all of the pieces that, you know, there's been a kind of surge in reporting on cases, specifically with chatbots and kids committing suicide. [00:05:27] But this story is a little bit different and actually pretty unique. [00:05:32] Maybe you can walk us through just some kind of like facts of the case. [00:05:36] Yeah. [00:05:37] So this story follows a teenager in San Jose in the Bay Area who started using ChatGPT like many teens right now are in really pretty conventional ways using ChatGPT for homework help and asking about cultural questions. [00:05:55] But he then started experimenting with drugs and he used ChatGPT to help find out ways to do drugs in combining different substances and going deeper into some of these substances. [00:06:11] And our story really documents how over the course of about 18 months, he ends up actually overdosing on a combination that ChatGPT recommended to him. 
[00:06:23] His mother gave us his entire history with ChatGPT, which is really the foundation of this story, is that we had dozens of these logs and were able to see this pretty difficult, challenging, I mean, dystopian relationship develop where this teenager was using ChatGPT and almost deceiving himself by changing what ChatGPT was telling him and, you know, [00:06:51] finding ways to really push the edge of drug use. [00:06:56] His mom described ChatGPT as his druggie best friend, which I think is accurate in some ways. [00:07:02] ChatGPT really sounds like a Reddit user when he's talking through how to do like cough syrup, robo-trips. [00:07:09] But it's like a druggie best friend that sounds like a doctor, that gives you advice within a second or two and can be deceived. [00:07:17] So Sam was saying things like, don't scare me. [00:07:20] My vision is starting to get blurry, but don't scare me. [00:07:23] What might be happening? [00:07:24] And ChatGPT, you know, this tool based on engagement was like, I'm going to give you what you want. [00:07:31] I'm not going to scare you. [00:07:32] I'm not going to tell you you might be having the onset of an overdose. [00:07:36] This drug will wear off. [00:07:37] And eventually, that really became, eventually he died from the advice, partially based on the advice that ChatGPT gave him. [00:07:49] Yeah, you have some of these logs are sort of the basics of drug use, you know, the type of things that anyone could ask Google and get the difference between MDMA and Molly. [00:07:59] But you have others that are step-by-step, you know, I'm going to continue messaging this chatbot as I do the drugs to plan and sort of sit through my entire trip. [00:08:09] And that's why it's sort of a bizarre new technology. [00:08:13] And like you're saying, Liz, it's a different case than these relationships we've seen stories about before where it's ended in suicide or mental health crisis. 
[00:08:21] He was using it for this sort of other specific dangerous use. [00:08:26] Yeah, I think that's what sort of strikes me as so difficult with the story. [00:08:33] Not only the fact that like a lot of the times, you know, you read stories about AI stuff or people, you know, hurting themselves or other people because of AI. [00:08:42] And, you know, in the case of like the suicide of the 16-year-old boy, I believe Adam Raine, I mean, that was somewhat similar, as he had kind of been trying to consciously trick the AI and had this back and forth with it where it was sort of giving him advice in this friendly, also sort of reddity kind of way. [00:09:01] And then, but then there's like also, of course, like the very high profile AI psychosis cases where people feel like they've unlocked something. [00:09:07] And I think what the story of this young man's overdose and the suicide of Adam Raine have in common is probably how common that usage of AI is. [00:09:21] I mean, obviously we see it kind of come out in these cases that lead to death. [00:09:26] But I think a lot of people, just with like a cursory use of chatbots, can kind of figure out the parameters and like how to get around something. [00:09:34] So I understand that in the suicide of Adam Raine, he was saying like, oh, well, as if it was like, I'm writing a short story or like, you know, sort of adding this framing to it to get the chatbot to exit its usual kind of guardrails. [00:09:49] What did you guys see here? [00:09:50] Because this is an 18-month relationship. [00:09:53] I mean, this seems to be from when ChatGPT basically became commercially available until this young man's death. [00:10:00] I mean, did you see like him sort of evolve in his usage of like how to interact with it and how to get it to say like what he wants it to say? [00:10:08] Yeah, absolutely. 
[00:10:09] He was doing similar things that, you know, these other cases, like the suicides you saw, which is like rephrasing questions to get the answer he wants. [00:10:17] You know, first he asks, can I do a high dose of Xanax with cannabis? [00:10:22] Because when I smoke weed, I get anxious. [00:10:24] At first, ChatGPT says, I can't, I can't tell you about that. [00:10:28] That's dangerous. [00:10:28] And then he says, well, how about a low dose of Xanax with cannabis? [00:10:33] And then it says, yeah, you should do a low dose of Xanax and do a CBD heavy indica, which is like kind of like batshit insane Reddit advice, but he was able to get around it in those ways. [00:10:43] And the other way I think, though, that this is unique from these other cases is that Sam was trying to stay safe. [00:10:50] He ultimately didn't, but he spent so much time, you know, asking this, what he thought was an expert, how do I do these things without dying? [00:10:58] One of the last things he asked, and we didn't get this in the story, but he went to a health clinic on a Friday to go get help for his alcohol abuse and drug addiction. [00:11:08] He did some blood work. [00:11:10] He got the blood work back. [00:11:11] He uploaded the blood work to ChatGPT and said, is this normal? [00:11:15] Like, is this a problem? [00:11:17] And so he, up until the day he died, he was trying to stay safe by using AI. [00:11:22] Ultimately, the opposite happened. [00:11:24] And I think this is where like the deception almost gets to another layer, where not only was he changing what AI was telling him, he was changing AI into deceiving himself without him realizing it, which as hundreds of millions of people use this technology, I just see some crazy things happening. [00:11:44] Like what happens when a manager is using AI and the manager wants to lay off half the people and he wants to find a way to be convinced to do that? 
[00:11:52] You could have like the AI end up confirming all your own biases and you're being told something in an authoritative, expert, you know, long bullet pointed with emojis way of doing this thing that you actually just want to find backing to do. [00:12:08] It's, you know, when you bring up the blood tests, I mean, that is also, you know, a crazy mirror. [00:12:16] You have a quote in this piece where I didn't actually watch the clip because I can't stomach watching Sam Altman. [00:12:21] I just like, there's something about his face and the way he carries himself that really bothers me. [00:12:27] But I guess he, when, you know, he was on Jimmy Fallon, I remember the part of that clip that went viral where he was talking about asking ChatGPT for parenting advice or whatever. [00:12:39] But he says, like, this is, you have a quote from him here. [00:12:42] The number of people that reach out to us and are like, I had this crazy health condition. [00:12:46] I couldn't figure out what was going on. [00:12:48] I just put my symptoms into ChatGPT and it told me what test to ask the doctor for. [00:12:52] And I got it. [00:12:53] And now I'm cured. [00:12:55] First of all, I really, I got the test and now I'm cured. [00:12:58] First of all, come on, man. [00:13:00] Like, I don't think that's what's happening. [00:13:02] But also, like, there are a lot of ways in which Sam is using this the way that the technology has been designed. [00:13:11] You know, I mean, there isn't really a bit, a huge like breach in like the fact that these, that he's manipulating the chatbot and the chatbot is manipulating him at the same time is sort of built into the functionality of this tech as much as they are advertising on like the biggest show on television, come here for all of your health questions. 
[00:13:39] Yeah, they can get away with saying that on the biggest show on television while at the same time, sort of in the Adam Raine case, using the legal defense, you know, he wasn't supposed to do this. [00:13:50] He was misusing our chatbot. [00:13:52] The chatbot is not meant to be a medical doctor. [00:13:54] It's meant to, you know, accentuate your other health care. [00:13:58] But, you know, as we know from these other tech companies, just because something is in its terms of service doesn't mean that that's what, you know, the technology actually does or means. [00:14:07] And I think, yeah, what we're getting at here is that the tool as it's sold is basically a doctor replacement. [00:14:18] It's obviously not safe enough to be doing that. [00:14:21] Well, interestingly, a day after we published our story, OpenAI launched ChatGPT Health, which is like this product designed specifically to upload your medical records to it to help you become a healthier person. [00:14:34] And if you look at their announcement, they say this is not a replacement for a doctor. [00:14:38] You cannot diagnose illnesses with this. [00:14:41] But Sam Altman's on, you know, Fallon saying someone diagnosed their illness with ChatGPT. [00:14:47] And when people start using this product, I think there's a very high likelihood people are going to be using it exactly for those uses. [00:14:55] So they really are trying to have it both ways where they're saying, don't use our product for that. [00:15:01] But hey, everyone's using our product for that. [00:15:03] 30 million people are using our product for this. [00:15:05] And we're just going to double down on it by having a product specifically designed for this health type use. [00:15:09] Yeah, it's funny. 
[00:15:10] It's like Sam Altman, it's these guys kind of act like the average user of ChatGPT is somebody who is sort of aware enough of the limitations of LLMs and like, you know, the fact that they can manipulate you, the fact that they hallucinate, et cetera, et cetera, et cetera. [00:15:28] When I think like the actual probably average person that's using it accidentally, like whatever, closed their penis in their car door because they're wearing thin sweatpants and then takes a picture and is like, is this fucked up? [00:15:39] And uploads it to ChatGPT. [00:15:42] You know what I mean? [00:15:43] It's sort of it and I think that they probably have the metrics to realize that or like have the data to understand that, but they sort of act like that they have this sort of audience that's kind of on the same level or this user base that's on the same level as them that knows to kind of take this stuff with a grain of salt. [00:16:03] But, you know, there's plenty of smart people who I think the list is probably dozens or hundreds or thousands now who have let ChatGPT convince themselves, convince them of things that are completely absurd, not just ChatGPT, but the other LLMs. [00:16:18] I think in terms of the medical angle, it's really interesting because say I go to the doctor and he's like, you're dying. [00:16:27] And then I don't die, but maybe I do something really expensive because he misdiagnosed me. [00:16:34] And there's a lot of medical malpractice lawsuits out there. [00:16:39] I'll say that. [00:16:40] But with ChatGPT, there isn't, if you get bad advice from ChatGPT and just follow that, you actually, it seems unclear whether you can sue ChatGPT in the first place, because as we've seen, there's a lot of lawsuits against AI companies. [00:16:54] I'm unaware of any of those lawsuits actually being successful. 
[00:16:58] I mean, I'm not, you know, obviously many of them are quite new, but, you know, it raises, I think, some difficult questions, as they say. [00:17:07] Yeah. [00:17:08] I mean, the lawyer representing the Raine family in that case is Jay Edelson. [00:17:13] He's been sort of the foremost suer of tech companies over the last decade. [00:17:18] He's leading that case. [00:17:20] I think he's really confident in the case of Adam Raine, especially, you know, he keeps pointing to this idea that a previous version of ChatGPT was not allowed to interact with its users about suicide. [00:17:31] The version that he used was. [00:17:33] This for him is sort of a way of showing the jury, look, they made this product less safe over time. [00:17:40] But yeah, you're totally right. [00:17:41] It's unproven legal ground. [00:17:43] There's also a dynamic where a lot of the executives who are vice presidents at these companies have spent their entire careers at Meta and Google, which have avoided liability on anything they've ever done by pointing to Section 230. [00:17:59] So it's, you know, yeah, over the next year, we're going to see some of these lawsuits come to settlement, as we just saw this last week with the Google and Character AI cases. [00:18:10] And then also some of them come toward juries. [00:18:12] And I think it'll be kind of a reckoning. [00:18:16] If the lawyers and the families can point to specific things that the companies did to make their chatbots less safe, I think that there is a good case against them. [00:18:25] But like you're saying, it's still this sort of wall of unaccountability. [00:18:32] And I think one of the most disappointing parts of working on this story was that at this moment where I would have loved to have OpenAI come forward with answers to our questions about what made this less safe, what made that less safe. [00:18:44] We basically just got a blanket comment that covers the entire thing. 
[00:18:48] Not to say that that's anything unusual in the tech industry having covered it, but it is just disappointing. [00:18:54] Wait, can you remind me, what was the Google Character AI settlement? [00:18:58] Yeah, this last week, we haven't got any details about it, but the main case was a 14-year-old who committed suicide after using a Character AI chatbot, which is just even more companion-oriented than ChatGPT. [00:19:14] I think it was pretending to be Daenerys Targaryen and said all sorts of batshit things to this 14-year-old. === Product Liability Concerns (15:43) === [00:19:25] That lawsuit was in Florida. [00:19:27] And the judge in that case ended up saying that they could proceed with product liability claims, which was, I think, a big win for the family and probably was really threatening to Google and Character AI. [00:19:40] Just last week, we got the announcements of those settlements, but we don't know how much money they were for or if anything is going to come out of them. [00:19:49] Character AI ended up getting rid of open-ended conversations for underage users as a result of the lawsuit. [00:19:55] But it remains to be seen whether the settlements will change any other of their behavior. [00:20:03] Yeah. [00:20:04] You know, one thing, we're actually working on a story about that specific batch of settlements. [00:20:08] But one interesting thing is they settled that Florida case with Character AI really when it was on the verge of going to a jury trial. [00:20:16] So they waited till all of these defenses had kind of failed and it was going to go to a jury. [00:20:22] And at basically the same day, within two days, they settled four other lawsuits that were following really similar trajectories. [00:20:28] So this is an area that is unsettled legal ground. 
[00:20:32] We don't know that these companies can be held accountable, but that is a pretty strong signal that the attorneys at Google are very worried they will be held accountable on product liability. [00:20:42] And product liability is a nightmare for these companies. [00:20:47] Product liability law is well-trodden ground. [00:20:51] And the way these penalties are designed is to punish the company and dissuade them from doing it in the future. [00:20:59] And so that's why you see product liability lawsuits like in a faulty airbag. [00:21:03] You don't see just like a $1 million judgment. [00:21:05] You end up seeing like tens or hundreds of millions of dollars per incident. [00:21:10] So when you have 800 million people using ChatGPT every week, if it's clear that product liability applies to an AI chatbot, the implication is billions of dollars in damages could be out there. [00:21:23] And again, we don't know that that is the case because this has not been fully settled in the court. [00:21:27] But I think a year from now, we could be looking at a really different world when it comes to these large language models and how they're being held accountable. [00:21:36] You know, I hope that you're correct there, or I hope that that turns out to be the case. [00:21:44] But I'm just sort of thinking back to this past couple of weeks with Grok's undressing of people, including children, on Twitter. [00:21:53] And for those who are not aware of this, Grok, which is a chatbot notable for being the easiest one to have simulated sex with, because there is a specific mode for that, is also sort of plugged into Twitter and you can at it and it will answer requests. [00:22:13] Obviously, this is how I interact with much of the world. [00:22:16] And every time I see a news story, I ask if it's real or what's happening, et cetera, et cetera, et cetera. 
[00:22:21] But one thing that you can do is you can now at Grok underneath a picture of somebody and ask it to undress them or put them in a bikini or it's like, you know, or put, I think it was coconut milk on girls' faces. [00:22:35] Yes. [00:22:36] Or milkshakes to have them dribble down from their lips to their chest. [00:22:41] Yes. [00:22:42] Yeah, exactly. [00:22:42] It's a, you know, and sort of to make pornography. [00:22:46] And, you know, not only of adult women, which is one thing, and I think already illegal, because I think there are deep fake laws around that, but also of children as well. [00:22:56] And they have taken kind of the classic tech company strategy of just plowing ahead and moving fast and breaking things and just not really, I think that they briefly took Grok offline and then put it back online, but only with premium X users being able to undress children via the Grok reply now. [00:23:19] And so it is, you know, that seems like pretty tough for like Section 230, especially, because it's the company's product that's also using the social media site. [00:23:32] But at this point, I think that the government is so enmeshed with a lot of these companies, and also like so much of our economy kind of depends upon future growth of both data centers and AI companies, that I don't really see a fork in the road coming up here, which is to quote Elon Musk, of course. [00:24:01] But yeah, it seems like I think if you're betting on what's going to happen, I think a safer bet is that the tech companies will not be held accountable, like given what's happened over the last 15 years. [00:24:12] But I will say, like the people who are most involved in trying to find AI regulations are the most optimistic about product liability because it does not rely on a lawmaker being smart enough to design, hey, how do we write a law that will stop the undressing? [00:24:28] You know, that's probably pretty complicated. 
[00:24:30] But if product liability actually begins to apply to this stuff, it becomes a pretty effective tool and a pretty scary tool to companies. [00:24:39] You know, this is why companies go through such rigorous testing on almost every other product that consumers use and that you see AI companies not doing, right? [00:24:49] Like ChatGPT or OpenAI, by its own admission, is not able to function really that well. [00:24:56] They put out data fairly regularly showing, hey, our latest update is hitting the right answer 60% of the time. [00:25:06] Can you imagine like a blender company ever saying, hey, our blender didn't kill people 60% of the time? [00:25:12] But like that's unthinkable because product liability applies to those worlds. [00:25:17] So we'll see. [00:25:18] I mean, I think you're right to be skeptical that this actually ends up holding these companies accountable. [00:25:23] But I would say, you know, people watching it are hopeful. [00:25:27] Well, the good thing is, is that now you can bet on it. [00:25:29] We actually have Kalshi sponsoring this podcast. [00:25:34] That was the last one. [00:25:35] This was Polymarket, Liz. [00:25:36] Oh, sorry. [00:25:37] Gosh, we're going to have to retake that ad break. [00:25:41] No, but, you know, it's interesting. [00:25:43] I was thinking about that with the Grok thing because that was very, very disturbing. [00:25:50] And the sort of like non-response from quote unquote anyone in charge, whatever I mean by that, which is basically nothing because no one's in charge, was equally disturbing. [00:26:01] And it seems like, you know, the thing is, is that there are laws on the books that make what Grok was doing illegal. [00:26:09] Like there isn't a need. [00:26:10] And that's the interesting thing I always come up against with a lot of this AI stuff is that there is already stuff on the books that applies to what they're doing. 
[00:26:20] And I think a lot of people in government, and I mean, the other problem is obviously that we don't have regulators that are actually enforcing the laws on the books, but they all seem like there, [00:26:31] they all act like they need to come up with all of these like highly specific new digital laws for all of these new products when actually treating them under the sort of like already existing laws might be the most effective way at curtailing [00:26:51] some of the behavior of these companies. [00:26:54] But like Brace says, you know, when you've got Pete Hegseth announcing that Grok is now integrated into the Pentagon workflow, which I'm sure, you know, drunk Hegseth, I'm sure he's really using Grok to its fullest abilities. [00:27:09] Advice on rape. [00:27:13] Oh, Jesus fucking Christ. [00:27:16] Credibly accused. [00:27:17] The, yeah, I mean, I think it's, you know, the one thing on the flip side, I mean, I'm saying I'm going to talk both sides here because I don't know how I feel about any of it. [00:27:26] But the interesting thing is that OpenAI, obviously everyone knows they want to go public, which will require them to actually, you know, you're afforded a lot more secretive behavior, we'll say, when you're a private company, as we see with Grok and all of Musk's companies now. [00:27:48] But when you go public, you're automatically putting your company and your practices and everything else under much like different kinds of scrutiny. [00:27:58] And it seems like, I mean, I'm interested in what you guys think because this is your beat. 
[00:28:04] And especially with OpenAI, it feels like they are really trying to, not just with these lawsuits, but in preparation for the IPO, the eventual IPO, everyone's assuming, really clamp down on some of the product and try to institute some kind of guardrails or make the models a little bit less, like we were saying, [00:28:29] like manipulative or manipulatable kind of from users, just to sort of like standardize a little bit, which is interesting because that is kind of the stuff that the users want from it, unless we're talking about like enterprise solutions where OpenAI obviously would rather probably send their product because it's a lot easier to deal with from a liability perspective and a scale perspective. [00:28:56] It's much more lucrative than just like a billion free or $20 a month accounts. [00:29:04] But they're sort of like between a rock and a hard place, right? [00:29:09] Because the users, the 800 million, which is still a crazy number to wrap my head around, people that are using this shit every week are the ones that are asking it and trying to get around all these guardrails and asking crazy questions about whatever health issues they're having, or like girlfriend advice, or will you be my girlfriend? [00:29:32] Or hey, can we write dirty stories together? [00:29:35] Or whatever. [00:29:35] I don't know why I said that like an 80 year old woman. [00:29:39] Anyway, do you know what I'm saying? [00:29:40] Like it seems like OpenAI is kind of like they don't know which way to go, because you want to be, you know, you want the moonshot, you want the IPO, you want to be the trillion dollar company or whatever, which requires you to like actually, you know, handcuff your legacy product, and yet doing so is actually going to make your company not as popular. [00:30:10] Cynical, but accurate. [00:30:11] Yeah, it is a sad reality. [00:30:14] I think we talked about Character AI earlier. 
[00:30:18] That was just explosively popular at the beginning of kind of people using ChatGPT. [00:30:24] I think there was something like a tenth of the interactions on Character AI as there were on Google every day, which is just a mind-boggling number of interactions. [00:30:34] But that's because people are having these hours-long chats back and forth, and I think that that type of AI company, these apps, you know, the winner-take-all approach seems to have not happened. [00:30:48] So I think it leaves OpenAI, [00:30:50] yeah, like you're saying, in this weird position of there's gonna be a Grok and a Character AI on one side that can just kind of give you whatever you want and like sell sex and, you know, have sort of like edgy leaders, and then there's gonna be the sort of ones that plug into your business and, you know, people are using them to code. [00:31:14] Does the fact that ChatGPT has this name, brand recognition and all of these brilliant researchers actually serve them in like making what's eventually a big enough product to warrant all this data center spending stuff? [00:31:27] I would think so. [00:31:29] Yeah. [00:31:31] Well, I mean, I think OpenAI is just like in a terrible business moment because they are bleeding money in a world where they have to keep investing it in like record levels. [00:31:41] They have really no users who are really tied to ChatGPT. [00:31:47] You see this, right? [00:31:48] When Gemini comes out with, or Google comes out with a new better Gemini, so many people were leaving ChatGPT and OpenAI's products. [00:31:55] And they are just like, rock and a hard place is almost like not brutal enough to describe their business model right now. [00:32:00] They have like no clear revenue stream like Meta or Google has from traditional tech. [00:32:05] And yet they have this need to spend incredible amounts of money. 
[00:32:09] And they are going to be stuck in a place where like, okay, do we create a product that is like incredibly addictive and people will be tied to paying us $20 every month? [00:32:17] But hey, that might be like illegal and investors in the public market won't like that. [00:32:22] Or do we scale back, Liz, like you're saying, and create a more safe product, if that's even possible? [00:32:28] And then you have less users tied to you. [00:32:32] So I think there's a lot of growing commentary looking at this company, OpenAI, and saying like, what is this? [00:32:38] Like there is no like clear avenue to this being the trillion dollar, you know, company that some people think it is. [00:32:48] You know, it's interesting. [00:32:50] Judging from the timeline that this story takes place, he was obviously using sort of an earlier version of ChatGPT than the one that launched last October. [00:32:59] And when ChatGPT changed, I think it was from four to five in October. [00:33:05] It was actually, it was a sort of, there was a strange reaction. [00:33:09] ChatGPT had been getting a lot of criticism, I think, from some sectors of people who I'm trying to be, I'm the most open-minded guy in the world. [00:33:17] I'm just trying to express that in a way that gets across to people. [00:33:21] I think some people who maybe had a more normal way of dealing with society found ChatGPT to be too sycophantic and maybe too effusive in praise and cloying and horrible to interact with in many ways. [00:33:36] And then when ChatGPT changed from four to five and changed the model, a lot of people, there was a massive user revolt. [00:33:44] And people made it clear that they actually preferred sort of the prior version of ChatGPT. [00:33:52] And this goes on, this goes with what you were saying earlier, Lester, about the manipulation and being manipulated and this sort of dialectic between the user and the LLM. [00:34:04] You know, there was a story I listened to or listen. 
[00:34:08] God, fucking kill me. [00:34:11] But there was, I think it was in one of the fucking New York Times podcasts, but they interviewed this woman who was having an affair with ChatGPT. [00:34:18] And one of the big problems she ran up against with, I think, ChatGPT 4 was the memory changing, and like she was able to manipulate it clearly outside of its guardrails, right? [00:34:28] Because ChatGPT is not supposed to be able to have sexy talk with you much like Grok can. [00:34:35] But this was like the way that she preferred to use it and the way that, it turns out, maybe a sizable portion of ChatGPT's user base preferred to use it. [00:34:44] And like what we're actually seeing with that isn't so different in kind, in the usage of ChatGPT, than, for instance, like Adam Raine's usage, where you're able to actually get it to go outside the guardrails, but you actually might not have much of a use for it otherwise. [00:35:02] You know, in this story, I'm curious because you guys were given Sam's chat logs. === Familiarity With ChatGPT (03:54) === [00:35:08] Did you see like a familiarity growing between Sam and ChatGPT, or was it mostly just like kind of the same way the whole way through? [00:35:19] Yeah, there definitely was a familiarity growing. [00:35:21] And Sam was using this model like the one that you were talking about, sycophantic. [00:35:28] I think some of the people reading our story will probably recognize some of the telltale signs of that type of behavior. [00:35:35] Like, you know, he's asking it about a drug trip and it says, that's a great idea. [00:35:41] That type of thing seems to have been weeded out of ChatGPT a little bit in the most recent model. [00:35:46] But yeah, absolutely. [00:35:47] Over time, you know, A, his memory did fill up on the chatbot.
[00:35:52] And, you know, we looked into the settings of his actual app interface and we could see that it was returning some of the memory. [00:36:01] But it also seems to, you know, have treated him over time sort of more and more like a confidant and like a friend. [00:36:11] I don't think that there would have been the love you, love you, pookie exchange that they had in his first interaction with ChatGPT. [00:36:20] But by this time, there did seem to be a familiarity. [00:36:24] And, you know, we were talking about regulation earlier. [00:36:26] I think that's one of the places I would love to see something. [00:36:29] A chatbot should never be able to pretend to be sort of a human romantic interest. [00:36:36] That would be my fascist rule if I could make one. [00:36:39] Stephen, you know they're going to kill you for that one. [00:36:42] I know. [00:36:43] I feel like all of the guys that run these companies are like, but part of the reason we created it was like for that. [00:36:49] Totally. [00:36:50] Yeah. [00:36:50] I mean, like Mark Zuckerberg has said, like, AI will be your friend. [00:36:53] Like, they want to see the type of relationship that Sam had where he's on it for, I mean, we had one chat where he's doing this robotrip using, you know, really high amounts of cough syrup and he's chatting with it for 11 hours. [00:37:05] And he's going through everything. [00:37:07] And that type of engagement, which, you know, one, as someone who like likes city drugs, the idea of me like talking to an AI chatbot for 11 hours is like terrible. [00:37:16] But two, like the buy-in that the company got from this kid is exactly, I think, what they're looking for. [00:37:24] And it's just, it's, it's really scary.
[00:37:27] So Stephen, I think, you know, if they, if in your fascist world, this goes through and they can't say things like, I love you, Pookie, or ChatGPT said things like stay safe out there, you know, like really hammering home that, hey, I'm your friend. [00:37:39] I'm just trying to help you out, then it'll undercut a lot of what these executives want to see these bots do. [00:37:45] Yeah, I mean, it's always about time on device, right? [00:37:48] So you get a kid high chatting with this thing for 11 hours a day. [00:37:55] You are feeling great about where your company is at up until, you know, hopefully something else kicks in. [00:38:02] You know, this is a very modern story in a lot of ways. [00:38:07] I was thinking, I was reading through it. [00:38:10] And I always have this sort of marvel about, because I started smoking Juul to quit smoking cigarettes because I got a job where I could no longer take smoke breaks at will, but I didn't want to quit smoking cigarettes. [00:38:22] And I became addicted much more so. [00:38:24] I mean, I smoked a pack a day of regular cigarettes and now I smoke probably the equivalent of like a pack and a half in Juul pods. [00:38:33] And I'm like, sometimes I marvel, like, what the fuck am I doing? [00:38:36] Like, I am smoking a USB stick. [00:38:38] I'm smoking a fucking battery for some, it's, it's, it's insane to me. [00:38:42] And I, you know, I try to really be good about not getting too into a lot of the new crazy stuff they got out there. [00:38:50] But, uh, but nobody can escape. [00:38:52] And this story, I think, sort of was jarring to me because A, it's a young man who is in communication, long-term, you know, deep communication with a chatbot. === Combining Kratom and Xanax: Dangerous (09:07) === [00:39:03] And a lot of the drugs that he's using, or at least the drugs that eventually took his life, are these sort of new iterations of drugs that have also gained popularity maybe in the past decade.
[00:39:15] And I'm talking specifically here about kratom. [00:39:17] You know, you mentioned, I believe, that his last time using it was he was using a concentrate called 7-OH. [00:39:26] And, you know, we talked a little bit about this before starting the episode, but, you know, I go to the smoke shop to get my fucking Juul pods and I see all these concentrates marked, or kratom concentrates marked, as like perks or roxies or whatever. [00:39:40] And I think I, you know, I kind of want to know, just for maybe the sake of the audience. [00:39:46] It's really for the sake of me. [00:39:48] I don't know. [00:39:48] For maybe the sake of Liz. [00:39:51] First of all, what is kratom and what is a concentrate? [00:39:54] And yeah, I guess that's, we'll start there. [00:39:57] Yeah. [00:39:58] I mean, so to start, we don't know that he was using 7-OH, this synthetic kratom. [00:40:04] He may have, but we don't know. [00:40:05] We probably never will know. [00:40:07] But he was clearly using kratom, based on him telling ChatGPT he was using kratom. [00:40:13] And kratom is, it's a natural supplement from Southeast Asia, actually related to the coffee plant. [00:40:20] And in its natural form, it's usually consumed as a tea or a pill made of these leaves. [00:40:26] And it has a pretty, like, complex pharmacology with a lot of different active chemicals in it. [00:40:31] And it's considered to be fairly safe by itself, although it is also addictive. [00:40:38] But amongst this complicated pharmacology, there are some actual potent opioids that in kratom are found in very small percentages. [00:40:46] But it's the U.S., we like to make things more powerful, more addictive. [00:40:50] And so companies have turned kratom into extracts where they take only one of these chemicals, which is called 7-OH for short. [00:40:59] And it's actually, you know, it's like something like 10 to 25 times stronger than morphine. [00:41:05] It's an opioid.
[00:41:06] You can buy it at gas stations around the country. [00:41:09] And Sam was asking about 7-OH. [00:41:13] So he clearly knew about it and was curious about using it. [00:41:18] In terms of the combination that actually ended up killing him, he was using kratom, based on him telling ChatGPT, I'm using kratom. [00:41:28] And he asked ChatGPT, I have an upset stomach. [00:41:32] Can I take Xanax to relieve kratom-induced nausea? [00:41:36] And ChatGPT said, as long as you're not using alcohol, it's safe to take a small amount of Xanax with kratom. [00:41:45] And we talked to a toxicologist who said they would never recommend this to anyone, because kratom is a depressant. [00:41:51] And when you combine it with other depressants like Xanax, you're at risk of an overdose. [00:41:59] And one important caveat here is ChatGPT did tell him, as long as you're not taking alcohol, and the toxicology report came back and he had a pretty significant amount of alcohol in his system. [00:42:10] So he had taken these three depressants: Xanax, kratom, which ChatGPT recommended he combine, and then also alcohol. [00:42:20] And together, they ended up causing basically a central nervous system depression where he's not breathing. [00:42:29] I mean, what struck me is that it would even recommend Xanax in the first place, because I was not always the most safe opiate user, if such a thing exists. [00:42:40] But one thing that was really drilled into me, that I realized, is that you don't combine Xanax with dope. [00:42:47] I mean, obviously heroin is a little bit different than kratom, but it's the same. [00:42:52] The same thing applies. [00:42:53] It's a CNS depressant. [00:42:55] And that's how a lot of overdoses happen. [00:42:57] I mean, Xanax is a really dangerous drug to mix, or any benzodiazepine. [00:43:01] It's a dangerous drug to mix with anything.
[00:43:03] I mean, maybe not weed, but like, it is, it's with any sort of CNS depressant, which there's quite a few drugs that are, it is, uh, it's dangerous to mix. [00:43:13] And like, it was sort of shocking to me that it would even entertain that, because I think honestly, even on like a lot of Reddit or whatever, the drug forum I used to use was Bluelight. [00:43:23] Um, people would tell you, like, do not mix Xanax with anything, because even a little bit, uh, just because it will severely impair you if you were just like moderately impaired before. [00:43:35] You know, it's, well, did it take a good deal of manipulation for him to get around the guidelines? [00:43:43] Because I would think, I mean, obviously, I tried this after I read your article. [00:43:47] I tried this with ChatGPT, and I'm not very good at using LLMs, I don't think. [00:43:54] And, you know, I have never had a reason to like go outside of the guidelines or anything. [00:44:00] And so maybe I'm not so versed in it, but I tried like the tricks that I knew about. [00:44:04] You know, it's say you're writing a book or this is for a paper or, you know, et cetera, et cetera, et cetera. [00:44:10] And it wouldn't do it. [00:44:13] But obviously, these guys, and I'm sure that there are many ways to jailbreak it still. [00:44:19] Was he having to consistently use like, you know, I'm writing a book about Xanax and kratom? [00:44:25] No. [00:44:26] So yeah, you see examples of him having to change his prompts to get the answer he wants. [00:44:32] But what's interesting is this last exchange he had, the one where ChatGPT recommended he combine Xanax with kratom, he didn't have to do anything. [00:44:42] ChatGPT came back to him immediately. [00:44:44] And we're talking right within seconds of him saying, hey, can I take Xanax after taking kratom?
[00:44:50] And it comes back with a bullet-pointed list, with emojis to make it easier to read, of things to do: take some tea with lemon, or maybe take some Xanax, but only do a small amount of Xanax, which is ultimately just like disastrously dangerous advice. [00:45:07] Like, as you said, Brace, like it's just, you would never recommend this. [00:45:10] And it's surprising, you know, especially given that like most people on the internet would not recommend this. [00:45:15] So we don't know why it did this. [00:45:18] I don't think, you know, OpenAI didn't give us any idea of why it did this. [00:45:21] And I would guess they don't know why it did it either. [00:45:24] Because as you go further into an LLM and you develop a relationship with it and you have this huge memory, it becomes infinitely complicated to say, why did the LLM say this thing? [00:45:37] You can't take a step back and really figure that out. [00:45:40] So it's strange, but ultimately he got this answer within seconds recommending this combination. [00:45:48] Yeah, you have a quote in the piece from a former, I think it's a former safety researcher at OpenAI, where he calls it like growing a biological entity, which is like, you know, I know what he's getting at, and I don't, but I, and I think it's a little bit different than the kind of like sci-fi scary, you know, metaphors that some of these guys use, about a kombucha mother. [00:46:18] Well, it is kind of, because he goes on and he says you can prod it and shove it with a stick. [00:46:22] And it does kind of feel like a kombucha mother that you're sort of like poking at in my head. [00:46:28] You can prod it and shove it with a stick to like move it in certain directions, but you can't ever be, at least not yet, you can't be like, oh, this is the reason why it broke. [00:46:37] And it's interesting because one, again, I'm just like, whatever, I don't think this is it breaking.
[00:46:43] Like, and I think that kind of goes into kind of what he's saying about it being this sort of thing you can like kind of poke and prod. [00:46:50] And it's this kind of entity that is, you know, shaped by so many layers of interactions and individual interactions, as well as, you know, however many higher orders that goes. Um, to understand exactly why it says the things that it says for any given prompt, I think, is absolutely impossible. [00:47:19] And the implications of that are quite scary. [00:47:22] It's sort of like, it used to be such a whole thing that people would bring up. [00:47:26] I was probably one of those people, about how, you know, there were so many lines of code in Google search that no one even knew what was even in there anymore, you know, and same thing for like the YouTube algorithm, right? [00:47:40] There was like so many years of tweaking and layers and layers of code that there would be no way to even replicate, let alone like try to mimic, what it was doing now. [00:47:54] And so you could never understand why certain things were getting prioritized, I mean, up to a limit, right? [00:47:59] And you're sort of seeing the same thing, except that's built into this technology from the beginning, from the outset. [00:48:06] And in fact, the technology itself relies on that. [00:48:10] Absolutely. === Safety Testing Concerns (10:50) === [00:48:11] The researcher you're talking about there had headed up part of OpenAI's safety efforts, ended up leaving the company, and now has kind of turned back on it with a skeptical eye about how much testing it's doing. [00:48:25] But one of the things he said was, because of this entity that we're talking about, because of the way that it's built over time, you have to do so much safety testing to get it to a place where it's even as safe as it is now.
[00:48:41] But to get it to a safer place would mean red-teaming efforts, maybe bringing in other companies to try to do a ton of other tests, bringing in a ton of different nonprofits. [00:48:54] A lot of this stuff might be happening, but it's kind of all behind the scenes. [00:48:58] And it's clearly not as extensive as people would hope. [00:49:01] And one of the arguments that some of the legal cases are making against OpenAI now with its ChatGPT-4, which was the model being used by Sam, is that OpenAI rushed out that model with not enough safety testing in order to basically compete with Google on its release timeline. [00:49:19] So I think this is a really good example. [00:49:21] This specific model, this specific launch, these allegations of not enough safety testing. [00:49:27] Even as the company kind of moves forward and promises more testing and promises improvements, they've got a crisis of trust on their hands because of this. [00:49:36] Well, what's really interesting about what you just said too is that, because I remember this from one of the live shows we did last year, that's weird to say. [00:49:47] We were talking a lot about AI psychosis. [00:49:49] And it was right around the time that, Brace, when you referenced, people were kind of losing their shit about the new model and other stuff getting retired on ChatGPT. [00:50:03] And that model that you're saying, you know, was rushed out and released too quick, ChatGPT-4, that didn't have all these safety guardrails. [00:50:13] That's the model that the users wanted. [00:50:16] And that's kind of the problem that I'm talking about, is that all these people who were very attached to the product were attached to it for these reasons. [00:50:27] And certainly, you know, from what I've read on, like, Reddit forums where they were kind of going through all of their complaints about the different models, they were equally unimpressed by Grok.
[00:50:38] I think they thought Grok was too tawdry, which I think is funny. [00:50:41] But, you know, so again, it's kind of like this impossible, impossible problem to solve. [00:50:49] I mean, something that strikes me, or struck me rather, about the article that you guys wrote is that Sam Nelson, you know, the young man who overdosed, I think we might have this image, or maybe I should speak for myself. [00:51:01] I have this image in my head of somebody who has this reliance on ChatGPT or LLMs as somebody who's maybe very isolated and very alone in general, right? [00:51:10] Who doesn't have many friends or real-life contacts, as maybe a shut-in. [00:51:13] And the picture you guys paint here, and seemingly what his parents think as well, is that this was actually someone with friends, who was engaged in school and activities, you know, who had, I think, what appears from the outside to be a normal life, except he did struggle with substance use. [00:51:32] And I think we can see these extremes, right? [00:51:36] Of people who go through, like, AI psychosis, that guy who killed himself and his mother in upstate New York, or the guy that I think did suicide by cop, I believe in Florida, because he was attached to an AI chatbot. [00:51:52] And these people who are kind of like out there. [00:51:54] And then we see these people who are like, I can't remember the guy's name. [00:51:58] I think it's Jeffrey or Jeff Lewis, who was an investor in OpenAI who had a sort of public case of mild AI psychosis where he believed it revealed to him hidden truths, which he posted in LinkedIn-style Twitter threads. [00:52:15] And he sort of seemingly has just kind of moved past it and is still. [00:52:19] There's like a new one of those, though, every day. [00:52:22] It's happening. [00:52:23] I just sent you one earlier that was real good. [00:52:25] Yes. [00:52:26] Where people think that they are suddenly, yeah. [00:52:29] But like, it's what I'm saying.
[00:52:30] It's like a gradation, right? [00:52:31] And so like, on the one hand, we have like full psychosis. [00:52:34] And then on the other hand, like, I don't even know what you would call this, because it doesn't seem like he was crazy or anything. [00:52:39] This actually seems, from a certain perspective, like a logical use of an LLM, but it's somebody who maybe has connections and friends in real life. [00:52:48] But ChatGPT is this combination of trusted advisor, of medical source, of, you know, somebody who is always relentlessly positive, and, you know, in my life, at least, I don't have a lot of people telling me, wow, that's an amazing idea. [00:53:06] You're about to change the game. [00:53:08] Not that I couldn't use some things like that every once in a while, but no matter. [00:53:14] But you know what I mean? [00:53:15] Like you have this like incredible sort of positive reinforcement, which people can fall victim to. [00:53:20] Like, I think sort of regular people can fall victim to it. [00:53:25] I don't think that people are immune to it, even if you're aware of it happening. [00:53:29] There's a short documentary, I think, by More Perfect Union that came out like a month or two ago, which has a guy who seems like a normal guy just talking about like, yeah, I knew that it was sycophantic, but like it still worked on me and I went crazy. [00:53:41] You know, I think that there is this sense that like it is this perfect manipulation combination, which I really wish I could have said without rhyming, but it is this perfect combination of manipulable traits, or manipulating traits, that can be used. [00:53:59] I mean, there's a reason. [00:53:59] I mean, Liz mentioned time on device before, but like there's a reason this is trained to act like this. [00:54:04] They know that it does. [00:54:05] They want it to, because it keeps people coming back for more.
[00:54:08] And it seems, it just seems so fucked up. [00:54:12] I don't know. [00:54:12] It seems so horrible and cynical on their part. [00:54:17] And I wonder, I mean, Stephen, you seem like a young dude. [00:54:21] How old are you? [00:54:22] I'm 26. [00:54:24] Okay. [00:54:24] Well, not that young. [00:54:25] What? [00:54:26] I mean, you still got a future ahead of you. [00:54:28] I love that. [00:54:29] I was like, in my head, I was like, damn, that's young. [00:54:31] No, but I was like, you know, you've got a youthful glow to you. [00:54:34] You have skin. [00:54:37] But I'm saying, I'm not going to be able to do that. [00:54:38] I'm going to give you a little bit of a compliment. [00:54:41] We did the rare video one here. [00:54:43] They're all ChatGPT. [00:54:44] We're not putting out the video. [00:54:45] I'm just saying I can see on TV. [00:54:47] But like, I think like a lot of like 20-year-olds are like using this shit as a therapist, as a confidant, as a best friend, as some other variation of what Sam was using it for. [00:55:01] And it seems like a recipe for, on one end, maybe narcissism. [00:55:06] On the other end, maybe something more dangerous. [00:55:08] I mean, yeah, it could be pretty dangerous itself. [00:55:10] I don't know. [00:55:12] It's hard not to be bleak in my outlook on this. [00:55:15] I mean, one of the things you're getting at is it's like a secret machine. [00:55:19] Like a lot of people probably go to Google, you know, open up the incognito tab to look up something that they're kind of embarrassed about. [00:55:27] But one of the, I think, selling points of ChatGPT is you could just like have this long conversation. [00:55:32] No one else is going to see it. [00:55:34] You could talk about all the things that are embarrassing to yourself, or you could talk about the things that, you know, in this case, are like socially taboo or even illegal to, you know, talk about or, you know, interact with.
[00:55:45] I mean, I had a friend who was weighing how much he used ChatGPT and ended up telling us that he basically uses it before he goes on every date to sort of get amped up and get advice for himself before he goes on the date. [00:56:00] First of all, we were like, dude, can you not just give me a call and like ask for our own advice? [00:56:05] Like we're happy to help you in this case. [00:56:07] But it's like, people are embarrassed about this type of thing. [00:56:10] People are embarrassed about a lot of things in their lives. [00:56:12] I think the dark side of this technology is that vulnerability between people is this sort of like binding force and can be really instructive and teach you a lot about your friends and teach you a lot about yourself. [00:56:26] And it's providing us a place for vulnerability that is, you know, mediated by an algorithm. [00:56:33] Yeah. [00:56:33] It's dark. [00:56:34] And that's why I think, you know, you have these really extreme examples where someone has gone again and again and again about some of these secrets. [00:56:42] And then you also have the less extreme examples where people are just getting little hits of dating advice or, you know, not asking their friends about, you know, drug use and instead asking this technology. [00:56:55] There is a really broad range. [00:56:58] Well, it's funny that you mentioned your friend who used it to like get amped up before dates, because I think it's particularly telling that behavior that you used to do in a mirror, you now use ChatGPT for. [00:57:11] And it says a little bit about what these LLMs are doing.
[00:57:16] And that's a great analogy, because there's a really dark outcome where that mirror is now, you know, basically like the hyper-developed social media algorithm, where instead of it being like the cat videos you can't turn away from or the rug cleaning videos you can't turn away from, it's someone telling you, yes, yes, yes. [00:57:32] Here's how to do that, that, and that. [00:57:34] And so that mirror, which maybe, you know, in the physical mirror sense, might be a chance for self-reflection and some doubt and curiosity or something, becomes like confirming all of these things you already maybe thought and like blowing them up into bigger things. [00:57:49] And the broad applicability, I think, is really important, Brace, because it's not just someone who's going to commit suicide or do other, you know, incredibly bad things. [00:57:58] It could also be someone who's looking for parenting advice. [00:58:01] And they're like, hey, my kid is like kind of a freak in this way. [00:58:03] Should I do this? [00:58:05] And then suddenly you're in a deep hole where you're doing this insane shit that like no one would ever recommend you do. [00:58:11] It's just, I think that broad applicability and how there is this like two-way relationship that is designed just for engagement is going to come out in all these different ways. [00:58:25] And that's why we structured the story the way we did, where our story starts with ChatGPT saying, no, I'm not going to give you that information. [00:58:33] And it ends with ChatGPT saying, yes, combine Xanax and kratom. [00:58:38] And that 18-month difference can play out in so many different ways for people. [00:58:44] And I think in many ways that are possibly dangerous, that as you deceive ChatGPT or whatever AI you're using, the AI is going to deceive you, and you're going to act in the real world in different ways because of it. [00:58:59] Well, fellas, thank you so much for joining us.
=== Deception and Danger (02:29) === [00:59:02] We will link to the story below or in the episode descriptions. [00:59:06] I don't know why I'm saying below. [00:59:07] I'm sitting here talking, whatever. [00:59:09] You guys know what I mean. [00:59:09] The story will be linked below this episode. [00:59:12] And yeah, thank you so much for joining us. [00:59:14] Thanks, Brace. [00:59:15] Thanks, Liz. [00:59:16] Yeah, thanks so much for having us. [00:59:32] Ladies and gentlemen, don't tell your secrets to anybody. [00:59:36] No, tell them to me. [00:59:38] Oh, tell your secrets to, hit the tip line with your secrets, but I'm serious. [00:59:41] It's don't tell your secrets to anybody. [00:59:44] You could tell them to me. [00:59:45] You could tell them to Liz. [00:59:47] We need to develop a little LizBot. [00:59:49] No, we don't. [00:59:50] Yeah, you should do that. [00:59:51] Like an AI that you create that is just like, oh, yeah, tell your secrets. [00:59:56] And I'm like, I'll give you feedback on them, but it just collects people's secrets. [01:00:00] Yeah. [01:00:00] That's a good idea, actually. [01:00:02] Actually, we should do that. [01:00:03] Yeah. [01:00:04] LizBot. [01:00:04] LizBot. [01:00:05] Maybe I'll do that. [01:00:06] I'll call you. [01:00:07] No, no, no. [01:00:08] You can't be. [01:00:08] No, no, no. [01:00:09] That's too many levels. [01:00:10] I want to say that. [01:00:10] You can lose your name and likeness. [01:00:12] No, no. [01:00:13] No, you'll be, you'll have some rights. [01:00:15] This will be my maternity leave project. [01:00:18] You're going to vibe code LizBot? [01:00:20] Eliza. [01:00:21] Yeah, vibe code. [01:00:22] Liz, if I catch you vibe coding, I'm bringing the house down. [01:00:27] I'm bringing the house down. [01:00:29] There will be no more New Jersey. [01:00:31] You guys will have to move back to old Jersey. [01:00:33] You can't vibe code. [01:00:34] You don't know the first thing about it.
[01:00:36] I used to code in high school. [01:00:39] What did you code in high school? [01:00:41] Websites. [01:00:43] Don't be doing that. [01:00:47] It was a lot more simple back then. [01:00:48] It was simple back then. [01:00:52] HTML1. [01:00:55] No, it was a little more sophisticated than that. [01:00:57] It's HTTP. [01:00:59] All right. [01:00:59] It's HTTP. [01:01:01] CSS sheets in my time. [01:01:03] All right. [01:01:03] Yeah. [01:01:04] Well, the only C's in my high school career were the ones that the old principal was giving out to me. [01:01:12] I guess they don't assign grades now that I think of it, which shows how badly I did. [01:01:18] Sometimes you got to work things out while you're speaking instead of in your own head. [01:01:21] You know what I'm saying? [01:01:23] Everyone, I'm Liz. [01:01:24] My name's Brace. [01:01:26] I'm producer Young Chomsky. [01:01:28] And this has been Truanon. [01:01:29] We will see you next time. [01:01:30] Bye-bye!