True Anon Truth Feed - Episode 387: A/G/I? Aired: 2024-06-17 Duration: 01:37:10 === Why We Left Google (14:26) === [00:00:00] Hello, world. [00:00:02] This is version 6.738L439, updated version of Liz, co-host of True Anon Computer AGI version. [00:00:20] Hello, world. [00:00:22] I am here with my dear co-host, Brace Belden. [00:00:27] Hello, I am from China. [00:00:51] I'm one of the Chinese AGIs. [00:00:54] No, I got it. [00:00:55] Okay, I'm just telling you. [00:00:56] You're lucky I didn't really put my soul into that. [00:00:59] Well, it's crazy that you fucking did the eyes. [00:01:02] Okay, yeah. [00:01:02] First of all, Liz is misleading you there because I am almost always doing the eyes. [00:01:07] I just released my hands for that. [00:01:09] Okay, a little David Bowie move. [00:01:11] Did David Bowie do that? [00:01:12] Oh, yeah. [00:01:13] David Bowie. [00:01:14] David Bowie. [00:01:15] Me and Bowie we used to well. [00:01:18] He was recording now We used to shoot us and we'd get together with who was the fucking guy I was talking about earlier I guess later in this episode. [00:01:27] You know what I'm talking about? [00:01:28] Sean McCowie. [00:01:29] Me, Bowie, and McCowie. [00:01:31] Oh, you should have seen us in Berlin. [00:01:34] I would have loved. [00:01:36] I wish Sean McCalloway was like an ancient creature that lived from the beginning of the world. [00:01:40] Oh, don't say that. [00:01:41] You know why? [00:01:41] Because, first of all, Sean McElwee, if you're listening. [00:01:44] We rock with you. [00:01:45] We would love to talk to you. [00:01:46] Where are you? [00:01:47] Genuinely, Sean, if you want to. [00:01:49] Call us. [00:01:50] Hit the tip line. [00:01:51] I know I have to. [00:01:52] Don't hit the tip line. [00:01:53] Just, you can get our number. [00:01:55] I will give you my number. [00:01:56] Yeah, it's this. [00:01:57] 415. [00:01:58] No, Don't even joke about that because you once showed it on the internet. [00:02:05] That was out of my control. [00:02:06] That was the AGI. [00:02:07] That was not my interesting. [00:02:09] Yeah, a girl is betraying. [00:02:12] I had my eye chatting. [00:02:14] It came up. [00:02:14] And for some reason, your name wasn't saved in my contacts. [00:02:18] Not the first woman. [00:02:19] I'm not going to tell me that. [00:02:20] Not the first woman to say, oh, who is this? [00:02:23] After I shoot a text. [00:02:24] Yeah, you should be happy that it wasn't saved under a big, mean, annoying, rude guy who I really care for. [00:02:32] You think I'm big? [00:02:34] Physically. [00:02:34] Thank you. [00:02:35] Yeah, I know. [00:02:35] That's cool. [00:02:36] Because I just, I get called something else often. [00:02:40] All right. [00:02:41] Hello, everyone. [00:02:42] Hello, my name is Brace. [00:02:46] I'm just Liz. [00:02:47] Just Lit. [00:02:48] I'm just Liz. [00:02:50] We're going back to like, since it's like we're in 2008 culturally again, 2008, 2012, 2014. [00:02:57] Everyone knows a dro. [00:02:58] I'm going back to, I'm going to go like re kind of re-quirk up. [00:03:03] I'm going to go a little manic pixie. [00:03:05] And I'm just, I'm just, I'm just Liz. [00:03:08] I'm Terry Richardson. [00:03:10] Come here. [00:03:12] And we're, of course, joined by producer Young Johnson. [00:03:14] And this is Tronan. [00:03:15] Hello. [00:03:16] Hello. [00:03:17] I'm whatever happening. [00:03:19] You're lots of things. [00:03:20] What? [00:03:20] You're lots of things. [00:03:21] Me? [00:03:22] Remember your alters? 
[00:03:23] Oh, fuck. [00:03:24] Yeah. [00:03:25] That episode was good. [00:03:27] You know what would be a good alter for you? [00:03:29] What? [00:03:29] Sean McElwey. [00:03:30] Sean McElwey. [00:03:32] Actually, no, David Shore. [00:03:34] I don't want him to talk. [00:03:35] I don't want to talk to him, though. [00:03:36] I don't care. [00:03:37] Yeah, he's David Shore. [00:03:38] John Barber. [00:03:40] McCalwe, I want to talk to, because he's a genuinely interesting figure to me. [00:03:43] So we have some questions. [00:03:44] I just am like. [00:03:45] Not for the show, but just like in like just in general. [00:03:48] Well, first of all, how much did you make from that gambling? [00:03:51] But I just, I like it. [00:03:53] When a guy gambles and loses, for me, that's always means that he's about to gamble again big and win big. [00:03:59] Because, of course, the player always wins. [00:04:02] And so you just got to keep... [00:04:03] I know that the gambler doesn't stop gambling just because he crapped out. [00:04:07] You're out there hustling for more money to bring back to the table. [00:04:09] You feel me? [00:04:10] No, this is right. [00:04:11] It's right. [00:04:12] Listen, this is a PSA. [00:04:13] The government forces us to say when I talk about gambling, it's safe, it's legal, it's fun. [00:04:18] It's safe, it's legal, it's fun. [00:04:19] No one can hurt you. [00:04:20] No one will hurt you, and you can make a lot of friends doing it. [00:04:22] There's a lot of antisocial stuff. [00:04:23] You're drinking alone at the bar. [00:04:24] Ooh, ooh, ooh, have another fucking corona. [00:04:26] You're fucking, you're smoking weed at your house playing Grand Theft Auto or et cetera. [00:04:32] You're doing cocaine in the bathroom. [00:04:34] Yeah, like it. [00:04:35] All three of those things you could do in a casino. [00:04:37] Do them in a casino. [00:04:38] The drinks are free. [00:04:39] The weed is better. [00:04:41] And all the games are on the computer anyway. [00:04:42] The games are on the damn computer. [00:04:43] You're playing video games there, except in this case, you don't win a, maybe a racial slur or just some points. [00:04:50] You actually win money. [00:04:51] Or lose a 401k. [00:04:53] Lose a 401k, but think about it this way. [00:04:55] If you had a 401k, you can get one again. [00:04:57] Lose a 401k, but gain life experience. [00:05:01] And life expectancy is up, too. [00:05:03] So every time you lose all your 401k, okay, you add a couple years to your working life. [00:05:08] And soon with life extension and the guy that lacers his face off but calls it like biohacking. [00:05:15] Oh, Brian Johnson. [00:05:18] Yeah. [00:05:19] You know, we'll all be living forever, so it's all fine. [00:05:21] From the Rolling Stones. [00:05:23] I'm Brian Johnson while he was on the Beatles. [00:05:27] Fuck, what was I? [00:05:28] I was thinking last night. [00:05:29] I was like, was I doing it to you last night where I was saying a guy's name? [00:05:33] No, no. [00:05:35] I'll tell you guys afterwards. [00:05:37] I just kept saying the word spoon. [00:05:38] Great story. [00:05:39] Whoever pulled the accent. [00:05:40] I love that shit when they talk like, Liz. [00:05:43] Stop. [00:05:44] Stop it. [00:05:46] Do you like the episode? [00:05:47] Do you like my accent? [00:05:49] No. [00:05:50] You don't like it? [00:05:51] No. [00:05:52] She's like, can I? [00:05:56] We have a guest today. 
[00:05:58] We are joined by fan favorite, returning guest, Max Reed, Max Soon to be read, and author of Read Max, available at maxread.substack.com. [00:06:13] Look at that fucking memory right there. [00:06:15] Look at that. [00:06:15] And well, you just flip them around 18 billion different ways. [00:06:18] You're about to get it right one of those times. [00:06:21] And we're talking about what? [00:06:23] AI, computers. [00:06:24] TI, China. [00:06:26] China. [00:06:27] Not in the Trump way. [00:06:28] We say the Trump way. [00:06:29] We're going to the Trump way. [00:06:31] And all sorts of nonsense. [00:06:32] Elon. [00:06:33] We're really, it's a loose little conversation, but it's fun. [00:06:37] But it was fun, and we think you'll like it. [00:06:39] So let's check it out. [00:06:40] Let's boot up that Sora. [00:06:55] Don't fucking do that again. [00:06:58] This is no coughing zone. [00:06:59] You just come in here. [00:07:00] You know, this is a closed room, our new studio. [00:07:03] All right, you're coming here. [00:07:05] This is actually a recognized rendition site. [00:07:07] Oh, I was wondering about the paint. [00:07:09] You don't like it? [00:07:10] No, no, I love it. [00:07:11] It makes me feel like I'm being tortured. [00:07:12] It's like in a cool way here. [00:07:13] At home. [00:07:14] Yes. [00:07:14] Enough to just cough. [00:07:15] Max is sitting on my lap right now. [00:07:18] And he's doing it sort of, he thought he'd be the cool professor. [00:07:21] So I'm sitting straight in the chair. [00:07:22] He is straddling me like a cool professor with his arms over me and his head resting on top of mine. [00:07:28] Obviously, I'm 4'10. [00:07:29] Max is the towering height of 5'7. [00:07:31] And so he is able to do that. [00:07:33] And he comes in here. [00:07:34] He's staring directly in Liz's eyes. [00:07:36] I'm forced to stare at the wall. [00:07:38] I'm still confused on the, like, who's where and what. [00:07:42] This is, this is. [00:07:43] Was this not cool? [00:07:44] Was this not, was I not supposed to do that? [00:07:45] Make yourself at home. [00:07:46] Make yourself at home. [00:07:47] Mikasa Tsukasa, Mihermano. [00:07:50] This studio is so beautiful. [00:07:51] I did feel so comfortable here. [00:07:53] The AC is the perfect temperature. [00:07:54] I just wanted to be close to you. [00:07:56] Do you feel comfortable enough to kiss me? [00:07:57] Is that a dare? [00:07:58] No, it's just a question. [00:08:00] I don't want to do it. [00:08:01] Not in front of everybody else. [00:08:02] Fair enough. [00:08:03] Welcome to the show, Max Reed. [00:08:06] After the episode, I'm going to call you Max Red because it'll be done. [00:08:09] He is the author of, I'm a subscriber to, and I can't even fucking remember the name, even though it's in front of me right now. [00:08:16] He is the author of his sub stack called Read Max. [00:08:19] Yeah, it is just my name reversed. [00:08:21] Read Max. [00:08:22] I prefer to watch Max, particularly the show The Throne of the Dragon. [00:08:26] And now you can listen to him on this show. [00:08:30] Welcome back to our, you know, inching closer to official tech correspondent. [00:08:35] Oh, that's something here. [00:08:37] Thanks for watching. [00:08:37] Well, we got to do a couple more. [00:08:39] I know. [00:08:39] It's inching. [00:08:40] And then we got to bring it up with the board. [00:08:42] Board's not going to like that. [00:08:44] Board is not going to like that. 
[00:08:45] I have to say, true non-readers subscribe to the newsletter in droves afterwards. [00:08:50] Yeah, there's a huge crossover between our listenerships. [00:08:53] Say the thing they were most interested in was: I mentioned that I own a Honda Fit and that I have a kid. [00:08:59] And a lot of your guys emailed to ask about putting car seats in the Honda Fit. [00:09:03] No way. [00:09:03] Yeah, so you've got like a really bad listening. [00:09:05] Yeah, sort of edgy, you know, an edgy conspiratorial audience of 38-year-old Honda Fit concerned with mileage, they're concerned with safety for their children. [00:09:13] I would have thought there would be a lot of people who took their cell phones too close to their testicles and drove themselves to it's good to know that we've got those guys out there. [00:09:21] I mean, this is sort of it's like the plane with the red dots. [00:09:23] They didn't email me because they don't know. [00:09:24] Exactly. [00:09:25] Yeah. [00:09:25] Is that true? [00:09:26] Before we get started here, we're talking about technology today. [00:09:29] Is that true about the phone near the balls? [00:09:32] I have a son, and my phone's been near my balls almost my entire life. [00:09:36] Okay. [00:09:36] That's anecdotal, obviously. [00:09:37] That's anecdote. [00:09:38] Yeah, but you seem. [00:09:39] I don't want to be disrespectful. [00:09:40] Does any of this science? [00:09:42] Well, I don't know. [00:09:43] That's why I'm asking the scientists here. [00:09:44] Unless he's not one, unless he's just a journalist. [00:09:46] Technologist. [00:09:47] But I will say, Matt, you seem like you have like a good gene or two in that's nice of you. [00:09:52] Thank you. [00:09:52] I am, and no disrespect to my father, who is a listener, I am basically made out of parchment and dust. [00:09:58] I have the same chemical composition as a mummy. [00:10:01] Inside of that mummy is a lot of microplastics. [00:10:04] Uh-huh. [00:10:05] And a beating heart. [00:10:06] There you go. [00:10:06] I actually bet that the cell phone radiation and the microplastics cancel each other out. [00:10:10] And that's why I can have it. [00:10:11] Interesting. [00:10:12] I just eat plastic. [00:10:13] Maybe the cell phone radiation activates the microplastics in like a new way that we don't yet understand. [00:10:18] Yeah. [00:10:18] The human body is a laboratory, and I am Dr. Well, nope, not the angel of death. [00:10:24] No. [00:10:26] Who's another famous doctor? [00:10:27] I can only think of, you know, you know what? [00:10:29] They kind of Dr. Einstein. [00:10:31] Dr. School. [00:10:33] He's more like the sherub of death. [00:10:36] Max, welcome to the show. [00:10:37] Thanks for having me, guys. [00:10:38] Thanks for coming back on. [00:10:39] We're talking tech. [00:10:40] And just to get this out of the way, Elon Musk, he's in the news again. [00:10:45] I got to tell you, an article came out. [00:10:48] He's in the news again. [00:10:50] An article came out kind of timed with the vote to if the shareholders are going to give him his, whatever, $55 billion pay package that got denied by the courts. [00:11:01] Wall Street Journal puts out an article being like, this is a horny gentleman. [00:11:05] Yeah. [00:11:05] He is fucking his employees. [00:11:07] Yeah. [00:11:07] Did you see the text? [00:11:08] Harassing. [00:11:09] And harassing. [00:11:10] And harassing his employees. [00:11:11] Yeah. 
[00:11:12] I mean, the part that I remember the best is that he would text them late at night and say, are you there? [00:11:16] And then just text them their name like 20 times and then say, I'm just going to trank out. [00:11:20] If you're not going to respond, I'm just going to trank out. [00:11:21] I'm just going to trank out. [00:11:23] Trank out. [00:11:23] It's literally like, it's literally like, if you don't write back, I'm going to kill myself. [00:11:27] I'm going to kill myself. [00:11:28] I'm going to kill myself. [00:11:29] Like, he's like that guy. [00:11:30] That's a Liz ass text. [00:11:32] She hits me with that every time. [00:11:34] What are we talking? [00:11:34] What are we meeting today? [00:11:35] I'm going to kill myself. [00:11:37] That is, I know, the 20 times with the name. [00:11:40] It's kind of psycho. [00:11:41] Has any of his text messages? [00:11:42] No, they have been during some court cases, but I'm like, I really would love to see some kind of like court documents released with evidence of just like, I wanted to see how he texts normal. [00:11:52] Someone released the DMs. [00:11:54] The Twitter lawsuit over closing the deal. [00:11:57] There's a bunch, but those were mostly embarrassing for the people he was texting with. [00:12:00] Yeah, because it was all like, it was like David Sachs. [00:12:03] Yeah, and Calcanis. [00:12:04] Yeah. [00:12:05] He wants to suck your dick so bad if you want to please be in charge. [00:12:09] No, we want the ones released between Elon and girls, basically. [00:12:12] Yeah, and women. [00:12:14] Even like ones that no response. [00:12:16] I want to see the left on red ones. [00:12:18] Yes. [00:12:18] Because, yeah, you know, because he's going to keep going back for more. [00:12:22] Because he kind of like, Elon's big thing about the media, like, because Elon, Elon's big idea for Twitter is that Twitter will replace news, even though that's just a fundamentally like news websites. [00:12:35] Even though that's a, just a, it's, it's, I would hope that our listeners understand. [00:12:39] I'm now not defending the mainstream media in some particular way here, but just like that is a difference in just in every way. [00:12:47] Like, those are two different kinds of things. [00:12:49] One is a social media website where there's viral Dom Lucre videos of like whatever Ellen DeGeneres' ex-lover being risen from the dead on a street in LA. [00:12:58] Wait, really? [00:13:00] Well, there's like a viral video of like a lady who has like, yeah, you know, when she kind of like lurches up. [00:13:06] No way. [00:13:07] It's just not pretty, but she's done. [00:13:08] Dom shared that? [00:13:09] Well, Dom just shared that. [00:13:11] But there's like a fundamental difference between that and like history horror or whatever, viral Twitter accounts. [00:13:18] A reported thing. [00:13:19] Because he was trying to do like a sub-stack like clone for a second with like the subscriptions, but that just never took off. [00:13:26] And now I just, I think part of the reason he hates the media so much is just because those are the only people who are going to be like, did Elon Musk offer you a horse to get jacked off by him? [00:13:36] I mean, he really, he like, he knows reporters. [00:13:38] He specifically hates a bunch of reporters. [00:13:40] I mean, this is probably true of like any sufficiently rich person that you've got like two reporters that you despise. [00:13:44] I also hate specifically, yes. 
[00:13:49] He's got a list, you know, guys I will never talk to. [00:13:51] And he's, you know, like I've heard that, so there's a new book, one of, I think, three or four books coming out about the Twitter deal. [00:13:57] Ryan Mack and Kate Conger wrote it. [00:13:59] And Ryan Mack was the BuzzFeed reporter who reported out the like the PETO libel lawsuit over the Caves thing. [00:14:06] And apparently Elon is like just anybody he talks to before he talks to them says, do not talk to Ryan Mack, whatever you do. [00:14:14] Like keep it away. [00:14:15] So he's got in his head that Ryan Mack, who's like a nice guy, a Times reporter, like not an evil mastermind in particular. [00:14:20] Not to insult Ryan Mack, I'm sure he's got the capability to be an evil mastermind if he wanted to be one, is like, is in his head. === Hitler's Twitter Legacy (11:34) === [00:14:26] He's got this kind of thing. [00:14:27] And I think that Elon has this idea that Twitter is going to like remove these people from all these evil journalists, you know, triple parentheses from the equation. [00:14:38] And, you know, it's like a twisted version of the citizen journalism idea that was so popular in the late 90s and early 2000s. [00:14:46] But I think he like when he took it over, I mean he understood something like fundamental about Twitter, which is that it does control a certain amount of like initial storytelling that then sort of like from the little like, you know, horrible incubator that is this like social media site gets diffused and like long-tailed out throughout the culture via mainstream media and like other outlets. [00:15:10] Like I do, and that like a lot of kind of, you know, there was a, it seems like he, I mean, my read of it was that he wanted to kind of control that or he saw power in controlling that. [00:15:20] Whether it was like ideological. [00:15:22] I mean, I'm of two minds about like how heartfelt some of his convictions are. [00:15:29] Like I think he's such a charlatan that I don't know. [00:15:32] Like I think, I think that he is like a guy who's very much just like always trying to find out what's cool on his phone. [00:15:40] And like that has led him into like becoming a race realist as opposed to the other way around. [00:15:46] Do you know what I'm saying? [00:15:48] But I think that what every change that he's made to the website has been to close it off rather than open it up and basically limit all of the things that made it fun, cool, and like, and a place where all of that kind of fun, magic stuff was able to kind of happen. [00:16:06] Yeah, I mean, I think the way to think about Twitter is like, there was a while there where I think people sort of understood it as a mirror for the whole world. [00:16:12] You know, it was like, what happens on Twitter is a reflection of the way the world works. [00:16:15] And I think everybody's been basically disabused of that. [00:16:18] But I think it's true to say that Twitter was like a reflection of kind of elite discourse and networks. [00:16:26] And it's like elites in journalism and media, in politics for sure, in tech, in entertainment, all four of which industries, and finance too, even to some extent. [00:16:36] And the NBA, of course, all six of which industries are very closely intermingled. [00:16:40] And it's sort of, so like you say, if you can get circulating among that group of elites a particular kind of, you know, whatever, a politics or an argument or an idea, it will eventually sort of trickle out into other places. 
[00:16:54] Yeah, and it's not even people trading on ideas so much as like trading on social connections. [00:16:59] And just there was a lot of people who, there was like that early wave where people, everyone was just like making a career on Twitter, whether it was in the media or becoming in marketing or in finance. [00:17:09] And like You say, or in the NBA very famously. [00:17:14] You know, well, we should, I will shout out Kevin Durant as one of the greatest posters in the history of Twitter outside of whatever silo. [00:17:22] He's up there, top, you know, top five for sure. [00:17:24] But now it's just because a mix of people leaving, Elon's like moves, and it's just totally broken. [00:17:33] Yeah, I mean, it's funny. [00:17:34] I was talking with somebody about, like, this is a sort of narcissistic way of thinking about it, but as a journalist, I used to spend a lot of time searching links to my own stories because it was a good way to see like what the response was, basically, if people were sharing, if people were thinking about it, what they were saying about it. [00:17:48] And that function of Twitter, which was my friend John used to call it a context. [00:17:52] Like, it's a context. [00:17:53] Like, what Twitter functioned for me was it was a whole context for understanding what I did, where my work was, all these things is like completely gone. [00:18:01] And in some ways, that's really freeing. [00:18:03] And it's nice. [00:18:04] Like, the nice thing about my sub stack right now is that like people email me to say they liked a piece instead of like tweeting something weird about it. [00:18:10] But it doesn't. [00:18:11] It's old school. [00:18:11] Yeah, it's very, it's like when I first started blogging. [00:18:13] But it's a different kind of world. [00:18:15] And the absence of that context, you know, it's a mixed bag, I guess. [00:18:21] It's nice to not have the kind of eternal pressure to do the Twitter thing, but it also means you're not quite sure who you're writing for and where you're writing to. [00:18:29] Whereas Twitter itself is like the people on it. [00:18:32] Twitter is a much more self-contained thing now that links are throttled and there's the sort of network of blue checks like Dom Lucre and these guys that are sort of performing for each other in this funny, weird way. [00:18:42] It is, it is, it does just seem like it's just a fundamentally different website. [00:18:46] And not like it was a healthy one before, but at least it like, it was, it was, and maybe it's easier to make money there now, but like, you know, it would be more difficult for somebody who has like a little bit of self-respect to kind of use it in any kind of way. [00:19:04] Like it's, it's, I, I mean, I just have the true, I just use the, oh, well, I have a social media website that we contract with from Egypt. [00:19:11] Sure. [00:19:11] Does the true and on Twitter for me? [00:19:14] But, but I, so I just see, I only follow these two and a couple of freaks, like Bill Clinton or whatever. [00:19:20] And so I just see what it gives you. [00:19:22] And I'm just, it's insane to me. [00:19:24] And I will say, so one of the reasons we're bringing this up is because Elon finally went too far and he took away the ability to see what people have liked. [00:19:35] And I think that's like, you can't even, I was actually going to check it right now. [00:19:38] I don't think you can check. [00:19:41] Nope. 
[00:19:41] It just shows retweet reposts and quote tweets on other people's tweets too. [00:19:45] Which has made it also very difficult to see quote tweets. [00:19:47] Yes. [00:19:48] So you can't see people's likes. [00:19:50] And I want to say this. [00:19:52] Obviously, for legal, ethical, moral, and just general self-respect reasons, we're not journalists here. [00:19:59] I mean, maybe you are what of a do we have to do? [00:20:01] I'm a small businessman. [00:20:02] I'm a like the rest of you. [00:20:03] I'm a smaller person. [00:20:04] We're small businessmen here. [00:20:06] And women. [00:20:07] And well, it's kind of becomes, it's a gender neutral term. [00:20:11] But it is one of the most crucial aspects of Twitter is finding the Twitter of either a famous person or even a non-famous person, like that guy who confronted me at the park that I just didn't like. [00:20:24] And looking at what's the egghead, looking at their likes and seeing the pornographic things, the lies, because they're liking something, but they're saying something else in the timeline, or the Hitler style posts that they've liked. [00:20:43] Or like Finance Hitler guy. [00:20:45] Finance Hitler guy. [00:20:47] And now that has been taken from us. [00:20:50] And so this is an important tool for anybody who likes to find out something they don't like about another person, which is really what Twitter is for. [00:20:59] And just like, I find it fundamentally like antisocial. [00:21:03] It's like he's taking out all of the social elements of the website, what seems like be out of fear for his and his compatriot's own ego and like their own kind of projection of who they want to be on the website. [00:21:17] Yeah, I mean, I think that like basically all of the big product changes he's made. [00:21:20] So he's like taken away the ability to see likes. [00:21:22] Like you said, he's made it harder to see quote tweets. [00:21:25] And he's also, like, blue checks, who you now pay a subscription to get a blue check, get priority and replies to tweets. [00:21:33] And on the timeline. [00:21:35] And on the timeline in general. [00:21:36] And from my perspective, it's a really easy way to just like, the theory behind all of that is basically it's like reducing the attack surface on rich and famous people. [00:21:45] Like all the ways that all the ways that Twitter could be annoying to them or dangerous to them. [00:21:49] Yes. [00:21:50] Musk is like taking each one of those away. [00:21:52] So now you can presumably go on. [00:21:53] You can like as many Hitler posts as you like. [00:21:55] You can like as much anime porn as you like. [00:21:57] Hold on. [00:21:59] You can say whatever you want and your replies are just going to be sycophants, basically, or like OnlyFans girls being like, I fucked him too, or whatever. [00:22:07] Yeah, yeah, yeah. [00:22:08] I love that shit. [00:22:09] And you don't have to worry about some Gen X lawyer with a dude quote-tweeting you with some sassy, critical tweet or whatever it is. [00:22:19] The dreaded ratio. [00:22:21] Right, exactly. [00:22:22] It's just so fake. [00:22:23] There's a sort of sad... [00:22:25] I mean, this was something that was generally valuable at Twitter. [00:22:27] Like you're saying, Bris, like seeing people's horny likes was like such an important way to connect with you. [00:22:34] Listen, this show is nothing if not a freak show, right? [00:22:37] We look at the motherfucking freaks out there. 
[00:22:39] And oftentimes these freaks have little Twitter accounts and I look at them and I say, interesting you follow these females like that. [00:22:45] Interesting you're liking the posts of these females like that. [00:22:48] And that now is just, okay, cool. [00:22:51] Now I don't know. [00:22:51] Now I just have to scroll down your media tab and like everything myself. [00:22:54] Yeah, exactly. [00:22:55] It's like, okay. [00:22:55] This is like, it's Instagram. [00:22:57] As long as Instagram doesn't take it away, I think I'll be okay because Instagram is the real one where if you, for a while, it was like floating other people's likes into my feed. [00:23:04] And I would learn that like a guy who worked in marketing at the company I worked for was just non-stop butts, just like butts, buts, butts, butts, butts. [00:23:11] And, you know, you got to respect that a little bit. [00:23:13] There's something sort of. [00:23:14] What do you think the mindset is on that? [00:23:16] Because if I see a butt man, these don't belong to me. [00:23:19] I'm not the ass man. [00:23:20] I think there's been a mistake. [00:23:22] Believe me, I'm familiar. [00:23:25] But I like if you're on, if you're on Instagram and it's your little Instagram page, right? [00:23:31] That's your picture of you, maybe you were your cousin, you on a beach or something. [00:23:35] Yeah. [00:23:36] But you're hitting the fucking explore page. [00:23:38] Yeah. [00:23:38] And you're, but, but, but, like, I don't understand why you're liking it. [00:23:42] Like, it's, this is a, this is 60,000 likes on it. [00:23:45] This woman is not going to see that you liked her butt pick and hit you within DMs. [00:23:50] It just, I don't, I genuinely, and I'm not saying this is like, I don't understand the mindset. [00:23:55] I literally don't get it. [00:23:57] I feel like, is it worse? [00:23:59] Is the like worse than the comment? [00:24:01] The comment is beyond. [00:24:03] Like a fire emoji, Dua Lipa. [00:24:04] That's like when people are like, I don't understand how the German people did nothing, right? [00:24:09] It's like, I don't get what the comment of like, fire fire emojis under Dua Lipa. [00:24:14] But the comment, I mean, you can see the comment leading to a DM. [00:24:17] Like the comment draws attention in a way that like doesn't DM from my fires. [00:24:21] There you go, for example. [00:24:22] The pyre I built, I feel like. [00:24:24] We should get a horny guy on the show and ask him. [00:24:27] If I had my drums, which Liz prevents me from having, our shows are. [00:24:32] If I had my drothers, this show would be nothing but Liz and I sitting across from a psychopath and asking them about their life. [00:24:41] Well, I'm close to this guy in marketing. [00:24:43] I'm going to get this guy in marketing in here to come talk to you guys. [00:24:46] But it's fundamentally, you're right, Liz. [00:24:47] It's anti-social. [00:24:49] And it's just. [00:24:51] Well, I also, it plays into this idea that, like, you know, all the conservatives have this idea that's like, oh, everyone would like these posts. [00:24:58] If it weren't for the soft social forces, the specter of cancellation that hangs over, they would be like furiously, you know, liking like Finance Hitler's posts and, you know, the craziest Nazi butts ever. [00:25:14] And like, oh my God, we should, why don't we just like eradicate black people? [00:25:18] Hello? [00:25:19] No one thought about this. [00:25:20] Like, that's what lives in these guys' minds. 
[00:25:24] And I feel like that literally does animate a lot of this. [00:25:27] I mean, this is literally like the head of engineering when he was explaining the faves, the removal of the faves, he used basically that language. [00:25:33] He was like, people are afraid of liking edgy content because of trolls or like. [00:25:37] Going on Twitter, I really don't think that that's a problem. [00:25:39] No, no, I haven't noticed any of that being. [00:25:41] Well, I mean, let's just call a spade a spade here. [00:25:45] If you are liking, if you are a Nazi on the internet and you're like, I love you, Aryan maiden, whatever, waifu, dot, dot, I'm like liking all your fucking anime pictures or whatever. [00:25:56] And it's like, your real name are easily traceable to you. [00:25:59] I'm sorry, but buy the ticket, take the ride. === Generative AI Wave (15:00) === [00:26:01] You know what I mean? [00:26:02] You can easily start a Twitter profile and just call it like Bob Dole, whatever, and have a blank picture and be like Nazi, liking Nazi anime waifu. [00:26:10] It just, I mean, I just, I think it takes some of the, I don't know how to pronounce this, but Joy de Vivre out of it. [00:26:17] It takes some of the Joy de Vivre out of the experience. [00:26:22] Yeah. [00:26:23] I do what I want. [00:26:25] I mean, the funny thing is like that this theory, there's like the theory is like, if we remove the ability to like hide likes, then we're going to see the actual size of the race and IQ contingent and it's going to be 10 times as big as you thought. [00:26:37] Everybody actually believes in race and IQ, but they're afraid. [00:26:39] But I think they've like, it's actually the reverse. [00:26:42] Like they think more people believe in this stuff because soft social like oppression means that when somebody says that thing to me, instead of being like, go fuck yourself, I sort of nod and like walk away. [00:26:52] And I think that guy's probably liking some things. [00:26:54] I'm not, I promise. [00:27:05] Well, speaking of race and IQ, as the two of you so often do. [00:27:13] Oh, I can't finish the rhyme due to that juice I drank giving me too much sugar. [00:27:17] This is what AI is for. [00:27:18] Well, that, what a brilliant segue. [00:27:21] We're here to talk about a lot of things, but specifically we're here to talk about AI. [00:27:28] And listen, in the past couple of weeks, there have been some stories in AI that I've been forced to read. [00:27:38] One of which is a bunch of people freaking out about Adobe's updated terms of service, where they put in this nebulous language where it appeared that they might be able to train AI using whatever you upload into Adobe, which obviously you can spend two seconds and imagine the many, dangers of that. [00:27:58] And AI, or excuse me, Apple Intelligence's rollout from Apple. [00:28:05] And I just got to tell you, what's going on here? [00:28:11] We talked about, I think last time you were on, the sort of lateral move that a lot of the NFT people made of jumping off the ape ship and right onto AI. [00:28:19] And they made this sort of smooth transition there. [00:28:21] I was actually pretty impressed by it. [00:28:23] I was hoping there'd be like a few months where I'm like, all right, we don't really got to hear about it. [00:28:27] But it's now they're saying, again, we're doing the AGI thing again. [00:28:30] And I just want to talk about what is going on. [00:28:33] Just give me your base level, a finger in the wind. 
[00:28:37] Well, I mean, I think the Adobe, Apple, Microsoft has this new-ish thing called Copilot that's recording literally everything that's happening on your computer. [00:28:45] It's a little similar to the Apple Intelligence stuff, which is sort of reads all your texts and processes them so you can ask it, you know, what did Aunt Mindy say about the plane? [00:28:54] You know, what time is Aunt Mindy's plane coming in? [00:28:56] I mean, that's the sort of best use case. [00:28:59] I think all of these companies are kind of finally rolling out AI stuff that's been in development since the most recent generation of large language models. [00:29:08] So like the chat, the GPT-4.0 generation of like pretty advanced AI. [00:29:14] There's a million different words you can use to call it, and we'll just call it AI for the purposes of this. [00:29:18] And I think that they're sort of looking for, like Apple's stock closed at a record high this week, to a large extent, I think, because they threw Apple, they threw this AI stuff in here. [00:29:28] And in the specific case of Apple, it's like they want you to upgrade your phone. [00:29:32] You can't use the AI stuff unless you have an upgraded phone, whatever. [00:29:34] But I mean, like, the broad story is all of these are companies that want to put a jump, you know, sort of like jam AI into their products in one way or another, whether it's as inputs or outputs, in order to like find growth again, to find the kind of growth that they had, you know, five, 10 years ago. [00:29:53] Like that's the, to me, that's like the sort of top level, like what's happening with AI is it's like, this is where we're going to find the kind of margins that we had in 2014 again, is like somehow, you know, and there's a bunch of different competing theories. [00:30:07] But like simmering under the surface is the kind of other stuff you're talking about, the sort of the most recent wave of like open AI whistleblowers, like people, you know, being afraid of AGI and all these things, which is like, you know, [00:30:21] Liz and I were talking about this before we started recording, the sort of this weird, like, it feels like, I don't know what, I mean, it feels like leftist inflating, kind of like there's these seven different sects of AI guys like constantly like turning on each other and leaking and like trying to like defect from one AI company to another, all of which is like sort of beyond my ken, but is obviously like part of the energy that is getting Apple and these other companies really excited. [00:30:48] Yeah, I think that the stuff with Apple and Microsoft Word and all these companies like rolling out is interesting. [00:30:54] Maybe we can start there before getting into the kind of more abstract AI, AGI, ASI. [00:31:01] Let's add like 18 more. [00:31:06] It's going to have so many and like the next year. [00:31:08] That's what's going to grow exponentially. [00:31:11] There's a new acronym that has to go out every 18 months. [00:31:14] Yeah. [00:31:15] But I mean, I think that you're absolutely right that these companies are looking for growth. [00:31:19] But what they also, that means that what they identify is that like their actually existing products have hit a ceiling. [00:31:25] Like there is nothing left. [00:31:27] And Google is like a great example of this, which is that they're basically saying like, oh, people will not use this search engine anymore in the way that it's built. 
[00:31:37] And we need to re-engineer it in a way that either they'll get on board or we'll be able to amass so many other assets that we can leverage it into like some other enterprise. [00:31:47] Yeah. [00:31:47] I mean, we should say it's not, I don't think it's even that people won't use the search engine anymore because they will. [00:31:51] It's that they won't use it. [00:31:54] They won't grow using it, right? [00:31:55] There's not going to be more users. [00:31:56] They're not going to spend more time. [00:31:57] The ad revenue model might have hit a ceiling, I should say, rather than because I do think, I mean, Bryce and I, we were talking about this yesterday where I was saying, like, the thing that's very frustrating about a lot of the AI rollout now, you know, I feel like we're in this very, I'm going to say it, liminal space. [00:32:13] I knew it. [00:32:14] Where we're just like stuck, and we probably will be for a really long time, while this stuff is getting adopted and worked out in real time. [00:32:21] And we're the people that have to hone it without. [00:32:23] basically consenting to that. [00:32:27] And so it's all really rocky. [00:32:29] And like using Google now, it's like, I use Google because I need to find something that exists in the real world. [00:32:34] And instead, I'm just, Google has now basically said, actually, what Google is for is asking a question about like information broadly. [00:32:44] Not like looking for something that I need to then go use in the real world, you know, outside. [00:32:48] Where is this dry cleaner? [00:32:49] Where's this small business that exists that I need to identify? [00:32:52] It's like not what it's for anymore. [00:32:54] Yeah. [00:32:54] I mean, it takes a, this is sort of like the idea. [00:32:58] I mean, there's a bunch of different ways to think about it, but for me, like, the original purpose of Google 30 years ago was to find specific websites. [00:33:05] It wasn't to like, like, maybe you wanted to answer a factual question, but the point was to find the website that would answer that factual question for you. [00:33:12] And, you know, I'm sympathetic to Google. [00:33:14] I'm sure that hundreds of millions of people who use it, sympathetic in the sense that hundreds of millions of people who use it probably do sometimes just want a factual answer to a question. [00:33:22] But like, Google has to understand, and I'm sure there are people there who do, that what undergirds that is the work that they put in, like cataloging, indexing, creating like an algorithm that can sort of wait and serve these results, these web page results up to you. [00:33:37] And that's like, there's a lot of criticism, you can deserve criticism of how they do that and the weights they put on the sorting mechanisms that serve those up. [00:33:45] But it's an unbelievable feat of whatever it is you would describe that they do, of indexing, I guess. [00:33:51] And to sort of throw that all out by being like, actually, instead of giving you a bunch of different options for the thing you're doing, we're just going to get an LLM to like summarize one piece of factual information, like one shitty sarcastic Reddit comment, like one completely made up AI SEO blog post, and we're going to put that into a paragraph at the top of the page is like, it's baffling to me. [00:34:15] To me, it's like the obvious thing that seems to be able to, or seems to be coming or already happening. 
[00:34:20] I feel like they rolled it back because I don't see those things. [00:34:23] Well, apparently they've been cherry-picking. [00:34:25] I see them all the time. [00:34:27] I mean, I still get some of them. [00:34:28] And the ones that are like really bad, every time it goes viral, they send some poor intern down to manually remove it from the computer or whatever. [00:34:35] But what's going to happen, I feel like, is you said precisely that. [00:34:39] Half the websites that you get Google results for are fucking AI SEO websites, right? [00:34:44] Like that are just like a bunch of text that is optimized in this way to show up on Google Results to serve you an ad when you click on it. [00:34:51] But now it's just going to be this AI synthesized information synthesized even further by Google's AI and then given to you shorn of all context. [00:34:59] Basically, because you're not going to click on the link to the website to see that it's just some fucking SEO bullshit, that it's just completely fake. [00:35:06] And so it's, to me, it's like there is this tidal wave of like shiny elevators opening of shit. [00:35:14] And it's just going to completely envelop the world. [00:35:16] And it's like, there is, Google seems to be just like leading the charge on this in the way that like, you know, I'm sure I come into contact with AI in more than one way in my usage of the internet. [00:35:29] But really, like, most of the time that I come into contact with it is via Google results when I'm trying to find a piece of information and I'm just served up a whole bunch of, especially a recipe or anything like that. [00:35:39] It's just nonsense. [00:35:41] And it just seems like, yeah, we're in this space right now. [00:35:44] We're like above the cliff that leads to a river of feces. [00:35:49] And we are just completely, we are like doffing our clothes. [00:35:52] Like in an idiocracy kind of way, like not to sound like super Gen Xy or whatever. [00:35:56] Because I'm not. [00:35:58] In like a Cartman kind of way. [00:36:00] I mean, I think what's tough is that I always want to say like when we're talking about AI too, because again, like you said, that's such a, there's a lot of things that could be what we're talking about when we're saying that. [00:36:11] What we're talking about right now is like consumer-facing like products. [00:36:17] So when you see a Google or on the shit on Spotify or if you have an open AI account and you're playing around with ChatGPT or whatever all the video ones are, et cetera, et cetera. [00:36:31] I forget all the Claude, whatever all the different names are. [00:36:34] I hate them all. [00:36:35] Why would you ever even Claude? [00:36:37] That's so crazy. [00:36:38] Yeah, because that's what they call you. [00:36:42] I'm sorry. [00:36:44] No, but I think, but then on the flip side of that, I do want to say like what we're not talking about is all of the other like massive, massive advancements in AI computing that have to deal with, I don't know, everything from like climate modeling to like drug discovery to like, like, which is like a whole nether, to like species like classification. [00:37:06] Like there's just, which is a really important part to talk about when you're talking about this stuff because that is, that will end up shaping a lot of things that we do interface with in interesting ways. [00:37:16] But like, that's not what we're talking about when we're talking about with Google and LLMs and that kind of shit. 
[00:37:20] No, I mean, we're talking about, I mean, this is, I think, especially when we're talking about the tidal wave of shit. [00:37:24] What they're calling now is slop. [00:37:26] Slop is the new, is the word. [00:37:27] I don't like that. [00:37:28] It's too Reddit. [00:37:29] I said this before, but the arc of history bends toward Reddit. [00:37:32] Everything that you think is cool now in, look, everything is speeding up exponentially. [00:37:37] I'm going to give it a year, two years, is Reddit. [00:37:40] I never thought slop was cool. [00:37:42] But then again, I'm not. [00:37:43] I'm just saying. [00:37:44] I'm just saying. [00:37:45] We can say, we can call it a tidal wave of a shining elevator full of shit on this show. [00:37:49] I mean, we can call it whatever you guys want. [00:37:50] I just know it when I see it. [00:37:52] The shining elevator full of shit that's opening up is like, those are all the new generative AI apps, which is like large language models like ChatGPT, Claude, Sora, that can produce text, video, and these are like, you know, cool technologies that people haven't quite figured out uses for yet, except, you know, in situations like Google's where what you have is this kind of open. [00:38:16] I mean, I think the way I would think about it is, you know, the dominant kind of platform, the dominant kind of model for creating giant software companies for the last 15 years has been the platform. [00:38:26] So you do a software layer that basically mediates between, you know, on Facebook between you and your friends and your relatives. [00:38:32] On Google, it's between you and people who publish websites and also between advertisers. [00:38:37] And platforms like are amazing businesses for the people who own them because you can just charge to get anybody on there more or less. [00:38:45] Five, six people, seven people that own that. [00:38:47] Yeah, exactly. [00:38:48] And you have a basically infinite space in your marketplace. [00:38:51] Like you've got an infinitely like, you know, like fillable zone where you can have shit. [00:38:56] Much like a lord. [00:38:58] And what happened is with generative AI apps is like, oh, we built a machine that actually is going to fill up all these platforms with shit. [00:39:04] Like you have room for all of my shrimp, Jesus. [00:39:08] Like, I mean, have you guys seen the AI, like the Facebook AI images for like enormous snake on a semi-truck, like driving down the highway? [00:39:17] But then with like some sort of Bible verse, there's always like some religious soldier or something random religious. [00:39:21] Why won't anybody share this? [00:39:22] Scarlett Johansson's fruit, beautiful Kevin crew. [00:39:26] Exactly. [00:39:26] It's just like this is like it's funny to me because this is partly just like this is what platforms have been asking for basically. [00:39:32] It's like just fill up. [00:39:36] But that's what's so amazing is because it feels like now the platforms are realizing that their systems, like the logic of their system will kill the platform itself. [00:39:44] Yeah. [00:39:45] Because you're going to reach a sort of Zijekian like moment where it's just bots talking to bots, generating bot, generating shit, like where it's just machines talking to machines and we can actually maybe get on with our lives. [00:39:58] Yeah, to me, to me right now, it's mostly just NPCs talking to NPCs. 
[00:40:02] But I mean, that's the thing is like, it seems like this is the way that AI makes itself visible in people's lives. [00:40:09] There's been these, and very similar to the coal coin craze, right? [00:40:16] And like, we're going to build, like, the blockchain is somehow going to give us better access to like new medicines. [00:40:21] They're always lead with like, think of the new medicine discoveries and that people are disgusting now. [00:40:27] People are unhealthy. [00:40:29] They look like shit. [00:40:30] They're fucking crumbling. [00:40:32] I'm like, I'm not so sure more of what we have now is going to fix that. [00:40:37] But I will say, it's like the way that it comes into contact with both of our lives, like I said, it's just like, who will give a share for these beautiful soldiers riding out a snake for Jesus Christ? [00:40:48] And there's these great promises, but also these great promises of great peril that are coming out. [00:40:56] You know, you did a recent sub-stack post on this guy. === AIs And AGI Peril (14:26) === [00:41:01] What's his motherfucking name? [00:41:02] Leopold Oschenbrenner. [00:41:05] All these assholes are like shit like Leopold. [00:41:08] It's always like Leopold or Cyrus. [00:41:11] Cyrus. [00:41:12] Or just like crazy ass names where you're like, damn, where? [00:41:15] What? [00:41:15] You know, I know a guy named Cyrus that I like who is not evil or anything, but I will say this. [00:41:20] I hope he doesn't hear this. [00:41:22] It's like if you name, I also know a guy named Malachi who I like. [00:41:25] You're always talking about Malachi. [00:41:26] I'm not always talking about Malachi. [00:41:27] I just mentioned I know a guy named Malachi. [00:41:29] I know I'm not always talking about Malachi. [00:41:31] I haven't thought about Malachi in a while, but I like him. [00:41:34] But I feel like that's an evil name. [00:41:38] I know a bunch of guys' brothers whose names are Rex, Darius, Thor, Cyrus, and Tamerlane. [00:41:45] Do they work in AI computing? [00:41:47] No, a couple of them are in banks. [00:41:50] They're like kind of cool Malibu surfer guys. [00:41:52] Okay. [00:41:53] But rich. [00:41:53] Sure. [00:41:54] I don't know. [00:41:55] I just wanted to share. [00:41:56] Yeah, even cooler. [00:41:57] They're rich and also surfers. [00:41:59] And they work at the bank where all the money is, so they have access to be more rich. [00:42:03] These guys sound fucking amazing. [00:42:05] We should have them on the show, too. [00:42:07] If I could get Thor where you're seated right now, let me tell you, this top would be on the door, hanging, because it would be wet from sweat. [00:42:15] But he wrote this paper, which I read. [00:42:18] Leopold, sorry, not Thor. [00:42:19] Leopold wrote this paper, which I read yesterday. [00:42:23] You might be the only person on the planet who read the whole thing. [00:42:26] I read the whole thing. [00:42:26] Certainly some AIs have read it. [00:42:29] It's like 175 pages. [00:42:31] It's long. [00:42:31] I have it right in front of me, 165 pages. [00:42:33] I read the entire motherfucking thing. [00:42:35] He repeats it, but he repeats himself quite often. [00:42:38] And so you do get into the rhythm of it, but it's double spaced. [00:42:44] We can say that it's a little bit different. [00:42:44] And it's also double spaced. [00:42:45] But it's full-size language. [00:42:46] It's a single column. [00:42:48] It is a wide single column. [00:42:49] Did you read the whole thing, Liz? 
[00:42:51] No, I'm on 82 right now. [00:42:54] Interesting. [00:42:55] Well, because he puts footnotes on the side. [00:42:58] I know, he does. [00:42:59] He does that. [00:43:00] But, well, if you guys are done belittling my intelligence for having read something, um, it mostly seems to be warning in a, in a way that I feel like is familiar to both of you and myself about the dangers of AGI. [00:43:16] And AGI seems to me a nebulous concept that we are warned about very often by generally often slovenly or in if they're not slovenly, just generally unsettling characters who make appearances like this, you know, fucking German motherfucker right here, Austrian, Prussian, wherever he is. [00:43:40] I think he's just German, actually, because he's so crazy for someone to be like, self-identify today as Prussian. [00:43:47] If anybody's going to do it, it's Leopold Achae. [00:43:49] Yeah, no, he self-identifies as German. [00:43:51] He says he was raised by his great-grandmother, who was a refugee from East Germany, which gave him his hatred of authoritarian values. [00:44:04] But so, anyways, he's warning about the dangers of AGI and China, which we'll get to in a second. [00:44:11] But AGI is, can you tell our listeners what AGI is? [00:44:15] Well, it stands for artificial general intelligence. [00:44:18] You're treading for water here. [00:44:19] You're delaying. [00:44:21] So, I mean, basically what it is, is we used to say artificial intelligence to mean computers that can think and work like humans. [00:44:28] And then we just started calling everything that computers could do AI. [00:44:31] So we needed to come up with a new word for the fully nebulous idea of HAL type, HAL 9000-type computers. [00:44:39] And so now we call them AGI. [00:44:41] But there's no single or specific definition. [00:44:43] OpenAI, I think, who make ChatGPT, their definition is something like a computer that can do the intellectual work of humans better than they can. [00:44:53] But obviously, that's a tautology and has no relationship to that. [00:44:57] But human. [00:44:57] Well, for example, it would be harder to find an AGI that could do everything that you can do than one that could do everything I can do. [00:45:03] Oh, dude, I could be. [00:45:05] I realized every job I've ever had, even before I knew what computers really did, because I always read a bunch of those science fiction books when I was a kid. [00:45:12] I was like, I could be replaced in a heart. [00:45:15] Well, have you guys seen the PDF to podcast app that just came out? [00:45:19] There's a new startup that you upload a PDF and it will create a podcast for you based on the PDF that you can listen to. [00:45:26] No. [00:45:27] Yeah. [00:45:29] This is great news for PhD students everywhere. [00:45:32] I was reading the Hacker News thread about it and someone was like, well, I tried this and it completely made up a bunch of references that weren't in the PDF. [00:45:39] And someone was like, yeah, but real podcast hosts hallucinate too. [00:45:42] It's like, I don't think they're going to be able to do that. [00:45:44] I did call from an episode we did on Congo because I was thinking of the bad guy from fucking Zoolander the entire time. [00:45:55] But I'm just going to be honest, people behind the curtain there. [00:45:58] That's just what happens sometimes. [00:45:59] But that's more charming than an AGI. [00:46:01] I mean, maybe it's about the same level of charming. [00:46:03] Well, here's the thing. 
[00:46:05] People think that a podcast is just relaying information. [00:46:07] No, a podcast is relaying mostly right information, but then being wrong on a couple of things so that people feel good correcting you. [00:46:14] That's true. [00:46:14] I feel like everyone has a very broad, but also different and moving definition for AGI. [00:46:21] And so it's always on the verge. [00:46:23] It's always going to happen. [00:46:24] You'll know it when you see it. [00:46:26] Yeah. [00:46:27] There's a weird thing, too, where people assume there's like a Frankenstein attitude toward, where you assume that there's like a lab somewhere and someone's going to pull the big old-time switch. [00:46:37] You achieved AGI! [00:46:38] AGI is going to be there. [00:46:41] But the whole thing is, I mean, it seems really apparent to me that like, like, intelligence isn't something that we've defined clearly in humans, or for that matter, in animals. [00:46:50] Well, you're trying to on X.com. [00:46:51] I do. [00:46:53] For example, maybe the scientists of X.com will eventually be able to come up with a metric. [00:46:58] But, you know, this is one of those things, like, I like to think about it in terms of animal intelligence, that you're not going to find a bunch of people who, scientists even, marine biologists, who can agree with you about how smart a whale or a dolphin is, whether they have anything else. [00:47:11] Elephants have names for each other now. [00:47:13] They found out. [00:47:14] I told you about that thing about the auto. [00:47:15] Isn't that awful? [00:47:15] Well, no, but isn't that so amazing? [00:47:17] Elephants call individual names for each other. [00:47:19] They're like, yeah, they all. [00:47:22] It's all Dumbo One, Dumbo Two, Dumbo Three. [00:47:24] Oh, it's peanut over there. [00:47:25] The thing for me is that's more interesting than the kind of specter. [00:47:28] I mean, I do think there's something interesting about the PR use of AGI as it's been used when we used to just call it AI, I guess. [00:47:36] But agentic AI seems to be the more interesting stepping stone to me, which means like self-directed AI that basically can, and this seems to be what Apple is kind of attempting to push forward rapidly through its onboarding of open AI onto its operating system. [00:47:56] Wow, that was a lot of bullshit. [00:47:58] You know what I mean? [00:47:58] Open AI onto its operating system. [00:48:00] What the fuck am I fucking talking about? [00:48:02] But you know what I'm saying? [00:48:05] I'm a moron. [00:48:06] Anyway, but agentic AI, meaning that it can like synthesize a bunch of different tasks, like not related to each other, but towards a goal that it's like self-directing, that it can then hone itself and it can kind of be like left alone to do on its own devices, which is very different from the like, make me a song that's Drake, but singing the espresso thing for my technical stuff. [00:48:29] I mean, and this is related, this is like the big thing Oshenbrenner is really obsessed with is the idea of like a super intelligence explosion. [00:48:34] Like there's the hope that if you can create these self-directed AIs, that you are eventually going to get an AI that can train and improve itself. [00:48:41] And once you do that, other AIs. [00:48:42] Right. [00:48:43] And other AIs. [00:48:43] And like, once you do that. You can create the dreaded PMC AI. [00:48:47] Right, exactly. [00:48:48] To moderate and make sure that everybody's going to be able to do that. 
[00:48:49] We need HR AI. [00:48:51] There's all kinds of disciplinary AIs. [00:48:53] Yeah, well, people have written books about that. [00:48:56] He has a quote in here that I actually think is pretty good and doesn't sound as insane as most of his PDF, which I look forward to the podcast version of, since I didn't read all of it. [00:49:06] He said, "How all this plays out over the 2030s is hard to predict, and a story for another time," which, by the way, that's a great move: if you're trying to write a 185-page PDF and you don't know where to go, just insert "and a story for another time" and move on. [00:49:20] "But one thing at least is clear: we will be rapidly plunged into the most extreme situation humanity has ever faced. [00:49:28] Human-level AI systems, AGI, would be highly consequential in their own right. [00:49:33] But in some sense, they would simply be a more efficient version of what we already know. [00:49:39] But very plausibly, within just a year, within just a year, we would transition to much more alien systems, systems whose understanding and abilities, whose raw power, would exceed those even of humanity combined. [00:49:54] There is a real possibility that we will lose control as we are forced to hand off trust to AI systems during that rapid transition." [00:50:02] And then, I added this quote in, which is from a couple paragraphs later: [00:50:06] "More generally, everything will just start happening incredibly fast and the world will start going insane," which is a great line, I will say. [00:50:14] I think that this is extremely hyperbolic, and yet I also am sympathetic to this, because I am someone who, as I try to wrap my head around and understand the various sectors that are plunging into AI research and adopting more and more of this technology, like, I actually really do think it's going to reorganize the world in really, really insane ways that are very difficult to comprehend and predict. [00:50:42] And most importantly, for political considerations, I think quite obviously it will be deflationary. [00:50:49] Like, there will be deflationary effects, and there will be entire sectors that are reorganized or put out of work, and all of that. [00:50:55] And that deserves a lot of attention. [00:50:59] But I also am like, damn, man, Leopold, you sound fucking crazy. [00:51:04] Like, this is crazy. [00:51:05] Yeah. [00:51:05] And hyperbolic, and also, like, it's puffing itself up, you know? [00:51:10] Yeah, I mean, it's really hard with these guys to, like, I mean, we were talking about this. [00:51:14] I think the LLM technologies are really cool. [00:51:16] Like, there's just something amazing about them when you actually sit down and play around with something that can produce adequate text. [00:51:22] Like, you say adequate, like I'm making fun of it, but genuinely producing adequate text is something that most human beings can't do. [00:51:28] So to have a computer that can do it is unbelievable. [00:51:30] And I agree with you that there are opportunities, you know, there are people who are salivating at the labor-saving potential of this kind of technology, which is likely to have really profound effects on all kinds of different sectors. [00:51:45] And yet, like, you can say all that without even needing to tip over into, the big monster is going to come and kill all of us. [00:51:53] And that's very scary.
[00:51:53] And in fact, I think when you start doing that, you kind of lose sight of the actual thing as it exists. [00:51:58] And once you lose sight of that, you don't get a good sense of what it's actually going to do. [00:52:02] Because when we talk about the labor-saving technology part of it, right? [00:52:05] So, we talked about this a little bit in the last one, but when I was on strike with the WGA last year, this was one of the big issues, the idea that the LLMs were going to start writing scripts or whatever. [00:52:15] And very clearly, once you've played around with them for long enough, they're nowhere close to being able to write a good script. [00:52:21] Not even close to being able to write a script that Tubi would put out with Stephen Baldwin or whatever. [00:52:26] It's not going to happen. [00:52:27] But they are useful. [00:52:28] In the end, they're actually labor-saving devices for writers more than they are for bosses. [00:52:33] They're not likely to replace writers. [00:52:35] And I think, this is just a minor example, but if you buy into the fullest version of the AI monster coming, it becomes, like, screenwriters are all out of business. [00:52:45] And you sit down with it and you think about it and you're like, actually, this is probably going to help screenwriters. [00:52:49] It's probably going to help certain people. [00:52:50] It's not true across every single sector or every single job. [00:52:53] But the need to make this the biggest possible thing in the world, which Aschenbrenner and, like, a million guys like him all want to believe in, makes you sort of unclear about what LLMs are actually good at and what they're for, which is, again, part of the Google problem too, where you're just kind of like, this could be useful in some ways for some Google things, but when you shove it into this particular use case, it sucks. [00:53:16] Yeah, I mean, it's funny, because about half of this, well, I'm going to say a third of this paper that Aschenbrenner fucking wrote explicitly compares the project of LLMs and AI, AGI, all this, to the Manhattan Project, and actually frames it in those terms. [00:53:36] So people in these technology sectors often like to bring up the Manhattan Project, because it was kind of like, hey, look, the nerds made something. [00:53:46] In fact, you were actually our slaves at a camp you couldn't fucking leave in the fucking desert, and you should have stayed there. [00:53:54] But he explicitly compares this to the Manhattan Project, not just in a discrete way, like, listen, this was a time where we had these great leaps forward of technological progress, but he compares it to the race with Germany for the bomb, which, Germany was far behind us. [00:54:13] And he makes that point explicit in his writing and interviews that he's done. [00:54:18] And Germany is replaced by, I'm far from the gong, but actually, Max, would you strike the gong for me? [00:54:26] ...by China. [00:54:34] You know, he silenced the sustain. [00:54:36] Wow. [00:54:37] He silenced the sustain. [00:54:38] That's interesting. [00:54:40] We've got a lot to get through here. [00:54:41] We've got a lot to get through. [00:54:44] But he compares this to the arms race in the Cold War. [00:54:50] Yeah. [00:54:50] But specifically to atom bombs here.
[00:54:53] One thing that I just don't understand is, I'm like, if I was involved in a project that was going to make something that could, as he's describing, have, like, AI fucking secret police everywhere monitoring every single thing, I would do everything in my power to prevent this technology from being made. [00:55:10] Yeah. [00:55:11] Because this presupposes an infinite democracy, which I can guarantee, he doesn't really make this clear in this paper, but I can guarantee in his little mind, in the back of his mind, he thinks that AI will eventually, like, a benevolent, democratic AI will eventually run things behind the scenes for us. === Nightmarish AI Tropes (07:57) === [00:55:28] Totally. [00:55:31] And it's just, it is a vision of a future that is so nightmarish, one can barely comprehend it. [00:55:37] But it also isn't a new nightmare. [00:55:39] It is actually just basic science fiction tropes. [00:55:43] Yeah. [00:55:44] I guess what I just don't understand is, I've never heard, I've watched a lot of interviews with people. [00:55:49] I've seen a lot of bullshit out there. [00:55:51] Every time I hear about AGI, it is always in these apocalyptic terms, even from the people who are making, who claim to be making, progress towards it. [00:56:00] Why is that? [00:56:01] There's sort of two things at play. [00:56:03] So one is, you know, you could give a kind of psych 101 explanation, which is, when you're building these things, I mean, we've seen how crazy the advances have been just in the last few years. [00:56:13] And you can sort of imagine you're building these things. [00:56:16] You're a weird nerd freak. [00:56:18] You probably have a slightly fragile psyche and your brain is broken. [00:56:22] You saw some shit at the EA orgy that you can't fucking bust it in the fleff. [00:56:31] But I think that the other part of it is, the culture of AI research for the last 25 years has been really focused on this, for reasons that sound crazy, and sound so crazy that I don't feel like they could fully account for it, but actually seem to. [00:56:46] So, like, this guy, so you were talking about, why are all these guys, they talk about how AI is going to destroy us all, and yet they keep working to build it. [00:56:54] So there's this guy, Eliezer Yudkowsky. [00:56:56] Let me read you an email. [00:56:58] Let me read you a fucking email right now. [00:57:01] I'm sorry to pause, but I actually brought this. [00:57:03] I had this on my computer earlier. [00:57:07] But Eliezer was a dream guest for this show because he is, well, hopefully he'll still come on, so I won't say the word, but you can imagine. [00:57:17] Hi, Brace. [00:57:18] Thanks for reaching out. [00:57:19] Unfortunately, Eliezer gets far more invitations of this nature, of this nature. [00:57:24] You've never fucking run into a podcast like this, than he can fulfill, and he won't be able to join as a guest. [00:57:29] We wish you the best. [00:57:32] And then, best, Harlan, Communications, Machine Intelligence Research Institute. [00:57:38] Yeah. [00:57:38] So the Machine Intelligence Research Institute has been like the center of this AI research world since it was founded, which was, I think, 2000. [00:57:45] Also sex parties. [00:57:46] Also, tons of sex parties. [00:57:48] Also effective altruism. [00:57:49] I mean, all of this is this big, naughty, sort of polyamorous, like, you know, spectrum-y kind of world of philanthropy and sex. [00:57:58] Yeah.
[00:57:58] And Yudkowsky is, like, a rationalist. [00:58:02] He's like the father of this whole sense that AI is going to destroy everything. [00:58:07] And he, so he, he wrote Harry Potter fan fiction. [00:58:11] And indeed, Harry Potter fan fiction. [00:58:13] He famously wrote a seven-part fanfic. Harry Potter and the Methods of Rationality is what it's called. [00:58:18] Did you know that? You didn't? [00:58:21] After you said no, now I know what I'm getting printed and bound for the Belden. [00:58:25] Is it sexy? [00:58:26] No, it's actually quite rational. [00:58:29] But to me, sex and rationality, [00:58:30] I think those are on two different poles. [00:58:32] Those are opposing. [00:58:33] Because sex is irrational. [00:58:35] Two bodies convulsing in such a manner. [00:58:37] That, to me, is. [00:58:39] Especially in a funky wizarding world. [00:58:42] And unfortunately, you have to allow for air, like the fluffer. [00:58:44] Yeah. [00:58:45] So to finish that, all of the magic tricks in Harry Potter are explained using rational scientific methods. [00:58:53] Seven, there's seven of these books. [00:58:55] And something about AI, the AI research world, seems to have attracted the kind of people who are into that kind of thing, like Harry Potter and the Methods of Rationality. [00:59:03] And my best sense is that the continuing cult-like culture of this whole world has sort of cultivated this in a bunch of AI researchers. [00:59:13] You're suggesting it's kind of a reinforcing loop, maybe? [00:59:16] I mean, I would also say, if you polled AI researchers as a category around the world, I don't actually think that most of them are this level of scared. I think some of them might make the same kind of arguments that Liz is just making, about the way it's going to reshape the global economy or whatever, and that's dangerous. [00:59:34] But I think there's an extremely vocal and prominent but relatively small, probably a minority of people for whom this is, like, the whole thing. [00:59:44] I mean, one of the things that's interesting about Aschenbrenner is that he's, I think he's 23. [00:59:51] And he matriculated at Columbia when he was 15. [00:59:54] He's, like, a very smart young kid. [00:59:56] And when he was 18, he got a grant from Tyler Cowen, the George Mason University economist, just to go live in San Francisco and, like, meet a bunch of people, basically. [01:00:05] Because Cowen thought he was an economics prodigy, basically. [01:00:10] Interesting. When I have paid young men to come to San Francisco and meet a bunch of my friends, it becomes a national news story. [01:00:16] I mean, I can't speak to the political bias of the news, Brace. [01:00:21] So he goes out there and he meets a bunch of these people. [01:00:24] He ends up, Aschenbrenner ends up working not for FTX, but for the Future Fund, which was SBF's effective altruism thing, and moves from there to OpenAI. [01:00:34] So this all happened between the ages of 19 and 23, basically. [01:00:39] And I always thought about it like, so I went to a nerd camp when I was a kid called the Center for Talented Youth. [01:00:46] I went to CTY, which is an East Coast nerd camp thing. [01:00:49] You had to take the SATs in middle school and you go. [01:00:52] And if you're my age, you can guess the kind of kids who also went to nerd camp. [01:00:56] And there's a lot of Monty Python. [01:00:58] There's a lot of sort of quasi-proto-goth, like, 13-year-olds.
[01:01:04] It's all really intense. [01:01:06] I thought you were just going to say Asian. [01:01:08] Was there a lot of theater work? [01:01:11] Yeah, but I mean, the people I hung out with were sort of gothy, computer science-y, like, I'm making something about my. [01:01:16] I had a cigarette to smoke behind the gym. [01:01:18] And my main memory of it is, it was really emotionally intense, and people really thought really highly of themselves and their capabilities. [01:01:25] And in retrospect, like, wow, what a bad environment for anybody to spend time in. [01:01:30] Any geniuses come out of that motherfucker? [01:01:32] I don't know, maybe. [01:01:33] I mean, if any of them are listening, if any members of CTY Lancaster 2000, 2001 are listening, feel free to reach out. [01:01:40] MaxRead at gmail.com. [01:01:42] If you're a genius, especially, if you've got a lot of money and you'd like to take advantage of that, even just self-described genius. [01:01:48] That's the best kind. [01:01:49] But anyway, I say that because, like, I think that this world is that for adults. [01:01:54] We have this really intense hothouse, strange hothouse environment. [01:01:59] And somebody like Aschenbrenner, I suppose, is obviously a smart guy. [01:02:04] And getting paid by a weird economist to go just hang out with a bunch of sex perverts who think the world is going to end breaks a lot of brains, I guess. [01:02:12] I mean, that's the best answer I have. [01:02:14] Well, he does seem, listen, I don't think that they should let children go to college. [01:02:19] I think that you should have to, like, you should have to be socialized. [01:02:23] You should get paid to not go to school. [01:02:25] You should get paid not to go to San Francisco, get paid to go to Cincinnati and work as a third grade teacher. [01:02:30] Exactly. [01:02:31] I just think that, like, I think that no one learns anything in high school. [01:02:34] The thing that you learn how to do is smoke cigarettes and lose your virginity. [01:02:37] That's not true. [01:02:38] I didn't learn anything in high school. [01:02:40] I did. [01:02:40] All I've learned is from high school. [01:02:42] All I learned how to do in high school was to be normal. [01:02:46] Well, I learned a lot of things in high school. [01:02:50] But none of them involved school teaching. [01:02:53] It socializes you, you know. [01:02:55] Yeah. [01:02:55] And, like, I think that's a big, I don't think being a gifted kid is good for you. [01:03:00] No, I mean, this is what they say about gifted and talented programs now, is they warp kids. [01:03:04] And I really do think that, like, these environments, there's all these cults that are attached to MIRI, like, that are sort of weird, effective altruism. [01:03:12] They get into, like, Bayesian statistics as a religion, more or less. [01:03:16] I mean, I don't know. [01:03:17] My friend's brother is. [01:03:19] So you kind of, like, I don't know. [01:03:21] At that point, it starts to seem normal to think that a giant machine god is going to come. === Gotta Grapple With AI's Future (07:56) === [01:03:25] Though at the same time, like, if you're talking about, he went out there when he was, what, what did you say? [01:03:30] 19?
[01:03:30] I think so, like four or five years ago. Like, then, the exponential growth that he's witnessed, not even, I mean, who knows behind the scenes what he's seen, but also, like, even what we've seen out of these products, it's almost like, I mean, I know this sounds like now I'm going to sound like an AI guy, but maybe I am an AI guy. [01:03:51] But, like, what ChatGPT was doing in version one versus what it can do now, versus what any kind of image generator could do in, like, 2016, it's unrecognizable. [01:04:02] And I mean, we were joking before we started the show, but I was saying, the next leap will be in, like, a year, year and a half, where it's like, make me an RPG. [01:04:11] And now you just have your own personalized whatever you want. [01:04:15] Like, literally an immersive world that you can have built for you in an instant that you can play around with or whatever. [01:04:22] And that's just consumer, again, just the same thing. [01:04:23] Yeah, so, I mean, this is why I think, when you combine the actual hard advancements that you can witness with the environment where you're being told, so, like, OpenAI, like, Ilya Sutskever, who was, I'm probably mispronouncing all these names because I'm weird, the genius, sort of the computer genius of OpenAI. [01:04:41] He recently left, and he was on the board and was one of the people who fired Sam Altman, the ethical Sutskever. [01:04:47] So he, he, he was. [01:04:51] He would, like, hold mystical ceremonies, more or less, where he would make an effigy of AGI, of unaligned AGI, and burn it and stuff. [01:05:00] I also heard, this is a podcast so I'm allowed to just say things I've heard without sourcing them. [01:05:04] I heard that Aschenbrenner, when he was working at OpenAI, was once asked to leave an all-hands meeting because he started sobbing, because he realized that they were creating God. [01:05:11] So I think there is, like, a little bit of a, shit that happened to me. [01:05:14] It's crazy, because the other day in our meeting with you, I had to leave. [01:05:18] This is, we are going to make the best miniseries. [01:05:21] It's crazy, because even Jesus didn't, well, I guess he did weep a little bit up there. [01:05:25] I don't really get what was going on when he was up there. [01:05:27] But, like, what was going through his fucking dome when he was wearing the crown of thorns? [01:05:32] I know, but no, in his brain, dude. [01:05:34] What's going on in his fucking brain? [01:05:35] What do you think? [01:05:37] Well, he's mad at his dad. [01:05:39] Yeah, but, like, what's he really thinking? [01:05:41] You know what I mean? [01:05:41] Because, like, AI, for that. [01:05:44] Exactly. [01:05:44] Well, I'm going to ask. [01:05:46] But yeah, it's like, the culture at these places seems so, like, part of me believes that they have to believe all of this. [01:05:54] Yeah. [01:05:54] Or else you're like, I am making fucking infinite Banksy-style pictures with Sora or whatever. [01:06:01] But I also want to pull back and say, it's not just, that's what we're seeing, that's what gets rolled out, for fine-tuning, other stuff. [01:06:07] Like, the flip side of that is the understanding that, literally, I mean, climate change as a concept, that concept isn't even possible without any kind of planetary-scale computing. [01:06:21] Yeah. [01:06:22] Right.
[01:06:22] Like, we are only able to understand and model the past, the present, and possible futures for how the climate will change, which then gives rise to it even as a subject, because of the advancements of massive AI computing. [01:06:38] And that's going to be incredibly important for any kind of, you know, if you're a person who believes that's the future we're staring down, which I think we're all kind of in that world there, then these tools are, these are the only tools that will get us out of that, or help us to plan any kind of possible future moving forward. [01:06:59] And so I think that I can see being in that room and understanding that. [01:07:03] Yeah. [01:07:04] And that's just what I understand. [01:07:05] I don't even understand what they're talking about behind the curtains or whatever, the kind of possibilities for the sorts of central planning systems that they're organizing for, like, a possible, I mean, it's really, like, a global, you know, a global economy. [01:07:21] I mean, it's not even fake, too. [01:07:24] I mean, this all has a physical reality as well, right? [01:07:27] Where you have an entire planet now that is kind of corded in fiber optic cables and data centers that are kind of carved into the landscape now. [01:07:38] And this is actually both physically and subjectively transforming the world. [01:07:45] Yeah, totally. [01:07:46] I mean, this is, I mean, it's hard to, like. [01:07:49] I get into this, like, crazy person's, I don't know. [01:07:51] I'm, you know, I'm a little loopy myself. [01:07:53] Well, I mean, I think it's worth saying, like you were saying, you know, we see the shitty, like, when you have ChatGPT writing adequate writing or whatever, that's also the version of the LLM that's been, like, and setting aside even the sort of climate change, drug development stuff, there's all kinds of high-level physics applications, for, like, tokamaks. [01:08:13] I don't even know what that is, but I know that you can use AI to build better ones. [01:08:17] That even with LLMs, the things that create all the bullshit we see, the versions we get are these heavily-trained-to-be-as-anodyne-as-possible versions, for obvious reasons. [01:08:27] But we were talking about, like, could you get an LLM to do Joyce? [01:08:31] And, like, yeah, you could get one to do some version of Joyce with what's available on the consumer side. [01:08:35] But you could also train one up, presumably, you know, if you have access to the kind of resources that somebody like OpenAI does, to do Joyce or whatever at an even higher level of capability. [01:08:49] I mean, this sounds, as I say it out loud, it does sound pretty ridiculous. [01:08:52] And I'm not saying you're going to write a new Portrait of the Artist as a Young Man, but you can do stuff that I don't think is really possible just with ChatGPT, if you have access to the pre-trained or the sort of differently trained models. [01:09:03] And I think if you have access to those regularly, if you're Leopold Aschenbrenner, if you're a researcher, you have a better sense of the sort of true capabilities of these. [01:09:12] Even if those capabilities don't reach the level of AGI, whatever that means, or apocalypse, whatever that would mean, they're beyond what we can actually see on our computers.
[01:09:23] Plus, if you're trying to do a Finnegans Wake thing, you wouldn't be able to tell the difference with hallucinations. [01:09:26] That's true. [01:09:27] Well, I think, I mean, my main problem, and I feel like I bring this up every time we talk about technology in general, have an episode about technology, but especially AI, is, and I can't put this into maybe the most coherent way, [01:09:42] but it seems, just in my lifetime, that the integration of technology into every single one of our interactions with each other or with the outside world, not every single one, but so many of them, in some small or large way, has annihilated the minds of so many people that I know personally. [01:09:58] I mean, I think the most obvious way that it manifests in people's lives is lack of attention span. [01:10:04] Or people, I cannot tell you how many times I've heard people I know say, I never read books anymore. [01:10:10] Or, it's hard for me to read books now because my attention span is so dismal. [01:10:16] And I think that's what's heartbreaking to me, because, like, yeah, I think we have basically an infinite timeline, I guess, of just the world going on, presuming nothing happens. [01:10:26] And so, basically, anything you could imagine is theoretically possible in some way. [01:10:31] I don't know. [01:10:33] Or maybe it'll happen soon. [01:10:34] I don't know. [01:10:35] I don't know. [01:10:35] I don't know any of this shit. [01:10:36] But I just, what worries me is, it's like the soul of man is just, it's dissolved. [01:10:43] And, you know, I think it's just, the world is going to shit. [01:10:48] Everything feels like shit, I think, to everybody. [01:10:51] And even if you're doing all right, I think people aren't very happy. [01:10:54] Or there's just, like, there's a gray cloud. [01:10:57] And it seems like the out, like, these people, Musk is very much on this tip of, we are stagnating, which they can understand. [01:11:06] Like, in some effable and ineffable ways. [01:11:10] And technology is our sort of escape valve from this, these massive leaps and increases in technology, when precisely the opposite seems to have been true for the past couple of decades. === Regulating AI: Global Concerns (10:56) === [01:11:22] And yeah, I guess it's funny, because I've always thought, when we talk about AI, I'm always like, you know, why make robots slaves? [01:11:29] Like, you can't do that. [01:11:31] And I was like, I gotta, that seems wrong to me, but I'm like, well, so fuck the robots. [01:11:36] I hate robots. [01:11:37] But, like, reading Aschenbrenner's fucking manifesto, it's a little My Struggle shit here. [01:11:41] I'm like, this makes me want to both join the robots and be, like, a secret police for the robots, or the Chinese. [01:11:48] Because one of the big things they're talking about here is, if the Chinese get their hands on this stuff, and I can see this, I mean, I can't actually see it in terms of what's happening now, but I feel like this is a trial balloon in some way. [01:12:03] Like, China, we need an adversary. [01:12:05] If we're building the atomic bomb, we need an adversary to build it against. [01:12:08] And China is the only other country that any of these people can think of.
[01:12:14] And I wonder, because half of this shit is about how China is going to build AGI and then make the world into China, where we're all slaves in China and blah blah. And I just, I wonder if that is going to be more mainstream in terms of tech stuff. [01:12:30] I think definitely. [01:12:31] I mean, I think the context that we didn't talk about is that this document was released the same week that Aschenbrenner announced a new AI investment fund that's backed by a bunch of guys. [01:12:42] Nat Friedman, who's the former GitHub CEO, who's sort of involved in San Francisco politics on the tech reaction side, and the Collison brothers, who are the Stripe founders and are also sort of San Francisco reactionaries. [01:12:55] And I think part of what's happening, I don't know what's in Aschenbrenner's mind. [01:13:00] I know that we hate to attribute cynical and greedy motives to people on this program, but I suspect that somebody who wants to get a lot of tech money right now wants to put national security concerns at the top of their sort of interests, both because he's a former EA guy and effective altruism is out. [01:13:18] Effective altruism, totally out in the valley. [01:13:20] David Shor, no! [01:13:22] I never got to suck Sean McElwee from the back! [01:13:25] Actually, I do have more gossip, which is that Aschenbrenner's current girlfriend is David Shor's ex-girlfriend. [01:13:32] Wow. [01:13:36] The tangled web they weave. [01:13:38] The tangled web we weave, and like a fly, I'd like to fly into it and have them all devour me at the same time. [01:13:43] It's crazy how much Sam Bankman-Fried going to jail tanked EA. [01:13:49] It's, like, kind of incredible. [01:13:50] Well, I mean, come on, that's your main guy. [01:13:53] I know, but it's pretty, you know, he didn't put that one down when he was, you know, weighing all his options, I suppose. [01:14:01] But I mean, the other thing is, there seems to be a real shift in the valley, especially sort of in the right-wing faction, toward national security, defense contracting, because that's where there's money. [01:14:12] I mean, they look at the horizon, they think industrial policy is in, they think, you know, it's easier to get money this way. [01:14:18] And so they have an interest in saying China is going to enslave us with robots that suck us off till we die very soon. [01:14:24] No customs. [01:14:25] And if you guys watched that toilet brush back scratcher video I sent you guys. [01:14:30] No, I didn't look at it. [01:14:31] You send me videos that, I never send you videos. [01:14:34] No, when you do send videos and you're like, did you see this? [01:14:36] I'm like, I'm not fucking watching this. [01:14:37] I was not watching anything. [01:14:40] That was when I turned it off. [01:14:41] Okay, well, it only goes on about 30 seconds after that. [01:14:43] I was going to say, that's what China's going to do to us. [01:14:45] Watch that video. [01:14:46] We'll link to it. [01:14:47] I think just some important context, too, is that these guys know the money that is required to achieve what they want to achieve. [01:14:58] Like, we were just talking about it. [01:15:00] Some of these models now, I think the latest models, require like $100 million in computing costs. [01:15:09] And Leopold says himself, you know, it seems crazy that we would reach a trillion dollars in 2025, but maybe. That does not seem crazy to me.
[01:15:19] Like, the numbers that get thrown out, you know, it's going to be, you know, a billion dollars to run these models. [01:15:26] The only scale that makes sense for this is governments. [01:15:30] Yeah, totally. [01:15:31] I mean, it's beyond the scope even of corporations. [01:15:34] Especially because most of these haven't, I mean, OpenAI doesn't turn a profit as far as we know. [01:15:39] They haven't figured out the killer app. [01:15:40] Well, one of the things that Leopold talks about consistently throughout this is the need for regulation. [01:15:46] But really, what he wants is basically ease of making these big compute centers, by shifting regulation to allow for more power to be given to them, like literal electric power to be given to them, and actual regulations around AI to be geared towards this national security thing. [01:16:06] You know, a lot of people made a lot of hay about Sam, what's his fucking name? [01:16:11] Altman. [01:16:12] Sam Altman fucking touring around doing his little glad-handing tour about, oh, we need regulation, we need regulation. [01:16:19] And the obvious reason is, they want regulation that helps them actually do this more profitably, while also undercutting any future regulation that might cut into any profits or progress that they make, which are the same thing. [01:16:33] And he seems to take an even more clear-eyed view of this. [01:16:37] Like, we should actually put this under the aegis of government, or like a public-private partnership here under the national security umbrella, because, yeah, there's infinite money there. [01:16:46] Yeah. [01:16:46] I mean, he wants to create a military technological complex that he's at the center of. [01:16:50] I mean, this goes back to something we were talking about before with the bomb. [01:16:54] Like, one of the sort of interesting contradictions that comes up with these guys is, they like to think about AI and LLMs as a sort of general purpose technology, like electricity or computers, but they only ever talk about it like the nuclear bomb, which is obviously not a general purpose technology. [01:17:10] It's a single shot. [01:17:12] It does one thing and it does it really well, but doesn't do a lot of other stuff. [01:17:16] And there's a weird way that, like, you have to choose: is this a general purpose technology that's going to make us all millionaires as businessmen? [01:17:22] Or is this the bomb, and we're going to be Lockheed or Northrop Grumman or whatever? [01:17:28] Well, one thing that they never answer either, and it's so strange, they always gloss over this. [01:17:32] It's like, okay, say that, by several OOMs, this fucking AI increases its power, right? [01:17:42] And is able to do a lot of white-collar jobs. [01:17:45] And, in fact, replace white-collar workers, or at least let you downsize any sort of office firm that you have, in much the same way that factories now employ significantly fewer people, you know, that work is automated. [01:18:00] What are those people going to do for jobs? [01:18:03] And this is one thing with how they're trying to do the self-driving cars and stuff. [01:18:07] I always think about this. [01:18:08] The number one job that people have in this country is trucking. [01:18:11] If we replace truck drivers with robot-driven trucks, we have millions of people out of work then.
[01:18:19] If we replace, you know, office workers with fucking, you know, OpenAI 7 or whatever, then you're also going to have millions of people out of work. [01:18:28] And there is no, it seems like there isn't even, in the sort of hedging way that they sometimes talk about stuff, they just completely gloss over that. [01:18:37] I mean, we are a country without any kind of welfare. [01:18:39] I mean, we have a very, very tiny welfare state, but certainly nothing that you can live off of for the rest of your life if you're used to living this middle-class lifestyle. [01:18:48] I just, I'm confused as to what they envision for the future here. [01:18:52] I mean, I think that there's a sort of, if you're a true believer, and I think Aschenbrenner is, you think the big computer god is going to fix it all. [01:18:59] Like, you don't have to worry about who's going to have what, what the jobs are, because the computer god is going to sort it out. [01:19:03] Yeah. [01:19:04] No, literally, you're going to put in the data and the computer. [01:19:06] I mean, a lot of them say this, that one of the ways we'll know it's AGI is when it's able to give us answers that we ourselves could not come up with. [01:19:17] And there's a lot of ways that you can kind of think about that that's a little trippy. [01:19:20] But I do think a lot of people are sort of like, well, we can just push that can down the road. [01:19:26] It doesn't matter. [01:19:28] One, because we're not, oh, that's for the politicos to decide. [01:19:31] That's always easy. [01:19:33] And two, yeah, the supercomputer god is going to, he, or she, is going to come up with an answer and it'll all be fine. [01:19:44] I mean, I think a lot of people, that's where a lot of these UBI people come in, and they say, you know, what we need to do is just expand. [01:19:51] No, now this is how everyone can become middle class, right? [01:19:55] Because the middle class is that kind of very nebulous class with a very specific relation to a certain kind of production, but not in the same way that a kind of classic petty bourgeois is, but a kind of, you know, I would say, like, intellectual production, but it's primarily a consumption class. [01:20:14] And if everyone then just becomes a consumer, I mean, this is my whole point about it becoming deflationary. [01:20:19] And I think that a lot of the kind of economic response then is, well, if there's going to be a massive deflationary pressure as we automate a lot of industries, which, by the way, a lot of these companies have a lot of labor-saving interest in trying to automate things like software engineering and reducing their own headcounts, where they're kind of exploring a lot of these opportunities. [01:20:42] But then how can we, what's a sort of inflationary push, or what's a kind of, what can we do to strengthen consumption? [01:20:58] And it's like, oh, well, if we just expand the consumer class, then maybe we got, maybe, you know, maybe the kitchen will be cooking then. [01:21:08] Yeah. [01:21:08] I mean, I think, realistically, though, it strikes me that, at least in the near term, the main thing is, it's not going to be that good at replacing white-collar jobs. [01:21:18] That ultimately it's probably going to create more jobs, people who become technicians for figuring out how to do this shit. [01:21:24] Yeah. [01:21:24] Supervisors. [01:21:25] Yeah, exactly.
[01:21:26] I mean, somebody, I can't remember who now, but somebody was comparing it to the introduction of Excel and VisiCalc, which were like the original spreadsheet software, which initially was thought of as going to just eliminate whole rafts of accountants and bean counters from their jobs, but actually didn't really do that at all. [01:21:42] And, in fact, destroyed a lot of companies who really just tried to throw their accountants out and just were not good enough at it. [01:21:49] And it does seem to me like, if the U.S. political economy is good at one thing, it's just figuring out new bullshit jobs to give to people, rather than having a welfare state or UBI or direct demand inflation or whatever. [01:22:04] It's like, no, we'll just invent a whole new kind of middle manager who has to be there for the AI. === The End Game Vision (05:13) === [01:22:18] Well, I'd like to read. [01:22:20] We're going to wrap up here. [01:22:21] I'd like to read a part of Leopold's, well, the end, really. [01:22:27] It's under a section called The Endgame. [01:22:30] And so, by 27, 28, he's talking about 2027, 2028, the endgame will be on. [01:22:36] By 28, 29, the intelligence explosion will be underway. [01:22:40] By 2030, we will have summoned superintelligence. [01:22:43] They always talk about it like that. [01:22:45] In all its power and might. [01:22:47] Whoever they put in charge of The Project, capital, that's capital letters, not capital as in money, is going to have a hell of a task: to build AGI and to build it fast, to put the American economy on wartime footing, to make hundreds of millions of GPUs, to lock it all down, weed out the spies, and fend off all-out attacks by the CCP, to somehow manage 100 million AGIs furiously automating AI research, making a decade's leaps in a year, [01:23:16] and soon producing AI systems vastly smarter than the smartest humans, to somehow keep things together enough that this doesn't go off the rails and produce rogue superintelligence that tries to seize control from its human overseers. [01:23:28] Interesting language there. [01:23:30] To use those superintelligences to develop whatever new technologies will be necessary to stabilize the situation and stay ahead of adversaries, rapidly remaking U.S. forces to integrate those, all while navigating what will likely be the tensest international situation ever seen. [01:23:47] They better be good. [01:23:48] I'll say that. [01:23:49] For those of us who get the call to come along for the ride, it'll be, space, ellipsis, another space, stressful. [01:23:57] But it will be our duty to serve the free world and all of humanity. [01:24:01] I know this is long, but I'm loving it. [01:24:03] If we make it through and get to look back on those years, it will be the most important thing we ever did. [01:24:08] If we make it through, and while whatever secure facility they find probably won't have the pleasantries of today's ridiculously overcomped AI researcher lifestyle, it won't be so bad. [01:24:19] SF already feels like a peculiar AI researcher college town. [01:24:25] That's so true. [01:24:26] Probably this won't be so different. [01:24:28] It'll be the same weirdly small circle sweating the scaling curves during the day and hanging out over the weekend, kibitzing over AGI and the lab politics of the day. [01:24:38] Except, well, the stakes will be all too real. [01:24:41] See you in the desert, friends. [01:24:43] Wow. [01:24:43] Somebody saw Oppenheimer. [01:24:45] Good God, man.
[01:24:48] I do wonder, though. [01:24:49] I mean, I wonder, you know, if you have a kind of U.S. economy as it stands now, and, like, it's not looking, it's not looking great for the foreseeable future. [01:25:01] You know, and you have, you know, I just saw on the way here, it's like, you know, Yellen was on TV really, really reiterating the need for public-private partnership, basically meaning, like, we need to be fiscally spending into industry and trying to kind of spur some economic growth, because GDP, it's not good. [01:25:22] Yeah. [01:25:23] You know, and it's not good across the board in a lot of countries. [01:25:27] But so you wonder whether there's a world in which you see that happening, right? [01:25:32] Where you see what he's talking about. [01:25:34] I mean, not in his weird kind of fanfic writing style, though I do appreciate it, and particularly appreciate your reading of it. [01:25:42] But I could see the U.S., I mean, especially depending on which way the election goes, kind of going all in, or partly all in, with some of these guys. [01:25:52] And, you know, so it makes sense, you know, placing all their chips on the China thing. [01:25:57] Yeah. [01:25:57] You know, to make the argument work, because they need the fucking money. [01:26:01] Yeah. [01:26:02] And I think it sounds to me like Aschenbrenner would like to be in charge. [01:26:05] It's a real, like, Dick Cheney heading up the Bush vice presidential search. [01:26:09] He's like, if I get the call, and then he ends up, like, see you in the desert. [01:26:12] It's like, okay, well, you think you're going to get the call? [01:26:15] Aschenbrenner. [01:26:15] Why do I have to do it in the desert again? [01:26:17] Also, I think it's so funny, always putting these people in the desert. [01:26:20] It's so funny to think, like, oh, you guys have these really comfortable lives in the San Francisco area, like, perfect climate, very cozy. [01:26:27] They probably have some really nice $4 million townhouses, you know, Victorian, Classical style. [01:26:35] You don't have to worry, you take your little self-driving car to work. [01:26:39] Like, everything's great. [01:26:40] And it's like, your ass is going to fucking Bakersfield, because we're not making it all the way down to, we're not doing Los Alamos yet. [01:26:48] We got to find a different place. [01:26:49] We're going to stay in slightly northern Central California and have fun. [01:26:57] Stick them in Nowheresville and see what they do. [01:26:59] How are they going to live without DoorDash? [01:27:01] They're going to be like, how do we get food? [01:27:03] First of all, you don't make your own food at the tech company. [01:27:08] There's a cafeteria. [01:27:09] So you got to have a cafeteria. [01:27:10] You got to have a cafeteria. [01:27:11] You're going to have Oppenheimer in there cooking up the damn noodles. [01:27:14] Well, maybe they should this time. [01:27:15] They should. [01:27:15] No, well, I think, what I would like, a future that I envision, is where all the jobs that are done by AI are actually done by this guy and all his friends for no pay. [01:27:24] And they wear kind of like an S&M kind of outfit. [01:27:29] And I walk them on leashes through San Francisco. === Find A Different Place (03:23) === [01:27:32] So I'm like, can you make Drake sing Espresso? [01:27:36] I'm like, you sing Espresso like Drake, Leopold. [01:27:40] I love that. [01:27:41] With all the words. [01:27:42] With all the words of Espresso.
[01:27:43] I don't know. [01:27:44] I've only heard about half of it. [01:27:45] One of the words is espresso. [01:27:47] Espresso. [01:27:47] Say it. [01:27:48] And that's, that means. [01:27:50] That's not the Espresso. [01:27:51] I want to hear him say it while I just beat him. [01:27:54] That's reinforcement learning with human feedback. [01:27:56] That is facts right there. [01:27:58] Well, I don't know if you guys know about this, but a lot of these guys want to make a city campus in San Francisco. [01:28:03] Yeah. [01:28:04] We should do an episode on that maybe, where they just want to make several neighborhoods that people live in in San Francisco, which is a very dense city, into, like, a tech small town. [01:28:13] Wow. [01:28:14] Well, it's also, yeah, between that and the Balaji network. [01:28:18] Network states. [01:28:20] Well, a lot to talk about there. [01:28:22] Ladies and gentlemen, Max. [01:28:27] Well, I'll plug your Substack for you, even though it's going to get me in trouble with Twitter, but I'll still say it. [01:28:31] Thank you. [01:28:32] Go to Read Max. [01:28:34] A little pun there on his name, which is Max Read. [01:28:40] Max Read. [01:28:41] Max Read. [01:28:42] Max. [01:28:42] What's your middle name? [01:28:44] The letter B. Wait. [01:28:48] What? [01:28:49] It's just the letter B. Is it Harry S. Truman? [01:28:52] The S doesn't stand for anything. [01:28:53] Wait, Max B. Read? [01:28:54] Max B. Read? [01:28:55] My real name is Malcolm. [01:28:56] I was on my breakfast. [01:28:57] Your real name is Malcolm. [01:28:58] Malcolm B. Read, yeah. [01:28:59] But you changed it out of deference. [01:29:01] I've always been called Max. [01:29:02] My parents didn't think that it was. [01:29:04] They wanted a longer name, and Malcolm... [01:29:07] They wanted a longer name. [01:29:08] I like Max's Trump. [01:29:09] Well, Malcolm's a little in the middle, so it's good. [01:29:11] Okay. [01:29:12] Wait, your parents were like, we need a long name for our son. [01:29:15] It was 1985. [01:29:17] Yeah. [01:29:17] They were in their 30s. [01:29:18] Like, so much cocaine must have been being consumed that entire year. [01:29:22] My name is Brace. [01:29:23] I hope they're not listening. [01:29:23] I mean, who knows what was happening? [01:29:25] My real name is Gretchen, though. [01:29:27] Oh. [01:29:28] I know. [01:29:28] Because you know how guys are named Lindsay sometimes? [01:29:30] Yeah. [01:29:31] You can go your own way. [01:29:33] My parents were trying to make that with Gretchen. [01:29:36] Didn't work. [01:29:36] Didn't work. [01:29:37] Now I had to start going by Brace. [01:29:40] Well, you can read his damn Substack at Read Max. [01:29:43] It's maxread.substack.com. [01:29:47] Okay, well, Max. [01:29:47] I know it's confusing. [01:29:48] I'm snatching. [01:29:49] Maxread.substack.com. [01:29:51] So it's a flip of the name of the Substack. [01:29:54] It's just his name, the Substack. [01:29:56] Substack meme. [01:29:57] What do you mean? [01:29:58] Substack? [01:29:59] I don't know. [01:30:00] Is that a tech thing? [01:30:02] There's no name. [01:30:03] Subscription. [01:30:04] Substack. [01:30:05] I think it's subscription. [01:30:06] I'm here on the Substack. [01:30:08] What does that mean? [01:30:10] I think it stands for subscription. [01:30:12] Subscription stack. [01:30:13] Subscription stack? [01:30:14] Yeah, like a stack. [01:30:15] Yeah, that makes sense. [01:30:15] A stack of stacks. [01:30:16] My stack is subscription.
[01:30:17] Like a stack of magazines. [01:30:18] Penthouse, Playboy, Maxim. [01:30:21] Maxim, like that? [01:30:22] Read Max. [01:30:23] I've read those before. [01:30:24] I will say, I do only read it for the articles, because the pictures, let me just say that this man has a little bit of a problem with the Simpsons. [01:30:35] Yeah. [01:30:36] Thank you very much for joining us, and we will see you next time. [01:30:39] Thank you so much for having me, guys. === Ads and Alcatraz (04:18) === [01:30:56] Well, after an interview, you know what I like to do? [01:30:59] I like to have a White Claw seltzer, with lots of new flavors that are coming out every single day. [01:31:05] Are people still drinking that? [01:31:06] Black Cherry. [01:31:07] Yeah, I think so. [01:31:08] That's weird. [01:31:09] But I have to finish the ad read. [01:31:10] Oh, sorry. [01:31:11] Black Cherry is now coming out. [01:31:13] Lime Blossom and Rose is now coming out. [01:31:17] Right. [01:31:17] And Lychee. [01:31:18] And Lychee is going to be available soon in some states, banned unfortunately in Delaware, California, New York, and Florida. [01:31:28] But White Claw is a low-calorie alternative to beer, which makes you feel bloated and funky and makes you pee a lot. [01:31:36] White Claw actually does not recycle. [01:31:38] It purely absorbs. [01:31:39] So you will not urinate at all unless you imbibe some outside liquid besides White Claw. [01:31:46] We should start doing ads. [01:31:47] We should. [01:31:47] But you know what? [01:31:48] We shouldn't do them for companies. [01:31:51] Like, we shouldn't do that. [01:31:52] We should just do it. [01:31:53] Yeah. [01:31:53] No, no. [01:31:54] Like, we should just pick something and be like, let's just do an ad for this. [01:31:57] Because we can do ads. [01:31:58] We're not going to get paid for it. [01:32:00] We just come up with our own ads. [01:32:02] That's a good idea. [01:32:03] Oh, fuck. [01:32:04] I have to advertise something. [01:32:06] You do? [01:32:06] This thing we just talked about. [01:32:07] Yes. [01:32:08] Oh, I wasn't listening. [01:32:09] Okay. [01:32:09] So, ladies and gentlemen, listen the fuck up. [01:32:13] Alcatraz, you know The Rock? [01:32:15] Welcome to The Rock. [01:32:17] Yeah, of course. [01:32:18] Have you seen the movie? [01:32:19] This is not a joke. [01:32:20] Are you kidding? [01:32:20] Have I seen the movie? [01:32:21] I own it, and I have for, like, 15 billion years, on Criterion, the double-disc version. [01:32:27] Well, if you've ever been like, I love that movie. [01:32:29] I want to go get some of them poison balls from Alcatraz. [01:32:32] Huh, how do I get there? [01:32:33] I ride the ferry, right? [01:32:35] Poison ball. [01:32:35] Oh, from the movie. [01:32:36] From the fucking movie. [01:32:37] Yeah, he's got the string of pearls, the ball, the poison pearls. [01:32:40] Really elegant string-of-pearls configuration. [01:32:42] Unfortunately, incredibly unstable. [01:32:44] The nuclear. [01:32:45] But you have to get there on the ferry, right? [01:32:48] Or you could swim. [01:32:49] Or you could, well, someone I know did it before. [01:32:53] I think I could do it. [01:32:54] It doesn't seem that hard. [01:32:55] Yeah, don't get me started. [01:32:56] You might be like, oh, I got to ride the ferry there. [01:32:58] Tickets on the ferry are $45 to get there? [01:33:02] Well, I'm sure these workers... [01:33:04] Well, it's probably a ticket for the place, too. [01:33:06] I'm sure these workers get paid a lot.
[01:33:08] In fact, many of them actually only get paid $19 an hour. [01:33:12] Because of this, they unionized with the Inlandboatmen's Union, which is a part of the ILWU. [01:33:19] This happened very recently. [01:33:20] Well, not too recently, but within the past couple of years. [01:33:23] They're now trying to get a contract. [01:33:25] Ahead of that contract, Alcatraz Cruises LLC is now hiring strikebreakers in order to bust up negotiations for a contract for their unionized workforce. [01:33:40] Because there is, I mean, who knows? [01:33:42] But obviously, if they're hiring strikebreakers, they think something is going to happen. [01:33:46] These ads for this work are on Indeed.com. [01:33:51] Now, I'm not saying you should click on that link and apply. [01:33:53] I'm not saying that. [01:33:55] I'm just saying that we're going to put the link in the description for this episode. [01:33:59] It's basically for informational purposes. [01:34:01] Do whatever you want with that. [01:34:03] It's just purely for info. [01:34:04] It's a public website. [01:34:05] Sure. [01:34:06] Indeed.com, but I mean, they want people to apply. They want people to apply. [01:34:11] That's why I'm saying that. [01:34:11] It doesn't say you have to live in San Francisco. [01:34:14] The About You section just says this person will be adaptable, dynamic, and embody, capital letters, City Experiences, even more capital letters, RESPECT Service System. [01:34:26] And it also says, important: since Alcatraz Cruises LLC is engaged in collective bargaining, the above wages are subject to change. [01:34:32] See attorney guidance attached. [01:34:34] I think that you should, this is a good opportunity for anybody out of state. [01:34:39] Just look at it. [01:34:41] Just look at it. [01:34:41] And just look at it and send your shit in, too. [01:34:44] You know, just say you have a TWIC card, and say that you have a Merchant Mariner's license as well. [01:34:51] Just because maybe that's something you want in your life. [01:34:54] I have those things. [01:34:55] Yeah. [01:34:56] And maybe I'll actually, I probably will literally apply after this. [01:35:00] But yeah, so just, I know that many of you need jobs, and this is a wonderful opportunity. [01:35:05] That was a great ad read. [01:35:06] Thank you. [01:35:07] That's On Call Deckhand, Alcatraz, City Experiences, San Francisco, California. [01:35:13] All right. === Call For TWIC Cards (01:55) === [01:35:14] Well, with that being said, my name is fucking Captain Jack Sparrow. [01:35:20] I'm from Liverpool. [01:35:22] I've been a pirate in the Caribbean for many years now. [01:35:26] Who's the chick in that? Keira Knightley? [01:35:29] I think I shagged Keira. [01:35:32] But they took me off of the, they took me off of the Pirates of the Caribbean due to a court case. [01:35:37] But lots of women have problems. [01:35:40] And MeToo. [01:35:41] Okay. [01:35:42] Well, it wasn't MeToo. [01:35:43] You're right. [01:35:43] It wasn't MeToo. [01:35:44] I was struggling to remember what happened. [01:35:47] We did, like, a four-hour episode on it. [01:35:49] I know, but then I got so pissed off, I can't think about it anymore. [01:35:53] That was one of those things where, like, people on the internet were insane over that. [01:35:57] Everyone's insane. Everyone's fucking... This is my problem, is, like, and I was kind of trying to elaborate on this during the episode a little bit when I was talking about how everything is all fucked up. [01:36:07] Everyone is insane.
[01:36:10] Like, I just, like, everyone's crazy, I think. [01:36:13] It makes me, makes me feel crazy. [01:36:16] Why? [01:36:16] Yeah. [01:36:17] I don't know. [01:36:18] Anyways, my name is Brace. [01:36:19] I'm Liz. [01:36:20] We are, of course, as always, joined by producer, the money, James Chomsky. [01:36:25] He's so young. [01:36:26] He's producing it. [01:36:28] Brian's out. [01:36:29] Brian Epstein. [01:36:30] We didn't like his brother. [01:36:33] As Jeremy Corbyn would call him, Brian Epstein. [01:36:37] He's out. [01:36:38] We've replaced him. [01:36:39] I'm really the accent. [01:36:40] What's the name of the podcast? [01:36:41] The podcast. [01:36:42] The podcast is called TrueAnon. [01:36:47] We'll see you next time. [01:36:49] Bye-bye. [01:37:06] Just Jeffrey Lexter. [01:37:08] Come in. [01:37:09] Come in.