| Time | Text |
|---|---|
| 00:02:03 | State of AI with Sebastian and Nathan |
| The following is a conversation all about the state of the art in artificial intelligence, including some of the exciting technical breakthroughs and developments in AI that happened over the past year and some of the interesting things we think might happen this upcoming year. | |
| At times, it does get super technical, but we do try to make sure that it remains accessible to folks outside the field without ever dumbing it down. | |
| It is a great honor and pleasure to be able to do this kind of episode with two of my favorite people in the AI community, Sebastian Raschka and Nathan Lambert. | |
| They are both widely respected machine learning researchers and engineers who also happen to be great communicators, educators, writers, and Twitterers, or X posters. | |
| Sebastian is the author of two books I highly recommend for beginners and experts alike. | |
| The first is Build a Large Language Model from Scratch, and the second is Build a Reasoning Model from Scratch. | |
| I truly believe in the machine learning computer science world, the best way to learn and understand something is to build it yourself from scratch. | |
| Nathan is the post-training lead at the Allen Institute for AI and author of the definitive book on reinforcement learning from human feedback. | |
| Both of them have great X accounts, great Substacks. | |
| Sebastian has courses on YouTube. | |
| Nathan has a podcast. | |
| And everyone should absolutely follow all of those. | |
| This is the Lex Fridman Podcast. | |
| To support it, please check out our sponsors in the description where you can also find links to contact me, ask questions, get feedback, and so on. | |
| And now, dear friends, here's Sebastian Raschka and Nathan Lambert. | |
| So I think one useful lens to look at all of this through is the DeepSeek, so-called DeepSeek moment. | |
| 00:11:33 | DeepSeek's Decline? |
| This happened about a year ago, in January 2025, when the open weight Chinese company DeepSeek released DeepSeek R1, which, I think it's fair to say, surprised everyone with near or at state-of-the-art performance, allegedly achieved with much less compute and for much cheaper. | |
| And from then to today, the AI competition has gotten insane, both on the research level and the product level. | |
| It's just been accelerating. | |
| Let's discuss all of this today and maybe let's start with some spicy questions if we can. | |
| Who is winning at the international level? | |
| Would you say it's the set of companies in China or the set of companies in the United States? | |
| And Sebastian, Nathan, it's good to see you guys. | |
| So Sebastian, who do you think is winning? | |
| So winning is a very broad term. | |
| I would say you mentioned the DeepSeek moment. | |
| And I do think DeepSeek is definitely winning the hearts of the people who work on open weight models because they share these as open models. | |
| Winning, I think, has multiple time scales to it. | |
| We have today, we have next year, we have in 10 years. | |
| One thing I know for sure is that I don't think nowadays, 2026, that there will be any company who is, let's say, having access to a technology that no other company has access to. | |
| And that is mainly because researchers are frequently changing jobs, changing labs. | |
| They rotate it. | |
| So I don't think there will be a clear winner in terms of technology access. | |
| However, I do think the differentiating factor will be budget and hardware constraints. | |
| So I don't think the ideas will be proprietary, but the resources needed to implement them might be. | |
| And so I don't currently see a winner-take-all scenario. | |
| I can't see that at the moment. | |
| Nathan, what do you think? | |
| You see, the labs put different energy into what they're trying to do. | |
| And I think to demarcate the point in time when we're recording this, the hype over Anthropic's Claude Opus 4.5 model has been absolutely insane, which is just, I mean, I've used it and built stuff in the last few weeks, and it's almost gotten to the point where it feels like a bit of a meme in terms of the hype. | |
| And it's kind of funny because this is very organic. | |
| And then if we go back a few months ago, we can get the release date in the notes as Gemini 3 from Google got released. | |
| And it seemed like the marketing and just like wow factor of that release was super high. | |
| But then at the end of November, Claude Opus 4.5 was released and the hype has been growing. | |
| But Gemini 3 was before this. | |
| And it kind of feels like people don't really talk about it as much. | |
| Even though when it came out, everybody was like, this is Gemini's moment to retake kind of Google's structural advantages in AI. | |
| And Gemini 3 is a fantastic model, and I still use it. | |
| It's just kind of differentiation is lower. | |
| And I agree with Sebastian where you're saying, with all of these, the idea space is very fluid, but culturally Anthropic is known for betting very hard on code, and that Claude Code thing is working out for them right now. | |
| So I think that even if the ideas flow pretty freely, so much of this is bottlenecked by human effort and kind of culture of organizations where Anthropic seems to at least be presenting as the least chaotic. | |
| It's a bit of an advantage. | |
| And if they can keep doing that for a while. | |
| But on the other side of things, there's a lot of ominous technology from China, where there are way more labs than DeepSeek. | |
| So DeepSeek kicked off a movement within China. | |
| I say kind of similar to how ChatGPT kicked off a movement in the US where everything had a chatbot. | |
| There's now tons of tech companies in China that are releasing very strong frontier open weight models to the point where I would say that DeepSeek is kind of losing its crown as the preeminent open model maker in China. | |
| And the likes of Z.ai with their GLM models, MiniMax's models, and Moonshot's Kimi, especially in the last few months, have shone more brightly. | |
| The new DeepSeek models are still very strong, but this could be looked back on as a big narrative point, where in 2025 DeepSeek came along and kind of provided this platform for way more Chinese companies that are releasing these fantastic models to have this new type of operation. | |
| So these models from these Chinese companies are open weights. | |
| And depending on this trajectory of business models that these American companies are doing, could be at risk. | |
| But currently, a lot of people are paying for AI software in the US. | |
| And historically in China and other parts of the world, people don't pay a lot for software. | |
| So some of these models like DeepSeek have the love of the people because they are open weight. | |
| How long do you think the Chinese companies keep releasing open weight models? | |
| I would say for a few years. | |
| I think that, like in the US, there's not a clear business model for it. | |
| I have been writing about open models for a while, and these Chinese companies have realized it. | |
| So I get inbound from some of them. | |
| And they're smart and realize the same constraints, which is that a lot of U.S. tech companies and other IT companies won't pay for an API subscription to Chinese companies for security concerns. | |
| This has been a long-standing habit in tech. | |
| And the people at these companies then see open weight models as an ability to influence and take part of a huge growing AI expenditure market in the US. | |
| And they're very realistic about this. | |
| And it's working for them. | |
| And I think that the government will see that that is building a lot of influence internationally in terms of uptake of the technology. | |
| So there's going to be a lot of incentives to keep it going. | |
| But building these models and doing the research is very expensive. | |
| So at some point, I expect consolidation, but I don't expect that to be a story of 2026, where there will be more open model builders throughout 2026 than there were in 2025. | |
| And a lot of the notable ones will be in China. | |
| You were going to say something? | |
| Yes. | |
| You mentioned DeepSeek losing its crown. | |
| I do think to some extent, yes, but we also have to consider, though, they are still, I would say, slightly ahead in the other ones. | |
| It's not that DeepSeek got worse. | |
| It's just like the other ones are using the ideas from DeepSeek. | |
| For example, you mentioned Kimi, same architecture. | |
| They're training it. | |
| And then again, we have this leapfrogging where they might be at some point in time a bit better because they have the more recent model. | |
| And I think this comes back to the fact that there won't be a clear winner. | |
| It will just be like that. | |
| And one person releases something, the other one comes in. | |
| And the most recent model is probably always the best model. | |
| Yeah. | |
| We'll also see that Chinese companies have different incentives. | |
| So like DeepSeek is very secretive, whereas some of these startups, like the MiniMaxes and Z.ais of the world, those two have literally filed IPO paperwork, and they're trying to get Western mindshare and do a lot of outreach there. | |
| So I don't know if these incentives will kind of change the model development because DeepSeek famously is built by a hedge fund, High Flyer Capital. | |
| And we don't know exactly what they like. | |
| We don't know what they use the models for or if they care about this. | |
| They're secret in terms of communication. | |
| They're not secret in terms of the technical reports that describe how their models work. | |
| They're still open on that front. | |
| And we should also say, on the Opus 4.5 hype, there's the layer of something being the darling of the X slash Twitter echo chamber versus the actual number of people that are using the model. | |
| I think it's probably fair to say that ChatGPT and Gemini are focused on the broad user base that just want to solve problems in their daily lives. | |
| And that user base is gigantic. | |
| So the hype about the coding may not be represented in the actual use. | |
| I would say also a lot of the usage patterns are, like you said, name recognition, brand and stuff, but also muscle memory almost where, you know, like ChatGPT has been around for a long time. | |
| People just got used to using it and it's kind of like almost like a flywheel. | |
| They recommend it to other users and that stuff. | |
| One interesting point is also the customization of LLMs. | |
| For example, ChatGPT has a memory feature, right? | |
| And so you may have a subscription and you use it for personal stuff, but I don't know if you want to use that same thing at work, you know, because it's the boundary between private and work. | |
| If you're working at a company, they might not allow that or you may not want that. | |
| And I think that's also an interesting point where you might have multiple subscriptions. | |
| One is just clean: it has nothing of your personal images or hobby projects in there. | |
| It's just like the work thing. | |
| And then the other one is your personal thing. | |
| So I think that's also something where there are two different use cases. | |
| And it doesn't mean you only have to have one. | |
| I think the future is also multiple ones. | |
| What model do you think won 2025? | |
| And what model do you think is going to win 26? | |
| I think in the context of a consumer chatbot, it's a question of whether you're willing to bet on Gemini over ChatGPT, which I would say in my gut feels like a bit of a risky bet because OpenAI has been the incumbent and there are so many benefits to that in tech. | |
| I think the momentum, if you look at 2025, was on Gemini's side, but they were starting from such a low point. | |
| RIP Bard and these earlier attempts at getting started; I think huge credit to them for powering through the organizational chaos to make that happen. | |
| But also it's hard to bet against OpenAI because they always come across as so chaotic, but they're very good at landing things. | |
| And personally, I have very mixed reviews of GPT-5, but it had to have saved them so much money, with the headline feature being a router, where most users are no longer running up their GPU costs as much. | |
| So I think it's very hard to dissociate the things that I like out of models versus the things that are going to actually be a general public differentiator. | |
| What do you think about 2026? | |
| Who's going to win? | |
| I'll say something, even though it's risky. | |
| I will say that I think Gemini will continue to gain ground on ChatGPT. | |
| I think Google can scale when both of these are operating at such extreme scales. | |
| And Google has the ability to separate that research and product a bit better. | |
| We hear so much about OpenAI being chaotic operationally and chasing the high-impact thing, which is a very startup culture. | |
| And then on the software and enterprise side, I think Anthropic will have continued success, as they've again and again been set up for that. | |
| And obviously, Google's Cloud has a lot of offerings, but I think this kind of like Gemini name brand is important for them to build. | |
| And Google's Cloud will continue to do well. | |
| But that's kind of a more complex thing to explain in the ecosystem because that's competing with the likes of Azure and AWS rather than on the model provider side. | |
| So in the infrastructure, you think GPUs give an advantage? | |
| Largely because the margin on NVIDIA chips is insane, and Google can develop everything from top to bottom to fit their stack and not have to pay this margin. | |
| And they've had a head start in building data centers. | |
| So all of these things that have both high lead times and very hard margins on high costs, Google has just kind of a historical advantage there. | |
| And if there's going to be a new paradigm, it's most likely to come from OpenAI, where their research division again and again has kind of shown this ability to land a new research idea or a product. | |
| I think, like, Deep Research, Sora, o1 thinking models, all of these definitional things have come from OpenAI, and that's got to be one of their top traits as an organization. | |
| So it's kind of hard to bet against that. | |
| But I think a lot of this year will be about scale and optimizing what could be described as low-hanging fruit in models. | |
| 00:13:45 | Fun Example of Fast Thinking |
| And clearly, there's a trade-off between intelligence and speed. | |
| This is what GPT-5 was trying to solve behind the scenes. | |
| It's like, do people actually want intelligence, the broad public, or do they want speed? | |
| I think it's a nice variety, actually, or the option to have a toggle there. | |
| I mean, first for my personal usage, most of the time when I look something up, I use ChatGPT to ask a quick question, get the information I wanted fast. | |
| For most daily tasks, I use the quick model. | |
| Nowadays, I think the auto mode is pretty good where you don't have to specifically say thinking or non-thinking and stuff. | |
| Then again, I also sometimes want the pro mode. | |
| Very often, what I do is when I have something written, I put it into ChatGPT and say, hey, do a very thorough check. | |
| Are all my references correct? | |
| Are all my thoughts correct? | |
| Did I make any formatting mistakes? | |
| And are the figure numbers wrong or something like that? | |
| And I don't need that right away. | |
| It's something, okay, I finish my stuff, maybe have dinner, let it run, come back and go through this. | |
| And I think, see, this is where I think it's important to have this option. | |
| I would go crazy if for each query I had to wait 30 minutes, or even 10 minutes. | |
| That's me. | |
| I'm sitting over here losing my mind that you use the router and the non-thinking model. | |
| I'm like, how do you live with that? | |
| It's like my reaction. | |
| I've been heavily on ChatGPT for a while. | |
| Never touched 5 non-thinking. | |
| It's its tone, and then its propensity for errors; it just has a higher likelihood of errors. | |
| Some of this is from back when OpenAI released o3, which was the first model to do this deep search and find many sources and integrate them for you. | |
| So I became habituated with that. | |
| So I will only use GPT 5.2 thinking or pro when I'm finding any sort of information query for work, whether that's a paper or some code reference that I found. | |
| And it's just like I will regularly have like five pro queries going simultaneously, each looking for one specific paper or feedback on an equation or something. | |
| I have a fun example where I just needed to answer as fast as possible. | |
| For this podcast, before I was going on the trip, I have a local GPU running at home and I wanted to run a long RL experiment. | |
| And usually, I also unplug things because you never know if you're not at home, you don't want to have things plugged in. | |
| And I accidentally unplugged the GPU. | |
| It was like my wife was already in the car and it's like, oh, dang. | |
| And then basically, I wanted as fast as possible, a bash script that runs my different experiments and the evaluation. | |
| And I did something, I mean, I know how to use the bash interface, the bash terminal. | |
| But in that moment, I just needed, like, 10 seconds: give me the command. | |
| This is a hilarious situation, but yeah. | |
| So what did you use? | |
| So I did the non-thinking fastest model. | |
| It gave me the bash command to chain different scripts to each other. | |
| And then the thing is, like, you have the tee thing where you want to route this to a log file. | |
| Top of my head, I was just like in a hurry. | |
| I could have thought about it myself. | |
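For readers curious what that one-liner amounts to, here is a minimal Python sketch of the same idea, running a couple of experiment scripts in sequence and mirroring their output to a log file, the tee behavior Sebastian mentions. The script names are hypothetical placeholders, not his actual files.

```python
import subprocess
from pathlib import Path

# Hypothetical experiment scripts, run one after another (the "chaining").
scripts = ["train_rl.py", "evaluate.py"]
log_path = Path("run.log")

with log_path.open("w") as log:
    for script in scripts:
        proc = subprocess.Popen(
            ["python", script],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
        # Mirror each output line to the terminal and to the log file (like `tee`).
        for line in proc.stdout:
            print(line, end="")
            log.write(line)
        proc.wait()
```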
| By the way, I don't know if that's a representative case, wife waiting in the car. | |
| You have to run, you know, plug the GPU, you have to generate a bash script. | |
| It sounds like a movie. | |
| It's Mission Impossible. | |
| I use Gemini for that. | |
| So I use thinking for all the information stuff. | |
| And then Gemini for fast things or stuff that I could sometimes Google, which is like it's good at explaining things. | |
| And I trust that it has this kind of background of knowledge and it's simple. | |
| And the Gemini app has gotten a lot better and it's good for that sort of things. | |
| And then for code and any sort of philosophical discussion, I use Claude Opus 4.5, also always with extended thinking. | |
| Extended thinking and inference time scaling is just a way to make the models marginally smarter. | |
| And I will always edge on that side when the progress is very high because you don't know when that'll unlock a new use case. | |
| And then sometimes use Grok for real-time information or finding something on AI Twitter that I knew I saw and I need to dig up and I just fixated on. | |
| Although when Grok 4 came out, Grok 4 Heavy, which was like their pro variant, was actually very good. | |
| And I was pretty impressed with it. | |
| And I just kind of like muscle memory, lost track of it with having the ChatGPT app open. | |
| So I use many different things. | |
| Yeah. | |
| I actually do use Grok 4 Heavy for debugging, for like hardcore debugging that the other ones can't solve. | |
| I find that it's the best. | |
| And it's interesting because you say ChatGPT is the best interface. | |
| For me, maybe for that same reason, but this could be just momentum, Gemini is the better interface. | |
| Gemini is the better interface for me. | |
| I think because I fell in love with their needle-in-the-haystack performance being the best. | |
| If I ever put something that has a lot of context, but I'm looking for very specific kinds of information, make sure it tracks all of it. | |
| I find at least the Gemini for me has been the best. | |
| So it's funny with some of these models, if they win your heart over for one particular feature on a one particular day for that particular query, that prompt, you're like, this model is better. | |
| And so you'll just stick with it for a bit until it does something really dumb. | |
| There's like a threshold effect, some smart thing, and then you fall in love with it. | |
| And then it does some dumb thing. | |
| And you're like, you know what? | |
| I'm going to switch and try Claude or ChatGPT and all that kind of stuff. | |
| This is exactly like you use it until it breaks, until you have a problem. | |
| And then you change the LLM. | |
| And I think it's the same how we use anything, like our favorite text editor operating systems or the browser. | |
| I mean, there are so many browser options, Safari, Firefox, Chrome, all relatively similar, but then there are edge cases, maybe extensions you want to use, and then you switch. | |
| But I don't think there is anyone who types the same thing, like the website into different browsers and compares them. | |
| You only do that when the website doesn't render, if something breaks, I think. | |
| So that's a good point. | |
| I think you use it until it breaks and then you explore other options, I think. | |
| On the long context thing, I was also a Gemini user for this, but the GPT 5.2 release blog had like crazy long context scores where a lot of people were like, did they just figure out some algorithmic change? | |
| It went from like 30% to like 70% or something in this minor model update. | |
| So it's also very hard to keep track of all of these things. | |
| But now I'm looking more favorably at GPT 5.2's long context. | |
| So it's just kind of like, how do I actually get to testing this never-ending battle? | |
| It's interesting that none of us talked about the Chinese models from a user usage perspective. | |
| What does that say? | |
| Does that mean the Chinese models are not as good? | |
| Or does that mean we're just very biased and US focused? | |
| I do think that that's currently the discrepancy between just the model and the platform. | |
| So I think the open models, they are more known for the open weights, not the platform yet. | |
| There are also a lot of companies that are willing to sell you the open model inference at a very low cost. | |
| I think, like OpenRouter, it's easy to look at multi-model things. | |
| You could run DeepSeek on Perplexity. | |
| I think all of us sitting here are like, we use OpenAI GPT-5 Pro consistently. | |
| We're all willing to pay for the marginal intelligence gain. | |
| And anyone that's like these models from the US are better in terms of the outputs. | |
| I think that the question is, will they stay better for this year and for years going? | |
| But it's like, so long as they're better, I'm going to pay for it and use them. | |
| I think there's also analysis that shows that the way that the Chinese models are served, you could argue due to export controls or not, is that they use fewer GPUs per replica, which makes them slower and gives them different errors. | |
| And it's like speed and intelligence. | |
| If these things are in your favor as a user, I think in the US, a lot of users will go for this. | |
| And I think that that is something that will spur these Chinese companies to want to compete in other ways, whether it's like free or substantially lower costs, or it'll breed creativity in terms of offerings, which is good for the ecosystem. | |
| But I just think the simple thing is the US models are currently better and we use them. | |
| And I try Chinese models. | |
| I try these other open models and I'm like, fun, but I don't go back to it. | |
| We didn't really mention programming. | |
| That's another use case that a lot of people deeply care about. | |
| So I use basically half and half Cursor and Claude Code because I find them to be fundamentally different experiences and both useful. | |
| What do you guys, you program quite a bit. | |
| So what do you use? | |
| What's the current vibe? | |
| So I use the Codex plugin for VS Code. | |
| You know, it's very convenient. | |
| It's just like a plug-in and then it's a chat interface that has access to your repository. | |
| I know that Claude Code is, I think, a bit different. | |
| It is a bit more agentic. | |
| It touches more things. | |
| It does the whole project for you. | |
| I'm not quite there yet where I'm comfortable with that because maybe I'm a control freak, but I still would like to see a bit what's going on. | |
| And Codex is kind of like right now for me, like the sweet spot where it is helping me, but it is not taking completely over. | |
| I should mention one of the reasons I do use Claude Code is to build the skill of programming with English. | |
| I mean, the experience is fundamentally different. | |
| As opposed to micromanaging the details of the process of generating the code and looking at the diff, which you can do in Cursor, if that's the IDE you use, and then changing, altering, looking at and reading the code and understanding the code deeply as you progress, versus just kind of thinking in this design space and guiding it at this macro level, | |
| which I think is another way of thinking about the programming process. | |
| Also, we should say that Claude Code just seems to be somehow a better utilization of Claude Opus 4.5. | |
| It's a good side-by-side for people to do. | |
| So you can have Claude Code open, you can have Cursor open, you can have VS Code open, and you can select the same models on all of them and ask questions. | |
| And it's very interesting. | |
| Like, Claude Code is way better in that domain. | |
| It's remarkable. | |
| We should say that both of you are legit on multiple fronts, researchers, programmers, educators, Twitterers, and on the book front too. | |
| So, Nathan, at some point soon, hopefully has an RLHF book coming out. | |
| It's available for pre-order, and there's a full digital pre-print, just making it pretty and better organized for the physical thing, which is a lot of why I do it because it's fun to create things that you think are excellent in the physical form when so much of our life is digital. | |
| I should say, going to Perplexity here: Sebastian Raschka is a machine learning researcher and author known for several influential books. | |
| A couple of them that I wanted to mention, which is a book I highly recommend, Build a Large Language Model from Scratch, and the new one, Build a Reasoning Model from Scratch. | |
| So, I'm really excited about that. | |
| Building stuff from scratch is one of the most powerful ways of learning. | |
| Honestly, building an LLM from scratch is a lot of fun. | |
| It's also a lot to learn. | |
| And, like you said, it's probably the best way to learn how something really works because you can look at figures, but figures can have mistakes. | |
| You can look at concepts, explanations, but you might misunderstand them. | |
| But if you see there is code and the code works, you know it's correct. | |
| I mean, there's no misunderstanding. | |
| It's like it's precise. | |
| Otherwise, it wouldn't work. | |
| And I think that's like kind of like the beauty behind coding. | |
| It is kind of like it doesn't lie. | |
| It's math, basically. | |
| So, even though with math, I think you can have mistakes in a book, you would never notice because you're not running the math when you are reading the book. | |
| You can't verify this. | |
| And with code, what's nice is you can verify it. | |
| Yeah, I agree with you about the LLM from scratch book. | |
| It's nice to tune out everything else, the internet and so on, and just focus on the book. | |
| But, you know, I read several like, you know, history books. | |
| It's just less lonely somehow. | |
| It's really more fun. | |
| Like, for example, on the programming front, I think it's genuinely more fun to program with an LLM. | |
| And I think it's genuinely more fun to read with an LLM. | |
| But you're right. | |
| Like, this distraction should be minimized. | |
| So you use the LLM to basically enrich the experience, maybe add more context. | |
| I just, the rate of aha moments for me on a small scale is really high with LLMs. | |
| 100%. | |
| I would, I also want to correct myself. | |
| I'm not suggesting not to use LLMs. | |
| I suggest doing it in multiple passes, like one pass just offline focus mode. | |
| And then after that, I mean, I also take notes, but I try to resist the urge to immediately look things up. | |
| I do a second pass. | |
| It's just like for me more structured this way. | |
| And I get distracted less. | |
| I mean, sometimes things are answered in the chapter, but sometimes also it just helps to let it sink in and think about it. | |
| Other people have different preferences. | |
| I would highly recommend using LLMs when reading books. | |
| For me, it's just, it's not the first thing to do. | |
| It's like the second pass. | |
| By way of recommendation, as I say, I do the opposite. | |
| I like to use the LLM at the beginning to lay out the full context of like, what is this world that I'm now stepping into? | |
| But I try to avoid clicking out of the LLM into the world of like Twitter and blogs. | |
| And because then you're now down this rabbit hole. | |
| You're reading somebody's opinion. | |
| There's a flame war about a particular topic. | |
| And all of a sudden you're no longer, you're now in the realm of the internet and Reddit and so on. | |
| But if you're purely letting the LLM give you the context of why this matters, what are the big picture ideas? | |
| But sometimes books themselves are good at doing that, but not always. | |
| This is why I like the ChatGPT app because it gives the AI a home in your computer when you can focus on it rather than just being another tab in my mess of internet options. | |
| 00:10:00 | Engaging AI Models |
| And I think Claude Code in particular does a good job of making that a joy, where it seems very engaging as a product designed to be an interface from which your AI will then go out into the world. | |
| And it's something that is very kind of intangible between it and Codex, where Codex from OpenAI can often be just as good, but it just feels a little bit rougher around the edges, whereas Claude Code makes it fun to build things, particularly from scratch, where you just don't have to care, but you trust that it'll make something. | |
| Like, obviously, this is good for websites and kind of refreshing tooling and stuff like this, which I use it for, or data analysis. | |
| So, my blog, we scrape Hugging Face. | |
| We keep the download numbers for every data set and model over time now. | |
| So, we have them. | |
| And it's like, Claude was just like, Yeah, I've made use of that data, no problem. | |
| And I was like, That would have taken me days. | |
| And then I have enough situational awareness to be like, Okay, these trends obviously make sense and you can check things. | |
| But that's just a kind of wonderful interface where you can have an intermediary and not have to do the kind of awful low-level work that you would have to do to maintain different web projects and do this stuff. | |
| All right. | |
| So, we just talked about a bunch of the closed-weight models. | |
| Let's talk about the open ones. | |
| So, tell me about the landscape of open LLMs, which are interesting ones, which stand out to you and why. | |
| We already mentioned DeepSeek. | |
| Do you want to see how many we can name off the top of our head? | |
| Yeah, without looking at notes: DeepSeek, Kimi, MiniMax, Z.ai, Ant Ling. | |
| We're just going Chinese. | |
| Let's throw in Mistral AI, Gemma. | |
| Yeah, GPT-OSS, the open source model by OpenAI. | |
| Actually, NVIDIA Nemotron had a, or NVIDIA had a really cool one, Nemotron3. | |
| There's a lot of stuff, especially at the end of the year. | |
| Qwen may be the one. | |
| Oh, yeah, Qwen was the name, the obvious name. | |
| I was trying to get through the list; you can get at least 10 Chinese and at least 10 Western. | |
| I think that, I mean, OpenAI released their first open model since GPT-2. | |
| When I was writing about OpenAI's open model release, they were all like, don't forget about GPT-2, which I thought was really funny because this is such a different time. | |
| But GPT-OSS is actually a very strong model and does some things that the other models don't do very well. | |
| And I think that selfishly, I'll promote a bunch of like Western companies. | |
| So both in the US and Europe have these fully open models. | |
| So I work at Allen Institute for AI. | |
| We've been building Olmo, which releases data and code and all of this. | |
| And now we have actual competition for people that are trying to release everything so that other people can train these models. | |
| So there's the Institute for Foundation Models, or LLM360, which has had their K2 models of various types. | |
| Apertus is a Swiss research consortium. | |
| Hugging Face has SmolLM, which is very popular. | |
| And NVIDIA's Nemotron has started releasing data as well. | |
| And then Stanford's Marin community project, which is kind of making it so there's a pipeline for people to open a GitHub issue and implement a new idea and then have it run in a stable language modeling stack. | |
| So this space, that list was way smaller in 2024. | |
| So I think it was like just AI2. | |
| So that's a great thing for more people to get involved and to understand language models, which doesn't really have an analog at a Chinese company. | |
| While I'm talking, I'll say that the Chinese open language models tend to be much bigger, and that gives them this higher peak performance as MoEs, whereas a lot of these things that we like a lot, whether it was Gemma or Nemotron, have tended to be smaller models, which is starting to change from the US and Europe. | |
| Mistral Large 3 came out in December, which was a giant MoE model, very similar to the DeepSeek architecture. | |
| And then a startup, Arcee AI, and NVIDIA's Nemotron have teased MoE models way bigger than 100 billion parameters, like in this 400-billion-parameter range, coming in this Q1 2026 timeline. | |
| So I think this kind of balance is set to change this year in terms of what people are using the Chinese versus US open models for, which will be, which I'm personally going to be very excited to watch. | |
| First of all, huge props for being able to name so many. | |
| Did you actually name Llama? | |
| No. | |
| I feel like this was not a purpose. | |
| R.I.P. Llama. | |
| All right. | |
| Can you mention what are some interesting models that stand out? | |
| So you mentioned Qwen3 is obviously a standout. | |
| So I would say the year was almost bookended by DeepSeek V3 and R1 on one end. | |
| And then on the other end, in December, DeepSeek version 3.2, because what I like about those is they always have an interesting architecture tweak that others don't have. | |
| But otherwise, if you want to go with, you know, the familiar but really good performance, Qwen3 and, like Nathan said, also GPT-OSS. | |
| And I think GPT-OSS, what's interesting about it is that it's kind of the first public, or open-weight, model that was really trained with tool use in mind, which I do think is a little bit of a paradigm shift, where the ecosystem was not quite ready for it. | |
| By tool use, I mean that the LLM is able to do a web search or call a Python interpreter. | |
| And I do think it's a standout because it's a huge unlock, because one of the most common complaints about LLMs is, for example, hallucinations, right? | |
| And so in my opinion, one of the best ways to solve hallucinations is to not try to always remember information or make things up. | |
| For math, why not use a calculator app or Python? | |
| If I asked the LLM who won the Soccer World Cup in 1998, instead of just trying to memorize, it could go do a search. | |
| I think mostly it's usually still a Google search. | |
| So GPT-OSS would do a tool call to Google, maybe find the FIFA website, and find, okay, it was France. | |
| It would get you that information reliably instead of just trying to memorize it. | |
| I think it's a huge unlock, which I think right now is not fully utilized yet by the open source open weight ecosystem. | |
| A lot of people don't use tool call modes because I think it's first is a trust thing. | |
| You don't want to run this on your computer where it has access to tools, could wipe your hard drive or whatever. | |
| So you want to maybe containerize that. | |
| But I do think that is like a really important step for the upcoming years to have this ability. | |
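To make the pattern concrete, here is a minimal Python sketch of a tool-use loop: a model proposes either a tool call or a final answer, the harness executes the tool and feeds the result back. This is a generic sketch, not GPT-OSS's actual interface; the `model.generate` contract, the `web_search` backend, and the stub model are all hypothetical stand-ins.

```python
# Hypothetical tool backend; a real harness would call an actual search API.
def web_search(query: str) -> str:
    return "France won the 1998 FIFA World Cup."

TOOLS = {"web_search": web_search}

def answer(model, question: str, max_steps: int = 5) -> str:
    # The model is assumed to return either {"tool": name, "args": {...}}
    # or {"answer": text} at each step (an assumed interface, not a real API).
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        step = model.generate(messages)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        # Feed the tool result back so the model can keep reasoning with it.
        messages.append({"role": "tool", "content": result})
    return "No answer within step budget."

class StubModel:
    # Toy stand-in: search once, then answer with whatever the tool returned.
    def __init__(self):
        self.searched = False
    def generate(self, messages):
        if not self.searched:
            self.searched = True
            return {"tool": "web_search", "args": {"query": messages[0]["content"]}}
        return {"answer": messages[-1]["content"]}

print(answer(StubModel(), "Who won the Soccer World Cup in 1998?"))
```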
| So a few quick things. | |
| First of all, thank you for defining what you mean by tool use. | |
| I think that's a great thing to do in general for the concepts we're talking about. | |
| Even things as sort of well established as MOEs, you have to say that means mixture of experts. | |
| And you could kind of have to build up an intuition for people what that means, how it's actually utilized, what are the different flavors. | |
| So what does it mean that there's just such explosion of open models? | |
| What's your intuition? | |
| If you're releasing an open model, you want people to use it as the first and foremost thing. | |
| And then after that comes things like transparency and trust. | |
| I think when you look at China, the biggest reason is that they want people around the world to use these models. | |
| And I think a lot of people will not. | |
| If you look outside of the US, a lot of people will not pay for software, but they might have computing resources where you can put a model on it and run it. | |
| I think there can also be data that you don't want to send to the cloud. | |
| So the number one thing is getting people to use models, to use AI, or to use your AI, which they might not be able to do without having access to the model. | |
| I guess we should state explicitly. | |
| So we've been talking about these Chinese models and open weight models. | |
| Oftentimes the way they're run is locally. | |
| So it's not like you're sending your data to China, or to Silicon Valley, or whoever developed the model. | |
| A lot of American startups make money by hosting these models from China and selling them, selling tokens. | |
| It's called like selling tokens, which means somebody will call the model to do some piece of work. | |
| I think the other reason is that US companies, like OpenAI, are so GPU-deprived. | |
| They're at the limits of their GPUs. | |
| Whenever they make a release, they're always talking about like our GPUs are hurting. | |
| And I think there's like in one of these like GPT-OSS release sessions, Sam Altman said, like, oh, we're releasing this because we can use your GPUs. | |
| We don't have to use our GPUs. | |
| And OpenAI can still get distribution out of this, which is another very real thing because this doesn't cost them anything. | |
| And for the user, I think also, I mean, there are users who just use the model locally, how they would use ChatGPT, but also for companies, I think it's a huge unlock to have these models because you can customize them, you can train them, you can add post-training, add more data, specialize them into, let's say, law, medical models, whatever you have. | |
| And you mentioned Llama; the appeal of the open weight models from China is that the licenses are even friendlier. | |
| I think they are just unrestricted open source licenses, whereas if you use something like Llama or Gemma, there are some strings attached. | |
| I think it's like an upper limit in terms of how many users you have. | |
| And then if you exceed, I don't know so many million users, you have to report your finance situation to, let's say, Meta or something like that. | |
| And I think, well, it is a free model, but there are strings attached and people do like things where strings are not attached. | |
| So I think that's also one of the reasons besides performance why the open weight models from China are so popular because you can just use them. | |
| There's no catch in that sense. | |
| The ecosystem has gotten better on that front, but mostly downstream of these new providers providing such open licenses. | |
| It was funny when you pulled up perplexity. | |
| It said Kim EK2Thinking hosted in the US, which is just like an exact, I've never seen this, but it's an exact example of what we're talking about where people are sensitive to this. | |
| Like Kimmy K2Thinkhinking and Kim EK2 is a model that is very popular. | |
| People say that it has very good creative writing and also in doing some software things. | |
| So it's just these little quirks that people pick up on with different models that they like. | |
| What are some interesting ideas that some of these models have explored that you can speak to, like that particular are interesting to you? | |
| Maybe you can go chronologically. | |
| 00:15:00 | Advancements in Fully Connected Architectures |
| I mean, there was, of course, DeepSeek R1 that came out in January, if we just focus on 2025. | |
| However, this was based on DeepSeek version 3, which came out the year before in December 2024. | |
| There are multiple things on the architecture side. | |
| What is fascinating is you can still, I mean, that's what I do from scratch coding projects. | |
| You can still start with GPT-2 and you can add things to that model to make it into this other model. | |
| So it's all still kind of like the same lineage, the same, there's a very close relationship between those. | |
| But top of my head, DeepSeek, what was unique there is the mixture of experts. | |
| I mean, they were not inventing mixture of experts. | |
| We can maybe talk a bit more what mixture of experts means. | |
| But just to list these things first before we dive into detail: mixture of experts, but then they also had multi-head latent attention, which is a tweak to the attention mechanism, and this was, I would say, in 2025 the main distinguishing factor between these open weight models, different tweaks to make inference cheaper or to shrink the KV cache size. | |
| We can also define KV cache in a few moments. | |
| But to kind of make it more economical to have long contexts, to shrink the KV cache size. | |
| So what are tweaks that we can do? | |
| And most of them focused on the attention mechanism. | |
| There is multi-head latent attention in DeepSeek. | |
| There is group query attention, which is still very popular. | |
| It's not invented by any of those models. | |
| It goes back a few years, but that would be the other option. | |
| Sliding window attention, I think, almost reuses it, if I remember correctly. | |
| So there are these different tweaks that make the models different. | |
| Otherwise, I put them all together in an article once where I just compared them. | |
| They are very surprisingly similar. | |
| It's just different numbers in terms of how many repetitions of the transformer block you have in the center and like just little knobs that people tune. | |
| But what's so nice about it is it works no matter what. | |
| You can tweak things, you can move the normalization layers around, you get some performance gains. | |
| Olmo is always very good with ablation studies, showing what it actually does to the model if you move something around. | |
| Ablation studies, does it make it better or worse? | |
| But there are so many, let's say, ways you can implement a transformer and make it still work. | |
| Big ideas that are still prevalent are mixture of experts, multi-head latent attention, sliding window attention, group query attention. | |
| And then at the end of the year, we saw a focus on making the attention mechanism scale linearly with inference token prediction. | |
| So there was Qwen3-Next, for example, which added a gated DeltaNet. | |
| It's like kind of like inspired by state-space models where you have a fixed state that you keep updating, but it makes essentially this attention cheaper or it replaces attention with a cheaper operation. | |
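To make the KV cache motivation behind these attention tweaks concrete, here is a back-of-the-envelope sketch of how much memory the cache takes for one long sequence under standard multi-head attention versus grouped-query attention. The layer counts, head counts, and context length are illustrative numbers, not any specific model's configuration.

```python
# Back-of-the-envelope KV-cache sizing; numbers are illustrative, not a real model.
def kv_cache_gb(layers, kv_heads, head_dim, context_len, bytes_per_value=2):
    # 2x for keys and values, bf16 = 2 bytes per value, single sequence.
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value / 1e9

layers, head_dim, context_len = 48, 128, 128_000
print(kv_cache_gb(layers, kv_heads=32, head_dim=head_dim, context_len=context_len))
# ~100 GB with full multi-head attention (32 KV heads)
print(kv_cache_gb(layers, kv_heads=8, head_dim=head_dim, context_len=context_len))
# ~25 GB with grouped-query attention (8 shared KV heads), 4x smaller
```

Multi-head latent attention compresses the cached keys and values further into a smaller latent space, which is why it shows up in this same list of cost-saving tweaks.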
| And maybe it's useful to step back and talk about the transformer architecture in general. | |
| Yeah, so maybe we should start with the GPT-2 architecture, the transformer that was derived from the Attention Is All You Need paper. | |
| So the Attention Is All You Need paper had a transformer architecture that had two parts, an encoder and a decoder. | |
| And GPT just focused in on the decoder part. | |
| It is essentially still a neural network and it has this attention mechanism inside. | |
| And you predict one token at a time, you pass it through an embedding layer. | |
| There's the transformer block. | |
| The transformer block has attention modules and a fully connected layer. | |
| And there are some normalization layers in between, but it's essentially neural network layers with this attention mechanism. | |
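As a rough sketch of that description, here is a minimal decoder-style transformer block in PyTorch: attention plus a feed-forward layer, each with a normalization layer and a residual connection, and a causal mask so tokens are predicted one at a time. The dimensions are illustrative, not any particular model's configuration.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    # Minimal decoder-style block: attention + feed-forward, each preceded by a
    # normalization layer and wrapped in a residual connection.
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):
        # Causal mask: each position may only attend to earlier positions.
        seq_len = x.size(1)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        x = x + self.ff(self.norm2(x))
        return x

x = torch.randn(1, 16, 768)          # (batch, sequence, embedding)
print(TransformerBlock()(x).shape)   # torch.Size([1, 16, 768])
```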
| So coming from GPT-2, when we move on to GPT-OSS, there is, for example, the mixture of experts layer. | |
| It's not invented by GPT-OSS. | |
| It's a few years old. | |
| But it is essentially a tweak to make the model larger without consuming more compute in each forward pass. | |
| So there is this fully connected layer. | |
| And if listeners are familiar with multilayer perceptrons, you can think of a mini multilayer perceptron, a fully connected neural network layer inside the transformer. | |
| And it's very expensive because it's fully connected. | |
| If you have a thousand inputs and a thousand outputs, it's like a million connections. | |
| And it's a very expensive part in this transformer. | |
| And the idea is to kind of expand that into multiple feed-forward networks. | |
| So instead of having one, let's say you have 256; that would make it way more expensive, except you don't use all of them at the same time. | |
| So you now have a router that says, okay, based on this input token, it would be useful to use this fully connected network. | |
| And in that context, it's called an expert. | |
| So a mixture of experts means you have multiple experts. | |
| And depending on what your input is, let's say it's more math-heavy, it would use different experts compared to, let's say, translating input text from English to Spanish. | |
| It would maybe consult different experts. | |
| It's not quite clear-cut to say, okay, this one is only an expert for math and this one for Spanish; it's a bit more fuzzy. | |
| But the idea is essentially that you pack more knowledge into the network, but not all the knowledge is used all the time. | |
| That would be very wasteful. | |
| So you're kind of like during the token generation, you're more selective. | |
| There's a router that selects which tokens should go to which expert. | |
| It adds more complexity. | |
| It's harder to train. | |
| There's a lot of, you know, that can go wrong, like collapse and everything. | |
| So I think that's why Olmo 3 still uses dense. | |
| I mean, there are, I think, Olmo models with mixture of experts, but also dense models, where dense, that's also jargon. | |
| There's a distinction between dense and sparse. | |
| So mixture of experts is considered sparse because we have a lot of experts, but only few of them are active. | |
| So that's called sparse. | |
| And then dense would be the opposite where you only have like one fully connected module and it's always utilized. | |
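Here is a minimal sketch of that sparse idea in PyTorch: a router scores the experts for each token, only the top-k expert feed-forward networks run, and their outputs are combined with the router weights. The expert count and sizes are illustrative, not any specific model's configuration, and real implementations add load-balancing losses and much more efficient dispatch.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    # Minimal mixture-of-experts sketch: a router picks top-k experts per token,
    # and only those expert feed-forward networks are evaluated.
    def __init__(self, d_model=768, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)      # mixing weights over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e    # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 768)
print(MoELayer()(tokens).shape)  # torch.Size([4, 768])
```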
| So maybe this is a good place to also talk about the KV cache, but actually before that, even zooming out, like fundamentally, how many new ideas have been implemented from GPT-2 to today? | |
| Like, how different really are these architectures? | |
| Big picture, there's the mixture of experts. | |
| The attention mechanism in GPT-OSS, that would be the group query attention mechanism. | |
| So, it's a slight tweak from multi-head attention to group query attention. | |
| So, there we have two. | |
| I think they replaced layer norm by RMS norm, but it's just like a different normalization layer. | |
| Not a big change, it's just like a tweak. | |
| The non-linear activation function, for people familiar with deep neural networks, I mean, it's the same as swapping sigmoid for ReLU. | |
| It's not changing the network fundamentally. | |
| It's just like a tweak, like a little tweak. | |
| And that's about it, I would say. | |
| It's not really fundamentally that different. | |
| It's still the same architecture. | |
| So, you can convert one from one, you can go from one into the other by just adding these changes, basically. | |
| This fundamentally is still the same architecture. | |
| Yeah, so for example, you mentioned my book earlier. | |
| That's a GPT-2 model in the book because it's simple and it's very small. | |
| So, 124 million parameters, approximately. | |
| But in the bonus materials, I do have Olmo 3 from scratch, Gemma 3 from scratch, and other types of from-scratch models. | |
| And I always started with my GPT-2 model and just tweaked or added different components, and you get from one to the other. | |
| It's kind of like a lineage in a sense, yeah. | |
| Can you build up an intuition for people? | |
| Because sort of when you zoom out, you look at it, there's so much rapid advancement in the AI world. | |
| And at the same time, fundamentally, the architectures have not changed. | |
| So, where is all the turbulence, the turmoil of the advancement happening? | |
| Where's the gains to be had? | |
| So, there are the different stages where you develop the network or train the network. | |
| You have the pre-training. | |
| Now, back then, it was just pre-training with GPT-2. | |
| Now, you have pre-training, mid-training, and post-training. | |
| So, I think right now we are in the post-training focus stage. | |
| I mean, pre-training still gives you advantages if you scale it up to better, higher-quality data. | |
| But then we have capability unlocks that were not there with GPT-2. | |
| For example, ChatGPT, it is basically a GPT-3 model, and GPT-3 is the same as GPT-2 in terms of architecture. | |
| What was new was adding the supervised fine-tuning and the reinforcement learning with human feedback. | |
| So, it's more on the algorithmic side rather than the architecture. | |
| I would say that the systems also change a lot. | |
| I think if you listen to NVIDIA's announcements, they talk about these things like, you can now do FP8, you can now do FP4. | |
| And what is happening is these labs are figuring out how to utilize more compute to put it into one model, which lets them train faster. | |
| And that lets them put more data in. | |
| And then you can find better configurations faster by doing this. | |
| So you can look at essentially the tokens per second per GPU, which is a metric that you look at when you're doing large-scale training. | |
| And you can go from like 10K to 13K by turning on FP8 training, which means you're using less memory per parameter in the model. | |
| And by saving less information, you do less communication. | |
| You can train faster. | |
| So, all these like system things underpin way faster experimentation on data and algorithms. | |
| It's this kind of loop that keeps going, which is kind of hard to describe when you look at the architectures and they're exactly the same. | |
| But the code base used to train these models is going to be vastly different. | |
| And, I don't know, the GPUs are different, but you could probably train GPT-OSS 20B way faster in wall-clock time than GPT-2 was trained at the time. | |
| Like you said, they had, for example, in the mixture of experts, this FP4 optimization, for example, where you get more throughput. | |
| But I do think this is for the speed. | |
| This is true, but it doesn't give the model new capabilities in a sense. | |
| It's just how much can we make the computation coarser without suffering in terms of model performance degradation. | |
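The arithmetic behind that point is simple: lower precision means fewer bytes per parameter, so less memory to store and less data to move around. The sketch below only counts the weights of an illustrative 20-billion-parameter model; activations, optimizer state, and the KV cache are extra.

```python
# Rough arithmetic for what lower precision buys you: bytes needed just to hold
# the weights of a 20B-parameter model. Illustrative only.
params = 20e9
for name, bytes_per_param in [("FP32", 4), ("BF16", 2), ("FP8", 1), ("FP4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.0f} GB")
# FP32: 80 GB, BF16: 40 GB, FP8: 20 GB, FP4: 10 GB
```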
| But I do think, I mean, there are alternatives popping up to the transformer. | |
| There are text diffusion models, a completely different paradigm. | |
| Although text diffusion models might use transformer architectures, it's not an autoregressive transformer. | |
| And also Mamba models, which are state-space models. | |
| But they do have trade-offs. | |
| And what's true is there's nothing that has replaced the autoregressive transformer for state-of-the-art models. | |
| So, like for state-of-the-art, you would still do that, go with that thing. | |
| But there are now alternatives on the cheaper end, alternatives that are kind of making compromises, so it's not just one architecture anymore. | |
| There are little ones coming up. | |
| But if we talk about the state-of-the-art, it's pretty much still the transformer architecture auto-regressive derived from GPT-2, essentially. | |
| I guess the big question here is we talked quite a bit here on the architecture behind the pre-training. | |
| Are the scaling laws holding strong across pre-training, post-training, inference, context size, data, synthetic data? | |
| I like to start with the technical definition of scaling law, which kind of informs all of this. | |
| The scaling law is a power law relationship between, you can think of the x-axis, so kind of what you are scaling is a combination of compute and data, which are kind of similar. | |
| And then the y-axis is like the held out prediction accuracy over next tokens. | |
| We talk about models being autoregressive. | |
| It's like, if you keep a set of text that the model has not seen, how accurate will it get as you train? | |
| And the idea of scaling laws came when people figured out that that was a very predictable relationship. | |
| And I think that that technical term is continuing. | |
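In symbols, a rough sketch of the usual parameterization is a power law in compute for the held-out loss; the constants are fit empirically and differ across studies, and some papers use model size or data instead of total compute on the x-axis.

```latex
% Canonical compute scaling law: held-out loss L as a function of compute C.
% L_infty is the irreducible loss; a and alpha are fitted constants.
L(C) = L_{\infty} + a \, C^{-\alpha}
% On log-log axes this is a straight line that bends toward L_infty,
% which is what makes the trend predictable enough to extrapolate.
```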
| And then the question is, like, what do users get out of it? | |
| And then there are more types of scaling, where OpenAI's o1 was famous for introducing inference time scaling. | |
| And I think less famously for also showing that you can scale reinforcement learning training and get kind of this log x-axis and then a linear increase in performance on y-axis. | |
| So there's kind of these three axes now where the traditional scaling laws are talked about for pre-training, which is how big your model is and how big your data set is. | |
| And then scaling reinforcement learning, which is like how long can you do this trial and error learning that we will talk about, we'll define more of this. | |
| And then this inference time compute, which is just letting the model generate more tokens on a specific problem. | |
| So I'm kind of bullish, where they're all really still working, but the low-hanging fruit has mostly been taken, especially in the last year on reinforcement learning with verifiable rewards, which is this RLVR, and then inference time scaling, which is why these models feel so different to use, where previously you would get that first token immediately. | |
| And now they'll go off for seconds, minutes, or even hours generating these hidden thoughts before giving you the first word of your answer. | |
| And that's all about this inference time scaling, which is such a wonderful kind of step function in terms of how the models change abilities. | |
| They kind of enabled this tool use stuff and enabled this much better software engineering that we were talking about. | |
| And when we say enabled, this is almost entirely downstream of the fact that this reinforcement learning with verifiable rewards training just kind of let the models pick up these skills very easily. | |
| So it let the models learn. | |
| So if you look at the reasoning process, when the models are generating a lot of tokens, what it will be often doing is it tries a tool, it looks at what it gets back, it tries another API, it sees what it gets back and if it solves the problem. | |
| So the models, when you're training them, very quickly learn to do this. | |
| And then at the end of the day, that gives this kind of general foundation where the model can use CLI commands very nicely in your repo and handle Git for you and move things around and organize things or search to find more information, which, if we were sitting in these chairs a year ago, is something we didn't really think of the models doing. | |
| So this is just kind of something that has happened this year and has totally transformed how we think of using AI, which I think is very magical. | |
| It's such an interesting evolution and just so unlocks so much value. | |
| But it's not clear what the next avenue will be in terms of unlocking stuff like this. | |
| I think, and we'll get to continual learning later, there's a lot of buzz around certain areas of AI, but no one knows when the next step function will really come. | |
| So you've actually said quite a lot of things there and said profound things quickly. | |
| It would be nice to unpack them a little bit. | |
| You say you're bullish basically on every version of scaling. | |
| So can we just even start at the beginning? | |
| Pre-training, are we kind of implying that the low-hanging fruit on pre-training scaling has been picked? | |
| Has pre-training hit a plateau or is even pre-training still you're bullish on? | |
| Pre-training has gotten extremely expensive. | |
| I think to scale up pre-training, it's also implying that you're going to serve a very large model to the users. | |
| So I think that it's been loosely established. | |
| The likes of GPT-4 and similar models were around 1 trillion, like this order of trillion parameters at the biggest size. | |
| There's a lot of rumors that they've actually gotten smaller as training has gotten more efficient. | |
|
Pre-Training's Future Still Bright
00:16:30
|
|
| You want to make the model smaller because then your costs of serving go down proportionately. | |
| These models, the cost of training them is really low relative to the cost of serving them to hundreds of millions of users. | |
| I think DeepSeek had this famous number of about $5 million for pre-training at cloud market rates. | |
| I think Olmo 3, section 2.4 in the paper, we just detailed how long we had the GPU clusters sitting around for training, which includes engineering issues, multiple seeds. | |
| And it was like about $2 million to rent the cluster to deal with all the problems and headaches of training a model. | |
| So these models are pretty cheap to train; a lot of people could get $1 to $10 million to train a model, but the recurring cost of serving millions of users is really billions of dollars of compute. | |
| I think that you can look at like a thousand GPU rental, you can pay $100,000 a day for, and these companies could have millions of GPUs. | |
| You can look at how much these things cost to sit around. | |
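To make that arithmetic concrete, here is a back-of-envelope sketch using only the rough figures just mentioned (the rental rate and fleet size are illustrative assumptions, not real lab numbers):

```python
# Back-of-envelope serving-cost arithmetic (illustrative assumptions only).
rental_per_1000_gpus_per_day = 100_000                      # dollars, as cited above
per_gpu_hour = rental_per_1000_gpus_per_day / 1000 / 24
print(f"~${per_gpu_hour:.2f} per GPU-hour")                 # ~ $4.17 per GPU-hour

fleet_gpus = 1_000_000                                      # hypothetical hyperscaler-sized fleet
annual_cost = fleet_gpus * per_gpu_hour * 24 * 365
print(f"~${annual_cost / 1e9:.1f}B per year to keep the fleet running")  # ~ $36.5B
```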
| So that's kind of a big thing. | |
| And then it's like, if scaling is actually giving you a better model, is it going to be financially worth it? | |
| And I think it'll kind of slowly push it out as AI solves more compelling tasks. | |
| So, like the likes of Claude Opus 4.5 making Claude Code just work for things. | |
| I launched this project called the ATOM Project, which stands for American Truly Open Models, in July. | |
| And that was like a true vibe-coded website. | |
| And like, I have a job; it just makes plots and stuff. | |
| And then I came back to refresh it in the last few weeks. | |
| And Claude Opus 4.5, versus whatever model was current at the time, just crushed all the issues that it had from building in June and July. | |
| And like it might be a bigger model. | |
| There's a lot of things that go into this, but that's like, there's still progress coming. | |
| So what you're speaking to is the nuance of the y-axis of the scaling laws that the way it's experienced versus on a benchmark, the actual intelligence might be different. | |
| But still, your intuition about pre-training, if you scale the size of compute, will the models get better? | |
| Not whether it's financially viable, but just from the law aspect of it. | |
| Do you think the models will get smarter? | |
| Yeah. | |
| And I think, and this sometimes comes off as almost delusional when leadership at AI companies say it, but it's held for 13 orders of magnitude of compute or something. | |
| Like, why would it ever end? | |
| So I think fundamentally it is pretty unlikely to stop. | |
| It's just like eventually we're not even going to be able to test the bigger scales because of all the problems that come with more compute. | |
| I think that there's a lot of talk on how 2026 is a year when very large Blackwell compute clusters, it's like gigawatt scale facilities, hyperscalers, are coming online. | |
| And these were all contracts for power and data centers that were signed and sought out in like 2022 and 2023. | |
| So before or right after ChatGPT. | |
| So, it took this two to three year lead time to build these bigger clusters to train the models. | |
| Now, there's obviously immense interest in building even more data centers than that. | |
| So, that is like kind of the crux that people are saying is like these new clusters are coming. | |
| The labs are going to have more compute for training. | |
| They're going to utilize this, but it's not a given. | |
| And it's like, I've seen so much progress that I expect it and I expect a little bit bigger models. | |
| And I expect, I would say it's more like we will see a $2,000 subscription this year. | |
| We've seen $200 subscriptions. | |
| It's like that could 10x again. | |
| And these are the kind of things that could come. | |
| And they're all downstream of this bit bigger model that offers just a little bit more cutting edge. | |
| So, you know, it's reported that xAI is going to hit that one gigawatt scale in early 2026 and a full two gigawatts by year end. | |
| How do you think they'll utilize that in the context of scaling laws? | |
| Is a lot of that inference? | |
| Is a lot of that training? | |
| It ends up being all of the above. | |
| So I think that all of your decisions when you're training a model come back to pre-training. | |
| So if you're going to scale RL in a model, you still need to decide on your architecture that enables this. | |
| We were talking about like other architectures and using different types of attention. | |
| We're also talking about mixture of experts models. | |
| This sparse nature of MoE models makes it much more efficient to do generation, which becomes a big part of post-training. | |
| And it's like you need to have your architecture ready so that you can actually scale up this compute. | |
| I still think most of the compute is going in at pre-training because you can still make a model better. | |
| You still want to go and revisit this. | |
| You still want the best base model that you can get. | |
| And in a few years, that'll saturate and the RL compute will just go longer. | |
| Is there people who disagree with you that say basically pre-training is dead? | |
| It's all about scaling inference, scaling post-training, scaling context, continual learning, scaling data, synthetic data. | |
| People vibe that way and describe it in that way, but I think it's not the practice that is happening. | |
| It's just the general vibe of people saying this thing. | |
| The excitement is elsewhere. | |
| So the low-hanging fruit in RL is elsewhere. | |
| Like, for example, we released our model in November; every company has deadlines. | |
| Our deadline was like November 20th. | |
| And for that, our RL run was five days, which compared to 2024 is a very long time to just be doing post-training on a model of like 30 billion parameters. | |
| It's not a big model. | |
| And then in December, we had another release, which was just that we let the RL run go for another three and a half weeks, and the model got notably better. | |
| So we released it. | |
| And like, that's a big amount of time to just allocate to something that is going to be your peak for the year. | |
| So there's these types of decisions that happen when they're training a model, where they just can't leave it forever. | |
| You have to keep pulling in the improvements you have from your researchers. | |
| So that's like you redo pre-training. | |
| You'll do this post-training for a month, but then you need to give it to your users. | |
| You need to do safety testing. | |
| So there's kind of just like, I think there's a lot in place that reinforces this cycle of just keep updating the models. | |
| There's things to improve. | |
| You get a new compute cluster that lets you do something maybe more stably or faster. | |
| It's like, you hear a lot about Blackwell having rollout issues, where at AI2, most of the models we're pre-training are on like 1,000 to 2,000 GPUs. | |
| But when you're pre-training on 10,000 or 100,000 GPUs, you hit very different failures. | |
| So GPUs are known to break in weird ways. | |
| And doing a 100,000 GPU run is like, you're pretty much guaranteed to always have at least one GPU that is down. | |
| And you need to have your training code handle that redundancy, which is just a very different problem. | |
| Whereas we're doing things like, oh, I'm playing with post-training on a DGX Spark, or you have your book, or people learning ML. | |
| What they're battling to train these biggest models is just massive distributed scale. | |
| And that's a very different thing; it's a systems problem you have to solve in order to enable the scaling laws, especially at pre-training. | |
| You need all these GPUs at once. | |
| When we shift to reinforcement learning, it actually lends itself to heterogeneous compute because you have many copies of the model. | |
| And to do a primer on language model reinforcement learning, what you're doing is you have two sets of GPUs. | |
| One you can call the actor and one you call the learner. | |
| The learner is where your actual reinforcement learning updates happen. | |
| These are traditionally policy gradient algorithms; proximal policy optimization, PPO, and group relative policy optimization, GRPO, are the two popular classes. | |
| And on the other side, you're going to have actors which are generating completions. | |
| And these completions are the things that you're going to grade. | |
| So reinforcement learning is all about optimizing reward. | |
| And in practice, what you can do is that you can have a lot of different actors in different parts of the world doing different types of problems. | |
| And then you send it back to this highly networked compute cluster to do this actual learning where you take the gradients and you need to have a tightly meshed network where you can do different types of parallelism and spread out your model for efficient training. | |
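A toy sketch of that actor/learner split might look like the following; the random "completions," the trivial verifier, and the group-relative advantage calculation are illustrative stand-ins, not any lab's actual implementation:

```python
# Toy sketch of the actor/learner split in RL post-training (illustrative only).
# Real systems use inference servers as actors and a PPO/GRPO learner on a
# tightly networked cluster; here the "model" is random and the task is addition.
import random
import statistics

def actor_generate(prompt: str) -> str:
    """Toy actor: answer an 'a+b' prompt, occasionally off by one."""
    a, b = map(int, prompt.split("+"))
    return str(a + b + random.choice([0, 0, 0, 1, -1]))

def verify(prompt: str, completion: str) -> float:
    """Verifiable reward: 1.0 if the answer is exactly correct, else 0.0."""
    a, b = map(int, prompt.split("+"))
    return 1.0 if completion.strip() == str(a + b) else 0.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalize each reward against its own group."""
    mean_r = statistics.mean(rewards)
    std_r = statistics.pstdev(rewards) or 1.0
    return [(r - mean_r) / std_r for r in rewards]

# One actor -> learner round over a small batch of prompts.
prompts = [f"{random.randint(1, 9)}+{random.randint(1, 9)}" for _ in range(4)]
for prompt in prompts:
    completions = [actor_generate(prompt) for _ in range(8)]   # actors roll out
    rewards = [verify(prompt, c) for c in completions]         # grade each completion
    advantages = grpo_advantages(rewards)                      # learner's training signal
    print(prompt, list(zip(completions, advantages)))
    # A real learner would now take a policy-gradient step weighted by these advantages.
```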
| So every different type of training and serving has these considerations you need in order to scale. | |
| Like we talked about pre-training, we talked about RL, and then inference time scaling is like, how do you serve a model that's thinking for an hour to 100 million users? | |
| I'm like, I don't really know about that, but I know that's a hard problem. | |
| And in order to give people this intelligence, there's all the systems problems and we need more compute and you need more stable compute to do it. | |
| But you're bullish on all of these kinds of scaling is what I'm hearing. | |
| On the inference, on the reasoning, even on the pre-training. | |
| Yeah, so that's a big can of worms here. | |
| But there are basically two knobs, the training and the inference scaling, where you can get gains. | |
| And so in a world where we had, let's say, infinite compute resources, you want to do all of them. | |
| So you have training, you have inference scaling. | |
| And training is like a hierarchy. | |
| It's pre-training, mid-training, post-training. | |
| Changing the model size, more training data, training a bigger model gives you more knowledge in the model. | |
| And the model, let's say, has a better, it's like a better base model back in the day, or still we call it foundation model. | |
| And it unlocks things, but the model isn't, let's say, able to solve your most complex tasks during pre-training or right after pre-training. | |
| You still have these other unlock phases, where you have mid-training, long context for example, and post-training with RLVR, that unlock capabilities the model has in terms of just knowledge from the pre-training. | |
| And I think, sure, if you do more pre-training, you get a better base model that you can unlock later. | |
| But like Nathan said, it just becomes too expensive. | |
| So we don't have infinite compute. | |
| So you have to decide, do I want to spend that compute more on making the model larger? | |
| But, you know, it's like a trade-off. | |
| It's like in an ideal world, you want to do all of them. | |
| And I think in that sense, scaling is still pretty much alive. | |
| You would still get a better model. | |
| But like we saw with GPT-4.5, it's just not worth it. | |
| I mean, it's like, because you can, let's say, you can unlock more performance with other techniques at that current moment, especially if you look at inference scaling. | |
| That's one of the biggest gains this year, with o1, where it took a smaller model further than pre-training a larger model like GPT-4.5. | |
| So I wouldn't say pre-training scaling is dead. | |
| It's just like there are other more attractive ways to scale right now at the moment. | |
| But at some point, you will still want to make some progress on the pre-training. | |
| The thing is also to consider where do you want to spend your money. | |
| If you spend it more on the pre-training, it's like a fixed cost. | |
| You train the model and then it has this capability forever. | |
| You can always use it and so forth. | |
| With inference scaling, you don't spend money during training. | |
| You spend money later per query. | |
| And then it's also like the math. | |
| How long is my model going to be on the market if I replace that in half a year? | |
| Maybe it's not worth spending $5 million, $10 million, $100 million on training it longer. | |
| Maybe it's just, I will just do more inference scaling and get the performance from there. | |
| It maybe cost me $2 million in terms of user queries. | |
| It becomes a question of how many users you have and then doing the math. | |
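A toy version of "doing the math" might look like the sketch below; the dollar figures are made-up assumptions purely to show the shape of the trade-off:

```python
# Toy break-even math: spend once on a better base model vs. pay per query at inference.
# All dollar figures are illustrative assumptions, not real lab numbers.
extra_pretraining_cost = 10_000_000        # one-time cost of training a bigger/better model
extra_cost_per_query = 0.002               # extra per-query cost of "thinking longer" instead

queries_to_break_even = extra_pretraining_cost / extra_cost_per_query
print(f"{queries_to_break_even:,.0f} queries")   # 5,000,000,000 queries
# Below that query volume (or with a short model lifetime), paying at inference time can win;
# at very large traffic, the one-time training spend amortizes quickly.
```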
| And I think that's also where it's interesting, where ChatGPT is in a position, because I think they have a lot of users, where they need to go a bit cheaper, where they have that GPT-5 model that is a bit smaller. | |
| Other companies, or other customers, have different trade-offs; for example, there was also the math Olympiad and some of these math problems where OpenAI had a proprietary model, and I'm pretty sure it's just a model that has maybe been fine-tuned a little bit more, but most of it was inference scaling to achieve this peak performance on certain tasks where you don't need that all the time. | |
| And but yeah, long story short, I do think all of these pre-training, mid-training, post-training, inference scaling, they are all still things you want to do. | |
| It's just finding at the moment in this year, it's finding the right ratio that gives you the best bang for the buck, basically. | |
| I think this might be a good place to define pre-training, mid-training, and post-training. | |
| So pre-training is the classic training, one next token prediction at a time. | |
| You have a big corpus of data. | |
| And Nathan probably also has very interesting insights there because of Olmo 3. | |
| A big portion of the paper focuses on the right data mix. | |
| So pre-training is essentially just, you know, training with a cross-entropy loss on next-token prediction over a vast corpus of internet data, books, papers, and so forth. | |
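A minimal sketch of that next-token, cross-entropy objective, with a toy embedding-plus-linear stand-in for the transformer (real pre-training runs this over trillions of tokens on thousands of GPUs):

```python
# Minimal sketch of the pre-training objective: predict the next token, score with
# cross-entropy. The tiny embedding + linear "model" is a placeholder for a large
# autoregressive transformer; the random tokens stand in for real text.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch_size, d_model = 1000, 16, 4, 32
tokens = torch.randint(0, vocab_size, (batch_size, seq_len))    # stand-in for tokenized text

embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)

inputs, targets = tokens[:, :-1], tokens[:, 1:]                 # shift targets by one position
logits = lm_head(embed(inputs))                                 # (batch, seq_len - 1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                                 # one "pre-training" gradient step
print(float(loss))
```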
| It has changed a little bit over the years in a sense, people used to throw in everything they can. | |
| Now it's not just raw data, it's also synthetic data where people, let's say, rephrase certain things. | |
| So synthetic data doesn't necessarily mean purely AI made up data. | |
| It's also taking something from an article, Wikipedia article, and then rephrasing it as a QA question or summarizing it, rewording it and making better data that way. | |
| Because I think of it also like with humans, if someone, let's say, reads the book compared to a messy, I don't know, no offense, but like Reddit post or something like that. | |
| I do think you learn no offense, but I think there's going to be a post about this. | |
| Yeah, Sarajevo. | |
| Some Reddit data is very coveted and excellent for training. | |
| You just have to filter it. | |
| I think that's the idea. | |
| I think it's like, if someone took that and rephrased it in a, let's say, more concise and structured way, it's higher quality data for training the LLM. | |
| Maybe the same, you get the same LLM out of it at the end, but it gets there faster. | |
| It trains faster because, let's say, if the grammar and the punctuation is correct, it already learns the correct way versus getting information from a messy way and then learning later how to correct that and stuff like that. | |
| So I think that is how pre-training evolved and how still why scaling still works is that it's not about just the amount of data. | |
| It's also the tricks to make that data better for you in a sense. | |
| And then mid-training is, I mean, it used to be called pre-training. | |
| I think it's called mid-training because it was awkward to have pre-training and post-training, but nothing in the middle, right? | |
| It sounds a bit weird to have pre-training and post-training, but what's the actual training? | |
| So the mid-training is usually similar to pre-training, but now it's a bit more, I would say, specialized than pre-training. | |
| It's the same algorithm, but what you do is you focus, for example, on long context, like one example, you have long context documents. | |
| The reason you don't do that during just pure pre-training is because you don't have that many long context documents. | |
| You have a specific phase. | |
| And one problem of LLMs is also still, it's a neural network. | |
| It has the problem of catastrophic forgetting. | |
| So you teach it something, it forgets other things. | |
| And you want to, it's not 100% forgetting, but you know, it's like no free lunch. | |
| You can't, it's also the same with humans. | |
| If you ask me some math I learned 10 years ago, I don't know. | |
| I would have to look at it again. | |
| Nathan was actually saying that he's consuming so much content that there's a catastrophic forgetting issue. | |
| I'm like trying to learn so much about AI. | |
| I was like, I was learning about pre-training parallelism. | |
| I'm like, I lost something and I don't know what it was. | |
| I don't want to anthropomorphize LLMs, but it's, I think, similar in that sense to how humans learn. | |
| I mean, the quantity is not always better because, yeah, it's like being selective. | |
| And the mid-training is being selective in terms of quality content at the end. | |
| So the last thing the LLM has seen is the quality stuff. | |
| And then post-training is all the fine-tuning, supervised fine-tuning, DPO, reinforcement learning with verifiable rewards, with human feedback, and so forth. | |
| So the refinement stages. | |
| And it's also interesting. | |
| It's like the cost thing, right? | |
| I mean, it's like pre-training, you spend a lot of money on that right now. | |
| RL, a bit less. | |
| RL, you don't really, I would say, teach it knowledge. | |
| It's more like unlocking the knowledge. | |
| It's more like a skill learning, like how to solve problems with the knowledge that it has from pre-training. | |
| There are papers, actually three papers this year, last year, 2025, on RL for pre-training. | |
| But I mean, I don't think anyone does that in production. | |
| Toy examples for now. | |
| Toy examples, right? | |
| But to generalize RL, post-training is more like the skill unlock where pre-training is like soaking up the knowledge, essentially. | |
| A few things that could be helpful for people. | |
| A lot of people think of synthetic data as being bad for training the models. | |
| You mentioned the DeepSeek OCR, which is the optical character recognition paper. | |
| A lot of labs did. | |
| AI2 had one that had multiple. | |
| And the reason that each of these labs has these is because there's vast amounts of PDFs and other digital documents on the web that are in formats that aren't encoded with text easily. | |
|
LLM Generated Data Curated by Humans
00:14:24
|
|
| So you use these, olmOCR, DeepSeek OCR, and we called ours olmOCR, to extract what can be trillions of tokens of candidate data for pre-training. | |
| And pre-training data set size is measured in trillions of tokens. | |
| Smaller models from researchers can be something like 5 to 10 trillion. | |
| Qwen is documented going up to like 50 trillion. | |
| And there's rumors that these closed labs can go to like 100 trillion tokens. | |
| And just getting this potential data to put in, I think they have a very big funnel. | |
| And then the data you actually train the model on is a small percentage of this. | |
| Like this character recognition data would be described as synthetic data for pre-training in a lab. | |
| And then there's also the things like ChatGPT now gets wonderful answers and you can train on those best answers and that's synthetic data. | |
| It's very different from early ChatGPT data, with lots of hallucinations, which is when people's skepticism about synthetic data became grounded. | |
| One interesting question is, if I recall correctly, Olmo 3 was trained with less data than specifically some other open weight models, maybe even OLMO2, but you still got better performance. | |
| And that might be one of the examples how the data helps. | |
| It's mostly down to data quality. | |
| I think if we had more compute, we would train for longer. | |
| I think we ultimately see that as just like something we would want to do. | |
| And especially with big models, you need to have more compute because we talk about having more parameters and we talk about knowledge. | |
| And essentially, there's a ratio where big models can absorb more from data. | |
| And then you're going to, you get more benefit out of this. | |
| It's like one of these. | |
| Picture any logarithmic graph: a small model will level off sooner as you add tokens, and bigger models need more. | |
| But mostly, we aren't training that big of models right now at AI2, and getting the highest quality data we can is the natural starting point. | |
| Is there something to be said about the topic of data quality? | |
| Is there some low-hanging fruit there still where the quality could be improved? | |
| It's like turning the crank. | |
| So I think historically in the open, there's been like a canonical best pre-training data set that has moved around between who has the most recent one or the best recent effort. | |
| Like, AI2's Dolma was very early with the first Olmo, and Hugging Face had FineWeb. | |
| And there's the DCLM project, which stands for DataComp for language models. | |
| There's been data comp for other machine learning projects, and they have had a very strong data set. | |
| And a lot of it is the internet is becoming fairly closed off. | |
| So we have common crawl, which I think is hundreds of trillions of tokens, and you filter it. | |
| And it ends up being a lot of scientific work where you're training classifiers and making decisions about how to prune down this data set into the highest quality stuff and the stuff that suits your tasks. | |
| So previously, language models were tested a lot more on like knowledge and just kind of conversational things, but now they're expected to do math and code. | |
| So to train a reasoning model, you need to remix your whole data set. | |
| And there's a lot of actually wonderful scientific methods here where you can take your gigantic data set, you sample a lot of really tiny things from different sources. | |
| So you say you have GitHub, Stack Exchange, Reddit, Wikipedia. | |
| You can sample small things from them and you train small models on each of these mixes and measure their performance on your evaluations. | |
| And you can just do like basic linear regression and it's like, here's your optimal data set. | |
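A rough sketch of that mixing procedure, assuming you have already trained small models on a handful of candidate mixes and scored them on your evals (the source names, mix weights, and scores below are made-up placeholders):

```python
# Rough sketch of regressing eval scores against data-mix proportions to choose a mix.
# Each row of `mixes` represents a small model trained on that blend of sources and
# evaluated on your benchmark suite; all numbers here are invented for illustration.
import numpy as np

sources = ["github", "stackexchange", "reddit", "wikipedia"]
mixes = np.array([                 # fraction of training tokens drawn from each source
    [0.40, 0.20, 0.20, 0.20],
    [0.10, 0.40, 0.30, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.60, 0.10, 0.10, 0.20],
    [0.20, 0.20, 0.40, 0.20],
])
eval_scores = np.array([0.62, 0.55, 0.58, 0.66, 0.53])   # e.g. averaged math/code evals

# Basic linear regression: estimate how much each source contributes to the score.
coef, *_ = np.linalg.lstsq(mixes, eval_scores, rcond=None)
for name, c in zip(sources, coef):
    print(f"{name}: {c:+.2f}")
# Up-weight the sources with the largest coefficients (subject to how much data exists);
# if the evals change, say toward reasoning, the fitted mix changes with them.
```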
| But if your evaluations change, your data set changes a lot. | |
| So a lot of Olmo 3 was new sources for reasoning, to be better at math and code. | |
| And then you do this mixing procedure and it gives you the answer. | |
| And I think a lot of that has happened at labs this year. | |
| It's like there's new hot things, whether it's like coding environments or web navigation. | |
| You just need to bring in new data. | |
| You need to redo your whole pre-training so your post-training can work better, and stuff like this. | |
| So that's like the constant re-evolution and redetermining of what they care about for their models. | |
| Are there fun anecdotes of what sources of data are particularly high quality that we wouldn't expect? | |
| You mentioned Reddit sometimes can be a source. | |
| Reddit was very useful. | |
| I think that like PDFs is definitely one. | |
| Especially archive. | |
| Yeah. | |
| So, like, AI2 has run Semantic Scholar for a long time, which is, you could say, a competitor to Google Scholar with a lot more features. | |
| And to do this, AI2 has found and scraped a lot of PDFs of openly accessible papers that aren't behind the walled, paid garden of a certain publisher. | |
| So like truly open scientific PDFs. | |
| And if you sit on all of these and you process them, you can get value out of it. | |
| And I think that like a lot of that style of work has been done by the Frontier Labs much earlier. | |
| And it's just like, you need to have a pretty skilled researcher that understands how things change models and they bring it in and they clean it. | |
| And that's a lot of labor; I think at a lot of frontier labs, when they scale researchers, a lot more of them go into data. | |
| If you join a frontier lab and you want to have impact, the best way to do it is just to find new data that's better. | |
| And then the fancy, glamorous algorithmic things, like figuring out how to make o1, are the sexiest kind of work for a scientist. | |
| It's like, oh, I figured out to scale RL. | |
| And there's a group that did that. | |
| But I think most of the contributions is like I'm going to make the data better or I'm going to make the infrastructure better so that everybody in my team can run experiments 5% faster. | |
| At the same time, I think what your training data is, is also one of the closest-guarded secrets, for legal reasons. | |
| And so there's also, I think, a lot of work that goes into hiding what your training data was, essentially. | |
| Like training the model to not give away the sources, because you have legal reasons. | |
| The other thing to be complete is that some people are trying to train on only licensed data, where Common Crawl is a scrape of like the whole internet. | |
| So if I host multiple websites, I'm happy to have them train language models, but I'm not explicitly licensing what governs that. | |
| And therefore, Common Crawl is largely unlicensed, which means that your consent really hasn't been provided for how the data is used. | |
| There's another idea where you can train language models only on data that has been licensed explicitly. | |
| So that kind of governing contract is provided. | |
| And I'm not sure if Apertus is the copyright thing or the license thing. | |
| I know that the reason they did it was for an EU compliance thing, where they wanted to make sure that their model fit one of those checks. | |
| And on that note, for example, there's also the distinction within the licensing. | |
| So, some people, like you said, just purchase the license. | |
| Let's say they buy a book online, let's say an Amazon Kindle book or a Manning book or something, and then use that in the training data. | |
| And that is like the gray zone because you paid for the content and you might want to train it. | |
| But then there are also restrictions where even that shouldn't be allowed. | |
| And so, that is like where it gets a bit fuzzy. | |
| And yeah, I think that is right now still a hot topic. | |
| And also, big companies like OpenAI, they approached private companies for their proprietary data. | |
| And private companies, they become more and more, let's say, protective of their data because they know, okay, this is going to be my moat in a few years. | |
| And I do think that's like the interesting question where if LLMs become more commoditized, and I think a lot of people learn about LLMs, there will be a lot more people able to train LLMs. | |
| Of course, there are infrastructure challenges, but if you think of big industries like pharmaceutical industries, law, finance industries, I do think they at some point will hire people from other frontier labs to build their in-house models on their proprietary data, which will be then again another unlock with pre-training that is currently not there because even if you wanted to, you can't get that data. | |
| You can't get access to clinical trials most of the time and these types of things. | |
| So, I do think scaling in that sense might be still pretty much alive if you also look in domain-specific applications because we are still right now in this year just looking at general purpose LLMs on ChatGPT, Anthropic, and so forth. | |
| They are just general purpose. | |
| They're not even, I think, scratching the surface of what an LLM can do if it is really specifically trained and designed for a specific task. | |
| I think on the data thing, this is one of the things where like this happened in 2025 and we totally forget it. | |
| Anthropic lost in court and owed $1.5 billion to authors. | |
| Anthropic, I think, bought thousands of books and scanned them and was cleared legally for that because they bought the books and that is kind of going through the system. | |
| And then the other side, they also torrented some books. | |
| And I think this torrenting was the path where the courts said that they were liable to pay these billions of dollars to authors, which is just such a mind-boggling lawsuit that kind of just came and went. | |
| Like that is so much money from the VC ecosystem. | |
| These are court cases that will define the future of human civilization because it's clearly the data drives a lot of this. | |
| And there's this very complicated human tension of, I mean, you can empathize, you're both authors. | |
| There's some degree to which, I mean, you put your heart and soul and your sweat and tears into the writing that you do. | |
| It feels a little bit like theft for somebody to train on your data without giving you credit. | |
| And there are, like Nathan said, also two layers to it. | |
| Someone might buy the book and then train on it, which could be argued fair or not fair. | |
| But then there are literally straight up companies who use pirated books where it's not even compensating the author. | |
| That is, I think, where people got a bit angry about it specifically. | |
| Yeah, but there has to be some kind of compensation scheme. | |
| This is like moving towards something like Spotify streaming did originally for music. | |
| You know, what does that compensation look like? | |
| You have to define those kinds of models. | |
| You have to think through all of that. | |
| One other thing I think people are generally curious about, I'd love to get your thoughts. | |
| As LLMs are used more and more, if you look at even archive, but GitHub, more and more of the data is generated by LLMs. | |
| What do you do in that kind of world? | |
| How big of a problem is that? | |
| The largest problems are for infrastructure and systems. | |
| But from an AI point of view, it's kind of inevitable. | |
| So it's basically LLM generated data that's curated by humans, essentially. | |
| Right. | |
| Yes. | |
| And I think that a lot of open source contributors are legitimately burning out. | |
| If you have a popular open source repo, somebody's like, oh, I want to do open source AI. | |
| It's good for my career. | |
| And they just vibe code something and they throw it into the PRs. You might get more of this than I do. | |
| So I have actually a case study here. | |
| I have a repository called mlxtend that I developed as a student 10, 15 years ago. | |
| And it is still a reasonably popular library for certain algorithms, I think, especially the frequent pattern mining stuff. | |
| And there was recently, I think, two or three people who submitted a lot of PRs in a very short amount of time. | |
| I do think LMs have been involved in submitting these PRs. | |
| Me as the maintainer, there are two things. | |
| First, I'm a bit overwhelmed. | |
| Like, I don't have time to read through it because, especially, it's an older library that is not a priority for me. | |
| At the same time, I kind of also appreciate it because I think something people forget is it's not just using the LLM. | |
| There's still a human, you have a human layer that verifies something. | |
| And that is, in a sense, also how data is labeled, right? | |
| So that's like one of the most expensive things, getting labeled data for the reinforcement learning from human feedback phases. | |
| And this is kind of like that, where it goes through phases and then you get actually higher quality data out of it. | |
| I don't mind it in a sense. | |
| It can feel overwhelming, but I do think there is also value in that. | |
| It feels like there's a fundamental difference between raw LLM generated data and LLM generated data with human in the loop that does some kind of verification, even if that verification is a small percent of the lines of code. | |
| I think this goes with anything like where people think also sometimes, oh, I can just use an LLM to learn about XYZ, which is true, you can, but there might be a person who is an expert who might have used an LLM to write specific code. | |
| There is kind of like this human work that went into it to make it nice and throwing out the not-so-nice part to make it to kind of like pre-digest it for you. | |
| And that saves you time. | |
| And I think that's the value add, where you have someone filtering things or even using the LLMs correctly. | |
| I think this is still labor that you get for free when you, for example, read an article, let's say a Substack article. | |
| I could maybe ask an LLM to give me opinions on that, but I wouldn't even maybe know what to ask. | |
| I think there is still value in reading that article compared to me going to the LLM because you are the expert. | |
| You select what knowledge is actually spot on and should be included, and you give me this executive summary. | |
| And this is kind of a huge value add because now I don't have to waste three, five hours to go through this myself, maybe get some incorrect information and so on. | |
| And so I think that's also where the future still is for writers: even though there are LLMs, that expert curation can kind of save you time. | |
| It's kind of fascinating to actually watch, and I'm sure you guys do this, but for me to look at the difference between a summary and the original content, even if it's a page-long summary of a page-long content, it's interesting to see how the summary, LLM-based summary, takes the edge off. | |
| Like what is the signal it removes from the thing? | |
| The voice is what I talk about a lot. | |
| Voice. | |
| Well, voice, I would love to hear what you mean by voice. | |
| That's really powerful. | |
| But sometimes there's like literally insights. | |
| Like in removing an insight, you're actually fundamentally changing the meaning of the thing. | |
| So I'm continuously disappointed how bad LLMs are at really getting to the core insights, which is what a great summary does. | |
| Even if you go, and I have these extensive, extremely elaborate prompts where I'm really trying to dig for the insights, it's still not quite there, which, I mean, that's a whole deep philosophical question about what is human knowledge and wisdom and what does it mean to be insightful and so on. | |
| But when you talk about the voice, what do you mean? | |
| So, when I write, I think a lot of what I'm trying to do is take what you think as a researcher, which is very raw, where a researcher is trying to encapsulate an idea at the frontier of their understanding. | |
| And they're trying to put what is a feeling into words. | |
| And in my writing, I try to do this in the writing itself, which makes it come across as raw, but also high information, in a way where some people will get it and some won't. | |
| And that's kind of the nature of research. | |
|
Taking Feelings to Words
00:03:17
|
|
| And I think this is something that language models don't do well, particularly because they're all trained with this reinforcement learning from human feedback, which is designed to take feedback from a lot of people and, in a way, average how the model behaves from this. | |
| And I think it's going to be hard for a model to be very incisive when there's that sort of filter in it. | |
| And I think this is kind of a wonderful fundamental problem for researchers in RLHF: it provides so much utility in making the models better, but the problem formulation also has this knot in it that you can't get past. | |
| So that's what I think of: these language models don't have this prior and this deep expression that they're trying to get at. | |
| I don't think it's impossible to do. | |
| I think there's stories of models that really shock people. | |
| Like I think of like, I would love to have tried Bing Sydney. | |
| And does that have more voice? | |
| Because it would so often go off the rails on people, in what was, historically, obviously a scary way. | |
| Like, telling a reporter to leave his wife is a crazy model to potentially put into general adoption. | |
| But that's kind of like a trade-off. | |
| Like, is this RLHF process like in some ways adding limitations? | |
| That's a terrifying place to be as one of these frontier labs and companies because millions of people are using them. | |
| There was a lot of backlash last year with GPT-4o getting removed. | |
| And I personally never used the model, but I've talked to people at OpenAI where they're to the point where they get emails from users that might be detecting subtle differences in the deployments in the middle of the night. | |
| And they email them and they're like, my friend is different. | |
| And they like find these people, employees' emails and send them things because they're so attached to this set, which is a set of model weights and a configuration that is deployed to the users. | |
| We see this with TikTok. | |
| You open it. | |
| I don't use TikTok, but supposedly in like five minutes, the algorithm gets you. | |
| It's like it's locked in. | |
| And those aren't even language models doing the recommendations. | |
| Like, I think there are ways that you could do this with a language model within like five minutes of chatting with it. | |
| The model just gets you. | |
| And that is something that people aren't really ready for. | |
| Like, I think, don't give that to kids. | |
| Like, don't give that to kids at least until we know what's happening. | |
| But there's also going to be this mechanism. | |
| What's going to happen with these LLMs is they're used more and more. | |
| Unfortunately, the nature of the human condition is such that people commit suicide. | |
| And so what journalists would do is they will report extensively on the people who commit suicide and they would very likely link it to the LLMs because they have that data about the conversations. | |
| If you're really struggling in your life, if you're depressed, if you're thinking about suicide, you're going to probably talk to LLMs about it. | |
| And so what journalists will do is they will say, well, the suicide was committed because of the LLM. | |
| And that's going to lead to the companies because of legal issues and so on, more and more and more taking the edge off of the LLM. | |
| So it's going to be as generic as possible. | |
| It's so difficult to operate in this space because, of course, you don't want an LLM to cause harm to humans at that level. | |
| But also, this is also the nature of the human experience is to have a rich conversation, a fulfilling conversation, one that challenges you from which you grow, you need that edge. | |
|
Balancing AI Edge & Harm
00:03:13
|
|
| And that's something extremely difficult for AI researchers on the RLHF front to actually have to solve, because you're actually dealing with the human condition. | |
| Like a lot of researchers at these companies are so well motivated. | |
| And definitely, the people at Anthropic and OpenAI culturally so want to do good through this for the world. | |
| And there is, it's such a like, I'm like, ooh, I don't want to work on this. | |
| Because on the one hand, a lot of people see AI as a health ally, as somebody they can talk to about their health confidentially, but then it bleeds all the way into talking about mental health and things, where it's heartbreaking that this could be the thing where somebody goes over the edge, but other people might be saved. | |
| And there are things that, as a researcher training models, I don't want to do; for example, I don't want to train image generation models and release them openly, because I don't want to enable somebody to have a tool on their laptop that can harm other people. | |
| Like I don't have the infrastructure in my company to do that safely. | |
| But there's a lot of areas like this where it just needs people who will approach it with complexity and conviction, because it's such a hard problem. | |
| But also, we as a society, as users of these technologies, need to make sure that we're having the complicated conversation about it versus just fear-mongering. | |
| Big tech is causing harm to humans or stealing your data, all that kind of stuff. | |
| It's more complicated than that. | |
| And you're right. | |
| There's a very large number of people inside these companies, many of which you know, many of which I know, that deeply care about helping people. | |
| They are considering the full human experience of people from across the world, not just Silicon Valley, people across the United States, people across the world, what that means, what their needs are. | |
| It's really difficult to design this one system that is able to help all these different kinds of people across the different age groups, cultures, mental states, mental conditions, all that kind of stuff. | |
| I wish that the timing of AI was different with the relationship of big tech to the average person. | |
| So, like, big tech's reputation was so low. | |
| And with how AI is so expensive, it's like inevitably going to be a big tech thing where it takes so many resources. | |
| And people say that the US is quote unquote betting the economy on AI with this build out. | |
| And to have these be intertwined at the same time just makes for such a hard communication environment. | |
| It would be good for me to go talk to more people in the world that hate big tech and see AI as a continuation of this. | |
| And one of the things you actually recommend, one of the antidotes that you talk about is to find agency in this whole system, as opposed to sort of sitting back in a powerless way and consuming the AI slop as it quickly, rapidly takes over the internet. | |
| More specifically, find agency by using it to build stuff, build apps. | |
| So, one, that actually helps you build intuition, but two, it's empowering because you can understand how it works, what the weaknesses are, and it gives your voice power to say, like, this is fucked up. | |
| This is bad. | |
| This is bad use of the technology. | |
| And this is good use of the technology. | |
| And you're more plugged into the system then. | |
| So you can understand it better and you can steer it better as a business. | |
| I think it's a good point. | |
|
Mundane Tasks and Intuition
00:06:42
|
|
| You brought up agency. | |
| Instead of ignoring it and saying, okay, I'm not going to use it. | |
| I think it's probably long-term healthier to say, okay, it's out there. | |
| I can't put it back. | |
| You know, like the internet and computers back when they came out: how do I make best use of it? | |
| And how does it help me to up-level myself? | |
| The one thing I worry about here, though, is that if you just fully use it for something you love to do, the thing you love to do is no longer there. | |
| And that could potentially, I feel, like lead to burnout. | |
| For example, if I use an LLM to do all my coding for me, now there's no coding. | |
| I'm just managing something that is coding for me. | |
| Let's say two years later, if I just do that eight hours a day, having something code for me. | |
| Do I feel fulfilled still? | |
| Is this like, yeah, I mean, is this like hurting me in terms of being excited about my job, excited about what I'm doing? | |
| Am I still proud to build something? | |
| So, there's on that topic of enjoyment, it's quite interesting. | |
| We should just throw this in there: there's this recent survey of 791 professional developers, professional meaning 10-plus years of experience. | |
| That's a long time. | |
| Yeah. | |
| As a junior developer? | |
| Yeah, in this day and age. | |
| So, the results here on many fronts are surprising. | |
| So, they break it down by junior and senior developers, but I mean, it just shows that both junior and senior developers use AI-generated code in code they ship. | |
| So, this is not just for fun, sort of intermediate kind of learning things. | |
| This is code they ship. | |
| And so, it's 25% like most of them use around 50% or more. | |
| And what's interesting is, for the category of over 50% of your shipped code being AI-generated, senior developers are much more likely to be in it. | |
| But you don't want AI to take away the thing you love. | |
| I think this speaks to my experience, these particular results I'm about to say. | |
| So, together, about 80% of people find it either somewhat more enjoyable or significantly more enjoyable to use AI as part of the work. | |
| I think it depends on the task. | |
| For my personal usage, for example, I have a website where I sometimes tweak things on the website. | |
| I personally don't enjoy this. | |
| So, in that sense, if the AI can help me to implement something on my website, I'm all here for it. | |
| It's great. | |
| But then, at the same time, when I solve a complex problem, well, if there's a bug and I hunt this bug and I find the bug, it's the best feeling in the world. | |
| It's like you get so much joy, like, oh, it's like you feel like great. | |
| But now, if you don't even think about thinking about the bug, you just go directly to the LLM. | |
| Well, you never have this kind of feeling, right? | |
| But then there could be the middle ground where, well, you try yourself, you can't find it, you use the LLM, and then you don't get frustrated because it helps you and you move on to something that you enjoy. | |
| And so, looking at these statistics, I think what is not factored in is that it's averaging over all the different scenarios, where we don't know if it's for the core task or for something mundane that people would not have enjoyed otherwise. | |
| So, in a sense, AI is really great for doing mundane things that take a lot of work. | |
| So, for example, my wife the other day, she has a podcast for book discussions, a book club, and she was transferring the show notes from Spotify to YouTube. | |
| And then the links somehow broke. | |
| And in some episodes, because it covers many books, she had like 100 links or something, and it would have been really painful to go in there and fix each link manually. | |
| And so, I suggested, hey, let's try ChatGPT. | |
| We copied the text into ChatGPT and it fixed them. | |
| And instead of two hours going from link to link, fixing that, you know, it made that type of work much more seamless. | |
| There was no frustration. | |
| I think everyone has a use case where AI is useful for something like that that would be really boring, really mundane. | |
| For me personally, since we're talking about coding, and you mentioned debugging, a lot of the source of the enjoyment for me, more on the Cursor side than the Claude Code side, is that I have a friend, I have a, what's that called? | |
| A pair programmer. | |
| Like, it's less lonely. | |
| You made debugging sound like this great joy. | |
| No, I would say, I would say debugging is like a drink of water after you've been going through a desert for days. | |
| So, like, you skip the whole desert part where you're suffering. | |
| So there, sometimes it's nice to have a friend who can't really find the bug, but can give you some intuition about the code. | |
| And you're together with that friend going through the desert, and then together you find that drink of water. | |
| So, at least for me, maybe it speaks to the loneliness of the programming experience. | |
| That is a source of joy. | |
| It's maybe also related to delayed gratification. | |
| I'm a person who, you know, even as a kid, liked the idea of Christmas presents, the anticipation of getting them, better than actually getting the presents. | |
| I would look forward to the day I get the presents, but then it's over and I'm disappointed. | |
| And maybe it's something like also with, let's say, food. | |
| I think food tastes better when you're really hungry. | |
| And with, yeah, you're right with debugging. | |
| It is not always, you know, great. | |
| It's like often frustrating. | |
| But then if you can solve it, then it's great. | |
| But there's also like a sweet Goldilocks zone; if it's too hard, it's just wasting your time. | |
| But I think that is another challenge, though. | |
| How will people learn? | |
| I mean, the chart we looked at, we saw that more senior developers are shipping more AI-generated code than the junior ones. | |
| And I think it's very interesting because intuitively, you would think it's the junior developers because they don't know, let's say, how to do the thing yet because they are more junior. | |
| And so they use AI to do that thing. | |
| It could either mean the AI is not good enough yet to solve that task, but it could also mean experts are more effective at using it. | |
| They know where and better how to use it and review the code and they trust the code then more. | |
| And so I think one issue in the society in the future will be though, how do you become an expert if you never try to do the thing yourself? | |
| And I think one way it's always like for me, how I learn is by trying things myself. | |
| Like math textbooks, if you look at the solutions, yeah, you learn something, but I think you learn actually better if you try first and then you appreciate the solution differently because you know how to put it into your mental framework. | |
|
Contamination And Inference Challenges
00:14:33
|
|
| And if LLMs are here all the time, would you actually go through the lengths of struggling? | |
| Would you be willing to struggle? | |
| Because struggle is not nice, right? | |
| I mean, it's struggling. | |
| And if you use the LLM to do everything at some point, you will never really take the next step. | |
| And then you will maybe not get that unlock that you would get as an expert using an LLM. | |
| So it's like, you know, it's like, I think there's like a Goldilocks sweet spot where maybe the trick here is you make dedicated offline time where you study two hours a day and the rest of the day use LLMs. | |
| But I think it's important also for people to still invest in themselves, in my opinion, to not just, you know, LLM everything. | |
| Yeah, there is. | |
| And we together as a civilization, and each of us individually, have to find that Goldilocks zone, including in the programming context as developers. | |
| Now, we've had this fascinating conversation that started with pre-training and mid-training. | |
| Let's get to post-training. | |
| A lot of fun stuff in post-training. | |
| So what are some of the interesting ideas in post-training? | |
| The biggest one from 2025 is this reinforcement learning with verifiable rewards. | |
| You can scale up the training there, which means doing a lot of this kind of iterative generate grade loop. | |
| And that lets the models learn both interesting behaviors on the tool use and software side. | |
| This could be searching, running commands on their own and seeing the outputs. | |
| And then also that training enables this inference time scaling very nicely. | |
| And it just turned out that these were very nicely linked, where this kind of RL training enables inference time scaling, but inference time scaling could have been found in different ways. | |
| So it's kind of this perfect storm of the models change a lot and the way that they're trained is a major factor in doing so. | |
| And this has changed how people approach post-training dramatically. | |
| Can you describe RLVR popularized by DeepSeek R1? | |
| Can you describe how it works? | |
| Yeah, fun fact, I was on the team that came up with the term RLVR, which is from our Tülu 3 work before DeepSeek. We don't take a lot of credit for being the people who popularized scaling RL, but what academics get, as an aside, is the ability to name things and influence the discourse, because the closed labs can only say so much. One of the things you can do as an academic is, you might not have the compute to train the model, | |
| but you can frame things in a way that ends up being, I describe it as, a community can come together around this RLVR term, which is very fun. | |
| And then DeepSeek is the people that did the training breakthrough, which is they scaled the reinforcement learning, which was you have the model generate answers and then grade the completion on whether it was right. | |
| And then that accuracy is your reward for reinforcement learning. | |
| So reinforcement learning is classically an agent that acts in an environment and the environment gives it a state and a reward back and you try to maximize this reward. | |
| In the case of language models, the reward is normally accuracy on a set of verifiable tasks, whether it's math problems, coding tasks, and it starts to get blurry with things like factual domains, like that is also in some ways verifiable or constraints on your instruction, like respond only with words that start with A. Like all of these things are verifiable in some way. | |
| And the core idea of this is you find a lot more of these problems that are verifiable and you let the model try them many times while taking these RL steps, these RL gradient updates. The infrastructure evolved from reinforcement learning from human feedback, where in that era, the score they were trying to optimize was a learned reward model of aggregate human preferences. | |
| So you kind of change the problem domains and that let the optimization go on to much bigger scales, which kind of kick-started a major change in what the models can do and how people use them. | |
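As a concrete, simplified picture of what "verifiable" means here, reward functions for a few task types might look like this sketch (the helpers and task formats are assumptions for illustration, and real systems sandbox any code execution):

```python
# Simplified sketch of verifiable rewards for RLVR (illustrative only). Each checker
# returns 1.0 for a correct or compliant completion and 0.0 otherwise; that scalar is
# the reward the PPO/GRPO-style optimizer tries to maximize.

def reward_math(completion: str, reference_answer: str) -> float:
    """Exact-match check on the final extracted answer (toy version)."""
    final = completion.strip().split()[-1] if completion.strip() else ""
    return 1.0 if final == reference_answer.strip() else 0.0

def reward_code(completion: str, tests: list[tuple[int, int]]) -> float:
    """Run hidden unit tests against a generated function named `solve` (toy version)."""
    namespace: dict = {}
    try:
        exec(completion, namespace)            # real systems execute this in a sandbox
        solve = namespace["solve"]
        return 1.0 if all(solve(x) == y for x, y in tests) else 0.0
    except Exception:
        return 0.0

def reward_constraint(completion: str) -> float:
    """Instruction-following constraint: every word must start with 'a'."""
    words = completion.lower().split()
    return 1.0 if words and all(w.startswith("a") for w in words) else 0.0

# Hypothetical completions:
print(reward_math("The answer is 42", "42"))                              # 1.0
print(reward_code("def solve(x):\n    return x * 2", [(2, 4), (3, 6)]))   # 1.0
print(reward_constraint("apples are awesome"))                            # 1.0
```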
| What kind of domains is RLVR amenable to? | |
| Math and code are the famous ones. | |
| And then there's a lot of work on what are called rubrics, which is related to a term people might have heard, LLM-as-a-judge. For each problem, I'll have a set of problems in my training data set. | |
| I'll then have another language model and ask it, what would a good answer to this problem look like? | |
| And then you could try the problem a bunch of times over and over again and assign a score based on this rubric. | |
| So that's not necessarily verifiable, like a math and code domain, but this rubrics idea and other scientific problems that might be a little bit more vague is where a lot of the attention is, where they're trying to push this set of methods into these kind of more open-ended domains where the models can learn a lot more. | |
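As a rough illustration of the rubric / LLM-as-a-judge idea (not any specific lab's pipeline), the grader itself can simply be another model call. Here `judge_model` is assumed to be any callable mapping a prompt string to a text reply; the 0-to-10 scale and the prompt wording are made up for the sketch.

```python
def rubric_reward(question: str, completion: str, rubric: str, judge_model) -> float:
    """Ask a judge LLM to score a completion against a rubric, then normalize
    the score to [0, 1] so it can be used as an RL reward."""
    prompt = (
        f"Question:\n{question}\n\n"
        f"Candidate answer:\n{completion}\n\n"
        f"Rubric describing a good answer:\n{rubric}\n\n"
        "Score the candidate from 0 to 10 against the rubric. Reply with only the number."
    )
    reply = judge_model(prompt)
    try:
        score = float(reply.strip())
    except ValueError:
        return 0.0  # an unparseable judge reply counts as a failed grade
    return max(0.0, min(score, 10.0)) / 10.0
```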
| I think that's called reinforcement learning with AI feedback, right? | |
| That's the older term for it that was coined in Anthropic's constitutional AI paper. | |
| So it's like a lot of these things come in cycles. | |
| Also, just one step back for the RLVR. | |
| So I think the interesting, beautiful thing here is that you ask the LLM, let's say, a math question, and then you know the correct answer. | |
| And you let the LLM, like you said, figure it out. | |
| But how it does that, I mean, you don't really constrain it much. | |
| There are some constraints you can add, like use the same language, don't switch between Spanish and English. | |
| But let's say you're pretty much hands-off. | |
| You only give the question and the answer. | |
| And then the LLM has to, you know, just the task to arrive at the right answer. | |
| But the beautiful thing here is what happens in practice is that the LLM will do a step-by-step description. | |
| Like a student or a mathematician would derive the solution, it will use those steps. | |
| And that actually helps the model improve its own accuracy. | |
| And then, like you said, the inference scaling. | |
| So inference scaling loosely means spending more compute when using the LLM during inference. | |
| And here the inference scaling is that the model would use more tokens. | |
| Also, I think in the R1 paper, they showed the longer they train the model, the longer the responses are. | |
| They grow over time. | |
| They use more tokens. | |
| So it becomes more expensive, even for simple tasks. | |
| But these explanations, they help the model with the accuracy. | |
| There are also interesting papers showing what the model explains does not necessarily have to be correct, or maybe it's even unrelated to the answer, but for some reason, it still helps the model. | |
| It's the very fact that it is explaining. | |
| And I think it's also, again, I don't want to anthropomorphize these LLMs, but it's kind of like how we humans operate, right? | |
| If there's a complex math problem, let's say in a math class, you usually have a notepaper and you do it step by step, you cross out things, and the model also self-corrects. | |
| And that was, I think, the aha moment in the R1 paper. | |
| They called it the aha moment because the model itself recognized it made a mistake and then said, ah, I did something wrong, so let me try again. | |
| And I think that's just so cool that this falls out of just giving it the correct answer and having it figure out how to do it, that it kind of does, in a sense, what a human would do. | |
| Although LLMs don't think like humans, it's kind of like an interesting coincidence. | |
| And the other nice side effect is it's great for us humans often to see these steps. | |
| It builds trust, but also we learn, we can double-check things. | |
| There's a lot in here. | |
| There's been a lot of debate this year on whether these aha moments are kind of fake, because in pre-training you have essentially seen the whole internet. | |
| So you have definitely seen people explaining their work, even verbally, like a transcript of a math lecture. | |
| You try this, oh, I messed this up. | |
| And what this RLVR reinforcement learning is very good at doing is amplifying these behaviors, because they're very useful in enabling the model to think longer and to check its work. | |
| And I agree that it is very beautiful that through this training the model learns to amplify this in a way that makes the final answers better. | |
| I can give you also a hands-on example. | |
| I was training the Qwen3 base model with RLVR on MATH-500. | |
| The base model had an accuracy of about 15%. | |
| Just 50 steps, like in a few minutes with RLVR, the model went from 15% to 50% accuracy. | |
| And you can't tell me the model is learning anything fundamental about math. | |
| The Qwen example is weird because there have been two papers this year, one of which I was on, that talk about data contamination in Qwen, and specifically that they trained on a lot of this special mid-training phase that we spent like a minute on. | |
| So it's weird since they train on problems that are almost identical to that. | |
| Exactly. | |
| And so you can see that basically the RL, it's not teaching the model any new knowledge about math. | |
| You can't do that in 50 steps. | |
| So the knowledge is already there in the pre-training. | |
| You're just unlocking it. | |
| I still disagree with the kind of premise because there's a lot of weird complexities that you can't prove. | |
| Because one of the things that points to weirdness is that if you take the Qwen3 so-called base model, you could Google something like MATH dataset Hugging Face and take a problem. | |
| And you put it into Qwen3 base. All these math problems have words. | |
| So it would be like Alice has five apples and takes one and gives three to whoever and there are these word problems. | |
| With these Qwen base models, the reason people are suspicious of them is that if you change the numbers but keep the words, Qwen, without tools, will produce a very high-precision decimal representation of the answer, which means at some point it was shown problems that were almost identical to the test set, where tools had been used to get a very high-precision answer. | |
| But a language model without tools will never actually have this. | |
| So it's been this big debate in the research community: how much can you believe these reinforcement learning papers that train on Qwen and measure specifically on this MATH benchmark, where there have been multiple papers talking about contamination? | |
| And I think this is what caused the reputation of RLVR being just about formatting, because you can get these gains so quickly that the capability must already be in the model. | |
| But there's a lot of complexity here that it's not really like controlled experimentation. | |
| So you don't really know. | |
| But if it weren't true, I would say distillation wouldn't work, right? | |
| I mean, distillation can work to some extent. | |
| But the thing is, that is, I think, the biggest problem in LLM research, this contamination, because we don't know what's in the data. | |
| Unless you have a new data set, it's really impossible. | |
| And the same with the MATH data set you mentioned, where a question, an answer, and an explanation are given, but also even something simpler like MMLU, which is a multiple-choice benchmark. | |
| If you just change the format slightly, like, I don't know, you use a dot instead of a parenthesis or something like that, the model accuracy will vastly differ. | |
| I think that that could be like a model issue rather than a general issue. | |
| It's not even malicious by the developers of the LLM, like, hey, we want to cheat at that benchmark. | |
| It's just it has seen something at some point. | |
| And I think the only fair way to evaluate an LLM is to have a new benchmark that is after the cutoff date when the LLM was deployed. | |
| Can you lay out what would be the sort of recipe of all the things that would go into post-training? | |
| And you mentioned RLVR was a really exciting, effective thing. | |
| Maybe we should elaborate. | |
| RLHF still has a really important component to play. | |
| What kind of other ideas are there on post-training? | |
| I think you can kind of take this in order. | |
| I think you could view it as what made o1, this first reasoning model, possible, or what the latest model will be. | |
| And you're actually going to have similar interventions at each of these, where you start with mid-training. | |
| And the thing that is rumored to enable O1 and similar models is really careful data curation, where you're providing a broad set of like what is called reasoning traces, which is just the model generating words in a forward process that is reflecting like breaking down a problem into intermediate steps and trying to solve them. | |
| So at mid-training, you need to have data that is similar to this to make it so that when you move into post-training, primarily with this verifiable rewards, it can learn. | |
| And then what is happening today is you're figuring out which problems to give the model and how long you can train it for and like how much inference you can enable the model to use when solving these verifiable problems. | |
| So as models get better, certain problems stop being useful because the model will solve them 100% of the time. | |
| And therefore, there's very little signal in this. | |
| If we look at the GRPO equation, which is famous for this, essentially the reward given to the agent is based on how good a given action, where an action is a completion, is relative to the other answers to that same problem. | |
| So if all the attempts at a problem get the same reward, there's no signal in these types of algorithms. | |
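A sketch of the group-relative signal being described, loosely following the GRPO idea of normalizing each completion's reward against the other sampled answers to the same prompt (a simplification of the published objective, shown only to illustrate why identical rewards give zero signal):

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Advantage of each sampled completion = (reward - group mean) / group std.
    If every attempt at a prompt gets the same reward, all advantages are zero,
    so that prompt contributes no gradient signal."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# Eight sampled answers to one problem, graded 0/1 by a verifier:
print(group_relative_advantages([1, 0, 0, 1, 0, 0, 0, 0]))  # mixed results -> useful signal
print(group_relative_advantages([1, 1, 1, 1, 1, 1, 1, 1]))  # solved every time -> no signal
```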
| So what they're doing is they're finding harder problems, which is why you hear about things like scientific domains, which is like that's so hard, like getting anything right there. | |
| And if you have a lab or something, it just generates so many tokens or much harder software problems. | |
| So the frontier models are all pushing into these harder domains and they can train on more problems and the model will learn more skills at once. | |
| The RLHF link to this is kind of like RLHF has been and still is kind of like the finishing touch on the models where it makes the models more useful by improving the organization or style or tone. | |
| There's different things that resonate to different audiences. | |
| Like some people like a really quirky model and RLHF could be good at enabling that personality. | |
| And some people hate this like markdown bulleted list thing that the models do, but it's actually really good for quickly parsing information. | |
| And RLHF, this human feedback stage is really great for just putting this into the model at the end of the day. | |
| So it's what made ChatGPT so magical for people. | |
| And that use has actually remained fairly stable. | |
| This formatting can also help the models get better at math problems, for example. | |
| So the border between style, formatting, and the method you use to answer a problem, they're all very closely linked when you're training these models, which is why RLHF can still, say, make a model better at math. But these verifiable domains are a much more direct route, because they fit the problem formulation better, which is why it all ends up coming together. | |
| But to summarize, it's like mid-training is give the model the skills it needs to then learn. | |
| RL with verifiable rewards is: let the model try a lot of times. | |
| So put a lot of compute into trial and error learning across hard problems. | |
| And then RLHF would be like finish the model, make it easy to use and kind of just round the model out. | |
| Can you comment on the amount of compute required for RLVR? | |
|
Prioritizing RAM and Compute
00:03:07
|
|
| It's only gone up and up. | |
| So I think Grok4 was famous for saying they use a similar amount of compute for pre-training and post-training. | |
| Back to the scaling discussion, they involve very different hardware for scaling. | |
| Pre-training is very compute bound, which is like this flops discussion, which is just how many matrix multiplications can you get through in one time. | |
| And because RL, you're generating these answers, you're trying the model in the real world environments, it ends up being much more memory bound because you're generating long sequences. | |
| And the attention mechanisms have this behavior where you get a quadratic increase in memory as you're getting to longer sequences. | |
| So the compute becomes very different. | |
| So when in pre-training, we would talk about a model. | |
| I think if we go back to like the Biden administration executive order, it's like 10 to the 25th flops to train a model. | |
| If you're using flops in post-training, it's a lot weirder because the reality is just like, how many hours are you allocating how many GPUs for? | |
| And I think in terms of time, the RL compute is getting much closer because you just can't put it all into one system. | |
| Like pre-training is so computationally dense where all the GPUs are talking to each other and it's extremely efficient, where RL has all these moving parts and it can just take a long time to generate a sequence of 100,000 tokens. | |
| If you think about GPT 5.2 Pro taking an hour, it's like, what if your training run has a sample for an hour and you have to make it so that's handled efficiently? | |
| So I think in GPU hours or just like wall clock hours, the RL runs are probably approaching the number of days as pre-training, but they probably aren't using as many GPUs at the same time. | |
| There's rules of thumb where in labs, it's like you don't want your pre-training runs to last more than like a month because they fail catastrophically. | |
| And if you were planning a huge cluster to be held for two months and then it fails on day 50, the opportunity costs are just so big. | |
| So you kind of don't want to just, people don't want to put all their eggs in one basket, which is like GPT-4 was like the ultimate YOLO run and nobody ever wanted to do it before where it took like three months to train and everybody was shocked that it worked. | |
| Where I think people are a little bit more cautious and incremental now. | |
| So RLVR is more, let's say, unlimited in how much you can train and still get benefit, whereas with RLHF, because it's preference tuning, you reach a certain point where it doesn't really make sense to spend more RL budget on it. | |
| So just a step back with preference tuning. | |
| So there are multiple people that can give multiple, let's say, explanations for the same thing and they can both be correct. | |
| But at some point, you learn a certain style and it doesn't make sense to, you know, iterate on it. | |
| My favorite example is like if relatives ask me what laptop they should buy, I give them an explanation or ask them like, yeah, what is your use case? | |
| Like they, for example, prioritize battery life and storage. | |
| Other people like us, for example, we would prioritize RAM and compute. | |
| And so, but both answers are correct, but different people require different answers. | |
| And with preference tuning, well, you're trying to average somehow. | |
| Like you are asking data labelers to give you the, well, not the right answer, but the preferred answer. | |
| And then you train on that. | |
| But at some point, yeah, you learn that average preferred answer. | |
| And there's no, I think, reason to keep training longer on it because, you know, it's just the style. | |
|
Reward Models Simplified
00:08:22
|
|
| Where with RLVR, you literally give the model, well, you let the model solve more and more complex difficult problems. | |
| And so I think it makes more sense to allocate more budget long term to RLVR. | |
| And also, right now we are in, let's say, RLVR 1.0, where it's still that simple thing where we have a question and an answer, but we don't do anything with the stuff in between. | |
| There were multiple research papers, also by Google, for example, on process reward models that also give scores for the explanation, how correct the explanation is. | |
| And I think that will be the next thing, let's say, RLVR 2.0 for this year, focusing in between question and answer, like how to leverage that information, the explanation, to improve the explanation and help it to get better accuracy. | |
| But then, so that's one angle. | |
| And there was a DeepSeekMath V2 paper where they also had interesting inference scaling. | |
| They had developed models that grade themselves, with a separate grader model. | |
| And I think that that will be one aspect. | |
| And the other, like Nathan mentioned, will be RLVR branching into other domains. | |
| The place where people are excited are value functions, which is pretty similar. | |
| Process reward models assign a score for how good each intermediate step in a reasoning process is, where value functions assign a value to every token the language model generates. | |
| Both of these have been largely unproven in the language modeling or this reasoning model era. | |
| People are more optimistic about value functions now, for whatever reason. | |
| I think process reward models were tried a lot more in this pre-o1, pre-reasoning-model era. | |
| And a lot of people had a lot of headaches with them. | |
| So I think a lot of it is human nature: value models have a very deep history in reinforcement learning. | |
| Training value models was one of the first things that were core to deep reinforcement learning existing. | |
| So right now the literature, people are excited about trying value models, but there's very little proof in it. | |
| And there are negative examples in trying to scale up process reward models. | |
| These things don't always hold in the future. | |
| I think we came to this discussion by talking about scaling. | |
| And a simple way to summarize what you're saying about not wanting to do too much RLHF is essentially how the signal scales. People have worked on RLHF for language models for years, especially with intense interest after ChatGPT. | |
| And the first release of a reasoning model trained with RLVR, OpenAI's o1, had a scaling plot where if you increase the training compute logarithmically, you get a linear increase in evaluations. | |
| And this has been reproduced multiple times. | |
| I think DeepSeek had a plot like this. | |
| But there's no scaling law for RLHF where if you increase the compute, you get more performance. | |
| In fact, the seminal scaling paper for RLHF is Scaling Laws for Reward Model Overoptimization. | |
| So it's like that's a big line to draw with RLVR and the methods we have now and in the future. | |
| They will follow the scaling paradigm, where the best runs you can let run for an extra 10x of compute and you get a few-x gain in performance, but you can't do this with RLHF. | |
| And that is just going to be field-defining in how people approach them, where I'm a shill for people to do RLHF academically. | |
| And that's a good way to describe it. | |
| It's like to do the best RLHF, you might not need the extra 10 or 100x of compute, but to do the best RLVR, you do. | |
| So I think there's what I'd say is a seminal paper from what was a Meta internship. | |
| It's called something like The Art of Scaling Reinforcement Learning with Language Models. | |
| The framework they describe is called ScaleRL. | |
| And their incremental experiment was like 10,000 B200 hours, which is like thousands or tens of thousands of dollars per experiment. | |
| And they do a lot of them, which is just like, this cost is not accessible to the average academic, which is a hard equilibrium where it's trying to figure out how to learn from each community. | |
| I was wondering if we could take at this point a bit of a tangent and talk about education and learning. | |
| If you're somebody listening to this who's a smart person interested in programming, interested in AI, so I presume building something from scratch is a good beginning. | |
| So can you just take me through what you would recommend people do? | |
| So I would personally start, like you said, implementing a simple model from scratch that you can run on your computer. | |
| The goal is not if you build a model from scratch to have something you use every day for your personal projects. | |
| It's not going to be your personal assistant replacing an existing open weight model or ChatGPT. | |
| It's to see what exactly goes into the LLM, what exactly comes out of the LLM, how the pre-training works in that sense on your own computer, preferably. | |
| And then you learn about the pre-training, the supervised fine-tuning, the attention mechanism. | |
| You get a solid understanding of how things work. | |
| But at some point, you will reach a limit because small models can only do so much. | |
| And the problem with learning about LLMs at scale is I would say it's exponentially more complex to make a larger model because it's not that the model just becomes larger. | |
| You now have to think about sharding your parameters across multiple GPUs. | |
| Even for the KV cache, there are multiple ways you can implement it. | |
| One, which is mainly for understanding how it works, is to just grow the cache. | |
| That's like a cache you grow step by step by, let's say, concatenating lists, growing it. | |
| But then that wouldn't be optimal in GPUs. | |
| You wouldn't do that. | |
| You would pre-allocate a tensor and then fill it in. | |
| But that adds again another 20, 30 lines of code. | |
| And for each thing, you add so much code. | |
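For what it's worth, here is a minimal PyTorch sketch of the two KV-cache strategies just mentioned, simplified well past production quality: growing the cache by concatenation versus pre-allocating a tensor and filling it in. The tensor shapes (batch, heads, sequence, head dimension) are an assumption for the illustration.

```python
import torch

# Naive cache: grow by concatenation every decoding step (easy to read, but
# reallocates memory each step).
class GrowingKVCache:
    def __init__(self):
        self.k, self.v = None, None

    def update(self, k_new, v_new):  # shapes: (batch, heads, 1, head_dim)
        self.k = k_new if self.k is None else torch.cat([self.k, k_new], dim=2)
        self.v = v_new if self.v is None else torch.cat([self.v, v_new], dim=2)
        return self.k, self.v

# Pre-allocated cache: reserve the full context up front and fill it in place.
class PreallocatedKVCache:
    def __init__(self, batch, heads, max_len, head_dim, device="cpu"):
        self.k = torch.zeros(batch, heads, max_len, head_dim, device=device)
        self.v = torch.zeros(batch, heads, max_len, head_dim, device=device)
        self.pos = 0

    def update(self, k_new, v_new):
        step = k_new.shape[2]
        self.k[:, :, self.pos:self.pos + step] = k_new
        self.v[:, :, self.pos:self.pos + step] = v_new
        self.pos += step
        return self.k[:, :, :self.pos], self.v[:, :, :self.pos]
```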
| And I think the trick with the book is basically to understand how the LLM works. | |
| It's not going to be your production-level LLM. | |
| But once you have that, you can understand the production level LLM. | |
| So you're trying to always build an LLM that's going to fit on one GPU. | |
| Yes. | |
| Most of them I have. | |
| I have some bonus materials on some MOE models. | |
| I think one or two of them, they may require multiple GPUs. | |
| But the goal is to have it on one GPU. | |
| And the beautiful thing is also you can self-verify. | |
| It's almost like RLVR. | |
| When you code these from scratch, you can take an existing model from the Hugging Face Transformers library. | |
| So the Hugging Face Transformers library is great. | |
| But if you want to learn about LLMs, I think that's not the best place to start because the code is so complex because it has to fit so many use cases. | |
| Also, some people use it in production. | |
| It has to be really sophisticated and it's really intertwined and really hard. | |
| It's not linear to read. | |
| It was started as a fine-tuning library. | |
| And then it grew to be like the standard representation of every model architecture and the way it's loaded. | |
| So Hugging Face is like the default place to get a model. | |
| And Transformers is the software that enables it. | |
| So people can easily load a model and do something basic with it. | |
| And all frontier labs that have open weight models have a Hugging Face Transformers version of them, from DeepSeek to GPT-OSS. | |
| That's like the canonical weights that you can load there. | |
| But again, also even Transformers, the library is not used in production. | |
| People then use SGLang or vLLM. | |
| And it adds another layer of complexity. | |
| We should say that the Transformers library has like 400 models. | |
| So it's one library that tries to implement a lot of LLMs. | |
| And so you have a huge code base, basically. | |
| It's like huge. | |
| It's, I don't know, maybe hundreds of thousands or millions of lines of code. | |
| And it's like understanding the part that you want to understand is finding the needle in the haystack. | |
| But what's beautiful about it is you have a working implementation. | |
| And so you can work backwards from it. | |
| What I would recommend doing, or what I also do, is if I want to understand, for example, how OLMo 3 is implemented, I would look at the weights on the model hub, the config file. | |
| And then you can see, oh, they used so many layers. | |
| They use, let's say, grouped-query attention or multi-head attention in that case. | |
| And you see all the components in like a human readable, I don't know, 100 lines of config file. | |
| And then you start, let's say, with your GPT-2 model and add these things, you know? | |
| And the cool thing here is you can then load the pre-trained weights and see if they work in your model. | |
| And you want to match the same output that you get with the Transformers model. | |
| And then you can use that as a basically as a verifiable reward to make your architecture correct. | |
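A rough sketch of that workflow, using the real Hugging Face `transformers` loading functions but with a placeholder checkpoint name, and with `build_my_model` / `convert_weights` standing in for your own hypothetical from-scratch code (so this won't run as-is until you supply those):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

ref_name = "org/some-open-weight-model"  # placeholder: whichever model you are reimplementing

config = AutoConfig.from_pretrained(ref_name)            # human-readable architecture settings
print(config.num_hidden_layers, config.hidden_size)

tokenizer = AutoTokenizer.from_pretrained(ref_name)
reference = AutoModelForCausalLM.from_pretrained(ref_name, torch_dtype=torch.float32).eval()

my_model = build_my_model(config)                          # hypothetical: your from-scratch module
my_model.load_state_dict(convert_weights(reference.state_dict()))  # hypothetical: map weight names
my_model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    ref_logits = reference(**inputs).logits
    my_logits = my_model(inputs["input_ids"])

# The "verifiable reward" for your implementation: outputs should match the reference.
torch.testing.assert_close(my_logits, ref_logits, rtol=1e-4, atol=1e-4)
```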
| And sometimes it takes me a day. With OLMo 3, that challenge was RoPE for the position embeddings. | |
| They had a YaRN extension and there was some custom scaling there, and I couldn't quite match these things. | |
|
Character Training Insights
00:15:59
|
|
| And in this struggle, you kind of understand things. | |
| But the cool thing is at the end, you know you have it correct because you can unit test it. | |
| You can check against the reference implementation. | |
| And I think that's maybe one of the best ways to learn, really, like to basically reverse engineer something. | |
| I think that that is something that everybody that's interested in getting to AI today should do. | |
| And I think that's why I liked your book is like I came to language models from this RL and robotics field. | |
| Like I never had taken the time to just like learn all the fundamentals. | |
| And this transformer architecture I describe as being so fundamental, just as deep learning was a thing that I had to learn in the past, and people need to do this. | |
| I think that where a lot of people kind of get overwhelmed is how do I apply this to have impact or find like a career path? | |
| Because like AI and language models make this fundamental stuff so accessible and people with motivation will learn it. | |
| And then it's like, how do I get the cycles to go and contribute to research? | |
| And I think that I'm actually fairly optimistic on this, because the field moves so fast that a lot of times the best people don't fully solve a problem, because there's a bigger problem to solve that's very low-hanging fruit. | |
| So they move on. | |
| And I think that a lot of what I was trying to do in this RLHF book is like take post-training techniques and just describe how people think about them influencing the model and what people are doing. | |
| And then it's remarkable how many things people just stop studying, or never do. | |
| So I think people trying to get narrow after doing the fundamentals is good. | |
| And then reading the relevant papers and being engaged in the ecosystem. The proximity that random people have online to the leading researchers is real. No one knows who the anonymous accounts in the ML community on X are, which are very popular for whatever reason. | |
| And no one knows who all these people are. | |
| Like it could just be random people that study the stuff deeply, especially with the AI tools to just be like, I don't understand this, keep digging into it. | |
| I think is a very useful thing. | |
| But there's a lot of research areas that just like are maybe three papers that you need to read. | |
| And then one of the authors will probably email you back, but you have to put a lot of effort into these emails to understand the field. | |
| Like I think it would be for a newcomer easily weeks of work to feel like they can truly grasp like what is a very narrow area. | |
| But I think going narrow after you have the fundamentals would be very useful to people because it's like I became very interested in like character training, which is like how you make the model funny or sarcastic or serious. | |
| And like, what do you do to the data to do this? | |
| And it's like a student at Oxford reached out to me and it's like, hey, I'm interested in this. | |
| And I advised him. | |
| And I was like, that paper now exists. | |
| And it's like, I don't know, there's like two or three people in the world that were very interested in this. | |
| He's a PhD student, which gives you an advantage. | |
| But like for me, that was a topic I was waiting for someone to be like, hey, I have time to spend cycles on this. | |
| And I'm sure there's a lot more very narrow things where you're just like, oh, it doesn't make sense that there was no answer to this. | |
| And I think that it's just like, there's so much information coming that people are like, I can't grab onto any of these. | |
| But if you just actually stick in an area, I think there's a lot of interesting things to learn. | |
| Yeah, I think you can't try to do it all because it would be very overwhelming and you would burn out if you try to keep up with everything. | |
| For me, for example, I haven't kept up with computer vision in a long time, just focused on LMs. | |
| But coming back to your book, for example, I think this is also a really great book and a really good bang for the buck because you want to learn about RLHF. | |
| I wouldn't go out there and just read RLHF papers because you would be spending a lot of time. | |
| I just edited the book and there's a chapter where I had to be like, these papers say one thing and those papers say another thing, and we'll see what comes out to be true. | |
| What are some of the, just to go through some of the table of content, some of the ideas we might have missed in the bigger picture, the post-training? | |
| So, first of all, you do the problem setup, training overview, what preferences are, preference data, the optimization tools: reward modeling, regularization, instruction tuning, rejection sampling, reinforcement learning, i.e., policy gradients, direct alignment algorithms, | |
| then constitutional AI and AI feedback, reasoning and inference-time scaling, tool use and function calling, synthetic data and distillation, evaluation, and then the open questions section: over-optimization, style and information, and then product, UX, character, and post-training. | |
| So, what are some ideas worth mentioning that connect both the educational component and the research component? | |
| You mentioned the character training, which is pretty interesting. | |
| Character training is interesting because there's so little out there on it, but we talked about how people engage with these models, and we feel good using them because they're positive, but that can go too far. | |
| It could be too positive. | |
| And it's like, essentially, it's how do you change your data and or decision making to make it exactly what you want. | |
| And OpenAI has this thing called a model spec, which is essentially their internal guideline for what they want the model to do. | |
| And they publish this to developers. | |
| So, essentially, you can know what is a failure of OpenAI's training, which is like they have the intentions and they haven't met it yet, versus what is something that they like actually wanted to do and that you don't like. | |
| And that transparency is very nice, but all the methods for curating these documents and how easy it is to follow them is not very well known. | |
| I think the way the book is designed is that the reinforcement learning chapter is obviously what people want, because everybody hears about it with RLVR. | |
| And it's the same algorithms and the same math, but you can use them in very different domains. | |
| So, I think the core of RLHF is like how messy preferences are. | |
| It's essentially a rehash of a paper I wrote years ago. | |
| But this is essentially the chapter that'll tell you why RLHF is never ever fully solvable because, like, the way that even RL is set up is that it assumes that preferences can be quantified and that multiple preferences can be reduced to single values. | |
| And I think it relates in the economics literature to the von Neumann-Morgenstern utility theorem. | |
| And, like, that is the chapter where all of that philosophical, economic, and like psychological context tells you what gets compressed into doing RLHF. | |
| So, it's like you have all of this, and then later in the book, it's like you use this RL math to make the number go up. | |
| And I think that that's why I think it would be very rewarding for people to do research on is because it's like quantifying preferences is something that is just like humans have designed the problem in order to make preferences studyable. | |
| But there are kind of fundamental debates on, for example: in a language model response, you have different things you care about, whether it's accuracy or style. | |
| And when you're collecting the data, they all get compressed into like, I like this more than another. | |
| And it's like, like, that is happening. | |
| And there's a lot of philosophy, there's a lot of research in other areas of the world that go into like, how should you actually do this? | |
| I think social choice theory is the subfield of economics around how you should aggregate preferences. | |
| I went to a workshop that published a white paper on how you can think about using social choice theory for RLHF. | |
| So I mostly would want people that get excited about the math to come and have things that they could stumble into and learn this kind of broader context. | |
| I think there's a fun thing. | |
| I just keep a list of all the tech reports that I like of reasoning models. | |
| So in chapter 14, which is kind of a short summary of RLVR, there's a gigantic table where I just list every single reasoning model that I like. | |
| I think in education, at this point, a lot of it needs to be about what I like, because the language models are so good at the math. Take the famous paper on direct preference optimization, which is a much simpler way of solving the problem than RL. | |
| The derivations in the appendix skip steps of math. | |
| And it's like, I tried for this book, like I redid the derivations and I'm like, what the heck is this log trick that they use to change the math? | |
| But doing it with language models, they're like, this is the log trick. | |
| And I'm like, I don't know if I like this, that the math is so commoditized. | |
| I think like some of the struggle and reading this appendix and following the math, I think is good for learning. | |
| And I, yeah, so we're actually returning to this often just on the topic of education. | |
| You both have brought up the word struggle quite a bit. | |
| So there is value. | |
| If you're not struggling as part of this process, you're not fully following the proper process for learning, I suppose. | |
| Some of the providers are starting to work on models for education. Actually, I haven't used them, but I would guess they're designed to not give all the information at once and make people work for it. | |
| So I think you could train models to do this. | |
| And it would be a wonderful contribution where like all of this stuff in the book, you have to reevaluate every decision for it, which is such a great example. | |
| I think there's a chance we work on it at Ai2 too, which I was like, oh, I think this is... | |
| It makes sense. | |
| I do something like that. | |
| I did that the other day for video games, for example. | |
| I sometimes, for my pastime, play video games. | |
| Like, I like video games with puzzles, you know, like Zelda and Metroid. | |
| And there's this new game where I got stuck. | |
| And I was like, okay, I, you know, I don't want to struggle for like two days. | |
| And so I use an LLM. | |
| But then you say, hey, please don't add any spoilers. | |
| Just, you know, I'm here and there. | |
| What do we have to do next? | |
| And the same thing you can do, I guess, for math, where you say, okay, I'm here at this point. | |
| I'm getting stuck. | |
| Don't give me the full solution, but what is something I could try? | |
| You know, like where you kind of carefully probe it. | |
| But the problem here is I think it requires discipline. | |
| And a lot of people do math for, like, I mean, there are a lot of people who enjoy math, but there are also a lot of people who need to do it for their homework. | |
| And then it's like the shortcut. | |
| And yeah, we can develop an educational LLM, but the other LLM is still there. | |
| And there's still a temptation to use the other LLMs. | |
| I think a lot of people, especially in college, they understand the stuff they're passionate about. | |
| They're self-aware about it. | |
| And they understand it shouldn't be easy. | |
| Like, I think we just have to develop a good taste. | |
| We talk about research taste, like school taste, about stuff that you should be struggling on and stuff you shouldn't be struggling on, which is tricky to know because sometimes you don't have good long-term vision about what would be actually useful to you in your career. | |
| But you have to develop that taste. | |
| I was talking to maybe my fiancé or friends about this. | |
| And it's like, there's this brief 10-year window where all of the homework and all the exams could be digital. | |
| But before that, everybody had to do all the exams in Blue Book because there was no other way. | |
| And now after AI, everybody's going to need to be in Blue Books and oral exams because everybody could cheat so easily. | |
| It's like this brief generation that had a different education system that everything could be digital and but you still couldn't cheat. | |
| And now it's just going to go back, which is pretty funny. | |
| You mentioned character training, just zooming out on a more general topic. | |
| For that topic, how much compute was required? | |
| And in general, to contribute as a researcher, are there places where not too much compute is required where you can actually contribute as an individual researcher? | |
| On the character training thing, I think this research is built on fine-tuning about 7-billion-parameter models with LoRA, which essentially means you only fine-tune a small subset of the model's weights. | |
| I don't know exactly how many GPU hours that would take. | |
| But it's doable. | |
| Not doable for every academic. | |
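For reference, the LoRA idea mentioned above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the full method or any particular library's API: the pretrained weight stays frozen and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pretrained nn.Linear: freeze its weights and learn a low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                        # the big pretrained matrix is frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))  # starts as a no-op update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A @ self.B) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total}")  # a tiny fraction of the full layer
```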
| So, the situation for some academics is like so dire that the only work you can do is doing inference where you have closed models or open models and you get completions from them and you can look at them and understand the models. | |
| And that's very well suited to evaluation, where you want to be the best at creating representative problems that the models fail on or that show certain abilities, and I think you can break through with this. | |
| I think the top-end goal for a researcher working on evaluation, if you want to have career momentum, is that the frontier labs pick up your evaluation. | |
| So, it's like you don't need to have every project do this. | |
| But if you go from a small university with no compute and you figure out something that Claude struggles with, and then the next Claude model has it in the blog post, like there, there's your career rocket ship. | |
| I think that that's hard, but it's like if you want to scope the maximum possible impact or the minimum compute, it's something like that, which is just get very narrow and it takes learning of where the models are going. | |
| So you need to build a tool that tests whether or not Claude 4.5 will fail. | |
| If you're going to do a research, if I'm going to start a research project, I need to think where the models in eight months are going to be struggling. | |
| But what about developing totally novel ideas? | |
| This is a trade-off. | |
| I think that if you're doing a PhD, you could also be like, it's too risky to work in language models. | |
| I'm going way longer term, which is like, what is what is the thing that's going to define language model development in 10 years, which I think that I end up being a person that's pretty practical. | |
| I mean, I went to my PhD where it's like, I got into Berkeley, worst case, I get a master's and I go work in tech. | |
| It's like, I'm very practical about it. | |
| So I'm like, look at the life afforded to people who work at these AI companies. OpenAI's average compensation is over a million dollars in stock a year per employee. | |
| Any normal person in the US to get into this AI lab is transformative for your life. | |
| So, I'm pretty practical of like, there's still a lot of upward mobility working in language models if you're focused. | |
| And the outcomes is like, look at these jobs. | |
| But from a research perspective, the transformative impact, the academic awards, being the next Yann LeCun, comes from not caring about language model development very much. | |
| It's a big financial sacrifice in that case. | |
| So, I get to work with some awesome students and they're like, should I go work at an AI lab? | |
| And I'm like, you're getting a PhD at a top school or you're going to leave to go to a lab. | |
| I'm like, I don't know. | |
| Like, if you go work at a top lab, I don't blame you. | |
| Don't go work at some random startup that might go to zero. | |
| But if you're going to OpenAI, I'm like, it could be worth leaving a PhD for. | |
| Let's more rigorously think through this. | |
| So, where would you give a recommendation for people to do a research contribution? | |
| So, the options are academia. | |
| So, get a PhD, spend five years publishing. | |
| Compute resources are constrained. | |
| There's research labs that are more focused on open weight models. | |
| And so working there, or at closed frontier research labs: OpenAI, Anthropic, xAI, and so on. | |
| The two gradients are the more closed, the more money you tend to get. | |
| But also you get less credit. | |
| So in terms of building a portfolio of things that you've done, it's very clear what you have done as an academic. | |
| Versus if you are going to trade this fairly reasonable progression for being a cog in the machine, which could also be very fun. | |
| So I think it's very different career paths. | |
| But the like the opportunity cost for being a researcher is very high because PhD students are paid essentially nothing. | |
| So I think it ends up rewarding people who have a fairly stable safety net and realize that they can operate in the long term, which is they want to do very interesting work and get a very interesting job. | |
| So it is a fairly privileged position to be like, I'm going to see out my PhD and figure it out after, because I want to do this. | |
| At the same time, the academic ecosystem is getting bombarded by funding getting cut and stuff. | |
| So there's just like so many different trade-offs where I understand plenty of people that are like, oh, I can't deal with this funding search. | |
| My grant got cut for no reason by the government, or I don't know what's going to happen. | |
| So I think there's a lot of uncertainty and trade-offs that, in my opinion, favor just like take the well-paying job with meaningful impact. | |
| Because it's not like you're just getting paid to sit around at OpenAI; you're building the cutting edge of things that are changing millions of people's relationship to tech. | |
|
High Risk, High Reward Lab Life
00:04:55
|
|
| But publication-wise, they're being more secretive, increasingly so. | |
| So you're publishing less and less. | |
| And so you are having a positive impact at scale, but you're a cog in the machine. | |
| I think it's honestly, it hasn't changed that much. | |
| So I have been in academia. | |
| I'm not in academia anymore. | |
| At the same time, I wouldn't want to miss my time at academia. | |
| But what I wanted to say before I get to that part, I think it hasn't changed that much. | |
| I was working in, like, I was using AI or machine learning methods for applications in computational biology with collaborators. | |
| And a lot of people went from academia directly to Google. | |
| And I think it's the same thing. | |
| Back then, professors were like, you know, sad that their students went into industry because they couldn't carry on their legacy in that sense. | |
| And I think it's the same thing. | |
| It's like it hasn't changed, I think, that much. | |
| The only thing that has changed is the scale. | |
| But, you know, cool stuff was always developed in industry that was closed. | |
| You couldn't talk about it. | |
| And I think the difference now is, well, your preference, do you like to talk about your work, publish, or you are more in a closed lab? | |
| That's one difference, the compensation, of course, but it's always been like that, I think. | |
| So it really depends on where you feel comfortable. | |
| And it's also nothing is forever. | |
| The only thing right now is there's a third option, which is starting a startup. | |
| That's a lot of people doing startups. | |
| Very risky move, but the reward can be high. | |
| It's a high risk, high reward type of situation where joining an industry lab, I think, is pretty safe. | |
| Also, upward mobility. | |
| Honestly, I think once you have been at an industry lab, it will be easier to find future jobs. | |
| But then again, it's like, yeah, how much do you enjoy the team and working on proprietary things versus how do you like the publishing work? | |
| I mean, publishing is stressful. | |
| It is, you know, like acceptance rate at conferences can be arbitrary, can be very frustrating, but also high reward. | |
| If you have a paper published, you feel good because your name is on there. | |
| You have a high accomplishment. | |
| And, you know, I feel like my friends who are professors seem on average happier than my friends who work at a frontier lab, to be totally honest. | |
| Because that's just grounding. | |
| And the frontier labs definitely do this 996, which essentially is shorthand for work all the time. | |
| Can you describe 996 as a culture that's, I believe you could say, invented in China and adopted in Silicon Valley? | |
| What's 996? | |
| It's 9 a.m. to 9 p.m. | |
| Six days a week. | |
| Six days a week. | |
| What is that? | |
| 72 hours. | |
| Okay. | |
| So is this basically the standard in AI companies in Silicon Valley? | |
| More and more this kind of grind mindset? | |
| Yeah, I mean, maybe not exactly like that, but I think there is a trend towards it. | |
| And it's interesting. | |
| I think it almost flipped because when I was in academia, I felt like that because as a professor, you had to write grants, you had to teach, and you had to do your research. | |
| It's like three jobs in one. | |
| And it is more than a full-time job if you want to be successful. | |
| I feel like now, like Nathan just said, professors, in comparison, maybe have even less pressure or workload than people at a frontier lab, because those people work a lot. | |
| They're just so fulfilled by working with students and having a constant runway of mentorship and like a mission that is very people oriented. | |
| I think in an era when things are moving very fast and are very chaotic, it's very rewarding to people. | |
| Yeah. | |
| And at a startup, I think there's pressure. | |
| It's like you have to make it. | |
| It is really important that people put in the time, but it is really hard because you have to deliver constantly. | |
| And I've been at a startup. | |
| I had a good time, but I don't know if I could do it forever. | |
| It's like an interesting pace and is exactly like we talked about in the beginning. | |
| These models are leapfrogging each other and they are just constantly like trying to take the next step compared to their competitors. | |
| It's just ruthless, I think, right now. | |
| I think this leapfrogging nature and having multiple players is actually an underrated driver of language modeling progress, where competition is so deeply ingrained in people. | |
| And these companies have intentionally created very strong culture. | |
| Like Anthropic is known to be culturally deeply committed and organized. | |
| I mean, we hear so little from them, and everybody at Anthropic seems very aligned. | |
| And it's like being in a culture that is super tight and having this competitive dynamic is like, talk about a thing that's going to make you work hard and create things that are better. | |
| So I think that comes at the cost of human capital: you can only do this for so long, and people are definitely burning out. | |
| I think I wrote a post on burnout. | |
|
Silicon Valley Bubble
00:04:51
|
|
| It's like, I tread in and out of this myself, especially trying to be a manager of OLMo training. | |
| It's a crazy job doing this. | |
| The book Apple in China by Patrick McGee, he talked about how hard the Apple engineers worked to set up the supply chains in China. | |
| And he was like, they had saving marriage programs. | |
| And he said on a podcast that people died from this level of working hard. | |
| So it's a perfect environment for creating progress at human expense. | |
| And a lot of the human expense is the 996 that we started this with, which is, people really do grind. | |
| I also read this book. | |
| I think they had a code word for when someone had to go home to spend time with their family to save the marriage. | |
| And it's crazy. | |
| Then colleagues say, okay, this is like red alert for this situation. | |
| We have to let that person go home this weekend. | |
| But at the same time, I don't think they were forced to work. | |
| They were so passionate about the product, I guess, that you get into that mindset. | |
| And I had that sometimes as an academic, but also as an independent person, I have that sometimes. | |
| I overwork and it's unhealthy. | |
| I had, you know, I had back issues, I had neck issues because I did not take the breaks that I maybe should have taken. | |
| But it's not because anyone forced me to; it's because I wanted to work, and that's what people at OpenAI and Anthropic are like, they want to do this work. | |
| Yeah, but there's also, there's also a feeling of fervor that's building, especially in Silicon Valley, aligned with the scaling laws idea where there's this hype where the world will be transformed on a scale of weeks and you want to be at the center of it. | |
| And then, you know, I have this great fortune of having conversations with a wide variety of human beings. | |
| And from there, I get to see all these bubbles and echo chambers across the world. | |
| And it's fascinating to see how we humans form them. | |
| And I think it's fair to say that Silicon Valley is a kind of echo chamber, a kind of silo and bubble. | |
| I think bubbles are actually really useful and effective. | |
| It's not necessarily a negative thing because it could be ultra productive. | |
| It could be the Steve Jobs reality distortion field because you just convince each other the breakthroughs are imminent. | |
| And by convincing each other of that, you make the breakthroughs imminent. | |
| Byrne Hobart wrote a book classifying bubbles, but essentially one of them is financial bubbles, which is speculation, which is bad. | |
| And the other one is like, I don't know the term, but effectively for build outs because it pushes people to build these things. | |
| And I do think AI is in this second kind, but I worry about it transitioning to a financial bubble. | |
| Yeah, but also in the space of ideas, that bubble. | |
| You are doing a reality distortion field, and that means you are deviating from reality. | |
| And if you go too far from reality while also working, you know, 996, you might miss some fundamental aspects of the human experience, including in Silicon Valley. | |
| And this is a common problem in Silicon Valley. | |
| It's like it's a very specific geographic area. | |
| You might not understand the Midwest perspective, the full experience of all the other different humans in the United States and across the world. | |
| And you speak a certain way to each other, you convince each other of a certain thing. | |
| And that can get you into real trouble. | |
| Whether AI is a big success and becomes a powerful technology or it's not, in either trajectory, you can get yourself into trouble. | |
| So you have to consider all of that. | |
| Here you are, a young person trying to decide what you want to do with your life. | |
| The thing that is, I don't even really understand this, but the SF AI memes have gotten to the point where permanent underclass was one of them, which was the idea that the last six months of 2025 was the only time to build durable value in an AI startup or model. | |
| Otherwise, all the value will be captured by existing companies and you will therefore be poor, which is an example of the SF thing going too far. | |
| I still think for young people, if you are really passionate about wanting to have impact in AI, being physically in SF is the most likely place where you're going to be able to tap into it, but it has trade-offs. | |
| I think SF is an incredible place, but there is a bit of a bubble. | |
| And if you go into that bubble, which is extremely valuable, just get out also. | |
| Read history books, read literature, visit other places in the world. | |
| Twitter is not, and Substack is not the entire world. | |
| One of the people I worked with is moving to SF. | |
| And it's like, I need to get him a copy of The Season of the Witch, which is a history of SF from like 1960 to 1985, which goes through like the hippie revolution, like all the gays kind of taking over the city and that culture emerging and then the HIV AIDS crisis and other things. | |
|
Scaling Text Diffusion Models
00:15:17
|
|
| And it's just like that is so recent and so much turmoil and hurt, but also like love in SF. | |
| And it's like, no one knows about this. | |
| It's a great book, Season of the Witch. | |
| I recommend it. | |
| A bunch of my SF friends who do get out recommended it to me. | |
| And I think that's just like living there. | |
| Like I lived there and I didn't appreciate this context. | |
| And it's just like so recent. | |
| Yeah. | |
| Okay. | |
| We talked a lot about a lot of things, certainly about the things that were exciting last year. | |
| But this year, one of the things you guys mentioned is exciting is the scaling of text diffusion models and just a different exploration of text diffusion. | |
| Can you talk about what that is and what the possibility holds? | |
| So different kinds of approaches than the current LLMs. | |
| Yeah, so we talk a lot about the Transformer architecture and the autoregressive transformer architecture, specifically like GPT. | |
| And it doesn't mean no one else is working on anything else. | |
| So people are always on the, let's say, look out for the next big thing. | |
| Because I think it would be almost like, yeah, stupid not to, because sure, right now the transformer architecture is the thing and it works best. | |
| And there's right now nothing else out there. | |
| But, you know, it's always a good idea to not put all your eggs into one basket. | |
| So, people are developing other things, alternatives to the autoregressive transformer. | |
| One of them would be, for example, text diffusion models. | |
| And listeners may know diffusion models from the image generation, like Stable Diffusion popularized it. | |
| There was like a paper on generating images. | |
| Back then, people used GANs, the generative adversarial networks. | |
| And then there was this diffusion process where you iteratively denoise an image and that resulted in really good quality images over time. | |
| Stable Diffusion came out of a company, Stability AI. | |
| Other companies built their own diffusion models. | |
| And then people are now like, okay, can we try this also for text? | |
| Doesn't make intuitive sense yet because it feels like, okay, it's not something continuous like a pixel that we can differentiate. | |
| It's like a discrete text. | |
| So how do we implement that denoising process? | |
| But it's kind of like similar to the BERT models by Google. | |
| Like when you go back to the original transformer, so there were like the encoder and the decoder. | |
| The decoder is what we are using right now in GPT and so forth. | |
| The encoder, it's more like a parallel, let's say, technique where you have multiple tokens that you fill in in parallel. | |
| So GPT models, they do autoregressive one token at a time. | |
| You complete the sentence one token at a time. | |
| And in BERT models, you have a text, let's say, sentence that has gaps. | |
| You mask them out. | |
| And then one iteration is filling in these gaps. | |
| And text diffusion is kind of like that, where you are starting with, let's say, some random text, and then you are filling in the missing parts or you're refining them iteratively. | |
| And you have multiple iterations. | |
| And the cool thing here is that this can do multiple tokens at the same time. | |
| So it's kind of like the promise of having it more efficient. | |
| Now, the trade-off is, of course, well, how good is the quality? | |
| It might be faster. | |
| And then now you have this dimension of the denoising process. | |
| The more steps you do, the better the text becomes. | |
| And people, you know, I mean, you can scale in different ways. | |
| They try to see if that is maybe a valid alternative to the autoregressive model in terms of giving you the same quality for less compute. | |
| Right now, I think it's, you know, there are papers that suggest, okay, if you want to get the same quality, you have to crank up the denoising steps and then you end up spending the same compute you would spend on an autoregressive model. | |
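As a rough, hypothetical sketch of the denoising loop being described here (a toy illustration, not any production system; the `fill_in` stub stands in for a trained denoiser):

```python
import random

# Toy masked text "diffusion": start fully masked, then unmask a few positions per step.
# fill_in stands in for a trained denoiser; here it just picks random words, purely to
# illustrate the parallel, iterative refinement loop (several tokens filled per step).
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
MASK = "<mask>"

def fill_in(tokens, positions):
    # A real model would predict all masked positions jointly from the full context.
    return [random.choice(VOCAB) if i in positions else t for i, t in enumerate(tokens)]

def sample(length=8, steps=4):
    tokens = [MASK] * length
    masked = set(range(length))
    per_step = max(1, length // steps)
    while masked:
        chosen = set(random.sample(sorted(masked), min(per_step, len(masked))))
        tokens = fill_in(tokens, chosen)    # fill several positions in parallel
        masked -= chosen
        print(" ".join(tokens))             # the text gets less "noisy" each step
    return tokens

sample()
```

More denoising steps generally mean better text, which is exactly the quality-versus-compute trade-off being discussed here.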
| The other downside is, well, it's parallel, which sounds appealing, but some tasks are not parallel. | |
| Like, you know, reasoning tasks or tool use, maybe where you have to ask a code interpreter to give you an intermediate result, and that is kind of tricky with diffusion models. | |
| So there are some hybrids, but the main idea is can we parallelize it? | |
| And so interesting avenue. | |
| I think right now there are mostly research models out there, like LLaDA and some other ones. | |
| I saw some by startups, some deployed models. | |
| There is no big diffusion model at scale yet, nothing at the Gemini or ChatGPT level. | |
| But there was an announcement by Google, or a site, where they said they are launching Gemini Diffusion, and they put it in the context of their, I think, Nano 2 model. | |
| And they said basically for the same quality on most benchmarks, we can generate things much faster. | |
| So you mentioned what's next. | |
| I don't think the text diffusion model is going to replace autoregressive LLMs, but it will be something maybe for quick, cheap at scale tasks. | |
| Maybe the free tier in future will be something like that. | |
| I think there are a couple of examples where I've heard it's actually starting to be used. | |
| To paint an example of why this is so much better: when GPT-5 is taking 30 minutes to respond, it's generating one token at a time. | |
| And the diffusion idea is essentially to generate all of those tokens in the completion in one batch, which is why it could be way faster. | |
| And I think it can be well suited to certain tasks. | |
| The startups I'm hearing are like code startups where you have a code base and you have somebody that's effectively vibe coding and they say, make this change. | |
| And a code diff is essentially a huge reply from the model, but it doesn't have to have that much external context. | |
| And you can get it really fast by using these diffusion models. | |
| So that's one example I've heard of: they use text diffusion to generate really long diffs, because doing it with an autoregressive model would take minutes. | |
| And that time for like a user-facing product causes a lot of churn. | |
| So like every second, you lose a lot of users. | |
| So I think that is going to be this thing where it's going to grow and have some applications. | |
| But I actually thought that different types of models were going to be used for different things more sooner than they have been. | |
| So it's kind of a trade-off. | |
| I think that the tool use point is the one that's stopping them from being most general purpose, because with Claude Code and ChatGPT with search, the autoregressive chain is interrupted with some external tool. | |
| And I don't know how to do that with the diffusion setup. | |
| So what's the future of tool use this year and then in the coming years? | |
| Do you think there's going to be a lot of developments there? | |
| How that's integrated to the entire stack? | |
| I do think right now, I mean, it's mostly on the proprietary LLM side, but I think we will see more of that in the open source tooling. | |
| And I think, I mean, it is a huge unlock, because then you can really outsource certain tasks from memorization to actual tools: instead of having the LLM memorize what 23 plus 5 is, just use a calculator. | |
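A minimal sketch of that calculator hand-off (the `call_llm` stub and the TOOL_CALL format are invented for illustration; real tool-calling APIs differ):

```python
import re

def calculator(expression: str) -> str:
    # Only allow digits and basic operators before evaluating.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model: it "decides" to call the tool,
    # then writes a final answer once it sees the tool result.
    if "Tool result:" in prompt:
        return "The answer is " + prompt.split("Tool result:")[-1].strip() + "."
    return 'TOOL_CALL calculator("23 + 5")'

def answer(question: str) -> str:
    reply = call_llm(question)
    match = re.match(r'TOOL_CALL calculator\("(.+)"\)', reply)
    if match:
        result = calculator(match.group(1))          # outsource arithmetic to the tool
        return call_llm(f"{question}\nTool result: {result}")
    return reply

print(answer("What is 23 + 5?"))   # -> The answer is 28.
```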
| So do you think that can help solve hallucination? | |
| Not solve it, but reduce it. | |
| So the LLM still needs to know when to ask for a tool call. | |
| And the second one is, well, it doesn't mean the internet is always correct. | |
| You can do a web search, but let's say I asked who won the World Cup in, let's say, 1998. | |
| It still needs to find the right website and get the right information. | |
| So it can still go to the incorrect website and give me incorrect information. | |
| So I don't think it will fully solve that, but it is improving it in that sense. | |
| And so another cool paper, I think it was December 31st. | |
| So it's not technically this year, but close. | |
| The recursive language model paper. | |
| That's a cool idea to kind of take this even a bit further. | |
| So just to explain, so Nathan, you also mentioned earlier it's harder to do cool research in academia because of the compute budget. | |
| If I recall correctly, they did everything with GPT-5. | |
| So they didn't even use local models. | |
| But the idea is, let's say, if we have a long-context task, instead of having the LLM solve all of it in one shot or even in a chain, you break it down into sub-tasks. | |
| You have the LLM decide what a good sub-task is, and then recursively call an LLM to solve that. | |
| And I think something like that, also then adding tools: maybe you have a huge QA task. | |
| Each sub-task goes to the web and gathers information, and then at the end you pull it together and stitch it back together. | |
| Like where I think there's going to be a lot of unlock using things like that, where you don't necessarily improve the LLM itself, you improve how the LLM is used and what the LLM can use. | |
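The control flow being described could be sketched roughly like this (not the paper's code; `llm` is a placeholder for whatever model you call, reportedly GPT-5 in their setup):

```python
def llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[answer to: {prompt[:40]}...]"

def split_into_subtasks(task: str) -> list[str]:
    # In the real setup the model itself proposes the decomposition;
    # here we just ask the stub for a newline-separated plan.
    plan = llm(f"Break this task into independent sub-tasks, one per line:\n{task}")
    return [line for line in plan.splitlines() if line.strip()]

def recursive_solve(task: str, depth: int = 0, max_depth: int = 2) -> str:
    if depth >= max_depth:
        return llm(task)                               # base case: answer directly
    partials = [recursive_solve(s, depth + 1, max_depth)
                for s in split_into_subtasks(task)]
    # Stitch the partial results back together with one final call.
    return llm("Combine these partial results into one answer:\n" + "\n".join(partials))

print(recursive_solve("Answer a huge multi-document QA task."))
```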
| One downside right now with tool use is you have to give the LLM permission to use tools. | |
| And that will take some trust, especially if you want to unlock things like having an LLM answer emails for you, not even answer, but just sort them for you or select them for you or something like that. | |
| I don't know if I would today give an LLM access to my emails, right? | |
| I mean, this is like a huge risk. | |
| I think there's a cool one last point on the tool use thing. | |
| I think that you hinted at this, and we've both come at this in our own ways, is that the open versus closed models use tools in very different ways, where open models, people go to Hugging Face and you download the model and then the person's going to be like, oh, what tool do I want? | |
| And, I don't know, Exa is my preferred search provider, but somebody else might prefer a different search startup. | |
| When you release a model, it needs to be useful with multiple tools for multiple use cases, which is really hard because you're making a general reasoning engine model, which is actually what GPT-OSS is good for. | |
| But on the closed models, you're deeply integrating the specific tool into your experience. | |
| And I think that open models will struggle to replicate some of the things that I like to do with closed models, which are, I don't know, referencing a mix of public and private information, and something that I keep trying every three to six months. | |
| I try like Codex on the web, which is just prompting a model to make an update to some GitHub repository that I have. | |
| And that kind of secure cloud environment is just so nice: just send it off to do this thing and come back to me. | |
| And these will probably help define some of the local open and closed niches. | |
| But I think initially, because there was such a rush to get this tool use working, that the open models were on the back foot, which is kind of inevitable. | |
| I think there's so much research, so many resources in these frontier labs, but it will be fun when the open models solve this, because it's going to necessitate a bit more flexible and potentially interesting model that might work with this recursive idea, to be an orchestrator and a tool-use model. | |
| So hopefully the necessity drives some interesting innovation there. | |
| So continual learning. | |
| This is a long-standing topic, important problem. | |
| I think that increases in importance as the cost of training of the models goes up. | |
| So can you explain what continual learning is and how important it might be this year and the coming years to make progress? | |
| This relates a lot to this kind of SF zeitgeist of what is AGI, which is artificial general intelligence, and what is ASI, artificial superintelligence, and what are the language models that we have today capable of doing. | |
| I think the language models can solve a lot of tasks, but a key milestone among the AI community is essentially when AI could replace any remote worker taking in information and solving digital tasks and doing them. | |
| And the limitation that's highlighted by people is that a language model will not learn from feedback the same way that an employee does. | |
| So if you hire an editor, the editor will mess up, but you will tell them. | |
| And if you hired a good editor, they don't do it again. | |
| But language models don't have this ability to modify themselves and learn very quickly. | |
| So the idea is, if we're going to actually get to something that is a true, general, adaptable intelligence that can go into any remote work scenario, it needs to be able to learn quickly from feedback and do on-the-job learning. | |
| I'm personally more bullish on language models getting there by just providing them with very good context. | |
| You said, maybe offline, that you can write extensive documents for models where you say, I have all this information. | |
| Here's all the blog posts I've ever written. | |
| I like this type of writing. | |
| My voice is based on this. | |
| But a lot of people don't provide this to models, and the models weren't previously designed to take this amount of context. | |
| The agentic models are just starting to. | |
| So it's this kind of trade-off of, do we need to update the weights of this model with this continual learning thing to make them learn fast? | |
| Or the counter argument is we just need to provide them with more context and information and they will have the appearance of learning fast by just having a lot of context and being very smart. | |
| So we should mention the terminology here. | |
| So continual learning refers to changing the weights continuously so that the model adapts, adjusts based on the new incoming information, does so continually and rapidly and frequently and so on. | |
| And then the thing you mentioned on the other side of it is generally referred to as in-context learning. | |
| As you learn stuff, there's a huge context window. | |
| You can just keep loading it with extra information every time you prompt the system, and I think both can legitimately be seen as learning. | |
| It's just a different place where you're doing the learning. | |
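To make the two terms concrete, here is a toy contrast (everything below is made up for illustration): continual learning changes the model's weights, while in-context learning leaves the weights alone and only changes what goes into the prompt.

```python
def call_llm(prompt: str, weights: dict) -> str:
    # Stub for a model: its behavior depends on its weights and the prompt it sees.
    return f"[model trained for {weights['steps']} steps answers: {prompt[:40]}...]"

def continual_update(weights: dict, feedback: str) -> dict:
    # Continual learning: feedback becomes (a stand-in for) a gradient step on the weights.
    return {**weights, "steps": weights["steps"] + 1}

def with_context(memory: list[str], question: str) -> str:
    # In-context learning: feedback is just prepended to future prompts.
    return "\n".join(memory + [question])

weights = {"steps": 0}
memory = ["Feedback from last time: keep answers short and direct."]

weights = continual_update(weights, "too verbose")                    # learn by changing weights
print(call_llm("Draft this email.", weights))
print(call_llm(with_context(memory, "Draft this email."), weights))   # learn by changing context
```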
| I think to be honest with you, continual learning, the updating of weights, we already have that in different flavors. | |
| I mean, if you think about it, I think the distinction here is, do you do that on a personalized custom model for each person, or do you do it at a global model scale? | |
| And I think we have that already with going from GPT-5 to 5.1 and 5.2. | |
| It's maybe not immediate, but it is a quick, curated update, where there was feedback on the things it couldn't do, feedback from the community. | |
| They updated the weights, next model, and so forth. | |
| So it is, I mean, kind of like a flavor of that. | |
| Another, even finer-grained example is RLVR, reinforcement learning with verifiable rewards. | |
| You run it, it updates. | |
| The problem is you can't just do that for each person because it would be too expensive to update the weights for each person. | |
| And I think that's the problem. | |
| So unless you get, I mean, even at OpenAI scale, building data centers, it would be too expensive. | |
| I think that is only feasible once you have something on the device where the cost is on the consumer, like what Apple tried to do with the Apple Foundation models, putting them on the phone. | |
| And then they learn from the experience. | |
| A bit of a related topic, but this kind of maybe anthropomorphized term, but memory. | |
| What are the different ideas of the mechanism of how to add memory to these systems? | |
| As you're increasingly seeing. | |
| So personalized memory, especially. | |
| So right now it's mostly like context, basically stuffing things into the context and then just recalling that. | |
| But again, I think, well, it's expensive because you have to, I mean, you can cache it, but still you spend tokens on that. | |
| And the second one is you can only do so much. | |
| I think it's more like a preference or a style. | |
| I mean, a lot of people do that when they solve math problems. | |
| You can add previous knowledge, but you also give it certain preference prompts: do what I preferred last time, something like that. | |
| But it doesn't unlock new capabilities. | |
| So for that, one thing people do still use is LoRA, LoRA adapters. | |
| These are basically, instead of updating the whole weight matrix, two smaller weight matrices that you kind of have in parallel, or overlay. | |
|
World Models in Robotics
00:14:58
|
|
| It's like the delta. | |
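A minimal NumPy sketch of the LoRA idea being described (shapes and numbers chosen arbitrarily): the pretrained matrix W stays frozen, and only a low-rank pair A, B, whose product acts as the delta, gets trained.

```python
import numpy as np

d_in, d_out, rank = 512, 512, 8           # rank << d_in keeps the adapter tiny
W = np.random.randn(d_in, d_out) * 0.02   # frozen pretrained weight
A = np.random.randn(d_in, rank) * 0.02    # trainable
B = np.zeros((rank, d_out))               # trainable, zero-init so the delta starts at zero

def forward(x):
    # The full weight stays frozen; only the low-rank "delta" A @ B is learned.
    return x @ W + x @ A @ B

x = np.random.randn(4, d_in)
print(forward(x).shape)                                            # (4, 512)
print(f"adapter params: {A.size + B.size:,} vs full: {W.size:,}")  # 8,192 vs 262,144
```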
| But yeah, you can do that to some extent. | |
| But then again, it is economics. | |
| So there were also papers, for example, LoRA Learns Less and Forgets Less. | |
| It's like, you know, it's no free lunch. | |
| If you want to learn more, you need to use more weights, but it gets more expensive. | |
| And then again, if you learn more, you forget more. | |
| And it's like you have to find that Goldilocks zone, basically. | |
| We haven't really mentioned it much, but implied in this discussion is context length also. | |
| Is there a lot of innovations that's possible there? | |
| I think the colloquially accepted thing is that it's a compute and data problem, and sometimes small architecture things, which are attention variants. | |
| So, we talked about hybrid attention models, which is essentially having what looks like a state space model within your transformer, and those are better suited because you have to spend less compute to model the furthest-along token. | |
| But those aren't free, because they have to be accompanied by a lot of compute or the right data. | |
| So how many sequences of 100,000 tokens do you have in the world and where do you get these? | |
| And I think it just ends up being pretty expensive to scale them. | |
| So we've gotten pretty quickly to like a million tokens of input context length. | |
| And I would expect it to keep increasing and like get to like 2 million or 5 million this year, but I don't expect it to go to like 100 million. | |
| That would be like a true breakthrough. | |
| And I think those breakthroughs are possible. | |
| Like the continual learning thing, I think of it as a research problem where there could be a breakthrough that just makes transformers work way better at this and it's cheap. | |
| Like these things could happen with so much scientific attention, but turning the crank, it'll be consistent increases over time. | |
| I think you're also looking at the extremes. | |
| I think there's again no free lunch. | |
| So at the one extreme, to make it cheap, you have, let's say, an RNN that has a single state where you save everything from the previous tokens. | |
| It's like a specific fixed size thing. | |
| So you never really grow the memory because you are stuffing everything into one state. | |
| But then the longer the context gets, the more information you forget because you can't compress everything into one state. | |
| Then on the other end, you have the transformers, which try to remember every token, which is great sometimes if you want to look up specific information, but very expensive because you have the KV cache that grows, the dot product that grows. | |
| But then, yeah, like you said, the Mamba layers, I mean, they kind of have the same problem as an RNN, I would say. | |
| You try to compress everything into one state. | |
| You're a bit more selective there. | |
| But then I think it's like this Goldilocks zone again. | |
| With Nemotron 3, they found a good ratio of how many attention layers you need for the global information, where everything is accessible, compared to having these compressed states. | |
| And I think that's how I think we will scale more by finding better, let's say, ratios in Goldilocks zone, like between making it cheap enough to run, but then also making it powerful enough to be useful. | |
| And one more plug here, the recursive language model paper, that is one of the papers that tries to kind of address the long context thing. | |
| So what they found is, essentially, instead of stuffing everything into this long context, if you break it up into multiple smaller tasks, so you save memory by having multiple smaller calls, you can actually get better accuracy than having the LLM try everything all at once. | |
| I mean, it's a new paradigm. | |
| We'll see, you know, there might be other flavors of that. | |
| So I think with that, we will still make improvement on long context. | |
| But also, like Nathan said, I think the problem is for pre-training itself, we don't have as many long context documents as other documents. | |
| So it's harder to study how LLMs behave and things like that at that scale. | |
| There are some rules of thumb; for example, we pre-trained at like an 8K context length and then extended to 32K with further training. | |
| And there's some rules of thumb where you're just like essentially doubling the training context length takes like 2x compute. | |
| And then you can normally 2 to 4x the context length again. | |
| So I think a lot of it ends up being kind of compute bound at pre-training, which is in this, like we talked about this. | |
| Everyone talks about this big increase in compute for the top labs this year. | |
| And that should reflect in some longer context windows. | |
| But I think on the post-training side, there's some more interesting things, which is as we have agents, the agents are going to manage this context on their own. | |
| Right now, people that use Claude Code a lot dread the compaction, which is when Claude takes its entire 100,000 tokens of work and compacts it into bulleted lists. | |
| But what the next models will do, and I'm sure this is not novel, I'm sure people are already working on this, is essentially let the model control when it compacts and how. | |
| So you can essentially train your RL algorithm where compaction is an action, where it shortens the history. | |
| And then the problem formulation will be, I want to keep the maximum evaluation scores that I have gotten while the model compacts its history to the minimum length, because then you have the minimum amount of tokens that you need to do this kind of compounding auto-regressive prediction. | |
| So there's actually pretty nice problem setups in this where these agentic models learn to use their context in a different way than just plow forward. | |
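A rough, hypothetical sketch of what "compaction as an action" could look like inside an agent loop (all interfaces here are invented; the point is just that compacting sits in the action space and, with RL, could be rewarded for keeping task scores high at minimum history length):

```python
class StubPolicy:
    # Stand-in for a trained policy; with RL, choosing "compact" would be rewarded
    # when it preserves task performance while shrinking the token history.
    def choose_action(self, history):
        return "compact" if len(history) > 3 else "work"
    def summarize(self, history):
        return f"summary of {len(history)} earlier steps"
    def act(self, history):
        return f"step {len(history) + 1}: did some work"

def agent_step(history, policy):
    if policy.choose_action(history) == "compact":
        return [policy.summarize(history)]      # replace full history with a short summary
    return history + [policy.act(history)]

history = []
for _ in range(6):
    history = agent_step(history, StubPolicy())
    print(history)
```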
| One interesting, also recent example would be DeepSeek V3.2, where they had a sparse attention mechanism with essentially a very efficient, small, lightweight indexer. | |
| And instead of attending to all the tokens, it selects, okay, what tokens do I actually need? | |
| I mean, it almost comes back to the original idea of attention, where you are selective, but attention is always on. | |
| You have maybe zero weight on some of them, but you use them all. | |
| But they are even more like, okay, let's just mask that out or like not even do that. | |
| And even sliding window attention is kind of like that idea. | |
| You have that rolling window where you keep it fixed because you don't need everything all the time. | |
| Occasionally, some layers you might, but it's wasteful. | |
| But right now, I think, yeah, if you use everything, you're on the safe side. | |
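A rough NumPy sketch of the indexer idea (not DeepSeek's actual implementation; in their design the indexer is a separate lightweight module, whereas here the keys themselves are reused for the cheap scoring): score all past tokens cheaply, keep only the top-k, and attend over just those.

```python
import numpy as np

def sparse_attention(q, K, V, k=4):
    scores = K @ q                              # cheap relevance scores (the "indexer")
    keep = np.argsort(scores)[-k:]              # keep only the top-k past tokens
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()                                # softmax over the selected subset only
    return w @ V[keep]

rng = np.random.default_rng(0)
seq_len, dim = 1024, 64
q = rng.standard_normal(dim)
K = rng.standard_normal((seq_len, dim))
V = rng.standard_normal((seq_len, dim))
print(sparse_attention(q, K, V).shape)          # (64,), attending over 4 of 1024 tokens
```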
| It gives you the best bang for the buck because you never miss information. | |
| And right now, I think this year will be more also the year figuring out, like you said, how to be more smart about that. | |
| I think right now people want to have the next state of the art. | |
| And the state of the art happens to be the brute-force, expensive thing. | |
| And then once you have that, like you said, keep that accuracy, but let's see how we can do that cheaper now, like tricks. | |
| Yeah, all this scaling stuff. | |
| Like, the reason we get the Claude 4.5 Sonnet model first is because you can train it faster and you're not hitting these compute walls as soon, and they can just try a lot more things and get the model out faster, even though the bigger model is actually better. | |
| I think we should say that there's a lot of exciting stuff going on in the AI space. | |
| My mind has recently been really focused on robotics, but today we almost entirely didn't talk about robotics. | |
| There's a lot of stuff on image gen, video generation. | |
| I think it's fair to say that the most exciting research work, in terms of the amount, intensity, fervor, is in the LLM space, which is why I think it's justified for us to really focus on the LLMs that we're discussing. | |
| But it would be nice to bring in some certain things that might be useful, for example, world models. | |
| There's growing excitement on that. | |
| Do you think there would be any use in this coming year for world models in the LLM space? | |
| Yes, I do think so, also with LLMs. | |
| What's interesting here is, I think if we unlock more LLM capabilities, it also automatically unlocks all the other fields, or not unlocks, but makes progress faster, because a lot of researchers and engineers use LLMs, like we said, for coding. | |
| So even if they work on robotics, if you optimize these LLMs that help you with coding, it pays off. | |
| So world models are interesting. | |
| It's basically where you have the model run a simulation of the world, in a sense a little toy version of the real thing, which can again unlock capabilities that the LLM was not aware of. | |
| It can simulate things, and I think LLMs just happen to work well by pre-training and then doing next-token prediction. | |
| But we could make this a bit more sophisticated, in a sense. | |
| So what I'm saying is, there was a paper, I think by Meta, the Code World Model, where they basically apply the concept of world models to LLMs: instead of just having next-token prediction and verifiable rewards checking the answer correctness, they also make sure the intermediate variables are correct. | |
| It's kind of like the model is learning, basically, a code environment in a sense, and I think this makes a lot of sense. | |
| It's just expensive to do, but this is making things more sophisticated, modeling the whole thing, not just the result. | |
| And so it can add more value. | |
| I remember when I was a grad student, there's a competition called CASP, I think, where they do protein structure prediction. | |
| They predict the structure of a protein that is not solved yet at that point. | |
| In a sense, this is actually great. | |
| And I think we need something like that for LLMs also, where you do the benchmark, but no one knows the solution yet. | |
| So you hand in the results, but no one knows the answer. | |
| And then after the fact, someone reveals it. | |
| And AlphaFold, when it came out, crushed this benchmark. | |
| I mean, there were also multiple iterations. | |
| But I remember the first one, I'm not an expert in that subfield, but the first one explicitly modeled the physical interactions, the physics of the molecule, things like the angles, impossible angles. | |
| And then in the next version, I think they got rid of this. | |
| So just with brute force, scaling it up. | |
| And I think with LLMs, we are currently in this brute force scaling because it just happens to work. | |
| But I do think also at some point, it might make sense to bring back this thing. | |
| And I think with world models, I think that is where I think that might be actually quite cool. | |
| I mean, yeah. | |
| And of course, also for robotics, where that is completely unrelated to LLMs. | |
| Yeah, yeah. | |
| And robotics is very explicitly. | |
| So there's the problem of locomotion and manipulation. | |
| Locomotion is much more solved, especially in the learning domain. | |
| But there's a lot of value, just like with the initial protein folding systems, bringing in the traditional model-based methods. | |
| So it's unlikely that you can just learn the manipulation, or the whole-body loco-manipulation problem, end to end. | |
| That's the dream. | |
| But then you realize when you look at the magic of the human hand and the complexity of the real world, you realize it's really hard to learn this all the way through the way I guess AlphaFold 2 did. | |
| I'm excited about the robot learning space because I think it's collectively getting supercharged by all the excitement and investment in language models generally, where the infrastructure for training transformers, which is a general modeling thing, is becoming world-class industrial tooling, so wherever that was a limitation for robotics, it's just way better now. | |
| There's way more compute. | |
| And then on top of like they take these language models and use them as kind of central units where you can do interesting explorative work around something that kind of already works. | |
| And then I see it emerging kind of like what we talked about with Hugging Face Transformers and Hugging Face. | |
| I think when I was at Hugging Face, I was trying to get this to happen, but it was too early: these open robotic models on Hugging Face, and having people be able to contribute data and fine-tune them. | |
| I think we're much closer now, and the investment in robotics, and I think self-driving cars is related, enables this, where once you get to this sort of ecosystem, somebody can download a robotics model and maybe fine-tune it to their robot, or share data sets across the world. | |
| And there's some work in this area, like RT-X, I think, from a few years ago, where people are starting to do that. | |
| But I think once they have this ecosystem, it'll look very different. | |
| And then this whole post-ChatGPT boom is putting more resources into that, which I think is a very good area for doing research. | |
| This is also resulting in much better, more accurate, more realistic simulators being built, closing the sim to real gap in the robotic space. | |
| But, you know, you mentioned a lot of excitement in the robotics space and a lot of investment. | |
| The downside of that, which happens in hype cycles, is that I personally believe, and most robotics people believe, that robotics is not going to be solved on the time scale being implicitly or explicitly promised. | |
| And so what happens when all these robotics companies spring up and then they don't have a product that works? Then there's going to be this kind of crash of excitement, which is nerve-wracking. | |
| Hopefully something else will swoop in so that the continued development of some of these ideas keeps going. | |
| I think it's also related to the continual learning issue, essentially, where the real world is so complex. | |
| Whereas with LLMs, yeah, you don't really need to have something learn for the user, because there are a lot of things everyone has to do. | |
| Everyone maybe wants to, I don't know, fix their grammar in their email or code or something like that. | |
| It's more constrained, so you can kind of prepare the model for that. | |
| But preparing the robot for the real world, that's harder. | |
| I mean, you have the foundation models, the robotic foundation models, but you can learn certain things like grasping things. | |
| But then again, I think everyone's house is different, you know, like it's so different. | |
| And that is, I think, where the robot would have to learn on the job, essentially. | |
| And I think that, I guess, is the bottleneck right now: how to customize it on the fly, essentially. | |
| I don't think I can possibly overstate the importance of the thing that almost never gets talked about by robotics folks or anyone: safety. | |
| All the interesting complexities we talk about, learning, all the failure modes and failure cases, everything we've been talking about LLM, sometimes it fails in its interesting ways. | |
| All of that is fun and games in the LLM space. | |
| In the robotics space, in people's homes, across millions of minutes, billions of interactions, you are almost never allowed to fail. | |
| When you have embodied systems that are put out there in the real world, you just have to solve so many problems you never thought you'd have to solve when you're just thinking about the general robot learning problem. | |
| I'm so bearish on in-home learned robots for consumer purchase. | |
|
Humans in the Loop
00:09:05
|
|
| I'm very bullish on self-driving cars and I'm very bullish on robotic automation, e.g., Amazon distribution, where Amazon has built whole new distribution centers designed for robots first rather than humans. | |
| There's a lot of excitement in AI circles about AI enabling automation and like mass scale manufacturing. | |
| And I do think that the path to robots doing that is more reasonable, where it's a thing that is designed and optimized to do a repetitive task that a human could conceivably do, but doesn't want to. | |
| But it's also going to take a lot longer than people probably predict. | |
| I think that the leap from AI singularity to we can now scale up mass manufacturing in the US because we have a massive AI advantage is one that is troubled by a lot of political and other challenging problems. | |
| Let's talk about timelines. | |
| Specifically timelines to AGI or ASI. | |
| Is it fair, like as a starting point, to say that nobody really agrees on the definitions of AGI and ASI? | |
| I kind of think there's a lot of disagreement, but I've been getting pushback where a lot of people kind of say the same thing, which is a thing that could reproduce most digital economic work. | |
| So the remote worker is a fairly reasonable example. | |
| And I think OpenAI's definition is somewhat related to that, which is an AI that can do a certain amount of economically valuable tasks, which I don't really love as a definition, but I think it could be a grounding point, because language models today, while immensely powerful, are not this remote-worker drop-in. | |
| And there are things you could think of that could be done by an AI that are way harder than remote work, like finding an unexpected scientific discovery that you couldn't even posit, or taking in all medical records and finding linkages across certain illnesses that people didn't know about, or figuring out that some common drug can treat some niche cancer. | |
| They would say that is a superintelligence thing. | |
| So these are kind of natural tiers. | |
| My problem with it is that it becomes deeply entwined with like the quest for meaning of AI and this religious aspects to it. | |
| So there's kind of different, there's different paths you can take it. | |
| And I don't even know if the remote work is a good definition because what exactly is that? | |
| It's like perfect tool use. | |
| I actually, I mean, I don't know if you like the AI 2027 report, as it was originally titled. | |
| They focus more on code and research taste. | |
| So the target there is the superhuman coder. | |
| So they have several milestone systems, superhuman coder, superhuman AI researcher, then super intelligent AI researcher, and then the full ASI artificial superintelligence. | |
| But after you develop the superhuman coder, everything else falls quickly. | |
| There, the task is to have a fully autonomous, like automate coding. | |
| So any kind of coding you need to do in order to perform research is fully automated. | |
| And from there, humans would be doing AI research together with that system. | |
| And they will quickly be able to develop a system that actually can do the research for you. | |
| That's the idea. | |
| And then initially their prediction was 2027, 2028. | |
| And now they've pushed it back by three to four years, to 2031 as the mean prediction. | |
| Probably my prediction is even beyond 2031. | |
| But at least you can, in a concrete way, think about how difficult it is to fully automate programming. | |
| Yeah, I disagree with some of their presumptions and dynamics on how it would play out. | |
| But I think they did good work in the scenario, defining milestones that are concrete, and in telling a useful story, which is why the reach of this AI 2027 document well transcended Silicon Valley: they told a good story and they did a lot of rigorous work to do it. | |
| I think the camp that I fall into is that like AI is like so-called jagged, which will be excellent at some things and really bad at some things. | |
| I think that when they get close to this automated software engineer, it will be uneven: traditional ML systems and front end, the model is excellent at, but distributed ML the models are actually really quite bad at, because there's so little training data on doing large-scale distributed learning and things. | |
| And this is something that we already see. | |
| And I think this will just get amplified. | |
| And then it's kind of messier in these trade-offs. | |
| And then there's like, how do you think AI research works and so on? | |
| So you think basically superhuman coder is almost unachievable, meaning like because of the jagged nature of the thing, you're just always going to have gaps in capabilities. | |
| I think it's assigning completeness to something where the models are kind of superhuman at some types of code. | |
| And I think that will continue. | |
| And people are creative. | |
| So they'll utilize these incredible abilities to fill in the weaknesses of the models and move really fast. | |
| There will always kind of be this dance, which I've perceived for a long time, between the humans enabling the thing that the model can't do, and the best AI researchers being the ones that can enable this superpower. | |
| And I think this aligns with what we already see. | |
| With Claude Code for building a website, you can stand up a beautiful website in a few hours or do data analysis, and I don't think that goes away. | |
| It's going to keep getting better at these things, and it'll pick up some new code skills and stuff along the way. | |
| And kind of linking to what's happening in big tech: this AI 2027 report leans into the singularity idea, where I think research is messy and social and largely in the data, in ways that AI models can't process. | |
| But like what we do have today is really powerful and these tech companies are all collectively buying into this with tens of billions of dollars of investment. | |
| So we are going to get some much broader version of ChatGPT, a much better version of Claude Code than we already have. | |
| I think it's just hard to predict where that is going, but the bright clarity of that future is why some of the most powerful people in the world are putting so much money into this, and I think it's just kind of small differences between, like, we don't actually know what a better version of ChatGPT is, but also, can it automate AI research? | |
| I would say probably not, at least in this time frame. | |
| Big tech is going to spend a hundred billion dollars much faster than we get an automated AI researcher that enables an AI research singularity. | |
| So what would your prediction be? | |
| If this is even a useful milestone, we're more than 10 years out. | |
| I would say less than that on the software side, but I think longer than that on the things like research. | |
| Let's just like for fun. | |
| Try to imagine a world where all software writing is fully automated. | |
| Can you imagine that world by the end of this year? | |
| The amount of software that will be automated will be so high. | |
| But it'll be things like, you're trying to train a model with RL and you need to have multiple bunches of GPUs communicating with each other. | |
| That'll still be hard, but I think it'll be much easier. | |
| One of the ways to think about this, the full automation of programming, is to think of lines of useful code written, and the fraction of that relative to the number of humans in the loop. | |
| So presumably, for a long time there will be humans in the loop of software writing; there will just be fewer and fewer relative to the amount of code written. | |
| And for the superhuman coder, I think the presumption there is that the number of humans in the loop goes to zero. | |
| What does that world look like when the number of humans in the loop is in the hundreds, not in the hundreds of thousands? | |
| I think software engineering will be driven more toward system design and the goals of outcomes. | |
| I think this has been happening over the last few weeks, where people have gone from, a month ago, "oh, AI agents are kind of slop," which is a famous Karpathy quote, to what is a little bit of a meme, the industrialization of software, when anyone can just create software at their fingertips. | |
| Do you think we are closer to that side of things? | |
| And it takes direction, and understanding how the systems work, to extract the best from the language models. | |
| And I think it's hard to accept the gravity of how much is going to change with software development and how many more people can do things without ever looking at the code. | |
| I think what's interesting is to think about whether these systems will be independent, like completely independent, in the sense that, well, I have no doubt that LLMs will kind of at some point solve coding in a sense like calculators solve calculating, right? | |
| So at some point, humans developed a tool that you never need a human to calculate that number. | |
| You just type it in and it's an algorithm. | |
|
Intermediate AI Integration
00:13:32
|
|
| You can do it in that sense. | |
| And I think that's the same probably for coding. | |
| But the question is, so I think what will happen is, yeah, you will just say build that website. | |
| It will make a really good website and then you maybe refine it. | |
| But will it do things independently, or will you still have humans asking the AI to do something? | |
| Like, will there be a person say build that website? | |
| Or will there be AI that just builds websites or something or whatever? | |
| I think using talking about building websites is too simple. | |
| The problem with websites, and the problem with the web, HTML and all that kind of stuff, is that it's very resilient to slop. | |
| It will show you slop just as well as it shows good content. | |
| I would rather think of safety-critical systems, like asking AI to end-to-end generate something that manages logistics or manages a fleet of cars, all that kind of stuff. | |
| So end-to-end generates that for you. | |
| I think a more intermediate example is take something like Slack or Microsoft Word. | |
| I think if the organizations allow it, AI could very easily implement features end-to-end and do a fairly good job for like things that you want to try. | |
| You want to add a new tab in Slack that you want to use. | |
| And I think AI will be able to do that pretty well. | |
| Actually, that's a really great example. | |
| How far away are we from that? | |
| Like this year. | |
| See, I don't, I don't know. | |
| I don't know. | |
| I don't know how bad production code bases are, but I think that within like on the order of low years, a lot of people are going to be pushed to be more of like a designer and product manager where you have multiple of these agents that can try things for you and they might take one to two days to implement a feature or attempt to fix a bug. | |
| And you have these dashboards, which I think Slack is actually a good dashboard where your agents will talk to you and you'll then give feedback. | |
| But things like, I make a website, and you want to make a logo that's passable? | |
| I think these cohesive design things and this style are going to be very hard for models, as is deciding what to add next time. | |
| I just, okay, so I hang out with a lot of programmers, and some of them are a little bit on the skeptical side in general. | |
| That's just vibe-wise, they're like that. | |
| I just think there's a lot of complexity involved in adding features to complex systems. | |
| Like if you look at the browser, Chrome: if I wanted to add a feature, if I wanted to have tabs on the left side as opposed to up top. | |
| The interface, right? | |
| I think we're not, this is not a next year thing. | |
| One of the Claude releases this year, one of their tests was: we give it a piece of software and leave Claude to run to recreate it entirely. | |
| And it could already almost rebuild Slack from scratch, just given the parameters of the software and left in a sandbox environment. | |
| The from-scratch part, I almost like better. | |
| So it might be that the smaller, newer companies are advantaged and they're like, we don't have to have the bloat and complexity. | |
| And therefore, this feature exists. | |
| And I think this gets to the point that you mentioned that some people you talk to are skeptical. | |
| And I think that's not because the LLM can't do XYZ. | |
| It's because people don't want it to do it this way. | |
| Some of that could be a skill issue on the human side, unfortunately, we have to be honest with ourselves. | |
| And some of that could be an under-specification issue. | |
| So with programming, you're just assuming, and this is like a communication issue in relationships and friendships, you're assuming the LLM somehow is supposed to read your mind. | |
| I think this is where spec-driven design is really important. | |
| Like, you just using natural language, specify like what you want. | |
| I think that's like if you talk to people at the labs, they use these in their training and production code. | |
| Like, Claude Code is built with Claude Code, and they all use these things extensively. | |
| And Dario talks about how much of the code Claude owns. | |
| And these people are slightly ahead in terms of the capabilities they have, and in what they probably spend on inference. | |
| They could spend 10 to 100 plus X as much as we're spending. | |
| Like, we're on a lowly $100 or $200 a month plan. | |
| Like, they truly let it rip. | |
| And with the pace of progress that we have, it seems like, well, a year ago we didn't have Claude Code and we didn't really have reasoning models. | |
| And it's like the difference between sitting here today and what we can do with these models. | |
| And it seems like there's a lot of low-hanging fruit to improve them. | |
| The failure modes are pretty dumb. | |
| It's like, Claude, you tried to use the CLI command I don't have installed 14 times and then I sent you the command to run. | |
| It's like that thing from a modeling perspective is pretty fixable. | |
| So I agree with you. | |
| I've been becoming more and more bullish in general. | |
| Speaking to what you're articulating, I think it is a human skill issue. | |
| So Anthropic, or other companies, is leading the way in understanding how to best use the models for programming, and therefore they're using them effectively. | |
| I think there's a lot of programmers on the outskirts. | |
| They're like, they don't, I mean, there's not a really good guide on how to use them. | |
| People are trying to figure it out exactly. | |
| It might be very expensive. | |
| Like it might be that the entry point for that is $2,000 a month, which is only tech companies and rich people; that could be it. | |
| But it might be worth it. | |
| I mean, if the final result is a working software system, it might be worth it. | |
| But by the way, it's funny how we converge from the discussion of timeline to AGI to something more pragmatic and useful. | |
| Is there anything concrete and interesting and useful and profound to be said about timeline to AGI and ASI? | |
| Or are these discussions a bit too detached from the day-to-day? | |
| There's interesting bets. | |
| So there's a lot of people trying to do reinforcement learning with verifiable rewards, but in real scientific domains where there's startups that are spending like they have hundreds of millions of dollars of funding and they have wet labs where they're having language models propose hypotheses that are tested in the real world. | |
| And I would say that I think they're very early or they're early, but with the pace of progress, it's like maybe they're early by six months and they make it because they were there first, or maybe they're early by eight years and you don't really know. | |
| So I think that that type of moonshot to branch this momentum into other sciences is like, okay, like that would be very transformative if like alpha fold moments happen in all sorts of other scientific domains by like a startup solving this. | |
| I think there are startups, I think maybe Harmonic is one where they're going all in on language models plus lean for math. | |
| I think you had another podcast guest where you talked about this recently. | |
| And it's like, we don't know exactly what's going to fall out of spending $100 million on that model. | |
| And most of them will fail, but a couple of them might be big breakthroughs that are very different than ChatGPT or Cloud Code type software experiences. | |
| Like a tool that's only good for a PhD mathematician, but makes them 100x effective. | |
| Okay, I agree. | |
| I think this will happen in a lot of domains, especially also like domains that have a lot of resources like finance and legal and pharmaceutical companies. | |
| But then again, is it really AGI again? | |
| Because we are now specializing it again. | |
| And then again, is it really that much different from back in the day how we had specialized algorithms? | |
| I think it's just the same thing, more way more sophisticated. | |
| But I don't know, is there a threshold when we call it AGI? | |
| I guess I think the real cool thing is here that we have the foundation models that we can specialize. | |
| I think that that's like the breakthrough at some point. | |
| Right now, I think we are not there yet because, well, first, it's too expensive, but also, you know, like ChatGPT doesn't just give away that ChatGPT to customize it. | |
| I think once that's going to be true in some way, and I think I can imagine this as a business model that ChatGPT, OpenAI says at some point, like, hey, you know, Bank of America, for 100 million, we will do your custom model or something like that. | |
| And I think that will be the huge economic value add. | |
| The other thing, though, is also companies. | |
| I mean, right now, what is the differentiating factor? | |
| I mean, if everyone uses the same LLM, if everyone uses ChatGPT, they will all do the same thing again. | |
| I mean, then, well, everyone is moving in lockstep, but usually companies, they want to have a competitive advantage. | |
| And I think there's then no way around using some of their private data and experimenting and maybe specializing. | |
| It's going to be interesting, yeah. | |
| Sitting in the pace of progress, it does just feel like things are coming. | |
| I don't think the AGI and ASI thresholds are particularly useful. | |
| I think, I guess, the real question, and this takes us to the remote worker thing, is when are we going to see a big, obvious leap in economic impact? | |
| Because currently, there's not been an obvious leap in economic impact of LL models, for example. | |
| And that's, you know, aside from AGI or ASI or all that kind of stuff, there's a real question of, when are we going to see a GDP jump? | |
| Yeah, it's like, what is the GDP made up of? | |
| Like, a lot of it is like financial services. | |
| So, like, I don't, I don't know what this is. | |
| It's just hard for me to think about the GDP bump, but like, I'd say that software development becomes valuable in a different way when you no longer have to look at the code anymore. | |
| So, when it's like, Claude will make you a small business, which is essentially Claude can set up your website, your bank account, your email, and whatever else. | |
| And like, you just have to express like what you're trying to put into the world. | |
| That's not just an enterprise market, but it is hard; I don't know how you get people to try doing that. | |
| I guess if ChatGPT can do it, like, people are trying ChatGPT. | |
| I think it boils down to the scientific question of how hard is tool use to solve. | |
| Because a lot of the stuff you're applying, the remote work stuff is tool use. | |
| It's like computer use: how you have an LLM that goes out there, this agentic system, and does something in the world and only screws up 1% of the time. | |
| Computer use is a good example of what labs care about, and we haven't seen a lot of progress on. | |
| We saw multiple demos in 2025 of, like, Claude can use your computer, or OpenAI had CUA, and they all suck. | |
| So, like, they're also investing money in this. | |
| And I think that will be a good example. | |
| That's actually something where taking over the whole screen just seems a lot harder than having an API that they can call in the back end. | |
| And some of that is you have to then set up a different environment for the model to work in. | |
| Like, they're not working on your MacBook. | |
| They are individually interfacing with Google and Amazon and Slack, and they handle all these things in a very different way than humans do. | |
| So, some of this might be structural blockers. | |
| Also, specification-wise, I think the problem is also for arbitrary tasks. | |
| Well, you still have to specify what you want your LLM to do, and how do you do that, and what is the environment? | |
| How do you specify it? | |
| You can say what the end goal is, but if it falls short of the end goal, with LLMs, if you ask for text, you can always clarify, do sub-steps. | |
| How do you put that information into a system that, let's say, books a trip for you? | |
| You can say, well, you screwed up my credit card information, but even to get it to that point, how do you as a user guide the model before it can even attempt that? | |
| I think the interface is really hard. | |
| Yeah, it has to learn a lot about you specifically. | |
| And this goes back to continual learning, about the general mistakes that are made throughout, and then the mistakes that are made with you specifically. | |
| All the AI interfaces are getting set up to ask humans for input. | |
| I think Cloud Code, we talk about a lot. | |
| It asks for feedback if it doesn't have enough specification on your plan or your desired outcome. | |
| It starts to ask questions. | |
| Would you rather? | |
| We talked about memory, which saves things across chats, and whose first implementation is kind of odd, where it'll mention my dog's name or something in the chat. | |
| I'm like, you didn't need to be subtle about this. | |
| Like, I don't care. | |
| But the things that are emerging are ChatGPT has the pulse feature, which is like a curated couple of paragraphs with links to something to look at or to talk about. | |
| And people talk about how the language models are going to ask you questions, which I think is probably going to work. | |
| The language model is like, it knows you had a doctor appointment or something. | |
| It's like, hey, how are you feeling after that? | |
| Which is like, again, goes into the territory of humans are very susceptible to this and there's a lot of social change to come. | |
| But also like they're experimenting with having the models engage. | |
| Some people really like this pulse feature, which is it processes your chats and automatically searches for information and puts it in the ChatGPT app. | |
| So there's a lot of things coming. | |
| I used that feature before, and I always feel bad because it does that every day and I rarely check it. | |
| It's like, how much money, how much compute, is burned on something I don't even look at? | |
| There's also a lot of idle compute in the world, so don't feel too bad. | |
| Okay. | |
|
Models Engage, Heat Dissipates
00:06:42
|
|
| Do you think new ideas might be needed? | |
| Is it possible that the path to AGI, whatever that is, however we define it, to solving computer use, and more generally to solving biology and chemistry and physics, sort of the Dario definition of AGI or powerful AI... | |
| Do you think it's possible that totally new ideas are needed? | |
| Non-LLM, non-RL ideas. | |
| What might they look like? | |
| We're now going into philosophy land a little bit. | |
| For something like a singularity to happen, I would say yes. | |
| And the new ideas can be architectures or training algorithms, which is like fundamental deep learning things. | |
| But those are by their nature pretty hard to predict. | |
| But I think we will get very far even without those advances. | |
| Like we might get this software solution, but it might stop at software and not do computer use without more innovation. | |
| So I think a lot of progress is still coming. | |
| But if you zoom out, there are still ideas coming in the next 30 years that will look like the major scientific innovation that enabled the next chapter of this. | |
| And I don't know if it comes in one year or in 15 years. | |
| Yeah. | |
| I wonder if the bitter lesson holds true for the next hundred years, what that looks like. | |
| If scaling laws are fundamental in deep learning, I think the bitter lesson will always apply, which is compute will become more abundant. | |
| But even within abundant compute, it's the ones that have a steeper scaling law slope or a better offset; this is a 2D plot of performance versus compute. | |
| And like even if there's more compute available, the ones that get 100x out of it will win. | |
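To make the slope-and-offset point concrete, here is a minimal sketch with made-up numbers (the power-law form is the standard scaling-law shape; none of the constants come from the conversation): two hypothetical models, where the one with the steeper exponent and lower offset starts out behind but overtakes the other as compute grows.

```python
import numpy as np

# Illustrative only: two hypothetical models following power-law scaling,
#   loss(C) = offset + a * (C / 1e18) ** (-alpha)
# The constants are made up; the point is that a steeper slope (larger alpha)
# and a lower offset win once compute gets large enough.
compute = np.logspace(18, 26, 400)  # training FLOPs

def loss(c, a, alpha, offset):
    return offset + a * (c / 1e18) ** (-alpha)

loss_a = loss(compute, a=1.5, alpha=0.10, offset=1.7)  # shallower curve
loss_b = loss(compute, a=2.5, alpha=0.18, offset=1.6)  # steeper curve, better offset

# Model A is better at small scale, but model B overtakes it as compute grows.
crossover = compute[np.argmax(loss_b < loss_a)]
print(f"B overtakes A at roughly {crossover:.1e} FLOPs")
```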
| It might be something like literally computer clusters orbiting Earth with solar panels. | |
| The problem with that is heat dissipation. | |
| So you get all the radiation from the sun and you don't have any air to dissipate heat. | |
| But there is a lot of space to put clusters. | |
| There's a lot of solar energy there and you could figure out the heat dissipation. | |
| But there is a lot of energy and there probably could be engineering will to solve the heat problem. | |
| So there could be. | |
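For a rough sense of scale on that cooling problem (a back-of-the-envelope sketch with assumed numbers, not anything from the conversation): in vacuum the only way to shed heat is to radiate it, so the radiator area follows from the Stefan-Boltzmann law.

```python
# Back-of-the-envelope radiator sizing for an orbital compute cluster.
# All inputs are illustrative assumptions.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
emissivity = 0.9          # assumed radiator coating
radiator_temp_k = 300.0   # assume electronics kept near room temperature
heat_to_reject_w = 1e6    # assume a 1 MW cluster (small by data-center standards)

# In vacuum, radiation is the only heat path: P = emissivity * sigma * A * T^4,
# ignoring sunlight absorbed by the radiator itself.
flux = emissivity * SIGMA * radiator_temp_k ** 4   # W per m^2 of radiator
area_m2 = heat_to_reject_w / flux
print(f"~{flux:.0f} W/m^2 radiated, so roughly {area_m2:.0f} m^2 of radiator per MW")
```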
| Is it possible, and we should say that it definitely is possible, the question is how likely, that we're basically going to plateau this year? | |
| Not in terms of the system capabilities, but what the system capabilities actually mean for human civilization. | |
| So on the coding front, really nice websites will be built. | |
| Very nice autocomplete. | |
| Very nice way to understand code bases and maybe help debug, but really just a very nice helper on the coding front. | |
| It can help research mathematicians do some math. | |
| It can help you with shopping. | |
| It can help you with all kinds of things. | |
| It's a nice helper. | |
| It's Clippy on steroids. | |
| What else? | |
| It may be a good education tool and all that kind of stuff. | |
| But computer use turns out extremely difficult to solve. | |
| So I'm trying to frame the cynical case where, in all of these domains, there's not a really huge economic impact. | |
| We realize how costly it is to train these systems at every level, both the pre-training and the inference, how costly the inference is, the reasoning, all of that. | |
| Like, is that possible? | |
| And how likely is that, do you think? | |
| When you look at the models, there are so many obvious things to improve. | |
| And it takes a long time to train these models and to do this art. | |
| And it'll take us, with the ideas that we have, multiple years to actually saturate whatever benchmark or performance we are searching for. | |
| It might serve very narrow niches. | |
| Like the average ChatGPT user, and there are many millions of them, might not get a lot of benefit out of this, but it is going to serve different populations by getting better at different things. | |
| But I think what everybody's chasing now is a general system that's useful to everybody. | |
| So, okay, so if that's not, that can plateau, right? | |
| I think that dream is actually kind of dying. | |
| As you talked about with the specialized models, and multimodal, where video generation is often a totally different thing. | |
| That dream is kind of dying is a big statement. | |
| Because I don't know if it's dying. | |
| I don't know; if you ask the actual frontier lab people, I mean, they're still chasing it, right? | |
| I do think they are still rushing to get the next model out, which will be much better, not just in a relative sense, but better than the previous one. | |
| And I can't see them slowing down. | |
| I just think the gains will be made or felt not only through scaling the model. | |
| So far it's been, well, let's just put the better model in there, and a better model, and a better model. | |
| And now people are saying, okay, let's also improve everything around it at the same time. | |
| Like the engineering of the context and inference-time scaling. | |
| And the big labs will still keep doing that. | |
| And now also the smaller labs will catch up to that because now it's just like they are hiring more. | |
| There will be more people. | |
| LLMs, it's kind of like a circle: they also make them more productive. | |
| And it's just amplifying. | |
| I think what we can expect is amplification, but not a paradigm change. | |
| I think that is true, but everything will be just amplified and amplified and amplified. | |
| And I can see that continuing for a long time. | |
| Yeah, I guess my statement that the dream is dying depends on exactly what you think it's going to be doing. | |
| Like, Claude Code is a general model that can do a lot of things, but it depends a lot on integrations and other things. | |
| Like, I bet Claude Code can do a fairly good job of doing your email. | |
| And the hardest part is figuring out how to give the information to it and how to get it to be able to send your emails and stuff like this. | |
| But that's just kind of like, I think it goes back to like what is the one model to rule everything ethos, which is just like a thing in the cloud that handles your entire digital life and is way smarter than everybody. | |
| It's an interesting leap of faith to go from Claude Code to it becoming that, which in some ways it is. | |
| There are some avenues for that, but I do think that the rhetoric of the industry is a little bit different. | |
| I think the next immediate thing we will feel as normal people using LLMs will probably be related to something almost trivial, like making figures. | |
|
Making Knowledge Accessible
00:04:13
|
|
| LLMs are terrible at making figures. | |
| Is it because we are getting served the cheap models, with less inference compute than what's behind the scenes? | |
| Maybe there are some cranks we can turn to already get better figures. | |
| But if you ask it today to, I don't know, draw a flowchart of XYZ, it's most of the time terrible. | |
| And it is a very simple task for a human. | |
| I think it's almost easier sometimes to draw something than to write something. | |
| Yeah, the multimodal understanding does feel like something that is odd that it's not better solved. | |
| I think there's one actually obvious thing that we're not saying, that we're not really registering. | |
| It's a gigantic thing that's hard to measure, which is making all of human knowledge accessible to the entire world. One of the things I think is hard to articulate is that there's just a huge difference between Google Search and an LLM; I feel like I can basically ask an LLM anything and get an answer. | |
| And it's doing less and less hallucination. | |
| And that means understanding my own life, figuring out a career trajectory, figuring out how to solve the problems all around me, learning about anything throughout human history. | |
| I feel like nobody's really talking about that, because they just immediately take it for granted; this is awesome, and that's why everybody's using it, because you get answers for stuff. | |
| And the impact of that across time; think about it, this is not just in the United States, it's all across the world, kids throughout the world being able to learn these ideas. | |
| The impact that has across time is probably where the real value is; talking about GDP, it won't be like a leap. | |
| That's how we get to Mars, that's how we build these things, that's how we have a million new OpenAIs, all the kind of innovation that happens from there. | |
| And that's just this quiet force that permeates everything, right? | |
| Human knowledge. | |
| I do agree with you. | |
| And in a sense, it makes knowledge more accessible. | |
| But it also, I think, depends on what the topic is. | |
| For something like math, in a sense, you can ask it questions, it answers. | |
| But if you want to learn a topic from scratch, I think that again, like we talked about this earlier, I think the sweet spot is, I mean, there are really good math textbooks where someone laid it out linearly, and that is like a, let's say, proven strategy to learn this topic. | |
| And it does make sense, if you start from zero, to ramp up with an information-dense text and soak it up. | |
| But then you use the LLM to make infinite exercises. | |
| Like you have problems in a certain area or have questions, something's uncertain, or like you are uncertain about certain things. | |
| You ask it to generate example problems. | |
| You solve them and you have questions. | |
| And then maybe you need more background knowledge and you ask it to generate that. | |
| And I think, but then it won't give you anything, let's say, that is not in the textbook. | |
| It's just packaging it differently, if that makes sense. | |
| But then there are things I feel like where it also adds value in a more, I mean, timely sense, where there is no good alternative besides a human doing it on the fly. | |
| For example, if you, I don't know, like, let's say you're planning to go to Disneyland and you try to figure out which tickets to buy for which park when, well, there is no textbook on that. | |
| There is no information dense resource on that. | |
| There's only the sparse internet. | |
| And then there is a lot of value in the LLM. | |
| You just ask it. | |
| You have the constraints. | |
| I'm traveling these days. | |
| I want to go there and there. | |
| Please figure out what I need, when and from where, what it costs and stuff like that. | |
| And it is a very customized, on-the-fly package. | |
| And this, like the exercise example, is one of a thousand examples; personalization is essentially pulling information from the sparse internet, the non-information-dense thing, where no better version exists. | |
| It just doesn't exist. | |
| You make it from scratch almost. | |
|
Proposition Ads for Dominance
00:04:17
|
|
| And if it does exist, it's full of, speaking of Disney world, like full of, what would you call it, ad slop. | |
| Like you just, it's impossible. | |
| Here, you go to any city in the world. | |
| What are the top 10 things to do? | |
| LLM is just way better to ask than anything on the internet. | |
| Well, for now. | |
| That's because they're massively subsidized and they're going to be paid for by ads. | |
| I look forward to it. | |
| It's coming. | |
| Oh, I hope, I mean, I'm hoping there's a very clear indication of what's an ad and what's not an ad in that context. | |
| I did a little, I mean, that's something I mentioned a few years ago. | |
| It's like, I don't know, if you are looking for a new running shoe, is this a coincidence that Nike maybe comes up first? | |
| Maybe, maybe not. | |
| But I think there are clear laws around this. | |
| You have to be clear about that. | |
| But I think that's what everyone fears. | |
| It's like the subtle, you know, subtle message in there or something like that. | |
| But it also brings us to the topic of, I guess, ads, where I think this was the thing OpenAI tried to launch in 2025. | |
| Because I think it's still not making money the other way right now. | |
| So, having real ad spots in there. | |
| But the thing is, they couldn't, because, well, there are alternatives without ads and people would just flock to the other products. | |
| And it also is just like crazy how, yeah, like they're one-upping each other, spending so much money to just get the users. | |
| I think so. | |
| Like some Instagram ads, I don't use Instagram, but I understand the appeal of paying a platform to find users who will genuinely like your product. | |
| And that is the best case of things like Instagram ads. | |
| But there are also plenty of cases where advertising is very awful for incentives. | |
| And I think that a world where the power of AI can integrate with that positive view of like, I am a person and I have a small business and I want to make the best, I don't know, damn steak knives in the world. | |
| And I want to sell them to somebody who needs them. | |
| And if like, if AI can make that sort of advertising thing work even better, that's very good for the world, especially with like digital infrastructure, because that's how the modern web has been built. | |
| But that's not to say that addictive feeds, built so that you can show people more content, are a good thing. | |
| So I think that's even what OpenAI would say: they want to find a way to capture the monetization upside of ads while still giving their users agency. | |
| And I personally would think that Google is probably going to be better at figuring out how to do this, because they already have ad supply, and once they figure out how to turn the demand in their Gemini app into useful ads, then they can turn it on. | |
| And somebody will figure it out. | |
| I don't know if I think it's this year, but there will be experiments with it. | |
| I do think what holds companies back right now is really just that the competition is not doing it. | |
| It's more like a reputation thing. | |
| It's just like, I think people are just afraid right now, like ruining or losing the reputation, losing users, because it would make headlines if someone launched these ads. | |
| Unless they were great. | |
| But the first ads won't be great because it's a hard problem that we don't know how to solve. | |
| Yeah, I think also the first version of that will likely be something like on X, like the timeline where you have a promoted post sometimes in between. | |
| It will be something like that, where it will say 'promoted' or something small, and then there will be an image or something. | |
| I think right now the problem is who makes the first move. | |
| If we go 10 years out, the proposition for ads is that you will make so much money on ads, by having so many users, that you can use this to fund better R&D and make better models, which is why YouTube is dominating the market; Netflix is scared of YouTube. | |
| They have the ads; I pay $28 a month for Premium. | |
| They make at least $28 a month off of me and many other people. | |
| And they're just like creating such a dominant position in video. | |
| So I think that's the proposition, which is that ads can make you have a sustained advantage in what you're spending per user. | |
| But there's so much money in it right now that it's like somebody starting that flywheel is scary because it's a long-term bet. | |
|
Mark Zuckerberg's AI Insights
00:13:12
|
|
| Do you think there'll be some like crazy big moves this year, business-wise? | |
| Like somebody like Google or Apple acquiring Anthropic or something like this. | |
| Dario will never sell, but we are starting to see some types of consolidation, with Groq for $20 billion and Scale AI for almost $30 billion and countless other deals like this, structured in a way that is actually detrimental to the Silicon Valley ecosystem: this sort of licensing deal where not everybody gets brought along, rather than a full acquisition that benefits the rank-and-file employee by getting their stock vested. | |
| Like that's a big issue for Silicon Valley culture to address, because the startup ecosystem is the lifeblood, where if you join a startup, even if it's not that successful, your startup very well might get acquired at a premium and you'll get paid out for this equity. | |
| And these licensing deals are essentially taking the top talent a lot of the times. | |
| I think the Groq deal with NVIDIA is rumored to be better for the employees, but it is still this antitrust-avoiding structure. | |
| But I think that this trend of consolidation will continue. | |
| I've been, me and many smart people I respect have been expecting consolidation to have happened sooner. | |
| But it seems like some of these things are starting to turn, while at the same time you have companies raising ridiculous amounts of money for reasons where I'm like, I don't know why you're taking that money. | |
| So it's maybe like mixed this year, but some consolidation pressure is starting. | |
| What kind of surprising consolidation do you think we'll see? | |
| So you're saying Anthropic is a never. | |
| I mean, Groq is a big one. | |
| Groq with a Q, by the way. | |
| Yeah. | |
| There's just a lot of startups and there's a very high premium on AI startups. | |
| So there's a lot of like, there could be a lot of stuff, 10 billion range acquisitions, which is a really big acquisition for a startup that was maybe founded like a year ago. | |
| I think Manus AI from this company that's based in Singapore that Meta founded was founded eight months ago and then had a $2 billion exit. | |
| And I think that there will be some other big, multi-billion-dollar acquisitions. | |
| Yeah, people rumored them going to Apple. | |
| I think there's a lot of pressure and liquidity in AI. | |
| There's pressure on big companies to have outcomes. | |
| And I would guess that a big acquisition gives people leeway to then tell the next chapter of that story. | |
| I mean, yeah, I guess Cursor. | |
| We've been talking about code, and somebody acquires Cursor. | |
| They're in such a good position by having so much user data. | |
| Yeah. | |
| And we talked about continual learning and stuff. | |
| They had one of the most interesting, like two sentences in a blog post, which is that they had their new composer model, which was a fine-tune of one of these large mixture of expert models from China. | |
| You can know that from gossip, or because the model sometimes responds in Chinese, which none of the American models do. | |
| And they had a blog post where they're like, we're updating the model weights every 90 minutes based on real world feedback from people using it, which is like the closest thing to real world RL happening on a model. | |
| And it's just like in one of their blog posts, which is super cool. | |
| And by the way, I should say I use Composer a lot because one of the benefits it has is it's fast. | |
| I need to try it because everybody says this. | |
| And there'll be some IPOs potentially. | |
| You think Anthropic, OpenAI, XAI? | |
| They can all raise so much money so easily that they don't feel a need to like so long as fundraising is easy, they're not going to IPO because public markets apply pressure. | |
| I think we're seeing in China that the ecosystem is a little different, with both MiniMax and Z.ai filing IPO paperwork, which will be interesting to see how the Chinese market reacts. | |
| I actually would guess that it's going to be like similarly hypey to the US. | |
| And so long as all this hype keeps going, it's not based on the realities that they're both losing a ton of money. | |
| I wish more of the American gigantic AI startups were public because it would be very interesting to see how they're spending their money and have more insight. | |
| And also just to give people access to investing in these, because I think they're some of the most formidable; they're the companies of the era. | |
| And the tradition is now for so many of the big startups in the US to not go public. | |
| It's like we're still waiting for the Stripe IPO, but Databricks definitely didn't. | |
| They raised like a Series G or something. | |
| And I just feel like it's a kind of a weird equilibrium for the market where it's like, I would like to see these companies go public and evolve in that way that a company can. | |
| You think 10 years from now, some of the frontier model companies are still around? | |
| Anthropic, OpenAI. | |
| I definitely don't see it being winner-takes-all, unless there truly is some algorithmic secret, some flywheel that one of them finds, because the development path is so similar for all of them. | |
| Google and OpenAI have like all the same products and then like Anthropic's more focused. | |
| But when you talk to people, it sounds like they're solving a lot of the same problems. | |
| So I think, and there's offerings that'll spread out. | |
| There's a lot of, it's a very big cake that's being made that people are going to take money out of. | |
| I don't want to trivialize it, but so OpenAI and Anthropic are primarily LLM service providers. | |
| And some of the other companies, like Google, and XAI linked to X, do other stuff too. | |
| And so it's very possible if AI becomes more commodified that the companies that are just providing LLM will die. | |
| I think they will. | |
| The advantage they have, they have a lot of users and I think they will just pivot. | |
| I think then if they figure out, so like Anthropic, I think, pivoted. | |
| I don't think they originally planned to work on code, but it happened that they found, okay, this is like a nice niche and now we are comfortable in this niche and we push on this niche. | |
| And I can see the same thing once maybe, let's say, hypothetically speaking, I'm not sure if it will be true, but let's say Google takes all the market share of the general chatbot. | |
| Maybe OpenAI will be then focused on some other topic. | |
| They have too many users to go away in foreseeable future, I think. | |
| I think Google is always ready to say, hold my beer with AI mode. | |
| I think that the question is if the companies can support the valuations. | |
| I think I'd see the AI companies being looked at in some ways, like AWS, Azure, and GCP are all competing in the same space and all very successful businesses. | |
| There's a chance that the API market is so unprofitable that they go up and down the stack to products and hardware. | |
| They have so much cash that they can build power plants and build data centers, which is a durable advantage now. | |
| But there's also just a reasonable outcome that these APIs are so valuable and so flexible for developers that they become something like AWS. | |
| But AWS and Azure are also going to have these APIs. | |
| So that's five or six players competing in the API market, which is hard. | |
| So maybe that's why they get squeezed out. | |
| You mentioned RIP Llama. | |
| Is there a path to winning for Meta? | |
| I think nobody knows. | |
| They're moving a lot. | |
| So they're signing licensing deals with Black Forest Labs, which is an image generation company, or Midjourney, and others. | |
| So I think in some ways it's on the product and like consumer-facing AI front. | |
| It's too early to tell. | |
| I think they have some people that are excellent and very motivated being close to Zuckerberg. | |
| So I think that there's still a story to unfold there. | |
| Llama is a bit different where Llama was the most focused expression of the organization. | |
| And I don't see Llama being supported to that extent. | |
| I think it was a very successful brand for them. | |
| So they still might participate in some part of the open ecosystem, or continue the Llama brand into a different service, because people know what Llama is. | |
| You think there's a Llama 5? | |
| Not an open weight one. | |
| It's interesting. | |
| I think also, just to recap a bit, Llama was, I would say, the pioneering open weight model. | |
| And then Llama 1, 2, and 3 got a lot of love. | |
| But I think what happened, just hypothesizing or speculating, is that the leaders at Meta, the upper executives, got really excited about Llama because they saw how popular it was in the community. | |
| And then I think the problem was trying to, let's say, monetize the open source, not monetize the open source, but like kind of use the open source to make a bigger splash in a sense, like to kind of force it. | |
| It almost felt forced, developing these very big Llama 4 models to be at the top of the benchmarks. | |
| But I don't think the goal of Llama models is to be on top of the benchmarks, beating, let's say, ChatGPT or other models. | |
| I think the goal was to have a model that people can use, trust, modify, understand it. | |
| So that includes having smaller models. | |
| They don't have to be the best models. | |
| And what happened was the benchmarks suggested these models were better than they were, because I think they had specific models trained on preferences so that they performed well on the benchmark. | |
| It's kind of like this overfitting thing to kind of force it to be the best, but then at the same time, well, they didn't do the small models that people could use. | |
| And I think that no one could run these big models then. | |
| And then there was kind of like a weird thing. | |
| And I think it's just because people got too excited about headlines pushing the frontier. | |
| I think it was too much on the benchmark-maxing side. | |
| Yeah, I think it imploded under political pressure, like internal political fighting and misaligned incentives. | |
| So I think the researchers want to build the best models, but there's a layer of organization and management that is trying to demonstrate that they do these things. | |
| And then there are lots of pieces and rumors about how some horrible technical decision was made and how that comes in. | |
| And it just seems like it kind of got too bad where it all just crashed out. | |
| We should also give huge props to Mark Zuckerberg. | |
| I think it comes from Mark actually. | |
| From Mark Zuckerberg, from the top of the leadership, saying open source is important. | |
| I think the fact that that exists means there could be a Llama 5 where they learn the lessons from the benchmark-maxing and say, we're going to be like GPT-OSS and provide a really awesome library of open source. | |
| What people say is that there's a debate between Mark and Alexandr Wang, who is very bright, but much more against open source. | |
| And to the extent that he has a lot of influence over the AI org, it seems much less likely, because it seems like Mark brought him in as fresh leadership to aid in directing AI. | |
| And if open or closed is no longer the defining nature of the model, I don't expect that to be a defining argument between Mark and Alex. | |
| They're both very bright, but I have a hard time understanding all of it, because Mark wrote this piece in July of 2024, maybe, which was probably the best blog post at the time making the case for open source AI. | |
| And then July 2025 came around and it was like we're re-evaluating a relationship with open source. | |
| So it's just kind of like. | |
| But I think also, well, we may have been a bit too harsh. | |
| And that caused some of it: even though the model was maybe not what everyone hoped for, we as open source developers, the open source community, gave it a lot of backlash. | |
| And I think that was a bit unfortunate because I can see that as a company, now they were hoping for positive headlines. | |
| And instead of just getting no headlines or not these positive headlines, in turn, they got negative headlines. | |
| And then all it kind of reflected bad on the company. | |
| And I think that is also maybe a spite reaction, almost like, okay. | |
| We tried to do something nice, we tried to give you something cool like an open source model, and now you are being negative about us, about the company. | |
| So in that sense it looks like, well, maybe then we'll change our mind. | |
| I guess, I don't know. | |
| Yeah, that's where the dynamics of discourse on X can lead us as a community astray, because sometimes it feels random. | |
| People pick the thing they like and don't like. | |
| And you can see the same thing with Grok 4.1 and Grok Code Fast 1. | |
| I don't think, vibe-wise, people love it publicly, but a lot of people use it. | |
| So if you look at Reddit and X, the programming community doesn't really give it praise, but they use it. | |
| And probably the same thing with Llama. | |
| I don't understand. | |
| I don't understand the dynamics of either positive hype or negative hype. | |
| I don't understand it. | |
| I mean, one of the stories of 2025 is the U.S. feeling the gap left by Llama, which is the rise of these Chinese open weight models, to the point where that was the single issue I've spent a lot of energy on in the last five months: trying to do policy work to get the U.S. to invest in this. | |
|
U.S. vs Chinese Open Models
00:15:20
|
|
| Tell me the story of Atom. | |
| The Atom Project started as me calling it the American DeepSeek Project, which doesn't really work for DC audiences, but it's the story of, what is the most impactful thing I could do with my career? The Chinese open weight models are cultivating a lot of power, and there is a lot of demand for building on these open models, especially in enterprises in the U.S. that are very cagey about these Chinese models. | |
| Going to Perplexity: the Atom Project, American Truly Open Models, is a US-based initiative to build and host high-quality, genuinely open-weight AI models and supporting infrastructure, explicitly aimed at competing with and catching up to China's rapidly advancing open source AI ecosystem. | |
| I think the one sentence summary would be that, or two sentences. | |
| One is a proposition that open models are going to be an engine for AI research because that is what people start with. | |
| Therefore, it's important to own them. | |
| And the second one is, therefore, the U.S. should be building the best models so that the best research happens in the U.S. | |
| And the U.S. companies take the value from being the home of where AI research is happening. | |
| And without more investment in open models, we have all the plots on the website where it's like, Qwen, Qwen, Qwen, Qwen. | |
| And it's all these models that are excellent from these Chinese companies that are cultivating influence in the U.S. and China and internationally. | |
| And I think the U.S. is spending way more on AI, and the ability to create open models that are half a generation or a generation behind the cutting edge of the closed labs costs on the order of $100 million, which is a lot of money, but not a lot of money to these companies. | |
| So therefore, we need a centralizing force of people who want to do this. | |
| And I think we got signed engagement from people pretty much across the full stack, whether it's policy. | |
| So there has been support from the administration? | |
| I don't think anyone technically in government has signed it publicly, but I know that people who worked in AI policy, in both the Biden and Trump administrations, are very supportive of trying to promote open source models in the U.S. | |
| I think, for example, AI2 got a grant from the NSF for $100 million over four years, which is like the biggest CS grant the NSF has ever awarded. | |
| And it's for AI2 to attempt this. | |
| And I think it's a starting point. | |
| But the best thing happens when there are multiple organizations building models because they can cross-pollinate ideas and kind of build this ecosystem. | |
| Like, I don't think it just works if it's just Llama releasing models to the world, because then, as you can see, Llama can go away. | |
| The same thing applies for AI2, where it's like, I can't be the only one building models. | |
| And it becomes a lot of time spent talking to people, whether they're in policy or elsewhere. | |
| I know NVIDIA is very excited about this. | |
| I think Jensen Huang has been specifically talking about the urgency of this. | |
| And they've done a lot more in 2025, where the Nemotron models are more of a focus. | |
| They've started releasing some data along with NVIDIA's open models. | |
| And very few companies do this, especially of NVIDIA's size. | |
| So there is signs of progress. | |
| And we hear about Reflection AI, where they say their $2 billion fundraise is dedicated to building U.S. open models. | |
| And I feel like their announcement tweet reads like a blog post. | |
| And I think that that cultural tide is starting to turn. | |
| I think July was when we had like four or five DeepSeek-caliber Chinese open weight models and zero from the US. | |
| And that's the moment when I released this. | |
| And I was like, I guess I have to spend energy on this because nobody else is going to do it. | |
| So it takes a lot of people contributing together. | |
| And I don't say that the Atom Project is the thing that's moving the ecosystem; it's people like me doing this sort of thing to get the word out. | |
| Do you like the 2025 America's AI Action Plan that includes open source stuff? | |
| The White House AI Action Plan includes a dedicated section titled Encourage Open Source and Open Web AI, defining such models and arguing they have unique value for innovation and startups. | |
| Yeah. | |
| I mean, like the AI action plan is a plan, but largely, I think it's like maybe the most coherent policy document that has come out of the administration. | |
| And I hope that it largely succeeds. | |
| And I know people that have worked on the AI action plan, and the challenge is taking policy and making it real. | |
| And I have no idea how to do this as an AI researcher, but like largely a lot of things in that were very real. | |
| And there's a huge build out of AI in the country. | |
| And it's like there are a lot of issues that people are hearing about from water use to whatever. | |
| And like we should be able to build things in this country, but also we need to not ruin places in our country in the process of building it. | |
| And it's worthwhile to spend energy on. | |
| I think that's a role that the federal government plays. | |
| It's like they set the agenda. | |
| And with AI, setting the agenda that open weight should be a first consideration is a large part of what they can do. | |
| And then people think about it. | |
| Also for education and talent for these companies, it's, I think, very important because otherwise, you know, if there are only closed models, how do you get the next generation of people contributing at some point? | |
| Because otherwise, you will at some point only be able to learn after you joined a company. | |
| But then at that point, like, how do you hire talented people? | |
| How do you identify talented people? | |
| And I think open source, that's true for a lot of things, but also even just for educating the population and training the next generation of researchers. | |
| It's the way or the only way. | |
| The way that I could have gotten this to go more viral is to tell a story of Chinese AI integrating with an authoritarian state and being ASI and taking over the world. | |
| And therefore, we need our own American models. | |
| But it's very intentional for why I talk about innovation and science in the US, because I think it's both more realistic as an outcome, but just like, it's a world that I would like to manifest. | |
| I would say, though, that any open weight model, I do think, is a valuable model. | |
| Yeah. | |
| My argument is that we should be in a leading position. | |
| But I think that it's worth saying it so simply because there are still voices in the AI ecosystem that say we should consider banning releasing open models due to the safety risks. | |
| And I think it's worth adding that effectively that's impossible without the U.S. having its own great firewall, which is also known to not work that well, because the cost of training these models, whether it's $1 million to $100 million, is attainable for a huge number of people in the world who want to have influence. | |
| So these models will be getting trained all over the world. | |
| And we want the models, I mean, there are safety concerns, but we want this information and these tools to flow freely across the world. | |
| And into the US so that people can use them and learn from them. | |
| And stopping that would be such a restructuring of our internet that it seems impossible. | |
| Do you think maybe in that case the big open weight models from China are actually a good thing in a sense, like for the US companies? | |
| Because maybe the US companies you mentioned earlier, they are usually one generation behind in terms of what they release open source versus what they are using. | |
| For example, GPT-OSS might not be the cutting edge model. | |
| Gemma 3 might not be. | |
| But they do that because they know this is safe to release. | |
| But then when these companies see, for example, that there is DeepSeek version 3.2, which is really awesome, and it gets used and there is no backlash, there is no security risk, that could then encourage them to release better models. | |
| Maybe that, in a sense, is a very positive thing. | |
| 100%. | |
| These Chinese companies have set things into motion that I think would potentially not have happened if they were not all releasing models. | |
| So I think that it's like I'm almost sure that those discussions have been had by leadership. | |
| Is there a possible future where the dominant models, AI models in the world are all open source? | |
| Depends on the trajectory of progress that you predict. | |
| If you think saturation of progress is coming within a few years, so essentially within the time where financial support is still very good, then open models will be so optimized and so much cheaper to run that they will win out. | |
| Essentially, this goes back to open source ideas where so many more people will be putting money into optimizing the serving of these open weight common architectures that they will become standards. | |
| And then you could have chips dedicated to them and it'll be way cheaper than the offerings from these closed companies that are custom. | |
| We should say that the AI 2027 report kind of predicts, one of the things it does from a narrative perspective, is that there will be a lot of centralization. | |
| As the AI system gets smarter and smarter, the national security concerns will come to be, and you'll centralize the labs and become super secretive, and there'll be this whole race from a military perspective between China and the United States. | |
| And so with all of these fun conversations we're having about LLMs, the generals and the soldiers will come into the room and be like, all right, we're now in the Manhattan Project stage of this whole thing. | |
| I think in 2025, '26, '27, I don't think something like that is even remotely possible. | |
| I mean, you can make the same argument for computers, right? | |
| You can say, okay, computers are capable and we don't want the general public to get them, or chips, even AI chips, but you see how Huawei makes chips now; it took a few years. | |
| And I don't think there is a way you can contain something like that, knowledge like that. | |
| I think in this day and age, it is impossible. | |
| Like the internet, I don't think this is a possibility. | |
| On the Manhattan Project thing, one of my funny things making Atom is I think that a Manhattan Project-like thing for open models would actually be pretty reasonable, because it wouldn't cost that much. | |
| But I think that that will come. | |
| It seems like culturally the companies are changing. | |
| But I agree with Sebastian and all the stuff that you just said. | |
| It's just like, I don't see it happening nor being helpful. | |
| Yeah, I mean, the motivating force behind the Manhattan Project is there was civilizational risk. | |
| It's harder to motivate that for open source models. | |
| There's not civilizational risk. | |
| You think on the hardware side, we mentioned NVIDIA a bunch of times. | |
| Do you think Jensen and NVIDIA are going to keep winning? | |
| I think they have the downside that they have to iterate a lot and manufacture a lot. | |
| And I think with what they're doing, they do innovate. | |
| But I think there's always the chance that there is someone who does something fundamentally different, who gets very lucky. | |
| But the problem is, I think, adoption; the moat of NVIDIA is probably not just the GPU. | |
| It's more the CUDA ecosystem, and that has evolved over, I think, two decades. | |
| I think, I mean, even back when I was a grad student, I was in a lab. | |
| We did biophysical simulations, molecular dynamics, and we had a Tesla GPU back then just for the computation. | |
| It was 15 years ago now. | |
| And it just, they built this up for a long time. | |
| And that's the moat. | |
| I think it's not the chip itself, although they have now the money to iterate and build and scale. | |
| But then it's really the compatibility. | |
| It's like, well, if you're at that scale as a company, why would you go with something risky where it's only a few chips that they can make per year? | |
| You go with the big one. | |
| But then I do think with LLMs now also it will be easier to design something like CUDA. | |
| So it took 15 years because it's hard. | |
| But then now we have LLMs. | |
| We can maybe replicate CUDA. | |
| And I wonder if there will be a separation of the training and the inference compute as we stabilize a bit and more and more compute is needed for inference. | |
| That's supposed to be the point of the Groq acquisition. | |
| And that's part of what Vera Rubin is, where they have a new chip with no high-bandwidth memory, or very little, which is one of the most expensive pieces. | |
| It's designed for prefill, which is the part of inference where you essentially do a lot of matrix multiplications; you only need the memory when you're doing the autoregressive generation. | |
| You have the KV cache swaps. | |
| So they have this new GPU that's designed for that specific use case. | |
| And then the cost of ownership per FLOP or whatever is actually way lower. | |
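As a toy sketch of the prefill/decode split being described (illustrative numpy only, not any vendor's actual kernel or the Vera Rubin design; the dimensions, weights, and loop are all assumptions): prefill runs one large, compute-heavy matrix multiplication over the whole prompt and fills the KV cache, while decode generates one token at a time and mostly streams that ever-growing cache from memory, which is why the two phases can want different hardware.

```python
import numpy as np

# Toy single-head attention (no causal mask, no batching) to show the two phases.
d, prompt_len = 64, 512
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    scores = q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Prefill: one big matmul pass over the full prompt; compute-bound,
# and it fills the KV cache in a single shot.
prompt = rng.standard_normal((prompt_len, d))
K_cache, V_cache = prompt @ Wk, prompt @ Wv
_ = attend(prompt @ Wq, K_cache, V_cache)

# Decode: one token at a time; each step appends to the cache and then
# reads the whole cache back, so memory bandwidth dominates, not FLOPs.
x = rng.standard_normal((1, d))
for _ in range(16):
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    # Feed the attention output back in as the next "token", purely for illustration.
    x = attend(x @ Wq, K_cache, V_cache)
```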
| But I think that NVIDIA's fate lies in the diffusion of AI still. | |
| Their biggest clients are still these hyperscale companies; Google obviously can make TPUs. | |
| Amazon is making Trainium. | |
| Microsoft will try to do its own things. | |
| And like so long as the pace of AI progress is high, NVIDIA's platform is the most flexible and people will want that. | |
| But if there's stagnation, then creating bespoke chips, there's more time to do it. | |
| It's interesting that NVIDIA is quite active in trying to develop all kinds of different products. | |
| They tried to create areas of commercial value that will use a lot of GPUs. | |
| But they keep innovating. | |
| And they're doing a lot of incredible research. | |
| Everyone says the company is super oriented around Jensen and how operationally plugged in he is. | |
| And it sounds so unlike many other big companies that I've heard about. | |
| And so long as that's the culture, I think that I will expect them to keep progress happening. | |
| And it's like he's still in the Steve Jobs era of Apple. | |
| So long as that is how it operates, I'm pretty optimistic for their situation because it's like it is their top order problem. | |
| And I don't know if making these chips for the whole ecosystem is the top goal of all of these other companies. | |
| They will do a good job, but it might not be as good of a job. | |
| Since you mentioned Jensen, I've been reading a lot about history and about singular figures in history. | |
| What do you guys think about the single man-woman view of history? | |
| How important are individuals for steering the direction of history in the tech sector? | |
| So, you know, what's NVIDIA without Jensen? | |
| You mentioned Steve Jobs. | |
| What's Apple without Steve Jobs? | |
| What's XAI without Elon? | |
| Or DeepMind without Demis? | |
| People make things happen earlier and faster; scientifically, many great scientists credit being in the right place at the right time and still making the innovation, where eventually someone else would still have had the idea. | |
| So I think that, in that way, Jensen is helping manifest this GPU revolution much faster and more focused than it would be without a person like that there. | |
| And this is making the whole AI build out faster. | |
| But I do still think that eventually something like ChatGPT would have happened, and a build-out like this would have happened, but it probably would not have been as fast; I think that's the sort of flavor I'd apply. | |
| People, these individual people are people who are placing bets on something. | |
| Some get lucky, some don't. | |
| But if you don't have these people at the helm, it will be more diffused. | |
|
100 Years of Compute
00:12:46
|
|
| It's almost like investing in a ETF versus individual stocks. | |
| Individual stocks might go up, might go down more heavily than an ETF, which is more balanced. | |
| It will eventually go up over time. | |
| We'll get there. | |
| But it's just like, you know, like focus, I think is the thing, passionate focus. | |
| Isn't there a real case to be made that without Jensen, there's not a reinvigoration of the deep learning revolution? | |
| It could have been 20 years later is the thing I would say. | |
| Yeah, yeah. | |
| Or another AI winter, a deep learning winter, could have come if GPUs weren't around. | |
| That could change history completely, because you could think of all the other technologies that could have come in the meantime, and the focus of human civilization, of Silicon Valley, would be captured by a different hype. | |
| But I do think it is, I mean, there's certainly an aspect where it was all planned, the GPU trajectory. | |
| But on the other hand, it's also a lot of lucky coincidences. | |
| For example, or good intuition, like the investment into this, let's say, biophysical simulations. | |
| Or like, I mean, I think it started with video games and then it just happened to be good at linear algebra because video games require a lot of linear algebra. | |
| And then you have the biophysical simulations. | |
| And then, but still, I don't think the plan, the master plan was AI. | |
| I think it just happened that there was Alex Krizhevsky. | |
| Someone took these GPUs and said, let's try to train a neural network on them. | |
| And it happened to work really well. | |
| And I think it only happened because you could purchase those GPUs. | |
| Gaming would have created a demand for faster processors even if NVIDIA had gone out of business in the early days. | |
| That's what I would think. | |
| Like, I think that the GPUs would have been different for AlexNet. | |
| But I think like GPUs would still exist at the time of AlexNet and at the time of the Transformer. | |
| It was just hard to know if it would be one company as successful or multiple smaller companies with worse chips. | |
| But I don't think that's like a hundred year delay. | |
| It might be a decade delay. | |
| Well, it could be one, two, three, four, five decade delay. | |
| I mean, I just can't see Intel or AMD doing what NVIDIA is. | |
| I don't think it would be a company that exists. | |
| I think it would be a different company who would rise. | |
| Silicon Graphics or something. | |
| Yeah, some company that has died would have done it. | |
| But it does like just looking at it, it seems like these singular figures, these leaders have a huge impact on the trajectory of the world. | |
| Obviously, incredible teams behind them. | |
| But having that kind of very singular, almost dogmatic focus is necessary to make progress. | |
| Yeah, I mean, even with GPT, it wouldn't exist if there wasn't a person, Ilya, who pushed for the scaling, right? | |
| I mean, Dario was also deeply involved in that. | |
| You read some of the histories of OpenAI. | |
| It almost seems wild thinking about how early these people were like, we need to hook up 10,000 GPUs and take all of OpenAI's compute and train one model. | |
| There's a lot of people there that didn't want to do that. | |
| Which is an insane thing to believe. | |
| To believe scaling before scaling has any indication that it's going to materialize. | |
| Again, singular figures. | |
| Speaking of which, 100 years from now, this is presumably post-singularity, whatever singularity is. | |
| When historians look back at our time now, what technological breakthroughs would they really emphasize as the breakthroughs that led to the singularity? | |
| So far we have Turing to today, 80 years. | |
| I think it would still be computing, like the umbrella term computing. | |
| I don't necessarily think that even 100 years, or 20 years from now, it would be AI. | |
| It could still well be computers; we are now just taking better advantage of computers, but it's the fact of computing. | |
| It's basically Moore's Law kind of discussion. | |
| Even the details of CUDA and GPUs won't be remembered. | |
| And it won't be all this software turmoil. | |
| It'll be just obviously compute. | |
| I generally agree, but is the connectivity of the internet and compute able to be merged? | |
| Or is it both of them? | |
| I think the internet will probably be related to, yeah, I mean, communication, it could be a phone, internet, satellite, that stuff. | |
| Where, yeah, and compute is more like the scaling aspect of it. | |
| It's possible that the internet is completely forgotten, that the internet gets wrapped into the phone networks, the communication networks. | |
| This is just another manifestation of that. | |
| And the real breakthrough comes from just the increased compute; it's Moore's Law broadly defined. | |
| Well, I think the connection of people is very fundamental to it. | |
| So you can talk to anybody; if you want to find the best person in the world for something, they're somewhere in the world. | |
| And being able to have that flow of information, the AIs will also rely on this. | |
| I think I've been fixating on, when I said the dream was dead, the one central model. | |
| And the thing that is evolving is like people have many agents for different tasks. | |
| People start doing this with different clouds for different tasks. | |
| And it's described as many AGIs in the data center where each one manages and they talk to each other. | |
| And like that is so reliant on networking and free flow of information on top of compute. | |
| But like networking, especially with GPUs, is such a part of scaling up compute. | |
| Like the GPUs and the data centers need to talk to each other. | |
| Anything about neural networks will be remembered? | |
| Like, do you think there's something very specific and singular to the fact that there's neural networks? | |
| It's seen as a breakthrough, like a stroke of genius, that you're basically replicating in a very crude way the structure of the human brain, the human mind. | |
| I think without the human mind, we probably wouldn't have neural networks because it just was an inspiration for that. | |
| But at the other end, I think it's just so different. | |
| I mean, it's digital versus biological, so I do think it will probably be grouped more as an algorithm that's massively parallelizable on this particular kind of compute. | |
| It could well have been something like genetic computing, genetic algorithms, just as a parallelized thing. | |
| It just happens that this is more efficient, works better, you know? | |
| And it very well could be that the LLM, you know, the neural networks, the way we architect them now is just a small component of the system that leads to singularity. | |
| I think if you think of it 100 years, like society, I think, can be changed more with more compute and intelligence because of autonomy. | |
| But it's like, looking at this, what are the things from the Industrial Revolution that we remember? | |
| The engine is probably the equivalent of the computer in this. | |
| But there are a lot of other physical transformations that people are aware of, like the cotton gin and all these machines that are still known, air conditioning, refrigerators; some of these things from AI will still be known. | |
| Like the word transformer could still very well be known. | |
| I would guess that deep learning is definitely still known, but the transformer might be evolved away from in 100 years with ASI AI researchers everywhere. | |
| But I think deep learning is likely to be a term that is remembered. | |
| And I wonder what the air conditioning and the refrigeration of the future is that AI brings. | |
| Is there, if we travel forward 100 years from now, we transport there right now. | |
| What do you think is different? | |
| How do you think the world looks different? | |
| First of all, you think there's humans? | |
| You think there's robots everywhere walking around? | |
| I do think specialized robots for sure for certain tasks. | |
| Humanoid form? | |
| On that, I'm maybe fifty-fifty. | |
| We'll see. | |
| I think for certain things, yes, there will be humanoid robots because it's just amenable for the environment. | |
| But like for certain tasks, it might make sense. | |
| What's harder to imagine is how we interact with devices and what humans do with devices. | |
| I'm pretty sure it will probably not be the cell phone, will probably not be the laptop. | |
| Will it be implants? | |
| I mean, it has to be brain computer interfaces, right? | |
| I mean, 100 years from now, it has to, like, given the progress we're seeing now, there has to be, unless there's legitimately complete alteration of how we interact with reality. | |
| On the other hand, if you think of cars, cars are older than 100 years, right? | |
| And it's still the same interface. | |
| It's not, we haven't replaced cars with something else. | |
| We just made the cars better, but it's still steering wheel, it's still wheels, you know. | |
| I think we'll still carry around a physical brick of compute, because people want some ability to have something private; you might not engage with it as much as a phone, but having something where you can have private information that is yours, as an interface to the rest of the internet, I think is something that will still exist. | |
| It might not look like an iPhone and it might be used a lot less, but I still expect to have people carry things around. | |
| Why do you think the smartphone is the embodiment of private? | |
| There's a camera on it. | |
| Private for you, like encrypted messages, encrypted photos, you know what your life is. | |
| Like, I guess this is a question of how optimistic you are on brain-machine interfaces. | |
| Is all of that, your whole calendar, just going to be stored in the cloud? | |
| It's hard to think about brain-machine interfaces presenting something like a calendar to you, given all the information that we can process visually. | |
| It's hard to just think about knowing your email inbox without looking. | |
| Like you signal to a computer and then you just know your email inbox. | |
| Is that something that the human brain can handle being piped into it non-visually? | |
| Like I don't know exactly how those transformations happen because humans aren't changing in 100 years. | |
| I think agency and community are things that people actually want. | |
| Local community, yeah. | |
| So like people you are close to, being able to do things with them, and being able to ascribe meaning to your life. | |
| Maybe if not in 100 years, I don't think that human biology is changing away from those on a time scale that we can discuss. | |
| And I think that like UBI does not solve agency. | |
| I do expect mass wealth and I hope that it is spread so that the average life does look very different in 100 years. | |
| But that's still a lot to happen in 100 years. If you think about countries that are early in their development process of getting access to computing and the internet, building all the infrastructure, and having policy that shares one nation's wealth with another, I think it's an optimistic view to see all of that happening in 100 years while they are still independent entities and not just absorbed into some international order by force. | |
| But there could be just better, more elaborate, more effective social support systems that help alleviate some levels of basic suffering from the world. | |
| You know, the transformation of society where a lot of jobs are lost in the short term. | |
| I think we have to really remember that each individual job that's lost is a human being who's suffering. | |
| When jobs are lost at scale, it's a real tragedy. | |
| You can make all kinds of arguments about economics, or that it's all going to be okay. | |
| It's good for the GDP. | |
| There's going to be new jobs created. | |
| Fundamentally at the individual level for that human being, that's real suffering. | |
| That's a real personal sort of tragedy. | |
| And we have to not forget that as the technologies are being developed. | |
| And also, my hope with all the AI slop we're seeing is that there will be a greater and greater premium on the fundamental aspects of the human experience that are in person, the things that we all like: seeing each other, talking together in person. | |
| The next few years are definitely going to see increased value on physical goods and events, and even more pressure from slop. | |
| The slop is only starting. | |
| The next few years will bring more and more diverse versions of slop. | |
| We'll be drowning in slop. | |
| So I'm hoping that we as a society drown in slop enough to snap out of it and be like, none of this. | |
| Like it just doesn't matter. | |
| We all can't deal with it. | |
|
Art's Enduring Value
00:06:19
|
|
| And then like the physical has such a higher premium on it. | |
| Even like classic examples, I honestly think this is true. | |
| And I think we get tired of it. | |
| We are already kind of tired of it. | |
| Same with, I mean, even art. | |
| I don't think art will go away. | |
| I mean, you have paintings, physical paintings. | |
| There's more value, not just monetary value, but just more value, appreciation for something that is the actual painting than a photocopy of that painting. | |
| It could be a perfect digital reprint of that. | |
| But there is something when you go to a museum and you look at that art and you see the real thing, and you think about, okay, a human made this, it's like a craft. | |
| You have like appreciation for that. | |
| And I think the same is true for writing, for talking, for any type of experience. I do unfortunately think it will be like a dichotomy, like a fork, where some things will be automated. | |
| Like, you know, there are not as many paintings as there used to be 200 years ago. | |
| There are more photographs, more photocopies. | |
| But at the same time, it won't go away. | |
| There will be, you know, value in that. | |
| I think the difference will just be, you know, what the proportion of that is. | |
| But personally, I have a hard time reading things where I can see it's obviously AI-generated. | |
| I'm like, sorry, there might be really good information there, but I have like a certain, nah, not for me. | |
| Eventually they'll fool you and it'll be on platforms that give ways of verifying or building trust. | |
| So you will trust that Lex is not AI generated having been here. | |
| So then you have trust in this channel. | |
| But it's harder for new people that don't have that trust. | |
| Well, that will get interesting, because fundamentally I think it's a solvable problem by having trust in certain outlets that they won't do it, but it's all going to be kind of trust-based. | |
| There will be some systems to authorize, okay, this is real, this is not real. | |
| There will be some telltale signs where you can obviously tell this is AI-generated and this is not, but some will be so good that it's hard to tell, and then you have to trust. Well, that will get interesting and a bit problematic. | |
| The extreme case of this is to watermark all human content. | |
| So, all photos that we take on our own have some watermark until they are edited or something like this. | |
| And software can manage communications with the device manufacturer to maintain a record of human editing, which is the opposite of the discussion around trying to watermark AI images. | |
| And then you can make a Google image that has a watermark and use a different Google tool to remove the watermark. | |
| Yeah, it's going to be a mess. | |
| And we've been mostly focusing on the positive aspects of AI. | |
| I mean, all the capabilities we've been talking about can also be used to destabilize human civilization, with even just relatively dumb AI applied at scale, and then further and further superintelligent AI systems. | |
| Of course, there's the sort of doomer take that's important to consider a little bit as we develop these technologies. | |
| What gives you hope about the future of human civilization? | |
| Everything we've been talking about, are we going to be okay? | |
| I think we will. | |
| I'm definitely a worrier both about AI and non-AI things, but humans do tend to find a way. | |
| I think that's what humans are built for: to have community and to find a way to figure out problems. | |
| And that's what has gotten us to this point. | |
| And I think that the opportunity in AI and related technologies is really big. | |
| And I think that there are big social and political problems in helping everybody understand that. | |
| And I think that that's what we're staring at a lot of right now. | |
| It's like the world is a scary place and AI is a very uncertain thing. | |
| And it takes a lot of work that is not necessarily building things. | |
| It's telling people and understanding people, which is something the people building AI are historically not motivated to do or wanting to do. | |
| But it is something that is probably doable. | |
| It just will take longer than people want. | |
| We have to go through that long period of hard, fraught AI discussions if we want to have the lasting benefits. | |
| Yeah, through that process, I'm especially excited that we get a chance to better understand ourselves at the individual level as humans and at the civilization level. | |
| It answers some of the big mysteries. | |
| Like, what is this whole like consciousness thing going on here? | |
| Seems to be truly special. | |
| Like, there's a real miracle in our mind. | |
| And AI puts a mirror to ourselves, and we get to answer some of the big questions about, like, what is this whole thing going on here? | |
| Well, one thing about that is also what I do think makes us very different from AI, and why I don't worry about AI taking over: like you said, consciousness. We humans, we decide what we want to do. | |
| AI in its current implementation, and I can't see it changing. | |
| You have to tell it what to do. | |
| And so you have still the agency. | |
| It doesn't take the agency from you; it becomes a tool. | |
| You can think of it as a tool. | |
| You tell it what to do. | |
| It will be more automatic than other previous tools. | |
| It's certainly more powerful than a hammer. | |
| It can figure things out, but it's still you in charge, right? | |
| So the AI is not in charge. | |
| You're in charge. | |
| You tell the AI what to do and it's doing it for you. | |
| So in the post-singularity, post-apocalyptic war between humans and machines, you're saying humans are worth fighting for. | |
| 100%. | |
| I mean, this is the movie Terminator they made in the 80s, essentially. | |
| And I do think, well, the only thing I can see going wrong is, of course, if things are explicitly programmed to do the thing that is harmful, basically. | |
| I think actually, in a Terminator type of setup, humans win. | |
| I think we're too clever. | |
| It's hard to explain how we figure it out, but we do. | |
| And we'll probably be using local LLMs, open source LLMs, to help fight the machines. | |
| I apologize for the ridiculousness. | |
| Like I said, Nathan already knows. | |
|
Thank You Notes
00:00:49
|
|
| I've been a big fan of his for a long time. | |
| Been a big fan of yours, Sebastian, for a long time. | |
| So it's an honor to finally meet you. | |
| Thank you for everything you put out into the world. | |
| Thank you for the excellent books you're writing. | |
| Thank you for teaching us. | |
| And thank you for talking today. | |
| This was fun. | |
| Thank you for inviting us here and having this human connection, which is an extremely valuable human connection. | |
| Thanks for listening to this conversation with Sebastian Rashka and Nathan Lambert. | |
| To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. | |
| And now, let me leave you with some words from Albert Einstein. | |
| It is not that I'm so smart, but I stay with the questions much longer. | |