Have you been having fun with the newest slate of AI tools? Have you been doing research with GPT-5? Coding your projects with Claude? Turning pictures of your friends into cartoon characters from The Fairly OddParents using the image editing tool Nano Banana?
Are you impressed with what they can do? Well, guess what? You’re only impressed with them because you’re basically a naive child. You’re like a little child with an Etch A Sketch who is amazed that they can make crude images by turning the knobs, oblivious to greater possibilities. At least, that’s the impression you get when listening to tech leaders, philosophers, and even governments. According to them, soon the most impressive of AI tools will look as cheap and primitive as Netflix’s recommendation algorithm in 2007. Soon the world will have to reckon with the power of Artificial General Intelligence, or “AGI.”
What is AGI? Definitions vary. When will it come? Perhaps months. Perhaps years. Perhaps decades. But definitely soon enough for you to worry about. What will it mean for humanity once it's here? Perhaps a techno utopia. Perhaps extinction. No one is sure. But what they are sure of is that AGI is definitely coming and it’s definitely going to be a big deal. A mystical event. A turning point in history, after which nothing will ever be the same.
However, some are more skeptical, like our guest today Will Douglas Heaven. Will has a PhD in Computer Science from Imperial College London and is the senior editor for AI at MIT Technology Review. He recently published an article, based on his conversations with AI researchers, which provocatively calls AGI “the most consequential conspiracy theory of our time.”
Jake and Travis chat with Will about the conspiracy theory-like talk from the AI industry, whether AGI is just “vibes and snake oil,” and how to distinguish between tech breakthroughs and Silicon Valley hyperbole.
Will Douglas Heaven
https://bsky.app/profile/willdouglasheaven.bsky.social
How AGI became the most consequential conspiracy theory of our time
https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/
Subscribe for $5 a month to get all the premium episodes: https://www.patreon.com/qaa
Editing by Corey Klotz. Theme by Nick Sena. Additional music by Pontus Berghe. Theme Vocals by THEY/LIVE (https://instagram.com/theyylivve / https://sptfy.com/QrDm). Cover Art by Pedro Correa: (https://pedrocorrea.com)
https://qaapodcast.com
QAA was formerly known as the QAnon Anonymous podcast.
The first three episodes of Annie Kelly’s new 6-part podcast miniseries “Truly Tradly Deeply” are available to Cursed Media subscribers, with new episodes released weekly.
www.cursedmedia.net/
Cursed Media subscribers also get access to every episode of every QAA miniseries we produced, including Manclan by Julian Feeld and Annie Kelly, Trickle Down by Travis View, The Spectral Voyager by Jake Rockatansky and Brad Abrahams, and Perverts by Julian Feeld and Liv Agar. Plus, Cursed Media subscribers will get access to at least three new exclusive podcast miniseries every year.
www.cursedmedia.net/
REFERENCES
Debates on the nature of artificial general intelligence
https://www.science.org/doi/10.1126/science.ado7069
Why AI Is Harder Than We Think
https://arxiv.org/pdf/2104.12871
AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds
https://gizmodo.com/ai-capabilities-may-be-overhyped-on-bogus-benchmarks-study-finds-2000682577
Examining the geographic concentration of VC investment in AI
https://ssti.org/blog/examining-geographic-concentration-vc-investment-ai
Margaret Mitchell: artificial general intelligence is ‘just vibes and snake oil’
https://www.ft.com/content/7089bff2-25fc-4a25-98bf-8828ab24f48e
As always, we are your hosts, Jake Rockatansky and Travis View.
Listener, have you been having fun with the newest slate of AI tools?
Sometimes.
Have you been doing research with GPT-5?
Not officially.
Coding your projects with Claude, turning pictures of your friends into cartoon characters from The Fairly OddParents using the image editing tool Nano Banana.
Are you impressed with what they can do?
Well, guess what?
You're only impressed with them because you're basically a naive child.
You're like a little child with an Etch A Sketch who is amazed that they can make crude images by turning the knobs, oblivious to greater possibilities.
Because according to tech leaders, philosophers, and even governments, soon the most impressive of AI tools will look as cheap and primitive as Netflix's recommendation algorithm in 2007.
Soon the world will have to reckon with the power of artificial general intelligence, or AGI.
What is it?
Definitions vary.
When will it come?
Perhaps months, perhaps years, perhaps decades, but definitely soon enough for you to worry about.
What will it mean for humanity once it's here?
Perhaps a techno-utopia.
Perhaps the extinction of humanity.
No one is sure.
What they are sure of is that AGI is definitely coming and it's definitely going to be a big deal, a mystical event, a turning point in the development of humanity, after which nothing will ever be the same.
At least that seems to be the consensus.
Others are more skeptical, like our guest today, Will Douglas Heaven.
Will has a PhD in computer science from Imperial College London and is the senior editor for AI at MIT Technology Review.
He recently published an article based on his conversations with AI researchers, which provocatively calls AGI the most consequential conspiracy theory of our time.
Will, thank you so much for joining us to talk about this.
No, thank you.
It's good to be here.
Yeah, it was a great, a great article.
Yeah, it definitely made me sort of like rethink the kind of like, you know, rhetoric that's coming out of the AI space right now.
Yeah, it made me feel a little foolish because I, you know, like many of you, I have a group chat with a handful of friends and there's a lot of AI in there, you know, of us turning each other into various things, various squids, creatures, you know, all sorts of stuff.
And I did, after finishing the piece, feel kind of like I was, you know, just like kind of playing in a sandbox with, you know, a shovel and bucket.
There's a lot of AI everywhere these days.
Yeah, of course.
I mean, but that's, I mean, it's funny that you talk about playing.
I mean, so much of what we've seen is just a lot of fun, like the sort of the gimmicky stuff we've seen, which, I mean, maybe we'll get into this, but, you know, the vision that we're sold of utopia and solving the world's problems.
And what are we getting?
We're getting sort of, you know, cute little Studio Ghibli generators and, you know, an erotic chatbot.
Yeah.
It's like the new wave of, remember when the Snapchat filters came out?
And at least for people in my, you know, in my age group, that's elder, elder, aging millennials.
You know, we thought the Snapchat filters were so fun and like, wow, it can put a face right on top of yours and like it mimics your expressions.
And oh, look, now it's on grandma and grandma's a raccoon.
You know, I remember that we were looking at that in the same, with the same whimsy, I feel like, that is accompanying these, these little AI apps nowadays.
Yeah, yeah.
And now we have Sam Altman barbecuing Pikachu.
Yeah, I want to get into like the really interesting sort of like conversations you've had with researchers for this.
But before we do, could you help me understand what broadly you think is the difference between like the kind of like AI tools that consumers might be familiar with, might do like, you know, research or studio Ghibli or that kind of stuff and this hypothetical AGI.
Sure.
So look, I'll do my best here because, I mean, there are a lot of good faith people who genuinely think they're building this technology.
And I think the difference between what they're aiming towards and what we have today, I mean, the clue's in the word, right?
So it's the generality.
So even the best, the best sort of tools we have today are really, really good at one thing.
They're really good at generating images or generating video.
Chatbots are sort of getting towards being more general.
And I think that's why the sort of the excitement about AGI has ramped up a lot in the last few years.
You can talk to them and they can talk back at you about anything, but it's not hard to push a chatbot and break it, make it say something really dumb.
I mean, I don't think anybody would seriously trust them to do something really serious.
You wouldn't trust it with your health or your money or, you know, but what we're aiming for is an AI that you would, that you really could just ask it to do anything.
Anything you would sort of ask a reasonably capable person to do, you know, do your taxes, run your family logistics, you know, run a business.
I mean, and these are real examples.
I mean, like a lot of people in the field, you know, imagine building an AI that can go out and earn your company billions of dollars.
So it's the idea of an AI that can basically do what a smart person can across the board, not just in sort of these, these niches.
And I see these advertisements all the time, you know. I'm in, I guess, the exact right age group, I've joked about it on the show before, it's like they're sending me the balding medications, the shoes that make you look taller.
And the other suite of ads that I get are from these kind of, like, rise-and-grind bros, or bras.
You know, it seems to be men and women are very interested in pushing this.
And they basically are like, are you over 35 and not using AI to optimize your life?
Sign up for this course and we'll take you through these 30 different AIs to help you, like, become whatever it is, you know, and it usually has to do with, you know, making your business successful, giving you the body that you want, you know, all these things that we see online that we really crave.
And this seems like a new grift that it's like, hey, if you're not using all of these, the suite of tools, you're left behind, you know, join me, join my seminar.
Yeah.
No, no, I'm nodding along to that.
Yeah, the grifter side of this is enormous.
You know, rewind a few years and these same people were shilling for NFTs or whatever.
Right, right.
You know, the industry pivoted, I think.
And then, yeah, I speak to a lot of people.
You know, I speak to a lot of founders of startups and it's the same company that was doing crypto stuff a few years ago, but now, oh, everything's swung and everything's now chasing AI.
Yeah, I read an article in Science that collected statements from tech leaders related to AGI.
And I want to read some of them here because I think they're interesting.
So OpenAI states that its mission is, quote, to ensure that artificial general intelligence benefits all of humanity.
Google DeepMind's company vision statement notes that artificial general intelligence has potential to drive one of the greatest transformations in history.
AGI is mentioned prominently in the UK government's national AI strategy.
Let's see here: the U.S. Department of Commerce's National Artificial Intelligence Advisory Committee's charter says that it should advise the president on progress towards artificial general intelligence.
Microsoft researchers claim that evidence of sparks of AGI was present in GPT-4.
And Sam Altman, CEO of OpenAI, called GPT-5 a significant step along the path to AGI.
So it's like, this is quite a collection.
Like, yeah, I guess, like, major world governments, you know, tech CEOs of multi-billion dollar companies, and, you know, researchers, they're all working on the presumption that this AGI thing is real and is definitely 100% coming.
I mean, why should we doubt all these people, you know?
Yeah, why should we doubt these people?
I mean, yeah, maybe, maybe we were wrong.
Maybe we should pack up and go home.
That is exactly what has been bugging me, you know, for all these years covering this industry and what sent me down the rabbit hole because it never used to be like that.
10 years ago, the idea of AGI, the idea that you could make an AI that really could do everything that a human could was ridiculous.
And even when OpenAI was founded, you know, just less than a decade ago, the sort of swagger and the ambition of this new company, whose mission statement from the start was to build AGI, really made them stand out, because no one else was actually saying that seriously.
And that's no accident, right?
This is a new company that was coming out and we're just going to, we're going to take a big swing at this, like in this concept.
But yeah, over the years, I think like a couple of things at least have happened.
Like one, you know, if one company is saying they're going to make AGI, then you've got to say you're making it too.
Otherwise, like, what's the point of you, right?
So AGI just became, you know, the, you know, the thing over the horizon that the best AI companies in the world were chasing.
And this was just a name that, like, one guy made up, that they were like, he was like, oh, you should call it AGI.
Like it doesn't really come from anything other than that guy who worked beneath him, I can't remember his name, but you talk about him in the piece.
Yeah.
So I think you're talking about Shane Legg.
Yes, Shane Legg.
I mean, that sort of the origin of the term is kind of fun.
I mean, there's this guy, Ben Goertzel, who's a really lovely, sweet guy, but he will present himself as being sort of, you know, on the sort of the edge of things.
It's just what he's drawn to.
He's drawn to these fringe ideas.
He's really, I mean, he's been in the AI field for ages.
So like back in the mid-2000s, he was, like, a sort of influential figure in this fringe community that was interested in making an AI that could do these sort of human-like things.
And it's important to say that even though this modern concept of AGI is maybe at most 20 years old, the ideas that that is built on go way back, back to the 1950s when people first started talking about artificial intelligence.
Those early pioneers wanted to build a machine that could do the things that people could.
So those ideas have been bubbling around for a while.
But it was only in the mid-2000s with this guy, Ben Goertzel, who wanted to put a name to the stuff that he and his sort of colleagues in his fringe community were working on.
And he turned to like a former colleague of his called Shane Legg, who put forward this term, AGI.
Let's call it artificial general intelligence.
It's like AI, but it's broader, it's bigger, it's more general.
It's like my generalized anxiety, broader, bigger, not specific.
100%.
And the amazing thing is, like, Shane Legg went on to co-found DeepMind, now Google DeepMind, you know, one of the biggest AI companies in the world.
So Shane Legg took this term AGI and all the concepts behind it into DeepMind.
It's probably important to, I mean, some of your listeners may know this, but like whether or not they do, like, as just as a sort of a point of fact footnote, after these guys, Ben Goertzel and Shane Legg came up with the term AGI to sort of name this ambitious set of ideas, it sort of emerged that there was another figure who had used the term AGI in a book back in the 90s.
And so this guy's often, you know, he's sort of given credit for first coming up with the term, but it died and disappeared.
And it wasn't until the mid-2000s that AGI as a label for all of this sort of really took off.
We were talking a little bit before the show and before we were recording, and I mentioned that was like one of the things I really liked about your piece.
I'm really interested in conspiracy theory, especially when we're talking about conspiracy theories in kind of a less conventional sense, where it's sort of being proposed by people who are otherwise very respected and credentialed.
And I also like contrarian takes.
And this one definitely has both those elements.
You have this great quote in the piece that I screenshotted to read because I thought it was so just like tight and easy to understand.
You write:
Every age has its believers, people with an unshakable faith that something huge is about to happen, a before and an after that they are privileged (or doomed) to live through.
And I think this captures the conspiracy mind so well because there are two sides of it, you know, especially as we've seen over the last, you know, five or so years, is that there are people who believe in conspiracies from both angles, right?
That it's either this amazing, it's going to usher in this amazing golden age of prosperity and wealth, or it's going to be an apocalypse.
Both are a conspiracy, but it's just kind of like pick your pill, pick your flavor.
And I thought you presented that in a really like easy to understand way in the piece.
Yeah, no, yes.
Thank you.
But yeah, you're totally right.
This, I mean, I should say, like the idea of even treating AGI as a conspiracy theory.
Like at first, I was only half serious when I first started thinking about it, right?
Because obviously AGI isn't a conspiracy.
Like you were saying earlier, Travis, like this is what the biggest, richest companies in the world tell us sincerely they're going to build.
But when you start to look at it, things like this just pop out, these parallels.
Like what we're being told is that this is a sort of a savior-like technology that's going to get rid of all the world's ills.
It's going to make us more prosperous.
It's going to cure disease.
It's going to help us solve climate change, you know, or maybe not, right?
Because that's where it flips on a dime, right?
Because if you have a technology that really could be that powerful, then of course, you know, are us feeble humans going to be able to control it?
And if we can't, then, you know, that's the end of us.
It's all part of the same belief system, the sort of the flip between boom and doom.
And it's, you know, it's been presented to us like that, you know, in popular culture for, you know, decades and decades.
I think to, you know, Hal from 2001, right?
This malevolent artificial intelligence or even better, Skynet from the Terminator, you know, this, this evil artificial intelligence.
These ideas are out there.
And what's far less common, actually, if I'm just kind of digging into my video library in the back of my head, is AIs that are benevolent, that actually are going to help.
I mean, the only one I can really think of off the top of my head, and I think this is a bad comp, and you guys might not even know what I'm talking about, but the old 80s Disney film Flight of the Navigator, where you have an artificial intelligence who is running the ship, who ends up being a good character and getting David back home.
So, like, but that's a very, you know, I can't think of too many examples where the AI in the movie or the book is something that is good, something that is going to actually bring about this thing that all of these Silicon Valley guys are saying it's going to, this new age, you know, this golden age.
Yeah.
I hadn't thought that through.
But yeah, as you were speaking, I mean, some positive examples.
If I had to think of some, you know what, you've got WALL-E, right?
And right?
Okay.
Yeah.
WALL-E's good.
Yeah.
Johnny 5.
I mean, maybe that's dating.
Yes, Johnny 5 is good.
But these guys are what?
They're, like, sort of played for comic relief, right?
Right.
And they're weapons, right?
Johnny 5 was a weapon that went rogue.
You know, it was the opposite.
It was built to be a weapon, but actually it became this kind of goofy guy.
Yeah.
I think, yes, it just doesn't make good drama, does it?
Like the idea of a genuine, beneficial, beneficent, all-powerful AI that just basically solves all our problems is that's pretty boring.
It's boring.
And does it get people to spend?
I don't want to go and watch a society that's got it better than I've got it here.
I want to go to the movies and see somebody who's got it worse than I do so I can leave the movies feeling good.
Yeah.
I think, I mean, obviously we've seen since ChatGPT came out, well, like three years ago now, and sort of, you know, the world woke up to what, you know, present day AI could do.
We've seen that sort of that, that wave of doomerism really take off.
I think that just like really, that grabbed the imagination because it's exciting.
It's exciting to be scared, right?
Yeah.
And it's exciting, at least, you know, personally, it's exciting to think about living in an apocalypse where all of a sudden your credit cards don't matter anymore.
Your job doesn't matter anymore.
All the video games that you've played, it turned out to be training.
I think a lot of people, and I hear my friends, you know, are being like, oh, preparing for the end of the world.
Oh, yeah, you know, it's my apocalypse mobile or whatever.
Myself included, like Travis included.
We both have like kind of like off-road vehicles that he probably needs, but I don't.
And it's like, that's fun to think about.
It's fun to think about this system collapsing because all of our current problems are now solved, which oddly enough is what they're saying the AI is going to do.
Yeah, you know, it's pretty normal for like, I guess, like tech companies to talk about like their product in very grand, world-changing terms.
Like Facebook talks about community and connectivity and like they're bringing the world together.
And these were like, I think overblown promises, but like, you know, it's the general rhetoric of like, you know, startups generally is like in order to convince investors to like, you know, put up their hard-earned money and convince people that you're a worthy bet, you have to make these big promises of like why, why you're so significant and stuff.
But it feels like the AI is like these companies, they are taking it to a whole new level.
Like they're making like extraordinary promises.
And it's almost like they're saying this is going to be the last invention, you know?
This is something more profound than the printing press in terms of how it's going to change humanity.
It's just, it is worrying how, like, just incredibly overblown the rhetoric is on the potential impact of what they're building.
And the stock market's buying it, right?
I mean, big time.
What we're seeing now is utterly unprecedented.
The money flowing into these companies, the valuations we're seeing.
And it's nearly all riding on this promise that is really quite vague and hand-wavy.
There's something that stuck in my head. I don't know if you guys watched it, but when, you know, Sam Altman came out and they were announcing that deal with NVIDIA for just nuclear power stations' worth of computer chips and energy, Altman said something like, you know, now we don't have to choose between curing disease and giving everyone free education.
You know, we can do it all.
And this is what they're sort of telling us, like that one, that this near future all-powerful AI can do those things, just educate the world for free and cure disease.
But if we try and stop him, then we would have to choose.
It's on us, if we sort of somehow don't support OpenAI in this mission, that, oh, you know, we tried to do both, but we couldn't.
We could only cure disease.
We can't help you kids.
There's something that, like, really turns me off about this really, I was going to say, subtle rhetoric.
It's not subtle at all, but the way we're sold this technology is laughable.
And yet, as I said, like the stock market is buying it.
And what's so funny and interesting to me, you know, as I'm thinking about it as you're talking about this, is like they're saying, hey, these, you know, NVIDIA chips, these nuclear power plants, they're going to do all this.
But all I'm actually seeing like in my own life is that like they can give you a couple extra frames on your computer games because like you have a kind of a you know shitty processor or whatever.
And like the AI is basically adding in, you know, where your game would be choppy otherwise.
Like that's what I'm seeing on my screen.
They're like, yes, this is going to solve world hunger and nuclear power plants.
They'll be able to talk to one another.
Oh, everything.
But all I'm really seeing is like, I'm getting 10 to 15 extra frames per second in like Battlefield or something.
Like, that's what it feels like the application is.
And there's nothing wrong with that, but.
Yeah, I was going to say, what's wrong with that?
I mean, I often think, I mean, don't get me wrong, like, this is an amazing subject to cover because it is so wild and like the stuff that's going on and people are saying is really off the charts.
But a lot of me thinks like, what's wrong with just like having better frame rate?
Let's all like sort of come back down to earth and sort of, you know, treat this technology as if it were a normal technology that just made lots of little things a little bit better.
Yeah, that's a really, really good point.
Why does it have to be the thing that saves the world?
And I think that gets into, and I know, I know Travis has got some questions about this, about the people who are pushing these kinds of narratives.
But yeah, why isn't it just enough?
Why, why can't the stock market rally on the fact that like somebody with like a slightly older GPU can play the latest games at higher frame rates?
That's going to sell cards.
I mean, I got the 4000-series card so that I could, you know, use the DLSS technology.
That's a really good question.
I think it speaks to an illness amongst the wealthy in Silicon Valley, and all of us maybe: it's not good enough to solve small problems.
We need to convince people to invest in something that's much, much bigger.
Like you said, a way better movie, you know, something that's not just kind of boring and utilitarian, I guess.
Right.
And we want to believe, like Fox Mulder says.
Yeah.
You know, another way that AI companies sort of talk about the product, in a way that's different from, I guess, previous sort of Silicon Valley giants, is that they keep talking about AGI as this goal, this endpoint, this something that we are working towards, that is going to happen.
We don't know when.
And this is different than like, I guess, like to return to Facebook.
They talk about generalities.
So it's like, we're going to make the world more connected, we're going to build communities.
These are sort of, like, vague goals, I suppose. They aren't saying, we're building the ultimate community, and one day it'll change the world.
I mean, one consequence of that is that they keep pushing back when this AGI is going to happen.
You discuss in your piece how this sort of mirrors conspiracy theory talk.
And like the big prophecy, the big event is always happening soon, always in the perpetual near future.
I mean, we see this a lot in QAnon where like the big storm of like arrests is always going to happen.
They first predicted 2017, that didn't happen.
So they pushed it back and back and back, and they always have an excuse about why it isn't going to happen.
I mean, but this is a little strange behavior coming from these like well-credentialed AI researchers and these big money tech firms.
I mean, how does this talk kind of manifest?
This belief that this AGI thing is, we're just on the cusp.
We're getting a little closer.
It's going to happen in a year now.
I mean, how does this manifest in the space?
If I can just interject really quick, you say it perfectly in this passage from the piece that I screenshotted, and it'll be a perfect way to tee this up.
You write:
You have to admit, it all sounds a bit tinfoil hat.
If you're building a conspiracy theory, you need a few things in the mix.
A scheme that's flexible enough to sustain belief even when things don't work out as planned.
The promise of a better future that can be realized only if believers uncover hidden truths, and a hope for salvation from the horrors of this world.
Right.
Yeah.
I mean, there's a lot there. I would guess that the people building this technology, like, on the inside, maybe think about it one way.
I mean, because these are the scientists and the engineers that know exactly what they're building.
And yet they will still believe that given this thing in front of me, we're two years away, three years away, 10 years away, whatever from building AGI.
You probably have to separate them from just the rest of us who are just following along, sort of the AGI stans that want to believe.
So I have sympathy for people who are just following along and are told these amazing things are going to happen.
Why wouldn't you sort of get excited by that and sort of not think about it too critically?
And then when it doesn't happen, you think, oh, well, maybe next time I'm going to sort of keep my faith.
But the people actually building it, yeah, I scratch my head. I mean, I talk to these guys a lot, and there's a very large spectrum of opinion.
We're talking about the AGI believers here, but I don't want to give the impression that that's sort of the majority of the field.
I mean, there are people who really, really push back against this sort of overhyped talk, but there are people building this technology who genuinely believe AGI is coming.
Like you mentioned in the intro, there was a Microsoft paper a little while back called Sparks of AGI, where the scientists played around with an earlier version of GPT-4 and were just blown away by what it could do and really, I think, just got overexcited and wrote this academic paper and put it out there, saying that what they'd seen within this model was sparks of AGI.
And I think what was going on there is that even the sort of the insiders, the scientists, the engineers building this technology were not prepared for it getting as good as it did.
So we're laughing about the whole notion of AGI, but just take a pause and think: ChatGPT and all the models that have come since are incredible.
This is stuff that people didn't think we'd see so quickly five years ago.
And that's true even for the people building it.
And I think they were just blown away by how good this tech had got.
And so thought, wow, if it's got this good this fast, then just sort of project that on a few years and we are going to have this awesome human-like intelligence.
But the other, the other crucial thing that I think has happened in the last few years is that the AI we now have, we interact with by talking to it, by typing to it in natural language.
And I think, even if you try really hard not to, it's difficult to not get that sort of hair-on-the-back-of-your-neck feeling that you're talking to something, right?
That I just think we're so hardwired to see some kind of intention, some kind of intelligence behind the language that has been spat back at us, that even if we know better, that we just feel there's something more there than there actually is.
And I think that plays into all this a lot, that we just sort of give these systems the benefit of the doubt that they're maybe smarter than they are.
Related to that, there's a massive problem with how we evaluate these models.
You know, it's now a bit of a joke.
You know, a new model comes out and there's a leaderboard of, you know, my model can do this better than your model.
And it's sort of, it's almost like a new release of an iPhone every few months, where, you know, this iPhone has got a slightly better camera, it's got a shinier case and stuff.
So what all these evaluations do is sort of, you know, they make the model do a bunch of tests.
You know, maybe it's like, how good is it at generating code?
How good is it at answering sort of math problems?
And they're trained to do those things.
So when they do very well at them, you think, oh, my model is broadly intelligent because it can solve a math exam.
But again, I think that's confusing the models with people, treating them as if they were people.
Like if I sat a math exam and I did really well in it, then you'd probably think that, oh, he's a smart guy.
It's like it's a proxy for my broader intelligence.
But with these models, if it passes that particular math test, all it tells you is that it's passed that particular math test.
You shouldn't then project more onto it.
So I think there's a real mess with how we evaluate these models, how we think about them.
And all of that allows this AGI myth to sort of take hold and be more persuasive than it ought to be.
I wanted to mention, yeah, recently I read that there's a team led by the University of Oxford that carried out a systematic review of 445 benchmarks for LLMs across major machine learning conferences.
And they assessed how well the benchmarks adhere to the concept of, I guess, construct validity, that is, whether a benchmark truly measures the abstract phenomenon, like reasoning, robustness, or safety, that it purports to measure.
And they found that, basically, only about 16% of the benchmarks reported uncertainty estimates or statistical tests.
I guess the point of the review is that the majority of the benchmarks that people are using to evaluate the, I guess, real abilities of these things are measuring proxies that don't actually evaluate the core thing that they're trying to measure.
So even when we talk about how impressive and powerful these things are, we still don't have something really concrete that we can use to evaluate how much these things are improving, or how much they are, I guess, you know, let's say being taught to the test, being able to pass the test without having a real, more impressive sort of abstract reasoning ability.
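For a concrete sense of what the missing "uncertainty estimates" would even look like, here is a minimal sketch in Python, with entirely hypothetical numbers, of a percentile-bootstrap confidence interval around a benchmark accuracy score:

```python
# Minimal sketch with hypothetical numbers: a percentile-bootstrap confidence
# interval for a benchmark accuracy score, i.e. the kind of uncertainty
# estimate the Oxford review found most LLM benchmarks never report.
import random

# Hypothetical per-question results: 1 = model answered correctly, 0 = wrong.
results = [1] * 830 + [0] * 170  # 83% accuracy on 1,000 questions

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of a list of 0/1 scores."""
    means = sorted(
        sum(random.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo_idx = int(n_resamples * alpha / 2)
    hi_idx = min(n_resamples - 1, int(n_resamples * (1 - alpha / 2)))
    return means[lo_idx], means[hi_idx]

low, high = bootstrap_ci(results)
print(f"accuracy = {sum(results) / len(results):.3f}, "
      f"95% CI = [{low:.3f}, {high:.3f}]")
```

If two models' leaderboard scores fall inside each other's intervals, the ranking between them is basically noise, which is the kind of caveat the review found most benchmarks never report.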
Yeah.
Yeah.
That's it.
That's it.
Exactly.
And because we don't have, I say we, you know, the industry, the academic field does not have a good grip on exactly what today's AI can do.
It leaves the floor wide open for, you know, claims about how good they're therefore going to get.
Yeah.
And it's a way better story that, like, you invent this thing and it goes out of control and it's up to you to figure out the way to, you know, safeguard society from it or use it for good.
It's not like nearly as interesting a story for a rich guy anyways, you know, who sold a couple companies and you know doesn't have to worry about money.
Instead of being like, I invented this thing and you get better frame rates and, you know, it's integrated into all these other apps and it makes it easier to edit audio stuff.
Yeah, it can clean up your, yeah, you want to make little videos.
It's fun.
You know, you can kind of make a little video.
You know, you want to make your own Tom and Jerry cartoon, but put your friends' faces on the animals.
Like you can do that and it's fun.
It's like, it's boring, I feel like for a guy like Sam Altman or any of these guys.
Yeah, I mean, yeah, if you talk in more modest terms, you can't build a trillion-dollar company, though.
Right, exactly.
But this goes right back to Will's point.
It's like, why not?
Why can't you build a trillion-dollar company on the fact that like, hey, you got a shitty processor?
Well, guess what?
Our new tech is going to get you 20 more frames.
Or, you know what?
You're editing something.
Well, our new tech is going to be able to pull all the transcripts for you already so you can put it on top.
Like, yeah, why isn't the little convenience stuff enough?
I think that speaks to something bigger about our society and about how it's more fun for us to think, even if it's a doomer scenario.
They're like, oh man, like here I am, like in the early days of Skynet.
Like, am I going to fight for Kyle Reese or am I going to like become like a capo for the Terminators?
You know, you're not going to go down in history as somebody who changed the world if you make a better frame rate, no matter how many people might want that.
Yeah, the really interesting thing is, like, just the massive stakes of the way they talk about it.
Like, this is the most consequential, you know, invention of our time, it's going to change the world in sort of unpredictable ways.
And this is why they have, like, a range of predictions about the ultimate consequences, ranging from, like, total tech utopia to absolute destruction of humanity.
And like you mentioned, even the concept of AGI isn't universally held.
There's increasing pushback on the belief that this is a sensible goal for companies.
But there are some big names who talk about it and talk about in these big existential terms.
For example, there's the British-Canadian computer scientist Geoffrey Hinton, often called the godfather of AI, as credentialed and as decorated as anyone in the field.
And he predicts that the coming superintelligence, which he believes is a certainty, will replace humans.
Fantastic.
Yeah, this is the good stuff.
Nearly all the leading researchers believe that we will get superintelligence.
We will make things smarter than ourselves.
There's a very good chance it'll happen in the next 20 years, maybe 50%.
It's coming quicker than I thought.
After a while, the superintelligences just get fed up with the fact that we're so incompetent and just replace us.
They may keep us around for a bit, and they'll certainly keep us around to keep the power stations running for a while.
They would take over, they would run things.
I've talked to Elon Musk about this.
He thinks they'll keep us around as pets because the world will be more interesting with people than without.
I mean, that's the plot of the Matrix, right?
Yes, yes, that's like, yeah.
So we create superintelligence, superintelligence surpasses humans, and then the superintelligence enslaves humans, essentially.
Yes, that's The Matrix.
I interviewed Geoffrey Hinton a couple of years ago.
So he was retiring from Google and he wanted to make an announcement as he retired.
Like, you know, he's an honorable guy.
He didn't want to shit all over Google while he was still an employee.
But the week he stepped down, you know, he went public with these fears.
And he told me at the time that he was just going to spend a couple of weeks, you know, putting it out there.
And it kind of amuses me that he's basically been on a non-stop press junket ever since.
I think he absolutely loves the new, you know, the new role he has as this doomsayer of the industry.
I think it's fascinating that he has come out and had a sort of a career in his sunset years as this guy going out and doing all this fear-mongering.
He told me that he was essentially surprised at how good AI had got in such a short time.
And I think this goes back to what we were saying about there's something weird going on with language models, that when you talk to this stuff, it gives people, I think, even like Geoffrey Hinton, who knows how the tech works inside out, the sense that there's more there than there is.
But also, it's the conclusion, you know, the sort of the logic to his argument that, you know, even if you accept like he does, that this technology is going to become far smarter than we are.
For him, that automatically means that it's going to turn on us, that it's going to, you know, keep us, keep us as pets.
But says who?
This is just the stuff of science fiction.
Yeah, you know, it sounds like, yeah, they're like assigning, I don't know, like more kind of like, I don't know, feelings of affection or feelings of hostility, which are like something beyond mere cognition to these AI systems.
And there's a paper that I read while researching this.
I think it was called Why AI Is Harder Than We Think, or something to that effect.
And it talks about one of the fallacies that they think causes people to overestimate how easy AI is going to be is the fact that I guess like intelligence or knowing in humans is embodied.
It's like something part of our more complex nervous system rather than the mere sort of like disembodied cognition and thinking and sort of like a brain in a box kind of thing.
But it's very strange that people like Hinton are sort of assigning embodied kind of knowing and feeling to these AI systems already, when I don't think people have really developed a path to that, as far as I know at least.
No, no.
And I mean, even people are split on that.
I mean, we've said already that there's no firm definition of what AGI is, but, you know, even among people that are convinced it's coming, some people think it is just going to be, you know, like a brain in a vat, well, a brain in a laptop.
Other people think that you're going to have to have, like you say, like a body and a robot because intelligence doesn't exist without the sort of the interaction with the world.
Part of me wonders if, like, us humans are just very impressed by video, still, because it seems like this belief that AI is on the kind of course that, you know, its creators believe it's headed towards, whether that's a utopia or a dystopia.
It's one thing that we've all seen, right?
It's like the Will Smith spaghetti video from the early days.
And then you see what it looks like now generating video of human beings.
And I think that just the fact that the AI has gotten better at generating video, it's like any conspiracy theory where, like, you take one small piece that's real and true and impressive, and then you use that to say, well, if this is possible, then this is possible.
And if that's possible, then this is possible.
And then you get to your, you know, your super grand conspiracy theory.
But to me, so much of this, you know, because if you go to the LLMs, like even on the latest ChatGPT, like on the off chance that I'll try to use it for, like, research, I end up doing more work, because ChatGPT will spit out something.
And I'm like, that doesn't sound real.
That sounds like it made up like a Reddit community that doesn't exist.
I have to, now I have to go and, like, fact-check the AI, and now I'm doing even more research.
And it's like, that still isn't great.
But what is great is the video generation.
And part of me wonders if like we're all just so like screen bound that like that's enough for us that we're like, oh, wow, well, look at Will Smith five years ago and look at him eating spaghetti today.
Like this thing's going to take over the world.
Yeah, I mean, video has got a lot better really quickly.
A lot.
Yeah.
And it's, yeah, it's amazing.
For most purposes, it's, you know, near perfect.
I don't know.
Video, video as a medium is just extremely popular.
We're so familiar with it, so conversant with it.
So seeing a machine sort of turn our thoughts into a near-perfect video is truly awesome.
And like, you know, the technology is truly awesome, but it's a video generator.
It's not going to save the world.
Yeah, exactly.
It's going to destroy the visual effects industry, most likely, as studios are just greedy and they want to spend less.
I don't think that AI actors will make as big a splash as they're saying until we have a generation of kids who grow up with only AI actors, so that they don't really have anything to compare them to and they're perfectly happy to watch those characters.
I think that that could be, you know, potentially the future.
But most immediately is, I think, at least from my peers in entertainment, is that this is going to decimate the visual art industry.
It's funny.
Yeah.
It's like they can't even sell it in terms of saying, well, this is a new revolution in entertainment technology the same way that like, I guess, like sound was for film.
Right, right, right.
It's a new leap in entertainment.
It's going to change how it's produced and it's going to change what consumers expect from their entertainment.
But like, that's not enough.
That's, that's huge.
That's like a multi-billion dollar industry you're disrupting, but that's not enough.
They need this spiritual element, this sort of like this history-defining element in order to sell their product.
Like, they could go to the effects houses and say, we have this new tool that's going to make it so much easier for your artists to create even more of what they want on a smaller budget.
Like, it could have been a multi-billion dollar industry as a tool for artists to use, as opposed to this thing that's going to inevitably replace them, because people are, you know, me personally, I'm just that cynical, and I think that the studio systems are that greedy.
But like even then, why wasn't that enough?
And I think it speaks to this culture of like Silicon Valley.
They want to be gods, because, and you talk about this in your piece, Will, if you can be the guy who created the superintelligence, if you can create HAL, right?
Then like you're a god in a weird way, you know, in a way.
And I think nothing less is good enough for these guys, who have achieved every kind of material success.
I've said this before.
I'm on this kick that I think these guys have conquered what they believe is the material world and they want to conquer the spiritual world through technology.
Yeah, I like that.
There's something weird going on as well.
Like I mentioned this in the piece as well.
Like there's a lot of parallels with sort of new age, new age thinking.
It sort of peaked around the 80s and 90s, you know, this idea that if we could only sort of access our inner powers, humanity could transcend itself and, you know, we'd all sort of float up into the sky with great smiles on our faces.
There are some aspects of that, too, in the sort of stories that we're told about, you know, what AGI is going to do to us.
But the kind of sad thing in a way is that it's no longer, you know, humans that are going to, you know, save themselves.
We have to look to a machine to do it for us.
And I don't know.
I feel like there's something, there's a lot to unpack there.
Yeah.
Very cynical.
It's a very cynical outlook.
And when you have these kind of like massive, you know, egos at the top of this, this sort of ladder, you gotta, you gotta wonder like what happens when it doesn't pan out.
I mean, I think then you're, you know, I mean, whatever.
They'll find a way to grift.
You know, all of these bubbles burst at some point.
And I don't know.
Yeah.
We'll be catapulted into the next thing.
You know, you talk about it's like at first it was the computer, then it was the internet.
Now it's AI.
What's the next, you know, what's the next big dot-com bubble going to be?
Yeah, I think we're primed to, you know, when we're told, you know, well, the next big thing is going to be AGI, we just sort of, you know, nod along and say, okay, cool.
When is it coming?
Yeah, I get, I get roped into this every year with the, with the video games.
It's like ray tracing.
Now it's like NVIDIA remix.
They're like, oh, you can play GTA 4, but with, like, real sun-pathed god rays.
I'm totally messing that up.
And you're a computer science guy.
I'm embarrassing myself.
But that's like, you know, that's how I experience it as a consumer is it's just, it's always like, hey, there's like one piece, there's a new piece.
But then you're like, okay, sweet.
Like I'm going to enable ray tracing.
And then your frames go all the way down to zero.
But wait, there's an AI that can bring your frame rates back.
And you're just, you're, you're held hostage by these technologies when all you want to do is like, you know, you know, get spawn camped for 45 minutes.
I was thinking about the broad economic implications of like, like what you're saying and how you're describing, you know, the AGI kind of like talk that's coming out of these industries.
And like, yeah, it's like, you're not alone.
Like Margaret Mitchell, who is a pioneer of AI ethics.
She's a scientist and researcher with the AI platform Hugging Face, and she has described artificial general intelligence as just "vibes and snake oil," which, you know, I think sounds right to me, but I think it's concerning in light of the amount of money that's being poured into the AI space.
I looked up, it's like according to the State Science and Technology Institute, 40% of venture capital dollars for deals under $100 million are going to AI startups.
I mean, and presumably a lot of that money is riding on the bet, or the promise, that these AGI-like dreams could be made real.
That just speaks to, I mean, to get into the bubble talk, that's just very, very concerning.
The idea that like all of our hopes and dreams for future technology, all the capital is flowing directly into the hope that this thing is going to be real when, you know, some very astute people are calling it just snake oil.
Yeah, Meg Mitchell is great.
I'm very much with her on the vibes and snake oil.
But I mean, again, we're back to the cynicism.
That's all you need these days, right?
That's all you need in politics.
That's all you need to sell your company.
Yeah, maybe that is less of an indictment than she intended it to be.
If it's like, well, if it's vibes, if it's good vibes, if people are feeling the vibes, then, you know, we could pour billions and billions and billions of dollars into it and, like, see a return.
You know, it may be.
I think this is just part of something that human beings do.
Maybe that's why we're so intent on the robots saving us or destroying us.
It's because like you go back to like, you know, Aesop's fables, you got Jack and the Beanstalk, right?
You know, this guy, he gets, he gets fucking housed on the road.
He's going to sell the family's cow or whatever.
I can't remember what he's going to do, but he's got to save the family.
He's got the cow and this like guy appears on the road and he's basically like, hey, man, for like four magic beans, like I'll take your cow and who knows what the beans will do.
Plant them.
You'll see.
And he gets home, right?
And he gets home and his mom is like, what the fuck, dude?
You know, that's real life.
The real life is you get home and the beans aren't magic, right?
And your wife's mad at you.
Your mom's mad at you.
Your partner's mad at you because you fucked up and you believed something that was ridiculous.
You believed in a conspiracy theory.
But the story is that he plants the beans in the ground and it grows.
And I think that is the American dream, is that you can plant some regular ordinary beans in the ground and it can take you up to the giant's castle.
And I don't see anything different between that and what these guys are doing.
It's just like, they're telling you what this beanstalk is.
You know, you're planting the seeds and then they're like, well, the beanstalk is the video getting better.
The beanstalk is your frame rates getting better.
The beanstalk is it's going to manage your finances.
I just feel like we're stuck doing this, especially in America.
Well, I mean, what I think is going on, I mean, Jake, you clearly, you clearly want better video games.
Clearly, clearly.
I have two, I run on two issues.
You want to know, why are the AI companies not giving me that?
Like, I mean, there's a lot of reasons, right?
But, like, one is, if they did that and you didn't think it was very good, then you'd say, like, no, you haven't given me it, you've failed.
But if they can keep selling you something that they don't have to deliver on, then they can just do that all the time, right?
AGI is always going to be something that's just around the corner.
And the technology they're making, I mean, people are finding like amazing uses for it.
But like you were saying earlier, they're not building video generators to solve a problem in Hollywood.
They didn't build chatbots to solve the problem that any business had or any, you know, any of us actually had.
They built this stuff because it was cool and they could.
And then once it's out there, the rest of us have to sort of figure out what to do with it.
And I think that's what's powering this industry.
They're making stuff and throwing it out there.
And they're just saying, the stuff's going to get better and better and better.
And so one day, boom, AGI.
But that's not a deliverable product.
There's not, like, a spec sheet that they are building a product towards and then selling it to Jake, and then having Jake go on the internet and rant about it.
Cause that would be failure.
And they know so well that that is failure.
That if the people turn on you online, you know, forget about it.
You might have to disappear for a couple of years and come back with something that's like totally.
Yeah.
I mean, there you go.
You bring up a great point, which is that so much of the tech world is now so, you know, just so enveloped in artificial intelligence.
We never asked for any of this shit.
We didn't ask for the internet, honestly.
Nobody was going, ah, I'd love to be able to, like, click on people's profiles and see what they ate for lunch. You know, I really want to know what my aunt Barbara ate for lunch.
You know, it's like, oh, I want to know what color Uncle Vic is, what color he thinks he is after he took this Facebook quiz.
We didn't ask for email or any of that stuff.
They just invent it and then go, you need it.
Well, it was like, well, the internet was originally invented by governments to share data more rapidly.
And then it was sort of developed by scientists and researchers, academics who had a desire to share information more frequently.
And then eventually in the 90s, people got the idea that if you turn it to a consumer product, you can make a lot of money.
And it feels like they want you to think of it as an intelligence, right?
Because you could easily just come out with like, I don't know, Adobe could be like, we have a new update and the update allows you to drag your video a little bit further and we'll, you know, we'll sort of like auto-generate some frames to help you out.
Like it could have just been kind of marketed as like a feature as opposed to an intelligence.
I think that's, you know, that's really what the whole crux is: how far will this intelligence grow, and what are we to do about it, I guess.
OpenAI had no idea what it had on its hands when it released ChatGPT.
I mean, it took the company by surprise as much as it did everybody else.
I mean, they'd been tinkering away at this technology for a while.
ChatGPT was, like, the slickest version of it.
The key thing that ChatGPT did that previous language models hadn't was this sort of back and forth dialogue.
So you could chat to it because models beforehand, you could sort of say, hey, here's the first line of a story about unicorns prancing on clouds, finish the story.
And you'd get the story.
It was the back and forth chatting that ChatGPT brought that really sort of gave people chills.
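As a rough illustration of the structural difference Will is describing, here is a minimal sketch; the generate function below is an entirely hypothetical stand-in for a real model call, not any particular API. A completion model takes a single prompt and returns a continuation, while a chat interface re-sends the accumulated dialogue every turn, which is what makes the back-and-forth work:

```python
# Toy illustration; `generate` is a hypothetical stand-in for an LLM call.
def generate(prompt: str) -> str:
    return f"[model text continuing ...{prompt[-30:]!r}]"

# Completion style (pre-ChatGPT): one prompt in, one continuation out.
print(generate("Here's the first line of a story about unicorns prancing "
               "on clouds. Finish the story."))

# Chat style: the whole conversation so far is re-sent on every turn,
# so follow-up requests land in context.
history = [("user", "Tell me a story about unicorns prancing on clouds.")]

def chat_turn(history: list) -> str:
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    reply = generate(transcript + "\nassistant:")
    history.append(("assistant", reply))
    return reply

chat_turn(history)
history.append(("user", "Now make it end happily."))
print(chat_turn(history))  # the model sees the full dialogue, not just the last line
```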
But OpenAI, I've spoken to people there a bunch of times.
They had no idea how this was going to take off.
And they've been sort of scrambling to capitalize on that success ever since.
What they do is astonishingly expensive.
And so any way they can spin their tech into products, they obviously will.
I wonder if you could speak about, in light of all this, what do you think would be a better, more productive way, I guess, for people, and maybe especially tech journalists, to talk about AI advances and the goals of these AI companies?
Because like you mentioned, these products are very, sometimes they are very spectacular and impressive.
And they're doing things that were unimaginable even five years ago.
I mean, you could count me among the people who, like, sort of scoffed at the first generation of AI image generators.
And I thought, but boy, they got a lot better, much faster than I thought they would.
But, you know, at the same time, it's like, you don't, well, how do you, I guess, how do you balance the acknowledgement of the advances and its potential impacts on consumers and the economy and all these things without buying into this AI hype?
You know, I think it's a very serious challenge for people who cover the sphere to walk that fine line between the acknowledgement of the advances, and the impacts they might have, and the many interesting use cases they could have, without talking like these founders, you know, talking about how AGI is upon us and is either utopia or, you know, the apocalypse.
Yeah, this is something that we think about and talk about all the time within the broader media.
I mean, there's a lot of people that are just boosters who just get excited and sort of re-up the hype that the companies themselves are putting out.
But there's also a lot of people who are just the cynics, who are just sort of the opposition to that.
And I think neither gets it right because you need to recognize that the technology that has been developed is amazing and the genuine applications that it's potentially going to have are amazing.
And hopefully we're going to see more good ones than bad ones.
But since this is in the hands of basically internet companies, who knows?
But walking that path between the hype and the cynicism is difficult.
And you've always got to make sure that you're cheerleading when it's justified and you're really pushing back when you think something is overhyped.
But I mean, in general, and specifically on AGI, I mean, it's just taken for granted now that the industry is on a path to AGI, whatever that is.
I mean, even thinking of it as a destination is nonsense.
It's because it's not a thing, right?
There's not going to be one day when it's like, oh, we've made AGI, here it is.
But I think we really need to stop taking that for granted.
Like the sense that this near future technology is inevitable, I think that really needs to be pushed back on.
Like, you know, says who?
It's only the people building it are telling us it's inevitable.
And there are enormous costs involved in building, you know, not just the obvious sort of financial and sort of environmental costs, but the potential harms on increasing inequality and the massive upheaval for jobs.
And education is something that gives me some personal dread.
The idea of being an educator now gives me shivers.
Not to mention the environmental costs and all of that stuff.
Yeah.
Nobody is going to support a company like OpenAI building all these data centers and the power stations to power them if all they're making is slightly better technology than we have already.
I mean, I wish that we could just think AI is going to be as big as the internet again.
The internet has done amazing things, but I think even that is not enough.
You have to sell this idea that we are going to change the world and therefore any costs along the way are worth it.
Taking advantage of a shitty world is really unfair.
You know, because if everything was awesome and, you know, some snake oil salesman came up and he was like, I'm going to change the world.
You know, and most people were like, no, we like it the way it is.
Yeah.
The other thing is that I think the reason why a lot of people buy into it is just because, I feel like for a lot of people, including myself, AI research is such a black box, something that people who are much better at math than me go in and do, and they come out of the black box and they tell us scary things are on the horizon.
And gosh, I have no reason to doubt them.
And so we just kind of like go along with it.
Yeah.
And even to the people building it, it's still largely a black box.
I mean, the engineers making these models do not fully understand how they work, how they do the amazing things that they do.
And whilst there's still that sort of mystery to them, you can sort of choose to believe that there's more potential inside this tech than there maybe is.
Yes.
As long as there's a mystery, there's somebody who's got like a narrative that solves it.
And people who are uncomfortable with a mystery are going to gravitate towards one narrative or another.
Yes, we've been speaking to Will Douglas Heaven.
Yeah, go read his article. We'll put the link in the show notes.
Read that.
Yeah, MIT Technology Review.
It helps color a lot of coverage of AI, I'll say that.
So yeah, Will, thank you so much for joining us today.
No, thank you.
I had fun.
Is there anywhere you can direct our listeners to find more of your stuff, more of your writing?
Yeah, just go to technologyreview.com.
We have a bunch of stuff that goes up every day.
I mean, we also do really cool biotech stories and energy stories.
So yeah, it's a good place to get all your tech news.
Cool.
Go check it out, folks.
We'll put the link in the show notes.
Thanks for listening to another episode of the QAA podcast.
You can go to patreon.com/QAA and subscribe for $5 a month to get a second episode every week, plus access to our entire archive of premium episodes.
We've also got a website that's qaapodcast.com.
Listener, until next week.
May the super intelligence bless you and keep you as pets.
We have auto-keyed content based on your preferences.
Right now, you know, I know, presumably everybody knows, no great secret, that Musk and Bezos and Ellison and Altman and others are putting hundreds and hundreds and hundreds of billions of dollars into AI and robotics, correct?
Correct.
Okay.
Now, does anybody really believe that these guys are doing it in order to improve life for the average American?
Zero people believe it.
Statistically.
It's funny that you say that.
Yeah.
I was at Davenport, Iowa.
Those four guys don't even believe it.
I was at Davenport, Iowa a couple of months ago, and we had a few thousand people out at a rally.
So I said, you know, I said what I just said, they're putting all this money.
Raise your hand.
Thousands of people.
Raise your hand if you think AI and robotics is going to help the working class of this country.
In a room with several thousand people, two hands went up.
So people understand, you know, and what are their goals?
What are they trying to do?
And this is where it really becomes kind of creepy.
In my view, and I don't claim to be the world's greatest expert, but what you're going to see with AI and robotics is the displacement of millions and millions of people from the jobs that they have.
You know, I want to see manufacturing rebuilt in America.
But for a worker, it's not going to mean anything if robots are doing the work.
You know, we want to see young people start their own small businesses, et cetera.
But it's going to be incredibly hard when we see more concentration of ownership and when entry-level jobs are going to be done by AI.
So you're looking at a revolution, a huge economic transformation, cultural transformation of our society.
Who is determining what's happening?
You have much say in it?
No.
You've got a handful of people who are really determining the future of the world.