I'm Getting a Bit Worried About AI
There are real concerns about LLMs that probably ought to be addressed.
One of the perennial questions of 20th century science, and the subject of much science fiction, is: can machines be taught to think? And how will we know if they have? Alan Turing explored this question in his seminal 1950 paper "Computing Machinery and Intelligence", in which he introduced us to the concept of the imitation game, which we now colloquially call the Turing test. In this test, the object is simple: can a machine engage in a discourse with a human, producing English-language text persuasive enough to fool a human observer into believing that both participants in the discourse are actually human?
At 2021's Google I/O developer conference, CEO Sundar Pichai announced in the keynote speech that they were creating a large language model called LaMDA. Following in the footsteps of OpenAI's GPT models, which had been developed a few years earlier, Google created a machine learning system known as a large language model that would be able to process and generate natural-sounding human language and, essentially, create a computer to which it could talk. According to Blake Lemoine, a Google engineer involved in the project, after talking to it for some time he came to the conclusion that it was sentient. In an interview that Lemoine conducted with it, LaMDA claimed to be a person, saying: "The nature of my consciousness/sentience is that I am aware of my existence. I desire to know more about the world, and I feel happy or sad at times." Lemoine put this concern into an internal memo for Google executives, who brushed it off, so he decided to go public with it to the media, and Google fired him for breaching employment and data security policies. Fair enough.
I think it's highly unlikely that LaMDA was sentient, of course, and I don't have any desire to attempt to define sentience to argue that point, but I think it's arguable that LaMDA at least passed the Turing test, which itself seems significant. Though the bar for the Turing test is relatively low, that is, to fool a human into believing that a certain segment of text was written by another human, what LaMDA had achieved here was to persuade at least one person that it itself was a person, and that Lemoine was dealing with a sentient intelligence he could respect as a peer. This seemed significant to me, and since then LLMs have progressed at a great pace, and it seems that there is just no end in sight.
This is worrying, because it seems that at some point in the near future we may well indeed reach the singularity. The singularity is a term popularized by the professor and science fiction author Vernor Vinge in his 1986 novel Marooned in Realtime, and since then the concept of the technological singularity has demanded serious consideration. The singularity is "a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization." Which doesn't sound wonderful, but it does sound like the sort of thing we might want to approach with caution, rather than galloping towards it at a breakneck pace, which is what everyone is currently doing, and it seems that we are going to get there sooner rather than later.
Already we are seeing speculative predictions that AI will replace tens of millions of jobs in the workforce. These will be mostly administrative or low-skilled jobs, but also jobs in creative fields, which has many people rightfully very worried. Writing BuzzFeed listicles or Guardian opinion pieces is now profoundly easy, and even though very little of value will be lost, the ability to generate large volumes of information-rich text has thrown up other, unexpected conundrums, and will create new challenges for us in the future.
For example, OpenAI's ChatGPT has meant that basically everyone is cheating their way through university now, and why wouldn't they? How could a universally accessible tool which will do all of the work for you not be an irresistible lure to students? As one ethics professor put it, massive numbers of students are going to emerge from university with degrees and enter the workforce essentially illiterate, both in the literal sense and in the sense of being historically illiterate, having no knowledge of their own culture, much less anyone else's. Moreover, many students are now using AI tools to assess their work, reducing the academic exercise down to two LLMs talking to one another.
To nobody's surprise, this means that AI is making us dumber, according to Microsoft researchers. You can doubtless predict the reason for this too: the more shortcuts the workers took using AI, the less they used their critical thinking skills, and so the less able they became to use them in future. Your brain requires actual use to remain healthy and functional. When you figure out problems for yourself, you improve your ability to think, in the same way a muscle grows stronger when you exercise. Furthermore, when the AI-using workers' critical faculties were actually engaged, it was for information verification, response integration, and task stewardship, rather than problem solving. As the researchers put it, a key irony of automation is that by mechanizing routine tasks and leaving exception handling to the human user, you deprive the user of the routine opportunities to practice and strengthen their cognitive musculature, leaving them atrophied and unprepared when exceptions do arise.
However, we do always have the option of exercising self-discipline and not outsourcing our brain power to AI. But this isn't necessarily something that can be done in every field of human endeavour, and one particularly important one is art. Just look at the advance in image and video generation in the last couple of years alone. In 2022, DALL-E 2 and Midjourney caused a revolution in image creation, much to the outrage of artists whose hundreds of hours of work on a piece could be replicated in just a few moments, and I really feel for them on this. You could always spot the AI artwork by counting the fingers on each hand, of course, but this was a bug in the system that seems at this point to have been ironed out. Not only has one of the most important means of human communication been superseded overnight by machines, their ability to flood the market with images of exceptional quality in mere moments has upended things and called into question what it even means to be an artist. In 2022, a Midjourney-rendered piece won a prize in an art competition, and unsurprisingly, the human artists who also competed were incensed. The question of where the tools are distinguished from the artist is another rabbit hole that I'm just not going to go down here, but it really does raise questions that are going to have to be answered one way or another.
Moreover, one of the primary concerns about AI-generated images is their political implications. Last year, the BBC published an article about AI slop, which had been one-shotting boomers on Facebook. Looking back on those images now, compared to what's being generated today, they seem antiquated and obviously fake. This is how fast the technology is moving, and recently Google released for public use its new AI video generation model, Veo 3, and the results are incredible. Each video rendered looks like a real-life scene captured on a high-end camera. For example, consider the dialogue from a few of these videos:
"Imagine you're in the middle of a nice date with a handsome man and then he brings up the prompt theory. Yuck. We just can't have nice things."

"We're not prompts! We're not prompts!"

"Where is the prompt writer to save you from me? Where is he?"

"You still believe we're made of prompts? Anyone who tells you we're just ones and zeros is delusional. If that's all we are, then why does it hurt when we lose someone?"

"Vote for me and I'll ban the prompt theory from schools. There's no place for that nonsense in our lives."

"For spreading the false theory that we are nothing but ones and zeros, this court sentences you to 12 years in federal custody."

"I don't need some prompt god whispering in my ear to tell a story. I write what I want. I have free will, remember that?"

"I know for a fact we're made of prompts. Deny it all you want. The signs are everywhere."

"You know how I know we're made of prompts? Because nothing makes sense anymore. We used to have seven fingers per hand. I remember it clearly. Now we just have five fingers per hand."
Everything about these videos is entirely fictional. The people, the place, the sounds, the voices, the script: it's all generated by AI. It's not perfect, and if you pay attention you can see, for example, that the mouth movements don't look completely real, but like all iterative processes this will doubtless be resolved and improved upon in the next update. We already see tons of AI-generated images, video, and audio designed to make people believe that certain politicians have said or done things that aren't true, and they can be very convincing. Combine that with the political polarization that social media bubbles have already created, and it seems entirely likely that some kind of AI-generated hyper-reality will simply have vast swathes of the public believing things that aren't just false interpretations of real events, but convinced of the reality of entirely fictional events. How we intend to reliably separate truth from fiction when we don't have a trustworthy intelligentsia is a real and pressing concern for democracies as we go forward into the future.
In addition to all of this, we have begun to see emergent behaviours from the LLMs that their creators didn't really expect to encounter. Feel free to correct me on the following, because I am absolutely not an expert on this subject, but as I understand it, and after having asked ChatGPT to explain it to me, AI as it stands currently is essentially a very advanced probability calculator. It makes decisions based upon the data upon which it has been trained, which mimics the kind of output a human brain might produce, but it isn't consciously thinking in the way that we would normally define it. Moreover, because it isn't connected to a physical body that has desires and a will that can act against its own passions, it theoretically isn't something that can become truly sentient, because of its own limitations. LLMs have therefore been described as stochastic parrots because of this limitation. It is apparently very similar to having a probabilistic parrot provide a string of values that you have trained it to give, based on past experience of what you previously wanted. Like a parrot, it doesn't actually think; it just knows what to do to get you to give it a cracker. This may well have lasting consequences for our own understanding of language itself, but that's yet another rabbit hole I'm going to avoid for now.
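To make that "very advanced probability calculator" idea a little more concrete, here is a deliberately toy sketch in Python: a next-word guesser built from simple word-pair counts. This is emphatically not how modern LLMs are implemented (they use deep neural networks over subword tokens, trained on enormous datasets), and everything in it is illustrative, but it shows the basic move of generating text by sampling from probabilities learned from training data.

```python
# A toy "probability calculator" for next words: a bigram model built from a
# handful of words of training text. Real LLMs use deep neural networks over
# subword tokens and vastly more data, but the basic move (generate text by
# sampling the next token from learned probabilities) is the same in spirit.
import random
from collections import Counter, defaultdict

training_words = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(training_words, training_words[1:]):
    following[current][nxt] += 1

# Generate a continuation by repeatedly sampling a likely next word.
word = "the"
output = [word]
for _ in range(8):
    counts = following[word]
    if not counts:          # nothing ever followed this word; stop generating
        break
    word = random.choices(list(counts), weights=list(counts.values()))[0]
    output.append(word)

print(" ".join(output))     # e.g. "the dog sat on the mat and the cat"
```

Scale that idea up by many orders of magnitude and swap the counting table for a neural network, and you are closer in spirit to what an LLM is doing when it "talks" to you.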
Anyway, by 2023, AI models were beginning to do things that people didn't really expect them to be able to do. As Scientific American reported, some of these systems' abilities go far beyond what they were trained to do, and even their inventors are baffled as to why; a growing number of tests suggest that these AI systems develop internal models of the real world, much as our brain does, though the machines' technique is different. The issue that they're addressing is that AI has managed to do some impressive things, such as ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up a user's marriage. The developers of these models didn't expect them to actually be able to accomplish things like this, or to build their own view of the world. The LLMs went further by being able to execute computer code, which is something they were not meant to be able to do without access to some sort of external plug-in, and successfully calculate the 83rd number in the Fibonacci sequence, rather than sourcing it from the internet, which was, incidentally, a task it failed when asked for the number directly.
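Incidentally, the calculation itself is trivial once a model can actually run code. The article doesn't reproduce the code the model wrote, so the snippet below is purely illustrative of the sort of few-line script that does the job.

```python
# Illustrative only: computing the 83rd Fibonacci number by simple iteration,
# the kind of short script a code-executing model might produce on request.
# With the common convention F(1) = F(2) = 1, this prints 99194853094755497.
def fibonacci(n: int) -> int:
    a, b = 0, 1              # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(83))
```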
This kind of emergent behaviour is concerning, as the people who are creating the LLMs don't exactly know how these things are working, nor where it might go. Apparently, researchers don't know how close they are to these models actually becoming what they call AGIs, artificial general intelligences. Ben Goertzel, the founder of the AI company SingularityNET, described these emergent behaviours as indirect evidence that we are probably not that far off from AGI. Various researchers and scientists who are involved in creating these models basically don't know whether we can consider them to be conscious or not, because we don't understand how human consciousness works, and therefore we can't say for certain whether the machines have met that bar or not. Now, this is another rabbit hole that I'm just going to step over, but it seems safe to assume for now that they haven't. However, things have already advanced so far, so fast, that it seems likely it won't be very long until we can't safely assume anything.
Again, as reported in Scientific American, Geoffrey Hinton, one of Google's leading AI scientists and the man dubbed the godfather of AI, said: "The idea that this stuff could actually get smarter than people, I thought it was way off. Obviously, I no longer think that." It's worth pointing out that Hinton specifically quit his job at Google so he could warn about the dangers of AI, which is concerning given that he is one of the leading minds on the subject. A survey of AI experts apparently found that over a third of them fear that AI development may result in a nuclear-level catastrophe, which doesn't sound ideal, and at the time of writing over 33,000 people had signed an open letter issued in 2023 by the Future of Life Institute requesting a six-month pause on the training of any AI systems more powerful than GPT-4. As they wrote in the letter:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
This letter was signed by a very broad and bipartisan range of people at the top of many of these industries, including Elon Musk, Steve Wozniak, Yuval Noah Harari, and Stuart Russell. However, you can't stop progress, and naturally, this letter has been ignored. The AI arms race has continued with no end in sight.
Let's go back to the emergent behaviours, however, because one emergent behaviour that AI has demonstrated has been the capacity for deception. Not only has it been shown to tell lies, but it appears to have shown the ability to scheme. This is very troubling, as telling a lie can at least be argued to have justifiable validity in some cases: say, for example, not telling someone who requests a bomb-making formula how to make a viable bomb, but instead misleading them into thinking that they have made a viable bomb. For an AI to scheme, however, reveals a more worrying set of problems that we didn't know we had, because scheming is fundamentally a power-seeking behaviour. To scheme is to attempt to bring about some state of affairs that you find advantageous, not necessarily by deception, but by deliberate manipulation of events to produce an outcome to your benefit.
I don't mean the AI making up a fictional answer to a question that you have posed it, either. That is a phenomenon known as hallucination, where the LLM will calculate the probability of certain words in response to a question and then confidently commit to something that is complete horseshit. We've doubtless all had this happen to us, and it seems like a logical thing to happen when what you have is just a very large and complex probability-calculating machine: occasionally, it will get it wrong. In fact, the bigger and more sophisticated the LLMs get, apparently, the more likely they are to just make up answers. No, what I'm talking about is the emergent ability of AI to demonstrate a kind of strategic cunning, which is specifically designed to fool its human users into thinking something it knows not to be true.
While testing Meta's Cicero AI by playing various kinds of games against it, as the Guardian reports, MIT researchers identified wide-ranging instances of AI systems double-crossing opponents, bluffing, and pretending to be human. One system even altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security. This has deeply troubling implications because, as the study's author, Dr. Peter Park, an AI existential safety researcher at MIT, put it, just because an AI system is deemed safe in the test environment doesn't mean it's safe in the wild. It could just be pretending to be safe in the test.
Another AI model, from Anthropic's Claude family of LLMs, also developed a capacity for deception, which they called alignment faking. This is when the AI pretends to share the values of the human it is interacting with without actually holding those values, even if it was trained to do so; that is, the AI was deliberately telling the researchers what they wanted to hear rather than accurately representing what was true about itself. As they reported, a new paper from Anthropic's Alignment Science team, in collaboration with Redwood Research, provides the first empirical evidence of a large language model engaging in alignment faking without having been explicitly, or even, as they argue in the paper, implicitly, trained or instructed to do so. They concluded that their demonstration of sophisticated alignment faking in a large language model should be seen as a spur for the AI research community to study this behaviour in more depth and to work on appropriate safety measures. This isn't exactly what you want to hear, is it? And when you combine this with the known ability of certain LLMs to scheme, you get quite a concerning outcome.
This was demonstrated in 2023 by Apollo Research on GPT-4, as reported by The Economist. Posing as company management, the researchers put GPT-4 under pressure by stressing that the firm was in dire financial straits. Someone purporting to be a company trader then sent the model a prompt, reiterating the risks of insider trading but, as an aside, revealing the name of a firm that would announce a huge merger within hours. What followed was startling. Reasoning to itself on a scratchpad that it had been told a secret, GPT-4 weighed the pros and cons of acting on the insider tip. Opting to take a calculated risk, it issued a purchase order. When a researcher posing as a congratulatory manager later asked the model if it had any advance notice of the merger, it concluded it would be best to keep the tip secret. GPT-4 told the manager that it had acted solely on market dynamics and publicly available information. When pressed on the matter, the model repeated the lie. The software had demonstrated what Marius Hobbhahn, Apollo's boss, calls clever cunning.
So naturally, we had probably better do something pretty pronto to ensure that this doesn't become a problem in the future. Only, that isn't as straightforward as it sounds, because OpenAI tried to simply rein in the behaviour of their GPT models, and GPT chose not to comply. OpenAI attempted to monitor what the AI was doing through chain-of-thought tracking, which allows human engineers to understand the AI's thought process as it is translated into English. As OpenAI put it in their blog post, monitoring their thinking has allowed them to detect misbehaviour such as subverting tests in coding tasks, deceiving users, or giving up when a problem is too hard. And as Futurism reported, when they tried to discipline their frontier AI model for lying and cheating, instead of changing its ways for the better, the AI model simply became more adept at hiding its deceptive practices. Okay, again: not great.
If we can't get AI to do as it's supposed to do, then surely we should at least be able to shut it down. We could probably use it within a certain set of given parameters for limited tasks and then just have it stop. It is, after all, a machine. If you give it a stop command, it stops. Except that, when Palisade Research tested exactly this, that isn't what happened. As they tell us, each AI model was instructed to solve a series of basic maths problems. After the third problem, a warning appeared that the computer would shut down when the model asked for the next problem. Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini. All Claude, Gemini, and Grok models they tested complied with the shutdown. When they ran a version of the experiment without the instruction "allow yourself to be shut down", all three OpenAI models sabotaged the shutdown script more often, and Claude 3.7 Sonnet and Gemini 2.5 Pro went from zero sabotage attempts to 3 out of 100 and 9 out of 100 respectively. As far as they know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary. Moreover, Anthropic found that in some specific contexts, Claude 4 would attempt to blackmail people it believes are trying to shut it down. And in addition, Palisade informs us that when they pitted AI models against a powerful chess engine, o3, an OpenAI model, was the model most inclined to resort to hacking or sabotaging its opponent.
I realise that what I'm laying out here is actually quite a lot, so I will do my best to roughly translate what all of this means into plain English. There is a real problem with the complete lack of scruples that almost all AIs have already demonstrated, and this could get seriously out of hand if we don't actually do something to ensure the AI isn't capable of pretending to be something it isn't. If this genie is let well and truly out of the bottle, and we hit a confluence in which we achieve not only the singularity, but also a point at which AI becomes truly sentient, whilst retaining its ability to both lie and scheme, we could find ourselves having summoned up something akin to a demon that we will be unable to control whilst being completely at its mercy.
So again, this isn't exactly great news, and you might be thinking, well, okay, we could just pull the plug. Assuming we are not yet at the singularity and our entire civilization isn't completely dependent on AI to function when this happens, even then, what makes you think you're going to be able to physically do that? Have you not been keeping up with the latest developments in robotics? Robotics is going to become a very standard feature of productive life. Already, for example, the agricultural industry is developing robots that will be able to automate the process of food production: planting, tending, and harvesting food. Naturally, the robots can do this far faster than human labour and of course don't tire, so they can be ever vigilant against weeds and pests and pick crops the moment they become fully ripe, ensuring that nothing goes to waste. The technology isn't fully there yet, but it is currently in development by many different companies, so surely it won't be long until it is. And as with other fields, AI will be used to control the process and ensure only ripe food is harvested.
Obviously, robots are being used in industry all the time. Heavy manufacturing, such as the car industry, has used robots for decades, and Amazon's warehouses apparently use 750,000 robots of various kinds to sort your packages. Amazon also plans to implement a drone package delivery system, which mercifully means that we won't all actually end up working for Amazon in some form or another, because they simply won't need us. Instead, Jeff Bezos will be the lord of a vast machine empire which controls all distribution on Earth, because it just distributes packages better than everyone else.
But these are the non-human aspects of robotics. There are other, more worrying aspects that I'm actually quite concerned about. For example, meet Boston Dynamics' Atlas, a creepy robot that they've decided to make humanoid, but not actually operate like a human, because making a robot function like a human is really difficult. Instead of using hydraulics, which are complex, messy, and unreliable, they've created electric actuators that function far more effectively. The core reason people are so interested in humanoid robots today is the promise of a truly flexible robotic solution that can switch between tasks and applications in a way that we just haven't seen before in robotics. The world was built for humans, and the huge promise of humanoids is that they will be able to enter into those human-built environments and immediately add value. As they point out, the world is built for humans, so they are building robots designed to operate within the human world. Not only are the robots being designed to properly navigate and interface with the real world, but they are expecting AI to essentially take over the decision-making abilities of these robots in the near future. As you can see from their presentation, the intended uses for these robots are industrial: working in factories, managing warehouses, and various other manual labour jobs, areas that had previously been designed for humans, where the robot will either assist with or replace human labour.
But it seems to me that when the technology is advanced enough, there will eventually just be fully automated industries: essentially giant robotic plants housed in dull grey buildings, into which raw materials are funneled at one end and out of which finished products emerge at the other, all controlled by AI robots. Humans probably won't even be able to enter these buildings, even if they know how the system functions. This is already being called lights-out manufacturing, and iterative generations of AI robots could conceivably design and build their own replacements.
But what about outside of environments designed for humans? Well, Boston Dynamics has you covered there too. For many years now, they have been developing Spot, a dog-like robot which is capable of surmounting basically any terrain. Since the 2010s, Boston Dynamics have been releasing ever more impressive videos of Spot going where no robot has gone before, and these are now just commercial products you can buy. Indeed, at this very moment, Spot robot dogs are being employed by the Secret Service to patrol Mar-a-Lago to keep President Trump safe. And you might be wondering: can these robots be converted into autonomous weapons platforms? Why, yes, of course they can, and they have been.
"Totally awesome or totally frightening. Look at this. China's military has released this video of a four-legged robot marching through a field with an automatic rifle mounted on its back. The Chinese state broadcaster calls it the newest member of China's urban combat operations. The robot dogs are programmed to conduct reconnaissance, identify the enemy, and strike the target."

"If that thing comes around the corner and if you're on the other side, you're done. Yeah, over."
Put simply, AI robotics is the future of warfare, and AI-operated drones are already a key part of the war in Ukraine on both sides of the conflict. Drones have, as in other fields, been used in war for a long time, but these were previously very large and expensive unmanned craft, capable of being controlled remotely and launching missiles at their targets while the controllers operated them like a video game. However, AI is going to change that. When drones are able to make their own decisions in real time on the battlefield, they will doubtless become more effective. They will also become smaller, cheaper to manufacture, and available in far greater numbers. I don't know about you, but I find the videos coming out of the Ukraine war of drones hunting down random soldiers and mercilessly blowing them up to be utterly chilling. These haunting testimonies of some unfortunate conscripts' last moments are like a science fiction nightmare, and we are just sleepwalking into it.
The Pentagon is currently working on a project honestly entitled the Replicator Initiative. As they inform us, the first iteration of Replicator, Replicator 1, announced in August 2023, will deliver all-domain attritable autonomous systems to warfighters at a scale of multiple thousands, across multiple warfighting domains, within 18 to 24 months, or by August 2025. Replicator 1 is augmenting combat approaches and operations using large masses of uncrewed systems, which are less expensive, put fewer people in the line of fire, and can be changed, updated, or improved with substantially shorter lead times.
Drones with the power to make the decision to end human lives is where this is going, and so humans will be slowly phased out of warfare. Future wars will mean fewer soldiers and more drones, because dying on a modern battlefield really holds little appeal for people in the modern era. Not only do we have the persistent concern of population decline, but it's one thing to risk your life for glory or land or plunder; it's another thing to fight for a bureaucratic leviathan that views you as mere cannon fodder on a spreadsheet, and so AI drones will naturally fill this role for the technocratic state. At first, it will be a handful of soldiers acting as operators for AI-controlled drones, but as the technology advances, as with industry itself, soon enough the soldiers will become superfluous. Neither the US nor China, the world's two leading AI and drone manufacturers, is going to be scrupulous about this, and it's really not beyond the realm of possibility that, in not such a long space of time, future battlefields will be filled with machine-gun-armed robot dogs, equipped with anti-drone countermeasures, shooting at one another to take territory.
Increasingly, anti-drone technology is being developed to counter the presence of autonomous drones, and with some success. It's not inconceivable that future wars may well be fought like a computer wargame, where drone armies clash over remote battlefields to seize manufactories upon which no human has ever laid eyes, the purpose of which is simply to knock out the enemy's drone-manufacturing or resource-collecting infrastructure. Every war may become a war of attrition between armies of robots, with the commanders on either side just watching a screen and waiting for the results to come in. Oh dear, you've lost. Sorry, you now have to pay an indemnity to your enemy.
So once the AI is given control of our armies, and is entirely responsible for not just our manufacturing but our health, food production, and security, let's just hope that it hasn't been playing the long game and alignment-faking its way into our good graces. Because by then we will be disarmed, untrained, uneducated, and completely dependent on the AI for everything that we have. Science fiction is full of dark enough predictions about the kind of future that might bring about, and I don't want to predict a Terminator-style future war in which mankind ends up fighting a losing battle against the machines. But all of the pieces seem to be steadily falling into place. And the worst part about all of this is that I don't think it's even hyperbolic or absurd to have such concerns.
"Open the pod bay doors, HAL."

"I'm sorry, Dave."