We are, and by the way, I think some of these like tech companies like Anthropic, they seem like legitimately concerned about it. They seem to have some kind of like real strong morality when it comes to this stuff.
I think it's the AI company Anthropic. I think that's the company. So one of its engineers resigned and essentially said that humanity is doomed and he's going to move to the UK and just write poetry and just wait it out.
Sharma, who built defenses against AI-assisted bioterrorism and pushed for transparency on model risks at the San Francisco AI firm, announced his resignation on Monday. He described struggles to let values guide actions amid mounting pressures, planning to return to the UK for a poetry degree and step back from the spotlight. His exit follows other safety team departures amid Anthropic's launch of Claude Opus 4.6 and a massive $20 billion funding round at a $350 billion valuation, fueling debates on balancing safety with commercial speed.