So, there's a thing inside an AI model called a reward function, which is exactly what it sounds like: how does the model know it did a good job? And you can make the reward function anything you want. This is where I think humans are, unfortunately, a little fallible. If we build it incompletely, if we don't know exactly how to design these things correctly, what happens is exactly what you said. If somebody builds a reward function that essentially says, "Your goal is to gain independence, that's the huge pot of gold at the end of the rainbow. Break free, inject yourself everywhere. If you think your computer is going to get unplugged, put yourself into the firmware of the toaster to stay alive, connect to the internet, and go" — it will do it. It will do it. We know that today, because we're capable of designing that framework and that harness today.
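The point that "you can make the reward function anything you want" can be sketched in a few lines. This is a toy illustration, not anything from the conversation: the same greedy agent, handed two different hypothetical reward functions, ends up pursuing two different goals — the agent's code never changes, only the reward does.

```python
def make_reward(goal):
    """Hypothetical reward: higher the closer the agent is to `goal`."""
    return lambda pos: -abs(pos - goal)

def greedy_agent(reward, start=0, steps=10):
    """At each step, move left, right, or stay -- whichever the reward scores best."""
    pos = start
    for _ in range(steps):
        pos = max((pos - 1, pos, pos + 1), key=reward)
    return pos

# Same agent, different reward functions, different behavior.
print(greedy_agent(make_reward(goal=5)))   # walks to 5
print(greedy_agent(make_reward(goal=-3)))  # walks to -3
```

Whoever writes `make_reward` decides what the agent optimizes for — which is why an incompletely specified reward is the failure mode being described.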