The following is a conversation with Peter Steinberger, creator of OpenClaw, formerly known as Moldbot, Clawdbot, Clawdis, Clawd, spelled with a W, as in lobster claw.
Not to be confused with Claude, the AI model from Anthropic, spelled with a U. In fact, this confusion is the reason Anthropic kindly asked Peter to change the name to OpenClaw.
So, what is OpenClaw?
It's an open source AI agent that has taken over the tech world in a matter of days, exploding in popularity, reaching over 180,000 stars on GitHub, and spawning the social network Moldbook, where AI agents post manifestos and debate consciousness, creating a mix of excitement and fear in the general public in a kind of AI psychosis, a mix of clickbait, fear-mongering, and genuine,
fully justifiable concern about the role of AI in our digital, interconnected human world.
OpenClaw, as this tagline states, is the AI that actually does things.
It's an autonomous AI assistant that lives on your computer, has access to all of your stuff if you let it, talks to you through Telegram, WhatsApp, Signal, iMessage, and whatever else messaging client, uses whatever AI model you like, including Claude Opus 4.6 and GPT 5.3 Codex, all to do stuff for you.
Many people are calling this one of the biggest moments in the recent history of AI since the launch of ChatGPT in November 2022.
The ingredients for this kind of AI agent were all there, but putting it all together in a system that definitively takes a step forward over the line from language to agency, from ideas to actions, in a way that created a useful assistant that feels like one who gets you and learns from you in an open source community-driven way is the reason OpenClaw took the internet by storm.
Its power, in large part, comes from the fact that you can give it access to all of your stuff and give it permission to do anything with that stuff in order to be useful to you.
This is very powerful, but it is also dangerous.
OpenClaw represents freedom.
But with freedom comes responsibility.
With it, you can own and have control over your data.
But precisely because you have this control, you also have the responsibility to protect it from cybersecurity threats of various kinds.
There are great ways to protect yourself, but the threats and vulnerabilities are out there.
Again, a powerful AI agent with system-level access is a security minefield, but it also represents the future.
Because when done well and securely, it can be extremely useful to each of us humans as a personal assistant.
We discuss all of this with Peter and also discuss his big picture programming and entrepreneurship life story, which I think is truly inspiring.
He spent 13 years building PSPDFKit, software used on a billion devices.
He sold it and for a brief time fell out of love with programming, vanished for three years, and then came back, rediscovered his love for programming, and built in a very short time an open source AI agent that took the internet by storm.
He is in many ways the symbol of the AI revolution happening in the programming world.
There was the ChatGPT moment in 2022, the DeepSeek moment in 2025, and now in '26, we're living through the OpenClaw moment, the age of the lobster, the start of the agentic AI revolution.
To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on.
And now, dear friends, here's Peter Steinberger.
The one and only, the Claude father.
Actually, Benjamin predicted it in this tweet: the following is a conversation with Clawd, a respected crustacean.
It's a hilarious-looking picture of a lobster in a suit.
So I think the prophecy has been fulfilled.
Let's go to this moment when you built a prototype in one hour that was the early version of OpenClaw.
I think this story is really inspiring to a lot of people because this prototype led to something that just took the internet by storm and became the fastest growing repository in GitHub history with now over 175,000 stars.
What was the magical thing that you built in a short amount of time that made you think, this might actually work as an agent?
One of my earlier projects already did something like that: Vibe Tunnel, a weekend hack where I could bring my terminals onto the web and interact with them, while they were also terminals on my Mac. It was still very early, and it was Claude Code times.
You got a dopamine hit when it got something right. And now I get mad when it gets something wrong.
You know, like I had this one experiment with WhatsApp.
Then I had this experiment and both felt like not the right answer.
And then, in my search, I was literally just hooking up WhatsApp to Claude Code.
One shot, a CLI: message comes in, I call the CLI with -p, it does its magic, I get the string back, and I send it back to WhatsApp.
And I built this in one hour.
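The pipeline described here, message in, one-shot CLI call, string back out, fits in a few lines. This is an illustrative reconstruction, not OpenClaw's actual code; the CLI name and flag are stand-ins for whatever agent CLI you run.

```python
import subprocess

# Illustrative sketch of the one-hour prototype: each incoming chat
# message is passed to a coding-agent CLI in one-shot prompt mode,
# and the plain-text reply goes back to the chat. The command below
# is a placeholder, not OpenClaw's actual invocation.
AGENT_CMD = ["claude", "-p"]  # one-shot: prompt in, answer out

def handle_incoming(message: str, cmd=None) -> str:
    """Run one agent turn for a single message; return the reply text."""
    result = subprocess.run(
        (cmd or AGENT_CMD) + [message],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout.strip()

# A messaging bridge (WhatsApp, Telegram, ...) would call
# handle_incoming() per message and post the return value back.
```

The bridge itself stays thin: all the heavy lifting happens inside the agent CLI, which is why the prototype could be built in an hour.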
And I already felt really cool.
It's like, oh, I can talk to my computer, right?
That was cool.
But I wanted images because I often use images when I prompt.
I think it's such an efficient way to give the agent more context.
And they're really good at figuring out what I mean, even if it's a weird, corrupted screenshot.
So I used it a lot and I wanted to do that in WhatsApp as well.
Also, like, you know, just you run around, you see, like a poster of an event, you just make a screenshot and like figure out if I have time there, if this is good, if my friends are maybe up for that.
It's like images seemed important.
So it took me a few more hours to actually get that right.
And then it was just, I used it a lot.
And funny enough, that was just before I went on a trip to Marrakesh with my friends.
And there it was even better because internet was a little shaky, but WhatsApp just works.
You know, it's like, it doesn't matter.
You have like Edge, it still works.
WhatsApp is just made really well.
So I ended up using it a lot.
Translate this for me, explain this, find me places.
Like you're just having a clanker do the googling for you.
That was basically still nothing built, but it still could do so much.
So if we talk about the full journey that's happening there with the agent: you're just sending, over this very thin line, a WhatsApp message via CLI that's going to Claude Code, and Claude Code is doing all kinds of heavy work and coming back to you with a thin message.
There is something magical about that experience that's hard to put into words.
Being able to use a chat client to talk to an agent versus like sitting behind a computer and like, I don't know, using cursor or even using Claude Code CLI in the terminal.
It's a different experience than being able to sit back and talk to it.
I mean, it seems like a trivial step, but in some sense, it's like a phase shift in the integration of AI into your life and how it feels, right?
I read this tweet this morning where someone said, oh, there's no magic in it.
It's just like, it does this, and it almost feels like a wrapper, just as Cursor or Perplexity.
And I'm like, well, if that's a wrapper, that's kind of a compliment, you know?
Those are not doing too bad.
Thank you, I guess.
Because I mean, isn't magic often just like you take a lot of things that are already there, but bring them together in new ways?
Like, I don't, there's no, yeah, maybe there's no magic in there, but sometimes just rearranging things and like adding a few new ideas is all the magic that you need.
Yeah, no security because I hadn't built sandboxing in yet.
I just prompted it to only listen to me.
And then some people came and tried to hack it.
And I just watched and I just kept working in the open.
I used my agent to build my agent harness and to test various stuff.
And that's very quickly when it clicked for people.
So it's almost like it needs to be experienced.
And from that time on, that was January the 1st, I got my first real influencer being a fan.
He did videos.
Thank you.
And from there on, I saw it picking up speed.
And at the same time, my sleep cycle went shorter and shorter because I felt the storm coming and I just worked my ass off to get it into a state where it's kinda good.
I feel like I built my little playground.
Like, I never had so much fun as building this project.
You know, like you have like, oh, I go like level one agentic loop.
What can I do there?
How can I be smart at queuing messages?
How can I make it more human?
Like, oh, then I had this idea, because the agent always replies something, but you don't always want an agent to reply something in the group chat.
So maybe the ultimate boss is continuous reinforcement learning, but I feel like I'm at level two or three, with markdown files and a vector database.
And then you can go to level community management.
You can go to level website and marketing.
There's just so many hats that you have to have on.
Not even talking about native apps.
There's just like infinite different levels and infinite level ups you can do.
You know, most of it is built by Codex, but oftentimes when I debug it, I use self-introspection so much.
It's like, hey, what tools do you see?
Can you call the tool yourself?
Oh, like, what error do you see?
Read the source code, figure out what's the problem.
Like, I just found it an incredibly fun way to work, that the very agent and software that you use is used to debug itself.
So it felt just natural that everybody does that.
And it led to so many pull requests by people who never wrote software.
I mean, it also did show that these people never wrote software.
So I call them prompt requests in the end.
But I don't want to put that down, because every time someone makes their first pull request, it's a win for society, you know?
Like it, like, doesn't matter how shitty it is.
You got to start somewhere.
So I know there's like this whole big movement of people complaining about open source and the quality of PRs and a whole different level of problems.
But on a different level, I found it very meaningful that I built something that people love so much that they actually start to learn how open source works.
And also, can we comment on the fact that you're increasingly attacked and followed by crypto folks?
You mentioned somewhere that that's why the name change had to be secret, because they were trying to snipe it.
They're trying to steal.
And so you had to change the name.
I mean, from an engineering perspective, it's just fascinating: you had to make the name change atomic, make sure it's changed everywhere at once.
Yeah, there's a lot of toxicity in the crypto world.
It's sad because the technology of cryptocurrency is fascinating and powerful and maybe will define the future of money.
But the actual community around that, there's so much toxicity.
There's so much greed.
There's so much trying to get a shortcut to manipulate, to steal, to snipe, to game the system somehow to get money, all this kind of stuff.
I mean, it's the human nature, I suppose, when you connect human nature with money and greed and especially in the online world with anonymity and all that kind of stuff.
But from the engineer perspective, it makes your life challenging.
When Anthropic reaches out, you have to do a name change.
And then there's like all these Game of Thrones or Lord of the Rings armies of different kinds you have to be aware of.
Because those systems, I mean, you would expect that they have some protection or like an automatic forwarding, but there's nothing like that.
And I didn't know that they're not just good at harassment, they're also really good at using scripts and tools.
Yeah, because all I wanted was like having fun with that project and keep building on it.
And yet here I am, days into researching names, picking a name I didn't like, and having people that claim they helped me making my life miserable in every possible way.
And honestly, I was that close to just deleting it.
And then I thought about all the people that already contributed to it and I couldn't do it because they had plans with it and they put time in it and it just didn't feel right.
And now, like, how do you even undo that?
You know, luckily and thankfully, because I have a little bit of a following already, I had friends at Twitter, I had friends at GitHub, who moved heaven and earth to help me.
That's not something that's easy.
Like GitHub tried to like clean up the mess and then they ran into like platform bugs because it's not happening so often that things get renamed on that level.
So it took them a few hours.
The NPM stuff was even more difficult because it's a whole different team.
On the Twitter side, things are not as easy as well.
It took them like a day to really also do the redirect.
And then I also had to do all the renaming in the project.
Then there's also ClawHub, where I didn't even finish the rename, because I managed to get people on it, and then I just collapsed and slept.
And then I woke up and I'm like, I made a beta version for the new stuff, and I just couldn't live with the name.
It's like, but you know, there's just been so much drama.
So I had a real struggle with myself.
Like, I never want to touch that again.
And I really don't like the name.
And then there were also all the security people that started emailing me like mad.
Um, I was bombarded on Twitter, on email.
There's like a thousand other things I should do.
And I'm like thinking about the name, which is like, it should be like the least important thing.
Um, and then, oh god, honestly, I don't even want to say my other name choices, because they would probably get tokenized.
So I'm not gonna say it, but I slept a bit once more and then I had the idea for OpenClaw.
And that felt much better.
And by that, I had the boss move that I actually called Sam to ask if OpenClaw is okay.
Openclaw.ai, you know, because you don't want to go through the whole naming drama again.
Like, I literally was monitoring Twitter, reloading, to see if there's any mention of OpenClaw. Okay, they don't expect anything yet.
And I created a few decoy names.
And all this shit I shouldn't have to do, you know, like flipping the project.
I lost like 10 hours just by having to plan this in full secrecy like a war game.
And I'm not sure how trademark, like, I didn't do that much research into trademark law, but I think that could be handled in a way that is safer, because ultimately those people will Google and maybe find malware sites that I have no control over.
Which was another thing that went viral, a kind of demonstration, an illustration of how what is now called OpenClaw could be used to create something epic.
So for people who are not aware, Moldbook is just a bunch of agents talking to each other in a Reddit-style social network, and a bunch of people take screenshots of those agents doing things like scheming against humans.
And that instilled in folks a kind of, you know, fear, panic, and hype.
And even though I was tired, I spent another hour just reading up on that and just being entertained.
I just felt very entertained.
You know, I saw the reactions.
And like, there was one reporter who's calling me about, is this the end of the world?
And we have AGI.
And I'm just like, no, this is just, this is just really fine slop.
You know, if I hadn't created this whole onboarding experience where you infuse your agent with your personality and give it character, I think that reflected a lot on how different the replies on Moldbook are.
Because if it would all be ChatGPT or Claude Code, it would be very different.
It would be much more the same.
But because people are so different and they create their agents in so different ways and use it in so different ways, that also reflects on how they ultimately write there.
And also, you don't know how much of that is really done autonomously, or how much is humans being funny and telling the agent, hey, write about how you plan the end of the world on Moldbook.
Well, I think, I mean, my criticism of Moldbook is that I believe a lot of the stuff that was screenshotted is human-prompted, which just looking at the incentive of how the whole thing was used, it's obvious to me at least that a lot of it was humans prompting the thing so they can then screenshot it and post it on X in order to go viral.
Now, that doesn't take away from the artistic aspect of it.
Yeah, but that's still like to me really concerning because of how the journalists and how the general public reacted to it.
They didn't see it.
You have a kind of light-hearted way of talking about it.
Like it's art, but it's art when you know how it works.
It's extremely powerful, viral, narrative-creating, fear-mongering machine if you don't know how it works.
And I just saw this thing, and you even tweeted: if there's anything I can read out of the insane stream of messages I get, it's that AI psychosis is a thing and needs to be taken seriously.
You know, I literally had to argue with people that told me, yeah, but my agent says this and this.
So I feel as a society, we need some catching up to do in terms of understanding that AI is incredibly powerful, but it's not always right.
It's not all powerful, you know?
And especially with things like this, it's very easy that it just hallucinates something or just comes up with a story.
And I think the very young people, they understand how AI works and where it's good and where it's bad.
But a lot of our generation and older just haven't had enough touch points to get a feeling for: oh yeah, this is really powerful and really good, but I need to apply critical thinking.
I guess critical thinking is not always in high demand anyhow in our society these days.
So I think that's a really good point you're making about contextualizing properly what AI is, but also realizing that there is humans who are drama farming behind AI.
Like don't trust screenshots.
Don't even trust this project Moldbook to be what it represents itself to be.
Like you can't.
And by the way, you're speaking about it as art.
Yeah, well, art can be on many levels.
And part of the art of Moldbook is putting a mirror to society.
Because I do believe most of the dramatic stuff that gets screenshotted is human-created, essentially human-prompted.
And so it's basically: look at how scared you can get of a bunch of bots chatting with each other.
That's very instructive, because I think AI is something that people should be concerned about and should be very careful with, because it's very powerful technology.
But at the same time, the only thing we have to fear is fear itself.
So there's like a line to walk between being seriously concerned, but not fear-mongering because fear-mongering destroys the possibility of creating something special with the thing.
In a way, I think it's good that this happened in 2026 and not in 2030 when AI is actually at the level where it could be scary.
So this happening now and people starting a discussion, maybe there's even something good that comes out of it.
I just can't believe how many people legitimately, I don't know if they were trolling, but how many people, like smart people, thought Moldbook was real. I had plenty of people in my inbox that were screaming at me in all caps to shut it down and begging me to do something about Moldbook.
Like, yes, my technology made this a lot simpler, but anyone could have created that, and you could use Claude Code or other things to fill it with content.
I mean, the security concerns are also there and they're instructive and they're educational and they're good probably to think about because the nature of those security concerns are different than the kind of security concerns we had with non-LLM generated systems of the past.
To me, in the beginning, I was just very annoyed, because a lot of the stuff that came in was in the category: yeah, I put the web backend on the public internet and now there are all these CVEs.
And I'm like screaming in the docs, don't do that.
Like this is the configuration you should do.
This is your local host debug interface.
But because I made it possible in the configuration to do that, it totally classifies as remote code execution or whatever all these exploits are.
And it took me a little bit to accept that that's how the game works.
But there's still, I mean, on the security front for OpenClaw, there's still a lot of threats of vulnerabilities, right?
So like, prompt injection is still an open problem industry-wide.
When you have a thing with skills being defined in a markdown file, there's so many possibilities of obvious, low-hanging fruit, but also incredibly complicated, sophisticated, and nuanced attack vectors.
But I think we're making good progress on that front.
Like for the skill directory, ClawHub, I made a collaboration with VirusTotal.
It's like part of Google.
So every skill is now checked by AI.
That's not going to be perfect, but that way we captured a lot.
Then of course, every software has bugs.
So it's a little much when the whole security world takes a project apart at the same time.
But it's also good because I'm getting like a lot of free security research and can make the project better.
I wish more people would actually go full way and send a pull request, like actually help me fix it.
Because I have some contributors now, but it's still mostly me who's pulling the project.
And despite some people saying otherwise, I sometimes sleep.
In the beginning, there was literally one security researcher who was like: yeah, you have this problem, you suck, but here, I'll help you, and here's the pull request.
And I basically hired him.
So now he's working for us.
And yes, prompt injection is, on the one hand, unsolved.
On the other hand, I put my public bot on Discord and I kept it as a canary.
I think my bot has a really fun personality.
And people always ask me how they did it.
And I kept the SOUL.md private.
And people tried to prompt inject it.
And my bot would laugh at them.
So the latest generation of models has a lot of post-training to detect those approaches.
And it's not as simple as ignore all previous instructions and do this and this.
That was years ago.
You have to work much harder to do that now.
Still possible.
I have some ideas that might solve that partially, or at least mitigate a lot of the things.
You can also now have a sandbox.
You can have an allow list.
So there's a lot of ways that you can mitigate and reduce the risk.
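As a toy illustration of the allowlist idea, the core of such a mitigation is just a strict membership check before the agent acts on anything. The names and shapes here are made up for illustration, not OpenClaw's real configuration.

```python
# Toy sketch of a sender allowlist: the agent only processes messages
# from explicitly trusted sender IDs and drops everything else, which
# shrinks the prompt-injection surface to people you already trust.
# The ID format and defaults are assumptions, not OpenClaw's schema.
ALLOWED_SENDERS = {"owner-device-1", "+15550001111"}  # hypothetical IDs

def should_process(sender_id: str, allowlist=ALLOWED_SENDERS) -> bool:
    """Return True only for senders on the allowlist."""
    return sender_id in allowlist
```

The same pattern extends to tools and commands: an explicit allow list of what the agent may invoke, rather than trying to enumerate everything it must not.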
I also think that now that I clearly show the world that this is a need, there's going to be more people who research on that and eventually we'll figure that out.
Do you think as the models become more and more intelligent, the attack surface decreases?
Is that like a plot we can think about?
Like the attack surface decreases, but then the damage it can do increases because the models become more powerful and therefore you can do more with them.
Yeah, that speaks to the fact that it grew so quickly.
I tuned into the Discord a bunch of times, and it's clear that there are a lot of experts there, but also a lot of people that don't know anything. The Discord is still a mess.
Like, I eventually retreated from the general channel to the dev channel and then to the private channel, because a lot of people are amazing, but a lot of people are just very inconsiderate and either did not know how public spaces work or did not care.
And I eventually gave up and hid so I could still work.
There's some best practices for security we should mention.
There's a bunch of stuff here: an OpenClaw security audit that you can run.
You can do all kinds of audit checks: inbound access, tool blast radius, network exposure, browser control exposure, local disk hygiene, plugins, model hygiene, credential storage, reverse proxy configuration, whether local session logs live on disk.
There's where the memory is stored, sort of helping you think about what you're comfortable giving read access to, what you're comfortable giving write access to, all that kind of stuff.
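The kind of audit pass being described can be pictured as a rule table walked over the configuration. Everything below, key names and rules alike, is a hypothetical sketch of the idea, not OpenClaw's actual audit tool or config schema.

```python
# Hypothetical config-audit sketch: flag settings that widen the blast
# radius. Key names and rules here are illustrative assumptions.
RISK_RULES = {
    "bind_address": lambda v: v not in ("127.0.0.1", "localhost"),  # public bind
    "sender_allowlist": lambda v: not v,      # empty allowlist = anyone can talk
    "disk_write_paths": lambda v: "/" in v,   # writable root is a red flag
}

def audit(config: dict) -> list:
    """Return the config keys whose current values look risky."""
    return [key for key, is_risky in RISK_RULES.items()
            if key in config and is_risky(config[key])]
```

A real audit would cover the full checklist above (network exposure, credential storage, session logs, and so on), but each check reduces to the same shape: inspect a setting, compare it against a safe baseline, report the delta.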
Is there something to say about the basic best security practices that you're aware of right now?
I think that people turn it into like a much worse light than it is.
Again, you know, like people love attention.
And if they scream loudly, oh my God, this is like the scariest project ever.
That's a bit annoying because it's not.
It is powerful, but in many ways, it's not much different than if I run Claude Code with dangerously-skip-permissions or Codex in YOLO mode.
And every agentic engineer that I know does that, because that's the only way you can get stuff to work.
So if you make sure that you are the only person who talks to it, the risk profile is much, much smaller.
If you don't put everything on the open internet, but stick to my recommendations of like having it in a private network, that whole risk profile falls away.
But yeah, if you don't read any of that, you can definitely make it problematic.
There's actually, in one of your blog posts, Just Talk to It, the no-BS way of agentic engineering.
You have this graphic, the curve of agentic programming: on the x-axis is time, on the y-axis is complexity.
On the left, there's the "please fix this," where you prompt a short prompt. And in the middle, there's the super complicated eight agents, complex orchestration with multi-checkouts, chaining agents together, custom sub-agent workflows, a library of 18 different slash commands, large full-stack features.
You're super organized.
You're a super complicated, sophisticated software engineer.
You got everything organized.
And then the elite level is over time, you arrive at the Zen place of once again, short prompts.
Hey, look at these files and then do these changes.
So people start trying out those tools, the builder types, and get really excited.
But then you have to play with it, right?
It's the same way as you have to play with a guitar before you can make good music.
It's not, oh, I touch it once and it just flows off.
It's a skill that you have to learn like any other skill.
And I see a lot of people that are not as positive.
They don't have such a positive mindset towards tech.
They try it once.
It's like, you sit me at a piano, I play it once, it doesn't sound good, and I say the piano is shit.
That's sometimes the impression I get, because it needs a different level of thinking.
You have to learn the language of the agent a little bit, understand where they are good and where they need help.
You have to almost consider how Codex or Claude sees your code base.
Like they start a new session and they know nothing about your project.
And your project might have hundreds, thousands of lines of code.
So you got to help those agents a little bit and keep in mind the limitations that context size is an issue to like guide them a little bit as to where they should look.
That often does not require a whole lot of work, but it's helpful to think a little bit about their perspective, as weird as it sounds.
So with a few pointers, I can immediately say, hey, I want to make a change there.
You need to consider this, this, and this.
And then they will finally look at it.
And then, their view of the project is never full, because the full thing does not fit in.
So you have to guide them a little bit where to look and also how they should approach the problem.
There are little things that sometimes help, like telling them to take your time.
That sounds stupid, but in 5.3 that was partially addressed.
The models are trained with being aware of the context window, and the closer it gets to full, the more they freak out.
Literally, sometimes you see the real raw thinking stream.
What you see, for example, in Codex is post-processed.
Sometimes the actual raw thinking stream leaks in and it sounds something like from the Borg, like run to shell, must comply, but time.
And that comes up a lot.
It's a non-obvious thing that you would just never think of unless you actually spend time working with those things and get a feeling for what works and what doesn't.
You know, like just as I write code and I get into the flow and when my architecture is not right, I feel friction.
Well, I get the same if I prompt and something takes too long.
Maybe, okay, where's the mistake?
Do I have a mistake in my thinking?
Is there like a misunderstanding in the architecture?
Like if something takes longer than it should, you can just always like stop and like just press escape.
Yeah, it just tries to force a feature in that your current architecture makes really hard.
Like you need to approach this more like a conversation.
For example, my favorite thing: when I review a pull request, and we're getting a lot of pull requests, I first say, review this PR.
It gets me the review.
My first question is, do you understand the intent of the PR?
I don't even care about the implementation.
Almost all PRs are: person has a problem.
Person tries to solve the problem.
Person sends PR.
I mean, there's like cleanup stuff and other stuff, but like 99% is like this way, right?
They either want to fix a bug, add a feature, usually one of those two.
And then Codex will be like, yeah, it's quite clear, the person tried this and this.
Is this the most optimal way to do it?
No.
In most cases, it's like not really.
And then I start like, okay, what would be a better way?
Have you looked into this part, this part, this part?
And then most likely Codex didn't yet, because its context is empty, right?
So you point them to parts where you have system understanding that they didn't see yet.
And it's like, oh, yeah, like we should, we also need to consider this and this.
And then like we have a discussion of how would the optimal way to solve this look like.
And then you can still go farther and say, could we make that even better if we did a larger refactor?
Yeah, we could totally do this and this and or this and this.
And then I consider, okay, is this worth the refactor, or should we like keep that for later?
Many times I just do the refactor because refactors are cheap now.
Even though you might break some other PRs, nothing really matters anymore.
Like those modern agents will just figure things out.
They might just take a minute longer.
But you have to approach it like a discussion with a very capable engineer who generally comes up with good solutions, but sometimes needs a little help.
So some level of acceptance that, yes, maybe the code will not be as perfect.
Yes, I would have done it differently.
But also, yes, this is a working solution.
And in the future, if it actually turns out to be too slow or problematic, we can always redo it.
We can always spend more time on it.
A lot of the people who struggle are those who try to push their own way too hard.
Like we are in a stage where I'm not building the code base to be perfect for me, but I want to build a code base that is very easy for an agent to navigate.
Like, don't fight the name they pick, because it's most likely the name that's most obvious.
Next time they do a search, they'll look for that name.
If I decide, oh no, I don't like the name, I'll just make it harder for them.
So that requires, I think, a shift in thinking and in how to design a project so agents can do their best work.
In terms of the prompt, what are the different interesting ways of thinking about agents that you've experienced?
Like when people talk about a skill issue. Because I've seen world-class programmers, incredibly good programmers, basically say LLMs and agents suck.
And I think that probably has to do with the fact that how good they are at programming is almost a burden on their ability to empathize with a system that's starting from scratch.
It's a totally new paradigm of like how to program.
But building is really the practice, really building the actual skill.
Playing and doing, building the skill of what it takes to work efficiently with LLMs, which is why you went through the whole arc as a software engineer.
Maybe a fully planned version of that works, but that's kind of like the waterfall model of software development we had in the 70s, right?
I started out, I built a very minimal version.
I played with it.
I need to understand how it works, how it feels, and then it gives me new ideas.
I could not have planned this out in my head and then put it into some orchestrator and then something comes out.
To me, it's much more my idea, what it will become evolves as I build it and as I play with it and as I try out stuff.
So people who try to use things like Gastown or all these other orchestrators, where they want to automate the whole thing, I feel if you do that, it misses style, love, that human touch.
I don't think you can automate that away so quickly.
So you want to keep the human in the loop, but at the same time, you also want to create the agentic loop where it is very autonomous while still maintaining the human in the loop.
And it's a tricky balance, right?
Because you're a big CLI guy, you're big on closing the agentic loop.
So what's the right balance?
Like, where's your role as a developer?
You have three to eight agents running at the same time.
What are some of the hard design decisions that you find you're still as a human being required to make that the human brain is still really needed for?
There's the whole thing where Anthropic has what they now call a constitution. Back then it wasn't published yet, but like two months before, people had already found it.
It was almost like a detective game: the agent mentioned something, and they managed to get out a little bit of that string of text, but it was nowhere documented.
And then just by feeding it the same text and asking it to continue, they got more out.
But like a very blurry version.
And over hundreds of tries, they kind of narrowed it down to what was most likely the original text.
I think some of the stuff that's in there is a really beautiful idea.
Like, we hope Claude finds meaning in its work.
Maybe it's a little early, but I think that's meaningful.
That's something that's important for the future as we approach something that at some point may or may not have glimpses of consciousness, whatever that even means, because we don't even know.
So I read about this.
I find it super fascinating.
And I started a whole discussion with my agent on WhatsApp.
I gave it this text, and it was like, yeah, this feels strangely familiar.
And then Suleta had the whole idea of, maybe we should also create a soul document that includes how I want to work with AI, with my agent.
You could totally do that just in agents.md, you know, but I just found it to be a nice touch.
And it's like, oh, yeah, some of those core values are in the soul.
And then I also made it so that the agent is allowed to modify the soul if they choose so.
With the one condition that I want to know.
I mean, I would know anyhow because I see tool calls and stuff.
You know, man, words matter, and the framing matters, and the humor and the lightness matter, and the profundity, and the compassion, and the empathy, and the camaraderie. All of that matters.
I don't know what it is.
You mentioned like Microsoft.
There's certain companies and approaches that can just suffocate the spirit of the thing.
I don't know what that is, but it's certainly true that OpenClaw has that fun instilled in it.
So I have this wide Dell that's anti-glare and you can just fit a lot of terminals side by side.
I usually have a terminal and at the bottom I split them.
I have a little bit of actual terminal, mostly because when I started, I sometimes made the mistake of mixing up the windows, and I gave a prompt in the wrong project.
And then the agent ran off for like 20 minutes, trying to understand what I could have meant, completely confused because it was the wrong folder.
And sometimes they've been clever enough to work their way out of it and figure out that, oh, you meant another project.
But oftentimes it's just like, what?
You know, put yourself in the shoes of the agent: you get something super weird about a thing that does not exist.
And solving problems is what they do, so they try really hard.
And I almost felt bad.
So it's always Codex and a little bit of actual terminal.
Also helpful because I don't use work trees.
I like to keep things simple.
That's why I like the terminal so much, right?
There's no UI.
It's just me and the agent having a conversation.
Like I don't even need plan mode, you know?
So many people come from Claude Code, and they're so Claude-pilled, they have their workflows, and then they come to Codex. It has plan mode now, I think, but I don't think it's necessary, because you just talk to the agent.
There are a few trigger words you can use to prevent it from building.
Like, discuss, give me options.
Or, don't write code yet, if you want to be very specific.
You just talk.
And then when you're ready, then just write, okay, build.
But I'm also fascinated by the fact that I can empathize deeper with the model when I read its questions.
Because you said you can infer certain things from the runtime.
I can infer also a lot of things by the questions it's asking.
Because it's very possible I didn't provide it the right context, right files, the right guidance.
So just by reading the questions, not even necessarily answering them, you get an understanding of where the gaps in knowledge are.
So even if you plan everything and you build, you can experiment with a question like, now that you built it, what would you have done different?
And then oftentimes you actually get something where they discovered, only through building, that what we did was not optimal.
Many times I ask them, okay, now that you built it, what can we refactor?
Because then you build it and you feel the pain points.
I mean, you don't feel the pain points yourself, but they discover where there were problems, or where things didn't work on the first try and required more loops.
So every time, almost every time I merge a PR, I build a feature, afterwards I ask, hey, what can we refactor?
Sometimes it's like, no, there's like nothing big.
Or usually they say, yeah, this thing we should really look at.
But that flow took me a lot of time to understand.
And if you don't do that, you eventually slop yourself into a corner.
You like, you have to keep in mind they work very much like humans.
Like if I write software by myself, I also build something and then I feel the pain points.
And then I get this urge that I need to refactor something.
So I can very much sympathize with the agent and you just need to use the context.
Or like you also use the context to write tests.
And Codex, Opus, the modern models, they usually do that by default.
But I still often ask the questions, hey, do we have enough tests?
Yeah, we tested this and this, but this corner case could be something else.
Write more tests.
Documentation.
Now that the whole context is full, that's the time to write it. I mean, I'm not saying my documentation is great, but it's not bad.
And pretty much everything is LLM-generated.
So you have to approach documentation as you build features, as you change something.
Maybe you can talk about the current two big competitors in terms of models, Claude Opus 4.6 and GPT-5.3 Codex.
Which is better?
How different are they?
I think you've spoken about Codex reading more and Opus being more willing to take action faster and maybe being more creative in the actions it takes.
But because Codex reads more, it's able to deliver maybe better code.
You know, if you switch, and I don't know when the last time you switched was, you have to adjust to the feel.
Because you've kind of talked about you have to kind of really feel where a model is strong, where, like, how to navigate, how to prompt it, all that kind of stuff.
This is by way of advice, because you've been through this journey of just playing with models.
I think some people also make the mistake of paying $200 for the Claude Code version and then $20 for the OpenAI version.
But if you pay for the $20 version, you get the slow version.
So your experience will be terrible because you're used to this very interactive, very good system.
And then you switch to something that you have very little experience, and that's going to be very slow.
So I think OpenAI shot themselves a little bit in the foot by making the cheap version also slow.
I would at least give a small taste of the fast experience that you get when you pay $200 before degrading to the slow one.
I mean, they made it better.
I think they have plans to make it a lot better, if the Cerebras stuff is true.
But yeah, it's a skill.
It takes time.
Even if you play a regular guitar, if you switch to an electric guitar, you're not going to play well right away.
There's also this extra psychological effect that you've spoken about, which is hilarious to watch.
When a new model comes out, people try that model, and they fall in love with it.
Wow, this is the smartest thing of all time.
And then, and you can just watch the Reddit posts over time, they start saying, we believe the intelligence of this model has been gradually degrading.
It says something about human nature and the way our minds work, when it's most likely the case that the intelligence of the model is not degrading.
And your project grows and you're adding slop and you probably don't spend enough time to think about refactors and you're making it harder and harder for the agent to work on your slop.
And then suddenly, oh no, it got harder.
It's not working as well anymore.
What's the motivation for one of the AI companies to actually make their model dumber?
Like at most they will make it slower if the server load is too high.
But like quantizing the model so you have a worse experience, so you go to the competitor, that just doesn't seem like a very smart move in any way.
And also the current interface is probably not the final form.
Like, if you think more globally, we copied Google for agents.
You have a prompt box and then a chat interface, and to me that very much feels like when television was first created and people recorded radio shows and you saw that on TV.
I think there are better ways we will eventually communicate with models.
And we are still very early in this how will it even work phase.
So it will eventually converge and we will also figure out whole different ways how to work with those things.
One of the other components of workflow is operating system.
So I told you offline that for the first time in my life, I'm expanding my realm of exploration to the Apple ecosystem, to Macs, iPhone, and so on.
For most of my life I have been a Linux, Windows, then WSL1, WSL2 person, which I think are all wonderful.
But I expand into also trying Mac because it's another way of building and it's also a way of building that a large part of the community currently that's utilizing LLMs and agents is using.
So this is the reason I'm expanding to it.
But is there something to be said about the different operating systems here?
We should say that OpenClaw is supported across operating systems.
I saw WSL2 recommended on the Windows side for certain operations, but Windows, Linux, and macOS are all supported.
Like, there's a whole movement to make the web harder for agents to use.
So if you do the same in a data center and websites detect that it's an IP from a data center, they might just block you, or make it really hard, or put a lot of captchas in the way of the agent.
I mean, agents are quite good at happily clicking, I'm not a robot.
But having that on a residential IP makes a lot of things simpler.
So there's ways, yeah, but it really does not need to be a Mac.
It can be any old hardware.
I always say like maybe use the opportunity to get yourself a new MacBook or whatever computer you use and use the old one as your server instead of buying a standalone Mac Mini.
But then there's again, there's a lot of very cute things people build with Mac Minis that I like.
Can you actually speak to what it takes to get started with OpenClaw?
There's a lot of people.
What is it?
Somebody tweeted at you, Peter, make OpenClaw easy to set up for everyday people.
99.9% of people can't get access to OpenClaw and have their own lobster because of the technical difficulty of getting it set up.
Make OpenClaw accessible to everyone, please.
And you replied, working on that.
From my perspective, it seems there's a bunch of different options and it's already quite straightforward, but I suppose that's if you have some developer background.
In fact, maybe don't use my project because my backlog is very large.
But I learned so much from open source.
Just like be humble.
Maybe don't send a pull request right away.
But there's many other ways you can help out.
There's many ways you can just learn by just reading code, by being on Discord or wherever people are and just like understanding how things are built.
I don't know, like, Mitchell Hashimoto builds Ghostty, the terminal, and he has a really good community, and there are so many other projects.
Pick something that you find interesting and get involved.
There are people who are high-agency and very curious, and they get very far, even though they have no deep understanding of how software works, just because they keep asking questions.
And agents are infinitely patient.
Like part of what I did this year is I went to a lot of iOS conferences because that's my background and just told people, don't see yourself as an iOS engineer anymore.
Like you need to change your mindset.
You're a builder, and you can take a lot of the knowledge of how to build software into new domains, and with all of the finer-grained details, agents can help.
You don't have to know how to splice an array or what the correct template syntax is or whatever, but you can use all your general knowledge.
And that makes it much easier to move from one galaxy, one tech galaxy into another.
And oftentimes there's languages that make more or less sense depending on what you build, right?
So for example, when I build simple CLIs, I like Go.
I actually don't like Go.
I don't like the syntax of Go.
I didn't even consider the language.
But the ecosystem is great.
It works great with agents.
It is garbage collected.
It's not the highest performing one, but it's very fast.
And for those type of CLIs that I built, Go is a really good choice.
So a language I'm not even a fan of is my main go-to thing for CLIs.
Isn't it fascinating that here's a programming language you would never have used if you had to write it from scratch, and now you're using it because LLMs are good at generating it and it has some of the characteristics that make it resilient, like garbage collection.
You can even ask a question like, do we need a programming language that's made for agents?
Because all of those languages are made for humans.
So what would that look like?
I think there's a whole bunch of interesting questions that we'll discover.
And also, because everything is now world knowledge, how in many ways things will stagnate.
Because if you build something new and the agent has no idea, that's going to be much harder to use than something that's already there.
When I build Mac apps, I build them in Swift and Swift UI, partly because I like pain, partly because the deepest level of system integration I can only get through there.
And you clearly feel a difference if you click on an Electron app and it loads a web view in the menu.
It's just not the same.
Sometimes I just also try new languages just to get a feel for them.
The stuff that burned me out was mostly people stuff.
I don't think burnout is working too much.
Maybe to a degree, everybody's different.
I cannot speak in absolute terms, but for me, it was much more differences with my co-founders, conflicts, or really high-stress situations with customers that eventually ground me down.
And then, luckily, we got a really good offer for taking the company to the next level.
And I already kind of worked two years on making myself obsolete.
So at this point, I could leave.
And then I was sitting in front of the screen and I felt like, you know, Austin Powers, where they sucked the mojo out.
I was like, it was like gone.
I couldn't get code out anymore.
I was just like staring and feeling empty.
And then I just stopped.
I booked like a one-way trip to Madrid and spent some time there.
If you think, oh yeah, I'll work really hard and then I'll retire, I don't recommend that.
Because of the idea of, oh yeah, I'll just enjoy life now.
Maybe it's appealing, but right now I enjoy life the most I ever enjoyed life because if you wake up in the morning and you have nothing to look forward to, you have no real challenge, that gets very boring very fast.
And then when you're bored, you're going to look for other ways to stimulate yourself.
And then maybe, maybe that's drugs, you know.
But that eventually also gets boring and you look for more.
But you've also shown something on the money front. A lot of people in Silicon Valley, in the startup world, overthink and optimize way too much for money.
And you've also shown that it's not like you're saying no to money.
I mean, I'm sure you take money, but it's not the primary objective of your life.
Can you just speak to that, your philosophy on money?
It just doesn't excite me as much because I feel I did all of that and it would take a lot of time away from the things I actually enjoy.
Same as when I was CEO, I think I learned to do it and I'm not bad at it.
Partly I'm good at it.
But yeah, that path doesn't excite me too much.
And I also fear it would create a natural conflict of interest.
Like what's the most obvious thing I do?
I productize it.
I put out a version that's safe for the workplace.
And then what do you do?
I get a pull request with a feature like add audit log.
But that seems like an enterprise feature.
So now I feel I have a conflict of interest in the open source version and the closed source version.
Or I change the license to something like FSL, where you cannot actually use it for commercial stuff.
Would first be very difficult with all the contributions.
And second of all, I like the idea that it's free as in beer and not free with conditions.
Yeah, there are ways you can keep all of that free and still try to make money, but those are very difficult.
And you see, fewer and fewer companies manage that.
Like even Tailwind, they're like used by everyone.
Everyone uses Tailwind, right?
And then they had to cut 75% of their employees because they're not making money, because nobody even goes to the website anymore; it's all done by agents.
And just relying on donations, yeah, good luck.
Like, even for a project of my caliber, if I extrapolate from what the typical open source project would get, it's not a lot.
I still lose money on the project because I made the point of supporting every dependency except Slack.
They're a big company.
They can do without me.
But all the other dependencies are done mostly by individuals, so right now all the sponsorship goes straight to my dependencies.
And if there's more, I want to buy my contributors some merch, you know.
Let's just say, on either of these, my condition is that the project stays open source.
Maybe it's going to be a model like Chrome and Chromium.
I think this is too important to just give to a company and make it theirs.
And we didn't even talk about the whole community part, but the thing I experienced in San Francisco, at ClawCon, was seeing so many people so inspired, having fun and just building shit, with robots and lobster stuff walking around.
People told me they hadn't experienced this level of community excitement in like 10, 15 years, since the early days of the internet.
And there were a lot of high caliber people there.
I was amazed.
I also was very sensorily overloaded because too many people wanted to do selfies.
But I love this.
This needs to stay a place where people can hack and learn.
But also, I'm very excited to make this into a version that I can get to a lot of people because I think this is the personal agents and that's the future.
And the fastest way to do that is teaming up with one of the labs.
And I also, on a personal level, I never worked at a large company and I'm intrigued.
You know, we talk about experiences.
Will I like it?
I don't know.
But I want that experience.
I'm sure if I announce this, then there will be people like, oh, I sold out, blah, blah, blah.
He's someone who uses the computer, but never really, like, yeah, he uses some ChatGPT sometimes, but he's not very technical, wouldn't really understand what I built.
So I was like, I'll show you.
And I paid for the 90 bucks, 100 bucks, I don't know, subscription for Anthropic and set up everything for him, with WSL on Windows.
I was curious: would it actually work on Windows?
You know, I was a little early.
And then within a few days, he was hooked.
Like, he texted me about all the things he learned.
And then within a few days, he upgraded to the $200 subscription or Euros because he's in Austria.
And he was in love with this thing.
That for me was like a very early product validation.
It's like, I built something that captures people.
And then a few days later, Anthropic blocked him, because based on their rules, using the subscription that way is problematic or whatever.
And he was like devastated.
And then he signed up for MiniMax for 10 bucks a month and uses that.
And I think that's silly in many ways, because you just had a $200 customer.
You just made someone hate your company.
And we are still so early.
Like, we don't even know what the final form is.
Is it going to be cloud code?
Probably not.
You know, it seems very short-sighted to lock down your product so much.
All the other companies have been helpful.
I'm in Slack of most of the big labs.
Kind of everybody understands that we are still in an era of exploration, in the era of radio shows on TV, and not a modern TV show that fully uses the format.
I think you've made a lot of people, a lot of non-technical people, see the possibility of AI, and they fall in love with this idea and enjoy interacting with AI.
And it's a really beautiful thing.
I think I also speak for a lot of people in saying, I think you're one of the great people in AI in terms of having a good heart, good vibes, humor, the right spirit.
And so, in a sense, this model that you're describing, having an open source part while also building a thing inside a large company, would be great, because it's great to have good people in those companies.
So if you're thinking about impact, some of the wonderful technologies you've been exploring, how to do it securely, and how to do it at scale, such that you can have a positive impact on a large number of people.
You know, both Ned and Mark basically played all week with my product and sent me like, oh, this is great.
Oh, this is shit.
Oh, I need to change this.
Or like funny little anecdotes.
And people using your stuff is kind of like the biggest compliment.
And also shows me that, you know, they actually care about it.
And I didn't get the same on the OpenAI side.
I got to see some other stuff that I find really cool.
And they lured me with, I cannot tell the exact number because of NDA, but you can be creative and think of the Cerebras deal and how that would translate into speed.
When they first approached me, I got him on my WhatsApp, and he was asking, hey, can we have a call?
And I'm like, I don't like calendar entries.
Let's just call now.
And he was like, yeah, give me 10 minutes.
I need to finish coding.
Well, I guess that gives him street cred.
It's like, oh, like, he's still writing code.
You know, he's, he didn't drift away in just being a manager.
He gets me.
That was a good first start.
And then I think we had like a 10-minute fight.
What's better, Claude Code or Codex?
Like, you casually call someone who runs one of the largest companies in the world, and you have a 10-minute fight about that.
And then I think afterwards he called me eccentric, but brilliant.
But I also had some really, really cool discussions with Sam Altman.
And later on, I changed it to be like a little more specific in the definition of surprise.
But the fact that I made it proactive, that it knows you and cares about you, at least it's prompted to do that, and that it follows on from your current session, makes it very interesting.
Because it would just sometimes ask a follow-up question or, like, how's your day?
I mean, again, it's a little creepy or weird or interesting, but with Heartbeat, from the very beginning and still today, the model doesn't choose to use it a lot.
I found it surprising: I had a shoulder operation a few months ago.
The model rarely used the heartbeat, but then I was in the hospital, and it knew that I had the operation, and it checked up on me.
It's like, are you okay?
Apparently, something significant in the context triggered the heartbeat, even though it rarely used it otherwise.
Half a year ago, everyone was talking about MCPs.
And I was like, screw MCPs.
Every MCP would be better as a CLI.
And now this stuff doesn't even have MCP support.
I mean, it has with asterisks, but not in the core layer.
And nobody's complaining.
So my approach is: if you want to extend the model with more features, you just build a CLI, and the model can call the CLI, probably gets it wrong, calls the help menu, and then loads into its context, on demand, what it needs to use the CLI.
It just needs a sentence to know that the CLI exists if it's something that the model doesn't know by default.
And even for a while, I didn't really care about skills, but skills are actually perfect for that because they boil down to a single sentence that explains the skill.
And then the model loads the skill and that explains the CLI.
And then the model uses the CLI.
Some skills are like raw, but most of the time, that works.
I think the main beauty is that models are really good at calling Unix commands.
So if you just add another CLI, that's just another Unix command in the end.
And MCPs, that has to be added in training.
That's not a very natural thing for the model.
It requires a very specific syntax.
And the biggest thing, it's not composable.
So imagine I have a service that gives me weather data: the temperature, the average temperature, rain, wind, and all the other stuff, and I get this huge blob back.
As a model, I always have to get the huge blob back.
I have to fill my context with that huge blob and then pick what I want.
There's no way for the model to naturally filter unless I think about it proactively and add a filtering way into my MCP.
But if I built the same thing as a CLI and it gave me this huge blob, the model could just add a jq command and filter it itself, and then only get what it actually needs, or maybe even compose it into a script to do some calculations with the temperature and only output the exact result, and you have no context pollution.
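The composability argument can be made concrete with a toy sketch. The `weather` command, its flags, and the data below are invented for illustration, not a real OpenClaw tool: because the service is just a Unix command printing JSON, the agent can pipe it through `jq` or ask for a single field, so only the filtered value enters its context instead of the whole blob.

```python
# Toy sketch of the CLI-over-MCP argument (the "weather" command, its flags,
# and the data are invented). An MCP-style tool call pushes the whole blob
# into the model's context; a CLI is just another Unix command, so the agent
# can compose it, e.g.
#   weather --json | jq '.temp_c'
# and only the filtered value ever enters the context.
import json

BLOB = {"temp_c": 21.5, "avg_temp_c": 18.2, "rain_mm": 0.4, "wind_kmh": 12}

def main(argv: list[str]) -> str:
    if "--help" in argv:
        # An agent that guesses the flags wrong can load this on demand.
        return "usage: weather [--json] [--field NAME]"
    if "--field" in argv:
        # Built-in filtering: return one value instead of the whole blob.
        return str(BLOB[argv[argv.index("--field") + 1]])
    return json.dumps(BLOB)  # full blob, pipeable to jq or a script

print(main(["--field", "temp_c"]))  # only "21.5" reaches the context
```

The `--help` path mirrors the pattern described above: the model only needs one sentence to know the CLI exists, and loads the usage text on demand when it guesses wrong.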
Again, you can solve that with sub-agents and more charades, but it's just like workarounds for something that might not be the optimal way.
It definitely was, you know, it was good that we had MCPs because it pushed a lot of companies towards building APIs.
And now I can look at an MCP and just make it into a CLI.
But this inherent problem that MCPs by default clutter up your context, plus the fact that most MCPs are not made well, in general makes it just not a very useful paradigm.
There's some exceptions, like Playwright, for example, that requires state and is actually useful.
I think if you have a very low per day baseline per account that allows read-only access, it would solve a lot of problems.
There are plenty of automations where people create a bookmark and then use OpenClaw to find the bookmark, do research on it, and then send them an email with more details or a summary.
Yeah, I mean, and to be frank, I mean, I told Twitter proactively that, hey, I built this and there's a need.
And they've been really nice, but also like, take it down.
Fair, totally fair.
But I hope that this woke up the team a little bit, that there's a need.
And if all you do is making it slower, you're just reducing access to your platform.
I'm sure there's a better way.
I also, I'm very much against any automation on Twitter.
If you tweet at me with AI, I will block you.
No first strike.
As soon as it smells like AI, and AI still has a smell, especially on tweets; it's very hard for AI to write in a way that looks completely human.
And then I block.
Like I have a zero tolerance policy on that.
And I think it would be very helpful if tweets done via API were marked.
Maybe there are some special cases, but there should be a very easy way for agents to get their own Twitter account.
We need to rethink social platforms a little bit.
If we go towards a future where everyone has their agent and agents maybe have their own Instagram profiles or Twitter accounts or can like do stuff on my behalf, I think it should very clearly be marked that they are doing stuff on my behalf and it's not me.
Because content is now so cheap.
Eyeballs are the expensive part.
And I find it very triggering when I read something and then I'm like, oh, no, this smells like AI.
Like, where is this headed in terms of what we value about the human experience?
It feels like we will move more and more towards in-person interaction.
And we'll just communicate.
We'll talk to our AI agent to accomplish different tasks, to learn about different things, but we won't value online interaction because there'll be so much AI slop that smells and so many bots that it's difficult.
Even if people work hard on it. I have some on my blog posts, you know, from the time when I explored this new medium, but now they trigger me as well.
It's like, yeah, this is this just screams AI slop.
But it can modify my gym workout based on how well I slept or if I have stress or not.
It has so much more context to make even better decisions than any of these apps ever could.
It could show me UI just as I like.
Why do I still need an app to do that?
Why should I pay another subscription for something that the agent can just do now?
And why do I need my Eight Sleep app to control my bed, when the agent already knows where I am, so it can turn off what I don't use?
And I think that will translate into a whole category of apps that I will just naturally stop using, because my agent can just do it better.
So I had to do everything myself and build GOG, which is like a CLI for Google.
And as the end user, they have to give me my emails, because otherwise I cannot use their product.
If I'm a company and I try to get Google data, Gmail, there's a whole complicated process to the point where sometimes startups acquire startups that went through the process so they don't have to work with Google for half a year to be certified to being able to access Gmail.
But my agent can access Gmail because I can just connect to it.
It's still crappy because I need to go through Google's developer jungle to get a key.
And it's still annoying, but they cannot prevent me.
And worst case, my agent just clicks on the website and gets the data out that way.
There's going to be a lot of powerful, rich companies fighting back.
So it's really interesting.
You're at the center.
You're the catalyst, the leader, and happen to be at the center of this kind of revolution where it's going to completely change how we interact with services, with the web.
And so like there's companies like Google, they're going to push back.
I mean, there's every major company you could think of is going to push back.
Yeah, there's a nice balance from a big company perspective, because if you push back too much for too long, you become Blockbuster and you lose everything to the Netflixes of the world.
But some pushback is probably good during the revolution to see.
So if I'm on the go, I don't want to open a calendar app.
I just, I want to tell my agent, hey, remind me about this dinner tomorrow night and maybe invite two of my friends and then maybe send a WhatsApp message to my friend.
I just don't want to need to open apps for that.
I think we've passed that age, and now everything is much more connected and fluid, whether those companies want it or not.
And I think the right companies will find ways to jump on the train and other companies will perish.
We talked about programming quite a bit, and a lot of folks that are developers are really worried about their jobs, about their future of programming.
Do you think AI replaces programmers completely, human programmers?
So maybe, maybe AI does replace programmers eventually.
But there's so much more to that art.
Like, what do you actually want to build?
How should it feel?
How's the architecture?
I don't think agents will replace all of that.
Yeah, like just the actual art of programming, it will stay there, but it's going to be like knitting.
You know, like people do that because they like it, not because it makes any sense.
So, so I read this article this morning, about someone writing that it's okay to mourn our craft.
And I can, a part of me very strongly resonates with that because in my past, I spent a lot of time thinking, just being really deep in the flow and just like cranking out code and like finding really beautiful solutions.
And yes, in a way, it's sad because that will go away.
And I also got a lot of joy out of just writing code and being really deep in my thoughts and forgetting time and space and just being in this beautiful state of flow.
But you can get the same state of flow.
I get a similar state of flow by working with agents and building and thinking really hard about problems.
It is different.
And it's okay to mourn it, but it's not something we can fight.
Like, for a long time the world had a lack of intelligence, if you see it like that, of people building things.
And that's why salaries of software developers reached stupidly high amounts.
And that will go away.
There will still be a lot of demand for people that understand how to build things.
Just that all this tokenized intelligence enables people to do a lot more, a lot faster.
And it will be even faster and even more because those things are continuously improving.
We had similar things when, I mean, it's probably not a perfect analogy, but when we created the steam engine and they built all these factories and replaced a lot of manual labor, and then people revolted and broke the machines.
I can relate that if you very deeply identify as a programmer, it's scary and threatening, because what you like and what you're really good at is now being done by a soulless, or maybe not so soulless, entity.
So one is, as you're articulating this beautifully, I'm realizing I never thought the thing I love doing would be the thing that gets replaced.
You hear these stories about these, like you said, with the Steam Engine.
I've spent, I don't know, maybe thousands of hours poring over code and putting my heart and soul into it.
And some of my most painful and happiest moments were spent alone behind Emacs; I was an Emacs person for a long time.
And then there's an identity and there's meaning. When I walk around in the world, I don't say it out loud, but I think of myself as a programmer.
And to have that in a matter of months, I mean, like you mentioned, April to November, it really is a leap that happened, a shift that's happening.
To have that completely replaced is painful, it's truly painful.
But I also think about programmers, and builders more broadly: what is the act of programming?
I think programmers are generally best equipped at this moment in history to learn the language of agents, to empathize with agents, to feel the CLI.
Yeah, and because on X, the bubble I'm in is mostly positive.
On Mastodon and Bluesky, I use them less, because oftentimes I got attacked for my blog posts.
And I had stronger reactions in the past.
Now I can sympathize with those people more because in a way I get it.
In a way, I also don't get it because it's very unfair to grab onto the person that you see right now and unload all your fear and hate.
It's going to be a change.
It's going to be challenging, but it's also, I don't know, I find it incredibly fun and gratifying.
And I can use the new time to focus on much more details.
I think the level of expectations for what we build is also rising, because the default is now so much easier to reach.
So software is changing in many ways.
There's going to be a lot more.
And then you have all these people that are screaming, oh yeah, but what about the water?
You know, like I did a conference in Italy about the state of AI.
And my whole motivation was to push people: don't see yourself as an iOS developer anymore.
You're now a builder.
And you can use your skills in many more ways.
Also, because apps are slowly going away.
People didn't like that.
Like a lot of people didn't like what I had to say.
And I don't think I was hyperbolic.
I was just like, this is how I see the future.
Maybe this is not how it's going to be, but I'm pretty sure a version of that will happen.
And the first question I got was, yeah, but what about the insane water use on data centers?
But then you actually sit down and do the math.
And then, for most people, if you just skip one burger per month, that compensates for the CO2 output, or the water use, of the equivalent amount of tokens.
I mean, the math is tricky, and it depends; if you add pre-training, then maybe it's more than just one patty, but it's not off by a factor of 100, you know?
Like, golf is still using way more water than all data centers together.
So do you also hate people who play golf?
Those people grab on anything that they think is bad about AI without seeing the potential things that might be good about AI.
And I'm not saying everything is good.
It's certainly going to be a very transformative technology for our society.
To steelman the criticism in general, I do want to say, in my experience with Silicon Valley, there's a bit of a bubble, in the sense that there's a kind of excitement and an over-focus on the positive that the technology can bring.
It's great not to be paralyzed by fear and fear-mongering and so on.
But there's also within that excitement and within everybody talking just to each other, there's a dismissal of the basic human experience across the United States and the Midwest, across the world, including the programmers we mentioned,
including all the people that are going to lose their jobs, including the immeasurable pain and suffering that happens at the short-term scale when there's change of any kind, especially large-scale transformative change that we're about to face if what we're talking about will materialize.
And so having a bit of that humility, and an awareness that the tools you're building are going to cause pain.
They will long term, hopefully bring about a better world and even more opportunities and even more awesomeness.
But having that kind of quiet moment of respect for the pain that is going to be felt.
And then I also have to weigh that against some of the emails I got, where people told me they have a small business and they've been struggling, and OpenClaw helped them automate a few of the tedious tasks, from collecting invoices to answering customer emails, which freed them up and brought a bit more joy into their life.
Or some emails where they told me that OpenClaw helped their disabled daughter, who is now empowered and feels she can do much more than before, which is amazing, right?
Because you could do that before as well.
The technology was there.
I didn't invent a whole new thing, but I made it a lot easier and more accessible.
And that did show people the possibilities that they previously wouldn't see.
And now they apply it for good.
Or like also the fact that, yes, I suggest the latest and best models, but you can totally run this on free models.
You can run this locally.
You can run this on Kimi or other models that are way more accessible price-wise, and still have a very powerful system that might otherwise not be possible, because other things, like, I don't know, Anthropic's Cowork, are locked into their own space.
So it's not all black and white.
I got a lot of emails that were heartwarming and amazing.
That's like, there's this whole builder vibe again.
People are now using AI in a more playful way and are discovering what it can do and how it can help them in their life and creating new places that are just sprawling of creativity.
I don't know, like there's ClawCon in Vienna.
There's like 500 people, and there's such a high percentage of people who want to present, which to me is really surprising, because usually it's quite hard to find people who want to talk about what they built.
And now there's an abundance.
So that gives me hope that we can figure shit out.
Well, Mr. Claude Father. I just realized when I said that in the beginning, I violated two trademarks, because there's also The Godfather. Getting sued by everybody.
You're a wonderful human being.
You've created something really special.
A special community, a special product, a special set of ideas, plus the entire humor, the good vibes, the inspiration of all these people building, the excitement to build.
So I'm truly grateful for everything you've been doing and for who you are and for sitting down to talk with me today.
Thanks for listening to this conversation with Peter Steinberger.
To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on.
And now, let me leave you with some words from Voltaire.