Anthropic ARR went $1B -> $4B in the first half of this year. They're getting my $200 a month and it's easily the best money I spend. There's definitely something there.
It makes me perhaps a little sad to say that "I'm showing my age" by bringing up the .com boom/bust, but this feels exactly the same. The late 90s/early 00s were the dawn of the consumer Internet, and all of that tech vastly changed global society and brought you companies like Google and Amazon. It also brought you Pets.com, Webvan, and the bajillion other companies chronicled in "Fucked Company".
You mention Anthropic, which I think is in as good a position as any to be one of the winners. I'm much less convinced about tons of the others. Look at Cursor - they were the first-moving leader, but I know tons of people (myself included) who have cancelled their subscription because there are now better options.
Please don't say stuff like that.
As a 20-something who was in diapers during the dot-com boom, I really appreciate your insight. Thanks for sticking around on HN!
I probably mean it less as "I'm too old" and more of "Wow, time really flies".
To me, who started my career in the very late 90s, the .com boom doesn't really seem that long ago. But then I realize that there is about the same amount of time between now and the .com boom, and the .com boom and the Altair 8800, and I think "OK, that was a loooong time ago". It really is true what they say, that the perception of time speeds up the older you get, which is both a blessing and a curse.
Regarding AI, it's a bit fascinating to me to think that we really only had enough data to get generative AI to work in the very recent past, and nearly as soon as we had enough data, the tech appeared. In the past I would have probably guessed that the time between having enough data and the development of AI would have been a lot longer.
The irony with Webvan is that they had the right idea, about 15 years too early. Now we have Instacart, DoorDash, etc. You really needed the mobile revolution circa 2010 for it to work.
Pets.com is essentially Chewy (successful pet focused online retailer)
So, neither of those ideas was really terrible in the same vein as, say, Juicero, or outright frauds like Theranos. Overvalued and ill-timed, sure.
March 1999: ~$27.7B
Jan 2009: ~$25B (back to $27.7B & rising by Feb)
Huh.
That's the problem with most "AI" products/companies that still isn't being answered. Why do people use your tool/service if you don't own the LLM which is most of the underlying "engine"? And further, how do you stay competitive when your LLM provider starts to scale RL with whatever prompting tricks you're doing, making your product obsolete?
Every time I've tried Copilot or Cursor, it's happily gone off and written or rewritten code into a state it seemed very proud of, and which didn't even work, let alone solve the problem I put to it.
Meanwhile, Kiro:
1. Created a requirements document, with user stories and acceptance criteria, so that we could be on the same page about the goals
2. Once I signed off on that, it then created a design document, with code examples, error handling cases, and an architecture diagram, for me to review
3. After that looked good, it set about creating an itemized task list for each step of the implementation, broken down into specific tasks and sub-tasks and including which of the acceptance criteria from step 1 that task addressed
4. I could go through the document task by task, ask it to work on it, and then review the results
At one point, it noticed that the compiler had reported a minor issue with the code it had written, but correctly identified that resolving that issue would involve implementing something that was slated for a future task, so it opted to ignore the issue until the appropriate time.
For once, I found myself using an AI tool that handled the part of the job I hate the most, and am the worst at: planning, diagramming, and breaking down tasks. Even if it hadn't been able to write any working code at all, it already created something useful for me that I could have built off of, but it did end up writing something that worked great.
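For anyone who hasn't seen this kind of spec-driven flow, the itemized task list it produces looks roughly like this (contents invented here, just to give the flavor of tasks mapped back to acceptance criteria):

```
## Tasks

- [ ] 1. Progress reporting scaffolding
  - [ ] 1.1 Add a ProgressReporter type and wire it into the download loop
    - Addresses: Req 1.1, 1.2
  - [ ] 1.2 Render per-file progress bars in the terminal
    - Addresses: Req 1.3
- [ ] 2. Error handling
  - [ ] 2.1 Surface HTTP errors with a retry hint
    - Addresses: Req 2.1
```

Each item is small enough to review in one sitting, which is most of the value.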
In case anyone is curious about the files it created, you can see them here: https://github.com/danudey/rust-downloader/pull/4
Note that I'm not really familiar with Rust (as most of the code will demonstrate), so it would probably have been far faster for an experienced Rust programmer to implement this. In my case, though, I just let it do its thing in the background and checked in occasionally to validate it was doing what I expected.
Meanwhile other "wrappers" e.g. in nvim or whatever, don't have this feature, they just have slightly better autocomplete than bare LSP.
Yes, there's (maybe?) four, but they're at the very bottom of the value chain.
Things built on top of them will be higher up the value chain and (in theory anyway) command a larger margin, hence a VC rush into betting on which company actually makes it up the value chain.
I mean, the only successes we see now are with coding agents. Nothing else has made it up the value chain except coding agents. Everything else (such as art and literature generation) is still on the bottom rung of the value chain.
That, by definition alone, is where the smallest margins are!
By a similar token, Windows is mostly a wrapper around Intel, AMD, and now Qualcomm CPUs. Cursor/Windsurf add a lot of useful functionality. So much so that Microsoft's GitHub Copilot is losing market share to these guys.
It is a lot less trivial than people like yourself make it out to be to get an effective tool chain and especially do it efficiently.
I am old and I remember when you could make a lot of money offering "Get Your Business On The Information Superhighway" (HTML on Apache) and we're in that stage of LLMadness today, but I suspect it will not last.
Don’t be sorry; it shows your true colors. The point stands, and you continue to step around it. Cursor and other tools like it are more than a trivial wrapper, but of course you have never used them so you have no idea. At least give yourself some exposure before projecting.
Dropbox is still a $5+bn business. Cursor is still growing, will it work out, I don’t know but lots of folks are seeing the value in these tools and I suspect we have not hit peak yet with the current generation. I am not sure what a service business like a small biz website builder has to do with Cursor or other companies in adjacent spaces.
Your characterization of hosting as "a small biz website builder" is revealing. https://finance.yahoo.com/quote/GDDY/ is the one that made it and is now a $24B firm, but there were at least dozens of these companies floating around in the early 2000s.
Why are you so sure Cursor is the new GoDaddy and not the new Tripod? https://www.tripod.lycos.com/
The only person being defensive here is you. My point was simple: tools like Cursor are more than just “wrappers.” Whether it becomes a massive business or not, revenue is growing, and clearly many users find enough value to justify the subscription. You don’t have to like it but writing it off without firsthand experience just weakens your argument.
At this point, you’re debating a product you haven’t tried, in a market you’re not tracking. Maybe sit this one out unless you have something constructive to say beyond “it’s just a wrapper”.
The shell of the IDE is open source. It’s true there is some risk on the supply of models and compute, but again none of those providers, except MSFT (which does not even own any of the SOTA models), have any direct competition. OpenAI has Codex but it’s half-baked and being bundled into ChatGPT. It is in nobody’s interest to cut off Cursor, as at this point they are a fairly sustained and large customer. The risk exists but feels pretty far-fetched until someone is actively competing or Cursor gets bought out by OpenAI.
Again, what proof do you have that there is zero complexity, or that most of the value is driven by the sandwich filling? Most of OpenAI's valuation is being driven by the wrapper ChatGPT, not API usage. I have written a number of integrations with LLM APIs, and while some of it just works, there is a lot of nuance to doing it effectively and efficiently at scale. If it were so simple, why would we not see many other active competitors in this space with massive MAUs?
what? Do you think providers (or their other customers) don’t care about the business implications of a decision like this? All so that cursor can bring their significant customer base to a nearly-indistinguishable competitor?
Current situation doesn't sound too good for "scaling hypothesis" itself.
But the “scaling hypothesis” is the easiest, fastest story to raise money. So it will be leveraged until conclusively broken by the next advancement.
For the time being, nothing comes close, at least for me.
I use GitHub Copilot and often end up frustrated. It messes up old things while making new ones. I use the Claude 4 model in GH CP.
Then I'll look through the changes and decide if it is correct. Sometimes can just run the code to decide if it is correct. Any compilation errors are pasted right back in to the chat in agent mode.
Once the feature is done, commit the changes. Repeat for features.
Do you also get it to add to its to-do list?
I also find that having the o3 model review the plan helps catch gaps. Do you do the same?
Here are some nice copilot resources: https://github.com/github/awesome-copilot
Also, I am using tons of markdown documents for planning, results, research.... This makes it easy to get new agent sessions or yourself up to context.
I'm not the original poster, but regarding workflow, I've found it works better to let the LLM create one instead of imposing my own. My current approach is to have 10 instances generate 10 different plans, then I average them out.
User experience is definitely worth something, and I think Cursor had the first great code integration, but then there is very little stopping the foundation model companies from coming in and deciding they want to cut out the middleman if so desired.
I watch the changes on Kilo Code as well (https://github.com/Kilo-Org/kilocode). Their goal is to merge the best from Cline & Roo Code then sprinkle their own improvements on top.
Sometimes one model would get stuck in their thinking and submitting the same question to a different model would resolve the problem
It allows you to have CC shoot out requests to o3, 2.5 pro and more. I was previously bouncing around between different windows to achieve the same thing. With this I can pretty much live in CC with just an editor open to inspect / manually edit files.
Once I max out the premium credits I pay-as-you-go for Gemini 2.5 Pro via OpenRouter, but always try to one shot with GPT 4.1 first for regular tasks, or if I am certain it's asking too much, use 2.5 Pro to create a Plan.md and then switch to 4.1 to implement it which works 90% of the time for me (web dev, nothing too demanding).
With the different configurable modes Roo Code adds to Cline I've set up the model defaults so it's zero effort switching between them, and have been playing around with custom rules so Roo could best guess whether it should one shot with 4.1 or create a plan with 2.5 Pro first but haven't nailed it down yet.
Roo Code just has a lot more config exposed to the user, which I really appreciate. When I was using Cline I would run into minor irritating quirks that I wished I could change but couldn't, vs. Roo, where the odds are pretty good there are some knobs you could turn to modify that part of your workflow.
I stopped writing code by hand almost entirely and my output (measured in landed PRs) has gone up 10x
And when I write code myself then it’s gnarly stuff and I want AI to get out of my way…so I just use Webstorm
Care to share your opinions on which options are better?
That being said _sometimes_ its analysis is actually correct, so it's not a total miss. Just not something I'm willing to pay for when Ollama and free models exist.
Cursor has $500MM ARR; your anecdote might be meaningful in the medium term, but so far growth has not slowed down.
Ah, yes, companies like Amazon.com, eBay, PayPal, Expedia, and Google. Never heard of those losers again. Not to mention those crazy kids at Kozmo foolishly thinking that people would want to have stuff delivered same-day.
The two lessons you should learn from the .com bubble are that the right idea won’t save you from bad execution, and that boom markets–especially when investors are hungry for big returns–can stay inflated longer than you think. You can be early to market, have a big share, and still end up like Netscape because Microsoft decided to take the money from under the couch cushions and destroy your revenue stream. That seems especially relevant for AI as long as model costs are high and nobody has a moat: even if you’re right on the market, if someone else can train users to expect subsidized low prices long enough you’ll run out of runway.
Cursor’s growth is impressive, but sustained dominance isn’t guaranteed. Distribution, margins, and defensibility still matter and we haven’t seen how durable any of that is once incentives tighten and infra costs stop being subsidized.
Kozmo is a great case study: decent demand, terrible unit economics, and zero pricing power. They didn’t just scale too fast, they scaled a structurally unprofitable model. There was no markup, thin margins, and they held inventory without enough throughput.
Many of these companies may fail but it’s a much different environment and the path to profitability is moving a lot quicker.
There also were companies like Sun and Cisco who had real, roaring business and lots of revenue that depended on loose start-up purse-strings, and VC exuberance...
Sun and Cisco both survived the .com bust, but were never the same, nor did they ever reach their high-water marks again. They were shovel-sellers, much like Amazon and Nvidia in 2025.
I'm an attorney that got pitched the leading legal AI service and it was nothing but junk... so I'm not sure why you think that's different from what's going on right now.
I am not sure why you would think your single anecdote is defensible or proves much. My perspective is that the valuations going on right now don’t have multiples that are that wild, especially if we compare them to the .com bubble.
Evidence? Prove? What are you talking about. This is just a discussion between people, not some courtroom melodrama you are making it out to be.
>My perspective is that the valuations going on right now don’t have multiples that are that wild, especially if we compare them to the .com bubble.
Okay, I could be equally rude to you, but I won't.
As for valuations, when looking at current VC multiples and equity markets, I don’t see the same bubble from a qualitative perspective. Absolutely there is overhype coming from CEOs in public markets, but there is a lot of value being driven. I don’t believe the giants are going to do well (maybe the infrastructure plays will), but I think we will see a carve-out of a new generation of companies driving the change. Unlike ‘99, I am seeing a lot more startups and products with closer-to-the-ground roadmaps to profitability. In ‘99 so many were running off of hopes and dreams.
If you would actually like to converse I would love to see your perspective but if all you can be is mad please please don’t respond. Nobody is having a courtroom drama other than what’s playing out in your head.
Briefpoint.ai, casely.ai, eve.legal etc. I work with an attorney who trained his paralegals to use chatgpt + some of these drafting tools, says it's significantly faster than what they could've done previously.
> I feel like big VC money goes to solving legal analysis, but I'm seeing a lot of wins with document drafting/templating.
What do you mean "wins?" Like motions won with AI drafted papers? I'm skeptical.
>I work with an attorney who trained his paralegals to use chatgpt + some of these drafting tools, says it's significantly faster than what they could've done previously.
I'd be concerned about malpractice, personally. The case reviews I've seen from Vincent (which is ChatGPT + the entire federal docket) are shocking in how facially wrong they can be. It's one thing for an attorney to use ChatGPT when they do know the law and issues (hasn't seemed to help the various different partners getting sanctioned for filing AI drafted briefs) but to leave the filtering to a paralegal? That's insane, imo.
If you discuss a plan with CC well upfront, covering all the integration points where things might go off the rails, and perhaps checkpoint the plan in a file and then start a fresh CC session for coding, CC will usually one-shot a 2k-LoC feature uninterrupted, which is very token efficient.
If the plan is not crystal clear, people end up arguing with CC over this and that. Token usage will be bad.
Now I just find myself exasperated at its choices and constant forgetfulness.
My assessment so far is that it is well worth it, but only if you're invested in using the tool correctly. It can cause as much harm as it can increase productivity and i'm quite fearful of how we'll handle this at day-job.
I also think it's worth saying that imo, this is a very different fear than what drives "butts in seats" arguments. Ie i'm not worried that $Company will not get their value out of the Engineer and instead the bot will do the work for them. I'm concerned that Engineer will use the tool poorly and cause more work for reviewers having to deal with high LOC.
Reviews are difficult and "AI" provides a quick path to slop. I've found my $200 well worth it, but the #1 difficulty i've had is not getting features to work, but in getting the output to be scalable and maintainable code.
Sidenote, one of the things i've found most productive is deterministic tooling wrapping the LLM. Eg robust linters like Rust Clippy set to automatically run after Claude Code (via hooks) helps bend the LLM away from many bad patterns. It's far from perfect of course, but it's the thing i think we need most atm. Determinism around the spaghetti-chaos-monkeys.
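The hook setup for this is pretty small. Something like the following in `.claude/settings.json` runs clippy after every file edit (shape from memory of the Claude Code hooks docs, so double-check the exact schema before relying on it):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "cargo clippy --all-targets -- -D warnings" }
        ]
      }
    ]
  }
}
```

Because the lint output lands right back in the session, the model tends to fix the pattern it just introduced instead of repeating it across the codebase.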
The challenge with the bubble/not bubble framing is the question of long term value.
If the labs stopped spending money today, they would recoup their costs. Quickly.
There are possible risks (could prices go to zero because of a loss leader?), but I think anthropic and OpenAI are both sufficiently differentiated that they would be profitable/extremely successful companies by all accounts if they stopped spending today.
So the question is: at what point does any of this stop being true?
Maybe. But that would probably be temporary. The market is sufficiently dynamic that any advantage they have right now probably isn't defensible longer term. Hence the need to keep spending. But what do I know? I'm not a VC.
If that is the case at some point the music is going to stop and they will either perish or they will have to crank up their subscription costs.
Claude 3.7 Sonnet supposedly cost "a few tens of millions of dollars"[1], and they recently hit $4B ARR[2].
Those numbers seem to give a fair bit of room for salaries, and it would be surprising if there wasn't a sustainable business in there.
[1] https://techcrunch.com/2025/02/25/anthropics-latest-flagship...
[2] https://www.theinformation.com/articles/anthropic-revenue-hi...
I use claude code exclusively for the initial version of all new features, then I review and iterate. With the Max plan I can have many of these loops going concurrently in git worktrees. I even built a little script to make the workflow better: http://github.com/jarredkenny/cf
As I said above, I don’t think a single AI company is remotely in the black yet. They are driven by speculation and investment and they need to figure out real quick how they’re going to survive when that money dries up. People are not going to fork out 24k a year for these tools. I don’t think they’ll spend even $10k. People scoff at paying $70+ for internet, a thing we all use basically all the time.
I have found it rather odd that they have targeted individual consumers for the most part. These all seem like enterprise solutions that need to charge large sums and target large companies tbh. My guess is a lot of them think it will get cheaper and easier to provide the same level of service and that they won’t have to make such dramatic increases in their pricing. Time will tell, but I’m skeptical
As I note above, Anthropic probably is in the black. $4B ARR, and spending less than $100M on training models.
Profit is for companies that don't have anything else to spend money on, not ones trying to grow.
The goal for investors is to be able to exit their investment for more than they put in.
That doesn't mean the company needs to be profitable at all.
Broadly speaking, investors look for sustainable growth. Think Amazon, when they were spending as much money as possible in the early 2000s to build their distribution network and software and doing anything they possibly could to avoid becoming profitable.
Most of the time companies (and investors) don't look for profits. Profits are just a way of paying more tax. Instead the ideal outcome is growing revenue that could be run profitably, but where the excess money is invested in growing more.
Note that this doesn't mean the company is raising money from external sources. Not being profitable doesn't imply that.
The only answer that matters is the one to the question "how much more are you making per month from your $200/m spend?"
If you need to repeatedly remind it to do something though, you can store it in claude.md so that it is part of every chat. For example, in mine I have asked it to not invoke git commit but to review the git commit message with me before committing, since I usually need to change it.
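Mine is literally just a few bullet points; something like this (wording illustrative, not my exact file):

```
# claude.md
- Never run `git commit` directly.
- Draft the commit message and show it to me for review first;
  I usually need to tweak it before committing.
```

Since claude.md is loaded into every session, it saves repeating the same correction chat after chat.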
There may be a maximum amount of complexity it can handle. I haven't reached that limit yet, but I can see how it could exist.
I'm just worried that I'm doing it wrong.
I've found though that if you can steer it in the right direction it usually works out okay. It's not particularly good at design, but it's good at writing code, so one thing you can do is say write classes and some empty methods with // Todo Claude: implement, then ask it to implement the methods with Todo Claude in file foo. So this way you get the structure that you want, but without having to implement all the details.
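To make that pattern concrete, the hand-written scaffold can be as thin as this (the names here are invented for illustration; the `todo!()` bodies marked "Todo Claude" are what you then ask it to fill in):

```rust
// Hand-written structure: you fix the types and signatures up front,
// and leave the bodies as explicit markers for the assistant.
struct RateLimiter {
    max_per_sec: u32,
}

impl RateLimiter {
    fn new(max_per_sec: u32) -> Self {
        Self { max_per_sec }
    }

    // Todo Claude: implement — allow at most max_per_sec calls per second
    fn allow(&mut self) -> bool {
        todo!()
    }
}
```

You keep ownership of the design decisions, and the model only fills in bodies whose shape you've already constrained.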
What kind of things are you having issues with?
Nothing. Most people will not pay for a chat bot unless forced to by cramming it into software that they already have to use
This is _especially_ true for developers in general, which is very ironic considering that our livelihood depends on software.
Apple says their App Store did $53B in "digital goods and services" in the US alone last year. That's not 100% software, but it's definitely more than 0%.
But for productivity software in general, only a few large companies seem to be able to get away with it: the Office suite, CRMs such as Salesforce.
In the graphics world, Maya and 3DS Max. Adobe has been holding on.
Which puts the current valuations I've heard pretty much in the right ballpark. Crazy, but it could make sense.
Are those things created by Claude actually making you that much in real money every month? Because the amount of money it would cost to pay someone to create something, and the value that something brings to you once it's made are largely unrelated.
I know it's hard to place a value on how much a utility saves a business, but honestly this math is like the piracy math and we didn't buy it back then either.
Some teenager downloading 20k songs does not mean that they saved $20k[1], nor does it mean that the record labels lost $20k.
In your case, the relevant question is "how much did your revenue increase by after you started 10x your utility code?"
[1] Assuming the songs are sold on the market for $1 each.
OP wanted a thing. in the past, they've been OK paying $10k for similar things. now they're paying $200/month + a bunch of their time wrangling it and they're also OK with that.
seems reasonable to consider that "$10k of value" in very rough terms which is of course how all value is measured.
Okay, then their costs should have come down similarly, no? OP said they were a business and that these weren't luxury hobby things but business needs. In which case, it must reflect on the bottom line.
I operate as a business myself (self-employed), and I can generally correlate purchases with the bottom line almost immediately for some things (Jetbrains, VPSes for self-hosted git, etc) and correlate it with other things in the near future (certifications, conferences, etc).
The idea that "here is something I recently started paying a non-trivial amount for but it does not reflect on the bottom line" is a new and alien concept to me.
You can actually hire a few excellent devs for very little money. You just can't hire 20k of them and convince them to move to a certain coastal peninsula with high rent and $20 shawarmas, for very little money each.
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
[2] https://futurism.com/companies-fixing-ai-replacement-mistake...
So indeed, IF you are in that case (many years on the same project, with multiple years of experience) then it is not useful; otherwise it might be. This means it might be useful for juniors, and for experienced devs who are switching projects. It is a tool like any other: if you have a workflow that you've optimized through years of usage, it won't help.
In other words: it might be useful for people who don't understand the generated code well enough to know that it's incorrect or unmaintainable.
You are welcome to your point of view, but for me, while one agent is finding an obscure bug, I have another agent optimising or refactoring while I work on something else. It's hard to believe I am deluded in thinking this and am actually spending more time per task.
I think the research does highlight that training is important. I don't just throw devs at agents and expect them to be productive.
2. They don't have a moat. DeepSeek and Kimi are already good enough to destroy any high margins they're hoping to generate from compute.
Just because something is highly useful doesn't mean it's highly profitable. Water is essential to life, but it's dirt cheap in most of the world. Same goes for food.
I find a combination of local Ollama models, very inexpensive APIs like Moonshot’s Kimi, occasional Gemini 2.5 Pro use, and occasionally gemini-cli provides extraordinary value. Am I missing out by not using one or more $200-$300 a month subscriptions? Probably, but I don’t care.
I don’t want to descend into talking politics, but I want to say that geopolitics, the rising geopolitical ‘south’, etc., is fascinating stuff - much more interesting and entertaining than anything fictional on Netflix or HBO!
I decided to try out the agent built into VS Code. It basically matches most of these fly-by-night "agent" IDEs, which are mostly just VS Code forks anyway.
But it's weird, because Microsoft can use Anthropic's API, funnel them revenue, and take a loss on Copilot.
We're all getting this stuff heavily subsidized by either VC money or big corp money.
Microsoft can eat billions in losses on this if they become *the* provider of choice.
This stuff isn't perfect, but this is the worst it'll ever be. In 2 years it'll be able to replace many of us.
Can you explain? I don't see how $200 makes that much difference than what I get from paying $20/month with OpenAI? What's the use case?
I think that there is a bubble but it's shaped more like the web bubble and less like the crypto bubble.
Regarding LLMs there are two concerns. Current products don't have any killer feature to lock in customers, so people can easily jump ship. And diminishing returns: if there isn't clear progress with models, then free/small, maybe even local, models will fill most people's needs.
People are speculating that even OAI is burning more money than they make; it's hard to say what will happen if customer churn increases. Take me, for example: I never paid for LLMs specifically, and didn't use them in any major way, but I used free Claude to test how it works and maybe incorporate it into my workflow, and might have moved to the paid tier eventually. But recently someone noted that Google cloud storage includes "free" Gemini Pro, and I've switched to it, because why not - I'm already paying for the storage part. And there was nothing keeping me with Anthropic. (Actually that name alone is revolting, imo.) I wrote this as an example: when monsters like Google or Microsoft or Apple start bundling their solutions (and advertising them properly, unlike Google), specialized companies including OAI will feel very, very bad, with their insane expenses and investments.
If that's a genuine question: Facebook sells ads, information, and influence (e.g. to political parties). It's a very profitable enterprise. In 2024 Meta made $164B in revenue, and they're still growing at ~16% year-over-year.
[0] https://investor.atmeta.com/investor-news/press-release-deta...
It's Meta now, and they own a lot of "brands" besides Facebook: Instagram, WhatsApp, Oculus, Giphy, etc.
I don't think LLM capacities have to reach human-equivalent for their uses to multiply for years to come.
I don't think LLM technology as it exists can reach AGI by the simple addition of more compute power, and moreover, I don't think adding compute will necessarily provide proportionate benefit. (Indeed, someone pointed out that the current talent race acknowledges that brute force has likely had its day and some other "magic" is needed. Unlike brute force, technical advances can't be summoned at will.)
There are still massive gains to be had from scaling up - but frontier training runs have converged on "about the largest model that we can fit into our existing hardware for training and inference". Going bigger than that comes with non-linear cost increases. The next generations of AI hardware are expected to push that envelope.
The reason why major AI companies prioritize things like reasoning modes and RLVR over scaling the base models up is that reasoning and RLVR give real world performance gains cheaper and faster. Once scaling up becomes cheaper, or once the gains you can squeeze out of RLVR deplete, they'll get back to scaling up once again.
I think overstating their broad-ness is core to the hype-cycle going on. Everyone wants to believe—or wants a buyer to believe—that a machine which can grow documents about X is just as good (and reliable) as actually creating X.
A machine which can define a valid CAD document can get the actual product built (even if the building requires manual assembly).
The landscape has changed dramatically now. Investors and VCs have learnt if we stick with winners and growth companies, the payoffs are massive.
We also have more automatic, retail and foreign money flowing into the market. Buy the dip is a phenomenon that didn't exist at the scale it is now.
Pre-2015, if Big Money pulled out, the market was guaranteed to fall, but now retail sometimes has longer views and more belief (in people like Musk and Altman) than institutions do, and continues to prop it up.
So, it's foolish to apply 2000 parallels to now. Yes, history repeats, but not with the exact same timing or price points.
> Investors and VCs have learnt if we stick with winners and growth companies, the payoffs are massive.
Well... yes and no. 2021 wasn't that long ago.
> So, it's foolish to apply 2000 parallels to now
The stock market and other financial stuff is of course different. The fundamental trend, not necessarily. It took a while for anyone to figure out how to build a highly profitable internet-based business back then; for AI it seems more or less the same so far.
Similar to the invention of the web, AI is not a bubble. Real value has been created.
"Good company" is subjective, but to argue that the company that built the backbone of the modern web didn't make anything novel or monetizable is a bit short-sighted, don't you find?
I'm pretty sure many internet companies would be given a longer rope to survive now. E.g. OpenAI and Anthropic will probably take years to become profitable, but investors are OK with it.
AI Agents can't be copied in a race to the bottom market to resell inference compute?
lol. Investors and VCs have no idea what they're doing
There is a reason Anthropic/OpenAI and many startups are given much, much longer ropes to become profitable than in the 2000 era, when VCs pulled the rug at the first sign of trouble.
Delusional to apply this to top operators (and in the same breath complain about the rich getting richer).
I believe the true revolution is going to be when AI can start living / interacting with the physical world. Driverless cars might be the start here.
I have lived and worked through two previous ‘AI winters’ and I expect the current bubble to eventually pop in a dramatic way. There will be good things produced by AI, but I am skeptical of the panic FOMO rush to AGI or super intelligence.
Look at the process of shifting manufacturing out of the USA: that was all about driving extreme wealth for special interest insiders. Sadly, most people look to their particular little political party for some form of relief - how is that working out?
A lot of VCs and PEs lost a lot of money during the crash. This means a lot of capital was spent in the economy, generating a lot of good activity, and the companies that failed then also put a lot more capital back into the economy through bankruptcies. Other businesses can pick up talent, IP, and assets for cheap, and everyone can learn from the failures. While losing that money isn't great for VCs, what they got was a very valuable education to be better stewards of their investments, and pick better companies. The next rounds of companies have to hit metrics, milestones, have to prove their value, etc.
Never waste a perfectly good crisis: learn if nothing else.
Things move fast in tech because there isn't nearly as much red tape and litigation as in other mature industries. This is because there's an agreed 'way of doing things'. Take funding, grow like crazy, sell/merge or IPO. Everyone wins or loses together (even if things are stacked in favour of some over others).
Once trust in this process is broken and founders or VCs start stacking the deck in their favour the game becomes rigged to the point where other people don't want to play anymore. Once that trust is gone red tape and litigation appears.
I'm astounded that anyone still genuinely believes we're not in a massive bubble. Of course AI company CEOs are going to say we're not and that AGI is just around the corner, it's deeply in their financial interest to keep inflating the bubble as long as possible.
OpenAI’s Windsurf deal is off, and Windsurf’s CEO is going to Google - https://news.ycombinator.com/item?id=44536988 - July 2025 (679 comments)
Attended Windsurf's Build Night 18 hours before founders joined Google DeepMind - https://news.ycombinator.com/item?id=44539884 - July 2025 (1 comment)
That, by itself, would obliterate the entire value of Windsurf or Cursor or whatever. The fact that Google has this kind of money and spends it on dubious "talent" (though none of these people are known in the community) is a testament to how overfunded tech companies are compared to the value that they provide.
> For other IDEs: The protocol is editor-agnostic. Any editor that can run a WebSocket server and implement the MCP tools can integrate with Claude Code.
https://github.com/anthropics/claude-code/issues/1234 https://github.com/coder/claudecode.nvim/blob/da78309eaa2ca2...
Example in Emacs, this is how I use claude-code: https://github.com/manzaltu/claude-code-ide.el
It can also access the IDEs' real-time errors and warnings, not just compile output ('ideDiagnostics' tool), see your active editor selection, cursor position, etc.
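To make the editor-agnostic claim above concrete, here is a minimal sketch of how an editor plugin might route an incoming tool call like `ideDiagnostics` to a local handler. This is purely illustrative: the message shape, function names, and diagnostic fields are my assumptions, not Claude Code's actual protocol; only the `ideDiagnostics` tool name comes from the discussion above. A real integration would serve this dispatcher over a WebSocket.

```python
import json

def make_dispatcher(tools):
    """Return a function that routes a JSON tool-call message to a handler.

    `tools` maps tool names to callables taking an args dict.
    (Hypothetical shape; not the real Claude Code wire format.)
    """
    def dispatch(raw_message):
        msg = json.loads(raw_message)
        handler = tools.get(msg["tool"])
        if handler is None:
            return json.dumps({"error": f"unknown tool: {msg['tool']}"})
        return json.dumps({"result": handler(msg.get("args", {}))})
    return dispatch

def ide_diagnostics(args):
    # In a real plugin this would query the editor's live diagnostics list;
    # here we return a canned example.
    return [{"file": "main.rs", "line": 3, "severity": "error",
             "message": "cannot find function `frobnicate`"}]

dispatch = make_dispatcher({"ideDiagnostics": ide_diagnostics})
reply = dispatch(json.dumps({"tool": "ideDiagnostics"}))
```

The point is only that the editor side reduces to "accept a message, look up a tool, return JSON", which any editor with a scripting layer can do.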
The simpler explanation seems more correct here — there was a lot of product fluff and a lot of headcount allocated to build that fluff.
The fact that one division of Google is wildly profitable does not exempt other parts of the company from criticism of their financially dubious choices.
I'll assume the former and try again. Maybe you didn't realize I'm not the person you originally replied to?
If a company is profitable, they have funds. The funds generated by the profits, can be used to fund additional internal projects. If the bucket of funds from profits gets ridiculously large, then it may begin to be used for vanity projects, like gutting an AI company, or building a gold statue of the founder. It seems reasonable to call companies spending on mostly-useless excesses "overfunded".
As a bystander and outsider it is hard to isolate the value-igniting behaviour from the moonshot behaviour. Shareholders love to gut a business of its risk-taking and excess for predictable and inflated margins (and dividends), but the lesson of the past 20+ years of our megacap companies is that they have continued to "innovate" in spite of all their inefficiencies.
I always have a chuckle when I recall how shareholders tried to oust Zuck for buying Instagram for $1B...
These vanity hires do seem frothy and reminiscent of dotcom-style behaviour. But "AI" clearly will be game-changing, much like the internet was, and who at this stage can say what it's worth to recruit the people at the forefront of commercialising the tech right now.
Stop looking at the entire world through the eyes of VC, because it doesn't work.
You're not funding google by paying for youtube, you're buying a service.
You didn't "overfund" your pizza shop that hired a stripper for friday night vibes, and neither did 99.99% of customers that paid google for their service offerings.
You just bought a pizza. Put down the VC podcasts
That's fine. I'm going to continue referring to corporations that blow lots of money on random intra-industry dick measuring matches because they can as "overfunded", and you can continue expressing your opinion to anyone who will listen that this one person on the internet used an idiom that you think is dumb because it implies something other than "that person profited, therefore they did something right and therefore whatever they do with that money is correct and intelligent and never ever wasteful or dumb."
aren't you reviewing diffs in whatever diff tool you like? I find magit to be superlative for this (and for correcting and committing things).
I use Rust and found it's better to let the AI hallucinate function names, then let the compiler correct them. Rust's compiler is significantly better than TypeScript's at this, so it works well.
Could you please avoid juicing a random comment this way?
I've used Cursor, and I know there is quite a bit of value between the model and the chat input box; it will be similar for Claude Code or Codex. That layer is what makes this agentic, it's just accessed through a different interface. So from that perspective, Cursor makes sense for folks who are already in the VSCode environment.
Microsoft forced Cursor to stop using their versions of various plugins https://forum.cursor.com/t/the-c-dev-kit-extension/76226/18
The technology is nowhere close to what they're hoping for and incremental progress isn't getting us there.
If we get true AGI agents, anyone can also build a multi-billion dollar tech company on the cheap.
That's not how the economy works...
Devin etc. will give you, let's say, 10x more engineering power, but not necessarily elite engineering.
Last I checked, feeding the output of an LLM back into its training data leads to a progressively worse LLM. (Note I'm not talking about distillation, which involves training a smaller model, by sacrificing accuracy. I'm referring to an equal or greater number of model parameters)
https://medium.com/@villispeaks/the-blitzhire-acquisition-e3...
which I first saw here
So, Google will be paying $2.5B to Devin guys?
Basically, Google bought the top talent from the company. This cash was used (according to articles I read this morning) in part to pay directly out to shareholders, and in exchange Google got the top talent from the company and a license for the software (probably mostly so their new talent didn't have to worry about NDA, non-compete, and patent challenges).
Since this money went to shareholders, not to the company's coffers, and since top talent fleeing a company reduces its value, the overall value of Windsurf likely went down as part of the Google deal. This in turn likely made it cheap enough for the remainder to be purchased by Cognition.
The sale price for Windsurf was likely significantly lower than the original acquisition plans.
It didn't go to $0 like some predicted, but it was never going to be as valuable as it was before the executives bailed on it.
There is no evidence at all in the announcement that this is the case. It just says "100% of Windsurf employees will participate financially in this deal". What "participate financially" looks like is not elaborated upon.
It is possible you're right. It's also equally possible that the founders have still screwed over their employees, we just don't know. Nothing in this post supports either position.
In the absence of evidence, it's okay to assume the most likely scenario, which is that the executives and shareholders will make out like bandits and everyone else will, at best, get pennies.
Presumably the "payout" from Cognition is at a lower nominal value and in illiquid (and IMO overvalued) shares in Cognition rather than cash.
that's absolutely not the case. they ejected and the remaining executive team dealt with the sale over the weekend.
Did OpenAI ever actually announce anything publicly regarding a potential windsurf acquisition?
AFAICT most of the reporting was based on rumors or leaks. But they never actually announced an acquisition. Seems like Bloomberg may have made an oopsie here.
Geez is the cognitive distortion field active again? Even Grok could figure this one out.
[API Error: got status: INTERNAL. {"error":{"code":500,"message":"An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting..."}}]
Works fine.
Does this represent confirmation that there was no pro-rata compensation to common share holders in the Google deal?
I just have so many questions.
Cognition, worth $4B with Devin being raced to zero by Claude Code (which is also undercutting both Windsurf and Cursor), has a very steep hill to climb.
Having both Devin and Windsurf will just make them raise more money as they burn through their operational costs.
This is unclear. $2.4B was for licensing and compensation. Why would Google have agreed to pay any significant amount to the Windsurf leftovers?
Reminds me of some quip where a doctor says to a resident: yeah, this job would be so cool if it weren't for all these sick people.
I haven't seen anything to indicate what was paid for what's left of Windsurf.
I guess it's about to happen again!
It's also been a lot of random stuff recently with their 3 separate Ross and Rachel acquisition storylines.
Some takeaways:
1. Devin/Cognition definitely have a legit AI dev agent now
2. It's crazy what Google passed on. The fact that it was worth it to them without the traditional best assets is wild. Guess that's what happens when you play on ultra hard mode with an infinite money glitch.
3. I am worried/pre-emptively sad that Windsurf will likely go away or get nerfed, more expensive etc.
^ Any company that competes with them could say that and it would create some pause.
The fact that it doesn't make sense with those numbers almost surely indicates those numbers are misleading.
> Google paid a $2.4 billion licensing fee
This is the reported number for licensing and compensation, but who knows what the terms really were.
> Cognition’s valuation is $4 billion
Doubtful
Google poached the talent; Devin Co. picks up the scraps.
But with all this changing of hands, I'm not sure I can trust it going forward at all, so I guess it's back to looking for alternatives.
They had released their own model which was free and good enough a couple of weeks back.
Obviously will need to look for alternatives.
If there's 47m software engineers in the world, at $200/month, and 50% gross profit that's a $56 billion TAM. Not crazy to think it's more if we include the adjacent space of analyst roles that write software (sql, advanced excel, etc).
They'll have to crush it to make a $2 billion acquihire look reasonable, but it's possible.
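The back-of-the-envelope math above checks out; a quick sketch (the engineer count, price, and margin are the commenter's assumptions, not market data):

```python
engineers = 47_000_000      # assumed worldwide software engineers
price_per_month = 200       # dollars per seat
gross_margin = 0.5          # assumed gross margin

annual_revenue = engineers * price_per_month * 12
annual_gross_profit = annual_revenue * gross_margin

print(f"revenue: ${annual_revenue / 1e9:.1f}B, "
      f"gross profit: ${annual_gross_profit / 1e9:.1f}B")
# prints: revenue: $112.8B, gross profit: $56.4B
```

So the "$56 billion" figure is annual gross profit on roughly $113B of revenue, under full worldwide adoption.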
Basically anyone that inputs and outputs goods which can be digitized. So writers, graphic artists, accountants, legal work, etc.
> 100% of Windsurf employees will have vesting cliffs waived for their work to date
> 100% of Windsurf employees will receive fully accelerated vesting for their work to date
This sounds like a happy ending for the employees of Windsurf and a good deal for Cognition
The employees were robbed of a big cash exit. Illiquid stock options in Windsurf were converted to illiquid stock options in Devin.
What's worse is that the well is now poisoned. I would advise against joining startups from now on, because I think that there's no upside for employees anymore.
For those brief 2 weeks, Windsurf felt like the SOTA tool. Crazy how the winds change.
Feels like a new SOTA tool every couple weeks. Heck, the post below this is about a new agentic IDE.
The "world-class GTM" is a joke.
> To that end, Jeff and I worked together to ensure that every single employee is treated with respect and well taken care of in this transaction. Specifically:
> 100% of Windsurf employees will participate financially in this deal
> 100% of Windsurf employees will have vesting cliffs waived for their work to date
> 100% of Windsurf employees will receive fully accelerated vesting for their work to date
On the other hand, I can imagine the execs taking Google golden handcuffs while trying to close the Cognition deal so the employees are made whole or maybe even on better terms than if they all went to Google.
No code or prompts are stored unless you opt-in. We also have on-prem deployment options but it's much more expensive.
> This transaction is structured so that 100% of Windsurf employees will participate financially. They will also have all vesting cliffs waived and will receive fully accelerated vesting for their work to date.
All right, cancelled.
I wonder what the terms were there. Hard for me to imagine why Google would've included that in the deal.
First, OpenAI wanted to acquire Windsurf. Terrific move! Win-win for OpenAI (who needs more AI product) and Windsurf (for the deal price). But this fell apart because Windsurf didn't want the IP to go to Microsoft (which imo should not have been a big deal, especially if you knew what would happen next). Big loss for all parties that this fell apart.
My biggest question still is: why not continue on as an independent company? Perhaps losing access to Claude doomed signups; perhaps employees/investors had a taste of an exit and still wanted it; perhaps fiduciary duty to maximize returns; perhaps their growth stalled due to the announcement. In any case, the founders got an arguably equivalent deal from Google, and were arguably wise to pursue it.
But Google's Corp Dev team here is the most maddening. Why not fully acquire the entire company, instead of doing the same "acquihire and license" deal that was done to Character AI, Adept, Scale, etc.? Risk of FTC antitrust review is a thing, but Google's not even competitive in the coding market, so I doubt there is a review (though I do hear that all acquisitions by large tech companies these days are reviewed by default). If there's anyone to blame in this situation, it's the FTC and Google for pursuing this strategy, instead of a full acquisition. Win-win for Google (for the team) and Windsurf (for getting a similar acquisition price, but liquid!).
Imo, the founders did a good job ensuring that close to the $3B acquisition price was reflected in the $2.5B Google deal--all existing investors and vested employee/equity holders are paid out; the company also retained $100M, which was suspiciously similar to the amount needed to pay out all unvested employee/equity holders [1]. So theoretically the remaining company could accelerate vesting, pay out the cash to their remaining employees, and then shut down, giving everyone the same exit as an acquisition, or better. This might have been the best scenario, because the brand damage to Windsurf as an IDE that happened over the weekend was pretty close to unrecoverable for them as an independent company.
But instead, the company leadership decided to field acquisition offers for the remaining company and IP, and got one from Cognition. (I'm actually surprised this acquisition isn't under FTC review; it's more plainly an agentic coding company acquiring a competitor agentic coding company). In taking the offer, it reinforces that the Windsurf IDE will continue to exist, that they have a R&D team backing the IDE again, and can marry Windsurf's enterprise sales chops with Cognition's product [3]. Win-win for both Cognition and Windsurf.
So overall, win-win-win all around, except for OpenAI, Varun's public reputation (imo, undeserved), and startups hiring employees (who might think they might not get a proper exit) [2].
[1] https://x.com/haridigresses/status/1944406541064433848
[2] https://stratechery.com/2025/google-and-windsurf-stinky-deal...
But the statement from Cognition was:
> 100% of Windsurf employees will participate financially in this deal
> 100% of Windsurf employees will have vesting cliffs waived for their work to date
> 100% of Windsurf employees will receive fully accelerated vesting for their work to date
The details matter. "Vesting cliffs waived" meaning what? Windsurf shares exchanged for Cognition shares? At what ratio? "Participate financially" means what exactly? They could all get a coupon for a free doughnut, and that statement would be true.
I'm not saying the employees are getting nothing, or even a raw deal. I'm saying we have no idea if the deal is good for them, without details.
> theoretically the remaining company could pay accelerate vesting, then pay out the cash to their remaining employees, and then shut down, to give everyone the same exit as an acquisition, or better.
unlikely that will happen. More likely the investors and VCs will take the lion's share of the $2.5B, that is what they do. That is why they exist. And they'll distribute as thin a slice as possible to the employees.
And to your last paragraph, read reference [1]. The distribution of that 2.5B is in accordance to the existing cap table; it will make sense once you read that tweet. You must allocate money according to the cap table, and so that allocation is already determined in a company's previous funding rounds.
I would argue against it if the downside is even more technological enslavement for billions.
And while many improvements to the human condition are rooted in technology, many of the problems of humanity are rooted in it as well. There might very well be an optimal point that we've already passed.
See, most closed source software really just pisses me off for ideological reasons. I just like to tinker with things, and merely having the possibility to do so, by being provided the source code, really helps my mind feel happy I guess.
So I "vibe coded" a game that I used to play, and some projects that I was curious about and just wanted to tinker with too. Sure, the game and code have bugs.
Also with the help of AI, I feel like I can tinker about things that I don't know too much about and get a decent distance ahead. You might think that I am an AI advocate by reading this comment, but quite the contrary, I personally think that this is the only positive quality that AI helped in quite substantially.
But at what cost? The job market has sunk into a large hole, and nobody's hiring junior devs because everybody feels better doing some AI deals than hiring junior devs.
My hunch is that senior devs are extremely in demand and paid decently, and so will retire early on average. Then there will be a huge gap between seniors and juniors: nobody's hiring junior engineers now, so who will become the senior engineers if nobody got hired in the first place? I really hope most companies realize that the AI game is quite a funny game. Most are too invested to see that open source AI will catch up, that there is just no moat with AI, and that building with AI isn't as meaningfully significant as they think, as shown by recent studies.
Is this true? I am not seeing salaries rising, the demand seems to be met. But maybe I'm wrong.
Also maybe I felt this way because of the $100 million offers and the $30 billion acquisition by Zuckerberg, I guess.
I might ask AI (oh, the irony), and here is the chat https://chatgpt.com/share/68756188-d374-8011-9f23-6860d6b1db... and here is one of the major sources for this, I suppose:
https://www.hackerrank.com/blog/senior-hiring-is-surging-wil...
And I would like to quote a part from the HackerRank post: "Taken in isolation, this might suggest a cautious but healthy rebound. But viewed through a 2025 lens, a deeper pattern emerges: teams are leaning hard into experience, and leaving early-career talent behind."
But in general my reason for hitting myself in the head is that I'm an IDE author and people keep tripping over themselves to place great value on forks of VSCode
Clearly.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
My name is Devin; it has been for many decades now. I'm embarrassed to see you've named your product after me. It has already prompted uncomfortable jokes at my expense, and I'm sure there will be more. I now have newfound empathy for people named Alexa.
For instance, people have made jokes about my name in interviews, and it's embarrassing for me, and thus awkward for everyone, and awkward interactions make it objectively less likely that I will get job offers.
I don't think any product should be named after people. Please change the name of Devin.