I love Amp; it delivers great results. I like that it's opinionated about how it should be used, for example telling me when I need to hand off context.

I love that I am not welded to one model and that someone smart has evaluated what's the best fit for what. It's a bit of a shame they lost Steve Yegge as their brand ambassador; a respected, practicing coder is a big endorsement.

To anyone on the fence - give it a go.

What are the chances we see local models rendering these paid IDEs useless? I presume it will be a very long time before a local model good enough to compete with their ad-supported frontier models can run on the vast majority of machines out there (most people don't have the latest and greatest).

I was watching the CEO of that Chad IDE give an interview the other day. They are doing the same thing as Amp, just with "brain rot" ads/games etc. instead (different segment, fine): they use that activity to produce value (ad/game engagement or whatever) for another business such that they can offset the costs of more expensive models. (I presume this could extend out into selling data also, but I don't know that they do that today.)

Congrats to the Amp team on all the success, btw. All I hear is great things about the product. Good work.

I do wonder what the moat is around this class of products (call it "coding agents").

My intuition is that it's not deep... the differentiating factor is "regular" (non-LLM) code that assembles the LLM context and invokes the LLM in a loop.

Claude/Codex have some advantage because they can RLHF/fine-tune better than others. But ultimately this is about context assembly and prompting.
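A minimal sketch of that loop, with a stand-in call_llm and a toy read_file tool (everything here is illustrative, not any vendor's actual API):

    # A toy agent loop: assemble context, call the model, run whatever
    # tool it asks for, append the result, and repeat until it answers.

    def read_file(path: str) -> str:
        with open(path) as f:
            return f.read()

    TOOLS = {"read_file": read_file}  # tools the model may invoke

    def call_llm(messages: list[dict]) -> dict:
        # Stand-in so the sketch runs: a real implementation would POST
        # `messages` to a model and parse the reply into either a tool
        # request {"tool": ..., "args": ...} or a final {"answer": ...}.
        return {"answer": f"stub reply after seeing {len(messages)} messages"}

    def agent(task: str, context_paths: list[str]) -> str:
        # The "regular" non-LLM code: assemble the context the model sees.
        messages = [{"role": "system", "content": "You are a coding agent."}]
        for path in context_paths:
            messages.append({"role": "user", "content": f"{path}:\n{read_file(path)}"})
        messages.append({"role": "user", "content": task})

        while True:  # model call -> tool call -> feed result back
            reply = call_llm(messages)
            if "answer" in reply:
                return reply["answer"]
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})

    print(agent("explain main()", []))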

There is no moat. It's all prompts. The only potential moat, I believe, is building your own specialized models using the code your customers send your way.

I think "prompts" are a much richer kind of intellectual property than they are given credit for. I'll put in a pointer here to the recent Odd Lots podcast with Noetica AI, a give-to-get benchmarker for M&A/complex-debt deal terms. The Noetica CEO said they now have over 1 billion "deal terms" in their database, which is only half a dozen years old and growing constantly. That's over 1 billion different legal points on which a complex financial contract might be structured. Even more than that, the representation of terms in the contracts they see can change pretty dramatically quarter to quarter. The industry learns and changes.

The same thing is going to happen with all of the human-language artifacts present in the agentic coding universe. Role definitions, skills, agentic loop prompts... the specific language, choice of words, sequence, etc. really matters and will continue to evolve really rapidly, and there will be benchmarkers, I am sure of it, because quite a lot of orgs will consider their prompt artifacts to be IP.

I have personally found that a very high-precision prompt will let a smaller model on personal hardware outperform a lazy prompt given to a foundation model. These word calculators are very very (very) sensitive. There will be gradations of quality among those who drive them best.

The best law firms are the best because they hire people who are the best with (legal) language, and they are able to retain the reputation and pricing of the best. That is the moat. Same will be the case here.

But the problem is the tight coupling of prompts to the models. The half-life of prompt value is short because the frequency of new models is high. How do you defend a moat that can halve (or worse) any day a new model comes out?

You might get an 80% "good enough" prompt easily, but then all the differentiation (the moat) is in that last 20%, and that 20% is tied to model idiosyncrasies, making the moat fragile and volatile.

I think the issue was that they (the parent commenter) didn't properly convey, and/or did not realize, that they were arguing for context. Data that is difficult to come by and that can be used in a prompt is valuable. Being able to work around something with clever wording (i.e., a prompt) is not a moat.

Can folks who have compared Amp with other agents share their experience? Some of my colleagues swear this is the best agent out there.

I think it's great but also pricey. Amp, like Claude Code, feels like a product used by the people who build it, and oddly enough that does not seem to be the case for most coding agents out there.

As someone who switches between most CLIs to compare, I find Amp is still on top; it costs more, but it has the best results. The librarian and oracle make it leagues ahead of the competition.

I don't understand how people use these tools without a subscription. Unless you are using it very infrequently, paying per token gets costly very fast.

Work pays for it. I don't work for stingy companies that won't provide the tools required to do the job. (Our team spends > $1000/mo EACH on Amp alone.)
Could you please share a little on why it's noticeably better than Claude Code on a subscription (or five? I mean, sometimes you can brute-force a solution with agents)?
https://www.askmodu.com/rankings independently aggregates traffic from a variety of agents, and Amp consistently has the highest success rate for small and large tasks.

That aligns with my anecdata :)

Thanks for the link.

But my first thought looking at this is that the numbers are probably skewed by the distribution of user skill levels and by which types of users choose which tool.

My hypothesis is that Amp is chosen by people who are VERY highly skilled in agentic development. Meaning these are the people most likely to provide solid context, good prompts, etc. That means these same people would likely get the best results from ANY coding agent. This also tracks with Amp being so expensive -- users or companies are more likely to pay a premium if they can get the most from the tool.

Claude Code, on the other hand, is used by (I assume) a way larger population. So the percentage of low-skill users is likely to be much higher. Those users may still get value from the tool, but their success rate will be lower by some factor with ANY coding agent. And this issue (if my hypothesis is correct) is likely 10x as true for GitHub Copilot.

Therefore I don't know how much we should read into stats like the total PR-merge success percentage, because it's hard to tell how much noise this user-skill imbalance introduces.
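A toy back-of-the-envelope (all numbers invented) showing how that skew alone could produce a gap between tools of identical quality:

    # Invented numbers: both tools succeed 80% of the time with skilled
    # users and 40% with unskilled ones, i.e. identical quality.
    success_by_skill = {"high": 0.80, "low": 0.40}

    user_mix = {
        "Amp":     {"high": 0.7, "low": 0.3},  # assumed: mostly expert users
        "Copilot": {"high": 0.2, "low": 0.8},  # assumed: much broader audience
    }

    for tool, mix in user_mix.items():
        observed = sum(share * success_by_skill[s] for s, share in mix.items())
        print(f"{tool}: {observed:.0%} observed success")
    # prints Amp: 68%, Copilot: 48%, with zero difference in tool quality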

Still interesting to see the numbers though!

To be honest, I've given it a try a couple of times, but it's so expensive I'm having a hard time even judging it fairly. The first time I spent just $5, the second $10, and the third $20, but they all went by so fast that I'm worried even if I find it great, it's way too expensive, and having a number tick up/down makes me nervous or something. And I'm the type of person who has ChatGPT Pro, so I'm not exactly stingy about paying for things I find useful, but there is a limit somewhere, and I guess for me Amp is it.

It sounds like you're being temporarily stingy due to having ChatGPT Pro. Might be good to get rid of it if you think the grass might be greener outside of Codex.

No, ChatGPT Pro was an example of how I'm not stingy about paying for things I find useful. I'm also paying for Gemini, Claude, and other types of software to do my job, not even just coding. But even so, I still find Amp too expensive to use for anything useful.

I run every single coding prompt through Codex, Claude Code, Qwen, and Gemini, compare which one gives me the best result, and go ahead with that one. Maybe I go with Codex 60% of the time, Claude 20%, and Qwen/Gemini the remaining 20%; it's not often that any of them gets everything right. I've tried integrating Amp into my workflow too, but as mentioned, it's too expensive. I do think the grass is currently greenest with Codex, still.

It depends on your perspective. From a startup's perspective, this makes you a less interesting potential customer, to which one might attach the term "stingy." From the perspective of willingness to invest in your own productivity, it doesn't sound stingy, though.
It was really good in its early stages (this past summer), but that was before Claude Code and Codex took off big time. I would say the biggest downside of Amp is that it's expensive. Like running-Opus-the-whole-time expensive. But they don't have their own model, so what are you really paying for? Prompts? Not so sure. Amp is for people who are not smart enough to roll their own agents, and in that case, they shouldn't be using an agentic workflow.

I'd love to know the internals here. Is equity being split? Seems like a legal minefield to split a company like this.

Amp was built by Sourcegraph, so I assume all investors and employees of Sourcegraph now get equity in Amp?

No insight here either, but I would guess it is a spin-off, mostly because it is a US company, and in the US spin-offs are generally tax-free to both the company and shareholders.

A spin-off is where the parent company creates a subsidiary and distributes the shares in the subsidiary to its existing shareholders, so the shareholders end up holding shares of two companies. Share allocation is done pro rata, so each shareholder still has the same exposure they had before.
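A minimal sketch of that pro-rata mechanics, with invented stakes and an assumed share count for the new entity:

    # Invented numbers: each Sourcegraph holder receives Amp shares in
    # proportion to their existing stake, so exposure is unchanged.
    parent_stakes = {"founders": 0.30, "investors": 0.45, "employees": 0.25}
    amp_shares_issued = 10_000_000  # assumed share count for the new entity

    for holder, pct in parent_stakes.items():
        allocated = int(pct * amp_shares_issued)
        print(f"{holder}: {pct:.0%} of parent -> {allocated:,} Amp shares")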

I have no insight, but I would assume it's a 1:1 sort of split, that is, everyone who previously had one share of Sourcegraph now has one share of Sourcegraph and one of Amp? That seems like the least legally fraught way to do it.
Yes, that's easiest; or just be a 100%-owned subsidiary (that's what, say, Waymo is).

The good thing is that afterwards the cap table of the subsidiary or the spun-off company can evolve on its own (e.g., Waymo / Amp can raise money independently of the parent company).

Why is that preferable to just pivoting?
Presumably they wanted to continue working on Sourcegraph? Or maybe they want to have something left if Amp flops?

You have to pay by the token, without being able to use your subscription, right? If so, is it enough better than the coding agents the others ship with to make up for that loss? This is a crowded space.

So the point of this is to clean up the cap table, right? Current investors aren't getting a stake in the new company?
Is Amp the thing that supplanted Cody (which was either developed or acquired by Sourcegraph, can't remember)?

Yep, it was all developed by Sourcegraph in-house.
Much easier to raise money as a frontier lab.

Who is Amp's competition?

Because if I understand them correctly, aren't they a wrapper around all the major LLMs (focused specifically on developer use cases)?

What model does Amp use?