As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
- To what granularity can I limit the context?
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and reserved amount of token space for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (you can also customize the model's reasoning ability, think tags to parse, tool use format (gemini/openai/anthropic), FIM support, etc).
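The budgeting described here can be sketched roughly as follows. This is a hypothetical illustration, not Void's actual code: reserve output space, then pack @-mentioned files into what remains of the context window.

```python
# Hypothetical sketch of context budgeting: numbers, the helper name,
# and the 4-chars-per-token heuristic are all illustrative.
def pack_context(files, context_window=128_000, reserved_output=8_000,
                 count_tokens=lambda s: len(s) // 4):  # rough heuristic
    budget = context_window - reserved_output
    included, used = [], 0
    for name, text in files:
        cost = count_tokens(text)
        if used + cost > budget:
            break  # stop before overflowing the window
        included.append(name)
        used += cost
    return included, used
```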
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
[1] https://forum.cursor.com/t/how-does-docs-crawling-work/264/3
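A toy version of that dedup step, assuming the pages are already converted to markdown: shingle each line into word n-grams and drop lines whose n-grams recur across most pages (nav, headers, footers). The thresholds here are made up for illustration.

```python
# Hypothetical n-gram boilerplate stripper -- not Cursor's implementation.
from collections import Counter

def shingles(line, n=5):
    words = line.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def strip_boilerplate(pages, n=5, min_frac=0.6):
    # Document frequency of each shingle: on how many pages it appears
    df = Counter()
    for page in pages:
        seen = set()
        for line in page.splitlines():
            if line.strip():
                seen |= shingles(line, n)
        df.update(seen)

    cutoff = min_frac * len(pages)
    cleaned = []
    for page in pages:
        kept = []
        for line in page.splitlines():
            if not line.strip():
                continue
            sh = shingles(line, n)
            boiler = sum(1 for s in sh if df[s] >= cutoff)
            if boiler / len(sh) < 0.5:  # mostly page-specific: keep it
                kept.append(line)
        cleaned.append("\n".join(kept))
    return cleaned
```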
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
I can just disable the `ask` tool, for example, to have it easily go fully autonomous on certain tasks.
Have a look at https://github.com/aperoc/toolkami to see if it might be useful for you.
That's all in the website, not the README, but yes a bulleted list or identical info from the site would work well.
If nearly every time I use it to accomplish something it gets it 40-85% correct and I have to go in to fix the other 60-15%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. CTRL-L a block of code
2. Ask a question or give a task
3. Read what it says, then apply the change myself with CTRL-C, tweaking the one or two little things it inevitably misunderstood about my system and its requirements
Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
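For anyone who hasn't wired those loops up, a minimal `.aider.conf.yml` might look like this (option names are from Aider's configuration docs; the lint and test commands are placeholders for your own project's):

```yaml
# Illustrative Aider config -- adjust commands to your toolchain
auto-lint: true
lint-cmd: "ruff check --fix"
auto-test: true
test-cmd: "pytest -q"
auto-commits: true   # git checkpoint after each successful edit
```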
Use Gemini 2.5 Pro, Sonnet 3.5/3.7, or GPT-4.1.
Be as specific and detailed in your prompts as you can. Include the right context.
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I've tested Aider with Gemini 2.5 with prompts as basic as 'write a TS file with Puppeteer to load this URL, click on the button identified by x, fill in input y, loop over these URLs' and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost, and would probably make it reasonable for most people to code with agents in parallel.
There's no such thing as a "right prompt". It's all snake oil. https://dmitriid.com/prompting-llms-is-not-engineering
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) send your messages through their backend whenever you send a message, and there is no open source full IDE alternative (besides Void). Your connection to providers is direct with Void, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
And Zed: https://zed.dev
Yesterday on the front page of HN:
Their tools are wildly popular in many spaces. It isn't for everyone though. It's totally believable that no one in your circle uses their tools, but it isn't niche.
Their user base is completely different. And we're both in a bubble, I reckon. IntelliJ people also only know a few VSCode users!
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about the right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based-code systems today, but I think describing algorithms to a computer in the future might involve more diagrams, and terminal will not be ideal. That's our reasoning against building a tool in the terminal, but it clearly works well today, and is the simplest way for the labs to train/run terminal tool-use agents.
There's a reason why fully creating systems from them died 20 years ago - and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
Go image search "complex unreal blueprint".
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
> hard to imagine that we won't have digests and overviews
100% agreed here.
Disclosure: I'm the author of the project below.
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
What makes you say that? From what I’m observing it doesn’t seem to be talked much about at all.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
I've been reaching for Claude Code first for the last couple of weeks. They offered me a $40 credit maybe 6 weeks ago after I tried it and didn't really use it, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is you can "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me, which Claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit; it was saying "110K out of 200K" but then something else said "120K" and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
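For what it's worth, the "diff edit" idea reduces to something like this: the model emits a search block and a replace block, and the tool patches only that span. A minimal sketch of the general pattern, not Roo's actual format:

```python
# Minimal sketch of a SEARCH/REPLACE-style diff edit (assumed format).
def apply_diff_edit(source: str, search: str, replace: str) -> str:
    if search not in source:
        raise ValueError("search block not found; edit rejected")
    return source.replace(search, replace, 1)
```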
Are you sure? I have some expertise with my IDE and a wide range of extensions that solve problems for me. I've learnt shortcuts, troubleshooting, where and whom to ask for help, but now you're telling me that I'm better off leaving all that behind, and it's better for me? ;o
We used to know better
edit: ahh just saw that it is also a fork of VS Code, so it is indeed OSS Cursor
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money, in part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around, those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
This is peak AI; it only goes downhill from here in terms of quality, and the AI-first flows will be replaceable. Those offshored teams that we have suffered with for years will primarily be replaced (Google-first programmers). And developers will continue, working around the edges. The difference will be that startups won't be able to use technology hoarding to stifle competition, unless they make themselves immune from the AI vacuums.
I can appreciate the comments further up about how AI can help unravel the mysteries of a legacy codebase. Being able to ask questions about code in quick succession will mean that we will feel more confident. AI is lossy, hard to direct, yet always very confident. We have 10k-line functions in our legacy code that nest and nest. How confident are you letting AI refactor this code without oversight and ship to a customer? Thus far I'm not; maybe I don't know the best models and tools to use and how to apply them, but even if one of those logic branches gets hallucinated I'm in for a very bumpy ride. Watching non-technical people at my org get frustrated and stuck with it in a loop is a lot more common than the successes, which seems to be the opposite of the experienced engineers who use it as a tool, not a savior. But every situation is different.
You think your company can be a differentiator in the market because it has access to the same AI tools as every other company? Well, we'll see about that. I believe there has to be more.
I'm an experienced engineer of 30+ years. Technology comes and goes; AI is just another tool in the chest. I use it primarily because I don't have to deal with ads. I also use it to be an electrical engineer, designing circuits in areas I am not familiar with. I can see very simply the novice side of the coin: it feels like you have superpowers because you just don't know enough about the subject to be aware of anything else. It's sped up the learning cycle considerably because of the conversational nature. After a few years of projects, I know how to ask better questions to get better results.
That's like saying "I'll just burn down my house because I can replace it. Anyone who repairs their house will be left behind."
It's true, you can replace it, so I can't put my finger on what has been stopping people from burning their houses down instead of, say, spring cleaning
The joys of dependency hell combined with rapid deprecation of the underlying tooling.
Not even, devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ship are going to be the ones that can surf the flood and come out of the valley. We don't know, this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from the StarCraft: Brood War cinematic (I'm a learned person):
"... You must go into this with both eyes open"
Not sure if you drew the right conclusion from that one.
I'm not using AI and I'm still an incredibly high velocity engineer because I own my codebase. I've written each line ten times over, like a player who has become highly skilled at one particular game.
> that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again.
sounds like me after my first hangover...

Context switching is not the bottleneck. I actually like to get away from the IDE/keyboard to think through problems in a different environment (so a voice version of ChatGPT that I can talk to via my smartwatch while walking, and see some answers either on my smart glasses or via sound, would be ideal... I don't really need more screen (monitor) time).
I do this all the time, and I am completely fine with it. Sure, I need to pay more attention, but I think it does more good than harm.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
With humans there is a point where even the most patient teacher has to move on to other things. Learning works best when one is curious about something, and curiosity is most often specific. (When it's generic, one can just read the manual.)
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is a one-file-at-a-time sort of interaction where I see every change it makes. I should probably try out one of these agent-based models for a throwaway project just to get more anecdotes to base my opinion on.
At its most basic, agentic mode is necessary for building the proper context. While I might know the solution at a high level, I need the agent to explore the codebase to find the things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
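That verify-and-fix cycle can be sketched as a loop: run a check command, hand any failure output back to the model, stop when the checks pass. `ask_model` here is a stand-in for whatever LLM call the tool makes; nothing below is any particular product's implementation.

```python
# Hedged sketch of a verify-and-fix agent loop.
import subprocess

def fix_loop(files, check_cmd, ask_model, max_rounds=3):
    for _ in range(max_rounds):
        result = subprocess.run(check_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # checks pass: the "100%" state
        # Feed the failure output back so the model can correct
        # hallucinated method names, wrong column names, etc.
        ask_model(files, result.stdout + result.stderr)
    return False
```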
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
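A hypothetical SPEC.md along those lines, with the agent checking items off as it completes them:

```markdown
## Refactor session handling   <!-- hypothetical example spec -->
- [x] Extract token validation into its own helper
- [ ] Add an expiry check plus a unit test
- [ ] Update all call sites and rerun the test suite
```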
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to setup an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use and it presents the results of the agent in an easy to review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
Another YC backed open source VS Code is Continue: https://www.continue.dev/
(Caveat: I am a YC founder building in this space: https://www.engines.dev/)
For real. I think it's because code editors seem to be in that perfect intersection of:
- A tool for programmers. Programmers like building for programmers.
- A tool for productivity. Companies will pay for productivity.
- A tool that's clearly AI-able. VC's will invest in AI tools.
- A tool with plenty of open source lift. The clear, common (and extreme?) example of this being forking VSCode.
Add to that the recent purchase of VSCode-fork [1] Windsurf for $3 billion [2] and I suspect we will see many more of these.
[1]: https://windsurf.com/blog/why-we-built-windsurf#:~:text=This...
[2]: https://community.openai.com/t/openai-is-acquiring-windsurf-...
sudo chmod 755 path/to/Cursor-0.48.6.x86_64.AppImage
path/to/Cursor-0.48.6.x86_64.AppImage
and then I get greeted with an error message: The setuid sandbox is not running as root. Common causes:
* An unprivileged process using ptrace on it, like a debugger.
* A parent process set prctl(PR_SET_NO_NEW_PRIVS, ...)
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
I have to go Googling, then realize I have to run it with bin/appimage/Cursor-0.48.6-x86_64.AppImage --no-sandbox
Often I'm too lazy to do all of this and just use the Claude / ChatGPT web version and paste code back and forth to VS Code. The effort required to start Cursor is the reason I don't use it much. VS Code is an actual, bona fide installed app with an icon that sits on my screen; I just click it to launch it. So much easier. Even if I have to write code manually.
I'm working on an agnostic unified framework to make contexts transferrable between these tools.
This will permit zero-friction, zero-interruption transitions without any code modification.
Should have something to show by next week.
Hit me up if you're interested in working on this problem - I'm tired of cowboying my projects.
I'm still in vim with copilot and know I'm missing out. Anyway I'm also adding to the problem as I've got my own too (don't we all?!), at https://codeplusequalsai.com. Coded in vim 'cause I can't decide on an editor!
I know some folks like using the terminal, but if you like Claude Code you should consider plugging your API key into Void and using Claude there! Same exact model and provider and price, but with a UI around the tool calls, checkpoints, etc.
That is until I started using Claude Code.
It’s not about the terminal. It’s just a better UX in general.
https://techcrunch.com/2024/09/30/y-combinator-is-being-crit...
Most common these days is to make the paid product be a hosted version of the open source software, but there are other ways too. Experienced founders emphasize to new startups how important it is to get this right and to keep your open source community happy.
No one I've heard is treating open source like a bait and switch; quite the opposite. What is sought is a win-win where each component (open source and paid) does better because of the other.
I think there’s a general misconception out there that open sourcing will cannibalize your hosted product business if you make it too easy to run. But in practice, there’s not a lot of overlap between people who want to self-host and people who want cloud. Most people who want cloud still want it even if they can self-host with a single command.
In a project where I already have a lot of linting brought into the editor, I want to be able to reuse that linting in a headless mode: start something at the CLI, then hop into the IDE when it says it's done or needs help. I'd be able to see the conversation up to that point and the agent would be able to see my linting errors before I start using it in the IDE. For a large, existing codebase that will require a lot of guardrails for an agent to be successful, it's disheartening to imagine splitting customization efforts between separate CLI and IDE tools.
For me so far, cursor's still the state of the art. But it's hard to go all-in on it if I'll also have to go all-in on a CLI system in parallel. Do any of the tools that are coming out have the kind of dual-mode operation I'm interested in? There's so many it's hard to even evaluate them all.
Does anyone think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? For instance, would an AI art tool with sculpting and drawing benefit from being open source?
I've talked with VCs that love open source developer tools, but they seem to hate on the idea of "open creative tools" for designers, illustrators, filmmakers, and other creatives. They say these folks don't benefit from open source. I don't quite get it, because Blender and Krita have millions of users. (ComfyUI is already kind of in that space, it's just not very user-friendly.)
Why do investors seem to want non-developer things to be closed source? Are they right?
But as you point out there are great solutions so it’s clearly not a dead end path.
My understanding is that these are not custom models but a combination of prompting and steering. That makes Cursor's performance relative to others pretty surprising to me. Are they just making more requests? I wonder what the secret sauce is.
Issue has been open for over a year.
Is this feature on the roadmap?
- The logo looks like it was inspired directly from the Cursor logo and modified slightly. I would suggest changing it.
- It might be wise to brand yourself as your own thing, not just an "open source Cursor". I tend to have the expectation that "open source [X]" projects are worse than "[X]". Probably unfair, I know.
Believe it or not, the logo similarity was actually unintentional, though I imagine there was subconscious bias at play (we created ours trying to illustrate "a slice of the Void").
But that assumes that you're already familiar with the non-open-source software referenced. I've never used Cursor so I have no idea what it can or can't do. I'm pretty sure I would never have discovered Inkscape if it had consistently been described as an “open-source Illustrator” as I've simply never used Adobe software.
It's been a lot harder to build an IDE than an extension, but we think having full control over the IDE (whether that's VSCode or something else we build in the future) will be important in the long run, especially when the next iteration of tool-use LLMs comes out (having native control over Git, the UI/UX around switching between iterations, etc).
Are you sure about this one? I'm sure I have used an extension whose whole purpose was to automatically open or close the sidebar under certain conditions.
But since there seems to be a need for AI-powered forks of VS Code, it could make sense for them all to build off the same fork, rather than making their own.
Hint: they dropped XUL because every update broke extensions
Did you mean to say a debugger? That one has an open alternative (NetCoreDbg) alongside a C# extension fork which uses it (it's also what VS Codium would install). It's also what you'd use via DAP with Neovim, Emacs, etc.
Then I opened an existing file and asked it to modify a function to return a fixed value and it did the same.
I'm an absolute newb in this space so if I'm doing something stupid I'd appreciate it if you helped me correct it because I already had the C/C++ extension complain that it can only be used in "proper vscode" (I imported my settings from vscode using the wizard) and when this didn't work either it didn't spark joy as Marie Kondo would say.
Please don't get me wrong, I gave this a try because I like the idea of having a proper local open source IDE where I can run my own models (even if it's slower) and have control over my data. I'm genuinely interested in making this work.
Thanks!
Small OSS models are going to get better at this when there's more of a focus on tool-use, which we're expecting in the next iteration of models.
It's compatible but has better integration and modularity, and doing so might insulate you a bit from your rather large competitor controlling your destiny.
Or is the exit to be bought by Microsoft? By OpenAI? And thus to more closely integrate?
If you're open-source but derivative, can they not simply steal your ideas? Or will your value depend on having a lasting hold on your customers?
I'm really happy there are full-fledged IDE alternatives, but I find the hub-and-spoke model where VSCode/MS is the only decider of integration patterns is a real problem. LSP has been a race to the bottom, feature-wise, though it really simplified IDE support for small languages.
Show HN: Void, an open-source Cursor/GitHub Copilot alternative - https://news.ycombinator.com/item?id=41563958 - Sept 2024 (154 comments)
For the record, I really like Void. It's great at utilising local models, which no one else does. Although I'd love to know which are the best Ollama local coding models. I've failed with a few, so for the moment I'm sticking to Sonnet 3.7 and GPT-4.1, with o3 as the 'big daddy'. :)
I'm also a fan because it's open source, which is really needed in this space I feel. One question for the devs, what do you think about this? https://blog.kilocode.ai/p/vs-code-forks-are-facing-a-grim
When I am using LLMs, I know exactly what the code should be and just am using it as a way to produce it faster (my Cursor rules are extremely extensive and focused on my personal architecture and code style, and I share them across all my personal projects), rather than producing a whole feature. When I try and use just the agent in Cursor, it always needs significant modifications and reorganization to meet my standards, even with the extensive rules I have set up.
Cursor appeals to me because those QOL features don't take away the actual code writing part, but instead augment it and get rid of some of the tedium.
Continue.dev also received investment from YC. Remember PearAI? Very charismatic founders that just forked Continue.dev and got a YC investment [1].
https://techcrunch.com/2024/12/20/after-causing-outrage-on-t...
I’ll be sticking with VSCode until:
- Notebooks are first class objects. I develop Python packages but notebooks are essential for scratch work in data centric workflows
- I can explore at least 2D data structures interactively (including parquet). The Data Wrangler in VSCode is great for this
New Editors:
- Firebase Studio
- Zed
- OpenHands (OSS Devin Clone)
VS Code Forks:
- Cursor
- Windsurf Editor
- Void
VS Code Extensions:
- Gemini Code Assist
- Continue.dev
- GitHub Copilot Agent Mode
- Cline
- RooCode
- Kilo Code (RooCode + Cline Fork)
- Windsurf Plugin
- Kodu.ai Claude Coder (not claude code!)
Terminal Agents:
- Aider
- Claude Code
- OpenAI codex
Issue Fixing Agents:
- SWE-agent
Also missing a class of non-IDE desktop apps like 16x Prompt and Repo Prompt.
Though, since I specifically mentioned agentic, I wanted to exclude non-agentic tools like prompt builders and context managers that you linked. :)
Reason being: my idea of agents is that they generalize well enough that workflow-based apps aren't needed anymore.
During discovery and planning phase, the agents should traverse the code base with a retrieval strategy packaged as a tool (embedded search, code-graphs, ...) and then add that new knowledge to the plan before executing the code changes.
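Packaging a retrieval strategy as a tool might look like this sketch: a JSON-style tool schema the agent can call during planning, backed here by a naive line scan (a real backend might be embedding search or a code graph; the schema shape and names are assumptions, not any specific product's API).

```python
# Hedged sketch: a repo retrieval strategy exposed as an agent tool.
import pathlib

SEARCH_TOOL_SCHEMA = {
    "name": "search_code",
    "description": "Find lines in the repository matching a query",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def search_code(query: str, root: str = ".", glob: str = "*.py"):
    """Naive line-scan backend; returns path:line:text hits."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob(glob)):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if query in line:
                hits.append(f"{path}:{lineno}:{line.strip()}")
    return hits
```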
For example, Cursor a year ago was not agentic at all. GitHub Copilot only recently added agentic features.
I also think the end game for an agentic tool would not be an IDE, because IDEs were designed for human workflows, not agents.
I wrote about this topic a while ago and made a classification that's probably a bit outdated, but still relevant: https://prompt.16x.engineer/blog/ai-coding-l1-l5
I want to mention my recent frustration with Cursor and why I would love an OSS alternative that gives me control; I feel Cursor has dumped agentic capabilities everywhere, regardless of whether the user wants them or not. When I use the Ask function, as opposed to Agent, it seems to still run in an agentic loop. It takes longer to have basic conversations about high-level ideas and really kills my experience.
I hope void doesn’t become an agent dumping ground where this behavior is thrust upon the user as much as possible
Not to say I dislike agent mode, but I like to choose when I use it.
AppImage, .deb, .tar.gz
I have indeed felt at times slighted for developing open-source on Windows and been told that I'm “weird” for not embracing Linux wholeheartedly. Experiences in the open-source world suggest that I'm not the only one. Perhaps that answers your question.
In that case, a shitty, closed system is good actually, because it's another thing your users will need to "give up" if they move to an alternative. By contrast, an open IDE like Void will hopefully make headway on an open interface between IDEs and LLM agents, in such a way that it can be adapted by Neovim people like me, or anyone else for that matter.
I'd say Firebase Studio and OpenHands
[1] CodeGraph https://arxiv.org/abs/2408.13863
However, I uninstalled it due to the sounds it made! Constant clicking for some (unannounced) background work is a bizarre choice for any serious development environment.
Sadly when I try to add a model, I get the error:
> Error while encrypting the text provided to safeStorage.encryptString. Encryption is not available
vscode and Cursor work perfectly fine this way:
> nix-shell -p appimage-run
> [nix-shell:~/Downloads]$ appimage-run Cursor-0.49.6-x86_64.AppImage
It'd be a security nightmare if it was more popular, but fortunately the community hovers around being big enough for serious work to be done but small enough that it's not worth writing malware for.
One advantage of Emacs is that it's both easy and common to read the code of the plugins you are using. I can't tell you the last time I looked at the source code of a plugin for VS Code or any other editor. The last time I looked at the code for a plugin in Emacs was today.
It's like saying the AUR is a security nightmare. You're just expected to be an adult and vet what you're using.
Emacs runs all elisp code as if it's part of Emacs. Think about what Emacs is capable of, and compare that to what a browser allows its extensions to do. No widely used software works like that because it's way too easy to abuse. Emacs gets away with it because it's not widely used.
I don't know the first thing about VSCode but I'm willing to bet there are strict limits to what its plugins are allowed to do.
Especially since you might not be familiar with the new one.
Personally, I'm trying out things in VS Code, just to see how they work. But when I need to work, I do it in Emacs, since I know it better.
Also, with VS Code, just while trying it out, simple things like cut & paste would stop working (choosing them from the menu would work, but trying to cut & paste with the key shortcuts or the mouse wouldn't). You'd have to refresh the whole view or restart it for cut & paste to become available again.
Over the years I have gotten better with Vim, added phpactor and other tooling, but frankly I don't have time to futz, and it's not so polished. With VSCode I can just work. I don't love everything about it, but it works well enough with Copilot that I forgot the benefits of Vim.
The LSP experience with VSCode may be superior, but if I truly needed that, I would get an IDE and have proper IntelliSense. The LSP in Vim and Emacs is more than enough for my basic requirements, which are auto-imports, basic linting, and autocomplete to avoid misspellings. VSCode lacks Vim's agility and Emacs's powerful text tooling, and does far worse on integration.
I'd venture to say there's more of these than there are UI Editors tbh.
Would you say this is true?
From what I've seen, most senior/staff-level engineers are working for big corps which have limited contracts with providers like Github Copilot, which until recently only gave access to autocomplete.
I prefer the web-based interface. It feels like my choice to reference a tool. It's easy to run multiple chats at once, running prompts against multiple models.
When I was reading your comment, I thought there is space for an out-of-flow coding assistant, i.e. rather than deploying an entire IDE with extensions, the assistant could be just a floating window (I guess ChatGPT does that), able to dive in and out, or just suggest as you type along.
Beyond autocomplete, I've found the LLM to be useful in some cases: sometimes you'll want to make edits which are quite automatic, but can't quite be covered by refactoring tools, and are a bit more involved than search/replace. LLMs can handle that quite well.
For unfamiliar domains, it's also useful to be able to ask an LLM to troubleshoot / identify problems in code.
Also litellm
If you're all in on Claude then yeah, just go direct. Going through a proxy is silly.
One thing I'd really like to have is a manual ordering for folders, or even files, in the explorer view.
Anyway, I didn't know what your service was trying to do, so I clicked on the homepage and then clicked Sources to see what else was there. It cited <https://extraakt.com/extraakts?source=reddit#:~:text=Open-so...>, but the hyperlink took me back to the HN thing, which defeats the purpose of having a source picker.
Y Combinator-backed too, I guess. Vibe coding is here to stay.
Oh wow, I didn't even realize. Substantially less appealing of a project to me now.
Flashbacks to that pretty good Voyager episode.
Tip: Write a SPEC.md file first. Ask the LLM to ask _you_ about what might be missing from the spec, and once you "agree" ask it to update the SPEC.md or create a TODO.md.
Then iterate on the code. Ask it to implement one of the features, hand-tune and ask it to check things against the SPEC.md and update both files as needed.
Works great for me, especially when refactoring--the SPEC.md grounds Claude enough for it not to go off-road. Gemini is a bit more random...
One thing I particularly don't like about LLMs is that their Python code feels like Java cosplay (full of pointless classes and methods), so my SPEC.md usually has a section on code style right at the start :)
We live in the age of dev tools
(Btw if your comment already has replies, it is good to add "Edit:" or something if you're changing it in a way that will alter the context of replies.)
---
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly."
>VSCode Fork.
Already did. Can't wait to hear their super special very important reason why this can't exist as an extension.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
If you can't invest in yourself without making the same size investment in your competitor, you probably have no path to actually win out over that competitor.
Additionally, Zed is written in Rust and has robust hardware-accelerated rendering. This has a tangible feel that other editors do not. It feels just so smooth, unlike clunky heavyweight JetBrains products. And it feels solid and sturdy, unlike VS Code buffers, which feel like creaky webviews.
But it's a different take, Brokk is built to let humans supervise AI more effectively rather than optimizing for humans reading and writing code by hand. So it's not a VS Code fork, it's not really an IDE in the traditional sense at all.
Intro video with demo here: https://www.youtube.com/watch?v=Pw92v-uN5xI
What I want to be able to do is:
1. Create a branch called TaskForLLM_123
2. Add a file with text instructions called Instructions_TaskForLLM_123.txt
3. Have a GitHub Action read the branch, perform the task, and then submit a PR.
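A rough sketch of how that could look as a workflow file; the branch-name trigger, the agent CLI invocation, and the step names are all hypothetical assumptions, not a real marketplace action:

```yaml
# Hypothetical GitHub Actions sketch: run an LLM agent against an
# instructions file on a TaskForLLM_* branch, then open a PR.
name: llm-task
on:
  push:
    branches: ['TaskForLLM_*']
jobs:
  run-task:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - name: Run agent against the instructions file
        run: |
          # Placeholder: invoke your CLI agent of choice (Aider, Claude Code, etc.)
          your-agent-cli --instructions "Instructions_${GITHUB_REF_NAME}.txt"
      - name: Open a PR with the result
        run: gh pr create --fill
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The terminal agents listed upthread could slot into the placeholder step, since most of them can already run non-interactively against a prompt file.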
We're trying for thoughtful, respectful discussion of people's work on this site. Snarky, nasty oneliners destroy that.
We detached this subthread from https://news.ycombinator.com/item?id=43928512.
But I do stand by the point. We are seeing umpteen of these things launched every week now, all with the exact same goal in mind; monetizing a thin layer of abstraction between code repos and model providers, to lock enterprises in and sell out as quickly as possible. None of them are proposing anything new or unique above the half dozen open source extensions out there that have gained real community support and are pushing the capabilities forward. Anyone who actually uses agentic coding tools professionally knows that Windsurf is a joke compared to Cline, and that there is no good reason whatsoever for them to have forked. This just poisons the well further for folks who haven't used one yet.
I would still push back on this:
> all with the exact same goal in mind
It seems to me that you're assuming too much about other people's intentions, jumping beyond what you can possibly know. When people do that to reduce things to a cynical endstate before they've gotten off the ground, that's not good for discussion or community. This is part of the reason why we have guidelines like these in https://news.ycombinator.com/newsguidelines.html:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
Next big sale is going to be something like "Chrome Fork + AI + integrated inter-app MCP". Brave is eh, Arc is being left to die on its own, and Firefox is... doing nothing.