Show HN: Aide, an open-source AI native IDE
Hey HN, we are Sandeep and Naresh, the creators of Aide. We are happy to open-source Aide, a VSCode fork with LLMs integrated, and invite the community to try it out.

To walk through the features, here is what we engineered:

- A proactive agent

An agent that iterates on linter errors (powered by the Language Server) and pulls in relevant context by doing go-to-definition, go-to-references, etc. It proposes fixes or asks for more files which might be missing from the context (see the sketch after this list).

- Developer control

We encourage you to make edits on top of your coding sessions. To enable this, we built a VSCode-native rollback feature which gets rid of all the edits made by the agent in a single click if there were mistakes, without messing up your changes from before.

- A combined chat+edit flow which you can use to brainstorm and edit

You can brainstorm a problem in chat by @'ing the files and then jump into edits (which can happen across multiple files), or go from a smaller set of edits and discuss their side effects.

- Inline editing widget

We took inspiration from the macOS Spotlight widget and created a similar one inside the editor: you can highlight part of the code, hit Cmd+K, and just give your instructions freely.

- Local running AI brain

We ship a binary called sidecar which takes care of talking to the LLM providers, preparing the prompts, and using the editor on behalf of the LLM. All of this is local-first, and you get full control over the prompts/responses without anything leaking to our end (unless you choose to use your subscription and share the data with us).
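
To make the proactive-agent loop concrete, here is a minimal sketch using the public VSCode extension API. This is only an illustration of the pattern, not Aide's actual implementation (which lives in the sidecar binary); `proposeFix` is a hypothetical stand-in for the LLM round-trip.

    import * as vscode from "vscode";

    // Hypothetical LLM call; in Aide the equivalent round-trip goes
    // through the sidecar binary rather than an in-process function.
    declare function proposeFix(
      diagnostic: vscode.Diagnostic,
      context: string[]
    ): Promise<vscode.WorkspaceEdit>;

    // One iteration of the loop: read linter errors, pull in the
    // definitions they point at, and ask the model for an edit.
    async function iterateOnDiagnostics(doc: vscode.TextDocument): Promise<void> {
      for (const diagnostic of vscode.languages.getDiagnostics(doc.uri)) {
        // go-to-definition on the offending range to gather relevant context
        const definitions = await vscode.commands.executeCommand<vscode.Location[]>(
          "vscode.executeDefinitionProvider",
          doc.uri,
          diagnostic.range.start
        );
        const context: string[] = [];
        for (const def of definitions ?? []) {
          const defDoc = await vscode.workspace.openTextDocument(def.uri);
          context.push(defDoc.getText(def.range));
        }
        // Propose and apply a fix; a real agent re-runs the loop until
        // the diagnostics converge or it asks for missing files instead.
        await vscode.workspace.applyEdit(await proposeFix(diagnostic, context));
      }
    }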

We spent the last 15 months learning the internals of VSCode (it's a non-trivial codebase) and also powering up our AI game; the framework is at the top of swebench-lite with a 43% score. On top of this, since the whole AI side of the logic runs locally on your machine, you have complete control over the data, from the prompts to the responses, and you can also use your own API keys (for any LLM provider) and talk to them directly.

There's still a whole lot to build and we are at 1% of the journey. Right now the editor feels robust and does not break on any of the flows we aimed to solve for.

Let us know if there's anything else you would like to see us build. We also want to empower extensibility and work together with the community to build the next set of features and set a new milestone for AI-native editors.

In a world that prioritizes precise user control over the output text, how do you justify the value of relinquishing such control even to provide actions the user would have made already? It only takes a single bad edit to make the user lose all trust.
Links to the project, I'm guessing these :)

https://github.com/codestoryai/aide

https://aide.dev/

You missed this one: https://github.com/codestoryai/sidecar

- Sidecar (https://github.com/codestoryai/sidecar): the AI brains
- Aide (https://github.com/codestoryai/aide): the editor
Why is a fork required? I use the Cline plugin for VS Code and it seems to be able to do more things, like update code directly, create new files, etc.
After using Cursor (another AI focused fork) I'm 100% on the fork train. AI built natively into the IDE presents another layer of speed and isn't subject to the limitations of the extension system (which is awesome in its own right, not a knock on it).
I was on the fork train for a while, but Cursor keeps having weird issues with indexing, IntelliSense, and not being able to save files when format-on-save is enabled. I wound up going back to VSCode with Cline and use OpenRouter to save money via prompt caching. To my knowledge, Cursor doesn't have Claude Sonnet's computer use enabled yet, which is a total game changer, and Cline does. I'll check back in a few months, but instead of paying $20 a month for Cursor Pro I can put $20 in credits into OpenRouter and fully leverage the latest Claude model and features.
A fork was necessary for the UX we wanted to go for. I do agree that an extension can also satisfy your needs (and it clearly does in your case).

Having a deeper integration with the editor allows for some really nice paradigms:

- Rollbacks feel more native, in the sense that I do not lose my undo or redo stack.
- Cmd+K is more in line with what you would expect, with a floating widget for input instead of it being shown at the very top of your screen, which is the case with any extension for now.

Going further, the changes which Microsoft is making to enable Copilot editing features are only open to "copilot-chat" and no other extension (fair game for Microsoft, IMHO). Keeping these things in mind, we designed the architecture in a way that lets us go toward any interface (editor/extension). We did put energy into making this work deeply with the VSCode ecosystem of APIs, and we also added our own.

If the editor does not work to our benefit, we will take a call on moving to a different interface, and that's where an extension or cloud-based solution might also make sense.

I'm curious - what does the AI coding setup of the HN community look like, and how has your experience been so far?

I want to get some broader feedback before completely switching my workflow to Aide or Cursor.

I tried Cursor and found it annoying. I don’t really like talking to AI in IDE chat windows. For whatever reason, I really prefer a web browser. I also didn’t like the overall experience.

I’m still using Copilot in VS Code every day. I recently switched from OpenAI to Claude for the browser-based chat stuff and I really like it. The UI for coding assistance in Claude is excellent. Very well thought out.

Claude also has a nice feature called Projects where you can upload a bunch of stuff to build context, which is great. For instance, if you are doing an API integration, you can dump all the API docs into the project, and then every chat you have has that context available.

As with all the AI tools you have to be quite careful. I do find that errors slip into my code more easily when I am not writing it all myself. Reading (or worse, skimming) source code is just different than writing it. However, between type safety and unit testing, I find I get rid of the bugs pretty quickly and overall my productivity is multiples of what it was before.

This is me also, I don't like the UX/DX of Cursor and such just yet.

I can't tell if it is a UX thing or if it also doesn't suit my mental model.

I religiously use Copilot, and then paste stuff into Claude or ChatGPT (both pro) when needed.

I am on day 8 of Cursor's 14-day trial. If things continue to go well, I will be switching from Webstorm to Cursor for my Typescript projects.

The AI integrations are a huge productivity boost. There is a substantial difference in the quality of the AI suggestions between using Claude on the side, and having Claude be deeply integrated in the codebase.

I think I accepted about 60-70% of the suggestions Cursor provided.

Some highlights of Cursor:

- Wrote about 80% of a Vite plugin for consolidating articles in my blog (built on remix.run)

- Wrote a GitHub Action for automated deployments. Using Cursor to write automation scripts is a tangible productivity boost.

- Made meaningful alterations to a libpg_query fork that allowed it to be cross-compiled to iOS. I have very little experience with C compilation; it would have taken me substantially longer to figure this out.

There are some downsides to using Cursor though:

- Cursor can get too eager with its suggestions, and I'm not seeing any easy way to temporarily or conditionally turn them off. This was especially bad when I was writing blog posts.

- Cursor does really well with Bash and TypeScript, but does not work very well with Kotlin or Swift.

- This is a personal thing, but I'm still not used to some of the shortcuts that Cursor uses (Cursor is built on top of VSCode).

It's great that Cursor is working for you. I do think LLMs in general are far, far better at TypeScript and Python compared to other languages (a reflection of the training data).

What features of Cursor were the most compelling to you? I know their autocomplete experience is elite, but I'm wondering if there are other features which you use often!

Their autocomplete experience is decent, but I've gotten the most value out of Cursor's "chat + codebase context" (no idea what it's called): the feature where you feed it the entire codebase as part of the context and let Cursor suggest changes to any part of the codebase.
I would not be able to leave a JetBrains product for Kotlin, or Xcode for Swift.

Overall, it's so unfortunate that JetBrains doesn't have a Cursor-level AI plugin*, because JetBrains IDEs by themselves are so much more powerful than base-level VS Code that it actually erases some small portion of the gains from AI...

(* people will link many JetBrains AI plugins, but none are polished enough)

I probably would switch to Cursor for Swift projects too if it weren't for the fact that I will still need Xcode to compile the app.

I also agree that the non-AI parts of JetBrains products are much better than the non-AI parts of Cursor. JetBrains' refactoring tools are still unmatched.

That said, I think the AI part is compelling enough to warrant the switch. There are code rewrite tasks that JetBrains would struggle with, that LLMs can do fairly easily.

I'm using Copilot in VS Code every day; it works fine, but I mostly use it as a glorified one-line autocomplete. I almost never accept multi-line suggestions; I don't even look at them.

I tried to use AI more deeply, like using aider, but so far I just don't like it. I'm very sensitive to the tiny details of code and AI almost never gets them right. I guess the main reason I don't like AI is that I love to write code, simple as that. I don't want to automate that part of my work. I'm fine with trivial autocompletes, but I'm not fine with releasing control over the entire code.

What I would love is to automate interaction with other humans. I don't want to talk to colleagues, boss or other people. I want AI to do so and present me some short extracts.

GitHub Copilot in either VS Code or JetBrains IDEs. Having more or less the same experience across multiple tools is lovely and meets me where I am, instead of making me get a new tool.

The chat is okay, the autocomplete is also really pleasant for snippets and anything boilerplate heavy. The context awareness also helps. No advanced features like creating entirely new structures of files, though.

Of course, I’ll probably explore additional tools in the future, but for now LLMs are useful in my coding and also sometimes help me figure out what I should Google, because nowadays seemingly accurate search terms return trash.

Yeah, I am also getting the sense that people want tooling which meets them in their preferred environment.

Do you use any of the AI features that edit multiple files or do a lot more from a single instruction?

I can give my broader feedback:

- Codegen tools today are still not great: the lack of context and not using the LSP really burns down the quality of the generated code.
- Autocomplete is pretty nice; IMHO it helps finish your thoughts and code faster. It's like IntelliSense but better.

If you are working on a greenfield project, AI codegen really shines today and there are many tools in the market for that.

With Aide, we wanted it to work for engineers who spend >= 6 months on the same project and there are deep dependencies between classes/files and the project overall.

For quick answers, I have a renewed habit of going to o1-preview or Sonnet 3.5 and then fact-checking that with Google (haven't been to Stack Overflow in a long while now).

Do give AI coding a chance; I think you will be excited, to say the least, for the coming future, and you will develop habits on how to best use the tool.

> Codegen tools today are still not great: the lack of context and not using the LSP really burns down the quality of the generated code

Have you tried Aider?

They've done some discovery on this subject, and it currently uses tree-sitter.

Yup, I have.

We also use tree-sitter for understanding symbols (https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...), and the editor for talking to the Language Server.

What we found is that it's not just about having access to these tools, but about smartly performing `go-to-definition`, `go-to-reference`, etc. to grab the right context as and when required.

Every LLM call in between slows down the response time, so there is a fair bit of heuristics which we use today to sidestep that process.
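
As a hypothetical illustration of such a heuristic (a sketch in TypeScript; not our actual code, sidecar is written in Rust): instead of asking the LLM which symbols to expand, you can cheaply pre-filter to the identifiers the diagnostic itself complains about, and only run `go-to-definition` on those.

    // Pick context targets without an LLM round-trip by intersecting the
    // identifiers in the edited snippet with the diagnostic message.
    function contextCandidates(snippet: string, diagnosticMessage: string): string[] {
      const identifiers = snippet.match(/[A-Za-z_][A-Za-z0-9_]*/g) ?? [];
      // Only symbols the linter complains about are worth a go-to-definition
      // lookup; expanding everything else would just slow the loop down.
      return [...new Set(identifiers)].filter((id) => diagnosticMessage.includes(id));
    }

    // contextCandidates("addToCart(item, qty)", "cannot find name 'addToCart'")
    // -> ["addToCart"]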

I've been building and using these tools for well over a year now, so here's my journey on building and using them (ORDER BY datetime DESC).

(1) My view now (Nov 2024) is that code building is very conversational and iterative. You need to be able to tweak aspects of generated code by talking to the LLM. For example: "Can you use a params object instead of individual parameters in addToCart?". You also need the ability to sync generated code into your project, run it, and pipe any errors back into the model for refinement. So basically, a very incremental approach to writing it.
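
As a concrete (made-up) version of that `addToCart` tweak, this is the kind of small, reviewable change the conversational loop produces:

    // Before: individual parameters
    // function addToCart(productId: string, quantity: number, giftWrap: boolean) { ... }

    // After one conversational tweak: a params object, so call sites stay
    // readable and adding a field later doesn't break existing callers.
    interface AddToCartParams {
      productId: string;
      quantity: number;
      giftWrap?: boolean;
    }

    function addToCart({ productId, quantity, giftWrap = false }: AddToCartParams): void {
      // ... add the item to the cart
    }

    addToCart({ productId: "sku-42", quantity: 2 });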

For this I made a Chrome plugin, which allowed ChatGPT and Claude to edit source code (using Chrome's File System APIs). You can see a video here: https://www.youtube.com/watch?v=HHzqlI6LLp8

The code is here, but it's WIP and for very early users, so please don't give negative reviews yet: https://github.com/codespin-ai/codespin-chrome-extension

(2) Earlier this year, I thought I should build a VS Code plugin. It actually works quite well and allows you to edit code without leaving VSCode. It does stuff like adding dependencies, model selection, prompt histories, sharing git diffs, etc. Towards the end, I was convinced that edits need to be conversations, and hence I don't use it as much these days.

Link: https://github.com/codespin-ai/codespin-vscode-extension

(3) Prior to that (2023), I built the same thing as a CLI. The idea was that you'd include prompt files in your project and say something like `my-magical-tool gen prompt.md`. Code would be mostly written as markdown prompt files and almost never edited directly. In the end, I felt that some form of IDE integration was required, which led to the VSCode extension above.

Link: https://github.com/codespin-ai/codespin

All of these tools were primarily built with AI, so these are not hypotheticals. In addition, I've built half a dozen projects with it; some of it is code running in production, and some is hobby stuff like webjsx.org.

Basically, my takeaway is this: code editing is conversational. You need to design a project to be AI-friendly, which means smaller, modular code which can be easily understood by LLMs. Also, my way of using AI is not auto-complete based; I prefer generating from higher level inputs spanning multiple files.

Cursor works amazingly day to day. Copilot is not even comparable there. I like but rarely use aider and Plandex. I'd use them more if the interface didn't take me completely away from the IDE. Currently they're closer to "work on this while I'm taking a break".
Besides Claude.vim for "AI pair programming"? :) (tbh it works well only for small things)

I'm using Codeium and it's pretty decent at picking up the right context automatically, usually it autocompletes within ~100kLoC project quite flawlessly. (So far I haven't been using the chat much, just autocomplete.)

Any reason you don't use the chat often, or maybe it's not your use case?
I'm not the parent poster, but in my case I very rarely use it because it's not in the Neovim UI; it opens in a browser.

I've also had some issues where it doesn't seem to work reliably, but that could be related to my setup.

Yeah, I am learning that on Neovim you can own a buffer region and instead use that for AI back-and-forth... it's a very interesting space.
Cursor works well: it uses RAG on your code to give context, and it can directly reference the latest docs of whatever you're using.

not perfect but good to incrementally build things/find bugs

Using Cursor and it's been great!

Founders care about development experience a lot and it shows.

Yet to try others, but already satisfied so not required.

VSCode + Cline + OpenRouter using the Claude Sonnet 3.5 20241022 model; it's unreal the shit it can do.
I tried GH Copilot again recently with Claude. It was complete shit: dog slow and gave incomplete responses. Back to aider.
What was so bad about it? Genuinely curious, because they did make quite a bit of noise about the integration.
It's not nearly as helpful as Claude.ai - it seems to only want to do the minimum required. On top of that it will quite regularly ignore what you've asked, give you back the exact code you gave it, or even generate syntactically invalid code.

It's amazing how much difference the prompt must make, because using it is like going back to GPT-3.5, yet it's the same model.

It kept truncating files only about 600 lines long. It also seems to rewrite the entire file each time instead of just sending diffs like aider, making it super slow.
Oh, I see your point now. It's weird that they are not doing search-and-replace style editing. Although now that OpenAI also has Predicted Outputs, I think this will improve and it won't make mistakes while rewriting longer files.

The 600-line limit might be due to the output token limit of the LLM (not sure what they are using for the code rewriting).
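
For context, search-and-replace style editing (what aider does) has the model emit only the changed hunk, so output size scales with the edit rather than with the file. A minimal, hypothetical sketch of applying one such edit:

    // Apply a search/replace style edit: the model sends the old hunk and
    // the new hunk, so output tokens scale with the edit, not the file.
    function applySearchReplace(file: string, search: string, replace: string): string {
      const index = file.indexOf(search);
      if (index === -1) {
        throw new Error("search block not found; ask the model to retry");
      }
      return file.slice(0, index) + replace + file.slice(index + search.length);
    }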

Yeah I guess it's a response limit. It makes it a deal breaker though.
Looks interesting, is there a binary for macOS? I'd rather not build from scratch just to demo it.

For the people comparing to Cursor on features, I suspect the winner is going to be hard to articulate in an A:B comparison.

There's such a difference in feel that may be rooted in a philosophy, but boils down to how much the creator's vision aligns with my own.

Yes there is; we have the binary links on our website, but putting them here:

- arm64 build: https://github.com/codestoryai/binaries/releases/download/1....

- x86 build: https://github.com/codestoryai/binaries/releases/download/1....

> There's such a difference in feel that may be rooted in a philosophy, but boils down to how much the creator's vision aligns with my own.

Hard agree! I do think AI will find its way into our productivity toolkit in different ways. There are still so many ways we can go about doing this; A:B comparison aside, I do feel that giving people the power to mold the tool to work for themselves is the right way.

What is the privacy policy? Do you get to see my code and secrets? Does anybody else? I don't want you to. Nothing personal.
We don't. In your user settings, type in "disable all telemetry" and we won't see a thing.
Is there a comparison with Cursor I can read?
Genuine question: with VSCode going all-in on this direction, what's left for forks like this?
There are quite a few things! On VSCode's direction (I am making my own assumptions from the learnings I have):

- VSCode is working in the working-set direction of making multi-file edits work.
- Their idea of bringing in other extensions is via the provider API, which only Copilot has access to (so you can't use these features if you are not a Copilot subscriber).

So just taking these things at face value, I think there is lots to innovate on. No editor (biased view of mine) has really captured the idea of a pair programmer working alongside you. Even now, the most beloved feature is Copilot or Cursor Tab with the inline completions.

So we are ways away from a saturated market, or even feature-set-level saturation. Until we get there, I do think forks have a way to work towards their own version of an ideal AI-native editor, and I do think the editors of the future will look different given the upwards trend of AI abilities.

Betting on Microsoft messing up on the UX side, as always.
Any tips for using Aide with another text editor? I.e., I'm not going to work outside of my preferred text editor (Helix atm), so I'm curious about software which has a workflow around this, rather than trying to move me to a new text editor.
I also use Helix, and I've been getting some mileage out of aider, the CLI tool. Confusing name, as I don't believe aider is affiliated with Aide.
Do you know if Helix exposes the LSP APIs all the way to the editor? If it does, doing the integration should be trivial.
I'm fairly sure helix-gpt does this, though I haven't tried it.
Reading the code for helix-gpt over here (https://github.com/leona/helix-gpt/blob/2a047347968e63ca55e2...), it looks like the architecture for the extension is based around getting the diagnostic event and then passing that along to the chat.

The readme also talks about how LSP services are not exposed properly yet; my takeaway is that it's not complete yet... but surely doable.

Hmm... I do think it can be extended to work outside of just the VSCode environment.

If you look at the sidecar side of things: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... these are the main APIs we use

On the editor side: https://github.com/codestoryai/ide/blob/0eb311b7e4d7d63676ad...

These are the access points we need

The binary is fairly agnostic to the environment, so there is a possibility of making it work elsewhere. It's a bit non-trivial, but I would be happy to brainstorm and talk more about this.

FYI the YouTube embed on https://docs.codestory.ai/features is broken (both Firefox and Chrome, macOS).

https://support.mozilla.org/1/firefox/132.0.1/Darwin/en-US/x...

RIP, didn't expect that to happen. This is the embedded video, btw: https://www.youtube.com/watch?v=i8ZXMgnFSo8 (putting it here for posterity).
Is the "sidecar" open source too?
Confusingly, "Sidecar" is the name Apple uses for their feature of having an iPad serve as a second screen/touch interface for a Mac:

https://support.apple.com/en-us/102597

In my previous life at Facebook, I was on the infra team and worked on a cluster manager similar to Kubernetes; that's where I first heard the term sidecar. Something about the concept of a binary running alongside the pod, powering other related things, felt strong. In most part, this is the inspiration for naming the AI brain sidecar.
I believe I get the metaphor. Why is it confusing?
Overloading the term with a second technological meaning.
So what? Just because Apple calls something a Retina "display" doesn't mean that others cannot call stuff displays.
read it as sAIDEcar
What differentiates Aide from all the existing tools in this space like Cursor?
VSCode forks are not new; there are many companies out there building toward this vision. What sets us apart is partly our philosophy (deeply integrating into the editor), partly the tech stack (running everything, down to the dot, locally), and giving developers control over the LLM usage, plus other niceties (like rollbacks, which I think are paramount).
> What sets us apart is partly our philosophy (deeply integrating into the editor)

i'm so sorry but what do you think cursor's philosophy is

> plus other niceties (like rollbacks

yep in cursor too

i know you're new so just being gentle, but try to focus on some kind of killer feature (which i guess is sidecar?)

also https://x.com/codestoryai seems suspended fyi

Fair point! We are not taking a stab at Cursor in any way (it's a great product).

In terms of features, I do believe we are differentiated enough; the workflows we came up with are different, and we are all about giving the data (prompts + responses) back to the user.

The sidecar is not the killer feature; it's one of the many things which tie the whole experience together.

Good callout on the codestoryai account being suspended, we are at @aide_dev

you link to it still on your home page
great catch! Thank you for pointing this out
> I'm so sorry but what do you think cursor's philosophy is

I've never understood why people say sorry for cases like these?

Aide seems to have a good open source license (Cursor is proprietary)
Open source; giving full ownership of the data to the users; running completely locally. We want to make sure you can use Aide no matter the environment you are in.
Hmm, any time frame for when Linux (.deb, flatpak) binaries will be available?
You should be able to use this: https://github.com/codestoryai/binaries/releases/download/1.... Let me know if that does not work.

All our binaries are listed out here: https://github.com/codestoryai/binaries/releases/tag/1.94.2....

You could also use this script to set everything up: curl -sL https://raw.githubusercontent.com/codestoryai/binaries/main/... | bash (you can see the source of the script too)
This is a fork of VSCode, which means people can't use the extension store anymore, right?
They can, from the Open VSX store: https://open-vsx.org/

We also import your extensions automatically (safeguarding against the ones under Microsoft's license).

You can also just download it from the VSCode marketplace webpage and drag and drop it in.

Looks like the download links from your landing page are broken?

  Looks like our build pipeline is broken!
  Click here to let us know?
Wooops... on it (we got rate-limited by GitHub). In the meanwhile, check this out: https://github.com/codestoryai/binaries/releases/tag/1.94.2....
It's fixed!
I see Qwen 2.5 is not listed on your website; is plugging in different LLMs supported as well?
Honestly, we can; I just haven't prompted it enough. What do you want to use the model for?
Just general coding, mostly Python. It seems to me that Qwen 2.5, especially the upcoming bigger coder model, might be the best-performing coding model for 24GB VRAM setups.
Any short-term plans for Claude via AWS Bedrock? (That's for me personally a blocker for trying it on our main codebase.)
Thanks for your interest in Aide!

If I understood that correctly, it would mean supporting Claude via the AWS Bedrock endpoint; we will make that happen.

If the underlying LLM does not change, then adding more connectors is pretty easy; I will ping the thread with updates on this.

Yep! And AWS Bedrock also gives you plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency: the code isn't uploaded anywhere.)

Is it ~just about adding another file to https://github.com/codestoryai/sidecar/blob/main/llm_client/... ?

I could take a look too - another way for me to test Aide by working with it to implement this. :-)

(https://github.com/pasky/claude.vim/blob/main/plugin/claude_... is sample code with basic wrapper emulating Claude streaming API with AWS Bedrock backend.)

Yup! Feel free to add the client support; you are on the right track with the changes.

To test the whole flow out, here are a few things you will want to do:

- https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... (you need to create the LLMProperties object over here)
- Add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- After this, you should at the very least be able to test out Cmd+K (highlight and ask it to edit a section).
- In Aide, if you go to User Settings and search "aide self run", you can tick this and then run your local sidecar so you are hitting the right binary (kill the binary running on port 42424; that's the webserver binary that ships along with the editor).
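
For a rough idea of what the connector has to do, here is a hypothetical sketch of a Claude call via the Bedrock runtime API (in TypeScript with the AWS SDK, just to show the request shape; sidecar itself is Rust, the model id and region are examples, and a real connector would use the streaming variant):

    import {
      BedrockRuntimeClient,
      InvokeModelCommand,
    } from "@aws-sdk/client-bedrock-runtime";

    async function completeWithBedrock(prompt: string): Promise<string> {
      const client = new BedrockRuntimeClient({ region: "us-east-1" });
      const response = await client.send(
        new InvokeModelCommand({
          // Example model id; use whichever Claude model your account can access.
          modelId: "anthropic.claude-3-5-sonnet-20241022-v2:0",
          contentType: "application/json",
          accept: "application/json",
          body: JSON.stringify({
            anthropic_version: "bedrock-2023-05-31",
            max_tokens: 1024,
            messages: [{ role: "user", content: prompt }],
          }),
        })
      );
      // The response body is raw bytes holding the Anthropic-style JSON payload.
      const payload = JSON.parse(new TextDecoder().decode(response.body));
      return payload.content[0].text;
    }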

If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!

Hmm, looks like this is still a pretty early project for me. :)

My experience:

1. I didn't have a working installation window after opening it for the first time. Maybe what fixed it was downloading and opening some random JavaScript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous, but OK).

2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit; it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)

I gave it one more go by creating an account. However, after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears, but Aide still isn't logged in. (Even after trying again after a restart.)

One more thought: maybe you got DDoS'd by HN?

> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit; it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away; I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, then there is no latency at all; right now the requests get pulled into a global queue, so that's probably what's happening.

> 3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. I got back this response after a few tens of seconds: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)

The auth flow being wonky is on us; we did fuzz-test it a bit, but as with any software, this slipped through the cracks. We were even wondering whether to skip the auth completely if you are using your own API keys; that way there is zero interaction with our LLM proxy infra.

Thanks for the feedback though, I appreciate it, and we will do better.

Aide.dev is similar to aider.chat, except Aide is an IDE while Aider is a CLI.
AIDE == AI + IDE (that was our take on the name)
This is very similar to the Zed editor. How much did you get inspired by them? And what are the differences between yours and their implementations?
I would take that as a compliment; big fan of Zed (I hope their extension ecosystem soon allows us to plug sidecar into Zed).

Tbh, I did try out their implementation and it still feels early. One of the key differences we went for was allowing the user to move freely between chat and editing modes.

Theirs feels much more detailed and thoughtful than yours.

Yours, for example, doesn't allow one to insert diagnostics, or things like inserting all open tabs and all open files at once.

They also allow jumping from editing to chatting mode by simply doing Command-Enter.

Hmm, you are right; the ergonomics of providing context are more powerful in Zed. Feedback taken, we will work on it.

We implicitly take in all the diagnostics on the files https://github.com/codestoryai/sidecar/blob/e5408782a3bfa461...

This looks great. Would love some blog posts about your experience building this out with Rust!
Oh, for sure! I do want to talk about how Rust helped us so many times when doing refactors or building new features; it's part of the reason we were able to iterate so quickly on the AI side of things and ship features.
sigh more Electron
I know... we could have built something from the ground up (like Zed did), but we had to pick a battle between building a new editor from scratch or building on a solid foundation (VSCode). We are a small team right now (4 of us) and have been users of VSCode, so instead of building something new, putting energy into building on VSCode made a lot more sense to us.
First editor I've seen recently that defaults to turning off the minimap.

I won't shut up about this; I don't understand how such a useless "feature" became the norm in modern IDEs.

You have just woken up from the cryosleep you entered in 2024. The year is 2237. GPT-64 and its predecessors have been around for nigh on 100 years. But there has been no civilizational upheaval. Your confusion is cleared when you check the inter-agent high-speed data bus. You expect this to be utterly incomprehensible, but both the human and AI data is clearly visible. It is a repeating pattern. The agents are mimicking human behavior perfectly and you can’t tell which is which. All data transmitted has the same form:

    $word is already a name for a project. Stop copying it. Change your name.
Mankind and His Machine Children have met The Great Filter.
I don't think it's going to take us 200 years to kick the habit of using global namespaces for friendly names, maybe 80. Recognizing a name and rendering it as a disambiguation based on my location in the trust graph should be a feature of the text box, not something that I have to think about.
Sounds like the scene in the movie Idiocracy where the Roomba is stuck in a corner and keeps repeating 'floor is now clean'.
hahaha
Not only is the name Aide already used by another project; that project is also an IDE.

https://www.android-ide.com/

TIL. I thought we had covered the ground when grepping for Aide. Funny that it's also an editor.
It is a pretty well-established IDE. I used it back on a Nexus 4 when that phone was actually "recent", to give you some context.
I do remember the Nexus 4 (Jelly Bean OS). I was fascinated at the time that you could play games on Android, and I ran the Android emulator on my desktop at that point (I was young and needed the games, haha).
AIDE has been around for 25 years: https://aide.github.io/

IMHO the right thing would be to use another name.

I ... did not know that.

We should probably pick another name then

There's also https://aider.chat/, which is... close.
This is literally a totally different piece of software with a completely unrelated use case. Changing the name would make as much sense as renaming a hammer because someone invented a screwdriver.
The name is perfect, AI + IDE = Aide. You should keep it.
You probably shouldn’t.