And last but not least: the issue here - if there is any - can already be caught by linter rules like no-floating-promises in ESLint, or by using TypeScript instead. You don't need a shitty AI linter confidently suggesting wrong code.
That's how you advertise the product? I'm guessing competent coders at GitHub have already been replaced by juniors+copilot.
I don't let it write blocks of code, but usually I'm ~5 keystrokes into a line change or addition and it can complete the rest. I also use a text-file todo in the same project to help it with context of what I'm trying to do.
This made it very easy for Claude to work out what to do and what needed to be corrected for things to work. Then you get your tests to either write debug statements to the terminal or JSON-serialize something to disk that you can paste into the Claude chat, which helps Claude understand the flow of the code and where things go wrong.
As an added bonus, having all those tests also made it easy for Claude to write the documentation.
I won't say it was fun, because you're doing copy-paste all day, but I couldn't have written this library without the assistance from Claude. Cursor just reduced the amount of copy-paste actions, since it can update the files.
I've found numerous threads about this issue on their forums too.
In the chat sidebar dialog there are two submit buttons. The one labelled "chat" cannot edit the active file and when you chat using that submit button it doesn't understand your code base.
The submit button labelled "Codebase chat" on the other hand can update the active document and has knowledge about your project.
In the codebase chat dialog it will print the suggested changes, and then you must press the apply button next to each suggested change. Then it will apply the changes to the file in a git-compare kind of way, with red and green colors. Finally you must accept each of the red/green changes, and the file will be updated on disk.
More likely replaced by seniors who forgot how to write code by hand because they're relying so much on AI. I don't think juniors are a thing anymore.
But I am just venting. All of yall have clearly won, I get it. I am just grateful I have lived a full life doing other things other than computers, so this all isn't too sad other than the prospect of being poor again.
I will always have my beautiful emacs and I will always be hacking. I will always have my Stallman, my Knuth, my Andy Wingo, my SICP. I feel it is accomplishment enough to have progressed in this career as I have, especially as a self taught developer. But I kinda want to let yall deal with slop now, you really seem to like it!
Maybe I'll get another degree now, or just make some silly music and video games again. It's liberating just thinking about being free from this "new way" we work now.
Thanks for all the fish though!
The only way you can deal with the codebase is to fully embrace the AI. Whenever I want to make anything beyond a simple API change, I have to boot up Cursor and give it 5+ files for context and then write a short novel about what I want it to do. I sip some coffee while it chugs away and spits out some changes, which I then have to go and figure out how to test. I'm not fully convinced that the iteration time is any faster, and the codebase is a hot mess.
It also just feels very stifling and frustrating to me to have to write a ton of prose for something when I'm used to being able to write code to do it! I have to go home and work on other projects without AI just to scratch the itch of actually programming which is what I fell in love with all those years ago.
Humans themselves tend to write new code instead of using old code - a common problem - but with sensible code structures and CI, code will grow at a sustainable rate.
LLMs continuously barf a stream of new code, never deleting anything. Then you need to provide the barf as context, and the cycle surely must continue until it falls apart. How has this not happened yet?
Once the codebases become an unmanageable mess I think the pendulum will swing back, hard.
That should buy some time for anyone not entering the industry right now.
It feels like most developers en masse have taken on some masochistic pleasure in deskilling themselves while becoming prompt engineers beholden to OpenAI/MS/Google.
The upside is that those who take the time to learn and improve can write software that most devs have given up hope of being able to write. Write the next Magit or org-mode while everyone else is asking AI to generate Tailwind HTML React forms!
It's a weird/delusional timeline, that's for sure.
Now, everything feels automated, fast, and often a bit too dumb. Sure, it’s easier, but it’s lost that raw connection to the work. We’ve abstracted away so much that it’s hard to feel like we’re truly engineering something anymore — it’s more like patching together random components and hoping it holds. I think we lost something when we all started staring at screens all day and disconnected from the hands-on nature of building. There's a lot of slop now, and while some people thrive on that, it’s not for everyone.
- Buy GitHub and devalue all individual projects by soft-forcing most big projects to go there and lose their branding.
- Gamify and make "development" addictive.
- Use social cliques who hype up each other's useless code and PRs and ban critics.
- Make kLOC and plagiarism great again!
This all happened before "AI". "AI" is the last logical step that will finally destroy open source, as planned by Microsoft.
- Have you ever read a PR and thought "this code is useless" and the result was you deciding that "kLOC is great"? Any way I put those things (Microsoft, kLOC, AI, Github, social cliques,...) together I don't get anything sensible; Microsoft spent 7.5Bn on Github to make kLOC great to help them destroy open source? It's a crackpot word salad, not a reasoned position. At least, if it's a reasoned position they should post the reasoning.
- Github has 200,000,000+ public repositories and makes $1Bn/year revenue. How will putting AI into Github 'finally' destroy open source and why does Microsoft want to screw up a revenue stream roughly the size of Squarespace's, bigger than DigitalOcean's or Netgear's, and getting on for as big as CloudFlare's?
Announcing 150M developers and a new free tier for GitHub Copilot in VS Code
https://github.blog/news-insights/product-news/github-copilo...
StackOverflow's database was public and shared at the Internet Archive until recently: https://archive.org/details/stackexchange
They've now moved it onto their own infra, ostensibly so people have to agree not to use it for LLM training: https://meta.stackexchange.com/questions/401324/announcing-a...
If they really did close off DB downloads then I'll never answer a question there again. I bet many others won't either. Maybe that's also part of why SE will fail.
The only deal I personally am willing to tolerate is me spending time providing quality questions and answers in my domain, in return for being able to download all questions and answers under the CC-SA license and view them offline without an account (via Kiwix). No other arrangement is acceptable.
I'll look further into this and update this comment with my findings.
Edit: Yep. No more answers from me. Talk about throwing the baby out with the bath water.
Maybe small community forums are really the only way to share knowledge these days :(
I thought of mailing lists as archaic, but maybe we just regressed from there.
Best for Society: Open Source <-- Where SO was
Medium for Society: Proprietary, but openly browsable <-- Where SO is
Worst for Society: Proprietary, not browsable <-- LLM-based code assist tools
Either your site's content stays hidden behind discord, or an LLM's bot/minion scrapes all your content and makes visiting your site superfluous, thereby effectively killing your site.
Whoa there Nelly!
If we're going to resurrect things, can we do it whilst leaving PHP in the past?
Is Photoshop.exe "interpretable" by anybody with a copy (of windows)? How about a binary that's been heavily decompiled, like a Mario game?
Don't get me wrong, llama is at least more open than OpenAI and that may be meaningful.
Photoshop, or any compiled binary, isn't meant to be open source and the code isn't meant to be reviewable. Llama is called open source, though the most important part isn't publicly available for review. If llama didn't claim to be open source I don't think it would matter that the model itself and all the weights aren't available.
If your argument is just that most software is shipped as compiled and/or obfuscated code, sure that's how it is usually done. That isn't considered open source though, and the line with LLMs seems to be very gray - it can be "open source" if the source code for the training logic is available even though the actual code/model being run can't be reviewed or interpreted.
When I said "it's not really open source", I was referring to the fact that there are restrictions on who can use Llama.
Source: I bothered a lot of people on the Internet about C++ when I was a child.
When I was in high school I read the docs, and learned C++ from books and MSDN. Granted my access to the internet was rather limited back then, but it also never crossed my mind to bother people for things I could easily lookup myself.
Growing up in an RTFM, "search the forum first before asking" environment is seen as toxic today, but it really helps keep in check certain behavior that's a drag on society as a whole.
One of the best mentors/bosses I ever had never answered coding questions directly, but always in the form of a question so I could look it up and learn for myself.
I try to do the same with my junior devs today, unless there's time constraints or they're under stress, I try to let them figure out the final answer themselves.
I just hit that point with some TensorFlow stuff because I started hitting the limits of what ChatGPT could answer successfully, and I think that's fine. But maybe good that I couldn't get everything out of it or it may have delayed my learning further yet. Which I guess reinforces your point.
Information is only useful if it's accessible. If they are asking questions it's because the information they want is in practice not accessible to them.
So, every comment on a post roughly equals 100 views.
There will always be new libraries and software updates, and those will always have corners and edge cases that will foster questions. The LLMs won’t have the answers out of the box until they have something to train from, so there’s still room for StackOverflow.
There should be a product, something that you install to capture your own data, and then pseudo-anonymize it and sell it back to databurgers (I meant to write data brokers but I kept this misspelling for lolz).
Is there?
Assistant LLMs like Limitless.AI merged with their older desktop scraper app could be repackaged to do that...
New industry? Or old?
I used to like reading StackExchange sites as a social media site--lots of interesting questions and clever answers. Today, votes have slowed down and the best answers are from 2017, and only niche questions can avoid being closed.
> but as microprocessors become faster, I believe we'll be able to generate artificial realities of useful information we can't live without which are superior to Stack Overflow today.
Was this written by an LLM bot? It seems…off.
If so, I apologise to fart@butts.com.
Like, imagine GPS navigation wasn't widespread and there was a paid service that gave you 20 free trips. Eventually your normal navigation skills would atrophy and you'd be obliged to purchase.
Even if they explicitly and clearly state that they don't use your data for any purpose other than generating immediate responses, they can change this once the free Copilot gains traction and people become really addicted to it.
Like, "pay if you don't want us to use your data for training". Most people won't pay and will be happy to give away their data instead.
GPS is something that wouldn't be invented these days. Instead of the US military footing the bill, it would be some private company somewhere.
Real AI is definitely a Rubicon.
For the last year Copilot completions have been slow, unreliable, and just bad: coming in at random moments, messing up syntax in dumb ways, and sometimes not showing up at all. It has been painfully bad even though it used to be good. Maybe it's since they switched from Codex to the general GPTs ...
I used the recommended models and couldn't figure it out, I assume I did something wrong but I followed the docs and triple checked everything. It'd be nice to use the GPU I have locally for faster completions/privacy, I just haven't found a way to do that.
Additionally, I've tried a bunch of these (even the same models, etc) and they've all sucked compared to Copilot. And believe me, I want that local-hosted sweetness. Not sure what I'm doing wrong when others are so excited by it.
And at some point I asked it to change a pretty large file in some way. It started processing, very very slowly, and I couldn't figure out a way to stop it. I had to restart VS Code, as it still kept changing the file 10 minutes later.
Copilot was also very slow when I tried it yesterday but at least there was a clear way to stop it.
The sibling comment also describes the process for chat, which I personally don’t care about.
(Assuming your computer has the specs necessary to run ollama)
I don't want my work to depend on proprietary or, even worse, online software. We (software engineers) got lucky that all the good tools are free software, and I feel we have a collective interest in making sure it stays that way (unless we want to be like farmers paying a Monsanto tax just to be able to work, because we don't know how to work differently anymore).
Open source, fast, and good: OpenRouter with open-source models (Qwen, Llama, etc.). It's not local, but there is no vendor lock-in; you can switch to another provider or invest in a GPU.
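For anyone wanting to try that route, here's a minimal sketch using the OpenAI-compatible client pointed at OpenRouter (the model slug is an assumption; substitute whatever open model you prefer):

    # Minimal sketch: OpenRouter exposes an OpenAI-compatible API,
    # so the standard openai client works with a different base_url.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter API key
    )

    resp = client.chat.completions.create(
        # Assumed slug; any open-weights model listed on OpenRouter works.
        model="qwen/qwen-2.5-coder-32b-instruct",
        messages=[{"role": "user", "content": "Explain Python's GIL in one paragraph."}],
    )
    print(resp.choices[0].message.content)

Switching providers later is just a matter of changing base_url and the model name.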
Unless you've got a CPU with AI-specific accelerators and unified memory, I doubt you're going to find that.
I can't imagine any model under 7B parameters is useful, and even with dual-channel DDR5-6400 RAM (Which I think is 102 GB/s?) and 8-bit quantization, you could only generate 15 tokens/sec, and that's assuming your CPU can actually process that fast. Your memory bandwidth could easily be the bottleneck.
EDIT: If I have something wrong, I'd rather be corrected so I'm not spreading incorrect information, rather than being silently downvoted.
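For anyone checking the arithmetic, a back-of-the-envelope sketch (it assumes every parameter is read from RAM once per generated token, the usual memory-bound approximation):

    # Rough tokens/sec ceiling for a CPU-only 7B model at 8-bit quantization
    params = 7e9            # 7B parameters
    bytes_per_param = 1     # 8-bit quantization
    bandwidth = 102.4e9     # dual-channel DDR5-6400: 2 ch * 8 B * 6400 MT/s

    tokens_per_sec = bandwidth / (params * bytes_per_param)
    print(f"{tokens_per_sec:.1f} tokens/sec")  # ~14.6, matching the ~15 above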
Though if it all works great for you then no reason to mess with it, but if you want to tinker you can absolutely run larger models at smaller quant sizes, q6_k is basically indistinguishable from fp16 so there's no real downside.
If I ever pay for a different AI product I would prefer a pay-by-the-token plan vs a monthly charge since there are often spans of several weeks where I'm not using the tools at all.
I do think this is a slightly aggressive tactic, though I'm sure they'd claim otherwise!
Compared to other paid plans for various AI services, this one seems like it's relatively the most enticing.
I'm glad it's open source, so I was able to fix most of the issues I had with it, and now my copy is in a great place. The documentation is in places many versions behind the actual code, so it can be tough to figure out how to set things up when you're venturing off the beaten path. That all being said, the granularity of control you have when using local models leads to an experience that's far better than Cursor/Copilot. I really enjoy that it reads my mind a lot of the time now (because I have prompt-engineered it to know how I think).
Ultimately, isn't this just the way of things? https://thot-experiment.github.io/forever-problems/?set%20up...
Sure it's busywork. But it's a lot of busywork very fast.
I'm really curious what your problem domain is, like specifically what sort of code are you asking it to change and what changes are you asking for.
I just gave o1 and Sonnet a total layup question (an optimization with a huge win simply from filtering an array before sorting it, rather than the other way around), and neither model got the solution right. Both came up with ~a hundred lines of code, and neither model's code worked on the first try. It took me about 10 minutes to refactor and optimize the code for a 6x speedup, and it would have taken longer than that just to get the AI code running (I spent 10 minutes prompting/editing to try to get the generated solutions to run).
Also, the initial code was 11 sloc, my solution is 14 sloc, Claude's was 70 sloc, and o1's was 93. idfk, I just don't think we're there yet.
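To illustrate the shape of that layup (a minimal sketch, not the actual code): sorting a million elements just to keep a handful is strictly worse than filtering first.

    import random
    import timeit

    data = [random.random() for _ in range(1_000_000)]
    keep = lambda x: x < 0.01  # keep ~1% of elements

    # Sort first, filter after: the O(n log n) sort sees the full array
    slow = lambda: [x for x in sorted(data) if keep(x)]

    # Filter first, sort after: the sort only sees the survivors
    fast = lambda: sorted(x for x in data if keep(x))

    print(timeit.timeit(slow, number=5))
    print(timeit.timeit(fast, number=5))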
Obviously nothing complicated, but it takes non-zero time. It did it in one shot in about 30s. I didn't have to look at any docs (I don't have the yaml lib memorized). It got the Python typing right, which would have been a bit of a pain. A lot faster than doing it myself, even with reviews. The tests were solid, so I could tell it worked.
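The task itself isn't quoted here, but the flavor of chore (the yaml lib plus Python typing) is roughly this minimal, hypothetical sketch:

    # A typed YAML-loading helper of the sort described (hypothetical shape)
    from typing import Any

    import yaml  # PyYAML

    def load_config(path: str) -> dict[str, Any]:
        """Parse a YAML config file and make sure the top level is a mapping."""
        with open(path) as f:
            data = yaml.safe_load(f)
        if not isinstance(data, dict):
            raise TypeError(f"expected a mapping at the top of {path}, got {type(data).__name__}")
        return data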
The filter example you give seems like one they should have aced. Not sure what went wrong, but it has easily done work like that for me. I'm usually halfway through typing the method name when the rest autocompletes. Are you using a tool with good context management, like Cursor?
Also keep in mind that, even though it's "unified memory", the OS enforces a certain quota for the GPU. If I remember correctly, it's something like 2/3 of the overall RAM.
Open-weights models have their place (in training custom agents and custom services), but if you are a knowledge worker, using a model even 5% worse than SOTA is extremely dumb.
Also, in general I don't find the difference between SOTA models and local models to be that significant in the real world even when used in the exact same way.
Does this run with VSCode and how hard is it to set this up?
You'll then have to download a model, which ollama makes very easy. Choosing one will depend on your hardware, but the biggest Qwen2.5-Coder you can fit is a very solid starting place. It's not ready for your grandma, but it's easy enough that I'd trust a junior dev to get it done.
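Once a model is pulled, calling it from code is similarly simple; a minimal sketch using the ollama Python client (pip install ollama), with the model tag being whatever you pulled:

    import ollama  # assumes a local ollama server is running

    response = ollama.chat(
        model="qwen2.5-coder:7b",  # whichever tag you pulled
        messages=[{"role": "user", "content": "Write a function that reverses a string."}],
    )
    print(response["message"]["content"])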
I just read the parent post, lol.
The UI is still changing slightly every couple weeks as they improve things and polish it, but it's become a big enough part of my day to day that it's pretty much always open on the right pane of vscode for me.
Your setup sounds interesting. What sort of API key do you use?
We’re running a preview of the code review feature right now, and are looking forward to opening it to all paid subscribers soon.
If you’d like to try it sooner, I can hook you up - just email my HN username @github.com :)
Never used Copilot or any AI assisting tool, is this a lot or is it as free as cheese in a mousetrap?
It would probably take me 10 days to use up 50 chats? 5 a day seems about right.
I have NO idea how many completions I use.
They recently changed their pricing to some weird "Flow Action credits" system and MOST of the people are dissatisfied with that. I'm still looking for a replacement IDE because I mostly just chat and rarely use autocomplete.
https://github.com/settings/billing/summary
I have Copilot from an org I'm in so I just see "You are assigned a seat as part of a GitHub Copilot Business subscription"
or
qwen2.5-coder:32b-instruct-q5_K_M (~23 GB)
or
gemma2:9b-instruct-q6_K (~7.5 GB)
and
https://github.com/bernardo-bruning/ollama-copilot
or alternatively:
https://github.com/ollama/ollama + https://github.com/olimorris/codecompanion.nvim
> Once awarded, if you are still a maintainer of a popular open source project when your initial 12 months subscription expires then you will be able to renew your subscription for free.
source: https://github.com/pricing#i-work-on-open-source-projects-ca...
Experts weigh in pls?
As for `pytest`, I just have to remind it I'm using pytest. "Write pytests for this" is sufficient to get it to do what I want.
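The result is usually along these lines; a minimal sketch with a hypothetical slugify function under test:

    import pytest

    from mylib import slugify  # hypothetical module under test

    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_rejects_empty_input():
        with pytest.raises(ValueError):
            slugify("")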
We have had them 'embrace' the wider developer and open-source ecosystem by buying GitHub.
Then they 'extended' this with partnerships and deep developer integrations in VSCode, plus an exclusive partnership with OpenAI, which in the background was used to build the best tools on the Microsoft platform with added enhancements and extensions.
Now, in the new intelligence age, we finally have the definition of what 'Extinguish' looks like: competitors having to compete with the best tools available, for free.
Realize: most of the digital ecosystem runs on whales, and that's who businesses go after. As long as wealth inequality thrives, that's the enshittification cycle.
The free plan is not going to work for professional coding.
What I would like is deep integration with VS Code using my preferred foundational model.
I see Cursor has their own model and support for 2 foundational models, but not my preferred model and they charge a monthly fee.
Supposedly: https://cloud.google.com/blog/products/ai-machine-learning/g...
But do I still have to pay Microsoft $20+ per month? What I really want is pay-per-usage, not pay-for-access plus usage.
Once you press "Start using Copilot", the Copilot menu shows up on all repositories within GitHub.com, ready for careless or uninformed users to leak proprietary code into the training set.
https://github.com/settings/copilot
> - [x] Allow GitHub to use my code snippets from the code editor for product improvements
>
> Allow GitHub, its affiliates and third parties to use my code snippets to research and improve GitHub Copilot suggestions, related models and product features. More information in About GitHub Copilot privacy.
Should we still be concerned if opting out?
People opt out, some PM or whoever decides they need 'metrics' (numbers to spin)... and more categories or qualifications are summarily added. Starting the cycle again.
I assume they'll send a privacy policy update in at most six months. I wouldn't call it concerning. Routine. I won't participate, that's for sure.
The slow enterprise elephant GitHub will never be as good/fast as Cursor; they had their chance, but they have joined the "keep the devs under our umbrella with free features" party too late.
I can imagine that good enterprise/business-level training data (the layer above the open-source widgets we compose) is not as easy to come by on the internet as the open-source libraries themselves. But through their free tool they would get access to it, especially from small scrappy startups that want to save costs. Seeing some startups go from zero to the next big thing on Copilot will be great training data.
But I do enable it, for example when I'm coding in a language or project I'm not used to. I'd say I use it for about 20% of my coding time, but that 20% is useful; it's when Google doesn't work for me either...
https://github.com/CopilotC-Nvim/CopilotChat.nvim is the best I've found for the chat-type interaction. It lets you choose models/etc.
It's still not quite as nice as cursor, but decent enough that I enjoy using them
https://github.com/olimorris/codecompanion.nvim
Haven't used it yet but it supports many models.
January 1, 4713 BCE marks the beginning of the Julian Day count. This system was introduced by Joseph Scaliger in 1583 CE.
Scaliger selected 4713 BCE as the starting point because it is the nearest date where three cycles—solar, lunar, and Roman indiction—coincide. These cycles are:
The 28-year solar cycle.
The 19-year Metonic lunar cycle.
The 15-year Roman indiction cycle.
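Since 28, 19, and 15 are pairwise coprime, the three cycles only line up every 28 × 19 × 15 = 7980 years, the length of the Julian Period. Quick check:

    from math import lcm

    print(lcm(28, 19, 15))  # 7980 years between alignments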
Congratulations to Microsoft for owning enough of the stack to make doing this possible.
- 50 chats
- 2,000 code completions
per month.
After that, pricing starts at $10/month.
If You're Not Paying For It, You Become The Product (2012) https://www.forbes.com/sites/marketshare/2012/03/05/if-youre...
The useless but true answer is nobody knows what's allowed and what isn't, until it's tested in court. Practically (not being a lawyer, though) I suspect that the clause will never be pursued on its own, because it's bullshit and everyone involved knows it is so.
In your scenario, though, assuming you publish in a way that's not overtly and primarily meant for AI training, I think the "use" of data isn't yours and would be hard to argue as violating the terms of the agreement.
Of course we might take it to the absurd end of this line of reasoning and demand that any code base that Copilot was involved in should have a license term preventing the training of any other AI in it, and we wind up in a place where all AIs are trained on source material they're explicitly licensed not to be trained on, or trained only on a mostly static set of "pre-AI" publications.
Daft stuff.
> Your code is yours. We follow responsible practices in accordance with our Privacy Statement to ensure that your code snippets will not be used as suggested code for other users of GitHub Copilot.
IIRC, I think this statement gave me the initial reassurance I needed to use Copilot many months ago, however I feel like this could be deceptively reassuring. Does it mean they can use my code for training and suggestions to other users after changing the variable names?
I tried to dig deeper. The section on "Private repositories" in their Privacy Policy [1] says: "GitHub personnel does not access private repository information without your consent", with exceptions for security, customer support, and legal obligations. Again, this feels deceptively reassuring, since GitHub personnel and GitHub's AI services are separate entities.
In their Privacy Policy, "Code" falls under the definition of "Personal Data" (User Content and Files) [2], and they go on to list lots of broad ways the data can be used and shared.
Unless I've missed anything, and as other commenters have said much more succinctly, I have to assume that there's a real possibility that my private repo code is used to train their models.
[1] https://docs.github.com/en/site-policy/privacy-policies/gith...
[2] https://docs.github.com/en/site-policy/privacy-policies/gith...
They claim it's fair use for them to steal all data they want, but you're not allowed to use AI data output, despite this data literally not being subject to copyright protections on account of lacking a human author.
And especially Github. They already have an enormous corpus that is licensed under MIT/equivalent licenses, explicitly permitting them to do this AI nonsense. All they had to do was use only the code they were allowed to use, maybe put up an attribution page listing all the repos used, and nobody would've minded because of the explicit opt-in given ahead of time.
But no. They couldn't bother with even that little respect.
Short answer: unlikely
Serious answer: we'll only know whether it is when someone challenges it in court.
FWIW I use GoLand w/ Supermaven, currently.
The copy-and-paste workflow is a minor slowdown, but nothing compared to things like smart links in the terminal, auto-detection of run configurations, etc., etc.
I watch people navigate code in VSCode and I want to pull my hair out. Things that I don’t even think about are hard and/or require just falling back to search.
And before "there is a plugin for that": I'm sure there is. I'm sure you can configure VSCode to be just as powerful as IDEA, but the rank and file using it aren't doing the work to install and configure a bunch of plugins. So, on average, VSCode doesn't hold a candle to IDEA.
With Aider I skip a lot of the copy/pasting but I’d still copy/paste to the browser before I left IDEA.
For me it's the other way around: when I see someone using an IDE instead of a lean editor, I see their struggle. Multiple seconds to open the IDE (sometimes tens of seconds), multi-hundred-millisecond lag when opening a file, noticeable input lag. And when you have to edit a file your IDE doesn't understand, all you have is a bloated notepad.
I know I'm biased and I intentionally wrote this one-sided to counter your post. In practice, it just depends. Right now in my work I primarily edit scripts (up to a few hundred lines of code), do quick edits to various larger projects - sometimes a few different projects a day - and read files in dozens of programming languages (I only really program in Python and C/C++, but I have to constantly consult various weird pieces of code). VSCode works great for me.
On the other hand, long time ago when I was working on large C# projects, I can't imagine not using Visual Studio (or Rider nowadays I guess).
+1 this is what brought me back to vscode after experimenting with goland. To me vscode better handles the heterogeneity of my daily work. In my workspace I can keep open: a golang codebase, a massive codebase consisting of yaml config files, a filesystem from a remote ssh connection, a directory of personal markdown notes, and directories of debug logs. In my experience jetbrains excelled at the single use case, but vscode won on its diversity.
I will say that the parent comment had me curious about goland again. But I suspect I really need to spend more time configuring my vscode setup. I spent years using emacs, and would love to have a helm-like navigation experience.
Sublime Text's text search is the killer feature for me. CTRL+SHIFT+F and I can search a million LOC+ codebase instantly, navigate the results with CTRL+R, do a sub-search of the results with CTRL+F, set bookmarks with CTRL+F2 (and jump to them with F2/SHIFT+F2), pop the results out to a different window for reference, etc. And all that happens with no jank whatsoever.
The LSP plugins make life easier, but even without that Sublime is often able to find where symbols are defined in a project using just the contextual information from the syntax highlighter.
I tried CLion for a while but couldn't get productive in it. Of course I'm much more experienced with Sublime, so maybe I just didn't give myself enough time to learn it, but CLion felt sluggish and inefficient. The smart code features are probably more advanced than Sublime's LSP plugins, but I didn't find anything that would make switching an actual improvement for me.
* Click to find usage is exceptionally good.
* When refactoring, it will also find associated files/classes/models and ask if you want to change them as well. It's also smart enough to avoid refactoring things like database migrations.
* Click-run just about anything is amazing. I work in multiple languages in multiple code bases. I get tired of figuring out how to install packages, run things, and even get into debug mode. Most major tooling is supported out of the box.
* Debugging. Lots of data types have smart introspection when debugging. It knows that Pandas tables should be opened in a spreadsheet, JSON should be automatically formatted, and you really just want that one property in a class already formatted as a string.
* Built in profilers and code coverage. This is something that’s always annoying to switch tooling with.
* Great Git integration, though that’s probably par for the course.
* Database integration in some toolsets (like Rails). If you need to look at records in the database, it will jump you directly to the table you need with Excel-style filters
* Local history on just about everything. This has saved my butt so many times. You can delete an entire folder and know you can restore it, even if you delete it from Git.
* Automatic dependency change detection. For example, after a pull, it will identify if new or updated dependencies were pulled. 1-click to install.
* Type hinting in untyped languages
I have a laptop I bought 10 years ago. It only has 16 gig of RAM [1].
I have had 8+ editors open, a mix of Visual Studio and VS Code. And in VS you often group all your codebases into single solutions, so I usually have multiple windows from multiple projects open in each IDE.
It only struggles when I leave the debuggers running several days because of a slight (known) memory leak in visual studio. There's probably a fix but reopening the ide takes like 10 seconds. And it remembers all my open files.
All editors are much better and faster at searching, use less memory, etc. than they were 10/20 years ago.
Everyone's improved. You seem to be a bit stuck with an old impression.
[1] I have a vastly more powerful machine but I keep procrastinating switching my work setup over to it.
Notice that if you work with large projects it is crucial to give the IDE enough RAM so it doesn't thrash a lot. You can also remove a lot of its default plugins to make it much faster.
I'm a near-exclusive user of VSCode (or Codium, at home) and like to think of myself as moderately advanced. I continually update my configurations and plugins to make my workflow easier and often see my peers stumble on operations that are effortless for me. It's hard to explain to them what they're missing until they watch me code. So now I'm curious about watching some typical Jetbrains workflows.
For most of the rough edges, I've found workarounds at this point.
I miss some of the refactorings, but half the time the AI does them for me anyways.
Smart links in terminal are supported. Detection of run configurations is supported.
My main issues atm are:
- Quick search is inferior. You can't search for folders, and symbol search is global, without the option to exclude certain locations (such as build artifacts) from indexing.
- cspell is more annoying than it is useful. I don't want to babysit my spellchecker. Without extensive configuration, there are far too many false positives.
Features I'd love:
* Reproduce the web/browser chat UI in the IDE, this is an easy concept to interact with vs a dialog that goes away after each run
* Provide tabs for multiple chats (each chat can have different files/history/etc)
* Allow multiple Aider processes running at the same time
But this is still super slick as-is, thank you!
If your main issue is the keybindings, though, there is a VSCode plugin[1] that recreates IntelliJ IDEA bindings, which I found helped smooth the transition during my tryouts.
[1] https://marketplace.visualstudio.com/items?itemName=k--kato....
Seriously, this right now is the precise moment in time that people will either look back on wondering how such a clear leader managed to sink into insignificance, or not.
I love IntelliJ more than my own kids, but if they don't add "the AI does not just talk about what code to create, it actually creates it, across multiple files and folders", then I'm out.
Just yesterday I made Cursor rewrite the whole ui layer of my app from light-mode-only to light-and-dark-mode-with-switcher in one single sweep, in less than 5 minutes (it would have taken me hours, if not days to do it manually), and this is just not feasible if you have to manually copy-and-paste whatever Jetbrains AI spits out.
Jetbrains — Move. Now!
Claude 3.5 + Cursor has fundamentally improved my productivity. It's worth the 20 dollars a month.
I've written thousands of lines of Vitest tests with it, and they have come out near perfect. It would have taken me days to write those tests by hand, but now I can just generate them and review each to make sure it works.
IntelliJ will have its lunch eaten if it doesn't pursue the Cursor/Windsurf editing modality.
I keep them both open on the same project, as there are some things IntelliJ does superbly.
I evaluated Jetbrains AI and Copilot with VSCode, but they just didn't impress me. I tried Cursor, and subscribed a couple of days into the trial. The workflow is just right.
However I think JetBrains should be worrying. I've been using and paying for JetBrains products for many years and Cursor is the only thing making me consider switching from it.
PS: One UI feature that I haven't seen anywhere yet is not splitting the screen into code and chat but unifying it.
Also, a common approach (e.g. in openai composer) seems to be to modify the file line by line, which takes a very long time if the file is long. Which (agentic) tools use a diff approach instead?
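For intuition, a diff only has to carry the changed hunks plus a little context, so a one-line edit to a 5,000-line file stays a few lines long. A minimal sketch with difflib (file names are hypothetical):

    import difflib

    old = open("app.py").read().splitlines(keepends=True)
    new = open("app_edited.py").read().splitlines(keepends=True)

    # Only changed lines (plus context) appear in the output,
    # unlike rewriting the whole file line by line.
    patch = difflib.unified_diff(old, new, fromfile="a/app.py", tofile="b/app.py")
    print("".join(patch))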
Autocompletion is losing features every day, it seems. Yes, Jetbrains might either be removing features to move them to AI, or breaking features involuntarily in order to move faster on implementing AI.
Either way, AI is killing Jetbrains.
aider
Personally, I'm using Goland for editing, along with Zed for its AI assistant. I just have the same project open in both, and do any AI editing in Zed.
I really like the AI UX in Zed with the chat window on the right being context for inline assist (and being able to easily include tons of files in the context window).
We are allowed Copilot and Chatgpt because of an enterprise contract with each. Luckily copilot has been improving but there definitely was a while where it felt way behind the jumps other products have had in the past year or so.
I would understand this mindset when it comes to consumer uses of it, but enterprise is where Microsoft makes its money and it would have to be the dumbest business decision ever to ruin the enterprise cash cow by doing that.
There are areas where Microsoft knows (or at least behaves as though) they have an effective unchallenged monopoly, and off boarding is too costly an endeavour.
Please explain. Vague insinuations mean nothing.
GitHub Copilot works fine with these; just authenticate against github.com first
A product can work however it works, good or bad, but if you can't wrap contractual guarantees around it that are palatable to your enterprise customers, you're not going to get enterprise sales.
I was wondering if you have used Copilot Edits and can compare it to Cursor Composer? Is Cursor much more superior when it comes to multi-file edits?
Do you also have tips for how to give Cursor some documentation, is there a way to make it RAG on a folder of markdown documentation files?
For those of us who prefer coding ourselves (instead of telling LLMs to do stuff and reviewing the result), it is much, much better.
func(a, b, 1)
func(b, c, 1)
and, say, for some reason I need to swap the order so the first line looks like func(1, a, b). After doing that, without even moving to the second line, Cursor just suggests changing the second line to func(1, b, c). You just can't make Copilot do this. Even after you move to the second line, it won't suggest anything. It just suggests a completion starting from where your cursor is, instead of an inline edit around where you are.
Sometimes you can delete everything after func( on the second line and Copilot will finish it, but sometimes it just can't and decides to autocomplete something irrelevant (e.g. func(1, e, f)).
In this case there's not much intelligence needed, but for more complicated changes Cursor does just as well.
Cursor has that, but also other modalities. It also just has a better UX for the core shared experience. Cursor also has their own models they mix with APIs to big well known models.
For example…
1. Cursor's chat window generates patches (multi-line, multi-file, non-contiguous) that can be directly applied to the file editor with a click, instead of requiring you to copy/paste the chat results. It lets you apply parts of the patches too.
2. Cursor supports multi-line changes (which show up as auto-complete), where the lines aren’t contiguous. For example, if you rename a field in a class, the AI will propose a rename for the Getter/Setter methods, as well as all the uses of said field/method. It’s like all the familiar refactoring tools available in IDEs, but AI powered so it operates “fuzzy matching”
3. Building off 2, they will use AI to move your cursor around (it’s not intrusive).
4. Cursor supports BYO keys for OpenAI, Gemini, Anthropic, etc. They host their own model which you’d need to pay to access though.
5. They support AI autocomplete and conversation in the command line. Helpful for remembering commands or if you need to change a command to test some change you’d made.
There might be more but this is what sticks out.
Getting answers (with references) to questions *destroys* using Google.
Why do I say this? Because their cost to deliver their product exceeds their revenue. That tells me the product (as it is today) does not match the market demand.
- Works with local models
- Context-aware chat with very nice ergonomics (we see consistently more chats per day than other coding assistants)
- Used by both indie devs and devs at very large enterprises like Palo Alto Networks
- Hooks nicely into code search, which is important for building a strong mental model inside large, messy codebases
- Open source core
Technical: https://news.ycombinator.com/item?id=23038520 with the tl;dr of "patches over email is king, if you want fancy web stuff go elsewhere"
Administrative: https://sourcehut.org/blog/2022-10-31-tos-update-cryptocurre... https://news.ycombinator.com/item?id=33403780
What are your arguments for this?
2. AI is too dangerous. Whatever innovations used for benign applications will be eventually used for more dangerous applications such as advanced genetic engineering and the military.
3. AI uses too much energy. It's disrespectful to the resources that we have.
4. AI is an apex technology amongst technologies designed to further enrich the elite and strengthen the power structure.
5. AI will also be used to completely replace workers at a speed much faster than other automations and I don't agree with that. The new jobs that have been created are demeaning such as "AI Prompt Engineer".
6. AI is one step closer to technology creating autonomous technology, and that's a bad thing.
Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.
7. This idea of "AI" and how it is expected to be used is detrimental to human intellectual development, particularly for junior generations, and the presumption that AI will solve everything is what actually may bring us closer to the world of Idiocracy.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
Granted, most of us are not choosing this.
That's a new thing in the world, ordinary people investing in their savings, 401k, retirement, mortgages, index-linked accounts. Not many hundreds of years old, but people advise it as if it's as solid as the mountains. And work for 50 years watching the numbers go up for the carrot of freedom at the end.
- AI output is taken as an ultimate source of truth despite frequently, and dangerously, getting details wrong. Fact-checking is abdicated as a personal responsibility while products are simultaneously marketed and designed for people who are weak at, or otherwise indifferent about, critical thinking. (This is similar to social media products telling their users to "consume responsibly" while designing them to be as addictive as possible.)
- AI is expensive. Microsoft, Google and Meta are the only companies that can afford to train. I don't feel comfortable with allowing these companies to be the ultimate arbiter of truth behind the scenes.
I am currently on a ride with ChatGPT that lets me write applications at three times the speed compared to before (where "before" may be "never" for some technologies), and I am happy for everyone contributing to this.
But all your points are well grounded, I will have to figure out a way to think about them, while keeping my day job.
So, to give you the benefit of the doubt that you're not just another dude who has hitched his financial wagon to this current AI slopfest: I just retried a question about writing OSSEC rules, and the response, while convincing-looking, was completely wrong. Again.
There's a lot that engineers can do that is well beyond the limits of LLMs. If you really want to keep your day job, I would really commit to that gap when you can!
I would put that phrase the other way around.
The time freed here gives me more time to spend on what actually brings value.
My primary job is not to write applications.
And if it was, I would not include "the process of editing lines of code" in my job description.
I am not afraid of being fired, but at the same time there is no discussion in my workplace about the ethics of using AI, or whether ethics is a good reason not to use it.
The AI proponents that support this have no serious solution to the mass displacement of jobs thanks to AI. They actually don't mention any alternative solutions and instead scream about nonsense such as UBI which has never worked at a large sustainable scale.
> Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.
I don't think they can come up with sensible alternatives or any sustainable solutions to the jobs displaced as there is no alternative.
This one I somewhat agree with. Ideally these technologies are owned by nobody.
Though it does give me hope when I see Facebook, of all companies, leading the charge on open-sourcing AI. The fact that their business model incentivizes them to do this is good (lucky) for everyone, whatever your other opinions of the company are.
I agree with you that it can also be abused to make the existing state of affairs even worse. But if we resist technical progress on this basis, we'll never get better, either.
What human instincts do you have in mind, and how are they incompatible with AI?
I belong to an internet community whose artists make up a third of its population – they are mostly hostile to generative art and never gave their consent for their art to be plagiarized, yet their artstyles end up on Civitai and their content shows up on haveibeentrained.com.
Personally, my hostility towards generative art would stop if training was opt-in, and I would use GitHub again if it, AT LEAST, allowed members to opt-OUT.
and with the recent addition of o1 in Cursor, the price of a mid-senior is set to around $20.
It has been a while since my business needed to hire senior engineers; I only needed around 1 or 2, and the rest are interns using Copilot or Cursor.
This is a great time to build projects and get into programming for everyone.