Hilarious. The code "fix" Copilot suggests in their example[1] is wrong on so many levels. It's trying to use await in something that is not an async function. From the other examples it's clear this is likely React's useEffect. You could not even make that callback an async function if you wanted to - that naive fix leads to subtle bugs.

And last but not least: the issue here - if there is one - can already be caught by linter rules like no-floating-promises in eslint, or by using TypeScript instead. You don't need a shitty AI linter confidently suggesting wrong code.
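For what it's worth, the reason the naive `async` fix bites can be shown without React at all. Below is a minimal sketch in plain JavaScript; `runEffect` is a made-up helper that mimics just the relevant part of the useEffect contract (the callback's return value, if it is a function, is treated as a cleanup handler):

```javascript
// Minimal model of the useEffect contract: the value the effect callback
// returns, if it is a function, is used later as the cleanup handler.
// (runEffect is a hypothetical stand-in, not React's actual implementation.)
function runEffect(effect) {
  const cleanup = effect();
  return () => {
    if (typeof cleanup === "function") cleanup();
    // An async effect returns a Promise instead, so its "cleanup"
    // silently never runs - the subtle bug mentioned above.
  };
}

// A sync effect: cleanup runs as intended.
let cleaned = false;
const disposeSync = runEffect(() => () => { cleaned = true; });
disposeSync();
console.log(cleaned); // true

// An async effect: it returns a Promise, not a cleanup function.
let asyncCleaned = false;
const disposeAsync = runEffect(async () => () => { asyncCleaned = true; });
disposeAsync();
console.log(asyncCleaned); // false - the cleanup was swallowed by the Promise
```

This is why React warns against passing an async function directly: the usual pattern is to declare an inner async function inside the effect and call it, keeping the effect's own return value a plain cleanup function.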

That's how you advertise the product? I'm guessing competent coders at GitHub have already been replaced by juniors+copilot.

[1] https://i.imgur.com/FHnsqyP.png

  • soco
  • ·
  • 2 days ago
  • ·
  • [ - ]
I've read enthusiastic reports about Cursor but my very short trial experience was quite similar to yours: refactoring by duplicating code, mixing up cases, server connection errors, to the point where I just gave up after one day. It might be me of course and my poor prompting skills, but hey, if it offers me "rename symbol" I kinda expect it to rename the symbol AND its occurrences. Oh well. I'll check again in a year or something.
I personally love Cursor, but I never use prompts. I just use its auto-complete and smart-tab features; they save me a tonne of manual typing. (I type fast, yet now find switching to a "manual" editor annoyingly slow).

I don't let it write blocks of code, but usually I'm ~5 keystrokes into a line change or addition and it can complete the rest. I also use a text-file todo in the same project to help it with context of what I'm trying to do.

Yep, for all the gloss of these AI tools, I only found myself using autocomplete and smart-tab daily. And GH Copilot is actually not bad at all, and it has an nvim integration.
I have just completed a month-long task of developing a highly complex library in C# (a parser that uses ANTLR) together with Claude and Cursor, and it worked very well. For me the trick was to do test-driven development and write all the tests first, then rerun the tests after each code change.

This made it very easy for Claude to work out what to do and what needed to be corrected for things to work. Then you get your tests to either write debug statements to the terminal or JSON-serialize something to disk that you can paste into the Claude chat, which helps Claude understand the flow of the code and where things go wrong.

As an added bonus, having all those tests also made it easy for Claude to write the documentation.

I will not say it was fun, because you are sitting doing copy-paste all day, but I couldn't have written this library without the assistance from Claude. Cursor just reduced the amount of copy-paste actions since it can update the files.

Can Cursor actually update the files? Because I've seen many examples and complaints, most recently from my developer, that it says it's changing the files, but nothing actually changes, and when pressed, says it can't actually change files.

I've found numerous threads about this issue on their forums too.

It can. It can modify the active document.

In the chat sidebar dialog there are two submit buttons. The one labelled "chat" cannot edit the active file and when you chat using that submit button it doesn't understand your code base.

The submit button labelled "Codebase chat" on the other hand can update the active document and has knowledge about your project.

In the codebase chat dialog it will print the suggested changes, and then you must press the apply button next to the suggested change. Then it will apply the changes to the file in a git-diff kind of way, with red and green colors. Finally you must accept each of the red/green changes, and the file will be updated on disk.

An alternative is aider, which can edit multiple files at once. It performs a git commit after each change, which drove me nuts and didn't fit into my workflow. With Cursor you decide when to commit to git, which I like better.
Oh no, Cursor _is_ very nice. To be fair I've not used it for refactoring, more for code generation, but it's great at applying changes to the code when it identifies patterns. Makes everything easier!
> been replaced by juniors+copilot

More likely replaced by seniors who forgot how to write code by hand because they're relying so much on AI. I don't think juniors are a thing anymore.

Working with people who use this stuff a lot has made my current job just so, so much harder in every way, it's astonishing. I used to solve problems with code; now I feel like a hermeneut or dream analyzer: absent of human intention, codebases quickly become these weird piles of different idioms, even without considering the hallucinations (those have definitely cost me a few sleepless nights now either way).

But I am just venting. All of yall have clearly won, I get it. I am just grateful I have lived a full life doing other things other than computers, so this all isn't too sad other than the prospect of being poor again.

I will always have my beautiful emacs and I will always be hacking. I will always have my Stallman, my Knuth, my Andy Wingo, my SICP. I feel it is accomplishment enough to have progressed in this career as I have, especially as a self taught developer. But I kinda want to let yall deal with slop now, you really seem to like it!

Maybe I'll get another degree now, or just make some silly music and video games again. It's liberating just thinking about being free from this "new way" we work now.

Thanks for all the fish though!

Where I work, we have a directive to use AI as much as we can wherever possible. I was handed a codebase that's been worked on by many, many, many different people over the past ~2 years and was written almost entirely with AI (starting with GPT-3).

The only way you can deal with the codebase is to fully embrace the AI. Whenever I want to make anything beyond a simple API change, I have to boot up Cursor and give it 5+ files for context and then write a short novel about what I want it to do. I sip some coffee while it chugs away and spits out some changes, which I then have to go and figure out how to test. I'm not fully convinced that the iteration time is any faster, and the codebase is a hot mess.

It also just feels very stifling and frustrating to me to have to write a ton of prose for something when I'm used to being able to write code to do it! I have to go home and work on other projects without AI just to scratch the itch of actually programming which is what I fell in love with all those years ago.

It's hard to address just one point for how messed up this seems, but the first thing that stands out is that I would guess the code volume of this process has to be unsustainable.

Humans themselves tend to write new code instead of using old code- a common problem- but with sensible code structures and CI, code will grow at a sustainable rate.

LLMs continuously barf a stream of new code, never deleting anything. Then you need to provide the barf as context, and the cycle surely must continue until it falls apart. How has this not happened yet?

This is how I think the current bubble will pop. Yes, these are useful tools that we are just now learning how to use. But Wall Street and the bean counters are going apeshit over the prospect of replacing the (expensive!) humans they currently pay for.

Once the codebases become an unmanageable mess I think the pendulum will swing back, hard.

That should buy some time for anyone not entering the industry right now.

I have similar feelings though maybe a bit more optimistic take. Obviously the AI hype train hasn't taken us to anywhere objectively better. Software has in no way become less buggy (if anything it feels worse in the past few years) and most if not all of the software I use predates the LLM era.

It feels like most developers en masse have taken on some masochist pleasure at deskilling themselves while becoming a prompt engineer beholden to OpenAI/MS/Google.

The upside is that those who take time to learn and improve can write software that most devs have given up the hope of being able to write. Write the next Magit or org-mode while everyone else is asking AI to generate Tailwind HTML React forms!

  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
> It feels like most developers en masse have taken on some masochist pleasure at deskilling themselves while becoming a prompt engineer beholden to OpenAI/MS/Google.

It's a weird/delusional timeline, that's for sure.

I hear you. There was something special about the old days when programming was all about taking your time, thinking through every step, and truly understanding what you were building. I miss the days of punching cards — there was a certain simplicity to it. You’d write your code, feed it into the machine, and if it broke, it was your fault. There was no hiding behind tests or CI/CD pipelines, no auto-fixes or layers of abstraction. It was just you and the machine, and every bug was a lesson you had to learn the hard way. The feedback loop was slow, but it was real.

Now, everything feels automated, fast, and often a bit too dumb. Sure, it’s easier, but it’s lost that raw connection to the work. We’ve abstracted away so much that it’s hard to feel like we’re truly engineering something anymore — it’s more like patching together random components and hoping it holds. I think we lost something when we all started staring at screens all day and disconnected from the hands-on nature of building. There's a lot of slop now, and while some people thrive on that, it’s not for everyone.

  • forty
  • ·
  • 2 days ago
  • ·
  • [ - ]
Wait a bit, it won't last
In what way - that those things/people doing that will fail? Or the tooling will get better that this is no longer a problem? Completely different outcomes with very different consequences.
  • forty
  • ·
  • 2 days ago
  • ·
  • [ - ]
I think people will realize those AI are not worth the cost (both money wise and environment wise). Right now money is raining on everything AI, but at some point they will want to have a return and most projects will shut down, and we can move on
  • hkaal
  • ·
  • 2 days ago
  • ·
  • [ - ]
It is part of the great plan:

- Buy GitHub and devalue all individual projects by soft-forcing most big projects to go there and lose their branding.

- Gamify and make "development" addictive.

- Use social cliques who hype up each other's useless code and PRs and ban critics.

- Make kLOC and plagiarism great again!

This all happened before "AI". "AI" is the last logical step that will finally destroy open source, as planned by Microsoft.

  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
Don't know why this was downvoted. This is an interesting take that I, at least, haven't seen before.
A good conspiracy can be interesting even if it's false, so I don't criticise you for finding it interesting; but I downvoted because I didn't - HN is chock full of "Microsoft bad, Embrace Extend Extinguish" and it's fucking tiresome. I wrote 1700 words on how that comment makes no sense but I'll reduce it:

- Have you ever read a PR and thought "this code is useless" and the result was you deciding that "kLOC is great"? Any way I put those things (Microsoft, kLOC, AI, Github, social cliques,...) together I don't get anything sensible; Microsoft spent 7.5Bn on Github to make kLOC great to help them destroy open source? It's a crackpot word salad, not a reasoned position. At least, if it's a reasoned position they should post the reasoning.

- Github has 200,000,000+ public repositories and makes $1Bn/year revenue. How will putting AI into Github 'finally' destroy open source and why does Microsoft want to screw up a revenue stream roughly the size of Squarespace's, bigger than DigitalOcean's or Netgear's, and getting on for as big as CloudFlare's?

Link should be announcement post:

Announcing 150M developers and a new free tier for GitHub Copilot in VS Code

https://github.blog/news-insights/product-news/github-copilo...

The real cost is that society at large is no longer contributing to StackOverflow, so problems and solutions are all now stored in proprietary databases (which granted SO also was) - but at least SO's was visible, and now they're stored in an invisible proprietary database.
> proprietary databases (which granted SO also was)

StackOverflow's database was public and shared at the Internet Archive until recently: https://archive.org/details/stackexchange

They've now moved it onto their own infra, ostensibly so people have to agree not to use it for LLM training: https://meta.stackexchange.com/questions/401324/announcing-a...

Wait, I thought they backed off from this nonsense.

If they really did close off db downloads then I'll never answer a question there ever again. I bet many others won't either. Maybe that's also part of why SE will fail.

The only deal I personally am willing to tolerate is me spending time providing quality questions and answers in my domain, in return for being able to download all questions and answers under the CC-SA license and freely view them offline without an account (via Kiwix). No other arrangement is acceptable.

I'll look further into this and update this comment with my findings.

Edit: Yep. No more answers from me. Talk about throwing the baby out with the bath water.

Maybe small community forums are really the only way to share knowledge these days :(

The situation at large is even worse if you consider how Discord moved a significant portion of discussion away from the indexable Web.

I thought of mailing lists as archaic, but maybe we just regressed from there.

IMO public forums are the sweet spot.
The StackOverflow database was open source for a while, but I haven't checked for a data dump since the new management came in.
True, may not be open source, but it is still viewable.

Best for Society: Open Source <-- Where SO was

Medium for Society: Proprietary, but openly browsable <-- Where SO is

Worst for Society: Proprietary, not browsable <-- LLM-based code assist tools

This is just like the majority of communities moving to Discord over Forums. The barrier to entry might be a lot lower than getting a VPS and hosting phpBB or whatever, but the discoverability and searchabilty has gone towards 0. Everything is just moving into opaque black boxes.
Your comment sparked a thought -- maybe this is the "Dark Forest" of the Web:

Either your site's content stays hidden behind discord, or an LLM's bot/minion scrapes all your content and makes visiting your site superfluous, thereby effectively killing your site.

I never understood the move to Discord. Maybe I should host a PhpBB and bring some sanity back.
> phpBB

Whoa there Nelly!

If we're going to resurrect things, can we do it whilst leaving PHP in the past?

PHP hasn't failed me yet. Been using it for about 20 years. Not heavily, but it gets the job done and it's still improving. It's really quite sane if you start comparing it to the alternatives. It continues to 'just work'.
[dead]
Less competition for me
...where would you classify Llama in here? That's not really "open source" despite what Facebook calls it, but I wouldn't call it proprietary, anyone can download and use the whole thing locally.
  • pabs3
  • ·
  • 2 days ago
  • ·
  • [ - ]
"public weights"?
Can those weights be interpreted by anyone viewing them? If not, it seems like publicly available, obfuscated code at best.
"model available" would be my preferred term.

Is Photoshop.exe "interpretable" by anybody with a copy (of windows)? How about a binary that's been heavily decompiled, like a Mario game?

Photoshop doesn't claim to be open source like llama does though, I'm not sure of the connection you're making.

Don't get me wrong, llama is at least more open than OpenAI and that may be meaningful.

The license aside, the question is what can be done with a carefully arranged blob of binary? Without additional software (Windows) I can't really do anything with Photoshop.exe. Similarly, Llama.gguf is either useful, with Ollama.app, or not, standing alone. So (looking past the difference in license), would you consider Photoshop.exe similar in that it's a binary blob that's useless by itself, or is it a useful collection of bytes, and why is/is not an ML model available on hugging faces the same?
The license used isn't important in my opinion, when talking about open source the question is whether the source code is available to be modified and reviewed/interpreted.

Photoshop, or any compiled binary, isn't meant to be open source and the code isn't meant to be reviewable. Llama is called open source, though the most important part isn't publicly available for review. If llama didn't claim to be open source I don't think it would matter that the model itself and all the weights aren't available.

If your argument is just that most software is shipped as compiled and/or obfuscated code, sure that's how it is usually done. That isn't considered open source though, and the line with LLMs seems to be very gray - it can be "open source" if the source code for the training logic is available even though the actual code/model being run can't be reviewed or interpreted.

  • pabs3
  • ·
  • 1 day ago
  • ·
  • [ - ]
The source data for the training needs to be public and freely licensed too, otherwise it's IMO not an open source model.
I think this discussion is silly in the context of a modern LLM. Nobody really understands how an LLM works, and you absolutely do not actually want to retrain Llama from scratch.

When I said "it's not really open source", I was referring to the fact that there are restrictions on who can use Llama.

Yeah, I agree that StackOverflow is but a shadow of what it used to be, and that this is a shame. But I don't think that AI is to blame for this - all the interesting discussions had already migrated to GitHub issues. AI is just the nail in the coffin for SO.
I think you'd find it was society-at-small contributing: a perhaps 10x larger (yet still quite small) number posting but watering down useful contributions, 100x that lurking, and 1000x that just drive-by copy-pasting from SO to their IDE.
"Useful contributions" is subjective. Not everyone is born a senior developer. Juniors, and even children who aren't even juniors yet ask questions on these channels.

Source: I bothered a lot of people on the Internet about C++ when I was child.

I contributed well-researched answers already when I was ostensibly a very junior dev, and even before that during my CS studies. Stuff you can look up or simply try out is doable; of course, questions where you need experience aren't a good fit.

When I was in high school I read the docs, and learned C++ from books and MSDN. Granted, my access to the internet was rather limited back then, but it also never crossed my mind to bother people for things I could easily look up myself.

Growing up in a RTFM, "search the forum first before asking" environment is seen as toxic today, but it really helps keep in check certain behavior that's a drag on society as a whole.

One of the best mentors/bosses I ever had never answered coding questions directly, but always in the form of a question so I could look it up and learn for myself.

I try to do the same with my junior devs today, unless there's time constraints or they're under stress, I try to let them figure out the final answer themselves.

I find there's a time and a place for RTFM, and it's usually not at the start of the project when you know the least. When you're just starting, you just want to get something working. Being spoon-fed some answers to get past a few hurdles is rather nice. But then there comes a point where you have to stop and be like "Okay, what the heck is this actually doing? How does it work?".

I just hit that point with some TensorFlow stuff because I started hitting the limits of what ChatGPT could answer successfully, and I think that's fine. But maybe good that I couldn't get everything out of it or it may have delayed my learning further yet. Which I guess reinforces your point.

Are you saying these child questions are useful contributions?
To the child that asked it? Absolutely. :-)

Information is only useful if it's accessible. If they are asking questions it's because the information they want is in practice not accessible to them.

I'm not disagreeing, just clarifying. I myself have asked lots of dumb/easy questions but prior to ChatGPT it was hard to get simple straight-forward answers.
There's a rule of tens online: for every comment, there are ten interactions (like an upvote/downvote), and for every interaction, there are ten views.

So, every comment on a post roughly equals 100 views.

I learned this as a real estate salesperson some 20 years ago: take 10 calls to get a meeting, 10 meetings to get a contract, and sign 10 contracts to complete a sale. So a thousand calls, a hundred meetings, and ten contracts for one sale. I didn't last long; the only other people there who were successfully selling land were also heavy cocaine users - that's a mark of how grind-y the job was.
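Both funnels compound the same way; a trivial sketch (the factors of ten are the heuristics from this thread, not measured data):

```javascript
// Rule of tens: each level of the funnel is ~10x rarer than the one below it.
const viewsPerInteraction = 10;
const interactionsPerComment = 10;
const viewsPerComment = interactionsPerComment * viewsPerInteraction;
console.log(viewsPerComment); // 100 views per comment

// Sales funnel: calls -> meetings -> contracts -> sales, 10x at each step.
const callsPerSale = 10 * 10 * 10;
console.log(callsPerSale); // 1000 calls for one sale
```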
  • ·
  • 2 days ago
  • ·
  • [ - ]
Sure, but at least the knowledge is publicly viewable, even if the contributions weren't equal. Upvoting, browsing, and asking are also contributions.
I think this only removes the low quality questions like “how do I make X in React”.

There will always be new libraries and software updates, and those will always have corners and edge cases that will foster questions. The LLMs won’t have the answers out of the box until they have something to train from, so there’s still room for StackOverflow.

So the real solution is tracking all of your data.

There should be a product, something that you install to capture your own data, and then pseudo-anonymize it and sell it back to databurgers (I meant to write data brokers but I kept this misspelling for lolz).

Is there?

Assistant LLMs like Limitless.AI merged with their older desktop scraper app could be repackaged to do that...

New industry? Or old?

I think there will always be open platforms for us to exchange ideas where Copilot can't help. It's just now simple problems can be solved with Copilot directly, we can finally focus on implementing ideas and optimizing them.
I don't know. Sometimes there's many ways to solve a simple problem but some are better than others. On stack overflow you get that variety and that discussion. Copilot just gives one. It might be suboptimal or it might have subtle bugs.
I wonder if a purge and a fresh start for StackOverflow would renew interest.

I used to like reading StackExchange sites as a social media site--lots of interesting questions and clever answers. Today, votes have slowed down and the best answers are from 2017, and only niche questions can avoid being closed.

  • jart
  • ·
  • 2 days ago
  • ·
  • [ - ]
Stack Overflow is now a proprietary database too. Given the choice between that, and a proprietary robot offering 10x as much clarity and quality, I'd choose the robot. Not all LLMs are proprietary in the same way as Claude. Many LLMs have their weights publicly available, like Gemma. But I can understand if you feel like a floating point numbers file is de facto proprietary tool. But if you're smart, you would look at this instead as an opportunity to invent the tools that will make this knowledge accessible. I've been working with Mozilla to build a "fantasy mode" feature for Firefox, which works similar to incognito mode, where you have a local LLM generate a synthetic version of the world wide web on the fly. This gives you the ability to explore the knowledge contained in LLM weights using an intuitive familiar browser-based interface. So far it's about as fast as 56k dialup was in the 1990s but as microprocessors become faster, I believe we'll be able to generate artificial realities of useful information we can't live without which are superior to Stack Overflow today.
Forgive me if I have missed something, but how is a synthetic version of the web (which sounds interesting and impressive in its own right) in any way comparable to a vast, indexed repository of curated and organized technical knowledge shared by experts with nuanced experiences and insights?

> but as microprocessors become faster, I believe we'll be able to generate artificial realities of useful information we can't live without which are superior to Stack Overflow today.

Was this written by an LLM bot? It seems…off.

  • WD-42
  • ·
  • 15 hours ago
  • ·
  • [ - ]
This sounds great! There isn't enough slop in the web, so this sounds like a good way to experience browsing without any non-ai generated nonsense getting in the way!
They appear to be SO excited about their new feature that they sent an email about it to every single email address associated with my Github accounts. Which is quite a few addresses that have been used for git commits over the years.
  • hug
  • ·
  • 2 days ago
  • ·
  • [ - ]
Does that include the email addresses configured in your ~/.gitconfig?

If so, I apologise to fart@butts.com.

Only if you have manually associated them with your Github account so that they show up as "your" commits, as far as I'm aware (although I'm not sure how deep Github's integration with desktop tools reaches these days)
  • az226
  • ·
  • 2 days ago
  • ·
  • [ - ]
GitHub is scared. It's losing mind share to startups who are leapfrogging them: Cursor, Poolside, Augment, Cognition, Codeium.
But are you Stack Overflow excited?!
This should not be downvoted :) lighten up you guys!
I feel like this free plan is just enough to get someone hooked and require them to upgrade to paid.

Like, imagine GPS navigation wasn't widespread and there was a paid service that gave you 20 free trips. Eventually your normal navigation skills would atrophy and you'd be obliged to purchase.

  • snug
  • ·
  • 2 days ago
  • ·
  • [ - ]
Of course it is, why would they give away something without trying to capitalize on it later
Some will want to pay more for better service, but it seems likely to be plenty for people who aren't full-time programmers and don't write code every day?
  • xpl
  • ·
  • 2 days ago
  • ·
  • [ - ]
It is to collect more data for LLM training. To tap into proprietary code bases that are never uploaded to GitHub.
Have you read the terms or are you just saying things?
  • xpl
  • ·
  • 1 day ago
  • ·
  • [ - ]
I didn't. Terms are subject to change + can be vaguely written. What matters is your data getting uploaded to someone else's cloud.

Even if they explicitly and clearly state that they don't use your data for any purpose other than generating immediate responses, they can change this once the free Copilot gains traction and people become really addicted to it.

Like, "pay if you don't want us to use your data for training". Most people won't pay and will be happy to give away their data instead.

> imagine GPS navigation wasn't widespread and there was a paid service

GPS is something that wouldn't be invented these days. Instead of the US military footing the bill, it would be some private company somewhere.

Congratulations, you are one of today's 10000* to learn about https://en.wikipedia.org/wiki/Shareware and free trials.

*https://xkcd.com/1053/

This, but _every single thing we rely on_.

Real AI is definitely a rubicon.

GPS navigation, at least on Google Maps, is free so it can get you to click on paid ad placements.
I have already replaced Copilot with continue.dev+qwen2.5-coder:1.5b on Ollama and don't see myself coming back.

For the last year Copilot completions have been slow, unreliable and just bad, coming in at random moments, messing up syntax in dumb ways, being slow and sometimes not showing up at all. It has been painfully bad even though it used to be good. Maybe it's since they switched from Codex to the general GPTs ...
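For anyone wanting to try the same setup: as of recent Continue versions, local autocomplete is configured in `~/.continue/config.json`, roughly like the fragment below. Treat this as a sketch - the schema changes between releases, so check the Continue docs - and it assumes you've already run `ollama pull qwen2.5-coder:1.5b`:

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```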

I've had the opposite experience - I tried continue.dev and for me it doesn't come close to Copilot, especially with Copilot chat having o1-preview and Sonnet 3.5 for so cheap that I might single-handedly bankrupt Microsoft (we can hope). But I tried it before that was available, and the inline completions were laughably bad in comparison.

I used the recommended models and couldn't figure it out, I assume I did something wrong but I followed the docs and triple checked everything. It'd be nice to use the GPU I have locally for faster completions/privacy, I just haven't found a way to do that.

The last couple times I tried "continue" it felt like "Step 1" in someone's business plan; bulky and seconds away from converting into a paid subscription model.

Additionally, I've tried a bunch of these (even the same models, etc) and they've all sucked compared to Copilot. And believe me, I want that local-hosted sweetness. Not sure what I'm doing wrong when others are so excited by it.

I just tried Continue and it was death by 1000 paper cuts. And by that I mean 1000 accept/reject blocks.

And at some point I asked to change a pretty large file in some way. It started processing, very very slowly and I couldn't figure out a way to stop it. Had to restart VS Code as it still kept changing the file 10 minutes later.

Copilot was also very slow when I tried it yesterday but at least there was a clear way to stop it.

  • euoia
  • ·
  • 2 days ago
  • ·
  • [ - ]
Do you have a guide for how to set this up? I am also pretty dissatisfied with Copilot completions.
Here you go: https://docs.continue.dev/autocomplete/model-setup

The sibling comment also describes the process for chat, which I personally don’t care about.

Probably via Tabby (https://www.tabbyml.com/)
Tabby is great! Though they broke the vim plugin when moving to LSP support, the older version still works fine.
  • mfkp
  • ·
  • 2 days ago
  • ·
  • [ - ]
looks like the parent comment hints at https://docs.continue.dev/chat/model-setup#local-offline-exp...

(Assuming your computer has the specs necessary to run ollama)

  • maeil
  • ·
  • 2 days ago
  • ·
  • [ - ]
That page is about chat, while the parent comment seems to be about completions; that's the Copilot feature most of us will be looking to replace/improve on.
  • mfkp
  • ·
  • 2 days ago
  • ·
  • [ - ]
Here's the similar docs page for completions: https://docs.continue.dev/autocomplete/model-setup#local-off...
Also interested !
Same, TabbyML+llama.cpp running StarCoder 3B runs great - excellent completions for Go, Terraform, shell, Nix.
  • forty
  • ·
  • 2 days ago
  • ·
  • [ - ]
Is there any solution that is 1- fully local, 2- open source, 3- fast on CPU only, and 4- provides reasonably good results for smart auto-complete?

I don't want my work to depend on proprietary or, even worse, online software. We (software engineers) got lucky that all the good tools are free software, and I feel we have a collective interest in making sure it stays that way (unless we want to be like farmers paying a Monsanto tax just to be able to work, because we don't know how to work differently anymore).

Fast on CPU is just not realistic.

Open source, fast and good: OpenRouter with open-source models (Qwen, Llama, etc.). It's not local, but there is no vendor lock-in; you can switch to another provider or invest in a GPU.

> 3- fast on CPU only

Unless you've got a CPU with AI-specific accelerators and unified memory, I doubt you're going to find that.

I can't imagine any model under 7B parameters is useful, and even with dual-channel DDR5-6400 RAM (Which I think is 102 GB/s?) and 8-bit quantization, you could only generate 15 tokens/sec, and that's assuming your CPU can actually process that fast. Your memory bandwidth could easily be the bottleneck.

EDIT: If I have something wrong, I'd rather be corrected so I'm not spreading incorrect information, rather than being silently downvoted.
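To make the back-of-envelope above concrete (using the comment's own rough figures, not benchmarks): each generated token has to stream every weight through memory once, so memory bandwidth divided by model size gives an upper bound on tokens/sec. This ignores caches, batching, and compute limits.

```javascript
// Upper bound on CPU decode speed: every weight streams through RAM once per token.
const params = 7e9;        // 7B-parameter model
const bytesPerParam = 1;   // 8-bit quantization
const bandwidthGBs = 102;  // dual-channel DDR5-6400, ~102 GB/s

const modelGB = (params * bytesPerParam) / 1e9; // 7 GB read per generated token
const tokensPerSec = bandwidthGBs / modelGB;
console.log(tokensPerSec.toFixed(1)); // "14.6" - about 15 tokens/sec
```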

deepseek-1b, qwen2.5-coder:1.5b, and starcoder2-3b are all pretty fast on CPU due to their small size. You're not going to be able to have conversations with them or ask them to perform transformations on your code, but autocomplete should work well.
StarCoder 3B works great on my second-hand RTX 2080. Can't run 7B (just a hair too little VRAM), but still great completions.
You should definitely be able to run 7B at q6_k, and that might even be outperformed by a 15B model with a sub-4bpw imatrix quant; iQ3_M should fit into your VRAM. (I personally wouldn't bother with sub-4bpw quants on models under ~70B parameters.)

Though if it all works great for you, there's no reason to mess with it. But if you want to tinker, you can absolutely run larger models at smaller quant sizes; q6_k is basically indistinguishable from fp16, so there's no real downside.

You can set up Cursor with a local LLM, connected via an open source ngrok substitute (can’t remember which ones are good). Only recommend doing this on a Mac with a lot of ram so you can use one of the actually useful coding models though (e.g. QwenCoder2.5-32b).
Continue VSCode extension.
Interesting, I cancelled my plan a couple weeks ago so I suppose it's nice to know my vscode plugin won't stop working at the end of the month.

If I ever pay for a different AI product I would prefer a pay-by-the-token plan vs a monthly charge since there are often spans of several weeks where I'm not using the tools at all.

  • mehh
  • ·
  • 2 days ago
  • ·
  • [ - ]
I cancelled mine today, as I've moved to Cursor, so this is great news; I'd still want to use it sometimes.

I do think this is a slightly aggressive tactic, though I'm sure they'd claim otherwise!

That free plan sounds too limited for continuous use. Is this the sort of plan that will convert non-users into paying customers?

Compared to other paid plans for various AI services, this one seems relatively the most enticing.

Yes, this is meant to expose people to Copilot and convince them to subscribe.
Data point: I'm going to give copilot a try now.
If anyone is looking for a free/local alternative Continue + Ollama is acceptable. If you're just doing run of the mill programming it will work well out of the box.

I'm glad it's open source so I was able to fix most of the issues I had with it and now my copy is in a great place. The documentation is in places many versions behind the actual code so it can be tough to figure out how to set things up when you're venturing off the beaten path. That all being said the granularity of control you have when using local models leads to an experience that's far better than Cursor/Copilot, I really enjoy that it reads my mind a lot of the time now (because I have prompt engineered it to know how I think).

Ultimately, isn't this just the way of things? https://thot-experiment.github.io/forever-problems/?set%20up...
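For anyone curious what the Continue + Ollama setup looks like, a minimal config is roughly something like this. (Field names are from my memory of Continue's docs at the time, so double-check against the current schema; the model tags are just examples you'd pull with `ollama pull` first.)

```json
{
  "models": [
    {
      "title": "Qwen 2.5 Coder (chat)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2 3B (autocomplete)",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

The split between a chat model and a smaller dedicated autocomplete model is the main thing that makes local setups feel responsive.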

Acceptable UX maybe, but the gap between Sonnet 3.5 and open models isn’t worth it. I know people are going to pitch qwen coder 72b, but it’s still a long way off on benchmarks and my time matters more.
You're the second person in this thread to make this point, what are you using it for? I find the difference is basically negligible (in the sense that both get the busywork right and both fail at anything complicated)
yeah, Sonnet goes past that. 300+ line changes in 20 seconds. You have to review it, but generally it's right. It's infinitely faster than the time to look at docs and do it myself.

Sure it's busywork. But it's a lot of busywork very fast.

Well it's definitely not infinitely faster since you're having to review it, but we're talking about the delta between Sonnet and Qwen/Mistral/Llama or whatever, not doing it manually.

I'm really curious what your problem domain is, like specifically what sort of code are you asking it to change and what changes are you asking for.

I just gave o1 and Sonnet a total layup question (optimization that had a huge win simply by filtering an array before sorting it vs the other way around) and neither model got the solution right, both of them came up with ~hundred lines of code, neither model's code worked on the first try. It took me like 10 minutes to refactor and optimize the code for a 6x speedup and it would take longer than that to debug the AI code to even make it run. (I spent 10 minutes prompting/editing to try to get the generated solutions to run)

Also the initial code was 11 sloc, my solution is 14 sloc, and claude was 70 sloc and o1 was 93. idfk, i just don't think we're there yet

Quick example: create a config class, allowing the caller get and set a variety of config vars. It should be easy to add new vars. Persist any set value to a yaml file. Each var should be described by a name, type, env var fallback value (optional) and default value if the set value and env var are null (optional). The API should be typed and allow int, string, float and lists. Add comprehensive tests.

Obviously nothing complicated, but it takes non-zero time. It did it in one shot in about 30s. Didn't have to look at any docs (I don't have the yaml lib memorized). Got the Python typing right, which would have been a bit of a pain. A lot faster than doing it myself, even with review. Tests were solid so I could tell it worked.

The filter example you give seems like they should have aced it. Not sure what went wrong but it has easily done work like that for me. I usually am half way through typing the method name when the rest autocompletes. Are you using a tool with good context management like cursor?
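As a rough sketch of what that prompt describes (hypothetical names throughout; the original asked for YAML persistence, but I'm using stdlib JSON here to keep the example self-contained):

```python
import json, os, tempfile
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Var:
    """One config variable: name, type, optional env-var fallback, optional default."""
    name: str
    type: type
    env_var: Optional[str] = None
    default: Any = None

class Config:
    def __init__(self, path: str, variables: list[Var]):
        self.path = path
        self.vars = {v.name: v for v in variables}
        self._values: dict[str, Any] = {}
        if os.path.exists(path):  # load previously persisted values
            with open(path) as f:
                self._values = json.load(f)

    def get(self, name: str) -> Any:
        var = self.vars[name]
        if name in self._values:                      # 1) explicitly set value
            return var.type(self._values[name])
        if var.env_var and os.environ.get(var.env_var) is not None:
            return var.type(os.environ[var.env_var])  # 2) env var fallback
        return var.default                            # 3) default

    def set(self, name: str, value: Any) -> None:
        var = self.vars[name]
        self._values[name] = var.type(value)
        with open(self.path, "w") as f:  # persist on every set
            json.dump(self._values, f)

# usage
path = os.path.join(tempfile.mkdtemp(), "config.json")
cfg = Config(path, [Var("retries", int, env_var="APP_RETRIES", default=3)])
print(cfg.get("retries"))  # 3 (default, assuming APP_RETRIES is unset)
cfg.set("retries", 5)
print(Config(path, [Var("retries", int)]).get("retries"))  # 5 (persisted)
```

The value/env-var/default resolution order is the part the prompt pins down; adding a new var really is just one more `Var(...)` entry.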

My work involves a lot of boilerplate. Made some changes recently that amount to about 3 lines of meaningful code but wrapped in 4 new files + edits to like 6 others. It's ridiculous. Perfect for AI but I haven't found a way to automate because the tools don't seem to be smart enough to create files and make random little edits, and the amount of words to explain what I want would be too much. One day...
I gotta find a new trade
On what kind of hardware/GPU are you running that locally?
M1 MacBook Pro is sufficient for a Qwen model.
with how much ram though?
For Qwen-32B, 32GB should be quite sufficient to run it at 4-bit quantization with full context.
Thanks, I need to figure out something to run on M4 Mac Mini standard 16G
You should be looking at 7-8b sized models, then. https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct is considered pretty strong for its size. That said, you shouldn't expect more than glorified autocompletion at that point.
I'm not familiar with the practical memory requirements on Macs but I suspect that with 16gb of integrated ram you won't have issues running 14B models even at q6_k, certainly would be fine at q4 and it's definitely going to be capable of writing code based on instruction as well as minor refactoring, generating docstrings etc.
The model itself will fit just fine, of course, but you'll also want a large context for coding. And then since it's integrated RAM, it's also used by everything else running on your system - like, say, your IDE, your compiler etc, which are all fairly memory hungry.

Also keep in mind that, even though it's "unified memory", the OS enforces a certain quota for the GPU. If I remember correctly, it's something like 2/3 of the overall RAM.
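The weight-only footprint is easy to sanity-check. (This is a sketch using the nominal bits per weight; real GGUF quants like q4_K_M or q6_K use slightly more than their nominal name suggests, and the KV cache and runtime overhead come on top, which is why "full context" needs headroom.)

```python
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(round(weight_gb(32, 4), 1))  # 32B at 4-bit: ~16.0 GB
print(round(weight_gb(14, 6), 1))  # 14B at 6-bit: ~10.5 GB
```

So a 14B model nominally fits in 16GB of unified memory, but once the GPU quota, context, and the rest of the system are accounted for, it's tight.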

2x 1080Ti, ~25t/s
I found the plugin did not prompt the model often enough. I would finish writing a line, go to a new line, and sit there waiting to see what the AI thought would be next, only to realize Continue wasn't prompting...
Tried a few models and was extremely disappointed by how dumb the completions are. Basically useless. What local ollama-available models do you recommend?
Have you tried Qwen 2.5 coder? I've only done very little testing, but it seemed to work pretty well. I used the 14B version.

https://ollama.com/library/qwen2.5-coder

A software developer's time is much more precious than time wasted on sub-optimal models.

Open-weights models have their place (in training custom agents and custom services), but if you are a knowledge worker, using a model even 5% worse than SOTA is extremely dumb.

100% disagree with this take, the flexibility in controlling the prompt leads to QwenCoder2.5-32b outperforming gpt-o1 and claude sonnet 3.5 for nearly everything that I use it for (true for Gemma-27b and llama3.3-70b, though in this context I'm almost always using the former). A specialist model that's specifically prompted to do the correct thing will outperform a SOTA generic model with a one size fits all system prompt. This is why small autocomplete models can very obviously outperform larger models at that specific task. I am speaking 100% from experience and ignoring all benchmarks in forming this view btw, so maybe it's just my specific situation.

Also, in general I don't find the difference between SOTA models and local models to be that significant in the real world even when used in the exact same way.

  • k__
  • ·
  • 2 days ago
  • ·
  • [ - ]
Sounds great.

Does this run with VSCode and how hard is it to set this up?

Yes, the VSCode extension is a one-click install; so is Ollama, which is a separate project that provides local inference.

You'll then have to download a model, which Ollama makes very easy. Choosing which one will depend on your hardware, but the biggest QwenCoder2.5 you can fit is a very solid starting place. It's not ready for your grandma, but it's easy enough that I'd trust a junior dev to get it done.

  • k__
  • ·
  • 1 day ago
  • ·
  • [ - ]
What's the extension name?
Continue, I talk about it at length in the gp post.
  • k__
  • ·
  • 1 day ago
  • ·
  • [ - ]
Ah, thanks.

I just read the parent post, lol.

Are there any small trained models out there that are specifically for python programming that you know of?
Do you have any example prompts or suggestions for coming up with them?
  • ·
  • 2 days ago
  • ·
  • [ - ]
Has anyone compared the updated Copilot with Cursor? The main updates I am wondering about are model selection and multi-file edits. I used copilot before these features, changed to Cursor and now I am wondering how much Copilot has closed the gap.
Have already moved to Cursor and realised I was only paying Github for Co-pilot - have downgraded immediately. No affiliation to Cursor, but the results are just a lot better and I don't need to be paying $20 to them AND $10 to co-pilot :/
I've been loving Copilot Edits (their take on the multi file edits stuff). I'm personally still doing the process of finding and adding files to its context/working set (letting it try to do that part just doesn't work great yet), but then it quickly applied edits and gives you a review/checkpoint UI.

The UI is still changing slightly every couple weeks as they improve things and polish it, but it's become a big enough part of my day to day that it's pretty much always open on the right pane of vscode for me.

I'm using Copilot daily and I didn't find any improvements over last year. I think they introduced something called Copilot Edits which works kind of like aider judging from screenshots, but it's experimental, so I didn't try it. It's basically glorified autocomplete and that's about it. At least I didn't discover anything new.
The glorified autocomplete only applies to the completions. Granted, that's often the primary interaction, but it isn't the whole story. Using chat+edits can enable copilot to make changes across entire file(s). I still haven't found it perfectly reliable for large scale edits, but it has proved useful for handling the sort of busywork that may occur when refactoring. I'd love to hear stories where copilot actually made meaningful contributions to unique/interesting projects. The demo's often showcase making mundane modifications to yet-another-website or spawning a project into existence, which is neither very compelling nor something I do often.
  • olup
  • ·
  • 2 days ago
  • ·
  • [ - ]
I use supermaven and cline with my own API key, a setup superior to cursor imo. Tried to go back to gh copilot yesterday but couldn't bear it for a full workday, and reverted to my previous arrangement.
FYI supermaven has joined Cursor

https://www.cursor.com/blog/supermaven

This looks really interesting, cursor has been way better than copilot for me but supermaven looks great. I went down the rabbit hole with: https://www.youtube.com/watch?v=zLQuBSuzu2w&t=661s

Your setup sounds interesting. What sort of API key do you use?

  • olup
  • ·
  • 2 days ago
  • ·
  • [ - ]
We have an openai account for the company, so I mainly use gpt4o or 4o mini with supermaven and cline. I think Claude 3.5 works even better.
  • ·
  • 2 days ago
  • ·
  • [ - ]
Includes up to 2,000 completions and 50 chat requests per month.
So basically it’s free for 1 day per month (I only use chat).
I hope access to Code Review in GitHub will be available to paid users soon! It's going to be a game changer as a first level of code review before real people get involved with the PR.
I love to hear this as the PM for Copilot code review at GitHub!

We’re running a preview of the code review feature right now, and are looking forward to opening it to all paid subscribers soon.

If you’d like to try it sooner, I can hook you up - just email my HN username @github.com :)

I have just sent you an email. Thank you Tim!
It's like the U2 album on everyone's iphone. It's a Homer bowling ball.
>Includes up to 2,000 completions and 50 chat requests per month.

Never used Copilot or any AI assisting tool, is this a lot or is it as free as cheese in a mousetrap?

The org I work for pays for Copilot so I'm used to using it.

It would probably take me 10 days to use up 50 chats? 5 a day seems about right.

I have NO idea how many completions I use.

  • az226
  • ·
  • 2 days ago
  • ·
  • [ - ]
Mousetrap.
Great. More easily generated garbage code written by other people for me to review.
Copilot in VSCode is so far behind Windsurf that I'm not terribly excited about this, and I'm still happy to pay for a Codeium pro account. My only fear is someone buying them and screwing up model access, but maybe it's fine if that someone is Anthropic. GPT-4 is terrible at coding and o1 isn't there yet for practical use.
For anyone else who never heard of Windsurf, it seems this was announced a month ago:

https://news.ycombinator.com/item?id=42127882

  • vazma
  • ·
  • 2 days ago
  • ·
  • [ - ]
Tried Windsurf and the beginning was amazing; over the last few days I started realizing it was getting annoyingly stupid and slow. Opened Reddit and realized lots of people had the same issue, with a lot of wild theories about it. I stopped my subscription and I'm considering trying Cursor just to compare. Tl;dr: trying an agentic AI coding assistant was amazing and I think I will never go back!
They use a lot of custom models and logic chaining, so I think this is more search and retrieval optimization or similar problems, I guess maybe some Anthropic API issues as well. I’ve seen the variance too and was wondering if was load related. Overall, still loving it, even if there’s some bleeding edge bumpiness.
Concur. Dropped Copilot for Cursor and then tried Windsurf. Windsurf is the keeper for me.
  • mdrzn
  • ·
  • 2 days ago
  • ·
  • [ - ]
Been using Windsurf for a couple months and it was almost perfect with Sonnet 3.5

They recently changed their pricing to some weird "Flow Action credits" system and MOST of the people are dissatisfied with that. I'm still looking for a replacement IDE because I mostly just chat and rarely use autocomplete.

This is me too. Windsurf feels perfect right now.
Is there some way to check what I currently use? I rarely use chat, so 50 per month probably wouldn't be an issue, but Copilot has become my main completion tool; not sure if I'd stay under 2,000.
Can you see it here?

https://github.com/settings/billing/summary

I have Copilot from an org I'm in so I just see "You are assigned a seat as part of a GitHub Copilot Business subscription"

No, unfortunately it just tells me that I spent $10. There seem to be usage stats for other pro offers by Github, which I don't use though.
Came here to ask the same question. I bet I have 2000+ code completions in a single, fevered, Mountain Dew fueled day of coding, which would be my quota for a month of the free tier.
llama3.3:70b-instruct-q4_K_M (~43 GB, 2x 3090/4090 or fast memory cpu inference on e.g. macs)

or

qwen2.5-coder:32b-instruct-q5_K_M (~23 GB)

or

gemma2:9b-instruct-q6_K (~7.5 GB)

and

https://github.com/bernardo-bruning/ollama-copilot

or alternatively:

https://github.com/ollama/ollama + https://github.com/olimorris/codecompanion.nvim

Or just get yourself a cerebras cluster and run full llama-3.1-405B.
  • paxys
  • ·
  • 2 days ago
  • ·
  • [ - ]
Really hope I don't get my open source maintainer Pro plan bumped down to free.
It didn't get downgraded. If you had a complimentary Copilot Pro plan, you'll continue to have it, so you're good!
  • jart
  • ·
  • 2 days ago
  • ·
  • [ - ]
We still get the pro plan, which saves $10/month. But that's not good enough. Popular open source maintainers deserve their most expensive enterprise plan. Microsoft indemnifies enterprises from me, but they won't indemnify me from enterprises. How is that fair? Is this how they treat the people who make their platform great? Rolling out the red carpet and granting open source developers enterprise privileges is the only way for Microsoft to prove that it's serious.
The FAQ entry about the free tier for open source maintainers is still live here: https://github.com/pricing#i-work-on-open-source-projects-ca...
You can confirm at https://github.com/settings/copilot that you're still on the Pro plan.
That only lasts a year I think? Mine was cut after a while. Although I had stopped using it before that point.
It's renewable.

> Once awarded, if you are still a maintainer of a popular open source project when your initial 12 months subscription expires then you will be able to renew your subscription for free.

source: https://github.com/pricing#i-work-on-open-source-projects-ca...

I assume the recommendation-edit feedback loop is valuable and so is access to code/text that isn’t on GitHub yet? Email says “GitHub and affiliates may use your data for product improvement.” Access to all the little changes in between commits seems valuable. If you’re looking to compete on access to data I imagine this helps.

Experts weigh in pls?

For JetBrains IDEs, the open source CodeGPT plugin works much better. Many more models to choose from.
  • orra
  • ·
  • 2 days ago
  • ·
  • [ - ]
I like the mediocrity of the example they headlined: the user asks for unit tests. Copilot writes only two test cases, so not exactly great coverage. Plus, the test cases use Python's unittest, which isn't as slick as pytest.
I use Copilot a lot for writing specs. I've come to prefer that it only writes one or two specs to start. It's a good way for me to quickly review it has the structure correct. When it throws too many at me at once, it can be harder to make broad tweaks.

As for `pytest`, I just have to remind it I'm using pytest. "Write pytests for this" is sufficient to get it to do what I want.

You can write a prompt stating how tests should be written in your project. It goes in a .vscode file and is consulted any time it's writing a test, even in a conversation. I state where my factory fixtures are to be found.
Well they didn't ask for pytest tests. I wonder if AIs are like bored developers, doing the minimum necessary to complete the task instead of going the extra mile and make things better. Perhaps that will be the differentiator in the future. Or they just need to adjust the pre-prompt.
  • butz
  • ·
  • 2 days ago
  • ·
  • [ - ]
Are there any local models that provide only very limited knowledge, e.g. autocomplete and chat for a single programming language? Or is my thinking incorrect that such a limited model would be smaller and work faster even on CPU?
I believe Jetbrains' built in models work like that but unfortunately they force you to disable any competing AI chat plugins so you lose out on access to Claude etc. if you use them.
  • rvz
  • ·
  • 2 days ago
  • ·
  • [ - ]
Microsoft (unsurprisingly) has won the race to zero.

We have had them 'embrace' the wider developer and open-source ecosystem by buying GitHub.

Then they have 'extended' this with partnerships and deep developer integrations in VSCode and exclusive partnerships with OpenAI which in the background was used to build the best tools on the Microsoft platform with added enhancements and extensions.

Now in the new intelligence age, we finally have the definition of what 'Extinguish' looks like to competitors wanting to compete with the best tools available, For free.

I think you're ignoring the part where an MBA shows up, figures they have a valuable monopoly, and starts its extortion racket.

Realize: most of the digital ecosystem runs on whales, and that's who businesses go after. As long as wealth inequality thrives, that's the enshittification cycle.

Bundling this with GitHub and giving it away for free reminds me of other times they've pulled this same tactic, like with Teams. I don't know why people respect Satya so much; he's just another typical Microsoft monopolist. Others can also act unethically and abuse market position to grow their company.
I'd prefer something where I can bring my own model and pay the API costs directly, rather than yet another $20+/m/dev fee

The free plan is not going to work for professional coding

FYI, you can do that with Cody + Ollama. A good portion of our user community does exactly that: https://sourcegraph.com/blog/local-code-completion-with-olla...
I don't want a local LLM, it is slower, less capable, and slows the computer down generally

What I would like is a deep integration with VS Code using my preferred foundational model

I see Cursor has their own model and support for 2 foundational models, but not my preferred model and they charge a monthly fee.

Supposedly: https://cloud.google.com/blog/products/ai-machine-learning/g...

But do I still have to pay Microsoft $20+ per month? What I really want is pay-per-usage, not pay-for-access plus usage.

How do you disable it again? You can't!

Once you pressed "Start using Copilot" the copilot menu shows up on all repositories within GitHub.com for careless or uninformed users to leak proprietary code to their training set.

Could this be a strategy to get access to more code than what is available in just Github? More training data means better models I assume.
I just enabled copilot and I see in the settings this is checked by default

https://github.com/settings/copilot

> - [x] Allow GitHub to use my code snippets from the code editor for product improvements
>
> Allow GitHub, its affiliates and third parties to use my code snippets to research and improve GitHub Copilot suggestions, related models and product features. More information in About GitHub Copilot privacy.

Should we still be concerned if opting out?

It's been my experience that, with time, opt-outs from SaaS multiply.

People opt out, some PM or whoever decides they need 'metrics' (numbers to spin)... and more categories or qualifications are summarily added. Starting the cycle again.

I assume they'll send a privacy policy update in at most six months. I wouldn't call it concerning. Routine. I won't participate, that's for sure.

I switched to Codeium free tier three months ago and never looked back to Copilot since then. Copilot is a mess on Julia and Elixir/Erlang code.
  • ·
  • 2 days ago
  • ·
  • [ - ]
Yeah, codeium free tier blows copilot paid out of water
Before, GitHub Copilot was a paid feature. This only shows the Cursor team that they are going in the right direction.

The slow elephant enterprise GitHub will never be as good/fast as Cursor; they had their chance, but they joined the "keep the devs under our umbrella with free features" party too late.

If Cursor remains successful, they’re likely to turn into a slow elephant enterprise as well eventually. That’s rather the rule than the exception, unfortunately.
Are there any exceptions? I feel like small companies can be nimble because they don't have as many customers to lose, but larger orgs need to worry much more about existing customers: scale, migrations, backwards compatibility, legal compliance, supply chains...
  • ·
  • 2 days ago
  • ·
  • [ - ]
  • ·
  • 2 days ago
  • ·
  • [ - ]
Microsoft giving away a losing product for free is the exact kind of thing that proves they’re too big. This is just a desperate move to suck oxygen away from startups in this same space that compete with them. We need new antitrust laws ASAP.
  • swah
  • ·
  • 2 days ago
  • ·
  • [ - ]
Couldn't they acquire it for a cool $100M or something? It's a VSCode fork!
Hopefully the people championing this at work will stop trying to get me to switch
They need more training data.
That was my first thought too. The GitHub Copilot CEO recently said in no uncertain terms that he wants to make everyone a dev ("a billion developers"). Of course this is code for making current devs redundant, IMO, similar to how everyone can type these days, making typists redundant. The MBA types really, really want this outcome for all intellectual work.

I can imagine that good enterprise/business-level training data (the layer above the open source widgets we compose) is not as easy to come by on the internet as the open source libraries themselves are. But through their free tool they would get access to it, especially from small scrappy startups that want to save costs. Seeing some startups go from zero to the next big thing on Copilot would be great training data.

I think this is a bad move... I immediately switched, 2000 completions for me (I have it disabled by default on Neovim) is enough, and 50 chats, more than enough I didn't even know they had chat like ChatGPT.
If you've disabled completions and didn't know about chats, what were you using Copilot for?
I have a toggle to enable it back when needed; I found this way better, for me, for staying focused and making fewer errors.

But I do enable it, for example when I'm coding in a language or project I'm not used to. I'd say I use it for about 20% of my coding time, but that 20% is useful: it's when Google doesn't work for me either...

I have been using Cursor, but I am a vim user at heart. What is the best plugin I can use that gives us the power of AI coding, but for vim? Every time I look, the plugins seem to be far behind Cursor.
https://github.com/github/copilot.vim works pretty well for inline autocompletion

https://github.com/CopilotC-Nvim/CopilotChat.nvim is the best I've found for the chat-type interaction. It lets you choose models/etc.

It's still not quite as nice as cursor, but decent enough that I enjoy using them

I've been slightly curious about Avante, but I switched to vim about a year ago and still have other things I'm working on first.

https://www.youtube.com/watch?v=4kzSV2xctjc

Not vim, but neovim:

https://github.com/olimorris/codecompanion.nvim

Haven't used it yet but it supports many models.

  • ·
  • 2 days ago
  • ·
  • [ - ]
The free version has severe usage limits: "Includes up to 2,000 completions and 50 chat requests per month." More of a 7d trial that resets every month.
  • snide
  • ·
  • 2 days ago
  • ·
  • [ - ]
Anyone using the Neovim plugin and suddenly starting to get rate limit errors? I'm already on pro, but it's asking me to upgrade to pro in Neovim?
This happened to me as well. I came back from several weeks off and kept getting these rate limit windows in vim today. I was on the OSS pro plan, so I just figured the gravy train ended and uninstalled it.
I saw a post on Reddit that said this was a bug and they’re working on it.
Started using Supermaven since it was free and Copilot wasn't, but I stayed because it's faster and uses recent diffs to aid in generation
Is there something like a startpage that can act as intermediary and make the queries and responses public? And searchable?
hm.. it seems like I won't use this service for a while as I got the following error message while starting IntelliJ today: You've reached your monthly code completion limit. Upgrade your plan to Copilot Pro (30-day Free Trial) or wait until -4712-01-01 for your limit to reset to continue coding with GitHub Copilot.
Wow 4712 BCE was when the pyramids were new and shiny. Just kidding, it turns out it was the first Julian day. Got some help here:

January 1, 4713 BCE marks the beginning of the Julian Day count. This system was introduced by Joseph Scaliger in 1583 CE.

Scaliger selected 4713 BCE as the starting point because it is the nearest date where three cycles—solar, lunar, and Roman indiction—coincide. These cycles are:

The 28-year solar cycle.

The 19-year Metonic lunar cycle.

The 15-year Roman indiction cycle.
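Those three cycle lengths are pairwise coprime, so the full Julian Period is simply their product; a quick check (Python 3.9+ for the multi-argument `lcm`):

```python
from math import lcm

# The Julian Period repeats when all three cycles realign.
period = lcm(28, 19, 15)  # solar, Metonic lunar, Roman indiction
print(period)  # 7980 years

# Starting in 4713 BCE, the current period runs through 3268 CE
# (-4713 + 7980, plus one because there is no year zero).
print(-4713 + period + 1)  # 3268
```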

So you don't have to pay the yearly fee? Also, did anyone else get the email and have Outlook flag the link as harmful?
Any mobile devs here using Copilot, Cursor or Supermaven? I am curious if you have any recommendations.
I found that AI models really have a hard time with Android native, kept reverting to the latest libraries which don't have broad support. I had more success with Flutter but still needs a decent amount of course correction.
  • flawn
  • ·
  • 2 days ago
  • ·
  • [ - ]
On mobile, my experience was really poor. It really does not perform well for e.g. Flutter. Probably not too much training data on Flutter code.
Copilot is totally useless.
Anyone tried replit? Is it worth the money with free copilot now from Github?
> Now automatically integrated into VS Code

Congratulations to Microsoft for owning enough of the stack to make doing this possible.

Free for up to:

- 50 chats

- 2,000 code completions

per month.

After that it starts at $10/month.

What if I have already paid the annual plan?
Then you are already a paying customer.
Not for vs2022?
Good news
Microsoft would need to pay me to use Copilot. Seems like a major scam for them to learn from our code and then tell us we can’t use it to make our own competing AI systems. “Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.” From https://www.microsoft.com/en/servicesagreement#13r_AIService... TODAY — the ridiculous focus on AI over core business at github is the number one thing likely to kill the platform. They are coasting on network effects
psssst *it's a secret

If You're Not Paying For It, You Become The Product (2012) https://www.forbes.com/sites/marketshare/2012/03/05/if-youre...

These days you will pay for it and still become the product :)
I've been paying for GitHub for years and years, long before they introduced any kind of AI service. I don't use any of their AI products, even though they're now included with my plan. Hopefully that small datapoint shows up as a blip in their metrics somewhere.
"You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service" - it's not like anyone chose to indirectly train these existing systems. Does this mean that anything published online that could be scraped and used to train something is not allowed?
It wouldn't be an online service's T&C document if it didn't include at least one vague, threatening, unworkable, and unenforceable condition.

The useless but true answer is nobody knows what's allowed and what isn't, until it's tested in court. Practically (not being a lawyer, though) I suspect that the clause will never be pursued on its own, because it's bullshit and everyone involved knows it is so.

In your scenario, though, assuming you publish in a way that's not overtly and primarily meant for AI training, I think the "use" of data isn't yours and would be hard to argue as violating the terms of the agreement.

Of course we might take it to the absurd end of this line of reasoning and demand that any code base that Copilot was involved in should have a license term preventing the training of any other AI in it, and we wind up in a place where all AIs are trained on source material they're explicitly licensed not to be trained on, or trained only on a mostly static set of "pre-AI" publications.

Daft stuff.

  • merek
  • ·
  • 2 days ago
  • ·
  • [ - ]
I'm having a hard time determining if my private repo code is used for training their models. The GitHub Copilot VS Code Extension states:

> Your code is yours. We follow responsible practices in accordance with our Privacy Statement to ensure that your code snippets will not be used as suggested code for other users of GitHub Copilot.

IIRC, this statement gave me the initial reassurance I needed to use Copilot many months ago; however, I now feel it could be deceptively reassuring. Does it mean they can use my code for training and for suggestions to other users after changing the variable names?

I tried to dig deeper. The section on "Private repositories" in their Privacy Policy [1] says: "GitHub personnel does not access private repository information without your consent", with exceptions for security, customer support, and legal obligations. Again, this feels deceptively reassuring, since GitHub personnel and GitHub's AI services are separate entities.

In their Privacy Policy, "Code" falls under the definition of "Personal Data" (User Content and Files) [2], and they go on to list lots of broad ways the data can be used and shared.

Unless I've missed anything, and as other commenters have said much more succinctly, I have to assume that there's a real possibility that my private repo code is used to train their models.

[1] https://docs.github.com/en/site-policy/privacy-policies/gith...

[2] https://docs.github.com/en/site-policy/privacy-policies/gith...

It's a good example of how ridiculous the AI training situation is.

They claim it's fair use for them to steal all data they want, but you're not allowed to use AI data output, despite this data literally not being subject to copyright protections on account of lacking a human author.

And especially Github. They already have an enormous corpus that is licensed under MIT/equivalent licenses, explicitly permitting them to do this AI nonsense. All they had to do was use only the code they were allowed to use, maybe put up an attribution page listing all the repos used, and nobody would've minded because of the explicit opt-in given ahead of time.

But no. They couldn't bother with even that little respect.

I wonder how GPLv3 and CC BY SA licenses should be considered when training AIs like this? The model is software, and if it's sufficiently different from the source, it's a derivative work, isn't it?
> it's a derivative work, isn't it

Short answer: unlikely

Serious answer: we'll only know whether it is when someone challenges it in court.

Use tree-sitter, change some identifiers here and there, and some function names too; then you can use the generated data for anything you like. Pass each identifier through the cheapest LLM possible and change it ever so slightly.
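To make the parse-and-rename idea concrete, here's a minimal sketch using Python's stdlib `ast` module in place of tree-sitter (tree-sitter is language-agnostic; this toy only handles Python, and all the names are made up for the example):

```python
# Toy "launder identifiers" pass: parse code, rename chosen identifiers,
# and emit equivalent source. Stdlib `ast` stands in for tree-sitter here,
# so this only works on Python source (3.9+ for ast.unparse).
import ast

class Renamer(ast.NodeTransformer):
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):  # uses of a variable/function
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):  # function parameters
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

    def visit_FunctionDef(self, node):  # function definitions
        node.name = self.mapping.get(node.name, node.name)
        self.generic_visit(node)
        return node

source = "def greet(person):\n    return 'hi ' + person\n"
renamed = Renamer({"greet": "salute", "person": "target"}).visit(ast.parse(source))
print(ast.unparse(renamed))
```

Whether this kind of mechanical laundering actually changes the legal status of the output is, of course, exactly the open question being argued above.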
[dead]
[dead]
  • ·
  • 2 days ago
  • ·
  • [ - ]
  • asdev
  • ·
  • 2 days ago
  • ·
  • [ - ]
I ditched Copilot for Cursor and never looked back. Cursor might be the only AI product other than ChatGPT with real product-market fit; it's that good.
Has anyone who uses an IDE (e.g., JetBrains, not a code editor) moved to Cursor? I've downloaded it a few times because everyone raves about it, but I've always come back almost immediately because editors can't reliably make changes across projects (among many other things)... What am I missing?

FWIW I use GoLand w/ Supermaven, currently.

This is the main sticking point for me, I'm not leaving JetBrains anytime soon. GitHub Copilot + Aider handle my needs beautifully and while I wish Aider had deep IDE integration I can work around that (Yes, I know about the "AI!" command thing, it's a cool idea but sucks in practice). Aider in browser mode has pretty much replaced my back and forth to ChatGPT/Claude's web UI to the point that I'm considering going API-only for both of those (currently pay for the $20/mo plan for each).
Jetbrains is still the leader in all of the small details that make navigating and working a code base easy.

Copy and paste workflow is a minor slowdown, but nothing compared to things like smart links in terminal, auto detection of run configurations, etc, etc.

This. 1000x This.

I watch people navigate code in VSCode and I want to pull my hair out. Things that I don’t even think about are hard and/or require just falling back to search.

And before “there is a plugin for that”: I’m sure there is. I’m sure you can configure VSCode to be just as powerful as IDEA, but the rank and file using it aren’t doing the work to install and configure a bunch of plugins. So, on average, VSCode doesn’t hold a candle to IDEA.

With Aider I skip a lot of the copy/pasting but I’d still copy/paste to the browser before I left IDEA.

>I watch people navigate code in VSCode and I want to pull my hair out.

For me it's the other way around: when I see someone using an IDE instead of a lean editor, I see their struggle. Multiple seconds to open the IDE (sometimes tens of seconds), multi-hundred-millisecond lag when opening a file, noticeable input lag. And when you have to edit a file your IDE doesn't understand, all you have is a bloated notepad.

I know I'm biased, and I intentionally wrote this one-sided to counter your post. In practice, it just depends. Right now in my work I primarily edit scripts (up to a few hundred lines of code), do quick edits to various larger projects - sometimes a few different projects a day - and read files in dozens of programming languages (I only really program in Python and C/C++, but I have to constantly consult various weird pieces of code). VS Code works great for me.

On the other hand, long time ago when I was working on large C# projects, I can't imagine not using Visual Studio (or Rider nowadays I guess).

> sometimes few different projects a day - and read files in dozens of programming language

+1 this is what brought me back to vscode after experimenting with goland. To me vscode better handles the heterogeneity of my daily work. In my workspace I can keep open: a golang codebase, a massive codebase consisting of yaml config files, a filesystem from a remote ssh connection, a directory of personal markdown notes, and directories of debug logs. In my experience jetbrains excelled at the single use case, but vscode won on its diversity.

I will say that the parent comment had me curious about goland again. But I suspect I really need to spend more time configuring my vscode setup. I spent years using emacs, and would love to have a helm-like navigation experience.

Neovim works fine with massive codebases. Telescope is a bit slow sometimes, but given how long ripgrep takes on the same, I assume it’s simply a limitation of memory bandwidth, and not tooling.
  • nick_
  • ·
  • 2 days ago
  • ·
  • [ - ]
For literally multiple years I tried to convince a colleague of mine to try Rider. They're a diehard CLI and VS Code user. I made a video showing my workflow and how quickly I can navigate around and do refactors. Next day they were saying they couldn't believe it took them this long to use something better.
Can you give some examples? I spend the majority of my time reading code nowadays, and I often have 8 or more Sublime Text windows open (each on a separate codebase). I can't imagine how much RAM it would take to do that in CLion or Visual Studio.

Sublime Text's text search is the killer feature for me. CTRL+SHIFT+F and I can search a million LOC+ codebase instantly, navigate the results with CTRL+R, do a sub-search of the results with CTRL+F, set bookmarks with CTRL+F2 (and jump to them with F2/SHIFT+F2), pop the results out to a different window for reference, etc. And all that happens with no jank whatsoever.

The LSP plugins make life easier, but even without that Sublime is often able to find where symbols are defined in a project using just the contextual information from the syntax highlighter.

I tried CLion for a while, but couldn't get productive in it. Of course I'm much more experienced with Sublime, so maybe I just didn't give myself enough time to learn it, but CLion felt sluggish and inefficient. The smart code features are probably more advanced than Sublime's LSP plugins, but I didn't find anything that would make the switch an actual improvement for me.

It’s a lot of small things, so here are some examples:

* Click to find usage is exceptionally good.

* When refactoring, it will also find associated files/classes/models and ask if you want to change them as well. It’s also smart enough to avoid refactoring things like database migrations.

* Click-run just about anything is amazing. I work in multiple languages in multiple code bases. I get tired of figuring out how to install packages, run things, and even get into debug mode. Most major tooling is supported out of the box.

* Debugging. Lots of data types have smart introspection when debugging. It knows that Pandas tables should be opened in a spreadsheet, JSON should be automatically formatted, and you really just want that one property in a class already formatted as a string.

* Built in profilers and code coverage. This is something that’s always annoying to switch tooling with.

* Great Git integration, though that’s probably par for the course.

* Database integration in some toolsets (like Rails). If you need to look at records in the database, it will jump you directly to the table you need with Excel-style filters

* Local history on just about everything. This has saved my butt so many times. You can delete an entire folder and know you can restore it, even if you delete it from Git.

* Automatic dependency change detection. For example, after a pull, it will identify whether new or updated dependencies were pulled. 1-click to install.

* Type hinting in dynamically typed languages

Are you joking about the 8 editors?

I have a laptop I bought 10 years ago. It only has 16 gig of RAM [1].

I have had 8+ editors open, a mix of Visual Studio and VS Code. And in VS you often group all your codebases into single solutions, so I usually have multiple projects open in each IDE window.

It only struggles when I leave the debuggers running for several days, because of a slight (known) memory leak in Visual Studio. There's probably a fix, but reopening the IDE takes like 10 seconds, and it remembers all my open files.

All editors are much better and faster at searching, use less memory, etc. than they were 10/20 years ago.

Everyone's improved. You seem to be a bit stuck with an old impression.

[1] I have a vastly more powerful machine but I keep procrastinating switching my work setup over to it.

You'll be disappointed when you move over to the vastly more powerful machine and the performance improvement is negligible
To add to the other great responses: if you use search a lot, try JetBrains' semantic search. It's like text search, but it works on the parsed structure of the code, so you can find complex usages.

Note that if you work with large projects, it is crucial to give the IDE enough RAM so it doesn't thrash. You can also remove a lot of its default plugins to make it much faster.
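As an illustration of what structure-aware search buys you over plain text search, here's a toy sketch using Python's stdlib `ast` (the `connect`/`timeout` names are made up for the example; JetBrains' implementation is of course its own):

```python
# Structure-aware search: find every call to `connect` that passes
# `timeout=...`, regardless of formatting or line breaks. A plain text
# search for "connect" would also hit the call without a timeout.
import ast

source = """
db = connect(host, timeout=5)
cache = connect(
    host,
    timeout=30,
)
log = connect(host)  # no timeout; text search can't tell this apart
"""

matches = []
for node in ast.walk(ast.parse(source)):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "connect"
            and any(kw.arg == "timeout" for kw in node.keywords)):
        matches.append(node.lineno)

print(matches)  # line numbers of the two calls that pass timeout=
```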

It's always difficult to notice features you don't know are missing.

I'm a near-exclusive user of VSCode (or Codium, at home) and like to think of myself as moderately advanced. I continually update my configurations and plugins to make my workflow easier and often see my peers stumble on operations that are effortless for me. It's hard to explain to them what they're missing until they watch me code. So now I'm curious about watching some typical Jetbrains workflows.

I switched from JetBrains to Cursor after multiple years, and it's nowhere near as bad as the comments here make it out to be.

For most of the rough edges, I've found workarounds at this point.

I miss some of the refactorings, but half the time the AI does them for me anyways.

Smart links in terminal are supported. Detection of run configurations is supported.

My main issues atm are:

- Quick search is inferior. You can't search for folders, and symbol search is global, without the option to exclude certain locations (such as build artifacts) from indexing.

- cspell is more annoying than it is useful. I don't want to babysit my spellchecker. Without extensive configuration, there are far too many false positives.

  • pl-27
  • ·
  • 2 days ago
  • ·
  • [ - ]
I will also stick with JetBrains and am impressed by aider! I'm using this plugin to integrate aider into IntelliJ: https://plugins.jetbrains.com/plugin/25249-coding-aider Still in progress but already pretty useful
Wow! Thank you for calling out that plugin; I hadn't seen it. It is very useful and I love the better integration. I feel like with just a little more QoL work this plugin will be amazing. I am getting a weird hang after Aider "completes": the dialog hangs and takes 20-30 sec before the IDE becomes usable again.

Features I'd love:

* Reproduce the web/browser chat UI in the IDE, this is an easy concept to interact with vs a dialog that goes away after each run

* Provide tabs for multiple chats (each chat can have different files/history/etc)

* Allow multiple Aider processes running at the same time

But this is still super slick as-is, thank you!

For Jetbrains users: consider voting for https://youtrack.jetbrains.com/issue/LLM-2402
Seems like they're adding Claude in January? https://www.jetbrains.com/legal/docs/terms/jetbrains-ai/serv...
Cool! Seems like they could have commented on that ticket. :)
We just need to make another ticket asking them to comment on this one, then ask HN to upvote it!
I’m also a JetBrains person and never really “got” VSCode, so Cursor was not a fit for me; the VS Code keyboard shortcuts always felt limiting. So I use Zed, since it can be configured to use JetBrains keyboard shortcuts. And it’s open source and super fast (Rust-based).
  • yoble
  • ·
  • 2 days ago
  • ·
  • [ - ]
I have the same issue, I tried to get into VSCode a few times but each time switched back to JetBrains.

If your main issue is the keybinding though there is a vscode plugin[1] that recreates Intellij IDEA bindings, which I found helped smooth the transition during my tryouts for me.

[1] https://marketplace.visualstudio.com/items?itemName=k--kato....

Thanks! Maybe this will open me up to cursor/windsurf.
I can't wait for IntelliJ to get where Cursor appears to be. Being able to combine a great IDE with project-level AI coding will be a huge leap forward.
I'm going to feel personally offended if Jetbrains drops the ball on this.

Seriously, this here right now is the precise moment in time where people will either look back at wondering how such a clear leader managed to sink into insignificance, or not.

I love IntelliJ more than my own kids, but if they don't add "the AI does not just talk about what code to create, it actually creates it, across multiple files and folders", then I'm out.

Just yesterday I made Cursor rewrite the whole ui layer of my app from light-mode-only to light-and-dark-mode-with-switcher in one single sweep, in less than 5 minutes (it would have taken me hours, if not days to do it manually), and this is just not feasible if you have to manually copy-and-paste whatever Jetbrains AI spits out.

Jetbrains — Move. Now!

My experience mirrors this exactly. I loved WebStorm, but I can't use it anymore because Cursor is just a massive productivity booster for things like what you're describing.

Claude 3.5 + Cursor has fundamentally improved my productivity. It's worth the 20 dollars a month.

I've written thousands of lines of Vitest tests with it, and they have come out near perfect. It would have taken me days to write those tests by hand, but now I can just generate them and review each to make sure it works.

IntelliJ will have its lunch eaten if it doesn't pursue the Cursor/Windsurf editing modality.

Very impressive. Was it a React application?
That would be fantastic...

I keep them both open on the same project, as there are some things IntelliJ does superbly.

I evaluated Jetbrains AI and Copilot with VSCode, but they just didn't impress me. I tried Cursor, and subscribed a couple of days into the trial. The workflow is just right.

  • asdev
  • ·
  • 2 days ago
  • ·
  • [ - ]
I've tried both Copilot and JetBrains AI with IntelliJ and both are awful compared to Cursor. No multiline editing, no composer, worse at writing tests etc.
I've been a PyCharm user for over a decade, but recently decided to experiment with VS Code-like editors again. I subscribed to Windsurf pro tier, and while it's quite lacking as a traditional IDE, its AI capabilities are incredible. I'm now considering not renewing my PyCharm license next year. Until (if ever) I fully adapt to Windsurf, I'm planning to use both tools together - PyCharm for the features I'm most comfortable with (where I also use "CodeGPT.ee" plugin), and Windsurf for its AI strengths.
Actually writing code is a small part of what I do in the IDE, so I'm not keen on jumping ship to a whole new editor and losing all the IDE stuff. But I do think IDEs need to step up. The editors will slowly get the IDE features (or be able to work around some of them: where an IDE can know your database layout and help you write SQL, the AI helpers can make educated guesses and get close), so the IDEs don't have a big moat.
I keep my projects open in Cursor and Visual Studio at the same time. Cursor is only used to interact with the AI.
This is my approach as well with Cursor and Intellij IDEA.
  • vr46
  • ·
  • 2 days ago
  • ·
  • [ - ]
Yes, I finally decided not to renew my über-cheap JetBrains subscription in January and shift to VSCode + Copilot with Claude, then tested Cursor and dropped Copilot. Cursor is still a bit frustrating and gets trapped in circular reasoning a lot, but its features are good, like creating files.
  • lowkj
  • ·
  • 2 days ago
  • ·
  • [ - ]
I had been using JetBrains (Webstorm and PyCharm before that) for years now and just switched to Cursor a few months ago. I never liked the way AI copilots tacked onto the IDE worked and preferred to just use standalone ChatGPT. Been very happy with the Cursor experience, big improvement in UX from my perspective.
I personally jumped between Rider and Cursor for a while, and finally settled with Rider.

However I think JetBrains should be worrying. I've been using and paying for JetBrains products for many years and Cursor is the only thing making me consider switching from it.

I've been using https://www.codegpt.ee with the Jetbrains IDEs (mainly PyCharm) and I'm pretty happy with it. You can also bring your own API key.
Yes, for non-agentic AI assisted development flow, CodeGPT is better than most IntelliJ plugins, inculding JB's own Assistant.
Tried to move from JetBrains to VS Code with the new Copilot Edit mode. Moved back to JetBrains and the horribly outdated copy-paste workflow ... for now. Disruption is imminent, though. (And yes, JetBrains' AI integration got better, but the 'insert code at cursor' option is laughable. There is a command to generate code right in the editor too, but not from chat. Also $10 extra.)

PS: One UI feature that I haven't seen anywhere yet is not splitting the screen into code and chat, but unifying them.

Also, a common approach (e.g. in openai composer) seems to be to modify the file line by line, which takes a very long time if the file is long. Which (agentic) tools use a diff approach instead?

Jetbrains is getting laughably bad at normal autocompletion.

Autocompletion is losing features every day, it seems. Jetbrains might either be removing features to move them to AI, or breaking features involuntarily in the rush to implement AI.

Either way, AI is killing Jetbrains.

Oh that's why; I thought my copilot got worse. I think it got replaced by the worse Jetbrains AI. Can I selectively deactivate it?
> which (agentic) tools use a diff approach instead?

aider

Same, would love all of these awesome AI features, but I have to use Rider.
Yeah, agreed.

Personally, I'm using Goland for editing, along with Zed for its AI assistant. I just have the same project open in both, and do any AI editing in Zed.

I really like the AI UX in Zed with the chat window on the right being context for inline assist (and being able to easily include tons of files in the context window).

Automatic multiline inline edits blew my mind. But the AI's eagerness to edit something frustrated me enough that I no longer use it.
This has been a sticking point for me since my workplace told us all to start using the new copilot licenses they bought. The ideal workflow for me is usually inline question-and-answer but it tends to insist on editing the code, often in ways that do more than I actually wanted.
Unfortunately for me, in the enterprise it was impossible to convince them to allow us to use Cursor: they're not well known like GitHub/Microsoft, so there's fear about what they'd do with our data.

We are allowed Copilot and Chatgpt because of an enterprise contract with each. Luckily copilot has been improving but there definitely was a while where it felt way behind the jumps other products have had in the past year or so.

... and they aren't concerned with what microsoft is doing with it?
Can Cursor sign a DPA in addition to an enterprise contract where they actually have to put into concise writing what they are and aren't doing with our data? Microsoft will. Amazon will. But in my experience a lot of smaller players can't or won't. That's a legal blocker.
This is such a common lazy cynical take on HN. “Microsoft is evil, therefore they will destroy their enterprise customer relationships by stealing their enterprise customer data, despite the fact that they explicitly state they will not do that.”

I would understand this mindset when it comes to consumer uses of it, but enterprise is where Microsoft makes its money and it would have to be the dumbest business decision ever to ruin the enterprise cash cow by doing that.

Please reconcile that idea with o365.

There are areas where Microsoft knows (or at least behaves as though) they have an effective unchallenged monopoly, and off boarding is too costly an endeavour.

“Please reconcile that idea with o365”

Please explain. Vague insinuations mean nothing.

  • hex3
  • ·
  • 2 days ago
  • ·
  • [ - ]
[flagged]
My company isn't. Apparently, "We've heard of Microsoft" == "They won't do anything wrong with our data" in the minds of non-technical lawyer types.
Microsoft already has access to all of your code on Github.
Many big companies are using GitHub Enterprise Server or some other self-hosted version control.

GitHub Copilot works fine with these; just authenticate against github.com first

and your partner network, and your commit patterns, and your bug list, and your remote IP address while working, and you have to authenticate to them each time you use it (which means that they can turn off your access)
Microsoft is a monopoly and they've done everything they can to scare people about AI.
Part of the hangup is that startups are shit at dealing with enterprise privacy contracts, whereas Microsoft probably has a whole department for that.

A product can work however it works, good or bad, but if you can't wrap contractual guarantees around it that are palatable to your enterprise customers, you're not going to get enterprise sales.

What's currently the best local model one can run? Codestral?
Qwen2.5-Coder.
I just downloaded Cursor yesterday.

I was wondering if you have used Copilot Edits and can compare it to Cursor Composer? Is Cursor much more superior when it comes to multi-file edits?

Do you also have tips for how to give Cursor some documentation? Is there a way to make it RAG over a folder of markdown documentation files?

  • rfoo
  • ·
  • 2 days ago
  • ·
  • [ - ]
The actual killer feature for me is the much superior autocompletion: Cursor always suggests an edit inline once I've stopped typing, without making me type a prompt to ask it to do stuff. And it feels faster than GH Copilot, too.

For those of us who prefer coding ourselves (instead of telling LLMs to do stuff and reviewing the result), it is much, much better.

But Copilot does exactly the same? You don't need to prompt anything.
  • clvx
  • ·
  • 2 days ago
  • ·
  • [ - ]
Exactly. This is how it works in vim/nvim using the copilot.vim plugin. Unless it's refactoring multiple files, I don't see the value. Now that you can choose your own model, I don't see many benefits with Cursor.
  • rfoo
  • ·
  • 2 days ago
  • ·
  • [ - ]
No, Copilot is just dumb, for example, I have the following code:

    func(a, b, 1)
    func(b, c, 1)
and, say, for some reason I need to swap the order so the first line looks like func(1, a, b). After doing that, without even moving to the second line, Cursor just suggests changing the second line to func(1, b, c).

You just can't make Copilot do this. Even after you move to the second line, it won't suggest anything. It just suggests a completion starting from where your cursor is, instead of an inline edit around where you are at.

Sometimes you can delete everything after func( on the second line and Copilot will finish it, but sometimes it just can't and decides to autocomplete something irrelevant (e.g. func(1, e, f)).

In this case there's not much intelligence needed, but for more complicated changes Cursor does just as well.

So you're talking about Copilot suggesting edits rather than inserts. Yes, I agree, that's a major deficiency of Copilot. Thanks.
  • k__
  • ·
  • 2 days ago
  • ·
  • [ - ]
Yes, sounds like Copilot but without the benefit of using my favourite editor.
Cursor is literally just a reskinned VSCode. At this point Copilot is playing catch-up to them and gaining ground quickly.
Okay, this 100% echoes my (limited) experience, and I'm glad I'm not imagining things.
I need to try it again. When I tried it I found it was kind of overblown and went back to copilot. I also wasn't sure whether to maintain two IDEs - Cursor for scaffolding new projects, and VScode for daily driving.
  • siva7
  • ·
  • 2 days ago
  • ·
  • [ - ]
I don’t get it. Cursor looks to me about the same as VS Code + Copilot. What am I missing?
Copilot is basically just autocomplete for code with a ChatWindow on the side.

Cursor has that, but also other modalities, and it just has a better UX for the core shared experience. Cursor also has its own models, which it mixes with APIs to the big well-known models.

For example…

1. Cursor’s chat window generates patches (multi-line, multi-file, non-contiguous) that can be applied directly to the file in the editor with a click, instead of requiring you to copy/paste the chat results. It lets you apply parts of the patches too.

2. Cursor supports multi-line changes (which show up as autocomplete) where the lines aren’t contiguous. For example, if you rename a field in a class, the AI will propose a rename for the getter/setter methods, as well as all the uses of said field/method. It’s like all the familiar refactoring tools available in IDEs, but AI-powered, so it works by fuzzy matching.

3. Building off 2, they will use AI to move your cursor around (it’s not intrusive).

4. Cursor supports BYO keys for OpenAI, Gemini, Anthropic, etc. They host their own model which you’d need to pay to access though.

5. They support AI autocomplete and conversation in the command line. Helpful for remembering commands or if you need to change a command to test some change you’d made.

There might be more but this is what sticks out.
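On point 1, the "patch" the chat produces is conceptually just a unified diff against the open buffer. A minimal sketch of the idea using Python's stdlib difflib (hypothetical file contents; this is not Cursor's actual implementation):

```python
# Generate a unified diff between a buffer's old and new contents -- the
# same shape of patch a chat-driven "apply" button would consume.
import difflib

before = ["def greet():\n", "    print('hello')\n"]
after  = ["def greet(name):\n", "    print(f'hello {name}')\n"]

patch = list(difflib.unified_diff(before, after,
                                  fromfile="a/app.py", tofile="b/app.py"))
print("".join(patch), end="")
```

Applying only parts of a patch, as the UI allows, then amounts to selecting a subset of hunks.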

Sticking with Jetbrains (Pycharm) which has good CoPilot integration (has an extension for Claude as well but I haven't gotten it working). Tried Cursor but I didn't find it compelling enough to switch.
Looks nice but if it phones home then my boss won't let me use it.
If you pay for the "Business" license, then you get some guarantees about privacy + security. It was a fight, but we eventually got the go-ahead from our legal team and are able to use it. It's pretty nice.
One of the problems is that we'd have to ask our own clients for permission too.
Try Microsoft Copilot on iPhone/ Android (not GitHub Copilot, this is a general AI product that is not for coders).

Getting answers (with references) to questions *destroys* using Google.

Yeah I'm happy to check it out, Cursor isn't perfect, but at least it feels like it was created by people who code for a living versus people tasked to inject a LLM into an IDE.
Isn't cursor a fork of vscode?
I went from VSCode to Cursor 3 months ago. Might come back to VSCode for this, but so far, Cursor feels snappier than copilot.
ChatGPT doesn't have PMF

Why do I say this? Because their cost to deliver their product exceeds their revenue. That tells me the product (as it is today) does not match the market demand.

midjourney definitely has PMF.
  • ·
  • 2 days ago
  • ·
  • [ - ]
For those looking for a free coding assistant they can also use at work / in the enterprise, Cody has had a free tier for a while: https://sourcegraph.com/cody

- Works with local models

- Context-aware chat with very nice ergonomics (we see consistently more chats per day than other coding assistants)

- Used by both indie devs and devs at very large enterprises like Palo Alto Networks

- Hooks nicely into code search, which is important for building a strong mental model inside large, messy codebases

- Open source core

  • ·
  • 2 days ago
  • ·
  • [ - ]
I actually closed my GitHub account because I don't want to support AI or have my code trained on AI. Now I'm extra happy I shut it down.
I recently learned about sourcehut.org, which has "No AI features whatsoever"
Nice! I am definitely using this.
  • norir
  • ·
  • 2 days ago
  • ·
  • [ - ]
Unfortunately sourcehut has its own cultural issues.
It’s quite easy to set up cgit, laminar and gitolite to self host.
  • bpye
  • ·
  • 2 days ago
  • ·
  • [ - ]
Maybe I’m out of the loop, what happened there?
I interpreted "culture" in two ways, one technical and one administrative

Technical: https://news.ycombinator.com/item?id=23038520 with the tl;dr of "patches over email is king, if you want fancy web stuff go elsewhere"

Administrative: https://sourcehut.org/blog/2022-10-31-tos-update-cryptocurre... https://news.ycombinator.com/item?id=33403780

> I don't want to support AI or have my code trained on AI.

What are your arguments for this?

1. I believe AI is detrimental. It makes us go too fast. It's all about production now, pure efficiency over individuals.

2. AI is too dangerous. Whatever innovations used for benign applications will be eventually used for more dangerous applications such as advanced genetic engineering and the military.

3. AI uses too much energy. It's disrespectful to the resources that we have.

4. AI is an apex technology amongst technologies designed to further enrich the elite and strengthen the power structure.

5. AI will also be used to completely replace workers at a speed much faster than other automations and I don't agree with that. The new jobs that have been created are demeaning such as "AI Prompt Engineer".

6. AI is one step closer to technology creating autonomous technology, and that's a bad thing.

Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.

I strongly agree with your points 1, 3, 4 and 5, and I would add another one:

7. This idea of "AI" and how it is expected to be used is detrimental to human intellectual development, particularly for younger generations, and the presumption that AI will solve everything may actually bring us closer to the world of Idiocracy.

I agree with that. I think AI may not make us dumber in every way, but it certainly will make us dumber when it comes to being able to plot out independent, large-scale solutions. We will be as dependent on AI for certain sorts of decision-making as we are on water treatment to treat our polluted water sources.
There are a couple of sentences in "Dune" about this:

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

Granted, most of us are not choosing this.

We turn our money over to investments, hoping this will set us free.

That's a new thing in the world, ordinary people investing in their savings, 401k, retirement, mortgages, index-linked accounts. Not many hundreds of years old, but people advise it as if it's as solid as the mountains. And work for 50 years watching the numbers go up for the carrot of freedom at the end.

I would say we have already chosen this.
  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
My views exactly. I'll add two more:

- AI output is taken as an ultimate source of truth despite frequently, and dangerously, getting details wrong. Fact-checking is abdicated as a personal responsibility, while products are simultaneously marketed and designed for people who are weak at, or otherwise indifferent to, critical thinking. (This is similar to social media products telling their users to "consume responsibly" while being designed to be as addictive as possible.)

- AI is expensive. Microsoft, Google and Meta are the only companies that can afford to train models at this scale. I don't feel comfortable allowing these companies to be the ultimate arbiters of truth behind the scenes.

Thank you for your reply.

I am currently having a ride with chatgpt allowing me to write applications at 3 times the speed compared to before (where before may be "never" for some technologies) and I am happy for everyone contributing to this.

But all your points are well grounded, I will have to figure out a way to think about them, while keeping my day job.

ChatBLT and Copilot break every license of every repo they were trained on. Even the most liberal project license states you have to include, and not modify, the license file. So you’re glad for code thieves. Interesting…

So to give you the benefit of the doubt that you’re not just another dude who has hitched their financial wagon to this current AI slopfest, I just retried a question about writing OSSEC rules and the response, while convincing looking, was completely wrong. Again.

I am sorry you are so bitter. My reply did contain some ambivalence.
I don't begrudge you for trying to keep your job. I myself do things for my own job that I consider questionable. I guess it's all something we should think about.
Faster is not necessarily better, and if 2/3 of your value comes from LLMs, that doesn’t bode well for job security.

There’s a lot that engineers can do that is well beyond the limits of LLMs. If you really want to keep your day job, I would really commit yourself to that gap while you can!

> If 2/3 of your value comes from LLMs

I would put that phrase the other way around.

The time freed here gives me more time to spend on what actually brings value.

My primary job is not to write applications.

And if it was, I would not include "the process of editing lines of code" in my job description.

I am not afraid of being fired, but at the same time there is no discussion at my workplace about the ethics of using AI, or whether ethics is a good reason not to.

Ah, my bad. Mentioning writing applications 3x faster, and the narrative ties to a day job, led me to assume your day job was writing applications :)
Given that AI output can't be copyrighted, how do you protect and distribute the project you are working on?
I don't know? I guess I don't worry about that. Maybe I am Borg.
  • rvz
  • ·
  • 2 days ago
  • ·
  • [ - ]
All of the above. It is even worse than crypto.

The AI proponents that support this have no serious solution to the mass displacement of jobs thanks to AI. They actually don't mention any alternative solutions and instead scream about nonsense such as UBI which has never worked at a large sustainable scale.

> Society needs to slow down and find alternative, more sustainable solutions. AI is aligned with short-term economic efficiency and that is detrimental.

I don't think they can come up with sensible alternatives or any sustainable solutions to the jobs displaced as there is no alternative.

Curious: where was UBI ever tried at a large scale?
If robots do all the work (which is ultimately what all the lost jobs are about), why would UBI be unsustainable at scale?
Because the rich and powerful people who will reap the most benefit from all the automation will not redistribute the wealth to the now-useless ex-labor force.
Oh, they will. The only question is whether this will happen sooner and voluntarily, or later with torches and pitchforks.
Better bring out the pitchforks earlier while we still have access to them!!
  • ·
  • 2 days ago
  • ·
  • [ - ]
> AI is an apex technology amongst technologies designed to further enrich the elite and strengthen the power structure.

This one I somewhat agree with. Ideally these technologies are owned by nobody.

Though it does give me hope when I see Facebook of all companies leading the charge in regards to open sourcing AI. The fact that their business model incentivizes them to do this, is good (lucky) for everyone, whatever your other opinions of the company are.

This is a very good starting list.
AI is a technology that, in principle, makes it possible to have a "Star Trek communism" society.

I agree with you that it can also be abused to make the existing state of affairs even worse. But if we resist technical progress on this basis, we'll never get better, either.

I think Star Trek, while being one of my favourite shows, didn't take into account human instinct, which may in fact be incompatible with AI.
Iain Banks' The Culture is perhaps a better description of such a hypothetical society.

What human instincts do you have in mind, and how are they incompatible with AI?

  • Diti
  • ·
  • 2 days ago
  • ·
  • [ - ]
I have the same opinion as the person you replied to. My arguments are basically that I don’t support plagiarism, and LLMs/diffusion models of our generation have been trained on a massive corpus of copyrighted material, ignoring the fundamentals of the Berne Convention.

I belong to an internet community whose artists make up a third of its population – they are mostly hostile to generative art and never gave their consent for their art to be plagiarized, yet their art styles end up on Civitai and their content shows up on haveibeentrained.com.

Personally, my hostility towards generative art would stop if training was opt-in, and I would use GitHub again if it, AT LEAST, allowed members to opt-OUT.

GP doesn’t actually need arguments for it. As change agents AI companies need to argue why they should be allowed to train on others’ code, and clearly in GP’s case they’ve failed to meet the burden of proof.
Private repos exist for a reason. It's reasonable that if you don't want humans seeing your code that machines should also not see your code.
You are welcome. I probably am not nearly as good as you. It was just a few programs I made in my spare time like some games and a Sudoku solver. I am sure if I were elevated to your coding level, my departure from GitHub would be a true loss. I hope one day I can reach a tenth of your level.
soon: copilot generates a comment recommending a SaaS product 25% of the time. Pay $5 month to disable ads-comments
"Free": your code is the only price they require.
Looks like Microsofties are sensitive birches. Why would you downvote the truth?
This is great as an entry point to programming and also good for startups.

And with the recent addition of o1 in Cursor, the price of a mid-senior is set to around $20.

It has been a while since my business needed to hire senior engineers; I only need around 1 or 2, with the rest being interns using Copilot or Cursor.

This is a great time to build projects and get into programming for everyone.