So many new models come out every month. My new favorite is GLM-4.5... Kimi K2 is also good, and Qwen3-Coder 480B (or the 2507 Instruct variant) is very good as well. All of those work really well in any agentic environment or agent tool.
I made a context helper app ( https://wuu73.org/aicp ) that helps me jump back and forth between my IDE and all the different AI chat tabs I have open (which are almost always totally free, and where I get the best output). The app tries to remove all the friction and annoyances of working with the native web chat interfaces for all the AIs. It's free and has been getting great feedback; criticism welcome.
It smooths the trip from IDE <----> web chat tabs. I made it for myself to save time, and I prefer the UI (a PySide6 UI is so much lighter than a webview).
It's got preset buttons to add text you find yourself typing very often, and per-project saves of the app's window size and which files were used for context, so next time it opens in the same state.
It auto-scans for code files, guesses which ones are likely needed, and has a prompt box that can put your text above and below the code context (which seems to help make the output better). One of my buttons is set to: "Write a prompt for Cline, the AI coding agent, enclose the whole prompt in a single code tag for easy copy and pasting. Break the tasks into some smaller tasks with enough detail and explanations to guide Cline. Use search and replace blocks with plain language to help it find where to edit"
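Those plain-language search-and-replace blocks are simple to reason about. Here's a hypothetical sketch of how an agent might apply one; this is just the idea, not Cline's actual implementation:

```python
def apply_edit(source: str, search: str, replace: str) -> str:
    """Apply one search-and-replace edit block to a file's contents.

    Refuse ambiguous edits: the search text must appear exactly once,
    mirroring how agent tools guard against editing the wrong spot.
    """
    if source.count(search) != 1:
        raise ValueError("search block must match exactly once")
    return source.replace(search, replace)
```

For example, `apply_edit("x = 1\ny = 2\n", "y = 2", "y = 3")` returns the file with just that one line changed, and an edit whose search text matches twice (or not at all) fails loudly instead of corrupting the file.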
What I do for problem solving and figuring out bugs: I'm usually in VS Code, and I type aicp in the terminal to open the app. I fine-tune any files already checked, type what I'm trying to do or what problem I have to fix, click the Cline button, then click Generate Context!. I paste into GLM-4.5, sometimes o3 or o4-mini, GPT-5, or Gemini 2.5 Pro; if it's a super hard thing I'll try 2 or 3 models. I'll see which one makes the most sense and just copy and paste it into Cline in VS Code, set to GPT-4.1, which is unlimited/free. 4.1 isn't super crazy smart or anything, but it follows orders... it will do whatever you ask, reliably. AND it will correct minor mistakes in the bigger model's output. The bigger, smarter models can figure out the details, and they'll write a prompt that is a task list with how-tos and whys, perfect for 4.1 to go and execute in agent mode.
You can code for free this way, unlimited, and it's the smartest the models will be. Any time you throw tools or MCPs at a model, you dumb it down... AND you waste money on API costs by having to use Claude 4 for everything.
vs
> If you set your account's data settings to allow OpenAI to use your data for model training
So, it's not "for free".
If it's not your job: do we "have to" find this way? What's the opportunity cost compared to a premium subscription or to using non-state-of-the-art tools?
If it is your job: it's putting food on the table. So it should be a relatively microscopic cost to doing business. Maybe even a tax write-off.
What are they building? A training corpus.
Are people who respond to their ads getting the money for free?
Handing your codebase to an AI company is not nothing.
It's a battle that was already lost a long time ago. Every crappy little service by now indexes everything. If you ever touch GitHub, Jira, Datadog, Glean (god forbid), Upwork, etc., they each have their own shitty little "AI" thing, which means what? Your project has been indexed, bagged, and tagged. So unless you code from a cave without using any SaaS tools, you will be indexed no matter what.
To your point, "free from having to spend money" is exactly it. It's paid for with other things, and I get that some folks don't care. But being more open about this would be nice. You don't typically hide a monetary cost either; anybody trying to do that is rightfully called out as running a scam. Doing the same with non-monetary costs would be a nice custom.
If it helps, think about those companies' own selfish motivations. They like money, so they like paying customers. If they promise those paying customers (in legally binding agreements, no less) that they won't train on their data... and are then found to have trained on their data anyway, they won't just lose that customer - they'll lose thousands of others too.
Which hurts their bottom line. It's in their interest not to break those promises.
No, they won't. And that's the problem in your argument. Google landed in court for tracking users in incognito mode. They also were fined for not complying with the rules for cookie popups. Facebook lost in court for illegally using data for advertising. Did it lose them any paying customer? Maybe, but not nearly enough for them to even notice a difference. The larger outcome was that people are now more pissed at the EU for cookie popups that make the greed for data more transparent. Also in the case of Google most money comes from different people than the ones that have their privacy violated, so the incentives are not working as you suggest.
> Going through life not trusting any company isn't a fun way to live
Ignoring existing problems isn't a recipe for a happy life either.
Your examples also differ from what I'm talking about. Advertising supported business models have a different relationship with end users.
People getting something for free are less likely to switch providers over a privacy concern than companies paying thousands of dollars a month (or more) for a paid service under the understanding that it won't train on their data.
"If the penalty is a fine, it's legal for the rich." These businesses also don't want to pay taxes or even workers, but in the end they will take the path of least resistance. If they determine that fighting in court for 10 years is more profitable than following regulations, then they'll do it.
Until we start jailing CEOs (a priceless action), this will continue.
>companies is paying thousands of dollars a month (or more) for a paid service under the understanding that it won't train on their data.
Sure, but are we talking about people or companies here?
In the context of the original thread here: If all you need to do is go to jail then whatever that's for was "for free"!
The underlying problem is that we have companies with more power than sovereign states, before you even include the power over the state the companies have.
At some point in the next few decades of continued transfer of wealth from workers to owners, more and more workers will snap and bypass the courts. This is what happened with the original fall of feudalism and warlords. It wasn't guaranteed, though: if the company owners keep themselves and their allies rich enough, they will be untouchable, same as drug lords.
Isn't that the Hacker mindset, though? We want to trailblaze solutions and share them with everyone for free. Always in liberty, and oftentimes in beer too. I think it's a good mentality to have, precisely because of your lens of selfish motivations.
Wanting money is fine. If it were some flat $200 or even $2000, with legally binding promises that I have an indefinite license to use this version of the software and they won't extract anything else from me: then fine. Hackers can be cheap, but we aren't opposed to barter.
But that's not the case. Wanting all my time and privacy and data, under the veneer of something hackers would provide with no or very few strings, is not. Using tricks to push people into that model is all the worse.
> If they promise those paying customers (in legally binding agreements, no less) that they won't train on their data... and are then found to have trained on their data anyway, they wont just lose that customer - they'll lose thousands of others too.
I sure wish they did. In reality, they get a class action, pay off some $100m to lawyers after making $100b, and the lawyers maybe give me $100 if I'm being VERY generous, while the company extracted $10,000+ of value out of me. And the captured market just keeps on keeping on.
Sadly, this is not a land of hackers. It is a market of passive people from various walks of life: of students who do not understand what is going on under the hood (I was there when Facebook was taking off), of businessmen too busy with other things to understand the sausage factory, of ordinary people who just want to fire and forget. This market may never even be aware of what occurred here.
I am very skeptical anytime something is 'free'. I specifically avoid using a free service when the company profits from my use of the service. These arrangements usually start mutually beneficial, and almost always become user hostile.
Why pay for something when you can get it for free? Because the exchange of money for service sets clear boundaries and expectations.
If you're fine with compromising your privacy and having others extract wealth from you, you can go the "free" route.
I got a free pizza just for coding a little app. That saved me a lot of money.
So yes, it is free.
This sounds pedantic, but I think it's important to spell this out: this sort of stuff is only free if you consider what you're producing/exchanging for it to have 0 value.
If you consider what you're producing as valuable, you're giving it away to companies with an incentive to extract as much value from your thing as possible, with little regard towards your preferences.
If an idiot is convinced to trade his house for some magic beans, would you still be saying "the beans were free"?
As for sharing code: most parts of a project/app/whatever have already been done, and if an experienced developer hears what your idea is, they could just build it and figure it out without any code. The code itself doesn't really seem that valuable (well.. sometimes). Someone can just look at a screenshot of my aicodeprep app and make one, and make it look the same too.
Not all the time of course - If I had some really unique sophisticated algorithms that I knew almost no one else would or has figured out, I would be more careful.
Speaking of privacy.. a while back a thought popped into my head about Slack and all these unencrypted chats businesses use. It does seem kind of crazy to run all your business operations over unencrypted chat and Slack rooms. I personally would not trust Zuckerberg not to look in there and run lots of LLMs over all the conversations to find anything 'good'! Microsoft I kind of doubt would do that on purpose, but what's to stop a rogue employee from finding out trade secrets, etc.? I'd be surprised if it hasn't been done. Security is not usually a priority in tech. They half-ass care about your personal info.
To some extent. But without your codebase they will make different decisions in the back which will affect a myriad of factors. Some may actually be better than your app, others will end up adding tech debt or have performance impacts. And this isn't even to get into truly novel algorithms; sometimes just having the experience to make a scalable app with best practices can make all the difference.
Or the audience doesn't care and they take the cheaper app anyway. It's not always a happy ending.
Hackernews is free. The posts are valuable to me and I guess my posts are valuable to me, but I wouldn't pay for it and I definitely don't expect to get paid.
For YC, you are producing content that is "valuable" that brings people to their site, which they monetize through people signing up for their program. They do this with no regard for what your preferences are when they choose companies to invest in.
They sell ads (Launch, Hire, etc.) against the attention that you create. You ARE the product on HackerNews, and you're OK with it. As am I.
Same as OpenAI, I dont need to monetize them training on my data, and I am happy for you to as I would like to use the services for free.
At this point, we may need future forums to be premium so we can avoid the deluge of AI bots plaguing the internet. A small, one-time cost is a guaranteed way to make such strategies untenable. SomethingAwful had a point decades ago.
But like any other business, you need to follow the money and understand the incentives. Hackernews has ads, but ads for companies with us as the audience. It's also indirectly an ad for YCombinator itself as bringing awareness of the accelerator (note what "hackernews.com" redirects to).
I'm fine with a company advertising itself; if I weren't, the very idea of a company ceases to function. And in this structure, I can also benefit by potentially getting jobs from here. So I don't mind that either. Everything aligns; I agree with and support the structure. I can't say that about many other "free" websites.
As for me: I do want to monetize my data one day. I can't stop them from scraping the entire internet (that's for the courts), but I sure as heck won't hand it to them on a silver platter.
It wouldn't ever be worth me getting $.0001431 for my data, and individual data will always be worthless on its own, because 1. taking away one individual's data does not make the model worse, and 2. the price of an individual's data will always be zero, because you have people like me who are willing to give it away for free in exchange for a free service (aka Hacker News or IG).
One user's LTV on IG may be $34, but one user's data is worth $0. Which I think a lot of people struggle with.
From a more moral standpoint, the best part about the advertising business model is that it makes the internet open to everyone, not just those who can pay for every site they use.
I will even use an ad example with conventions and festivals. You can argue an event like Comic-con is simply a huge ad. And it is. But I'm there "for the ad" in that case. It gathers other people "for the ad". It collectively benefits all of us to gather and socialize among one another.
Ads aren't bad, but many ads primarily exist to distract, not to facilitate an experience. And as a hot take, maybe we do need to gatekeep a bit more in this day and age. I don't want a "free intent" if it means 99% of my interactions are with bots instead of humans. If it means that corporations determine what is "worthy" of seeing instead of peers. If credit cards get to determine what I can spend my money on instead of my own personal (and legal) taste.
>It wouldn't ever be worth me getting $.0001431 dollars for my data and individual data will always be worthless on it's own
On top of being a software engineer who's contributed millions in value with my data, I also strive to be an artist: an industry that has spent decades being extracted from while most often not being compensated a living wage. People can argue that "art is worthless," yet it props up multiple billion-dollar industries on top of societal culture. An artisan these days can even sustain themselves as an individual, with much faster turnaround than programming a website or app.
By all metrics, it's hard to argue this sector's value is zero. Maybe having that lens only strengthened my stance, as a precursor to what software can become if you don't push against abuse early on.
In any case, not caring about the cost (at a specific time) doesn't make the cost disappear.
Privacy absolutely does not matter, until it does, and then it is too late
So no, it's not free.
By your logic, are the paid plans not sometimes free?
Not sure how you got it for free?
Meta has free and generous APIs for the crappy Llama 4 models... they're okay at summarizing things, but I have no idea if they're any good for code. Probably not, since no one even talks about those anymore.
zai[.]net -> zainet -> zainetto -> which is the Italian word for "little school backpack"
I would be very interested in an in-depth account of your experiences with the differences between Roo Code and Cline, if you feel you can share that. I've only tried Roo Code (with interesting but mixed results) thus far.
Not sure if GLM-4.5 Air is good, but the non-Air one is fabulous. For free API access there is the Pollinations AI project; also llm7. If you just use the web chats, you can use most of the best models for free without an API. There are ways to 'emulate' an API automatically... I was thinking about adding this to my aicodeprep-gui app so it could automatically paste and then cut. Some MCP servers exist that will automatically paste to or cut from those web chats and route the result to an API interface.
OpenAI offers free tokens for most models, 2.5 million or 250k depending on the model. Cerebras has some free limits, Gemini too... Meta has a plentiful free API for Llama 4 because, let's face it, it sucks, but it's okay/not bad for things like summarizing text.
If you really wanted to code for exactly $0, you could use Pollinations AI in the Cline extension (for VS Code), set to use "openai-large" (which is GPT-4.1). If you plan on using all the best web chats, like Kimi K2, z.ai's GLM models, Qwen 3 chat, Gemini in AI Studio, and the OpenAI playground with o3 or o4-mini, you can go forever without being charged money. Pollinations' 'openai-large' works fine in Cline as an agent to edit files for you, etc.
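For what it's worth, "works in Cline" here just means the provider speaks the standard OpenAI-style /chat/completions protocol. A minimal sketch of the request body such a client sends; the "openai-large" model name comes from the comment above, and the exact Pollinations endpoint URL is not verified here, so only the payload is shown:

```python
def build_chat_request(prompt: str, model: str = "openai-large") -> dict:
    """Build the JSON body for a POST to any OpenAI-compatible
    /chat/completions endpoint (Cline and similar agents speak
    this same protocol regardless of provider)."""
    return {
        "model": model,
        "messages": [
            # System message steers the model; user message carries the task.
            {"role": "system", "content": "You are a careful coding agent."},
            {"role": "user", "content": prompt},
        ],
    }
```

Pointing a client at a different provider is then mostly a matter of swapping the base URL and model name while keeping this body identical.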
It's called SelectToSearch and it reduces my friction by 85% by automating all those copy paste etc actions with a single keyboard shortcut:
https://apps.apple.com/ca/app/select-to-search-ai-assistant/...
I would not recommend it to anyone.
For simple changes I actually found smaller models better because they're so much faster. So I shifted my focus from "best model" to "stupidest I can get away with".
I've been pushing that idea even further. If you give up on agentic, you can go surgical. At that point even 100x smaller models can handle it. Just tell it what to do and let it give you the diff.
Also I found the "fumble around my filesystem" approach stupid for my scale, where I can mostly fit the whole codebase into the context. So I just dump src/ into the prompt. (Other people's projects are a lot more boilerplatey so I'm testing ultra cheap models like gpt-oss-20b for code search. For that, I think you can go even cheaper...)
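The "just dump src/ into the prompt" step is a few lines of script. A minimal sketch; the skip-list and extension set here are assumptions to adjust per project:

```python
from pathlib import Path

# Directories and extensions are illustrative defaults, not a standard.
SKIP_DIRS = {".git", ".venv", "node_modules", "__pycache__"}
CODE_EXTS = {".py", ".js", ".ts", ".go", ".rs", ".c", ".h"}

def dump_tree(root: str) -> str:
    """Concatenate every source file under root into one prompt-ready
    string, each file preceded by a header with its relative path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        # Skip anything inside a vendored/generated directory.
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in CODE_EXTS:
            rel = path.relative_to(root)
            parts.append(f"=== {rel} ===\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)
```

The path headers matter: they let the model reference files by name in its diff, which is what makes the "give me the diff" request workable.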
Patent pending.
In testing I've found it underwhelming as an agent compared to Claude Code; I wrote up some case studies on it here: https://github.com/sutt/agro/blob/master/docs/case-studies/a...
I'm also interested in smaller models for their speed. That, or a provider like Cerebras.
Then, if you narrow the problem domain you can increase the dependability. I am curious to hear more about your "surgical" tools.
I rambled about this on my blog about a week ago: https://hpincket.com/what-would-the-vim-of-llm-tooling-look-...
The surgical context tool (aicodeprep-gui): there are at least 30 similar tools, but most (if not all) are CLI-only with no UI. I like UIs; I work faster with them for things like choosing individual files out of a big tree. At least it uses the PySide6 library, which is "lite" (could maybe go lighter); I HATE that too many things use webviews/browsers. All the options in it are there for good reasons; it's all focused on the things that annoy me and slow things down, like doing something repeatedly (copy paste, copy paste, or typing the same sentence over and over every time I do a certain thing with the AI and my code).
If you have not run 'aicp' (the command I gave it; there is also an OS installer menu that adds a right-click context menu to the Windows/Mac/Linux file managers) in a folder before, it will scan recursively for code files, skipping things like node_modules or .venv, but otherwise it assumes most types of code files will probably be wanted, so it checks them. You can fine-tune it: add some .md or .txt files, or other things that aren't code but might be helpful. When you generate the context block, it puts the text from the prompt box on the top AND/OR bottom; doing both can get better responses from the AI.
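The top AND/OR bottom placement is just a sandwich around the context block. A rough sketch of the idea (not the app's actual code):

```python
def wrap_context(prompt: str, context: str,
                 top: bool = True, bottom: bool = True) -> str:
    """Place the instructions above and/or below the code context.
    Repeating the prompt at the bottom can help, since long contexts
    tend to bury instructions that only appear at the start."""
    pieces = []
    if top:
        pieces.append(prompt)
    pieces.append(context)
    if bottom:
        pieces.append(prompt)
    return "\n\n".join(pieces)
```

So `wrap_context("Fix the off-by-one bug", code_dump)` yields prompt, code, prompt, which is the "both" mode described above.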
It saves every file that is checked, plus the window size and other window prefs, so you don't have to resize the window again. It saves which files were checked, so it's less work and time next time. I have been just pasting the output from the LLMs into an agent like Cline, but I am wondering if I should add browser automation / a browser extension that does the copy-pasting, plus an option to edit files right after grabbing the output from a web chat. It's probably good enough as it is, though; not sure I want to make it into a big thing.
--- Yeah I just keep coming back to this workflow, its very reliable. I have not tried Claude Code yet but I will soon to see if they solved any of these problems.
Strange that this thing has been at the top of Hacker News for hours and hours.. weird! My server logs are just constantly scrolling.
- https://chutes.ai - 200 requests per day if you deposit (one-time) $5 for top open weights models - GLM, Qwen, ...
- https://github.com/marketplace/models/ - around 10 requests per day to o3, ... if you have the $10 GitHub Copilot subscription
- https://ferdium.org - I open all the LLM webapps here as separate "apps", my one place to go to talk with LLMs, without mixing it with regular browsing
- https://www.cherry-ai.com - chat API frontend, you can use it instead of the default webpages for services which give you free API access - Google, OpenRouter, Chutes, Github Models, Pollinations, ...
I really recommend trying a chat API frontend, it really simplifies talking with multiple models from various providers in a unified way and managing those conversations, exporting to markdown, ...
The last few days I've been experimenting with using Codex (via MCP: "codex mcp") from Gemini CLI, and it works like a charm. Gemini CLI mostly uses Flash underneath, but this is good enough for formulating problems and re-evaluating answers.
Same with Claude Code - I am asking (via MCP) for consulting with Gemini 2.5 Pro.
Never had much success using Claude Code as an MCP server, though.
The original idea of course comes from Aider: using main, weak, and editor models all at once.
Let's keep some perspective: I have multiple times in my life spent days on what ended up being maybe three lines of code.
You can end up doing this with entirely human written code too. Good software devs can see it from a mile away.
I always end up with a vastly smaller code base. Like 2000 lines turns into 800 lines or something like that.
Did that happen too or did the AI just do a glorified 'extract method', that any decent IDE can already do without AI?
I use AI, I'm not anti it, but on the other hand I keep seeing these gushing posts where I'm like 'but your ide could already do that, just click the quick refactoring button'.
What this shows me is that it truly understands all the things this script was supposed to do and was able to organize it better, while not breaking any functionality.
However, I'd like to mention a tool called repomix (https://repomix.com/), which will pack your code into a single file that can be fed to an LLM's web chat. I typically feed it to Qwen3 Coder or AI Studio with good results.
The setup could be:
- Cursor CLI for agentic/dev stuff (example: https://x.com/cursor_ai/status/1953559384531050724)
- A local memory layer compatible with the CLI, something like LEANN (97% smaller index, zero cloud cost, full privacy, https://github.com/yichuan-w/LEANN) or Milvus (though Milvus often ends up cloud/token-based)
- Your inference engine, e.g. Ollama, which is great for running OSS GPT models locally
With this, you'd have an offline, private, and blazing-fast personal dev+AI environment. LEANN in particular is built exactly for this kind of setup: tiny footprint, semantic search over your entire local world, and Claude Code / Cursor compatible out of the box, with Ollama for generation. I guess this solution is not only free but also doesn't need any API.
But I do agree that this needs some effort to set up; maybe someone can make it easy and fully open-source.
The idea of this tiny, private index like what the LEANN project describes, combined with local inference via Ollama, is really powerful. I really like this idea about using it in programming, and a truly private "Cursor-like" experience would be a game-changer.
but you'll quickly notice that it's not even close to matching the quality of output, thought, and reflection that you'd get from running the same model at a significantly higher parameter count on a GPU setup capable of providing over 128 GB of actual VRAM.
There isn't anything available locally that will let me load a 128gb model and provide anything above 150tps
The only thing a local AI model makes sense for right now seems to be Home Assistant, in order to replace your Google Home/Alexa.
happy to be proven wrong, but the effort to reward just isn't there for local ai.
Open weight models like DeepSeek R1 and GPT-OSS are also made available with free API access from various inference providers and hardware manufacturers.
Note that you'll need to either authorize with a Google Account or with an API key from AI Studio, just be sure the API key is from an account where billing is disabled.
Also note that there are other rate limits for tokens per request and tokens per minute on the free plan that effectively prevent you from using the whole million token context window.
It's good to exit or /clear frequently so every request doesn't resubmit your entire history as context or you'll use up the token limits long before you hit 100 requests in a day.
Though to be fair, it's kind of silly how much effort we go through to protect our mostly open-source software from AI agents while, at the same time, half our OT has built-in hardware backdoors.
Meta, Alphabet might not want that, but it is impossible to completely avoid with current architectures.
I completely agree with this!
While I understand that it looks a little awkward to copy and paste your code out of your IDE and into a web chat interface, I generally get better results that way than with GitHub copilot or cursor.
Whether agentic, not… it’s all about context.
Either agentic with access to your whole project, “lives” in GitHub, a fine tune, or RAG, or whatever… having access to all of the context drastically reduces hallucinations.
There is a big difference between “write x” and “write x for me in my style, with all y dependencies, and considering all z code that exists around it”.
I'm honestly not understanding the defense of copy-and-paste AI coding… this is why agents are so massively popular right now.
That being said, I think everyone has probably different expectations and workflows. So if that’s what works for them, who am I to judge?
Bonus: it's European, kinda tired of giving always money to the American overlords.
It's my goto copilot.
Continue and Zed.. gonna check them out; the prompts in Cline are too long. I was thinking of just making my own VS Code extension, but I need to try Claude Code with GLM-4.5 (heard it pairs nicely).
I also use Kiro, which I got completely free access to because I saw Kiro early and actually tried it out, thanks to Hacker News!
Sometimes I use the Cerebras web UI to get insanely fast token generation from things like gpt-oss, Qwen 480B, or Qwen in general.
I want to thank hackernews for kiro! I mean, I am really grateful to this platform y'know. Not just for free stuff but in general too. Thanks :>
And lots of helpful comments here on HN as well. Good job everyone involved. ;)
Can't speak to Qwen or Crush as I have not used them.
Glad to see I'm not the only one who prefers to work like that. I don't need many different models though; the free version of Gemini 2.5 Pro is usually enough for me. Especially the 1,000,000-token context length is really useful. I can just keep dumping full code merges in.
I'll have a look at the alternatives mentioned though. Some questions just seem to throw certain models into logic loops.
I only use OpenRouter which gives access to almost all models.
Sonnet was my favorite until I tried Gemini 2.5 Pro, which is almost always better. It can be quite slow though. So for basic questions / syntax reminders I just use Gemini Flash: super fast, and good for simple tasks.
(If it was not clear: I have no love for JS and I never really programmed in it, but you have to admit, it did allow us to have more stuff. Even if 99% of it should be torched with fire when evaluated purely from an engineering perspective.)
Also grok disclaimer lol
From https://community.atlassian.com/forums/Rovo-for-Software-Tea...
If you really need another model / a custom interface, it's better to use OpenRouter: deposit $10 and you get 1000 free queries/day across all free models. That $10 will last a few months, at the very least.
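OpenRouter tags its zero-cost variants with a ":free" suffix on the model ID (e.g. something like "deepseek/deepseek-r1:free"), so picking them out of the full model list is a one-liner; a small sketch:

```python
def free_models(model_ids: list[str]) -> list[str]:
    """Return only the zero-cost model variants from an OpenRouter
    model list, identified by the ':free' suffix on the model ID."""
    return [m for m in model_ids if m.endswith(":free")]
```

You'd feed this the IDs returned by OpenRouter's model-listing endpoint and route your daily free-tier queries to whatever it returns.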
Just use Amazon Q Dev for free which will cover every single area that you need in every context that you need (IDE, CLI, etc.).
Claude Sonnet 4 is pretty exceptional. GPT-4.1 asks me too frequently whether I want it to move forward. Yes! Of course! Just do it! I'll reject your changes or do something else later. The former gets a whole task done.
I wonder if anyone is getting better results, or comparable for cheaper or free. GitHub Copilot in Visual Studio Code is so good, I think it'd be pretty hard to beat, but I haven't tried other integrated editors.
You are better off worrying about your car use and your home heating/cooling efficiency, both of which are significantly worse for energy use.
I might as well read LLM gibberish instead of this article.
font-family: "Roboto", "Underdog", cursive, -apple-system, BlinkMacSystemFont,
"Segoe UI", Helvetica, Arial, sans-serif, "Apple Color Emoji",
"Segoe UI Emoji", "Segoe UI Symbol";
The better solution is that web devs and designers should either stop changing fonts or learn how to do so without making peoples' eyes bleed.
It is for me. Page was great when I opened it in Telegram browser (my default) though, and then I saw the crazy when I opened in Firefox.