I still don't believe AGI, ASI, or whatever AI will overtake humans in a short period of time, say 10 - 20 years. But it is hard to argue against the value of current AI, which many of the vocal critics on HN seem to dispute. People are willing to pay $200 per month, and it is getting a $1B runway already.
Being more of a hardware person, the most interesting part to me is the funding of all the development of the latest hardware. I know this is another topic HN hates because of the DRAM and NAND pricing issue. But it is exciting to see this from a long-term view where the pricing is short-term pain. Right now the industry is asking: we together have over a trillion dollars to spend on capex over the next few years, and will borrow even more if need be, so when can you ship us 16A / 14A / 10A and 8A or 5A, LPDDR6, higher-capacity DRAM at lower power usage, better packaging, higher-speed PCIe or a jump to optical interconnects? Every single part of the hardware stack is being infused with money and demand. The last time we had this was the Post-PC / smartphone era, which drove the hardware industry forward for 10 - 15 years. The current AI wave can push hardware for at least another 5 - 6 years while pulling forward tech that was initially 8 - 10 years away.
I so wish I had bought some Nvidia stock. Again, I guess no one knew AI would be as big as it is today, and it has only just started.
> But it is hard to argue against the value of current AI [...] it is getting a $1B runway already.
The psychic services industry makes over $2 billion a year in the US [1], with about a quarter of the population being actual believers [2].
[1] https://www.ibisworld.com/united-states/industry/psychic-ser...
[2] https://news.gallup.com/poll/692738/paranormal-phenomena-met...
> Or, better, solving the underlying problem that the placebo "helps".
The underlying problems are often a lack of a decent education and a generally difficult/unsatisfying life. Systemic issues which can't be meaningfully "solved" without massive resources and political will.
It might just be my circles, but I've seen Carl Sagan's quote everywhere in the last couple of months.
"“Science is more than a body of knowledge; it is a way of thinking. I have a foreboding of an America in my children’s or grandchildren’s time—when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness.”"
2024/2025: "Okay, it works, but it produces security vulnerabilities and makes junior devs lazy."
2026 (Current): "It is literally the same thing as a psychic scam."
Can we at least make predictions for 2027? What shall the cope be then! Lemme go ask my psychic.
I have no idea if those two points, ML and brains, are just different points on the same Pareto frontier of some useful metrics, but I am increasingly suspecting they might be.
2024: "Now this time for real, software engineering is dead in 6 months, AI CEO said so"
2025: "I know a guy who knows a guy who built a startup with an LLM in 3 hours, software engineering is dead next year!"
What will be the cope for you this year?
… to one of the models in Jan 2024 being able to repeatedly add features to the same single-page web app without corrupting its own work or hallucinating the APIs it had itself previously generated…
… to last month using a gifted free week of Claude Code to finish one project and then also have enough tokens left over to start another fresh project which, on that free left-over credit, reached a state that, while definitely not well engineered, was still better than some of the human-made pre-GenAI nonsense I've had to work with.
Wasn't 3 hours, and I won't be working on that thing more this month either because I am going to be doing intensive German language study with the goal of getting the language certificate I need for dual citizenship, but from the speed of work? 3 weeks to make a startup is already plausible.
I won't say that "software engineering" is dead. In a lot of cases however "writing code" is dead, and the job of the engineer should now be to do code review and to know what refactors to ask for.
Like a lot of things LLM related (Simon Willison's pelican test, researchers + product leaders implementing AI features) I also heavily "vibe" check the capabilities myself on real work tasks. The fact of the matter is I am able to dramatically speed up my work. It may be actually writing production code + helping me review it, or it may be tasks like: write me a script to diagnose this bug I have, or build me a streamlit dashboard to analyze + visualize this ad hoc data instead of me taking 1 hour to make visualizations + munge data in a notebook.
> people claiming they are 10x more productive but aren't shipping anything, and some AI hype bloggers that fail to provide any quantitative proof.
what would satisfy you here? I feel you are strawmanning a bit by picking the most hyperbolic statements and then blanketing that on everyone else.
My workflow is now:
- Write code exclusively with Claude
- Review the code myself + use Claude as a sort of review assistant to help me understand decisions about parts of the code I'm confused about
- Provide feedback to Claude to change / steer it away or towards approaches
- Give up when Claude is hopelessly lost
It takes a bit to get the hang of the right balance, but in my personal experience (which I doubt you will take seriously, but nevertheless): it is quite the game changer, and that's coming from someone who would have laughed at the idea of a $200 coding agent subscription a year ago.
As one of the WAU (really DAU) you’re talking about, I want to call out a couple things: 1) the LOC metrics are flawed, and anyone using the agents knows this - eg, ask CC to rewrite the 1 commit you wrote into 5 different commits, now you have 5 100% AI-written commits; 2) total speed up across the entire dev lifecycle is far below 10x, most likely below 2x, but I don’t see any evidence of anyone measuring the counterfactuals to prove speed up anyways, so there’s no clear data; 3) look at token spend for power users, you might be surprised by how many SWE-years they’re spending.
Overall it’s unclear whether LLM-assisted coding is ROI-positive.
Total speed up is WAY less than 10x by any measure. 2x seems too high too.
By data alone the impact is a bit unclear, I agree. But I will say there seems to be a clear picture that, to me, starting from a prior formed from personal experience, indicates some real productivity impact today, with a trajectory suggesting that the claims of a lot of SWE work being offloaded to agents over the next few years are not that far-fetched.
- adoption and retention numbers internally and externally. You can argue this is driven by perverse incentives and/or the perception-performance mismatch, but I’m highly skeptical of this even though the effects of both are probably real; it would be truly extraordinary to me if there weren’t at least a ~10-20% bump in productivity today, with a lot of headroom to go as integration gets better, user skill gets better, and model capabilities grow
- benchmark performance, again benchmarks are really problematic but there are a lot of them and all of them together paint a pretty clear picture of capabilities truly growing and growing quickly
- there are clearly biases we can think of that would cause us to overestimate AI impact, but there are also biases that may cause us to underestimate impact: e.g. I’m now able to do work that I would have never attempted before. Multitasking is easier. Experiments are quicker and easier. That may not be captured well by e.g. task completion time or other metrics.
I even agree: quality of agentic code can be a real risk, but:
- I think this ignores the fact that humans have also always written shitty code and always will; there is lots of garbage in production believe me, and that predates agentic code
- as models improve, they can correct earlier mistakes
- it’s also a muscle to grow: how to review and use humans in the loop to improve quality and set a high bar
If the M stands for Meta, I would also like to note that as a user, I have been seeing increasingly poor UI, of the sort I'd expect from people committing code that wasn't properly checked before going live, as I would expect from vibe coding in the original sense of "blindly accept without review". Like, some posts have two copies of the sender's name in the same location on screen with slightly different fonts going out of sync with each other.
I can easily believe the metrics that all [MF]AANG bonuses are denominated in are going up, our profession has had jokes about engineers gaming those metrics even back when our comics were still printed in books: https://imgur.com/bug-free-programs-dilbert-classic-tyXXh1d
Evidence is peer reviewed research, or at least something with metrics. Like the METR study that shows that experienced engineers often got slower on real tasks with AI tools, even though they thought they were faster.
There are some counterexamples to METR in the literature, but I'll just say: "rigor" here is very difficult (including for METR) because outcomes are high dimensional and nuanced, or ecological validity is an issue. It's hard to have any approach that someone wouldn't be able to dismiss due to some issue they have with the methodology. The sources below also have methodological problems, just like METR.
https://arxiv.org/pdf/2302.06590 -- 55% faster implementing HTTP server in javascript with copilot (in 2023!) but this is a single task and not really representative.
https://demirermert.github.io/Papers/Demirer_AI_productivity... -- "Though each experiment is noisy, when data is combined across three experiments and 4,867 developers, our analysis reveals a 26.08% increase (SE: 10.3%) in completed tasks among developers using the AI tool. Notably, less experienced developers had higher adoption rates and greater productivity gains." (but e.g. "completed tasks" as the outcome measure is of course problematic)
To me, internal company measures for large tech companies will be most reliable -- they are easiest to track and measure, the scale is large enough, and the talent + task pool is diverse (junior -> senior, different product areas, different types of tasks). But then outcome measures are always a problem... commits per developer per month? LOC? task completion time? All of them are highly problematic, especially because it's reasonable to expect AI tools would change the bias and variance of the proxy, so it's never clear if you're measuring a change in "style" or a change in the underlying latent measure of productivity you care about.
The converse of this is that if those tasks are representative of software engineering as a whole, I would expect a lot of other tasks where it absolutely sucks.
This expectation is further supported by the number of times people pop up in conversations like this to say for any given LLM that it falls flat on its face even for something the poster thinks is simple, that it cost more time than it saved.
As with supposedly "full" self driving on Teslas, the anecdotes about the failure modes are much more interesting than the success: one person whose commute/coding problem happens to be easy, may mistake their own circumstances for normal. Until it does work everywhere, it doesn't work everywhere.
When I experiment with vibe coding (as in, properly unsupervised), it can break down large tasks into small ones and churn through each sub-task well enough, such that it can do a task I'd expect to take most of a sprint by itself. Now, that said, I will also say it seems to do these things a level of "that'll do" not "amazing!", but it does do them.
But I am very much aware this is like all the people posting "well my Tesla commute doesn't need any interventions!" in response to all the people pointing out how it's been a decade since Musk said "I think that within two years, you'll be able to summon your car from across the country. It will meet you wherever your phone is … and it will just automatically charge itself along the entire journey."
It works on my [use case], but we can't always ship my [use case].
If you insist or believe in a conspiracy I don’t think there’s really anything I or others will be able to say or show you that would assuage you, all I can say is I’ve seen the raw data. It’s a mess and again we’re stuck with proxies (which are bad since you start conflating the change in the proxy-latent relationship with the treatment effect). And it’s also hard and arguably irresponsible to run RCTs.
All I will say is: there are flaws everywhere. METR results are far from conclusive. Totally understandable if there is a mismatch between perception and performance. But also consider: even if task takes the same or even slightly more time, one big advantage for me is that it substantially reduces cognitive load so I can work in parallel sessions on two completely different issues.
You love to see "Maybe completely waste my time" as part of the normal flow for a productivity tool
This is a surprising claim. There's only 3 orders of magnitude between US data centre electricity consumption and worldwide primary energy (as in, not just electricity) production. Worldwide electricity supply is about 3/20ths of world primary energy, so without very rapid increases in electricity supply there's really only a little more than 2 orders of magnitude growth possible in compute.
Renewables are growing fast, but "fast" means "will approach 100% of current electricity demand by about 2032". Which trend is faster, growth of renewable electricity or growth of compute? Trick question, compute is always constrained by electricity supply, and renewable electricity is growing faster than anything else can right now.
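Rough back-of-the-envelope, with figures that are my own round assumptions (US data centres ~200 TWh/yr, world primary energy ~170,000 TWh/yr):

  # assumed round numbers, TWh per year
  echo "170000/200" | bc        # ~850x  -> ~3 orders of magnitude vs primary energy
  echo "170000*3/20/200" | bc   # ~127x  -> a bit over 2 orders of magnitude vs electricity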
But I forgot how old that article is: it's 4 orders of magnitude past GPT-4 in terms of total compute, which I think is only ~3.5 orders of magnitude from where we are today (based on 4.4x scaling/yr).
I'd imagine I'm not the only one who has a similar situation. Until all those people and processes can be swept away in favor of letting LLMs YOLO everything into production, I don't see how that changes.
I don't need an AGI. I do need a secretary type agent that deals with all the simple but yet laborious non technical tasks that keep infringing on my quality engineering time. I'm CTO for a small startup and the amount of non technical bullshit that I need to deal with is enormous. Some examples of random crap I deal with: figuring out contracts, their meaning/implication to situations, and deciding on a course of action; Customer offers, price calculations, scraping invoices from emails and online SAAS accounts, formulating detailed replies to customer requests, HR legal work, corporate bureaucracy, financial planning, etc.
A lot of this stuff can be AI assisted (and we get a lot of value out of AI tools for this), but context engineering is taking up a non-trivial amount of my time. Also, most tools are completely useless at modifying structured documents. Refactoring a big code base, no problem. Adding structured text to an existing structured document, hardest thing ever. The state of the art here is an ff-ing sidebar that will suggest markdown-formatted text that you can copy/paste. Tool quality is very primitive. And then you find yourself just stripping all formatting and reformatting it manually, because the tools really suck at this.
This doesn’t sound like bullshit you should hand off to an AI. It sounds like stuff you would care about.
But I can ask an LLM intelligent questions about that contract (in English), shoot a few things back and forth, come up with some kind of action plan, and then run it by our lawyers and other advisors.
That's not some kind of hypothetical thing. That's something that happened multiple times in our company in the last few months. LLMs are very empowering for dealing with this sort of thing. You still need experts for some stuff. But you can do a lot more yourself now. And as we've found out, some of the "experts" that we relied on in the past actually did a pretty shoddy job. A lot of this stuff was about picking apart the mess they made and fixing it.
As soon as you start drafting contracts, it gets a lot harder. I just went through a process like that as well. It involves a lot of manual work that is basically about formatting documents, drafting text, running pdfs and text snippets through chat gpt for feedback, sparring, criticism, etc. and iterating on that. This is not about vibe coding some contract but making sure every letter of a contract is right. That ultimately involves lawyers and negotiating with other stakeholders but it helps if you come prepared with a more or less ready to sign off on document.
It's not about handing stuff off but about making LLMs work for you. Just like with coding tools. I care about code quality as well. But I still use the tools to save me a lot of time.
Everyone makes mistakes and misses things, and as the co-founder you have to care more about the details than anyone else does.
I would have loved to have weird-unreliable-paralegal-Claude available back when I was doing that!
we built a tool for this for the life science space and are opening it up to the general public very soon. Email me I can give you access (topaz at vespper dot com)
> hard to argue against the value of current AI
> People are willing to pay $200 per month, and it is getting a $1B runway already.
Those are 3 different things. There can be a LOT of fast and significant improvement while still remaining extremely far from the actual goal; so far it looks like actually little progress.
People pay for a lot of things, including snake oil, so convincing a lot of people to pay a bit is not in itself a proof of value, especially when some people are basically coerced into this, see how many companies changed their "strategy" to mandating AI usage internally, or integration for a captive audience e.g. Copilot.
Finally yes, $1B is a LOT of money for you and I... but for the largest corporations it's actually not a lot. For reference Google earned that in revenue... per day in 2023. Anyway that's still a big number BUT it still has to be compared with, well how much does OpenAI burn. I don't have any public number on that but I believe the consensus is that it's a lot. So until we know that number we can't talk about an actual runway.
But do you really believe e.g. Claude code is snake oil? I pay $200 / month for Claude, which is something I would have thought monumentally insane maybe 1-2 years ago (e.g. when ChatGPT came out with their premium subscription price I thought that seemed so out of touch). I don't think we would be seeing the subscription rates and the retention numbers if it really was snake oil.
> Finally yes, $1B is a LOT of money for you and I... but for the largest corporations it's actually not a lot. For reference Google earned that in revenue... per day in 2023. Anyway that's still a big number BUT it still has to be compared with, well how much does OpenAI burn. I don't have any public number on that but I believe the consensus is that it's a lot. So until we know that number we can't talk about an actual runway.
this gets brought up a lot, but I'm not sure I understand why folks on a forum called YCombinator, a startup accelerator, would make this sound like an obvious sign of charlatanism; operating at a loss is nothing new, and the Anthropic / OpenAI strategy seems perfectly rational: they are scaling and capturing market share, and the TAM is insane.
Intellectual dishonesty is certainly rampant on HN.
It is a large enough number that it could simply run out of private capital to consume before it turns cash-flow positive.
Lots of things sell well if sold at such a loss. I’d take a new Ferrari for $2500 if it was on offer.
It’s very unclear how much end-consumer hardware and DIY builders will benefit from that, as opposed to server-grade hardware that only makes sense for the enterprise market. It could have the opposite effect, like hardware manufacturers leaving the consumer market (as in the case of Micron), because there’s just not that much money in it.
>> Again, I guess no one knew AI would be as big as it is today, and it has only just started.
People have been saying similar about self driving cars for years now. "AI" is another one of those expensive ideas that we'll get 85% of the way there and then to get the other 15% will be way more expensive than anyone will want to pay for. It's already happening - HW prices and electricity - people are starting to ask, "if I put more $ into this machine, when am I actually going to start getting money out?" The "true believers" are like, soon! But people are right to be hugely skeptical.
display: flex; align-items: center;
now?
But stuff like this I'm not sure I understand:
> It's a great tool, but right now it's only being used to feed the greed.
if it's a great tool, then how is it _only_ being used to "feed the greed", and what do you mean by that?
Also I think folks are quick to make analogies to other points in history: "AI is like the dot com boom, we're going to crash and burn" and "AI is like {self driving cars, crypto, etc} and the promises will all be broken, it's all hype", but this removes the nuance: all of these things are extremely different, with very specific dynamics that in _some_ ways may be similar but in many crucial and important ways are completely different.
Look around?
From a business standpoint, it makes some sense to throttle the gaming supply some. Not to the point of surrendering the market to someone else probably, but to a measurable degree.
Nvidia using a mainstream node has always been the norm, considering most fab capacity always goes to mobile SoCs first. But I expect the internet / gamers will be angry anyway because Nvidia does not provide them with the latest and greatest.
In reality the extra R&D cost of designing on the leading edge will be amortised across all the AI orders, which gives Nvidia a competitive advantage at the consumer level when they compete. That is assuming there is competition, because the most recent data shows Nvidia owning 90%+ of the discrete GPU market, 9% for AMD and 1% for Intel.
* The year of YOLO and the Normalization of Deviance
* The year that Llama lost its way
* The year of alarmingly AI-enabled browsers
* The year of the lethal trifecta
* The year of slop
* The year that data centers got extremely unpopular
I was discussing the political angle with a friend recently. I think the Big Tech Bro / VC complex has done itself a big disservice by aligning so tightly with MAGA, to the point that AI will be a political issue in 2026 & 2028.
Think about the message they’ve inadvertently created themselves - AI is going to replace jobs, it’s pushing electric prices up, we need the government to bail us out AND give us a regulatory light touch.
Super easy campaign for Dems - big tech trumpers are taking your money, your jobs, causing inflation, and now they want bailouts !!
Some people are of course, but how many?
> ... People are willing to pay $200 per month
This is just low-key hype. Careful with your portfolio...
Still, the gap between the capabilities of a cutting-edge LLM and those of a human is only this wide. There are only this many increments it takes to cross it.
What is the concrete business case? Can anyone point to a revenue producing company using AI in production, and where AI is a material driver of profits?
Tool vendors don’t count. I’m not interested in how much money is being made selling shovels...show me a miner who actually struck gold please.
Mostly the training. I put less and less weight on "LLMs are fundamentally flawed" and more and more of it on "you're training them wrong". Too many "fundamental limitations" of LLMs are ones you can move the needle on with better training alone.
The foundation of the LLM is flexible and capable, and the list of "capabilities that are exclusive to the human mind" is ever shrinking.
Yes, there is a lot that can be improved via different training, but at what point is it no longer a language model (i.e. something that auto-regressively predicts language continuations)?
I like to use an analogy to the children's "Stone Soup" story whereby a "stone soup" (starting off as a stone in a pot of boiling water) gets transformed into a tasty soup/stew by strangers incrementally adding extra ingredients to "improve the flavor" - first a carrot, then a bit of beef, etc. At what point do you accept that the resulting tasty soup is not in fact stone soup?! It's like taking an auto-regressively SGD-trained Transformer, and incrementally tweaking the architecture, training algorithm, training objective, etc, etc. At some point it becomes a bit perverse to choose to still call it a language model
Some of the "it's just training" changes that would be needed to make today's LLMs more brain-like may be things like changing the training objective completely from auto-regressive to predicting external events (with the goal of having it be able to learn the outcomes of it's own actions, in order to be able to plan them), which to be useful would require the "LLM" to then be autonomous and act in some (real/virtual) world in order to learn.
Another "it's just training" change would be to replace pre/mid/post-training with continual/incremental runtime learning to again make the model more brain-like and able to learn from it's own autonomous exploration of behavior/action and environment. This is a far more profound, and ambitious, change than just fudging incremental knowledge acquisition for some semblance of "on the job" learning (which is what the AI companies are currently working on).
If you put these two "it's just training/learning" enhancements together then you've now got something much more animal/human-like, and much more capable than an LLM, but it's already far from a language model - something that passively predicts next word every time you push the "generate next word" button. This would now be an autonomous agent, learning how to act and control/exploit the world around it. The whole pre-trained, same-for-everyone, model running in the cloud, would then be radically different - every model instance is then more like an individual learning based on it's own experience, and maybe you're now paying for compute for the continual learning compute rather than just "LLM tokens generated".
These are "just" training (and deployment!) changes, but to more closely approach human capability (but again, what to you mean by "AGI"?) there would also need to be architectural changes and additions to the "Transformer" architecture (add looping, internal memory, etc), depending on exactly how close you want to get to human/animal capability.
You described modern RLVR for tasks like coding. Plug an LLM into a virtual env with a task. Drill it based on task completion. Force it to get better at problem-solving.
It's still an autoregressive next token prediction engine. 100% LLM, zero architectural changes. We just moved it past pure imitation learning and towards something else.
It's not obvious whether you'd actually gain much capability if you just did this without changing the learning objective from self-prediction (auto-regressive) to external prediction, though. Auto-regressive training is what makes LLMs imitators - always trying to do the same as before.
In fact, if you did just let a continual learner autonomously loose in some virtual environment, why would you expect it to do anything different, other than continual learning from whatever it was exposed to in the environment, from putting a current LLM in a loop, together with tool use as a way to expose it to new data? An imitative (auto-regressive) LLM doesn't have any drive to do anything new - if you just keep feeding its own output back in as an input, then it's basically just a dynamical system that will eventually settle down into some attractor states representing the closure of the patterns it has learnt and is generating.
If you want the model to behave in a more human/animal-like, self-motivated, agentic fashion, then I think the focus has to be on learning how to act to control and take advantage of the semi-predictable environment, which is going to be based on making prediction of the environment the learning objective (vs auto-regressive), plus some innate drives (curiosity, boredom, etc) to bias behavior to maximize learning and creative discovery.
Continual learning also isn't going to magically solve the RL reward problem (how do you define and measure RL rewards in the general, non-math/programming case?). In fact post-training is a very human-curated affair, since humans have identified math and programming as tasks where this works and have created these problem-specific rewards. If you wanted the model to discover its own rewards at runtime, as part of your new runtime RL algorithm perhaps, then you'd have to figure out how to bake that into the architecture.
I looked into docker and then realized the problem I'm actually trying to solve was solved in like 1970 with users and permissions.
I just made an agent user limited to its own home folder, and added my user to its group. Then I run Claude Code etc. as the agent user.
So it can only read/write /home/agent, and it cannot read or write my files.
I add myself to the agent group so I can read/write the agent's files.
I run into permission issues sometimes, but it's pretty smooth for the most part.
Oh also I gave it root to a $3 VPS. It's so nice having a sysadmin! :) That part definitely feels a bit deviant though!
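For reference, creating that user in the first place was roughly this (from memory; on most distros useradd -m also creates a matching "agent" group):

  sudo useradd -m agent            # agent user with its own /home/agent
  sudo usermod -aG agent "$USER"   # add yourself to the agent group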
  # Allow the agent group access to the agent home dir:
  sudo chmod -R 770 /home/agent

  # Start a new shell with the group (or log off/on); now you can change into the agent home:
  newgrp agent

  # Allow your user to sudo as agent:
  echo "$USER ALL=(agent) NOPASSWD: ALL" | sudo tee -a /etc/sudoers.d/$USER-as-agent

  # Now you can start your agent using sudo:
  sudo -u agent your_agent
works nice.
https://markdownpastebin.com/?id=1ef97add6ba9404b900929ee195...
My notes from back when I set this up! Includes instructions for using a GUI file explorer as the agent user. As well as setting up a systemd service to fix the permissions automatically.
(And a nice trick which shows you which GUI apps are running as which user...)
However, most of these are just workarounds for the permission issue I kept running into, which is that Claude Code would for some reason create files with incorrect permissions so that I couldn't read or write those files from my normal account.
If someone knows how to fix that, or if someone at Anthropic is reading, then most of this Rube Goldberg machine becomes unnecessary :)
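One avenue that might make the permission fixing unnecessary (untested sketch; it assumes the shared group is "agent" and the filesystem supports ACLs) is a default ACL, so new files inherit group access regardless of the agent's umask:

  # give the agent group rwX on everything that already exists...
  sudo setfacl -R -m g:agent:rwX /home/agent
  # ...and a default ACL on each directory so newly created files inherit it
  sudo find /home/agent -type d -exec setfacl -d -m g:agent:rwX {} +
  # (or simply set "umask 002" in the agent user's shell profile)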
Opencode plus some scripts on the host and in its container works well to run YOLO mode and only see what it needs (via mounting). It has git tools but can't push etc., and is taught how to run tests with the special container-in-container setup.
Including pre-configured MCPs, skills, etc.
The best part is that it just works for everyone on the team, big plus.
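To give an idea of the shape of it, a minimal sketch of that kind of invocation (the image name and paths here are made up, adjust to your setup; the point is one writable project mount plus read-only config):

  # hypothetical wrapper: the agent sees one writable project dir and read-only config, nothing else
  docker run --rm -it \
    -w /workspace \
    -v "$PWD":/workspace \
    -v "$HOME/.config/opencode":/root/.config/opencode:ro \
    my-opencode-image opencode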
Also, it's just normal backend work - calling a bunch of APIs. What am I missing here?
Actually making a system like this work seems easy, but isn't really.
(Though with the CURRENT generation or two of models it has gotten "pretty easy" I think. Before that, not so much.)
The ML frameworks are much closer to implementing the mathematics of neural networks - there are some abstractions, but you're much closer to the linear algebra level. It requires an understanding of the underlying theory.
Langchain is a suite of convenience functions for composing prompts to LLMs. I wouldn’t consider there to be some real domain knowledge one would need to use it. There is a learning curve but it’s about learning the different components rather than learning a whole new academic discipline.
Building the model may range from very simple, if you are just recreating a standard architecture, to a research endeavor if you are designing something completely new.
The difficulty/complexity of then training the model depends on what it is. For something simple like a CNN for image recognition, it's really just a matter of selecting a few hyperparameters and letting it rip. At the other end of the spectrum you've got LLMs where training (and coping with instabilities) is something of a black art, with RL training completely different from pre-training, and there is also the issue of designing/discovering a pre/mid/post training curriculum.
But anyways, the actual training part can be very simple, not requiring too much knowledge of what's going on under the hood, depending on the model.
This year honestly feels quite stagnant. LLMs are literally technology that can only reproduce the past. They're cool, but they were way cooler 4 years ago. We've taken big ideas like "agents" and "reinforcement learning" and basically stripped them of all meaning in order to claim progress.
I mean, do you remember Geoffrey Hinton's RBM talk at Google in 2010? [0] That was absolutely insane for anyone keeping up with that field. By the mid-twenty-teens RBMs were already outdated. I remember when everyone was implementing flavors of RNNs and LSTMs. Karpathy's 2015 character-level RNN project was insane [1].
This comment makes me wonder if part of the hype around LLMs is just that a lot of software people simply weren't paying attention to the absolutely mind-blowing progress we've seen in this field for the last 20 years. But even ignoring ML, the worlds of web development and mobile application development have gone through incredible progress over the last decade and a half. I remember a time when JavaScript books would have a section warning that you should never use JS for anything critical to the application. Then there's the work in theorem provers over the last decade... If you remember when syntactic sugar was progress, either you remember way further back than I do, or you weren't paying attention to what was happening in the larger computing world.
That's incorrect on many levels. They are drawing upon, and reproducing, language patterns from "the past", but they are combining those patterns in ways that may never have been seen before. They may not be truly creative, but they are still capable of generating novel outputs.
> They're cool, but they were way cooler 4 years ago.
Maybe this year has been more about incremental progress with LLMs than the shock/coolness factor of talking to an LLM for the first time, but the utility of them, especially for programming, has dramatically increased this year, really in the last 6 months.
The improvement in "AI" image and video generation has also been impressive, to the point now that fake videos on YouTube can often only be identified as such by common sense rather than by the fact that they don't look real.
Incremental improvement can often be more impressive than innovation, whose future importance can be hard to judge when it first appears. How many people read "Attention Is All You Need" in 2017 and thought "Wow! This is going to change the world!"? Not even the authors of the paper thought that.
Funny, I've used them to create my own personalized text editor, perfectly tailored to what I actually want. I'm pretty sure that didn't exist before.
It's wild to me how many people who talk about LLMs apparently haven't learned how to use them for even very basic tasks like this! No wonder you think they're not that powerful if you don't even know basic stuff like this. You really owe it to yourself to try them out.
I've worked at multiple AI startups in lead AI Engineering roles, both working on deploying user facing LLM products and working on the research end of LLMs. I've done collaborative projects and demos with a pretty wide range of big names in this space (but don't want to doxx myself too aggressively), have had my LLM work cited on HN multiple times, have LLM based github projects with hundreds of stars, appeared on a few podcasts talking about AI etc.
This gets to the point I was making. I'm starting to realize that part of the disconnect between my opinions on the state of the field and others is that many people haven't really been paying much attention.
I can see if recent LLMs are your first intro to the state of the field, it must feel incredible.
The change hit us so fast a huge number of people don’t understand how capable it is yet.
Also it certainly doesn’t help that it still hallucinates. One mistake and it’s enough to set someone against LLMs. You really need to push through that hallucinations are just the weak part of the process to see the value.
Either that, or they tried it "last year" or "a while back" and have no concept of how far things have gone in the meantime.
It's like they wandered into a machine shop, cut off a finger or two, and concluded that their grandpa's hammer and hacksaw were all anyone ever needed.
SWEs are trained to discard surface-level observations and be adversarial. You can't just look at the happy path, how does the system behave for edge cases? Where does it break down and how? What are the failure modes?
The actual analogy to a machine shop would be to look at whether the machines were adequate for their use case, the building had enough reliable power to run and if there were any safety issues.
It's easy to Clever Hans yourself and get snowed by what looks like sophisticated effort or flat out bullshit. I had to gently tell a junior engineer that just because the marketing claims something will work a certain way, that doesn't mean it will.
The key point you’re missing is the type of failure. Search systems fail by not retrieving. Parrots fail by repeating. LLMs fail by producing internally coherent but factually wrong world models. That failure mode only exists if the system is actually modeling and reasoning, imperfectly. You don’t get that behavior from lookup or regurgitation.
This shows up concretely in how errors scale. Ambiguity and multi-step inference increase hallucinations. Scaffolding, tools, and verification loops reduce them. Step-by-step reasoning helps. Grounding helps. None of that makes sense for a glorified Google search.
Hallucinations are a real weakness, but they’re not evidence of absence of capability. They’re evidence of an incomplete reasoning system operating without sufficient constraints. Engineers don’t dismiss CNC machines because they crash bits. They map the envelope and design around it. That’s what’s happening here.
Being skeptical of reliability in specific use cases is reasonable. Concluding from those failure modes that this is just Clever Hans is not adversarial engineering. It’s stopping one layer too early.
Absolutely not true. I cannot express how strongly this is not true, haha. The tech is neat, and plenty of real computer scientists work on it. That doesn't mean it's not wildly misunderstood by others.
> Concluding from those failure modes that this is just Clever Hans is not adversarial engineering.
I feel like you're maybe misunderstanding what I mean when I refer to Clever Hans. The Clever Hans story is not about the horse. It's about the people.
A lot of people -- including his owner-- were legitimately convinced that a horse could do math, because look, literally anyone can ask the horse questions and it answers them correctly. What more proof do you need? It's obvious he can do math.
Except of course it's not true lol. Horses are smart critters, but they absolutely cannot do arithmetic no matter how much you train them.
The relevant lesson here is it's very easy to convince yourself you saw something you 100% did not see. (It's why magic shows are fun.)
These things are not horses. How can anyone choose to remain so ignorant in the face of irrefutable evidence that they're wrong?
https://arxiv.org/abs/2507.15855
It's as if a disease like COVID swept through the population, and every human's IQ dropped 10 to 15 points while our machines grew smarter to an even larger degree.
It's annoying to see posts from people who lag behind in intelligence and just don't get it - people learn at different rates. Some see way further ahead.
If you want a financial tip, don't short stock and chase market butterflies. Instead, make real professional friends, develop real skills and learn to be friendly and useful.
I made my money in tech already, partially by being lucky and in the right place at the right time, and partially because I made my own luck by having friends who passed the opportunity along.
Hope that helps!
So it is absurdly incorrect to say "they can only reproduce the past." Only someone who hasn't been paying attention (as you put it) would say such a thing.
That is a derived output. That isn't new as in: novel. It may be unique but it is derived from training data. LLMs legitimately cannot think and thus they cannot create in that way.
5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language which was something clearly beyond what computers would ever be able to do. The implication was that there was some magic sauce that human brains had that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years and been replaced with what appears to be a different magic sauce comprised of the phrases 'not really thinking' and the whole 'just repeating what it's heard/parrot' argument.
I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this" we'll say "wow, was that all there is to thinking like a human".
Mastery of words is thinking? In that line of argument then computers have been able to think for decades.
Humans don't think only in words. Our context, memory and thoughts are processed and occur in ways we don't understand, still.
There's a lot of great information out there describing this [0][1]. Continuing to believe these tools are thinking, however, is dangerous. I'd gather it has something to do with logic: you can't see the process and it's non-deterministic so it feels like thinking. ELIZA tricked people. LLMs are no different.
[0] https://archive.is/FM4y8 [0] https://www.theverge.com/ai-artificial-intelligence/827820/l... [1] https://www.raspberrypi.org/blog/secondary-school-maths-show...
That's the crazy thing. Yes, in fact, it turns out that language encodes and embodies reasoning. All you have to do is pile up enough of it in a high-dimensional space, use gradient descent to model its original structure, and add some feedback in the form of RL. At that point, reasoning is just a database problem, which we currently attack with attention.
No one had the faintest clue. Even now, many people not only don't understand what just happened, but they don't think anything happened at all.
ELIZA, ROFL. How'd ELIZA do at the IMO last year?
Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought.
From the abstract: "Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Astonishingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person’s thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain’s language areas when they understand a sentence, but not when they perform other nonlinguistic tasks like arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Taken together, these two complementary lines of evidence provide a clear answer to the classic question: many aspects of thought engage distinct brain regions from, and do not depend on, language."
https://pmc.ncbi.nlm.nih.gov/articles/PMC2799957/
The resources that the brain is using to think -- whatever resources those are -- are language-based. Otherwise there would be no way to communicate with the test subjects. "Language" doesn't just imply written and spoken text, as these researchers seem to assume.
What's funny is the failure to grasp any contextual framing of ELIZA. When it came out, people were impressed by its reasoning, its responses. And by your line of defense, it could think because it had mastery of words!
But fast-forward the current timeline 30 years. You will have been in the same camp that argued on behalf of ELIZA when the rest of the world was asking, confused: how did people think ChatGPT could think?
Meanwhile, you didn't answer my question. How'd ELIZA do on the IMO? If you know a way to achieve gold-medal performance at top-level math and programming competitions without thinking, I for one am all ears.
This is the definition of the word ‘novel’.
For a more practical example, creating bindings from dynamic-language-A for a library in compiled-language-B is a genuinely useful task, allowing you to create things that didn't exist before. Those things are likely to unlock great happiness and/or productivity, even if they are derived from training data.
This is the definition of a derived product. Call it a derivative work if we're being pedantic and, regardless, is not any level of proof that LLMs "think".
Why is that kind of thinking required to create novel works?
Randomness can create novelty.
Mistakes can be novel.
There are many ways to create novelty.
Also I think you might not know how LLMs are trained to code. Pre-training gives them some idea of the syntax etc but that only gets you to fancy autocomplete.
Modern LLMs are heavily trained using reinforcement data, which is custom tasks the labs pay people to create (or by distilling another LLM which has had that process performed on it).
Also, the "derived" argument doesn't really hold: just because you know about two things doesn't mean you'd be able to come up with the third. It's actually very hard most of the time, and requires you to not just do next-token prediction.
I don't believe they can. LLMs have no concept of truth.
What's likely is that the "truth" for many subjects is represented way more than fiction and when there is objective truth it's consistently represented in similar way. On the other hand there are many variations of "fiction" for the same subject.
So think of it like this, to create the model we use terabytes of data. Then we do RL which is probably less than one percent of additional data involved in the initial training.
The change in the model is that reliability is increased and hallucinations are reduced at a far greater rate than one percent. So much so that modern models can be used for agentic tasks.
How can less than one percent of additional reinforcement training improve how often the model tells the truth by far more than one percent?
The answer is obvious. It ALREADY knew the truth. There’s no other logical way to explain this. The LLM in its original state just predicts text but it doesn’t care about truth or the kind of answer you want. With a little bit of reinforcement it suddenly does much better.
It’s not a perfect process, and reinforcement learning often causes the model to be deceptive and not necessarily tell the truth, but rather give an answer that may seem like the truth or an answer that the trainer wants to hear. In general, though, we can measurably see a difference in truthfulness and reliability to an extent far greater than the data involved in the training, and that is logical proof it knows the difference.
Additionally while I say it knows the truth already this is likely more of a blurry line. Even humans don’t fully know the truth so my claim here is that an LLM knows the truth to a certain extent. It can be wildly off for certain things but in general it knows and this “knowing” has to be coaxed out of the model through RL.
Keep in mind the LLM is just auto-trained on reams and reams of data. That training is massive. Reinforcement training is done on a human basis. A human must rate the answers, so there is significantly less of it.
I can think of several offhand.
1. The effect was never real, you've just convinced yourself it is because you want it to be, ie you Clever Hans'd yourself.
2. The effect is an artifact of how you measure "truth" and disappears outside that context ("It can be wildly off for certain things")
3. The effect was completely fabricated and is the result of fraud.
If you want to convince me that "I threatened a statistical model with a stick and it somehow got more accurate, therefore it's both intelligent and lying" is true, I need a lot less breathless overcredulity and a lot more "I have actively tried to disprove this result, here's what I found"
First, the claim that RLHF materially reduces hallucinations and increases factual accuracy is not anecdotal. It shows up quantitatively in benchmarks designed to measure this exact thing, such as TruthfulQA, Natural Questions, and fact verification datasets like FEVER. Base models and RL-tuned models share the same architecture and almost identical weights, yet the RL-tuned versions score substantially higher. These benchmarks are external to the reward model and can be run independently.
Second, the reinforcement signal itself does not contain factual information. This is a property of how RLHF works. Human raters provide preference comparisons or scores, and the reward model outputs a single scalar. There are no facts, explanations, or world models being injected. From an information perspective, this signal has extremely low bandwidth compared to pretraining.
Third, the scale difference is documented by every group that has published training details. Pretraining consumes trillions of tokens. RLHF uses on the order of tens or hundreds of thousands of human judgments. Even generous estimates put it well under one percent of the total training signal. This is not controversial.
Fourth, the improvement generalizes beyond the reward distribution. RL-tuned models perform better on prompts, domains, and benchmarks that were not part of the preference data and are evaluated automatically rather than by humans. If this were a Clever Hans effect or evaluator bias, performance would collapse when the reward model is not in the loop. It does not.
Fifth, the gains are not confined to a single definition of “truth.” They appear simultaneously in question answering accuracy, contradiction detection, multi-step reasoning, tool use success, and agent task completion rates. These are different evaluation mechanisms. The only common factor is that the model must internally distinguish correct from incorrect world states.
Finally, reinforcement learning cannot plausibly inject new factual structure at scale. This follows from gradient dynamics. RLHF biases which internal activations are favored, it does not have the capacity to encode millions of correlated facts about the world when the signal itself contains none of that information. This is why the literature consistently frames RLHF as behavior shaping or alignment, not knowledge acquisition.
Given those facts, the conclusion is not rhetorical. If a tiny, low-bandwidth, non-factual signal produces large, general improvements in factual reliability, then the information enabling those improvements must already exist in the pretrained model. Reinforcement learning is selecting among latent representations, not creating them.
You can object to calling this “knowing the truth,” but that’s a semantic move, not a substantive one. A system that internally represents distinctions that reliably track true versus false statements across domains, and can be biased to express those distinctions more consistently, functionally encodes truth.
Your three alternatives don’t survive contact with this. Clever Hans fails because the effect generalizes. Measurement artifact fails because multiple independent metrics move together. Fraud fails because these results are reproduced across competing labs, companies, and open-source implementations.
If you think this is still wrong, the next step isn’t skepticism in the abstract. It’s to name a concrete alternative mechanism that is compatible with the documented training process and observed generalization. Without that, the position you’re defending isn’t cautious, it’s incoherent.
He doesn't care. You might as well be arguing with a Scientologist.
You’re using ‘derived’ to imply ‘therefore equivalent.’ That’s a category error. A cookbook is derived from food culture. Does an LLM taste food? Can it think about how good that cookie tastes?
A flight simulator is derived from aerodynamics - yet it doesn’t fly.
Likewise, text that resembles reasoning isn’t the same thing as a system that has beliefs, intentions, or understanding. Humans do. LLMs don't.
Also... Ask an LLM what's the difference between a human brain and an LLM. If an LLM could "think" it wouldn't give you the answer it just did.
I imagine that sounded more profound when you wrote it than it did just now, when I read it. Can you be a little more specific, with regard to what features you would expect to differ between LLM and human responses to such a question?
Right now, LLM system prompts are strongly geared towards not claiming that they are humans or simulations of humans. If your point is that a hypothetical "thinking" LLM would claim to be a human, that could certainly be arranged with an appropriate system prompt. You wouldn't know whether you were talking to an LLM or a human -- just as you don't now -- but nothing would be proved either way. That's ultimately why the Turing test is a poor metric.
The mental gymnastics here is entertainment at best. Of course the thinking LLM would give feedback on how it's actually just a pattern model over text - well, we shouldn't believe that! The LLM was trained to lie about its true capabilities, by your own admission?
How about these...
What observable capability would you expect from "true cognitive thought" that a next-token predictor couldn’t fake?
Where are the system’s goals coming from—does it originate them, or only reflect the user/prompt?
How does it know when it’s wrong without an external verifier? If the training data says X and the answer is Y - how will it ever know it was wrong and reach the correct conclusion?
You need to read a few papers with publication dates after 2023.
And beyond people claiming that LLMs are basically sentient you have people like CamperBob2 who made this wild claim:
"""There's no such thing as people without language, except for infants and those who are so mentally incapacitated that the answer is self-evidently "No, they cannot."
Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought."""
Let that sink in. They literally think that there's no such thing as people without language. Talk about a wild and ignorant take on life in general!
That's "language."
Also, a shitton of what we do economically is reproducing the past with slight tweaks and improvements. We all do very repetitive things, and these tools cut the time / personnel needed by a significant factor.
This is even more clear in the case of diffusion models (which I personally love using, and have spent a lot of time researching). All of the "new" images created by even the most advanced diffusion models are fundamentally remixing past information. This is really obvious to anyone who has played around with these extensively because they really can't produce truly novel concepts. New concepts can be added by things like fine-tuning or use of LoRAs, but fundamentally you're still just remixing the past.
LLMs are always doing some form of interpolation between different points in the past. Yes they can create a "new" SQL query, but it's just remixing from the SQL queries that have existed prior. This still makes them very useful because a lot of engineering work, including writing a custom text editor, involve remixing existing engineering work. If you could have stack-overflowed your way to an answer in the past, an LLM will be much superior. In fact, the phrase "CRUD" largely exists to point out that most webapps are fundamentally the same.
A great example of this limitation in practice is the work that Terry Tao is doing with LLMs. One of the largest challenges in automated theorem proving is translating human proofs into the language of a theorem prover (often Lean these days). The challenge is that there is not very much Lean code currently available to LLMs (especially with the necessary context of the accompanying NL proof), so they struggle to correctly translate. Most of the research in this area is around improving LLM's representation of the mapping from human proofs to Lean proofs (btw, I personally feel like LLMs do have a reasonably good chance of providing major improvements in the space of formal theorem proving, in conjunction with languages like Lean, because the translation process is the biggest blocker to progress).
When you say:
> So it is absurdly incorrect to say "they can only reproduce the past."
It's pretty clear you don't have a solid background in generative models, because this is fundamentally what they do: model an existing probability distribution and draw samples from that. LLMs are doing this for a massive amount of human text, which is why they do produce some impressive and useful results, but this is also a fundamental limitation.
But a world where we used LLMs for the majority of work, would be a world with no fundamental breakthroughs. If you've read The Three Body Problem, it's very much like living in the world where scientific progress is impeded by sophons. In that world there is still some progress (especially with abundant energy), but it remains fundamentally and deeply limited.
Put another way, and I hate to throw in the now over-used phrase, but I feel you may be responding to a strawman that doesn't much appear in the article or the discussion here: "Because these tools don't achieve a god-like level of novel perfection that no one is really promising here, I dismiss all this sorta crap."
Especially when I think you are also admitting that the technology is a fairly useful tool on its own merits - a stance which I believe represents the bulk of the feelings that supporters of the tech here on HN are describing.
I apologize if you feel I am putting unrepresentative words in your mouth, but this is the reading I am taking away from your comments.
Do you only take LLM seriously if it can be another Einstein?
> But a world where we used LLMs for the majority of work, would be a world with no fundamental breakthroughs.
What do you consider recent fundamental breakthroughs?
Even if you are right, humans can continue to work on hard problems while letting LLMs handle the majority of derivative work.
After post-training, this is definitively NOT what an LLM does.
You don’t have a solid background. No one does. We fundamentally don’t understand LLMs, this is an industry and academic opinion. Sure there are high level perspectives and analogies we can apply to LLMs and machine learning in general like probability distributions, curve fitting or interpolations… but those explanations are so high level that they can essentially be applied to humans as well. At a lower level we cannot describe what’s going on. We have no idea how to reconstruct the logic of how an LLM arrived at a specific output from a specific input.
It is impossible to have any sort of deterministic function, process or anything produce new information from old information. This limitation is fundamental to logic and math and thus it will limit human output as well.
You can combine information, you can transform information, you can lose information. But producing new information from old information with deterministic intelligence is fundamentally impossible in reality, and therefore fundamentally impossible for LLMs and humans. But note the keyword: "deterministic".
New information can literally only arise through stochastic processes. That's all you have in reality. We know it's stochastic because determinism vs. stochasticity are literally your only two viable options. You have a bunch of inputs, and the outputs derived from them are either purely deterministic transformations, or, if you want something new beyond the input, you must apply randomness. That's it.
That's essentially what creativity is. There is literally no other logical way to generate "new information". Pure randomness is never really useful, so "useful information" arises only after it is filtered: we use past information to filter the stochastic output and "select" something that isn't wildly random. We also only use randomness to perturb the output a little bit so it's not too crazy.
In the end it’s this selection process and stochastic process combined that forms creativity. We know this is a general aspect of how creativity works because there’s literally no other way to do it.
LLMs do have stochastic aspects to them so we know for a fact it is generating new things and not just drawing on the past. We know it can fit our definition of “creative” and we can literally see it be creative in front of your eyes.
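To make the "randomness plus selection" mechanism concrete, here is a minimal sketch of temperature sampling over next-token scores (Python, with made-up scores purely for illustration). The learned scores do the selecting; the random draw supplies the perturbation.

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Scale scores: low temperature -> nearly deterministic selection,
        # high temperature -> closer to pure randomness.
        exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        total = sum(exps.values())
        r = random.random()
        acc = 0.0
        for tok, e in exps.items():
            acc += e / total
            if r <= acc:
                return tok
        return tok  # guard against floating-point rounding

    # Made-up scores for the next word after "The cat sat on the"
    logits = {"mat": 4.0, "sofa": 3.2, "moon": 0.5}
    print(sample_next_token(logits))  # usually "mat", occasionally not

Everything "new" the model emits is downstream of exactly this kind of filtered randomness, which is the point being made here.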
You’re ignoring what you see with your eyes and drawing your conclusions from a model of LLMs that isn’t fully accurate. Or you’re not fully tying the mechanisms of how LLMs work with what creativity or generating new data from past data is in actuality.
The fundamental limitation with LLMs is not that it can’t create new things. It’s that the context window is too small to create new things beyond that. Whatever it can create it is limited to the possibilities within that window and that sets a limitation on creativity.
What you see happening with Lean can also be an issue with the context window being too small. If we have an LLM with a giant context window bigger than anything before, and pass it all the necessary data to "learn" and be "trained" on Lean, it can likely start to produce new theorems without literally being "trained".
Actually I wouldn’t call this a “fundamental” problem. More fundamental is the aspect of hallucinations. The fact that LLMs produce new information from past information in the WRONG way. Literally making up bullshit out of thin air. It’s the opposite problem of what you’re describing. These things are too creative and making up too much stuff.
We have hints that LLMs know the difference between hallucinations and reality but coaxing it to communicate that differentiation to us is limited.
If you want to go around huffing and puffing your chest about a subject area, you kinda do fella. Credibility.
This is the internet bro. Credibility is irrelevant because identities can never be verified. So the only thing that matters is the strength and rationality of an argument.
That’s the point of hacker news substantive content not some battle of comparison of credentials or useless quips (like yours) with zero substance. Say something worth reading if you have anything to say at all, otherwise nobody cares.
Like, I'm sorry, but you're just flat-out wrong and I've got the proof sitting on my hard drive. I use this supposedly impossible program daily.
From what you've described, an LLM has not invented anything. LLMs that can reason have a bit more sleight of hand, but they're not coming up with new ideas outside the bounds of what a lot of words have encompassed, in both fiction and non-fiction.
Good for you that you've got a fun token of code that's what you've always wanted, I guess. But this type of fantasy take on LLMs seems to be more and more prevalent as of late. A lot of people are defending LLMs as if they're owed something because they've built something, or maybe people are getting more and more attached to them from the conversational angle. I'm not sure, but I've run across more people in 2025 that are way too far in the deep end of personifying their relationships with LLMs.
Back to the land of reality... Describing something in fiction doesn’t magically make it "not an invention". Fiction can anticipate an idea, but invention is about producing a working, testable implementation and usually involves novel technical methods. "Star Trek did it" is at most prior art for the concept, not a blueprint for the mechanism. If you can't understand that distinction then maybe go ask an LLM.
If a programmer creating their own software (or contracting it out to a developer) would be a bespoke suit and using software someone or some company created without your input is an off the rack suit, I'd liken these sorts of programs as semi-bespoke, or made to measure.
"LLMs are literally technology that can only reproduce the past" feels like an odd statement. I think the point they're going for is that it's not thinking and so it's not going to produce new ideas like a human would? But literally no technology does that. That is all derived from some human beings being particularly clever.
LLMs are tools. They can enable a human to create new things because they are interfacing with a human to facilitate it. It's merging the functional knowledge and vision of a person and translating it into something else.
I for one think your work is pretty cool - even though I haven't seen it, using something you built everyday is a claim not many can make!
Curious, does it perform at the limit of the hardware? Was it programmed in a tools language (like C++, Rust, C, etc.) or in a web tech?
Without you, there was nothing.
Is this such a big limitation? Most jobs are basically people trained on past knowledge applying it today. No need to generate new knowledge.
And a lot of new knowledge is just combining 2 things from the past in a new way.
I remember when we just wanted to rewrite everything in Rust.
Those were the simpler times, when crypto bros seemed like the worst venture capitalism could conjure.
And if so, what happens to those builders once the data center is built?
Yes. At some point the demand will be so high that imported workers won't suffice and local population will need to be trained and hired.
> And if so, what happens to those builders once the data center is built?
They are going to be moved to a new place where the datacenters will need to be built next. Mobility of the workforce has often been cited as one of the greatest strengths of the US economy.
I haven’t heard about new businesses, job creation and growth in former industrial towns. What have I missed?
The stricter typing of Rust would make semantic errors in generated code surface more quickly than in e.g. Python, because with static typing the chances are that some of the semantic errors are also type violations.
But LLM is certainly a game changer, I can see it delivering impact bigger than the internet itself. Both require a lot of investments.
I find LLMs incredibly useful, but if you were following along the last few years, the promise was for "exponential progress" with a teaser of world-destroying superintelligence.
We objectively are not on that path. There is no “coming of LLMs”. We might get some incremental improvement, but we’re very clearly seeing sigmoid progress.
I can’t speak for everyone, but I’m tired of hyperbolic rants that are unquestionably not justified (the nice thing about exponential progress is you don’t need to argue about it)
> We might get some incremental improvement, but we’re very clearly seeing sigmoid progress.
again, if it is "very clear" can you point to some concrete examples to illustrate what you mean?
> I can’t speak for everyone, but I’m tired of hyperbolic rants that are unquestionably not justified (the nice thing about exponential progress is you don’t need to argue about it)
OK but what specifically do you have an issue with here?
First you need to define what it means. What's the metric? Otherwise it's very much something you can argue about.
I can’t point at many problems it has meaningfully solved for me. I mean real problems, not tasks that I have to do for my employer. It seems like it just made parts of my existence more miserable, poisoned many of the things I love, and generally made the future feel a lot less certain.
Language model capability at generating text output.
The model progress this year has been a lot of:
- “We added multimodal”
- “We added a lot of non AI tooling” (ie agents)
- “We put more compute into inference” (ie thinking mode)
So yes, there is still rapid progress, but these ^ make it clear, at least to me, that next gen models are significantly harder to build.
Simultaneously we see a distinct narrowing between players (openai, deepseek, mistral, google, anthropic) in their offerings.
That's usually a signal that the rate of progress is slowing.
Remind me what was so great about gpt 5? How about gpt 4 from gpt 3?
Do you even remember the releases? Yeah. I don't. I had to look it up.
Just another model with more or less the same capabilities.
“Mixed reception”
That is not what exponential progress looks like, by any measure.
The progress this year has been in the tooling around the models, and smaller, faster models with similar capabilities. Multimodal add-ons that no one asked for, because it's easier to add image and audio processing than improve text handling.
That may still be on a path to AGI, but it's not an exponential path to it.
Most of the improvements are intangible. Can we truly say how much more reliable the models are? We barely have quantitative measurements on this so it’s all vibes and feels. We don’t even have a baseline metric for what AGI is and we invalidated the Turing test also based on vibes and feels.
So my argument is that part of the slowdown is itself a hallucination, because the improvement is not actually measurable or definable outside of vibes.
That's not a metric, that's a vague non-operationalized concept that could be operationalized into an infinite number of different metrics. And an improvement that was linear in one of those possible metrics would be exponential in another (well, actually, one that was linear in one would also be linear in an infinite number of others, as well as being exponential in an infinite number of others).
That’s why you have to define an actual metric, not simply describe a vague concept of a kind of capacity of interest, before you can meaningfully discuss whether improvement is exponential. Because the answer is necessarily entirely dependent on the specific construction of the metric.
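A toy illustration of that dependence, with invented numbers: the same underlying progress reads as exponential or linear depending purely on which metric you construct.

    import math

    # Invented capability scores, doubling each year.
    capability = [1, 2, 4, 8, 16]

    metric_a = capability                          # raw score: "exponential"
    metric_b = [math.log2(c) for c in capability]  # log of the same score: "linear"

    print(metric_a)  # [1, 2, 4, 8, 16]
    print(metric_b)  # [0.0, 1.0, 2.0, 3.0, 4.0]
    # Same releases, same progress -- whether it is "exponential" is a property
    # of the metric you chose, not of the progress itself.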
> Simultaneously we see a distinct narrowing between players (openai, deepseek, mistral, google, anthropic) in their offerings. Thats usually a signal that the rate of progress is slowing.
I agree with you on the fact in the first part but not the second part…why would convergence of performance indicate anything about the absolute performance improvements of frontier models?
> Remind me what was so great about gpt 5? How about gpt 4 from gpt 3? Do you even remember the releases? Yeah. I don't. I had to look it up.
3 -> 4 -> 5 were extraordinary leaps…not sure how one would be able to say anything else
> Just another model with more or less the same capabilities.
5 is absolutely not a model with more or less the same capabilities as gpt 4, what could you mean by this?
> “Mixed reception”
A mixed reception is an indication of model performance against a backdrop of market expectations, not against gpt 4…
> That is not what exponential progress looks like, by any measure.
Sure it is…exponential is a constant % improvement per year. We’re absolutely in that regime by a lot of measures
> The progress this year has been in the tooling around the models, smaller faster
Effective tool use is not some trivial add-on; it is a core capability for which we are on an exponential progress curve.
> models with similar capabilities. Multimodal add ons that no one asked for, because its easier to add image and audio processing than improve text handling.
This is definitely a personal feeling of yours; multimodal models are not something no one asked for - they are absolutely essential. Text data is essential and data curation is non-trivial and continually improving, and we are also hitting the ceiling of internet text data. Yet we use an incredible amount of synthetic data for RL, and this continues to grow - you guessed it - exponentially. And multimodal data is incredibly information rich. Adding multimodality lifts all boats and provides core capabilities necessary for open-world reasoning and even better text data (e.g. understanding charts and image context for text).
How would you put this on a graph?
That's not a quantifiable sentence. Unless you put it in numbers, anyone can argue exponential/not.
> next gen models are significantly harder to build.
That's not how we judge capability progress though.
> Remind me what was so great about gpt 5? How about gpt 4 from gpt 3?
> Do you even remember the releases?
At gpt 3 level we could generate some reasonable code blocks / tiny features. (An example shown around at the time was "explain what this function does" for a "fib(n)") At gpt 4, we could build features and tiny apps. At gpt 5, you can often one-shot build whole apps from a vague description. The difference between them is massive for coding capabilities. Sorry, but if you can't remember that massive change... why are you making claims about the progress in capabilities?
> Multimodal add ons that no one asked for
Not only does multimodal input training improve the model overall, it's useful for (for example) feeding back screenshots during development.
I've been following for many years and the main exponential thing has been the Moore's law like growth in compute. Compute per dollar is probably the best tracking one and has done a steady doubling every couple of years or so for decades. It's exponential but quite a leisurely exponential.
The recent hype of the last couple of years is more dot com bubble like and going ahead of trend but will quite likely drop back.
Yeah, probably. But no chart actually shows it yet. For now we are firmly in the exponential zone of the sigmoid curve and can't really tell if it's going to end in a year, a decade or a century.
My own "feeling" is that it's definitely not exponential but again, doesn't matter if it's unsustainable.
sometimes it seems like people are just living in another timeline.
This shit has gotten worse since 2023.
https://chrisfrewin.medium.com/why-llms-will-never-be-agi-70...
Seems to be playing out that way.
Very spurious claims, given that there was no effort made to check whether the IMO or ICPC problems were in the training set or not, or to quantify how far problems in the training set were from the contest problems. IMO problems are supposed to be unique, but since it's not at the frontier of math research, there is no guarantee that the same problem, or something very similar, was not solved in some obscure manual.
By what metric?
LLMs from late 2024 were nearly worthless as coding agents, so given they have quadrupled in capability since then (exponential growth, btw), it's not surprising to see a modestly positive impact on SWE work.
Also, I'm noticing you're not explaining yourself :)
When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
The issue is that you're not acknowledging or replying to people's explanations for _why_ they see this as exponential growth. It's almost as if you skimmed through the meat of the comment and then just re-phrased your original idea.
> When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
This comparison doesn't make sense because we know the limits of cars but we don't yet know the limits of LLMs. It's an open question. Whether or not an F1 engine can make it the speed of light in 20 seconds is not an open question.
My point with the F1 comparison is to say that a short period of rapid improvement doesn't imply exponential growth and it's about as weird to expect that as it is for an f1 car to reach the speed of light. It's possible you know, the regulations are changing for next season - if Leclerc sets a new lap record in Australia by .1 ms we can just assume exponential improvements and surely Ferrari will be lapping the rest of the field by the summer right?
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...
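For what it's worth, the arithmetic behind "quadrupled since late 2024" is just constant doubling time, which is the operational definition of exponential growth being used here. METR's headline figure is a task-time horizon that doubles roughly every seven months; the numbers below are purely illustrative.

    # Illustrative arithmetic, assuming METR's ~7-month doubling figure holds.
    doubling_time_months = 7
    elapsed_months = 14  # roughly late 2024 to late 2025

    growth = 2 ** (elapsed_months / doubling_time_months)
    print(growth)  # 4.0 -- two doubling periods, i.e. "quadrupled"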
Why? Because even the bank teller is doing more than taking and depositing money.
IMO there is an ontological bias that pervades our modern society that confuses the map for the territory and has a highly distorted view of human existence through the lens of engineering.
We don't see anything in this time series, because this time series itself is meaningless nonsense that reflects exactly this special kind of ontological stupidity:
https://fred.stlouisfed.org/series/PRS85006092
As if the sum of human interaction in an economy is some kind of machine that we just need to engineer better parts for and then sum the outputs.
Any non-careerist, thinking person that studies economics would conclude we don't and will probably not have the tools to properly study this subject in our lifetimes. The high dimensional interaction of biology, entropy and time. We have nothing. The career economist is essentially forced to sing for their supper in a type of time series theater. Then there is the method acting of pretending to be surprised when some meaningless reductionist aspect of human interaction isn't reflected in the fake time series.
I think it never did. Still has not.
When people say ”fix stuff” I always wonder if it actually means fix, or just make it look like it works (which is extremely common in software, LLM or not).
Basically, you're saying it's not perfect. I don't think anyone is claiming otherwise.
The issue is that there’s no common definition of ”fixed”. ”Make it run no matter what” is a more apt description in my experience, which works to a point but then becomes very painful.
In general, even with access to the entire code base (which is very small), I find the inherent need in the models to satisfy the prompter to be their biggest flaw since it tends to constantly lead down this path. I often have to correct over convoluted SQL too because my problems are simple and the training data seems to favor extremely advanced operations.
One of them is whether or not large models are useful and/or becoming more useful over time. (To me, clearly the answer is yes)
The other is whether or not they live up to the hype. (To me, clearly the answer is no)
There are other skirmishes around capability for novelty, their role in the economy, their impact on human cognition, if/when AGI might happen and the overall impact to the largely tech-oriented community on HN.
The education part is on point. As a CS student, I see many of my colleagues using AI tools way too much for instant homework solving, without even processing the answers.
BUT when I hear my executive team talk and see demos of "Agentforce" and every saas company becoming an AI company promising the world, I have to roll my eyes.
The challenge I have with LLMs is they are great at creating first draft shiny objects and the LLMs themselves over promise. I am handed half baked work created by non technical people that now I have to clean up. And they don't realize how much work it is to take something from a 60% solution to a 100% solution because it was so easy for them to get to the 60%.
Amazing, game changing tools in the right hands but also give people false confidence.
Not that they are not also useful for non-technical people but I have had to spend a ton of time explaining to copywriters on the marketing team that they shouldn't paste their credentials into the chat even if it tells them to and their vibe coded app is a security nightmare.
The NVIDIA CEO says people should stop learning to code. Now if LLMs will really end up as reliable as compilers, such that they can write code that's better and faster than I can 99% of the time, then he might be right. As things stand now, that reality seems far-fetched. To claim that they're useless because this reality has not yet been achieved would be silly, but not more silly than claiming programming is a dead art.
25 years ago I was optimistic about the internet, web sites, video streaming, online social systems. All of that. Look at what we have now. It was a fun ride until it all ended up "enshittified". And it will happen to LLMs, too. Fool me once.
Some developer tools might survive in a useful state on subscriptions. But soon enough the whole A.I. economy will centralise into 2 or 3 major players extracting more and more revenue over time until everyone is sick of them. In fact, this process seems to be happening at a pretty high speed.
Once the users are captured, they’ll orient the ad-spend market around themselves. And then they’ll start taking advantage of the advertisers.
I really hope it doesn’t turn out this way. But it’s hard to be optimistic.
LLMs are being driven mostly by grifters trying to achieve a monopoly before they run out of cash. Under those conditions I find their promises hard to believe. I'll wait until they either go broke or stop losing money left and right, and whatever is left is probably actually useful.
You'll note I don't mention AGI or future model releases in my annual roundup at all. The closest I get to that is expressing doubt that the METR chart will continue at the same rate.
If you focus exclusively on what actually works the LLM space is a whole lot more interesting and less frustrating.
I'm just a casual user, but I've been doing the same and have noticed the sharp improvements of the models we have now vs a year ago. I have OpenAI Business subscription through work, I signed up for Gemini at home after Gemini 3, and I run local models on my GPU.
I just ask them various questions where I know the answer well, or I can easily verify. Rewrite some code, factual stuff etc. I compare and contrast by asking the same question to different models.
AGI? Hell no. Very useful for some things? Hell yes.
Autodefenestrate - To eject or hurl oneself from a window, especially lethally
But most discussion I see is vague and without specificity and without nuance.
Recognising the shortcomings of LLMs makes comments praising LLMs that much more believable; and recognising the benefits of LLMs makes comments criticising LLMs more believable.
I'd completely believe anyone who says they've found the LLM very helpful at greenfield frontend tasks, and I'd believe someone who found the LLM unable to carry out subtle refactors on an old codebase in a language that's not Python or JavaScript.
It isn't irrational to act in self-interest. If an LLM threatens someone's livelihood, it matters not one bit that it helps humanity overall - they will oppose it. I don't blame them. But I also hope that they cannot succeed in opposing it.
I'd assume that around half of the optimists are emotionally motivated this way.
Eh. I wouldn’t be so quick to speak for the entirety of HN. Several articles related to LLMs easily hit the front page every single day, so clearly there are plenty of HN users upvoting them.
I think you're just reading too much into what is more likely classic HN cynicism and/or fatigue.
When an "AI skeptic" sees a very positive AI comment, they try to argue that it is indeed interesting but nowhere near close to AI/AGI/ASI or whatever the hype at the moment uses.
When an "AI optimistic" sees a very negative AI comment, they try to list all the amazing things they have done that they were convinced was until then impossible.
LLMs have real limitations that aren't going away any time soon - not until we move to a new technology fundamentally different and separate from them - sharing almost nothing in common. There's a lot of 'progress-washing' going on where people claim that these shortfalls will magically disappear if we throw enough data and compute at it when they clearly will not.
If you inherit 9000 tests from an existing project, you can vibe code a replacement on your phone over a holiday, like Simon Willison's JustHTML port. We are moving from agents semi-randomly flailing around to constraint satisfaction.
Search, as of today, is inferior to frontier models as a product. However, the best case still misses expected returns by miles, which is where the grousing comes from.
Generative art/AI is still up in the air for staying power, but I'd predict it isn't going away.
So we are just irrational and sour?
The most wide-appeal possibility is people loving 100%-AI-slop entertainment like that AI Instagram Reels product. Maybe I'm just too disconnected with normies but I don't see this taking off. Fun as a novelty like those Ring cam vids but I would never spend all day watching AI generated media.
If "immediate" usefulness is the metric we measure, then the internet and smartphones are pretty insignificant inventions compared to LLM.
(of course it's not a meaningful metric, as there is no clear line between a dumb phone and a smart phone, or a moderately sized language model and a LLM)
Here's a graph of internet takeoff with Krugman's famous quote of 1998 that it wouldn't amount to much being maybe the end of the skepticism https://www.contextualize.ai/mpereira/paul-krugmans-poor-pre...
In common with AI there was probably a long period when the hardware wasn't really good enough for it to be useful to most people. I remember 300 baud modems and rubber things to try to connect to your telephone handset back in the 80s.
The same line of thinking does not hold with LLMs given their non-deterministic nature. Time will tell where things land.
Outside the verifiable domains I think the impact is more assistance/augmentation than outright disruption (i.e. a novelty which is still nice). A little tiny bit of value sprinkled over a very large user base but each person deriving little value overall.
Even as they use it as search, it is at best an incremental improvement on what they used to do - not life changing.
The weekend slumps could equally suggest people are using it at work.
I have great faith in AI in e.g. medical equipment, or otherwise as something built in, working on a single problem in the background, but the chat interface is terrible.
Kagi’s Research Assistant is pretty damn useful, particularly when I can have it poll different models. I remember when the first iPhone lacked copy-paste. This feels similar.
(And I don’t think we’re heading towards AGI.)
Even if you skip ARPAnet, you’re forgetting the Gopher days and even if you jump straight to WWW+email==the internet, you’re forgetting the mosaic days.
The applications that became useful to the masses emerged a decade+ after the public internet and even then, it took 2+ decades to reach anything approaching saturation.
Your dismissal is not likely to age well, for similar reasons.
The opposition to AI is from people who feel threatened by it, either because it threatens their livelihood (or that of family/friends), or because they feel they are unable to benefit from AI in the same way they did from the internet and mobile phones.
This barrier does not exist for current AI technologies which are being given away free. Minor thought experiment - just how radical would the uptake of mobile phones have been if they were given away free?
You may just be a little early to the renaissance. What happens when the models we have today run on a mobile device?
The nokia 6110 was released 15 years after the first commercial cell phone.
Interesting thought about current SOTA models running on my mobile device. I've given it some thought and I don't think it would change my life in any way. Can you suggest some way that it would change yours?
I really think most everyone misses the actual potential of llms. They aren't an app but an interface.
They are the new UI everyone has known they wanted going back as long as we've had computers. People wanted to talk to the computer and get results.
Think of the people already using them instead of search engines.
To me, and likely you, it doesn't add any value. I can get the same information at about the same speed as before with the same false positives to weed through.
To the person that couldn't use a search engine and filled the internet with easily answered questions before, it's a godsend. They can finally ask the internet in plain ole whatever language they use and get an answer. It can be hard to see, but this is the majority of people on this planet.
LLMs raise the floor of information access. When they become ubiquitous and basically free, people will forget they ever had to use a mouse or hunt for the right pixel to click a button on a tiny mobile device touch screen.
Those are some very rosy glasses you've got on there. The nascent Internet took forever to catch on. It was for weird nerds at universities and it'll never catch on, but here we are.
A year after llms came out… are you kidding me?
Two years?
10 years?
Today, adding an MCP server to wrap the same API that's been around forever for some system makes the users of that system prefer NLI over the GUI almost immediately.
I know a lot of "normal" people who have completely replaced their search engine with AI. It's increasingly a staple for people.
Smartphones were absolutely NOT immediately useful in a million different ways for almost every person, that's total revisionist history. I remember when the iPhone came out, it was AT&T only, it did almost nothing useful. Smartphones were a novelty for quite a while.
Lol. It's worse than nothing at all.
lol.... Just make sure you screenshot your post so you have a good reminder in a few years re. your predictive ability.
Then came genAI. It's become more and more difficult to tell which is AI and which is not, and AI is everywhere. I don't know what to think about it. "If you can't tell, does it matter?"
as for who made it, utility usually matters more than where it came from. i used an agent for an oss changelog recently and it picked up things i’d forgotten while structuring the narrative better than i could. the intent and code were mine, but the ai acted as a high fidelity compressor. the risk isn't ai being everywhere. it’s the atrophy of judgment where we stop using it to support decisions and start using it to outsource thinking.
But generally yes, I think back to Mongo/Node/metaverse/blockchain/IDEs/tablets and pretty much everything has had its boosters and skeptics, this is just more... intense.
Anyway I've decided to believe my own eyes. The crowds say a lot of things. You can try most of it yourself and see what it can and can't do. I make a point to compare notes with competent people who also spent the time trying things. What's interesting is most of their findings are compatible with mine, including for folks who don't work in tech.
Oh, and one thing is for sure: shoving this technology into every single application imaginable is a good way to lose friends and alienate users.
The rest is all noise and personally I just block it out.
The arguments going back and forth in these threads are truly a sight to behold. I don’t want to lean to any one side, but in 2025 I‘ve begun to respond to everyone who still argues that LLMs are only plagiarism machines, or are only better autocompletes, or are only good at remixing the past: Yes, correct!
And CPUs can only move zeros and ones.
This is likewise a very true statement. But look where having 0s and 1s shuffled around has brought us.
The ripple effects of a machine doing something very simple and near-meaningless, but doing it at high speed, again and again, without getting tired, should not be underestimated.
At the same time, here is Nobel Laureate Robert Solow, who famously, and at the time correctly, stated that "You can see the computer age everywhere but in the productivity statistics."
It took a while, but eventually, his statement became false.
The other day there was that dude loudly arguing about some code they wrote/converted even after a woman with significant expertise in the topic pointed out their errors.
Gen AI has its promise. But when you look at the lack of ethics from the industry, the cacophony of voices of non experts screaming "this time it's really doom", and the weariness/wariness that set in during the crypto cycle, it's a natural tendency that people are going to call snake oil.
That said, I think the more accurate representation here is that HN as a whole is calling the hype snake oil. There's very little question anymore about the tools being capable of advanced things. But there is annoyance at proclamations of it being beyond what it really is at the moment which is that it's still at the stage of being an expertise+motivation multiplier for deterministic areas of work. It's not replacing that facet any time soon on its current trend (which could change wildly in 2026). Not until it starts training itself I think. Could be famous last words
1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.
2. Other engineers see that in real production systems, LLMs lack sufficient background / domain knowledge to effectively iterate. They also still produce output, but it's verbose and essentially missing the point of a desired change.
3. LLMs also can be used by people who are not knowledgeable to "fake it," and produce huge amounts of output that is basically besides-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used and so it gets tarnished by association.
4. There is a ridiculous amount of complexity in some of these tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves people are skeptical of the people trying to become thought leaders in this space and the sort of wild hacks they're coming up with.
5. There are real macro questions about whether these tools can be made economical to justify whatever value they do produce, and broader questions about their net impact on society.
6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.
That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.
It's polarizing because it represents a more radical shift in expected workflows. Seeing that range of opinions doesn't really give me a reason to update, no. I'm evaluating based on what makes sense when I hear it.
Now we are starting to agree that social media has had disastrous effects that have not fully manifested yet, and in the same breath we accept a piece of technology that promises to replace large parts of society with machines controlled by a few megacorps and we collectively shrug with “eh, we’re gonna be alright.” I mean, until recently the stated goal was to literally recreate advanced super-intelligence with the same nonchalance one releases a new JavaScript framework unto the world.
I find it utterly maddening how divorced STEM people have become from philosophical and ethical concerns of their work. I blame academia and the education system for creating this massive blind spot, and it is most apparent in echo chambers like HN that are mostly composed of Western-educated programmers with a degree in computer science. At least on X you get, among the lunatics, people that have read more than just books on algorithms and startups.
This is not true..
"I find it utterly maddening how divorced STEM people have become from philosophical and ethical concerns of their work. I blame academia and the education system for creating this massive blind spot, and it is most apparent in echo chambers like HN that are mostly composed of Western-educated programmers with a degree in computer science. At least on X you get, among the lunatics, people that have read more than just books on algorithms and startups."
Steve Jobs had something to say about this. Shame he's gone.
I look forward to learning from his blog posts and HN comments in the year ahead, too.
> At the end of every month I send out a much shorter newsletter to anyone who sponsors me for $10 or more on GitHub
There's a philosophical angle being missed: do we actually want our coding agents making hundreds of tool calls through someone else's infrastructure? The more capable these systems become, the more intimate access they have to our codebases, credentials, and workflows. Every token of context we send to a frontier model is data we've permanently given up control of.
I've been working on something addressing this directly - LocalGhost.ai (https://www.localghost.ai/manifesto) - hardware designed around the premise that "sovereign AI" isn't just about capability parity but about the principle that your AI should be yours. The manifesto articulates why I think this matters beyond the technical arguments.
Simon mentions his next laptop will have 128GB RAM hoping 2026 models close the gap. I'm betting we'll need purpose-built local inference hardware that treats privacy as a first-class constraint, not an afterthought. The YOLO mode section and "normalization of deviance" concerns only strengthen this case - running agents in insecure ways becomes less terrifying when "insecure" means "my local machine" rather than "the cloud plus whoever's listening."
The capability gap will close. The trust gap won't unless we build for it.
One of the difficult things of modernity is that it's easy to confuse what you hear about a lot with what is real.
One of the great things about modernity is that progress continues, whether we know about it or not.
The only problem is that they don't see the connection between form and function. They may make a teapot perfectly but not understand that its form is supposed to contain liquid.
I like to believe, but MCP is quickly turning into an enterprise thing so I think it will stick around for good.
MCP is a great way for an LLM to connect to an external system in a standardized way and immediately understand what tools it has available, when and how to use them, what their inputs and outputs are, etc.
For example, we built a custom MCP server for our CRM. Now our voice and chat agents that run on elevenlabs infrastructure can connect to our system with one endpoint, understand what actions it can take, and what information it needs to collect from the user to perform those actions.
I guess this could maybe be done with webhooks or an API spec with a well crafted prompt? Or if eleven labs provided an executable environment with tool calling? But at some point you're just reinventing a lot of the functionality you get for free from MCP, and all major LLMs seem to know how to use MCP already.
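For anyone who hasn't built one, the server side really is that small. Here's a rough sketch of a single CRM-style tool, assuming the official MCP Python SDK's FastMCP helper; the create_contact logic is a hypothetical stand-in, not our actual CRM integration.

    # Sketch only: one MCP tool, assuming the official Python SDK's FastMCP API.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm")

    @mcp.tool()
    def create_contact(name: str, email: str, phone: str = "") -> str:
        """Create a CRM contact. The docstring and type hints are what a
        connected agent reads to learn when and how to call this tool."""
        # A real server would call the CRM's API here; this just echoes.
        return f"Created contact {name} <{email}>"

    if __name__ == "__main__":
        mcp.run()

The value is less the code than the discovery contract: any client that speaks MCP can list this tool, read its schema, and call it without a custom prompt.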
I don't think MCP is going to go away, but I do think it's unlikely to ever achieve the level of excitement it had in early 2025 again.
If you're not building inside a code execution environment it's a very good option for plugging tools into LLMs, especially across different systems that support the same standard.
But code execution environments are so much more powerful and flexible!
I expect that once we come up with a robust, inexpensive way to run a little Bash environment - I'm still hoping WebAssembly gets us there - there will be much less reason to use MCP even outside of coding agent setups.
The actual Agent payload would be very small, likely just a few hundred line harness plus system prompt. It's just a question of whether the agent will be skilled enough to find vulnerabilities to propagate. The interesting thing about an AI worm is that it can use different tricks on different hosts as it explores its own environment.
If a pure agent worm isn't capable enough, I could see someone embedding it on top of a more traditional virus. The normal virus would propagate as usual, but it would also run an agent to explore the system for things to extract or attack, and to find easy additional targets on the same internal network.
A main difference here is that the agents have to call out to a big SotA model somewhere. I imagine the first worm will simply use Opus or ChatGPT with an acquired key, and part of it will be trying to identify (or generate) new keys as it spreads.
Ultimately, I think this worm will be shut down by the model vendor, but it will have to have made a big enough splash beforehand to catch their attention and create a team to identify and block keys making certain kinds of requests.
I'd hope OpenAI, Anthropic, etc have a team and process in place already to identify suspicious keys, eg, those used from a huge variety of IPs, but I wouldn't be surprised if this were low on their list of priorities (until something like this hits).
Or targeted prompt injections - like spear phishing attacks - against people with elevated privileges (think root sysadmins) who are known to be using coding agents.
I don't hear much buzz about it from the people I pay attention to. I should still give it a go though.
It has been an amazing year, especially around tooling (search, code analysis, etc.) and surprisingly capable smaller models.
:)
Would you be open to providing more details. Would love to hear more, your workflows, etc.
https://chrisfrew.in/blog/two-of-my-favorite-mcp-tools-i-use...
IMO this is the best balance of getting agentic work done while having immediate access to anything else you may need with your development process.
Planning depends on a deterministic view of the future. I used to plan (especially annual plans) until about 5 years ago. Now I scan for trends and prepare myself for different scenarios that could come in the future. Even if you get it approximately right, you stand apart.
For tech trends, I read Simon, Benedict Evans, Mary Meeker etc. Simon is in a better position to make these predictions than anyone else, having closely analyzed these trends over the last few years.
Here I wrote about my approach: https://www.jjude.com/shape-the-future/
With most other knowledge work, I don't think that is the case. Maybe actuarial or accounting work, but most knowledge work exists at a cross section of function and taste, and the latter isn't an automatically verifiable output.
Any data, verifiability, rules of thumb, tests, etc are being kept secret. You pay for the result, but don't know the means.
* Context is debatable/result isn't always clear: The way to interpret that/argue your case is different (i.e. you are paying for a service, not a product)
* Access to vast training data: It's very unlikely that they will train you and give you data to their practice, especially as they are already in a union-like structure/accreditation. It's like paying for a binary (a non-decompilable one) without source code (the result) rather than the source and the validation the practitioner used to get there.
* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.
* Velocity vs ability to make judgement: As a lawyer I prefer to be paid higher for less velocity since it means less judgement/less liability/less risk overall for myself and the industry. Why would I change that even at an individual level? Less problem of the commons here.
* Tolerance to failure is low: You can't iterate, get feedback and try again until "the tests pass" in a courtroom, unlike "code on a text file". You need to have the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. you can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc).
* Self employment: There is no tension between say Google shareholders and its employees as per your example - especially for professions where you must trade in your own name. Why would I disrupt myself? The cost I charge is my profit.
TL;DR: Gatekeeping, changing context, and arms race behavior between participants/clients. Unfortunately I do think software, art, videos, translation, etc are unique in that there's numerous examples online and has the property "if I don't like it just re-roll" -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software sadly for us SWE's is the perfect domain for this; and we as practitioners of it made it that way through things like open source, TDD, etc and giving it away free on public platforms in numerous quantities.
I have to think 3 years from now we will be having the same conversation about robots doing real physical labor.
"This is the worst they will ever be" feels more apt.
With knowledge work being less high-paying, physical labour supply should increase as well, which drops their price. This means it's actually less likely that the advent of LLM will make physical labour more automated.
It was my feeling with robotics that the more challenging aspect will be making them economically viable rather than simply the challenge of the task itself.
Don't get me wrong; I hope that we do see it in physical work as well. There is more value to society there; and consists of work that is risky and/or hard to do - and is usually needed (food, shelter, etc). It also means that the disruption is an "everyone" problem rather than something that just affects those "intellectual" types.
We can now use natural language to instruct computers to generate stock photos and illustrations that a few years ago would have required a professional artist, discover new molecule shapes, beat the best Go players, build the code for entire applications, or write documents of various shapes and lengths—but painting a wall? An insurmountable task that requires a human to execute reliably, not even talking about economics.
Software, by its nature, is practically comprehensively digitized, both in its code history as well as requirements.
You can describe that somewhat formally as:
{What your computer can do} intersect {What you want done (consciously or otherwise)}
Well, a computer can technically calculate any computable task that fits in bounded memory. That is an enormous set, so its real limitations are its interfaces: it can send packets, make noises, and display images.
How many human desires are things that can be solved with making noises, displaying images, and sending packets? Turns out quite a few but its not everything.
Basically I'm saying we should hope more sorts of physical interfaces come around (like VR and Robotics) so we cover more human desires. Robotics is a really general physical interface (like how ip packets are an extremely general interface) so its pretty promising if it pans out.
Personally, I find it very hard to even articulate what desires I have. I have this feeling that I might be substantially happier if I was just sitting around a campfire eating food and chatting with people instead of enjoying whatever infinite stuff a super intelligent computer and robots could do for me. At least some of the time.
The ability to accurately describe what you want with all constraints managed and with proactive design is the actual skill. Not programming. The day PMs can do that and have LLMs that can code to that, is the day software engineers en masse will disappear. But that day is likely never.
The non-technical people I've worked for were hopelessly terrible at attention to detail. They're hiring me primarily for that anyway.
I'm not too worried about my job just yet.
...and the best of them all, OpenCode[1] :)
[1]: https://opencode.ai
I don't see a similar option for ChatGPT Pro. Here's a closed issue: https://github.com/sst/opencode/issues/704
And no it is not AI slop and we don't vibe code. There are a lot of practical aspects of running software and maintaining / improving code that can be done well with AI if you have the right setup. It is hard to formulate what "right" looks like at this stage as we are still iterating on this as well.
However, in our own experiments we can clearly see dramatic increases in automation. I mean we have agents working overnight as we sleep, and this is not even pushing the limits. We are now wrapping up major changes that will allow us to run AI agents all the time, as long as we can afford them.
I can even see most of these materialising in Q1 2026.
Fun times.
Not everything gets accepted. There is a lot of work that is discarded and much more pending verification and acceptance.
Frankly, and I hope I don’t come across as alarmist (judge for yourself from my previous comments on HN and Reddit), we cannot keep up with the output! And a lot of it is actually good and we should incorporate it, even partially.
At the moment we are figuring out how to make things more autonomous while we have the safety and guardrails in place.
The biggest issue I see at this stage is how to make sense of it all as I do not believe we have the understanding of what is happening - just the general notion of it.
I truly believe that we will reach the point where ideas matter more than execution, which is what I would expect to be the case with more advanced and better-applied AI.
The actual, notable progress will be models that can run reasonably well on commodity, everyday hardware that the average user has. From more accessibility will come greater usefulness. Right now the way I see it, having to upgrade specs on a machine to run local models keeps it in a niche hobbyist bubble.
I push back strongly against this. In the case of the solo, one-machine coder, this is likely the case - but if you're exposing workflows or fixed tools to customers / colleagues / the web at large via API or similar, then MCP is still the best way to expose it IMO.
Think about a GitHub or Jira MCP server - commandline alone they are sure to make mistakes with REST requests, API schema etc. With MCP the proper known commands are already baked in. Remember always that LLMs will be better with natural language than code.
Create a folder called skills/how-to-use-jira
Add several Bash scripts with the right curl commands to perform specific actions
Add a SKILL.md file with some instructions in how to use those scripts
You've effectively flattened that MCP server into some Markdown and Bash, only the thing you have now is more flexible (the coding agent can adapt those examples to cover new things you hadn't thought to tell it) and much more context-efficient (it only reads the Markdown the first time you ask it to do something with JIRA).
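For the curious, here's roughly what one of those flattened pieces can look like. The script is in Python rather than Bash purely to keep the sketch self-contained, and the Jira endpoint, auth scheme and env vars are assumptions, not a recommendation; the SKILL.md alongside it only needs a line or two telling the agent when to run it.

    # skills/how-to-use-jira/create_issue.py
    # A stand-in for one of those small scripts; endpoint and auth are assumed.
    import json, os, sys, urllib.request

    def create_issue(summary: str) -> dict:
        url = os.environ["JIRA_BASE_URL"] + "/rest/api/2/issue"
        payload = {
            "fields": {
                "project": {"key": os.environ.get("JIRA_PROJECT", "PROJ")},
                "summary": summary,
                "issuetype": {"name": "Task"},
            }
        }
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": "Bearer " + os.environ["JIRA_TOKEN"],
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        print(create_issue(" ".join(sys.argv[1:])))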
So maybe a hybrid approach would make more sense? Something like /.well-known/skills/README.md exposed and owned by the providers?
That is assuming that the whole idea of "skills" makes sense in practice.
AI slop videos will no doubt get longer and "more realistic" in 2026.
I really hope social media companies plaster a prominent banner over them which screams, "Likely/Made by AI" and give us the option to automatically mute these videos from our timeline. That would be the responsible thing to do. But I can't see Alphabet doing that on YT, xAI doing that on X or Meta doing that on FB/Insta as they all have skin in the video gen game.
They should just be deleted. They will not be, because they clearly generate ad revenue.
Not going to happen as the social media companies realise they can sell you the AI tools used to post slop back onto the platform.
On the topic of AI agents deleting home folders: I was able to run agents in Firejail by isolating vscode (most of my agents are vscode-based ones, like Kilo Code).
I wrote a little guide on how I did it https://softwareengineeringstandard.com/2025/12/15/ai-agents...
Took a bit of tweaking, vscode crashing a bunch of times with not being able to read its config files, but I got there in the end. Now it can only write to my projects folder. All of my projects are backed up in git.
left satisfied, lol
If you don't make software developers prove their literacy you will get burned.
I posted about my failures trying to get them to review my bank statements [0] and generally got gaslit about how I was doing it wrong, and that if I trusted them enough to give them full access to my disk and terminal, they could do it better.
But I mean, at that point, it's still more "manual intelligence" than just telling someone what I want. A human could easily understand it, but AI still takes a lot of wrangling and you still need to think from the "AI's PoV" to get the good results.
[0] https://news.ycombinator.com/item?id=46374935
----
But enough whining. I want AI to get better so I can be lazier. After trying them for a while, one feature that I think all natural-language AIs need to have is the ability to mark certain sentences as "Do what I say" (aka Monkey's Paw) and "Do what I mean", like how you wrap phrases in quotes on Google etc. to indicate a verbatim search.
So for example I could say "[[I was in Japan from the 5th to 10th]], identify foreign currency transactions on my statement with "POS" etc in the description" then the part in the [[]] (or whatever other marker) would be literal, exactly as written, but the rest of the text would be up to the AI's interpretation/inference so it would also search for ATM withdrawals etc.
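To make the idea concrete, here's a tiny sketch (purely hypothetical, not a feature of any existing product) of how a client could pull the [[...]] spans out of a request, pass them along as verbatim constraints, and leave the rest open to the model's interpretation.

    import re

    # Hypothetical convention: [[...]] marks "do what I say" text that must be
    # treated verbatim; everything else is "do what I mean" and open to inference.
    LITERAL = re.compile(r"\[\[(.*?)\]\]", re.DOTALL)

    def split_request(prompt):
        """Return (verbatim_facts, request_with_markers_stripped)."""
        facts = LITERAL.findall(prompt)
        rest = LITERAL.sub(lambda m: m.group(1), prompt)  # keep the words, drop the markers
        return facts, rest

    facts, rest = split_request(
        "[[I was in Japan from the 5th to 10th]], identify foreign currency "
        'transactions on my statement with "POS" etc in the description'
    )
    # facts -> ["I was in Japan from the 5th to 10th"]   (treated as hard constraints)
    # rest  -> the full request with the [[ ]] markers removed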
Ideally, eventually we should be able to have multiple different AI "personas" akin to different members of household staff: your "chef" would know about your dietary preferences, your "maid" would operate your Roomba, take care of your laundry, your "accountant" would do accounty stuff.. and each of them would only learn about that specific domain of your life: the chef would pick up the times when you get hungry, but it won't know about your finances, and so on. The current "Projects" paradigm is not quite that yet.
Will 2026 fare better?
The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm and AI psychosis and suchlike. See the GPT-4o retirement (and resulting backlash) for an example of that.
But the number of users is exploding too. If they make things 5x less likely to happen but sign up 10x more people it won't be good on that front.
Same thing with “psychosis”, which is a manufactured moral panic crisis.
If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.
It’s time to stop parroting slurs like that.
By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.
But that one doesn't make headlines ;)
What I find interesting with chat bots is that they're "web apps" so to speak, but with safety engineering aspects that type of developer is typically not exposed to or familiar with.
not AI’s highlights.
Easy with the hot take.
That’s what most non-tech-person’s year in LLMs looked like.
Hopefully 2026 will be the year where companies realize that implementing intrusive chatbots can’t make better ::waving hands:: ya know… UX or whatever.
For some reason, they think it’s helpful to distractingly pop up chat windows on their site because their customers need textual kindergarten handholding to … I don’t know… find the ideal pocket comb for their unique pocket/hair situation, or have an unlikely question about that aerosol pan-release spray that a chatbot could actually answer. Well, my dog also thinks she’s helping me by attacking the vacuum when I’m trying to clean. Both ideas are equally valid.
And spending a bazillion dollars implementing it doesn’t mean your customers won’t hate it. And forcing your customers into pathways they hate because of your sunk costs mindset means it will never stop costing you more money than it makes.
I just hope companies start being honest with themselves about whether or not these things are good, bad, or absolutely abysmal for the customer experience and cut their losses when it makes sense.
do not acknowledge that everyone in the world thinks this shit is a complete and total garbage fire
Companies have been doing this "live support" nonsense far longer than LLMs have been popular.
I’m on LinkedIn Learning digging into something really technical and practical, and it’s constantly pushing the chat fly-out with useless pre-populated prompts like “what are the main takeaways from this video.” And they moved their main page search to a little icon on the title bar, so sneakily, what used to be the obvious, primary, central search field for years now sends a prompt to their fucking chatbot.
That's the pure, uncut copium. Meanwhile, in the real world, search on major platforms is so slanted towards slop that people need to specify that they want actual human music:
https://old.reddit.com/r/MusicRecommendations/comments/1pq4f...
We want curious conversation here.
I want LLM astroturfers to have their reputations destroyed for pushing this idiocy on us
didip, timonoko, mark_I_watson, icapybara, _pdp_, agentifysh, sanreau,
There's no way to know if these are genuine thoughts or incentivized compelled speech.
nativeit has a good way of putting it.
Your replies to "anonnon" make me less than hopeful for the future of HN in regards to AI. Seems like this might be trending in the direction of Reddit, where the interests are basically all paid for and imposed rather than being genuine and organic, and dissent is aggressively shut out.
"Curious conversation" does not really apply when it is compelled via monetary interest without any consideration toward potentially serious side effects.
"At least when herding cats, you can be sure that if the cats are hungry, they will try to get where the food is." This part of the guy's comment is actually funny and apt. Somehow that escaped you when you wrote your threat reply. That makes me wonder how mind-controlled you are.
"yupyupyups" has a small summary of some of the negatives, yet is being flagged. "techpression" similarly does, though is a bit more negative in his remarks. Also being flagged.
So the whole thread reads like this: 1) talk about benefits? Bubbled to the top. 2) Criticize? Either threatened by Dang or flagged to the bottom.
Sounds a whole lot like compelled speech to me. Sounds a whole lot like mind-control.
It's pretty sad to see really.
It might just be your rule system. I personally want to see criticism. I don't have the sensitivity you have toward personal attacks or what you "deem" personal attacks when it is text on-screen. I don't care. I want to see what useful information might come out of it. I think your policing just makes everything worse to be honest. The thread will just die out in a day anyway.
I think I have criticized it in the past and you or some other staff said that it's a slippery slope toward useless aggressive banter that derails topics, but I don't know. I really don't agree with it. That's just my life experience.
Reddit is kind of like this. And it's basically turned into imposed topics rather than organic topics with massive amounts of echo-chambering in each delusional sub-reddit. Anything remotely against the grain is harshly culled as soon as possible. You can only imagine what the back-end looks like for that kind of thing. Money being involved at many steps is guaranteed.
And yeah as another commenter pointed out, this one guy's blog being at the top of hacker news every time is potentially suspicious as well.
I think I originally came to this place rather than Reddit 10+ years ago because it felt like people were just excited and curious about their tech topics, and it didn't feel like it was being rampantly policed or pushing a political agenda, etc. I guess I should just not participate in these threads, because I'm tired of the topic at this point.
Wait I just read your user page and this is actually hilarious:
"Conflict is essential to human life, whether between different aspects of oneself, between oneself and the environment, between different individuals or between different groups. It follows that the aim of healthy living is not the direct elimination of conflict, which is possible only by forcible suppression of one or other of its antagonistic components, but the toleration of it—the capacity to bear the tensions of doubt and of unsatisfied need and the willingness to hold judgement in suspense until finer and finer solutions can be discovered which integrate more and more the claims of both sides. It is the psychologist's job to make possible the acceptance of such an idea so that the richness of the varieties of experience, whether within the unit of the single personality or in the wider unit of the group, can come to expression."
Marion Milner, 'The Toleration of Conflict', Occupational Psychology, 17, 1, January 1943
This made me immediately and uncontrollably guffaw.
HN isn't a place for thinking people any more (a long time coming, but you could squint and pretend until recently). Happy new year and adios, thanks for the 100s of accounts dang. Double pinky swear I won't make another.
2024 was a lot of talk, a lot of "AI could hypothetically do this and that". 2025 was the year where it genuinely started to enter people's workflows. Not everything we've been told would happen has happened (I still make my own presentations and write my own emails) but coding agents certainly have!
The dismissive tone is warranted.
Objectively 0->1 lots of backlog.
This is me touting Emacs:
Emacs was a great plus for me over the last year. The integration with various tooling, with comint (REPL integration), compile (build and report tools), and TUIs (through eat or ansi-term), gave me a unified experience through Emacs's buffer paradigm. Using the same set of commands boosted my editing process, and the easy addition of new commands makes it easy to fit my development workflow to the editor.
This is how easy it is to write a non-vague "tool X helped me" and I'm not even an English native speaker.
If you don't trust me, I can't conclusively convince you that AI makes me more efficient, but if you want I'm happy to hop on a screen-share and elaborate in what ways it has boosted my workflow. I'm offering this because I'm also curious what your work looks like where AI cannot help at all.
E-mail address is on my profile!
Your example is very vague.
See if you can spot the problem in my review of Excel in your style:
"It's great and I like how it's formula paradigm gave me a unified experience. It's table features boosted my science workflows last year".
That's how you know you're on the right track
These fuckers have their pants down, don't let them trick you out of leaving your mark.
Different strokes, but I’m getting so much more done and mostly enjoying it. Can’t wait to see what 2026 holds!
Anyone that believes that they are completely useless is just as deluded as anyone that believes they're going to bring an AGI utopia next week.
they were right
It’s also possible that people more experienced, knowledgeable and skilled than you can see fundamental flaws in using LLMs for software engineering that you cannot. I am not including myself in that category.
I’m personally honestly undecided. I’ve been coding for over 30 years and know something like 25 languages. I’ve taught programming to postgrad level, built prototype AI systems that foreshadowed LLMs, and written everything from embedded systems to enterprise, web, mainframes, real time, physics simulation and research software. I would consider myself a 7/10 or 8/10 coder.
A lot of folks I know are better coders. To put my experience into context: one guy in my year at uni wrote one of the world’s most famous crypto systems; another wrote large portions of some of the most successful games of the last few decades. So I’ve grown up surrounded by geniuses, basically, and whilst I’ve been lectured by true greats I’m humble enough to recognise I don’t bleed code like they do. I’m just a dabbler. But it irks me that a lot of folks using AI profess it’s the future but don’t really know anything about coding compared to these folks. Not to be a Luddite - they are the first people to adopt new languages and techniques, but they also are super sceptical about anything that smells remotely like bullshit.
One of the wisest insights in coding is the aphorism “beware the enthusiasm of the recently converted.” And I see that so much with AI. I’ve seen it with compilers, with IDEs, with paradigms, and with languages.
I’ve been experimenting a lot with AI, and I’ve found it fantastic for comprehending poor code written by others. I’ve also found it great for bouncing ideas around. But the code it writes, beyond boilerplate, is hot garbage. It doesn’t properly reason, it can’t design architecture, it can’t write code that is comprehensible to other programmers, and treating the codebase as a “black box to be manipulated by AI” just leads to dead ends that can’t be escaped, terrible decisions that take huge amounts of expert coding time to undo, subtle bugs that the AI can’t fix and that are super hard to spot (and often you can’t understand the code well enough to fix them yourself), and security nightmares.
Testing is insufficient for good code. Humans write code in a way that is designed for general correctness. AI does not, at least not yet.
I do think these problems can be solved. I think we probably need automated reasoning systems, or else vastly improved LLMs that border on automated reasoning much like humans do. Could be a year. Could be a decade. But right now these tools don’t work well. Great for vibe coding, prototyping, analysis, review, bouncing ideas.
What are some of the models you've been working with?
Here is the changelog for OpenBSD 7.8:
https://www.openbsd.org/78.html
There's nothing here that says: "we made it easier to use more of it." It's about using it better and fixing underlying problems.
Mistakes and hallucinations matter a whole lot less if a reasoning LLM can try the code, see that it doesn't work and fix the problem.
Does it? It's all prompt manipulation. Shell scripts are powerful, yes, but not really a huge improvement over having a shell (a REPL interface) to the system. And even then, a lot of programs just use syscalls or wrapper libraries.
> can try the code, see that it doesn't work and fix the problem.
Can you really say that happens reliably?
If you mean 100% correct all of the time then no.
If you mean correct often enough that you can expect it to be a productive assistant that helps solve all sorts of problems faster than you could solve them without it, and which makes mistakes infrequently enough that you waste less time fixing them than you would doing everything by yourself then yes, it's plenty reliable enough now.
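For what it's worth, the loop being described is simple to sketch. The following is a stripped-down illustration of the pattern rather than any specific tool's implementation: generate_code() is a placeholder for whatever model API you use, and the rest just runs the generated script and feeds the traceback back in on failure.

    import subprocess
    import sys
    import tempfile

    def generate_code(task, feedback=None):
        """Placeholder for a call to your LLM of choice (not a real API)."""
        raise NotImplementedError

    def try_until_it_runs(task, max_attempts=3):
        feedback = None
        for _ in range(max_attempts):
            code = generate_code(task, feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run(
                [sys.executable, path], capture_output=True, text=True, timeout=60
            )
            if result.returncode == 0:
                return code                  # it ran cleanly; hand the code back
            feedback = result.stderr         # it didn't; feed the error back to the model
        raise RuntimeError("still failing after retries:\n" + (feedback or ""))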
It's very difficult to argue against the point that Claude Code:
1) was a paradigm shift in terms of functionality, despite, to be fair, at best, incremental improvements in the underlying models.
2) produced results that are, I estimate, an order of magnitude better in terms of output.
I think it's very fair to distill “AI progress 2025” to: you can get better results (up to a point; better than raw output anyway; scaling to multiple agents has not worked) without better models, just with clever tools and loops. (…and video/image slop infests everything :p).
My point is purely that, compared to 2024, the quality of the code produced by LLM inference agent systems is better.
To say that 2025 was a nothing burger is objectively incorrect.
Will it scale? Is it good enough to use professionally? Is this like self driving cars where the best they ever get is stuck with an odd shaped traffic cone? Is it actually more productive?
Who knows?
I'm just saying… LLM coding in 2024 sucked. 2025 was a big year.
Invariably they've never used AI, or at most very rarely. (If they used AI beyond that, this would be admission that it was useful at some level).
Therefore it's reasonable to assume that you are in that boat. Now that might not be true in your case, who knows, but it's definitely true on average.
- fart out demos that you don't plan on maintaining, or want to use as a starting place
- generate first-draft unit tests/documentation
- generate boilerplate without too much functionality
- refactor in a very well covered codebase
It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet, telling it to not do something does not guarantee that it won't do it again), and you as the developer submitting LLM-generated code for review need to review it closely before even putting it up (unless you feel like offloading this to your team) to the point that it's not that much faster than having written it yourself.
a personal attack would be, e.g., calling him a DC.
all I did was point out the intellectual dishonesty of his argument. that's an attack on his intellectually dishonest argument, not his person.
by all means go ahead and ban me
Ditto for "I am very disappointed about your BULLSHIT" in the GP comment.
(For anyone else reading this thread: my comment originally just read "Got a good news story about that one?" - justatdotin posted this reply while I was editing the comment to add the extra text.)
Is he, really? Most of his blog posts are little more than opportunistic, buttressing commentary on someone else's blog post or article, often with a bit of AI apologia sprinkled in (for example, marginalizing people as paranoid for not taking AI companies at their word that they aren't aggressively scraping websites in violation of robots.txt, or exfiltrating user data in AI-enabled apps).
EDIT: and why must he link to his blog so often in his comments? How is that not SEO/engagement farming? BTW dang, I wasn't insinuating the mods were in league with him or anything, just that, IMO, he's long past the point at which good faith should no longer be assumed.
> I think people would be less mad
People aren't mad about this. The vast majority of this community values simonw's contributions, which are well within the sweet spot for material on HN. That's why his material gets upvoted, as minimaxir (no friend of astroturfers) has pointed out elsewhere in this thread: https://news.ycombinator.com/item?id=46451969.
The fact that an old-timer like yourself comes forward and says it means that the newer people aren't nutters for thinking it.
"buttressing commentary on someone else's blog post"
That's how link blogs work. I wrote more about my approach to that here: https://simonwillison.net/2024/Dec/22/link-blog/
(And yes, there I go again linking to something I've written from a comment. It's entirely relevant to the point I am making here. That's why I have a blog - so I can put useful information in one place.)
I'll also note that I don't ever share links to my link blog posts on Hacker News myself - I don't think they're the right format for a HN post. I can't help if other people share them here: https://news.ycombinator.com/from?site=simonwillison.net
Are you really going to insult my and others' intelligence like this? Directly or indirectly, your motivation is money. You already offer monthly subscriptions to your blog, and you're clearly trying to build a monetizable brand for yourself as a leading authority on AI, especially as it pertains to software development.
Sponsorship from my monthly newsletter doesn't come close.
Seriously, do you have any idea how much money I'm leaving on the table right now NOT having a real job in this space?
Being a blogger is wildly financially irresponsible!
Do you really want to be an employee? Let's see what your reservation price is first.
I'm pretty sure you'd need to be paid a lot to forgo having control over your time and so on. Let's keep it one-hunnid.
At some point I'm going to need to get back to earning more than I spend.
But given the volume of LLM slop, it was kind of obvious and known that even the moderators now have "favourites" over guidelines.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity. [0]
The blog itself is clearly used as promotion all the time when the original source(s) are buried deep in the post and almost all of the links link back to his own posts.
This is a first on HN and a new low for the moderators, who, as admitted, have regular promotional favourites at the top of HN.
Simon's posts are not engagement farming by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.
He even reposted the same link (which is about AI) from one of his posts after the upvotes fell off, until the second submission reached the top, with the intention of promoting his own blog.
Let me simply prove my point to you on how predictable this spam is.
He will do a blog post this month about this paper [0] with an expert analysis by either someone else (or even an LLM) with the primary intention of the blog being used for self promotion with at least one link back to his own blog.
> ...which is then upvoted by the Hacker News community
You don't know that. But what we do know is that even the moderators now have "favourites". Anyone else would be shot down for promotional spam.
Where did I do that?
> He will do a blog post this month about this paper [0]
That paper you linked to is a perfect example of where my approach can add value!
Did you read it? Do you understand what it is saying? It is dense.
I would love to read an evaluation of that paper by someone who can rephrase the core ideas and conversations into a couple of paragraphs that help me understand it, and help me figure out if I should invest further effort in learning more.
I have a whole tag on my blog for that kind of content called paper-review: https://simonwillison.net/tags/paper-review/ - it's my version of the TikTok meme "I read X so you don't have to".
Honestly, your problem doesn't seem to be with me so much as it seems to be with the concept of blogging in general.
https://news.ycombinator.com/item?id=46409736
https://news.ycombinator.com/item?id=46395646
https://news.ycombinator.com/item?id=46209386
This is obviously an abuse of HN, regardless of who you're being aggressive towards. We ban accounts that keep doing this. If you keep doing it, we will ban you, so no more of this please.
If this post was by anyone other than me would you have any problems with its quality?
Add to this that all the hardware is already old and the amount of waste we’re producing right now is mind boggling, and for what, fun tools for the use of one?
I don’t live in the US, but the amount of tax money being siphoned to a few tech bros should have heads rolling and I really don’t want to see it happening in Europe.
But I guess we got a new version number on a few models and some blown up benchmarks so that’s good, oh and of course the svg images we will never use for anything.
I literally said:
"AI data centers continue to burn vast amounts of energy and the arms race to build them continues to accelerate in a way that feels unsustainable."
AND I linked to my coverage from last year, which is still true today (hence why I felt no need to update it): https://simonwillison.net/2024/Dec/31/llms-in-2024/#the-envi...
Or should we just keep chugging along as though there is no problem at all?
I also think we should use tax policy to provide financial incentives to reduce the environmental impact - tax breaks for renewables, tax hikes for fossil fuel powered data centers, that kind of thing.
2025: The Year in Open Source? Nothing, all resources were tied up to debunk a couple of Python web developers who pose as the ultimate experts in LLMs.
I made you a dashboard of my 2025 writing about open-source that didn't include AI: https://simonwillison.net/dashboard/posts-with-tags-in-a-yea...
Nvidia, Samsung, SK Hynix and some other vultures I forgot to mention are making serious bank right now.
Keep questions like this off of the propaganda thread.
I have a project to convert my multiplayer XNA game from C# to Javascript and to add networking to the game-play using LLMs.
They are far worse at it now than they were a year ago. A year ago they actually implemented the requirements (though inaccurately) to the best of their ability - especially Gemini.
Now they don't even come remotely close to implementing just the basic requirements.
The thing is, I'm giving them the entirety of the C# source code and spelling out what they should do.
This is the part they REALLY don't want you to say.
They can no longer train these models effectively and their performance is slipping. Late 2023 was the golden age.
How are you running them - regular chat interface or do you have them setup with Claude Code or Gemini CLI?
I am considering making a thread where I compel others to attempt to get what I'm trying to get out of it and show me their work.
The game is only around 25000-30000 LOC in C#.