Whoa whoa whoa, hold your horses. Code has a pretty important property that ordinary prose doesn't: it can make real things happen even if no one reads it (it's executable).
I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works.
"One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator.
The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism).
If it's not worth reading something the writer didn't take the time to write, then by extension nobody read the code.
Which means nobody understands it, beyond the external behaviour they've tested.
I'd have some issues with using such software, at least where reliability matters. Blackbox testing only gets you so far.
But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong.
We tell stories of the Therac-25, but 90% of software out there doesn't kill people. It annoys people and wastes time, yes, but reliability doesn't matter as much.
Email, the internet and networking, and floating-point operations are only somewhat reliable. No one is saying they will not use email because it might not be delivered.
As we give more and more autonomy to agents, that percentage may change. Just yesterday I was looking at hexapods, and the first thing one tells you (with a disclaimer that it's for competitions only) is that it has a lot of space for a weapon install. I had to briefly look at the website to make sure I hadn't accidentally clicked on some satirical link.
For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it.
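To make the "fill in the blanks" shape concrete, here is a minimal sketch (Python rather than the actual HTML form; the field names and template text are made-up placeholders, not the lawyer-reviewed templates):

```python
# Minimal fill-in-the-blanks letter generator, in the spirit described above.
# Field names and template text are hypothetical placeholders.
from string import Template

LETTER = Template(
    "Dear $recipient,\n\n"
    "I am writing regarding $subject at $address.\n\n"
    "Sincerely,\n$sender"
)

def generate_letter(form_data: dict) -> str:
    # safe_substitute leaves any unfilled blank visible instead of raising,
    # so a half-completed form still yields a reviewable draft.
    return LETTER.safe_substitute(form_data)

print(generate_letter({
    "recipient": "Ms. Smith",
    "subject": "the lease",
    "address": "12 Example St.",
    "sender": "A. Tenant",
}))
```

The "take input, run calculation, show output" tools follow the same shape: a well-tested calculation in the middle, with thin plumbing around it.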
But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc.
And to answer you more directly: generally, in my professional world, I don't use closed-source software often, for security reasons, and when I do, it's from major players with oodles more resources and capital expenditure than "some guy with a credit card paying for a Gemini subscription."
It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever).
Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy.
And people who don't fall into the 'most people' I just described probably won't want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it's something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn't what we are talking about here, I don't think.
Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from a "I don't care how you wrote it, as long as it works" mindset.
> came from a bit of innovation that LLMs are incapable of.
I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go.
Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it).
Agreed.
I see a lot of these discussions where a person gets feelings/feels mad about something and suddenly a lot of black and white thinking starts happening. I guess that's just part of being human.
Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it."
this is the literary equivalent of compiling and running the code.
Amen to that. I am currently cc'd on a thread between two third-parties, each hucking LLM generated emails at each other that are getting longer and longer. I don't think either of them are reading or thinking about the responses they are writing at this point.
There might be other professions where people get more hung up on formalities but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has learned that short emails are more likely to be read and acted on as well.
The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing.
I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven.
This is also the case for AI-generated projects, by the way: the backend projects I've been looking at often contain reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches, when they should have just used a library.
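For a concrete sense of what "just use a library" means here, a minimal Python sketch (the cached function is a hypothetical stand-in); the standard library already provides LRU eviction:

```python
# Standard-library LRU caching instead of a hand-rolled in-memory cache.
from functools import lru_cache

@lru_cache(maxsize=1024)  # least-recently-used entries are evicted past 1024
def fetch_user_profile(user_id: int) -> dict:
    # Stand-in for an expensive lookup (database query, HTTP call, etc.).
    return {"id": user_id}
```

For caching that isn't function-shaped, libraries such as cachetools offer an LRUCache class, so the eviction logic still doesn't need to be reinvented per project.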
DJing is an interesting example. Compared with, say, composition, beatmatching is "relatively" easy to learn, and it was solved by CD turntables that can beatmatch themselves, yet it has nothing to do with the taste you have to develop to be a good DJ.
In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another.
AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "Throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" that defines how LLMs work.
The definition of slop is poor taste. By that definition a lot of human work is also slop.
But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description."
At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full.
Honestly, I agree, but the rash of "check out my vibe-coded solution to a perceived $problem I have no expertise in, built in an afternoon" posts, and the flurry of domain experts responding "wtf, no one needs this," is kind of schadenfreude, but I feel a little guilty for enjoying it.
I feel like I can breeze past the easy, time-consuming infrastructure phase of projects and spend MUCH more time getting to high-level, interesting problems?
The most recent one I remember commenting on, the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account for reasons that didn't make any sense at all. These are already solved problems (in multiple ways) and it seemed like the person just didn't realize terraformer was already an existing, proven tool.
I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.
You might be on to something. Maybe it's self-selection (as in, people who want to engage deeply with a certain topic but lack domain expertise might be more likely to go for "vibecodable" solutions).
I made it because at that point in my career I simply didn't know that ansible existed, or that there were cloud solutions that were very cheap to do the same thing. I spent a crazy amount of effort doing something that ansible probably could have done for me in an afternoon. That's what these projects sometimes feel like to me. It's kind of like a solution looking for a problem a lot of the time.
I just scanned through the front page of Show HN and quickly eyeballed several of these types of things.
> I made it because at that point in my career I simply didn't know that ansible existed
Channels Mark Twain: "Sorry for such a long letter, I didn't have the time to make it shorter."
I write software when the scripts are no longer suitable.
It's about being oblivious, I suppose. Not too different to claiming there will be no need to write new fiction when an LLM will write the work you want to read by request.
I was dabbling in infrastructure consulting for a bit; prospects would often come to me with stuff like this, "well, I'll just have AI do it," and my response has been "ok, do that, but do keep me in mind if that becomes very difficult a year or two down the road." I haven't yet followed up with any of them to see how they are doing, but some of the ideas I heard were just absolute insanity to me.
But AI does the code. Well... usually.
People call my project creative. Some are actually using it.
I feel many technical things aren't really technical things; they are simply problems where "have a web app" is part of the solution, but the real part of the solution is in the content and the interaction design, not in how you solved the challenge technically.
Every single time I have vibe coded a project I cared about, letting the AI rip with mild code review and rigorous testing has bitten me in the ass, without fail. The AI doesn't extend the project with the taste that I want, things clearly spiral out of control, etc. Just satisfying some specs at the time of creation isn't enough. These things evolve; they're a living being.
Boring is supposed to be boring for the sake of learning. If you're bored, then you're not learning. Take a look back at your code in a week's time and see if you still understand what's going on. The top level, maybe, but the deep-down cogs of the engine of the application? I doubt it. Not to preach, but that's what I've discovered.
Unless you already have the knowledge, then fine: "here's my code, make it better." But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
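For instance, a fixed-size ring buffer is already one import away in most ecosystems; a minimal Python sketch, purely for illustration:

```python
# A deque with maxlen acts as a ring buffer: once full, appending a new
# item silently drops the oldest one.
from collections import deque

ring = deque(maxlen=4)
for sample in [1, 2, 3, 4, 5, 6]:
    ring.append(sample)

print(list(ring))  # [3, 4, 5, 6] -- the two oldest samples were evicted
```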
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine: "here's my code, make it better." But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
> AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
Pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourself, and also to have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-coded newbie entry, no matter how long you worked on the "important stuff".
Why? Because the barrier to entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you can no longer guarantee, or have any way of telling quickly, that posters spent some time on the problem is a great observation.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
Take for example (an extreme example) the paintbrush. Do you care where each bristle lands? No of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art the more you notice just how much art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.
We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.
But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art.)
The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.
There are plenty of times in which people will prefer the technically inferior or less aesthetically pleasing output because of the story accompanying it. Different people select different intention to value, some select for the intention to create an accurate depiction of a beautiful landscape, some select for the intention to create a blurry smudge of a landscape.
I can appreciate the art piece made by someone who only has access to a pencil and their imagination more than one by someone who has access to Adobe CC and the internet, because it's not about the output to me, it's about the intention and the story.
Saying "I made this drawing" implies that you at least sat down and had the intention to draw the thing. Then revealing that you actually used AI to generate it changes the baseline assumption and forces people to re-evaluate it. So it's not "finding a creative result that they value, but retroactively devaluing it if it's not created by a process that they consider artistic."
Sometimes you do, which is why there’s not only a single type of brush in a studio. You want something very controllable if you’re doing lineart with ink.
Even with digital painting, there’s a lot of fussing with the brush engine. There’s even a market for selling presets.
Rest of the world: "No, we're gatekeeping because we think the result isn't good."
If someone can cajole their LLM to emit something worthwhile, e.g. Terence Tao's LLM generated proofs, people will be happy to acknowledge it. Most people are incapable of that and no number of protestations of gatekeeping can cover up the unoriginality and poor quality of their LLM results.
The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But, now seeing it happen in real life is more than just alarming. It's ridiculous. It's like the opposite of compressing a payload over the wire: We're taking our output, expanding it, transmitting it over the wire, and then compressing it again for input. Why do we do this?
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone said "why can't they just generate documentation for it themselves with AI"? It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
No AI for actual prose writing, no question. Don't let a single word an LLM generates land in your document; even if you like it, kill it.
"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".
If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.
"Gatekeeping" is NOT when you require someone to be willing learn a skill in order to join a community of people with that skill.
And in fact, saying "you are too stupid to learn that on your own, use an AI instead" is kind of gatekeeping on its own, because it implicitly creates a shrinking elite who actually have the knowledge (that is fed to the AI so it can be regurgitated for everyone else), shutting out the majority who are stuck in the "LLM slum".
We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with code of conduct, and so on). This is useful and is a necessary shortcut, as if we had to assess everyone and everything from first principles every time things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except if briefly); the cue is just replaced with something else that is more difficult to circumvent.
I think that brief time after Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there’s more and more closed, private or paid communities.
"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."
I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.
For what it's worth, the unifying idea behind both is basically a "hazing ritual", or more neutrally phrased, skin in the game. It takes time and energy to look at things people produce. You should spend time and energy making sure I'm not looking at a pile of shit. Doesn't matter if it's a website or prose.
Obviously some people don't. And that's why the signal to noise ratio is becoming shit very quickly.
Wouldn't the masses of Show HN posts that have gotten no interest pre-AI refute that?
And we are going to need more curation so goddamned badly...
I would argue good ideas are not so easy to find. It is harder than it seems to fit the market, and that is why most apps fail. At the end of the day, everyone is blinded by hubris and ignorance... I do include myself in that.
Users appear to be happy but it's early. And while we do scrub the writing of typical AI writing patterns there's no denying that it all probably sounds somewhat similar, even as we apply a unique style guide for each user.
I think this may be ok if the piece is actually insightful. Fingers crossed.
Non-boring people are using AI to make things that are ... not boring.
It's a tool.
Other things we wouldn't say because they're ridiculous at face value:
"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."
An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?
No one in their right mind would use one.
Using the wrong tool for the job results in disaster.
It's like watching a guy bang rocks together to "vibe build" a house. Good luck.
These days, it's basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.
LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.
So it's not that LLMs make programming boring; they've allowed boring projects to survive. They've also boosted the production of non-boring ones, but those are just rarer in the overall pool of products.
The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice
It's a fantastic editor!
I review all the code Claude writes and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style) - the difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyways because the tickets must flow.
The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct
EDIT: also, just like creating AGENT.md files to help AI write code your way for your projects, etc. If you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy, just because you're leaning on LLMs.
Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.
> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.
The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.
AI is almost the exact opposite. It's verbose fluff that's only superficially structured well. It's worse than average
(waiting for someone to reply that I can tell the AI to be concise and meaningful)
"You're describing the default output, and you're right — it's bad. But that's like judging a programming language by its tutorial examples.
The actual skill is in the prompting, editing, and knowing when to throw the output away entirely. I use LLMs daily for technical writing and the first draft is almost never the final product. It's a starting point I can reshape faster than staring at a blank page.
The real problem isn't that AI can't produce concise, precise writing — it's that most people accept the first completion and hit send. That's a user problem, not a tool problem."
LLMs and agents work the same way. They’re power tools. Skill and judgment determine whether you build more, or lose fingers faster.
and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing rather than deciding "does this communicate my ideas efficiently"?
for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.
office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO.
No one finds AI-assisted prose/code/ideas boring, per se. They find bad prose/code/ideas boring. "AI makes you boring" is this generation's version of complaining about typing or cellular phones. AI is just a tool; it's up to humans how to use it.
If they don't care enough to improve themselves at the task in the first place then why would they improve at all? Osmosis?
If this worked, then letting a world-renowned author write all my letters for me would make me a better writer. Right?
Who cares if you're a "good writer"? Being "easy to understand" is the real achievement.
It also seems like a natural result of all of the people challenging the usefulness of AI. It motivates people to show what they have done with it.
It stands to reason that the things that take less effort will arrive sooner, and be more numerous.
Much of that boringness is people adjusting to what should be considered interesting to others. With a site like this, where user voting is supposed to facilitate visibility, I'm not even certain that the submitter should judge the worth of the submission to others. As long as they sincerely believe that what they have done might be of interest, it is perhaps sufficient. If people do not like it, then it will be seen by few.
There is an increase in things demanding attention, and you could make the case that this dilutes the visibility such that better things do not get seen, but I think that is a problem of too many voices wishing to be heard. Judging them on their merits seems fairer than placing pressure on the ability to express. This exists across the internet in many forms. People want to be heard, but we can't listen to everyone. Discoverability is still the unsolved problem of the mass media age. Sites like HN and Reddit seem to be the least-worst solution so far. Much like Democracy vs Benevolent Dictatorship an incredibly diligent curator can provide a better experience, but at the cost of placing control somewhere and hoping for the best.
Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.
The silver lining is that good authors could also use LLMs to hide their identity while making controversial opinions. In an internet that's increasingly deanonymized, a potentially new privacy enhancing technique for public discourse is a welcome addition.
That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.
Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short lived. Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.
The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?
I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.
The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.
Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.
This is a point that often results in bad faith arguments from both AI enthusiasts and AI skeptics. Enthusiasts will say "everything is a remix and the most creative works are built on previous works" while skeptics will say "LLMs are stochastic parrots and cannot create anything new by technical definition".
The truth is somewhere in the middle, which unfortunately invokes the Golden Mean Fallacy that makes no one happy.
Creativity often requires reasoning in unusual ways, and evaluating those ideas requires learning. The first part we can probably get LLMs to do; the latter part we can't (RL is a separate process and not really scalable).
Even without any of that, you can prompt your way into new things. I'm building a camper out of wood, and I've gotten older LLM models to make novel camper designs just by asking it questions and choosing things. You can make other AI models make novel music by prompting it to combine different aspects of music into a new song. Human creativity works that way too. Think of all the failed attempts at new things that humans come up with before a good one actually sticks.
My take:
1. AI workflows are faster - saving people time
2. Faster workflows involve people using their brain less
3. Some people use their time savings to use their brain more, some don't
4. People who don't use their brain are boring
The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.
I have a report that I made with AI on how customers leave our firm…The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.
As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.
> They are original to me, and that feels like an insightful moment, and thats about it.
The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.
An app can be like a home-cooked meal: made by an amateur for a small group of people.[0] There is nothing boring about knocking together hyperlocal software to solve a super niche problem. I love Maggie Appleton's idea of barefoot developers building situated software with the help of AI.[1, 2] This could cause a cambrian explosion of interesting software. It's also an iteration of Steve Jobs' computer as a bicycle for the mind. AI-assisted development makes the bicycle a lot easier to operate.
[0] https://www.robinsloan.com/notes/home-cooked-app/
[1] https://maggieappleton.com/home-cooked-software
[2] https://gwern.net/doc/technology/2004-03-30-shirky-situateds...
In an industry that does not crave bells and whistles, having the ability to refactor, or bring old systems back to speed can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynic workforce, and I am all out for it.
Online ecosystem decay is on the horizon.
There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"
Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."
Of course computers are useful. But he meant that they are useless for a creative. That's still true.
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
They're solving small problems, or problems that don't really exist, usually in naive ways. The things being shown are 'shallow'. And it's patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.
The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.
Agree: if you use AI as a replacement for thinking, your output converges to the mean. Everything sounds the same because it's all drawn from the same distribution.
Disagree: if you use AI as a draft generator and then aggressively edit with your own voice and opinions, the output is better than what most people produce manually — because you're spending your cognitive budget on the high-value parts (ideas, structure, voice) instead of the low-value parts (typing, grammar, formatting).
The tool isn't the problem. Using it as a crutch instead of a scaffold is the problem.
That process is often long enough to think things through a bit and even have "so what are you working on?" conversations with a friend or colleague that shakes out the mediocre or bad, and either refines things or makes you toss the idea.
But you could learn these new perspectives from AI too. It already has all the thoughts and perspectives from all humans ever written down.
At work, I still find people who try to put together a solution to a problem, without ever asking the AI if it's a good idea. One prompt could show them all the errors they're making and why they should choose something else. For some reason they don't think to ask this godlike brain for advice.
Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"
There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.
But what I'm replying to, and the vast majority of the AI denial I see, is rooted in a superficial, defensive, almost aesthetic knee jerk rejection of unimportant aspects of human taste and preference.
Ironically, good engineering is boring. In this context, I would hazard that interesting means risky.
I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/
Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Lmk if you find it useful, will likely ShowHN it once polished.
This behavior has already been happening with Pangram Labs which supposedly does have good AI detection.
The WIP features measure breadth and density of these tropes, and each trope has frequency thresholds. Also I don't use AI to identify AI writing to avoid accusatory hallucinations.
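Roughly, a "frequency threshold" looks like the sketch below (illustrative Python; the trope list and the numbers are made-up placeholders, not the values the site actually uses):

```python
# Count how often each trope phrase appears per 1,000 words and flag any
# phrase that exceeds its (made-up) threshold.
import re

TROPES = {
    "delve": 1.0,           # max occurrences per 1,000 words
    "tapestry": 0.5,
    "in conclusion": 0.5,
}

def flag_tropes(text: str) -> dict:
    words = max(len(text.split()), 1)
    report = {}
    for phrase, threshold in TROPES.items():
        # Case-insensitive substring matches of the literal phrase.
        count = len(re.findall(re.escape(phrase), text, flags=re.IGNORECASE))
        per_thousand = count * 1000 / words
        report[phrase] = (round(per_thousand, 2), per_thousand > threshold)
    return report

print(flag_tropes("In conclusion, let us delve into this rich tapestry."))
```

Breadth is then just how many distinct tropes get flagged in a single piece, and density is how far over the thresholds they land.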
I do appreciate the feedback though and will take it into consideration.
How then is it different from the Wikipedia page you linked?
As much as I'd like to know whether a text was written by a human or not, I'm saddened by the fact that some of these writing patterns have been poisoned by these tools. I enjoy, use, and find many of them to be an elegant way to get a point across. And I refuse to give up the em dash! So if that flags any of my writing—so be it.
Believe me I've had to adjust my writing a lot to avoid these tells, even academics I know are second guessing everything they've ever been taught. It's quite sad but I think it will result in a more personable internet as people try to distinguish themselves from the bots.
I applaud your optimism, but I think the internet is a lost cause. Humans who value communicating with other humans will need to retreat into niche communities with zero tolerance for bots. Filtering out bot content will likely continue to be impossible, but we'll eventually settle on a good way to determine if someone is human. I just hope we won't have to give up our privacy and anonymity for it.
At least this CEO gets it. Hopefully more will start to follow.
This echoes the comments here about enjoying not writing boilerplate. The issue there is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.
Despite the title I'm a little more optimistic about agentic coding overall (but only a little).
All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.
[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...
I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.
Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact if you use Deep Research et al, you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.
I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.
To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
AI is a bicycle, not a motorcycle.
Here's a guy who has had an online business dependent on ranking well in organic searches for ~20 years and has 2.5 million subs on YouTube.
Traffic to his site was fine to sustain his business this whole time, up until about 2-3 years ago, when AI took over search results and stopped ranking his site.
He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.
He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.
NOTE: I had never seen him in my YouTube feed until the other day, but it resonated a lot with me because I have had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so, when traffic to my site nose-dived. That took a very satisfying lifestyle business down to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.
Search engines want you to remove your personal take on things and write in a very machine oriented / keyword stuffed way.
This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just ship the one-shot output.
As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:
https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)
https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)
It's the iteration that is the true engineering work as it requires enough knowledge to a) know what's wrong and b) know if the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done...but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work instead of watching the agent generated code.
That is for sure the word of the year, true or not. I agree with it, I think
it's at 10 now. note: the article does not say "taste" once
Then prove it. Otherwise, you're just assuming AI use must be good, and making up things to confirm your bias.
derivative work might be useful, but it's not interesting.
Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.
Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.
As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?
I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)
As determined by whom?
> conversing with AI models and thinking things through with them certainly decreased my blandness
Again, determined by whom?
I'm being genuine. Are those self-assessments? Because those specific judgements are something for other people to make.
Definitely at a certain threshold it is for others to decide what is boring and not, I agree with that.
In any case, my simple point is that AI can definitely raise the floor, as the other comment more succinctly expressed. Irrelevant for people at the top, but good for the rest of us.
Yes, to an extent. You can, for example, evaluate if you’re sensitive or courageous or hard working. But some things do not concern only you, they necessitate another person, such as being interesting or friendly or generous.
A good heuristic might be “what could I not say about myself if I were the only living being on Earth?”. You can still be sensitive or hard working if you’re alone, but you can’t be friendly because there’s no one else to be friendly to.
Technically you could bore yourself, but in practice that’s something you do to other people. Furthermore, it is highly subjective, a D&D dungeon master will be unbearably boring to some, and infinitely interesting to others.
> I know I am unfortunately less ambitious, driven and outgoing than others
I disagree those automatically make someone boring.
I also disagree with LLMs improving your situation. For someone to find you interesting, they have to know what makes you tick. If what you have to share is limited by what everyone else can get (by querying an LLM), that is boring.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...
However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.
Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.
I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.
So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.
Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about, dozens of them, in my notebooks.
And that’s when it dawned on me just how much of AI hype has been around boring, seen-many-times-before, technologies.
This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.
Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person's dull project, even though those authors spent next to zero effort themselves in creating it.
It’s so dumb.
This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.
That's the same approach as vibe coding. Not "asking Claude to make a CRUD app", but using it to cheaply explore solution spaces that an expert's priors would tell you aren't worth trying. The wind tunnel didn't do the thinking for the Wrights, it just made thinking and iterating cheap. That's what LLMs do for code.
The blog post's argument is that deep immersion is what produces original ideas. But what history shows is that deeply immersed experts are often totally wrong and the outsiders who iterate cheaply and empirically take the prize. The irony here is that LLM haters feel it falls victim to the Einstellung effect [1]. But the exact opposite is true: LLMs make it so cheap to iterate on what we thought in the past were suboptimal/broken solutions, which makes it possible to cheaply discover the more efficient and simpler methods, which means humans uniquely fall victim to the Einstellung effect whereas LLMs don't.
The blog's actual point isn't some deference-to-credentials straw man you've invented; it's that stuff lazily hashed together to get to "good enough" without effort is seldom as interesting as people's passion projects. And the Wright brothers' application of their hardware assembly skills and the scientific method to theory they'd gone to great lengths to have sent to Dayton, Ohio is pretty much the antithesis of getting to "good enough" without effort. Probably nobody around the turn of the century devoted more thinking and doing time to powered flight.
Using AI isn't necessary or sufficient for getting to "good enough" without much effort (and it's of course possible to expend lots of effort with AI), but it does act as a multiplier for creating passable stuff with little thought (to an even greater extent than templates and frameworks and stock photos). And sure, a founding tenet of online marketing (and YC) from long before Claude is that many small experiments to see if $market has any takers might be worth doing before investing thinking time in understanding and iterating, and some people have made millions from it, but that doesn't mean experiments stitched together in a weekend mostly from other people's parts aren't boring to look at or that their hit rate won't be low....
Imagine how dynamic the world was before radio, before tv, before movies, before the internet, before AI? I mean imagine a small town theater, musician, comedian, or anything else before we had all homogenized to mass culture? It's hard to know what it was like but I think it's what makes the great appeal of things like Burning Man or other contexts that encourage you to tune out the background and be in the moment.
Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.
How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.
I think that when people use AI to, for example, compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but are complete nonsense to anyone who has experience with both.
If you want to build something beautiful, nothing is stopping you, except your own cynicism.
"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.
AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security and the areas that I am embarrassed to have overlooked on previous projects.
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
These days I can never stop thinking about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, the line goes up.
They're token predictors. This is inherently a limited technology, which is optimized for making people feel good about interacting with it.
There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.
Whether it is the guy reporting on his last year of agentic coding (did half-baked evals of 25 models that will be off the market in 2 years) or Steve Yegge smoking weed and gaslighting us with "Gas Town" or the self-appointed Marxist who rails against exploitation without clearly understanding what role Capitalism plays in all this, 99% of the "hot takes" you see about AI are by people who don't know anything valuable at all.
You could sit down with your agent and enjoy having a coding buddy, or you could spend all day absorbed in FOMO reading long breathless posts by people who know about as much as you do.
If you're going to accomplish something with AI assistants it's going to be on the strength of your product vision, domain expertise, knowledge of computing platforms, what good code looks like, what a good user experience feels like, insight into marketing, etc.
Bloggers are going to try to convince you there is some secret language to write your prompts in, or some model which is so much better than what you're using but this is all seductive because it obscures the fact that the "AI skills" will be obsolete in 15 minutes but all of those other unique skills and attributes that make you you are the ones that AI can put on wheels.
Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.
I'm sure some of them will actually hold out. Just like those people still buying Vinyl because Spotify is 'not art' or whatever.
Have fun all, meanwhile I built 2 apps this weekend purely for myself. Would've taken me weeks a few years ago.
> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.
There's an argument to be made that rubber-ducking and just having a mirror to help you navigate your thoughts is ultimately more productive and provides more useful thinking than just operating in a vacuum. LLMs are particularly good at telling you when your own ideas are un-original, because they are good at doing research (and also have the median of existing ideas already baked into their weights).
They also strawman usage of LLMs:
> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Who says you aren't spending time thinking about a problem with LLMs? The same users that don't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.
I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.