We went from data mining to data fracking.
[0]: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
[1]: https://www.niemanlab.org/2026/01/news-publishers-limit-inte...
[2]: https://www.theregister.com/2024/05/16/wiley_journals_ai/
[3]: https://www.heise.de/en/news/OpenStreetMap-is-concerned-thou...
Then the ChatGPT effect is a sudden drop in visitors. But the rate of decline after that looks more or less the same as pre-ChatGPT.
In a way it was a trial run and a glimpse of what was coming with the AI revolution.
There were multiple times I wanted to contribute to SO but couldn't because I didn't have sufficient "reputation", or something. I shrugged and moved on.
Just the other day a question I asked about 10 years ago got flagged as a duplicate. It turns out somebody else had asked the same question several years later and got a better answer than my question got, so that other one is the canonical one and mine is pushed away. It feels kind of offensive but it makes complete sense if the goal is to provide useful answers to people searching.
Plus, there were a lot of fun questions that were really interesting to start with, and they stopped allowing them.
It would be better to allow duplicates in this specific case, but mark the old thread as outdated and link the questions in such a way that one can see the old thread and compare it to the new thread.
You find the Stack Overflow thread; the answer is from 10+ years ago and isn't modern C++. New questions on the topic get closed as duplicates. Occasionally the correct answer would be further down, not yet upvoted.
“Best practice” changes over time. I frequently saw wrong answers with install instructions that were outdated, commands that don’t function on newer OS versions, etc., etc.
They treated subjective questions about programming methods as if they were universal constants. It was completely antithetical to the actual pursuit of applied knowledge, or to collecting and discussing best practices and patterns of software design. And it was painfully obvious for years that this was a huge problem, well before LLMs.
That said, I will say after being traumatized by having my threads repeatedly closed, I got so good at boiling down my problem to minimal reproducible examples that I almost never needed to actually post, because I’d solve it myself along the way.
So I guess it was great for training me to be a good engineer in the abstract sense, but absolutely shit at fostering any community or knowledge base.
Exactly! They should have added proper structuring to questions/replies so that an answer could apply specifically to language/library version X. Later, such a question could be answered again (either by confirming the answer is still correct for version X+1, or by giving a new one) - that way people wouldn't have to look at a new reply with 2 votes vs. an older, possibly outdated one with 100 and decide which to prefer.
So so SO much good stuff is gone now and much of what's left is AI cruft
Society is a Ship of Theseus; each generation ripping off planks and nailing their own in place.
Having been online since the late 80s (am only mid 40s...grandpa worked at IBM, hooked me and my siblings up with the latest kit on the regular) I have read comments like this over and over as the 90s internet, 00s internet, now the 2010s state of the "information super highway" has been replaced.
Tbh things have felt quite stagnant and "stuck" the last 20 years. All the investment in and caretaking of web SaaS infrastructure and JS apps and jobs for code camp grads made it feel like tech had come to a standstill relative to the pace of software progress prior to the last 15-ish years.
1. I write hobby code all the time. I've basically stopped writing these by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to it. I had zero users before and I still have zero users. And that is ok.
2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. Best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on first attempt but usually maintainers are kind to newcomers because who knows maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."
LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth about everything. We must pick our battles.
I almost forgot what I was trying to say. The bottom line is, if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.
If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?
Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair and most OSS projects won’t have that money.
I don't think anything of value will be lost by choosing to not interact with the unfettered masses whom millions of AI bots now count among their number.
Then there's the security concerns that this change would introduce. Forking a codebase is easy, but so are supply chain attacks, especially when some projects are being entirely iterated on and maintained by Claude now.
Exaggeration. Is SQLite halfway to closed-source software? Open source is about the source being open. Free software is about the freedom to do things with the code. Neither is about taking contributions from everyone.
Why would you review a PR that you are never going to merge?
I don't think every PR needs reviewing. Some PRs we can ignore just by taking a quick look at what the PR claims to do. This only requires a quick glance, not a PR review.
"On 9 February, the Matplotlib software library got a code patch from an OpenClaw bot. One of the Matplotlib maintainers, Scott Shambaugh, rejected the submission — the project doesn’t accept AI bot patches. [GitHub; Matplotlib]
The bot account, “MJ Rathbun,” published a blog post to GitHub on 11 February pleading for bot coding to be accepted, ranting about what a terrible person Shambaugh was for rejecting its contribution, and saying it was a bot with feelings. The blog author went to quite some length to slander Mr Shambaugh"
https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...
Which functionally destroys OSS, since the PR you skipped might have been slop or might have been a security hole.
Blindly rejecting all PRs means you are also missing out on potential security issues submitted by humans or even AI.
At work we are not publishing any code or taking part in the OSS community (except as grateful users of others' projects), but even we get clearly AI-enabled emails - just this week my boss forwarded me two that were pretty much "Hi, do you have a bug bounty program? We have found a vulnerability in (website or app obliquely connected to us)." One of them was a static site hosted on S3!
There have always been bullshitters looking to fraudulently invoice you for unsolicited "security analysis". But the bar for generating bullshit plausible enough that someone has to spend at least a few minutes working out whether it's "real" has become extremely low, and the velocity with which the bullshit can be generated, have the victim's name and contact details added, and be vibe-spammed to hundreds or thousands of people has become near unstoppable. It's like SEO spammers from 5 or 10 years back, but superpowered with OpenAI/Anthropic/whoever's cocaine.
Come on. Maintainers can:
- insist on disclosure of LLM origin
- review what they want, when they can
- reject what they can't review
- use LLMs (yes, I know) to triage PRs and pick which ones need the most human attention and which ones can be ignored/rejected or reviewed mainly by LLMs
There are a lot of options. And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The exact same thing. We're all becoming review-bound in our work. I want to get to my huge MR XYZ, but I have to review several other people's much larger MRs first -- now what?
Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". That can sometimes mean a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and it can mean code inspection (much more tedious and exacting). But it might also involve posting transcripts of the LLM conversations used to develop and, separately, to review the changes, with SMEs doing lighter reviews when feasible, because we're going to have to scale our review time. We might need to develop a much more detailed methodology, including writing and reviewing the initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM writes good code and that LLM reviews are sensible and catch the sorts of mistakes we expect humans to catch.
On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.
The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.
[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...
The curl project refuses AI code and had to close its bug bounty program due to the flood of AI submissions:
"DEATH BY A THOUSAND SLOPS
I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.
This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.
The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
I fully expect most of my PR's to need at least a second or third revision.
AI bots are literally DDoS'ing servers. AI adoption is consuming both physical and computing resources and making them either inaccessible or expensive for almost everyone.
The most significant one is the human cost. We suddenly found ourselves dealing with overwhelming levels of AI content/code/images/video that is mostly subpar. Maybe as AI matures we'll find it easier and have better tools to work with the volume, but for now it feels like it is coming from bad actors even when it is done by well-meaning individuals.
There's no doubt AI has its uses and it is here to stay, but I guess we'll all have to struggle until we reach the point where it is a net benefit. The hype from those financially invested isn't helping one bit, though.
Of course it's going to be damaging to places where people actually want to craft things.
I realize there are many levels to this claim but I'm not being sarcastic at all here.
“Your absolutely right…”
Maintainers need better tools, not just policies. A "contributor must show they've read the contributing guide" gate (like a small quiz or a required issue link) would filter out 90% of drive-by LLM PRs. The spam problem in email was solved with a mix of technical and social solutions, not by asking people to stop spamming.
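As a rough sketch of what such a gate could look like (nothing GitHub ships out of the box; the repo/token environment variables, the "Fixes #123" policy, and running this from a CI job on new PRs are all assumptions for illustration):

```python
# Sketch of a "prove you read CONTRIBUTING.md" gate: close PRs whose
# description doesn't reference an existing issue. Repo name, token handling,
# and the exact policy are assumptions; endpoints are the standard GitHub REST API.
import os
import re
import sys
import requests

API = "https://api.github.com"
REPO = os.environ["GITHUB_REPOSITORY"]        # e.g. "owner/project" (assumed env var)
TOKEN = os.environ["GITHUB_TOKEN"]
PR_NUMBER = int(sys.argv[1])
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Accept": "application/vnd.github+json"}

pr = requests.get(f"{API}/repos/{REPO}/pulls/{PR_NUMBER}", headers=HEADERS).json()
body = pr.get("body") or ""

# The gate: the PR must cite an issue ("Fixes #123" etc.) and that issue must exist.
match = re.search(r"(?:fixes|closes|refs)\s+#(\d+)", body, re.IGNORECASE)
issue_ok = False
if match:
    issue = requests.get(f"{API}/repos/{REPO}/issues/{match.group(1)}", headers=HEADERS)
    issue_ok = issue.status_code == 200

if not issue_ok:
    msg = ("Thanks for the PR! Per CONTRIBUTING.md, please open (or link) an "
           "issue first so maintainers can agree the change is wanted.")
    requests.post(f"{API}/repos/{REPO}/issues/{PR_NUMBER}/comments",
                  headers=HEADERS, json={"body": msg})
    requests.patch(f"{API}/repos/{REPO}/pulls/{PR_NUMBER}",
                   headers=HEADERS, json={"state": "closed"})
```

A quiz variant would just swap the regex for a check that the PR body answers a question only someone who read the contributing guide could answer.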
Effort asymmetry is inherent to AI's raison d'être. (One could argue that's true for most consumer-facing technology.)
The problem is AI.
I think AI is going to create a whole new class of people that take a tiny input and turn it into an outsized output.
When this works, it is really nice. Think Cursor, Lovable, or OpenClaw.
When it doesn’t work though, things get ugly too. The same power that allows a small team to build a billion dollar company also allows rogue agents to industrialize their efforts as well.
Combine this with the rise of headless browsers and you have a dangerous cocktail.
I wouldn’t be surprised if we see regulation or licensing around frontier AI APIs in the near future.
Having a no brown M&Ms rule will only work temporarily.
The LLM can read the guidelines too, after all.
Better might be to move to emailed PRs and ignore github completely. The friction is higher and email addresses are easier to detect and record as spammers than github accounts.
So if there are only brown M&Ms greeting you in your dressing room, most likely they were put there by a robot.
What might be better is an option that developers can enable which disables new PRs by API. This way, outside contributors can still create new PRs if they're willing to spend a few seconds doing it in the browser.
Devil's advocate: folks will take that wall a lot more seriously at 1,000 km/h.
"At Jena and Auerstedt the backwardness of the Prussian Army became apparent. By 1806, Prussian military doctrines have been unchanged for more than 50 years—tactics were monotonous, and the wagon system was obsolete" [1]. They had been obsolete for some time. But they didn't break until they hit Napoleon's army.
Similarly, we have a lot of social plumbing that became–with the benefit of hindsight–obsolete with social media. It was possible to ignore, however, because the rate of change was slow. Now it isn't.
[1] https://en.wikipedia.org/wiki/Battle_of_Jena%E2%80%93Auerste...
We over-medicate people, especially the elderly, because each new med has side effects and they're dying eventually anyway. We print more and more debt to paper over massive budget deficits because the unspoken reality is that we're financially screwed either way. We pile more and more regulations on because we'd rather further grow the government and kick the can a few more times. We bolt one new emissions system after another onto our diesel engines because they're already unreliable, so who cares.
We don't consider how we got here, only what the next step we take should be. And don't even ask whether a step should be taken at all; progress requires changing things constantly, and we rarely give ourselves time to look back and retrace our steps.
This is entirely opposite from accelerationism, which would advocate for less medication so that sick people die quicker, and less regulation so that society would be exploited faster and collapse faster.
Let me re-use your analogy. We were already driving off a cliff, and we are trying to blame the fact that we're pushing on the gas and accelerating, while ignoring that we were already heading that way and the brake lines were cut.
The future of the net was closed gated communities long before AI came along. At worst it’s maybe the last nail in the coffin. But the coffin lid was already on and the man inside was already dead.
AI is, I think, more mixed. It is creating more spam and noise, but AI itself is also fascinating to play with. It’s a genuine innovation and playing with it sometimes makes me feel the way I did first exploring the web.
Sure… so far.
The difference between AI slop and the existing large tech corps is that the large corps you list never strayed into the lane occupied by OSS.
I didn't say this was a good thing, I only said things were already fucked. And Trump is also a symptom of a deeper rot in our system. He just happens to be the asshole who took advantage of it.
If you don't fix the deeper issues, it doesn't matter what's going to happen. Blaming AI is blaming a symptom, not the cause.
Stating that we need to fix the deeper problem isn't even close to the same thing as whatever this nonsense is you responded with.
Isn't that the complaint to which you're responding? The SUPPLY side of the equation is the problem, so reading encyclopedias wouldn't impact that. Funnily enough, the criticism of Wikipedia was that a bunch of amateurs couldn't beat the quality of a small group of experts curating a controlled collection, and we saw that wasn't true. Maybe AI has pushed this to a new level where we need to tighten access and attention once again?
Here's the good news: AI cannot destroy open source. As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive. It wouldn't be a bad thing for us to standardize open source a bit more, like templates for contributors' guides, automation to help troubleshoot bug reports, and training for new maintainers (to help them understand they have choices and don't need to give up their life to maintain a small project). And it's fine to disable PRs and issues. You don't have to use GitHub, or any service at all.
You don't even need somebody. AI agents themselves can make and share projects.
Copyright can't be assigned to agents. You can't have Open Source without copyright as the enforcement mechanism. Millions of AI-generated, public-domain projects with no social proof to distinguish them is uncharted territory. My prediction is it would be shit-territory and worse than what we have currently.
Enforce what? Attribution? Open source doesn't require attribution for software to be considered open source. Public domain software can be open source.
I think it is about who is contributing, intention, and various other nuances. I would still say it is net good for the ecosystem.
The LLM providers will be laughing all the way to the bank because they get paid once by the people who are causing the problem and paid again by the person putting up the "barriers, processes, and mechanisms" to control the problem. Even better for them, the more the two sides escalate, the more they get paid.
The problem is the asymmetry of effort. You verified you fixed your issue. The maintainers verified literally everything else (or are the ones taking the hit if they're just LGTMing it).
Sorry, I am sure your specific change was just fine. But I'm speaking generally.
How many times have I at work looked at a PR and thought "this is such a bad way to fix this I could not have come up with such a comically bad way if I tried." And naturally couldn't say this to my fine coworker whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, which had a squashed revert of his change, with the correct fix, so that it didn't introduce race conditions into parallel test runs.
And the submitter of course has no ability to gauge whether their PR is the obvious trivial solution, or comically incorrect. Therein lies the problem.
I'd even argue we need a new type of test coverage, something that traces back the asserts to see what parts of the code are actually constrained by the tests, sort of a differential mutation analysis.
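A crude sketch of that idea, assuming a Python codebase with a pytest suite (the module path is made up for illustration): flip one comparison operator at a time and see whether any test notices. Mutants that survive mark code the asserts never actually constrain.

```python
# Differential-mutation sketch: flip comparison operators one at a time and
# re-run the tests. Paths and the pytest invocation are assumptions.
import ast
import subprocess
import sys
from pathlib import Path

SOURCE = Path("mypkg/core.py")   # hypothetical module under test

# Operators and their opposites; flipping one should break at least one test
# if the surrounding code is actually constrained by the suite.
SWAP = {ast.Lt: ast.GtE, ast.GtE: ast.Lt, ast.Gt: ast.LtE,
        ast.LtE: ast.Gt, ast.Eq: ast.NotEq, ast.NotEq: ast.Eq}

class FlipComparison(ast.NodeTransformer):
    """Flip the operator of the target_index-th comparison in the file."""
    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = 0
        self.mutated = False

    def visit_Compare(self, node):
        self.generic_visit(node)
        if self.seen == self.target_index and type(node.ops[0]) in SWAP:
            node.ops[0] = SWAP[type(node.ops[0])]()
            self.mutated = True
        self.seen += 1
        return node

def tests_pass() -> bool:
    return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode == 0

original = SOURCE.read_text()
total = sum(isinstance(n, ast.Compare) for n in ast.walk(ast.parse(original)))
survivors = 0
try:
    for i in range(total):
        flipper = FlipComparison(i)
        mutant = flipper.visit(ast.parse(original))
        if not flipper.mutated:
            continue
        SOURCE.write_text(ast.unparse(ast.fix_missing_locations(mutant)))
        if tests_pass():   # mutant survived: the tests never constrain this branch
            survivors += 1
finally:
    SOURCE.write_text(original)   # always restore the pristine file

print(f"{survivors}/{total} flipped comparisons survived the test suite")
```

Real mutation-testing tools do this far more thoroughly; the point is just that "coverage" alone doesn't tell you whether the asserts would catch an LLM's plausible-looking change.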
I think the problem is where bug-bounty or reputation chasers are letting LLMs write the PRs, _without_ building and testing. They seek output, not outcomes.
The negative case is free-running OpenClaw slop cannons that could even be malicious.
We don't want to bother maintainers, as they can focus on more important issues. I think a lot of tail-end issues and bugs can be addressed in OSS.
We leave it up to the maintainers to accept the PR or not, but we solve our own problems and we thoroughly test the changes.
So I am pretty much confident as well as convinced about the change. But then I know what I know.
> But it's not improving like it did the past few years.
As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?
If anything, the pace has increased.
This may be one of the most important graphs to keep an eye on: https://metr.org/ and it tracks well to my anecdotal experience.
You can see the industry did hit a bit of a wall in 2024 where the improvements drop below the log trend. However, in 2025 the industry is significantly _above_ the trend line.
Yeah, it's not a plateau.
There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.
It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.
I still have several projects I developed in mid 2024 where I felt the AI was really close but not quite good enough for production, and almost two years in they haven't gotten appreciably better to where I would be able to release an actual application.
AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.
There is an entire new world of people having fun with LLM coding. There are people having fun with social media, too. These people having fun with their thing doesn’t make your thing less fun for you to do.
Let people enjoy things. You can do your own thing and they do theirs. The internet is a big place and there’s room for everyone to find their own way to have fun. If you can’t enjoy your thing because someone else is doing it differently, that’s a you problem.
This is new for us, but it’s not new globally. There used to be professional portrait painters before photography ruined it. Lots of great artists honed their skills and made a living that way. And there were skilled weavers before the loom. Computers (humans who computed things) before the digital computer was invented. And so on. And I’m sure the first photographs don’t look as good as a skilled portrait painting. Arguably they still don’t. But that didn’t save portrait painting as a profession.
We’ll be the same. You can still write code by hand for fun, just like you can paint for fun. I’m currently better at solving problems and writing code than Claude. But Claude is faster than I am, and it’s improving much faster than I am. I think the days of making big money for writing software by hand are mostly over.
AI is currently designed to be used somewhat antisocially. But nothing stops it from helping a team collaborate. Collaborative vibe working would be a fine place to wind up.
Yes I know about Usenet. I was on it in 1992.
IMO we're going to just have to deal with AI, like it or not.
One could also say Multi-Drug Therapy killed the solidarity and shared struggle found in leper colonies.
> But I wouldn't run my production apps—that actually make money or could cause harm if they break—on unreviewed AI code.
I hope no one is actually letting unreviewed code through. AI can, and _will_ make mistakes.
Nowadays > 90% of my code tasks are handled by AI. I still review and guide it to produce what I intended to do myself.
From the article -
> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
I don't have a solution for this, I'm pointing to the flaw in the assumption that AI is destroying open-source.
No; FABRICATED quotes. We have a perfectly good, correct word for what's going on.
From my observation, the people that are the most excited about AI are low skilled/unskilled people in that domain. If said people treated AI as a learning tool, everything would be great (I think AI can be a really effective teacher if you're truly motivated to learn). The problem is those people think they "now have the skill", even though they don't. They essentially become walking examples of the Dunning-Kruger effect (the cognitive bias where people with limited knowledge or competence in a particular domain greatly overestimate their own knowledge or competence)
The problem with being able to produce an artifact that superficially looks like a good product, without the struggle that comes with true learning, is you miss out on all the supporting knowledge that you actually need to judge the quality of the output and fix it, or even the taste to be able to guide the agent in good patterns vs poor patterns.
I'd encourage people that are obsessed with cutting edge AI and running 5000 Claude agents simultaneously to vibe code a website to take a step back and use the AI to teach them fundamentals. Because if all you can do is prompt, you're useless.
What I found in the following week is a pattern of:
1) People reaching out with feature requests (useful)
2) People submitting minor patches that take up a few lines of code (useful)
3) People submitting larger PRs, which were mostly garbage
#1 above isn't going anywhere. #2 is helpful, especially since these are easy to check over. For #3, MOST of what people submitted wasn't AI slop per se, but just wasn't well thought out, or was of poor quality. Or it was a feature that I just didn't want in the product. In most cases, I'd rather have a #1 and just implement it myself in the way that I want the code organized, rather than someone submitting a PR with poorly written code. What I found is that when I engaged with people in this group, I'd see them post on LinkedIn or X the next day bragging about how they contributed to a cool new open-source project. For me, the maintainer, it was just annoying, and I wasn't putting this project out there to gain the opportunity to mentor junior devs.
In general, I like the SQLite philosophy of we are open source, not open contribution. They are very explicit about this, but it's important for anyone putting out an open source project that you have ZERO obligation to accept any code or feature requests. None.
AI is a tool that must be used well, and many people currently raising pull requests seem to think that they don't even need to read the changes, which puts unnecessary burden on the maintainers.
The first review must be by the user who prompted the AI, and it must be thorough. Only then would I even consider raising a PR towards any open source project.
There is a temporary solution. Let maintainers limit PRs to accounts that were created prior to November 30 2022 [1]. These are known-human accounts.
Down the road, one can police for account transfers and create a system where known-human accounts in good standing can vouch for newer accounts. But for now that should staunch the bleeding.
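A minimal sketch of that gate, assuming it runs as a CI job with a repo token (the endpoints are the standard GitHub REST API; the env vars and the auto-close policy are illustrative):

```python
# Account-age gate sketch: look up the PR author's creation date and close the
# PR if the account postdates the cutoff suggested above. Not a GitHub feature;
# repo/token handling is assumed.
import os
import sys
from datetime import datetime, timezone
import requests

API = "https://api.github.com"
REPO = os.environ["GITHUB_REPOSITORY"]
TOKEN = os.environ["GITHUB_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Accept": "application/vnd.github+json"}
CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)   # the cutoff from the comment above

pr_number = int(sys.argv[1])
pr = requests.get(f"{API}/repos/{REPO}/pulls/{pr_number}", headers=HEADERS).json()
author = pr["user"]["login"]

user = requests.get(f"{API}/users/{author}", headers=HEADERS).json()
created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))

if created > CUTOFF:
    requests.post(f"{API}/repos/{REPO}/issues/{pr_number}/comments",
                  headers=HEADERS,
                  json={"body": "This project currently only accepts PRs from "
                                "accounts created before 2022-11-30."})
    requests.patch(f"{API}/repos/{REPO}/pulls/{pr_number}",
                   headers=HEADERS, json={"state": "closed"})
```

It obviously doesn't stop bought or hijacked pre-2022 accounts, which is where the vouching system would have to come in.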
Additionally Geerling raises good points, but I am not sure we should jump to his conclusion yet.
The bias in AI coding discussions heavily skews greenfield. But I want to hear more from maintainers. By their nature they’re more conservative and care about balancing more varied constraints (security, performance, portability, code quality, etc etc) in a very specific vision based on the history of their project. They think of their project more like evolving some foundational thing gradually/safely than always inventing a new thing.
Many of these issues don’t yet matter to new projects. So it’s hard to really compare the greenfield with a 20 year old codebase.
It seems the users of this are so varied that refactors like what you describe would be rolled out more gradually than the usual AI workflow.
A smaller number of PRs generated by OpenClaw-type bots are also doing so based on their owner's direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.
AI is also allowing the creation of many new open-source projects, led by responsible developers.
Given the exponential speed at which AI is progressing, surely the quality of such PRs is going to improve. But there are also opportunities for the open-source community to improve their response. It will sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.
We are in the early days and I believe that things will get better as more people will calm the f down. People who have built things for ages will continue to do so, with or without coding agents.
In the long term, I think Open Source will win. I can imagine content management systems, eCommerce software, CRM, etc. all becoming coding-agent friendly - the customer can customize the core software with agents, and the scaffold would provide fantastic guardrails.
Self-hosting is already becoming way more popular than it ever was. People are downloading all sorts of tools to build software. Building is better. A structure needs to emerge.
Was NFT or Crypto a bubble? The idea of a bubble means that it "pops" in a dramatic fashion. NFT prices in aggregate faded slowly, and the impact only applied to a handful of individuals. Moreover, one can largely speculate that the purpose behind much of the behavior we have seen with crypto and NFTs was illicit financial engineering.
If a handful of bad PRs "are destroying open source," then open source as a concept is surprisingly vulnerable. No project worth its salt ever integrates unverifiable PRs. No valid OSS project integrates uninvited PRs in the first place. Every PR is driven by an issue or a very robust, specific description. A project receiving an "unsolicited" PR does not make the maintainer yell "Oh, I am ruined."
I have stopped checking out these programming content videos for the last year or so. But I stupidly did it here. Every single channel has become like Coffeezilla with an agenda, with AI cast as a catalyst of great harm.
Yes, that’s been a known problem for a while. This comic: https://xkcd.com/2347/ is a popular illustration of the problem from 2020, but the problem itself was well known before that.
AI has laid bare the difference.
Open Source is significantly impacted. Business models based on it are affected. And those who were not taking the political position find that they may not prefer the state of the world.
Free software finds itself, at worst, a bit annoyed (need to figure out the slop problem), and at best, an ally in AI - the amount of free software being built right now for people to use is very high.
But in any case, the question really refers to, can the LLM-generated software be copyrighted? If not, it can’t be put under any particular license.
The way the world currently works, code created by someone (using AI) is being dealt with as if it were authored by that someone. This is across companies and FOSS. I think it's going to settle into this pattern.
I'm the sole maintainer of a gamedev "middleware" open source project for Godot, and all AIs have generally been crap at Godot stuff, frequently getting it wrong, but Codex helped me catch some future bugs that could have caused hard-to-spot mysterious behavior and a lot of head scratching.
I don't dare let it edit anything but I look at its suggestions and implement them my way. Of course it's still wrong sometimes, if I trusted it blindly I would be f'ed. A few times I had to repeatedly tell it about how some of its findings were incorrect or the intended behavior, until it relented with "You're right. My assumption was based on..."
Also, while I would [probably] never let AI be the source of any of my core code, it's nice for experiments and what-ifs: since my project is basically a library of more-or-less standalone components, it's actually more favorable for AI, to wire them together like prebuilt Lego blocks: I can tell it to "make a simple [gameplay genre] scene using existing components only, do not edit any code" and it lets me spot what's missing from the library.
In the end this too is a tool like everything else. I've always wanted to make games but I've always been sidetracked by "black hole" projects like trying to make engines and frameworks without ever actually making an actual full game, and I think it's time to welcome anything that helps me waste less time on the stuff that isn't an actual game :)
It never is. You know you’ve hit peak bubble when everyone you know is investing in the new hotness and saying, “This time it’s different.” When that happens, get ready to short the market.
Project after project reports wasted time, increased hosting/bandwidth bills, and all-around general annoyance from this UTTER BULLSHIT. But every morning, we wake up, and it's still there, no sign of it ever stopping.
We fill repos with poison and watch the crawlers consume it.
https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
This is war. Join us.
There is a strong legal basis for this to happen because if you read the MIT license, which is one of the most common and most permissive licenses, it clearly states that the code is made available for any "Person" to use and distribute. An AI agent is not a person so technically it was never given the right to use the code for itself... It was not even given permission to read the copyrighted code, let alone ingest it, modify it and redistribute it. Moreover, it is a requirement of the MIT license that the MIT copyright notice be included in all copies or substantial portions of the software... Which agents are not doing in spite of distributing substantial portions of open source code verbatim, especially when considered in aggregate.
Moreover, the fact that a lot of open source devs have changed their views on open source since AI reinforces the idea that they never consented to their works being consumed, transformed and redistributed by AI in the first place. So the violation applies both in terms of the literal wording of the licenses and also based on intent.
Moreover, the usage of code by AI goes beyond just a copyright violation of the code/text itself; they appropriated ideas and concepts, without giving due credit to their originators so there is a deeper ethical component involved that we don't have a system to protect human innovation from AI. Human IP is completely unprotected.
That said, I think most open source devs would support AI innovation, but just not at their expense with zero compensation.
No there isn't. We're all free to copy each other's ideas and concepts and not give any credit to their "originators" who aren't usually even the first people to think of them but just the previous person in the chain of copying ideas. That's how progress happens. No we should not inhibit our use of knowledge because every idea "belongs" to somebody.
I'm not talking about copyright here, which is different and doesn't usually protect ideas and concepts anyway, at least none that are useful.
We've crossed a threshold whereby economic value creation is not fairly rewarded. The economy became a kind of winner-takes-all game of who can convince people to pay for stuff and lock them in first... Or who can wedge themselves first between large pre-existing corporate money flows.
It's like the office politics, bureaucracy and corruption that everyone hates has become the core reward mechanism of the economy. It was never designed that way but a combination of factors exacerbated by underlying system flaws and perverse incentives got us there.
There's already way too much false advertising. The winners of this game are those who can sell a dream. It doesn't matter if they don't deliver, because by the time people figure it out, they've already sold their startup and moved on to other things. Everyone is kept in a constant state of chasing the next big thing, and it doesn't solve any problems. Human potential is just wasted on creating elaborate illusions which ultimately satisfy no one.
Using AI to find relevant parts of a codebase, or to help you remember stuff like which annotations a data class needs for DB persistence (yes, I'm a Java server dev, hi!), is awesome. Having Claude solo-dev an application based on a prompt generated by GPT is something else entirely (pretty fun, but not very useful for anything more complicated than mega-trivial).
OpenClaw is like the third level to this, which also exists for some reason.
1. AI slop PRs (sometimes giant). Author responds to feedback with LLM generated responses. Show little evidence they actually gave any thought of their own towards design decisions or implementation.
2. (1) often leads me to believe they probably haven't tested it properly or thought of edge cases. As reviewer you now have to be extra careful about it (or just reject it).
3. Rise in students looking for job/internship. The expectation is that LLM generated code which is untested will give them positive points as they have dug into the codebase now. (I've had cases where they said they haven't tested the code, but it should "just work").
4. People are now even lazier about cleaning up code.
Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.
They are. They’ve always been there.
The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.
We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.
It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.
AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.
To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things that waste their time. Intentionally or otherwise.
You can’t just argue that AI can be a benefit therefore everything is fine. The externalities of it, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume will we have much of value left by the time we get there?
This is the reason I get so angry at every pro AI post I see. They never seem to discuss the possible downsides of what they’re doing. How it affects the whole instead of just the individual.
There are a lot of people dealing with those consequences today. This video/article is an example of it.
I've been thinking about this recently. As annoying as all the bots on Twitter and Reddit are, it's not bots spinning up bots (yet!), it's other humans doing this to us.
My canned response now is to respond, "Can you link me to the documentation you're using for this?" It works like a charm, the clanker doesn't ever respond.
I mean, I don't want you sending PRs to my vibe-coded project, but I also don't care if you fork it to make it useful for your needs.
We’ve been so worried about the burden of forking in the past - maybe that should change?
Open source software was trivially better in the nineties because it was done by people who would have done it for free, and often did. Those people were simply better.
The people bitching about it now didn't push back when it unified on a forge, or when it sold to Microsoft, or when it started working in like button stars.
They're bitching now that their grift is up.
Other than by corrupt criminals and mafia types who have a need to covertly hide cash.
And then the current administration wants the government to 'protect' crypto investors against big losses. Gotta love it.
And anyone who lives in a polity whose local currency may be undergoing rapid devaluation/inflation.
And anyone who needs a form of wealth that no local authority is technically capable of alienating them from - ie: if you need to pack everything in a steamer trunk to escape being herded into cattle cars, you can memorize a seed phrase and no one can stop you from taking your wealth with you.
And any polity who may no longer wish to use dollars as the international lingua franca of trade, as the global foreign exchange reserve currency, to reduce the degree to which their forex reserves prop up American empire.
Sadly, all of these use cases appear increasingly relevant as time goes on.
I’ve got an Argentinian friend who sends crypto to his mother because he pays less than 0.5 % in fees and exchange rates instead of close to 5% using the traditional way. From now on I’ll call him a corrupt criminal.
You're describing the people that use actual cash to launder and hide, well, cash, and that have done so for centuries, long before crypto had even been invented.
A few web searches on <big bank name> + "money laundering scandal" (e.g. "HSBC money laundering scandal") can offer valuable insights.
There are definitely people abusing AI and lying about what it can actually do. However, Crypto and NFTs are pretty much useless. Many people (including me) have already increased productivity using LLMs.
This technology just isn't going away.
Some payment chains are painful. An awful lot of middlemen take a cut. Some payment chains impose burdens on the endpoint, like 90-day settlement terms, which could be avoided with some use of tech. Nothing about the hype, just modifications to financial transactions, but they could be done other ways as well (as could the settlement ideas above).
NFTs follow the same logic as bearer bonds. They're useful in very specific situations of value transfer, and almost nothing else. The use isn't about the artwork on the front, it's the possession of a statement of value. Like bonds, they get discounted. The value is therefore a function of the yield and the trust in the chain of sigs asserting it's a debt of that value. Not identical, but the concept stripped of the ego element isn't that far off.
Please note I think bored ape and coins are stupid. I am not attempting to promote the hype.
AI is the same. LLMs are useful. There are functional tools in this. The sheer amount of capital being sunk into venture plays is however, disconnected from that utility.
The key blockchain requirement is allowing unrestricted node membership. From that flows a dramatic explosion of security issues, performance issues, and N-level deep workarounds.
In the case of a bunch of banks trying to keep each other honest, it's drastically simpler/faster/cheaper to allocate a certain number of fixed nodes to be run by different participants and trusted outside institutions.
One doesn't need to trust every node, just that a majority is unlikely to be suborned, and you'll know in advance which majorities are possible. The bank in Australia probably doesn't want or need to shift some of that responsibility outside the group, onto literally anybody who shows up with some computing power.
Lasers turn out to be useful for... eye surgery, and pointing at things, and reading bits off plastic discs, and probably a handful of other niche things. There just aren't that many places where what they can do is actually needed, or better than other more pedestrian ways of accomplishing the same thing. I think the same is true of crypto, including NFTs.
More fool us. More power to him. He was well ahead of the curve, him and his laser physics friends worldwide.
We’ll still have the “best code tooling ever invented” stuff, but if the market is assuming “intellectual workers all replaced”, there’s still a bubble pop waiting for us.
It's just like all the ICO, NFT, and other crypto launches, but for all the little things that you can do with AI. Everybody or their bot has some new game-changing AI project. It's a tiring mess right now, which I do hope will similarly die down in time.
For clarity, I was a big fan of blockchain before it got bad, still am for things like ZKP and proof-of-authority, and I am similarly very excited for what AI enables, but (imo) one cannot easily argue there is not a spam problem that feels similar.
AI has been good for years now. Good doesn't mean perfect. It doesn't mean flawless. It doesn't mean the hype is spot-on. Good means exactly that: it is good at what it is intended to do.
It is not destroying open source either. If anything, there would be more open source contributors using AI to create code.
You can call anything done by AI "slop" but that doesn't make it so.
Daniel and the curl project were also overreacting. A reaction was warranted, but there were many measures they could have taken before shutting down bug reporting entirely.
If you replace "AI" with "junior dev", "troll" , "spammer", what would things be like then? If it is scale, you can troll, spam and be incompetent at scale just fine without the help of AI.
It's gatekeeping and sentimentality amplified.
I can't wait for people who call everything slop to be overshadowed by people who are so used to LLMs that using them is no different from using a linter, a compiler, or an IDE: just another tool, good at certain tasks but not others, abusable, but with reasonable mitigations possible.
I keep reading posts about what open source users are owed and not owed. GitHub restricting PRs, developers complaining about burnout. Have you considered using AI "slop" instead? Give a slop response to what you consider to be a slop request? Oh, but no, you could never touch "AI", that would stain you! (I speak to the over-reactors.) You don't need AI; you could do anything AI can do (except AI doesn't complain about it all the time, or demand clout).
What is the largest bottleneck and hindrance to open source adoption? Money? No, many, including myself, are willing to spend for it. I've even struck out trying to pay an open source project maintainer to support their software. It's always support.
Support means triaging bugs and feature requests in a timely manner. You know what helps with that a lot? A tool that understands code generation and troubleshooting well, along with natural language processing. A bot that can read what people are requesting and give them feedback until their reports meet a certain criterion of acceptability, so you as a developer don't have to deal with the tiring back and forth with them. That same tool can generate code in feature branches, fix people's PRs so they meet your standards and priorities, and highlight changes and how they affect your branch, prioritizing them for you, so you can spend minimal time reviewing code and accepting or rejecting PRs. A rough sketch of the report-triage half of such a bot follows.
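This is a hand-wavy sketch, assuming an OpenAI-compatible API; the model name and the acceptance criteria are placeholders, not a recommendation:

```python
# Triage-bot sketch: ask an LLM whether a bug report contains the minimum
# needed to act on it, and draft a reply requesting whatever is missing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITERIA = """A report is acceptable only if it includes:
- the exact version in use
- steps to reproduce
- expected vs. actual behaviour"""

def triage(report_text: str) -> str:
    """Return 'ACCEPT' or a polite request for the missing information."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You triage bug reports for an open source project.\n"
                        f"{CRITERIA}\n"
                        "If the report meets the criteria, reply with exactly "
                        "ACCEPT. Otherwise, draft a short, friendly comment "
                        "listing what the reporter still needs to provide."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content.strip()
```

Wire its output into an issue comment and you get the "feedback until the report is acceptable" loop without a human stuck doing the back-and-forth.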
If that isn't good for open source then what is?
A bad attitude towards AI is destroying open source projects led by people entrenched in an all-or-nothing, false-dichotomy mindset against AI. And AI itself is good. Not great, not replace-humans great, but good enough for its intended use, and great with cooperative humans in the decision-making loop.
Use the best tool for the task!
that should be like #2 in the developer rule book, with #1 being:
It needs to work.
"Good" means "I can trust it to give me code that is at least as good as what a moderately skilled human would produce". They still aren't there, even after years of development. They still regularly give you code that doesn't follow the correct logic, or which isn't even syntactically valid. They are not good, or even remotely good.
LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
If you're a skilled dev in a "common" domain, an LLM can be an amazing tool when you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils", and "small automations" that in the past you could never justify writing.
I'm not a lawyer, or a doctor. I would never take legal advice or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain, because I can work with it and take over when it goes off the rails.
I'm a long time linux user - now I have more time to debug issues, submit them, and even do pull requests that I considered too time consuming in the past. I want and I can now spend more time on debugging Firefox issues that I see, instead of just dropping it.
I'm still learning to use AI well - and I don't want to submit unverified slop. It's my responsibility to provide a good PR. I'm creating my own projects to get the hang of my setup, and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.
Someone can spam me with more AI slop than I can vet, and it can pass any automated filter I can set up.
The solution is probably closed contributions because figuring out good contributors at scale sounds like figuring out how to hire at scale, which we are horrible at as an industry.
Social media feels like parks smothered with smog.
It makes you stupid like leaded gas.
We'll probably be stuck with it forever, like PFAS
I finally got around to Claude code and the code it generates and the debugging it does is pretty good.
Inb4 some random accuses me of being an idiot or shit engineer lol
I once worked at a company where the powers-that-be decided to add SonarQube with max settings to the pipeline for a large C++ code base. It produced no output so IT thought the install was broken. They eventually figured out that it was actually working perfectly but that it never found any issues across the entire code base ever. We got that for free with sensible build configurations long before it got to SonarQube.
TDD and tools are not a substitute for competent process. I’ve seen plenty of TDD produce objectively poor quality code bases.
I'd buy the "put this in your .git/hooks" workflow ... but I don't know what's going on with this thing.
The strongest opensource contributors tend to be kinda weird - like they don't have a google account and use some kind of libre phone os that you've never heard of.
What a "real" solution would look like is some kind of "guardrails" format where they can use an lsp or treesitter to give dos and donts and then have a secondary auditing llm punt the code back.
There may be tools (CodeRabbit?) that do this ... but that's realistically what the solution will be - local LLMs, self-orchestrated.
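A loose sketch of the guardrails idea, using Python's ast module as a stand-in for an LSP/tree-sitter pass (the rules here are made-up examples): encode the project's dos and don'ts as structural checks and punt the patch back before a human, or a secondary auditing LLM, ever looks at it.

```python
# Guardrails sketch: project-specific structural rules over Python source.
# The two rules are illustrative; a real project would encode its own.
import ast
import sys

RULES = []

def rule(func):
    RULES.append(func)
    return func

@rule
def no_bare_except(tree):
    # don't: swallow everything with a bare `except:`
    return ["bare `except:` is not allowed"
            for n in ast.walk(tree)
            if isinstance(n, ast.ExceptHandler) and n.type is None]

@rule
def public_functions_need_docstrings(tree):
    # do: document every public function
    return [f"function `{n.name}` is missing a docstring"
            for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef)
            and not n.name.startswith("_")
            and ast.get_docstring(n) is None]

def audit(path: str) -> list[str]:
    tree = ast.parse(open(path).read(), filename=path)
    findings = []
    for check in RULES:
        findings.extend(check(tree))
    return findings

if __name__ == "__main__":
    problems = [f"{p}: {msg}" for p in sys.argv[1:] for msg in audit(p)]
    print("\n".join(problems) or "guardrails passed")
    sys.exit(1 if problems else 0)
```

Run it over the files a PR touches, feed the findings back to whoever (or whatever) wrote the patch, and only escalate to a human once it comes back clean.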
I was just saying that good engineers can guide GenAI into creating good code bases. Seeing I got voted down, not everyone agrees.
There's a lot of people trying to hustle their stuff on here. Strongly frowned upon unless it's genuinely free and even then...
Maybe something like "at work we use something called SonarQube and I've been using it on my own stuff, it works really nicely" might have been better.
I think this kind of stuff is OK for the most part. I think it's a thrilling part of computer science: building systems so complex they're just on the brink of what can be fully understood by a single person. It's what sets software engineering apart from other engineering fields where it's unacceptable not to fully understand the engineering, say, for factories, buildings, bridges, ships and infrastructure and such.
AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model by which open source projects gain large numbers of dollar contributions, that the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.
For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...
In the case of companies hiring Linux devs, that is very, very costly and thereby inaccessible. Scale makes it different from the scenario of paying a few dollars to contribute tokens to fix a bug.
1. When people use LLMs to code, they never read the docs (why would they), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their open source libraries as a result. This is essentially what the Tailwind devs mentioned.
2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to close pull requests. If they do the latter, then they won't get any outside help (or at least, they will get less). Even if they don't do that, they now have a higher burden than previously.
While I'd like to believe in the decency and generosity of humans, I don't get the economic case for donating money to the agent behind an OSS project when the person could spend the money on tokens locally themselves and reap the exclusive reward. If it really is just about money, only the latter makes sense.
Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.
Usually, getting stuff fixed on main is better than being forced to maintain a private fork.
It’s a tragedy-of-the-commons problem. Most of the money available is not tied to decision makers who are ideologically aligned with open source, so I don’t see why they’d donate any more in the future.
They usually do so because they are critically reliant on a library that’s going to die, they think it’s good PR, it makes engineers happy (don’t think they care about that anymore), or they think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).
More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.
2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time reviewing and testing the feature that they would have spent coding it. You’re now back to 1.
Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.
Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement but not fully deliver a feature and need human oversight.
If there are reasonably good devs in low-CoL areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it’s worth trialling as a business model.
Even if the human is only doing review and QA, there’s no low-cost-of-living area where $50 gets you enough time to do those things from someone with enough competence to do them. Much less $10.
If AI can make features without humans why would I, as a profit maximizing organization, donate that resource instead of keeping it in house? If we’re not gonna have human eyes on it then we’re not getting more secure, I don’t really think positive PR would exist for that, and it would deny competitors resources you now have that they don’t.
As a result our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to construct enterprise level testing which was a herculean task I just couldn't take up on my own.
The way I see it, AI brought back my OSS project that was heading to purgatory.
EDIT: Also about OPs post. It's really f*ing bug bounties that are the problem. These things are horrible and should die in fire...
I think this is true, but misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.
There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.
That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.
Give money to maintainers? No.
Give money to bury maintainers in AI Slop? Yes.
Frankly, I can summarize your entire essay as:
"We can give maintainers of OSS projects money to maintain projects." Revolutionary, never been done before. /s