The open internet has been going downhill for a while, but LLMs are absolutely accelerating its demise. I was in denial for the last few years, but at this point I've accepted that the internet I grew up on as a kid in the late 90s to mid 2000s is dead. I am grateful for having experienced it, but the time has come to move on.
The future for people who valued what the early internet provided is local, trusted networks, in my opinion. It's sad that we need to retreat into exclusionary circles, but there are too many people interested in making a buck on the race to the bottom.
Everyone serving a website is being DDoSed by AI agents right now.
A local mesh network is one way to make sure that no one with a terabit network can index you.
>>Everyone serving a website is being DDoSed by AI agents right now.
You're missing the point: while mesh networks solve a problem, they're not required to solve the problem "I'm tired of the Internet" or "I'm being indexed". You can build your own network on top of the Internet with zero new hardware required, with something like WireGuard, I2P or whatever.
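For the curious, here's roughly how small that is in practice: a minimal wg-quick style config sketch for one node of such an overlay (the keys, addresses, and endpoint hostname below are all hypothetical placeholders):

    [Interface]
    # This node's identity and overlay address (placeholder values).
    PrivateKey = <this-node-private-key>
    Address = 10.77.0.1/24
    ListenPort = 51820

    [Peer]
    # One trusted friend's node; add one [Peer] section per member.
    PublicKey = <friend-public-key>
    AllowedIPs = 10.77.0.2/32
    Endpoint = friend.example.net:51820
    PersistentKeepalive = 25

Every member you trust gets a [Peer] entry; anyone outside the keyring can't even get a handshake, let alone index you.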
Edit: oh, it's probably dead because of the username
You can still make that overlay network geofenced and vetted. Heck, running it over a local ISP's last mile would probably yield wonderful latency.
We need vetted webrings on the existing Internet, not a new Internet.
How do you think this will work, when LLMs accelerate the breakdown of trust and common epistemics?
Also, I think the name "vetted webrings", or just "the vetted web", is simple enough to be a movement.
As in the vetted web movement.
… gotta start somewhere.
Jumping to an invite-only network isn't the most ridiculous idea, imo.
AI slop thrives in anonymity. In a community that's developed its own established norms and people who know each other, AI content trying to be passed off as genuine stands out like a sore thumb and is easily eradicated before it gets a chance to take root.
It doesn't have to be invite-only, per se, but it needs to have its own flavor that newcomers can adapt to, and AI slop doesn't.
...and not on Hacker News. Too many pseudonymous jerks, too many throwaways, too much faith placed in gamified moderation tools.
https://www.scmp.com/news/people-culture/trending-china/arti...
I don't use Kagi but the context was their Privacy Pass thingie https://blog.kagi.com/kagi-privacy-pass
It works similarly to what you'd like: they sign sealed tokens you provide. Later, you can unseal a token and use it without invalidating the signature. It is mathematically too difficult for a classical computer to link the sealed and unsealed tokens.
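To make the blinding idea concrete, here's a toy RSA blind-signature sketch in Python. To be clear, this is not Kagi's actual construction, and the parameters are insecurely tiny; it's purely to illustrate how a server can sign something it never sees:

    import math, secrets

    # Hypothetical server keypair (insecurely small, illustration only).
    p, q = 61, 53
    n = p * q                           # public modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    token = 42                          # client's token (a hash, in practice)

    # Client blinds the token with a random factor r.
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (token * pow(r, e, n)) % n

    # Server signs the blinded value; it never learns `token` itself.
    blinded_sig = pow(blinded, d, n)

    # Client unblinds; the result is a valid signature on the raw token.
    sig = (blinded_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == token      # anyone can verify with (e, n)

Since the server only ever saw `blinded`, it can't link the unblinded token back to the signing request, which is the unlinkability property described above.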
Classic HN. Focus on the tech to avoid looking at the problem.
I love the idea.
It's also interesting in that a local mesh doesn't necessarily need to operate using the TCP/IP/HTTP stack that has been compromised at every layer by advertising and privacy intrusions.
Jokes aside, probably 10-20% of my browsing is related to local things, up to the country scale. From finding local restaurants or businesses, to finding out about relevant laws or regulations, news, etc. That's not negligible.
Email in profile (deref a few times)
You could also, for instance, develop your own DNS alternative.
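And that's less outlandish than it sounds; the lookup part of DNS is tiny. Here's a sketch in Python of a UDP resolver for a hand-maintained registry (the names, addresses, and port below are made up, and it replies to everything with an A record, which is plenty for a sketch):

    import socket

    # Hypothetical local registry: names -> IPv4 addresses.
    REGISTRY = {"wiki.mesh.": "10.44.0.2", "chat.mesh.": "10.44.0.3"}

    def parse_qname(data):
        """Extract the query name from a DNS question section."""
        labels, i = [], 12                     # header is 12 bytes
        while data[i] != 0:
            n = data[i]
            labels.append(data[i + 1:i + 1 + n].decode())
            i += n + 1
        return ".".join(labels) + ".", i + 1   # name, offset past its end

    def build_reply(query, ip):
        name_end = parse_qname(query)[1]
        question = query[12:name_end + 4]      # QNAME + QTYPE + QCLASS
        header = (query[:2]                    # echo the query ID
                  + b"\x81\x80"                # standard response, RD+RA
                  + b"\x00\x01\x00\x01"        # 1 question, 1 answer
                  + b"\x00\x00\x00\x00")       # no authority/additional
        answer = (b"\xc0\x0c"                  # name: pointer to QNAME
                  + b"\x00\x01\x00\x01"        # TYPE A, CLASS IN
                  + b"\x00\x00\x00\x3c"        # TTL: 60 seconds
                  + b"\x00\x04"                # RDLENGTH: 4 bytes
                  + socket.inet_aton(ip))      # RDATA: the address
        return header + question + answer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 5353))             # unprivileged port
    while True:
        query, addr = sock.recvfrom(512)
        name, _ = parse_qname(query)
        if name in REGISTRY:
            sock.sendto(build_reply(query, REGISTRY[name]), addr)

The hard parts of a real alternative are governance and distribution of the registry, not the wire protocol.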
Perhaps AI-Skynet will not win, but they have a lot of money. I think we need to defund those big corporations that push AI onto everyone and worsen our lives.
It's probably too impractical to work as described, but I think that having a digital space constrained by physical access would be meaningful in a way that internet communities are not. The people you chat with would necessarily be the people in your physical environment, which would make it feel more like a local hangout than the typically vapid social media exchange.
(On further reflection, it would probably be easier to make a mesh network app version of this. Hmm...)
Computers run every part of our lives, and it's fucking preposterous that learning the basics of how to operate a computer isn't part of elementary school education. Now it's just "tap app," and look how it's trapped everybody in a world of ignorance.
There's been a huge uptick in this sort of brigade-like behavior around current events. I first noticed it around LK-99, that failed room-temperature superconductor in 2023, but it just keeps happening.
Used to be we only saw it around elections and crypto pump and dumps, now it's cropping up in the weirdest places.
This, in the same country that has allocated some of the greatest minds of our generation towards tasks like High Frequency Trading under the guise of “price discovery”?
Forgive me if I’m a little less credulous than you.
I believe the misinformation comes largely from self-interested parties: politicians and influencers pushing agendas, plus engagement/attention farming for advertising revenue, all largely indifferent to truth.
But what if prediction markets aren't just used for information gathering, and the real money is made from market manipulation via prediction markets? I'm sure a lot of investment groups watch prediction markets very carefully; if they can manipulate the predictions, or be manipulated by them, the money to be made is big enough for any level of effort to be believable.
Same grift, different mask. NFTs, shitcoins, blockchain startups, AI startups. We continually see that even if it’s not the same mark, there are plenty of fools easily separated from their wallets.
Superconductor
Previously you might get burned with some bad information or incorrect data or get taken in by a clever hoax once in a while.
Now you get overwhelmed by regurgitation, which itself gets fed back into the machine.
The ratio of people to bots reading has crashed to near zero.
We have burned the web.
Maybe it will kill the veil of (perceived) anonymity, which tangibly changes how people behave. Or maybe the filter will be monetary, and it will just exclude the underclass, shifting whatever discourse will be had.
We can't act like whatever replaces the current web won't be different, because then there's no reason to change at all
This same type of info war tends to muddy the waters, confuse people, and put everyone on edge.
There's nothing anyone can do about it. No matter how many guidelines dang deploys, no matter how much negative social pressure we apply (and we could apply much more, but doing so would just run afoul of the tone policing of the guidelines), people will use AI because they want to, and because it's a part of their identity politics, specifically to spite people who don't want to see it. They currently bother to mention when they use ChatGPT for a comment. It's just a matter of time until people don't even bother, because it's so normalized.
The Fediverse is currently good, the culture there is rabidly anti-capitalist and anti-AI. I like Mastodon. But that will eventually, inevitably get ruined as well, and we'll just have to move on to the next thing.
My understanding is that people tend to cooperate in smaller numbers or when reputation is persistent (the larger the group, the more reliable reputation has to be); otherwise, the (uncommon) low-trust actors ruin everything.
Most humans are altruistic and trusting by default, but a large enough group will have a few sociopaths and misunderstood interactions, which creates distrust across the entire group, because people hate being taken advantage of.
... towards an in-group, yes. Not towards out-groups, as far as I can tell.
Though for some reason this tends not to apply to solo travellers in many, many parts of the world.
Lots of debate, yes, but very little about the basic fact that Hardin's formulation of "the tragedy of the commons" doesn't describe actual historical events in pretty much any well-documented case.
That said, there are other large-scale examples where the tragedy of the commons has been (practically) avoided: ozone depletion and polio eradication. Wikipedia (https://en.wikipedia.org/wiki/Tragedy_of_the_commons#Non-gov...) also mentions Elinor Ostrom, but her examples involve "smaller numbers".
To be honest, going to where the fish aren't is also going to help. Almost certainly there are very few LLM-generated websites on the Gemini protocol.
I'm setting up a secondary archiver myself that will record only the parts of the web that consent to it via robots.txt. Let's see how far I get.
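In case it helps anyone doing the same, the consent check itself is only a few lines with Python's standard library (the crawler name below is a hypothetical placeholder):

    from urllib import robotparser
    from urllib.parse import urlsplit

    USER_AGENT = "ConsensualArchiver"      # hypothetical crawler name

    def may_archive(url):
        """True only if the site's robots.txt permits this crawler."""
        parts = urlsplit(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        try:
            rp.read()                      # fetches and parses robots.txt
        except OSError:
            return False                   # can't confirm consent: skip
        return rp.can_fetch(USER_AGENT, url)

    if may_archive("https://example.org/some/page"):
        ...  # fetch and store the page

One wrinkle: a missing robots.txt (a 404) counts as permission under the standard semantics, so "consent" here really means "didn't object".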
My question is: why? Is it really worth the ad revenue to trick a few people looking into a few niche topics? Say you pick the top 5000 trending movies/music/games and generate fake content covering the gamut. What is the payback period?
Maybe it's problem-space exploration via pollution? Said creators of pollution (the bullshit asymmetry principle in practice) incur very little cost in creating it, and there is the possibility of a payback larger than that cost.
[1] https://www.youtube.com/watch?v=YpHUBC681iU https://www.youtube.com/watch?v=0w5a33Jeen0
I'm talking about social clubs of 100-200 members max, not something like subreddits with tens of thousands of users.
> The commons of the internet are probably already lost
That depends. If people don't push back against AI then yes. Skynet would have won without the rebel forces. And the rebels are there - just lurking. It needs a critical threshold of anger before they will push back against the AI-Skynet 3.0 slop.
Enshittification strikes again.
And it doesn't appear to have any means to rid itself of the bad apples. A sad situation all around.
For example, a huge fraction of the world's spam originates from Russia, India and Bangladesh. And we know that a lot of the romance scams are perpetrated by Chinese gangs operating out of quasi-lawless parts of Myanmar. Not so much from, say, Switzerland.
For that reason, and because of limited English proficiency, Russian netizens rarely visit foreign resources these days, except for a few platforms without a good Russian replacement, like Instagram and YouTube (both banned, by the way, and reachable only via a VPN), where they mostly stay within their Russian-speaking communities. I'm not sure why any of them would be the reason the Internet as a whole has supposedly become low-trust. The OP in question is some SEO company using an LLM to churn out sites with "unique content." We already had this stuff 20 years ago, except the "unique content" was generated by scripts that replaced words with synonyms. Nothing really new here.
The Chinese have their own internet anyway; it was a shock to me at first just how little the average Chinese citizen really cares about Western culture or society. They have their own problems, of course, but it has nothing to do with us.
No, it's the tens of billions of mostly American capital going into AI data centers and large bullshit models.
Though all that stuff is a very different thing from what's being discussed in this thread.
If you trust your government's propaganda that is used to justify "hackbacks" and buying 0-days on the dark web, which fucks us all.
Don't get me wrong, the West isn't doing much to enforce Russian or Chinese complaints either. It's really just a messy diplomatic situation all around.
"A report by the Global Initiative on Transnational Organised Crime (based on United States Institute of Peace findings) estimated that revenues from “pig-butchering” cyber scams in Laos were around US $10.9 billion, which would be *equivalent to more than two-thirds (≈67–70 %) of formal Lao GDP in a recent year."
https://globalinitiative.net/wp-content/uploads/2025/05/GI-T...
The difference is that there historically wasn't much to be gained by annoying or misleading people on the internet, so trolling was mainly motivated by personal satisfaction. Two things have changed since then: (1) most people now use the internet as their primary information source, and (2) the cost of creating bullshit has fallen precipitously.
The motivation for putting content online has changed over the last 20 years, from sharing things one is interested in to collecting eyeballs to make a profit in some way.
Isn't that what's driving the pollution of the Internet by LLMs?
> Enshittification, also known as crapification and platform decay, is a process in which two-sided online products and services decline in quality over time. Initially, vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to both users and business customers to maximize short-term profits for shareholders.
Also see https://en.wikipedia.org/wiki/Enshittification#Impact which talks of the broadening of the usage of that term.
Also there are many online shops that are the best option for purchasing various things.
The greatest decline is in the search engines, which are not only overwhelmed by sites with fake content but generate fake content themselves, in the form of stupid answers offered instead of the real search results, whether you want them or not.
If you know precisely the websites that you want to use, it is still OK, but when you search for something unknown, it has become horrible.
&udm=14 my friend, &udm=14
Asserted without evidence, and likely false.
I'd normally be the first to agree with and push your point about language evolving, but it's not time to apply that to a neologism this young.
The consumer internet has become platformized, and the dominant platforms are going through enshittification: early user subsidy, then advertiser/seller favoritism, now rent extraction that is degrading outcomes for everyone.
It literally started being used that way hours after it was first posted to HN. Sorry, that's just how language works. Enshittification got enshittified. Deal with it and move on.
Maybe it wasn't literally hours, but it was really fast. I remember noting how quickly people began to complain about it being used "improperly." The earliest instance I could find was this thread[0] from 2023, where user Gunax complained about it. I couldn't find an earlier reference in Algolia; one probably exists, but I honestly don't care enough to put in the effort.
[0] https://news.ycombinator.com/item?id=36297336
> and also in that "things become shittier" was and is still a perfectly common expression
...perfectly encapsulated and described by the term "enshittification." Which is why people use it for that now. It's more descriptive in the general sense than it is as a specific term of art. You're complaining that a word that means "the process of turning to shit" is being used to describe "the process of turning to shit." What did people expect to happen? If you want to keep it as a precise and technical term of art, keep calling it "platform decay." A shit joke is not a technical, precise term of art.
You can be as much of a prescriptivist crank about this as you want, it doesn't matter. "Enshittification" now refers to any process by which things "turn to shit."
But here's what you're basically implying:
A writer was thinking about the ways things get shittier, decided that there was an actual pattern (at least when it came to online services) that came up again and again, such that "shittified" or "shittier" didn't really describe the most insidious part of it, and coined "enshittification" as a neologism that captured both the "shittier/shittified" aspects and also the academic overtones of "enXXXXication" ...
... and within less than 3 years, sloppy use of the neologism rendered it indistinguishable from its roots, and left the language without a simple term to describe the specific, capitalistic, corporatist process that the writer had noticed.
I can be anti-prescriptivist in general without losing my opposition to that specific process.
The process of language drift is enormously accelerated by the internet. Timescales of 5-10 years and upwards are obsolete; these changes can happen in months now, sometimes faster depending on the community.
> As I said in that Berlin speech:
>> Enshittification names the problem and proposes a solution. It's not just a way to say 'things are getting worse' (though of course, it's fine with me if you want to use it that way. It's an English word. We don't have der Rat für englische Rechtschreibung. English is a free for all. Go nuts, meine Kerle).
Unfortunately, I just think that Cory is wrong, in the sense that ... while it's true that English is a free-for-all (most languages are, really) ... there's an actual cost to the sloppy usage, one that diminishes the utility of ever even coming up with the word. It's obviously fine for Cory to be fine with it (along with anyone else being fine with it), but at a point in time when the theory actually matters, I think the cost ought to be considered more seriously.
Somewhere in the not too distant future, the theory/concept that enshittification identifies will be of less importance for a variety of reasons, and loose use of the word won't matter, because the theory/concept will be either irrelevant or widely known or both. But right now, when someone wants to talk about Cory's idea about how internet services are deliberately degraded over time, it's incredibly helpful to have a "unique" term for that.
You act as if it were impossible to talk about "how internet services are deliberately being degraded over time" before the word was coined, but it wasn't; we already had a more precise term for that: platform decay.
But my brother in Christ "enshittification" isn't a unique term. It's a common prefix, a common suffix and the word "shit." It was never that great a term of art to begin with, it was just an excuse to say "shit" in polite company. It's a word invented by a blogger for clicks. This isn't a hill worth dying on.
We must live on different planets.
It must be easier than ever to build content mills these days.
On the internet no one knows if you're a dog, human or a moltbot.
1. Don't believe everything or anything you read or see on the Internet.
2. Never share personal information about yourself online.
3. Every man is a man, every woman is a man, and every teenager is an FBI agent.
I have yet to find a problem with the Internet that isn't caused by breaking one of the above rules.
My point being you couldn't ever trust the Internet before anyways.
Now you can collate a list of thousands of titles and simply instruct an LLM to produce garbage for each one and publish it on the internet. This is a real change, IMO.
3a. ... and nobody knows if you're a dog.
Great piece btw
Yes, there was: becoming the primary contributor by volume to Scots Wikipedia (which probably doesn't have many contributors to begin with, but there you are). Some people just have to have attention, no matter how.
What we have here is worse; LLMs give you bullshit. A bullshitter does not care whether something is true or false; they just use rhetoric to convince you of something.
I am far from being someone nostalgic about the old internet, or the world in general back then. Things in many ways sucked back then; we just tend to forget exactly how they sucked. But honestly, an LLM-driven internet is mostly pointless. If what I am to read online is AI-generated crap, why bother reading it on websites and not just read it straight from a chatbot already?
Google did all the innovation it needed to and ever is going to. It needed to be broken up a decade ago. We can still do it now. Though I don't know how much it will save, especially if we don't also go after Apple, and Meta, and Microsoft.
AI needs to be kept up to date with training data. But that same training data is now poisoned with AI hallucination. Labelling AI generated media helps reduce the amount of AI poison in the training set, and keeps the AI more useful.
It also simply undermines the quality of search, both for human users and for AI tool use.
SEO is a slippery slope on all sides, because a little bit is good for everyone: Google wanted pages it could easily extract meaning from, publishers wanted traffic, and users wanted relevant search results. Now there's a prisoner's dilemma where, once someone starts abusing SEO, it's a race to the bottom.
I reject this emphatically. Google should never have been in the business of shaping internet content. Perhaps they should have even gone out of their way to avoid doing so. Without Google (or a better-performing competitor) acquiescing to the game, there is no SEO market.
People want something real, not AI slop or shills or astroturf or corpo-speak or any of a thousand other flavors of fake. People want it rather desperately. In fact, the current situation is bad for people's mental health. Can someone figure out how to give people a much higher percentage of real?
And at that point does it even matter? Zuckerberg wins.
But it's the date after which it is no longer possible to discern any reality you can't actually observe.