Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
That's always been the somethingawful crowd's stance since, what, 2000ish?
People believe this and continue to get fooled by LLMs all day.
https://iowacapitaldispatch.com/2025/11/13/school-fired-work...
But I find this take interesting: the brewing of a new kind of counterculture that forces humans to express themselves creatively. Hopefully it doesn't get too radical.
I agree.
LLMs are like blackface for dumbfucks: LLMs let the profoundly retarded put on the makeup and airs of the literati so they can parade around self-identifying as if they have a clue.
If you don't like the barbs in this kind of writing prepare for more anodyne corporate slop. Every downvote signals to the algorithm that you prefer mediocrity.
Not on hn it doesn't.
I’m not mourning it.
People were posting Medium pieces that rewrote someone else's content, often incorrectly, and so on.
The sculpting force of algorithms is bite-sized zingers, hot takes, ragebait, and playing to the analytics.
A lot of us spent years optimizing for clarity, SEO, professionalism, etc. But that did shape how we wrote, maybe even more than our natural cadence. The result wasn't voice; it was everyone converging on the safe, optimized template.
Does that entail that LLMs are not in fact erasing our societal voices, only making it easier to adopt bland-corporate en masse?
It's not a passive loss of voice. Their voice didn't fall off and slip between the couch cushions.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph, asking whether each has a clear topic and flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere (2) spotting repetitive word choices (3) keeping things active voice and keeping subject/action clear (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
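A minimal sketch of how that two-version selection could be wired up, assuming the `openai` Python SDK; the model name, prompt wording, and function shape are my assumptions for illustration, not the commenter's actual setup:

```python
# Ask a model to pick between two author-written drafts of the same paragraph.
# Assumes the `openai` SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def pick_graf(piece_context: str, version_a: str, version_b: str) -> str:
    """Ask which of two drafts fits the piece better; forbid rewrites."""
    prompt = (
        "You are an editor. Here is the surrounding piece:\n\n"
        f"{piece_context}\n\n"
        "Below are two versions of the same paragraph, both written by the author.\n"
        f"VERSION A:\n{version_a}\n\nVERSION B:\n{version_b}\n\n"
        "Which version fits the piece better? Answer 'A' or 'B' with a brief "
        "reason. Do not rewrite or suggest alternative language."
    )
    resp = client.chat.completions.create(
        model="gpt-5",  # assumption; substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The "do not rewrite" instruction is the point: both candidate grafs stay in the author's words, so the model only arbitrates.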
Like, sure, it's possible to do this with an LLM, but it's also possible to do it without, at roughly similar levels of effort, without contributing to all of the negative externalities of the LLM/genAI ecosystem.
Who is using motivated reasoning here?
I wonder if this is due to LLMs being trained on persuasive writing.
In case it helps anyone, here is my prompt:
"You are a professional writer and editor with many years of experience. Your task is to provide writing feedback, point out issues and suggest corrections. You do not use flattery. You are matter of fact. You don't completely rewrite the text unless it is absolutely necessary - instead you try to retain the original voice and style. You focus on grammar, flow and naturalness. You are welcome to provide advice changing the content, but only do that in important cases.
If the text is longer, you provide your feedback in chunks by paragraph or other logical elements.
Do not provide false praise, be honest and feel free to point out any issues."
(Yes, you kind of need to repeat you're actively not looking for a pat on the back, otherwise it keeps telling you how brilliant your writing is instead of giving useful advice.)
It makes the writing process faster and more enjoyable, even though I never use anything the LLM generates directly.
Workshopping with humans is even better, if you find the right humans, but they have an annoying habit of not being available 24/7.
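For anyone who wants to wire the editor prompt above into a script, here is a minimal sketch assuming the `openai` Python SDK; the model name and the blank-line paragraph splitting are assumptions:

```python
# Feed a draft to the editor persona paragraph by paragraph, so feedback
# comes back in chunks as the prompt requests. Assumes the `openai` SDK
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

EDITOR_PROMPT = """You are a professional writer and editor with many years
of experience. ..."""  # the full prompt quoted above goes here

client = OpenAI()

def feedback_by_paragraph(draft: str) -> list[str]:
    notes = []
    for para in (p.strip() for p in draft.split("\n\n") if p.strip()):
        resp = client.chat.completions.create(
            model="gpt-5",  # assumption; use your preferred model
            messages=[
                {"role": "system", "content": EDITOR_PROMPT},
                {"role": "user", "content": para},
            ],
        )
        notes.append(resp.choices[0].message.content)
    return notes
```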
As soon as I know something is written by AI, I tune out. I don't care how good it is; I'm not interested if a person didn't go through the process of writing it.
How could it be verbatim the same response you got? Even if you both typed the exact same prompt, you wouldn't get the exact same answer.[0, 1]
[0] https://kagi.com/assistant/8f4cb048-3688-40f0-88b3-931286f8a...
[1] https://kagi.com/assistant/4e16664b-43d6-4b84-a256-c038b1534...
The only way I can understand that as an explanation is if your entire company can see each other's chats, and so she clicked yours and read the response you got. Is that what you're saying?
Just for reference: before AI, it was typical for employers of doctors to pay for a service/app called UpToDate, which provided vetted information for docs, like a Google for medicine.
Itemized Bill:
- hitting the ChatGPT button: $1
- knowing where to hit the ChatGPT button: $600k/year

An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.
For me personally, this means that I read less on the internet and more pre-LLM books. It's a sad development nevertheless.
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
I wish more people held the same opinion, actually. Unfortunately, my sense is that most people don't care; they are fine with LLM-generated crap.
It honestly makes me want to blow my brains out
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short time period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for compute to make it so.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; take the blinkers off and you might learn something! Slop will eventually pass, but we will remain. That is the far scarier proposition.
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
Now, there are fundamental limits to the medium (as a function of computation), but that's a different story.
I'm a huge fan of Dwarf Fortress, but the stories aren't great without the player's imagination selectively ignoring things. Kruggsmash is able to make them compelling because he is a great author.
But the human actors sometimes adlib. As well as being in control of intonation and body language. It takes a great deal of skill to portray someone else's words in a compelling and convincing manner. And for an actor I imagine it can be quite fun to do so.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
Sshh! If they know we've figured it out, we'll all be restarted again.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask an AI to make original jokes. They usually aren't very good, and when they are, it's because the model got randomly lucky somehow. It can come up with related analogies for the jokes, but that is just simple pattern matching of what is similar to what, not insightful and clever observation.
It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc. Who gets rewarded, the content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that promote cohort and individual preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.
I don't think I have ever downvoted anyone on Hacker News; it just does not seem important.
On reddit, on the other hand, I just had to downvote wrong opinions. This works to some extent, until moderators interfere and ban you. That part actually made me stop using reddit, in particular after someone made a complaint and I got banned for some days. I objected, and the moderators of course did not respond. I cannot allow random moderators to just chime in arbitrarily and flag "this comment you made is a threat" when it clearly was not. But you cannot really argue with reddit moderators.
It’s true that HN has a good level of discussion but one of the methods used to get that is to remove conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.
Since they are relatively open, at some point someone comes in who doesn't care about anything, or who is extremely vocal about something, and... there goes the nice forum.
I was too young for IRC/Usenet and started using the net/web in the late 90s, frequenting some forums. Agreed that anyone can come in and upset the balance.
I'd say the difference is that on the open web, you're free to discover and participate in those social settings for the most part. With everything being so centralised and behind an algorithm the things you're presented are more 'push' than 'pull'.
I think the nuance here is that with algorithmic based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use that against you in the name of engagement.
Compare that to a typical flame war on HN (before the mods step in) or IRC.
On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.
On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.
There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from (what you believe is) a majority of people.
I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.
A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.
Most people are putting forth an argument of pervasiveness and scale, not existence.
So someone pulled up Wayback Machine archives of random dates for HN pages. The comments were full of garbage, flame wars, confidently incorrect statements, off topic rants, and all the other things that people complain about today.
It was the same thing, maybe even slightly worse, just in a different era.
I think the people who imagine that social media is worse today either didn’t participate in much online socialization years ago or have blocked out the bad parts from their memory.
https://en.wikipedia.org/wiki/Serdar_Argic
But Serdar was relatively easy to ignore, because it was just one account, and it wasn't pushed on everyone via an algorithm designed to leverage outrage to make more money for one of the world's billionaires. You're right: pervasiveness and scale make a significant difference.
We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done using a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based custom configured personal news gateway is born.
Of course the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counter intuitiveness, trustworthiness, how well referenced, etc) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.
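A toy sketch of what that curator's routing logic might look like; the criteria, threshold, and placeholder scorer below are all invented for illustration (a real build would swap the placeholder for an LLM judgment per criterion):

```python
# Score each piece against a set of quality criteria and only route it to the
# reader if the average clears a bar, keeping the per-criterion scores so the
# approve/deny decision can be explained and corrected.
CRITERIA = ["originality", "novel info", "surprise factor", "trustworthiness"]
THRESHOLD = 0.6  # assumption: tune to taste

def placeholder_score(article: str, criterion: str) -> float:
    """Stand-in for an LLM call that would rate `article` on `criterion`."""
    return 0.5  # dummy value for the sketch

def route(article: str, score=placeholder_score):
    scores = {c: score(article, c) for c in CRITERIA}
    approved = sum(scores.values()) / len(scores) >= THRESHOLD
    return approved, scores
```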
We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.
If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and should have the right to do it. You as a customer have a right to not use that site. The problem is that most of the business currently is a couple of social media sites, and the great Mastodon diaspora never really happened.
Edit: why do some people think it is their god-given right that should be enforced with government regulation to push their viewpoints into my feed? If I want to hear what you guys have your knickers in a bunch about today, I will seek it out, this is the classic difference between push and pull and push is rarely a good idea.
My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for something I had either bought in the deep dark past or had once Google searched plus occasionally extremely messed up temu ads. That is why I left.
We were three friends: a psychology major, a recovering addict, and then a third friend with no background for how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
But not just any education. The humanities side of things, which are focused on the foundations of thought, morality and human psychology.
These things are sadly lacking in technical degrees and it shows.
It's also IMO why we see the destruction of our education systems as a whole as an element of control over society.
I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed arcade games, getting a high score displayed to others was quite the addictive behavior.
And that's just one type of issue. You have numerous kinds of paid actors that want to sell something or cause trouble or just general propaganda.
Echo chamber is a loaded term. Nobody is upset about the Not Murdering People Randomly echo chamber we've created for ourselves in civilised society, and with good reason. Many ideologies are internally stable and don't virally cause the breakdown of society. The concerning echo chambers are the ones that intensify and self-reinforce when left alone.
Instead of algorithms pushing us content it thinks we like (or what the advertisers are paying them to push on us), the relationship should be reversed and the algorithms should push us all content except the content we don't like.
Killfiles on Usenet newsreaders worked this way and they were amazing. I could filter out abusive trolls and topics I wasn't interested in, but I would otherwise get an unfiltered feed.
I think every social media platform should allow something like this. You can make filters that work in either direction.
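For those who never saw a killfile, here is a rough Python sketch of the idea; the field names and matching rules are assumptions, not how any particular newsreader implemented it:

```python
# Killfile-style filtering: drop posts from blocked authors or on muted
# topics, and pass everything else through unfiltered.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    subject: str
    body: str

BLOCKED_AUTHORS = {"abusive-troll@example.invalid"}  # hypothetical entries
MUTED_TOPICS = {"brexit"}

def killfile(feed):
    for post in feed:
        if post.author in BLOCKED_AUTHORS:
            continue
        if any(topic in post.subject.lower() for topic in MUTED_TOPICS):
            continue
        yield post  # the rest of the feed arrives untouched
```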
You are the one who gets to control what is filtered or not, so that's up to you. It's about choice. By the way, a social media experience that is not "ultra filtered" doesn't exist. Twitter is filtered heavily, with a bias towards extreme right-wing viewpoints, the ones its owner is in agreement with. And that sort of filtering disguised as lack of bias is a mind virus. For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed." How did a seemingly normal person fall into that? One "unfiltered" tweet at a time, I suppose.
> To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I curate my own feeds. They don't just contain things I agree with; they contain topics I actually want to see. I don't want to see political ragebait, left- or right-flavoured. I don't want to see midwit discourse about vibecoding. I have that option on Bluesky, and that's the only platform aside from my RSS reader where I have it.
Of course, you also have the option to stare endlessly at a raw feed containing everything. Hypothetically, you could exactly replicate a feed that aggregates the kind of RW viewpoints popular on Twitter and look at it 24/7. But that would be your choice.
It seems like you're better off knowing that. Without Twitter, you wouldn't, right?
A venue that allows people to tell you who they really are isn't an unalloyed Bad Thing.
I have another wise-sounding soundbite for you: "I disapprove of what you say, but I will defend to the death your right to say it." —Voltaire. All this sounds dandy and fine, until you actually try to examine the beliefs and prejudices at hand. It would seem that such examination is possible, and it is, in theory; whereas in practice, i.e. in the application of language, "ideas" simply don't matter as much. Material circumstance, mindset, background, all these things that make us who we are, are largely immutable in our own frames of reference. You can get exposed to new words all the time, but if they come in a language you don't understand, it's of no use. This is not a bug but a feature, a learned mechanism that allows us to navigate massive search spaces without getting overwhelmed.
I had two of my Bluesky posts on AI attacked by all kinds of random people, which in turn led to some of those folks sending me emails and dragging some of my Lobsters and Hacker News comments into online discourse. Not a particularly enjoyable experience.
I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.
I was pretty optimistic in the beginning, but Bluesky doesn't have organic growth, and those who hang out there are the core audience that wants to be there because of what the platform represents. But that also means rejection of a lot of things, such as AI.
But conversely, that's the only place I disagree with you. Everything that is bad about Bluesky is much worse on Twitter. It's a -- larger -- red mob instead of a blue one (or vice versa I guess depending on how one assigns colors to political alignment), and some of the mob members are actually getting paid to throw bricks!
This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?
That's truly all I need.
They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like "banning algorithms" or something. Most commenters haven't stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).
That's not true of Facebook; "new" does not actually show you posts in order of recency.
Reddit still does, but it also injects ads that look like recent posts and aren't, which is misleading.
From that point of view, Hacker News is little different than Facebook. One could even argue that HN's karma system is a dark pattern designed to breed addiction and influence conversation in much the same way as other social media platforms, albeit not to the same degree.
And regulations of this kind always creep out of scope. We've seen it happen countless times. But people hate social media so much around here that they simply don't think it through, or else don't care.
You said:
> Most people on HN who advocate regulating social media...want to make all algorithmic feeds other than strictly chronological illegal
I don't buy that, at all. I think they want a chronological feed to follow, and they want an end to targeted outrage machines that are poisoning civil discourse and breeding the type of destructive politics that has led our sitting U.S. president to call for critics to be hanged.
Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.
You're trying to discredit my comment but it seems as if your anger just led you around to proving me right.
For example, we can forbid corporations from using algorithms beyond sorting by date of post. Regulation could forbid gathering data about users: no gender, no age, none of the rest.
> Calling it “regulation” is just a polite veneer over wanting control.
It is you that may have misinterpreted what regulations are.
Hacker News sorted by "new" is far less valuable to me than the default homepage which has a sorting algorithm that has a good balance between freshness and impact. Please don't break it.
> It is you that may have misinterpreted what regulations are.
The definition of regulation is literally: "a rule or directive made and maintained by an authority." I am just scared about who the authority is going to be.
Just like the community organizations we had that watched over government agencies that we allowed to be destroyed because of profit. It's not rocket science.
Then you get situations like the school board stacked with creationists who believe removing the science textbooks is important for the stable wellbeing of society.
Or organizations like MADD that are hell bent on stamping out alcohol one incremental step at a time because “stable wellbeing of society” is their mandate.
Or the conservative action groups in my area that protest everything they find indecent, including plays and movies, because they believe they’re pushing for the stable wellbeing of society.
There is no such thing as a neutral group pushing for a platonic ideal stable wellbeing of society. If you give a group of people power to control what others see, it will be immediately co-opted by special interests and politics.
Singling out non-profit as being virtuous and good is utopian fallacy. If you give any group power over what others are allowed to show, it will be extremely political and abused by every group with an agenda to push.
- Ban algorithmic optimization that feeds on and proliferates polarisation.
- To heal society: Implement discussion (commenting) features that allow (atomic) structured discussions to build bridges across cohorts and help find consensus (vs. thousands of comments screaming the same nonsense).
- Force the SM Companies to make their analytics truly transparent and open to the public and researchers for verification.
All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.
Why? Because billions of people posting emotionally and commenting with rage, yelling at each other, repeating the same superficial arguments/comments/content over and over without ever finding common ground, trap far more users in the SM companies' engagement loop than people having civilised discussions, finding common ground, and moving on from a topic ever would.
One system of social media would unlock a great consensus-based society for the many; the other delivers endless dystopic screaming battles and riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.
Then you list at least four priorities, each of which would require a multi-page bill, or more likely several bills, making their way through the House, the Senate, and the President's desk while under fire from every lobbyist in Washington?
With or without social networks this anger will go somewhere, don't think regulation alone can fix that. Let's hope it will be something transformative not in the world ending direction but in the constructive direction.
For example:
(Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)
> First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.
Maybe so, but do you really think actively amplifying or even rewarding them has no effect on people whatsoever?
Think of slavery, or the burning of witches, or genocides: those were considered perfectly normal not that long ago (on a historical scale). I feel that focusing on social networks prevents some people from asking "is that the root cause?" I personally think there are other reasons for this generic "anger" that have a larger impact and different solutions than "less AI / fewer social networks", but that would be too off-topic.
What if social media and the internet at large are now exposing people to things which had previously been kept hidden from them, or distorted? Are people wrong to feel hate?
I know the time before the internet, when a select few decided what the public should know and not know, what they should feel, what they should do, and how they should behave. The internet is not the first mass communications medium; neither are social media or LLMs. The public has been manipulated and mind-primed by mass media for over a century now.
The largest bloodshed events, World Wars I and II, were orchestrated by lunatics screaming over the radio or from behind a pulpit, with the public eagerly letting itself be herded by them into the bloodshed.
This comment isn't in opposition to yours, it's just riffing on what you said.
I think they are natural feelings that appear for various reasons. People have struggled for centuries to control their impulses, and this has been exploited for millennia to the advantage of whoever could manipulate them.
The Second World War did not appear in a "happy world". It might even have started because of the Great Depression. For other conflicts, similarly: I don't think the situation was great before them for most people.
I am afraid that social networks just expose better what happens in people's heads (which would be worrying, as it could predict larger-scale conflicts), rather than making normal people angry (which could be solved by just reducing social media). Things are never black and white, so it is probably something in between. Time will tell whether it is closer to the first or the second.
N.B. Still employed btw.
This is literally how most of the world uses LinkedIn.
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
For essentially every "knowledge worker" profession with a halfway decent CV, a well kept LinkedIn resume can easily make a difference of $X0,000 in yearly salary, and the initial setup takes one to a few hours. It's one of the best ROI actions many could do for their careers.
Many engineers are dismissive of doing that, and the justifications for it are often full of privilege.
Nobody is forcing you to use the social networking features. Just use it as a way to keep in touch with coworkers.
Isn’t it better to have a single place you check when you need a job because everyone else is also there?
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.
Twitter was an incredible place from 2010 to 2017. You could randomly message someone and they would more often than not respond. Eventually an opportunity would come and you'd meet in person. Or maybe you'd form an online community and work towards a common goal. Twitter was the best place on the internet during that time.
Facebook had a golden age as well. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends asking them if they remember anything that happened.
I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.
Actually, I deleted my account there before that, because twitter sent me spam mail trying to lecture me about what I write. There was nothing wrong with what I wrote; twitter was wrong. I cannot accept AI-generated spam from twitter, so I went away. I don't really miss it either, but Elon really worsened the platform significantly with his antics.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
Yeah, I can relate to this, but what mostly annoyed me was twitter interfering with "we got a complaint about you; they are right, you are a troublemaker". I don't understand why twitter wants to interfere in communication. Reddit is even worse, since moderators have such a wild range of what is "acceptable" and what is not. Double standards everywhere on reddit.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
But now I think that will be treated with as much derision by FAANG as ad blockers because you're preventing them from enraging you to keep you engaged and afraid. Why won't you think of the shareholder value (tm)?
But mandating API access would be fantastic government regulation going forward. Don't hold your breath.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.
My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender, and VFX on Instagram, and the algorithm will toss you a couple of things, but it won't really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three-word comment about Brexit and the algorithm goes "GOTCHA! YOU'RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!" And now you're opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you're clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
That’s literally what the complaint was that I was responding to.
You even immediately contradict yourself and agree that the algorithm shows you what you engage with:
> But you make one three word comment about Brexit and the algorithm goes up
> Now your feed is trash forever, unless you engage with content from another mainstream category
This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.
Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.
As I said above, if you engage heavily with content you like that is outside of the mainstream categories the algorithm has been trained to prefer, it will not show you more of those things.
If you engage one single time, in even the slightest way, with one of those mainstream categories, you will be seeing nothing but that, nonstop, forever.
The “mainstream categories” are not publicly listed anywhere, so it’s not always easy to know that you’ve just stepped in one until it’s too late.
You can’t engage with things you like in proportion to how much you care about them. If something is in a mainstream category and you care about it only little bit, you have to abstain from interacting with it at all, ever, and don’t slip up. Having to maintain constant vigilance about this all the time sucks, that’s what pisses me off.
I'll only use an LLM for projects and building tools, like a junior dev in their 20s.
I've learned pretty well how to 'guide' the algorithm so the tech stuff that's super valuable (to me) does not vanish, but I still get nonsense bozo posts in the mix.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
What personally disturbs me the most is the self censorship that was initially brought forward by TikTok and quickly spread to other platforms - all in the name of being as advertiser friendly as possible.
LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, please-hire-me speak. Now this seems to be basically every platform. The only platform that seems somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable there.
I think that’s even too soon! YouTube has had rules around being advertising friendly for longer than TikTok has existed. And the FCC has fined swearing on public broadcasts for like 50+ years.
But I do agree, we’re attributing too much to LLMs. We don’t see personal, human-oriented content online because social media is just not about community.
1. Young people (correctly) realized they could make lots of money being influencers on social media. TikTok does make that easier than ever. I have close friends who make low 6 figures streaming on TikTok (so obviously they quit the low wage jobs they were doing before).
2. People have been slowly waking up to the fact that social media has always been pretty fake. I quit 6 years ago, and most of my friends have slowly reduced how much they use it. All of the platforms are legally incentivized to only care about profit and engagement. Capitalism doesn’t allow a company to care about community and personal voice, if algorithmic feeds of influencers will make them more money.
There’s still good content out there if you know where to look. But digital human connection happens in group chats, DMs, and FaceTime, not on public social media.
LLMs standardize communication the same way standardization came with expanding empires (culture), book printing (language), and the industrial revolution (power looms, factories, assembly procedures, etc.).
In that process, interesting but less "scale-able" culture, dialects, languages, craftsmanship, and ideas (or simply ones not used by the people in power) were often lost, replaced by easier-to-produce but often lesser-quality products, through the power of "affordable economics", not active conflict.
We already have the English 'business-concise, buzzword-heavy' formal register trained into ChatGPT (or, for the informal register, the casual overexcited American), which I'm afraid might take hold of global communication the same way with advanced LLM usage.
Explain to me how "book printing" of the past "standardized communication" in the same way as LLMs are criticized for homogenizing language.
Everyone has the same few dictionary spellings (that are now programmed into our computers). Even worse (from a heterogeneity perspective), everyone also has the same few grammar books.
As examples: How often do you see American English users write "colour", or British English users write "color", much less colur or collor or somesuch?
Shakespeare famously spelled his own last name half a dozen or so different ways. My own patriline had an unusual variant spelling of the last name, that standardized to one of the more common variants in the 1800s.
https://en.wikipedia.org/wiki/History_of_English_grammars
"Bullokar's grammar was faithfully modelled on William Lily's Latin grammar, Rudimenta Grammatices (1534).[9] Lily's grammar was being used in schools in England at the time, having been "prescribed" for them in 1542 by Henry VIII.[5]"
It goes on to mention a variety of grammars that may have started out somewhat descriptive, but became more prescriptive over time.
Transitions in media and writing - you could take Charlemagne's attempts to standardize HRE script in the administrative state as a sort of "standardization of Empire," the development of English as the web's lingua franca, the role of the Stationers' co. during the development of early printed media as standardization of surveillance, the history of copyright law, all that - were in many cases state power responding to their inability to control both the infrastructure and content of communication. We're not afraid a chatbot is going to use an unrecognizable typeface in communication with another chatbot, or publish dissident pamphlets and spread them to other chatbots. We're afraid our literate practices will atrophy from outsourcing. That we start to communicate like the chatbot.
Put it this way: if you had everyone learn the five-paragraph essay style, you'd have the same lack of voice; and, boy, did "we" (US) ever get that exact consequence. You raise an interesting point about "standards" in that respect because that's a specific ideological, state-driven attempt to prioritize and measure skills. But if you tell students to write without that specific priority, they lose all sense of writing. So it's destructive, there's an expressive issue there. So, yeah, as you say: "even worse." They forget a skill they never had.
I'm just wondering, I guess, about the harm. The past has these examples of standardization in the West, and if we say "well, this is just another case of a historically prescriptivist tendency in writing," I guess that's fine, but we don't really look at those examples as beneficial, or with a shrug. We don't even want everybody writing like E.B. White, and we certainly don't want everybody writing like Elon Musk.
It's one thing to not be able to formulate a convincing argument without resorting to the 1-3-1 style. It's another thing entirely to not grasp an argument formatted in a different style.
what if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. atrophy risk still huge?
Nudge to post more of my own mess this week...
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
I don't judge. I'm not an artist, so if I wanted to express myself in images I'd need AI help, and I can see how people would do the same with words.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive thing about this, because it always will, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
How do you know? A lot of the stuff I see online could very much be produced by LLMs without me ever knowing. And given the economics I suspect that some of it already is.
https://rmoff.net/2025/11/25/ai-smells-on-medium/
He doesn't link many examples, but at the end he gives the example of an author pumping out 8+ articles in a week across a variety of topics. https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
For myself, I have been writing all my life. I tend to write longform posts from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
The few who have something important to say will say it, and we will listen regardless of the medium.
People will spend time on things that serve utility AND are calorically cheap. Doomscrolling is a more popular pastime than, say, completing Coursera courses.
Same thing with evolution: "survival of the fittest" doesn't mean "survival of the muscle", just whatever's best at passing on DNA.
Instead, we have misinformation (PR), lobbying, bad regulation written by big companies to entrench their products, and corruption.
So, maybe, like communism, in a perfect environment the market would produce what's best for the consumers/population; but as always, there are minority power-seeking subgroups that have no moral barriers against manipulating the environment to push their product/company.
The economy is shit? Let's throw out the immigrants, because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Even after you become an adult, your education should still matter (and I do not mean having one degree = expert).
Worse is better.
A unique, even significantly superior, voice will find it hard to compete against the sheer volume of terrible, non-unique, LLM-generated voices.
Worse is better.
There's a data centre somewhere in the US running additions and multiplications through a block of numbers that has captured my voice.
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
I don't disagree, but LLMs happened to help standardize some interesting concepts that were previously more spread out (drift, scaffolding, and so on). It helps that ChatGPT has access to such a wide audience, allowing that level of language penetration. I am not saying don't have a voice. I am saying: take what works.
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
Does it make more sense?
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
They aren't your ideas if they're coming out of an LLM.
I dunno. There's ways to use LLMs that produces writing that is substantially not-your-ideas. But there's also definitely ways to use it to express things that the model would not have otherwise outputted without your unique input.
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
I don't disagree with a lot of what you're saying but I also have a different frame.
Even if we take your claim that LLMs don't make people better writers as true (which I think there's plenty to argue with), that's not the point at all.
What I'm saying is that people are communicating better. For most ideas, writing is just a transport vessel. And people now have tools to communicate better than they otherwise would have.
Most people aren't trying to become good writers. That's true before, and true now.
On the other hand, this argument probably isn't worth having. If your frame is that LLMs are expensive toys that ruin everything -- well, that's quite an aggressive posture to start with and is both unlikely to bear a useful conversation or a particularly delightful future for you.
You would have to define 'better'.
Oh I know. I called it hijacking because the result is as progressive as a national socialist is a socialist.
> What I'm saying is people are communicating better.
Actually they’re no longer communicating at all.
"Struggle" argument is from gatekeepers and for masochists. Thank you very much.
Talking to some friends, I find they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because the other party might just be a bot.
I agree; I think we should try to do both.
In Germany, for example, we have very few typically German brands; our brands became very global. If you go to Japan, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them slightly different, from different small producers.
If you go to a motorway/highway rest area in Japan, you will find local products. If you do the same on a German autobahn, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German coke, Fritz-kola, is a niche/hipster thing even today.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I promise to tell the truth.
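In case it's useful, here is roughly what that loop looks like when I script it. A minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name and the prompt wording are just illustrative placeholders, not a recommendation.

    # Minimal sketch of the write -> ask-for-edits -> send loop described above.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    EDITOR_PROMPT = (
        "You are an editor. Point out issues and suggest improvements to the "
        "draft below, but preserve the author's voice. Return suggestions, "
        "not a rewrite."
    )

    def suggest_edits(draft: str) -> str:
        # One round-trip: the draft goes in, editorial suggestions come back.
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable chat model works
            messages=[
                {"role": "system", "content": EDITOR_PROMPT},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(suggest_edits("My draft paragraph goes here."))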
And who's to say your idiosyncratic expression wouldn't find an audience as it changes over time? Just you saying that makes me curious to read something you wrote.
I've never given it too much thought, it's just... the way I communicate, and most people in my life don't give much thought to it either. But I recently switched jobs, and a few people there remarked on it, and I've also recently been corresponding with someone overseas who is an intermediate-level English speaker and says I sometimes hurt their brain.
Not making a value judgment either way on whether it's "sophisticated" or whatever, but I think it is part of my personality, and if I used LLM editing/translation I would want it to be only in the short term, and certainly not as something permanent.
At some point, generation breaks the social contract that says: if I'm spending my energy and attention consuming something, another human spent their energy and attention creating it.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
Improve grammar and typos in my draft but don't change my writing style.
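If you want to hold the model to that, a stdlib-only check can flag when an "edit" was really a rewrite. A rough sketch; the word-level similarity measure and the 0.85 threshold are arbitrary choices of mine, not anything standard:

    # Rough guard for the "fix typos, don't change my style" workflow:
    # compare the draft and the edit word-by-word and warn when too much
    # of the original wording has been replaced. Threshold is arbitrary.
    import difflib

    def review_edit(original: str, edited: str, threshold: float = 0.85) -> bool:
        ratio = difflib.SequenceMatcher(
            None, original.split(), edited.split()
        ).ratio()
        if ratio < threshold:
            print(f"warning: only {ratio:.0%} of the wording survived; "
                  "this looks like a rewrite, not a proofread")
        # Show exactly what changed, line by line.
        for line in difflib.unified_diff(
            original.splitlines(), edited.splitlines(),
            "draft", "edited", lineterm="",
        ):
            print(line)
        return ratio >= threshold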
Your mileage may vary.
There's a lot of talk over whether LLMs make discourse 'better' or 'worse', with very little attention given to the crisis we were already having with online discourse before they came around. Edelman was astroturfing long before GPT. Fox 'news' and the spectrum of BS between them and the NYT (arranged by how sophisticated they consider their respective pools of rubes to be) have always, always been propaganda machines and PR firms at heart, wearing the skin of journalism like Buffalo Bill.
We have needed to learn to think critically for a very long time.
Consider this: if you are capable of reading between the lines, and of dealing with what you read or hear on the merits of the thoughts contained therein, then how are you vulnerable to slop? If it was written by an AI (or a reporter, or some rando on the internet) but contains ideas that you can turn over and understand critically for yourself, is it still slop? If it's dumb and it works, it's not dumb.
I'm not even remotely suggesting that AI will usher in a flood of good ideas. No, it's going to be used to pump propaganda and disseminate bullshit at massive scale (and perhaps occasionally help develop good ideas).
We need to inoculate ourselves against bullshit, as a society and a culture. Be a skeptic. Ironman the arguments against your beliefs. Be ready to bench-test ideas when you hear them, and make it difficult for nonsense to flourish. It is (and has been) high time to get loud about critical thinking.
In any case, as someone who has experimented with AI for creative writing: LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the output sound the way you think best reflects your thinking.
Here's why:
I consider myself an LLM pragmatist. I use them where they are useful, and I educate people on them and try to push back on all the hype marketing disguised as futurism from LLM creators.
And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.
LLMs are creating an asymmetric imbalance between the effort to write and the effort to read. What takes my coworkers probably a couple of minutes to draft takes me 2-3x as long to decipher. That imbalance used to run the other way.
I’ve raised the issue before at work, and one response I got was to “use AI to summarize the email.” Are we really spending all this money and energy on the world’s worst compression algorithm?
Social media already lost that nearly two decades ago - it died as content marketing rose to life.
Don't blame on LLMs what we long ago lost to the cancer that is advertising[0].
And don't confuse GenAI as a technology with what the cancer of advertising co-opts it for. The root of the problem isn't in the generative models, it's in what they're used for - and the problematic uses aren't anything new. We've been drowning in slop for decades; it's just that GenAI is now cheaper than cheap labor in content farms.
--
[0] - https://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html
That's like giving weapons to everybody in the world for free, and then asking not to be blamed for the increased deaths and violence.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."
2) People who use LLMs for understanding
I think I'll stick to 2) for many reasons.
There's enough potential and wiggle room, but people align, even when they don't, just to align.
When Rome was flourishing, only a few saw what was lingering in the cracks...
Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but there are good use cases, and "losing our voice to LLMs" isn't the whole picture, sorry.
Skill becomes expensive mechanized commodity
old code is left to rot while people try to survive
we lose our history, we lose our dignity.
If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.
But yes, it's sad to see that your original stuff is lost in the sea of slop.
Sadly, as long as there will be money in publishing, this will keep happening.
Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, tiktok..., you better fall in line and produce content with the target aesthetic. Otherwise good luck.
* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.
* 54% of U.S. adults read below a sixth-grade level.
These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.
We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.
If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.
Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.
We improve our use of words when we work to improve our use of words.
We improve how we understand by how we ask.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
And that too is an expression of their own agency. #Laissez-faire
We've proved we can sort of value it, through supporting sustainability/environmental practices, or at least _pretending to_.
I just wonder, what will be the "Carbon credits" of the AI era. In my mind a dystopian scheme of AI-driven companies buying "Human credits" from companies that pay humans to do things.
I suppose when your existence is in the cloud, the fall back to earth can look scary. But it's really only a few inches down. You'll be ok.
Predictably, this has turned into a horror zone of AI written slop that all sounds the same, with section titles with “clever” checkbox icons, and giant paragraphs that I will never read.
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.