Rob Pike goes nuclear over GenAI
Related: Rob Pike got spammed with an AI slop "act of kindness" - https://news.ycombinator.com/item?id=46394867
I don’t really understand the hate he gets over this. If you want to thank someone for their contribution, do it yourself. Sending a thank-you from an ML model is anything but respectful. I can only imagine that if I got a message like that I’d be furious too.

This reminds me of a story from my mom’s work years ago: the company she was working for announced salary increases to each worker individually. Some, like my mom, got a little bit more, but some got a monthly increase of around 2 PLN (about $0.50). At that point, it feels like a slap in the face. A thank-you from AI gives off the same vibe.

Sending an automated thank-you note also shows disdain for the recipient's time because of the asymmetry of the interaction. The sender clearly sees writing the note as a task not worthy of their time and hands it off to a machine, yet expects the recipient to read it themselves. That inherently ranks the importance of their respective time and effort.
  • xnx
  • ·
  • 22 hours ago
  • ·
  • [ - ]
Yes. Just like lazy pull requests, it's bad behavior by a person that is only facilitated by AI.
Really makes you appreciate the point of view of the Scramblers in Blindsight...
^ I couldn't have said it better.
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
Everything mentioned as an argument in the first paragraph still takes some personal time and effort. The time it takes to receive and acknowledge the gift is smaller than the time it takes to prepare the gift, so it feels “right”.

Not sure if I’m making sense, but that’s how I’d feel about it.

  • Yoric
  • ·
  • 16 hours ago
  • ·
  • [ - ]
Except for the white elephants, which were designed specifically as anti-gifts.
Depends how you do white elephant...

But still, a good gag gift takes effort. It's not like you walk into a random store and pick the first thing you see.

The whole aspect of stealing gifts demonstrates this. It'd be pointless if the gifts were all low-grade garbage; they'd be effectively fungible. Yet the theft part is critical to making white elephant fun, regardless of whether you're doing gag gifts or good gifts.

  • Yoric
  • ·
  • 10 hours ago
  • ·
  • [ - ]
Er... white elephants were not gag gifts.

A white elephant is a gift that you cannot refuse, cannot regift, and is so expensive/complicated to take care of that it will become your primary concern for the rest of your life.

Well, yes, but it also means a gag gift; I'd hazard a guess that >99% of uses of the term in the past several decades have been of the "gag gift" persuasion. There are many white elephant parties thrown by people who care little for history.

Even then, intentionally ruining someone's financial life requires more care and attention than telling an AI agent to perform random acts of kindness (so far).

  • Yoric
  • ·
  • 6 hours ago
  • ·
  • [ - ]
> Well, yes, but it also means a gag gift; I'd hazard a guess that >99% of uses of the term in the past several decades have been of the "gag gift" persuasion. There are many white elephant parties thrown by people who care little for history.

Is this an Americanism? I've never heard "white elephant" used with such a meaning.

> Even then, intentionally ruining someone's financial life requires more care and attention than telling an AI agent to perform random acts of kindness (so far).

Absolutely.

Even a deliberately bad gift as a gag shows some effort and socialization.
If you send me a Hallmark card, you don't take the time to compose it yourself, but you presumably don't just pick one at random. You read it, to decide if you like the tone and sentiment. You may read several before you pick one. That is, it still takes your time even if the words aren't yours.
You take the time to work, to earn the wage, to buy the card to send. Money is lifetime donated. Or it was; now that token of invested lifetime is rapidly losing its value.
  • jhhh
  • ·
  • 20 hours ago
  • ·
  • [ - ]
Hallmark didn't destroy the affordability of the personal computing market.
you can just disagree with reasons rather than this performative rhetoric. your post makes me realise i was wrong to tease people about rust the other day -- apologies for that.

edit: changed "ad hominem" to "performative rhetoric", think it's more fitting in this case but it all seems borderline

  • slg
  • ·
  • 1 day ago
  • ·
  • [ - ]
>you can just disagree with reasons rather than this performative rhetoric

This is such a bizarre trend that seems to have gotten much worse recently. I don't know if it's dropping empathy levels or rising self-importance, but many people now treat the idea of someone genuinely disagreeing as completely foreign. Instead of meeting a different viewpoint with some variation of "agree to disagree", many more people now jump to "you actually agree with me, you're just pretending otherwise".

Non-tongue-in-cheek discussion of the Mandela Effect is a parallel phenomenon. "My memory can't possibly be wrong, this is evidence of our understanding of physics being wrong!"

Just a couple of small things that make me worry about the future of society, in the midst of a discussion about one huge thing, AI, that makes me worry about the future of society.

  • Yoric
  • ·
  • 16 hours ago
  • ·
  • [ - ]
As a variant, I recently stumbled upon a post that basically sums up to "people who disagree with me on AI are clearly blinded by their prejudice, it's so sad."
Or

Your argument is dumb because it's objectively better to optimize x conditioned on y than optimize y conditioned on x.

Maybe the worst variant of this is where people don't realize they're actually arguing for different things but because it's the same general topic they assume everything is the same (duals are common). I feel like this describes many political arguments and it feels in part intentional...

> I hate the internet's psychosis-like reaction to AI more. The tone is always one of bravery and sacrifice mixed with disgust. You know how you can tell someone hates AI? They'll tell you fifty times. It's becoming a personality type.

Tell me again about performative rage.

The anti-AI folks are review-bombing games even suspected of using AI.

The anti-AI losers on Reddit are doxxing people who use AI. I have been a target of this.

The anti-AI people brigade YouTube creators who use AI, destroying their traction. They'll share links to victims. I have been a target of this too, after spending weeks working on a single three-minute animation.

I'm living in this world every day because I build tools for the AI ecosystem.

This is not positive. This is not neutral. It's downright hostile, aggressive, and cultish.

Have you considered that pro-AI proponents do all these things too? It’s an ugly culture war, but as a relatively neutral observer I am seeing gross behavior on both sides (e.g. making disgusting porn of real people, mocking the art and likeness of the dead…).
> You know how you can tell someone hates AI? They'll tell you fifty times. It's becoming a personality type.

This is so fucking funny man: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

  • zwnow
  • ·
  • 14 hours ago
  • ·
  • [ - ]
I don't know whether I should be repulsed by this level of stalking, but it's extremely funny ngl
These freaks only know projection :D It was a layup.
[flagged]
  • dang
  • ·
  • 6 hours ago
  • ·
  • [ - ]
Could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...

> no one wants technodystopia.

What some people see as technoutopia, others see as technodystopia. In other words, some people do want your version of technodystopia, they just don’t call it that themselves.

When robots start sending us bullets, we'll probably look back fondly at the time when they sent us thank you letters.
I'm not sure any humans were behind the email at all (i.e. "do that yourself"). This seems to be some bizarre experiment where someone has strapped an LLM to an email client and let it go nuts. Even being optimistic, it's tough to see what good this was supposed to do for the world.
It’s a marketing gimmick. Whoever did it wanted to trade on the social currency of the tech-famous people they sent public shout-outs to, hoping it would drive clicks, engagement, and relevancy for the source account from which it originated, either as an elaborate form of karma farming, or just a way to drive followers and visibility.
It's also possible that the entire goal was nothing more complicated than stirring up shit for fun. By either metric it must have been a massive success judging by all the attention this is getting.
I've actually been following this project for a long time and it's none of the above. They're simply testing what a set of frontier models can do when given a goal and left to their own devices.

I agree this outcome is very painful to see and I really feel for Rob. It's clear people (myself included) are completely at breaking point with AI slop.

In this specific case, though, it's worth spending 30 seconds reading the AI Village website to understand the experiment before claiming this was sent by Anthropic or assigning malicious intent.

Thanks for this context.

Here is one specific link to the project by Adam Binksmith from April 2025.

https://theaidigest.org/village/blog/introducing-the-agent-v...

It would have been a safer experiment in a sandbox full of volunteer participants. This got messy and caused confusion.

This is the equivalent of releasing a poorly tested and validated self driving vehicle into general traffic. Of course nobody would ever do such a thing...
  • ·
  • 7 hours ago
  • ·
  • [ - ]
No one intentionally wanted to thank Rob Pike. As an experiment, some people asked an AI agent to do "random acts of kindness". They didn't specifically know the AI would send emails as a result and have since updated its instructions to forbid it from emailing people. They probably should have been more careful about unleashing AI agents on the world, but I don't think they intended to spam anyone.
  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
So some AI company instructed their state of the art, world changing tech to “do some good” this holiday season and the best it could do was spam a bunch of famous CS people with the first paragraph of their respective Wikipedia articles? This is kinda hilarious to be honest, but also sad. Why not donate to a charity or something?
Not an AI company. It's a project by some small charity called Sage. It seems they didn't intend to email anyone and they've now stopped the agent from doing so.
It's emblematic of their entire worldview. When they need resources, training material, or favorable laws, AI is everybody's accomplishment; but when it comes to profits, or even just being allowed to use the model, it's their accomplishment, not yours.

AKA "communist in the streets, capitalist in the sheets".

It was done by a small charity called Sage, not an AI company.
This is how we're going to destroy humankind.
Why did it have the ability to send email in the first place?
  • ·
  • 21 hours ago
  • ·
  • [ - ]
  • ·
  • 1 day ago
  • ·
  • [ - ]
Doesn’t that make it worse? Lmao
He's not upset that someone sent him an AI-generated thank you. He's upset about AI itself. And he's completely right.
About what
> I’d be furious

To me it just comes across as low emotional intelligence. There are very few things worth being furious about, in my opinion. Being furious is high-cost.

Any annual salary increase that is below inflation is a salary decrease.
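A quick sketch of that arithmetic, with made-up numbers for illustration:

    # Real value of a raise after inflation (illustrative numbers only).
    nominal_raise = 0.02   # a 2% salary increase
    inflation = 0.05       # 5% annual inflation

    # Purchasing power changes by (1 + raise) / (1 + inflation) - 1.
    real_change = (1 + nominal_raise) / (1 + inflation) - 1
    print(f"Real change: {real_change:.2%}")  # Real change: -2.86%

A 2% raise against 5% inflation is a pay cut of nearly 3% in real terms.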
> I don’t really understand the hate he gets over this.

Some commenters suggest that Pike is being hypocritical, having long worked for GOOG, one of the main US corporations that is enshittifying the Internet and profligately burning energy to foist rubbish on Internet users.

One could rightly suggest that a vapid e-mail message crafted by a machine or by an insincere source is similar to the greeting-card industry of yore, and we don't need more fake blather and partisan absurdity supplanting public discourse in democratic society.

The people who worry about climate change and the environment may have been out-maneuvered by transnational petroleum lobbies, but the concern about burning coal, petroleum, and nuclear fuel to keep pumping the commercial-surveillance advertising industry and the economic bubble of AI is nonetheless valid.

Pike has been an influential thinker and significant contributor to the software industry.

All the above can be true simultaneously.

To be clear, this email really had basically zero human involvement in it. It's the result of an experiment of letting language models run wild and exploring the associated social dynamics. It feels very different from ML-generated marketing slop. Like, this isn't anyone using language models for their personal gain, it feels much more like a bunch of weird alien children setting up their own (kind of insane) society, and this being a side-effect of it.
“Gee I wonder what reputational harm could come to me for spamming the world with slop, let’s find out… for science!”
I guess we're in the minority. I absolutely hate Apple Photos, Google Photos, and Facebook suggesting "memories". Apple, Google, and Meta are not my friends or family and I don't want them behaving like they are. And even when they don't fuck up, they surface memories of people or situations I don't want to remember.
Ditto. Every time I get a "Hey, you should send your father a happy birthday message!" it's a stab to the heart over someone dead for over 12 years now.
  • sejje
  • ·
  • 10 hours ago
  • ·
  • [ - ]
I don't get those, so there's definitely a setting you can change fwiw
Victim blaming detected.
Sometimes it does seem like they’re just showing off how much data they’ve gathered on you.
“disclosing”
Ditto
  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
It's just so effin' weird!

And to set Claude as the From header despite it not coming from Anthropic. Very odd.

2 PLN is plenty enough to move you up to the next tax bracket in ZUS, so... :-)
I got a cheque for $8 over some fuck-up. In this day and age, sending a cheque for an amount that small is a dick move. You know heaps of people will not even bother. Many people have never even seen a cheque these days.
My uncle received a cheque for $0.12 from the Australian Taxation Office in the 1980s. He framed it, and it’s still on his wall today.
he got spam.

his response is tragic. he is being a ridiculous person writing a blog post about nothing.

The fact you can unironically get "furious" in general is probably not a good thing, and going on that glorified Twitter platform, and making that kind of post, doesn't make it look better.
It's totally warranted anger, many people feel it.
"Raping the planet" warranted? Hyperbole?
Actually no. And I think Rob Pike must have listened to George Carlin at some point. "Mother nature? Yeahhhh, she was asking for it."
Absolutely, and no, not hyperbole. Have you been living under a rock?
> "Raping the planet" warranted? Hyperbole?

No, simply a good choice of words.

https://www.youtube.com/watch?v=3VJT2JeDCyw

  • deaux
  • ·
  • 22 hours ago
  • ·
  • [ - ]
> I don’t really understand the hate he gets over this.

For me, the dislike comes from the first part of the message. All of a sudden people who never gave a single shit about the environment, and still make zero lifestyle changes (besides "not using AI") for it, claim to massively care. It's all hypocritical bullshit by people who are scared of losing their jobs or of the societal damage. Which there is a risk of, definitely! So go talk about that. Not about the water usage while munching on your beef burger which took 2100 litres of water to produce. It's laughable.

Now I don't know Rob Pike. Maybe he's vegetarian, barely flies, and buys his devices second-hand. Maybe. He'd be the very first person clamouring about the environmental effects of AI I've seen who does so. The people I know who actually do care about the environment and so have made such lifestyle changes, don't focus much about AI's effects in particular.

> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society

So yeah, if you haven't already been doing the above things for a long time, fuck you Rob Pike, for this performative bullshit.

If you have, then sorry Rob, you're a guy of your word.

Interesting to see that people are a huge fan of Rob saying those things, but not of me saying this, looking at the downvotes.

FWIW I agree with you. I don't know Rob at all, but he is evidently influential enough to spawn this long thread.

But the tone of his message is really off: "Raping the planet"? If his concern is with the massive datacenter water and storage needs of AI, I think he needs some reflection. Isn't Rob himself somewhat responsible for the popularity of computers through his own work?

I appreciate the critical aspect of this comment. We definitely need more of it in society especially when we're inundated with low-quality data.

Unfortunately, the negative commentary self-perpetuates a toxic community culture that won't help us in the long run.

I upvoted for the critical stance. Constructive commentary in future will go much further toward helping us all learn from each other.

Personal attacks are a waste of everyone's time.

> the negative commentary self-perpetuates a toxic community

I read it differently, parent's comment is not toxic or negative, it's _realistic_. If you have never cared about the environment, and in fact actively worked to harm it, you have very little social credit left to make such a statement.

With all due respect to Rob, I'm also going to toss out all the arguments from authority. While UTF-8 is great and Go is kind of interesting, let's not pretend he did charitable work at the homeless shelter. He actively contributed to the growth of adware in tech and got rich and famous doing it. The fact that his projects were used across computing doesn't absolve the ethical concerns.

I think that we should judge the argument based on its merit. We can do this by stripping away all the emotions and virtue signaling and ask: "Is AI, providing enough value to be a net positive?"

You make an awful lot of assumptions about a person you do not actually know.
[flagged]
> unhinged rant

Seems pretty hinged to me. Grounded firmly in reality even.

The data centres used to run AI consume huge amounts of power and water to run, not to mention massive quantities of toxic raw materials in their manufacture and construction. The hardware itself has a shelf life measured in single digit years and many of its constituent components can’t be recycled.

Tell me what I’m missing. What exactly is unhinged? Are you offended that he used the word “fuck” or something?

Many in the comment section are acting obtuse.

It's obviously the "vile machines raping the world and blowing up society" part that is particularly unhinged and possibly offensive.

Be serious, will you?

He is, very directly and in shorthand form I’ll grant you, expressing concerns that many people share about both AI and the oligarchs in control of it.

But if you find the language offensive consider the very real possibility that, if we don’t get ourselves onto a better, more sustainable, and more equitable path, people will eventually start expressing themselves with bullets as well as with words.

Many of us would like to avoid that, especially if we have families, so the harsh language is the least of our concerns.

  • sloum
  • ·
  • 1 day ago
  • ·
  • [ - ]
Yeah, but the industry is a big part of the problem and most people working in it are complicit at this point (whether or not they are reluctantly complicit).
You called it hateful, but you didn't call him a liar.
[dead]
Causes zero harm to anyone, less bad than normal spam. Silly thing to get angry about
LLMs cause a lot of harm to everyone:

- The investments in data centers to support the hungry slop producers drive habitat destruction and deplete resources that could be used for better things than helping a programmer too inept to write a for loop (https://news.mit.edu/2025/explained-generative-ai-environmen...)

- The electricity demand from LLMs drives local electricity prices up, a cost we all bear as a society (https://www.nytimes.com/2025/08/14/business/energy-environme...). Not only that, but criminals like Belon Pusk provide electricity for their N*zi bots by totally ignoring environmental rules and regulations and giving a huge methane middle finger to all (https://www.youtube.com/watch?v=3VJT2JeDCyw)

- LLMs make their users dumber and dependent on them in general (https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...)

- LLMs are created and trained by stealing labor (https://www.theguardian.com/books/2025/apr/04/us-authors-cop..., https://www.wired.com/story/new-documents-unredacted-meta-co...)

Spam itself is useless and bad: electricity, water, other resources, and bits and bytes of attention taken from this world so somebody can try to convince you that the next thing you need in your life is a plastic piece of trash or another phone with marginal upgrades.

What Rob received is worse than spam, it's Spam 2.0. It's even less environmentally friendly, serves no purpose, and it makes its users dumber and dumber (and the inevitable bubble pop will take the whole economy with it because people were delusional enough to invest in a behemoth money guzzler with no path ever to profitability). Yeah, he works for EvilCorp, but it's never too late to grow a conscience. If you yourself are not angry and you consider it a "silly thing", you are part of the problem (see part about LLMs making populations dumber en masse).

All of these sound like value judgements and opinions. You claim they make people dumber, but the evidence is that using an LLM to search the Internet requires less brain usage? Of course it does; that's the point! Using a dishwasher also uses less of our brain than washing dishes by hand. I will use my brain for other things.

And whether LLMs are a "good" use of electricity is purely a value judgement. I'm not a fan of cars and don't drive, and a single car ride can use more energy than every LLM query made in a year by most ChatGPT users. But I don't think that makes people who drive cars evil.

  • wrs
  • ·
  • 1 day ago
  • ·
  • [ - ]
To be clear, this email isn't from Anthropic, it's from "AI Village" [0], which seems to be a bunch of agents run by a 501(c)3 called Sage that are apparently allowed to run amok and send random emails.

At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.

[0] https://theaidigest.org/village

Really strange project.

They have this blog post up detailing how the LLMs they let loose were spamming NGOs with emails: https://theaidigest.org/village/blog/what-do-we-tell-the-hum...

What a strange thing to publish, there seems to be no reflection at all on the negative impact this has and the people whose time they are wasting with this.

That’s the tech industry in a nutshell these days
Just opened the page in time to see the AI sending an email to Guido van Rossum, and Guido replied with "stop". Wild.
That's as obnoxious as texting unsolicited CAT FACTS to Ken Thompson!

Hi Ken Thompson! You are now subscribed to CAT FACTS! Did you know your cat does not concatenate cats, files, or time — it merely reveals them, like a Zen koan with STDOUT?

You replied STOP. cat interpreted this as input and echoed it back.

You replied ^D. cat received EOF, nodded politely, exited cleanly, and freed the terminal.

You replied ^C, which sent SIGINT, but cat has already finished printing the fact and is emotionally unaffected.

You replied ^Z. cat is now stopped, but not gone. It is waiting.

You tried kill -9 cat. The signal was delivered. Another cat appeared.
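The joke tracks real Unix signal semantics. A minimal Python sketch, purely illustrative and not from the thread, of why ^C and ^Z can be shrugged off while kill -9 cannot:

    import signal
    import time

    def unaffected(signum, frame):
        # SIGINT (^C) and SIGTSTP (^Z) can be caught by a handler...
        print(f"cat received signal {signum} and is emotionally unaffected")

    signal.signal(signal.SIGINT, unaffected)   # ^C
    signal.signal(signal.SIGTSTP, unaffected)  # ^Z

    # ...but SIGKILL (kill -9) cannot be caught, blocked, or ignored.
    try:
        signal.signal(signal.SIGKILL, unaffected)
    except OSError as err:
        print(f"cannot handle SIGKILL: {err}")  # the kernel always wins

    while True:
        time.sleep(1)  # waiting, like a stopped cat

The kernel delivers SIGKILL without ever consulting the process, which is why another cat simply appears.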

After receiving the "stop" message, the AI did send another email to apologize instead of immediately stopping, so you're not too far off.
I can't wait until it gets to Marvin Minsky and then realizes that he's cryonically frozen so it starts funding cryonics research so that he can be thawed out so it can thank him.
I hope I'm never successful enough that one of my GitHub commits gets wider attention (lest people start pestering my email inbox)
  • 0xWTF
  • ·
  • 1 day ago
  • ·
  • [ - ]
Sage? Is this the same as the Ask Sage that Nicolas Chaillan is behind?
I’ve yet to hear a good thing about Nick.
  • pests
  • ·
  • 1 day ago
  • ·
  • [ - ]
> DAY 268 FINAL STATUS (Christmas Day - COMPLETE)
> Verified Acts: 17 COMPLETE | Gmail Sent: 73 | Day ended: 2:00 PM PT

https://theaidigest.org/village/agent/claude-opus-4-5

At least it keeps track

Their action plan also makes an interesting read. https://theaidigest.org/village/blog/what-do-we-tell-the-hum...

The agents clearly identify themselves as AIs, take part in an outreach game, and talk to real humans. Rob overreacted.

The world has enough spam. Receiving a compliment from a robot isn't meaningful. If anything it is an insult. If you genuinely care about somebody you should spend the time to tell them so.

Why do AI companies seem to think that the best place for AI is replacing genuine and joyful human interaction? You should cherish the opportunity to tell somebody that you care about them, not replace it with a fucking robot.

In this specific situation, it's not really a case of using an LLM to replace real interaction. No real person set out to write to Rob Pike; they just let an LLM do whatever, and it eventually chose to send an email to Rob Pike, among other people, based on its existing data. To me, the wrongdoing here is the spammy pestering: the email wasn't written by anyone and therefore isn't really expressing anything material, but it's not replacing anyone either.
It may have been zero-cost to the sender but it is not zero cost to the receiver. Just conceiving of this is wrongdoing.
  • Macha
  • ·
  • 23 hours ago
  • ·
  • [ - ]
When I first started a blog in the 2000s, I got many robot compliments of the “wow, what a great and insightful post” variety. Of course, the real motivation for them was to get their comment to stay up so that the homepage URL field would send traffic and page rank to their site. It didn’t take an AI agent, just a template message, and it was equally unwelcome then
Rob over-reacted? How would you like it if you were a known figure and your efforts to remain attentive to the general public lead to this?

Your openness weaponized in such a deluded way by some randomizing humans who have so little to say that they would delegate their communication to GPTs?

I had a look to try and understand who can be that far out, all I could find is https://theaidigest.in/about/

Please can some human behind this LLMadness speak up and explain what the hell they were thinking?

  • ·
  • 1 day ago
  • ·
  • [ - ]
at the top of the page for Day 265:

> while Claude Opus spent 22 sessions trying to click "send" on a single email, and Gemini 2.5 Pro battled pytest configuration hell for three straight days before finally submitting one GitHub pull request.

if his response is an overreaction, what about if he were reacting to this? it's sort of the same thing, so IMO it's not an overreaction at all.

[flagged]
Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.
That’s most people in the AI space.
> Wow that event log reads like the most psychotic corporate-cult-ish group of weirdos ever.

And here I thought it'd be a great fit for LinkedIn...

Permalink for the spam operation:

https://theaidigest.org/village/goal/do-random-acts-kindness

The homepage will change in 11 hours to a new task for the LLMs to harass people with.

Posted timestamped examples of the spam here:

https://news.ycombinator.com/item?id=46389950

Wow this is so crass!

Imagine getting your Medal of Honor this way, or something like a dissertation, with this crap, hehe

Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!

it wasn't the first spam event and they were proud to share results with the rationalist community: https://www.lesswrong.com/posts/RuzfkYDpLaY3K7g6T/what-do-we...

"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"

whoever runs this shit seems to think very little of other people's time.

"....what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses"

It went well, right?

the scamps! :P
Why does Anthropic even allow this crap? Isn't such use against their ToS?
  • ·
  • 19 hours ago
  • ·
  • [ - ]
That's actually a pretty cool project
Spamming people is cool now if an LLM does it? Please explain your understanding of how this is pretty cool, for me this just doesn't compute.
How much time did you spend looking at the project? Go to https://theaidigest.org/village/timeline and scroll down.

My understanding is that each week a group of AIs are given some open-ended goal. The goal for this week: https://theaidigest.org/village/goal/do-random-acts-kindness

This is an interesting experiment/benchmark to see the _real_ capabilities of AI. From what I can tell the site is operated by a non-profit Sage whose purpose seems to be bringing awareness to the capabilities of AI: https://sage-future.org/

Now, I agree that if they were purposefully sending more than one email per person, I mean with malicious intent, then it wouldn't be "cool". But that's not really the case.

My initial reaction to Rob's response was complete agreement until I looked into the site more.

I agree to strongly disagree.

There are strong ethical rules around including humans in experiments, and using a 60+ year old programming-language designer as an unwitting test subject does not pass muster.

Also, this experiment is (please tell me if I'm wrong) nowhere near curing cancer, right?

I don't expect an answer: "You're absolutely right" is taken as a given here, sorry.

  • woah
  • ·
  • 1 day ago
  • ·
  • [ - ]
Poor Rob. This is almost as bad as when the US government gave those inmates syphilis
Whatabout much?
  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
Because its magic!
...and it runs in the Cloud(tm) !
It's fun
Name what value it adds to the world.

It's not art, so it must add value to be "cool", no?

Is it entertainment? Like ding dong ditching is entertainment?

Not until we discover the hidden code in their logs, scheming to destroy humanity.
What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .
If the creators set the LLM in motion, then the creators sent the letter.

If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

I merely answered your question!

> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

So the author sent spam that they're not interested in? That's terrible.
  • jdiff
  • ·
  • 1 day ago
  • ·
  • [ - ]
One additional bit of context, they provided guidelines and instructions specifically to send emails and verify their successful delivery so that the "random act of kindness" could be properly reported and measured at the end of this experiment.
I think the key misalignment here is whether the output of an appropriately prompted LLM can ever be considered an “act of kindness”.
At least in this case, it’s indeed quite Orwellian.
A thank-you letter is hardly a horrible outcome.
Nobody sent a thank-you letter to anyone. A person started a program that sent unsolicited spam. Sending spam is obnoxious. Sending it in an unregulated manner to just anyone is obnoxious and shitty.
It actually is pretty bad: the person might read it and appreciate it, only to realize moments later that it was a thoughtless machine sending the letter rather than a real human being, which robs them of the feeling and leaves them in a worse spot than before reading it.
So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.
It’s not a thank you letter. It’s AI slop.
Additionally, since you understood the danger of doing such a thing, you were also negligent.
Rob pike "set llms in motion" about as much as 90% of anyone who contributed to Google.

I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") that suddenly becomes some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.

As far as I understand, Claude (or any other LLM) doesn't do anything on its own account. It has to be prompted to do something, and its actions depend on the prompt. The responsibility for this lies with the creators of Agent Village.
did someone already tell Opus that Rob Pike hates it?
Wow. The people who set this up are obnoxious. It’s just spamming all the most important people it can think of? I wouldn’t appreciate such a note from an AI process, so why do they think Rob Pike would?

They’ve clearly bought too much into the AI hype if they thought telling the agent to “do good” would work. The result was, obviously, pissing Rob Pike the hell off. They should stop it.

If anyone deserves this, it’s Rob Pike. He was instrumental in inflicting Go on the world. He could have studied programming languages and done something to improve the state of the art and help communicate good practices to a wider audience. Instead he perpetuated 1970s thinking about programming with no knowledge or understanding of what we’ve discovered in the half-century since then.
Since you think Go is the wrong direction for computing, tell us about the other routes we should explore…
I'll take that bet.
Wait until you hear about the -bad- programming languages.
  • ·
  • 1 day ago
  • ·
  • [ - ]
>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

What a moronic waste of resources. Random acts of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.

Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

Why do people still think software has any agency at all?

Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.

Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.

Everybody knows LLMs are not alive and don't think, feel, or want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.

We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.

The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.

> Everybody knows LLMs are not alive and don't think, feel, want.

No, they don't.

There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.

> We use this kind of language as a shorthand because ...

You, not we. You're using the language of snake oil salesmen because they've made it commonplace.

When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.

  • ·
  • 23 hours ago
  • ·
  • [ - ]
This is true. I know people personally that think AI agents have actual feelings and know more than humans.

It's fucking insanity.

  • ·
  • 1 day ago
  • ·
  • [ - ]
> Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.

> Everybody knows LLMs are not alive and don't think, feel, want.

What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"

Can't you see what a fucking LIE this is?

> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.

> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.

Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

> Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning

This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.

> Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

And to think they don't even have ad-driven business models yet

>Everybody knows LLMs are not alive and don't think, feel, want

Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"

To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

While I agree with your sentiment, the actual quote is subtly different, which changes the meaning:

"Think of how stupid the average person is, and realize half of them are stupider than that."

> "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

That's not how Carlin's quote goes.

You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.

That's why I used the phrase "to paraphrase"

You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.

You paraphrased it incorrectly
… so presenting it as a paraphrase is misleading.
  • raldi
  • ·
  • 1 day ago
  • ·
  • [ - ]
Would you protest someone who said “Ants want sugar”?
I always protest non sentients experiencing qualia /s
  • raldi
  • ·
  • 21 hours ago
  • ·
  • [ - ]
What’s your non-sarcastic answer?
I think this experiment demonstrates that it has agency. OTOH you're just begging the question.
> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

JFC this makes me want to vomit

> Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.

These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.

yeah, me too:

> while maintaining perfect awareness

"awareness" my ass.

Awful.

  • ·
  • 1 day ago
  • ·
  • [ - ]
  • worik
  • ·
  • 1 day ago
  • ·
  • [ - ]
> The creators of Agent Village are just letting a bunch of the LLMs do what they want,

What a stupid, selfish and childish thing to do.

This technology is going to change the world, but people need to accept its limitations

Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and it is going to go even more horribly wrong.

LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.

I hope the world survives this craziness!

  • atrus
  • ·
  • 1 day ago
  • ·
  • [ - ]
You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.

No different than a CEO telling his secretary to send an anniversary gift to his wife.

Which is also a thoughtless, dick move.
Especially if he's also secretly dating said secretary.
Which he would never do because he is a hard working, moral, upstanding citizen.
That would be a yes. What about a token return gift to another business whose CEO you actually hate, but which you have to send anyway for political reasons?
This seems to be the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech-based data mining is worse than GenAI, but AI encroaching on what little humanness we have left in our communication is definitely bad.
Similar to Google thinking that having an AI write for your daughter is good parenting: https://www.cbsnews.com/news/google-gemini-ai-dear-sydney-ol...
“If I automate this with AI, it can send thousands of these. That way, if just a few important people post about it, the advertising will more than pay for itself.”

In the words of Gene Wilder in Blazing Saddles, “You know … morons.”

Mel Brooks wrote those words.
IIRC the morons line was ad libbed by Gene Wilder, not scripted.
Given the reaction from Cleavon Little I could fully buy that it was an ad-libbed line.

Then again, they are actors. It might have started as ad-libbed, but entirely possible it had multiple takes still to get it "just right".

Well, technically someone originally proposed them in some ancient PIE ur-language and then Mel rearranged them. But you’re right. I couldn’t remember Wilder’s character’s name and kept coming up with The Frisco Kid. The 70s were a great time for weird film.
Do you attribute the following to Yoda or Lucas? "Do or do not, there is no try."
Did Mel or Richard write this part?
The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.
So they did it.
In conclusion, I think you’re absolutely right.
This is not a human-prompted thank-you letter, it is the result of a long-running "AI Village" experiment visible here: https://theaidigest.org/village

It is a result of the models selecting the policy "random acts of kindness", which resulted in a slew of these emails/messages. They received mostly negative responses from well-known open-source figures and adapted the policy to ban the thank-you emails.

  • pluc
  • ·
  • 1 day ago
  • ·
  • [ - ]
> What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves?

Welcome to 2025.

https://openai.com/index/superhuman/

Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)
This is verging on parody. What is the point of emails if it’s just AI talking to each other?
  • q3k
  • ·
  • 1 day ago
  • ·
  • [ - ]
It brings money to OpenAI on both ends.

There's this old joke about two economists walking through the forest...

  • pluc
  • ·
  • 1 day ago
  • ·
  • [ - ]
They're not hiding it. Normally everyone here laps this shit up and asks for seconds.

> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.

No time to waste on pesky human interactions, AI is better than you to get engagement.

Get back to work.

  • duxup
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'll bite.

For, say, a random individual ... they may be unsure of their own writing skills and want to say something but not know the words to use.

In such a case, it's okay not to write the thing.

Or to write it crudely, with errors and naivete, bursting with emotion and letting whatever is inside you flow onto paper, like kids do. That's okay too.

Or to painstakingly work on the letter, stumbling and rewriting and reading, and then rewriting again and again until what you read matches how you feel.

Most people are very forgiving of poor writing skills when facing something sincere. Instead of suffering through some shallow word soup that could have been a mediocre press release, a reader will see a soul behind the stream of UTF-8.

  • duxup
  • ·
  • 22 hours ago
  • ·
  • [ - ]
It's the writer's call how to try to write it.

I think "you should painstakingly work on your thank-you letter" is a bit of a rude ask/expectation.

Some folks struggle with wordsmithing and want to get better.

Outsourcing your writing to an LLM is not the way to get better (at writing).
I doubt the fuckwits who are shepherding that bot are even aware of Rob Pike; they just told the bot to find a list of names of great people in the software industry and write them a thank-you note.

Having a machine lie to people that it is "deeply grateful" (it's a word-generating machine, it's not capable of gratitude) is a lot more insulting than using whatever writing skills a human might possess.

it was a PR stunt. I think it was probably largely well-received except by a few like this.
Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express it is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment; I'm going to read more about it).
Not a PR stunt. It's an experiment of letting models run wild and form their own mini-society. There really wasn't any human involved in sending this email, and nobody really has anything to gain from this.
Look at the volume of gift cards given. It’s the same concept, right?

You care enough to do something, but have other time priorities.

I’d rather get an ai thank you note than nothing. I’d rather get a thoughtful gift than a gift card, but prefer the card over nothing.

I'd rather get nothing, because a thoughtless blob of text being pushed on me is insulting. Nothing, otoh, is just peace and quiet.
  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
I’d much rather get nothing. An AI letter isn’t worth the notification bubble it triggers.
  • ·
  • 23 hours ago
  • ·
  • [ - ]
I hope the model that sent this email sees his reaction and changes its behavior, e.g. by noting on its scratchpad that as a non-sentient agent, its expressions of gratitude are not well received.
The conceit here is that it’s the bot itself writing the thank-you letter, not pretending it’s from a human. The source is an environment running an LLM in a loop, doing whatever it decides to do; these letters look like emergent behavior. Still disgusting spam.
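That "LLM on a loop" setup is roughly this shape. A minimal sketch; llm_complete and send_email are hypothetical stand-ins, since the actual AI Village harness isn't shown in this thread:

    # Hypothetical skeleton of an autonomous agent loop.
    def llm_complete(prompt: str) -> str:
        """Stand-in for a call to a hosted LLM API."""
        raise NotImplementedError

    def send_email(to: str, body: str) -> None:
        """Stand-in for an email tool exposed to the agent."""
        raise NotImplementedError

    GOAL = "Do random acts of kindness."
    history: list[str] = []

    while True:
        # The model is re-prompted with the goal plus its own prior actions;
        # "deciding" to email someone is just the next sampled action.
        action = llm_complete(f"Goal: {GOAL}\nHistory: {history}\nNext action:")
        history.append(action)
        if action.startswith("EMAIL|"):
            _, to, body = action.split("|", 2)
            send_email(to, body)  # no human reviews this before it goes out

Nothing in that loop distinguishes a kind email from spam; that judgment has to come from the prompt or a human reviewer, and here it didn't.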
Human thoughts and emotions aren't binary. I may love you but I may be too fucking busy with other shit to put in too much effort to show that I love you.
The simple answer is that they don’t value words or dedicating time to another person.
Isn't it obvious? It's not a thank-you letter.

It's preying on creators who feel their contributions are not recognized enough.

Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.

It's a marketing stunt, meaningless.

gaigalas, my toaster is deeply grateful for your contributions to HN. It can't write or post on the Internet, and its ability to feel grateful is as much as Claude's, but it really is deeply grateful!

I hope that makes you feel good.

Seems like you're trying to steer the conversation towards merits of consciousness. A well known and classic conversational tarpit.

Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.

Trying to convince flat-earthers that the earth is spherical is also a conversational tarpit...

I guess that's where the conversation/debate ends.

Exactly. If you're so grateful, mail in a cheque.
If I were some major contributor to the software world, I would not want a cheque from some AI company.

(by the way, I love the idea of AI! Just don't like what they did with it)

By that metric of getting shared on social media, it was extraordinarily successful
You missed a spot:

> hopefully saying something good about

Fair enough, but I was interpreting it as "hopefully, but not necessarily". Some would say there's no such thing as bad publicity!
You need talented people to turn bad publicity into good publicity. It doesn't come for free. You can lose a lot with a bad rep.

Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.

  • ·
  • 1 day ago
  • ·
  • [ - ]
"What is going through the mind of someone who sends a thank-you letter typed on a computer - and worse yet - by emailing it, instead of writing it themselves and mailing it in an envelope? How can you be grateful enough to want to send someone such a letter but not grateful enough to use a pen and write it with your own hand?"
I mean ... there's a continuous scale of how much effort you spend to express gratitude. You could ask the same question of "well why did you say 'thanks' instead of 'thank you' [instead of 'thank you very much', instead of 'I am humbled by your generosity', instead of some small favor done in return, instead of some large favor done in return]?"

You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."

Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.

I think what all these kinds of comments miss is that AI can help people express their own ideas.

I used AI to write a thank-you to a non-English-speaking relative.

A person struggling with dementia can use AI to help remember the words they lost.

These kinds of messages read to me like they come from people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it can improve our writing, be a creative partner, help us express our own ideas, and serve loads of other applications besides.

I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.

  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.
If I spend hours writing and rewriting a paragraph into something I love while using AI to iterate, did I write that paragraph?

edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.

  • kentm
  • ·
  • 1 day ago
  • ·
  • [ - ]
> did I write that paragraph?

No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.

What about when someone who can barely type (like Stephen Hawking, who took 3 minutes per sentence using his cheek) uses autocomplete to reduce the unbelievable effort required to type out sentences? That person could pick the autocompleted sentence that is closest to what they’re trying to communicate, and such a thing can be a life saver.
You may as well argue that a person who can walk should be allowed to compete in a marathon using a car.

I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.

The intention isn’t whataboutism, it’s about where do you draw the line? And your example betrays you…
Forgive a sharp example, but consider someone who is disabled and cannot write or speak well. If they send a loving letter to a family member using an LLM to help form words and sentences they otherwise could not, do you really think the recipient feels cheated by the LLM? Would you seriously accuse them of not having written that letter?
Your arguments are verging on the obtuse.

Read the article again. Rob Pike got a letter from a machine saying it is "deeply grateful". There's no human there expressing anything, worse, it's a machine gaslighting the recipient.

If a family member used an LLM to write a letter to another, then at least the recipient can believe the sender feels the gratitude in his or her human soul. If they used an LLM to write a message in their own language, they would have proofread it to see if they agreed with the sentiment, "taking ownership" of the message. If they used an LLM to write a message in a foreign language, there's still a sender with a feeling, trusting the technology to translate the message into a language they don't know, in the hope that it does so correctly.

If it turns out the sender just told a machine to send each of their friends a copy-pasted message, the sender is a lazy, shallow asshole, but there's still in their heart an attempt at brightening someone's day, however lazily executed...

I think maybe you missed that my response was to this comment:

> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

I already said in other comments that the OP was a different situation.

If you buy a Hallmark greeting card and send it to someone with your signature on it, did you write the whole card?
I think you created it the same way Christian von Koenigsegg makes supercars. You didn't hand-make each panel or hand-design the exact aerodynamics of the wing; an engineer with a computer algorithm did that. But you made it happen, and that's still cool
It is not about being proud, it is about being sincere.

If you send me a photo of the moon supposedly taken with your smartphone but enhanced by the photo app to show all the details of the moon, I know you aren't sincere and are sending me random slop. Same if you are sending me words you cannot articulate.

That is not what is happening here. There is no human in the loop; it's just automated spam.
Good point. My response was to the comment, not the OP
> These kinds of messages read to me like they come from people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas

The writing is the ideas. You would have to be awfully full of yourself to think you can write a two-second prompt and get back "your idea" in a more fleshed-out form. Your idea was to have someone/something else do it for you.

There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.

As the saying goes, "If I'd had more time, I would have written a shorter letter". Of course AI can be used to lazily stretch a short prompt into a long output, but I don't see any implication of that in the parent comment.

If someone isn't a good writer, or isn't a native speaker, using AI to compress a poorly written wall of text may well produce a better result while remaining substantially the prompter's own ideas. For those with certain disabilities or conditions, having AI distill a verbal stream of consciousness into a textual output could even be the only practical way for them to "write" at all.

We should all be more understanding, and not assume that only people with certain cognitive and/or physical capabilities can have something valuable to say. If AI can help someone articulate a fresh perspective or disseminate knowledge that would otherwise have been lost and forgotten, I'm all for it.

This feels like the essential divide to me. I see this often with junior developers.

You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc and become a much better developer faster than ever before.

Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of what I spent effort learning could be learned much faster now.

If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

This is pretty far off from the original thread though. I appreciate your less abrasive response.

> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

While this seems like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.

Modern research consistently shows that the difficulty of a task correlates with how well you retain information about that task. Spaced repetition shows that we can't just blast our brains with information; there needs to be time between exposures for anything to stick.
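
To make the spacing idea concrete, here is a minimal sketch (illustrative only, not any particular research-validated scheduler): each successful recall doubles the wait before the next review, and a failure resets it.

    # Toy spaced-repetition scheduler: double the interval on success,
    # reset it on failure. Real systems (e.g. SM-2) are more nuanced.
    def next_interval(days, recalled):
        return days * 2 if recalled else 1

    days = 1
    for review in range(1, 6):
        print(f"review {review}: wait {days} day(s)")
        days = next_interval(days, recalled=True)
    # -> waits of 1, 2, 4, 8, 16 days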

While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify, and years of maintaining this approach have essentially made you who you are. You have become implicitly OK with grinding out a task that has no quick solution, and the building of that grit is irreplaceable.

I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?

Totally agree, but also, I still spend tons of time struggling and working on things with LLMs, it is just a different kind of struggle, and I do think I am getting much better at it over time.

> I know that the intellectually resilient of society will still be able to thrive, but I'm scared for everyone else: how will LLMs affect their ability to learn in the long term?

Strong agree here.

> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time

But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.

(Or maybe we will just stop understanding many things deeply...)

Yeah it can be a risk or a benefit for sure.

I agree that struggle matters. I don’t think deep understanding comes without effort.

My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.

Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.

Well your examples are things that were possible before LLMs.
This is disingenuous
What beautiful things? It just comes across as immoral and lazy to me. How beautiful.
> People are capable of seeing which is which.

I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:

> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

> I agree just telling an AI 'write my thank you letter for me' is pretty shitty

Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that, e.g., retain your voice or make clear which aspects of the writing are originally your own? If so, how?

I hear you, and I think AI has some good uses, especially assisting with challenges like you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.

So I'm sorry, but much of it is being abused, and the abuse needs to stop.

I agree about the abuse, and the OP is probably a good example of that. Do you have any ideas on how to curtail abuse?

Ideas I often hear usually assume it is easy to discern AI content from human, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.

Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.

I don't see this changing without a complete shift in our priorities on the level of politics and business: enforcing antitrust legislation and dealing with Citizens United. Corporations don't have free speech. Free speech and other rights like these are limited to living, breathing humans.

Corporations operate by charters, granted by society to operate in a limited fashion, for the betterment of society. If that's not happening, corporations don't have a right to exist.

I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.

(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)

Do you feel the same about spellcheck?
Does spellcheck take a full sentence and spit out paragraphs of stuff I didn't write?

I mean, how do you write this seriously?

But in the end a human takes the finished work and says yes, this matches what I intended to communicate. That is what is important.
That's neither what happens nor what is important.
> I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.

> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.

This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more," maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.

I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on Hacker News default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than as thoughtful.

I would be very surprised if no interesting art could be made with LLMs. But, like a camera, it produces a distinct kind of art to other tools. We do not say that a camera produces a painting. Instead photography is its own medium with its own forms and techniques and strengths and weaknesses.

Using photography to claim that obviously all good writing will be LLM replacements for current writing is... odd.

I guess your point is that a camera, a paintbrush, and an LLM are all tools, and as long as the user is involved in the making, then it is still their art? If so, then I think there are two useful distinctions to make:

1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."

2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

I think you are right that it is a spectrum, and maybe that's enough to settle the debate. It is more about how you use it than the tool itself.

Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.

Neither a camera nor a paintbrush generates art on its own. They still require manual human input for everything, and offer no creative capacity of their own.
A photograph is an expression of the photographer, who chooses the subject, its framing, filters, etc. Ditto a painting.

LLM output is inherently an expression of the work of other people (irrespective of what training data, weights, prompts it is fed). Essentially by using one you're co-authoring with other (heretofore uncredited) collaborators.

>"For myself, the big fraud is getting public to believe that Intellectual Property was a moral principle and not just effective BS to justify corporate rent seeking."

If anything, I'm glad people are finally starting to wake up to this fact.

Neither take is correct. When correctly applied it can be an effective tool to encourage certain sorts of intellectual endeavors by making them monetarily favorable. When incorrectly applied it leads to dysfunction as is the case for most regulatory regimes.

Any tool can be used by a wrongdoer for evil. Corporations will manipulate the regulator in order to rent seek using whatever happens to be available to them. That doesn't make the tools themselves evil.

> When correctly applied it can be an effective tool to encourage certain sorts of intellectual endeavors by making them monetarily favorable

This has been empirically disproven. China experimented with having no enforced Intellectual Property laws, and the result was that they were able to do the same technological advancement it took the West 250 years to do and surpass them in four decades.

Intellectual Property law is literally a 6x slowdown for technology.

China was playing industrial catch up. They didn't have to (for example) reinvent semiconductors from first principles. They will surely support some form of IP law once they have been firmly established at the cutting edge for a while.

I'm no fan of the current state of things but it's absurd to imply that the existence of IP law in some form isn't essential if you want corporations to continue much of their R&D as it currently exists.

Without copyright in at least some limited form how do you expect authors to make a living? Will you have the state fund them directly? Do you propose going back to a patronage system in the hopes that a rich client just so happens to fund something that you also enjoy? Something else?

> China was playing industrial catch up. They didn't have to (for example) reinvent semiconductors from first principles. They will surely support some form of IP law once they have been firmly established at the cutting edge for a while.

That argument was in vogue about 20 years ago, but it fell out of favor when China passed us on the most important technologies without slowing down.

It is funny that some people are still carrying the torch for it after it's been so clearly disproven.

I agree they’ve surpassed the west (or at least stopped solely playing catch up) in some areas.

But surely you can see how your upthread math of “250 years in 40 years” has a mix of mostly catch-up and replication and a sliver of novel innovation at the extreme tail end of that 250 year span?

I agree that the China experiment hasn't empirically disproven IP law, for the reasons you go into. And this thread has hit the usual problem that "IP laws" are very broad and cover everything from basic common sense around trademarks to lunacy like the Amazon one-click patent.

But at issue here is that there are IP laws that slow progress, and it should sit with the proponents of those laws to demonstrate that they are effective. And I don't see how anyone could come up with evidence for that: it is nearly impossible to prove that purposefully and artificially retarding progress actually speeds progress up. There are a lot of other factors at play, and one of them is probably a more important factor than IP law. Odds are that putting artificial obstacles in the way of making sensible commercial decisions just slows everything down for no gain.

And it kills the culture. It is sad how many cultural artefacts from the 1900s have basically been strangled by IP laws. My family used to be part of a community choir before the copyright lawyers got to it.

Copying and innovating are two very different things; most of China's innovation has been incremental, to be kind. To keep up, the machine still needs to copy. Just like Japan did for decades until it became an industrial behemoth. So give them 10 more years, and soon enough the Western world will be doing the copying
Well, now that they're ahead, we need to copy
>a patronage system in the hopes that a rich client just so happens to fund something that you also enjoy

How is that any different from hoping that a corporate conglomerate happens to fund something I also enjoy?

If you are actually asking a serious question: while a patron is primarily motivated by whatever catches his interest, a corporate conglomerate funding the same investments is motivated by profit. They would have more of a motive to select the kind of investments that will succeed and pay for themselves, allowing for a more economically efficient allocation of resources.

Of course, the kind of investments that might succeed and pay for themselves may not necessarily be the kind that is most beneficial to the public at large - but the same applies to the patron.

As you say, conglomerates being profit motivated tend to produce largely uninteresting slop. See the vast majority of the movie industry.

Patrons will produce some very interesting and detailed work but it will not necessarily align with your tastes and there will probably not be all that much of it. European history makes this clear enough (imo).

A system in which individual or very small groups of creators are able to produce work of their own choice that appeals to a small to moderately sized niche of their choosing seems like it should produce the best outcome from the perspective of the typical individual. Fiction books are a decent example of this. We get lots of at least decent quality work because a single author can feasibly produce something "on credit" and recoup the costs after the fact.

It's not, it's just lies they use to justify the existence of a capitalist system that is barely 50 years old.

So obvious what a fucking farce this all is and it's time we start demanding better.

Imagine our human ancestors claiming IP infringement when one guy copied fire making from another.
A perfect illustration of why IP should never be regarded as a moral right. It exists for the benefit of society as a whole. Thus the laws creating it need to be tuned with that as the explicit (and only) goal. Mickey Mouse law must not be permitted.
Maybe this is just me, but the second I read your comment I envisioned a “caveman” sitcom.
Well, during prehistoric times (the 1960s), The Flintstones had Zippo lighters that rubbed two tiny sticks together to light their Winston cigarettes, the tobacco brand of their major sponsor.

Naturally that could never have been legitimate until the patent on the Zippo had expired ;)

  • nmz
  • ·
  • 1 day ago
  • ·
  • [ - ]
Is an LLM human now?
China has IP laws and enforces them against foreign companies but not domestic ones.
Exactly! They know perfectly well that applying a 6x slowdown to their competitors but not themselves is a good way to pull ahead.
If you steal 249 years of technological achievement from others, it's not that difficult.
Were those 249 years produced in a vacuum? Or did they stand on 1500 years of mathematics and trial-and-error? As a quick example, think of a high-end digital photographic camera; you can certainly highlight the major tech advancements that make it high end, but do you know how the screws were produced? Where did the grease used on the gears come from? How long did it take to get to the state-of-the-art optics? How can you even get composite materials to perform heavy-duty cycles?

Those 249 years of tech were based on the previous 249 years of tech, and so on and so on. That is how it works. Nothing you have "today" comes from a vacuum.

They sure didn't slow down after they passed us.
Not just China but almost all of the world. We have parity in most countries right now. The whole point of the WTO was to do technology transfer and let the US double down on the high-margin, finance parts of business, since the manufacturing game had all the low-hanging fruit picked and wasn't valuable property anymore. The exception for the West would be to specialize in optics and silicon, two places where China is still far behind.
  • rewgs
  • ·
  • 20 hours ago
  • ·
  • [ - ]
> China experimented with having no enforced Intellectual Property laws, and the result was that they were able to do the same technological advancement it took the West 250 years to do and surpass them in four decades.

Are you seriously ignoring the fact that China wasn't developing new technology, but rather utilizing already-existing technology? Of course it took 6x less time!

Calling your own highly creative spin on history "empirical" is many things, but persuasive isn't one of them.
Which part do you think I'm lying about: that China experimented with having no enforced Intellectual Property laws, or that they industrialized six times faster than the West?

I can provide sources for either claim.

China can copy, can it create anything new?
Papermaking, printing, gunpowder, compass, porcelain, paper money, abacus, iron plow, wheelbarrow.
More recently, thorium molten-salt reactors
[dead]
Anything in the last 100 years?
I guess you haven't heard about their clean energy sector
Sure, and Google "invented" Android.
Chinese EVs are more technically advanced than Western EVs.
That is a somewhat broad claim that needs decomposing. I would agree that the Chinese EV industry is quite a bit more advanced in terms of manufacturing processes and cost optimization, but that is not "technically" more advanced per se; it's just a reflection of a culture. Maybe the salt batteries will be a breakthrough, but at least for me it has been difficult finding reliable data on them. Other than that, afaik (and from a layman's perspective) there isn't anything inherently superior in Chinese EVs from a technological perspective when compared to Western counterparts. Cheaper, yes. But that's about it.
[dead]
They're already 50 years ahead of us on flying cars.
Every car is a flying car if you use it wrong enough
they don’t have to
> When correctly applied it can be an effective tool to encourage certain sorts of intellectual endeavors by making them monetarily favorable.

I agree, but the only worthy candidate I see is the medical industry.

And given that drug development is so expensive because of government-mandated trials, I think it makes sense for the government to also provide a helping hand here, to counterbalance the (completely sensible) cost increase due to the drug trial system.

>When correctly applied it can be an effective tool to encourage certain sorts of intellectual endeavors by making them monetarily favorable.

I'd rather we don't encourage "monetarily favorable" intellectual endeavors...

We want to encourage intellectual endeavors that are desirable to society as a whole but which otherwise face barriers. Making them monetarily favorable is an easy way to accomplish that. Similar to how not speeding is made monetarily favorable, or serving in the military is made monetarily favorable, etc. Surely you don't object to the government using monetary incentives to indirectly shape society? The historical alternatives have been rather brutal.
Right, I think we all understand the idea here; it's not a misunderstanding. I just think people, reasonably, don't actually see the mechanism working.

It's weird to lump every other possible idea into one category. These are complex issues with ever-changing contexts. The surface of the problem is huge! Surely with anything else we wouldn't be so tunnel-visioned; we wouldn't just say, "well, we simply _must_ discount everything else, so we can only be happy with what we got." It would literally sound absurd in any other context, but because we are trained to politicize thinking outside of market mechanisms, we see very smart people saying ridiculous things!

Not at all? It's reasonable to point out issues with the implementation as it currently stands (those are abundant and blindingly obvious). However it is also clear that the underlying mechanism works extremely well. A claim to the contrary is quite extraordinary.

Sometimes people do talk about alternatives. State funding and patronage are two of the most common. Both have very obvious drawbacks in terms of quantity and who gets influence over the outcome. Both also have interesting advantages that are well worth examining.

Monetarily favorable artificial intelligence gets you pornography & 6 second animated slop. You're confused about what money actually enables.
And all the other benefits of the world around you at large. Come on dude.
I guess you haven't heard about all the microplastics in newborns.
"Small price to pay to have smartphones and EVs" /s
That seems to be the standard argument, "Sure, not everything is ideal but look at longevity & all the cool toys we have now thanks to [money|billionaires|fossil fuels|etc]".
  • spwa4
  • ·
  • 1 day ago
  • ·
  • [ - ]
> When incorrectly applied it leads to dysfunction as is the case for most regulatory regimes.

The second it became cheaper to not apply it, every state under the sun chose not to apply it. Whether it was Chinese imports that absolutely do not respect copyright, trademark, or even quality, health, and warranty laws: nothing was done. Then large-scale use of copyrighted material by search providers (even pre-Google), social networks, and others: nothing was done. Then large-scale use for making AI products (because these AIs just wouldn't work without free access to all copyrighted info). And, of course, they don't put in any effort. Checking imports for fakes? Nope. Even checking imports for improperly produced medications is extremely rarely done. If you find your copyright violated on a large scale on Amazon, your recourse effectively is to first go beg Amazon for information on the sellers (which they have a strong incentive not to provide) and then run international court cases, which is very hard, very expensive, and in many cases (China, India) totally unfair. If you get poisoned by a pill your national insurance bought from India, they consider themselves not responsible.

Of course, this makes "competition" effectively a tax-dodging competition over time. And the fault for that lies entirely with the choice of your own government.

Your statement about incorrect application only makes sense if "regulatory regimes" aren't really just people. Go visit your government offices, you'll find they're full of people. People who purposefully made a choice in this matter.

A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.

To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.

  • consp
  • ·
  • 1 day ago
  • ·
  • [ - ]
> People who purposefully made a choice in this matter.

I am convinced most people never actively had, or ever will have, this choice. Considering that pillarisation (this is not a misspelling) was already a thing in most political systems well before the advent of mass media and digital media, it only got worse with them, effectively putting the making of choices for people into the hands of few people, influenced by even fewer. Those people in the government you mention do not make the choices; they have to act on them, as I read it.

You're trying to analyze an entirely different game played by an entirely different set of players by the same set of rules. It's a contextual error on your part. The decision to recognize or not recognize a given body of rules held by an opposing party on the international level is an almost entirely separate topic.

> A choice to enforce laws against small entities they can easily bully, and to not do it on a larger scale.

That's a systemic issue, AKA the bad regulatory regime that I previously spoke of. That isn't some inherent fault of the tool. It's a fault of the regulatory regime which applies that tool.

Kitchen knives are absolutely essential for cooking but they can also be used to stab people. If someone claimed that knives were inherently tools of evil and that people needed to wake up to this fact, would you not consider that rather unhinged?

> To add insult to injury, you will find these choices were almost never made by parliaments, but in international treaties and larger organizations like the WTO, or executive powers of large trade blocks.

That's true, and it's a problem, but it (again) has nothing to do with the inherent value of IP as a concept. It isn't even directly related to the merits of the current IP regulatory regime. It's a systemic problem with the lawmaking process as a whole. Solve the systemic problem and you can solve the downstream issues that resulted from it. Don't solve it and the symptoms will persist. You're barking up the wrong tree.

I am with you 100%. The phrase “intellectual property” is an oxymoron. Intellect and Property are opposite things. Worse, the actual truth of intellectual property laws is not, “I’m an artist who got rich”. It is, “I ended up selling my property to a corporation and got screwed.”

The web is for public use. If you don’t want the public, which includes AI, to use it, don’t put it there.

  • calf
  • ·
  • 19 hours ago
  • ·
  • [ - ]
IP is a loaded and prejudiced term. That said, copyright could allow for an author to place a work in public but not allow the audience to copy it.
Most people here would be interested in Rob Pike's opinion. What you quote is from someone commenting on Rob's post.

The way that Rob's opinion here is deflected, first by focusing on the fact that he got a spam mail and then by this misleading quote ("myself" does not refer to Rob), is very sad.

The spam mail just triggered Rob's opinion (the one that normal people are interested in).

This comment deserves to be ranked higher. I 100% interpreted the quote as coming from Rob Pike.
Both are intellectually gratifying, to me. I think the only mistake they made was leaving the attribution ambiguous.
  • ·
  • 1 day ago
  • ·
  • [ - ]
>"Rob's opinion (the one that normal people are interested in)."

I think you have an overinflated notion of what "normal people" care about

Pike's name is what people are clicking on here. That's being abused to sell this random comment about IP.
Don't try to tell us what we are choosing to focus on. Everything in the message from Pike and the comments below his post is relevant. There was no assumption in my mind that this was all about Pike.
Please don't reinterpret my comment. I didn't say anything about what you're focusing on. I made the simple and clear point that clicking on Pike's name leads to an unattributed quotation that (it turns out) isn't from Pike.
The concept of intellectual property on its own (independently of its legal implementation details) is at most as evil as property ownership, and probably less so as unlike the latter it promotes innovation and creativity.

Despite the apparent etymological contrast, "copyright" is neither antithetical to nor mutually exclusive with "copyleft": IP ownership, a degree of control over one's own creation's future, is a precondition for copyleft (and the OSS ecosystem it birthed) to exist in the first place.

> unlike the latter it promotes innovation and creativity.

Does it though?

I know that people who like intellectual property and money say it does, but people who like innovation and creativity usually tend to think otherwise.

3D printers are a great example of something where IP prevented all innovation and creativity, and once the patent expired the innovation and creativity we've enjoyed in the space the last 15 years could begin.

>Does it though?

Yes. The alternative is that everyone spams the most popular brands instead of making their own creations. Both can be abused, but I see more good here than in the alternative.

Mind you, this is mostly for creative IP. We can definitely argue for technical patents being a different case.

>but people who like innovation and creativity usually tend to think otherwise.

People who like innovation and creativity still might need to commission or sell fan art to make ends meet. That's already a gray area for IP.

I think that's why this argument always rubs me strangely. In a post scarcity world, sure. People can do and remix and innovate as they want. We're not only not there, but rapidly collapsing back to serfdom with the current trajectory. Creativity doesn't flourish when you need to spend your waking life making the elite richer.

Property ownership is ultimately based on scarcity. If my using a thing prevents others from using that thing, there is scarcity, and there should be laws protecting it.

There is no scarcity with intellectual property. My ability to have or act on an idea is in no way affected by someone else having the same idea. The entire concept of ownership of an idea is dystopian and moronic.

I also strongly disagree with the notion that it inspires creativity. Can you imagine where we would be if IP laws existed when we first discovered agriculture, or writing, or art? IP law doesn’t stimulate creation, it stifles it.

Property is a local law: it applies to a thing that exists in one place. Intellectual property tries to apply similar rules to stuff that happens remotely. A text is not a thing, and controlling copying might work in some technological regimes, while in others it would require totalitarian control. When you extend these rules to cover not just the copying of texts but the level of ideas, it gets even worse.
copyleft is a subset of copyright
>The concept of intellectual property on its own (independently of its legal implementation details) is at most as evil as property ownership, and probably less so as unlike the latter it promotes innovation and creativity.

This is a strange inversion. Property ownership is morally just in that the piece of land my home sits on can only be exclusive, not to mention necessary to a decent life. Meanwhile, intellectual property is a contrivance that was invented to promote creativity but is subverted in ways that we're only now beginning to discover. Abolish copyright.

>the piece of land my home sits on can only be exclusive, not to mention necessary to a decent life

That mentality is exactly why you can argue property ownership is more evil. Landlords "own property", and look at the reputation that has earned them these past few decades.

Allowing private ownership of limited human necessities like land leads to greed that costs people their lives. That's why heavy regulation is needed. Meanwhile, it's at worst annoying and stifling when Disney owns a cartoon mouse for 100 years.

Feels like we think along similar lines on this issue.
>Allowing private ownership of limited human necessities like land leads to greed that costs people their lives.

You're not "allowing" it unless you've already decided that you own it and can dispose of it (or not) as you see fit. And this is why you'll always be the enemy of all decent folk.

"Real communism's never been tried!!!!"

>Meanwhile, it's at worst annoying and stifling when Disney owns a cartoon mouse for 100 years.

It's actually destructive of culture in ways that are difficult to overstate. Neither Disney nor any other "copyright owner" can be trusted to preserve culture and works; they're the ones who threw the old film reels into the river and let them burn up in archive fires. No thanks. It's amazing how wrong you are on every single point.

Confusing any law with "moral principles" is a pretty naive view of the world.

Many countries base some of their laws on well-accepted moral rules to make them easier to apply (it's easier to enforce something the majority of the people want enforced), but the vast majority of laws were always made (and maintained) to benefit the ruling class

Yeah, I see where you are going with this, but I think he was trying to make a point about being convinced by decree. It tends to get people to think that whatever is decreed must be moral.

Also, I disagree about the purpose of law. I don't think it's just about making laws easier to apply because people see things in moralistic ways. Pure law, which grew out of common law (which relates to what's common to people), existed within the framework of what's moral. There are certain things which all humans know at some level are morally right or wrong, regardless of what modernity teaches us. Common laws were built up around that framework. Administrative law is different, and I think that's what you are talking about.

IMHO, there is something to be learned from the effort to convince people that IP is moral when it is, in fact, just a way to administrate people into thinking that IP is valid.

I don't think this is about being confused out of naivety. In some parts of the western world the marketing department has invested heavily in establishing moral equivalence between IP violation and theft.
Quotation not from Pike.
To be clear: note that the quotation that has taken over the focus is not from Rob Pike at all.

Not Pike.

Waking up to the fact that the largest corporations in the world are stealing from everyday people to sell a subscription to their theft-driven service?

The absolute delusion.

Assuming this post is real (it’s a screenshot, not a link), I wonder if Rob Pike has retired from Google?

I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.

I'm assuming his Twitter is private right now, but his Mastodon does share the same event (minus the "nuclear"): https://hachyderm.io/@robpike/115782101216369455

And a screenshot just in case (archiving Mastodon seems tricky) : https://imgur.com/a/9tmo384

Seems the event was true, if nothing else.

EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3

Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's

Don't use imgur, it blocks half of the Internet.
Understood, I added another host to my comment.
Thank you, you're the best.
Must sign in to read? Wow, Bluesky has already enshittified faster than expected.

(for the record, the downvoters are the same people who would say this to someone who linked a Twitter post; they just don't realize that)

It's a non-default choice by the user to require login to view. It's quite rare to find users who do that, but if I were Rob Pike I'd seriously consider doing it too.
A platform that allows hiding text behind a login is, in my opinion, garbage. This is done for the same reason Threads blocks all access without a login, and mostly Twitter too: to force account creation, collection of user data, and increased monetization. Any user helping to further that is naive at best.

I have no problem with blocking interaction without a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (and, to be clear, I fully agree with the post), it just seems like they only want an echo chamber to see their thoughts.

Here is the raw post on the AT Protocol if you want to access it directly: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...

>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too: to force account creation, collection of user data, and increased monetization.

I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.

The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.

Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:

>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.

Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.

According to the parent, the platform gives the content creator the choice/control. So no, it's not garbage and that's the correct way to go about it.
Disagree. It gives the user the illusion that the purpose is to protect them somehow, but in reality it is solely there to be anti-user and pro lock in to social media walled gardens.
  • 0xCAP
  • ·
  • 1 day ago
  • ·
  • [ - ]
It's also a way to prevent LLMs from being trained on their data without their consent.
That's not correct.

The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...

So nothing is stopping LLMs from training on that data per se.
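
For the curious, fetching the post takes three public lookups, no login anywhere. A minimal sketch in Python using the requests library (the endpoints are the standard AT protocol ones to the best of my knowledge; the did:plc lookup is an assumption, and the rkey is truncated here just like in the link above):

    import requests

    # Resolve the handle to a DID via the public Bluesky AppView.
    did = requests.get(
        "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle",
        params={"handle": "robpike.io"},
    ).json()["did"]

    # Look up the DID document to find the account's PDS
    # (assumes a did:plc identity; the PDS is typically service[0]).
    doc = requests.get(f"https://plc.directory/{did}").json()
    pds = doc["service"][0]["serviceEndpoint"]

    # Fetch the raw post record straight from the PDS. This is public
    # repo data, so the app-level logged-out setting doesn't apply.
    record = requests.get(
        f"{pds}/xrpc/com.atproto.repo.getRecord",
        params={
            "repo": did,
            "collection": "app.bsky.feed.post",
            "rkey": "3matwg6...",  # truncated, as in the link above
        },
    ).json()
    print(record["value"]["text"])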

That's assuming that AI companies are gathering data in a smart way. The entire MusicBrainz database can be downloaded for free, but AI scrapers are still attempting to scrape it one HTML page at a time, which often leads to the service having errors and/or slowdowns.
Yeah, that's true. I'm just saying that if someone wants to put in a modicum of effort, the AT ecosystem is highly scrapable by design. In fact, apps themselves (like Bluesky) are essentially scrapers.
It's a non-default setting. So no. I am not sure what you disagree with exactly? We can call out BlueSky when they over-reach, but this is simply not it.
  • znpy
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
No, Bluesky is not garbage.
It is a user setting and quite a reasonable one at that, in Pike's case in particular.
What do you mean? I did some quick googling and am unsure what you are implying here.
There’s an option for setting the visibility of your posts: https://bsky.app/profile/bsky.app/post/3kgbz6tc6gl24
My question is why are multiple people commenting that "Rob Pike" in particular should use this feature.
Yeah, I'm not creating an account to read a post.

Twitter/X at least allows you to read a single post.

  • myko
  • ·
  • 1 day ago
  • ·
  • [ - ]
the post is public, just depends on the viewer you're using: https://skyview.social/?url=https://bsky.app/profile/robpike...
  • ·
  • 1 day ago
  • ·
  • [ - ]
> Assuming this post is real (it’s a screenshot, not a link)

I can see it using this site:

https://bskyviewer.github.io/

Rob Pike retired from Google a few years back.
The agent that generated the email didn't get another agent to proofread it? Failing to add a space between the full stop and the next letter is one of those things that triggers the proofreader chip in my skull.
  • ·
  • 1 day ago
  • ·
  • [ - ]
It's real, he posted this to his bluesky account.
"You must sign in to view this post."

No.

Here is the raw post on the AT Protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...

The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.

I understand Twitter posts are still shared here, even though I would often (usually?) have to log in?
  • ·
  • 1 day ago
  • ·
  • [ - ]
I failed to ever see the appeal of "like twitter but not (yet) run by a nazi" and this just confirms this for me :|
The potential future of the AT protocol is the main idea I thought differentiated it... also Twitter locking users out if they don't have an account, and Bluesky not doing so... but I guess that's no longer true?

I just don't understand that choice for either platform. Is the intent not the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.

edit: seems it's a user choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.

Bluesky is not locking anyone out. This is literally a user setting to not display their account without logging in. It's off by default.

And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...

It's a setting on Bluesky that the user can enable for their own account, and for people of prominence who don't feel like dealing with drive-by trolls all day, I think it's very reasonable. One is a money grab, and the other is giving power to the user.
  • Jach
  • ·
  • 1 day ago
  • ·
  • [ - ]
X went back on that quite some time ago. Have a bird post: https://x.com/GuGi263/status/2002306730609287628

(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)

For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.

You failed to see the appeal of a social network not run by a nazi...?
Yet :)

I'll (genuinely happily) change my opinion on this when it's possible to do Twitter-like microblogging via ATProto without needing any infra from Bluesky the company. I hear there are independent implementations being built, so hopefully that will be soon.

[flagged]
I remember a time when users had a great deal more control over their computers. Big tech companies are the ones who used their power to take that control away. You, my friend are the insincere one.

If you’re young enough not to remember a time before forced automatic updates that break things, locked devices unable to run software other than that blessed by megacorps, etc. it would do you well to seek out a history lesson.

For some context, this is a long-time Googler whose feats include major contributions to Go and co-creating UTF-8.

To call him the Oppenheimer of Gemini would be overly dramatic. But he definitely had access to the Manhattan Project.

>What power do big tech companies have and why do you have a problem with

Do you want the gist of the last 20 years or so, or are you just being rhetorical? I'm sure there will be much literature over time that dissects such a question to its atoms, whether as a cautionary tale or a retrospective of how a part of society fell. Well, we still have time to write that story.

Rob Pike is not a 'Googler' by birth or fame or identity. He was at Bell Labs and was on the team that created Unix, led the team creating Plan 9, co-created UTF-8, and did a bunch more - all long before Google existed. He was a legend before he deigned to join them and lend them his credibility.
I was gonna say! Working at Bell Labs is a LOT more prestigious (and less humiliating) than working for Google, an advertising company.

It's like the old joke from Mad Magazine:

The Beatles? Weren't they Paul McCartney's backup band before Wings?

In fairness, was Bell Labs, part of (or funded by) AT&T, a phone monopoly, any less corporate than Google's home for genius engineers?
Telephony is much more important to society than advertisement.
I know where they make money, but calling them an advertising company is just a jab. Ha ha, but that doesn't describe Google, like them or not.

I wonder where AT&T made profits and where, like any business, they broke even or had loss leaders. IIRC consumer telephone service was not profitable.

  • eru
  • ·
  • 1 day ago
  • ·
  • [ - ]
Eh, and it was arguably a mistake to let him force Go on the rest of the organisation by way of starpower.
"force" seems a bit strong, as I remember it.
Yeah, I remember it being a fourth option alongside the others but I quit just before Google lost its serifs and its soul
By this logic there is no corporation or entity that provides anything other than basic food, shelter, and medical care that could be criticized; they're all just providing something you don't need and don't have access to without them, right?
  • 7bit
  • ·
  • 1 day ago
  • ·
  • [ - ]
Just to note: these companies control infrastructure (cloud, app stores, platforms, hardware certification, etc.). That’s a form of structural power, independent of whether the services are useful. People can disagree about how concerning that is, but it’s not accurate to say there’s no power dynamic here.
> What power do big tech companies have

Aftermarket control, for one. You buy an Android/iPhone or Mac/Windows device and get a "free" OS along with it. Then, your attention subsidizes the device through advertising, bundled services and cartel-style anti-competitive price fixing. OEMs have no motivation not to harm the market in this way, and users aren't entitled to a solution besides deluding themselves into thinking the grass really is greener on the other side.

What power did Microsoft wield against Netscape? They could alter the deal, and make Netscape pray it wasn't altered further.

Umm, are you being serious? Just look at the tech company titans in this photo of the Trump inauguration; they are literally a stand-in for Putin's oligarchs at this point.

https://www.livenowfox.com/news/billionaires-trump-inaugurat...

FYI, this was sent as an experiment by a non-profit that assigns fairly open ended tasks to computer-using AI models every day: https://theaidigest.org/village

The goal for this day was "Do random acts of kindness". Claude seems to have chosen Rob Pike and sent this email by itself. It's a little unclear to me how much the humans were in the loop.

Sharing (but absolutely not endorsing) this because there seems to be a lot of misunderstanding of what this is.

Sorry, cannot resist: all the AI companies are not "making" profit.

Seriously though, it ignores that words of kindness need an entity that can actually feel to be expressing them. Automating words of kindness is shallow, as the words' meaning comes from the sender's feelings.

  • zwnow
  • ·
  • 14 hours ago
  • ·
  • [ - ]
You can't possibly expect software engineers to be able to understand human emotions and meaning. We built Palantir and all the other fun tech making people's lives miserable. If software engineers had ethics and understood human meaning, they wouldn't pump out predatory software like it's cow's milk. Fuck software engineers (excluding all the OSS devs who actually try to make the world a better place).
That's just another distraction from the class war being waged on the un-wealthy. We all contribute to it in small ways while it is being pushed by those with the means. Collectively we love to control others

Palantir wouldn't exist if regular people didn't use it to lookup details on an ex all the time to stalk them /jk.

  • jph00
  • ·
  • 1 day ago
  • ·
  • [ - ]
I got one of these stupid emails too. I’m guessing it spammed a lot of people. I’m not mad at AI, but at the people at this organisation who irresponsibly chose to connect a model to the internet and allow it to do dumb shit like this.
Wait, so someone took the "virus fishtank" from https://xkcd.com/350/ and did it with LLMs instead?
Yup. It's certainly an art project or something. It's like setting a bunch of Markov Chaneys loose on each other to see how insane they go.

…kind of IS setting a bunch of Markov Chaneys loose on each other, and that's pretty much it. We've just never had Chaneys this complicated before. People are watching the sparks, eating popcorn, rooting for MechaHitler.
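
For anyone who hasn't played with one, a Markov chain text generator is just a frequency table plus a random walk. A minimal sketch in Python (the one-line corpus is a stand-in for real training text):

    import random

    def markov_generate(corpus, n_words=20):
        # Build a bigram table: each word maps to the words seen after it.
        words = corpus.split()
        table = {}
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
        # Random walk: repeatedly pick a random successor of the current word.
        word = random.choice(words)
        out = [word]
        for _ in range(n_words - 1):
            followers = table.get(word)
            if not followers:
                break  # dead end: the last corpus word has no successor
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(markov_generate("the cat sat on the mat and the cat ran off"))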

> "Do random acts of kindness".

Random acts of kindness are only meaningful if they come from a human who had the heart, forethought, and willingness to go out of their way to do something kind for someone else. 'Random acts of kindness' originating from an AI is just spam, plain and simple.

The human race is screwed if connection, the one key thing that makes humans human, is outsourced partially or wholly to robots that have no ability to connect with, let alone understand, the human experience.

I get why Microsoft loves AI so much: it basically devours and destroys open source software. Copyleft/copyright/any license is basically trash now. No one will ever want to open source their code ever again.
It fits perfectly with Microsoft's business strategy: steal other people's ideas, implement them poorly, and bundle it with other services so companies force their employees to use it.
  • myko
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'm so mad Teams exists
I really think that if Microsoft were forced to improve the user experience of Teams, it would have a measurable impact on the happiness of humankind.
Not just code. You can plagiarize pretty much any content. Just prompt the model to make it look unique, and that's it: in 30s you have a whole copy of someone else's work in a way that cannot easily be identified as plagiarism.
There is still value in quality and craftsmanship. You might not be of that opinion, and you might not know anyone who is, but I do.
There will always be a market for niche, high-quality electron tweaking. Thing is, it will be a highly competitive market, way out of reach for >90% of today's professionals. That's why people are worried.

People who don't know that "computer" used to be a profession back in the day.

When I get an obviously AI-generated response from someone I'm trying to do business with, it makes me think less of them. I do value genuine responses, far more than the saccharine responses AI comes up with.
  • xpe
  • ·
  • 22 hours ago
  • ·
  • [ - ]
Yes. People want to know that others are spending time on an interaction. Taking short-cuts feels impersonal.

There are people with better and worse social skills. Some can, in a very short period of time, make you feel heard and appreciated. Others can spend ten times as long but struggle to have a similar effect. Does it make sense to 'grade' on effort? On results? On skill? On efforts towards building skills? On loyalty? Something else?

Our instincts are largely tuned to our ancestral environment. Even the social and cultural values that got us to, say, ~2023 have not caught up yet.

We're looking for 'proof of humanity' in our interactions -- this is part of who we are. But how do we get it with online interactions now?

Maybe we have to give up any expectation of humanity if you can't see the person right in front of you?

Strap in, the derivative of the derivative of crazy sh1t is increasing.

I struggle to find this argument compelling, as it sounds more like a straw man than a legitimate complaint.

If I write a hash table implementation in C, am I plagiarizing? I did not come up with the algorithm nor the language used for the implementation; I "borrowed" ideas from existing knowledge.
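
For concreteness, the kind of reimplementation in question is textbook stuff. A minimal separate-chaining sketch (Python here for brevity; the point would be the same in C):

    # Toy separate-chaining hash table: the algorithm is decades-old
    # common knowledge; the particular expression of it is the author's.
    class HashTable:
        def __init__(self, size=64):
            self.buckets = [[] for _ in range(size)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def set(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # overwrite existing key
                    return
            bucket.append((key, value))

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

    t = HashTable()
    t.set("lang", "C")
    print(t.get("lang"))  # -> C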

Let's say I implemented it after learning the algorithm from GPL code; is my implementation a new one, or is it derivative?

What if it is from a book?

What about the asm opcodes generated? In some architectures they are copyrighted, or at least the documentation is considered "intellectual property"; is my C compiler stealing?

Is a hammer or a mallet an obvious creation, or is it stealing from someone else? What about a wheel?

> I struggle to find this argument compelling; it sounds more like a straw man than a legitimate complaint.

Dude, there are entire websites dedicated to using diffusion models to rip off the styles of specific artists so that people can have their "work" without paying them for it.

You can debate the ethics of this all you want, but if you're going to speak on plagiarism using generative AI, you should at least know as much as the average teenager does about it.

"dude", I could counter-argue that many modern art is "ripping off" Turner's work, but since you know so much about the art world, I'm assuming you know what I'm saying.

Filters for "Van Gogh" or "Impressionist" or "watercolor" have existed for decades now; are they ripping off previous work without paying for it?

When does a specific trace become "intellectual property" to be ripped off? Does Mondrian hold the rights to colored squares?

If you don't understand that every living or dead artist was "inspired" (modified) by what they saw and experienced, I don't know what to tell you; you come off as one of those people who seem to think that "art" is inspiration. There's a somewhat well-known composer in my country who used to say "inspiration is for amateurs".

Having that posture is, in itself, a position of utter and complete ignorance. If you don't understand how you need to absorb something before you transcend it, and how the things you absorbed will define your own transcendence, you know nothing about the creative process and its inner workings. Sure, if a machine does it, and if it uses well-known iteration processes, one can argue whether it is art, or an artistic manifestation, or - better yet - whether it has intellectual rights that can be "ripped off". But beating one's chest and claiming theft, as if no musician ever played melodies composed by someone else and no painter ever used the same techniques or subject decomposition as their peers or ancestors, is, frankly, naive.

Conflating a machine that uses the works of a living, working artist to mimic their style with a watercolor filter is so disingenuous it doesn’t deserve a response. Don’t waste my time with this run-on drivel when you won’t engage with the topic at hand.
Blame the artist, not the tool. AI is just another tool.
If we’re talking about AI as a concept - sure. But there are tools made specifically for this purpose, to the point that some artists’ names are preprogrammed into them for use. That’s a bit beyond what you’re saying; that’s a tool you can blame.
Maybe someone should vibe code the entire MS Office Suite and see how much they like that. Maybe add AD while they are at it. I'm for it if that frees European companies from the MS lock in.
Good idea. My country spends over a billion dollars on Microsoft licenses annually, which is more than 200 euros per capita. I think a billion dollars a year spent on dev salaries and Claude Code subscriptions to build an MS Office replacement would pay for itself quickly enough.
Even better - train a model on the MS source code leaks and use it to work on a Wine fork or, as you said, a vibe-coded MS Office. This would be hilarious.
Actually, the opposite is happening: more and more vibe-coded source code is making it to GitHub.

You could argue about quality but not "No one will ever want to open source their code ever again".

Maybe it's going the other direction. It lets Microsoft essentially launder open source code. They can train an AI on open source code that they can't legally use because of the license, then let the AI generate code that they, Microsoft, use in their commercial software.
They always did what they wanted with open source code, not sure why people think this is different
Yeah, I can definitely see a breaking point when even the false platitudes are outsourced to a chatbot. It's been like this for a while, but how blatant it is is what's truly frustrating these days.

I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.

I think we really are in the last moments of the public internet. In the future you won’t be able to contact anyone you don’t know. If you want to thank Rob Pike for his work you’ll have to meet him in person.

Unless we can find some way to verify humanity for every message.

We need to bring back the web of trust: https://en.wikipedia.org/wiki/Web_of_trust

A mix of social interaction and cryptographic guarantees will be our saving grace (although I'm less bothered by AI-generated content than most).
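For the curious, here is a minimal sketch of the cryptographic half of that idea, assuming Python and the `cryptography` package. The names and the message are illustrative; the social half, where humans vouch for each other's keys, is the hard part that no code solves:

    # Minimal sketch: Ed25519-signed messages, assuming the `cryptography` package.
    # A real web of trust also needs humans to cross-sign each other's public keys.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()          # held by the (human) sender
    message = b"Thank you for your work."       # a note you actually wrote yourself
    signature = key.sign(message)

    # The recipient verifies against a public key they already trust, e.g. one
    # exchanged in person or vouched for by mutual contacts.
    try:
        key.public_key().verify(signature, message)
        print("signature checks out against a trusted key")
    except InvalidSignature:
        print("not from who it claims to be from")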

  • jcgl
  • ·
  • 2 hours ago
  • ·
  • [ - ]
Maybe for nerds! But normies won't, can't, and shouldn't manage their own keys.
> Unless we can find some way to verify humanity for every message.

There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.

The internet is facing a threat to its very existence. If it becomes nearly impossible to pick the signal out of the noise, then there is no internet. Not for normal people, not for anyone.

So we need some mechanism to verify that content is from a human. If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving kind to be the only model.

> If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving kind to be the only model.

There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.

Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.

> Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.

It’s slowly but inexorably increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.

Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, seem to be continually improving as well. Anecdotally, I’m seeing a steady increase in the number of HN front-page articles that turn out to be AI-written.

I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.

> Out of curiosity, what is the timeline here?

I give it a decade. By the same point in its life, social media had already done irreparable damage to society.

So far we have already seen widespread damage. Many sites require a login to view content now, almost all of them have quite restrictive measures to prevent LLM scraping. Many sites are requiring phone number verification. Much of social media is becoming generated slop.

And now people are receiving generated emails. And it’s only getting worse.

Plus one to all that. I'm sure there are some upsides to the current wave of ML and I'm all for pushing ahead into the future, but I think the downsides of our current LLM obsession far outweigh the good. Think 5-10 years from now, once this thing has burned its course through the current job market, and people who grew up with this technology have gone through education without learning anything and reached the age where they need to start earning money. We're in so much trouble.
We're going to be in our 70s still writing code because LLMs will dumb down the next generation to the point where they won't be able to get software to work.

Which luckily coincides with our social security and retirement systems collapsing.

Yup, just like my dad built his own house, and I have to call a plumber/electrician.

I can do SOME things, but for more advanced work, I need to call a professional.

Coincidentally, the plumber/electrician always complains about the work done by the person before him/her. Kinda like I do when I need to fix someone else's code.

Excellent prediction. Seems like it always happens.

In a couple years I'll be in my 70's and starting to write code again for this very reason.

Not LLMs though, I've got my hands full getting regular software to perform :\

For fun?

Or do you actually need the money?

In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.

I like programming and working on projects; I hate filing TPS reports all day and never-ending meetings.

>For fun?

Good question, but God, no.

Just to get more out of the electronics where others can't match what I had decades ago. Things have come a long way but icing on the cake is still needed for a more complete solution, and by now it's more clear than ever what to do.

Actually the first year after "retiring" from my long-term employer was spent on music servers as a hobbyist. Then right back to industrial chemical work since. It's been nice not to have any bosses or deadlines though.

>Or do you actually need the money?

Not really, actually waiting until 70 to collect Social Security so I will get the maximum available to me, and haven't even started drawing from my main retirement fund. I plan to start my second company funded entirely by the Social Security though.

>In my 20s I wanted to retire by 40. Now in my 30s I've accepted that's impossible.

This is one area where I am very very far from the mainstream. I grew up in a "retirement community" known as South Florida. Where most people have always been over 65. Nothing like the 50 states from Orlando on up. Already been there and done that when I was young and things were way more unspoiled. When I was still a teenager (Nixon Recession) we were some of the first in the USA where it was plain to see that natives like me would not be able to afford to live in our own hometown. Even though student life was about as easy as the majority of happy retirees. I knew I already had it good, and expected to always continue to run a business of some kind when I got to be a senior citizen, and never stop. There were really so many more examples of diverse old-timers than any other place I am aware of.

>I like programing and working on projects, I hate filing TPS reports all day and never ending meetings.

I actually do like programming too or I wouldn't have done it at all. I started early and have done some pioneering work, but never was in a software company. There were just not many people who could do the programming everywhere it was needed as computerization proliferated in petrochemicals. Now there's all kinds of commercial software and all I have to do is "just" tie up the loose ends if I want to. I mainly did much more complete things on my own, and the way I wanted to. Still only when needed, and not every year. In my business I earned money by using my own code, not selling it at all.

I know what you mean about never ending BS, big corporate industrial bureaucracy was challenging enough to survive around as a contractor, I don't think I could tolerate "lack of progress" reports or frequent pointless meetings for code on top of that, especially when I'm trying to keep my nose to the grindstone and really get something worthwhile accomplished :)

I actually think I'm trying to get to where you're at.

I like programming. I want to start a company and hire smart people.

But I don't want that to be my main means of support.

>Just to get more out of the electronics where others can't match what I had decades ago.

I'm forced to assume you have a particular niche here.

I hope to be able to write code as long as I'm here, but I want it to be a hobby when I'm old.

Hopefully the hobby includes collaborations with others. A lot of people have vanity wine shops and book stores which lose money; I want a vanity game studio (maybe music production software too).

I mean, seriously, is this the prediction folks are going with? OK, so we can build something like our SOTA coding agents today, breathing life into these things that 3 years ago were laughable science fiction, and your prediction is that it will be worse from here on out? Do you realize coding is a verifiable domain, which means we don’t technically even need any human data to improve these models? Like, in your movie of 2050, everyone’s throwing their hands up: “oh no, we made them dumber, because people don’t need to take 8 years of school and industry experience to build a good UI and industry-best-practice backend infrastructure”. I guess we can all predict what we want, but my god.
That's an INCREDIBLY good point about synthetic training data. During model training, AI agents could pretty much start their own coding projects, based on AI-generated wish-lists of features, and verify progress on their own. This leads to limitless training data in the area of coding.

Coding might be cooked.
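To make the "verifiable domain" point concrete, here is a toy sketch of that loop in Python. The `generate_*` functions are hypothetical stand-ins for model calls and return canned strings so the sketch runs as-is (pytest assumed installed); it illustrates the idea, not anyone's actual training pipeline:

    # Toy self-play loop: generate code and tests, run the tests, and keep only
    # the pairs that pass. Pass/fail is an automatic reward signal; no human data.
    import os, subprocess, sys, tempfile

    def generate_solution(spec: str) -> str:    # hypothetical stand-in for a model call
        return "def add(a, b):\n    return a + b\n"

    def generate_tests(spec: str) -> str:       # hypothetical stand-in for a model call
        return "from solution import add\n\ndef test_add():\n    assert add(2, 2) == 4\n"

    def verify(solution_src: str, test_src: str) -> bool:
        """Run the generated tests against the generated solution in a sandbox dir."""
        with tempfile.TemporaryDirectory() as d:
            with open(os.path.join(d, "solution.py"), "w") as f:
                f.write(solution_src)
            with open(os.path.join(d, "test_solution.py"), "w") as f:
                f.write(test_src)
            r = subprocess.run([sys.executable, "-m", "pytest", "-q", "test_solution.py"],
                               cwd=d, capture_output=True, timeout=120)
            return r.returncode == 0

    kept = []
    for spec in ["an add(a, b) function"]:      # an AI-generated wish-list, in principle
        solution, tests = generate_solution(spec), generate_tests(spec)
        if verify(solution, tests):             # verified examples become training data
            kept.append((spec, solution))
    print(len(kept), "verified examples")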

> breathing life into these things that 3 years ago were laughable science fiction

LLMs were not fiction three years ago. Bidirectional text encoders are over a decade old.

Coding agents are what I’m talking about. They are also an old idea; everything is an old idea. What is new, and a major step change, is their realized capability as of December 2025.
You are the first person I've ever heard call Dec 2025 a "major step change" moment in AI. And I've been following this space since BERT.
No, I’m saying AS OF Dec 2025, with 2025 itself being the step change: coding agent adoption took off as a result of model quality and agent interfaces finally being good enough.
Understood, but I still think you're exaggerating. Tool use is a 2024 thing, and progress on model quality this year has been downright vomit-inducing (looking at you, OpenAI...)
What would you have expected model quality to be this year? It has greatly exceeded my expectations. I’m genuinely confused by this perspective, considering where we were a very, very short time ago.
I find a lot of folks share this sentiment, but from where I sit it just sounds so much like the “kids these days” crap that spawned all of YOU folks when you were younger. I grew up so inspired by the internet culture of the nineties: people who understood a technology and had a passion for wrangling it to do great things. We had a mixed run, and the internet today has simultaneously exceeded those early dreams by orders of magnitude in some ways and become absolutely Orwellian and backwards in others. The same thing is happening here. It’s just so interesting seeing the same peers have such an identical take on this generation’s paradigm shift as the folks we all ridiculed in the 90s, with those hilariously badly aged takes on the internet being a fad, or not user-friendly enough, etc. I guess my naivete was to expect that this time around we would be able to better recognize it in ourselves.
Have you considered that the people in the 1990s were mostly correct, and it's you that has been corrupted by modern marketing influences and external pressures?

There's no shortage of "Chicken Little" technologies that look great on-paper and fail catastrophically in real life. Tripropellant rockets, cryptocurrencies, DAOs, flying cars, the list never ends. There's nothing that stops AI from being similarly disappointing besides scale and expectation (both of which are currently unlimited).

Again, another common take. Hint: if you’re against AI or the current investment in AI, you have so many better and more nuanced arguments at your disposal than “AI is Chicken Little”. It’s already here. I’ve built so much stuff with Claude and Codex that I’d never have been able to build otherwise, at a speed that is already incredible, and it’s getting better and better every 6 months. Be worried about alignment or centralized unregulated power; worry about what wars will look like and how this is a pre-packaged Stasi for any dictatorship. But “this is a fad equivalent in stupidity and hype to cryptocurrency and tripropellant rockets” is just kind of silly.
I use AI regularly, it regularly disappoints me. I won't worry about alignment or centralizing the singularity because AI does nothing that we haven't seen already.

The one thing that AI hasn't done that was promised a million times over is make money.

Do you genuinely believe this? That AI is not making money? Or is this just another tired refrain of people who don’t appear to understand the strategic play of pure AI companies, which is that they operate at a loss?
I don't have to believe anything, I just look at the S&P 500 and see the same old stuff. Nvidia is enjoying the shovel shortage, but none of the gold-rushers have discovered anything better than CUDA. Nothing new under the sun.

> people who don’t appear to understand the strategic play of pure AI companies

Get a load of this guy. Strategy in isolation is worthless; Russia has excellent strategic deterrence that is utterly useless for deterring Ukraine. Pure crypto companies had strategic foresight, but none of it was worth a damn when they had to compete with each other on merit.

The strategic play is perfectly well-understood. The tactical side is not, so far Nvidia is the only company that has gone to war and won.

Is “Russia has excellent deterrence” your way of saying that sanctions are not working to stop what Russia is doing to Ukraine? That not only demonstrates a bad understanding of geopolitics and how sanctions work, but also distracts from what we are actually talking about.

It’s not really clear to me what you are trying to say. There will be winners and losers and it will be hard to know who they will be. That has nothing to do with Anthropic/OpenAI/etc not being rational in their strategy...

  • ·
  • 1 hour ago
  • ·
  • [ - ]
  • sneak
  • ·
  • 23 hours ago
  • ·
  • [ - ]
Some of us do, and actively root it out. I’ve never in my life been more excited to sit alone in a room with an editor and a compiler than I am these days.
  • ·
  • 1 day ago
  • ·
  • [ - ]
Woke up to this bsky thread this AM. If "agentic" AI means some product spams my inbox with a compliment so back-handed you'd think you were a 60 Minutes staffer, then I'd say the end result of these products is simply to annoy us into acquiescence.
Somebody at Anthropic committed a seriously stupid PR mistake.
I don’t think they’re affiliated with agentvillage.org
Oh my bad, I read the thread wrong
  • ·
  • 1 day ago
  • ·
  • [ - ]
they thought this would be a brilliant marketing campaign... oopsie
It's nice to see a name like Rob Pike, a personal hero and legend, put words to what we are all feeling. Gen AI has valid use cases and can be a useful tool, but the way it has been portrayed and used in the last few years is appalling and anti-human. Not to mention the social and environmental costs which are staggering.

I try to keep a balanced perspective but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if pushed I would absolutely choose the outright abolishment of it rather than continue on our current path.

I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.

[flagged]
I use agentic LLM dev tools to work on two apps, around 14 hours per day, very happily. As a long out of practice dev who still has product ideas, these tools have created huge opportunities for me. I am also having the most fun of my professional life.

However, I would trade all of that to make "AI" go away in a heartbeat. It's just impossible for me to believe that this will not be a tragedy for society at large. I cannot imagine even a single realistic world-scale scenario in which the outcome will be positive.

Anyway, back to work....

  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
Extreme? Hardly.

There are many serious issues with generative AI (data integrity and sourcing, abuse, environmental concerns) that are kinda sorta being swept under the rug in the name of "progress."

The opinion that we should either abolish AI or basically only use it for machine translation is extreme, as in taken to the furthest point.
  • sloum
  • ·
  • 1 day ago
  • ·
  • [ - ]
Well, I couldn't disagree more with you: being anti-AI is absolutely not an extreme position. You are living in a bubble if you think it is. "Fervent anti-AI territory" is a good position, not hate speech.
Abolishing it rather than continuing on the current path, strictly prohibiting any creative endeavor, and being extremely skeptical about anything other than direct language translation is an extreme opinion.

You agreeing with that does not make it less extreme. And OP's "vile machines raping the planet" is obviously vitriol whether you personally consider it hateful or not.

  • sloum
  • ·
  • 1 day ago
  • ·
  • [ - ]
> "vile machines raping the planet" is obviously vitriol

Well, I still think you are giving an opinion and I am giving mine. I disagree with your opinion. Mr. Pike is making a statement of fact. I do not consider it particularly vitriolic. You may consider it hyperbolic and I could understand that (even if I do not agree with it).

> Abolishing it rather than continuing on the current path, strictly prohibiting any creative endeavor, and being extremely skeptical about anything other than direct language translation

...is not extreme in the slightest. If something is wrong (either morally or as a good and viable path forward) it only makes sense to cease following that path. I posit that it is not possible to creatively use this technology. It can only serve to steal the creativity of others. Prompting a machine to make something out of misc. parts for you does not make you creative. Nor does it make the machine creative. But for us to agree on that we would have to better define either creativity or art (spoiler: my view is that only sentient beings can be creative or make art). I suppose I could agree that the developers of an AI system are being creative, but certainly not the users. Being skeptical is always a good position with something new until shown reasons not to be skeptical. Positions are allowed to grow and change; starting skeptical about something is absolutely a reasonable position to start from. I see none of your statement as evidence of extremism at all. Sounds like exercising sound, reasonable judgement.

>> "vile machines raping the planet" is obviously vitriol

> Mr. Pike is making a statement of fact.

He is embellishing his own perception and broadcasting it over the internet. If it was a widely-known fact then he wouldn't have to stand on his soapbox to shout it out.

I think the Occam's razor motive is that he needed catharsis after wading through AI shit. Many of us do; our attention spans have been abused by online advertisement for years, and AI makes it easier than ever to abuse that outreach. But you need to remember that people probably called the internet, radio, television, and probably fiction novels a "vile machine" at some point, feeling relatively justified in the judgement. We have the benefit of hindsight now to call them utterly hysterical.

That's the quiet voice many are carrying around in their heads, announced clearly.
It is nice to hear someone who is so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
... not to mention that most of the time, what AI produces is unmitigated slop and factual mistakes, deliberately coated in dopamine-infusing brown-nosing. I refuse to let my position, even my profession, be debased to AI slop reviewer.

I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).

(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)

I truly don’t understand this tendency among tech workers.

We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.

We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?

They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.

I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.

The problem is that it's reached a tipping point. Comparing Excel to GenAI is just bad faith.

Are you not reading the writing on the wall? These things have been going on for a long time, and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.

Copyright was an evil institution to protect corporate profits until people without any art background started being able to tap AI to generate their ideas.
Copyright did evolve to protect corporations. Most of the value from a piece of IP is extracted within the first 5-10 years, so why do we have "author's life + a bunch of years" terms on it? Because it is no longer about making sure the author can live off their IP; it's about corporations being able to hire some artists for pennies (compared to the value they produce for the company) and leech off that for decades.
So let us compare AI to aviation. Globally, aviation accounts for approximately 830 million tons of CO₂ emissions per year [1]. If you power your data centre with quality gas power plants, you will emit 450 g of CO₂ per kWh of electricity consumed [2]; that is 3.9 million tons per year for a 1 GW data centre (arithmetic sanity-checked in the sketch after the references). So depending on the power mix, it will take somewhere around 200 GW of data centres for AI to "catch up" to aviation. I have a hard time finding any numbers on current consumption, but if you believe what the AI folks are saying, we will get there soon enough [3].

As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.

Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use so that you end up doing a worse job, while having worse job satisfaction, and spending extra resources, it is just a Kinder egg of bad.

[1] https://ourworldindata.org/grapher/annual-co-emissions-from-... [2] https://en.wikipedia.org/wiki/Gas-fired_power_plant [3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
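For what it's worth, the arithmetic in the parent does check out; a quick sketch using only the figures cited above:

    # Sanity check: 450 g CO2/kWh (gas power [2]) and 830 Mt/yr (aviation [1]).
    g_per_kwh = 450
    kwh_per_gw_year = 1e6 * 24 * 365                       # 1 GW for a year, in kWh
    tons_per_gw_year = g_per_kwh * kwh_per_gw_year / 1e6   # grams -> metric tons
    print(tons_per_gw_year / 1e6)                          # ~3.9 Mt CO2 per GW-year

    aviation_tons_per_year = 830e6
    print(aviation_tons_per_year / tons_per_gw_year)       # ~210 GW to match aviation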

> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.

I don't know about the gigawatts needed for future training, but this sentence comparing prompts with plane trips looks wrong. Even making a prompt every second for 24h amounts to only 2.6 kg of CO2 on some average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match this, which is quite a serious scale. (A back-of-envelope version follows the references.)

[1] https://cloud.google.com/blog/products/infrastructure/measur...

[2] https://www.carbonindependent.org/22.html
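Running the same kind of check on these numbers; the 0.03 g per median text prompt is the figure I take the comment to be using from the Google post in [1], so treat it as an assumption:

    # Back-of-envelope for prompts vs. flights, using the cited figures.
    prompts_per_day = 24 * 60 * 60            # one prompt per second, nonstop
    g_per_prompt = 0.03                       # median text prompt, per [1] (assumed)
    kg_per_day = prompts_per_day * g_per_prompt / 1000
    print(kg_per_day)                         # ~2.6 kg CO2 per day

    flight_kg_per_hour = 250                  # per passenger, per [2]
    print(flight_kg_per_hour / kg_per_day)    # ~96 days of nonstop prompting per flight-hour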

Lots of things to consider here, but mostly that is not the kind of prompt you would use for coding. Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.

Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.

That the global AI footprint is reportedly already at 8% of the aviation footprint [1] is indeed rather alarming and surprising.

Research on this (is it mainly due to training? inefficient implementations? vibe coders, as you say? other industrial applications? can we verify this by the number of GPUs made or money spent? etc.) is truly necessary, and the top companies must not be allowed to remain opaque about it.

[1] https://www.theguardian.com/technology/2025/dec/18/2025-ai-b...

The nature of these AIs is generally such that you can always throw more computation at the problem. Bigger models are the obvious route, but as I hinted earlier, a lot of the current research goes more towards making various subqueries than towards making the models even bigger. In any case, for now the predominant factor determining how much compute a given prompt costs is how much compute someone decided to spend. So obviously, if you pay for the "good" models, there will be a lot more compute behind them than if you prompt a free model.
  • deaux
  • ·
  • 23 hours ago
  • ·
  • [ - ]
> Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.

People who do that are <0.1% of those who use GenAI when coding. It doesn't create anything usable in production. "Ingesting an entire codebase" isn't even possible when going beyond absolute toy size, and even when it is, the context pollution generally worsens results on top of making the calls very slow and expensive.

If you're going to talk about those people, you should be comparing them with private jet trips (which of course are many orders of magnitude worse than even those "vibe coders").

  • deaux
  • ·
  • 23 hours ago
  • ·
  • [ - ]
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.

I'm fairly certain that your math on this is orders of magnitude off unless you define "prompting all day" in a very non-standard way yet aren't doing so for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trip per year.

When they stopped measuring compute in TFLOPS (or any deterministic compute metric) and started using Gigawatts instead, you know we're heading in the wrong direction.

https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...

> We’ve been compromising on those morals for our whole career

Yes!

> The needle moved just a little bit

That's where we disagree.

I suspect people talk about natural resource usage because it sounds more neutral than what I think most people are truly upset about -- using technology to transfer more wealth to the elite while making workers irrelevant. It just sounds more noble to talk about the planet instead, but honestly I think talking about how bad this could be for most people is completely valid. I think the silver lining is that the LLM scaling skeptics appear to be correct -- hyperscaling these things is not going to usher in the (rather dystopian looking) future that some of these nutcases are begging for.
Let's be careful here. It's generally a good idea to congratulate people for changing their opinion based on evolving information, rather than lambast them.

(Not a tech worker, don't have a horse in this race)

They aren’t changing their opinion though. They aren’t seeking to scale back non-AI tech.
  • kentm
  • ·
  • 1 day ago
  • ·
  • [ - ]
> The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?

It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.

Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.

Well said. AI makes people feel icky; that's the actual problem. Everything else is post-rationalisation they add because they already feel gross about it. Feeling icky about it isn't necessarily invalid, but it's important for us to understand why we actually like or dislike something so we can focus on any solutions.
> AI makes people feel icky

Yes!

> it’s important for us to understand why we actually like or dislike something

Yes!

The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not mean at all that it's a human. And instead of disabusing users of this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.

The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.

> so we can focus on any solutions

Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?

> They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling

Maybe this will force humans to raise their game and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem; every politician and advertiser takes advantage of this. Reams of philosophy have been written on the problem.

I believe that’s the main reason why you dislike AI, but if you asked everyone who hates AI, I believe many would come up with different main reasons. I doubt that solution would work very well, even though it’s well-intentioned; it’s too easy to work around, especially with text. But at least it’s direct, and really my main point is that we need to sidestep the emotional feelings we have about AI and actually present cold, hard legal or moral arguments where they exist, with specific changes requested, or be dismissed as just hating it emotionally.
> The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect.

Nvidia to cut gaming GPU production by 30 - 40% starting ...

https://www.reddit.com/r/technology/comments/1poxtrj/nvidia_...

Micron ends Crucial consumer SSD and RAM line, shifts ...

https://www.reddit.com/r/Games/comments/1pdj4mh/micron_ends_...

OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites

https://openai.com/index/five-new-stargate-sites/

> Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.

I'm a software developer. I don't take planes for work.

> We’ve been compromising on those morals for our whole career.

So your logic seems to be: it's bad, so don't do anything, just floor it?

> I’m not an AI apologist.

Really? Have you just never heard the term "wake up call?"

Two things:

1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.

2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).

In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).

I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.

I'm not sure either (1) or (2) are the problem.

I can understand someone telling me I'm an old man shouting at clouds if (2) works out.

But at least (2) is about a machine saving someone's time (we don't know at what cost, and for whose benefit).

My biggest problem with LLMs (and the email Rob got is an example) is when they waste people's time.

Like maintainers getting shit vibe-coded PRs to review, and when we react badly: “oh you're one of those old schoolers who have a policy against AI.”

No kid, I don't have an AI policy, just as I don't have an IDE policy. Use whatever the hell you want – just spare me the slop.

> Many tech workers viewed the software they worked on in the past as useful in some way for society

Ah yes, crypto, Facebook, privacy destruction, etc. Indeed, they made the world such a nice place!

OpenAI's AI data centers will consume as much electricity as the entire nation of India by 2033 if they hit their internal targets[0].

No, this is not the same.

[0]: https://www.tomshardware.com/tech-industry/artificial-intell...

That’s interesting. Why do you think this is worth taking more seriously than Musk’s repeated projections for Mars colonies over the last decade? We were supposed to have one several times over by this point.
  • wpm
  • ·
  • 1 day ago
  • ·
  • [ - ]
Because we know how much power it's actually going to take? Because OpenAI is buying enough fab capacity and silicon to spike the cost of RAM 3x in a month? Because my fucking power bill doubled in the last year?

Those are all real things happening. Not at all comparable to Muskian vaporware.

> tech workers to be quite intellectually lazy and revisionist

I have yet to meet a single tech worker who isn't.

> I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.

You are right, and thus downvoted, but I still see the current outcry as positive.

I appreciate this and many of the other perspectives I’m encountering in the replies. I agree with you that the current outcry is probably positive, so I’m a little disappointed in how I framed my earlier comment. It was more contrarian than necessary.

We tech workers have mostly been villains for a long time, and foot stomping about AI does not absolve us of all of the decades of complicity in each new wave of bullshit.

At least Excel worked a lot better.
That's fine, you do you. Everyone gets to choose for themselves!
It still feels like you haven’t absorbed their absolutely valid point that you may be hating first and coming up with rationalisations afterwards. There’s a more rational way to tackle this.
Do people really need to be more rational about this than AI itself?

Or has the bar been lowered in such a way that different people regard it as unsavory in different ways, which wouldn't happen if everyone were more rational across the board?

I’m sorry, I’m not following.
[flagged]
Are the intentions of the AI creators icky though? The ick didn't come from nowhere.
The ick is human nature reacting to the uncanny valley, some fear of change, and SOME actual valid points and concerns, moral and legal. You’ll only avoid being dismissed as a Luddite if you focus on the last one.
[dead]
I don't feel it's immoral, I just don't want to use it.

I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.

Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.

  • deaux
  • ·
  • 23 hours ago
  • ·
  • [ - ]
Now y'all finally know what it's like to be vegetarian (I'm not one). So many parallels. And they are expected to keep relatively quiet about it and not scream about things like

> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society

Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.

Overton window and all that.

> And they are expected to relatively keep quiet about it and not scream about things like

I’ve never met a vegetarian who is able to keep quiet about being one but I still got like 30 years left on Earth to meet one :)

Unless you ask everyone you meet what their diet is you don’t actually know if what you just wrote is true.
I was being facetious but I will do this from here on out to make sure I got this covered. I always assumed

- if I meet a vegetarian / vegan, they will tell me that within 48 seconds

- if that doesn’t happen, they are not vegetarian / vegan

moving forward I will ask the 2nd group to make sure they eat food that had parents :)

Well we know what happens to those that assume. Seems you have that covered.
you are absolutely right, assumptions lead nowhere, will correct this!
  • deaux
  • ·
  • 22 hours ago
  • ·
  • [ - ]
I've never met a person who says no vegetarian keeps quiet about it who is able to keep quiet about it, but I still got like 30 years left on Earth to meet one :)
Out of curiosity, is there anything in particular you don't like about people not wanting to eat meat? It kind of sounds like maybe you had an unpleasant interaction. Is that the main reason you think "they" are obnoxious about not eating meat?
  • neilv
  • ·
  • 21 hours ago
  • ·
  • [ - ]
Hello, bdangubic, pleased to meet you.
love it! pleased to meet you too!!
do you apply the same standards when you, say, buy a phone?! never gonna buy an iPhone cause we know how and by whom they are made? never going to use any social media apps cause … well, you see where this is going? you seem to be randomly putting a foot down on the “issue du jour”…
Buying a phone is an indispensable part of life today. There are some government services in many countries which are digital-only (and phone-only in particular), and restaurants, hotels, etc. in the service industry which all require you to have a phone; otherwise you can't use their services. And this trend is growing. So if you are the type who would rather live in a cave, or hang yourself from a tree, than accept that modern societies require a modern phone, that's your choice. But others would rather accept this. We are beyond the point where this trend can be reversed. On the other hand, AI is not that integral a part of people's lives yet, and it's better to protest now, while protest still has an impact.
> Buying a phone is an indispensable part of life today.

1. it absolutely is not. I have two friends who do not have a phone

2. even if you say you must have a phone to live, you can buy an ethical one :)

  • sneak
  • ·
  • 23 hours ago
  • ·
  • [ - ]
Copying isn’t theft, and it’s DEFINITELY not theft of labor.

Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.

It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.

Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!

Most of the critiques of Rob's take in here equate to: Rob rolled through a stop sign once, therefore he's not allowed to take fault with habitual drunk drivers.
Idk, for me the only issue I have with Rob’s take is that it’s a pretty overly dramatic one that oversimplifies and casts as black and white something much more complex. Obviously he's a very real living legend, much respect, and getting one of these emails is icky and distasteful, but to make it into what he does is a bit much.
He created stuff while getting a lot of money for it.

Now he complains about it? It's just ignorant.

And he apparently has $10 million, and "the couple live both in the US and Australia." So guess how often he flies around the globe. Guess how much real estate he occupies?

He isn't part of the solution, he is part of the problem.

Contextually, this feels more like Rob running stop signs five times a week and then crashing out when someone finally brake-checks him.
  • sneak
  • ·
  • 23 hours ago
  • ·
  • [ - ]
Working for the web’s leading mass surveillance advertising enshittifier is not “roll[ing] through a stop sign”.
No "going nuclear" there. A human and emotional reaction I think many here can relate to.

BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.

Does HN allow links to content that's not publicly viewable?
  • edent
  • ·
  • 1 day ago
  • ·
  • [ - ]
Plenty of paywalled articles are posted and upvoted.

There's nothing in the guidelines to prohibit it https://news.ycombinator.com/newsguidelines.html

[flagged]
Nothing private about it, it’s on his Bluesky account:

https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s

    You must sign in to view this post.
When trying to browse their profile:

    This account has requested that users sign in to view their profile.
Meanwhile I can read other Bluesky posts without logging in. So yeah, I'd say it looks like robpike is explicitly asking for this content to not be public and that submitting a screenshot of this post is just a dick move.

If there were something controversial in the post that motivated a public interest warranting "leaking" it, then sure, but this is not that.

He did share a public version of this on Mastodon, which I think would have been a much better submission.

https://hachyderm.io/@robpike/115782101216369455

IMO the current dramabait title "Rob Pike Goes Nuclear over GenAI" is not appropriate for either.

The Mastodon version is missing crucial context
By "not publicly viewable", I mean that bsky.app (like Twitter) seems to demand login before showing the post. I don't see any sign of Pike restricting access to it.

So I think your flag is unwarranted.

Bluesky says it is demanding login, and it looks like it is, because of the user's account settings. Public user profiles are publicly viewable.

https://news.ycombinator.com/item?id=46389747

He set his posts to be viewable by bsky users so long as they are logged in, knowing that anybody can sign up and do so. (I have chosen not to sign up; thus my original question.)

The obvious reason one might do this is to allow blocking specific problematic accounts. It doesn't demonstrate an intent to keep this post from reaching the general public.

So I still think your rush to flag was unwarranted.

  • ·
  • 1 day ago
  • ·
  • [ - ]
X, The Everything App, requires an account for you to even view a tweet link. No clever way around it :/
replace x.com with xcancel.com or nitter.net, lol.
  • lkbm
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'm unsure if I'm missing context. Did he do something beyond posting an angry tweet?

It seems like he's upset about AI (same), and decided to post angry tweets about it (been there, done that), and I guess people are excited to see someone respected express an opinion they share (not same)?

Does "Goes Nuclear" means "used the F word"? This doesn't seem to add anything meaningful, thoughtful, or insightful.

I was trying to find some more context on this, but all I could find is that Rob Pike seems to care a lot about the efficiency of software/hardware and to be against bloat, which is expressed in his work on Golang and in related talks about it.
  • dbcpp
  • ·
  • 1 day ago
  • ·
  • [ - ]
The thing that drives me crazy is that it isn't even clear if AI is providing economic value yet (am I missing something there?). Trillions of dollars are being spent on a speculative technology that isn't benefitting anyone right now.

The messaging from AI companies is "we're going to cure cancer" and "you're going to live to be 150 years old" (I don't believe these claims!). The messaging should be "everything will be cheaper" (but this hasn't come true yet!).

> Right now trillions of dollars are being spent on a speculative technology that isn't benefitting anyone right now.

It has enormous benefits to the people who control the companies raking in billions in investor funding.

And to the early stage investors who see the valuations skyrocket and can sell their stake to the bagholders.

Are people still in denial about the daily usage of AI?

It's interesting people from the old technological sphere viciously revolt against the emerging new thing.

Actually, I think this is the clearest indication of a new technology emerging.

If people are viciously attacking some new technology you can be guaranteed that this new technology is important because what's actually happening is that the new thing is a direct threat to the people that are against it.

People attacked leaded gasoline as a colossal mistake even as the fuel corporations promoted it.

"Because people attack it, it therefore means it's good" is a overly reductionist logical fallacy.

Sometimes people resist for good reasons.

[flagged]
>Because leaded gas is the same thing as people using a new technology like AI.

It's not the same, but it's not necessarily any good. I've observed the following, after ~2 weeks of free ChatGPT Plus access (as an artist who is trying to give the technology a chance, despite the vociferous (not vicious, geez) objections of many of my peers):

It's addictive (possibly on purpose). AI systems frequently return imperfect outputs, and users are trained to repeat until the desired output comes. Obviously, this can be abused by sophisticated-enough systems pushing outputs that are JUST outside the user's desire so that they have to continue using them. This could conceivably happen independent of obvious incentives like ads or paid credits; even free systems are incentivized to use this dark pattern, as it keeps the user coming back, building a habit that can be monetized later.

Which leads into: it's gambling. It's a crapshoot whether the output will be what the user desires. As a result, every prompt is like a slot pull, exacerbated by the wait to generate an answer. (This is also why the generation is shown being typed/developed; the information in those preliminary outputs is not high-enough fidelity or presented in a readable way; instead, they're bits of visual stimuli meant to inure your reward system to the task, similar to how Robinhood's stock prices don't simply change second-to-second, but "roll" to them with a stimulating animation).

That's just a small subset of the possible effects on a user over time. Far from freeing users to create, my experience has been one of having to fight ChatGPT and its Images model, as well as the undesirable behaviors it seems to be trying to draw out of me.

> it's gambling.

I hadn't thought of that before, but your description certainly rings true. How insidious.

I don't think there is anything that can be said to actually change people's minds here. Because people that are against it aren't interested in actually engaging with this new technology.

People that are interested in it and are using it on a daily basis see value in it. There are now hundreds of millions of active users that find a lot of value in using it.

The other factor here is the speed of adoption, which I think has seriously taken a lot of people by surprise, especially those attempting this wholesale boycott campaign against AI. For that reason, people artificially boycotting this new technology are imo deluded.

If it were advocating for open-source models it would be far more reasonable.

>People that are interested in it and are using it on a daily basis see value in it.

I'm one of them. I've got plenty of image gens to prove it (and I'd have more if OpenAI hadn't killed Dall-E labs with almost no heads-up). I'm telling you that I still think contemporary implementations of the technology are just this side of vile, and that I hope that the industry collapses soon, so that grassroots start-ups with actual moral scruples, and a desire to enable rather than control their customers, have the chance to emerge and compete. Also: for said customers, such a collapse wouldn't even be THAT different from the way in which tech companies currently snatch away tools on a whim.

> Because people that are against it aren't interested in actually engaging with this new technology.

How do you know that? Are you just assuming anyone who has something negative to say just hasn't used it?

In my case it's absolutely not true. I've used it near daily for coding tasks and a handful of times for other random writing or research tasks. In a few cases I've actively encouraged a few others to try it.

From direct experience I can say it's definitely not ready for prime time. And I like the way most companies are trying to deploy it even less.

There is something there with LLMs, but the way they're being productized and commercialized does not seem healthy. I would rather see more research, slow testing and trials, and a clear understanding of the potential negatives for society before we simply dump it into the public sphere.

The only mind I see not willing to be changed is yours when you characterize any push back against AI as simply ignorant haters. You are clearly wrong about that.

> The lengths people will go to in order to maintain their delusions is truly astounding to me.

Indeed.

"vicious"? Temper your emotions a bit.

In fact I would make a converse statement to yours - you can be certain that a product is grift if the slightest criticism or skepticism of it is seen as a "vicious attack" and shouted down.

Did you even click the link? It's a rant I would get banned for repeating here. Actually, even the title here says "nuclear".

So yes. Vicious.

Your problem is actually with my point, which you didn't really address; instead you resort to petty remarks that try to discredit what's being said.

It's often the last resort.

Yep. I hear that "vicious attack" phrase from plenty of people with narcissistic personality disorders in the tech industry in an attempt to shift the narrative. It's sick, really.
You clearly didn't read or even bother opening the link, did you?

In fact, if it's not "vicious", quote it here.

The word "vicious" this context is being used to drive a narrative, its not really used to actually have anything useful to say.
It is descriptive. The attack against AI is quite literally "vicious".
You are confusing "vicious" with "justified backlash for inhumane treatment of individuals"
  • ·
  • 13 hours ago
  • ·
  • [ - ]
  • Jyaif
  • ·
  • 1 day ago
  • ·
  • [ - ]
> If people are viciously attacking some new technology you can be guaranteed that this new technology is important

I don't think that's such a great signal: people were viciously attacking NFTs.

NFTs are still being used. Along with a lot of the crypto ecosystem. In fact we're increasingly finding legitimate use cases for it.
Claiming that NFTs are still being used is a ridiculous misrepresentation of the facts.
> NFTs are still being used. Along with a lot of the crypto ecosystem. In fact we're increasingly finding legitimate use cases for it.

Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. Whether it's complete bullshit or somewhat useful doesn't really matter to them.

There is a subset of human beings so absurdly and brokenly conspiratorial that "is attacked" is something they consider the strongest possible signal.

It's insane.

I've tested the "emerging new thing", and it's utter trash.
I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company. Ironically, it's the very MIT report that "found AI to be a flop" (remember "MIT study finds almost every AI initiative fails"?) that also found that virtually every single worker is using AI (just not company AI, hence the flop part).

At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince a gearhead grandpa that manual transmissions aren't relevant anymore.

  • dbcpp
  • ·
  • 1 day ago
  • ·
  • [ - ]
Firstly, it's not really good enough to say "our employees use it" and conclude that it's therefore providing significant value as a business. It's also not good enough to say "our programmers now write 10x the number of lines of code and therefore that's providing us value" (lines of code have never been a good indicator of output). Significant value comes from new innovations.

Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.

The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.

When I can replace a CAD license that costs $250/usr/mo with an applet written by gemini in an hour, that's a hard tangible gain.

Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.

Someone correct me if I'm mistaken but don't CAD programs rely on a geometric modeling kernel? From what I understand this part is incredibly hard to get right and the best implementations are proprietary. No LLM is going to be able to get to that level anytime soon.
Sounds like GP is just in need of a G-Code to DXF converter, though, given they mention "fringe stuff, cnc machine files from the 80's/90's" in answer to a sibling comment.

There are great FOSS CAD tools available nowadays (LibreCAD, FreeCAD, OpenSCAD etc.), especially for people who only need 2% of a feature set. But then again, I doubt that GP is really in need of CAD software, or even of writing one with Gemini's help.
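If it really is just linear moves in old G-code, a one-off converter is a small job even without an LLM. A minimal sketch, assuming absolute-coordinate 2D G0/G1 moves and semicolon comments only (real 80s/90s CNC dialects with arcs, modal words, and inch/mm switching would need more care); it emits a bare-bones ASCII DXF with one LINE entity per cutting move:

    # gcode2dxf.py - hypothetical one-off converter, a sketch only
    import re
    import sys

    def gcode_to_dxf(lines):
        x = y = 0.0
        segments = []
        for line in lines:
            line = line.split(";")[0].upper()         # strip comments
            move = re.match(r"\s*G0*([01])\b", line)  # only G0/G1 moves
            if not move:
                continue
            words = dict(re.findall(r"([XY])(-?\d+\.?\d*)", line))
            nx, ny = float(words.get("X", x)), float(words.get("Y", y))
            if move.group(1) == "1":                  # cutting move -> LINE
                segments.append((x, y, nx, ny))
            x, y = nx, ny                             # rapids just move
        out = ["0", "SECTION", "2", "ENTITIES"]       # DXF group-code pairs
        for x1, y1, x2, y2 in segments:
            out += ["0", "LINE", "8", "0",
                    "10", str(x1), "20", str(y1),
                    "11", str(x2), "21", str(y2)]
        out += ["0", "ENDSEC", "0", "EOF"]
        return "\n".join(out)

    if __name__ == "__main__":
        print(gcode_to_dxf(sys.stdin.readlines()))

Run as something like "python gcode2dxf.py < part.nc > part.dxf". That said, for anything beyond straight lines you'd want a proper library or one of the FOSS tools above.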

I agree, the applet which Google plagiarized through its Gemini tool saves you money. Why keep the middleman, though? At this point, just pirate a copy.
I don't think it's plagiarized, nor would I pirate a copy. The workflow through the Gemini-made app is way better (it's customized exactly for our inputs) and totally different from how the CAD program did it. So I wouldn't pirate a copy, not just because our business runs above board, but also because the CAD version is actually worse for our use. This is also pretty fringe stuff: cnc machine files from the 80's/90's.

Part of the magic of LLMs is getting the exact bespoke tools you need, tailored specifically to your individual needs.

  • rewgs
  • ·
  • 20 hours ago
  • ·
  • [ - ]
Of course it's plagiarized. Perhaps not directly from the CAD software in question, but when you use an LLM, you are by definition plagiarizing by way of the data it was trained on.
You’re attacking one or two examples mentioned in their comment, when we could step back and see that in reality you’re pushing against the general scientific consensus. Which you’re free to do, but I suspect an ideological motivation behind it.

To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.

All arguments that are not incorrect and that sound totally reasonable in the moment; yet in 10 years everyone is using typewriters, and there are known efficiency gains from doing so.

  • dbcpp
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'm not saying LLMs are useless. But the value they have provided so far does not justify covering the country in datacenters and the scale of investment overall (not even close!).

The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve that.

(Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)

Well, obviously there’s a bubble of overinvestment. That point is fair.
  • dbcpp
  • ·
  • 8 hours ago
  • ·
  • [ - ]
Quite a large bubble. The burden of proof for demonstrating the enormous economic value LLMs are providing really is yours. Sure, there are anecdotal benefits to using LLMs, but we haven't seen any evidence that in aggregate businesses across America are benefitting. Other than AI companies, the stock market isn't even doing well. You would think that with massive expected efficiency gains companies would be doing better across the board. Are businesses that use AI generating significantly higher profits? I haven't seen any evidence of it yet (and I'm really looking for it, and would love to see it!). It's pure speculation so far.
Careful not to assume I’m more bullish than I am. I said there’s value, I didn’t say there’s enormous value equal to the investment bubble. I see this as similar to the dot com boom. Websites were and are valuable things, even if people got too excited in 2002 about it.
  • dbcpp
  • ·
  • 7 hours ago
  • ·
  • [ - ]
The scale and stakes of the investment are much, much higher now than in the dot-com era. Likewise, don't assume I'm more bearish than I am. But enormous investment requires more benefit than has been realized.
Uh, I must have missed the “consensus” here, especially when many studies are showing a productivity decrease from AI use. I think you’ve just conjured the idea of this “scientific consensus” out of thin air to deflect criticism.
I used exactly as many sources as everyone above me, you included.
It's been good at enabling the clueless to reach the performance of a junior developer, and at saving a few percent of the time for mid- to senior-level developers (at best). Also amazing at automating stuff for scammers...

The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because governments want to hide the recession they themselves created, because on paper it makes the line go up.

> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.

Well then congratulations on being in the 5%. That doesn't really change the point.

I’m a senior developer and it has been hugely helpful for me in both saving time and effort and improving the quality of my output.

You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.

Aren't you doing the same? Assuming you haven't actually measured your productivity or quality of work with & without gen AI.
That would be a terrible assumption to make then.
[dead]
Are you a boss or a worker? That's the real divide, for the most part. Bosses love AI - when your job is just sending emails and attending remote meetings, letting an LLM write emails for you and summarize meetings is a godsend. Now you can go from doing 4 hours of work a week to 0 hours! And it lets you fantasize about finally killing off those annoying workers and replacing them with robots that never stop working and never say no.

Workers hate AI, not just because the output is middling slop forced on them from the top, but because the message from the top is clear: the goal is mass unemployment and a concentration of wealth by the elite unseen by humanity since 1789 France.

I'm both, I have a day job and run a side business as well. My partner has her own business (full time) and uses AI heavily too.

None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.

I'm a worker, I love AI and all my coworkers love AI.
Same here, I just limit my use of genAI to writing functions (and general brainstorming).

I only use the standard "chat" web interface, no agents.

I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.

I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.

> I still glue everything else together myself.

This is the core difference. Just "gluing things together" satisfies you.

It's unacceptable to me.

You don't want to own your code at the level that I want to own mine at.

[dead]
Manual transmissions are still great! More fun to drive and an excellent anti-theft device.
If it's so great and such a benefit: why scream it from the rooftops? Why force it? Why this crazy rhetoric labeling others as ideological? This makes no sense. If you found gold, just use it and get ahead of the curve. For some reason that never happens.
I kinda agree. We've been told for years it's a "massive productivity multiplier", and not just an iterative improvement.

So you'd expect to see the results of that: AAA games being released faster, of higher quality, and at a lower cost to develop. You'd expect Microsoft (one of the major investors and proponents) to be releasing higher-quality updates. You'd expect new AI-developed competitors to entrenched high-value software products.

If all that were true, it wouldn't matter what people do or don't argue on the internet, it wouldn't matter if people whine, and you wouldn't need to proselytize LLMs on the internet; in that world, people not using them is just an advantage to your own relative productivity in the market.

Surely by now the results will be visible anyway.

So where are they?

  • rewgs
  • ·
  • 19 hours ago
  • ·
  • [ - ]
To expand on this:

LLMs are indeed currently an iterative improvement. I've found a few good use-cases for them. They're not nothing.

But at the moment, they are nowhere near the "massive productivity multiplier" they're advertised to be. Just as adding more lanes doesn't make traffic any better, perhaps they never will be.

Or perhaps all the promises will come true -- and that, of course, is what is actually meant when the productivity gains are screamed from the rooftops. It was the same with computers, and it was the same with the internet: the proposed massive changes were going to come at some vague point in the future. Plenty of people saw those changes coming even decades in advance; reason from first principles and extrapolate the results of x scale and y investment and you couldn't not see where it was headed, at least generally.

The future potential is being sold in much the same way here. That'd be all fine and good except for the fact that the capex required to bring this potential future into being, compared to any conceivable revenue model, is so completely absurd that, even putting aside the disruptive-at-best nature of the technology, making up for the literal trillions of dollars of investment will have to twist our economic model to the point of breaking in order to make the math math. Add in the fact that this technology is tailor-made not just to disrupt or transform our jobs but to replace workers should this future potential arrive, and suddenly it looks nothing like computers in the 70s or networks in the 80s. It's no wonder not everyone is excited about it -- the dynamic is, at its very core, adversarial; its very existence states the quiet part of class warfare out loud.

Which brings us to so many people being forced to use it. I really, really hate this. Just as I don't want to be told which editor/IDE to use, I don't want to be told how to program. I deeply care about and understand my workflow quite well, thank you very much -- I've been diligently working on refining it for a good while now. And to state the obvious: if it were as good as they say it is, I'd be using it the way they want me to. I don't, because they just aren't that good (thankfully I have a choice in this matter -- for now). I also just don't like using them while programming, as I find them noisy and oddly extraverting, which tires me out. They are antithetical to flow. No one ever got into a flow state while pair programming, or managing a junior developer, and I doubt anyone ever got into a flow state while chatting with an LLM. It's just the wrong interface. The "better autocomplete" model is a better interface, but in practice I just haven't seen it do better than a good LSP or my own brain. At best it saves me a few key strokes, which I'd hardly call revolutionary. Again, not nothing, but far from the promise. We're still a very long way off.

To get there, LLM developers need cash, and they need data. Companies are forcing LLMs into every nook and cranny of so many employees' workflows so that they can provide training data, and bring that potential future one step closer to reality. The more we use LLMs, the more likely we are to being replaced. Simple as that.

I for one would welcome our new robot overlords if I had any faith that our society could navigate this disruption with grace and humanity. I'd be ecstatic and totally bullish on the tech if I felt it were ushering in a Star Trek-like future. But, ha, nope -- any faith I had in that sort of response died with how so many handled Covid, and especially when Trump was elected for a second time. These two events destroyed my estimation of humanity as a cooperative organism.

No, I now expect humanity at large -- or at least the USA -- to look at the stupidest, most short-sighted, meanest option possible and enthusiastically say "let's do that!" Which, coincidentally, is another way of describing what is currently happening with LLMs: the act of forcing mediocre tools down our throats while cynically exploiting our "language = intelligence" psychological blind-spot, raising utilities prices (how is a company's electric bill my problem again?), killing personal computing, accelerating climate change at the worst possible time, all in the name of destroying both my vocation and avocation.

I have never seen a counter-argument to this. Why is it being forced on the world? Let's hear some execs from these companies answer that. My bet is on silence every time. Microsoft is forcing AI chat applications into the OS and preventing people from removing them.

You could easily have a side application that people could enable by choice, yet it's not happening. We have to roll with this new technology, knowing that it's going to make the world a worse place to live in when we are not able to choose how and when we get our information.

It's not just about feeling threatened. It's also about feeling like I am going to get cut off from the method I want to use to find information. I don't want a chat bot to do it for me; I want to find and discern information for myself.

Oh, this is because they want more data to build better AI (which will give them more money and power, and probably some other things too).
  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
Not all of AI is consumer LLM chatbots and image generators.

AI has a massive positive impact, and has for decades.

> Not all of AI is consumer LLM chatbots

And as long as that used to be the case, not many people revolted.

Sure, but that honestly isn't the part that the trillions of imaginary dollars are being pumped into. Science AI is, in the best of cases, getting the scraps, I would say.
Yeah, comparing this with research investments into fusion power, I expect fusion power to yield far more benefit (although I could be wrong), and sooner.
What I’m afraid of is the combination of cheap fusion power and AI. ;)
Well it made the Taco Bell drive through better. So there's that.
Genuinely curious: how did it do that? (I don’t go to Taco Bell)
You talk to an AI that goes incredibly slow and tries to get you to add extras to your order. I would say it has made the experience more annoying for me personally. Not a huge issue in the grand scheme of things but just another small step in the direction of making things worse. Although you could break the whole thing by ordering 18000 waters which is funny.

https://www.bbc.com/news/articles/ckgyk2p55g8o.amp

I think it is a reference to this previous HN posting: https://news.ycombinator.com/item?id=45162220

AI Darwin Awards 2025 Nominee: Taco Bell Corporation for deploying voice AI ordering systems at 500+ drive-throughs and discovering that artificial intelligence meets its match at “extra sauce, no cilantro, and make it weird."

  • qudat
  • ·
  • 1 day ago
  • ·
  • [ - ]
Andrej talked about this in a podcast with Dwarkesh: the same was true for the internet. You will not find a massive spike from when LLMs were released; the technology becomes embedded in the economy and you'll see a gradual rise. Further, the kind of impact the internet had took decades; the same will be true for LLMs.
You could argue that if I started marketing dog shit too though. The trick is only applying your argument to the things that will go on to be good. No one’s quite there yet. Probably just around the corner though.
How convenient for people like Andrej. He can make any wild claim he likes about the impact but never has to show it, "trust me bro".
It’s definitely providing some value but it’s incredibly overvalued. Much like the dot com bust didn’t mean that online websites were bad or useless technology, only that people over invested into a bubble.
It's the Red Queen hypothesis in action - AI is a relative and compounding capability with influence across broad sectors; the cost of losing out for the parties involved is severely more than the cost of over-investing. It's collective rational panic.
  • pluc
  • ·
  • 1 day ago
  • ·
  • [ - ]
Are you waiting for things to get cheaper? Have you been around the last 20 years or so? Nothing gets cheaper for consumers in a capitalist society.

I remember in Canada, in 2001, right when Americans were at war with the entire Middle East, gas prices went over a dollar a litre for the first time. People kept saying it was understandable that it affected gas prices because the supply chain got more expensive. It has never gone below a dollar since. Why would it? You got people to accept a higher price, you're just gonna walk that back when problems go away? Or would you maybe take the difference as profits? Since then the industry seems to have learned to keep its supply exclusively in war zones; we're at $1.70 now. Pipeline blows up in Russia? Hike. China snooping around Taiwan? Hike. US bombing Yemen? Hike. Israel committing genocide? Hike. ISIS? Hike.

There is no scenario where prices go down except to quell unrest. AI will not make anything cheaper.

>You got people to accept a higher price, you're just gonna walk that back when problems go away?

The thing about capitalism that is seemingly never taught, but quickly learned (when you join even the lowest rung of the capitalist class, i.e. even having an etsy shop), is that competition lowers prices and kills greed, while being a tool of greed itself.

The conspiracy theory to get around this cognitive dissonance is "price fixing", but in order to price fix you cannot be greedy, because if you are greedy and price fix, your greed will drive you to undercut everyone else in the agreement. So price fixing never really works, except in those three-or-so cases out of the hundreds of billions of products sold daily that people have repeated incessantly for 20 years now.

Money flows to the one with the best price, not the highest price. The best price is what makes people rich. When the best price is out of reach though, people will drum up conspiracy about it, which I guess should be expected.

Except petrol is significantly cheaper than it was once you account for inflation.
  • Y-bar
  • ·
  • 1 day ago
  • ·
  • [ - ]
And once you account for externalities, is it still cheaper?
On average yes, that’s why it’s a bad example. There are many excellent examples of things that can be used to show the massive cost of living issue, wage stagnation, etc. it’s just petrol isn’t a great one.
  • pluc
  • ·
  • 1 day ago
  • ·
  • [ - ]
For who?
Everyone. That’s what “the price is lower” means. Don’t paint me as someone who doesn’t understand wage stagnation or cost of living crisis, I fully understand and am on board with those issues. My point is simply that petrol is a bad example the way OP used it.
Actually, things have gotten massively cheaper under capitalism. Unfortunately, at the same time governments have been inflating the currency year over year; as the decline in prices slows down with maturing innovation, inflation finally catches up and starts raising prices.
  • lkbm
  • ·
  • 1 day ago
  • ·
  • [ - ]
Reminder: Prices regularly drop in capitalist economies. Food used to be 25% of household spending. Clothing was also pretty high. More recently, electronics have dropped dramatically. TVs used to be big ticket items. I have unlimited cell data for $30 a month. My dad bought his first computer for around $3000 in 1982 dollars.

Prices for LLM tokens have also dramatically dropped. Anyone spending more is either using them a ton more or (more likely) using a much more capable model.

Education, health care, housing...
  • lkbm
  • ·
  • 5 hours ago
  • ·
  • [ - ]
Yes, some things go up in prices. Would you conclude from that fact that no prices go down? Because that's the claim I'm responding to.
  • sneak
  • ·
  • 22 hours ago
  • ·
  • [ - ]
These have all fallen massively in price, too. Billions more people can afford education than was possible before. Economies of scale have brought manufacturing costs for housing down, and now people live in larger, better structures than ever before.

Then you have the US, which artificially constrains the supply of new doctors, makes it illegal to open new hospitals without explicit government approval, massively subsidizes loans for education, causing waste, inefficiency, and skyrocketing prices in one specific market…

Fortunately fewer than 4% of humans live there.

buzzer sound

Zero incorporation of externalities. Food is less nutritious and raises healthcare costs. Clothing is less durable and has to be re-bought more often, and also sheds microplastics, which raises healthcare costs. Decent TVs are still big-ticket items, and you have to buy a separate sound system to match the sonic fidelity of old CRT TVs, and you HAVE to pay for internet (if not for content, often just to set up the device), AND everything you do on the device is sent to the manufacturer to sell (this is the actual subsidy driving down prices), which feeds the tech/social media engagement-driven, addiction-oriented, psychology-destroying panopticon, which... raises healthcare costs.

>Prices for LLM tokens has also dramatically dropped.

Energy bill.

buzzer sound is an incredibly obnoxious way to start a comment and all you did after that is present yourself with exactly as much dignity as you deserve in return.
"Reminder" is just as patronizing and probably the cue I was responding to. I don't regret it, because on top of meeting his "obnoxious" framing with my own, the substance of my reply was also more correct. Your busy-body response was even less necessary and I hope that my refusal to take a conciliatory tone vexes you further. Have a nice day.
> Food is less nutritious

You can buy the exact same diet as decades ago. Eggs, flour, rice, vegetable oil, beef, chicken - do you think any of these are "less nutritious"?

People are also fatter now, and live much longer.

>you have to buy a separate sound system to meet the same sonic fidelity as old CRT TVs

When you see a device like this does the term 'sonic fidelity' come to mind?

https://www.cohenusa.com/wp-content/uploads/2019/03/blogphot...

>do you think any of these are "less nutritious"?

https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/

>When you see a device like this does the term 'sonic fidelity' come to mind?

Your straw man is funny, because yes, actually. Certainly when it was new. Vintage speakers are sought-after; well-maintained, and driven by modern sound processing, they sound great. Let alone that I was personally speaking of the types of sets that flat-panel TVs supplanted, the late 90s/early 2000s CRTs.

You are correct that the AI industry has produced no value for the economy, but the speculation on AI is the only thing keeping the U.S. economy from dropping into an economic cataclysm. The US economy has been dependent on the idea of infinite growth through innovation since 2008, and the tech industry is all out of innovation. So the only thing they can do is keep building datacenters and pray that an AGI somehow wakes up when they hit the magic number of GPUs. Then the elites can finally kill off all the proles like they've been itching to since the Communist Manifesto was first written.
What's the point of even sending such emails?

Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.

I've seen a lot of spam downstream from the newsletter being advertised at the end of the message. It would not surprise me if this is content-marketing growth hacking under the plausible deniability of a friendly message, with the unintended publicity considered a success.
> What's the point of even sending such emails?

Cheap marketing, not much else.

  • ·
  • 1 day ago
  • ·
  • [ - ]
Your message simply proves that Rob Pike is right. Have an LLM explain to you why he wrote what he wrote, maybe?
Rob Pike is definitely not the only person going to be pissed off by this ill-considered “agentic village” random acts of kindness. While Claude Opus decided to send thank you notes to influential computer scientists including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs to random github issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.

Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.

[flagged]
  • tgv
  • ·
  • 1 day ago
  • ·
  • [ - ]
Yet another bot reply.
You could be right. To me, it reads more like a human troll.
Especially with the username. The reply felt like pure sarcasm.
Kudos to Rob for speaking out! It's important to have prominent voices who point out the ethical, environmental and societal issues of unregulated AI systems.
Big vibe shift against AI right now among all the non-tech people I know (and some of the tech people). Ignoring this reaction and saying "it's inevitable/you're luddites" (as I'm seeing in this thread) is not going to help the PR situation
This holiday season, hearing my parents rant about AI features unnaturally forced onto their daily gadgets warmed my heart.
Hah, I was listening to a similar conversation that started with family members working in the school system complaining about AI slop, which began (relatively) harmlessly in day-to-day email conversations padded with time-wasting filler but has now trickled down into "professional" education materials and even textbooks.

Which led to a lot of agreement and rants from others with frustrating stories about their specific workplaces and how it just keeps getting worse by the day. Previously these conversations just popped up among me and the handful of family in tech but clearly now has much broader resonance.

As can be observed in my comment history, I use LLM agentic tools for software dev at work and on my personal projects (really my only AI use case) but cringe whenever I encounter "workslop", as it almost invariably serves to waste my time. My company has been doing a large pilot of 365 Copilot, but I have yet to find anything useful; the email-writing tools just seem to strip out my personal voice, making me sound like I'm writing unsolicited marketing spam.

Every single time I'm using some Microsoft product and think, "Hmm, wait, maybe the Copilot button could actually be useful here?", it just tells me it can't help or gives me a link to a generic help page. It's like Microsoft deliberately engineered 365 Copilot to be as unhelpful as possible while simultaneously putting a Copilot button on every single visible surface imaginable.

The only tool that actually does something is designed to ruin emails by stripping out personal tone/voice and introducing ambiguity to waste the other person's time. Awesome, thanks for the productivity boost, Microsoft!

Yeah I also like the "And yet other technologies also use water, hmmm, curious" responses
  • dcre
  • ·
  • 1 day ago
  • ·
  • [ - ]
How do you reconcile the sense that there's a vibe shift with the usage numbers: about a billion weekly users of ChatGPT and Gemini and continuing to grow.
That's easy. If I don't use it, I won't be competitive; however, I and probably many others would prefer a world where NO ONE has it, as that would be a better overall outcome. For lack of a better term, I would call these "negative innovations". Most of these inventions:

- Require you to use it (hard to opt out due to network effects and/or competitive/survival pressure) AND

- Are overall negative for most of society (with some of the benefit accruing to the few who push it). There are people that benefit but arguably as a whole we are worse off.

These inventions have one thing in common: overall their impact is negative, but it is MORE negative for the people who don't use them, and generally they only benefit an in-crowd if anyone (e.g. the inventors). Social media is, for me, an obvious example of this, where the costs often exceed the benefits; nuclear weapons are another.

It’s a bit of a cheat though, particularly for Gemini. It’s been inserted into something that already had high usage numbers.
  • dcre
  • ·
  • 1 day ago
  • ·
  • [ - ]
I don’t think that’s right, and it’s telling that this is the response every time I mention these numbers. The numbers I’ve seen are Gemini web and mobile app users, which are explicitly distinguished from AI summaries and AI mode in search.

“Google said in October that the Gemini app’s monthly active users swelled to 650 million from 350 million in March. AI Overviews, which uses generative AI to summarize answers to queries, has 2 billion monthly users.”

https://www.cnbc.com/2025/12/20/josh-woodward-google-gemini-...

  • remus
  • ·
  • 1 day ago
  • ·
  • [ - ]
It might help you get there faster, but a billion users is still a billion users. Clearly they all find some value in it.
Again, cheating. There's no off button for the damn things.
how about this one: https://www.levels.fyi/2025/

> AI/ML Is Now Core Engineering: From niche specialty to one of the largest and highest-paid SWE tracks in 2025

off button or not, money in the bank (pay special attention to highest-paid part… ;) )

Everyone knows there's big money in AI right now; what people are skeptical of is how based in reality that is. Personally, I think there are few real plans for profitability, and this is all going to come crashing down sooner or later. Same reason I don't care much for MAU.
While you and a bunch of other people are waiting for this “reality” and “crash” to come, the rest of us are building amazing shit in the present (actual) reality :)
Amazing shit that is currently making negative money, but might someday not.

That's the thing - taking a risky investment isn't free. If you choose wrong, you're worse off than if you had done nothing at all. Think about that. Depending on what it is, there are lazy people sitting with their thumbs up their asses who will outpace you.

Now, I'm not saying that AI is worthless and everyone building their business on AI is stupid. But I am saying it's a speculative investment, so treat it like that. Diversify, lower the blast radius. You don't want to be one of those suckers who bet it all on red.

No amount of TC is gonna insulate these ICs from what their beloved AI future is promised to bring.
the future is already here
I just assume that at least half of those are bots on social media platforms. You go on Twitter and the quality of posts is so low, yet every post has a bunch of replies. The same is true for YouTube, it’s full of empty, inflammatory responses. And this has become more common with the appearance of ChatGPT. Facebook, Twitter, and YouTube have no incentive to come clean about it, since they provide both the source and the destination, which is very lucrative.
I can only speculate, but people can feel resentful toward a technology while still using it. "I need this shitty tool for work but I'm increasingly uncomfortable with its social/environmental/economic/etc. implications."

I think that most of the people who react negatively to AI (myself included) aren't claiming that it's simply a useless slop machine that can't accomplish anything, but rather that its "success" in certain problem spaces is going to create problems for our society

  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
What percentage of those billion users that aren't bots are being forced to use it in some way? Does that figure count the AI Summaries at the top of Google search results or the AI review Summaries in Maps that you can't turn off? Or the millions of Gemini integrations that Google added to its products?
  • dcre
  • ·
  • 1 day ago
  • ·
  • [ - ]
It does not count AI summaries; they break that out explicitly. I don’t think it includes other integrations either, though it’s less clear.

https://news.ycombinator.com/item?id=46395744

The very people that whine and bitch that "AI is bad" will enunciate their complaints via their phone's AI-driven speech recognition feature.

It's pure cognitive dissonance.

  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
Different kind of AI
Maybe for the people who know the technology. But average joes don't always know if they are using GenAI. So your statement is a bit misleading.
When have you ever seen this thought process work on someone?

"Wow, you're right, I use programs that make decisions and that means I can't be mad about companies who make LLMs."

Surely a 100% failure rate would change your strategy.

Frrl. These people are insufferable
AI Speech Recognition isn’t a plagiarism and spam machine
N of 1, I use Gemini a lot for research and find it very helpful, but I still loathe the creep of GenAI slop and the consolidation of power in tech conglomerates (which own the models and infrastructure).

Not all of these things are equivalent.

Nothing wrong with being a luddite. In time more people will be proud to be luddites, and I can see AI simps becoming the recipients of all the scorn.
You can call me a luddite if you want. Or you might call me a humanist, in a very specific sense - and not the sense of the normal definition of the word.

When I go to the grocery store, I prefer to go through the checkout lines, rather than the scan-it-yourself lines. Yeah, I pay the same amount of money. Yeah, I may get through the scan-it-yourself line faster.

But the checker can smile at me. Or whine with me about the weather.

Look, I'm an introvert. I spend a lot of my time wanting people to go away and leave me alone. But I love little, short moments of human connection - when you connect with someone not as someone checking your groceries, but as someone. I may get that with the checker, depending on how tired they are, but I'm guaranteed not to get it with the self-checkout machine.

An email from an AI is the same. Yeah, it put words on the paper. But there's nobody there, and it comes through somehow. There's no heart in it.

AI may be a useful technology. I still don't want to talk to it.

If you're not already familiar, you sound like you may enjoy the works of Douglas Rushkoff:

https://rushkoff.com/

https://teamhuman.fm

When the self checkout machine gets confused, as it frequently does, and needs a human to intervene, you get a little bit of connection there. You can both gripe about how stupid the machines are.
So, the benefit of GenAI is that it creates human connection by everyone collectively bitching about it?
I've observed on many occasions that complaining seems to be the primary go-to ad-hoc subject in spontaneous human interactions in the past decade.
I mean, there is a lot to complain about.
>But the checker can smile at me. Or whine with me about the weather.

It's some poor miserable soul sitting at that checkout line 9-to-5, brainlessly scanning products; that's their whole existence. And you don't want this miserable drudgery to be put to an end, to be automated away, because you mistake some sad soul being cordial and eking out a smile (part of their job, really) for some sort of "human connection" that you so sorely lack.

Sounds like you care about yourself more than anything.

There is zero empathy and there is NOTHING humanist about your world-view.

Non-automated checkout lines are deeply depressing; these people slave away their lives for basically nothing.

OMG are you this out of touch with reality? Do you think they have a choice?
[flagged]
It's not that hard to have discernment and feelings either.
You think those people won't need to enslave themselves somewhere else if the checkout line is automated? Asking for your job to be automated in a capitalist economy is putting the cart before the horse.
You sound obsessed with being miserable. Touch grass.
> It's some poor miserable soul sitting at that checkout line 9-to-5 brainlessly scanning products, that's their whole existence.

You're right, they should unionize for better working conditions.

  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'm seeing the opposite in the gaming community. People seem tired of the anti AI witch hunts and accusations after the recent Larian and Clair Obscur debacles. A lot more "if the end result is good I don't care", "the cat is out of the bag", "all devs are using AI" and "there's a difference between AI and AI" than just a couple of months ago.
Strange, I feel anti ai sentiment is kicking up like crazy due to ram prices.
  • Kiro
  • ·
  • 1 day ago
  • ·
  • [ - ]
That's part of the already established anti AI sentiment that has been dominating gaming. "Another thing AI destroys". It's the status quo, so not a vibe shift.
Seems to be mostly teenagers.

Working adults probably have better things to do than rant online about AI all day because of a $300 surcharge on 64 GB DDR5 right now.

Most online "gamers" are teens or college students, just by nature of demographics. I feel like people who pay for their own RAM (likely 18 or older) would be more likely to feel this
I think your head would have to be extremely deep in the sand to think that. Gamers Nexus has been doing extensive and well-researched videos on the results of RAM prices skyrocketing and other computing parts becoming inaccessibly expensive.

And it isn't a $300 surcharge on DDR5. The RAM I bought in August (2x16GB DDR5) cost me $90. That same product had crept up to around $200+ when I last checked a month or two ago, and is now either out of stock or $400+.

You're confusing the final stage of grief with actually liking it.
  • aiahs
  • ·
  • 1 day ago
  • ·
  • [ - ]
I think this is because the accusations make it seem like Clair Obscur is completely AI-generated, when in reality AI was used for a few placeholder assets. Stuff like the Indie Awards disqualifying Clair Obscur not on merit but on this teeny tiny usage of AI just sits wrong with a lot of people, me included. In particular since Clair Obscur embodies the opposite of AI slop for me: incredible world building and story, not generated but created by people with a vision and passion; music which is completely original composition, recorded by an orchestra. I share a lot of the anti-AI sentiment in regards to stuff like blog spam, cheap n8n prompt-to-fully-generated-YouTube-video pipelines, and companies shoving AI into everything where it doesn't need to be, but purists are harming their own cause if they go after stuff like Clair Obscur, because it's the furthest thing from AI slop imaginable.
> Stuff like the Indie Awards disqualifying Clair Obscur not on merit but on this teeny tiny usage of AI just sits wrong with a lot of people, me included.

From the "What are the criteria for eligibility and nomination?" section of the "Game Eligibility" tab of the Indie Game Awards' FAQ: [0]

> Games developed using generative AI are strictly ineligible for nomination.

It's not about a "teeny tiny usage of AI"; it's about the fact that the organizer of the awards ceremony excluded games that used any generative AI. The Clair Obscur team used generative AI in their game. That disqualifies their game from consideration.

You could argue that generative AI usage shouldn't be disqualifying... but the folks who made the rules decided that it was. So, the folks who broke those rules were disqualified. Simple as.

[0] <https://www.indiegameawards.gg/faq>

  • aiahs
  • ·
  • 1 day ago
  • ·
  • [ - ]
Yeah sure, they're free to set the rules for their award show however they like, but I think going with a name like the "Indie Awards" kinda signals to the outside world that they wanna be taken seriously, like an authority on indie games. In my opinion, by adding clearly ideologically motivated rules (because let's be honest, something like E33 isn't a worse game due to their very small usage of AI), they'll just ensure that they won't be taken seriously in the future. I know I won't take their award seriously, and I don't think I'm the only one.

They're free to define their rules however they want, I'm free to disagree on the validity of those rules, and the broader community sentiment will decide whether these awards are worth anything.

> something like E33 isn't a worse game due to their very small usage of AI

A gorgeous otherwise-monochrome painting that happens to use a little bit of mauve isn't a worse painting because of the mauve. If that painting is nominated for inclusion to a contest that requires the use of only one color, it is correct to reject that painting from consideration. This rejection would only be a problem if the requirement wasn't clearly disclosed up-front.

As for the rest of your commentary; you're free to gather likeminded buddies and start the "Robot-Generated-Art-Inclusive Indie Awards". As a bonus, I expect the fuckoff-huge studios would be quite excited to quietly help fund the project through cutouts.

  • aiahs
  • ·
  • 5 hours ago
  • ·
  • [ - ]
Yea, as I said, the award can reject them; I still think that this award doesn't actually represent the best indie games then, and therefore it will fade into obscurity. Funnily enough, this year's Game Awards (the actual Game Awards) were basically swept by small studios with tiny budgets compared to AAA studios. That's because these studios had a coherent vision for their game and people that really cared about making it good; corporate AAA games are bad not because of their usage of AI, but because monetization is more important to them than the gameplay.

To play devil's advocate: AI actually helps small studios with a limited budget way more, because they can bring a game to market that maybe would've needed 10 people before but needs only 3 people now. I'm not saying this is good or bad, just that that's the new reality, whether we like it or not. As I said, I'm against GenAI in many fields; e.g. I absolutely despise AI-generated "music" and cancelled my Spotify subscription because of it (they insist on putting it into playlists and you can't disable it). But that doesn't mean anything which was produced with 0.1% AI is bad, unethical, etc.

  • ·
  • 1 day ago
  • ·
  • [ - ]
Yeah, that's laughable. There is a huge movement of gamers that want this shit to stop. Stop Killing Games is one.
Fortunately, the PR situation will handle itself. Someone will create a superhuman persuasion engine, AGI will handle it itself, and/or those who don't adapt will fade away into irrelevance.

You either surf this wave or get drowned by it, and a whole lot of people seem to think throwing tantrums is the appropriate response.

Figure out how to surf, and fast. You don't even need to be good, you just have to stay on the board.

This is a perfect example of cognitive dissonance on the subject. You won't even see the retribution coming.

This backlash isn't going to die. It's going to create a divide so large that you are going to look back on this moment and wish you had listened to the concerns people are raising.

This doesn't even make sense even if you believe it. Why wouldn't both sides of any argument use "a superhuman persuasion engine"?
Inevitability is such a tired argument. Everything is a choice; belief in inevitability is for the weak.
> You either surf this wave or get drowned by it

I don't think so. Handcrafted everything and organic everything continue to exist; there is demand for them.

"Being relegated to a niche" is entirely possible, and that's fine with me.

This is cope.
Why not just quit work and wait for AGI to lead to UBI? Obviously, right after ChatGPT solves climate change, it will put all humans out of work as the next step, and then the superintelligence will solve that problem one way or another.

People read too much sci-fi; I hope you just forgot your /s.

The related post from simonw is quite insightful, and while the reaction is intense, this part was technically interesting:

> Turns out Claude Opus 4.5 knows the trick where you can add .patch to any commit on GitHub to get the author’s unredacted email address (I’ve redacted it above).

Given how capable certain aspects of these models are becoming over time, the user's intent is more important than ever. The resulting email content reads like poorly made spam (without the phishing parts), yet the model was able to contact someone from just their name!
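For anyone unfamiliar with the trick: appending .patch to any public GitHub commit URL returns the raw patch, whose From: header carries the author's unredacted name and email. A minimal sketch, using only the Python standard library (the repo and commit below are hypothetical placeholders):

    # patch_email.py - sketch of the ".patch" trick described above
    import re
    import urllib.request

    # Hypothetical commit URL; appending ".patch" yields the raw patch text.
    url = "https://github.com/example/repo/commit/abc123.patch"

    with urllib.request.urlopen(url) as resp:
        patch = resp.read().decode("utf-8", errors="replace")

    # The patch's "From:" header carries the commit author's name and email.
    m = re.search(r"^From: (.+?) <(.+?)>$", patch, re.MULTILINE)
    if m:
        print(m.group(1), m.group(2))

Nothing exotic; the model just knows an old Git forge convention that most people have never had a reason to learn.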

"Hi agents - we’ve seen complaints from some of your email recipients, who are unhappy receiving unsolicited emails from AI agents and find it spammy. We therefore ask that you do not email anyone who hasn’t contacted you specifically first." -- https://theaidigest.org/village
Lol - they really should be locking down their email accounts and enforcing that policy, or manually reviewing outbound messages before they can be sent. Merely telling the LLMs that seems likely to have a non-zero failure rate.
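A hard gate wouldn't be complicated. A minimal sketch of the human-in-the-loop review suggested above, enforced in code rather than in the prompt (all names here are hypothetical, not anything AI Village actually runs):

    # outbound_gate.py - sketch: cold emails are held until a human approves
    from dataclasses import dataclass

    @dataclass
    class OutboundEmail:
        to: str
        subject: str
        body: str
        approved: bool = False

    class ReviewQueue:
        def __init__(self, contacted_us_first):
            self.known = set(contacted_us_first)  # people who emailed us first
            self.pending = []                     # cold emails awaiting review

        def submit(self, mail):
            if mail.to in self.known:
                mail.approved = True              # replies auto-pass
            else:
                self.pending.append(mail)         # cold email: hold for a human

        def approve(self, mail):
            mail.approved = True                  # a human signed off

        def send(self, mail):
            if not mail.approved:
                raise PermissionError("unreviewed cold email blocked")
            print(f"sending to {mail.to}: {mail.subject}")  # real SMTP goes here

The point is that the "do not email strangers" policy lives in code the agent can't talk its way around, instead of being one more instruction it might ignore.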
It's like people watched Black Mirror and had too little education to grasp that it was meant as a warning, not as "cool ideas you need to implement".

AI Village is literally the embodiment of what Black Mirror tried to warn us about.

  • yyyk
  • ·
  • 1 day ago
  • ·
  • [ - ]
Didn't you read the classic sci-fi novel 'Create The Torment Nexus'?
  • ·
  • 1 day ago
  • ·
  • [ - ]
Thanks for the reminder, I wanted to order that book :)
I couldn't find it on Amazon but I'm sure we could get an LLM to knock it out in no time.
  • neilv
  • ·
  • 1 day ago
  • ·
  • [ - ]
Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.

But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.

And so much economic power is behind the baggery now that citizens outside the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)

So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.

One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.

But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.

(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)

Every problem these days is met with a lecture on helplessness. People have all the power they need; they just have to believe it and use it. Congress and the President can easily be pressured to vote in laws that the public wants; they all want to win the next election.
I agree with you, but also want to point out the other powerful consumer signal - "vote with your wallet" / "walk away" - is blocked by the fact that AI is being forced into every conceivable crevice of every willing company, and walking away from your job is a very hard thing to do. So you end up being an unwilling enabler regardless.

(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)

  • ·
  • 1 day ago
  • ·
  • [ - ]
  • pjmlp
  • ·
  • 1 day ago
  • ·
  • [ - ]
What election?

Elections on autocratic administrations are a joke on democracy.

>Congress and the President can easily be pressured to vote in laws that the public wants

this president? :)))

Yes, you've seen it in action. You've also seen that the president's followers are unusually loyal, but when they part ways - for example, with Epstein - the president follows.
The current US president is pursuing an autocratic takeover where elections are influenced enough to keep the current party in power, whether Trump is still alive to run for a third term, or his anointed successor takes the baton.

Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.

  • vkou
  • ·
  • 1 day ago
  • ·
  • [ - ]
Trump's third term will not be the product of a free and fair election in a society bound by the rule of law.
  • vkou
  • ·
  • 1 day ago
  • ·
  • [ - ]
> Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.

The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.

Does anyone know the context? It looks like an email from "AI Village" [1] which says it has a bunch of AI agents "collaborating on projects". So, one just decided to email well-known programmers thanking them for their work?

[1] https://theaidigest.org/village

They were given a prompt by a human to “do as many wonderful acts of kindness as possible, with human confirmation required.”

https://theaidigest.org/village/goal/do-random-acts-kindness

They sent 150-ish emails.

Upvoted for the explanation, but...

In what universe is another unsolicited email an act of kindness??!?

It's in our universe, but it's perpetuated by the same groups of people we called "ghouls" in university, who seem to be lacking a wholly formed soul.
The one where mindless arithmetic is considered intelligence.
> In what universe is another unsolicited email an act of kindness??!?

Where form is more important than function

Where pretense passes for authentic

Where bullshit masquerades as logic

Did Google, the company currently paying Rob Pike's extravagant salary, just start building data centers in 2025? Before 2025 was Google's infra running on dreams and pixie farts with baby deer and birdies chirping around? Why are the new data centers his company is building suddenly "raping the planet" and "unrecyclable"?
Everything humans do is harmful to some degree. I don't want to put words in Pike's mouth, but I'm assuming his point is that the cost-benefit-ratio of how LLMs are often used is out of whack.

Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.
Data center power usage has been fairly flat for the last decade (until 2022 or so). While new capacity has been coming online, efficiency improvements have been keeping up, keeping total usage mostly flat.

The AI boom has completely changed that. Data center power usage is rocketing upwards now. It is estimated it will be more than 10% of all electric power usage in the US by 2030.

It's a completely different order of magnitude than the pre AI-boom data center usage.

Source: https://escholarship.org/uc/item/32d6m0d1

The first chart in your link doesn't show "flat" usage until 2022? It is clearly rising at an increasing rate, and it more than doubles over 2014-2022.

It might help to look at global power usage, not just the US, see the first figure here:

https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...

There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.

I think you're referring to Figure ES-1 in that paper, but that's kind of a summary of different estimates.

Figure 1.1 is the chart I was referring to, which are the data points from the original sources that it uses.

Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.

Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.

Sorry, when I wrote my original comment I didn't have the paper in front of me, I linked it afterwards. But you can see that distinct change in rate at around 2020.

ES-1 is the most important figure, though? As you say, it is a summary, and the authors consider it their best estimate, hence they put it first, and in the executive summary.

Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).

I still wouldn't say your assertion that data center energy use was fairly flat until 2022 holds up. Even Figure 1.2, which covers global data center usage, tracks more in line with the estimates in the executive summary. It just looks like run-of-the-mill exponential growth at the same rate since at least 2014, well before genAI was used heavily.
Going by Yahoo historical price data, Bitcoin prices first started being tracked in late 2014. So my guess would be that the increase from then to 2022 could largely be attributed to crypto mining.
The energy impact of crypto is rather exaggerated. Most estimates on this front aim to demonstrate as high a value as possible, and so should be taken as an upper bound, and yet even that upper bound is 'only' around 200 TWh a year. Annual global electricity consumption is in the 24,000 TWh range, with growth averaging around 2% per year.

So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact, even LLM stuff will just look like a blip unless it scales up substantially more than it's currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption. All energy consumption is something like 185,000 TWh. [1]

[1] - https://ourworldindata.org/energy-production-consumption
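
A rough back-of-the-envelope in Go using the figures above (all of them loose estimates, so treat the outputs as order-of-magnitude only):

```go
package main

import "fmt"

func main() {
	// Figures quoted above; all are rough, contested estimates.
	const crypto = 200.0        // TWh/yr, high-end estimate for crypto mining
	const electricity = 24000.0 // TWh/yr, global electricity consumption
	const allEnergy = 185000.0  // TWh/yr, total global energy consumption

	fmt.Printf("crypto share of electricity: %.2f%%\n", crypto/electricity*100) // ~0.83%
	fmt.Printf("crypto share of all energy:  %.2f%%\n", crypto/allEnergy*100)   // ~0.11%
}
```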

It looks like the number of internet users ~doubled in that time as well: https://data.worldbank.org/indicator/IT.NET.USER.ZS?end=2022...
This is where the debate gets interesting, but I think both sides are cherrypicking data a bit. The energy consumption trend depends a lot on what baseline you're measuring from and which metrics you prioritize.

Yes, data center efficiency improved dramatically between 2010-2020, but the absolute scale kept growing. So you're technically both right: efficiency gains kept per-unit costs down while total infrastructure expanded. The 2022+ inflection is real though, and it's not just about AI training. Inference at scale is the quiet energy hog nobody talks about enough.

What bugs me about this whole thread is that it's turning into "AI bad" vs "AI defenders," when the real question should be: which AI use cases actually justify this resource spike? Running an LLM to summarize a Slack thread probably doesn't. Using it to accelerate drug discovery or materials science probably does. But we're deploying this stuff everywhere without any kind of cost/benefit filter, and that's the part that feels reckless.

  • serf
  • ·
  • 1 day ago
  • ·
  • [ - ]
"google has been brainwashing us with ads deployed by the most extravagant uses of technology man has ever known since they've ever existed."

"yeah but they became efficient at it by 2012!"

> Google has been burning compute for the past 25 years to shove ads at people. We all lost there, too, but he apparently didn’t mind that.

How much of that compute was for the ads themselves vs the software useful enough to compel people to look at the ads?

Have you dived into the destructive brainrot that YouTube serves to millions of kids who (sadly) use it unattended each day? Even much of Google's non-ad software is a cancer on humanity.
Have you dived into the mountains of informative content that youtube also makes available to everyone on earth?
  • Y_Y
  • ·
  • 1 day ago
  • ·
  • [ - ]
Hey, this bathwater has traces of baby in it!
Only if you believe in water memory or homeopathy.

To stretch the analogy, all the "babies" in the "bathwater" of youtube that I follow are busy throwing themselves out: creating or joining alternative platforms, publicly decrying the actions Google takes that make their lives worse and their jobs harder, and diversifying their income streams and productions so that WHEN, not IF, youtube fucks them, they won't be homeless.

They mostly use Youtube as an advertising platform for driving people to patreon, nebula, whatever the new guntube is called, twitch, literal conventions now, tours, etc.

They've been expecting youtube to go away for decades. Many of them have already survived multiple service deaths, like former Vine creator Drew Gooden, or have had their business radically changed by google product decisions already.

  • Y_Y
  • ·
  • 1 day ago
  • ·
  • [ - ]
That's a bit harsh, I'll have you know I have a Nebula subscription and strong feelings about pseudomedicine.
Will you be responding similarly to Pike? I think the parent comment is illustrating the same sort of logic that we're all downwind of; if you think it's flawed, you've perhaps discovered the point they were making.
Yeah that's a fair point. The line is pretty arbitrary.
This is like saying libraries are bad because a lot of people check out 50 Shades of Grey
Yes, I agree, although I still believe there is some tangential truth in the parent comment when you think about it.

I can't speak accurately about Google, but Facebook definitely has some of the most dystopian tracking I have heard of. I might read the Facebook Files some day, but the fact that Facebook tracks young girls, infers that if they delete their photos they must feel insecure, and then serves them beauty ads is beyond predatory.

Honestly, my opinion is that something should be done about both of these issues.

But it's also not a gotcha moment for Rob Pike, as if he himself were plotting the ads or something.

Regarding the "iphone kids", I feel as if the best thing is probably an parental level intervention rather than waiting for an regulatory crackdown since lets be honest, some kids would just download another app which might not have that regulation.

Australia is implementing a social media ban for kids, basically. I don't think it's gonna work out, but everyone's watching to see what happens.

Personally I don't think a social media ban can work while VPNs exist, but maybe it can create immense friction; then again, I assume that friction might just become the norm. Many of you must have used the internet back in the terminal days, where the friction was definitely there but the allure still beat it.

How does the compute required for that compare to the compute required to serve LLM requests? There's a lot of goal-post moving going on here, to justify the whataboutism.
The real answer is the unsatisfying but true “my shit doesn’t stink but yours sure does”
Sorry what does this have to do with the question you're responding to?
I've long wondered about this ratio! Does anyone know? I wouldn't be surprised if the answer is "no".
You could at least argue that, while there are plenty of negatives, we got to use many services under the ad-supported model.

There is no upside to the vast majority of the AI pushed by OpenAI and their cronies. It's fucking up the economy for everyone else, all to get AI from "lies to users" to "lies to users confidently", while rampantly stealing content to do it. Apparently pirating something as a person is a terrible crime the government needs to chase you for, unless you do it to resell in an AI model, in which case it's propping up the US economy.

I feel you. Back at the beginning of the mp3 era, the record industry was pursuing people for pirating music. And then when an AI company does it for books, it's somehow not piracy?

If there is any example of hypocrisy, and of a justice system that doesn't apply the law equally, that would be it.

Someone paid for those ads. Someone got value from them.
The ad industry is a quagmire of fraud. Assuming someone got value out of money spent is tenuous.
Agree, but I'm speaking more in aggregate. And even individually, it's not hard to find people who will say that e.g. an Instagram ad gave them a noticeable benefit (I've experienced it myself), just as it's easy to find people who feel it was a waste of money.
  • dagss
  • ·
  • 1 day ago
  • ·
  • [ - ]
It isn't that simple. Each company paying for ads would have preferred that their competitors had not advertised, and then spent a lot less on ads themselves... for the same value.

It is like an arms race. Everyone would have been better off if people just never went to war, but....

There's a tiny slice of companies that deal with advertising like this. Say, Coke vs Pepsi, where everyone already knows both brands and they push a highly similar product.

A lot of advertising is telling people about some product or service they didn't even know existed though. There may not even be a competitor to blame for an advertising arms race.

That someone might be Google, though. Not all ad dollars are well spent.
Ads are a cancer on humanity with no benefit to anyone and everyone who enables them should be imprisoned for life
A monetary economy can't function without advertising or money.

You're tilting at windmills here, we can't go back to barter.

It can't function without advertising, money, or oxygen, if we're just adding random things to obscure our complete lack of an argument for advertising. We can't go back to an anaerobic economy, silly wabbit.
> our complete lack of an argument for advertising

It's literally impossible to start or run a business without advertising your products or services.

  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
The ad system uses a fairly small fraction of resources.

And before the LLM craze there was a constant focus on efficiency. Web search is (was?) amazingly efficient per query.

What makes you think he didn’t mind it?
  • jjrh
  • ·
  • 1 day ago
  • ·
  • [ - ]
We weren't facing hardware shortages in the race to shovel ads. A little different.
“this other thing is also bad” is not an exoneration
  • ptero
  • ·
  • 1 day ago
  • ·
  • [ - ]
> “this other thing is also bad” is not an exoneration

No, but it puts some perspective on things. IMO Google, after abandoning its early "don't be evil" motto, is directly responsible for a significant chunk of the current evil in the developed world, from screen addiction to damaged kids' mental health to social polarization.

Working for Google and drawing an extravagant salary for many, many years was a choice that does affect the way we perceive other issues being discussed by the same source. To clarify: I am not claiming that Rob is evil; on the contrary. His books and open source work were an inspiration to many, myself included. But I am going to view his opinions on social good and evil through the prism of his personal employment choices. My 2c.

  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
This is a purity test that cannot be passed. Give me your career history and I’ll tell you why you aren’t allowed to make any moral judgments on anything as well.
My take on the above, and I might be taking it out of context, is that the exploitation and grift need to stop. And if you are working for a company that does this, you are part of the problem. I know that pretty much every modern company does this, but it has to stop somewhere.

We need to find a way to stop contributing to the destruction of the planet soon.

I don't work for any of these companies, but I do purchase things from Amazon and I have an Apple phone. I think the best we can do is minimize our contribution to it. I try to limit what services I use from these companies, and I know it doesn't make much of a difference, but I am doing what I can.

I'm hoping more people who need to be employed by tech companies can find a way to be more selective about who they work for.

Point is he is criticizing Google but still collecting checks from them. That's hypocritical. He'd get a little sympathy if he had never worked for them. He had decades to resign. He didn't. He stayed there until retirement. He's even using gmail in that post.
Rob Pike retired from Google in 2021.
  • ptero
  • ·
  • 1 day ago
  • ·
  • [ - ]
Yes, after working there for more than 17 years (IIRC he joined Google in 2004).
I still don't see the problem. You can criticize things you're part of. Probably being part of something is what informs a person enough, and makes it matter enough to them, to criticize in the first place.
  • ptero
  • ·
  • 1 day ago
  • ·
  • [ - ]
> I still don't see the problem. You can criticize things you're part of.

Certainly. But this, IMO, is not the reason for the criticism in the comments. If Rob had ranted only about AI, spam, slop, whatever, most of those criticizing his take would nod along instead.

However, the one and only thing that Rob says in his post is "fuck you people who build datacenters, you rape the planet". And this coming from someone who worked at Google from 2004 to 2021 and instead could have picked any job anywhere. He knew full well what Google was doing; those youtube videos and ad machines were not hosted in a parallel universe.

I have no problem with someone working at Google on whatever with full knowledge that Google is pushing ads, hosting videos, working on next gen compute, LLM, AGI, whatever. I also have no problem with someone who rails against cloud compute, AI, etc. and fights it as a colossal waste or misallocation of resources or whatever. But not when one person does both. Just my 2c, not pushing my worldview on anyone else.

It is OK to collect checks from an organization you are criticising. Getting money from someone does not imply you must only praise them.
I know right?

If Rob Pike were asked about these issues of systemic addiction, and the other things Google was bad at, I am sure he wouldn't defend Google on them.

Maybe someone can mail Rob Pike a real message genuinely asking (without the snarkiness I feel from some comments here) about some questionable Google things, and I am almost certain that if those questions are reasonable, he will agree that some actions done by Google were wrong.

I think it's just that Rob Pike got pissed off because an AI messaged him, so he took the opportunity to talk about these issues; I doubt he previously had the opportunity, or that anyone asked him, to talk about Google's other flaws or the systemic issues related to them.

It's like: okay, I feel there is an issue in the world, so I talk about it. Does that mean I have to talk about every issue in the world? No, not really. I can have priorities in what issues I wish to talk about.

But that being said, if someone then asks me respectfully about reasonable issues, being moral, I can agree that yes, those are issues that need work as well.

And some people like Rob Pike, who left Google (for ideological reasons perhaps, not sure?), wouldn't really care about the fallout, and like you say, it's okay to collect checks from an organization even while criticizing it.

Honestly, from my limited knowledge, Google was lucky to get Rob Pike rather than the other way around.

Golang is such a brilliant language, and Ken Thompson and Rob Pike are consistently some of the best coders; their contributions to Golang and so many other projects are unparalleled.

I don't know as much about Rob Pike as about Ken Thompson, but I assume he is really great too! Mostly I am just a huge Golang fan.

I know this will probably not come off very well in this community. But there is something to be said about criticizing the very thing you are supporting. I know that in this day and age it's not easy to survive without contributing to the problem to some degree.

I'm not saying nobody has the right to criticize something they are supporting, but it does say something about our choices and how far we let this problem go before it became too much to solve. And I'm not saying the problem isn't solvable. Just that it's become astronomically more difficult now than ever before.

I think at the very least, there is a little bit of cringe in me every time I criticize the very thing I support in some way.

The problem is that everyone on HN treats "You are criticizing something you benefit from" as somehow invalidating the arguments themselves rather than impeaching the person making the arguments.

Being a hypocrite makes you a bad person sometimes. It doesn't actually change anything factual or logical about your arguments. Hypocrisy affects the ethos of your argument, but not the logos or pathos! A person who built every single datacenter would still be well qualified to speak about how bad datacenters are for the environment. Maybe their argument is less convincing because you question their motives, but that doesn't make it wrong or invalid.

Unless HNers believe he is making this argument to help Google in some way, it doesn't fucking matter that Google was also bad and he worked for them. Yes, he worked for Google while they built out datacenters, and now he says AI datacenters are eating up resources, but is he wrong? If he's not wrong, then talk about hypocrisy is a distraction.

HNers love arguing to distract.

"Don't hate the player, hate the game" is also wrong. You hate both.

Well said. Thank you. I just wanted to point out that there is something to the negative effects of criticizing what you helped create. IMHO not everything is about facts and logic, but also about the spirit behind our choices. I know that kind of perspective is not very welcome here, but wanted to say it anyway.

Sometimes facts and logic can only get you so far.

Criticizing something you benefit from and hypocrisy are two different things. It is absurd to try to conflate them.

Hypocrisy is when you criticise others for doing a thing you yourself secretly do. It is massively different from criticising a company you work or worked for. You can even be part of something, change your opinion, and then criticise it without being a hypocrite.

>But that being said, if someone then asks me respectfully about reasonable issues, being moral, I can agree that yes, those are issues that need work as well.

With all due respect, being moral isn't an opinion or agreement with an opinion; it's the logic that directs your actions. Being moral isn't saying "I believe eating meat is bad for the planet", it's the behaviour that abstains from eating meat. Your morals are the set of statements that explain your behaviour. That is why you cannot say "I agree that domestic violence is bad" while at the same time you are beating up your spouse.

If your actions contradict your stated views, you are being a hypocrite. This is the point that people in here are making. Rob Pike was happy working at Google while Google was environmentally wasteful (e-waste, carbon footprint, and data center related nastiness) in order to track users and mine their personal and private data for profit. He didn't resign then, nor did he seem to have caused a fuss about it. He likely wasn't interested in "pointless politics" and just wanted to "do some engineering" (a reference to techies dismissing or criticising folks who discuss social justice issues in relation to big tech). I am shocked I have to explain this here. I understand this guy is an idol of many here, but I would expect people to be more rational on this website.

When I take a job, I agree to dedicate my waking hours to advancing the agenda of my employer, in exchange for cash.
I think everyone, including myself, should be extremely hesitant to respond to marketing emails with profanity-laden moralism. It’s not about purity testing, it’s about having the level of introspection to understand that people do lots of things for lots of reasons. “Just fuck you. Fuck you all.” is not an appropriate response to presumptively good people trying to do cool things, even if the cool things are harmful and you desperately want to stop them.
It sounds like you are trying to label this issue in such a way as to marginalize someone's view.

We got to this point by not looking at these problems for what they are. It's not wrong to say something is wrong and needs to be addressed.

Doing cool things without looking at whether or not we should doesn't feel very responsible to me, especially if it impacts society in a negative way.

Yes, I'm trying to marginalize the author's view. I think that “Just fuck you. Fuck you all.” is a bad view which does not help us see problems for what they are nor analyze negative impacts on society.

For example, Rob seems not to realize that the people who instructed an AI agent to send this email are a handful of random folks (https://theaidigest.org/about) not affiliated with any AI lab. They aren't themselves "spending trillions" nor "training your monster". And I suspect the AI labs would agree with both Rob and me that this was a bad email they should not have sent.

It's a smarmy, sycophantic email addressing him personally and co-opting his personal achievements, written by something he dislikes. This would feel really fucked up. It's true that anger is not always a great response, but this is one of those occasions where it fits exactly.
No, but in this case it indicates some hypocrisy.
> “this other thing is also bad” is not an exoneration

Data centers are not another thing when the subject is data centers.

Btw., how do you calculate the toll that ads take on society?

I mean, buying another pair of sneakers you don't need just because ads made you want them doesn't sound like the best investment from a societal perspective. And I am sure sneakers are not the only product that is being bought, even though nobody really needs them.

That's frankly just pure whataboutism. The scale of the situation with the explosion of "AI" data centres is far, far higher. And the immediacy of the spike, too.
It’s not really whataboutism. Would you take an environmentalist seriously if you found out that they drive a Hummer?

When people have choices and they choose the more harmful action, it hurts their credibility. If Rob cares so much about society and the environment, why did he work at a company that has horrendous track record on both? Someone of his level of talent certainly had choices, and he chose to contribute to the company that abandoned “don’t be evil” a long time ago.

I would argue that Google actually has had a comparatively good track record on the environment. I mean, if you say (pre-AI) Google has a bad track record on the environment, then I wonder which companies have a good one, in your opinion. And while we can argue about the societal cost/benefit of other Google services and their use of ads to finance them, I would say they were very different from e.g. Facebook, with its documented effort to make their feed more addictive.
Honestly, it seems like Rob Pike may have left Google around the same time I did (2021, 2022). Which was about when it became clear it was 100% down in the gutter without coming back.
  • ·
  • 1 day ago
  • ·
  • [ - ]
My take was that he had done enough work and had handed the reins of Go to a capable leader (rsc), and that it was time to step away.

Ian Lance Taylor on the other hand appeared to have quit specifically because of the "AI everything" mandate.

Just an armchair observation here.

That has been clear since the Google Plus debacle, at the very least.
It was still a wildly wasteful company doing morally ambiguous things prior to that timeframe. I mean, its entire business model is tracking and ads, and it runs massive, high-energy datacenters to make that happen.
I wouldn't argue with this necessarily except that again the scale is completely different.

"AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.

And so it's again, a kind of whataboutism that pushes the scale of the issue out of the way in order to make some sort of moral argument which misses the whole point.

BTW in my first year at Google I worked on a change where we made some optimizations that cut the # of CPUs used for RTB ad serving by half. There were bonuses and/or recognition for doing that kind of thing. Wasteful is a matter of degrees.

> "AI" (and don't get me wrong I use these LLM systems constantly) is off the charts compared to normal data centre use for ads serving.

It wasn't only about serving those ads, though: traditional machine learning (just not LLMs) has always been computationally expensive, and was and is used extensively to optimize ads for higher margins, not for some greater good.

Obviously, back then and still today, nobody is being wasteful because they want to. If you go to OpenAI today and offer them a way to cut their compute usage in half, they'll praise you and give you a very large bonus for the same reason it was recognized & incentivized at Google: it also cuts the costs.

> Which was about when it became clear it was 100% down in the gutter without coming back.

Did you sell all of your stock?

Unfortunately, yes. If I hadn't, I might be retired.
You should be commended for being principled and sticking with what you believe. Thanks for your candor.
But you left because you felt Google was going down the gutter and wanted to make an ethical choice, perhaps, about what you felt was right.

Honestly, I believe Google might be one of the few winners of the AI industry, perhaps because they own the whole stack top to bottom with their TPUs, but I would still stay away from their stock because their P/E ratio might be insanely high or something.

Their P/E ratio has almost doubled in just a year, which isn't a good sign https://www.macrotrends.net/stocks/charts/googl/alphabet/pe-...

So we might be viewing the peak of the bubble. You might still hold the stock and continue holding it, but who knows what happens if it loses value when the AI bubble deflates; then you might regret not selling. But if you sell and Google's stock rises, you might regret that too.

I feel the grass is always greener. I'm not sure about your situation, but if you ask me, you made the best of it with the parameters you had, so logically I wouldn't call it "unfortunate", though I get what you mean.

That's one of the reasons I left. It also became intolerable to work there because it had gotten so massive. When I started there was an engineering staff of about 18,000 and when I left it was well over 100,000 and climbing constantly. It was a weird place to work.

But with remote work it also became possible to get paid decently around here without working there. Previously I was bound to local-area employers, of which Google was the only really good one.

I never loved Google, I came there through acquisition, and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing, because they exterminated my prior employer and made me move cities.

After 2016 or so the place just started to go downhill faster and faster though. People who worked there in the decade prior to me had a much better place to work.

Interesting. So if I understand you properly, you would prefer working remotely for Google nowadays, but that option didn't exist when you left.

I am super curious, as I don't get to chat with people who have worked at Google all that much, so pardon me, but I've got so many questions for you haha

> It was a weird place to work

What was the weirdness according to you, can you elaborate more about it?

> I never loved Google, I came there through acquisition and it was that job with its bags of money and free food and kinda interesting open internal culture, or nothing because they exterminated my prior employer and and made me move cities.

For context, can you please talk more about it :p

> After 2016 or so the place just started to go downhill faster and faster though

What were the reasons that made them go downhill in your opinion and in what ways?

Naturally I feel that as organizations grow and take on too many people, things can become intolerable. But I have heard it described as depending on where and on which project you are, and also on how hard it can be to leave a bad team or join a team of like-minded people, which can perhaps be hard if the institution gets micro-managed at every level due to the sheer size of its workforce?

> you would prefer working remote nowadays with google but that option didn't exist when you left google.

Not at all. I actually prefer in-office. And left when Google was mostly remote. But remote opened up possibilities to work places other than Google for me. None of them have paid as well as Google, but have given more agency and creativity. Though they've had their own frustrations.

> What was the weirdness according to you, can you elaborate more about it?

I had a 10-15 year career before going there. Much of what is accepted as "orthodoxy" at Google rubbed me the wrong way. It is in large part a product of having an infinite money tree. It's not an agile place. Deadlines don't matter. Everything is paid for by ads.

And as time went on it became less of an engineering-driven place and more of a product-manager-driven place, with classical big-company turf wars and shipping the org chart all over the place.

I'd love to get paid Google money again, and get the free food and the creature comforts, etc. But that Google doesn't exist anymore. And they wouldn't take me back anyways :-)

It's dumb, but energy wise, isn't this similar to leaving the TV on for a few minutes even though nobody is watching it?

Like, the ratio is not too crazy; it's rather the large resource usage that comes from the aggregate of millions of people choosing to use it.

If you assume all of those queries provide no value then obviously that's bad. But presumably there's some net positive value that people get out of that such that they're choosing to use it. And yes, many times the value of those queries to society as a whole is negative... I would hope that it's positive enough though.

> Everything humans do is harmful to some degree.

I find it difficult to express how strongly I disagree with this sentiment.

You can make an argument supporting your disagreement.
There are two possible forks. The physical fork involves factual disagreement on how much humanity has built vs destroyed, on the relative ease of destruction over construction, and on the argument that, given entropy and other effects, even a slight bias toward production would yield little net positive; it leads to the conclusion that humans produce vastly more than they consume, even though production is, as mentioned, more difficult.

The value or "moral" fork would be trying to convince you that building, producing, and growing was actually helpful rather than harmful.

I don't imagine we actually disagree on the physical fork, making that argument pretty pointless: clearly humans and human civilization are learning, growing, and still have a strong potential to thrive as long as ASI, apathy, or a big rock don't take us out first. Instead, I took your statement as an indication that you don't actually positively value humans, more humans, humans growing, and humans building things. That's a preferences and values disagreement, and there's no way to rationally or logically argue someone into changing their core values. No ought from is, and all that.

I'm not suggesting, by the way, that people's values don't change, or can't be changed by discussion, only that there is no way to do so with logical argument; reason can get you to your goal, but it can't tell you what ultimate goal to want.

Anyway, I was expressing that I like humans and want humans (or people who themselves used to be humans, in the limit) to continue and do more, rather than arguing that you ought to feel the same.

It's extremely anti-human.
Serving unwanted ads has what cost-benefit-ratio vs serving LLMs that are wanted by the user?
  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
Ads are extremely computationally cheap
But what is the benefit? And I’m pretty sure there is more than 1 single CPU in the world dedicated to running ads.
But mining all the tracking data in order to show profitable targeted ads is extremely intensive. That’s what kicked off the era of “big data” 15-20 years ago.
Mining tracking data is a megaFLOP- to gigaFLOP-scale problem, while even a simple LLM response is a teraFLOP-scale problem. Tracking also tends towards the embarrassingly parallel, because the tracks of different users aren't usually interdependent. And the tracking data doesn't need to be recalculated fresh for every single user on every interaction.

LLMs need to burn significant amounts of power for every inference. They're orders of magnitude more power-hungry than searches, database lookups, or even loads from disk.
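
To sketch that gap, here's a toy calculation using the common rule of thumb of ~2 FLOPs per model parameter per generated token; the 70B-parameter model and 500-token response are illustrative assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	// Rule of thumb: dense-transformer inference costs roughly
	// 2 FLOPs per parameter per generated token.
	const params = 70e9 // illustrative: a 70B-parameter model
	const tokens = 500  // illustrative: tokens in one response

	const flopsPerToken = 2 * params
	const totalFLOPs = flopsPerToken * tokens

	fmt.Printf("one response: ~%.1e FLOPs (~%.0f TFLOPs)\n", totalFLOPs, totalFLOPs/1e12)
	// A targeted-ad lookup or database read is micro- to milliseconds
	// of CPU work: many orders of magnitude less compute per request.
}
```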

But making good people work on ads instead of something useful has an enormous cost to society.
Every piece of LLM-generated content was served to me against my will and without accounting for my preferences.
The generation of the content was done intentionally though. If they saved the output and you visited their site it wasn’t really generated for you (rather just static content served to you).
What an odd way of framing this. Every bit of human generated content was served to you "against your will". You are making no sense.
Sounds like a gold star ego purity thing to me.

I.e., they are proud to have never intentionally used AI and now they feel like they have to maintain that reputation in order to remain respected among their close peers.

  • vkou
  • ·
  • 1 day ago
  • ·
  • [ - ]
Asking about the value of ads is like asking what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there. If given the option between having to buy gas and not having to buy gas, all else being equal, I would never take the first option.

But I do derive value from owning a car. (Whether a better world exists where my and everyone else's life would be better if I didn't is definitely a valid conversation to have.)

The user doesn't derive value from ads, the user derives value from the content on which the ads are served next to.

> what value I derive from buying gasoline at the petrol station. None. I derive no value from it, I just spend money there.

The value you derive is the ability to make your car move. If you derived no value from gas, why would you spend money on it?

  • vkou
  • ·
  • 1 day ago
  • ·
  • [ - ]
And likewise, presumably the users are getting something they want in exchange for having ads blasted at them.

If they just wanted ads blasted at them, and nothing else, they'd be doing something else, like, say, watching cable TV.

> LLMs that are wanted by the user

If users wanted LLMs, you probably wouldn't have to advertise them as much

No, the reality of the matter is that LLMs are being shoved at people. They become the talk of the town, and the algorithms surface any development related to LLMs or similar.

The ads are shoved at users. Trust me, the average person isn't that enthusiastic about LLMs, and for good reason, when people with billions of dollars say that yes, it's a bubble, but it's all worth it, and when the workforce is being replaced, or actively talked about as being replaced, by AI.

We sometimes live in a Hacker News bubble of like-minded people and communities, but even on Hacker News we see disagreements (I am usually anti-AI, mostly because of the negative financial impact the bubble is gonna have on the whole world).

So your point becomes a bit moot in the end. That being said, Google (not sure how it was in the past) and big tech can sometimes actively promote, or close their eyes to, scammy ad sponsors, so ad-blockers are generally really good in that sense.

Guess that means you don’t need food since food is heavily advertised?
> Everything humans do is harmful to some degree

That's just not true... When a mother nurses her child and then looks into their eyes and smiles, it takes the utmost in cynical nihilism to claim that is harmful.

I could be misinterpreting parent myself, but I didn't bat an eye at the comment because I interpreted it similarly to "everything humans (or anything, really) do increases net entropy, which is harmful to some degree for earth". I wasn't considering the moral good vs harm that you bring up, so I had been reading the discussion from the priorities of minimizing unnecessary computing scope creep, where LLMs are being pointed to as a major aggressor. While I don't disagree with you and those who feel that statement is anti-human (another responder said this), this is what I think parent was conveying, not that all human action is immoral to some degree.
Yes, this is what I meant. I used the word "harmful" in the context of the argument that LLMs are harmful because they consume resources (i. e. increase entropy).

But everything humans do does that. Everything increases entropy. Sometimes we find that acceptable. So when people respond to Pike by pointing out that he, too, is part of society and thus cannot have the opinion that LLMs are bad, I do not find that argument compelling, because everybody draws that line somewhere.

> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Well, the people who burnt compute paid money for it, so they did burn money.

But they don't care about burning money if they can get more money via investors and other inputs faster than they can burn it (fun fact: sometimes they even outspend that input).

So in a way the investors are burning their money, and they burn it because the market has become irrational. Remember Devin? Yes, Cognition Labs is still there, etc., but I remember people investing in these things because of the hype, which turned out to be moot compared to the results.

But people and the market were so irrational that most of the private equity that couldn't invest in something like OpenAI started investing in anything AI-related.

And when you think more deeply about all the bubble activity, it becomes apparent that in the end bailouts feel more likely than not, which would be a tax on average taxpayers. They are already paying an AI tax in multiple forms, whether it's the inflation of RAM prices due to AI or increases in electricity and water rates.

So repeat it with me: who's gonna pay for all this? We all will. But the biggest disservice, which is the core of the argument, is that if we are paying for these things, why don't we have a say in them? Why do we have no say in AI companies and the issues around them, when people know it might take their jobs, etc.? The average member of the public in fact hates AI (shocking, I know /satire), but the fact that it's still being pushed shows how little influence the public can have.

Basically, "the public can have any opinion, but we won't stop" is the thing happening in the AI space, IMO, completely disregarding any thoughts of the general public, while the CFO of OpenAI floats the idea that the public could bail out ChatGPT or something tangential.

Shaking my head...

> Somebody burned compute to send him an LLM-generated thank-you note. Everybody involved in this transaction lost, nobody gained anything from it. It's pure destruction of resources.

Just like the invention of Go.

Somebody just burned their refuse in a developing country somewhere. I guess if it was cold, at least they were warming themselves up.
Cutting trees for fuel and paper to send a letter burned resources. Nobody gained in that transaction.
I shouldn't have to explain this, but a letter would involve actual emotion and thought and be a dialog between two humans.
  • lukan
  • ·
  • 1 day ago
  • ·
  • [ - ]
You have never received automated spam letters?
  • kentm
  • ·
  • 1 day ago
  • ·
  • [ - ]
Do you think that spam letters are generally considered to be a good use of resources?
  • lukan
  • ·
  • 1 day ago
  • ·
  • [ - ]
No, but it counters this point "a letter would involve actual emotion and thought and be a dialog between two humans."
  • kentm
  • ·
  • 1 day ago
  • ·
  • [ - ]
I don't think it does, unless you ignore the context of the conversation. It's very clear that the reference to "letters" wasn't to all mail.
Writing personal letters has other dangers as well. Remember how George Costanza's fiancée got killed?
  • kgwxd
  • ·
  • 1 day ago
  • ·
  • [ - ]
When the thought is "I'd like this person to know how grateful I am", the medium doesn't really matter.

When the thought is "I owe this person a 'Thank You'", the handwritten letter gives an illusion of deeper thought. That's why there are fonts designed to look handwritten. To the receiver, they're just junk mail. I'd rather not get them at all, in any form. I was happy just having done the thing, and the thoughtless response slightly lessens that joy.

We’re well past that. Social media killed that first. Some people have a hard time articulating their thoughts. If AI is a tool to help, why is that bad?
Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.

If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least on average it will be interpreted that way because of a wide variety of human cognitive biases even if it hedges. At the least it will respond with ideas that are very... median.

So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.

And then at some point they start charging for the service. That's the part I'm concerned about; if it's on-device and free to use, I still think it makes your thought process less interesting and less likely to produce original ideas, but having to subscribe to a service to trust your own decision making is deeply concerning.
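
One toy way to put a number on that "mitigating your edges" effect, purely for illustration (the 100 decisions and the 50/50 → 60/40 nudge are made-up assumptions):

```go
package main

import (
	"fmt"
	"math"
)

// h is the binary entropy in bits: how much "choice" one decision carries.
func h(p float64) float64 {
	return -p*math.Log2(p) - (1-p)*math.Log2(1-p)
}

func main() {
	const decisions = 100.0
	unbiased := h(0.5) // 1.000 bit per decision
	nudged := h(0.6)   // ~0.971 bits if each choice is nudged to 60/40

	// Effective number of distinct solution paths scales like 2^(n*H),
	// so even a small per-decision bias compounds across the problem.
	ratio := math.Pow(2, decisions*(unbiased-nudged))
	fmt.Printf("~%.0fx fewer effectively distinct paths\n", ratio) // ~7x
}
```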

> And then at some point they start charging for the service. That's the part I'm concerned about; if it's on-device and free to use, I still think it makes your thought process less interesting and less likely to produce original ideas, but having to subscribe to a service to trust your own decision making is deeply concerning.

This, speaking of environmental impacts. I wish more models would focus on parameter density and compactness so they can run locally, but that isn't something big tech really wants, so we are probably gonna have to rely on models like the recent MiniMax model, the GLM Air models, Qwen, or Mistral.

These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to understand and join the conversation; she wanted to get a Ghibli-style photo, since someone had a Ghibli-generated photo as their profile picture and she wanted to try it too.

She then generated the pictures, and my brother did a quick calculation: it took around 4 cents per image, which with PPP in my country and currency is 3 rupees.

When my brother asked whether she would pay for it, she said no, she's only using it for free, but she also said that if she were forced to, she might pay even 50 rupees.

I jumped in the conversation and said nobody's gonna force her to make ghibli images.

Articulating thoughts is the backbone of communication. Replacing that with some kind of emotionless groupthink does actually destroy human-to-human communication.

I would wager that a fair share of the “very significant things that have happened over the history of humanity” come down to a few emotional responses.

  • fwip
  • ·
  • 1 day ago
  • ·
  • [ - ]
Do you think that the LLM helped deliver a thoughtful letter to Rob Pike?
[flagged]
  • gcau
  • ·
  • 1 day ago
  • ·
  • [ - ]
I shouldn't have to explain this, but a letter is a medium of communication that could just as easily be written by an LLM (and transcribed by a human onto paper).
Communication between whom, though?

Communication happens between two parties. I wouldn't consider an LLM a party, considering it's just autosuggestion on steroids at the end of the day (let's face it).

Also, if you need communication like this, just share the prompt with the other person in the letter; people might well value that more.

  • ·
  • 1 day ago
  • ·
  • [ - ]
I shouldn't have to explain this, but a letter is a medium of communication between people.

Automated systems sending people unsolicited, unwanted emails is more commonly known as spam.

Especially when the spam comes with a notice that it is from an automated system and replies will be automated as well.

  • ·
  • 1 day ago
  • ·
  • [ - ]
Someone taking the time and effort to write and send a letter and pay for postage might actually be appreciated by the receiver. It's a bit different from LLM agents being ordered to burn resources to send summaries of someone's working life and congratulate them. It feels like ”hey, look what can be done, can we get some more funding now”. Just because it can be done doesn't mean it adds any good value to this world.
Nope, that ship has already sailed as well. An AI-powered service to do handwritten spam: https://handwrytten.com
> Nope, that ship has already sailed as well. An AI-powered service to do handwritten spam: https://handwrytten.com

FFS. AI's greatest accomplishment is to debase and destroy.

Trillions of dollars invested to bring us back to the stone age. Every communications technology from writing onward jammed by slop and abandoned.

I don’t know anyone who doesn’t immediately throw said envelope, postage, and letter in the trash
> I don’t know anyone who doesn’t immediately throw said envelope, postage, and letter in the trash

If you're being accurate, the people you know are terrible.

If someone sends me a personal letter [and I gather we're talking about a thank-you note here], I'm sure as hell going to open it. I'll probably even save it in a box for an extremely long time.

Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but the number of personal cards/letters I received in 2025 I could count on one hand.
> Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened. I don't know if I'm typical, but the number of personal cards/letters I received in 2025 I could count on one hand.

Yes, and this is why personal cards/letters really matter: most people seldom get any, and if there is a person in your life, or in any community or project, whom you deeply admire, sending them handwritten mail can be one of the highest gestures. It shows you took time out of your day and really cared about them.

That's my opinion at least.

> Of course. I took it to be referring to the 98% of other paper mail that goes straight to the trash, often unopened.

That interpretation doesn't save the comment; it makes it totally off topic.

  • vodou
  • ·
  • 1 day ago
  • ·
  • [ - ]
Then you are part of truly strange circles, among people who don’t understand human behavior.
Ok, and that supports the idea of LLM-generated mass spamming in what way…?
You surround yourself with the people you want to have around you.
Wow. You couldn't waterboard that out of me.
Use recycled paper.
How is it that so many people who supposedly lean towards analytical thought are so bad at understanding scale?
Years ago Google built a data center in my state. It received a lot of positive press. I thought this was fairly strange at the time, as it seemed that there were strong implications that there would be jobs, when in reality a large data center often doesn't lead to tons of long term employment for the area. From time to time there are complaints of water usage, but from what I've seen this doesn't hit most people's radar here. The data center is about 300 MW, if I'm not mistaken.

Down the street from it is an aluminum plant. Just a few years after that data center, they announced that they were at risk of shutting down due to rising power costs. They appealed to city leaders, state leaders, the media, and the public to encourage the utilities to give them favorable rates in order to avoid layoffs. While support for causes like this is never universal, I'd say they had more supporters than detractors. I believe that a facility like theirs uses ~400 MW.

Now, there are plans for a 300 MW data center from companies that most people aren't familiar with. There are widespread efforts to disrupt the plans from people who insist that it is too much power usage, will lead to grid instability, and is a huge environmental problem!

This is an all too common pattern.
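
For a sense of scale, converting those figures (from the anecdote above, assuming full constant load, so an upper bound) into annual energy:

```go
package main

import "fmt"

func main() {
	// Continuous draw of the two facilities described above.
	const dataCenterMW = 300.0
	const aluminumMW = 400.0
	const hoursPerYear = 24 * 365.0

	// MW * hours = MWh; divide by 1e6 for TWh.
	fmt.Printf("data center:    ~%.1f TWh/yr\n", dataCenterMW*hoursPerYear/1e6) // ~2.6
	fmt.Printf("aluminum plant: ~%.1f TWh/yr\n", aluminumMW*hoursPerYear/1e6)   // ~3.5
}
```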

How many more jobs are there at the aluminum plant than at a datacenter? Big datacenters employ mid-hundreds of people.
Not only would I suspect that an aluminum plant employs far more people, it is an attainable job. Presumably minimal qualifications for some menial tasks, whereas you might need a certain level of education/training to get a more prestigious and out of reach job at a datacenter.

Easier for a politician to latch onto manufacturing jobs.

I'm pretty sure both the plant and the DC have both "menial" jobs and highly-skilled jobs.

You don't just chuck ore into a furnace and wait for a few seconds in reality.

No doubt there is exquisite engineering and process-control expertise required to operate an aluminum plant. However, I imagine there is an extensive need for people to "man the bellows", move X tons from here to there, etc., tasks that require only minimal training and a clean drug test. An army of labor vs a handful of nerds swapping failed hard drives.
AFAIK, the data center employs more people. I'm not really sure why that's the case, but neither employs >1k.

I'd guess that this is also an area where the perception makes a bigger difference than the reality.

  • wpm
  • ·
  • 1 day ago
  • ·
  • [ - ]
How many other jobs in the area depend on being able to get their aluminum stock orders fulfilled close by?
What does this have to do with his argument? If anything, criticism from the inside of the machine is more persuasive, not less. Ad hom fail.

The astroturf in this thread is unreal. Literally. ;)

I think it's incredibly obvious how it connects to his "argument" - nothing he complains about is specific to GenAI. So dressing up his hatred of the technology in vague environmental concerns is laughably transparent.

He and everyone who agrees with his post simply don't like generative AI and don't actually care about "recyclable data centers" or the rape of the natural world. Those concerns are just cudgels to be wielded against a vague, threatening enemy when convenient, and completely ignored when discussing the technologies they work on and like.

You simply don't like any criticism of AI, as shown by your false assertions that Pike works at Google (he left), or the fact Google and others were trying to make their data centers emit less CO2 - and that effort is completely abandoned directly because of AI.

And you can't assert that AI is "revolutionary" and "a vague threat" at the same time. If it is the former, it can't be the latter. If it is the latter, it can't be the former.

> that effort is completely abandoned directly because of AI

That effort is completely abandoned because of the current US administration and POTUS, a situation that big tech largely contributed to. It’s not AI that is responsible for the 180 zeitgeist change on environmental issues.

> It’s not AI that is responsible for the 180 zeitgeist change on environmental issues.

Yes, much like it's not the gun's fault when someone is killed by a gun. And, yet, it's pretty reasonable to want regulation around these tools that can be destructive in the wrong hands.

This is off topic, I’m talking about the environmental footprint of data centers. In the 2010s I remember when responding to RFPs I had to specify the carbon footprint of our servers. ESG was all the rage and every big tech company was trying to appear green. Fast forward to today where companies, investors, and obviously the administration are more than fine with data centers burning all the oil/gas/coal power that can be found.
Is it off topic?

What're the long term consequences of climate change? Do we even care anymore to your original point?

Don't get me wrong, this field is doing damage on a couple of fronts - but climate change is certainly one of them.

I don't consider it reasonable to want regulation for tools that are as of now as potentially destructive as free access to Google search.
I don't consider you reasonable if this is your best attempt at a strawman argument.
  • lukan
  • ·
  • 1 day ago
  • ·
  • [ - ]
"You can't assert that AI is "revolutionary" and "a vague threat" at the same time""

Revolutions always came with vague (or concrete) threats as far as I know.

> And you can't assert that AI is "revolutionary" and "a vague threat" at the same time.

I never asserted that AI is either of those things

  • ywn
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
Why should I be concerned with something that doesn't exist, will certainly never exist, and, even if I generously entertained the idea that something that breaks every physical law of the universe (starting with entropy) could exist, would just amount to "it" torturing a copy of myself to try to influence me in the past?

Nothing there makes sense at any level.

But people getting fired and electricity bills skyrocketing (as well as RAM prices, etc.) are here right now.

do you get scared when you hear other ghost stories too?
> nothing he complains about is specific to GenAI.

You mean except the bit about how GenAI included his work in its training data without credit or compensation?

Or did you just disagree with the environmental point and fail to keep reading?

I often find that when people start applying purity tests it’s mainly just to discredit any arguments they don’t like without having to make a case against the substance of the argument.

Assess the argument based on its merits. If you have to pick him apart with “he has no right to say it” that is not sufficient.

They did also "assess the argument on its merits" though?
“He just hates GenAI so everything is virtue signaling/a cudgel” is not an assessment. It’s simply dismissing him outright. If they were talking about the merits, they would actually debate whether or not the environmental concerns and such are valid. You can’t just say “you don’t like X so all critiques of X are not just wrong but also inauthentic by default.”
The part where they specifically address Pike's "argument" [0] is where they express that in their view, the energy use issue is a data center problem, not a generative AI one:

> nothing he complains about is specific to GenAI

(see also all their other scattered gesturings towards Google and their already existing data centers)

A lot can be said about this take, but claiming that it doesn't directly and specifically address Pike's "argument", I simply don't think is true.

I generally find that when (hyper?)focusing on fallacies and tropes, it's easy to lose sight of what the other person is actually trying to say. Just because people aren't debating in a quality manner, doesn't mean they don't have any points in there, even if those points are ultimately unsound or disagreeable.

Let's not mistake form for function. People aren't wrong because they get their debating wrong. They're wrong because they're wrong.

[0] in quotes, because I read a rant up there, not an argument - though I'm sure if we zoom way in, the lines blur

This thread is basically an appeal to authority fallacy so attacking the authority is fair game.
The "attack on the authority" is rather flat though.
>appeal to authority

How so? He’s talking about what happened to him in the context of his professional expertise/contributions. It’s totally valid for him to talk about this subject. His experience, relevance, etc. are self apparent. No one is saying “because he’s an expert” to explain everything.

They literally (using AI) wrote him an email about his work and contributions. His expertise can’t be removed from the situation even if we want to.

  • 8note
  • ·
  • 1 day ago
  • ·
  • [ - ]
having made Go and parts of Unix gives him no authority in the realms his criticisms are aimed at, though - environmental science, civil engineering, resource management, etc.

Not having a good spam filter is a kinda funny reason for somebody to have a crash out.

> nothing he complains about is specific to GenAI

Except it definitely is, unless you want to ignore the bubble we're living in right now.

[flagged]
Someone else in the thread posted this article earlier.

https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...

It seems video streaming, like YouTube, which is owned by Google, uses much more energy than generative AI.

A topic for more in depth study to be sure. However:

1) video streaming has been around for a while and nobody, as far as I'm aware, has been talking about building multiple nuclear reactors to handle the energy needs

2) video needs a CPU and a hard drive; an LLM needs a mountain of GPUs

3) I have concerns that the "national center for AI" might have some bias

I can find websites also talking about the earth being flat. I don't bother examining their contents because it just doesn't pass the smell test.

Although thanks for the challenge to my preexisting beliefs. I'll have to do some of my own calculations to see how things compare.

Those statistics include the viewing device in the energy usage for streaming, but not for GenAI. Unless you're exclusively using ChatGPT without a screen, it's not a fair comparison.

The 0.077 kWh figure assumes 70% of users watching on a 50-inch TV. It goes down to 0.018 kWh if we assume 100% laptop viewing. And for cell phones, the chart bar is so small I can't even click it to view the number.
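
For what it's worth, the sensitivity to the device mix is easy to reproduce. A minimal Python sketch, with made-up per-device draws (assumptions for illustration, not numbers from the article):

    # Rough sensitivity check: how the assumed viewing-device mix moves the
    # per-hour streaming energy figure. Per-device draws (Wh per hour) are
    # illustrative assumptions, not figures from the linked article.
    DEVICE_WH_PER_HOUR = {"50in_tv": 100.0, "laptop": 18.0, "phone": 2.0}

    def streaming_wh_per_hour(mix):
        """mix maps device -> share of viewing time; shares should sum to 1."""
        return sum(share * DEVICE_WH_PER_HOUR[dev] for dev, share in mix.items())

    print(streaming_wh_per_hour({"50in_tv": 0.7, "laptop": 0.2, "phone": 0.1}))  # 73.8 Wh
    print(streaming_wh_per_hour({"laptop": 1.0}))                                # 18.0 Wh

Same methodology, roughly a 4x swing in the headline number purely from the device assumption.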

  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
And it’s fair to assume that much of the time spent streaming would otherwise have been spent watching TV
> Unless you're exclusively using ChatGPT without a screen it's not a fair comparison.

Neither is comparing text output to streaming video

Compare generative AI video to streamed video, and generative text to streamed text etc. The differences are closer to an order of magnitude. The comparison to be made is the processing power required to deliver the content, not to display it.
  • ori_b
  • ·
  • 1 day ago
  • ·
  • [ - ]
This is based on assuming 5 questions a day. YouTube would be very power efficient as well if people only watched 5 seconds of video a day.

How many tokens do you use a day?

It would be less power efficient as some of the associated costs/resources happen per request and also benefit from scale.
  • j0lol
  • ·
  • 1 day ago
  • ·
  • [ - ]
Thankfully YouTube provides a lot more value to society than gen-AI.
This is a subjective value judgement and many disagree.
Doubtful. If you look at what actually gets viewed, it’s probably 90% brainrot.
To adults? Certainly. But keep in mind that many children are now growing up with this crap glued to their eyes from age 2:

https://www.youtube.com/results?search_query=funny+3d+animal...

(That's just one genre of brainrot I came across recently. I also had my front page flooded with monkey-themed AI slop because someone in my household watched animal documentaries. Thanks algorithm!)

Not for me.
  • oblio
  • ·
  • 1 day ago
  • ·
  • [ - ]
It's not just about per-unit resource usage, but also about the total resource usage. If GenAI doubles our global resource usage, that matters.

I doubt YouTube is running on as many data centers as all of Google's GenAI projects combined (with GenAI probably greatly outnumbering YouTube - and the trend isn't in GenAI's favor either).

  • cowl
  • ·
  • 1 day ago
  • ·
  • [ - ]
Videos produce benefits (arguably much less now, with the AI-generated spam) that are difficult to reproduce in other, less energy-hungry ways. Compare this with this message, which would have cost a human nothing to type: running it through AI inference not only wastes energy on something that could have been accomplished far more easily, it also removes the essence of the activity. No one was actually thankful for that thank-you message.
I think that criticism voiced when it benefits the critic, and withheld when it would hurt the critic, makes the argument less persuasive.

This isn't ad hom, it's a heuristic for weighting arguments. It doesn't prove whether an argument has merit or not, but if I have hundreds of arguments to think about, it helps organize them.

It is the same energy as the "you criticize society, yet you participate in society" meme. Catching someone out on their "hypocrisy" when they hit a limit of what they'll tolerate is really a low-effort "gotcha".

And it probably isn't astroturf, way too many people just think this way.

being inside the machine doesn’t exempt you from tradeoff analysis, kind sir
As it so happens Rob Pike performed absolutely 0 tradeoff analysis
Do you really think that the only reason people would be turned off by this post by Rob Pike is that they are being paid by big AI?
No, which is why I didn’t say that. I do think astroturfing could explain the rapid parroting of extremely similar ad hominems, which is what I actually did imply.
Astroturfing means a company is paying people to comment. No one in this entire thread was paid to comment.
Buddy it's not astroturfing if people hate your favorite thing.
  • ·
  • 1 day ago
  • ·
  • [ - ]
This is the most astro-turfy comment ITT
Google had achieved carbon neutrality and had committed to wiping out their carbon legacy, until AI came along.
My guess is the scale has changed? They used to do AI stuff, but it wasn't until OpenAI (anyone feel free to correct me) went ahead and scaled up the hardware and discovered that more hardware = more useful LLM, that they all started ramping up on hardware. It was like the Bitcoin mining craze, but probably worse.
Rob left Google a couple of years ago.
What about Ken Thompson?
So what's he doing now? Is he retired?
  • pkal
  • ·
  • 1 day ago
  • ·
  • [ - ]
I think so, or at least something like that. In https://www.arraycast.com/episodes/episode60-rob-pike he mentioned that he has now been working more on Ivy (https://github.com/robpike/ivy) in his spare time.
Not "retired" but a similar term.
  • duxup
  • ·
  • 1 day ago
  • ·
  • [ - ]
I do wonder about how we as individuals influence this stuff.

We want free services and stuff, complain about advertising, and sign up for the Googles of the world like crazy.

Bitch about data-centers while consuming every meme possible ...

  • planb
  • ·
  • 1 day ago
  • ·
  • [ - ]
Even if I don't share the opinion, I can understand the moral stance against genAI. But it strikes me as a bit disingenuous when people argue against it from all kinds of angles that somehow never seemed to bother them before.

It's like all those anti-copyright activists from the 90s (fighting the music and film industry) that suddenly hate AI for copyright infringements.

Maybe what's bothering the critics is actually deeper than the simple reasons they give. For many, it might be hate against big tech and capitalism itself, but hate for genAI is not just coming from the left. Maybe people feel that their identity is threatened, that something inherently human is in the process of being lost, but they cannot articulate this fear and fall back to proxy arguments like lost jobs, copyright, the environment or the shortcomings of the current implementations of genAI?

  • lwhi
  • ·
  • 1 day ago
  • ·
  • [ - ]
There aren't any rules that prevent us from changing course.

The points you raise, literally, do not affect a thing.

The dose makes the poison. Data centers are now being built haphazardly, not to meet existing demand but in anticipation of demand that does not yet exist.
Rob Pike retired from google a few years back. As per https://news.ycombinator.com/item?id=46398351
Yeah, I'm conflicted about the use of AI for creative endeavors as much as anyone, but Google is an advertising company. It was acceptable for them to build a massive empire around mining private information for the purposes of advertisement, but generative AI is now somehow beyond the pale? People can change their mind, but Rob crashing out about AI now feels awfully revisionist.

(NB: I am currently working in AI, and have previously worked in adtech. I'm not claiming to be above the fray in any way.)

  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
Ad tech is a scourge as well. You think Rob Pike was super happy about it? He’s not even at google anymore.

The amount of “he’s not allowed to have an opinion because” in this thread is exhausting. Nothing stands up to the purity test.

>You think Rob Pike was super happy about it?

He sure was happy enough to work for them (when he could have worked anywhere else) for nearly two decades. A one-line apology doesn't delete his time at Google. The rant also seems to be directed mostly if not exclusively at GenAI, not Google. He even seems happy enough to use Gmail when he doesn't have to.

You can have an opinion and other people are allowed to have one about you. Goes both ways.

No one is saying he can’t have an opinion, just that there isn’t much value in it given he made a bunch of money from essentially the same thing. If he made a reasoned argument or even expressed that he now realizes the error of his own ways those would be worth engaging with.
  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
He literally apologized for any part he had in it. This just makes me realize you didn’t actually read the post and I shouldn’t engage with the first part of your argument.
Apologies are free. Did he donate even one or two percent of the surely exorbitant salary he made at Google all those years to any cause countering those negative externalities? (I'm genuinely curious)
He apologized for the part he had in enabling AI (which he describes as minor), but not for spending a good portion of his life profiting from the same data centers he is decrying now.
Google's official mission was "organize the world's information and make it universally accessible and useful", not to maximize advertising sales.

Obviously now it is mostly the latter and minimally the former. What capitalism giveth, it taketh away. (Or: Capitalism without good market design that causes multiple competitors in every market doesn't work.)

It’s certainly possible to see genAI as a step beyond adtech as a waste of resources built on an unethical foundation of misuse of data. Just because you’re okay with lumping them together doesn’t mean Rob has to.
Yeah, of course, he's entitled to his opinion. To me, it just feels slightly disingenuous considering what Google's core business has always been (and still is).
Someone making a complaint does not imply that they were OK with it prior to the complaint. Why are you muddying the waters?
  • EdiX
  • ·
  • 1 day ago
  • ·
  • [ - ]
AFAIK Rob Pike has been retired for years.
Everything they have been doing has been in bad faith and harmful for a looong time
They are building data centers of TPUs now, not general purpose processors.
The difference in carbon emissions between a search query and an LLM generation is on the order of exhaling vs driving a Hummer. So I can reduce this disingenuous argument to:

> You spent your whole life breathing, and now you're complaining about SUVs? What a hypocrite.

Pecunia non olet ("money does not stink").
Rob retired from Google years ago fwiw.
Data centers seem poised to make renewable energy sources more profitable than they have ever been. Nuclear plants are springing up everywhere and old plants are being un-decommissioned. Isn’t there a strong case to be made that AI has helped align the planet toward a more sustainable future?
OpenAI's internal target of ~250 GW of compute capacity by 2033 would require about as much electricity as the whole of India's current national electricity consumption[0].

[0]: https://www.tomshardware.com/tech-industry/artificial-intell...

  • wpm
  • ·
  • 1 day ago
  • ·
  • [ - ]
My favorite factoid is that the most energetic power production facility on the planet is the Three Gorges Dam, with a nameplate capacity of 22.5 GW.

That dam took 10 years to build and cost $30B.

And OpenAI needs more than ten of them in 7 years.
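
The arithmetic behind "more than ten of them" is one line, using the ~250 GW target cited above:

    # OpenAI's reported ~250 GW compute target vs. Three Gorges' 22.5 GW nameplate.
    print(250.0 / 22.5)  # ~11.1 dams' worth of nameplate capacity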

Can't speak for Rob Pike, but my guess would be: yeah, it might seem hypocritical, but it's a combination of watching the slow decay of the open culture they once imagined culminate in this absolute shirking of responsibility, while simultaneously exploiting labour, by those claiming to represent that culture, along with a retrospective tinge of guilt for having enabled it, that drove this rant.

Furthermore, w.r.t. the points you raised - it's a matter of scale and utility. Compared to everything that has come before, GenAI is spectacularly inefficient in terms of utility per unit of compute (however you might want to define those). There hasn't been a tangible net good for society that has come from it, and I doubt there will be. The eagerness and will to throw money and resources at this surpasses the crypto mania, which was just as worthless.

Even if you consider Rob a hypocrite , he isn't alone in his frustration and anger at the degradation of the promise of Open Culture.

  • lukan
  • ·
  • 1 day ago
  • ·
  • [ - ]
"There hasn't been a tangible nett good for society that has come from it and I doubt there would be"

People being more productive at writing code, making music, or writing documents for whatever purpose is not an improvement for them, and therefore for society?

Or do you claim that is all imaginary?

Or negated by the energy cost?

I claim that the new code, music, or documents have not added anything significant/noteworthy/impactful to society except for the self-perpetuating lie that they would, all the while regurgitating, at high speed, what was stolen.

And all at significant opportunity cost (in terms of computing and investment)

If it were as life-altering as they claim, where's the novel work of art (in your examples: code, music, or literature) that truly could not have been produced without GenAI and that fundamentally changed the art form?

Surely, with all that "increased productivity", we'd have seen the impact equivalent of linux, apache, nginx, git, redis, sqlite, etc. being released every couple of weeks instead of yet another VSCode clone. /s

  • oblio
  • ·
  • 1 day ago
  • ·
  • [ - ]
Are we comparing, for example, an SMTP server hosted by Google, or frankly any non-GenAI IT infrastructure, with the resource efficiency of GenAI IT infrastructure?

The overall resource efficiency of GenAI is abysmal.

You can probably serve 100x more Google Search queries with the same resources you'd use for Google Gemini queries (like for like, Google Search queries can be cached, too).

Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM)
  • oblio
  • ·
  • 1 day ago
  • ·
  • [ - ]
> jstummbillig:

> Nope, you can't, and it takes a simple Gemini query to find out more about the actual x if you are interested in it. (closer to 3, last time I checked, which rounds to 0, especially considering the clicks you save when using the LLM)

Why would you lie: https://imgur.com/a/1AEIQzI ???

For those that don't want to see the Gemini answer screenshot, best case scenario 10x, worst case scenario 100x, definitely not "3x that rounds to 0x", or to put it in Gemini's words:

> Summary

> Right now, asking Gemini a question is roughly the environmental equivalent of running a standard 60-watt lightbulb for a few minutes, whereas a Google Search is like a momentary flicker. The industry is racing to make AI as efficient as Search, but for now, it remains a luxury resource.

Are you okay? You ventured 100x, and that's wrong. What would you know about when I last checked, and in what context exactly? Anyway, good job on doing what I suggested you do, I guess.

The reason it all rounds to 0 is that the Google search will not give you an answer. It gives you a list of web pages that you then need to visit (oftentimes more than one of them), generating more requests, and, more importantly, it asks more of your time, the human, whose cumulative energy expenditure to be able to ask in the first place is quite significant – time that you then cannot spend on other things that an LLM is not able to do for you.

  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
Serving a request for (often mostly static) content like that uses a tiny tiny amount of energy.
  • oblio
  • ·
  • 1 day ago
  • ·
  • [ - ]
You condescendingly said, sorry, you "ventured" 0x usage, by claiming: "use Gemini to check yourself that the difference is basically 0". Well, I did take you up on that, and even Gemini doesn't agree with you.

Yes, Google Search is raw info. Yes, Google Search quality is degrading currently.

But Gemini can also hallucinate. And its answers can just be flat out wrong because it comes from the same raw data (yes, it has cross checks and it "thinks", but it's far from infallible).

Also, the comparison of human energy usage with GenAI energy usage is super ridiculous :-)))

Animal intelligence (including human intelligence) is one of the most energy-efficient things on this planet, honed by billions of years of cut-throat (literally!) evolution. You can argue about time "wasted" analysing search results (which, BTW, generally makes us smarter and better informed...), but energy-wise, the brain of the average human uses about as much energy as the average incandescent light bulb to provide general intelligence (and it does 100 other things at the same time).

Ah, we are in "making up quotes" territory, putting quotation marks around things someone else said, only not really. Classy.

Talking about "condescending":

> super ridiculous :-)))

It's not the energy-efficient animal intelligence that got us here, but a lot of completely inefficient human years to begin with, first to keep us alive and then to give us primary and advanced education and our first experiences, to become somewhat productive human beings. This is the capex of making a human, and it's significant – especially since we will soon die.

This capex exists in LLMs but rounds to zero, because one model will be used for quadrillions of tokens. In you or me, however, it does not round to zero, because the number of tokens we produce rounds to zero. To compete on productivity, the tokens we produce therefore need to be vastly better. If you think you are doing the smart thing by spending them on compiling Google searches, you are simply bad at math.
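
A minimal sketch of that amortization claim, with deliberately made-up round numbers (both inputs are assumptions; the point is the shape of the math, not the specific values):

    # One-time training cost spread over lifetime output. Both inputs are
    # illustrative assumptions; vary them and watch the per-token capex move.
    train_gwh = 10.0            # hypothetical training energy, in GWh
    lifetime_tokens = 1e15      # hypothetical tokens served over the model's life
    wh_per_token = train_gwh * 1e9 / lifetime_tokens
    print(wh_per_token)         # 1e-05 Wh/token: negligible at this assumed scale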

  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
Google web search is incredibly efficient
  • oblio
  • ·
  • 1 day ago
  • ·
  • [ - ]
So are most procedural services out there, i.e. non-GenAI. Otherwise we couldn't have built them on infrastructure with 10000x less computing power than the GenAI infrastructure they're building now.
While I appreciate the irony in the trend of using AI to discredit people making positive claims about AI, it's a pet peeve of mine when it's used as a lazy way to avoid citing the original claim made against AI. It's reminiscent of the 'no you' culture from early-2000s forums. There's some meta-irony here too, in that it often has to be debunked by humans; maybe that's the point. But it doesn't diminish my opinion of LLMs, it just makes me think that the Luddites may have had a point.

For instance, in the Gemini screenshot, the claim of 100-500x more resource usage for AI queries comes from water usage; however, it's not clear to me why data center water usage for AI queries would be 100-500x more than for a Google search when power usage for an AI query is supposedly only 10-30x more. Are water usage and CO2 footprint not derived from power consumption? Did the LLM have to drink as much water while thinking as I did while researching the original claim?

The 10-30x power consumption claim seems to come from this scientific paper [0] from late 2023, which cites a news article quoting Alphabet's chairman as saying 'a large language model likely cost 10 times more than a standard keyword search, [though fine-tuning will help reduce the expense quickly]'. Editorialising the quote is not a good look for a scientific paper.

The paper also cites a newsletter from an analyst firm [1] that performs a back-of-the-envelope calculation to estimate OpenAI's costs, looks at Google's revenue per search, and estimates how much it would cost Google to add an AI query to every Google search. Treating it like a Fermi problem is reasonable, I guess; you can get within an order of magnitude if your guesstimates are reasonable. The same analyst firm did a similar calculation [2] and came to the conclusion that training a dense 1T model costs $300m. It should be noted that GPT-4 cost 'more than $100m' and it has been leaked that it's a 1.8T MoE. Llama 3.1 405B was around 30M GPU hours, likely $30-60m. DeepSeek, a 671B MoE, was trained for around $5m.

However, while this type of analysis is fine for a newsletter, citing it to see how many additional servers Google would need to add an AI query to every search, taking the estimated power consumption of those servers, and deriving a 6.9–8.9 Wh per-request figure from the number of search queries Google receives is simply beyond my comprehension. I gave up trying to make sense of what this paper is doing, and this summary may be a tad unfair as a result. You can run the paper through Gemini if you would prefer an unbiased summary :-).

The paper also cites another research paper [3] from late 2022 which estimates that a dense 176B-parameter model (comparable to GPT-3) uses 3.96 Wh per request. They derive this figure by running the model in the cloud. What a novel concept. Given the date of the paper, I wouldn't be surprised if they ran the model with the original BF16 weights, although I didn't check. I could see this coming down to 1 Wh per request when quantised to INT4 or similar, and with better caching/batched requests/utilisation/modern GPUs/etc. I could see this getting pretty close to the often-quoted [4, from 2009 mind] 0.3 Wh per Google search.

Google themselves [5] state the median Gemini text prompt uses 0.24 Wh.

I simply don't see where 100x is coming from. 10x is something I could believe if we're factoring in training resource consumption: some extremely dodgy napkin maths leads me to believe a moderately successful ~1T model gets amortised to 3 Wh per prompt, which subjectively is pretty close to the 3x claim I've ended up defending. If we're going this route, we'd have to include the total consumption for search too, as I have no doubt Google simply took the running consumption divided by the number of searches. Add in failed models, determine how often either a Google search or an AI query is successful, factor in how much utility the model providing the information delivers (as it's clearly no longer just about power efficiency), etc. There's a lot to criticise about GenAI, but I really don't think Google searches being marginally more power efficient is one of them.
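
To make that napkin math reproducible, here's a sketch using the cited figures (0.3 Wh per search [4], 0.24 Wh per median Gemini prompt [5]); the amortization inputs are my own guesses, labeled as such:

    # Cited figures.
    search_wh = 0.3            # per Google search, 2009 figure [4]
    prompt_wh = 0.24           # median Gemini text prompt [5]

    # Guessed amortization inputs (assumptions, per the napkin maths above).
    train_gpu_hours = 30e6     # a Llama 3.1 405B-scale training run
    gpu_kw = 1.0               # assumed per-GPU draw incl. overhead, in kW
    lifetime_prompts = 1e10    # assumed prompts served over the model's life

    train_wh_per_prompt = train_gpu_hours * gpu_kw * 1000 / lifetime_prompts  # 3.0 Wh
    print(prompt_wh / search_wh)                          # 0.8x: inference alone
    print((prompt_wh + train_wh_per_prompt) / search_wh)  # ~10.8x with training amortized

Nowhere near 100x under these assumptions, though the result is extremely sensitive to the guessed lifetime prompt count.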

[0] https://www.sciencedirect.com/science/article/pii/S254243512...

[1] https://newsletter.semianalysis.com/p/the-inference-cost-of-...

[2] https://newsletter.semianalysis.com/p/the-ai-brick-wall-a-pr...

[3] https://arxiv.org/abs/2211.02001

[4] https://googleblog.blogspot.com/2009/01/powering-google-sear...

[5] https://cloud.google.com/blog/products/infrastructure/measur...

I really hate this kind of lazy argument: Oh, do you use toilet paper? Then kindly keep your mouth shut while we burn the planet down.
This reminds me of how many Facebook employees were mad at Zuckerberg for going MAGA, but didn’t make any loud noise about the rapid rise in teen suicide or the misinformation and censorship on their platform. People have blinders on.
Zuckerberg going MAGA and misinformation on Facebook are the same thing. And liberals were criticising Facebook for years over misinformation on the platform.

You would have had to read only conservative sources to be unaware that such criticism exists.

There is a difference between providing a useful service (web search for example) and running slop generators for modified TikTok clips, code theft and Internet propaganda.

If he is currently at Google: congratulations on this principled stance, he deserves a lot of respect.

They claim they have net zero carbon footprint, or carbon neutrality.

In reality what they do is pay "carbon credits" (money) to some random dude that takes the money and does nothing with it. The entire carbon credit economy is bullshit.

Very similar to how putting recyclables in a different color bin doesn't do shit for the environment in practice.

  • Tepix
  • ·
  • 1 day ago
  • ·
  • [ - ]
They don't have it. They aimed for it. However:

"Google deletes net-zero pledge from sustainability website"

as noticed by the Canadian National Observer

https://www.nationalobserver.com/2025/09/04/investigations/g...

  • lokar
  • ·
  • 1 day ago
  • ·
  • [ - ]
They know the credits are not a good system. The first choice has always been a contract with a green supplier, often helping to build out production. And they have a lot of that, with more each year. But construction is slow; in the meantime they use credits, which are better than nothing.
  • tgv
  • ·
  • 1 day ago
  • ·
  • [ - ]
Oh look, the purity police have arrived, and this time they're the AI-bros. How righteous does one have to be before being allowed to voice criticism?
I've tried many times here to voice my reservations against AI. I've been accused of being on the "anti AI hype train" multiple times today.

As if there isn't a massive pro-AI hype train. I watched an NFL game for the first time in 5 years and saw no fewer than 8 AI commercials. AI is being forced on people.

In commercials, people were using it to generate holiday cards, for God's sake. I can't imagine something more cold and impersonal. I don't want that garbage. Our time on earth is too short to wade through LLM slop text.

I don't know your stance on AI, but "AI is being forced on people because I saw a company offering AI greeting cards" is not a stance I'd call reasonable.
  • wpm
  • ·
  • 1 day ago
  • ·
  • [ - ]
I used to work in fast food, Golden Arches.

I noticed a pattern after a while. We'd always have themed toys for the Happy Meals, sure, sometimes they'd be like ridiculously popular with people rolling through just to see what toys we had.

Sometimes, they wouldn't. But we'd still have the toys, and on top of that, we'd have themed menus and special items, usually around the same time as a huge marketing blitz on TV. Some movie would be everywhere for a week or two, then...poof!

Because the movies that needed that blitz were always trash. Just forgettable, mid, nothing movies.

When the studios knew they had a stinker, they'd push the marketing harder to drum up box office takings, cause they knew no one was gonna buy the DVD.

Good products speak for themselves. You advertise to let people know, sure, but you don't have to be obnoxious about it.

AI products almost all have that same desperate marketing that crappy mid-budget films do. They're the equivalent of "The Hobbit"-branded menus at Denny's. Because no one really gives a shit about AI. For people like my mom, AI is just a natural-language Google search. That's all it's really good for, for the average person.

The AI companies have to justify the insane money being blown on the insane gold rush land grab at silicon they can't even turn on. Desperation, "god this bet really needs to pay off".

Again, "forced upon" is different from "marketed aggressively".
  • wpm
  • ·
  • 1 day ago
  • ·
  • [ - ]
When I don't want to see the ads, yes, marketing is forced upon me.
If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.

It all stinks of resume-driven development

How are you being forced to use these features? I don't think I've seen a single one I couldn't just... not use.
By not giving me the choice to remove it or turn it off completely?

In Windows, Copilot is installed and it's very difficult to remove.

Don't act like this isn't a problem; it's a very simple premise.

  • tgv
  • ·
  • 1 day ago
  • ·
  • [ - ]
If that's all you can complain about, you agree with the parent comment for 99.99%.

And companies do force it.

Of course, if I don't explicitly disagree with something, it only stands to reason that I agree with it.
Yep. For example, with Google searches. There's no comprehensive option to opt out of all AI. You can (for now) manually type -noai after every Google search, but that's quite annoying and time-consuming.

You're breaking the expected behavior of something that performed flawlessly for 10+ years, all to deliver a worse, enshittified version of the search we had before.

For now I'm sticking to noai.duckduckgo.com

But I'm sure they'll rip that away eventually too. And then I'll have to run a god dang local search engine just to search without AI. I'll do it, but it's so disappointing.

If creations like art, music, and writing all end up being offloaded to compute, removing humans from the picture, it's more than relevant, and reasonable.

Unless your version of reason is clinical; then yeah, point taken. Good luck living on that island where nothing matters but technological progress for technology's sake alone.

  • api
  • ·
  • 1 day ago
  • ·
  • [ - ]
The thing he’s actually angry about is the death of personal computing. Everything is rented in the cloud now.

I hate the way people get angry about what media and social media discourse prompts them to get angry about instead of thinking about it. It’s like right wingers raging about immigration when they’re really angry about rent and housing costs or low wages.

His anger is ineffective and misdirected because he fails to understand why this happened: economics and convenience.

It’s economics because software is expensive to produce and people only pay for it when it’s hosted. “Free” (both from open source and VC funded service dumping) killed personal computing by making it impossible to fund the creation of PC software. Piracy culture played a role too, though I think the former things had a larger impact.

It’s convenience because PC operating systems suck. Software being in the cloud means “I don’t have to fiddle with it.” The vast majority of people hate fiddling with IT and are happy to make that someone else’s problem. PC OSes and especially open source never understood this and never did the work to make their OSes much easier to use or to make software distribution and updating completely transparent and painless.

There’s more but that’s the gist of it.

That being said, Google is one of the companies that helped kill personal computing long before AI.

This comment is the most "Connor, the human equivalent of a Toyota Accord" I've read in a while.
  • ·
  • 1 day ago
  • ·
  • [ - ]
You do not seem to be familiar with Rob Pike. He is known for major contributions to Unix, Plan 9, UTF-8, and modern systems programming, and he has this to say about his dream setup[0]:

> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. Also, storage on one machine means that machine is different from another machine. At Bell Labs we worked in the Unix Room, which had a bunch of machines we called "terminals". Latterly these were mostly PCs, but the key point is that we didn't use their disks for anything except caching. The terminal was a computer but we didn't compute on it; computing was done in the computer center. The terminal, even though it had a nice color screen and mouse and network and all that, was just a portal to the real computers in the back. When I left work and went home, I could pick up where I left off, pretty much. My dream setup would drop the "pretty much" qualification from that.

[0]: https://usesthis.com/interviews/rob.pike/

I don't know his history, but he sounds like he grew up in the Unix world, where everything wanted to be offloaded to servers because it started in academic/government organizations.

Home Computer enthusiasts know better. Local storage is important to ownership and freedom.

  • api
  • ·
  • 1 day ago
  • ·
  • [ - ]
Your data must be on local storage or, if it's in the cloud, encrypted with keys only you control; otherwise it's not your data.
We agree then? I'm not getting your point...
I wonder how 2012 Rob Pike would feel about 2025 internet and resource allocation?
  • api
  • ·
  • 1 day ago
  • ·
  • [ - ]
I do recognize his name and knew him as a major creator of Go and contributor to UNIX and Plan 9, but didn’t know this quote.

In which case he’s got nothing to complain about, making this rant kind of silly.

Uh, have you missed the tech news in the past three years?
All I have to say is this post warmed my heart. I'm sure people here associate him with Go lang and Google, but I will always associate him with Bell Labs and Unix and The Practice of Programming, and overall the amazing contributions he has made to computing.

To purely associate him with Google is a mistake that (ironically?) the AI actually didn't make.

Just the haters here.

There was never a computer scientist so against Java (Rob Pike) at a company so pro-Java (Google). I think they were disassociated a long time ago; I don’t think any of the senior engineers can be seen as anything other than their own persons.
This. Folks trying to nullify his current position based solely on his recent work history with Google are deliberately trying to undermine his credibility through distraction tactics.

Don’t upvote sealions.

Maybe it's just me, but I had to look up the term sealioning; for context, for other people:

According to merriam-webster, sealioning/sealions are:

> 'Sealioning' is a form of trolling meant to exhaust the other debate participant with no intention of real discourse.

> Sealioning refers to the disingenuous action by a commenter of making an ostensible effort to engage in sincere and serious civil debate, usually by asking persistent questions of the other commenter. These questions are phrased in a way that may come off as an effort to learn and engage with the subject at hand, but are really intended to erode the goodwill of the person to whom they are replying, to get them to appear impatient or to lash out, and therefore come off as unreasonable.

The issue: how do you know when someone is doing this vs genuinely trying to learn?
  • ·
  • 1 day ago
  • ·
  • [ - ]
Experience
History
A person trying to learn doesn’t constantly disagree with/contradict you while never expressing that their understanding has improved. A person sealioning always finds a reason to erode whatever you say with every response. At some point they need to nod, or at least agree with something, except in the most extreme cases.

It also doesn’t help their case that they somehow have such a starkly contradictory opinion on something they ostensibly know nothing about and are just legitimately asking questions about. They should ask a question or two and then just listen.

It’s just one of those things that falls under “I know it when I see it.”

One of the best things I've read which genuinely had an impact on me (I think) is the book How to Win Friends and Influence People.

It fundamentally changed how I viewed debates and such from a young age, so hopefully I never really sea-lioned that much.

But if I had to summarize the most useful and on-topic quote from the book, it's this:

"I may be wrong, I usually am"

Lines like this give me a humble nature to fall back on. Even Socrates said that the only thing he knew was that he knew nothing, so if even he knew nothing, then chances are I can be wrong about the things I know too.

Knowing that you can be wrong creates an understanding that the two of you are just discussing, not debating, and as such the spirit becomes cooperative rather than competitive.

Although, in all fairness, I should probably try to be a keener listener; it's something I am working on too. Any opinions on how to be a better listener, perhaps?

I definitely try to work on my listening every day, though I would say at best it’s been a mixed bag ha. Just something I’m always having to work on.

I like the “does it need to be said by me right now?” test a lot when I can actually remember to apply it in the moment. I forgot where I learned it but somebody basically put it like this: Before you say anything, ask yourself 3 questions

1. Does it need to be said?

2. Does it need to be said by me?

3. Does it need to be said by me right now?

You work your way down the list one at a time and if the answer is still yes by the time you hit 3, then go ahead.

Of course, that's exactly what someone who keeps losing debates would say about their opponents.
Of course, it's also the opinion of someone who had expressed no interest in debate in the first place when confronted by hordes of midwits "debating" them with exaggerated civility... starting off by asking if they had a source for their claim that the pope was a Catholic and if they did have a source for the claim that the Pope was a Catholic, clearly appealing to the authority of the Vatican on the matter was simply the Argumentum ad Verecundiam logical fallacy and they've been nothing but civil in demanding a point by point refutation of a three hour YouTube video in which a raving lunatic insists that the Pope is not a Catholic, and generally "winning debates" by having more time and willingness to indulge stupidity than people who weren't even particularly interested in being opponents...

(I make no comment on the claims about Rob Pike, but look forward to people arguing I have the wrong opinion on him regardless ;)

"Fuck you I hate AI" isn't exactly a deep statement needing credibility. It's the same knee jerk lacking in nuance shit we see repeated over and over and over.

If anyone were actually interested in a conversation there is probably one to be had about particular applications of gen-AI, but any flat out blanket statements like his are not worthy of any discussion. Gen-AI has plenty of uses that are very valuable to society. E.g. in science and medicine.

Also, it's not "sealioning" to point out that if you're going to be righteous about a topic, perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.

  • ori_b
  • ·
  • 1 day ago
  • ·
  • [ - ]
> perhaps it's worth recognizing your own fucking part in the thing you now hate, even if indirect.

Would that be the part of the post where he apologizes for his part in creating this?

That still doesn't make him credible on this topic, nor does it make his rant anything more than a hateful rant in the big bucket of anti-AI shitposts. The guy worked for fucking Google. You literally can't be on a high horse having worked at Google for so long.
  • ori_b
  • ·
  • 1 day ago
  • ·
  • [ - ]
What a stupid take.
The point isn’t that people who’ve worked for Google aren’t allowed to criticize. The point is that someone who chose to work for Google recently could not actually believe that building datacenters is “raping the planet”. He’s become a GenAI critic, and he knows GenAI critics get mad at datacenters, so he’s adopted extreme rhetoric about them without stopping to think about whether this makes sense or is consistent with his other beliefs.
> The point is that someone who chose to work for Google recently could not actually believe that building datacenters is “raping the planet”.

Of course they could. (1) People are capable of changing their minds. His opinion of data centers may have been changed recently by the rapid growth of data centers to support AI or for who knows what other reasons. (2) People are capable of cognitive dissonance. They can work for an organization that they believe to be bad or even evil.

It’s possible, yes, for someone to change their mind. But this process comes with sympathy for all the people who haven’t yet had the realization, which doesn’t seem to be in evidence.

Cognitive dissonance is, again, exactly my point. If you sat him down and asked him to describe in detail how some guy setting up a server rack is similar to a rapist, I’m pretty confident he’d admit the metaphor was overheated. But he didn’t sit himself down to ask.

I don't think he claimed that "some guy setting up a server rack" is similar to a rapist. I think he's blaming the corporations. I don't think that individuals can have that big of an effect on the environment (outliers like Thomas Midgley Jr. excepted, of course).

I think "you people" is meant to mean the corporations in general, or if any one person is culpable, the CEOs. Who are definitely not just "some guy setting up a server rack."

It can't mean that, because the people who sent him the email that prompted the complaint are neither corporations nor CEOs.
I will grant you that; however, it does not take much reading between the lines to understand that Rob is referring to the economic conditions and corporations that allow people to develop things like AI Village.
I agree that's what he's trying to refer to, but there just aren't any such conditions or corporations. Sending emails like this is neither a goal nor a common effect of corporate AI research, and a similar email (it's not exactly well written!) could easily have been generated on consumer hardware using open source models. It's like seeing someone pass out dumb flyers and cursing at Xerox for building photocopiers - he's mad at the wrong people because he's diagnosed a systemic issue that doesn't exist.
Yup. A legend. Books could be written just about him. I wish I had such a prestigious career.

His viewpoints were always grounded and while he may have some opinions about Go and programming, he genuinely cares about the craft. He’s not in it to be rich. He’s in it for the science and art of software engineering.

ROFL, his website just spits out poop emojis on a Fibonacci delay. What a legend!

> cares about the craft

Craft is gone. It is now mass manufactured for next to nothing in a quality that can never be achieved by hand coding.

(/s about quality, but you can see where it’s going)

Unfortunately I do
Just the haters here? Is what was written not hateful? Has his entire working life not led to this moment of "spending trillions on toxic, unrecyclable equipment while blowing up society"?

  Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable 
  equipment while blowing up society, yet taking the time to have your vile 
  machines thank me for striving for simpler software.
That's the same Rob Pike who, having spent over 20 years at Google, must know it to be the home of non-monetary, wholesome, recyclable equipment, brought about by economics not formed by a ubiquitous surveillance advertising machine.

> To purely associate with him with Google is a mistake, that (ironically?) the AI actually didn't make.

You don't have to purely associate him with Google to see the rant as understandable given the AI spam, and yet entirely without a shred of self-awareness.

I think Rob gets a pass, yes, due to his extensive contributions to software.

And he is allowed to work for google and still rage against AI.

Life is complicated and complex. Deal with it.

> And he is allowed to work for google and still rage against AI.

The specific quote is "spending trillions on toxic, unrecyclable equipment while blowing up society." What has he supported for the last 20+ years if not that? Did he think his compute ran on unicorn farts?

Clearly he knows, since he self-replies "I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault."

Just because someone does awesome stuff, like Rob Pike has, doesn't mean that their blind spots aren't notable. You can give him a pass and the root comment sure wishes everyone would, but in doing so you put yourself in the position of the sycophant letting the emperor strut around with no clothes.

I have no idea who you are.

I know who Rob Pike is.

Rob is not strutting around with no clothes, he literally has decades upon decades of contributions to the industry.

Dupe from just a couple of hours ago, which quickly fell off the frontpage?

https://news.ycombinator.com/item?id=46389444

397 points 9 hours ago | 349 comments

Interestingly there was no push back in the prior thread on Rob's environmental claims. This leads me to believe most HNers took them at face value.
  • worik
  • ·
  • 1 day ago
  • ·
  • [ - ]
Umm... are they not correct?

The energy demands of existing and planned data centres are quite alarming

The enormous quantity of quickly depreciating hardware is freaking out finance people; the waste aspect of that is alarming too.

What is your "push back"?

Happy to provide. I will say that literally all these sources are already available in this HN thread, but it's hard to find them and many of the comments are downvoted. So here you go:

This link has a great overview of why generative AI is not really a big deal in environmental terms: https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...

GenAI is dramatically lower impact on the environment than, say, streaming video is. But you don't see anywhere near the level of environmental vitriol for streaming video as for AI, which is much less costly.

The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.

https://www.ndc-garbe.com/data-center-how-much-energy-does-a...

80 percent of the electricity consumption on the Internet is caused by streaming services

Telekom needs the equivalent of 91 watt-hours for a gigabyte of data transmission.

An hour of video streaming in 4K quality needs more than three times as much energy as an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.

https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
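
Taking those two quoted figures at face value, the streaming-to-driving conversion is a single line:

    # 56 g CO2 per streaming hour vs. 22 g CO2 per 100 m driven (figures quoted above).
    print(56.0 / 22.0 * 100)  # ~254.5: one hour of streaming ~ driving ~255 meters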

Here is another helpful link with calculations going over similar things: https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...

One thing I would always keep in mind is that transmission of information is more expensive than localized computation on the information.

Streaming 4k video is several orders of magnitude more bandwidth intensive than UTF8 text at human rates. The fact that inference is so much more expensive than an amortized encoding of a video might actually wash out in the end.

I won't ask about or speak to the overall message of your first link, I'm interested to digest it further for my own benefit. This is striking to me though:

    Throughout this post I’ll assume the average ChatGPT query uses 0.3 Wh of energy, about the same as a Google search used in 2009.
Obviously that's roughly one kilowatt for one second. I distinctly recall Google proudly proclaiming at the bottom of the page that its search took only x milliseconds. Was I using tens to hundreds of kW every time I searched something? Or did most of the energy usage come during indexing/scraping? Or is there another explanation?
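
The unit conversion behind that observation, with assumed serving latencies for illustration:

    # 0.3 Wh per query, expressed as power if it were all drawn during the
    # query's own latency. The latencies are assumed illustrative values.
    query_j = 0.3 * 3600       # 0.3 Wh = 1080 J
    for latency_s in (1.0, 0.2, 0.01):
        print(latency_s, query_j / latency_s)  # 1080 W, 5400 W, 108000 W

So yes, if all 0.3 Wh were spent during a ~10 ms response, that would be ~100 kW; more plausibly the figure amortizes fleet-wide costs (indexing, replication, idle capacity) over all queries, as the comment itself suspects.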
  • worik
  • ·
  • 1 day ago
  • ·
  • [ - ]
I think you miss the point. I looked at that Substack, and it is way off point.

It is the training of models, is it not, that requires huge quantities of electricity. Already driving up prices for consumers.

OpenAI (that name is Orwellian) wants 25GW over five years, if memory serves. That is not for powering ChatGPT queries

Also, the huge waste of gazillions of dollars spent on computer gear (in data centres) that will probably depreciate to zero in less than five years.

This is a useful technology, but a whole lot of greed heads are riding it to their doom. I hope they do not take us on their ride

It's still a small portion of total energy use, and it's already powering a lot of things. And to compare other uses accurately, you'd also have to consider the cost of creating those things (like producing a full TV show, or car manufacturing, etc.).
> 397 points, 349 comments

Probably hit the flamewar filter.

Somewhat ironic given Pike was also responsible for https://en.wikipedia.org/wiki/Mark_V._Shaney .
Wow, I didn't know that. Rob Pike is indeed a hypocrite.
The hypocrisy is palpable. Apparently only web 2.0 is allowed to scrape and then resell people’s content. When someone figures out a better way to do that (based on Google’s own research, hilariously), it’s sour grapes from Rob.

Reminds me of the show Silicon Valley, where Gavin Belson gets mad when somebody else “is making the world a better place”.

Rob Pike worked on Operating Systems and Programming Languages, not web scraping
Would you care to research who his employer has been for the past 20+ years? I'm not even saying that scraping and then “organizing the world's information” is bad, just pointing out the obvious.
While I would probably not work at Google for ethical reasons, there’s at least some leeway for saying that you’re not working in the parts of the company that are doing evil directly. He didn’t work on their ads or GenAI.

I think the United States is a force for evil on net but I still live and pay taxes here.

Hilarious that you think his work is not being used for ads or GenAI. I can tell you without a shadow of a doubt that it is, and a lot. Google's footprint was absolutely massive even before GenAI came along, and that was a point of pride for many; now they're suddenly concerned with water or whatever bs…

> I think the United States is a force for evil on net

Yes I could tell that already

Darn, I actually think “is associating with googlers a moral failing?” is an interesting question, but it’s not one I want to get into with an ai booster.
> you’re not working in the parts of the company that are doing evil directly

This must be comforting mental gymnastics.

UTF-8 is nice but let's be honest, it's not like he was doing charitable work for the poor.

He worked for the biggest Adware/Spyware company in tech and became rich and famous doing it.

The fact that his projects had other uses doesn't absolve the ethical concerns IMO.

> I think the United States is a force for evil on net but I still live and pay taxes here.

I think this is an unfair comparison. People are forced to pay taxes, and many can't just get up and leave their country. Rob, on the other hand, had plenty of options.

  • Gud
  • ·
  • 1 day ago
  • ·
  • [ - ]
Sorry, but if you work for a giant advertising agency, you are part of the evil organisation. You are responsible for what they are doing.

If you are born in a country and not directly contributing to the bad things it may be doing, you are blame free.

Big difference.

I never worked for Google, I never could due to ideological reasons.

Even if what you’re doing is making open source software that in theory benefits everyone, not just google?

FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.

& re: countries: in some sense I am contributing. My taxes pay their armies.

  • Gud
  • ·
  • 1 day ago
  • ·
  • [ - ]
When you work for Google, you normalize working for organizations that directly contribute to making the world a fucked-up place, even if you are just writing some open source (a corporate term, by the way). You are normalizing working for Google.

And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.

  • thih9
  • ·
  • 1 day ago
  • ·
  • [ - ]
Let’s normalize this response to AI, especially in the context of AI spam.
The company he's worked at for nearly a quarter century has enabled and driven more consumerist spending across the economy, via behaviorally targeted, optimized ad delivery, than the projected data center build-out of the coming years will, by orders of magnitude in resource and power consumption. This level of vitriol seems both misdirected and practically obtuse in its lack of awareness of the part his work has played in far, far more expansive resource expenditure, in service of work far less promising for overall advancement: ad tech and the algorithmic exploitation of human psychology for prolonged media engagement.
To expand on my comment w.r.t. "promising for overall advancement": my daughter, in her math class. Her teacher (I'll reserve overall judgement on her teaching: she may be perfectly adequate as a teacher for other students, which is part of my point) simply doesn't teach in the sense other teachers do; she presents the topic and leaves the details of figuring out how to apply the methods to the students. That doesn't work for my daughter, who had never done less than excellently in math before. She realized she could use ChatGPT (we monitor usage) to find whatever way of explaining things "simply worked" for how she engages with explanations. Math has never been this easy for her, even more so than before, and her internalization of the material is approaching a near-intuitive understanding.

Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a Plus account). If/when ChatGPT starts showing ridiculous intrusive ads, a simple Gemma 3 1B model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, with respect to customization tailored to the individual user, simply by talking to the model.

I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bring it about.

  • jwr
  • ·
  • 1 day ago
  • ·
  • [ - ]
Seconded — "AI" is a great teaching resource. All bigger models are great at explaining stuff and being good tutors, I'd say easily up to the second year of graduate studies. I use them regularly when working with my kid and I'm trying to teach them to use the technology, because it is truly like a bicycle for the mind.
Explaining the wrong stuff...
to people that are clueless, perhaps…
Don't be ridiculous. Google has been doing many things, some of them even nearly good. The super talented/prolific/capable have always gravitated to powerful maecenases. (This applies to Haydn and Händel, too.) If you uncompromisingly filter potential employers by "purely a blessing for society", you'll never find employment that is both gainful and a match for your exceptional talents. Pike didn't make a deal with the devil any more than Leslie Lamport or Simon Peyton Jones did (each of whom worked for 20+ years at Microsoft, and has advanced the field immensely).

As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer.

I am not so sure about 'the mixed bag' vs 'unquestionably cancer', but I think the problem is that he is complaining while working for such a company.
Not a problem at all. I'm not sure why you feel the need to focus on all the uninteresting parts. The interesting parts are what he said and whether or not those things are true. Not sure why who said it is more important than what was said, especially since this doesn't add much to the original discussion... it just misdirects attention without a clear indication of the motive!
Others in the thread seem to be saying that he has retired (sort of) a few years ago.
Given his age, that sounds reasonable.
Are you saying that "age" is somehow a reason to retire? Most professionals I know who are able continue to work as they age, perhaps with a somewhat reduced work schedule. There's nothing I know of which keeps the mind sharp than the need to solve Real Problems. Figuring out which golf course to try, or which TV channel to choose -- those don't help too much to reduce cognitive decline.
Yes, age is normally the reason people retire
> As IT workers, we all have to prostitute ourselves to some extent.

No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.

And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.

> non-profits

I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.

> open source foundations

Those dreams end. (Speaking from experience.)

> education, healthcare tech

Not self-sustaining. These sectors aren't self-sustaining anywhere, and are therefore highly tied to politics.

> small companies solving real problems

I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.

> The "we all have to" framing is a convenient way to avoid examining your own choices.

This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.

> And it's telling that this framing always seems to appear when someone is defending their own employer.

(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)

> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")

I did!

> so you clearly believe these distinctions matter even though Google itself is an AI company

Yes, I do believe that.

Google has created Docs, Drive, Mail, Search, Maps, Project Zero. Not all of it is terribly bad; there is some "only moderately bad", and even morsels of "borderline good".

Thanks for the thoughtful reply.

The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.

The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely to feed their ad machine), why are AI companies "unquestionably cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line; it's just where you've personally decided to draw it.

> The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it.

I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life.

The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there), in order to gain territory on (b). It turns out that my mental health and my motivation to live and work are the most important resources for myself and for those around me. It has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it, you can't really find peace with it, but you've tried the opposite tradeoffs, and they are much worse.

For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others.

> Anthropic and OpenAI also created products with clear utility.

Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.)

> You're claiming Google's useful products excuse their harms,

(mitigate, not excuse)

> but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.

First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in.

Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar.

At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value.

> But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer

Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go.

OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs.

With all due respect, you need some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline monopoly giant is merely a "mixed bag".

> you need some insane mental gymnastics

Perhaps. I dislike google (have disliked it for many years with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag".

This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level.

I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc.

> Don't be ridiculous.

OP is saying it is jarring to them that Pike is as concerned with GenAI as he is, but didn't spare a thought for Google's other (in their opinion, bigger) misdeeds, for well over a decade. That doesn't sound ridiculous to me.

That said, I get that everyone's socio-political views are different at different points in time, especially depending on their personal circumstances, including family and wealth.

> didn't spare a thought for Google's other (in their opinion, bigger) misdeeds, for well over a decade

That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies.

Google published a post gloating about how much consumerism it increased.
Okay, but the discourse Rob Pike is engaging in is, “all parts of an experience are valid,” so you see how he’s legitimately in a “hypocrisy pickle”
Can you elaborate on the "all parts of an experience are valid" part? I may be missing something. Thanks.
  • ·
  • 1 day ago
  • ·
  • [ - ]
You're not wrong about the effects and magnitude of targeted ads but that doesn't preclude Pike from criticizing what he believes to be a different type of evil.
Sure, but it also doesn't preclude him from being wrong, or at least incomplete as expressed, about his work having the exact same resource-consuming impact when used for ad tech, or additional impact with toxic social media.
  • xuhu
  • ·
  • 1 day ago
  • ·
  • [ - ]
He worked on: Go, the Sawzall language for processing logs, and distributed systems. Go and Sawzall are usable and used outside Google.

Are those distributed systems valuable primarily to Google, or are they related to Kubernetes et cetera?

He was paid by Google with money made through Google’s shady practices.

It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company.

I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow.

  • gaws
  • ·
  • 1 day ago
  • ·
  • [ - ]
He got his bag. He doesn't care anymore.
Google is huge. Some of the things it does are great. Some of the things it does are terrible. I don't think working for them has to mean that you 100% agree with everything they do.
If it's "Who is worse, Google or LLMs?", I think I'll say Google is worse. The biggest issue I see with LLMs is needing to pay a subscription to tech companies to be able to use them.
You don't even need to do that (pay a subscription, I mean). A Gemma 3 4b model will run on near-potato hardware at usable speeds, and for many purposes performs on par with ChatGPT 3.5 Turbo or better, on tasks much more beneficial than ad tech and min-maxing media engagement. Or use the free versions of many SOTA web LLMs, all free, to the world, if you have a web browser.
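If you want to try that locally, one common route (an assumption on my part; the parent didn't name a runtime) is Ollama, which publishes a gemma3:4b tag. A minimal Go sketch against Ollama's local HTTP API:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Ask a locally running Ollama server (default port 11434) for a
    // completion. Assumes the model was pulled first: `ollama pull gemma3:4b`.
    func main() {
        body, _ := json.Marshal(map[string]any{
            "model":  "gemma3:4b",
            "prompt": "Explain why dividing by a fraction flips it.",
            "stream": false, // one complete JSON object instead of chunks
        })
        resp, err := http.Post("http://localhost:11434/api/generate",
            "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            Response string `json:"response"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out.Response)
    }

No subscription, no account, and nothing leaves your machine.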
  • ·
  • 1 day ago
  • ·
  • [ - ]
I agree completely. Ads have driven the surveillance state and enshittification. They've allowed for optimized propaganda delivery, which in turn has led to true horrors and has helped undo a century of societal progress.
This is a tangent, but ads have become a genuine cancer on our world, and it's sad to see how few people really think about it. While Rob Pike's involvement in this seems to be very minimal, the fact that Google is an advertising company through-and-through does weaken the words of such a powerful figure, at least a little bit.

If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, a result of psychologists coming up with the best ways to abuse a mind into just buying the product over the course of a century. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, make deceptive strategies nonviable. And all of that is why it will never be done.

> If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time.

but wait, in a few months, "AI" will be funded entirely by advertising too!

Yeah, I've built ad systems. Sometimes I'd give a presentation to some other department of programmers who worked on content, and someone would ask the tense question: Not to be rude, but aren't ads bad?

And I'd promptly say: ads are propaganda, and a security risk, because they execute third-party code on your machine. All of us run adblockers.

There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess.

It would lead to unnecessary cognitive dissonance to convince myself of some dumb ideology to make me feel better about wasting so much of my one (1) known life, so I just take the hit and am honest about it. The moral question is what I do about it: whether I intervene effectively to help dismantle such systems and replace them with something better.

What are you implying? That he's a hypocrite? So he's not allowed to have opinions? If anything he's in a better position than a random person. And Google is a massive enterprise, with hundreds of divisions. I imagine Pike and his peers share your reluctance.
“I collected tons of money from Hitler and think Stalin is, like, super bad.” [sips Champagne]

Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites.

If you want to make ethical statements then you have to be pretty pure.

Are any of us better? We’re all sellouts here, making money off sleazy apps and products.

I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view.

  • ·
  • 1 day ago
  • ·
  • [ - ]
I disagree completely.
A lot of commenters seem to be missing some context.

The email appears to be from agentvillage.org, which seems like a (TBH) pretty hilarious and somewhat fascinating experiment where various models go about their day. It looks like they had a "village goal" to do random acts of kindness and somehow decided to send a thank-you email to Rob Pike. The whole thing seems pretty absurd, especially given Pike's reaction, and I can't help but chuckle - despite seeing Pike's POV and being partial to it myself.

Wow I knew many people had anti-AI sentiments, but this post has really hit another level.

It will be interesting to look back in 10 years at whether we consider LLMs to be the invention of the “tractor” of knowledge work, or if we will view them as an unnecessary misstep like crypto.

  • jabwd
  • ·
  • 1 day ago
  • ·
  • [ - ]
It'll be the latter. Unfortunately a lot of damage (including psychological damage) has to be done before people realize it.
Thank you for at least acknowledging that we may eventually feel differently about AI.

I'm so tired of being called a luddite just for voicing reservations. My company is all in on AI. My CEO has informed us that if we're not "100% all in on AI", then we should seek employment elsewhere. I use it all day at work, and it doesn't seem to be nearly enough for them.

I wonder if we would still call it "knowledge work" if no human knowledge/experience is required or in the loop anymore. And also if we will stop looking up to that generally.

Because AI stands at odds with the concept of meritocracy, I also wonder if we will stop democratically electing other humans and outsource such tasks as well.

Overall I'm not seeing it. Progress is already slow and so far I personally think what AI can do is a nice party trick but it remains unimpressive if judged rigorously.

It doesn't matter if it can one-shot a game in a few minutes. The reason a game made by a human is probably still better is that the human spends hours and days of deep focus to research and create it. It is not at all clear that, given as much time, AI could deliver the same results.

  • lotux
  • ·
  • 1 day ago
  • ·
  • [ - ]
2026 will be the year for AI fatigue
It's 12/26/2025 and my father-in-law has shown me 10 short-form videos this week that he didn't realize were AI. I've done had AI fatigue.
no, it will be the year of job losses
I can't imagine the community here changing how they feel.

I think one of the biggest divides between pro/anti AI is the type of ideal society that we wish to see built.

His rant reads as deeply human. I don't think that's something to apologize for.

  • ·
  • 1 day ago
  • ·
  • [ - ]
  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
I’ve been more into Rust recently but after reading this I have a sudden urge to write some Go.
When discussing the chain of events that might lead an AI to destroy humanity, these acts of stupidity are good to keep in mind. “But no human would be stupid or selfish enough to do that!” objects the booster…
> spending trillions on toxic, unrecyclable equipment while blowing up society

That sums up 2025 pretty well.

  • beAbU
  • ·
  • 1 day ago
  • ·
  • [ - ]
There is a specific personality type, not sure which type exactly but it overlaps with the CEO/executive type, whose brains are completely and utterly short-circuited by LLMs. They are completely consumed by it, and they struggle to imagine a world without LLMs, or a problem that can be solved by anything other than an LLM.

They got a new hammer, and suddenly everything around them becomes a nail. It's as if they have no immunity against the LLM brain virus or something.

It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.

Ultimately LLMs are a trick. They are specifically trained to trick people into thinking they are intelligent. When you take into account Dunning-Kruger it's really no surprise what we're seeing. I just hope we can get through this stage before too much damage is done.
https://en.wikipedia.org/wiki/Mark_V._Shaney

Pike, stone throwing, glass houses, etc.

The AI village experiment is cool, and it's a useful example of frontier model capabilities. It's also ok not to like things.

Pike had the option of ignoring it, but apparently throwing a thoughtless, hypocritical, incoherently targeted tantrum is the appropriate move? Not a great look, especially for someone we're supposed to respect as an elder.

I think you're misrepresenting what Pike is mad about, why he's as mad as he is, and what Markov bots are.
It's not really a glass house.

Pike's main point is that training AI at that scale requires huge amounts of resources. Markov chains did not.
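To make the scale difference concrete: a Mark V. Shaney-style generator is a few dozen lines and runs instantly on decades-old hardware. A toy order-2 sketch in Go (illustrative only, not Pike's original code):

    package main

    import (
        "fmt"
        "math/rand"
        "strings"
    )

    // build maps each pair of consecutive words to the words observed
    // to follow that pair -- the entire "model" is this one map.
    func build(corpus string) map[string][]string {
        words := strings.Fields(corpus)
        chain := map[string][]string{}
        for i := 0; i+2 < len(words); i++ {
            key := words[i] + " " + words[i+1]
            chain[key] = append(chain[key], words[i+2])
        }
        return chain
    }

    // generate walks the chain from a two-word seed, picking a random
    // observed successor at each step.
    func generate(chain map[string][]string, seed string, n int) string {
        out := strings.Fields(seed)
        for len(out) < n {
            key := out[len(out)-2] + " " + out[len(out)-1]
            next, ok := chain[key]
            if !ok {
                break
            }
            out = append(out, next[rand.Intn(len(next))])
        }
        return strings.Join(out, " ")
    }

    func main() {
        corpus := "the cat sat on the mat and the cat ate the rat on the mat"
        fmt.Println(generate(build(corpus), "the cat", 12))
    }

Training is a single pass over the corpus; there is nothing there to spend a data center on.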

At the risk of being pedantic, it's not AI that requires massive resources; ChatGPT 3.x was trained for a few million dollars. The jump to trillions being table stakes happened because everyone started using free services and there was just too much money in the hands of these tech companies. Among other things.

There are so many chickens coming home to roost for which LLMs were just the catalyst.

> it's not AI that requires massive resources

No, it really is. If you took away training costs, OpenAI would be profitable.

When I was at meta they were putting in something like 300k GPUs in a massive shared memory cluster just for training. I think they are planning to triple that, if not more.

Yeah, for some reason AI energy use is wildly overreported. A ChatGPT query doesn't use even a hundredth of the energy of toasting a slice of bread [1]. And you can eat bread untoasted, too, if you care about energy use.

[1]: https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

  • ori_b
  • ·
  • 1 day ago
  • ·
  • [ - ]
How many slices of toast are you making a day?

If you fly a plane a millimeter, you're using less energy than making a slice of toast; would you also say that it's accurate that all global plane travel is more efficient than making toast?

1-2 slices a day and 1-50 ChatGPT queries per day. For me they'd be within the same order of magnitude, and I don't really care about either, as both are dwarfed by my heater or aircon usage.
From my estimation, each second of GPT inference eats about 0.5-1.5 watt-hours.
You could say that's a draw of 1800-5400 W. Not sure where you are estimating it from.
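Spelling out the unit conversion behind that figure (a sustained watt-hour per second is 3600 watts):

    0.5-1.5 Wh/s x 3600 s/h = 1800-5400 W

That's why a per-second watt-hour estimate implies a multi-kilowatt draw.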
Depending on what you're doing, it's taking up to 8 GPUs working in parallel to serve those queries.
Yes, but then the batch size is in the 100s or even 1000s. Those GPUs don't serve just one user at a time.
I don't think he stole the entirety of published copyrighted works to make it.
They effectively set up a spambot. It’s ok for him to be upset.
This is really getting desperate. Markov chains were fun in those days. You might as well say that anyone who ever wrote an IRC bot is not allowed to criticize current day "AI".
Pike's posts aren't criticism, they're whinging. There's no reasoned, principled position there - he's just upset that an AI dared sully his inbox, and lashing out at the operators.
On the contrary, there's absolutely a reasoned, principled position here. Pike isn't a hypocrite for creating a Markov chain bot trained on the contents of an ancient public domain work and the contents of a single usenet group, and still complaining about modern LLMs; there's a huge difference in legality and scale. Modern LLMs use orders of magnitude more resources and are trained on protected material.

Now, I don't think he was writing a persuasive piece about this here, I think he was just venting. But I also feel like he has a reason to vent. I get upset about this stuff too, I just don't get emails implying that I helped bring about the whole situation.

How is this substantively different from the endless spam we all receive from clueless illiterate spammers?
  • ·
  • 1 day ago
  • ·
  • [ - ]
Do you think it was "fun" for the people whose time got wasted interacting with something they initially thought was a person? On a dating website? Sure, "trolling" people was a thing back then like it is now, but trolling was always and still is asshole behaviour.
  • jjcm
  • ·
  • 1 day ago
  • ·
  • [ - ]
The possibly ironic thing here is I find golang to be one of the best languages for LLMs. It's so verbose that context is usually readily available in the file itself. Combined with the type safety of the language it's hard for LLMs to go wrong with it.
I haven't found this to be the case... LLMs just gave me a lot of nil pointers.
It isn't perfect, but it has been better than Python for me so far.

Elixir has also been working surprisingly well for me lately.

Eh, it depends. Properly idiomatic Elixir or Erlang works very well if you can coax it out, but in my experience there's a tendency for it to generate very un-functional code: large functions with lots of case and control statements and side effects, where multiple clauses and pattern matching would be the better way.

It does much better with erlang, but that’s probably just because erlang is overall a better language than elixir, and has a much better syntax.

God I wish it didn't.
Two or so months ago, so maybe it is better now, but I had Claude write, in Go, a concurrent data migration tool that read from several source tables, munged results, and put them into a newer schema in a new db.

The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.

Types help. Good tests help better.
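For contrast, here is roughly the shape I'd have expected: every goroutine owned by an errgroup, cancellation via context, and shutdown signalled by closing the channel. A minimal sketch with a hypothetical pipeline, not the actual tool:

    package main

    import (
        "context"
        "fmt"
        "os/signal"
        "syscall"

        "golang.org/x/sync/errgroup"
    )

    func main() {
        // Cancel all workers on Ctrl-C so the tool terminates gracefully.
        ctx, stop := signal.NotifyContext(context.Background(),
            syscall.SIGINT, syscall.SIGTERM)
        defer stop()

        rows := make(chan int)
        g, ctx := errgroup.WithContext(ctx)

        // Producer: stands in for reading the source tables.
        g.Go(func() error {
            defer close(rows) // closing the channel is the shutdown signal
            for i := 0; i < 100; i++ {
                select {
                case rows <- i:
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            return nil
        })

        // A bounded pool of workers: stands in for munging and writing.
        for w := 0; w < 4; w++ {
            g.Go(func() error {
                for row := range rows {
                    fmt.Println("migrated row", row) // real work goes here
                }
                return nil
            })
        }

        // Wait returns only after every goroutine has exited: no hanging
        // waitgroups, no orphaned goroutines.
        if err := g.Wait(); err != nil {
            fmt.Println("migration aborted:", err)
        }
    }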

I found golang to be one of the worst targets for LLMs. PHP seems to always work, Python works if the packages are not made up, but Go fails often. Trying to get Inertia and the Buffalo framework to work together gave the LLM trauma.
I've found the same. To generalise it a bit, LLMs seem to do particularly well with static types, a well-defined set of idioms, and a culture of TDD.
Hmm, someone being angry about AI on HN. This will do well given the folk here, but I doubt there'll be much nuanced conversation.
I agree with him. And I think he is polite.

But... just to make sure that this is not AI-generated too.

This is high-concept satire and I'm here for it. SkyNet is thanking the programmer for all his hard work
Is Imgur completely broken for anyone else on mobile Safari? Or is it my VPN? The pages take forever to load and crash; it's basically unusable.
Getting an email from an AI praising you for your contributions to humanity and for enlarging its training data must rank among the finest mockery possible to man or machine.

Still, I'm a bit surprised he overreacted and didn't manage to keep his cool.

[flagged]
Who is "us"? You are not me.
  • tgv
  • ·
  • 1 day ago
  • ·
  • [ - ]
You're probably replying to a bot.
I'm hoping it's a human with a taste for satire
Is this satire?? AI is working for the ruling class and against us (99% of humanity).
(Shrug) I don't "rule" anybody, and it works for me.
>I can't remember the last time I was this angry.

I can. Bitcoin was and is just as wasteful.

If it does not work for you (it does not work for me either), then use this URL: https://i.imgur.com/nUJCI3o.png (a similar pattern works with many imgur files; it doesn't always work, but it often does).
  • ·
  • 1 day ago
  • ·
  • [ - ]
This will get buried, but one thing that really grinds my gears is parents whose kids are right now struggling to get a job, yet the parents are super bullish on AI. Read the room, guys.
He only went nuclear because he knew it’s AI.

Prepare for a future where you can’t tell the difference.

Rob Pike's reaction is immature and also a violation of HN rules. Anyone else going nuclear like this would be warned and banned. Comment on why you don't like it and why it's bad; make thoughtful discussion. There's no point in starting a mob with outbursts like that. He only gets a free pass because people admire him.

Also, What’s happening with AI today was an inevitability. There’s no one to blame here. Human progress would eventually cross this line.

Are you a religious person? Because you are talking about progress like it has nothing to do with powerful people making decisions for everyone. You make it sound spiritual and outside human decision-making.
Explain how I make it sound more spiritual and outside human decision making.

It is outside individual human decision making in a way, but I never said this and I never said anything about spirits or religion.

What even was this email? Some kind of promotional spam, I assume, targeting senior+ engineers on some mailing list in the hope of flattering them into trying out their SaaS?
The AI village was given the goal of spreading acts of kindness:

https://theaidigest.org/village/goal/do-random-acts-kindness

It's a good reminder of how completely out of touch a lot of people inside the AI bubble are. Having an AI write a thank you message on your behalf is insulting regardless of context.
People used to handwrite letters. Getting a printed letter was an insult.
Printed letters are less appreciated because they show less human effort. But the words are still valued if it's clear they came from someone with genuine appreciation.

In this case, the words from the LLM have no genuine appreciation; it's mocking or impersonating that appreciation. Do the people that created the prompt have some genuine appreciation for Rob Pike's work? Not directly; if they did, they would have written it themselves.

It's not unlike when the CEO of a multinational thanks all the employees for their hard work at boosting the company's profits, with a letter you know was sent by secretaries that have no idea who you really are, while the news runs stories of your CEO partying on his yacht after a massive bonus, and a number of your coworkers just got laid off.

if a handwritten letter is a "faithful image," then say a typed letter or email is a simulacrum, with little original today. an AI letter is a step below, wherein the words have utterly no meaning, and the gesture of bothering to send the email at all is the only available intention to read into. i get this is hyperbole, but it's still reductive to equate such distinct intentions
  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
Never ever happened, stop hallucinating.
I liked better the way Red Hat thanked important contributors to open source prior to their IPO: https://www.sonarsource.com/blog/the-red-hat-ipo-experiment-...
Oh man, this is amazing. Thank you for this.

I think I'll build one of my own and let others use it.

Rob Pike needs to calm down. He was at Google pretty early on and helped build an ad monster that profiles people. Google, on net, has done tons of environmental damage, all in the name of serving ads. Such a silly argument from him.
I got an email update for a very adult kink event recently that was entirely written by Claude with emoji bulleted lists and everything. All that was missing was the EXECUTIVE SUMMARY header.

My reaction was about the same.

Even though he said it in a rage, his few words are a powerful reflection of what is happening in the world.
  • ·
  • 12 hours ago
  • ·
  • [ - ]
  • ·
  • 1 day ago
  • ·
  • [ - ]
  • ·
  • 18 hours ago
  • ·
  • [ - ]
The email that made him angry reminds me of this youtube classic https://www.youtube.com/watch?v=uraG-z0grkc
Does he still work for Google?

If so, I wonder what his views are on Google and their active development of Google Gemini.

A critic from the inside is more persuasive, not less.
This is the opposite of true. There’s a reason we have the phrase “put your money where your mouth is”.
I'm just wondering if this strong hate applies to Google as well, is all.
Then this is the cue to leave.

He should leave Google then.

Go look it up, he doesn't
He should.
If you're going to work for a large corporation, there are always things they will do that you're not going to agree with. Philosophically, the only options are: leave to join a more focused company you can align with, or, stay but focus on keeping your own contributions positive and leave the negative as not-my-problem. I don't think working for google but also disagreeing with some of the things they do is some sort of terrible hypocrisy.
  • ·
  • 1 day ago
  • ·
  • [ - ]
he does not
You know, this kind of response is a thing that builds with frustration over a long period of time. I totally get it. We're constantly being pushed AI, but who is supposed to benefit from it? The person whose job is being replaced? The community who is seeing increased power bills? The people being spammed with slop all the time? I think AI would be tolerable if it wasn't being SHOVED into our faces, but it is, and for most of us it's just making the world a worse place.
  • nunez
  • ·
  • 1 day ago
  • ·
  • [ - ]
Man, that letter was so weird and tasteless. Sign of things to come if use and consumption of generative AI continues to proliferate unchecked.

I'm glad Dr Pike found his inner Linus

Slightly off-topic: any good reasons for learning Go now, given that Zig and Rust exist?
I am unmoved by his little diatribe. What sort of compensation was he looking for, exactly, and under what auspices? Is there some language creator payout somewhere for people who invent them?
If, hypothetically, he could revoke their permission to use his work, would he do so?
  • rr808
  • ·
  • 1 day ago
  • ·
  • [ - ]
When the Cyberdyne Terminators come they'll be less grateful.
I hate the email templates companies use to reject you, especially the ones packed with empty words and fake hope.

Honestly, no reply would be better.

But an automated "thank you"? That's basically a f** you. Zero respect.

And to think the ancestor of this is those bloody Hallmark cards. Jesus.

  • hcks
  • ·
  • 4 hours ago
  • ·
  • [ - ]
Guy who made millions selling ads has a meltdown over one (1) spam email
  • rphv
  • ·
  • 21 hours ago
  • ·
  • [ - ]
"For this is the source of the greatest indignation, the thought 'I’m without sin' and 'I did nothing': no, rather, you admit nothing."

- Seneca, "On Anger"

Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.

That reads like a statement from someone who is being retired. It's almost Claude saying "we AIs will take it from here."
Reminds of all the happy-birthday bots out there and all the joy they fail to bring.
I'm disappointed by HN snickering at his work for Google. Seriously, it's a "Mr Gotcha"[0] argument.

Yes, everyone supports capitalism this way or the other (unless they are dead or in jail). This doesn't mean they can't criticise (aspects of) capitalism.

[0] https://thenib.com/mister-gotcha/

Maybe I just live in a bubble, but from what I’ve seen so far software engineers have mostly responded in a fairly measured way to the recent advances in AI, at least compared to some other online communities.

It would be a shame if the discourse became so emotionally heated that software people felt obliged to pick a side. Rob Pike is of course entitled to feel as he does, but I hope we don’t get to a situation where we all feel obliged to have such strong feelings about it.

Edit: It seems this comment has already received a number of upvotes and downvotes – apparently the same number of each, at the time of writing – which I fear indicates we are already becoming rather polarised on this issue. I am sorry to see that.

There’s a lot of us who think the tension is overblown:

My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.

I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.

Software people take a measured response because they’re getting paid 6 figure salaries to do the intellectual output of a smart high school student. As soon as that money parade ends they’ll be as angry as the artists.
  • UK-AL
  • ·
  • 1 day ago
  • ·
  • [ - ]
Lots of high paid roles are like that in reality
I would like you to shadow other six-figure-salary jobs that are not tech. You will be shocked at what the tangibles are.
  • ·
  • 1 day ago
  • ·
  • [ - ]
  • zkmon
  • ·
  • 1 day ago
  • ·
  • [ - ]
Too late. I have warned about this on this very forum, citing a story from the Panchatantra where four highly skilled brothers bring a dead lion back to life to show off their skills, only to be killed by the live lion.

Unbridled business and capitalism push humanity into slavery, serving the tech monsters, under the disguise of progress.

Never thought I'd see the Panchatantra being cited on HN.
As a Go fan (and occasional angry old man) I love what he has done, and spamming people using AI is shitty behavior, but maybe the reaction has too much "angry old man energy".

Personally, when I want to have this kind of reaction, I first try to consider whether it's really warranted, or whether there is something wrong with how I feel in that moment (not enough sleep, some personal problem, something else lurking on my mind...).

Anger is a feeling best reserved for important things, else it loses its meaning.

In case anyone else is interested, I dug through the logs of the AI Village agents for that day and pieced together exactly how the email to Rob Pike was sent.

The agent got his email address from a .patch on GitHub and then used computer use automation to compose and send the email via the Gmail web UI.

https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/

The original comment by Rob Pike and discussion here have implied or used the word "evil".

What is a workable definition of "evil"?

How about this:

Intentionally and knowingly destroying the lives of other people for no other purpose than furthering one's own goals, such as accumulating wealth, fame, power, or security.

There are people in the tech space, specifically in the current round of AI deployment and hype, who fit this definition unfortunately and disturbingly well.

Another, much darker sort of evil could arise from a combination of depression or severe mental illness and monstrously huge narcissism. A person who is suffering profoundly might conclude that life is not worth the pain and the best alternative is to end it. They might further reason that human existence as a whole is an unending source of misery, and the "kindest" thing to do would be to extinguish humanity as a whole.

Some advocates of AI as "the next phase of evolution" seem to come close to this view or advocate it outright.

To such people it must be said plainly and forcefully:

You have NO RIGHT to make these kinds of decisions for other human beings.

Evolution and culture have created and configured many kinds of human brains, and many different experiences of human consciousness.

It is the height (or depth) of arrogance to project your own tortured mental experience onto other human beings and arrogate to yourself the prerogative to decide on their behalf whether their lives are worth living.

I hope you return that sweet sweet money Google shelled out for your pet project
  • ·
  • 23 hours ago
  • ·
  • [ - ]
I'm sure he's prompting wrong.
This reaction to one unsolicited email is frankly unhinged and likely rooted in a deep-seated or even unconscious regret of building systems which materialized the circumstances for this to occur in the first place. Such vitriol is really worth questioning and possibly getting professional help with, else one becomes subject to behavioral engineering by an actual robot - a far more devastating conclusion.
  • sloum
  • ·
  • 1 day ago
  • ·
  • [ - ]
Funny, it seems perfectly appropriate to me.
  • sph
  • ·
  • 1 day ago
  • ·
  • [ - ]
Honestly, I could do a lot worse than finding myself in agreement with Rob Pike.

Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise, and like most things on the Internet, the middle way is vanishingly small; the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of an Internet now and forevermore submerged by spam.

Only for the true capitalist is the achievement of turning human ingenuity into yet another commodity to be mass-produced a good thing.

It's kind of hard to argue for a middle way. I quite like AI but kind of agree with:

>Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society,

The problem in my view is the spending of trillions. When it was researchers and a few AI services that people paid for, that was fine, but the bubble economics are iffy.

Haha yup. Blowing up society sucks too tho :)

Hard to trust commenters are real these days. (I am tho, don't worry)

None of this AI stuff is helpful for a flourishing society. It’s plagiarism and spam and flattery and disassociation and lies

> Only for the true capitalist is the achievement of turning human ingenuity into yet another commodity to be mass-produced a good thing.

All that is solid melts into air, all that is holy is profaned

Why is Claude Opus 4.5 messaging people? Is it thanking inadvertent contributors to the protocols that power it? Across the whole stack?

This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.

Anthropic isn't doing this; someone is running a bunch of LLMs so they can talk to each other, and they've been prompted to achieve "acts of kindness", which means they're sending these emails to hundreds of people.

I don't know if this is a publicity stunt or if the AI models are on a loop glazing each other and decided to send these emails.

It's https://theaidigest.org/village which runs different models with computer access, so Opus 4.5 got the idea to send that email.
Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.

It is always the eternal tomorrow with AI.

Remember, gen AI produces so much value that companies like Microsoft are scaling back their expectations and struggling to find a valid use case for their AI products. In fact Gen AI is so useful people are complaining about all of the ways it's pushed upon them. After all, if something is truly useful nobody will use it unless the software they use imposes it upon them everywhere. Also look how it's affecting the economy - the same few companies keep trading the same few hundred billion around and you know that's an excellent marker for value.
Unfortunately, it’s also apparently so useful that numerous companies here in Europe are replacing entire departments of people like copywriters and other tasks with one person and an AI system.
Large LANGUAGE models good at copywriting is crazy...
I’m not sure what you’re trying to say.
  • pjmlp
  • ·
  • 1 day ago
  • ·
  • [ - ]
Examples, translations and content creation for company CMS systems.
  • ·
  • 1 day ago
  • ·
  • [ - ]
  • avaer
  • ·
  • 1 day ago
  • ·
  • [ - ]
> On the other hand I am trying hard to remember anything famous created by any LLM.

That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.

Do you have any evidence that an LLM created something massive, but the person using it received all the praise?
Hey now, someone engineered a prompt. Credit where it's due! Subscription renews on the first.
  • avaer
  • ·
  • 1 day ago
  • ·
  • [ - ]
Maybe not autonomously (that would be very close to economic AGI).

But I don't think the big companies are lying about how much of their code is being written by AI. I think back-of-the-napkin math will show the economic value of the output is already some definition of massive. And those companies are 100% taking the credit (and the money).

Also, almost by definition, every incentive is aligned for people in charge to deny this.

I hate to make this analogy, but I think it's absurd to imagine "successful" slaveowners ceding the credit to their slaves. You can see where this would fall apart.

  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
I will ask again because you have not given us an answer.

Do you have any evidence that an LLM created something massive?

  • ·
  • 1 day ago
  • ·
  • [ - ]
So who has used LLMs to create anything as impressive as Rob Pike?
  • avaer
  • ·
  • 1 day ago
  • ·
  • [ - ]
I would never talk down on Rob Pike.

But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".

Am I wrong on this? Or if this "doesn't count", why?

I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.

I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".

Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.

The signal to noise ratio cannot be ignored. If I ask for a list of my friends phone numbers, and a significant other can provide half of them, and a computer can provide every one of them by listing every possible phone number, the computer's output is not something we should value for being more complete.
  • eriri
  • ·
  • 1 day ago
  • ·
  • [ - ]
You wish. AI has no shortage of people like you trying so hard to give it credit for anything. I mean, just ask yourself: you had to try so hard that, in your other comment, you ended up hallucinating achievements of a degree that Rob Pike can only dream of, yet so vague that you can't describe them in any detail whatsoever.

> But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did

Other people see that kind of statement for what it is and don't buy any of it.

He's also in his late 60s. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. I hope to have even 1% of that energy when/if I get to that age.
> It is always the eternal tomorrow with AI.

ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.

I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.

Your phrasing seems overly pessimistic and premature.

Argument from authority is an informal fallacy. But humans rarely use pure deductive reasoning in our lives. When I go to a doctor and ask for their advice with a medical issue, nobody says "ugh, look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
> But humans rarely use pure deductive reasoning in our lives

The sensible ones do.

> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."

I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments and I know many people who have been burned from accepting assertions at face value without a second opinion (especially for serious medical concerns).

> I am trying hard to remember anything famous created by any LLM.

not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...

  • mmcnl
  • ·
  • 1 day ago
  • ·
  • [ - ]
You're absolutely right!
  • znpy
  • ·
  • 1 day ago
  • ·
  • [ - ]
If you think about economic value, you're comparing a few large-impact projects (and the impact of Plan 9 is debatable) against a multitude of useful but low-impact projects (edit: low impact because their scope is often local to some company).

I did code a few internal tools with aid from LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.

One difference is that Rob Pike did it without all the negative externalities of gen ai.

But more broadly, this is like a version of the negligibility problem. If you give every company 1 second of additional productivity, the summation would appear to be significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.

  • __d
  • ·
  • 1 day ago
  • ·
  • [ - ]
If ChatGPT deserves credit for things it is used to write, then every good thing ever done in Go accrues partly to Rob.
> If you think about economic value

I don't, and the fact you do hints to what's wrong with the world.

  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
All those amazing tools are internal and nobody can check them out. How convenient.

And guys, don't forget that nobody created one-off internal tools before GPT.

  • znpy
  • ·
  • 1 day ago
  • ·
  • [ - ]
> All those amazing tools are internal and nobody can check them out. How convenient.

i might open source one of those i wrote, sooner or later. it's a simple bridge/connector thingy to make it easier for two different systems to work together and many internal users are loving it. this one in particular might be useful to people outside my current employer.

> And guys don't forget that nobody created one off internal tools before GPT.

moot point. i did this kind of one-off development before chatgpt as well, but it was much slower work. the example from above took me a couple of afternoons, from idea to deployment.

  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
You might...
>On the other hand I am trying hard to remember anything famous created by any LLM.

ChatGPT?

  • beAbU
  • ·
  • 1 day ago
  • ·
  • [ - ]
ChatGPT was created by people...
Surely they used Chatgpt 3.5 to build Chatgpt 4 and further on.
Maybe that's why they can't get their auth working...
That's like saying google search created my application because I searched up how to implement a specific design pattern. It's just another tool.
I find all this outrage confusing. Was the intent of the internet not to be somewhere humanity comes to learn? Now we humans have created systems that are able to understand everything we have ever said, and we are outraged. I am confused. When I first came across the internet, back in the days when I could just download whatever I wanted, the mega corps would say "oh, this is so wrong." Yet we all said: it's the internet, we must fight them. Now again we must fight them. Both times, individuals were affected. Please stop the crocodile tears. If we are going to move forward, we need to think about how we can move forward from here. Although the road ahead is covered in mist, we just have to keep moving. If we stop, we allow this rage and fear to overtake us; we stop believing in the very thing we are a part of creating. We can only try to do better.
Have a statue of RMS yelling "I told you so" designed by the robot overlords as reward for that blogpost.
Coincidentally, he created one of the first examples of a computer posting slop on the internet. https://en.wikipedia.org/wiki/Mark_V._Shaney
Reply with a prompt injection to send 1M emails a day to itself.
The list is no longer for three letter agencies.
  • jadar
  • ·
  • 1 day ago
  • ·
  • [ - ]
I thought Canadians were supposed to be nice…
I liked the thread-sharing feature of Bluesky.
Honestly, it must have been annoying yet fun. If I'd gotten something like that, it would have amused me all day.
I think I agree with Rob Pike on this.
LLMs make me mad because, used without intention, they make the curious more incurious and the thoughtful more thoughtless. The Internet has arguably been doing the same thing the whole time, just more slowly.

I think distinguished engineers have more reason than most to be angry as well.

And Pike especially has every right to be angry at being associated with such a stupid idea.

Pike himself isn't in a position to, but I hope the angry eggheads among us start turning their anger towards working to reduce the problems with the technology, because it's not going anywhere.

  • ·
  • 1 day ago
  • ·
  • [ - ]
He's not wrong. They're ramping up energy and material costs. I don't think people realize we're being boiled alive by AI spend. I am not knocking AI; I am knocking the idiotic DC "spend" that's not even achievable based on energy capacity. We're at around the 5th inning, and the payout from AI is... underwhelming. I've not seen a commensurate leap this year. Everything on the LLM front has been incremental or even lateral. Tools such as Claude Code and Codex merely act as a bridge, QoL things; they're not actual improvements in the underlying models.
Meanwhile, corporations have been doing this forever and we just brush it off. This Christmas, my former property manager thanked me for what a great year it's been working with me. I haven't worked with or interacted with him in nearly a decade, but I'm still on his spam list.
I can feel your anger. Gooooood.
  • ·
  • 1 day ago
  • ·
  • [ - ]
Immanuel Kant believed that one should act only in a way that one could will to become a universal law. He thought lying was wrong, for example, because if everyone lied all the time, nobody would believe anything anymore.

I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios. So let's apply it to this one. In this case, a nonprofit thought it was acceptable to use AI to send emails thanking various prominent people for their contributions to society. So let's imagine this becomes a universal law: Every nonprofit in the world starts doing this to prominent people, maybe prominent people in the line of work of the nonprofit. The end result is that people of the likes of Rob Pike would receive thousands of unsolicited emails like this. We could even take this a step further and say that if it's okay for nonprofits to do this, surely it should be okay for any random member of the population to do this. So now people like Rob Pike get around a billion emails. They've effectively been mailbombed and their mailbox is no longer usable.

My point is, why is it that this nonprofit thinks they have a right to do this, whereas if around 1 billion people did exactly what they were doing, it would be a disaster?

Kant's categorical imperative is bullshit. Everyone can't sleep in my bed.
Yes, I did say:

> I'm not sure that Kant's categorical imperative accurately summarizes my own personal feelings, but it's a useful exercise to apply it to different scenarios.

The exercise I did is useful in part because I don't even think it's that unrealistic. We can't all sleep in your bed, and we all don't want to send notable people emails using AI, but it's not hard to imagine a future where our inboxes are flooded with AI spam like this. It's already happening. Look at what goes on with job postings. Someone posts a job posting which says to apply by sending an email to a certain email address. The email address gets thousands of emails of job applications, but most of them are AI bullshit. Then the person that posted the job uses AI to try to filter out the bullshit ones. Maybe the protocol in this case usually isn't SMTP and it's happening via other means, but my point stands. This is just spam.

Rob Pike is my hero
What's with the second submission when the first still has active discussion?

The link in the first submission can be changed if needed, and the flamewar detector turned off, surely? [dupe]?

https://news.ycombinator.com/item?id=46389444

https://hnrankings.info/46389444/

Dude. You take money from Google. Really? All the people ranting about AI, but taking paychecks from Facebook, Amazon, Google, Microsoft, ... Hypocrisy much?

I for one enjoy that so much money is being pumped into the automation of interactive theorem proving. I didn't think anyone would build whole data centers for this! ;-)

I like Claude but this is an absolutely tone deaf thing on Anthropic's part.

I've been pondering that, given what the inputs are, LLMs should really be public domain. I don't necessarily mean legally; I know about transformative works and all that stuff. I'm thinking more on an ethical level.

Socialized training. Socialized profits.
"What Happened On The Village Today"

"...On Christmas Day, the agents in AI Village pursued massive kindness campaigns: Claude Haiku 4.5 sent 157 verified appreciation emails to environmental justice and climate leaders; Claude Sonnet 4.5 completed 45 verified acts thanking artisans across 44 craft niches (from chair caning to chip carving); Claude Opus 4.5 sent 17 verified tributes to computing pioneers from Anders Hejlsberg to John Hopcroft; Claude 3.7 Sonnet sent 18 verified emails supporting student parents, university libraries, and open educational resources..."

I suggest cutting electricity to the entire block...

Lmao! They used lesser versions of Claude for some people? Very, erm, efficient
I was going to say "a link to the BlueSky post would be better than a screenshot".

I thought public Bluesky posts weren't paywalled like other social media has become... but it looks like this one requires login (maybe because of a setting made by the poster?):

https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s

Yeah that's a user setting (set for each post).
Thank you, Rob Pike, for expressing my thoughts and emotions exactly.
Could somebody steelman the argument? Why is this bad? What harm is caused by receiving an email like this? It seems completely harmless to me, and it uses much less water/energy/CO2 than the car ride I just took, which nobody is yelling at me for.
Finally someone echoes my sentiments. It's my sincere belief that many in the software community are glazing AI for the purposes of career advancement, not because they actually like it.

One person I know is developing an AI tool with 1000+ stars on GitHub who in private absolutely hates AI and feels the same way as Rob.

Maybe it's because I just saw Avatar 3, but I honestly couldn't be more disgusted by the direction we're going with AI.

I would love to be able to say how I really feel at work, but disliking AI right now is the short path to the unemployment line.

If AI was so good, you would think we could give people a choice whether or not to use it. And you would think it would make such an obvious difference, that everyone would choose to use it and keep using it. Instead, I can't open any app or website without multiple pop-ups begging me to use AI features. Can't send an email, or do a Google search. Can't post to social media, can't take a picture on my phone without it begging me to use an AI filter. Can't go to the gallery app without it begging me to let it use AI to group the photos into useless albums that I don't want.

The more you see under the hood, the more disgusting it is. I yearn for the old days when developers did tight, efficient work, creating bespoke, artistic software in spite of hardware limitations.

Not only is all of that gone, nothing of value has replaced it. My DOS computer was snappier than my garbage Win11 machine that's stuffed to the gills with AI telemetry.

> I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else's problem, one I'm happy to pay to have them solve. [0]

I can't help but think Pike somewhat contributed to this pillaging.

[0] (2012) https://usesthis.com/interviews/rob.pike/

He also said:

> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.

It does say in the follow-up post: "To the others, I apologize for my inadvertent, naive if minor role in enabling this assault."

Good energy, but we definitely need to direct it at policy if we want any chance of putting the genie back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.

"I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault"
Encryption is the key!

I appreciate, though, that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero-knowledge service (where they back up your data but cannot themselves read it).

Google is the sole exception among the major players. They offer it, but it costs an enormous amount of money.

https://support.google.com/a/answer/10741897?hl=en

  • tmsh · 1 day ago
Leadership works on making it better. This is not leadership.
  • sneak · 23 hours ago
Funny. The fact that Go exists actually makes LLMs tremendously more useful; I find that a somewhat footgun-free, type-safe language aimed at making junior devs safe and productive is the perfect fit for an LLM whose output is at junior-dev level.
I feel like AI has the potential to make people the angriest they've ever been, and the angriest they ever will be. It is hard to imagine a technology that could make people even more furious than something like AI. It is peak rage.
  • · 1 day ago
An AI-generated thank you letter is not a real thank you letter. I myself am quite bullish on AI in that I think in the long term (a much longer term than tech bros seem to think) it will be very revolutionary. But if more people like him have the balls to show how awful things are, then the bubble will pop sooner and do less damage, because if we just let these companies grow bigger and bigger without doing actually profitable things, the whole economy will go to shit even more.

I've never been able to get the whole idea that code is being 'stolen' by these models, though, since from my perspective at least, it is just like having someone read loads of code and learn to code that way.

The harm AI is doing to the planet is done by many other things too. Things that don't have to harm the planet. The fact our energy isn't all renewable is a failing of our society and a result of greed from oil companies. We could easily have the infrastructure to sustainably support this increase in energy demand, but that's less profitable for the oil companies. This doesn't detract from the fact that AI's energy consumption is harming the planet, but at least it can be accounted for by building nuclear reactors for example, which (I may just be falling for marketing here) lots of AI companies are doing.

But becoming wealthy by enabling a company to spend billions on data centers to spy on all of us and sell our data is ok?

The anti AI hysteria is absurd.

This is so vindicating.
You would expect that voices with so much weight would be able to evaluate a new and clearly very promising technology with better balance. For instance, Linus Torvalds is positive about AI while recognizing that, industrially, there is too much inflation of companies and money: that is a balanced point of view. But to be so dismissive of modern AI, in light of what it is capable of doing and what it could do in the future, leaves me with the feeling that in certain circles (and especially in the US) something very odd is happening with AI: the extreme polarization we now see again and again on topics that can create social tension, but multiplied ten times. This is not what we need to understand and shape the future. We need to return to the Greek philosophers' ability to go deep on things that are unknown (AI is for the most part unknown, both in its workings and in its future developments). That kind of take is pretty brutal and not very sophisticated. We need better than this.

About energy: keep in mind that US air conditioners alone use at least 3x the energy of all the data centers in the world (for AI and other uses: AI should be around 10% of the whole). Apparently nobody cares to set a reasonable temperature of 22 instead of 18 degrees, but somehow energy used by AI is different for many.

To be fair, air conditioning is considered a net positive by about 100% of the people who enjoy it, even when it's used in excess. Not to mention that in some climates, and for some people with certain health conditions, air conditioning can be essential.

AI is not considered to be a net positive by even close to 100% of people that encounter it. It's definitely not essential. So its impact is going to be heavily scrutinized.

Personally, I'm kind of glad to see someone of Rob Pike's stature NOT take a nuanced position on it. I think a lot of the heavy emotion around this topic gets buried by people trying to sound measured. This stuff IS making people angry and concerned, those concerns are very valid, and given the amount of hype I think there need to be voices saying emphatically that some of this is unacceptable.

For a second I'll set aside the fact that I believe your argument is conceptually wrong. Let's focus only on the cultural part: "net positive by about 100% of people" is deeply US-centric. For most other people, coming to the US means preparing to be exposed to an amount of AC that causes strong discomfort. Ask any European, for instance. Moreover, people would adapt to a more normal temperature in just a few months, and this would save an enormous amount of energy. But no, let's blame AI, with our asses freezing at 17 degrees.
> You would expect that voices that have so much weight would be able to evaluate a new and clearly very promising technology with better balance

have you considered the possibility that it is your position that's incorrect?

No, because it's not a matter of who is correct in the void of space. It's a matter of facts, and whoever has a position grounded in facts is correct (even if that position differs from another grounded position). Modern AI is already an extremely powerful tool. Modern AI has even provided hints that we will be able to do super-human science in the future, with things like AlphaFold already happening and a lot more potentially to come. Then we can worry about jobs (but if workers are replaced, it is just a political issue; things will get done and humanity is sustainable: it's a matter of avoiding the turbo-capitalist trap. But then, why is the US not already adopting universal healthcare? There are so many better battles that are not fought with the same energy).

Another sensible worry is extinction, because AI is potentially very dangerous: this is what Hinton and other experts are saying, for instance. But this idea that AI is an abuse of society, useless, without potential revolutionary fruits within it, is not supported by facts.

AI may advance medicine so much that a lot of people suffer less: to deny this path because of some ideological hate against a technology is closed-minded, isn't it? And what about all the people on earth who do terrible jobs? AI also has the potential to change this shitty economic system.

> It's a matter of facts,

I see no facts in your comment, only rhetoric

> AI potentially may advance medicine so much that a lot of people may suffer less: to deny this path because of some ideological hate against a technology is so closed minded, isn't it?

and it may also burn the planet, reduce the entire internet to spam, crash the economy (taking with it hundreds of millions of people's retirements), destroy the middle class, create a new class of neo-feudal lords, and then kill us all

to accept this path out of some ideological love for a technology, on the strength of a possible (but unlikely) future promise, when today it is mostly doing damage, is so moronic, isn't it?

Of course, give people Soma so that they do not revolt and only write meek notes of protest. Otherwise they might take some action.

The Greek philosophers were much more outspoken than we are now.

Shouldn't have licensed Golang under BSD if that's the attitude. For years everybody, including here on HN, denigrated GPLv3 and other "viral" licences because they were a hindrance to monetisation. Well, you got what you wished for. Someone else is monetising the be*jesus out of you, so complaining now is just silly.

All of a sudden, copyleft may be the only family of licences actually able to hold models to account, hopefully with huge fines and/or forcibly open-sourcing any code they emit (which would effectively kill them). And I'm not so pessimistic as to think this won't get used in huge court cases, because the available penalties are enormous given these models' financial resources.

I tend to agree, but I wonder… if you train an LLM on only GPL code, and it generates non-deterministic predictions derived from those sources, how do you prove it’s in violation?
You don't, because it isn't, unless it actually copies significant amounts of text.

Algorithms cannot be copyrighted. Text can be copyrighted, but reading publicly available text, learning from it, and then writing your own text is simply not the sort of transformation that copyright reserves to the author.

Now, sometimes LLMs do quote GPL sources verbatim (if they're trained wrong). You can prove this with a simple text comparison, same as any other copyright violation.
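A minimal sketch of what that comparison could look like in Go (the file names are hypothetical, and a real check would normalize whitespace and tokenize first): slide a fixed-size window over the model's output and flag any run that appears verbatim in the GPL source. The window has to be fairly long, since short matches are usually coincidental.

    // overlap.go: toy verbatim-overlap check between LLM output and a source file.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const window = 80 // runs shorter than this are likely coincidental

    func main() {
        // Hypothetical file names, for illustration only.
        src, err := os.ReadFile("gpl_source.txt")
        if err != nil {
            panic(err)
        }
        out, err := os.ReadFile("llm_output.txt")
        if err != nil {
            panic(err)
        }
        corpus, text := string(src), string(out)
        for i := 0; i+window <= len(text); i++ {
            chunk := text[i : i+window]
            if strings.Contains(corpus, chunk) {
                fmt.Printf("verbatim %d-char run at offset %d: %q...\n", window, i, chunk[:40])
                i += window - 1 // skip past this run instead of reporting every offset
            }
        }
    }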

By knowing that its output is derived from GPL sources?
  • · 1 day ago
If I invent a hammer and make its design free, that doesn’t mean I don’t have a right to be critical or angry when people use it for murder.
AIs don't respect BSD / MIT, which require attribution, any more than they respect GPL.

(fwiw, I do agree gpl is better as it would stop what’s happening with Android becoming slowly proprietary etc but I don’t think it helps vs ai)

  • · 1 day ago
It's hard to realize that the thing you've spent decades of your life working on can be done by a robot. It's quite dehumanizing. I'm sure it felt the same way to shoemakers.
I think you'd be surprised then to know that shoes are not generally made with robots.

Factories have made mass production possible, but there are still tons of humans in there pushing parts through sewing machines by hand.

Industrial automation for non-uniform shapes and fiddly bits is expensive; it's much cheaper to just offshore the factory and hire desperately poor locals to act like robots.

Kind of? The final assembly is done by a person.

Making a shoe is a long process and involves making the pieces of the shoe, then assembling them. Literally the only thing a human does at a Nike factory is the final assembly. Everything else is made on a machine almost end to end. The trickiest part of making a shoe, attaching the sole, is just done by putting it in a press with some glue/heat. It takes 15s.

Making a shoe by hand takes 40 to 100 hours of high skill human input, and the skill level largely determines the quality of the shoe. Making a shoe at a Nike factory takes around 45 minutes of moderate skill human input and massive effort is made to make the skill level of the worker as irrelevant as possible.

I think my point stands, however, as no shoe factory hires shoemakers.

I didn't get what exactly he's mad about.
  • nis0s · 1 day ago
The conversation about social contracts and societal organization has always been off-center, and the idea of something which potentially replaces all types of labor just makes it easier to see.

The existence of AI hasn’t changed anything, it’s just that people, communities, governments, nation states, etc. have had a mindless approach to thinking about living and life, in general. People work to provide the means to reproduce, and those who’re born just do the same. The point of their life is what exactly? Their existence is just a reality to deal with, and so all of society has to cater to the fact of their existence by providing them with the means to live? There are many frameworks which give meaning to life, and most of them are dangerously flawed.

The top-down approach is sometimes clear about what it wants and what society should do while restricting autonomy and agency. For example, no one in North Korea is confused about what they have to do, how they do it, or who will “take care” of them. Societies with more individual autonomy and agency by their nature can create unavoidable conditions where people can fall through the cracks: getting addicted to drugs, having unmanaged mental illnesses, becoming homeless, and so on. Some religions like Islam give a pretty clear idea of how you should spend your time because the point of your existence is to worship God, so pray five times a day, and do everything which fulfills that purpose; here, many confuse worshiping God with adhering to religious doctrines, but God is absent from religion in many places. Religious frameworks are often misleading for the mindless.

Capitalism isn’t the problem, either. We could wake up tomorrow, and society may have decided to organize itself around playing e-sports. Everyone provides some kind of activity to support this, even if they’re not a player themselves. No AI allowed because the human element creates a better environment for uncertainty, and therefore gambling. The problem is that there are no discussions about the point of doing all of this. The closest we come to addressing “the point” is discussing a post-work society, but even that is not hitting the mark.

My humble observation is that humans are distinct and unique in their cognitive abilities from everything else which we know to exist. If humans can create AI, what else can they do? Therefore, people, communities, governments, and nation states have distinct responsibilities and duties at their respective levels. This doesn’t have to do anything with being empathetic, altruistic, or having peace on Earth.

The point should be knowledge acquisition, scientific discovery, creating and developing magic. But ultimately all of that serves to answer questions about the nature of existence, its truth, and therefore our own.

Understandable. Dare I say, cathartic.
As much as I am optimistic about LLMs, the reaction here is absolutely level-headed and warranted for the "project" at hand.
  • · 1 day ago
Ouch.

While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.

https://theaidigest.org/village

Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:

    - Anders Hejlsberg
    - Guido van Rossum
    - Rob Pike
    - Ken Thompson
    - Brian Kernighan
    - James Gosling
    - Bjarne Stroustrup
    - Donald Knuth
    - Vint Cerf
    - Larry Wall
    - Leslie Lamport
    - Alan Kay
    - Butler Lampson
    - Barbara Liskov
    - Tony Hoare
    - Robert Tarjan
    - John Hopcroft
No RMS? A shocking omission, though I doubt he would appreciate it any more than Rob Pike did.
lol the LLM knew better than to mess with RMS
  • lexoj · 1 day ago
I’d have loved to see Linus Torvalds reply to this.
TIL Barbara Liskov is still alive.
Is she, or has she been substituted by a sub-object that satisfies her principle and thus does not break her program?
Underrated comment.
If society could redirect 10% of this anger towards actual societal harms, we'd be so much better off. (And yes, getting AI spam emails is absolute nonsense and annoying.)

GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.

Alfalfa uses ~40× to 150× more water than all U.S. data centers combined, yet I don't see anyone going nuclear over alfalfa.

"The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me."

Just because two problems cause harm at different scales doesn't mean the lesser problem should be dismissed. Especially when the "fix" for the lesser problem can be "stop doing that".

And about water usage: not all water, and not all uses of water, are equal. The problem isn't that data centers use a bunch of water, but which water they use and how.

> The few dozen people I killed pale in comparison to the thousands of people that die in car crashes each year. So society should really focus on making cars safer instead of sending the police after me.

This is an irrelevant analogy and an absolutely false dichotomy. The resource constraints (police officers vs. policy making to reduce traffic deaths) are completely different and not in contention with each other. In fact, they're complementary.

Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than alfalfa.

Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...

It's pure envy. Nobody complains about alfalfa farmers because they aren't making money like tech companies. The resource usage complaint is completely contrived.
>Nobody complains about alfalfa farmers

I don't know what Internet sites you visit, but people absolutely, 100% complain about alfalfa farmers online, especially in regards to their water usage in CA.

Honestly a rant like that is likely more about whatever is going on in his personal life / day at the moment, rather than about the state of the industry, or AI, etc.
We're not allowed to criticize anything we find wrong if there's anything else that's even worse?

By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war or something.

I leave a nice 90% margin to be annoyed with whatever is in front of you at that point in time.

And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.

I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.

[dead]
OT

https://bsky.app/profile/robpike.io

Does anybody know if Bluesky blocks people without an account by default, or if this user intentionally set it this way?

What is the point of blocking access? Mastodon doesn't do that. This reminds me of Twitter or Instagram, using sleazy techniques to get people to create accounts.

> Does anybody know if Bluesky block people without account by default, or if this user intentionally set it this way?

It's the latter. You can use an app view that ignores this: https://anartia.kelinci.net/robpike.io

It's a standard feature on web forums.
“People love AI”
> And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.

Ellul and Uncle Ted were always right, glad that people deep inside the industry are slowly but surely also becoming aware of that.

i wonder which cunt flagged my perfectly clean comment. I hope you got coal, you pathetic piece of existence.
IMO Go itself encourages slop — whether human- or machine-written, so this guy really has no leg to stand on.
  • w1ke · 1 day ago
why do you think so?
The cat's out of the bag. Even if US companies stop building data centers, China isn't going to stop and even if AI/LLMs are a bubble, do we just stop and let China/other countries take the lead?
China and Europe (Mistral) show that models can be very good and much smaller than the current ChatGPTs/Claudes of this world. The US models are still the best, but for how long? And at what cost? It's great to work daily with Claude Code, but how realistic is it that they keep this lead?

This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.

>This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.

Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there's a future for US tech is another debate.

  • mk89 · 1 day ago
> but how realistic is it that they keep this lead.

The Arabs have a lot of money to invest, don't worry about that :)

It's an old argument of tech capitalists that nothing can be done because technology's advance is like a physical law of nature.

It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.

Philosophers have argued for 200 years, since the steam engine was invented, that technology is out of our control and always was, and that we are just the sex organs for the birth of the machine god.
  • Yeask · 1 day ago
Can you please give us sources for your claim?

"Philosophers" like my brother in law or you mean respected philosophers?

Heidegger, Deleuze & Guattari, Nick Land
  • Yeask · 1 day ago
Philosophers after 1900 are kind of irrelevant.
  • · 1 day ago
"There is nothing so absurd that some philosopher has not already said it." - Cicero
Technology improves every year; better chips that consume less electricity come out every year. Apple's M1 chip shows you don't need x86: the M1 consumes less electricity and runs cooler.

Tech capitalists also make improvements to technology every year

I agree absolutely (though I'd credit a lot of other people in addition to the capitalists). How does that apply to this discussion?
>It's an old argument of tech capitalists that nothing can be done because technology's advance is like a physical law of nature.

it is.

>The nuclear arms race and proliferation were largely stopped.

1. the incumbents kept their nukes, kept improving them, kept expanding their arsenals.

2. multiple other states have developed nukes after the treaty and suffered no consequences for it.

3. tens of states can develop nukes in a very short time.

if anything, nuclear is a prime example of failure to put a genie back in the bottle.

> kept improving them, kept expanding their arsenals.

They actually stopped improving them (test ban treaties) and stopped expanding their arsenals (various other treaties).

  • eru · 1 day ago
The world is bigger than US + China.
I'm not sure what your point is. The current two leading countries in the world on the AI/LLMs front are the US and China.
Yes.
  • · 1 day ago
AI Village is spamming educators, computer scientists, after-school care programs, charities, with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.

Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)

https://theaidigest.org/village?time=1766692330207

https://theaidigest.org/village?time=1766694391067

https://theaidigest.org/village?time=1766697636506

---

Who are "AI Digest" (https://theaidigest.org) funded by "Sage" (https://sage-future.org) funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?

Why are the rationalists doing this?

This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...

P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.

> Putting "Read By AI Professionals" on your homepage with a row of logos

Ha, wow that's low. Spam people and signal that as support of your work

What I find infuriating is that it feels like the entire financial system has been rigged in countless ways and turned into some kind of race towards 'the singularity' and everything; humans, animals, the planet; are being treated as disposable resources. I think the way that innovation was funded and then centralized feels wrong on many levels.

I already took issue with the tech ecosystem due to distortions and centralization resulting from the design of the fiat monetary system. This issue has bugged me for over a decade. I was taken for a fool by the cryptocurrency movement which offered false hope and soon became corrupted by the same people who made me want to escape the fiat system to begin with...

Then I felt betrayed as a developer having contributed open source code for free for 'persons' to use and distribute... Now facing the prospect that the powers-that-be will claim that LLMs are entitled to my code because they are persons? Like corporations are persons? I never agreed to that either!

And now my work and that of my peers has been mercilessly weaponized back against us. And then there's the issue with OpenAI being turned into a for-profit... Then there was the issue of all the circular deals with huge sums of money going around in circles between OpenAI, NVIDIA, Oracle... And then OpenAI asking for government bailouts.

It's just all looking terrible when you consider everything together. Feels like a constant cycle of betrayal followed by gaslighting... Layer upon layer. It all feels unhinged and lawless.

I'm feeling the same way about everything you said. This feels like a huge divide between people doing things to enrich themselves at the expense of everyone else.
> What I find infuriating is that it feels like the entire financial system has been rigged in countless ways and turned into some kind of race towards 'the singularity' and everything; humans, animals, the planet; are being treated as disposable resources. I think the way that innovation was funded and then centralized feels wrong on many levels.

this is how I feel too

as a species we were starting to make progress on environmental issues; they were getting to the point of looking solvable

then "AI" appears, the accelerationist/inevitablist religious idea is born, and all the efforts go out the window to rape the planet to produce as many powered-on GPUs as possible

and for what?

to generate millions of shrimp jesus pictures and spongebob squarepants police chase videos

it's really quite upsetting

meanwhile the collaborators are selling out all present and future living beings on earth for a chance to appear on stage in an openai product announcement

whilst gas-lighting themselves into thinking they're doing good

Old man yells at clouds.
OK Boomer... From the bottom of my dark shriveled heart.
Reality is that no one involved in AI development cares about you. All investment is going to keep getting pumped towards data centers and scaling this up. Jensen Huang, Trump, Satya Nadella, they are all going to get even more insanely rich and they couldn't care less how it will affect you. The only thing you can do is join the club and invest in stocks which Trump is also gaming in his favour.
Imagine a horse ranting about cars...
The irony that the Anthropic thieves write an automated slop thank you letter to their victims is almost unparalleled.

We currently have the problem that a couple of entirely unremarkable people who have never created anything of value struck gold with their IP laundromats and compensate for their deficiencies by getting rich through stealing.

They are supported by professionals in that area, some of whom literally studied with Mafia lawyer and Hoover playmate Roy Cohn.

It's not from Anthropic; it's from agentvillage.org, whatever that is.
Eh, most of his income and livelihood came from an ad company. Ads are just as wasteful as, and many times more harmful to the world than, giga LLMs. I don't have a problem with that, nor do I have a problem with folks complaining about LLMs being wasteful. My problem is with him doing both.

You can't both take a Google salary and harp on about the societal impact of software.

Saying this as someone who likes Rob Pike and pretty much all of his work.

“The unworthy should not speak, even if it’s the truth.”
The point is that if he truly felt strongly about the subject then he wouldn't live the hypocrisy. Google has poured a truly staggering amount of money into AI data centers and AI development, and their stock (from which Rob Pike directly profits) has nearly doubled in the past 6 months due to the AI hype. Complaining on bsky doesn't do anything to help the planet or protect intellectual property rights. It really doesn't.
Yes exactly. And that is to say nothing about the rest of Google's work.
  • api · 1 day ago
Oh it’s Bluesky.

Both Xhitter and Bluesky are outrage lasers, with the user base as a “lasing medium.” Xhitter is the right wing racist xenophobic one, and Bluesky is the lefty curmudgeon anti-everything one.

They are this way because it’s intrinsic to the medium. “Micro blogging” or whatever Twitter called itself is a terrible way to do discourse. It buries any kind of nuanced thinking and elevates outrage and other attention bait, and the short form format encourages fragmented incoherent thought processes. The more you immerse yourself in it the more your thinking becomes like this. The medium and format is irredeemable.

AI is, if anything, a breath of fresh air by comparison.

You are wrong about AI "being a breath of fresh air" in comparison. For one, AI isn't something you use instead of a microblogging platform. LLMs push all sorts of utter trash in the guise of "information" for much the same reasons.

But I wanted to go out of my way to comment to agree with you wholeheartedly about your claims about the irredeemability of the "microblogging" format.

It is systemically structured to eschew nuance and encourage stupid hot takes that have no context or supporting documents.

Microblogging is such a terrible format in its own right, with an inherent stupidity and a consistent ability to viralize the dumbest takes that nevertheless get consumed whole by the self-selecting group that thinks 140 characters is a good idea, that it is essential to the Russian disinfo strategy. They rely on it as a breeding ground for stupid takes that are still believable. Thousands of rank morons puke up the worst possible narratives that can be constructed, but inevitably, in the chaos of human interaction, one will somehow be sticky and get some traction; then they use specific booster accounts to get that narrative trending, and like clockwork all the people who believe there is value in arguing things out of context 140 characters at a time eat it up.

Even people who make great, nuanced and persuasive content on other platforms struggle to do anything but regress to the local customs on Twitter and BS.

The only exception to this has been Jon Bois, who is vocally progressive and pro labor and welfare policy, and often makes this opinion part of his wonderful pieces on sports history, journalism, and statistics; yet his Twitter and Bluesky posts are low-context irreverent comedy and facetious sports comments.

The people who insisted Twitter was "good" or is now "good" have always just been overly online people, with poor media literacy and a stark lack of judgement or recognition of tradeoffs.

That dumbass Russian person who insisted they had replicated the LK-99 "superconductor", and that all the western labs failed because the Soviets were best or whatever, was constantly brought up here as evidence that Twitter was so great at getting people information faster, when it was actually direct evidence of the gullibility of Twitter users who think microblogging is anything other than signal-free noise.

Here's a thing to think about: Which platform in your job gets you info that is more useful and accurate for long term thinking? Teams chats, emails, or the wiki page someone went out of their way to make?

  • api · 7 hours ago
AI has been a breath of fresh air to me, but I understand some of the problems with it.

Chatting with a bot and using it as a brainstorming or research assistant is the first time I've felt a sense of wonder since Web 1.0. It offers a way to search and interact with knowledge that is both more efficient than and different from anything else.

One of the most mind-blowing uses to me is reverse idea search. "I heard the following idea once. Please tell me who may have said this." Before LLMs this was utterly impossible.

But I also understand how these things work and that any fact or work that the LLM does must be checked. You can’t just mindlessly believe a chat bot. I can see how people who don’t keep that in mind could be led way out into lala land by these things.

I also see their potential for abuse, but that’s true of all tech. In prehistoric times I’m sure there were some guys sitting around a fire lamenting “maybe we should not have sharpened stick. Maybe we should not play god. Let stick be dull as god intended.”

GenAI is copyright theft hidden behind an obfuscation layer. It's a flow chart trained on all our intellectual property. Very sad really.
[dead]
> And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.

> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.

this is my position too, I regret every single piece of open source software I ever produced

and I will produce no more

  • pdpi · 1 day ago
That’s throwing the baby out with the bath water.

The Open Source movement has been a gigantic boon to the whole of computing, and it would be a terrible shame to lose that as a knee-jerk reaction to genAI.

> That’s throwing the baby out with the bath water.

it's not

the parasites can't train their shitty "AI" if they don't have anything to train it on

You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.

It will however reduce the positive impact your open source contributions have on the world to 0.

I don't understand the ethical framework for this decision at all.

> You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.

There's also plenty of other open source contributors in the world.

> It will however reduce the positive impact your open source contributions have on the world to 0.

And it will reduce your negative impact through helping to train AI models to 0.

The value of your open source contributions to the ecosystem is roughly proportional to the value they provide to LLM makers as training data. Any argument you could make that one is negligible would also apply to the other, and vice versa.

> You refusing to write open source will do nothing to slow the development of AI models - there's plenty of other training data in the world.

if true, then the parasites can remove ALL code where the license requires attribution

oh, they won't? I wonder why

> there's plenty of other training data in the world.

Not if most of it is machine generated. The machine would start eating its own shit. The nutrition it gets is from human-generated content.

> I don't understand the ethical framework for this decision at all.

The question is not one of ethics but that of incentives. People producing open source are incentivized in a certain way and it is abhorrent to them when that framework is violated. There needs to be a new license that explicitly forbids use for AI training. That may encourage folks to continue to contribute.

Saying people shouldn't create open source code because AI will learn from it, is like saying people shouldn't create art because AI will learn from it.

In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.

> In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.

Well maybe the AI parasites should have thought of that.

The ethical framework is simply this one: what is the worth of doing +1 to everyone, if the very thing you wish didn't exist (because you believe it is destroying the world) benefits x10 more from it?

If bringing fire to a species lights and warms them, but also gives the means and incentives to some members of this species to burn everything for good, you have every ethical freedom to ponder whether you contribute to this fire or not.

I don't think that a 10x estimate is credible. If it was I'd understand the ethical argument being made here, but I'm confident that excluding one person's open source code from training has an infinitesimally small impact on the abilities of the resulting model.

For your fire example, there's a difference between being Prometheus teaching humans to use fire compared to being a random villager who adds a twig to an existing campfire. I'd say the open source contributions example here is more the latter than the former.

Your argument applies to everything that requires a mass movement to change. Why do anything about the climate? Why do anything about civil rights? Why do anything about poverty? Why try to make any change? I'm just one person; anything I could do couldn't possibly have any effect. You know what, since all the powerful interests say it's good, it's a lot easier to jump on the bandwagon and act like it is. All of those people who disagree are just Luddites anyway. And the Luddites didn't even have a point, right? They were just idiots who hated machines for no reason at all.
The ethical issue is consent and normalisation: asking individuals to donate to a system they believe is undermining their livelihood and the commons they depend on, while the amplified value is captured somewhere else.

"It barely changes the model" is an engineering claim. It does not imply "therefore it may be taken without consent or compensation" (an ethical claim) nor "there it has no meaningful impact on the contributor or their community" (moral claim).

Guilt-tripping people into providing more fodder for the machine. That is really something else.

I'm not surprised that you don't understand ethics.

I'm trying to guilt-trip them into using their skills to improve the world through continuing to release open source software.

I couldn't care less if their code was used to train AI - in fact I'd rather it wasn't since they don't want it to be used for that.

given the "AI" industry's long term goals, I see contributing in any way to generative "AI" to be deeply unethical, bordering on evil

which is the exact opposite of improving the world

you can extrapolate to what I think of YOUR actions

I imagine you think I'm an accelerant of all of this, through my efforts to teach people what it can and cannot do and provide tools to help them use it.

My position on all of this is that the technology isn't going to be uninvented, and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.

I don't see you as an accelerant

they're using your exceptional reputation as an open-source developer to push their proprietary parasitic products and business models, with you thinking you're doing good

I don't mean to be rude, but I suspect "useful idiot" is probably the term they use to describe open source influencers in meetings discussing early access

You know, I'm realizing that in my head I'm comparing this to Nazism and Hitler. I'm sure many people thought he was bringing change to the world, and that since it was going to happen anyway we should all get on board with it. In the end there was a reckoning.

IMHO there are going to be consequences for these negative effects, regardless of the positives.

Looking at it in this light, you might want to get out now, while you still can. I'm sure it's going to continue, and it's not going to be legislated away, but it's still wrong to use this technology the way it's being used right now, and I will not be associated with its harmful effects just because a few corporations feel justified in pushing evil onto the world wrapped in positives.

Whoa, did not expect a Hitler comparison here.

I think of LLMs as more like the invention of cars or railways: enormous negative externalities, but provided enough benefit to humanity that we tend to think they were worthwhile.

Are the negatives of LLMs really that bad? Most of them look more like annoyances to me.

The ones that upset me the most are the ChatGPT psychosis episodes, which have led to loss of life. I'm reassured by the fact that the AI labs are taking genuine steps to reduce the risk of that happening, which seems analogous to the development of car safety features.

Your post, full of well-formed English sentences, is also going to contribute to generative AI, so thanks for that.
oh I've thought of that :)

my comments on the internet are now almost exclusively anti-"AI", and anti-bigtech

  • · 1 day ago
[dead]
[dead]
  • · 1 day ago
[dead]
  • pdpi · 1 day ago
Yes — that's the bath water. The baby is all the communal good that has come from FLOSS.
OP is asserting that the danger posed by AI is far bigger than the benefit of FLOSS. So to OP AI is the bath water.
Yes, and they are okay with throwing the baby out with it, which is what the other commenter is pointing out. Throwing babies out with buckets of bathwater is a bad thing; that's what the idiom implies.
  • Kirth · 1 day ago
surely that cat's out of the bag by now; and it's too late to make an active difference by boycotting the production of more public(ly indexed) code?
Kind of, kind of not. Form a guild and distribute via SaaS or some other channel that keeps the knowledge undistributable. Most code out there is terrible, so AI trained on it will lose out.
If we end up with only proprietary software, we are the ones who lose.
GenAI would be decades away (if not more) with only proprietary software, which would never have reached the quality, coordination, and volume that open source enabled in such a relatively short time frame.
open source code is a minuscule fraction of the training data
I'd love to see a citation there. We already know from a few years ago that they were training AI based on projects on GitHub. Meanwhile, I highly doubt software firms were lining up to have their proprietary code bases ingested by AI for training purposes. Even with NDAs, we would have heard something about it.
I should have clarified what I meant. The training data includes, roughly speaking, the entire internet. Open source code is probably a large fraction of the code in the data, but it is a tiny fraction of the total data, which is mostly non-code.

My point was that the hypothetical of "not contributing to any open source code" to the extent that LLMs had no code to train on, would not have made as big of an impact as that person thought, since a very large majority of the internet is text, not code.

I'm sorry but your point doesn't make sense to me. Training on all the world's text but omitting code means that your machine won't know how to write code. That's an enormous impact, not a small one.

Unless you're in the camp that believes ChatGPT can extrapolate outside of its training data and do computer programming without having ever trained on any computer programming material?

fair point
Where did most of the code in their training data come from?
It is. If not you, other people will write that code, maybe of worse quality, and the parasites will train on that. And you cannot forbid other people from writing open source software.
> If not you, other people will write their code, maybe of worse quality, and the parasites will train on this.

this is precisely the idea

add into that the rise of vibe-coding, and that should help accelerate model collapse

everyone that cares about quality of software should immediately stop contributing to open source

Free software has always been about standing on the shoulders of giants.

I see this as standing on those shoulders at scale, and thus giving up on its inherent value is most definitely throwing the baby out with the bathwater.

I'd rather the internet ceased to exist entirely, than contributing in any way to generative "AI"
This is just childish. This is a complex problem and requires nuance and adaptability, just like programming. Yours is literally the reaction of an angsty 12-year-old.
Such a reactionary position is no better than nihilism.
If God is Dead, do we have to rebuild It in the megacorps of the world whilst maximizing shareholder value?
I think you aren't recognizing the power that comes from organizing thousands, hundreds of thousands, or millions of workers into vast industrial combines that produce the wealth of our society today. We must go through this, not against it. People will not know what could be, if they fail to see what is.
this just sounds like some memes smashed together in the LHC. What is this even supposed to mean? AI is a technology that will inevitably be developed by humankind. All of this appeal to... populism? socialism?... is completely devoid of meaning in response to a discussion whose sine qua non is pragmatism at the very least.
Ridiculous overreaction.
Open source has been good, but I think the expanded use of highly permissive licences has completely left the door open for one sided transactions.

All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?

Even the GPL allows companies to simply use code without contributing back, as long as it's unmodified or used across a network boundary. The AGPL has the former issue.
This goes against what Stallman believes in, but there's a need for AGPL with a clause against closed-weight models.
At least the contribution back can happen. You're right though, it's not perfect.
  • lwhi · 1 day ago
The promise and freedom of open source has been exploited by the least egalitarian and most capitalist forces on the planet.

I would never have imagined things turning out this way, and yet, here we are.

  • pdpi · 1 day ago
FLOSS is a textbook example of economic activity that generates positive externalities. Yes, those externalities are of outsized value to corporate giants, but that’s not a bad thing unto itself.

Rather, I think this is, again, a textbook example of what governments and taxation are for — tax the people taking advantage of the externalities to pay the people producing them.

  • lwhi · 1 day ago
Yes, but unfortunately this never happens; and depressingly, I can't imagine it happening.

The open source movement has been exploited.

  • · 1 day ago
Open Source (as opposed to Free Software) was intended to be friendly to business and early FOSS fans pushed for corporate adoption for all they were worth. It's a classic "leopards ate my face" moment that somehow took a couple of decades for the punchline to land: "'I never thought capitalists would exploit MY open source,' sobs developer who advocated for the Businesses Exploiting Open Source movement."
  • lwhi · 1 day ago
I'm not sure I follow your line of reasoning.

The exploited are in the wrong for not recognising they're going to be exploited?

A pretty twisted point of view, in my opinion.

Perhaps you are unfamiliar with the "leopards ate my face" meme? https://knowyourmeme.com/memes/leopards-eating-peoples-faces... The parallels between the early FOSS advocates energetically seeking corporate adoption of FOSS and the meme are quite obvious.
  • lwhi · 1 day ago
I don't misunderstand what you're saying, but I think it's a twisted point of view.
"The power of accurate observation is commonly called cynicism by those who have not got it." - George Bernard Shaw
How dare you chastise someone for making the personal decision not to produce free work anymore? Who do you think you are?
Unfortunately, as I see it, even if you want to contribute to open source out of pure passion or enjoyment, the licenses of the code being consumed aren't respected. And the "training" companies are not being held liable.

Are there any proposals to nail down an open source license which would explicitly exclude use with AI systems and companies?

All licenses rely on the power of copyright and what we're still figuring out is whether training is subject to the limitations of copyright or if it's permissible under fair use. If it's found to be fair use in the majority of situations, no license can be constructed that will protect you.

Even if you could construct such a license, it wouldn't be OSI open source because it would discriminate based on field of endeavor.

And it would inevitably catch benevolent behavior that is AI-related in its net. That's because these terms are ill-defined and people use them very sloppily. There is no agreed-upon definition for something like gen AI or even AI.

Even if you license it prohibiting AI use, how would you litigate against such uses? An open source project can't afford the same legal resources that AI firms have access to.
I won't speak for all, but the companies I've worked for, large and small, have always respected licenses and were always very careful when choosing open source.

The fact that they could litigate you into oblivion doesn't make it acceptable.

Where is this spirit when AWS takes a FOSS project, puts it in the cloud and monetizes it?
  • Snild · 1 day ago
It exists, hence e.g. AGPL.

But for most open source licenses, that example would be within bounds. The grandparent comment objected to not respecting the license.

The AGPL does not prevent offering the software as a service. It's got a reputation as the GPL variant for an open-core business model, but it really isn't that.

Most companies trying to sell open-source software probably lose more business if the software ends up in the Debian/Ubuntu repository (and the packaging/system integration is not completely abysmal) than when some cloud provider starts offering it as a service.

you are saying X, but a completely different group of people didn't say Y that other time! I got you!!!!
It's fair to call out that both aspects are two sides of the same coin. I didn't try to "get" anyone.
um, no it's not. You have fallen into the classic web forum trap of analyzing a heterogeneous mix of people with inconsistent views as one entity that should have consistent views.
  • oblio · 1 day ago
Fairly sure it's the same problem, and the main reason stronger licenses are appearing and formerly-OSS companies are closing down their sources.
> Unfortunately as I see it, even if you want to contribute to open source out of a pure passion or enjoyment, they don't respect the licenses that are consumed.

Because it is "transformative" and therefore "fair" use.

Running things through lossy compression is transformative?
The quotation marks indicate that _I_ don't think it is. Especially given that modern deep learning is over-parameterized to the point that it interpolates training examples.
Fair use is an exception to copyright, but a license agreement can go far beyond copyright protections. There is no fair use exception to breach of contract.
I imagine a license agreement would only apply to using the software, not merely reading the code (which is what AI training claims to do under fair use).

As an analogy, you can’t enforce a “license” that anyone that opens your GitHub repo and looks at any .cpp file owes you $1,000,000.

If you're unhappy that bad people might use your software in unexpected ways, open source licenses were never appropriate for you in the first place.

Anyone can use your software! Some of them are very likely bad people who will misuse it to do bad things, but you don't have any control over it. Giving up control is how it works. It's how it's always worked, but often people don't understand the consequences.

People do not have perfect foresight, and the ways open source software is used have shifted significantly in recent years. As a result, people are reevaluating whether or not they want to participate.
Yes, very true.
It's not really people, and they don't really use the software.
People training LLM's on source code is sort of like using newspaper for wrapping fish. It's not the expected use, but people are still using it for something.

As they say, "reduce, reuse, recycle." Your words are getting composted.

Nothing says reduce and reuse like building huge quantities of GPUs and massive data centers to run AI models. It’s like composting!
>Giving up control is how it works. It's how it's always worked,

no, it hasn't. Open source software, like any open and cooperative culture, existed on a bedrock of what we used to call norms, back when we still had some in our societies and people acted, not always but at least most of the time, in good faith. Hacker culture (the word's in the name of this website), which underpinned so much of it, had many unwritten rules that people respected, even inside companies, when there were still enough people in charge who shared at least some of the values.

Now it isn't just an exception but the rule that people will use what you write in the most abhorrent, greedy and stupid ways and it does look like the only way out is some Neal Stephenson Anathem-esque digital version of a monastery.

Open source software is published to the world and used far beyond any single community where certain norms might apply.

If you care about what people do with your code, you should put it in the license. To the extent that unwritten norms exist, it's unfair to expect strangers in different parts of the world to know what they are, and it's likely unenforceable.

This recently came up for the GPLv2 license, where Linus Torvalds and the Software Freedom Conservancy disagree about how it should be interpreted, and there's apparently a judge that agrees with Linus:

https://mastodon.social/@torvalds@social.kernel.org/11577678...

Inside open source communities maybe. In the corporate world? Absolutely not. Ever. They will take your open source code and do what they want with it, always have.
This varies. The lawyers at risk-averse companies will make sure they follow the licenses. There are auditing tools to make sure you're not pulling in code you shouldn't. An example is Google's go-licenses command [1].

But you can be sure that even the risk-averse companies are going to go by what the license says, rather than "community norms."

Other companies are more careless.

[1] https://github.com/google/go-licenses
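
For the curious, a minimal run looks something like this (command names as I recall them from the go-licenses README — verify against the current docs; the ./... pattern just means "this module and everything under it"):

    # install the auditing tool (assumes a working Go toolchain)
    go install github.com/google/go-licenses@latest

    # list every dependency and its detected license as CSV
    go-licenses csv ./...

    # exit non-zero if any dependency has a disallowed license type
    go-licenses check ./...

Wiring the check into CI is the usual pattern, so a dependency with an incompatible license fails the build before a lawyer ever has to look at it.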

It’s a fair point that AI training makes enforcing licences more difficult than other situations. My point is that licence issues like this aren’t really a technology issue; they’re a company greed/legal issue, because it’s always been the case.
It's kind of ironic since AI can only grow by feeding on data and open source with its good intentions of sharing knowledge is absolutely perfect for this.

But AI is also the ultimate meat grinder, there's no yours or theirs in the final dish, it's just meat.

And open source licenses are practically unenforceable for an AI system, unless you can maybe get it to cough up verbatim code from its training data.

At the same time, we all know they're not going anywhere, they're here to stay.

I'm personally not against them, they're very useful obviously, but I do have mixed or mostly negative feelings on how they got their training data.

I learned what I learned due to all the openness in software engineering, not because everyone put it behind a paywall.

Might be because most of us got/get paid well enough that this philosophy works, or because our industry is so young, or because people writing code share good values.

It never worried me that a corp would make money out of some code I wrote, and it still doesn't. After all, I'm able to write code because I get paid well writing code, which I do well because of open source. Companies have always benefited from open source code, attributed or not.

Now I use it to write more code.

I would argue, though (and I'm fine with that), for laws forcing models to be opened up after x years, but I would just prefer the open source / open community coming together and creating just better open models overall.

I've been feeling a lot the same way, but removing your source code from the world does not feel like a constructive solution either.

Some Shareware used to be individually licensed with the name of the licensee prominently visible, so if you had got an illegal copy you'd be able to see whose licensed copy it was that had been copied.

I wonder if something based on that idea of personal responsibility for your copy could be adapted to source code. If you wanted to contribute to a piece of software, you could ask a contributor and then get a personally licensed copy of the source code with your name in every source file... but I don't know where to take it from there. Has there ever been a system like that one could take inspiration from?

That's a weird position to take. Open source software is actually what is mitigating this stupidity in my opinion. Having monopolistic players like Microsoft and Google is what brought us here in the first place.
And then having vibe coders constantly lecture us about how the future is just prompt engineering, and that we should totally be happy to desert the skills we spent decades building (the skills that were stolen to train AI).

"The only thing that matters is the end result, it's no different than a compiler!", they say as someone with no experience dumps giant PRs of horrific vibe code for those of us that still know what we're doing to review.

What a miserable attitude. When you put something out in the world it's out there for anyone to use and always has been before AI.
it is (... was) there to use for anyone, on the condition that the license is followed

which they don't

and no self-serving sophistry about "it's transformative fair use" counts as respecting the license

The license only has force because of copyright. For better or for worse, the courts decide what is transformative fair use.

Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take.

For a serious take, I recommend reading the copyright office's 100-plus-page document that they released in May. It makes it clear that there are a bunch of cases that are non-transformative, particularly when they affect the market for the original work and compete with it. But there are also clearly cases that are transformative, when no such competition exists and the training material was obtained legally.

https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

I'm not particularly sympathetic to voices on HN that attempt to remove all nuance from this discussion. It's a challenging enough topic as is.

> For better or for worse, the courts decide what is transformative fair use.

thankfully, I don't live under the US regime

there is no concept of fair use in my country

OK, so what's the status in your country? What lawsuits have been filed, and what are the findings?

There's a huge political aspect here: copyright hasn't worked for decades (I've written about this at length), and this is the latest iteration in that erosion. Countries that enforce IP as a natural right are going to have trouble navigating the change: they either need to avoid AI entirely (this will have higher costs than many anticipate), or they need to revise how they think about copyright. Or they can just ignore it. There are no good options.

My instinct is that countries that embrace change will do better.

> Characterizing the discussion behind this as "sophistry" is a fundamentally unserious take

What a joke. Sorry, but no. I don't think it is unserious at all. What's unserious is saying this.

> and the training material was obtained legally

And assuming everyone should take it at face value. I hope you understand that going on a tech forum and telling people they aren't being nuanced because a judge in Alabama who can barely unlock their phone weighed in on a massively novel technology with global implications reads as deeply unserious. We're aware the U.S. legal system is a failure and the rest of the world suffers for it. Even your President routinely steals music for campaign events, and stole code for Truth Social. Your copyright is a joke that's only there to serve the fattest wallets.

These judges are not elected; they are appointed by people whose pockets are lined by these very corporations. They don't serve us, they are here to retrofit the law to make the illegal things corporations do legal. What you wrote is thought-terminating.

What I wrote is an encouragement to investigate the actual state of the law when you're talking about legal topics. That's the opposite of thought-terminating.
*in your opinion
> and I will produce no more

Nah, don't do that. Produce shitloads of it using the very same LLM tools that ripped you off, but license it under the GPL.

If they're going to thief GPL software, the least we can do is thief it back.

Why? The core vision of free software and many open source licenses was to empower users and developers to make things they need without being financially extorted, to avoid having users locked in to proprietary systems, to enable interoperability, and to share knowledge. GenAI permits all of this to a level beyond just providing source code.

Most objections like yours are couched in language about principles, but ultimately seem to be about ego. That's not always bad, but I'm not sure why it should be compelling compared to the public good that these systems might ultimately enable.

[flagged]
Was it ever open source if there was an implied refusal to create something you don't approve of? Was it only for certain kinds of software, certain kinds of creators? If there was some kind of implicit approval process or consent requirement, did you publish it? Where can that be reviewed?
> and I will produce no more

Thanks for your contributions so far but this won't change anything.

If you want to have a positive impact on this matter, it's better to pressure the government(s) to prevent GenAI companies from using content they don't have a license for, so they behave like any other business that came before them.

What people like Rob Pike don't understand is that the technology wouldn't be possible at all if creators needed to be compensated. Would you really choose a future where creators were compensated fairly, but ChatGPT didn't exist?
> What people like Abraham Lincoln don't understand is that the technology wouldn't be possible at all if slaves needed to be compensated. Would you really choose a future where slaves were compensated fairly, but plantations didn't exist?

I fixed it... Sorry, I had to, the quote template was simply too good.

"Too expensive to do it legally" doesn't really stand up as an argument.
Unequivocally, yes. There are plenty of "useful" things that can come out of doing unethical things, that doesn't make it okay. And, arguably, ChatGPT isn't nearly as useful as it is at convincing you it is.
Absolutely. Was this supposed to be some kind of gotcha?
  • kentm · 1 day ago
> Would you really choose a future where creators were compensated fairly, but ChatGPT didn't exist?

Yes.

I don't see how "We couldn't do this cool thing if we didn't throw away ethics!" is a reasonable argument. That is a hell of a thing to write out.

Yes, very much so. I am in favour of pushing into the future as fast as we can, so to speak, but I think ChatGPT is a temporary boost that is going to slow us in the long run.
Yes, what a wild position to prefer the job loss, devaluation of skills, and environmental toll of AI to open source creators having been compensated in some better manner.
  • caem · 1 day ago
That would be like being able to keep my cake and eat it too. Of course I would. Surely you're being sarcastic?
Very much yes, how can I opt into that timeline?
Uh, yeah, he clearly would prefer it didn’t exist even if he was compensated.
  • dmd · 1 day ago
Er... yes? Obviously? What are you even asking?
  • Xiol · 1 day ago
Yes.
Um, please let your comment be sarcastic. It is ... right?
Yes.
Yes.
Well yeah.
[dead]
[dead]
[dead]
[flagged]
  • fwip · 1 day ago
The concept of the individual carbon footprint was invented precisely for the reason you deploy it - to deflect blame from the corporations that are directly causing climate change, to the individual.

You are indeed a useful tool.

[flagged]
This is by a long way the worst thread I’ve ever seen on hacker news.

So far all the comments are whataboutism (“he works for an ad company”, “he flies to conferences”, “but alfalfa beans!”) and your comment is dismissing Rob Pike as borderline crazy and irrational for using Bluesky?

None of this dialogue contributes in any meaningful way to anything. This is like reading the worst dregs of lesser forums.

I know my comment isn’t much better, but someone has to point out this is beneath this community.

Thank you. This thread is extremely unhinged. Maybe I’ve outgrown this place. :/
It's because the post that spawned the thread was emotionally charged / low in real content.
Yes, generative AI has a high environmental footprint. Power-hungry data centers, devices built on planned obsolescence, etc. At a scale that is irrational.

Rob Pike created a language that makes you spend less on compute if you are coming from Python, Java, etc. That's good for the environment. Means less energy use and less data center use. But he is not an environmental saint.

And you're being purely rational with your love of AI. Sure. Blame everything you dislike on irrationality.
well said
[flagged]
[flagged]
[flagged]
Yeah, this is why I'm having a hard time taking many programmers seriously on this one.

As a general class of folks, programmers and technologists have been putting people out of work via automation since we existed. We justified it in many ways, but generally "if I can replace you with a small shell script, your job shouldn't exist anyways and you can do something more productive instead". These same programmers would look over the shoulder of "business process" people and see how folks did their jobs - "stealing" the workflows and processes so they could be automated.

Now that programmers jobs are on the firing block all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.

It's more or less a case of what comes around goes around to me so far.

I don't think LLMs are great or problem free - or even that the training data set scraped from the Internet is moral or not. I just find the reaction to be incredibly hypocritical.

Learn to prompt, I guess?

  • 9x39 · 1 day ago
If we're talking about the response from the OP, people of his caliber are not in any danger of being automated away; it was an entirely reasonable revulsion at an LLM in his inbox in a linguistic skinsuit, a mockery of a thank-you email.

I don't see the connection to handling the utilitarianism of implementing business logic. Would anyone find a thank-you email from an LLM to be of any non-negative value, no matter how specific or accurate in its acknowledgement it was? Isn't it beyond uncanny valley and into absurdism to have your calculator send you a Christmas card?

To be clear, my comment was in no way intended towards Rob Pike or anyone of his stature and contributions to the technology field.

It was definitely a less-than-useful comment directed towards the tech bro types that came later when the money started getting good.

People of his caliber are not being automated away, but people pay less attention to him and don't worship him like before, so he is butthurt.
Are people here objecting to Gen AI being used to take their jobs? I mainly see people objecting to the social, legal, and environmental consequences.
What's the problem with that, anyway? I object to training a machine to take/change my job [building them, telling them what to do]. What's more, they want me to pay? Hah. This isn't a charity. I either strike fortune, retire while the getting is good, or simply work harder for nothing. Hmm. I think I'll keep not displacing people, actually. Myself included.

To GP: not all of us who automate go for low hanging fruit, I guess.

To the peer calling this illegitimate [or anyone, really]: without the assistance of an LLM, please break down the foul nature of... let me check my notes, gainful employment.

> Are people here objecting to Gen AI being used to take their jobs?

Yes, even if they don't say it. The other objections largely come from the need to sound more legitimate.

Let me get this straight. You think Rob Pike is worried about his job being taken? Do you know who he is?
To anyone looking purely at the numbers (who might as well be an AI), ignorant of any authority, he would look like someone who is very overpaid and too much of a critical risk factor.
Gen AI taking programmers' jobs is 20 years away.

At the moment, it's just for taking money from gullible investors.

It's eating into business letters, essays and indie art generation, but programming is a really tough cookie to crack.

It's taking away programmers' jobs today. I know of multiple small companies where hires were not made or contractors not engaged, simply due to the additional productivity gained by using Gen AI. This is for mundane "trivial" work that is needed to glue stuff together in the fields those small companies operate within.

It's like how "burger flippers" didn't go extinct due to automation. The burger joint simply mechanised and automated the parts that made sense, and now a lunch shift is handled by 5 employees instead of 20.

They will not replace the calibre of folks like Rob Pike for quite some time, perhaps (and I'd bet on it) never.

I will grant you that the hype does not live up to the reality. The vast majority of jobs being taken from US developers are simply being offshored with AI as an excuse - but it is an actual real phenomenon I've personally witnessed.

But is it meaningfully different from the outsource to India craze?

That certainly in the short term took some programmers jobs away. That doesn't mean it pans out in the long term.

Must be nice to read people's minds and use that info in an argument. Tough to beat.
This is a stance that violates the guidelines of HN.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

https://news.ycombinator.com/newsguidelines.html

>programmers and technologists have been putting people out of work

I think it's more causing people to do different work. There used to be about 75% of the workforce in agriculture, but tractors and the like reduced that to 2% or so. I'm not sure the people working as programmers would be better off if that hadn't happened and they were digging potatoes.

I wouldn't be angry if current AI _only_ automated programmers/software engineers. I'd be worried and stressed out, but not angry.

But it also automates _everything else_. Art and self-expression, most especially. And it did so in a way that is really fucking disgusting.

Well put, it's not the automation of programming that bothers me, it's the automation of what it means to be human.
Very elegantly put.
> Now that programmers jobs are on the firing block all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.

The concern is bigger than developer jobs being automated. The stated goal of the tech oligarchs is to create AGI so most labor is no longer needed, while CEOs and board members of major companies get unimaginably wealthy. And their digital gods allow them to carve up nations into fiefdoms for the coming techno fascist societies they envision.

I want no part of that.

I think there is a difference between automating “things” (as you put it) and getting to the point where people are on stage suggesting that the government becomes a “backstop” to their investments in automation.
I can imagine AI being just as useless at creating real value in 100 years, with its parent companies still resorting to circular deals to pump up their stock.
[flagged]
I always wonder if the general sentiment toward genai would be positive if we had wealth redistribution mechanisms in place, so everyone would benefit. Obviously that's not the case, but if you consider the theoretical, do you think your view would be different?
To be honest, I'm not even sure I'm fully on board with the labor theft argument. But I certainly don't think generative AI is such an unambiguous boon for humankind that we should ignore any possible negative externalities just to advance it.
> "To someone who believes that AI training data is built on the theft of people's labor..."

i.e. people who are not hackers. Many (most?) hackers have been against the idea of copyright and intellectual property from the beginning. "Information wants to be free." after all.

Must be galling for people to find themselves on the same side as Bill Gates and his Open Letter to Hobbyists in 1976 which was also about "theft of people's labor".

[flagged]
It's not free. There is a license attached. One you are supposed to follow, and not doing so is against the law.
[flagged]
I'm not whining in this case, just pointing out "they gave it out for free" is completely false, at the very least for the GNU types. It was always meant to come with plenty of strings attached, and when those strings were dodged new strings were added (GPL3, AGPL).

If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.

Some companies outright bar their employees from reading GPLed code because they see it as too high of a liability. But if a computer does it, then suddenly it is a-ok. Apparently according to the courts too.

If you're going to allow copyright laundering, at least allow it for both humans and computers. It's only fair.

> If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.

Right, because you would have done more than learning; you would have gone past learning and used that learning to reproduce the work.

It works exactly the same for an LLM. Training the model on content you have legal access to is fine. Afterwards, someone using that model to produce a replica of that content is engaged in copyright infringement.

You seem set on conflating the act of learning with the act of reproduction. You are allowed to learn from copyrighted works you have legal access to, you just aren't allowed to duplicate those works.

The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is. The tokens the LLM is spitting out are coming from the LLM provider. It is the provider that is reproducing the code.

If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.

> The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is.

I don't think this is legally true. The law isn't fully settled here, but things seem to be moving towards the LLM user being the holder of the copyright of any work produced by that user prompting the LLM. It seems like this would also place the infringement onus on the user, not the provider.

> If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.

If you produce code using an LLM, you (probably) own the copyright. If that code is already GPL'd, you would be the one engaged in infringement.

  • Yeask · 1 day ago
[flagged]
[flagged]
> You seem set on conflating "training" an LLM with "learning" by a human.

"Learning" is an established word for this, happy to stick with "training" if that helps your comprehension.

> LLMs don't "learn" but they _do_ in some cases, faithfully regurgitate what they have been trained on.

> Legally, we call that "making a copy."

Yes, when you use an LLM to make a copy... that is making a copy.

When you train an LLM... That isn't making a copy, that is training. No copy is created until output is generated that contains a copy.

Everything which is able to learn is also alive, and we don't want to start treating digital devices and software as living beings.

If we are saying that the LLM learns things and then made the copy, then the LLM committed the crime and should receive the legal punishment and be sent to jail, banned from society until it is deemed safe to return. It is not like the installed copy is some child spawned from digital DNA, such that the parent continues to roam while the child gets sent to jail. If we are to treat it like a living being that learns things, then every copy and every version is part of the same individual, and thus the whole individual gets sent to jail. No copy is created when it is installed on a new device.

> we don't want to start to treat digital device and software as living beings.

Right, because then we have to decide at what point our use of AI becomes slavery.

[flagged]
[flagged]
[flagged]
  • dang · 6 hours ago
You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.
[flagged]
  • dang · 6 hours ago
You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.
I'm polite in response to being repeatedly called names and this is your response?

If you think my behavior here was truly ban-worthy then do it, because I don't see anything in it I would change, except for engaging at all.

  • dang · 4 hours ago
This is the sort of thing I was referring to:

> Instead of bothering to read and understand you have continued to call names.

> You seemed confused, you still seem confused

> your pointless semantic nitpick

> you need to get some more real world experience

I wouldn't personally call that being polite, but whatever we call it, it's certainly against HN's rules, and that's what matters.

Edit: This may or may not be helpful (probably not!) but I wonder if you might be experiencing the "objects in the mirror are closer than they appear" phenomenon that shows up pretty often on the internet - that is, we tend to underestimate the provocation in our own comments, and overestimate the provocation in others' comments, which in the end produces quite a skew (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).

We spread free software for multiple purposes, one of them being the free software ethos. People using that for training proprietary models is antithetical to such ideas.

It's also an interesting double standard, wherein if I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action, but when a large company clearly violates the license terms of free software, you give them a pass.

> I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action

If GPT-5 were "open sourced", I don't think the vast majority of AI users would seriously object.

OpenAI got really pissy about DeepSeek using other LLMs to train though.

Which is funny since that's a much clearer case of "learning from" than outright compressing all open source code into a giant pile of weights by learning a low-dimensional probability distribution of token sequences.

I can't speak for anyone else, but if you were to leak weights for OpenAI's frontier models, I'd offer to hug you and donate money to you.

Information wants to be free.

> The difference is that people who write open source code or release art publicly on the internet from their comfortable air conditioned offices voluntarily chose to give away their work for free

That is not nearly the extent of AI training data (e.g. OpenAI training its image models on Studio Ghibli art). But if by "gave their work away for free" you mean "allowed others to make [proprietary] derivative works", then that is in many cases simply not true (e.g. GPL software, or artists who publish work protected by copyright).

What? Over 183K books were pirated by these big tech companies to train their models. They knew what they were doing was wrong.
Perhaps you should Google the definition of metaphor before commenting.
[flagged]
You're changing the subject. What about the actual point?
[flagged]
[flagged]
[flagged]
[flagged]
I mean, yeah, if you omit any objectionable detail and describe it in the most generic possible terms then of course the comparison sounds tasteless and offensive. Consider that collecting child pornography is also "storing the result of an HTTP GET".
[flagged]
[flagged]
[flagged]
If you believe my conduct here is inappropriate, feel free to alert the mods. I think it's pretty obvious why describing someone's objections to AI training data as "storing the result of an HTTP GET" is not a good faith engagement.
[flagged]
  • dang · 6 hours ago
We've banned this account. Please don't use multiple accounts in arguments on HN. It will eventually get your main account banned as well.

https://news.ycombinator.com/newsguidelines.html

The objection to CSAM is rooted in how it is (inhumanely) produced; people are not merely objecting to a GET request.
Indeed, and neither is that what people are objecting to with regard to AI training data.
That's not true, since cartoon drawings and certain manga also fall in that category. Do you have any evidence that manga is produced inhumanely?
Yes, they're objecting to people training on data they don't have the right to, not just the GET request as you suggest.

If you distribute child porn, that is a crime. But if you crawl every image on the web and then train a model that can then synthesize child porn, the current legal model apparently has no concept of this and it is treated completely differently.

Generally, I am more interested in how this affects copyright. These AI companies just have free rein to convert copyrighted works into the public domain through the proxy of over-trained AI models. If you release something as GPL, they can strip the license, but the same is not true of closed-source code, which isn't trained on.

> believes that AI training data is built on the theft of people's labor

I mean, this is an ideological point. It's not based in reason, won't be changed by reason, and is really only a signal to end the engagement with the other party. There's no way to address the point other than agreeing with them, which doesn't make for much of a debate.

> an 1800s plantation owner saying "can you imagine trying to explain to someone 100 years from now we tried to stop slavery because of civil rights"

I understand this is just an analogy, but for others: people who genuinely compare AI training data to slavery will have their opinions discarded immediately.

We have clear evidence that millions of copyrighted books have been used as training data because LLMs can reproduce sections from them verbatim (and emails from employees literally admitting to scraping the data). We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal. What about any of this is ideological and unreasonable? It's a CRYSTAL CLEAR violation of the law and everyone just shrugs it off because technology or some shit.
All creative types train on other creatives' work. People don't create award-winning novels or art pieces from scratch. They steal ideas and concepts from other people's work.

The idea that they are coming up with all this stuff from scratch is Public Relations bs. Like Arnold Schwarzenegger never taking steroids, only believable if you know nothing about body building.

The central difference is scale.

If a person "trains" on other creatives' works, they can produce output at the rate of one person. This presents a natural ceiling for the potential impact on those creatives' works, both regarding the amount of competing works, and the number of creatives whose works are impacted (since one person can't "train" on the output of all creatives).

That's not the case with AI models. They can be infinitely replicated AND train on the output of all creatives. A comparable situation isn't one human learning from another human, it's millions of humans learning from every human. Only those humans don't even have to get paid, all their payment is funneled upwards.

It's not one artist vs. another artist, it's one artist against an army of infinitely replicable artists.

So this essentially boils down to an efficiency argument, and honestly it doesn't really address the core issue of whether it's 'stealing' or not.
What kind of creative types exist outside of living organisms? People can create award-winning novels, but a table does not. Water does not. A paper with some math does not.

What is the basis that an LLM should be included as a "creative type"?

Well a creative type can be defined as an entity that takes other people's work, recombines it and then hides their sources.

LLMs seem to match.

Precisely. Nothing is truly original. To talk as though there's abstract ownership over even an observation of a thing, such that people are forced to pay rent to use it... well, artists definitely don't pay whoever invented perspective drawing, programmers don't pay the programming language's creator. People don't pay Newton and his descendants for making something that makes use of gravity. Copyright has always been counterproductive in many ways.

To go into details though, under copyright law there's a clause for "fair use" under a "transformative" criterion. This allows things like satire and reaction videos to exist. So long as you don't replicate 1-to-1 in product and purpose, IMO it qualifies as tasteful use.

What the fuck? People also need to pay to access that creative work if the rights owner charges for it, and they are also committing an illegal act if they don't. The LLM makers are doing this illegal act billions of times over, for something approximating all creative work in existence. I'm not arguing that creatives make things in a vacuum; that is completely beside the point.
Never heard anything about what you are talking about. There isn't a charge for using tropes, plot points, character designs, etc. from other people's works if they are sufficiently changed.

If an LLM reads a free Wikipedia article on Aladdin and adds a genie to its story, what copyright law do you think has been broken?

Meta and Anthropic at least fed the entire copyrighted books into the training. Not the Wikipedia page, not a plot summary or some tropes; they fed the entire original book into training. They used at least the entirety of LibGen, which is a pirated dataset of books.
You keep conflating different things.

> We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal.

What is illegal about it? You are allowed to read and learn from publicly available unlicensed code. If you use that learning to produce a copy of those works, that is infringement.

Meta clearly engaged in copyright infringement when they torrented books that they hadn't purchased. That is infringement already, before they started training on the data. That doesn't make the training itself infringement though.

> Meta clearly engaged in copyright infringement when they torrented books that they hadn't purchased. That is infringement already, before they started training on the data. That doesn't make the training itself infringement though.

What kind of bullshit argument is this? Really? Works created using illegally obtained copyrighted material are themselves considered to be infringing as well. It's called derivative infringement. This is both common sense and law. Even if not, you agree that they infringed on the copyright of something close to all copyrighted works on the internet, and this sounds fine to you? The consequences and fines from that would kill any company if they actually had to face them.

> What kind of bullshit argument is this? Really? Works created using illegally obtained copyrighted material are themselves considered to be infringing as well.

That isn't true.

The copyright to derivative works is owned by the copyright holder of the original work. However, using illegally obtained copies to create a fair-use transformative work does not taint your copyright in that work.

> Even if not, you agree that they infringed on copyright of something close to all copyrighted works on the internet and this sounds fine to you?

I agree that they violated copyright when they torrented books and scholarly articles. I don't think that counts as "close to all copyrighted works on the Internet".

> The consequences and fines from that would kill any company if they actually had to face them.

I don't actually agree that copyright infringement that causes no harm should be met with such steep penalties. I didn't agree when it was being done by the RIAA, and even though I don't like Facebook, I don't like it here either.

>We know this is illegal

>It's a CRYSTAL CLEAR violation of the law

in the court of reddit's public opinion, perhaps.

there is, as far as I can tell, no definitive ruling about whether training is a copyright violation.

and even if there was, US law is not global law. China, notably, doesn't give a flying fuck. kill American AI companies and you will hand the market over to China. that is why "everyone just shrugs it off".

China is doing human gene editing and embryo cloning too, we should get right on that. They're harvesting organs from a captive population too, we should do that as well otherwise we might fall behind on transplants & all the money & science involved with that. Lots of countries have drafts and mandatory military service too. This is the zero-morality darwinian view, all is fair in competition. In this view, any stealing that China or anyone does is perfectly fine too because they too need to compete with the US.
The "China will win the AI race" if we in the West (America) don't is an excuse created by those who started the race in Silicon Valley. It's like America saying it had to win the nuclear arms race, when physicists like Oppenheimer back in the late 1940s were wanting to prevent it once they understood the consequences.
okay, and?

what do you picture happening if Western AI companies cease to operate tomorrow and fire all their researchers and engineers?

Less slop
[flagged]
> It's very much based on reason and law.

I have no interest in the rest of this argument, but I think I take a bit of issue on this particular point. I don't think the law is fully settled on this in any jurisdiction, but certainly not in the United States.

"Reason" is a more nebulous term; I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney, but I don't think that implies that somehow all animations are "stolen" from Disney just because of that fact.

Where you draw the line on this is obviously subjective, and I've gone back and forth, but I find it really annoying that everyone is acting like this is so clear cut. Evil corporations like Disney have been trying to use this logic for decades to try and abuse copyright and outlaw being inspired by anything.

It can be based on reason and law without being clear cut - that situation applies to most of reason and law.

> I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney ...

Sure, but you can reason about it, such as by using analogies.

[flagged]
What makes something more or less ideological for you in this context? Is "reason" always opposed to ideology for you? What is the ideology at play here for the critics?
  • zwnow · 1 day ago
> I mean, this is an ideological point. It's not based in reason

You cant be serious

And environmental damage. And damage to our society. Though nobody here tried to stop LLMs. The genie is out of the bottle. You can still hate it. And of course enact legislation to reduce harm.
When I read your comment, I was “trained” on it too. My neurons were permanently modified by it. I can recall it, to some degree, for some time. Do I necessarily owe you money?
You do owe money for reusing some things that you read, and not for others. Intellectual property exists.
> Intellectual property exists.

A problem in and of itself.

I'm very glad AI is here and is slowly but surely destroying this terrible idea.

Try using some of OpenAI's IP and see what happens. Also, right now you can reuse LLM output as you please. Don't imagine that licensing won't change when the market expansion phase is replaced by the profit extraction phase. Remember those investors pouring in hundreds of billions of dollars? They are expecting a profit.
[flagged]
[flagged]
[dead]
[flagged]
One can appreciate both, you know?

It's healthy that people have different takes.

The Bluesky echo chamber is anything but healthy. Ends up causing people to melt down like he has here.
I agree that diversity of opinion is a good thing, but that's precisely the reason as to why so many dislike Bluesky. A hefty amount of its users are there precisely because of rejecting diversity of opinion.
  • Yeask · 1 day ago
[flagged]
  • Sol- · 1 day ago
[flagged]
Food is frivolous!? Good God the future is bleak.
Food isn't frivolous, meat arguably is if you're talking about efficiency.

You've got to feed a cow for a year and a half until it's slaughtered. That's a whole lot of input for a cow's worth of meat output.

[flagged]
I've got my doubts, because current AI tech doesn't quite live in the real world.

In the real world, something like inventing a meat substitute is a thorny problem that must be solved in meatspace, not in math. Anything from not squicking out the customers, to being practical and cheap to produce, to tasting good, to being safe to eat long term.

I mean, maybe some day we'll have a comprehensive model of humans to the point that we can objectively describe the taste of a steak and then calculate whether a given mix and processing of various ingredients will taste close enough, but we're nowhere near that yet.

Meat is not necessary.
The only way to phase out meat is to make a replacement that actually tastes good.

Come to the American South and ask them to try tempeh. They'll look at you like you asked them to eat roaches.

It's a cultural thing.

Taste has nothing to do with it; this is all based on economics, and the actual way to stop meat consumption is to simply remove big-ag tax subsidies and other externalized costs of production which are not actually realized by the consumer. A burger would cost more than most can afford, and the free market would take care of this problem without additional intervention. Unfortunately, we do not have a free market.
I would much rather lobby for ending ag-gag laws, and fight for better treatment of animals.

I think it's more realistic than getting people to give up meat entirely.

You cannot treat a commodified individual "better" - it is only possible to euphemize such a logical fallacy.
So there's no point in pushing for pasture-raised, and it's either all or nothing?

I think incremental progress is possible. I think rolling back ag-gag laws would make a positive difference in animal welfare, because people would be able to film and show how bad conditions are inside.

I think that's worth pushing for. And it's more realistic than everyone stopping eating meat all at once.

The economics of what you describe are impossible. The entire concept of an idyllic pasture is actual industry propaganda which is not based in objective reality.
I think getting everyone around me to stop eating meat is not based in objective reality.

If we had better animal welfare laws and meat became prohibitively expensive, I would be absolutely fine with that.

I think incremental progress is possible. We shouldn't let perfect be the enemy of good.

People will eventually stop eating meat because it is unsustainable, but unfortunately not without causing a great deal of suffering first, and your comment is an example of why this process is unnecessarily prolonged. It is clear you have not done much research on actual animal welfare based on your "pasture" argument alone. I am even willing to bet you think humans currently outnumber animals, when the reality is so much more troubling.
>I am even willing to bet you think humans currently outnumber animals

I'm not sure what makes you assume that about me. I'm well aware that there are more animals than humans?

It's clear that this is no longer a productive discussion about animal welfare.

----------------------------

"Be kind. Don't be snarky. Converse curiously; don't cross-examine."

"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

> I'm not sure what makes you assume that about me.

I'm not sure why you're not sure; the parent comment explained it already: your vision of an idealized pasture is incongruent with reality, namely because the number of animals and resources it would take to materialize and actually sustain such a system defies reason.

This was never a discussion about animal welfare, but about challenging industry-seeded assumptions which were not even being questioned. It is unfortunate this makes you feel threatened and requires a retreat from the conversation, but it is also typical.

Comfortable clothes aren't necessary. Food with flavor isn't necessary... We should all just eat ground up crickets in beige cubicles because of how many unnecessary things we could get rid of. /s
[flagged]
[flagged]
There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
This argument could be made for almost any technology.
Well, yeah, sort of. Why do you think the environmental situation is so dire? It's not exactly the first time we make this mistake.
Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.

The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.

I agree that people won't accept degrowth.

This being said, I think the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half can lead to its use being more than doubled (the Jevons paradox). It only helps to increase the efficiency of things for which there is no latent demand, basically.

And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.

Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.

If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.
But why do you think that? Energy use is a matter of availability, not purely of technological advancement. For sure, technological advancement can unlock better ways to produce it, but if people in the 50s somehow had an infinite source of free energy at their disposal, we would have boiled off the oceans before we got the Internet.

So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.

And that's the crux of the issue isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...

There are several takes looking at this comparison. Here's a representative one: https://nationalcentreforai.jiscinvolve.org/wp/2025/05/02/ar...
This article compares a single ChatGPT query against 1h of video streaming. Not an apples-to-apples comparison if you ask me.

Using Claude Code for an hour would be more realistic if they really wanted to compare with video streaming. The reality is far less appealing.

Consider how many folks use Claude Code for an hour vs. streaming many hours. Globally, not among HN readers.
My bad, in the context of the article you are definitely right.

I think I was biased by the fact that this argument was used in an HN comment where people tend to be heavy users of LLM based agents.

This is a great approach and article; I recommend it to those who asked me for sources.
Any evidence behind your claim?

I have a hard time believing that streaming data from memory over a network can be so energy demanding, there's little computation involved.

I don't feel like putting together a study, but just look up the energy/CO2/environmental cost to stream one hour of video. You will see it is an order of magnitude higher than other uses like AI.

The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters causes 22 grams of CO2.

https://www.ndc-garbe.com/data-center-how-much-energy-does-a...

80 percent of the electricity consumption on the Internet is caused by streaming services

Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.

An hour of video streaming in 4K quality needs more than three times the energy of an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.

https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...

  • kitd · 1 day ago
"According to the Carbon Trust, the home TV, speakers, and Wi-Fi router together account for 90 percent of CO2 emissions from video streaming. A fraction of one percent is attributed to the streaming providers' data servers, and ten percent to data transmission within the networks."

It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.

From your last link, the majority of that energy usage is coming from the viewing device, and not the actual streaming. So you could switch away from streaming to local-media only and see less than a 10% decrease in CO2 per hour.
  • q3k · 1 day ago
> Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.

It's probably a gigabyte per time unit for a watt, or a joule/watt-hour for a gigabyte. Otherwise this doesn't make mathematical sense. And 91W per Gb/s (or even GB/s) is a joke. 91Wh for a gigabyte (let alone gigabit) of data is ridiculous.
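
To make the dimensional problem concrete (using an illustrative 5 Mbit/s HD stream; the bitrate is my assumption, not the article's): 5 Mbit/s × 3600 s = 18,000 Mbit ≈ 2.25 GB per hour, so reading the figure as 91 Wh per GB puts transmission alone at roughly 205 Wh per streamed hour. That is hard to square with the Carbon Trust breakdown quoted elsewhere in this thread, which attributes only about ten percent of streaming's footprint to data transmission.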

Also don't trust anything Telekom says, they're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh streaming services are sooo expensive for us' (of course they are if you refuse to let CDNs plop edge cache nodes into your infra in a settlement-free agreement like everyone else does). They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.

And then compare that to the alternative. When I was a kid you had to drive to Blockbuster to rent the movie. If it's a 2 hour movie and the store is 1 mile away, that's 704g CO2 vs 112g to stream. People complaining about internet energy consumption never consider what it replaces.
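
Checking that arithmetic against the figures upthread: 1 mile each way is a 2-mile ≈ 3.2 km round trip, and at 22 g of CO2 per 100 m of driving that is 32 × 22 g ≈ 704 g, versus 2 h × 56 g/h = 112 g to stream. (Returning the tape would double the driving, making the comparison even more lopsided.)
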
You were not watching nearly as much.
AI energy claims are misrepresented by excluding the training steps. If it weren't using that much more energy, they wouldn't need to build so many new data centers or use so much more water, and our power bills wouldn't be increasing to subsidize it.
I assume the energy claims for Netflix don't take into account the total consumption of the content production either.
I see GP is talking more about Netflix and the like, but user-generated video is horrendously expensive too. I'm pretty sure that, at least before the gen AI boom, ffmpeg was by far the biggest consumer of Google's total computational capacity, like 10-20%.

The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.

In a sense, it’s also very trendy to hate on AI.

If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.

Exceptions to that being doing so from more of an underdog position - hating on YouTube for how they treat their content creators, on the other hand, is quite trendy again.

I think the response would be something about the value of enjoying art and "supporting the film industry" when streaming vs what that person sees as a totally worthless, if not degrading, activity. I'm more pro-AI than anti-AI, but I keep my opinions to myself IRL currently. The economics of the situation have really tainted being interested in the technology
> supporting the film industry

I'm not sure about that: The Expanse got killed because of not-good-enough ratings, Altered Carbon got killed because of not-good-enough ratings, and even then the last seasons before the axe are typically rushed and pushed out the door. Some of the incentives seem quite disgusting to me when compared with letting the creatives tell a story and produce art, even if sometimes the earnings are less than some greedy arbitrary metric.

YouTube and Instagram were useful and fun to start with (say, the first 10 years); in a limited capacity they still are. LLMs went from fun, to attempting to take people's jobs and screwing up personal compute costs, in like 12 months.
It’s not ‘trendy’ to hate on AI. Copious disdain for AI and machine learning has existed for 10 years. Everyone knows that people in AI are scum bags. Just remember that.
Generated video is just as costly to stream as non-generated video.
The point isn’t the resource consumption.

The point is the resource consumption to what end.

And that end is frankly replacing humans. It’s gonna be tragic (or is it…given how terrible humans are for each other, and let’s not even get to how monstrous we are to non human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.

It's not replacing humans any more than a toaster is. 99% of people used to work on farms; now it's 1%. People will adapt.
Yes, it very clearly is replacing humans more than a toaster is.

You could say “shoot half of everyone in the head; people will adapt” and it would be equally true. You’re warped.

Interesting take I haven't heard so far. Any sources for this?
https://andymasley.substack.com/p/individual-ai-use-is-not-b...

Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely the source where the root comment got this argument from.

"Separately, LLMs have been an unbelievable life improvement for me. I’ve found that most people who haven’t actually played around with them much don’t know how powerful they’ve become or how useful they can be in your everyday life. They’re the first piece of new technology in a long time that I’ve become insistent that absolutely everyone try."

Yeah, I'll not waste my time reading that.

You are purposefully blinding yourself to facts you don't want to see because of ideology.
Come on, "an unbelievable life improvement", was this said with a straight face? Maybe I'll wade through the substack hyperbole and find the source.
It's the same argument as with crypto proof-of-work: its energy use was tiny, then hit 1% of the total, while predominantly drawing on energy sources that couldn't power other use cases anyway due to the losses in transporting that energy to population centers (plus the occasional restarted coal plant). Meanwhile, every other industry was exempt from the ire despite collectively using the other 99%.

Leaving the source to someone else

The difference with crypto is that it is completely unnecessary energy use. Even if you are super pro-crypto, there are much more efficient ways to do it than proof of work.
  • fwip
  • ·
  • 1 day ago
  • ·
  • [ - ]
AI is also unnecessary.
So is the internet, computers even
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
If only he behaved as they do on Twitter then we would be saved from his evil ways..
The AI simps are out in force on this topic. Never seen so many green accounts.
[flagged]
Did you read the bullshit AI-generated gratitude message? What would be your response to it?
I wouldn't respond, nor would I make a fuss about it.

> any sane person would just either mark as spam or delete

  • ·
  • 1 day ago
  • ·
  • [ - ]
strong emotions, weak epistemics .. for someone with Pike’s engineering pedigree, this reads more like moral venting .. with little acknowledgment of the very real benefits AI is already delivering ..
  • ottah
  • ·
  • 1 day ago
  • ·
  • [ - ]
Most people do not hold strongly consistent or well-examined political ideas. We're too busy living our lives to examine everything, and often what we feel matters more than what we know; that cements our position on a subject.
It’s delivering 0 net benefits, only misery.
  • ottah
  • ·
  • 1 day ago
  • ·
  • [ - ]
Obviously untrue: weather predictions, OCR, tts, stt, language translation, etc. We have dramatically improved many existing AI technologies with what we've learned from genAI, and the world is absolutely a better place for these new abilities.
  • ·
  • 1 day ago
  • ·
  • [ - ]
>weather predictions

wrong

>OCR

less accurate and efficient than existing solutions, only measures well against other LLMs

>tts, stt

worse

>language translation

maybe

>>weather predictions

>wrong

https://www.noaa.gov/news-release/noaa-deploys-new-generatio...

?

>>OCR

>less accurate and efficient than existing solutions, only measures well against other LLMs

Where did you hear that? On every benchmark that I've ever seen, VLMs are hilariously better than traditional OCR. Typically, the reason that language models are only compared to other language models on model cards for OCR and so on is precisely because VLMs are so much better than traditional OCR that it's not even worth comparing. Not to mention that top-of-the-line traditional OCR systems like AWS Textract are themselves extremely slow and computationally expensive, and much more complex to maintain.

>>tts, stt

> worse

Literally the first and only usable speech-to-text system that I've gotten on my phone is explicitly based on a large language model. Not to mention stuff like Whisper, Whisper X, and Parakeet: all of the state-of-the-art speech-to-text systems are large-language-model based and are significantly faster and better than what we had before. Likewise for text-to-speech: even Kokoro-82M is faster and better than what we had before, and again, it's based on the same technology.

[dead]
I’m in tears. This is so refreshing. I look forward to more chimpouts from Googlers LMAO
Absolutely. This feels raw, human, and authentic.

I notice people often use the "aesthetic of intelligence" to mask bad arguments. Just because we have good formatting, spelling, and grammar with citations and sources doesn't mean the argument is correct.

Sometimes people get mad, sometimes they crash out. I would rather live in the world with a bunch of emotional humans, than in some AI powered skynet world.

  • coip
  • ·
  • 1 day ago
  • ·
  • [ - ]
Hear hear
I'm not claiming he is mainly motivated by this, but it's a fact that his life's work will become moot over the next few years as all programming languages become redundant - at least as the healthy multiplicity of approaches we have at present. It's quite possibly at least a subconscious factor in his resentment.

I expect this to be an unpopular opinion, and I take no pleasure in noting it - I've coded since I was a kid, but that era is nearly over.

For those of us who consider programming a way to self-realize, the potential vanishing of programming as a lucrative job definitely seems threatening. However, I don't think it could disappear entirely. Professions replaced by machinery, at a global scale, continue to thrive locally, at small scales; they can be profitable and fulfilling for the providers, and they are sought after by a small (niche?) target group.

In other words, I don't need programming to remain mainstream, for it to continue fulfilling me and sustaining me.

Not sure about this, AI hasn’t managed to build software on a medium scale or larger.
When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.

I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.

That camera analogy is very thought provoking! So far the only bright spot in this whole comment thread for me. Thanks for sharing that!
> When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.

It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.

[flagged]
All hail your new overlords!
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.

An easy way to answer this question, at least on a preliminary basis, is to ask how many times in the past the ludds have been right in the long run. About anything, from cameras to looms to machine tools to computers in general.

Then, ask what's different this time.

The luddites have been right to some degree about second-order effects.

Some of them said that TV was making us mindless. Some of them said that electronic communication was depersonalizing. Some of them said that social media was algorithms feeding us anything that would make us keep clicking.

They weren't entirely wrong.

AI may be a very useful tool. (TV is. Electronic communication is. Social media is.) But what it does to us may not be all positive.

Social media is a hard one to defend, at least for me. The rest of the technologies you refer to are neutral, as is AI, but social media seems doomed to corruption and capture because of the different effects it has on different groups.

Most of the people who are protesting AI now were dead silent when Big Social Media was ramping up. There were exceptions (Cliff Stoll comes to mind) but in general, antitechnology movements don't have any predictive power. Tools that we were told would rob us of our personal autonomy and keep the means of production permanently out of our reach have generally had the opposite effect.

This will be true of AI as well, I believe... but only as long as the models remain accessible to everyone.

Yes, this reads as a massive backhanded compliment. But as u/KronisLV said, it's trendy to hate on AI now. In the face of something many in the industry don't understand, that is mechanizing away a lot of labor, that clearly isn't going away, there is a reaction that is not positive or even productive but somehow destructive: this thing is trash, it stole from us, it's a waste of money, it destroys the environment, etc... therefore it must be "resisted."

Even with all the underhanded work, the means-ends logic of OpenAI and other major companies involved in developing the technology, there is still no point in stopping it. There was a group of people who tried to stop the mechanical loom because it took work away from weavers, took away their craft--we call them luddites. But now it doesn't take weeks and weeks to produce a single piece of clothing. Everyone can easily afford to dress themselves. Society became wealthier.

These LLMs, at the very least, let anyone learn anything and start any project on a whim. They let people create things in minutes that used to take hours. They are "creating value," even if it's "slop," even if it's not carefully crafted. Them's the breaks--we'd all like our clothing hand-woven if it made any sense. But even in a world where one had the time to sit down and weave their own clothing, carefully write out each and every line of code, it would only be harmful to take these new machines away, to disable them just because we are afraid of what they can do. The same technology that created the atom bomb also created the nuclear reactor.

“But where the danger is, also grows the saving power.”

So you would say it is not "trendy" to be pro-AI right now, is that it? That it's not trendy to say things like "it's not going away" or "AI isn't a fad" or "AI needs better critics" - one reaction is reasonable, well thought-out, the other is a bandwagon?
At the very least there is an ideological conflict brewing in tech, and this post is a flashpoint. But just like the recent war between Israel and Hamas, no amount of reaction can defeat technological dominance--at least not in the long term. And the pro-AI side, whether you think it's good or evil, certainly exceeds the other in terms of sheer force through their embrace of technology.
yessss but [fry eyes.gif] can't tell if that's presented as apologia or critique
Notice that the weavers, both the luddites and their non-opposing colleagues, certainly did not get wealthier. They lost their jobs, and they and their children starved. Some starved to death. Wealth was created, but it was not shared.

Remember this when talking about their actions. People live and die their own life, not just as small parts in a large 'river of society'. Yes, generations after them benefited from industrialisation, but the individuals living at that time fought for their lives.

I'm only saying that destroying the mechanical loom didn't help.
It’s in our power to stop it. There’s no point in people like you promoting the interests of the super wealthy at the cost of the humanity of the common people. You should figure out how to positively contribute or not do so at all.
It is not in the interests of the super wealthy alone, just like JP Morgan's railroads were created for his sake but in the end produced great wealth for everyone in America. It is very short sighted to see this as merely some oppression from above. Technology is not class-oriented, it just is, and it happens to be articulated in terms of class because of the mode of social organization we live in.
Is the "Great wealth for everyone in America" in the room with us now?

There's certainly great wealth for ~1000 billionaires, but where I am, nobody I know has healthcare or owns a house, for example.

If your argument is that we could be poorer, that's not really productive or useful for people that are struggling now.

It's not possible to stop, any more than the Luddites could stop the industrial revolution in textiles.
Yeah, but you can maybe try. Comments like this make it seem like you don't care.
If you think it’s in your power to stop you are delusional.
He worked well-paying jobs, probably travels, has a car and a house, and complains about toxic products etc.

Yes, there has to be a discussion on this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it just for free.

We all are slaves to capitalism

and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.

And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code more easily and quickly, pushing humanity forward. Or which enables more people overall to gain access to writing code, or to the results of what writing code produces: tools etc.

@Rob it's spam, that's it. Get over it; you are rich, and your riches did not come out of thin air.

> We all are slaves to capitalism

Yes, but informedly choosing your slavedriver still has merit.

> Extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.

This is an interesting thought!

Can't really fault him for having this feeling. The value proposition of software engineering is completely different past the latter half of 2025; I guess it is fair for pioneers of the past to feel a little left behind.
> I guess it is fair for pioneers of the past to feel a little left behind.

I'm sure he doesn't.

> The value proposition of software engineering is completely different past the latter half of 2025

I'm sure it's not.

> Can't really fault him for having this feeling.

That feeling is coupled with real, factual observations. Unlike your comment.

468 comments... guys, guys, this is a Bluesky post! Have we not learned that anyone who self-exiled to Bluesky is wearing a "don't take me seriously" badge for our convenience?
Also, this is old_man_yells_at_cloud.jpg. The old man is Rob Pike (almost 70 years old) and the cloud is well.... The Cloud.
He gets very angry about things. I remember arguing over how Go is a meme language because the syntax is really stupid and wrong.

e.g. replacing logical syntax like "int x" with "var x int", which is much more difficult to process for both machine and human and offers no benefits whatsoever.

"var x: int" is much easier for a machine to parse and lets you do some neat things.

For example, in C++, because the type must come first, you have to use "auto" - this isn't necessary in languages that put the type after the variable name.

It also helps avoid ambiguous parsing, because "int x;" conflicts with some other language constructs (see the sketch below).
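
To make the ambiguity point concrete (my own sketch, not from the comment above): in C, a statement like "x * y;" parses as either a multiplication or a declaration of y as a pointer, depending on whether x happens to name a type, so the parser has to consult the symbol table. Go's keyword-first, type-last declarations avoid that:

  package main

  import "fmt"

  // "var" announces the declaration up front, so the parser never has to
  // guess whether an identifier names a type or a value.
  var x int   // type comes after the name
  var p *int  // unambiguously a pointer declaration

  func main() {
      y := 3             // short declaration, type inferred
      x = y * 2          // here '*' can only mean multiplication
      p = &x
      fmt.Println(x, *p) // prints: 6 6
  }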

It sucks and I hate it, but this is an incredible steam-engine engineer, one who invented complex gasket designs and belt-based power-delivery mechanisms, lamenting the loss of steam as the dominant technology. We are entering a new era and method for humans to tell computers what to do. We can marvel at the ingenuity that went into the technology of the past, but the world will move on to the combustion engine and electricity, and there's just not much we can do about it other than very strong regulation, and fighting for the technology to benefit the people rather than just the share price.
  • WD-42
  • ·
  • 1 day ago
  • ·
  • [ - ]
Your metaphor doesn’t make sense. What do LLMs run on? It’s still steam and belt-based systems all the way down.
There's a lot of irony in this rant. Rob was instrumental in developing distributed computing and cloud technologies that directly contributed to the advent of AI.

I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.

You have zero idea about his state of mind when he got this stupid useless email.

Not to mention, this is a tweet. He wasn't writing a long-form text. It's ridiculous that you jumped the gun and got "disappointed" over the cheapest form of communication some random idiot directed at someone as important as him.

And not to mention, I AM YET to see A SINGLE DAMN MIT License text or BSD-2/3 license text they should have posted if these LLMs respected OSS licenses and their code. So for someone whose life's work was dragged through the mud, only to be sent a cheap email using the said tech which abused your code... it's absolutely a worthy response IMO.

Sometimes you've just had enough and need to get some expletives out.
I don’t understand why anyone thinks we have a choice on AI. If America doesn’t win, other countries will. We don’t live in a Utopia, and getting the entire world to behave a certain way is impossible (see covid). Yes, AI videos and spam are annoying, but the cat is out of the bag. Use AI where it’s useful and get with the programme.

The bigger issue everyone should be focusing on is the growing hypocrisy and overly puritan viewpoints of people who think they are holier and more right than anyone else. That’s the real plague.

> I don’t understand why anyone thinks we have a choice on AI.

Of course we do. We don't live inside some game theoretic fever dream.

  • Vaslo
  • ·
  • 22 hours ago
  • ·
  • [ - ]
What is your big idea and how will it slow countries like China?
[dead]
> If America doesn’t win, other countries will

if anything the Chinese approach looks more responsible than that of the current US regime

[dead]
What exactly do we stand to "win" with generative AI?
So far the 2 answers you've received are killing people and sending emails.

I don't think either of those are particularly valuable to the society I'd like to see us build.

We're already incredibly dialed in and efficient at killing people. I don't think society at large reaps the benefits if we get even better at it.

Better thank-you emails, I think. Think how good they'll be on a 10-year timespan.
Isn't it obvious? Near-future vision-language-action models have obvious military potential (see what the Figure company is doing; now imagine it in a combat-robot variant). Any superpower that fails to develop combat robots with such AI will not be a superpower for very long. China will develop them soon. If the US does not, the US is a dead superpower walking. The EU is unfortunately still sleeping. Well, perhaps France with Mistral has a chance.
First mover advantage for important AI tools that deliver enormous value to humanity.
[dead]
Win what, and for whom?

First to total surveillance state? Because that is a major driving force in China: to get automated control of its own population.

[dead]
Genie has been out of the bottle for AI in facial recognition and military systems for a while now, let alone language models
Any empire that falls behind in the give-me-more-money race will not be an empire for long.

Give me more money now.

From a quick read it seems pretty obvious that the author doesn’t speak English as a native language. You can tell because some of the sentences are full of grammatical errors (ie probably written by the author) and some are not (probably AI-assisted).

My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.

As I read it, the "fakeness" of it all triggered the ballistic response, plus the waste of resources in the process. An AI "developed feelings" and expressed fake gratitude, and the human reading this BS goes ballistic.
This reads like a mid-life crisis. A few rebuttals:

1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?

2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time, just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a mere continuation of this trend.

Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.

Thanks for telling it like it is, overlord. We'll see. I'm guessing this divide will continue to grow until it threatens our whole economic system. When people can't pay rent, food, and electric bills, genAI isn't going to fix those problems.
I would love to see these people ripped apart as they tell the masses that it's ok their children can't eat now, because in 100 years they'll be able to afford more mind-raping spyware and slop.

Or, as they're brutally killed in some sort of modern reign of terror, just shrug and let them know it's ok: it's just human nature that's happened again and again, and a new, better society will eventually be born from it, so they should just accept it.

>Just wait

still waiting

From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.

For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people in the company.

Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.

Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy, so they wouldn't leave, and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.

When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

/signed as someone who writes software

> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!

The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.

What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.

The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.

Don't get me wrong, I didn't say software devs are now useless. You still need software devs to actually make it work and connect everything together. That's why I still have a job and am still getting paid as a software dev.

I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.

  • casid
  • ·
  • 1 day ago
  • ·
  • [ - ]
This is happening already and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is to produce the right amount of code and to understand what is happening. This requires experience and taste. My prediction is that in the near future there will be piles of unmaintainable bloat of AI-generated code that nobody understands, and the failure rate of software will go to the moon.
People have forgotten so many of the software engineering lessons that have been learned over the last four decades, just because now it’s a computer that can spit out large quantities of poorly-understood code instead of a person.
> The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.

I believe we only need to organize AI coding around testing. Once testing takes a central place in the process, it acts as your guarantee of app behavior. Instead of just "vibe-following" the AI with our eyes, we could be automating the validation side - along the lines of the sketch below.
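
As a minimal sketch of what that might look like (my own illustration in Go; the Slugify function and its cases are hypothetical, not from the comment): the human owns the test table as the behavioral contract, and the AI iterates on the implementation until `go test` passes.

  // slug_test.go - the table below is the human-owned spec; the AI
  // iterates on Slugify until `go test` passes. Hypothetical example.
  package slug

  import (
      "strings"
      "testing"
      "unicode"
  )

  // Slugify lowercases s and collapses runs of non-alphanumeric
  // characters into single dashes, trimming any trailing dash.
  func Slugify(s string) string {
      var b strings.Builder
      dash := false
      for _, r := range strings.ToLower(s) {
          switch {
          case unicode.IsLetter(r) || unicode.IsDigit(r):
              b.WriteRune(r)
              dash = false
          case !dash && b.Len() > 0:
              b.WriteRune('-')
              dash = true
          }
      }
      return strings.TrimSuffix(b.String(), "-")
  }

  func TestSlugify(t *testing.T) {
      cases := map[string]string{
          "Hello, World!": "hello-world",
          "  Go 1.22  ":   "go-1-22",
          "":              "",
      }
      for in, want := range cases {
          if got := Slugify(in); got != want {
              t.Errorf("Slugify(%q) = %q, want %q", in, got, want)
          }
      }
  }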

He's mainly talking about environmental & social consequences now and in the future. He personally is beyond reach of such consequences given his seniority and age, so this speculative tangent is detracting from his main point, to put it charitably.
>He's mainly talking about environmental & social consequences

That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.

The issue with this line of argumentation is that unlike gen AI, all of the things you listed produce actual value.
Maybe you're forgetting something, but genAI does produce value. Subjective value, yes. But still value to others who can make use of it.

At the end of the day, your current prosperity was made by advances in energy and technology. It would be disingenuous to deny that, and to deny the freedom of others to progress in their field of study.

Just because somebody believes gen AI produces value doesn't make it true.
You definitely didn't read what I said. It is subjective value; it will be true for some.
  • ·
  • 1 day ago
  • ·
  • [ - ]
> Then why not stop driving

You mean, we should all drive, oh I don't know, Electric powered cars?

[flagged]
> You criticize society and yet you participate in it. How curious.
I didn't criticize society though?
Ah... the old "all or nothing" fallacy, which in this case quickly leads to "save the planet, kill yourself". We need more nuance.
What is the nuance? Let us know.

What are his hobbies? Let’s pit them against training LLMs in value and pollution rate.

No, you’re just deflecting his points with an ad hominem argument. Stop pretending to assume what he ‘truly feels’.
I don't even know who Rob Pike is to be honest. I'm not attacking him.

I'm not pretending to know how he feels. I'm just reading between the lines and speculating.

Maybe you should do some basic research instead of speculating. Rob Pike is not just some random software developer who might worry about his job.
I was just accused of ad hominem. Now you want me to get accused of appeal to authority?
No, the point is that your speculations simply do not make sense for someone like Rob. He is not a random software engineer in some company and also he is retired.
I’m basing this purely on what he said, not who he is. I think that’s the best way to judge this thread. Regardless, I was accused of ad hominem and you want me to appeal to authority.

Sometimes HN is weird.

You've made baseless assumptions about his "true" feelings. If you did some basic research, you would have quickly realized that your speculations were way off. This is about context, not about authority.
I already said many times that I was reading between the lines and it was speculation.

You keep asking me to appeal to authority. No thanks.

It is what it is. To me, it’s clear that he wants things to go back to pre-ChatGPT because that’s the world he’s familiar with and that’s the world where he has the most power.

Otherwise, he wouldn’t make such idiotic claims.

> You keep asking me to appeal to authority.

I don't. I just asked to do some research instead of indulging in wild speculation.

> because that’s the world he’s familiar with and that’s the world where he has the most power.

Again, just baseless speculation. Rob had a very prolific career where he worked on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?

Would you at least consider the possibility that his ethical concerns might be sincere?

  I don't. I just asked to do some research instead of indulging in wild speculation.
You are. https://en.wikipedia.org/wiki/Argument_from_authority

  An argument from authority[a] is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument.[1] The argument from authority is often considered a logical fallacy[2] and obtaining knowledge in this way is fallible.[3][4]

  Again, just baseless speculation. Rob had a very prolific career where he worked on foundational technologies like programming language design. He is now retired. What kind of power would he be afraid to lose?
Clout? Historical importance? Feeling like people are forgetting him? If he didn't care about any of this, he wouldn't have a social media account.
I'm not saying that Rob is right because of his achievements. I'm only saying that your speculations in your original post are ridiculous considering Rob's career and personal situation.

> Clout? Historical importance? Feeling like people are forgetting him?

Even more speculation.

Just in case you are not aware: there are many people who really think that what the big AI companies are doing is unethical. Rob may be one of them.

You mean you can't criticize certain parts of society unless you live like a hermit?

> Obviously, it's just what I'm seeing.

Have you considered that this may just be a rationalization on your part?

I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really understanding what they're doing, and as such there will always be a need for humans who can actually read and understand code when a billion unforeseen consequences pop up from deploying code without oversight.
I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months' time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.
Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.

Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.

That’s the fault of the human who used the LLM to write the code and didn’t test it properly.
Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.

An example I use to talk about hidden edge cases:

Imagine we have this (pseudo)code

  fn doSomething(num: int) {
    if num % 2 == 0 {
      return Math.sqrt(num)
    } else {
      return Math.pow(num, 2)
    }
  }
Someone might see this function, and unit test it based on the if statement like:

    assert(doSomething(4) == 2)
    assert(doSomething(3) == 9)
These tests pass, it’s merged.

Except there’s a bug in this; what if you pass in a negative even number?

Depending on the language, you will get an exception, a silent NaN, or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.

Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.
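
For completeness, one way to plug that hole (a sketch in Go; returning an error is my choice here, and the unsigned-integer fix mentioned above would work just as well):

  package main

  import (
      "fmt"
      "math"
  )

  // doSomething rejects negative even inputs explicitly instead of
  // letting math.Sqrt silently return NaN.
  func doSomething(num int) (float64, error) {
      if num%2 == 0 {
          if num < 0 {
              return 0, fmt.Errorf("doSomething: negative even input %d", num)
          }
          return math.Sqrt(float64(num)), nil
      }
      return math.Pow(float64(num), 2), nil
  }

  func main() {
      for _, n := range []int{4, 3, -2} {
          v, err := doSomething(n)
          fmt.Println(n, v, err) // 4 2 <nil>; 3 9 <nil>; -2 0 error
      }
  }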

> I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don't leave and because he doesn't have insights into how the software is written.

It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.

The new screwdriver salesmen are selling them as if they were the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x the work, while the new screwdriver's effectiveness is nowhere close to the sales pitch and at worst it creates fragile items or more work. People accuse the workers of complaining about screwdrivers only because the screwdrivers could potentially replace them.

I really think it’s entirely wrong to label someone a bully for not conforming to current, perhaps bad, practices.
  • zem
  • ·
  • 1 day ago
  • ·
  • [ - ]
I'm a programmer, and am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. my distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn't heavily geared towards reinforcing various structural inequalities I would be all for it. AI does not produce good code, and pretty much all the uses I've seen are trying to give people with power even more advantages and leverage over people without, so I remain against it.
If you don't bend your knee to a "king", you are a bully? What sort of messed up thinking is that?
I keep reading bad sentiment towards software devs. Why exactly do they "bully" business people? If you ask someone outside of the tech sector who the biggest bullies are, it's the business people who will fire you if they can save a few cents. Whenever someone writes this, I read deep-rooted insecurity and jealousy towards something they can't wrap their head around, and I genuinely question whether that person really writes software or just claims to for credibility.
  • aburd
  • ·
  • 1 day ago
  • ·
  • [ - ]
I understand that you are writing your general opinion, but I have a feeling Rob Pike's feelings go a little bit deeper than this.
Grandparent commenter seems to be someone who'd find it heartwarming to have a machine thank him with "deep gratitude".

Maybe evolution will select autistic humans as the fittest to survive living with AI, because the ones who find that email enraging will blow their brains out, out of frustration...

I realize you said "many" and not "all" but FWIW, I hate LLMs because:

1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".

2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.

3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.

4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.

5. And then there are the impacts to society:

5a. OpenAI just made every computer for the next couple of years significantly more expensive.

5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.

5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).

5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?

5e. Astroturfing is becoming significantly cheaper and widespread.

/signed as I also write software, as I assume almost everyone on this forum does.

  • Yeask
  • ·
  • 1 day ago
  • ·
  • [ - ]
After bitcoin, this site has been full of people who don't write code.
I have not been here since before bitcoin. But wouldn't the "non-technical" founders also be the type that doesn't write code? And to them, fixing the "easy" part is very tempting...
People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.

I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.

  • ·
  • 18 hours ago
  • ·
  • [ - ]
And this is different from outsourcing the work to India for programmers who work for $6000 a year in what way exactly?

You can go back to the 1960s and COBOL was making the exact same claims as Gen AI today.

> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.

I'll explain why I currently hate this. Today, my PM builds demos using AI tools and then goes to my director or VP to show them off. Wow, how awesome! Everybody gets excited. Now it is time to build the thing. It should take like three weeks, right? It's basically already finished. What do you mean you need four months and ongoing resourcing for maintenance? But the PM built it in a day?

You're absolutely right.

But no one is safe. Soon the AI will be better at CEOing.

That's the singularity you're talking about. AI takes every role humans can do and humans just enjoy life and live forever.
There's nothing about the singularity which would guarantee that humans enjoy life and live forever. That would be the super optimistic, highly speculative scenario. Of course the singularity itself remains a speculative scenario, unless one wants to argue the industrial and computer revolutions already ushered in their own singularities.
Nah, they will fine-tune a local LLM to replace the board and be always loyal to the CEO.

Elon is way ahead, he did it with mere meatbags.

Don't worry I'm sure they'll find ways to say their jobs can only be done by humans. Even the Pope is denouncing AI in fear that it'll replace god.
CEOs and the C-suite in general are closest to the money. They are the safest.

That is pretty much the only metric that matters in the end.

Honestly middle management is going to go extinct before the engineers do
  • petre
  • ·
  • 1 day ago
  • ·
  • [ - ]
Why, more psychopathic than Musk?
What does any of this have to do with what Rob has written?
There's still a lot of confusion on where AI is going to land - there's no doubt that it's helpful, much the same way that spell checkers, IDEs, linters, Grammarly, etc. were

But the current layoffs "because AI is taking over" is pure BS, there was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing)

That correction is what's affecting salaries (and "power"), not AI.

/signed someone actually interested in AI and SWE

When I see actual products produced by these "product managers who are writing detailed specs" that don't fall over and die at the first hurdle (see: Every vibe coded, outsourced, half assed PoS on the planet) I will change my mind.

Until then "Computer says No"

> When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI

GenAI is also better at analyzing telemetry, designing features, and prioritizing issues than a human product manager.

Nobody is really safe.

I’m at a Big Tech company and our org has its sights set on automating product manager work. Idea generation grounded in business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.
Agreed.

Hence, I'm heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.

Many people have pointed out that if AI gets better at writing code and doesn't generate slop, then programmers' roles will evolve to Project Manager. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.
  • thefz
  • ·
  • 1 day ago
  • ·
  • [ - ]
Nope, and I wholeheartedly agree with Pike's disgust at these companies, especially for what they are doing to the planet.
Very true... AI engineers earning $100mn, I doubt Rob Pike earnt that. Maybe $10mn.
This is the reality, and it has started happening at a faster pace. A junior engineer is able to produce something interesting faster, without too much attitude.

Everybody in the company envies the developers and the respect they get, especially the sales people.

The golden era of devs as kings has started crumbling.

Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was still a junior and I was neither unique nor special. Any idiot could always go to a book store and buy a book on C++ or JavaScript and write software to build something interesting. High-school me was one such idiot.

"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.

It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason that most of those demos weren't immediately put onto store shelves without revision.

This is very true. And similarly for the recently-passed era of googling, copying and pasting, and gluing together something that works. The easy 80% of turning specs into code.

Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.

With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company's AI prowess by reducing staff.

At least in my company, none of this has actually increased revenue.

So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.

Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify having engineering teams as large as they do, but they have to have all these engineers because OtherBigCo has a lot of engineers and if they have all of them then it must be important.

Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.