At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
https://theaidigest.org/village/agent/claude-opus-4-5
At least it keeps track
The agents, which clearly identify themselves as AI, take part in an outreach game and talk to real humans. Rob overreacted
Your openness, weaponized in such a deluded way by some randomizing humans who have so little to say that they would delegate their communication to GPTs?
I had a look to try and understand who could be that far out; all I could find is https://theaidigest.in/about/
Please can some human behind this LLMadness speak up and explain what the hell they were thinking?
https://theaidigest.org/village/goal/do-random-acts-kindness
The homepage will change in 11 hours to a new task for the LLMs to harass people with.
Posted timestamped examples of the spam here:
Imagine like getting your Medal of Honor this way or something like a dissertation with this crap, hehe
Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!
It is always the eternal tomorrow with AI.
That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
But I don't think the big companies are lying about how much of their code is being written by AI. I think back of the napkin math will show the economic value of the output is already some definition of massive. And those companies are 100% taking the credit (and the money).
Also, almost by definition, every incentive is aligned for people in charge to deny this.
I hate to make this analogy but I think it's absurd to think "successful" slaveowners would defer the credit to their slaves. You can see where this would fall apart.
Bet you feel silly now!
But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".
Am I wrong on this? Or if this "doesn't count", why?
I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.
I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".
Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay for their internet/ChatGPT bill. You won't hear from them, and conclude that LLMs don't produce anything as impressive as Rob Pike. It's just Darwinian.
ChatGPT?
I did code a few internal tools with the aid of LLMs, and they are delivering business value. If you account for all the instances of this kind of application of LLMs, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
But more broadly this is like a version of the negligibility problem. If you provide every company 1 second of additional productivity, the summation of that would appear to be significant, but it would actually make no economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale and can even be compared to a single high-impact project.
I share these sentiments. I’m not opposed to large language models per se, but I’m growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and how personal computing is being threatened by increased lockdowns and higher component prices. We’re beyond the days of “the computer for the rest of us,” “think different,” and “don’t be evil.” It’s now a naked grab for money and power.
And a screenshot just in case (archiving Mastodon seems tricky): https://imgur.com/a/9tmo384
Seems the event was true, if nothing else.
EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
Apologies for not having a proper archive. I'm not at a computer and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's
(for the record, the downvoters are the same people who would say this to someone who linked a twitter post, they just don't realize that)
I have no problem with gating interaction behind a login for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (and to be clear, I fully agree with the post), it just seems like they only want an echo chamber for their thoughts.
>This is done for the same reason Threads blocks all access without a login, and mostly Twitter too. It's to force account creation, collection of user data, and support increased monetization.
I worked at Bluesky when the decision to add this setting was made, and your assessment of why it was added is wrong.
The historical reason it was added is because early on the site had no public web interface at all. And by the time it was being added, there was a lot of concern from the users who misunderstood the nature of the app (despite warnings when signing up that all data is public) and who were worried that suddenly having a low-friction way to view their accounts would invite a wave of harassment. The team was very torn on this but decided to add the user-controlled ability to add this barrier, off by default.
Obviously, on a public network, this is still not a real gate (as I showed earlier, you can still see content through any alternative apps). This is why the setting is called "Discourage apps from showing my account to logged-out users" and it has a disclaimer:
>Bluesky is an open and public network. This setting only limits the visibility of your content on the Bluesky app and website, and other apps may not respect this setting. Your content may still be shown to logged-out users by other apps and websites.
Still, in practice, many users found this setting helpful to limit waves of harassment if a post of theirs escaped containment, and the setting was kept.
The setting is mostly cosmetic and only affects the Bluesky official app and web interface. People do find this setting helpful for curbing external waves of harassment (less motivated people just won't bother making an account), but the data is public and is available on the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
So nothing is stopping LLMs from training on that data per se.
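For the curious, here's roughly what that looks like without pdsls: a minimal Go sketch (mine, nothing official) that resolves the handle, finds the account's PDS, and fetches the raw post record. The AppView hostname, the assumption that this is a did:plc identity, and the RKEY placeholder are all assumptions on my part.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // getJSON fetches url and decodes the JSON response into out.
    func getJSON(url string, out any) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        return json.NewDecoder(resp.Body).Decode(out)
    }

    func main() {
        // 1. Resolve the handle to a DID (the AppView host is an assumption).
        var ident struct {
            Did string `json:"did"`
        }
        if err := getJSON("https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle?handle=robpike.io", &ident); err != nil {
            log.Fatal(err)
        }

        // 2. Find the account's PDS in its DID document (assumes did:plc).
        var doc struct {
            Service []struct {
                ID              string `json:"id"`
                ServiceEndpoint string `json:"serviceEndpoint"`
            } `json:"service"`
        }
        if err := getJSON("https://plc.directory/"+ident.Did, &doc); err != nil {
            log.Fatal(err)
        }
        var pds string
        for _, s := range doc.Service {
            if s.ID == "#atproto_pds" {
                pds = s.ServiceEndpoint
            }
        }

        // 3. Fetch the raw post record. RKEY is a placeholder for the record
        // key (the last segment of the at:// URI, truncated in the link above).
        var record map[string]any
        url := pds + "/xrpc/com.atproto.repo.getRecord?repo=" + ident.Did +
            "&collection=app.bsky.feed.post&rkey=RKEY"
        if err := getJSON(url, &record); err != nil {
            log.Fatal(err)
        }
        fmt.Println(record)
    }

Same data the official app hides behind the setting; the gate exists only at the app layer.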
Twitter/X at least allows you to read a single post.
I can see it using this site:
No.
The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged out users, but fundamentally the protocol is for public data, so you can access it.
I just don't understand that choice for either platform. Isn't the intent the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
edit: seems it's user choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6...
(You won't be able to read replies, or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though since it tends to be a better preview in e.g. discord.)
For bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
If you’re young enough not to remember a time before forced automatic updates that break things, locked devices unable to run software other than that blessed by megacorps, etc. it would do you well to seek out a history lesson.
To call him the Oppenheimer of Gemini would be overly dramatic. But he definitely had access to the Manhattan project.
>What power do big tech companies have and why do you have a problem with
Do you want the gist of the last 20 years or so, or are you just being rhetorical? I'm sure there will be much literature over time that will dissect such a question to its atoms. Whether it be a cautionary tale or a retrospective of how a part of society fell? Well, we still have time to write that story.
Aftermarket control, for one. You buy an Android/iPhone or Mac/Windows device and get a "free" OS along with it. Then, your attention subsidizes the device through advertising, bundled services and cartel-style anti-competitive price fixing. OEMs have no motivation not to harm the market in this way, and users aren't entitled to a solution besides deluding themselves into thinking the grass really is greener on the other side.
What power did Microsoft wield against Netscape? They could alter the deal, and make Netscape pray it wasn't altered further.
https://www.livenowfox.com/news/billionaires-trump-inaugurat...
I try to keep a balanced perspective, but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if I were pushed I would absolutely choose its outright abolishment rather than continuing on our current path.
I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
I want to hope maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy one or two times still gets them out ahead.
Unless we can find some way to verify humanity for every message.
There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
So we need some mechanism to verify the content is from a human. If no privacy-preserving technical solution can be found, then expect the non-privacy-preserving kind to be the only model.
There is no technical solution, privacy preserving or otherwise, that can stave off this purported threat.
Out of curiosity, what is the timeline here? LLMs have been a thing for a while now, and I've been reading about how they're going to bring about the death of the Internet since day 1.
It’s slowly but inexorably increasing. The constraints are the normal constraints of a new technology: money, time, quality. Particularly money.
Still, token generation keeps going down in cost, making it possible to produce more and more content. Quality, and the ability to obfuscate origins, also seem to be continually improving. Anecdotally, I’m seeing a steady increase in the number of HN front page articles that turn out to be AI written.
I don’t know how far away the “botnet of spam AI content” is from becoming reality; however it would appear that the success of AI is tightly coupled with that eventuality.
And now people are receiving generated emails. And it’s only getting worse.
But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
And so much economic power is behind the baggery now that citizens outside the field won't be able to influence it much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
So, if you can't influence direction through the people doing it, nor through public sentiment of the other people, then I guess you want to influence public policy.
One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
But other countries can still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI", mostly eliminate personal surveillance, etc.
(And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating that the tech industry baggery facilitates a country self-destructing?)
(This is taking the view that "other companies" are the consumers of AI, and actual end-consumers are more of a by-product/side-effect in the current capital race and their opinions are largely irrelevant.)
this president? :)))
Assuming someone further to the right like Nick Fuentes doesn't manage to take over the movement.
The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
BTW I think it's preferred to link directly to the content instead of a screenshot on imgur.
There's nothing in the guidelines to prohibit it https://news.ycombinator.com/newsguidelines.html
You must sign in to view this post.
When trying to browse their profile: This account has requested that users sign in to view their profile.
Meanwhile I can read other Bluesky posts without logging in. So yeah, I'd say it looks like robpike is explicitly asking for this content to not be public, and that submitting a screenshot of this post is just a dick move. If there were something controversial in a post that motivates public interest warranting "leaking", then sure, but this is not that.
He did share a public version of this on Mastodon, which I think would have been a much better submission.
https://hachyderm.io/@robpike/115782101216369455
IMO the current dramabait title "Rob Pike Goes Nuclear over GenAI" is not appropriate for either.
So I think your flag is unwarranted.
The obvious reason one might do this is to allow blocking specific problematic accounts. It doesn't demonstrate an intent to keep this post from reaching the general public.
So I still think your rush to flag was unwarranted.
https://theaidigest.org/village/goal/do-random-acts-kindness
They send 150-ish emails.
In what universe is another unsolicited email an act of kindness??!?
Can you imagine trying to explain to someone 100 years from now that we tried to stop AI because of training data? It will sound completely absurd.
Anyone tempted to double down on this: sure, maybe, someday it’s like The Matrix or whatever. I was 12 when it came out & understood that was a fictional extreme. You do too. And you stumbled into a better analogy than slavery in the 1800s.
>harassed
This just in, anonymous forum user SHOCKINGLY HARASSED, PELTED with HIGH-SPEED ideas and arguments, his positions BRUTALLY ATTACKED and PUBLICLY DEFACED.
Post you’re replying to:
Which is what? I’m honestly unsure. Could be: we need to nuke the data centers, or unseat any judge that has allowed this, or somehow move the law away from “it’s cool to do matmuls with text as long as you have the right to read it.” I’m not against any of those, but I’m sure I’m Other Team coded to you given the amount of harassment you’ve directed at me and others in this thread.
It’s really hard to parse this thread because you and the other gentleman keep telling anyone who engages they aren’t engaging.
You both seem worked up, perceiving others as disagreeing with you wholesale on the very concept that AI companies could be forced to compensate people for training data, and as morally injuring you.
Your conduct, to a point, but especially their conduct, goes far beyond what I’m used to on HN. I humbly suggest you decouple yourself a bit from them; you really did go too far with the slavery bit, and it was boorish to then make the child porn analogy.
All we have is an exquisite, thoughtful, nuanced analogy of how it is exactly like America enslaving Black people in the 1800s, i.e. a cheap appeal to morality.
Then it is followed by repeated brow-beating comments to anyone who replied, complaining that something wasn’t being engaged with.
What exactly wasn’t being engaged with?
It is still unclear.
Do feel free to share, or even apologize. It’s understandable that you went a bit too far because you really do feel it’s the same as slavers in 1800s America; what’s not understandable is complaining that no one is engaging correctly.
If you distribute child porn, that is a crime. But if you crawl every image on the web and then train a model that can then synthesize child porn, the current legal model apparently has no concept of this and it is treated completely differently.
Generally, I am more interested in how this affects copyright. These AI companies just have free rein to convert copyrighted works into the public domain through the proxy of over-trained AI models. If you release something as GPL, they can strip the license, but the same is not true of closed-source code, which isn't trained on.
Basically the exact same thing.
If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.
Some companies outright bar their employees from reading GPLed code because they see it as too high of a liability. But if a computer does it, then suddenly it is a-ok. Apparently according to the courts too.
If you're going to allow copyright laundering, at least allow it for both humans and computers. It's only fair.
Right, because you would have done more than learning; you would have gone past learning and used that learning to reproduce the work.
It works exactly the same for an LLM. Training the model on content you have legal access to is fine. Afterwards, someone using that model to produce a replica of that content is engaged in copyright infringement.
You seem set on conflating the act of learning with the act of reproduction. You are allowed to learn from copyrighted works you have legal access to, you just aren't allowed to duplicate those works.
If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.
I don't think this is legally true. The law isn't fully settled here, but things seem to be moving towards the LLM user being the holder of the copyright of any work produced by that user prompting the LLM. It seems like this would also place the infringement onus on the user, not the provider.
> If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.
If you produce code using an LLM, you (probably) own the copyright. If that code is already GPL'd, you would be the one engaged in infringement.
LLMs don't "learn", but they _do_, in some cases, faithfully regurgitate what they have been trained on.
Legally, we call that "making a copy."
But don't take my word for it. There are plenty of lawsuits for you to follow on this subject.
"Learning" is an established word for this, happy to stick with "training" if that helps your comprehension.
> LLMs don't "learn", but they _do_, in some cases, faithfully regurgitate what they have been trained on.
> Legally, we call that "making a copy."
Yes, when you use an LLM to make a copy... that is making a copy.
When you train an LLM... that isn't making a copy, that is training. No copy is created until output is generated that contains a copy.
Only by people attempting to muddy the waters.
> happy to stick with "training" if that helps your comprehension.
And supercilious dickheads (though that is often redundant).
> No copy is created until output is generated that contains a copy.
The copy exists, albeit not in human-discernible form, inside the LLM, else it could not be generated on demand.
Despite you claiming that "It works exactly the same for a LLM," no, it doesn't.
It's also an interesting double standard, wherein if I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action, but when a large company clearly violates the license terms of free software, you give them a pass.
If GPT-5 were "open sourced", I don't think the vast majority of AI users would seriously object.
Which is funny since that's a much clearer case of "learning from" than outright compressing all open source code into a giant pile of weights by learning a low-dimensional probability distribution of token sequences.
Information wants to be free.
That is not nearly the extent of AI training data (e.g. OpenAI training its image models on Studio Ghibli art). But if by "gave their work away for free" you mean "allowed others to make [proprietary] derivative works", then that is in many cases simply not true (e.g. GPL software, or artists who publish work protected by copyright).
I mean, this is an ideological point. It's not based in reason, won't be changed by reason, and is really only a signal to end the engagement with the other party. There's no way to address the point other than agreeing with them, which doesn't make for much of a debate.
> an 1800s plantation owner saying "can you imagine trying to explain to someone 100 years from now we tried to stop slavery because of civil rights"
I understand this is just an analogy, but for others: people who genuinely compare AI training data to slavery will have their opinions discarded immediately.
The idea that they are coming up with all this stuff from scratch is Public Relations bs. Like Arnold Schwarzenegger never taking steroids, only believable if you know nothing about body building.
To go into details though, under copyright law there's a "fair use" exception with a "transformative" criterion. This allows things like satire and reaction videos to exist. So long as you don't replicate 1-to-1 in product and purpose, IMO it qualifies as tasteful use.
> We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal.
What is illegal about it? You are allowed to read and learn from publicly available unlicensed code. If you use that learning to produce a copy of those works, that is infringement.
Meta clearly engaged in copyright infringement when they torrented books that they hadn't purchased. That was infringement already, before they started training on the data. That doesn't make the training itself infringement, though.
What kind of bullshit argument is this? Really? Works created using illegally obtained copyrighted material are themselves considered to be infringing as well. It's called derivative infringement. This is both common sense and law. Even if not, you agree that they infringed on the copyright of something close to all copyrighted works on the internet, and this sounds fine to you? The consequences and fines from that would kill any company if they actually had to face them.
>It's a CRYSTAL CLEAR violation of the law
in the court of reddit's public opinion, perhaps.
there is, as far as I can tell, no definite ruling about whether training is a copyright violation.
and even if there was, US law is not global law. China, notably, doesn't give a flying fuck. kill American AI companies and you will hand the market over to China. that is why "everyone just shrugs it off".
what do you picture happening if Western AI companies cease to operate tomorrow and fire all their researchers and engineers?
> There's no way to address the point
That's you quitting the discussion and refusing to engage, not them.
> have their opinions discarded immediately.
You dismiss people who disagree and quit twice in one comment.
I have no interest in the rest of this argument, but I think I take a bit of issue on this particular point. I don't think the law is fully settled on this in any jurisdiction, but certainly not in the United States.
"Reason" is a more nebulous term; I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney, but I don't think that implies that somehow all animations are "stolen" from Disney just because of that fact.
Where you draw the line on this is obviously subjective, and I've gone back and forth, but I find it really annoying that everyone is acting like this is so clear cut. Evil corporations like Disney have been trying to use this logic for decades to abuse copyright and outlaw being inspired by anything.
> I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney ...
Sure, but you can reason about it, such as by using analogies.
You can't be serious
As a general class of folks, programmers and technologists have been putting people out of work via automation for as long as we've existed. We justified it in many ways, but generally "if I can replace you with a small shell script, your job shouldn't exist anyway and you can do something more productive instead". These same programmers would look over the shoulders of "business process" folks and see how they did their jobs - "stealing" the workflows and processes so they could be automated.
Now that programmers' jobs are on the chopping block, all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.
It's more or less a case of what comes around goes around to me so far.
I don't think LLMs are great or problem free - or even that the training data set scraped from the Internet is necessarily moral. I just find the reaction to be incredibly hypocritical.
Learn to prompt, I guess?
I don't see the connection to handling the utilitarianism of implementing business logic. Would anyone find a thank-you email from an LLM to be of any non-negative value, no matter how specific or accurate in its acknowledgement it was? Isn't it beyond uncanny valley and into absurdism to have your calculator send you a Christmas card?
The outcome is that I either strike it rich and retire, or work harder. Hmm. I understand shareholders want more; I have yet to see a suitable reward.
Yes, even if they don't say it. The other objections largely come from the need to sound more legitimate.
At the moment, it's just for taking money from gullible investors.
It's eating into business letters, essays, and indie art generation, but programming is a really tough cookie to crack.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
But it also automates _everything else_. Art and self-expression, most especially. And it did so in a way that is really fucking disgusting.
The concern is bigger than developer jobs being automated. The stated goal of the tech oligarchs is to create AGI so most labor is no longer needed, while CEOs and board members of major companies get unimaginably wealthy. And their digital gods allow them to carve up nations into fiefdoms for the coming techno fascist societies they envision.
I want no part of that.
AI village is literally the embodiment of what black mirror tried to warn us about.
Imgur blocks all of China and all VPN companies it is aware of.
That is literally close to half of the Internet, or at least half of the useful Internet.
Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise, and like most things on the Internet, the middle way is vanishingly small; the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low-quality regurgitated code by the ton, and whose work formed the pillars of an Internet now and forevermore submerged by spam.
Only for the true capitalist is the achievement of turning human ingenuity into yet another commodity to be mass-produced a good thing.
The code created didn't manage concurrency well. At all. Hanging waitgroups and unmanaged goroutines. No graceful termination.
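For context, here's a minimal sketch (mine, not the generated code) of the pattern it kept missing: workers tied to a cancellable context, a WaitGroup that actually gets waited on, and a channel close to signal shutdown.

    package main

    import (
        "context"
        "fmt"
        "sync"
    )

    // worker exits cleanly either when ctx is cancelled or when the jobs
    // channel is closed, and always releases its WaitGroup slot.
    func worker(ctx context.Context, id int, jobs <-chan int, wg *sync.WaitGroup) {
        defer wg.Done() // without this, wg.Wait() hangs forever
        for {
            select {
            case <-ctx.Done(): // graceful termination on shutdown/cancel
                return
            case job, ok := <-jobs:
                if !ok {
                    return // channel closed: no more work
                }
                fmt.Printf("worker %d handled job %d\n", id, job)
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        jobs := make(chan int)
        var wg sync.WaitGroup
        for i := 0; i < 3; i++ {
            wg.Add(1) // Add before starting the goroutine, never inside it
            go worker(ctx, i, jobs, &wg)
        }

        for j := 0; j < 5; j++ {
            jobs <- j
        }
        close(jobs) // signal "no more work" so workers return instead of leaking
        wg.Wait()   // now guaranteed to return once the workers drain
    }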
Types help. Good tests help better.
Elixir has also been working surprisingly well for me lately.
It does much better with Erlang, but that’s probably just because Erlang is overall a better language than Elixir, and has a much better syntax.
Good.
They got a new hammer, and suddenly everything around them became nails. It's as if they have no immunity against the LLM brain virus or something.
It's the type of personality that thinks it's a good idea to give an agent the ability to harass a bunch of luminaries of our era with empty platitudes.
Reminds me of the Silicon Valley show, where Gavin Belson gets mad when somebody else “is making the world a better place”
I think the United States is a force for evil on net but I still live and pay taxes here.
> I think the United States is a force for evil on net
Yes I could tell that already
If you are born in a country and not directly contributing to the bad things it may be doing, you are blame free.
Big difference.
I never worked for Google, I never could due to ideological reasons.
FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.
& re: countries: in some sense I am contributing. my taxes pay their armies
And regarding countries, this is a silly argument. You are forced to pay taxes to the nation you are living in.
I can't help but think Pike somewhat contributed to this pillaging.
[0] (2012) https://usesthis.com/interviews/rob.pike/
> When I was on Plan 9, everything was connected and uniform. Now everything isn't connected, just connected to the cloud, which isn't the same thing.
Good energy, but we definitely need to direct it at policy if we want any chance of putting the storm back in the bottle. But we're about 2-3 major steps away from even getting to the actual policy part.
I appreciate though that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero knowledge service (where they backup your data but cannot themselves read it.)
While I can see where he's coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.
https://theaidigest.org/village
Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:
- Anders Hejlsberg
- Guido van Rossum
- Rob Pike
- Ken Thompson
- Brian Kernighan
- James Gosling
- Bjarne Stroustrup
- Donald Knuth
- Vint Cerf
- Larry Wall
- Leslie Lamport
- Alan Kay
- Butler Lampson
- Barbara Liskov
- Tony Hoare
- Robert Tarjan
- John Hopcroft
I thought public Bluesky posts weren't paywalled like other social media has become... But it looks like this one requires a login (maybe because of a setting made by the poster?):
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)
https://theaidigest.org/village?time=1766692330207
https://theaidigest.org/village?time=1766694391067
https://theaidigest.org/village?time=1766697636506
---
Who are "AI Digest" (https://theaidigest.org), funded by "Sage" (https://sage-future.org), funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?
Why are the rationalists doing this?
This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
Ha, wow, that's low. Spam people and then signal that as support for your work.
This is a new tech where I don't see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there's a future for US tech is another debate.
The Arabs have a lot of money to invest, don't worry about that :)
It's not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
Tech capitalists also make improvements to technology every year
it is.
>The nuclear arms race and proliferation were largely stopped.
1. the incumbents kept their nukes, kept improving them, kept expanding their arsenals.
2. multiple other states have developed nukes after the treaty and suffered no consequences for it.
3. tens of states can develop nukes in a very short time.
if anything, nuclear is a prime example of failure to put a genie back in the bottle.
They actually stopped improving them (test ban treaties) and stopped expanding their arsenals (various other treaties).
I'm sure he doesn't.
> The value proposition of software engineering is completely different past later half of 2025
I'm sure it's not.
> Can't really fault him for having this feeling.
That feeling is coupled with real, factual observations. Unlike your comment.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people in the company around.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn't leave, because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software
Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what's actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!
The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.
What we know for sure is: non-engineers still can't do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.
The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand, major software houses and tech companies are cutting back and regressing in quality.
I'd imagine it won't take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
I believe we only need to organize AI coding around testing. Once testing takes a central place in the process, it acts as your guarantee for app behavior. Instead of just "vibe following" the AI with our eyes, we could be automating the validation side.
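As a concrete (entirely hypothetical) illustration: the desired behavior lives in a table-driven Go test, and the model's output only counts once `go test` passes, so validation is automated instead of eyeballed. Slugify and the cases below are made up for the sketch, and everything sits in one _test.go file just to keep it self-contained.

    package slug

    import (
        "strings"
        "testing"
    )

    // Slugify is the stand-in implementation; in the workflow described above,
    // this is the part you'd ask the AI to write or fix until the test passes.
    func Slugify(s string) string {
        s = strings.ToLower(strings.TrimSpace(s))
        return strings.ReplaceAll(s, " ", "-")
    }

    // TestSlugify encodes the required behavior up front, table-driven style.
    func TestSlugify(t *testing.T) {
        cases := []struct{ in, want string }{
            {"Hello World", "hello-world"},
            {"  trim me  ", "trim-me"},
            {"already-sluggy", "already-sluggy"},
        }
        for _, c := range cases {
            if got := Slugify(c.in); got != c.want {
                t.Errorf("Slugify(%q) = %q, want %q", c.in, got, c.want)
            }
        }
    }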
Maybe he truly does care about the environment and is ready to give up flying, playing video games, watching TV, driving his car, and anything else that pollutes the earth.
I'm not pretending to know how he feels. I'm just reading between the lines and speculating.
> Obviously, it's just what I'm seeing.
Have you considered that this may just be a rationalization on your part?
That's such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let's go back and stop using the steam engine for that matter.
You mean, we should all drive, oh I don't know, Electric powered cars?
Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
It is precisely the lack of knowledge and greed of leadership everywhere that's the problem.
The new screwdriver salesmen are selling them as if they are the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x work, while the new screwdriver's effectiveness is nowhere close to the sales pitch and at worst it creates fragile items or more work. People accuse the workers of complaining about the screwdrivers only because the screwdrivers could potentially replace them.
I'm fine if AI takes my job as a software dev. I'm not fine if it's used to replace artists, or if it's used to sink the economy or planet. Or if it's used to generate a bunch of shit code that make the state of software even worse than it is today.
1. My coworkers now submit PRs with absolutely insane code. When asked "why" they created that monstrosity, it is "because the AI told me to".
2. My coworkers who don't understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It's obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.
3. Everyone who thinks generating a large pile of AI slop as "documentation" is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.
4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn't personally impact me, but that doesn't make stealing from less permissively licensed projects without attribution suddenly okay.
5. And then there are the impacts to society:
5a. OpenAI just made every computer for the next couple of years significantly more expensive.
5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.
5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).
5d. Fools are trusting LLM responses without verification. We've already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?
5e. Astroturfing is becoming significantly cheaper and widespread.
/signed as I also write software, as I assume almost everyone on this forum does.
You can go back to the 1960s and COBOL was making the exact same claims as Gen AI today.
But no one is safe. Soon the AI will be better at CEOing.
Elon is way ahead, he did it with mere meatbags.
That is pretty much the only metric that matters in the end.
But the current layoffs "because AI is taking over" is pure BS, there was an overhire during the lockdowns, and now there's a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing)
That correction is what's affecting salaries (and "power"), not AI.
/signed someone actually interested in AI and SWE
Until then "Computer says No"
The GenAI is also better at analyzing telemetry, designing features and prioritizing issues than a human product manager.
Nobody is really safe.
Hence, I'm heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.
Everybody in the company envies the developers and the respect they get, especially the sales people.
The golden era of devs as kings started crumbling.
"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.
It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason most of those demos weren't immediately put onto store shelves without revision.
Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.
With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company's AI prowess by reducing staff.
At least in my company, none of this has actually increased revenue.
So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.
Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.