The really creepy thing is the way they force you to give up your data with these products. If it were just useful add-ons, it wouldn't bother me, but the fact that Gemini requires you to turn activity history off on paid plans to get the promise that they won't train on your data or allow a person to view your prompts is insanity. If you're paying $20 for Pro or $249.99 for Ultra, you should be able to keep activity history without them training on, reviewing, or storing your data for several years.
I have a pixel watch, and my main use for it is setting reminders, like "reminder 3pm put the laundry in the dryer". It's worked fine since the day I bought it.

Last week, they pushed an update that broke all of the features on the watch unless I agreed to allow Google to train their AI on my content.

My Android phone comes hobbled unless I give it all my data to be used as training data (or whatever). I just asked, "Ok Google, play youtube music." And it responded with, "I cannot play music, including YouTube Music, as that tool is currently disabled based on your preferences. I can help you search for information about artists or songs on YouTube, though. By the way, to unlock the full functionality of all Apps, enable Gemini Apps Activity."

I'm new to Android, so maybe I can somehow still preserve some privacy and have basic voice commands, but from what I saw, it required me to enable Gemini Apps Activity, agreeing to a wall of text, just to get a simple command to play some music working.

You can switch from Gemini back to Google assistant.

https://support.google.com/gemini/community-guide/309961682/...

That is the point when I turn around and walk away from that company.
I'm almost there, but the mobile operating systems (compatible with the phones I have) are a snag at the moment.
Just stop talking to your computer and use the screen interface, that still works.
When I'm on my bike, it's difficult. I will ride no-handed and change a track, but it's more dangerous than it needs to be.

I might switch back to my iOS device, but what I'd really like to do is replace the Android OS on this Motorola with a community-oriented open source OS. Then I could start working on piping the mic audio to my own STT model and executing commands on the phone.
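For concreteness, a minimal sketch of that mic-to-STT-to-command pipeline might look like the following. This is just an illustration of the idea, not a working phone setup: the packages (sounddevice, openai-whisper) and the tiny command table are assumptions on my part.

    # Minimal sketch: local voice commands via mic capture + on-device STT.
    # Assumes `pip install sounddevice numpy openai-whisper` and a working mic;
    # the command table below is hypothetical.
    import numpy as np
    import sounddevice as sd
    import whisper

    SAMPLE_RATE = 16000  # Whisper expects 16 kHz mono audio

    def record(seconds: float = 4.0) -> np.ndarray:
        """Capture mono float32 audio from the default microphone."""
        audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                       channels=1, dtype="float32")
        sd.wait()  # block until the recording finishes
        return audio.flatten()

    def dispatch(text: str) -> None:
        """Match the transcript against a small phrase table (print stubs here)."""
        commands = {
            "next track": lambda: print("-> skipping to next track"),
            "play music": lambda: print("-> starting music playback"),
        }
        for phrase, action in commands.items():
            if phrase in text.lower():
                action()
                return
        print(f"no command matched: {text!r}")

    model = whisper.load_model("tiny")  # small enough for modest hardware
    result = model.transcribe(record(), fp16=False)
    dispatch(result["text"])

The real work would be the last mile: wiring dispatch() into actual phone actions instead of print stubs.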

Just stop if you need to adjust something?
That seems a lot like corporate excuse-making about adjusting usage to compensate for the fact that a product someone purchased has been changed, and broken, in order to force agreement to a contract. That is called coercion in many places, but it seems like your recommended solution is that people accept getting screwed just so corporations can make more money when people complain... is that correct?
I mean, any time you suggest "regulate companies" or "form a union", you get dogpiled. So until society gets its act together and collectively fixes these problems, the only immediate solution is to opt out.
I'm just saying, how often do you need to adjust your phone while you're cycling?

It's a step back to not be able to do it by voice but if you're concerned enough about your privacy, stopping once or twice during a ride doesn't sound like the end of the world.

I'm not saying it's fine that Google took away functionality but, from a practical perspective, it seems like OP was acting like there's no other option available to change tracks. There is and it's really not that inconvenient.

Google already broke the basic functionality they wanted; dropping Android now beats constantly looking for subpar workarounds.

Microsoft pulled this crap with Windows. Once they stop caring about their users, you've already lost; it's time to stop playing their game.

So just don't bring a phone at all?
GrapheneOS or LineageOS on your phone gets rid of the AI cruft. Linux on your computer.

There are few things AI is truly very good at. Surveillance at scale is one of them. Given everything going on in the world these days it's worth considering.

OP wants AI to change the music. They just don't want the new EULA added to the mix.
AI is a meaningless term when you extend it to cover everything; you may as well just call it software.

So yea the software’s EULA changed for the worse, that’s the underlying issue.

Voice recognition has always been AI, and using an LLM improves it.
Some parts of voice assistants used AI techniques; other parts didn't. Calling the whole thing AI is like calling Office 365 AI; it's too vague to be useful. The most reliable parts are the ones using dictation to interact with the preprogrammed bits.

Also, early attempts at dictation weren't considered AI; later, machine learning turned out to be useful for it, so it's been tossed into the AI bucket rather arbitrarily.

How long, though, until every input is AI-interpreted and your intention is "helpfully" translated to "what you meant"?
To be fair it seems to already be happening. My phone keyboard, always prone to mangling what I type into utter nonsense, seems to have gotten worse in the past year or so.
> Just stop talking to your computer and use the screen interface, that still works.

This reply demonstrates you don't understand the problem. Please don't contribute to the enshittification of everything by being an apologist for unethical behavior.

Siri still works fine, I guess. I almost never use it (Android user) but got exasperated with Apple CarPlay's menus and asked it to play something in my wife's car.
Apple CarPlay forces enabling of Siri (to enable voice control) and presumably that'll turn on Siri AI too?
We need consumer protection laws that protect against functional regressions like this -- if a widget could do X when I bought it, it should keep doing X for the life of the product and I shouldn't have to "agree" to an updated license for it to be able to keep doing X.
Or even updates that introduce new, undesired functionality. When I bought my PS4 (at launch), the section of the UI for video apps was pleasant and straightforward. It had the various video apps I had installed and that was it. Fast forward several years, and Sony updated the UI to prioritize showing apps that they wanted you to use (whether you had installed them or not), and even showed ads for movies and such.

I don't think it's asking too much to not make my product worse after I buy it, and I think we need legislation to prevent companies from doing that. I'm not sure what that would look like, and the government is bought and paid for by those same companies, so it's unlikely we will see that. But we do need it.

> not make my product worse after I buy it

How can such a law be written, and how can a lawyer litigate it in court? The way you've phrased it is very subjective. What is an objective measure a court can use to determine the percentage of quality drop in a product against a timeline?

Easy, mandate that any UI changes be revertable for the life of the product, or until the company goes bankrupt.
How would that work in real life, though? Now every change made to any program must be tested against an ever-growing combination of enabled and disabled UI changes.
I don't know, but I do know that in my web browser I can add and remove various buttons and right-click menu options. And on Linux I can skin my desktop environment in a variety of ways (Unity stopped working, I went to GNOME, which was glitching, and now have something very much like Unity used to be in XFCE; unlike a commercial product, I paid nothing for this).
Adding and removing buttons from the UI is vastly different from maintaining a system where which features are enabled/disabled affects the underlying data and potentially interoperability.

Do you want to work on Oracle Database [1]?

By the way, I also don't want the software I use to suffer from quality drop due to new forced "features". I just don't think the way suggested here works well.

[1] https://news.ycombinator.com/item?id=18442941

Tough. Somehow IKEA is doing fine without being able to break into my house and change the way my furniture works. Devices and software should not be any different.
> Easy, mandate that any UI changes be revertable for the life of the product, or until the company goes bankrupt

I'm aware people are annoyed with big UI overhauls that seemingly do nothing, but I don't think you understand what it would take to support what you wrote. You're describing something that gets exponentially harder to maintain as a product ages. It's completely prohibitive to small businesses. How many UI changes do you think are made in a year for a young product? One that is constantly getting calls from clients to add this or that? Should a company support 100 different versions of their app?

I understand a small handful of companies occasionally allow you to use old UI, but those are cases where the functionality hasn't changed much. If you were to actually mandate this, it would make a lot of UIs worse, not better.

As much as people want to act like there's a clear separation, a lot of UI controls are present or absent based on what business logic your server can do. If you are forced to support an old UI that does something the company cannot do anymore, you are forcing broken or insecure functionality. And this would be in the name of something nobody outside of Hacker News would even use. Most people are not aware there is an old.reddit.com.

There are a couple of ways you can do this:

1) Have this law only apply B2C.

2) Stop having rolling feature updates except on an opt-in basis. It used to be that when I bought an operating system or a program it stayed bought, and only updated if I actively went out and bought an update. Rolling security updates are still a good idea, and if they break UI functionality then let the end customer know so that they can make the decision on whether or not to update.

For hosted software, such as Google office, is it really that much more difficult to host multiple versions of the office suite? I can see issues if people are collaborating, but if newer file formats can be used in older software with a warning that some features may not be saved or viewable, then the same can be done with a collaborative document vis-a-vis whatever version of the software is opening the document.

My wife recently went with 0patch and some other programs to cover her Win10 when Microsoft stopped updating it. She still got force-updated with two updates having to do with patching errors in Windows' ESU feature that blocked people from signing up for the 1 year of ESUs. She let those updates happen without trying to figure out a way to block them, as they have no other impact on her operating system, but it would have been nice if Microsoft had been serious about ending the updates when it said it was.

I am not a programmer, but come on. This was done in the past with far less computational ability.

This entire subthread is full of people missing the point: no removing features.

You can add them, you can even move them, but you don't get to take back something you already sold me, unless I also get to take back the money I gave you.

Really not super interested in excuses and whining. Either support the features you sold me, or refund my money. It really is that simple... and it really should be the law.

You can wish the thread was about that, but that's a completely different conversation, and you're the first to bring it up. I haven't seen any excuses for it. I don't like when I have something simple like an export tool in my app and it's suddenly gone.

But the question is how do you define what a feature is in networked apps? If you play an online game with a sniper rifle that one-shots people, and the developers nerf it, have they taken a feature from you? But everyone else loved the nerf? How do we support you and the players? Let you continue one-shotting them?

If the app you're paying for could message other users, but now they can block you, is the company supposed to give you a refund because now you can't message some users?

Good questions. I could argue that the game rules can reasonably include clauses such as, "We can adjust weapon/defense parameters at any time." But the addition of a blocklist feature is a bit harder to hand-wave away because it could be said to be economically damaging to spammers. I would say yes, if the ability to message everybody is advertised as a feature, the company would need to refund the spammers (and kick them off.) Hopefully the company will learn to provide clearer terms of service next time.

In general I think the best answer to your objections is to require companies to specify up front exactly what features are being sold, and for how long they are guaranteed to be available. The onus would then be on the consumer to evaluate the list of guaranteed features against their wants and needs. Consumers would hopefully learn, over time, not to buy products that don't provide these guarantees up front.

Right now what they (we) are learning is not to trust anything with an Internet connection, because of abuses from a small number of prominent bad actors. Which is unfortunate.

I'm not trying to be overly negative, it's just hard not to write a lot and respond point by point.

> Have this law only apply B2C.

I don't think limiting it to B2C changes much. Now instead of business customers calling and asking for features, you have swaths of people asking for a feature on the internet.

> I am not a programmer, but come on. This was done in the past with far less computational ability

If by computational ability you mean the actual power of our hardware, this isn't really a computational problem, it's a manpower problem. We have faster computers, but our velocity as developers has been relatively stagnant the past 20 years, if not worse.

Believe me, I'm totally sympathetic to the idea that web apps could support older versions. I have thought of doing it myself if I were to get out of contract work. But I'm aware of how much extra work that is, and it would be something I do for fun, not something that most people would appreciate.

> Stop having rolling feature updates except on an opt-in basis. It used to be that when I bought an operating system or a program it stayed bought, and only updated if I actively went out and bought an update

Having an opt-in doesn't really change what I'm talking about. This is lumping different kinds of software together, and it would be helpful to separate them. There are apps that do local work on your computer, apps that communicate with a network, and the OS itself.

Apps that work locally and don't need to talk to a server can have multiple versions, and they often do. That's a solved problem. I have not been forced to upgrade any third party app on my computer. But I have had AI crammed into Microsoft apps and I hate it.

Apps that communicate with a server, and other users, are the source of a lot of issues I'm talking about. Maintaining versions for these creates cascading problems for everyone.

For OS: I'm all for not being forced to upgrade my OS. But if I don't upgrade, the reality is I will miss security updates and won't be able to use newer apps. That was the case in the 90's, and it's the case now.

> Rolling security updates are still a good idea

That's doing some heavy lifting. It's a good idea, sure, but you can't just sprinkle security updates onto older versions. You're just multiplying how long each security fix takes for all users.

> For hosted software, such as Google office, is it really that much more difficult to host multiple versions of the office suite

In Google's case, it's difficult to maintain one version of an app. They kill apps left and right. You're referencing software from the biggest companies in the world. Reddit manages just one other version, and that's because the core of their app has stayed the same since 1.0. If we required all B2C to always support older versions, we'd essentially make it illegal for small companies to make networked services.

Here's how it plays out for a small company:

- Every security fix has to be backported to every version of the app. This is not free, this is extra work for each version. What if it's discovered Google Docs has a vulnerability that could leak your password and has for 20 years? That's a lot of versions to update.

- If the app interacts with other users in any way, new features may need to support old versions anyway. How do you add a permissions system to Google Docs if the old version has no permissions? What should happen on the old app when they access a doc they couldn't access before? You have to program something in.

- Support staff has to know 10 different versions of the app. "Have you tried clicking the settings icon?" "What settings icon?"

- Internet Guides? YouTube tutorials? When you Google how to do something, you'd need to specify your version.

- Because we are doomed to support older versions in some capacity, companies will just not work on features that many people want because it's too hard to support the few people on older versions.

This is why apps with "versions" usually have support periods, because it would be impossible for them to support everything.

> This is why apps with "versions" usually have support periods, because it would be impossible for them to support everything.

And that's fine. Just leave it that way and stop with the rolling feature updates that a person can't block because the only way you sell your software is as SaaS.

As a lawyer I think this could potentially be litigated as a breach of the implied warranty of merchantability.
Would the question still be about measuring the drop in quality to prove that the product (the software in this case) is in breach of the law?
To me it seems crazy to legislate that the UI of software you have licensed cannot change.
You don't do it that way. As the other poster suggests, you mandate that UI changes can always be rolled back by the user.

It should be illegal for you to change a product you sold me in a way that degrades functionality or impairs usability, without offering me the option of either a full refund or a software rollback.

If that causes pain and grief for server-based products, oh, well. Bummer. They'll get by somehow.

I would go even further than a full refund - that doesn't really make the user whole, who will now have to invest time into finding and learning an alternative product.

And even with the ability of rolling back somewhere hidden in the settings, forced UI changes are annoying at best - they should always come at a time chosen by the user (including "never") and not any other time.

Yeah, my pixel watch went straight into the trash. All set. Based on my conversations with folks working on these products, it seems they simply can't fathom why anybody is so concerned about privacy when giving it up yields so many useful products and services.
Internationally coordinated action by consumers, taking a company to small claims court at the same time around the world to seek redress for defective products, would be an effective strategy.
Are you proposing a "World Sue A Tech Giant Day"? A global bonanza of micro-litigation that bleeds AI-leviathans dry by a thousand cuts?

I'm in, but let's have it in October or something when I'm less busy.

I like this idea, though I'm concerned about how we could make sure the courts are ready to handle the deluge of activity.

Update: talked to some experts. IANAL, and they aren't either. This would be cataclysmic for the courts unless they knew it was coming AND every claim was filed correctly (fees paid, no errors, etc). Even if everything was done perfectly, it would be a ton of work and there's no way every case would be processed in a day. It's also likely that all the identical cases filed in a single jurisdiction would be heard together in a single trial. There's also weirdness when you consider where each claim is filed. Quote: "you may be in the right, but I can guarantee you would have a terrible time"

The point isn’t to win every individual case, is it?

I assume the main point would be getting the attention of politicians who would step in and intervene. Especially if it’s a situation where the courts are truly overwhelmed.

Why would that be anyone's problem? If users keep having to sue Apple to get stuff Apple was supposed to have given them, courts may impose higher and higher penalties until Apple starts just giving them to users without wasting anyone's time.
You still have automatic "updates" on? In 2025?
Did you agree, or did you give up your data?
> I have a pixel watch

you rented/leased a watch for an undefined amount of time.

https://gitlab.com/natural_aliens/geminichatsaver/-/tree/mai... pull requests and any other feedback welcome
> The real creepy thing is the way they force you to give up your data with these products

This is pretty much everything everywhere right now, except local Linux, mostly.

And the fact that even if you don't want it and don't use it, they still charge you as if you do.
When people pay for YouTube subscriptions to avoid ads, does YouTube/Google continue to collect and store data?

Do the terms allow YouTube/Google to use the data collected for any purpose?

People pay for this?
Lots of things in life seem to be the majority having to go along with the decisions of the minority. I remember in 2012 when Facebook put white chevrons for previous- and next-photo in the web photo gallery product and thinking how this one product decision by a handful of punks has now been foisted on the world. At the time I was really into putting my photography on FB and, somewhat pretentiously, it really pissed me off to start having UI elements stuck on it!

Car dashboards without buttons, TVs sold with 3D glasses (remember that phase?), material then flat design, larger and larger phones: the list is embarrassing to type because it feels like such a stereotypical nerd complaint list. I think it's true though - the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.

And now with AI, too. I just interacted with duck.ai, DuckDuckGo's stab at a bot. I long for a little more conservatism.

This is what happens when you let companies become empires, with the tacit agreement of your "democratically-elected" government. In no sane world should my electricity bill go up because Google wants me to put glue on pizza. Unfortunately, I don't think we live in a sane world.
>the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.

They're the ones who are just asking for it... they themselves need more forceful training. It's up to us to move slower and fix things.

Microsoft is all about this. You know how they also force stuff you don't want on the OS? Somewhere within Microsoft there might be a dashboard where they show their investors people are using Bing and Copilot. Borderline financial scam if you think about it.
Copy and paste is not working reliably in Windows anymore; coincidentally, it's breaking at the same time Microsoft is moving to replace all copy/paste with OCR only. It's garbage.
I haven't used Windows for years, but the sheer amount of commentary on recent changes, and the claims, are so beyond belief...

It reads like a company that is only there to squeeze money out of existing customers and hell bent on revenues above growth. Like one of those portfolio acquisitions.

As others have noted, reality has become indistinguishable from satire.
I just built a gaming PC after 10+ years without touching Windows, and I gotta say the experience is truly awful.

Small stuff such as: the keyboard shortcut that is set up for switching keyboard layouts is wrong; the one displayed to me in the UI is the wrong one. I discovered it because the shortcut for the Discord overlay (Shift + `) was making me switch keyboard layouts; I couldn't comprehend why until I noticed that shortcut consistently switched them while the one displayed in the UI did not. There's no way to change the shortcut: whatever I set up in the UI does not work, but Shift + ` always works, no idea why.

Copy and paste has definitely surprised me sometimes. I was designing a custom livery for a sim racing game, copying images to use as stickers, and the clipboard would paste very different images from many "copies" ago out of nowhere. I couldn't create a reproducible way to file a bug report; it works sometimes and doesn't at all at other times.

I set updates to happen at night, between 03:00 and 07:00. It doesn't matter; the computer rebooted a few times out of nowhere to apply updates. I didn't even get a notification about it, simply got the "Restarting" screen.

It's absolutely shoddy. As much as I have many complaints with macOS over the past 8+ years, it's nowhere near as shitty of an experience. I'm only a couple of months into Windows again, and it's way worse than I remember it from the days of Win2k/Windows XP/Windows 7.

Genuinely, VSCode has been broken for me with copying due to it desperately trying to vibe code for me. You've reminded me to fix that.
Had the same issue switching to Cursor. Cmd+K multiple-selection skip is no longer the keymap. Drives me fucking nuts.
I haven't noticed this; also, how exactly would OCR copy-paste work? In order to copy text I would need to select text, which would mean it's already encoded as text.
Encoded as text, but converted to MS Comic Sans to introduce some OCR errors.
I can imagine that, instead of a text select, they take a screenshot.

Round trip through recall and OCR, here's your "text" or image for pasting.

Sounds dumb. I know.

OCR is regularly the easiest way to copy web page text on an iPhone by taking a screenshot first, and copying text from the photo. iPhone browser text selection is often broken.

Then again, a friend sent a screenshot of a contact and I asked AI to convert that to a vCard I could import (impressively saved time and was less error-prone).

> Msoft is moving to replace all copy/paste with OCR only.

Source?

That's why this was the year I finally dropped Windows and VSCode forever. Not that hard for me because all the games I play work flawlessly in Proton, and I already used Linux at work.
What is your replacement for VSCode?
You can drop Windows and keep VSCode. I'm running it on this laptop (Kubuntu 25.04).

To install it, browse to here: https://code.visualstudio.com/ (search: "vscode"). Click on "Download for Linux (.deb)" and then use Discover to install and open it - that's all GUI based and rather obvious. You are actually installing the repository and using that which means that updates will be done along with the rest of the system. There is also a .rpm option for RedHat and the like. Arch and Gentoo have it all packaged up already.

On Windows you get the usual hit and miss packaging affair.

Laughably, the Linux version of VSCode still bleats about updates being available, despite the fact that it is using the central package manager - something Windows sort of has in MSI but still "lacks". Mind you, who knows what is going on - PowerShell apps have another package manager or two and it's all a bit confusing.

It's odd that Windows apps - e.g. any non-Edge browser, LibreOffice, PDF wranglers, anything not MS (and even then, there are things like their PowerToys sort of apps) - still need their own update agents and services, or manual installs.

I learned today that you can install vscode via winget now lol
Yes but winget is not the Windows central package manager. Actually, Windows does not have one but for some reason you have enforced updates from a central source.

Why does Windows not have a formal source for safe software? One that the owner (MS) endorses?

One might conclude that MS won't endorse a source of safe software, and hence take responsibility for it, because they are not confident in the quality of their own software, let alone someone else's.

I believe that MS wants that to be their own MS Store, though I don't know of a single person who actually uses it as their preferred way to manage software. For what it's worth, VS Code is available there: https://apps.microsoft.com/detail/xp9khm4bk9fz7q
I decided to finally learn a modal editor and installed Helix. Ideal for me since it's very hackable if you're already familiar with Rust. Very easy to build from source. Plus all I need is LSP support and I'm good at work, clangd is all I need for an IDE.
Coming from vim, the similar-but-not-quite keybinds can be confusing.
Yeah everyone I've tried to introduce helix to who was already a vim master hated it. It's great for people who don't already have that muscle memory, I found the reversed selection->action model a lot more intuitive personally.
VSCodium is pretty good.
Not who you responded to, but for a GUI editor I tend to like Zed, and for terminal I like Helix. Yes, Neovim is probably better to learn because Vim motions are everywhere, but I like Helix's more "batteries included" approach.
They've been all about this since Windows 95.
AI reminds me of the time Google+ was being shoved down our throats. If you randomly clicked on more than 7 hyperlinks on the internet, you'd magically sign up for Google Plus.

Around that time, one of my employer's websites had added Google Plus share buttons to all the links on the homepage. It wasn't a blog, but imagine a blog homepage with previews of the last 30 articles. Now each article had a Google Plus tag on it. I was called to help because the load time for the page had grown from seconds to a few minutes. For each article, they were adding a new script tag and a Google Plus dynamic tag.

It was fixed, but so many resources were wasted on something that eventually disappeared. AI will probably not disappear, but I'm tired of the busywork around it.

The difference was that Google Plus was actually kind of cool. I'm not excusing them shoving it down your throat, but at least it was well designed.

Most of the AI efforts currently represent misadventures in software design, at a time when my Fitbit Charge can't even play nice with my Pixel 7 phone. How does that even happen?

I remember believing Google+ would win because it was quite nicely done. But I guess it never caught on with the masses enough to be successful by Google's definition of success (AdSense?).

PS: I was thinking that I didn't notice it being shoved down because I was high on the Koolaid. But I do remember when they shoved it in YouTube comments.

Google+ lost because when they launched, they didn't let everyone join. That means that people joined and couldn't bring their friends over, so they bounced off of it. By the time they opened it up to everyone it had a bad reputation of being "dead". And then of being obnoxious when Google refused to allow it natural growth.

I think they intended to be like Facebook and have a selective group of people join, but they just allowed any random set of people to join and then said you can bring 5 or some low number with you. That was never going to work for the rapid growth they wanted.

I liked Google+, but Google really mismanaged it.

All that time and effort that went into forcing Google+ everywhere and its legacy is just lots of people accidentally ending up with 2 YouTube accounts from when they were messing with that
thanks for reminding me why I have two YouTube profiles lol
I liked G+.

It felt like I had some level of control of my feed and what I saw and for the time it existed the content was pretty good :(

> I will not allow AI to be pushed down my throat just to justify your bad investment.

Pretty much my sentiment too.

The neat thing about all this is that you don’t get a choice!

Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.

It’s happening, with you or to you.

I do have a choice, I just stop using the product. When messenger added AI assistants, I switched to WhatsApp. Now WhatsApp has one too, now I’m using Signal. Wife brought home a win11 laptop, didn’t like the cheeky AI integration, now it runs Linux.
Sadly, almost none of my friends care or understand (older family members or non-tech people). If I tried to convince friends to move to Signal because of my disdain for AI profiteering, they'd react as if I were trying to get them to join a church.
Reasonably far off topic:

Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.

Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?

I run a small online software business and I am continually getting cards refused for blue chip customers (big companies, universities etc). My payment processor (2Checkout/Verifone) says it is 3DS authentication failures and not their fault. The customers tell me that their banks say it isn't the bank's fault. The problem is particularly acute for UK customers. It is costing me sales. It has happened before as well:

https://successfulsoftware.net/2022/04/14/verifone-seems-to-...

I've recently found myself having to pay for a few things online with bitcoin, not because they have anything to do with bitcoin, but because bitcoin payments actually worked and Visa/MC didn't!

For all the talk in the early days of Bitcoin comparing it to Visa and how it couldn't reach the scale of Visa, I never thought it would be that Visa just decided to place itself lower than Bitcoin.

Kind of the same as Windows getting so bad it got worse than Linux, actually...

If by "traditional solution" you mean a bunch of data is fed into creating an ML model and then your individual transaction is fed into that, and it spits out a fraud score, then no, they'd not using LLMs, but at this high a level, what's the difference? If their ML model uses a transformers-based architecture vs not, what difference does it make?
> what difference does it make

Traditional fraud-detection models have quantified type-I/II error rates, and somebody typically chooses parameters such that those errors are within acceptable bounds. If somebody decided to use a transformers-based architecture in roughly the same setup as before, then there would be no issue, but if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapping a modern LLM, then there would be huge issues.
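Concretely, the "quantified error rates" part might look like this minimal sketch: pick the score threshold from held-out labeled data so the false-positive rate stays under an agreed bound, then measure and report the resulting false-negative rate. The distributions and the 1% bound here are synthetic, purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical validation data: model scores for known-legitimate and
    # known-fraudulent transactions (higher score = more suspicious).
    legit_scores = rng.beta(2, 8, size=10_000)
    fraud_scores = rng.beta(8, 2, size=500)

    # Choose the threshold so the type-I error rate (flagging a legitimate
    # transaction) stays below an agreed bound, e.g. 1%.
    max_fpr = 0.01
    threshold = np.quantile(legit_scores, 1 - max_fpr)

    # The type-II error rate (missed fraud) is then a measured, reportable
    # quantity rather than a mystery.
    fnr = float(np.mean(fraud_scores < threshold))
    print(f"threshold={threshold:.3f}, measured FNR={fnr:.3f}")

With an LLM prompt wrapper there is no analogous knob to turn, and no error rate anyone can stand behind.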

One hallucinates data, one does not?
Even if my favorite service is irreplaceable, I can still use it without touching the AI part of it. If the majority who use a popular service never touch the AI features, it will inevitably send a message to the owner one way or another: you are wasting money on AI.
Nah, the owner will get a filtered truth from the middle managers, who present them with information that everything's going great with AI, and that the lost money is actually because of those greedy low-level employees drinking up all the profit by working from home! The entire software industry has a massive truth-to-power problem that just keeps getting worse. I'd say the software industry in this day and age feels like Lord of the Flies, but honestly even that feels too kind.
Exactly this. "AI usage is 20% of our customer base" "AI usage has increased 5% this quarter" "Due to our xyz campaign, AI usage has increased 10%"

It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.

almost the same as RTO mandates:

we’ll force you to come back to justify sunk money in office space.

I personally think all the gains in productivity that happened with WFH were just because people were stressed and WFH acted like a pressure relief. But too much of a good thing and people get lazy (I'm seeing it right now: some people are filing full timesheets while not even starting, let alone getting through, a day of work in a week), so the right balance is somewhere in the middle.

Perhaps… the right balance is actually working only 4 days a week, always from the office, and just having the 5th day proper-off instead.

I think people go through "grinds" to get big projects done, and then plateaus of "cooling down". I think every person only has so much grind to give, and extra days don't mean more work, so the ideal employee is one you pay for 3-4 days per week only.

We just need a metric that can't be gamed which will reliably show who is performing and who is not, and we can rid ourselves of the latter. Everyone else can continue to work wherever the hell they want.

But that's a tall order, so maybe we just need managers to pay attention. It doesn't take that much effort to stay involved enough to know who is slacking and who is pulling their weight, and a good manager can do it without seeming to micromanage. Maybe they'll do this when they realize that what they're doing now could largely be replaced by an LLM...

Not for nothing did the endless WSJ and Forbes articles about "commuting for one hour into expensive downtown offices is good, actually" show up around the same time RTO mandates did.
Don't forget about the poor local businesses. Someone needs to pay to keep the executives' lunch spots open.
Hey now. Little coffee shops and lunch spots and dry cleaners are what make cities worth living in in the first place.
Well, not if rents crash because all the offices moved out of the area, and the lunch spot can afford to stay open and lower prices.

We don't talk enough about how the real estate industry is a gigantic drag on the economy.

It really gives me the same vibes as the sort of products that go all in on influencer marketing. Nothing has made me less likely to try "Raid Shadow Legends" than a bunch of youtubers faking enthusiasm about it.

It's a sort of pushiness that hints not even the people behind the product are very confident in its appeal.

I see comments like this one* and I wonder if the whole AI trend is a giant scam we're getting forced to play along with.

* https://news.ycombinator.com/item?id=46096603

> And let’s be clear: We don't need AGI (Artificial General Intelligence).

In general, I think we want to have it, just like nuclear fusion, interplanetary and interstellar colonization, curing cancer, etc. etc.

We don't "need" it similar to people in 1800s don't need electric cars or airports.

Who owns AGI, or what purpose the AGI believes it has, is a separate discussion - similar to how airplanes can be used to transport people or fight wars. Fortunately, today most airplanes are made to transport people and connect the world.

> In general, I think we want to have it

Outside of tech circles no one I talk to wants AI for anything

All of my family members bar one use ChatGPT for search, or to come up with recipes, or other random stuff, and really like it. My girlfriend uses it to help her write stories. All of my friends use it for work. Many of these people are non-technical.

You don’t get to 100s of millions of weekly active users with a product only technical people are interested in.

Can second this. Am the only tech worker among my friends and family and every single one of them reacts to AI the same as to crypto or NFTs
Do they want self driving cars or domestic help robots though?
No.
They’re in a tiny tiny tiny minority then.
Got any stats to back that up?
I meant as a society we should want AGI, but I understand that's not how most feel.
This seems stupid to me

As a society, nothing would be more harmful than AGI controlled by corporations and governments. The rest of us should fight tooth and nail to make sure that it never happens

If AGI can be controlled by corporations or governments, that's not very intelligent of it, is it?

You have a clear idea of what AGI would look like, and don't want that. I think you don't have such an idea, and none of us do; it will surprise us just like the internet and the smartphone would have surprised someone, even a very technically inclined someone, 50 years ago.

AI does some things but it's nowhere near as good as it has to be to justify its valuations.
Yet somehow ChatGPT has almost a billion users. That's a lot of tech bros.
What's even more impressive is the retention charts: https://x.com/far33d/status/1981031672142504146

Every single cohort is smiling upward from the past two years. That is insane, especially at their scale! AI is useful to people.

I think there's quite a jump from "ChatGPT has a high user retention rate" to "AI is useful to people". That's like saying funko pops were useful to nerds just because they kept buying them.
Any time there is something new, everyone will sign up to try it out. Give it time. Once there are enough intrusive ads (or subtle ads shimmed into answers), social manipulation and political bias once it hits critical mass, rewriting of history, squeezed rate limits, and more cost for lower rate limits, that number will drop - if they are honest and/or delete inactive accounts. The negative features will not creep in until they believe they have achieved critical corporate capture and dependency.
The negatives don’t change the inherent demand from people for AI.
They actually do. I used to like twitter and now I don’t use it anymore because it’s gone to shit.

People used to google stuff before it became clickbait content and ads.

Same thing is gonna happen with ai chatbots. You begrudgingly use them when you have to and ignore them otherwise.

Twitter has a TON of active users though and those aren't going anywhere.

Hell, those that did leave Twitter did it to move to Bluesky which is basically Twitter under a different banner.

Even if people move away from specific instances of some form of technology (like Twitter, Bluesky, Mastodon or whatever) they are not necessarily moving away from the idea/tech iself (like microblogging in this example).

Same with other social media: notice how after Reddit "went to shit", the people who felt like that and did move away didn't move back to forums or whatever; they went to Reddit-like boards like Lemmy.

Active users on Twitter have gone down massively over the past 4 years! As far as we know, because the company doesn't report numbers as proactively as it used to.

And sure, you can say people just moved to other platforms, but I don’t think you can substantiate that either.

Personally I just dropped all the Twitter-likes, and a lot of my old Twitter friends did too. We have Discord servers now.

But it’s hard to have a discussion like this without data and we’re never gonna have the data. So you have to use qualitative data instead.

This might be obvious, but I think the only way forward is to disengage from services offered by these mega-tech companies. Degoogling has become popular enough to foster open communities that prioritize their time and effort to keep software free from parasitic enterprises.

For instance, I am fiddling with LineageOS on a Pixel (ironically enough) that minimizes my exposure to Google's AI antics. That doesn't mean to say it is easy or sustainable, but enough of us need to stop participating in their bad bets to force upon that realization.

I'm hoping "degoogle" is the 2026 word of the year.
No one with a white-collar job in the US can get away from Google and Microsoft. We're forced to use one or the other, and some of us are forced to use both.

That's not to mention all the other tech companies pushing AI (which is honestly all of them).

Agreed. At the level of companies, it is hard to find any practical solution. Personally, I am trying to do what I can.
My healthcare provider's app in Germany refuses to work on anything that isn't a phone running an official Google™-verified® and hardware-attested OS. Same with some banks.
I feel things like these should be illegal. There must be other genuine ways to verify the end user.
We managed to have healthcare before smartphones and Google. It's definitely possible.
Is it possible to permanently disable Gemini on Android? I keep getting it inserted into my messages and other places, and it's horrible to think that I'm one misclick away from turning it on.
Sorry, you've irrevocably consented by touching a button that appeared above what you were trying to tap half a millisecond earlier.
That only happens with Apple, so it's fine.
My feeling is we need laws to stop it
The industry agrees with you, hence the regulatory capture.
Too big to fail now
If it only takes a few years for a private entity to become "too big to fail" and quasi-immune to government regulation, we have a real problem.
Ah yeah, and honestly we do seem to have a real problem. Here's hoping OpenAI doesn't get the bailout they seem to be angling for...
You don't like some features being added to products so you want laws against adding certain features?

I might not like a certain feature, but I'd dislike the government preventing companies from adding features a whole lot more. The thought of that terrifies me.

(To be clear, legitimate regulations around privacy, user data, anti-fraud, etc. are fine. But just because you find AI features to be something you don't... like? That's not a legitimate reason for government intervention.)

I think it's more about enforcing having easy mechanisms to opt out, which seem to be absent with regards to AI integration.

It's better to assume good faith when providing a counter argument.

That doesn't change anything. If there aren't any harms except that certain people don't "like" a feature, it's not the government's role to force companies to allow users to opt out of features. If you don't like a feature, don't buy the product. The government should not be micromanaging product design.
What product should I buy if I need a smartphone to e.g. pay for parking but I don't want a smartphone that tracks me?
Take it up with your city council, if they're the ones requiring a smartphone to pay for parking.

But also, you're going to have to be more specific about what tracking you're worried about. Cell towers need to track you to give you service. But the parking app only gets the data you enable with permissions, and the data the city requires you to give the app (e.g. a payment method). So I'm not super clear what tracking you're concerned about?

If you don't use your smartphone for anything but paying for parking, I genuinely don't know what tracking you're concerned about.

Why isn't it the government's role?

Because you think it’s not?

What if I, and many other people, think that it is?

Because it's ultimately a form of censorship. Governments shouldn't be in the business of shutting down speech some people don't like, and in the same way shouldn't be in the business of shutting down software features some people don't like. As long as nobody is being harmed, censorship is bad and anti-democratic. (And we make exceptions for cases of actual harm, like libelous or threatening speech, or a product that injures or defrauds its users.) Freedom is a fundamental aspect of democracy, which is why freedoms are written into constitutions so simple majority vote can't remove them.
1) Integration or removal of features isn't speech. And has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).

I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

And legally, code and software are considered a form of speech in many contexts.

Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.

Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.

I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.

But I think, given that AI has encouraged people to suicide, and would allow minors the ability to circumvent parental controls, as examples, that regulations pertaining to AI integration in software, including mandates that allow users to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Outside of that, the leaking of personally identifiable information does cause material harm every day. So there needs to be proactive control available to the end user regarding what AI does on their computer, and how easy it is to accidentally enable information-gathering AI when that was not intended.

I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.

Those examples of harm are not good ones.

The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.

As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.

But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.

(Also, regarding sonograms, I typed without thinking -- yes of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors not lawmakers.)

> Those examples of harm are not good ones.

I emphatically disagree. See you at the ballot box.

> but it's not a major practical concern for any end users so far.

My wife came across a post or comment by a person considering preemptive suicide in fear that their ChatGPT logs will ever get leaked. Yes, fear of leaks is a major practical concern for at least that user.

> If there aren't any harms except that certain people don't "like" a feature

The reason I don't like these sorts of features is because I think they are harmful, personally

In a democratic society, "government" is just a tool that the people use to exercise their will.
Yes. I think laws should be used to shut down things that are universally disliked but for which there is no other mechanism for accountability. That seems like obviously the point of laws.
Except these LLM features are not universally disliked. If they were, believe me, the companies would not be building them.

Lots of people actually find them useful. And the features are being iterated on to improve them.

Not the LLM features - the un-disableable intrusions to advertise them, which rely on controlling platforms and so being able to use them to anticompetitively promote their own products.
>You don't like some features being added to products so you want laws against adding certain features?

Correct, especially when the features break copyright law, use as much electricity as Belgium, and don't actually work at all. Just a simple button that says "Enabled", and it's off by default. Shouldn't be too hard, yeah? You can continue to use the slop machine, that's fine. Don't force the rest of us to get down in the trough with you.

I have no problem with a company voluntarily choosing to make it a toggle.

I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

If there are problems with copyright, that's an issue for the courts -- not a user toggle. If you have problems with the electricity, then that's an issue for electricity infrastructure regulations. If you think it doesn't work, then don't use it.

Passing a law forcing a company to turn off a legal feature by default is absurd. It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.

>I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

I have a big problem with companies forcing me to use garbage I don't want to use.

>If there are problems with copyright, that's an issue for the courts -- not a user toggle.

But in the meantime, companies can just get away with breaking the law.

>If you have problems with the electricity, then that's an issue for electricity infrastructure regulations.

But in the meantime, companies can drive up the cost of electricity with reckless abandon.

>If you think it doens't work, then don't use it.

I wish I lived in your world where I can opt out of all of this AI garbage.

>Passing a law forcing a company to turn off a legal feature by default is absurd.

"Legal" is doing a lot of heavy lifting. You know the court system is slow, and companies running roughshod over the law until the litigation works itself out "because they're all already doing it anyway" is par for the course. AirBnB should've been illegal, but by the time we went to litigate it, it was too big. Spotify sold pirated music until it was too big to litigate. How convenient that this keeps happening. To the casual observer, it would almost seem intentional, but no, it's probably just some crazy coincidence.

>It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.

Forcing companies to stop being deleterious to society is not censorship, and it isn't Handmaid's Tale to enforce a semblance of consumer rights.

> I have a big problem with companies forcing me to use garbage I don't want to use.

That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

In reality, companies try their best to build products that make money, and they compete with each other to do so. These principles have led to amazing products. And as long as no consumers are being harmed (e.g. fraud, safety, etc.), asking the government to interfere in product decisions is a terrible idea. The free market exists because it does a better job than any other system at giving consumers what they want. Just because you or a group of people personally don't like a particular product isn't a reason to overturn the free market and start asking the government to interfere with product design. Because if you start down that path, pretty soon they're going to be interfering with the things you like.

> The free market exists because it does a better job than any other system at giving consumers what they want.

Bull. Free markets are subject to a lot of pressures, both from the consumers, but also from the corporate ownership and supply chains. The average consumer cannot afford a bespoke alternative for everything they want, or need, so are subject to a market. Within the constraints of that market it is, indeed, best for them if they are free to choose what they want.

But from personal experience I know damn sure that what I really really want is often not available, so I'm left signalling with my money that a barely tolerable alternative is acceptable. And then, over a long enough period of time, I don't even get that barely tolerable alternative anymore as the company has phased it out. Free markets, in an age of mass production and lower margins, universally mean that a fraction of the market will be unable to buy what they want, and the alternatives available may mean they have to go without entirely. Because we have lost the ability to make it ourselves (assuming we ever had that ability).

> But from personal experience I know damn sure that what I really really want is often not available

But that's just life. I genuinely don't understand how you can complain that not every product is exactly the product you want. Companies are designing their products to meet the needs of millions of people at the price point they can pay for it. Not for you personally.

We have more consumer choice than we've ever had in modern history, and you're still complaining it's not enough?

Even when we lived in tribes and made everything ourselves, we were extremely limited in our options to the raw materials available locally, and the extremely limited ability to transform things. We've never had more choice than we have today. I cannot fathom how you are still able to complain about it.

I'm just formulating an argument that a free market is not the be-all and end-all. If you have the money, bespoke is better. And if you don't have the money, making it yourself is better, if you have the skills (which most don't for most purposes).

Issues that do plague the current market in the US, that impact my household enough to notice, are:

1) Product trends. When a market leader decides to go all in on something, a lot of the other companies follow along. We've seen this in internet connectivity, touchscreens in new cars, ingredients in hair care products, among others. This greatly limits the ability of consumers to find alternatives that do not have these trends. In personal care products this is a significant issue when it comes to allergies or other kinds of sensitivities.

But in general just look at the number of people who complain about things such as a lack of discrete buttons for touchpads. Not even Framework offers buttoned touchpads as an option, despite there being a market for them.

It's obvious that it's the vocal, heavy spenders who determine what's on the market. Or it's a race to the bottom in terms of price that determines this. It's not the average consumer.

2) Perfume cross-contamination as an extension of chemical odors in general[0,1]. In recent years many companies with perfumed products such as cleaning agents have increased the perfume or increased its duration with fixatives. This amplified after so many people had their sense of smell damaged during early COVID (lots of complaints about scented candles and the like not having an odor anymore, et cetera).

This wouldn't be a problem from a consumer point of view except that the perfumes transfer to non-perfumed products - basically anything that has plastic or paper absorbs second-hand fragrances pretty well. I live in as close as we can get to a perfume-free household, for medical reasons. It's effectively impossible to buy certain classes of products, or anything at all from certain retailers, that doesn't come perfumed. There are major stores such as Amazon and Target that we rarely buy from as we have to spend a lot of money, time, and effort to desmell products (basically everything purchased from Amazon or Target now has a second-hand perfume).

It's possible to have stores that have both perfumed products and non-perfumed products such that perfume cross-contamination doesn't occur. But this requires the appropriate ventilation, and isn't something that's going to happen unless one of the principals of the store has a sensitivity.

And then there are perfumes picked up in transit from the wholesaler, trucking company, or shipping company.

I hope someday to win Powerball or Mega Millions so that I can start a company dedicated to perfume-free household basics. That are guaranteed to still be perfume-free on delivery.

0 - https://www.drsteinemann.com/faqs.html

1 - https://dynamics.org/Altenberg/CURRENT_AFFAIRS/CHINA_PLASTIC...

>That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

I am dealing with it, thanks, by fighting against it.

>I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

Because as we all know, forcing you to use the abusive copyright laundering slop machine is exactly morally equivalent to not getting arbitrary cheques in the mail.

>In reality, companies try their best to build products that make money, and they compete with each other to do so.

In the Atlas Shrugged cinematic universe, maybe. Now, companies try to extract as much as they can by doing as little as possible. Who was Google competing with for their AI summary, when it's so laughably bad, and the only people who want it are the people whose paycheques depend on it, or people who want engagement on LinkedIn?

>The free market exists because it does a better job than any other system at giving consumers what they want.

Nobody wants this!

>Because if you start down that path, pretty soon they're going to be interfering with the things you like.

I mean, they're doing that, too, and people like you look down your nose and tell me to take that abuse as well. So no, I'm not going to sit idly by and watch these parasites ruin what shreds of humanity I have left.

>> The free market exists because it does a better job than any other system at giving consumers what they want.

> Nobody wants this!

OK, well if you don't believe in the free market then sure.

Good luck seeing how well state ownership manages the economy and if it does a better job at delivering the software features you want, or even of putting food on your table. Because the entire history of the twentieth century says you're not going to like it.

What is so terrifying about exerting democratic control over software critical to exist in society?

> over software critical to exist in society?

I don't know what that means grammatically.

But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

The answer is, because it infringes on freedom. As long as these AI features aren't harming anyone -- if your only complaint is you find their presence annoying, in a product you have a free choice in using or not using -- then there's no democratic justification for passing laws against them. Democratic rights take precedence.

This is the argument against all consumer protection as well as things like health codes, right?

Nobody is FORCING you to go to that restaurant so it's antidemocracy to take away their freedom to not wash their hands when they cook?

Please see the part of my comment where I say as long as it's not harming anyone.

> As long as these AI features aren't harming anyone

Why do you say this? They are clearly harming the privacy of people. Or don't you believe in privacy as a right? A lot of people do - democratically.

If you can show it's harming privacy, then regulate the privacy. That's legitimate. But I assume you're talking about AI training, not feature usage.

Trying to regulate whether an end-user feature is available just because you don't "like" AI creep is no different from trying to regulate that user interfaces ought to use flat design rather than 3D effects like buttons with shadows. It would be an illegitimate use of government power.

Making sure people have the option not to listen to your "speech" is not control over people's free speech.

It absolutely is.

When I buy a book, I don't want the government deciding in advance which paragraphs should be included, and which paragraphs people "shouldn't have to listen to". So I don't want it doing that with software either. It's the same thing.

You don't have to buy that book in the first place. The same way you don't have to use a piece of software.

Newsflash

If voters Democratically decide to do something, that's democracy at work.

"Newsflash", the entire point of constitutions that enumerate rights is that fundamental rights and freedoms may not be abridged even by majority decision.

If a Supreme Court strikes down a majority-passed law limiting free speech guaranteed by the Constitution, that's democracy at work.

If they can't be abridged, then why do we have amendments?

And no, that would be the courts at work, which may or may not be beholden to the whims of other political figures.

It takes more than majority vote to add a new amendment.

Go ahead and try, but I don't think you'll find that an amendment to restrict people's freedoms is going to be very popular. Because it will be seen as anti-democratic.

I mean you said 60 percent yourself, that would be a majority decision, and a democratic one.

I'm not sure what point you're trying to make here.

Voters restrict their own freedoms all the time. Hell, my state recently passed a law preventing Ranked Choice voting.

I'm not following you. I didn't say 60%? And 60% is a supermajority, not a majority. Which is a huge distinction. And US constitutional amendments require much stricter thresholds than that -- two thirds of Congress and three quarters of states. That's a gigantic bar.

Yes voters try to restrict their own freedoms all the time. We have constitutions with rights to block them from doing that in fundamental ways. That's what protection from tyranny of the majority is all about. Just because you have a majority doesn't mean you're allowed to take away rights. That's a fundamental principle of democracy. Democracy isn't just majority rule -- it's the protection of rights as well.

> But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

What an asinine comparison lol

> What an asinine comparison lol

https://news.ycombinator.com/newsguidelines.html

Bruh learn to take responsibility for your behavior

"Bruh", please read the guidelines. Your comments are completely inappropriate for HN.

You're trying to make it sound like a corporation's right to force AI on us is equivalent to an individual's right to speech, which is idiotic on its face. But I'd also point out that speech is regulated in the US, so you're still not making the point you think you're making.

And as far as I'm concerned, as long as Google and Apple have a monopoly on smartphone software, they should be regulated into the ground. Consumers have no alternatives, especially if they have a job.

It's not "idiotic on its face" and that's not appropriate for HN. Please see the guidelines:

https://news.ycombinator.com/newsguidelines.html

Code and software are very much forms of speech in a legal sense.

And free speech is regulated in cases of harm, like violent threats or libel. But there's no harm here in any legal sense. People are just unhappy with the product UX -- that there are buttons and areas dedicated to AI features.

Companies should absolutely have the freedom to build the products they want as long as there's no actual harm. If you merely don't like a UX, use a competing product. If you don't like the UX of any product, then tough. Products aren't usually perfectly what you want, and that's OK.

You're completely ignoring the most important point I raised, which is that I can't use a competing product. I can't stop using Microsoft, Google, Meta, or Apple products and still be a part of my industry or US society.

So what's your argument?

You're not being forced to use the AI features. If you don't want to use them, don't use them. There's zero antitrust or anticompetitive issue here.

Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government to pass a law requiring stores to carry blenders in bright orange. But that's what you're asking for, for the government to determine which features software products have.

> You're not being forced to use the AI features. If you don't want to use them, don't use them

You can't turn them off in many products, and Microsoft's and Google's roadmaps both say that they're going to disable turning them off, starting with using existing telemetry for AI training.

> Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

You're just continuing to ignore that all of this is based on their market dominance. There are literally two options for smartphone operating systems. For something that's vital to modern life, that's unacceptable and gives users no choice.

If a company gets to enjoy a near-monopoly status, it has to be regulated to prevent abuse of its power. There's a huge amount of precedent for this in industries like telecom.

> If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government should pass a law requiring stores to carry blenders in bright orange

Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?

> Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?

Do you really not see that there is no difference?

Either the government starts dictating product design or it doesn't.

I don't want a world where the government decides which features software makers include or turn on or off by default. Whether there are 20 companies competing in a space or mainly 2.

Don't you see where that leads? Suddenly it's dictating encryption and inserting backdoors. Suddenly it starts allowing Truth Social to build new features and removing features on Twitter.

This is a bigger issue than you seem to be acknowledging. The freedom to create the software you want, provided it's not causing actual harm, is as important to preserve as the freedom to write the books or blog posts you want.

If this had something to do with antitrust then the fact that there are only two major phone platforms would be relevant. But the fact that both platforms are implementing LLM features is not anticompetitive. To the contrary, it's competitive even if you personally don't like it. It's literally no different from them both supporting 1,000 other features in common.

I uninstalled the Gemini app and disabled the Google app. They seem to be heavily linked, so removing it may do the trick. As a practice I don't use any Google apps if I can find a good replacement, so I'm not sure if Messages is impacted.
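For anyone who wants to script that: a minimal sketch driving adb from Python. The package name is an assumption on my part (Gemini's has changed between builds), so verify it first with 'adb shell pm list packages':

  import subprocess

  # Hypothetical package name -- verify on your own device before running.
  PACKAGES = [
      "com.google.android.apps.bard",  # the Gemini app, at least in some builds
  ]

  for pkg in PACKAGES:
      # "pm uninstall --user 0" removes the app for the current user without
      # root; "adb shell cmd package install-existing <pkg>" can restore it.
      result = subprocess.run(
          ["adb", "shell", "pm", "uninstall", "--user", "0", pkg],
          capture_output=True, text=True,
      )
      print(pkg, (result.stdout or result.stderr).strip())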
> And let’s be clear: We don't need AGI (Artificial General Intelligence). We don't need a digital god. We just need software that works.

We (You and I) don't. Shareholders absolutely need it for that line to go further up. They love the idea of LLMs, AI, and AGI for the sole reason that it will help them reduce the cost of labour massively. Simple as that.

> I will use what creates value for me. I will not buy anything that is of no use to me.

If only everyone was thinking this way. So many around these parts have no self-preservation tendencies at all.

I don't think any of the people at the top actually believe world's-most-average-answer generator is a path that leads us to AGI. It's just a marketing boogeyman and a handy excuse to remove any remnants of agency that the workforce currently has.
Nobody is using tools that create no value for them, that’s an absurd argument. Many are overclocking their AI use because of their self preservation instinct.

A bit more nuanced: nobody is using tools that have a perceived negative value.

People pushing AI desperately want to convince us the value is positive. 10x productivity! Fire all your humans!

In reality, depending on the use-case, the value is much smaller. And it can be negative considering the total value over the long term (incomprehensible, shoddy foundations of large or long-running projects).

It's a little alarming to hear from people whose managers have actually told them they expect 10x as much output. How could any competent person, no matter how optimistic, think it works that way? I guess the answer is in my question.

I'm thankful at the moment that I work for a boring company. I want to leave for something more interesting, but at least I can say that our management isn't blowing a bunch of money on AI and trying to use it to get rid of expensive developers. Heck, they won't even pay for Claude Code. Tightwads.

Yeah, that’s right. I’m reacting to the author’s statement (I won’t use tools that have negative value) to assert that it’s essentially a vapid statement. I wish the author made arguments such as yours.

Have you ever met a smoker?

What I also dislike about the AI is that it promotes a mainframe-like development workflow. Schedule your computation, pay for the usage, etc. Any chance this particular trend stops or reverses? Are we ever going to have local AI that is in practice comparable and sufficient?

You don’t need it for development. Neither locally nor on the mainframe. Money saved.

I don’t need google either, I could read docs or source code and figure things out myself. I do need it if I want to skip parts of coding that are not my focus at the moment - AI is arguably similar.

In addition to the annoyances mentioned, the pushing of AI may be leading to a massive waste of money and resources. I'm sure if, instead of AI being shoved in, want it or not, they said pay $1 if you want AI, the number of data centers needed would be reduced dramatically.
>> the pushing of AI may be leading to a massive waste of money and resources

And massive amounts of energy to run these newfangled AI data centers. Not sure if you lumped that in with "resources", but yes, we're already seeing it:

A typical AI data center uses as much electricity as 100,000 households, and the largest under development will consume 20 times more, according to the International Energy Agency (IEA). They also suck up billions of gallons of water for systems to keep all that computer hardware cool.

https://www.npr.org/2025/10/14/nx-s1-5565147/google-ai-data-...
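Back-of-envelope on those figures, assuming an average US household draws about 1.2 kW (roughly 10,500 kWh/year; that household number is my assumption, the ratios are the IEA's):

  # Rough arithmetic only; the household draw is an assumption.
  household_kw = 1.2                             # avg US household, ~10,500 kWh/yr
  typical_dc_mw = 100_000 * household_kw / 1000  # "as much as 100,000 households"
  largest_dc_mw = 20 * typical_dc_mw             # "20 times more"

  print(f"typical AI data center: ~{typical_dc_mw:.0f} MW")            # ~120 MW
  print(f"largest under development: ~{largest_dc_mw / 1000:.1f} GW")  # ~2.4 GW

So the largest builds are genuinely grid-scale projects, not server rooms.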

Zero attractor would likely be a safe bet.

VSCode feels like it’s in the “brand withdrawal” phase of its lifespan. I’ve turned off the sneakily named “Chat” and yet it still shows the chat sometimes when I toggle the bottom bar visibility.

> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.

I’m not sure I have an idea of what this might look like. Do they want money? What might that model look like? Do they want credit? How would that be handled? Do they want to be consulted? How does that get managed?

It probably starts with a reexamination of copyright law, which has always been a pragmatic rather than a principled system, but has not noticeably changed since the digital revolution.

Copyright is meant to encourage more publication by providing publishers with (temporary) control over their products after having released them to the public. Once the copyright expires, the work enters the public domain, which is really the end goal of copyright law. If publishers start to feel like LLMs are undermining that control, they might publish less original work, and therefore less stuff will eventually enter the public domain, which is considered bad for society. We're already seeing some effects of this as traffic (and ad revenue) to various websites has fallen significantly in the wake of LLM proliferation, which lowers the incentive to publish anything original in the first place.

Anyway, I'm not sure how best to adapt copyright law to this new world. Some people have thought about it though: https://en.wikipedia.org/wiki/Copyright_alternatives

Just like one of crypto’s biggest real world uses ended up being scams, LLMs are tools for bypassing copyright and enabling plagiarism with plausible deniability.

The Copilot button that comes on new laptops is the Darkest Pattern I have ever seen. UI exploitation that has jumped the software / hardware gap.

A student will be showing me something on their laptop, their thumb accidentally grazes it because it's larger than the modifier keys and positioned so this happens as often as possible. The computer stops all other activity and shifts all focus to the Copilot window. Unprompted, the student always says something like "God, I hate that so much."

If it was so useful they wouldn't have to trick you into using it.

> Unprompted, the student always says something like "God, I hate that so much."

... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)

> If it was so useful they wouldn't have to trick you into using it.

They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.

And the telemetry doesn't lie! Look how many people are clicking that button! KPIs go brrrrrrr
What's sad is how real this is.

BTW, the context is that one thing I teach is 3d modeling software, so the students are following my instructions to enter keyboard commands. It's usually Rhino3d where using the spacebar to repeat the last command is common.

> ... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)

Just as a generalization, the dozen or so times this has happened this semester the pop-up is accompanied by an "ugh" then after the window pops up from the taskbar the student immediately clicks back into the program we're using. It seems like they're used to dealing with it already. I haven't seen any voice interaction.

I mean, the statistics say the students use AI plenty - they just seem annoyed by the interruption. Which I can agree with.

> They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.

Exactly.

I'm having a hard time believing any of this, and am tempted to think this might be in bad faith. It's true it's a bit ambitious on their part that they replaced the right-side key, but it isn't larger than normal and it's not positioned any differently than normal keys. Working with hundreds of laptops and humans, several of them ham-fisted, on a daily basis, I've not seen this at all.

Further, a dark pattern is where you are led towards a specific outcome but are pulled insidiously towards another. This doesn't really fall into that definition.

> We don't need to do it this quarter just because some startup has to do an earnings call.

What startups are doing earnings calls?

Well, not exactly earnings calls in the classical sense, but haven't you heard these startups announcing how they have scaled to $100 million in 3 months, etc.? Maybe revenue calls every quarter.

All the public ones?

I'd say at that point they are no longer startups, they've already started up.

Lots of businesses like to claim being a "startup" as it brings connotations of innovation, dynamism, coolness, being the "next big thing" etc. There are many senses of the word, and it can be used in different ways (e.g. I work at a small business which has some elements of startup culture, and it's not an incorrect way to give people a sense of what it's like here - but we're definitely well established) but I think often being one of the "cool kids" is part of the motivation.

I would say in the AI age almost every business is a startup as per PG's definition [https://paulgraham.com/growth.html]
One issue might just be money. "Forcing" us to buy new laptops every 3 years has to be planned well enough in advance for the new OS and hardware to be ready, but meanwhile, it may have occurred to a lot of people that hanging onto the old stuff for a few more years might make the most sense right now.

Also, even many home users may be finding that they interact less and less with the "platform" now that everything including MS Office runs from a browser. I can barely remember when the differences between Windows and Linux were even relevant to my personal computer use. This was necessitated by having to find a good way to accommodate Windows, MacOS, iOS, and Android all at once.

Yep. My personal laptop is 12 years old and my work laptop is 6. A replacement battery, some extra RAM, and a replacement fan (kind of hard to get) for my personal laptop a few years ago and it still does everything I want it to do.

I cracked the screen on my work laptop last year, but IT set me up with a replacement screen. It's so much nicer having discrete buttons than a clickable trackpad, so I skipped on an upgrade. Still does everything I need it to do for work (including working with Windows 11).

And the vast majority of things I do on either laptop involves a web browser.

That's an interesting take. We really don't need TPUs and GPUs to run an OS, but that doesn't mean they won't try to sell it to us as a necessity.
> We don't need AGI (Artificial General Intelligence). We don't need a digital god. We just need software that works.

Yeah, I think the days of working software are over (at least deterministically)

"But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!"

Right... none of them are saying that. They could probably use more GPUs, considering the price of GPUs and memory are skyrocketing and the supply chain can't keep up. It's about experimentation: they need real users and real feedback to know how to improve the current generation of models, and to figure out how to monetise them.

The author and more than half of the comments here are hallucinating reasons to be angry about AI. Ironic, really.
In what universe is having things that you do not want shoved on you an invalid reason to be angry?

The comment I’m replying to is an example of one hallucination. Specifically, that AI is being pushed because companies have too many GPUs when in reality they have too few. There are several other hallucinations in this thread.

This will need a separate blog post. But when you give something away for free, you will run out of that resource. So yes, companies have too few GPUs to give their services away for free, but too many GPUs for their paid services.

This may be true, but it doesn't change the fact that the idea that AI is being pushed due to a GPU glut is pure hallucination.

Just to be clear, are you asserting that every opinion in this thread that you don't agree with is due to the poster hallucinating, or only specific ones?

Do you have any evidence or well established theory to back up this rather extraordinary claim?

Because if you are honestly positing that numerous people around the world are literally hallucinating despite (statistically) not being under medical supervision, presumably continuing to drive, work, and make decisions, that would be a pretty urgent global health phenomenon that you really should be chasing up. And at some point, the authorities best placed to deal with this hitherto unseen mass incapacitation might reasonably ask: what are the chances that multiple unrelated people around the world are experiencing such localised, hugely specific breaks from reality causing them to express reasonably common opinions on an internet forum, rather than the inconsistency being on the end of this one person who doesn't agree with them?

Add "patronizing proponents" to the pile.
The worst usage of AI is “content dilution” where you take a few bullet points and generate 5 paragraphs of nauseating slop. These days, I would gladly take badly written content from humans filled with grammatical errors and spelling mistakes over that.
  • jjav
  • ·
  • 1 day ago
  • ·
  • [ - ]
> generate 5 paragraphs of nauseating slop

Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.

The amount of waste is quite staggering in this back and forth game.

> Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.

Which more often than not will lose or distort the original intention behind the first 5 bullet points.

Which is why I avoid using LLMs for writing.

It's pretty awesome that we now have nondeterministic .Zip

/dev/yolo

We have non-deterministic compression at home

There's a humorous vignette in season 2 of "A Man on the Inside":

Our heroes are in the office of a tech billionaire who says "See that coffee machine? Just speak what you want. It can make any coffee drink."

So one character says "Half caf latte" and the machine does nothing except open a small drawer.

"You have to spit in the drawer so it can collect your DNA--then it will make your coffee" the billionaire says.

This pretty much sums up the whole tech scene today. Any good engineer could build that coffee machine, but no VC would fund her company without the spit drawer.

I don’t think we’re at the “companies bought too many GPUs” stage yet. My understanding is they still can’t get enough GPUs, or data centers to put them in, or power to run them. Most companies don’t even own them, they rent from the clouds.

We are, however, at the “we need an AI strategy” stage, so execs will throw anything and everything at the wall to see what sticks.

>“companies bought too many GPUs”

Nuance.

They have insufficient GPUs for the amount of training going on.

If you assume there's a plateau where the benefits of constant training no longer outweigh the costs, then they probably have too many GPUs.

The question is how far away is that plateau.
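To make that concrete with a toy model (purely illustrative numbers, assuming benefit grows roughly logarithmically with training compute while cost grows linearly):

  # Toy model: value ~ 100 * ln(n) "units"; each unit of compute costs 0.5.
  # The marginal benefit of the n-th unit is ~100/n; marginal cost is flat.
  value_per_log, cost_per_unit = 100.0, 0.5

  n = 1
  while value_per_log / n > cost_per_unit:  # keep buying while it pays off
      n += 1
  print(f"training stops paying off at ~{n} compute units")  # ~200 here

The whole argument is about what the real-world equivalents of those two constants are, and nobody knows that yet.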

They can't get enough GPUs right now, with VCs pumping money into them, but that can very quickly turn into "we're out of money, what do we do with all these GPUs?"
> My understanding is they still can’t get enough GPUs

Enough for what? The adoption is slowing down fast, people are not willing to pay for a chatbox and a 10% better model won't change that.

I think the market forces that are supposed to constrain these companies are broken.

Users don't have many other options to switch to. Even if they did, the b2b/advertising revenue they get makes up for any losses they may take on the consumer side.

Amazon's Price History feature certainly doesn't need to open their AI assistant, but in addition to the graph I came for, I get a little summary of the graph. I really hope they aren't using an LLM for that when all it's doing is telling me it's the lowest price in 30 days.
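For what it's worth, that summary is a few lines of ordinary code, no model required; a minimal sketch with a hypothetical 30-day price list:

  # Hypothetical last-30-days price points for one product, newest last.
  prices = [34.99, 29.99, 31.49, 27.99]

  current = prices[-1]
  if current <= min(prices):
      print(f"${current:.2f} is the lowest price in the last 30 days.")
  else:
      print(f"${current:.2f}; the 30-day low was ${min(prices):.2f}.")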
That's the kind of lazy bullshit idea that, to me, exemplifies the AI hype slop era we're in. The point of a chart is to communicate visually. If the chart isn't clear without a supplemental explanation, why is it there?

If user research indicates your chart isn't clear enough, then improve the chart. But what are the odds they did any user research? They probably just ran an A/B test and saw number go up because of the novelty factor.

If AI was as great as they pretend, there would be no need to force it on us.

It's the classic "You don't need to tell a child that something is fun. The more times you tell a child something is fun, the more they will doubt you. It's easy to tell if something is fun, because it's fun."

It's the only game in town, and reasonably expected to be close to the last.

What do you mean by that?
A few days ago I took a photo of some water pipes and asked ChatGPT to review it.

Unbeknownst to me, there was an issue. It pointed out multiple signs of slow leaks and then described what I should do to test and repair it easily.

I see a lot of negative energy about the 'AI' tech we have today to the point where you will get mass downvoted for saying something positive.

If you did a writeup of this case, you could change some minds.

Did you follow through with the full repair? How long did it take? What was the materials cost?

A knowledgeable friend or family member could have helped with that too. AI is helpful, it is just not trillion dollars helpful.

A tip: degoogle, demicrosoft, defacebook (and maybe also deapple)

The AI push is not just hype, it’s a scramble for cash. For now the only game plan is to scale up massively with a giant investment gamble, to try to get beyond the obvious limitations that threaten to burst the bubble.

Plus the general economic outlook is negative; AI is the bright spot. They are striving to keep growth up amid downward pressure.

I think the problem is, the folks in charge feel like they don't have a choice.

They spent a ton of money and/or they see everyone's LinkedIn posts or fantastic news stories by someone selling BS and they're afraid to say the emperor has no clothes.

As usual, those who spent too much money on it use it as a way to show their investors they didn't waste all that money and to get them to spend even more. That's why it's so messed up.

All that Zoom and Skype during lockdown was to train AI. Those video calls weren't free.

I think the reason AI is being pushed everywhere right now is simply that companies want to experiment with it.

They want to explore what is possible and what sticks with users.

The best way to do this is to just push it in their apps as many places as possible since 1. you get a nice list of real world problems to try and solve. 2. You have more pressure on devs to actually make something that works because it is going into production. 3. You get feedback from millions of users.

Also, by working heavily with their AI, they will discover areas that can be improved and thus make the AI itself better.

They don't care that it is annoying, unhelpful or uneconomical because the purpose is experimentation.

Or they could beta test them first with users who opt in, and work out that most users hate the features there, instead of tacking AI nonsense onto everything and beta testing crappy features nobody wants on real users?

For the record, I'm not saying that this is a good thing, or that I agree with it. I also find it very annoying.

Most of these companies have an advanced-features user group. Leave it there until it's desired.
>I will not allow AI to be pushed down my throat just to justify your bad investment.

Unfortunately these reckless investments are likely to cause massive collateral damage to us 'little people'. But I'm sure the billionaires will be just fine.

This seems like a fairly reasoned screed. I can't find much to disagree with.

I am, personally, quite optimistic about the potential for "AI," but there's still plenty of warts.

Just a few minutes ago, I was going through a SwiftUI debugging session with ChatGPT. SwiftUI View problems are notoriously difficult to debug, and the LLM got to the "let's try circling twice widdershins" stage. At that point, I re-engaged my brain, and figured out a much simpler solution than the one proposed.

However, it gave me the necessary starting point to figure it out.

I love that the picture in the article is AI-generated
> I hear the complaints from the tech giants already: "But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!"

> Not my problem.

I don't know about the author specifically, but the bubble popping is a very bad thing for many people. People keep saying that this bubble isn't so bad because it's concentrated on the balance sheets of very deep-pocketed tech companies who can survive the crash. I think that is basically true, but a lot is riding on the stock valuations of these big tech companies and lots of bad stuff will happen when these crash. It's obviously bad for the people holding these stocks, but these tech stocks are so big that there is a real risk of widespread contagion.

The only thing worse than popping the AI bubble is trying to inflate it even larger with a government bailout. The longer the government tries to protect the bubble, the more extreme and destructive the level of capital misallocation is going to become. They should have popped it years ago before we were trying to build nuclear reactors to keep our internet chatbots from taking down the power grid.
As I click into this thread from the front page, "Writing a Good Claude.md" appears immediately above it. Sigh.

What’s the problem there exactly? There’s nothing necessarily diametrically opposed in the two articles. They’re almost complementary, really.
"How to voluntarily use claude to get what you want"

"I dont like being forced to use AI products"

Whats the issue?

HN might be the one pushing AI down their users' throats, with so many top stories all day, every day.
Assume it is being ‘done wrong’, not due to the usual trifecta of greed/evil/stupidity, but due to socio-economic pressure that demands this approach.

AI really needs R&D time, where we first figure out what it’s good for and how best to exploit it.

But R&D for SW is dead. Users proved to be super-resilient to buggy or mis-matched SW. They adapt. ‘Good-enough’ often doesn’t look it. Private equity sez throw EVERYTHING at the wall and chase what sticks…

I mean, we're in the upslope stage of the hype/bubble cycle. Once this pops and 80% of invested people lose their shirts, the long-term adoption cycle will play out much more reasonably, more like OP wishes.
Also don’t shove it up our

yeah, we're f&%^ed

Fired?

Your regex parser is broken if your answer is a five char string.

it's probably the ^
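(The caret joke is technically sound, for what it's worth: an unescaped "^" in the middle of a pattern is a start-of-string anchor, so a naive filter built from the raw masked string can never match. In Python:)

  import re

  masked = "f&%^ed"
  text = "yeah, we're f&%^ed"

  # Naive: the raw string is used as a pattern, so "^" acts as an anchor
  # mid-pattern and the search can never succeed.
  print(re.search(masked, text))                     # None

  # Escaped: re.escape() makes every metacharacter literal, so it matches.
  print(re.search(re.escape(masked), text).group())  # f&%^ed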
> Right now, the frantic pace of deployment isn't about utility; it's about liquidity. It’s being shoved down our throats because some billionaires need to make some more billions before they die.

> We see the hallucinations. We see the errors. Let's pick the things which work and slowly integrate it into our lives. We don't need to do it this quarter just because some startup has to do an earnings call.

Citation needed?

To me this is a contentless rant. AI is about old billionaires getting richer before they die? It’s at least a lot more than that.

Seems like some people tried AI in 2023, got negative affinity for it, then just never update with new information. In my personal use, hallucinations are way down, and it’s just getting more and more useful.

If AI was amazing you wouldn’t need to push it, people would demand it!

You need to push slop, because people don’t really want it.

They have already proven they absolutely COULD tag everything with evidence of how much came from an original author/artist. Google's image generator has that hidden signature feature that can disclose that something is AI-generated, but they don't go the step further back to the real source. Just "it came from AI", aheeyuk, pay us.
AI can be thought of as a parasitic lifeform: it feeds on truth and excretes slop. We know AI is no good for us, but those pushing it have a nefarious plan: make people dependent on it, so we can't get rid of this parasite without destroying our society.
This article has basically nothing to say, and all of the comments here on HN are just projecting their priors into the vacuum.

It is one more voice against the frantic nature of this topic and the corporations desperate to push for it. It is no different than one more post about Claude.md making it to the front page.

It spends no time establishing the facts of franticness and desperation.

It doesn't need to. The author is voicing their opinion on the matter and is not attempting to conduct a scientific study and that is perfectly acceptable.
Try watching a televised American football game. So many ads for AI. Of course ads appeal most to the gullible.

Have you seen the Workday ads featuring all the washed-up rock stars? They're pushing managing people AND AI agents - using AI. Sigh...

This is just another example of getting the actual problem wrong.

When he says "billionaires making more billions" it's really off the mark. These people are not forcing AI down our throats to make billions.

They are doing it so they can win.

Winning means victory in a zero sum game. This is a game that is zero sum because the people that play it think that way. However, the point is not to make money. That's a side effect.

They want to win so the other guys don't. That means power, growth, prestige, and winning. Winning just to win.

Once people start to understand that this is the prime directive for the elite in the tech business, the easier it is for everyone to defend against it.

The way you defend against it is to make it non-zero sum. Spread the ideas out. Give people choices. Act more like Tim Berners Lee and less like Zuck. This will mean less money, sure, but more to the point, it deprives anyone of being "the winner". We should all celebrate any moves that take power away from the few and redistribute it to many. Money will be made in the process, but that's okay.

Oh how I'd like the AI bubble to pop already when ROIs don't justify the cost. I like AI for things like getting recommendations or classifying images. And yet execs feel the need to force every possible use case down our throats even if they don't make any sense or make quality worse.

E.g. Programming - and I do judge not only those who use AI to code but execs who force people to use AI to code. Sorry, I'd like to know how my code works. Sorry, you're not an efficient worker, you're just making yourself dumber and churning out garbage software. It will be a competitive advantage for me when slop programmers don't know how to do anything and I can actually solve problems. Silicon Valley tech utopians cannot convince me otherwise. I don't think poorly socialized dweebs know much about anything other than their AI girlfriends providing them with a simulation of what it feels like to not be lonely.

> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.

I support this, but the Smarter Than Me types say it's impossible. It's not possible to track down an adequate number of copyright holders, much less get their permission, much less afford to pay them, for the number of works required in order to get the LLM to achieve "liftoff".

I would think that as I use Claude for coding, it would work just as well if it didn't suck down the last three years of NYT articles as if it did. There's a vast amount of content that is in the public domain, and if you're ChatGPT you can make deals with a bunch of big publishers to get more modern content too. But that's my know-nothing take.

Maybe the issue is more about the image content. Screw the image content (and definitely the music content, Spotify pushing this slop is immensely offensive), pay the artists. My code OTOH is open source, MIT licensed. It's not art at all. Go get it (though throw me a few thousand bucks every year because you want to do the right thing).

It's not 'impossible', it's economically unviable. There's a difference. We really should be mandating that companies that don't pay fair market prices for the data they use to train their models must open source everything as reparation to humanity.
> i support this but the Smarter Than Me types say it's impossible.

Adobe has been training models on only licensed media. I'm not sure if it's all their models or just some of them, and I haven't seen the results, but someone's doing it.

How ironic it would be for Adobe, the literal 500-ton dreadnought of copyright, patent, and intellectual property ownership, to be sucking down training content from unlicensed sources. At least something is not entirely wrong in the world today.
Why do AI companies get to do whatever they want in order to meet their business goals ("liftoff")?
It is not an axiom that LLMs even have the right to achieve "liftoff". They are obvious instruments of plagiarism that often just reorder sentences so as not to get caught. They can be forbidden.

If you don't mind the oligarchs stealing your code, that is your prerogative. Many other people do mind.

>> It is time to do AI the right way.

Some "No True Scotsman"-flavored cope.

And this is the new, fashionable, easy way to insult someone. Just say their work sounds like AI, and job done.
In this case they're not doing themselves any favors by leading their posts with obviously AI generated images. That just primes the reader to suspect the author is slopping it up before they even start reading.
You are right. And the fact that "your work sounds like AI" is an insult says everything you need to know about what AI generates :)
Do you want AI pushed down your throat?

Yeah, my Windows rebooted and then bam, this web page opened itself.
And the subscription fee for your browser went up 40% because adding anti-AI content is a value add. And no, you can't opt out.

The article renders well even if you block JavaScript, though.
GP's point is that the page attempts to run unnecessary JS and that this is objectionable.
You're implying they're the same in some way, but you haven't explained why.
Disable your script execution and cookie storage for the OP site and then attempt to view it. The page and content load fine; the host injecting coercive messages to enable tracking cookies and scripts comes from the same mindset by which AI has been integrated into everything.

You either comply or face unnecessary roadblocks. OP has complied by sharing the link. My right to choose tracking cookies and script execution is parallel to my right not to utilize, or be forced to utilize, AI. This issue has to be addressed universally, as it's not simply "no AI" on the web; it's freedom to use the web, or compliance with violation of that freedom.

Hyperbole much? These are two different issues, you can of course write your own blog post about that.

Benefit of the doubt: this person wants to get their word out, and tracking down a pristine, pure, sparkling blogging engine takes more energy than they had.

Can you explain how JavaScript and cookies are similar to LLMs?
You have the ability to turn that off…

I think the author cares about both.

Did that single sentence in this relatively short, 36 sentence post really make you flip the table as hard as you imply? That's surprising if so.

On the contrary, cannibalizing the commercial viability of original content creation is possibly the most short-sighted aspect of the current AI push. That isn't 'political', it's just a relatively conservative assessment of the content market.

Ok fair enough, feel more of the AGI.

Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.

I’m not opposed to any of the above, necessarily. I’ve just always been the type to want to adopt things as they are needed and bring demonstrable value to myself or my employer and not a moment before. That is the problem that capital has tried to solve through “founder mode” startups and exploitative business models: “It doesn’t matter whether you need it or not, what matters is that we’re forcing you to pay for it so we get our returns.”

The difference is the level of investment and consumer application for each service - most customers would never be able to tell you what an ERP is.
How could you say cloud is overhyped when we're at a point where running physical machines is a rare and specialized skill and companies couldn't run their own hardware anymore?

> Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.
They said the exact same thing when electricity was invented too.

Gas companies said electricity was a fad. Some doctors said electric light harms the eyes. It's too expensive for practical use. Need too much infrastructure investment. AC will kill people with shocks. Electrification will destroy jobs, said gas lamp unions. It's unnatural, said some clergy. And on and on and on...

Thing is though, most folks don't recall that electricity itself wasn't the fad, but some methods of generation were. Same with vehicles, as everyone tried every fuel until we gradually found the ones that worked best in each engine or vehicle.

As I'm seeing from the multitude of folks scrutinizing my clearly exhaustive list of every single technology fad ever (sarcasm, for those unclear), they're missing the forest for the trees and therefore likely to hit every single rung when they fall down a ladder. I'm not saying X or Y wasn't valuable, or wasn't useful, only that the way they're shoved into every single orifice by force is neither of those things to those taken advantage of in the quest for utility.

The point isn't that advancements are bad, the point is the way we force them down the throats of everyone, everywhere, all the time creates a waste of skill and capital for the returns of a very select group of people. The point is that fads are bad, and we should let entities (companies and people) find use in what's offered naturally rather than force-feeding garbage to everyone to see what's actually palatable.

Electric light indeed harms the eyes.
This post is just one multipurpose category error.
What an utterly bizarre comparison.

Even the block chain comparison isn't valid because it didn't consist of an "AI" button getting crammed into every single product and website, turned into a popover etc.

There are nuances to the examples.

For example, I'm not a big fan of blockchain. In fact, I think crypto is just 99% scam.

But big data led to machine learning and LLMs, right? Cloud led to cheaper software with faster deploy times, right? In fact, cloud also means many browser-based apps replacing old Windows apps.

None of these were fads. They are still tremendously useful today.

Boomers in the manager class love AI because it sells the promise of what they've longed for for decades: a perfect servant that produces value with no salary, no need for breaks, no pushback, no workers comp suits, etc.

The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models are veering into not-sucking territory. Looked at from a distance, that makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.

There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working or will always be a scam, but the massive capex on AI now is predicated on the eventual turning of the fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human. I don't see how the improvements we've seen over the past few years in AI aren't surely heading in that direction.

> recently the best AI models are veering into not sucking territory

I agree with your assessment.

I find it absolutely wild that 'it almost doesn't entirely suck, if you squint' is suddenly an acceptable benchmark for a technology to be unleashed upon the public.

We have standards for cars, speakers, clothing, furniture, make up, even literature. Someone can't just type up a few pages of dross and put them through 100 letterboxes without being liable for littering and nuisance. The EU and UK don't allow someone to sell phones with a pre-installed app that almost performs a function that some users might theoretically want. The public domain has quality standards.

Or rather, it had quality standards. But it's apparently legal to put semi-functioning data-collectors in technologies where nobody asked for them, so why isn't it legal to sell chairs that collapse unless you hold them a specific way, clothes that don't actually function as clothes but could be used to make actual clothes by a competent tailor, or headphones that can be coaxed into sporadically producing sound for minutes at a time?

Either something works to a professional standard or it doesn't. If it doesn't, it is (or at least was) not legal to include it in consumer products.

This is why people are more angry than is justified by a single unreliable program. I don't care that much whether LLMs perform the functions that are advertised (and they don't, half the time). I care that after many decades of living in a first world country with consumer protection and minimum standards, all of that seems to have been washed away in the AI wave. When it recedes, we will be left paying first world prices for third world engineering, now that the acceptable quality standard for everything seems to have dropped to 'it can almost certainly be used for its intended purpose at least some of the time, by some people, with a little effort'.

It still kinda sucks though. You can make it work, but you can also easily end up wasting a huge amount of time trying to make it do something that it's just incapable of. And it's impossible to know upfront if it will work. It's more like gambling.

I personally think we have reached some kind of local maximum. I work 8 hours per day with Claude Code, so I'm very much aware of even subtle changes in the model. Taking into account how much money was thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving, but the results I'm getting are not. If I care about some work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows and often they say something like "I don't care much about the UI, so I let it just do its thing". I think this tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.

It's not a perfect servant by any means. And let's drop the generation game.
What examples of AI integrations annoy you? Because I have such a wonderful time randomly discovering AI integrations where they actually fit nicely:

1) marimo documentation has an ask button to quickly get some help, kind of like a way smarter RAG;

2) Postman has AI that can write little scripts that visualize responses however you want (for example, I turned a bunch of user ids into profile links so that I could visit all of them);

3) Grok button on each Twitter post is just amazing to quickly get into what post even references and talks about;

4) Google's AI Mode has saved me many clicks, and even just having Gemini quickly fetch when a certain TV show goes live and set a reminder is amazing.
> Grok button on each Twitter post is just amazing to quickly get into what post even references and talks about.

when Charlie Kirk was shot, and the video was posted to Twitter, people asked Grok to "fact-check" it...and Grok told them the videos were fake and Kirk was alive. [0]

Grok also spread misinformation about the identity of the shooter. [1]

> On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok's replies to X users' inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.

and that's just one particularly egregious event in a long string of problems, such as the MechaHitler thing. [2] and the Elon Musk piss-drinking thing. [3]

so if you're going to defend these "AI" integrations as being useful and helpful...I dunno, Grok is probably not a good example to point to.

0: https://www.engadget.com/ai/grok-claimed-the-charlie-kirk-as...

1: https://www.cbsnews.com/news/ai-false-claims-charlie-kirk-de...

2: https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...

3: https://www.404media.co/elon-musk-could-drink-piss-better-th...

On Windows: Notepad and Edge Developer Tools.

AI will with certainty increase the productivity of some percentage of people, and the rest will fall behind, perhaps dramatically. Effectiveness with AI can still be a grind beyond simple prompting. We are getting lots of expensive AI tools heavily subsidized right now; that may not always be the case.