Last week, they pushed an update that broke all of the features on the watch unless I agreed to allow Google to train their AI on my content.
I'm new to Android, so maybe I can somehow still preserve some privacy and have basic voice commands. But from what I saw, it required me to enable Gemini Apps Activity, with a wall of text I had to agree to, just to get a simple command to play some music to work.
https://support.google.com/gemini/community-guide/309961682/...
I might switch back to my iOS device, but what I'd really like to do is replace the Android OS on this Motorola with a community-oriented open source OS. Then I could start working on piping the mic audio to my own STT model and executing commands on the phone.
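That pipeline is genuinely within hobbyist reach. Here's a minimal sketch in Python, assuming the open source openai-whisper and sounddevice/soundfile packages and an MPRIS-capable media player controlled via playerctl; the command matching at the end is a made-up placeholder, not a real assistant:

    # Hypothetical sketch: record a few seconds of mic audio, transcribe it
    # locally with Whisper (no cloud round trip), and dispatch a command.
    import subprocess

    import sounddevice as sd
    import soundfile as sf
    import whisper

    SAMPLE_RATE = 16000  # Whisper expects 16 kHz mono audio

    def record(seconds: float = 4.0) -> str:
        """Record mic audio to a WAV file and return its path."""
        audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                       channels=1, dtype="float32")
        sd.wait()
        sf.write("command.wav", audio, SAMPLE_RATE)
        return "command.wav"

    model = whisper.load_model("base")  # runs entirely on-device
    text = model.transcribe(record())["text"].strip().lower()

    # Toy dispatch -- a real assistant would need fuzzier matching.
    if "play" in text and "music" in text:
        subprocess.run(["playerctl", "play"])  # assumes an MPRIS player
    else:
        print(f"Unrecognized command: {text!r}")

None of this phones home, which is the whole point.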
It's a step back to not be able to do it by voice, but if you're concerned enough about your privacy, stopping once or twice during a ride doesn't sound like the end of the world.
I'm not saying it's fine that Google took away functionality, but from a practical perspective, it seems like OP was acting like there's no other option available to change tracks. There is, and it's really not that inconvenient.
Microsoft pulled this same crap with Windows. Once they stop caring about their users, you've already lost; it's time to stop playing their game.
There are few things AI is truly very good at. Surveillance at scale is one of them. Given everything going on in the world these days it's worth considering.
So yeah, the software's EULA changed for the worse; that's the underlying issue.
Also, early attempts at dictation weren't considered AI; rather, machine learning and similar techniques were found to be useful, so they've been tossed into the AI bucket rather arbitrarily.
This reply demonstrates you don't understand the problem. Please don't contribute to the enshittification of everything by being an apologist for unethical behavior.
I don't think it's asking too much to not make my product worse after I buy it, and I think we need legislation to prevent companies from doing that. I'm not sure what that would look like, and the government is bought and paid for by those same companies, so it's unlikely we will see that. But we do need it.
How can such a law be written, and how could a lawyer litigate it in court? The way you've phrased it is very subjective. What objective measure could a court use to determine the percentage of quality drop in a product over time?
Do you want to work on Oracle Database [1]?
By the way, I also don't want the software I use to suffer from quality drop due to new forced "features". I just don't think the way suggested here works well.
I'm aware people are annoyed with big UI overhauls that seemingly do nothing, but I don't think you understand what it would take to support what you wrote. You're describing something that gets exponentially harder to maintain as a product ages. It's completely prohibitive to small businesses. How many UI changes do you think are made in a year for a young product? One that is constantly getting calls from clients to add this or that? Should a company support 100 different versions of their app?
I understand a small handful of companies occasionally allow you to use old UI, but those are cases where the functionality hasn't changed much. If you were to actually mandate this, it would make a lot of UIs worse, not better.
As much as people want to act like there's a clear separation, a lot of UI controls are present or absent based on what business logic your server can do. If you are forced to support an old UI that does something the company cannot do anymore, you are forcing broken or insecure functionality. And this would be in the name of something nobody outside of Hacker News would even use. Most people are not aware there is an old.reddit.com.
1) Have this law only apply to B2C.
2) Stop having rolling feature updates except on an opt-in basis. It used to be that when I bought an operating system or a program it stayed bought, and only updated if I actively went out and bought an update. Rolling security updates are still a good idea, and if they break UI functionality then let the end customer know so that they can make the decision on whether or not to update.
For hosted software, such as Google office, is it really that much more difficult to host multiple versions of the office suite? I can see issues if people are collaborating, but if newer file formats can be used in older software with a warning that some features may not be saved or viewable, then the same can be done with a collaborative document vis-a-vis whatever version of the software is opening the document.
My wife recently went with 0patch and some other programs to cover her Win10 machine when Microsoft stopped updating it. She still got force-fed two updates having to do with patching errors in Windows' ESU feature that blocked people from signing up for the one year of ESUs. She let those updates happen without trying to figure out a way to block them, as they have no other impact on her operating system, but it would have been nice if Microsoft had been serious about ending the updates when it said it was.
I am not a programmer, but come on. This was done in the past with far less computational ability.
You can add them, you can even move them, but you don't get to take back something you already sold me, unless I also get to take back the money I gave you.
Really not super interested in excuses and whining. Either support the features you sold me, or refund my money. It really is that simple... and it really should be the law.
But the question is how you define what a feature is in networked apps. If you play an online game with a sniper rifle that one-shots people, and the developers nerf it, have they taken a feature from you? What if everyone else loved the nerf? How do we support both you and the other players? Let you continue one-shotting them?
If the app you're paying for could message other users, but now they can block you, is the company supposed to give you a refund because now you can't message some users?
In general I think the best answer to your objections is to require companies to specify up front exactly what features are being sold, and for how long they are guaranteed to be available. The onus would then be on the consumer to evaluate the list of guaranteed features against their wants and needs. Consumers would hopefully learn, over time, not to buy products that don't provide these guarantees up front.
Right now what they (we) are learning is not to trust anything with an Internet connection, because of abuses from a small number of prominent bad actors. Which is unfortunate.
> Have this law only apply to B2C.
I don't think limiting it to B2C changes much. Now instead of business customers calling and asking for features, you have swaths of people asking for a feature on the internet.
> I am not a programmer, but come on. This was done in the past with far less computational ability
If by computational ability you mean the actual power of our hardware, this isn't really a computational problem, it's a manpower problem. We have faster computers, but our velocity as developers has been relatively stagnant the past 20 years, if not worse.
Believe me, I'm totally sympathetic to the idea that web apps could support older versions. I have thought of doing it myself if I were to get out of contract work. But I'm aware of how much extra work that is, and it would be something I do for fun, not something that most people would appreciate.
> Stop having rolling feature updates except on an opt-in basis. It used to be that when I bought an operating system or a program it stayed bought, and only updated if I actively went out and bought an update
Having an opt-in doesn't really change what I'm talking about. This is lumping different kinds of software together, and it would be helpful to separate them. There are apps that do local work on your computer, apps that communicate with a network, and the OS itself.
Apps that work locally and don't need to talk to a server can have multiple versions, and they often do. That's a solved problem. I have not been forced to upgrade any third party app on my computer. But I have had AI crammed into Microsoft apps and I hate it.
Apps that communicate with a server, and other users, are the source of a lot of issues I'm talking about. Maintaining versions for these creates cascading problems for everyone.
For OS: I'm all for not being forced to upgrade my OS. But if I don't upgrade, the reality is I will miss security updates and won't be able to use newer apps. That was the case in the 90's, and it's the case now.
> Rolling security updates are still a good idea
That's doing some heavy lifting. It's a good idea, sure, but you can't just sprinkle security updates onto older versions. You're just multiplying how long each security fix takes for all users.
> For hosted software, such as Google office, is it really that much more difficult to host multiple versions of the office suite
In Google's case, it's difficult to maintain one version of an app. They kill apps left and right. You're referencing software from the biggest companies in the world. Reddit manages just one other version, and that's because the core of their app has stayed the same since 1.0. If we required all B2C to always support older versions, we'd essentially make it illegal for small companies to make networked services.
Here's how it plays out for a small company:
- Every security fix has to be backported to every version of the app. This is not free, this is extra work for each version. What if it's discovered Google Docs has a vulnerability that could leak your password and has for 20 years? That's a lot of versions to update.
- If the app interacts with other users in any way, new features may need to support old versions anyway. How do you add a permissions system to Google Docs if the old version has no permissions? What should happen on the old app when users access a doc they couldn't access before? You have to program something in.
- Support staff has to know 10 different versions of the app. "Have you tried clicking the settings icon?" "What settings icon?"
- Internet Guides? YouTube tutorials? When you Google how to do something, you'd need to specify your version.
- Because we are doomed to support older versions in some capacity, companies will just not work on features that many people want because it's too hard to support the few people on older versions.
This is why apps with "versions" usually have support periods, because it would be impossible for them to support everything.
And that's fine. Just leave it that way and stop with the rolling feature updates that a person can't block because the only way you sell your software is as SaaS.
It should be illegal for you to change a product you sold me in a way that degrades functionality or impairs usability, without offering me the option of either a full refund or a software rollback.
If that causes pain and grief for server-based products, oh, well. Bummer. They'll get by somehow.
And even with the ability of rolling back somewhere hidden in the settings, forced UI changes are annoying at best - they should always come at a time chosen by the user (including "never") and not any other time.
I'm in, but let's have it in October or something when I'm less busy.
Update: talked to some experts. IANAL, and they aren't either. This would be cataclysmic for the courts unless they knew it was coming AND every claim was filed correctly (fees paid, no errors, etc). Even if everything was done perfectly, it would be a ton of work and there's no way every case would be processed in a day. It's also likely that all the identical cases filed in a single jurisdiction would be heard together in a single trial. There's also weirdness when you consider where each claim is filed. Quote: "you may be in the right, but I can guarantee you would have a terrible time"
I assume the main point would be getting the attention of politicians who would step in and intervene. Especially if it’s a situation where the courts are truly overwhelmed.
you rented/leased a watch for an undefined amount of time.
this is pretty much everything everywhere right now. except local linux mostly.
Do the terms allow YouTube/Google to use the data collected for any purpose?
Car dashboards without buttons, TVs sold with 3D glasses (remember that phase?), material then flat design, larger and larger phones: the list is embarrassing to type because it feels like such a stereotypical nerd complaint list. I think it's true, though: the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.
And now with AI, too. I just interacted with duck.ai, DuckDuckGo's stab at a chatbot. I long for a little more conservatism.
They're the ones who are just asking for it ... they, themselves need more forceful training. It's up to us to move slower and fix things.
It reads like a company that is only there to squeeze money out of existing customers and hell bent on revenues above growth. Like one of those portfolio acquisitions.
Small stuff, such as: the keyboard shortcut set up for switching keyboard layouts is wrong, and the one displayed to me in the UI is not the one that works. I discovered this because the shortcut for the Discord overlay (Shift + `) kept switching my keyboard layouts; I couldn't comprehend why until I noticed that that shortcut consistently switched them while the one displayed in the UI did not. There's no way to change the shortcut: whatever I set up in the UI does not work, but Shift + ` always does, and I have no idea why.
Copy and paste has definitely surprised me sometimes. I was designing a custom livery for a sim racing game, copying images to use as stickers, and the clipboard would paste very different images from many "copies" ago out of nowhere. I couldn't create a reproducible case to file a bug report; it works sometimes and not at all at other times.
I set updates to happen at night, between 03:00 and 07:00. It doesn't matter: the computer has rebooted a few times out of nowhere to apply updates. I didn't even get a notification about it; I simply got the "Restarting" screen.
It's absolutely shoddy. As many complaints as I have with macOS over the past 8+ years, it's nowhere near as shitty an experience. I'm only a couple of months into Windows again, and it's way worse than I remember it from the days of Win2k/Windows XP/Windows 7.
Round trip through recall and OCR, here's your "text" or image for pasting.
Sounds dumb. I know.
Then again, a friend sent a screenshot of a contact and I asked AI to convert that to a vCard I could import (impressively saved time and was less error-prone).
Source?
To install it, browse to here: https://code.visualstudio.com/ (search: "vscode"). Click on "Download for Linux (.deb)" and then use Discover to install and open it - that's all GUI based and rather obvious. You are actually installing the repository and using that which means that updates will be done along with the rest of the system. There is also a .rpm option for RedHat and the like. Arch and Gentoo have it all packaged up already.
On Windows you get the usual hit and miss packaging affair.
Laughably, the Linux version of VSCode still bleats about updates being available, despite the fact that it is installed via the central package manager, something Windows sort of has but still "lacks" (MSI). Mind you, who knows what is going on: PowerShell apps have another package manager or two, and it's all a bit confusing.
It's odd that Windows apps (e.g. any non-Edge browser, LibreOffice, PDF wranglers, anything non-MS, and even some MS things like the PowerToys sort of apps) still need their own update agents and services or manual installs.
Why does Windows not have a formal source for safe software? One that the owner (MS) endorses?
One might conclude that the reason MS won't endorse a source of safe software, and hence take responsibility, is that they are not confident in the quality of their own software, let alone someone else's.
Around that time, one of my employer's websites had added Google Plus share buttons to all the links on the homepage. It wasn't a blog, but imagine a blog homepage with previews of the last 30 articles. Now each article had a Google Plus tag on it. I was called in to help because the load time for the page had grown from seconds to a few minutes. For each article, they were adding a new script tag and a Google Plus dynamic tag.
It was fixed, but so many resources were wasted for something that eventually disappeared. AI will probably not disappear, but I'm tired of the busywork around it.
Most of the AI efforts currently represent misadventures in software design at a time when my Fitbit Charge can't even play nice with my Pixel 7 phone. How does that even happen?
PS: I was thinking that I didn't notice it being shoved down our throats because I was high on the Kool-Aid. But I do remember when they shoved it into YouTube comments.
I think they intended to be like Facebook and have a selective group of people join, but they just allowed a random set of people to join and then said you can bring five or some low number with you. That was never going to work for the rapid growth they wanted.
I liked Google+, but Google really mismanaged it.
It felt like I had some level of control of my feed and what I saw and for the time it existed the content was pretty good :(
Pretty much my sentiment too.
Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.
It’s happening, with you or to you.
Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.
Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?
https://successfulsoftware.net/2022/04/14/verifone-seems-to-...
For all the talk in the early days of Bitcoin comparing it to Visa and how it couldn't reach the scale of Visa, I never thought it would be that Visa just decided to place itself lower than Bitcoin.
Kind of the same as Windows getting so bad it got worse than Linux, actually...
Traditional fraud-detection models have quantified type-I/II error rates, and somebody typically chooses parameters such that those errors stay within acceptable bounds. If somebody decided to use a transformer-based architecture in roughly the same setup as before, there would be no issue; but if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapping a modern LLM, there would be huge issues.
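To make that concrete, here's a toy sketch (synthetic data, made-up numbers) of what "choosing parameters such that errors are within acceptable bounds" looks like in practice: pick the score threshold from validation data so the false-positive rate stays under a business-chosen cap.

    import numpy as np

    rng = np.random.default_rng(0)
    legit_scores = rng.normal(0.2, 0.10, 10_000)  # model scores, legit txns
    fraud_scores = rng.normal(0.7, 0.15, 300)     # model scores, fraud txns

    MAX_FPR = 0.001  # business rule: block at most 0.1% of legit txns

    # Threshold = the (1 - MAX_FPR) quantile of legitimate scores.
    threshold = np.quantile(legit_scores, 1 - MAX_FPR)

    fpr = (legit_scores >= threshold).mean()  # type-I error rate
    tpr = (fraud_scores >= threshold).mean()  # fraud actually caught
    print(f"threshold={threshold:.3f}  FPR={fpr:.4%}  recall={tpr:.1%}")

A prompt-wrapped LLM gives you no such dial to turn, which is exactly the problem.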
It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.
we’ll force you to come back to justify sunk money in office space.
Perhaps… the right balance is actually working only 4 days a week, always from the office, and just having the 5th day proper-off instead.
I think people go through "grinds" to get big projects done, and then plateaus of "cooling down". I think every person only has so much grind to give, and extra days don't mean more work, so the ideal employee is one you pay for 3-4 days per week only.
But that's a tall order, so maybe we just need managers to pay attention. It doesn't take that much effort to stay involved enough to know who is slacking and who is pulling their weight, and a good manager can do it without seeming to micromanage. Maybe they'll do this when they realize that what they're doing now could largely be replaced by an LLM...
We don't talk enough about how the real estate industry is a gigantic drag on the economy.
It's a sort of pushiness that hints not even the people behind the product are very confident in its appeal.
In general, I think we want to have it, just like nuclear fusion, interplanetary and interstellar colonization, curing cancer, etc. etc.
We don't "need" it, in the same way people in the 1800s didn't need electric cars or airports.
Who owns AGI, or what purpose the AGI believes it has, is a separate discussion, similar to how airplanes can be used to transport people or fight wars. Fortunately, today most airplanes are made to transport people and connect the world.
Outside of tech circles, no one I talk to wants AI for anything.
You don’t get to 100s of millions of weekly active users with a product only technical people are interested in.
As a society, nothing would be more harmful than AGI controlled by corporations and governments. The rest of us should fight tooth and nail to make sure that it never happens
You have a clear idea of what AGI would look like, and don't want that. I think you don't, and none of us do; it will surprise us just as the internet and the smartphone would have surprised someone 50 years ago, even someone very technically inclined.
Every single cohort is smiling upward from the past two years. That is insane, especially at their scale! AI is useful to people.
People used to google stuff before it became clickbait content and ads.
Same thing is gonna happen with ai chatbots. You begrudgingly use them when you have to and ignore them otherwise.
Hell, those that did leave Twitter did it to move to Bluesky which is basically Twitter under a different banner.
Even if people move away from specific instances of some form of technology (like Twitter, Bluesky, Mastodon or whatever), they are not necessarily moving away from the idea/tech itself (like microblogging in this example).
Same with other social media: notice how after "Reddit gone shit" the people who felt like that and did move away didn't move back to forums or whatever, they went to Reddit-like boards like Lemmy.
And sure, you can say people just moved to other platforms, but I don’t think you can substantiate that either.
Personally I just dropped all twitter likes, and a lot of my old twitter friends did too. We have discord servers now.
But it’s hard to have a discussion like this without data and we’re never gonna have the data. So you have to use qualitative data instead.
For instance, I am fiddling with LineageOS on a Pixel (ironically enough) that minimizes my exposure to Google's AI antics. That doesn't mean to say it is easy or sustainable, but enough of us need to stop participating in their bad bets to force upon that realization.
That's not to mention all the other tech companies pushing AI (which is honestly all of them).
I might not like a certain feature, but I'd dislike the government preventing companies from adding features a whole lot more. The thought of that terrifies me.
(To be clear, legitimate regulations around privacy, user data, anti-fraud, etc. are fine. But just because you find AI features to be something you don't... like? That's not a legitimate reason for government intervention.)
It's better to assume good faith when providing a counter argument.
But also, you're going to have to be more specific about what tracking you're worried about. Cell towers need to track you to give you service. But the parking app only gets the data you enable with permissions, and the data the city requires you to give the app (e.g. a payment method). So I'm not super clear what tracking you're concerned about?
If you don't use your smartphone for anything but paying for parking, I genuinely don't know what tracking you're concerned about.
Because you think it’s not?
What if I, and many other people, think that it is?
2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).
And legally, code and software are considered a form of speech in many contexts.
Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.
In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.
I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.
But I think, given that AI has encouraged people toward suicide, and would allow minors to circumvent parental controls, as examples, regulations pertaining to AI integration in software, including mandates that allow users to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Outside of that, the leaking of personally identifiable information does cause material harm every day. So there needs to be proactive control available to the end user regarding what AI does on their computer, and how easy it is to accidentally enable information-gathering AI when that was not intended.
I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.
The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.
As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.
But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.
(Also, regarding sonograms, I typed without thinking -- yes of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors not lawmakers.)
I emphatically disagree. See you at the ballot box.
> but it's not a major practical concern for any end users so far.
My wife came across a post or comment by a person considering preemptive suicide in fear that their ChatGPT logs will ever get leaked. Yes, fear of leaks is a major practical concern for at least that user.
The reason I don't like these sorts of features is that I think they are harmful, personally.
Lots of people actually find them useful. And the features are being iterated on to improve them.
Correct, especially when the features break copyright law, use as much electricity as Belgium, and don't actually work at all. Just a simple button that says "Enabled", and it's off by default. Shouldn't be too hard, yeah? You can continue to use the slop machine, that's fine. Don't force the rest of us to get down in the trough with you.
I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.
If there are problems with copyright, that's an issue for the courts -- not a user toggle. If you have problems with the electricity, then that's an issue for electricity infrastructure regulations. If you think it doesn't work, then don't use it.
Passing a law forcing a company to turn off a legal feature by default is absurd. It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.
I have a big problem with companies forcing me to use garbage I don't want to use.
>If there are problems with copyright, that's an issue for the courts -- not a user toggle.
But in the meantime, companies can just get away with breaking the law.
>If you have problems with the electricity, then that's an issue for electricity infrastructure regulations.
But in the meantime, companies can drive up the cost of electricity with reckless abandon.
>If you think it doesn't work, then don't use it.
I wish I lived in your world where I can opt out of all of this AI garbage.
>Passing a law forcing a company to turn off a legal feature by default is absurd.
"Legal" is doing a lot of heavy lifting. You know the court system is slow, and companies running roughshod over the law until the litigation works itself out "because they're all already doing it anyway" is par for the course. AirBnB should've been illegal, but by the time we went to litigate it, it was too big. Spotify sold pirated music until it was too big to litigate. How convenient that this keeps happening. To the casual observer, it would almost seem intentional, but no, it's probably just some crazy coincidence.
>It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.
Forcing companies to stop being deleterious to society is not censorship, and it isn't Handmaid's Tale to enforce a semblance of consumer rights.
That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.
I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.
In reality, companies try their best to build products that make money, and they compete with each other to do so. These principles have led to amazing products. And as long as no consumers are being harmed (e.g. fraud, safety, etc.), asking the government to interfere in product decisions is a terrible idea. The free market exists because it does a better job than any other system at giving consumers what they want. Just because you or a group of people personally don't like a particular product isn't a reason to overturn the free market and start asking the government to interfere with product design. Because if you start down that path, pretty soon they're going to be interfering with the things you like.
Bull. Free markets are subject to a lot of pressures, both from the consumers, but also from the corporate ownership and supply chains. The average consumer cannot afford a bespoke alternative for everything they want, or need, so are subject to a market. Within the constraints of that market it is, indeed, best for them if they are free to choose what they want.
But from personal experience I know damn sure that what I really really want is often not available, so I'm left signalling with my money that a barely tolerable alternative is acceptable. And then, over a long enough period of time, I don't even get that barely tolerable alternative anymore as the company has phased it out. Free markets, in an age of mass production and lower margins, universally mean that a fraction of the market will be unable to buy what they want, and the alternatives available may mean they have to go without entirely. Because we have lost the ability to make it ourselves (assuming we ever had that ability).
But that's just life. I genuinely don't understand how you can complain that not every product is exactly the product you want. Companies are designing their products to meet the needs of millions of people at the price point they can pay for it. Not for you personally.
We have more consumer choice than we've ever had in modern history, and you're still complaining it's not enough?
Even when we lived in tribes and made everything ourselves, we were extremely limited in our options to the raw materials available locally, and the extremely limited ability to transform things. We've never had more choice than we have today. I cannot fathom how you are still able to complain about it.
Issues that do plague the current market in the US, that impact my household enough to notice, are:
1) Product trends. When a market leader decides to go all in on something, a lot of the other companies follow along. We've seen this in internet connectivity, touchscreens in new cars, ingredients in hair care products, among others. This greatly limits the ability of consumers to find alternatives that do not have these trends. In personal care products this is a significant issue when it comes to allergies or other kinds of sensitivities.
But in general just look at the number of people who complain about things such as a lack of discrete buttons for touchpads. Not even Framework offers buttoned touchpads as an option, despite there being a market for them.
It's obvious that it's the vocal, heavy spenders who determine what's on the market. Or it's a race to the bottom in terms of price that determines this. It's not the average consumer.
2) Perfume cross-contamination, as an extension of chemical odors in general[0,1]. In recent years many companies with perfumed products, such as cleaning agents, have increased the perfume or extended its duration with fixatives. This amplified after so many people had their sense of smell damaged during early COVID (lots of complaints about scented candles and the like not having an odor anymore, et cetera).
This wouldn't be a problem from a consumer point of view except that the perfumes transfer to non-perfumed products - basically anything that has plastic or paper absorbs second-hand fragrances pretty well. I live in as close as we can get to a perfume-free household, for medical reasons. It's effectively impossible to buy certain classes of products, or anything at all from certain retailers, that doesn't come perfumed. There are major stores such as Amazon and Target that we rarely buy from as we have to spend a lot of money, time, and effort to desmell products (basically everything purchased from Amazon or Target now has a second-hand perfume).
It's possible to have stores that have both perfumed products and non-perfumed products such that perfume cross-contamination doesn't occur. But this requires the appropriate ventilation, and isn't something that's going to happen unless one of the principals of the store has a sensitivity.
And then there are perfumes picked up in transit from the wholesaler, trucking company, or shipping company.
I hope someday to win Powerball or Mega Millions so that I can start a company dedicated to perfume-free household basics. That are guaranteed to still be perfume-free on delivery.
0 - https://www.drsteinemann.com/faqs.html
1 - https://dynamics.org/Altenberg/CURRENT_AFFAIRS/CHINA_PLASTIC...
I am dealing with it, thanks, by fighting against it.
>I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.
Because as we all know, forcing you to use the abusive copyright laundering slop machine is exactly morally equivalent to not getting arbitrary cheques in the mail.
>In reality, companies try their best to build products that make money, and they compete with each other to do so.
In the Atlas Shrugged cinematic universe, maybe. Now, companies try to extract as much as they can by doing as little as possible. Who was Google competing with for their AI summary, when it's so laughably bad, and the only people who want it are the people whose paycheques depend on it, or people who want engagement on LinkedIn?
>The free market exists because it does a better job than any other system at giving consumers what they want.
Nobody wants this!
>Because if you start down that path, pretty soon they're going to be interfering with the things you like.
I mean, they're doing that, too, and people like you look down your nose and tell me to take that abuse as well. So no, I'm not going to sit idly by and watch these parasites ruin what shreds of humanity I have left.
> Nobody wants this!
OK, well if you don't believe in the free market then sure.
Good luck seeing how well state ownership manages the economy and if it does a better job at delivering the software features you want, or even of putting food on your table. Because the entire history of the twentieth century says you're not going to like it.
I don't know what that means grammatically.
But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?
The answer is, because it infringes on freedom. As long as these AI features aren't harming anyone -- if your only complaint is you find their presence annoying, in a product you have a free choice in using or not using -- then there's no democratic justification for passing laws against them. Democratic rights take precedence.
Nobody is FORCING you to go to that restaurant so it's antidemocracy to take away their freedom to not wash their hands when they cook?
Why do you say this? They are clearly harming the privacy of people. Or do you not believe in privacy as a right? A lot of people do, democratically.
Trying to regulate whether an end-user feature is available just because you don't "like" AI creep is no different from trying to regulate that user interfaces ought to use flat design rather than 3D effects like buttons with shadows. It would be an illegitimate use of government power.
When I buy a book, I don't want the government deciding in advance which paragraphs should be included, and which paragraphs people "shouldn't have to listen to". So I don't want it doing that with software either. It's the same thing.
You don't have to buy that book in the first place. The same way you don't have to use a piece of software.
If voters Democratically decide to do something, that's democracy at work.
If a Supreme Court strikes down a majority-passed law limiting free speech guaranteed by the Constitution, that's democracy at work.
And no, that would be the courts at work, which may or may not be beholden to the whims of other political figures.
Go ahead and try, but I don't think you'll find that an amendment to restrict people's freedoms is going to be very popular. Because it will be seen as anti-democratic.
I'm not sure the point you're trying to make here.
Voters restrict their own freedoms all the time. Hell, my state recently passed a law preventing Ranked Choice voting.
Yes voters try to restrict their own freedoms all the time. We have constitutions with rights to block them from doing that in fundamental ways. That's what protection from tyranny of the majority is all about. Just because you have a majority doesn't mean you're allowed to take away rights. That's a fundamental principle of democracy. Democracy isn't just majority rule -- it's the protection of rights as well.
What an asinine comparison lol
And as far as I'm concerned, as long as Google and Apple have a monopoly on smartphone software, they should be regulated into the ground. Consumers have no alternatives, especially if they have a job.
https://news.ycombinator.com/newsguidelines.html
Code and software are very much forms of speech in a legal sense.
And free speech is regulated in cases of harm, like violent threats or libel. But there's no harm here in any legal sense. People are just unhappy with the product UX -- that there are buttons and areas dedicated to AI features.
Companies should absolutely have the freedom to build the products they want as long as there's no actual harm. If you merely don't like a UX, use a competing product. If you don't like the UX of any product, then tough. Products aren't usually perfectly what you want, and that's OK.
You're not being forced to use the AI features. If you don't want to use them, don't use them. There's zero antitrust or anticompetitive issue here.
Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.
If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government to pass a law requiring stores to carry blenders in bright orange. But that's what you're asking for: for the government to determine which features software products have.
You can't turn them off in many products, and Microsoft's and Google's roadmaps both say that they're going to disable turning them off, starting with using existing telemetry for AI training.
> Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.
You're just continuing to ignore that all of this is based on their market dominance. There are literally two options for smartphone operating systems. For something that's vital to modern life, that's unacceptable and gives users no choice.
If a company gets to enjoy a near-monopoly status, it has to be regulated to prevent abuse of its power. There's a huge amount of precedent for this in industries like telecom.
> If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government to pass a law requiring stores to carry blenders in bright orange
Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?
Do you really not see that there is no difference?
Either the government starts dictating product design or it doesn't.
I don't want a world where the government decides which features software makers include or turn on or off by default. Whether there are 20 companies competing in a space or mainly 2.
Don't you see where that leads? Suddenly it's dictating encryption and inserting backdoors. Suddenly it starts allowing Truth Social to build new features and removing features on Twitter.
This is a bigger issue than you seem to be acknowledging. The freedom to create the software you want, provided it's not causing actual harm, is as important to preserve as the freedom to write the books or blog posts you want.
If this had something to do with antitrust then the fact that there are only two major phone platforms would be relevant. But the fact that both platforms are implementing LLM features is not anticompetitive. To the contrary, it's competitive even if you personally don't like it. It's literally no different from them both supporting 1,000 other features in common.
We (You and I) don't. Shareholders absolutely need it for that line to go further up. They love the idea of LLMs, AI, and AGI for the sole reason that it will help them reduce the cost of labour massively. Simple as that.
> I will use what creates value for me. I will not buy anything that is of no use to me.
If only everyone was thinking this way. So many around these parts have no self-preservation tendencies at all.
People pushing AI desperately want to convince us the value is positive. 10x productivity! Fire all your humans!
In reality, depending on the use-case, the value is much smaller. And it can be negative when you consider total value over the long term (the incomprehensible, shoddy foundations of large or long-running projects).
I'm thankful at the moment that I work for a boring company. I want to leave for something more interesting, but at least I can say that our management isn't blowing a bunch of money on AI and trying to use it to get rid of expensive developers. Heck, they won't even pay for Claude Code. Tightwads.
And massive amounts of energy to run these new fangled AI data centers. Not sure if you lumped that in with "resources", but yes we're already seeing it:
A typical AI data center uses as much electricity as 100,000 households, and the largest under development will consume 20 times more, according to the International Energy Agency (IEA). They also suck up billions of gallons of water for systems to keep all that computer hardware cool.
https://www.npr.org/2025/10/14/nx-s1-5565147/google-ai-data-...
> We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.
I’m not sure I have an idea of what this might look like. Do they want money? What might that model look like? Do they want credit? How would that be handled? Do they want to be consulted? How does that get managed?
Copyright is meant to encourage more publication by providing publishers with (temporary) control over their products after having released them to the public. Once the copyright expires, the work enters the public domain, which is really the end goal of copyright law. If publishers start to feel like LLMs are undermining that control, they might publish less original work, and therefore less stuff will eventually enter the public domain, which is considered bad for society. We're already seeing some effects of this as traffic (and ad revenue) to various websites has fallen significantly in the wake of LLM proliferation, which lowers the incentive to publish anything original in the first place.
Anyway, I'm not sure how best to adapt copyright law to this new world. Some people have thought about it though: https://en.wikipedia.org/wiki/Copyright_alternatives
A student will be showing me something on their laptop, their thumb accidentally grazes it because it's larger than the modifier keys and positioned so this happens as often as possible. The computer stops all other activity and shifts all focus to the Copilot window. Unprompted, the student always says something like "God, I hate that so much."
If it was so useful they wouldn't have to trick you into using it.
... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)
> If it was so useful they wouldn't have to trick you into using it.
They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.
> ... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)
Just as a generalization, the dozen or so times this has happened this semester the pop-up is accompanied by an "ugh" then after the window pops up from the taskbar the student immediately clicks back into the program we're using. It seems like they're used to dealing with it already. I haven't seen any voice interaction.
I mean, the statistics say the students use AI plenty - they just seem annoyed by the interruption. Which I can agree with.
> They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.
Exactly.
Further, a dark pattern is where you are led towards a specific outcome but are pulled insidiously towards another. This doesn't really fall into that definition.
What startups are doing earnings calls?
Also, even many home users may be finding that they interact less and less with the "platform" now that everything including MS Office runs from a browser. I can barely remember when the differences between Windows and Linux were even relevant to my personal computer use. This was necessitated by having to find a good way to accommodate Windows, MacOS, iOS, and Android all at once.
I cracked the screen on my work laptop last year, but IT set me up with a replacement screen. It's so much nicer having discrete buttons than a clickable trackpad, so I skipped on an upgrade. Still does everything I need it to do for work (including working with Windows 11).
And the vast majority of things I do on either laptop involves a web browser.
Yeah, I think the days of working software are over (at least deterministically)
right... none of them are saying that. They could probably use more GPUs considering the price of GPUs and memory are skyrocketing and the supply chain can't keep up. It's about experimentation, they need real users and real feedback to know how to improve the current generation of models, and figure out how to monetise them.
Do you have any evidence or well established theory to back up this rather extraordinary claim?
Because if you are honestly positing that numerous people around the world are literally hallucinating despite (statistically) not being under medical supervision, presumably continuing to drive, work, and make decisions, that would be a pretty urgent global health phenomenon that you really should be chasing up. And at some point, the authorities best placed to deal with this hitherto unseen mass incapacitation might reasonably ask: what are the chances that multiple unrelated people around the world are experiencing such localised, hugely specific breaks from reality, causing them to express reasonably common opinions on an internet forum, rather than the inconsistency being on the end of this one person who doesn't agree with them?
Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.
The amount of waste is quite staggering in this back and forth game.
Which more often than not will lose or distort the original intention behind the first 5 bullet points.
Which is why I avoid using LLMs for writing.
Our heroes are in the office of a tech billionaire who says "See that coffee machine? Just speak what you want. It can make any coffee drink."
So one character says "Half caf latte" and the machine does nothing except open a small drawer.
"You have to spit in the drawer so it can collect your DNA--then it will make your coffee" the billionaire says.
This pretty much sums up the whole tech scene today. Any good engineer could build that coffee machine, but no VC would fund her company without the spit drawer.
We are, however, at the “we need an AI strategy” stage, so execs will throw anything and everything at the wall to see what sticks.
Nuance.
They have insufficient GPUs for the amount of training going on.
If you assume there's a plateau where the benefits of constant training no longer outweigh the costs, then they probably have too many GPUs.
The question is how far away is that plateau.
Enough for what? The adoption is slowing down fast, people are not willing to pay for a chatbox and a 10% better model won't change that.
Users don't have many other options to switch to. Even if they did, the b2b/advertising revenue they get makes up for any losses they may take on the consumer side.
If user research indicates your chart isn't clear enough, then improve the chart. But what are the odds they did any user research? They probably just ran an A/B test and saw number go up because of the novelty factor.
I didn't even know there was an issue. It pointed out multiple signs of slow leaks and then described what I should do to test and repair it easily.
I see a lot of negative energy about the 'AI' tech we have today to the point where you will get mass downvoted for saying something positive.
Plus, the general economic outlook is negative and AI is the bright spot. They are striving to keep growth up amid downward pressure.
They spent a ton of money and/or they see everyone's LinkedIn posts or fantastic news stories by someone selling BS and they're afraid to say the emperor has no clothes.
They want to explore what is possible and what sticks with users.
The best way to do this is to just push it in their apps as many places as possible since 1. you get a nice list of real world problems to try and solve. 2. You have more pressure on devs to actually make something that works because it is going into production. 3. You get feedback from millions of users.
Also, by working heavily with their AI, they will discover areas that can be improved and thus make the AI itself better.
They don't care that it is annoying, unhelpful or uneconomical because the purpose is experimentation.
Unfortunately these reckless investments are likely to cause massive collateral damage to us 'little people'. But I'm sure the billionaires will be just fine.
I am, personally, quite optimistic about the potential for "AI," but there's still plenty of warts.
Just a few minutes ago, I was going through a SwiftUI debugging session with ChatGPT. SwiftUI View problems are notoriously difficult to debug, and the LLM got to the "let's try circling twice widdershins" stage. At that point, I re-engaged my brain, and figured out a much simpler solution than the one proposed.
However, it gave me the necessary starting point to figure it out.
> Not my problem.
I don't know about the author specifically, but the bubble popping is a very bad thing for many people. People keep saying that this bubble isn't so bad because it's concentrated on the balance sheets of very deep-pocketed tech companies who can survive the crash. I think that is basically true, but a lot is riding on the stock valuations of these big tech companies, and lots of bad stuff will happen when they crash. It's obviously bad for the people holding these stocks, but these tech stocks are so big that there is a real risk of widespread contagion.
"I dont like being forced to use AI products"
What's the issue?
AI really needs R&D time, where we first figure out what it’s good for and how best to exploit it.
But R&D for software is dead. Users proved to be super-resilient to buggy or mismatched software. They adapt. 'Good enough' often doesn't look it. Private equity sez: throw EVERYTHING at the wall and chase what sticks…
> We see the hallucinations. We see the errors. Let's pick the things which work and slowly integrate it into our lives. We don't need to do it this quarter just because some startup has to do an earnings call.
Citation needed?
To me this is a contentless rant. AI is about old billionaires getting richer before they die? It’s at least a lot more than that.
Seems like some people tried AI in 2023, got negative affinity for it, then just never update with new information. In my personal use, hallucinations are way down, and it’s just getting more and more useful.
You need to push slop, because people don’t really want it.
When he says "billionaires making more billions" it's really off the mark. These people are not forcing AI down our throats to make billions.
They are doing it so they can win.
Winning means victory in a zero sum game. This is a game that is zero sum because the people that play it think that way. However, the point is not to make money. That's a side effect.
They want to win so the other guys don't. That means power, growth, prestige, and winning. Winning just to win.
Once people start to understand that this is the prime directive for the elite in the tech business, the easier it is for everyone to defend against it.
The way you defend against it is to make it non-zero sum. Spread the ideas out. Give people choices. Act more like Tim Berners Lee and less like Zuck. This will mean less money, sure, but more to the point, it deprives anyone of being "the winner". We should all celebrate any moves that take power away from the few and redistribute it to many. Money will be made in the process, but that's okay.
E.g. Programming - and I do judge not only those who use AI to code but execs who force people to use AI to code. Sorry, I'd like to know how my code works. Sorry, you're not an efficient worker, you're just making yourself dumber and churning out garbage software. It will be a competitive advantage for me when slop programmers don't know how to do anything and I can actually solve problems. Silicon Valley tech utopians cannot convince me otherwise. I don't think poorly socialized dweebs know much about anything other than their AI girlfriends providing them with a simulation of what it feels like to not be lonely.
I support this, but the Smarter Than Me types say it's impossible. It's not possible to track down an adequate number of copyright holders, much less get their permission, much less afford to pay them, for the number of works required to get an LLM to achieve "liftoff".
I would think that as I use Claude for coding, it would work just as well if it hadn't sucked down the last three years of NYT articles as if it had. There's a vast amount of content that is in the public domain, and if you're ChatGPT you can make deals with a bunch of big publishers to get more modern content too. But that's my know-nothing take.
Maybe the issue is more about the image content. Screw the image content (and definitely the music content; Spotify pushing this slop is immensely offensive): pay the artists. My code, OTOH, is open source, MIT licensed. It's not art at all. Go get it (though throw me a few thousand bucks every year because you want to do the right thing).
Adobe has been training models on only licensed media. I'm not sure if it's all their models or just some of them, and I haven't seen the results, but someone's doing it.
If you don't mind the oligarchs stealing your code, that is your prerogative. Many other people do mind.
Some "No True Scotsman"-flavored cope.
You either comply or face unnecessary roadblocks. OP has complied by sharing the link. My right to choose tracking cookies and script execution is parallel to my right to not use, or be forced to use, AI. This issue has to be addressed universally, as it is not a simple "no AI" on the web; it's a choice between freedom to use the web and compliance with the violation of that freedom.
Benefit of the doubt: this person wants to get their word out and it's more energy than they had to track down a pristine, pure, sparkling blogging engine.
Did that single sentence in this relatively short, 36-sentence post really make you flip the table as hard as you imply? That's surprising, if so.
I’m not opposed to any of the above, necessarily. I’ve just always been the type to want to adopt things as they are needed and bring demonstrable value to myself or my employer and not a moment before. That is the problem that capital has tried to solve through “founder mode” startups and exploitative business models: “It doesn’t matter whether you need it or not, what matters is that we’re forcing you to pay for it so we get our returns.”
Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.
They said the exact same thing when electricity was invented too. Gas companies said electricity was a fad. Some doctors said electric light harms the eyes. It's too expensive for practical use. It needs too much infrastructure investment. AC will kill people with shocks. Electrification will destroy jobs, said gas lamp unions. It's unnatural, said some clergy. And on and on and on...
As I'm seeing from the multitude of folks scrutinizing my clearly exhaustive list of every single technology fad ever (sarcasm, for those unclear), they're missing the forest for the trees, and are therefore likely to hit every single rung when they fall down a ladder. I'm not saying X or Y wasn't valuable or useful, only that the way they're shoved into every single orifice by force is neither of those things to those taken advantage of in the quest for utility.
The point isn't that advancements are bad, the point is the way we force them down the throats of everyone, everywhere, all the time creates a waste of skill and capital for the returns of a very select group of people. The point is that fads are bad, and we should let entities (companies and people) find use in what's offered naturally rather than force-feeding garbage to everyone to see what's actually palatable.
Even the blockchain comparison isn't valid, because blockchain hype didn't consist of a "Blockchain" button getting crammed into every single product and website, turned into a popover, etc.
For example, I'm not a big fan of blockchain. In fact, I think crypto is just 99% scam.
But big data led to machine learning and LLMs, right? Cloud led to cheaper software with faster deploy times, right? In fact, cloud also means many browser-based apps replacing old Windows apps.
None of these were fads. They are still tremendously useful today.
The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models have been veering into not-sucking territory. Looked at from a distance, that makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, eventually something will give and the wheels will start turning.
There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working or will always be a scam. But the massive capex on AI now is predicated on the eventual turning of fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human, and I don't see how the improvements we've seen in AI over the past few years aren't heading in that direction.
I agree with your assessment.
I find it absolutely wild that 'it almost doesn't entirely suck, if you squint' is suddenly an acceptable benchmark for a technology to be unleashed upon the public.
We have standards for cars, speakers, clothing, furniture, makeup, even literature. Someone can't just type up a few pages of dross and put it through 100 letterboxes without being liable for littering and nuisance. The EU and UK don't allow someone to sell phones with a pre-installed app that almost performs a function that some users might theoretically want. The public domain has quality standards.
Or rather, it had quality standards. If it's apparently legal to put semi-functioning data-collectors in technologies where nobody asked for them, why isn't it legal to sell chairs that collapse unless you hold them a specific way, clothes that don't actually function as clothes but could be used to make actual clothes by a competent tailor, or headphones that can be coaxed into sporadically producing sound for minutes at a time?
Either something works to a professional standard or it doesn't. If it doesn't, it is (or was) not legal to include it in consumer products.
This is why people are angrier than a single unreliable program would justify. I don't care that much whether LLMs perform the functions that are advertised (and they don't, half the time). I care that after many decades of living in a first-world country with consumer protection and minimum standards, all of that seems to have been washed away in the AI wave. When it recedes, we will be left paying first-world prices for third-world engineering, now that the acceptable quality standard for everything seems to have dropped to "it can almost certainly be used for its intended purpose at least some of the time, by some people, with a little effort".
I personally think we have reached some kind of local maximum. I work 8 hours per day with Claude Code, so I'm very much aware of even subtle changes in the model. Taking into account how much money has been thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving; the results I'm getting are not. If I care about some work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows, and often they say something like "I don't care much about the UI, so I just let it do its thing". I think this tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.
When Charlie Kirk was shot and the video was posted to Twitter, people asked Grok to "fact-check" it... and Grok told them the videos were fake and Kirk was alive. [0]
Grok also spread misinformation about the identity of the shooter. [1]
> On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok's replies to X users' inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.
And that's just one particularly egregious event in a long string of problems, such as the MechaHitler thing [2] and the Elon Musk piss-drinking thing. [3]
So if you're going to defend these "AI" integrations as being useful and helpful... I dunno, Grok is probably not a good example to point to.
0: https://www.engadget.com/ai/grok-claimed-the-charlie-kirk-as...
1: https://www.cbsnews.com/news/ai-false-claims-charlie-kirk-de...
2: https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...
3: https://www.404media.co/elon-musk-could-drink-piss-better-th...