Omegalol. Cigarette maker introduces filter, cares about your health.
> listened to parents
...but not taken significant actions
> researched issues that matter most
...but ignored the results of the research
> made real changes to protect teens
...sure, insignificant changes
While I don’t quite believe they’ll achieve their feudal dreams in the near-to-medium term, I do expect the US to transition to a much more explicitly oligarchic republic at large, with the pretense of “Government of the people, by the people, for the people” largely pushed to the side.
Only solution seems to be to drop out of society to whatever degree possible.
While his crimes were atrocious, Ted Kaczynski might be right in some ways. The industrial and technological revolutions have improved life dramatically for many humans, and we live in a time of astonishing abundance, but at what cost?
aaaaannndd now I'm on a list somewhere.
Our system of incentives, operating within a system of governmental authority baked in an age where gunpowder was the new hotness, leads to a place where the movement of individual bits of law or policy don't matter. The forces at work will roll back whatever you do to make the social situation better, if they are antithetical to the interests of capital. Fix healthcare, and the insurance companies will find ways to twist it to their profit. Fix housing, and the banks and real estate developers will find ways to charge rent anyway.
The coupling between decision making and the vox populi is weak and must be strengthened. The coupling between decision making and capital is strong and must be broken. Unless we can accomplish either, any change we make is cosmetic.
I think what we need is a dissolution of representatives in favor of a more direct form of democracy, but most dismiss this as looney/impossible. I'm inclined to agree about the impossibility but that just kind of lands us back at 'what the hell do we do about it'.
Ranked choice is a good start, perhaps. Might not 'fix it' but maybe it's a foot in the door.
It took me a while to understand how things worked, and when I did I found a different job.
Now, this enterprise I left could never have done what they did if it were not for the developers who made it possible.
When we talk about the giants of social media, it is us, the developers, who make it possible for them to do what they do.
If you are frustrated that they are not being stopped from doing what they do, encourage people to leave. The money is great, but does it make it worth it?
From the other side, let us say that the US shut down Meta and the rest of the social media beasts, how many developers would be out on the street?
Serious question: What exactly do you want to see done? I mean real specifics, not just the angry mob pitchfork calls for corporate death penalty or throwing Mark Zuckerberg in jail.
Because the content served here isn't served in chronological order. The front page takes votes into account and displays hotter posts higher in the feed.
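HN's actual ranking code isn't public, but the general shape of a vote-plus-time-decay "hotness" score is well known. A minimal sketch (the formula and gravity constant here are an assumption, not HN's real algorithm):

```python
def hot_score(votes: int, age_hours: float, gravity: float = 1.5) -> float:
    # More votes raise the score; age in the denominator decays it,
    # so fresh posts can outrank older posts with more votes.
    return (votes - 1) / (age_hours + 2) ** gravity
```

Sorting posts by this score descending produces the front page; chronological order never enters into it.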
This will still impact HN because of stuff like the flame war downranker they use here. However, that doesn't automatically mean HN loses Section 230 protection. HN could respond by simplifying its ranking algorithm to maintain 230 protections.
That's a given on HackerNews, as there's only one frontpage. On Reddit, that would mean users subscribed to the same subreddits always see the same things on their frontpages. Same for users on YouTube subscribed to the same channels, users on Facebook who liked the same pages, and so on.
The real problem starts when the algorithm takes into account implicit user actions. E.g., two users are subscribed to the same channels, and both click on the same video. User A watches the whole video, user B leaves halfway through. If the algorithm takes that into account, now user A will see different suggestions than user B.
That's what gets the ball rolling toward hyper-specialized endless feeds which tend to push you into extremes, as small signals end up being amplified without the user ever taking an explicit action beyond clicking (or not clicking) suggestions in the feed.
As long as every signal the algorithm takes into account is either a global state (user votes, total watch time, etc), or something the user explicitly and proactively has stated is their preference, I think that would be enough to curb most of the problems with algorithmic feeds.
Users could still manually configure feeds that provide hyper personalized, hyper specific, and hyper addictive content. But I bet the vast majority of users would never go beyond picking 1 specific sport, 2 personal hobbies and 3 genres of music they're interested in and calling it a day. Really, most would probably never even go that far. That's the reason platforms all converged on using those implicit signals, after all: they work much better than the user's explicit signals (if your ultimate goal is maximizing user retention/addiction, and you don't care at all about the collateral damage resulting from that).
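A toy sketch of the distinction being drawn here (all signal names are hypothetical; the split is the point): a feed scorer that admits only global state and explicitly opted-in preferences, and silently drops implicit behavior:

```python
# Hypothetical signal kinds illustrating the explicit/implicit split.
EXPLICIT = {"subscribed_topic", "followed_channel", "picked_genre"}
IMPLICIT = {"watch_time", "hover", "scroll_past", "partial_view"}

def score_item(item: dict, user_signals: list) -> int:
    score = item["global_votes"]  # global state: identical for every user
    for sig in user_signals:
        if sig["kind"] in IMPLICIT:
            continue  # dropped: implicit behavior never personalizes the feed
        if sig["kind"] in EXPLICIT and sig["value"] in item["topics"]:
            score += 10  # boost only for topics the user opted into
    return score
```

Under this scheme, two users with the same subscriptions get identical feeds no matter how differently they watch, hover, or scroll.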
Are you sure? The algorithm isn't public, but putting a tiny fraction of "nearly ready for the frontpage" posts on the front page for randomly selected users would be a good way to get more votes on them without subjecting everyone to /new
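That kind of exploration is easy to sketch (this is a guess at the mechanism, not HN's actual code): for a small random fraction of front-page slots, swap in a rising post so it can collect votes without everyone having to browse /new:

```python
import random

def build_front_page(ranked, rising, slots=30, explore_rate=0.1, rng=None):
    rng = rng or random.Random()
    page = list(ranked[:slots])
    pool = [p for p in rising if p not in page]  # candidates not already shown
    for i in range(len(page)):
        # Each slot has a small chance of showing a "nearly ready" post.
        if pool and rng.random() < explore_rate:
            page[i] = pool.pop(0)
    return page
```

Because the swap is random per request, two users with identical histories could still see slightly different front pages.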
Same with friends: even if you have the exact same friends, if you message friend A more than friend B, and an otherwise identical account does the opposite, then the recommendation engine will give you different friend-related suggestions.
Then there's geolocation data, connection type/speeds, OS and browser type, account name (which, if they are real names such as on Facebook, can be used to infer age, race, etc), and many others, which can also be taken into account for further tailoring suggestions.
You can say that, oh, some automated system that sent the exact same signals on all these fronts would end up with the same recommendations, which I guess is probably true, but it's not a reasonable bar. No two (human) users would ever be able to maintain such a state for any extended period of time.
That's why we are arguing that only explicit individual actions should be allowed into these systems. You can maybe argue what would count as an explicit action. You mention adding friends, I don't think that should count as an explicit action for changing your content feed, but I can see that being debated.
Maybe the ultimate solution could be legislation requiring that any action that influences recommendation engines be explicitly labeled as such (similar to how advertising must be labeled), and maybe requiring at least a confirmation prompt instead of working with a single click. Platforms would then be incentivized to ask as little as possible, as otherwise confirming every single action would become a bit vexing.
And? These are still user's choices. They choose how long to view videos or scroll past them.
Either way: do you have any point other than that you think any and every action, no matter how small, is explicit, and therefore it's ok for it to be fed into the recommendation engine? Cause that's an ok position to have, even if one I disagree with. But if that's all, I think that's as far as this conversation needs to go. If there's any nuance I'm failing to get, or you have comments on other points I raised, such as the labeling of recommendation-altering actions, I'm happy to hear it.
It absolutely does mean that seeing as how everybody wants to see the flipped car on the side of the road. The local news reports on the car flipped on the side of the road and not the boring city council meeting for a reason.
Some people take the stance that even using view counts as part of ranking should result in a company losing Section 230 protections, e.g. https://news.ycombinator.com/item?id=46027529
You proposed an interesting framing around reproducibility of content ranking: two users who have the exact same watch history, liked posts, group memberships, etc. should have the same content served to them. But in subsequent responses it sounds like reproducibility isn't enough; certain actions shouldn't be used for recommendation even if they are entirely reproducible. My reading is that in your model there are "small" actions users take that shouldn't be used for recommendations, and presumably also "big" actions that are okay to use. If that's the case, which user actions would you permit to be used for recommendations and which would not be permitted? Where is the delineation between "small" and "big" actions?
Labeling is another idea but Meta does, in fact, disclaim which user actions are used for content recommendations: https://transparency.meta.com/features/ranking-and-content/
Basically, I still don't have a clear picture of what does and doesn't qualify as "algorithmically served" content in your model.
That's why I proposed that maybe the solution is:
1. only explicit actions are considered. A click, a tap, an interaction, but not just viewing, hovering, or scrolling past. That's an objective distinction that we already have legal framework for. You always have to explicitly mark the "I accept the terms and conditions" box, for example. It can't be the default, and you can't have a system where just by entering the website it is considered that you accepted the terms.
2. Explicit labeling and confirmation of which actions alter the suggestion algorithm and which don't. And I mean in-band, visible labeling right there in the UI, not a separate page like that Meta link. Click the "Subscribe" button, and you get a confirmation popup: "Subscribing will make this content appear in your feed. Confirm/Cancel". Any personalized input into the suggestion algorithm should be labeled as such. Companies can use any inputs they see fit, but the user must explicitly give them those inputs, and platforms will be incentivized to keep this number as low as possible since, in the limit, having to confirm every single interaction would be annoying and drive users away. Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
I'm ok with global state being fed into the algorithm by default. Total watch time/votes/comments/whatever. My main problem is with hyper personalized, targeted, self reinforcing feeds.
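The labeling-plus-confirmation idea could look something like this sketch (names and shapes hypothetical): the action itself always proceeds, but only confirmed, labeled actions ever reach the recommender:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    alters_recommendations: bool  # would be labeled in-band in the UI

def record_signal(user: str, action: Action, confirmed: bool = False):
    # The action itself (watching, posting) happens regardless; we only
    # decide whether it may be stored as a personalization signal.
    if action.alters_recommendations and not confirmed:
        return None  # unconfirmed: nothing enters the recommendation engine
    return {"user": user, "action": action.name}
```

The incentive follows from the friction: every input a platform wants to personalize on costs it a confirmation prompt, so it will keep that list short.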
> Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
In practice, I suspect this will make nearly every online interaction - posting a comment, viewing a video, liking a post, etc. - accompanied by a confirmation prompt telling the user that this action will affect their recommendations, and pretty quickly users will just hit confirm instinctively.
E.g. when viewing a youtube video, users often have to watch 3-5 seconds of an ad, then click "skip ad", before proceeding. Adding a 2nd button "I acknowledge that this will affect my recommendations" is actually a pretty low barrier compared to the interactions already required of the user.
The end result: a web with roughly the same recommendation systems, just with the extra enshittification of the now-mandated confirmation prompts.
Watching the content that is being served to you is a passive decision. It's totally different from clicking a button that says you want to see specific content in the future. You show me something that enrages me, I might watch it, but I'll never click a button saying "show me more stuff that enrages me". It's the platform taking advantage of human psychology and that is a huge part of what I want to stop.
>it remains unclear how you're constructing a set of criteria that does Hacker News, and plenty of other sites but not Meta.
I already said "This will still impact HN because of stuff like the flame war downranker...". I don't know this and your reply to my other comment seem to be implying that I think HN is perfect and untouchable. My proposal would force HN to make a choice on whether to change or lose 230 protections. I'm fine with that.
Again, something like counting the number of views on a video is, in your framing, not an active choice on the part of the user. So simply counting views and floating popular content to the top of a page sounds like it'd trigger loss of Section 230 protections.
The key word there is "page". I have no problem with news.ycombinator.com/active, but that is a page that a user must proactively seek out. It's not the default or even possible to make it the default. Every time a user visits it, it is because they decided to visit it. The page is also the same for everyone who visits it.
E.g. it only applies to companies with revenue <$10m. Or services with <10,000 active users. This allows blogs and small forums to continue as is, but once you’re making meaningful money or have a meaningful user base you become responsible for what you’re publishing.
If a web site makes a good faith effort to moderate things away that could get them in trouble, then they shouldn't get in trouble. And if they have a policy of not moderating or curating, then they should be treated like a dumb pipe, like an ISP. They shouldn't be able to have their cake (exercise editorial control) and eat it too (enjoy liability protection over what they publish).
If you're doing editorial decisions, you should be treated like a syndicator. Yep, that means vetting the ads you show, paid propaganda that you accept to publish, and generally having legal and financial liability for the outcomes.
User-supplied content needs moderation too, but with them you have to apply different standards. Prefiltering what someone else can post on your platform makes you a censor. You have to do some to prevent your system from becoming a Nazi bar or an abuse demo reel, but beyond that the users themselves should be allowed to say what they want to see and in what order of preference. Section 230 needs to protect the latter.
The thing I would have liked to see a long time ago is for the platforms/syndicators to have an obligation to notify users who have been subjected to any kind of influence operation. Whether that's political pestering, black propaganda, or even an out-and-out "classic" advertising campaign should make no difference.
If Hacker News filled their front page with hate speech and self-harm tutorials there would be public outcry. But Facebook can serve that to people on their timeline and no one bats an eye, because Facebook can algorithmically serve that content only to people who engage with it.
Yup. Accountable.
Heck, even 4chan wouldn't qualify, because despite considerably looser content rules they still actually do perform moderation.
I would like to tweak my own feed
If my dog bites somebody, I'm on the hook. It should be no different with companies.
We have to create incentives to not invest in troublesome companies. Fines are inadequate, they incentivize buying shares in troublesome companies and then selling them before the harm comes to light.
Blindly letting a CEO commit crimes should itself be a crime, but only if there's something you could've done to prevent it--that's not most shareholders.
> corporate death penalty
I don't know man these don't seem very specific. From your whole comment I do agree Mark should be in jail
This is what I meant by angry mob pitchfork ideas. This isn’t a real idea, it’s just rage venting.
It’s also wrong, as anyone familiar with the problems in pay-to-play social video games for kids, which are not ad supported, can tell you. These platforms have just as many problems if not more, yet advertising has nothing to do with it. I bet you could charge $10/month for Instagram and the same social problems would exist. It’s a silly suggestion.
The mere fact that commenters think banning advertising is a simple and realistic idea, without any constitutional road blocks or practical objections, is what I mean when I say these comment sections are just angry bloviating with unrealistic expectations.
If you think banning all advertising is “simple” then I don’t know what to say, but there isn’t a real conversation here.
constitutional roadblock…to banning digital advertisement? please do explain!
I didn’t claim it’s easy to get it done in the real world, but it’s not a reactive/vindictive pitchfork idea. it’s really not that hard, if people wanted it we’ve banned plenty of things at the federal level in this country over the years (the hard part is of course people realizing how detrimental digital advertising is)
it’s a simple solution that’s very effective. obviously any large-scale change, to fix a large-scale problem, is not “simple” to implement, but it’s also not fucking rocket science on this one mate
you’re clearly not having a conversation in good faith. you asked, I answered, I’m done with this
What constitutes an advertisement is not a simple proposition. eg Is a paragraph describing some facts (phrased carefully) about a product or company an advertisement?
The extent to which speech would have to be controlled to enforce this is unthinkable. While some handwaving is necessary, as anyone can agree (since even the simplest legislation would be corrupted by the US political class), "banning advertising" is not a practical goal.
Why stop there? Why not just shut down the whole internet? Simple and effective. Ban cell phones. Simple and effective.
These are just silly ways of thinking about the world.
please stop ascribing intent I do not have and words I did not say in your juvenile attempt to win an argument
p.s. still would love to hear your constitutional argument against it! banning digital advertisement at the federal level is not unrealistic and if you've actually given it the thought you’re pretending to and still reach that conclusion, I do have an ad hominem to throw back at you
You don’t need to hear my argument against it. The fact that advertising your services is free speech is well established. It’s a major challenge for movements like those trying to tackle pharmaceutical advertising.
Also, if you can’t see how I’ve been addressing your arguments and you think it’s all ad hominem then I don’t think there’s any real conversation to be had here. Between all the downvotes you’re collecting and the weird attempts to ignore everything I say and pretend it’s ad hominem as a defensive tactic, this is pure trolling at this point.
1) downvotes: you’re the one insinuating HN commenters (and presumably voters) are idiots; I’m not sure that I should care if I’m downvoted while correct. and regardless, doesn’t seem like I’m very downvoted (rather the opposite) so not sure what your point is. try making one next time!
2) freedom of speech: lol! I just want to point out I had no fucking clue that’s what you were angling for before. rather than launch into attacks as you do, I actually try to understand things. this argument doesn’t concern me at all, I was worried I wasn’t aware of something in the constitution you’d brilliantly raise
we are beyond having a conversation at this point, but if you actually raised your arguments against banning digital advertisement (freedom of speech and ??? solving real-world problems is hard?) I would have debated them on their merits, you troll
I don't watch regular TV, anymore, so I don't know if it still is in place.
Mentioning "banning advertising" on HN is bound to draw downvotes. A significant number of HN members make money directly, or indirectly, from digital advertising.
It's like walking into a mosque, and demanding they allow drinking.
Won't end well.
Either I misunderstand something or I'm baffled how anyone can consider that easy.
At least in my state, there isn’t even a ban on advertising online gambling!! It is quite a stretch to think we could move from there to banning any kind of advertising.
It has nothing to do with the fact that a bunch of HN readers make money from ads. I don’t.
I wish we could discuss the issue here. I would have liked to hear from you why you think it is a politically unrealistic proposal, and what your criteria are for deeming something politically unrealistic.
Of course not, clearly you just need a captured congress and an EO. Can’t be too hard to find a reason to turn Trump against Zuckerberg.
https://tobaccocontrol.bmj.com/content/early/2025/01/22/tc-2...
Why do you think it would be ineffective here?
I'm also curious on how you think we might tackle these issues.
They don’t want anyone to be able to advertise anything. Not even your local contractors trying to advertise their businesses that you want to find, because that’s advertising.
The tobacco ad ban isn’t relevant to what was claimed.
This wasn't my reading of it, but it does appear that's what GP meant. I don't agree with that. Even so, if you were interested in having a good faith discussion about solutions here, you might have responded to both interpretations.
You may consider this me putting forth the suggestion as an answer to your question, if you must.
Then when states start doing things like adding ID requirements for websites it’s shock and rage as the consequences of banning things (even for under 18s) encounter the realities of what happens when you “just ban” things.
ID requirements seem like the main burden is being put on ordinary people instead of corporations, and by extension seems clearly bad.
What does that have to do with anything?
It doesn’t matter where you ban it, if you turn off oil overnight a lot of people are left stranded from their jobs, sectors of the economy collapse, unemployment becomes out of control.
Banning things like this is just fantasy talk that only makes sense to people who can’t imagine consequences or think they don’t care. I guarantee you would change your mind very quickly about banning oil overnight as soon as the consequence became obvious.
Who lost their job when leaded gas was banned? A web search did not give me any examples.
GP (and I) have given you several examples of stuff society learned was harmful and then phased out with regulations/legislation. No, it didn't and does not happen overnight.
Why are you acting in such bad faith, trying to disregard people you don't agree with as "not being able to imagine consequences"?
you just got to get enough people to nod at “…and this is caused by the underlying incentives from digital advertisement” then to “and the most effective course of action is to ban digital advertisement”
I truly don’t believe it’s a big leap, especially after a few more years of all this
Keep in mind that not very long ago some random person assassinated an insurance CEO and many people's reaction was along the lines of "awesome, that fat cat got what he deserved"
Don't underestimate how much of society absolutely loathes the upper class right now.
I would bet that many people are one layoff away from calling for execs to get much worse than jail
…why not?
> she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.”
There’s no way in hell this didn’t tacitly incentivize the facilitation of trafficking activities through the site.
But then again, the EU are a bunch of vacuous chicken shits incapable of pulling their heads out of their arses, never mind safeguarding their own children.
Confiscate their wealth
The officers and board of the company aren’t protected by the corporate veil concerning their actions. They retain some degree of protection from actions of others within the corporation provided they did not have (or did not have a reason to suspect) knowledge of those activities. But to my knowledge that’s not special to officers, it applies to any employee which is why the rank and file Enron employees didn’t get prosecuted.
Right, that's just normal individual criminal prosecution; it doesn't require prosecuting the corporation.
Of course, it's possible for the corporation to be guilty of a crime without any individual officer or board member being guilty.
> A corporation isn’t a magical immunity shield for them - for some reason prosecutors have shied away from piercing the corporate veil.
Piercing the corporate veil is holding shareholders liable for certain debts (including criminal or civil judgements) of the corporation. It has nothing directly to do with criminal prosecution of corporate officers or board members for crimes committed in the context of the corporation (though there are certainly cases where both things are relevant to the same circumstances.)
Be careful what you wish for.
Who else should go to criminal court for facilitating human trafficking? The airlines because they sold flights to these people, statistically speaking? What if they used a messaging app you use, like Signal? Should the government shut that down or ban it too? I have a feeling these calls to regulate platforms don’t extend to platforms actually used by commenters, they just want certain platforms they don’t use shut down and don’t care how much the law is bent to make it happen, as long as the law isn’t stretched for things they do like.
To avoid nitpicking, op probably should have said knowingly facilitates, but this is conversation not legislation and 99% of readers probably understood that.
Georgism gives a good lens on these kinds of issues. All of a sudden, late-stage capitalism starts looking like monopolies.
To be obvious enough to downplay, it must be impossible to miss while looking the other way. To be impossible to miss, it must be inextricably linked to the profits.
Maybe those fat bonuses and generous stock options wiped away the feelings of guilt, if these Silicon Valley sociopaths even felt any in the first place.
So although this is being spun as “trafficking”, that doesn’t seem accurate.
This classification sounds like it includes selling “your own services”.
Yep. You can engage in sex trafficking 16 times with a warning. The 17th is just too much, dude.
What sort of a deranged psychopath comes up with these rules?
Edit: Meta employees, downvoting this comment won’t absolve you of your involvement in the largest child abuse organization we’ve seen yet. Look what your own company said about what it’s doing to teenage girls.
You won't want to miss this breaking story - water is wet.
Every tech company is harming the public for profits.
At this point nothing surprises me from Meta.
the current phase of social media is basically the scraping of minds. we throw hundreds of thousands of narrowly defined contexts at them, in different states and in between them. our systems learn, assimilate, adapt.
the world is going up in flames. we didn't do that. and we'd try to change it but we have a lot of data. it can't be done.
we've seen the good, the bad and the ugly. and we don't drink cheap milk. the food of the cows we get our milk from costs more than the compounded wealth of all graduates of an average European university at the end of their 30s and the health of the cows we get our milk from is better monitored, too. all for one glass of milk.
my first sentence was a provocation.
your teens will be fine. worry about their environment, not us. we know we lie in court. and you know why. because they kindly ask us to. they have friends and friends of friends who have been up our asses from day one. you think we know a lot? your intelligence services know a lot more. your local administrations and teachers conspire against your teens more often than you would like to know. some guys in your police likely know, too. as do your journalists and TV channels. it's more or less on a need to know, serve to know basis.
you think big tech is the threat? think again.
A/N: mostly gibberish, since I have no idea what I'm talking about but you're all wielding advanced tools and you're networked. you get it. I could kindly ask you to stop babbling. but without leverage nobody benefits.
it's 2025.
genetic variety. someone sampled every single cell in a human body even before they died (wahahahaha) ... no obfuscation after millions of years in safe enough environments for the sake of survival in zones in the middle between safety and "DANGER ZONE" ... sure, sure ...
tech is going well. democracy, ... well ... the pace of advancement is ... I mean ... retarded? impeded? "balanced"?
punched drugs on the streets? yes. I mean, it's a market, right? gudd for TV tropes, ey :D?
I don't even wanna do this, man. FFS.