> NAQ (Never Asked Questions)
> My website is on your list!
> Cry about it.
That's quite a suspicious attitude. Clearly the maintainer believes he is infallible. I understand the emotions behind this, but this is not how a public blacklist should be maintained.
Cry about it.
There's nothing in that repo that even pretends to be flawless, impartial or anything else. The sheer mental denial of service of having to deal with SEO slopshitters opening issues promising that their Substack is totally written by hand makes this an impossible task.
Ban first, ask questions later. If you find that some rules are unfair, edit them yourself, for your personal usage.
Easylist and its sublists are notorious for being poorly maintained and for ignoring issues opened against them. Adguard is much more active in maintaining its lists. In particular, Adguard's language blocklists have far less breakage and far fewer missed ads than Easylist.
> A personal list for uBlock Origin
These days anticheat software is likely to snap at anything. Who knows what they think of the development tools Hacker News users are likely to have on their computers? They really hate virtual machines for example. There's no telling how they'd react to a debugger or profiler.
But if it’s the author’s blocklist that is wrong, unverified, and causing harm to others? Cry about it.
A nice alternative to this very broad anti-AI list: https://github.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist
Edit: Oh, I should mention I found it through Reddit, and there is some good discussion there where they describe how they find sites, etc.: https://www.reddit.com/r/uBlockOrigin/comments/1r9uo3j/autom...
But I wouldn't call the person who maintains the newsletter-popup block list a "newsletter hater".
He's not complaining that widgets for his favorite social network are getting blocked; he's complaining that anything vaguely related to social networks is getting banned. Some of the sites on that list are things like chatgpt.com, which might be AI-related but clearly doesn't fit the criterion of "AI generated content, for the purposes of cleaning image search engines".
While I applaud the honesty of sites that are open about their content being AI-generated, that type of content is never what I'm looking for when I search, so if they're in my search results it's just more distraction/clutter drowning out whatever I'm actually looking for. Blocking them improves my search experience slightly, even though there are of course still lots of other unwanted results remaining.
Granted, I definitely count as an AI hater (speaking of LLMs specifically). But even if I weren't, I don't think I'd be seeking it out specifically using a search engine; why would I do that when I could just go straight to ChatGPT or whatever myself? Search is usually where people go to find real human answers (which is why appending "reddit" to one's searches became so common). So I see this as a utility thing more than an "I am blocking all this just because I hate it" thing. Although it can be both, certainly.
Edit: removed an off-topic tangent
Edit: https://gist.github.com/SMUsamaShah/6573b27441d99a0a0c792431...
The big anti-AI list also seems to be focused on hiding links from DDG/Bing/Google, whereas this new, more focused list just blocks the sites. I tend to prefer the blocking ones over hiding, because blocked sites pop up a nice warning no matter where I came from, and I can still decide to ignore it if I want, so there is more user agency instead of just quietly hiding an unclear chunk of the net from search engines.
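For reference, a minimal sketch of what the two approaches look like in uBlock Origin filter syntax (example-ai-site.com and the selector are hypothetical; the actual lists use their own, more specific rules):

    ! Network filter: blocks the site itself; uBO shows a warning page you can still click through
    ||example-ai-site.com^
    ! Cosmetic filter: quietly hides matching results on the search pages only
    google.*,duckduckgo.com##div:has(a[href*="example-ai-site.com"])

The network filter is what gives the extra agency: you hit the warning page no matter where the link came from, and you can still choose to proceed.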
> All I hear is skill issue. Imagine needing an AI to write stuff.
Grammarly users (and underrepresented non-English speakers) would complain.
E.g., someone bought a domain that previously hosted AI content.
E.g., Whitehouse.com used to be a porn site, now it’s not.
That being said, this project seems focused on content farms, not on people who just need a little help writing, so this whole conversation is a bit of a side tangent.
I cannot imagine what it means. To me it reads like "I know someone who can run very fast but has no legs."
Unfortunately, our company is trying to be "AI First", so they'll just point to that and continue their bullshit.
Our company literally promotes AI slop over personally made content, even if it's mediocre crap. All they care about is rising usage numbers for things like Copilot in Office.
I get why it's tempting: good translators are expensive, and few and far between. A friend of mine is a professional translator, and she's not exactly in need of work, but a lot of customers look at her prices and opt for machine translation instead, and the results are not always impressive. Errors range from wrong words and bad sentence structure to an inability to correctly translate cultural references.
I know that some people translate my French posts to read them. That's really cool. But I would never post something I didn't write myself (though I do use spellchecking tools, and I even sometimes disagree with them).
For your personal hobby site or for general online communication, you probably shouldn't use machine translation, but it is probably useful if you have B1 language skills and are checking up on your grammar, vocabulary, etc. As for using LLMs to help you write, I certainly prefer that people use the traditional models over LLMs, as the traditional models still require you to think and force you to actually learn more about the output language.
For reading somebody else's content in a language you don't understand, machine translation is fine up to a point, as long as you are aware that it may not be accurate.
---
† In fact, I personally think the EU should mandate translator qualifications, and it probably would have done so only 20 years ago, when consumer protection was still a thing they pretended to care about.
OP is going after AI-slop bot farms like Android Authority.
In the enterprise space, there are URL reputation providers. They categorize sites based on different criteria, and network administrators block or warn users based on that information.
In my humble opinion, there needs to be a crowdsourced fund (or ideally governments would take this seriously and fund it on behalf of people) for enabling technologies that allow user-friendly internet experiences: browsers, frameworks, VPN providers, site reputation, deceptive content, DNS providers, email providers, trusted certificate authorities (no, Google and Microsoft shouldn't get to police that), nation-state or corporate affiliations, etc. You shouldn't need to set up a Pi-hole.
Imagine a $1B/yr non-profit fund for this stuff. If 10M people paid $10/mo, that's $1.2B/yr. Proton had $97M in revenue in 2024 and 100M total accounts (I don't know how many pay, but the spread is roughly $1/user). I really think now is the time to talk about this, when so many are wary of US tech giants and looking for other opportunities.
Metaphorically speaking, it’s the Borg we’re dealing with, not the Klingons. All Janeway did was slow the Borg’s progress.
He may not be too far off.
From the story:
“Spam-filters, actually. Once they became self-modifying, spam-filters and spam-bots got into a war to see which could act more human, and since their failures invoked a human judgement about whether their material were convincingly human, it was like a trillion Turing-tests from which they could learn. From there came the first machine-intelligence algorithms, and then my kind.”
Reminds me of twitch.tv trying to remove "blind playthrough" as a tag to encourage inclusive language. [1]
1. https://www.reddit.com/r/Twitch/comments/k7dvgw/twitch_remov...