1. Does not default to running post-install scripts (must manually approve each)
2. Lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up.
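For reference, both knobs live in pnpm's config; a minimal sketch, assuming pnpm >= 10.16 (where `minimumReleaseAge` landed; the value is in minutes, and the `esbuild` entry is just an illustrative allow-list item):

    # pnpm-workspace.yaml
    minimumReleaseAge: 5760     # don't install versions younger than 4 days
    onlyBuiltDependencies:      # only these packages may run install scripts
      - esbuild

pnpm can also populate that allow-list interactively via `pnpm approve-builds`.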
NPM is too insecure for production CLI usage.
And of course, make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got the publish keys, they couldn't publish from local. (Granted, GHA fans can use OIDC Trusted Publishers as well, but tokens done well are just as secure.)
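The CIDR binding, at least, is directly scriptable; a sketch (the range is illustrative, and the per-package scoping comes from npm's granular access tokens, which are configured on npmjs.com rather than via the CLI):

    npm token create --cidr=203.0.113.0/24   # token usable only from your runners' egress range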
It’s clear from the structure and commit history they’ve been working their asses off to make it better, but when you’re standing at the bottom of a well of suck it takes that much work just to see daylight.
The last time I chimed in on this I hypothesized that there must have been a change in management on the npm team but someone countered that several of the maintainers were the originals. So I’m not sure what sort of Come to Jesus they had to realize their giant pile of sins needed some redemption but they’re trying. There’s just too much stupid there to make it easy.
I’m pretty sure it still cannot detect a premature EOF during file transfer. It keeps the incomplete file in the cache, where the SHA hash check fails until you wipe your entire cache. Which means people with shit internet connections and large projects basically waste hours several times a week on updates that fail.
If they were trying, they'd stop doubling down on sunk costs and instead publicly concede that lock files - and how npm-the-tool uses them to attempt to ensure the integrity of packages fetched from npm-the-registry - are just a poor substitute for the content-based addressing that ye olde DVCS would otherwise be doing when told to fetch designated shared objects from the code repo. That concession should be accompanied by a formal deprecation of npm-install for use in build pipelines, i.e. in all the associated user guides and documentation and everything else pushing it as best practice.
npm-install has exactly one good use case: probing the registry to look up a package by name to be fetched by the author (not collaborators or people downstream who are repackaging e.g. for a particular distribution) at the time of development (i.e. neither run time nor build time but at the time that author is introducing the dependency into their codebase). Every aspect of version control should otherwise be left up to the underlying SCM/VCS.
I wonder what circumstances led to saying “this is okay we’ll ship it like that”
Not to diminish the facepalm but the blame can be at least partially shared.
Our UI lead was getting the worst of this during Covid. I set up an nginx forward proxy mostly for him to tone this down a notch (fixed a separate issue but helped here a bit as well) so he could get work done on his shitty ISP.
One key shift is that there is no packager anymore. It's just: trust the publisher.
Any language as big as Node should hire a handful of old Unix wizards to teach them the way, the truth, and the life.
Repositories like npm's and PyPI's contain many more packages than any Linux distro does. And the Linux Foundation actually gets funded.
There's a reason why most distributions don't ship upstream directly (except, basically, Arch).
I've by now grown to like HashiCorp Vault's/OpenBao's dynamic secret management for this. It's a bit complicated to understand and get working at first, but it's powerful:
You mirror/model the lifetime of a secret user as a lease. For example, a Nomad allocation or Kubernetes pod gets a lease when it is started, and the lease gets revoked immediately after it is stopped. We're kinda discussing if we could have this in CI as well - create a lease for a build, destroy the lease once the build is over. This also supports TTLs, TTL refreshes and enforced max TTLs for leases.
With that in place, you can tie dynamically issued secrets to this lease, and the secrets are revoked as soon as the lease is terminated or expires. This has confused developers with questionable practices a lot: you can print database credentials in your production job and paste them into a local database client, but as soon as you deploy a new version, those credentials are gone. It also gives you automated, forced database credential rotation for free through the max_ttl, including a full audit log of all credential accesses and refreshes.
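In Vault terms, that looks roughly like this (a sketch; the role and database names are made up, but the database secrets engine and `vault lease` commands are the standard machinery):

    # one-time setup: a role that mints short-lived DB users
    vault write database/roles/ci-build \
        db_name=appdb default_ttl=1h max_ttl=24h \
        creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';"

    # per build/allocation: issue credentials tied to a fresh lease
    vault read database/creds/ci-build

    # when the lease holder goes away: kill the credentials immediately
    vault lease revoke database/creds/ci-build/<lease_id>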
I know that would be a lot of infrastructure for a FOSS project by Bob from Novi Zagreb. But with some plugin work, for a company, it should be possible to hide long-term access credentials in Vault and supply CI builds only with privilege-dropped, enforced, short-lived tokens.
As much as I hate running after these attacks, they are spurring interesting security discussions at work, which can create actual security -- not just checkbox-theatre.
'npm ci' is some mitigation, but doesn't protect against getting hit when running 'npm i(nstall)' during development.
That said, I hard pin all our dependencies and get dependabot alerts and then look into updates manually. Not sure if I'm a rube or if that's good practice.
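One partial mitigation for the development-time case: tell npm never to run lifecycle scripts, then opt back in only where a package genuinely needs its build step (a sketch; `esbuild` is just an example of a package whose build you might re-run deliberately):

    npm config set ignore-scripts true            # never run pre/post-install scripts
    npm ci --ignore-scripts                       # same thing per-invocation, e.g. in CI
    npm rebuild esbuild --ignore-scripts=false    # selectively run the one build you trust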
Unfortunately you need to `npm login` with username and password in order to publish the very first version of a package to set up OIDC.
Let's say you have a limited-life, package-scoped, IP/CIDR-bound publishing key, running on a private GH workflow runner. That key only exists in a trusted cloud's secret store (i.e. no one can access it from their laptop).
Now let's say you're a "trusted" publisher, running on a specific GitHub workflow and GitHub org that has been configured with OIDC on the npm side. By virtue of simply existing in that workflow, you're now an npm publisher (run any publish commands you like). No need to have a secret passed into your workflow scope.
If someone is taking over GitHub CI/CD workflows by running `npm i` at the start of their workflow, how does the "Trusted Publisher" find themselves any more secure than the secure, very limited scope token?
> Does not default to running post-install scripts (must manually approve each)
To get equivalent protection, use `--only-binary=:all:` when running `pip install` (or `uv pip install`). This prevents installing source distributions entirely, using exclusively pre-built wheels. (Note that this may limit version availability or even make your installation impossible.) Python source packages are built by following instructions provided with the package (specifying a build system which may then in turn be configured in an idiosyncratic way; the default Setuptools is configured using a Python script). As such, they can effectively run a post-install script.
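Concretely (the package name is a placeholder; the env var form works because pip maps options to `PIP_*` variables, which is handy in CI):

    pip install --only-binary=:all: somepackage
    export PIP_ONLY_BINARY=:all:     # apply the same policy to every subsequent pip run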
(For PAPER, long-term I intend to design a radically different UI, where you can choose a named "source" for each package or use the default; and sources are described in config files that explain the entire strategy for whether to use source packages, which indexes to check etc.)
> Lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up.
Pip does not support this; with uv, use `--exclude-newer`. This appears to require a timestamp, so if you always want to exclude anything newer than X days you'll have to recalculate it each run.
I do this by having my shell init run:

    export UV_EXCLUDE_NEWER=$(date -u -I -d "14 days ago")   # GNU date: ISO date, two weeks back
That’s easy to override if you need to but otherwise seamless.

pnpm reads its rc config from:
Windows: ~/AppData/Local/pnpm/config/rc
macOS: ~/Library/Preferences/pnpm/rc
Linux: ~/.config/pnpm/rc
> The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file.
Each of those deps contains a constraint installing only for the relevant platform.
> Unlike other npm clients, Bun does not execute arbitrary lifecycle scripts like postinstall for installed dependencies. Executing arbitrary scripts represents a potential security risk.
https://bun.com/docs/pm/cli/install#lifecycle-scripts
> To protect against supply chain attacks where malicious packages are quickly published, you can configure a minimum age requirement for npm packages. Package versions published more recently than the specified threshold (in seconds) will be filtered out during installation.
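Per the docs quoted above, that would be a bunfig.toml entry along these lines (a sketch; the value is in seconds):

    [install]
    minimumReleaseAge = 345600   # ignore versions published within the last 4 days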
Funny that this is getting downvoted, but it installs dependencies super fast and has the same approval feature as pnpm, all in a simple binary.
Not sure what your analogy is trying to imply.
NPM was never "too insecure" and remains not "too insecure" today.
This is not an issue with npm, JavaScript, NodeJS, the NodeJS foundation or anything else but the consumers of these libraries pulling in code from 3rd parties and pushing it to production environments without a single review. How this still flies today, and has since the inception of public "easy to publish" repositories, remains a mystery to me.
If you're maintaining a platform like Zapier, which gets hacked because none of your software engineers actually review the code that ends up in your production environment (yes, that includes 3rd party dependencies, no matter where they come from), I'm not sure you even have any business writing software.
The internet has been a hostile place for so long that most of us "web masters" are used to it by now. Yet it seems developers of all ages fall into the "what's the worst that can happen?" trap, whether pulling in one dependency with 10K LoC without any review, or 1000s of dependencies with 10 lines each.
Until you fix your processes and workflows, this will continue to happen, even if you use pnpm. You NEED to be responsible for the code you ship, regardless of who wrote it.
GP is correct. This is a workflow issue. Without a review process for dependencies, literally every package manager I know of is vulnerable to this. (Yes, even Maven.)
Many such forms of malware have already been published and detected.
> Who manually reviews new versions of their project dependencies after installing them but before running them?
One person putting in this effort can protect everyone thereafter.
The PyPI website has a "Report project as malware" button on each project page for this purpose.
But yes, this is the world we live in. Without this particular form of insecurity, there is no "ecosystem" at all.
Imagine reviewing every React update. Yes, some do that (Obsidian claims to review every dependency, whether new or an update), but that's due to flaws of the ecosystem.
Take a look at Maven Central. It's harder to get into, but that's the price of security. You have to verify the namespace, so no one can publish under e.g. the `io.gitlab.bpavuk.` namespace unless they have access to the `bpavuk` GitLab group or user, or under `org.jetbrains.` unless they prove ownership of the jetbrains.com domain.
Go is also nice in that regard - you are depending on Git repositories directly, so you have to hijack into the Git repo permissions and spoil the source code there.
That in itself is scary, because Git refs are mutable: there is nothing stopping someone from replacing a Git tag with one that points to compromised code. Even with compromised credentials, no one can replace artifacts already deployed to Maven Central, because they simply don't allow it.
The surface area is smaller because Go does locking via go.sum, but I could certainly see a tired developer regenerating it over the most strenuous of on-screen objections from the go CLI.
If you care about security, you only have to care once, during the audit. And you can find a pretty high percentage of malware in practice without actually having a detailed understanding of the non-malicious workings of the code.
Libraries allow you to not think about what the code does at development time, which in general is much more significant than audit time. Also, importantly, they allow you not to have to design and write that part of the code.
I don’t control all the drivers on the road, and a company can’t magically turn all employees into perfect developers. Get off your high horse and accept practical solutions.
Sure, agree, that's why professionals have processes and workflows, everyone working together to build the greatest stuff you can.
But when not a single person in the entire company reviews the code that gets deployed and run by users, you have to start asking what kind of culture the company has, it's borderline irresponsible I'd say.
I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.
[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
A cooldown is a good idea, though.
This is only true if you install dependencies that break backwards compatibility.
Personally, I avoid this as much as possible.
Certainly, having a regular/automated update schedule may take less clock time in total (due to preserved knowledge etc.), and incur less long-term risk, than deferring updates until a giant, risky multi-version multi-dependency bump months or years down the road.
But if you have limited engineering resources (especially for a bootstrapped or cost-conscious company), or if the risks of outages now are much greater than the risks of outages later (say, once you're 5 years in and have much broader knowledge on your engineering team), then the calculus may very well shift towards freezing now, upgrading later.
And in a world where supply chain attacks will get far more subtle than Shai-Hulud, especially with AI-generated payloads that can evolve as worms spread to avoid detection, and may not require build-time scripting but defer their behavior to when called by your code - macro-level slowness isn't necessarily a bad thing.
(It should go without saying that if you choose to freeze things, you should subscribe to security notification services that can tell you when a security update does release for a core server-side library, particularly for things like SQL injection vulnerabilities, and that your team needs the discipline to prioritize these alerts.)
Indeed, there are people doing that, and communities where there's a consensus that such an approach makes sense, or at least is not frowned upon. (Hi, Gophers)
Problem is, code bases are continuously evolving. A safe decision now might not be a safe decision in the future. It's very easy to accidentally introduce a new code path that does make you vulnerable.
This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt. You shouldn’t have to take new features (and associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with parallel maintenance branches for each major feature branch, and comfortable with backporting fixes to older releases.
For open source, well these are volunteer projects on my own time, you are always welcome to fork a given version and backport any fixes that land on main/master.
For commercial libs, our users are not willing to pay extra for this service, so we don't provide it. They would rather stay on an old version and update the entire code base at given intervals. Even when we do release patch versions, there is surprisingly little uptake.
First time I've heard that. How does semver facilitate backporting?
Of course, the actual issue is that maintaining backports isn't free, so expecting it from random single-person projects is a little unrealistic. Bug fixes in new code often need to be rewritten to work on old code. I do maintain old release branches for some projects and backporting single patches can cause whole new bugs that were never present in the main branch quite easily.
- it usually contains improvements to security
- except when it quietly introduces security defects which are discovered months later, often in a major rev bump
- but every once in a while it degrades security spectacularly and immediately, published as a minor rev
We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker started to track the dependency tree so people stopped being afraid of the release process.
I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so the handful of deployables had a lockfile each that was only utilized to do hotfix releases without changing the dep tree out from underneath us. Artifactory helps only a little here.
Also, some software is always buggy, and every version is a mixed bag of new features, bugs and regressions. It could be due to the complexity of the problem the software is trying to solve, or because it's just not written well.
(This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.)
Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter.
The "until no one does" is not something that can happen in something like npm ecosystem, or even among the specific user of "left-pad".
uv lock --exclude-newer $(date --iso -d "24 hours ago")
uv is considering a native relative date option.

https://www.npmjs.com/package/npm-check-updates#cooldown

In one command:

    npx npm-check-updates -c 7

> Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period.
Seems like a big drawback to this approach.
I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, by users, then the cooldown will do nothing.
That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine.
And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place.
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
We've rotated keys and passwords, unpublished all affected packages and have pushed new versions, so make sure you're on the latest version of our SDKs.
We're still figuring out how this key got compromised, and we'll follow up with a post-mortem. We'll update status.posthog.com with more updates as well.
(There are a variety of ways to solve this, but the one I like best is automated publishing a la Trusted Publishing with environment mediated manual signoffs. GitHub and other CI/CD providers enable this.)
What you describe sounds like a process problem to me. If an $EARLY_EMPLOYEE is the only one with the deploy keys for what is a product of the company, then that’s a problem. If a deployment of that key library can be made without anyone approving it, that’s also a problem. But those are both people problems… and you can’t solve a people problem with a technical solution.
I don’t think it is a people problem in this case: the only reason there’s a person involved at all is because we’ve decided to introduce one as an intermediating party. A more misuse-resistant scheme disintermediates the human, because the human was never actually a mandatory part of the scheme.
Users who really want to could opt in to the bleeding edge.
Sure, it might be a little bit of noise, but if you get a notice @ 3am of an unexpected publishing, you can jump on unpublishing it.
NPM supports GitHub workflow OIDC and you can make that required, disabling all token access.
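A minimal sketch of such a workflow, assuming the package has already been configured for trusted publishing on npmjs.com; note there is no NPM_TOKEN secret anywhere, since the `id-token: write` permission is what lets npm exchange the runner's OIDC token for publish rights:

    # .github/workflows/publish.yml
    name: publish
    on:
      release:
        types: [published]
    jobs:
      publish:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          id-token: write
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 22
              registry-url: 'https://registry.npmjs.org'
          - run: npm install -g npm@latest   # OIDC publish needs a recent npm (>= 11.5.1)
          - run: npm publish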
Probably even safer to not have been on the latest version in the first place.
Or safer again not to use software this vulnerable.
Nearly all software you use is susceptible to vulnerabilities, whether it's malicious or enterprise taking away your rights. It's in bad taste to make a comment about "not using software this vulnerable" when the issue was widespread in the ecosystem and the vendor is already being transparent about it. The alternative is you shame them into not sharing this information, and we're all worse for it.
A short time ago, I started a frontend in Astro for a SaaS startup I'm building with a friend. Astro is beautiful. But it's built on Node. And every time I update the versions of my dependencies, I feel terrified I am bringing something into my server I don't know about.
I just keep reading more and more stories about dangerous npm packages, and get this sense that npm has absolutely no safety at all.
This is gonna ruffle some feathers, but it's only a matter of time until it happens in the Rust ecosystem, which loves to depend on a billion subpackages, and it won't be the fault of the language itself.
The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good idea security-wise. Ambivalent about Go: they have a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.
Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.
So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.
My personal experience (YMMV): Rust code takes 2x or 3x longer to write than what came before it (C in my case), but in the end you usually get something much more likely to work, so overall it's kind of a wash, and the product you get is better for customers - you basically front load the cost of development.
This is terrible for people working in commercial projects that are obsessed with time to market.
Rust developers on commercial projects are under incredible schedule pressure from day 0, where they are compared to expectations from their previous projects, and are strongly motivated to pull in anything and everything they can to save time, because re-rolling anything themselves is so damn expensive.
But their point is that "developing Rust" (as in, the entire process) ends up being a similar total effort to C, only with more up front "writing" and less work on the debugging phase.
Perhaps another way to phrase this: in Rust, you spend more time telling the compiler how your code is expected to work (making the borrow checker happy, adding sync traits on objects you "know" are thread safe because of how you use them or assurances the underlying hardware provides, etc etc etc). In return, the compiler does a lot of work to make sure the code will actually work how you think it's going to work.
A good example is a simple producer-consumer problem. Make a ring buffer. Push the current sys clk tick count register to the ring buffer every time there's a rising edge interrupt on a GPIO (e.g. hook up a button or something to it). Poll the ring buffer in another thread and as soon as there is a timestamp in the buffer, pop it and log it to a UART.
Compare writing this in C vs Rust. In the text book implementation of a producer-consumer ring buffer, you don't need locks.
In C, this is a problem I'd expect a senior-level candidate to be able to knock out in a 60 minute interview.
(aside: I'd never actually ask a question like this in an interview, because it heavily favors people who just happen to be familiar with the algorithm, which doesn't give me enough signal about the candidate. This is borderline like asking someone to implement a sort algorithm in an interview - it just tells you they know how to google or memorize things. /aside).
(aside 2: If I did ask this, I'd be more interested how they design the thing - I'd be looking for questions like "how many events per second? How should I size the ring buffer? Wait, you want me to hook up an IRQ to a button? Is there a risk of interrupt storm from bounce or someone being malicious? Do you want me to add a cooldown timer between events? If we overflow the ring buffer, what should the behavior be? How fast can the UART go on this system - can it even keep up with the input?" - I'd be far more interested in that conversation than actually seeing them write code for this. /aside2).
In Rust, it's a bit more tricky. You'll need to give the compiler hints (in the form of Sync traits that are no-ops) to tell it you know what you're doing is thread safe. It's not rocket science, but the syntax is kind of weird and it will take some putzing around or aid from your favorite AI to get it right.
All of Rust ends up like this - you must be more verbose telling the compiler your intent. In exchange, it verifies the code matches your intent. So the up front cost is higher.
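For the curious, the no-op hint mentioned above usually takes a shape like this wrapper (my sketch, not the parent's code; the name and the stated safety contract are illustrative):

    // A cell that asserts cross-thread access is fine. The compiler can't
    // prove it, so we promise it: one ISR writer, one thread reader, with
    // the index handshake handled atomically elsewhere.
    use core::cell::UnsafeCell;

    #[repr(transparent)]
    pub struct RacyCell<T>(UnsafeCell<T>);

    // SAFETY: upheld by the SPSC usage contract described above.
    unsafe impl<T> Sync for RacyCell<T> {}

    impl<T> RacyCell<T> {
        pub const fn new(v: T) -> Self { RacyCell(UnsafeCell::new(v)) }
        pub fn get(&self) -> *mut T { self.0.get() }
    }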
I suspect a lot of people will just pull a ring buffer off crates.io instead of figuring out the right incantations to make the compiler happy.
Some things in Rust just don't translate to the way you'd do them in C - e.g. using different types to say whether a GPIO pin is input or output adds a ton of boilerplate, but lets the compiler assure you don't mistakenly try to use a pin configured as an input for output.
In general, the whole zero-sized-types paradigm in Rust leads to way more lines of code to accomplish the same thing (it all ends up compiled out in the end, though).
For embedded, I'll stand by what I said: it takes longer to write idiomatic Rust but you are more likely to get functionally correct code in the end.
Because from here, it seems like they’re having a great time, which is extremely at-odds with your comment.
On the topics it does cover, Rust's stdlib offers a lot. At least on the same level as Python, at times surpassing it. But because the stdlib isn't versioned it stays away from everything that isn't considered "settled", especially in matters where the best interface isn't clear yet. So no http library, no date handling, no helpers for writing macros, etc.
You can absolutely write pretty substantial zero-dependency rust if you stay away from the network and async
Whether that's a good tradeoff is an open question. None of the options look really great
Java included 'java.util.logging'. No one uses it because it's bad, everyone just uses slf4j as the generic facade instead, and then some underlying implementation.
Golang included the "log" package, which was also a disaster. No one used it so they introduced log/slog recently, and still no one uses that one, everyone is still using zap, logrus, etc.
Golang is also a disaster in that the stdlib has "log/slog" as the approved logging method, but i.e. the 'http.Server.ErrorLog' is the old 'log.Logger' you're not supposed to use, and for backwards compatibility reasons it's stuck that way. The go stdlib is full of awkward warts because it maintains backwards compatibility, and also included a ton of extra knobs that they got wrong.
The argument of structured vs unstructured logging, arguments around log levels (glog style numeric levels, trace/debug/info/warn/error/critical or some subset, etc), how to handle lazy logging or lazy computations in logging, etc etc, it's all enough that the stdlib can't easily make a choice that will make everyone happy.
Rust as a language has provided enough that the community can make an slf4j-like facade (conveniently, the clearly named 'log' crate seems to be the winner there), so there's no reason it has to be in the stdlib.
Rand has the issue of platform support for securely seeding a secure RNG, and having just an insecure RNG might cause people to use it when they really shouldn't. And serde is near-universal but has some very vocal opponents because it's such a heavy library. I have however often wished that num_traits were in the stdlib; it really feels like something that belongs in there.
The std has stability promises, so it's prudent to not add things prematurely.
Go has the official "flag" package as part of the stdlib, and it's so absolutely terrible that everyone uses pflag, cobra, or urfave/cli instead.
Go's stdlib is a wonderful example of why you shouldn't add things willy-nilly to the stdlib since it's full of weird warts and things you simply shouldn't use.
This is just social-media speak for "inconvenient in some cases". I have used the flag package in a lot of applications. It gets the job done and I have had no problems with it.
> since it's full of weird warts and things you simply shouldn't use.
The only software that has no problems is the software that hasn't been written yet. Is that the standard one should follow, then?
Too big for many cases; there is also a lot of discussion around whether to use clap or something smaller.
I honestly feel like that's one of Rust's biggest failings. In my ideal world libstd would be versioned, and done in such a way that different dependencies could call different versions of libstd, and all (sound/secure) versions would always be provided. E.g. reserve the "std" module prefix (and "core", and "alloc"), have `cargo new` default to adding the current std version in `cargo.toml`, have the prelude import that current std version, and make the module name explicitly versioned a la `std1::fs::File`, `std2::fs::File`. Then you'd be able to type `use std1::fs::File` like normal, but if you wanted a different version you could explicitly qualify it or add a different `use` statement. And older libraries would be using older versions, so no conflicts.
Curious, do you have specific examples of that?
That's some "small print" right there.
I'm all in favor of embiggening the Rust stdlib, but Rust and JS aren't remotely in the same ballpark when it comes to stdlib size. Rust's stdlib is decidedly not minimal; it's narrow, but very deep for what it provides.
Doing dev in a VM can help, but isn’t totally foolproof.
But NPM is more like “you’ve added me to your contact list, then it’s totally fine for me to enter your bedroom at night and wear your lingerie because we’re already BFF”. It’s “I’m doing whatever I want on your computer because I know best and you’re dumb” mentality that is very prevalent.
It’s like how zed (the editor) wants to install node.js and whatever just because they want to enable LSP. The sensible approach would have been to have a default config that relies on $PATH to find the language server.
I don't know if everyone will appreciate this, but I am in stitches right now... lol
If they see a gap in .NET which is filled by a third party, they have no qualms about implementing their own solution in .NET that meets their quality requirements. And to be fair, .NET delivers on that. This might anger some, but the philosophy is that it should be a batteries-included one-stop shop, maybe driven by the culture of quite some MS shops that wouldn't eat anything unless MS feeds it to them.
This has a consequence that the third-party ecosystem is a lot smaller, but I doubt MS regrets that. If you compare that to F#, things are quite different wrt filling in the gaps, as MS does not focus on F#. A lot of good stuff for F# comes from the community.
It would be lovely if Python shipped with even more things built in. I’d like cryptography, tabulate/rich, and some more featureful datetime bells and whistles a la arrow. And of course the reason why requests is so popular is that it does actually have a few more things and ergonomic improvements over the builtin HTTP machinery.
Something like a Debian Project model would have been cool: third-party projects get adopted into the main software product by a sworn-in project member who acts as quality control / a release manager. Each piece of software stays up to date but also doesn’t just get its main branch upstreamed directly onto everyone’s laps without a second pair of eyes going over what changed. The downside is it slows everything down, but that’s a side-effect of, or rather a synonym for, stability, which is the problem we have with npm. (This looks sort of like what HelixGuard do, in the original article, though I’ve not heard of them before today.)
I don't think languages should try to include _everything_ in their stdlib, and indeed trying to do so tends to result in a lot of legacy cruft clogging up the stdlib. But I think there's a sweet spot between having a _very narrow_ stdlib and having to depend on 160 different 3rd-party packages just to make a HTTP request, and having a stdlib with 10 different ways of doing everything because it took a bunch of tries to get it right. (cf. PHP and hacks like `mysql_real_escape_string`, for example.)
Maybe Python also has a historical advantage here. Since the Internet was still pretty nascent when Python got its start, it wasn't the default solution any time you needed a bit of code to solve a well-known problem (I imagine, at least; I was barely alive at that point). So Python could afford to wait and see what would actually make good additions to the stdlib before implementing them.
Compare to Rust, which _immediately_ had to run gauntlets like "what to do about async", with thousands of people clamoring for a solution _right now_ because they wanted to do async Rust. I can definitely sympathize with Rust's leadership wanting to do the absolute minimum required for async support while they waited for the paradigm to stabilize. And even so, they still get a lot of flak for the design being rushed, e.g. with `Pin`.
So it's obviously a difficult balance to strike, and maybe the solution isn't as simple as "do more in the stdlib". But I'd be curious to see it tried, at least.
At the top, you have the true standard library for the language. This has very strong stability guarantees. Its purpose is twofold: to provide universal implementations of essentials and to define standard/baseline interfaces for common needs like abstract data types, relational databases, networking and filesystems to encourage compatibility and portability.
Next, you have a tier of recognised but not yet fully standardised libraries. These might be contributed by third parties, but they have requirements for identifying maintainers, appropriate licensing and mandatory peer review of all contributions. They have a clear versioning policy and can make breaking changes in new major releases, but they also provide some stability guarantees along the lines of semver and older releases are normally available indefinitely. The purpose of this tier is to provide a wider range of functionality and/or alternative implementations, but in a relatively stable way and implementing standard interfaces where applicable to improve portability.
Finally, you have the free-for-all, anyone-can-contribute tier. This should still have a sane security model where people can’t just upload malware scripts that run automatically just because someone installed a package. However, it comes with few guarantees about stability or compatibility, except that releases of published packages will be available indefinitely unless there’s a very good reason to pull them where you obviously wouldn’t want to use one anyway. A package you like might be written by a single contributor who no longer maintains it, but if someone does write something useful that simply doesn’t need any further maintenance once it’s finished and does its job, there is still a place to share it.
IMHO, the trouble with that stance is that it leaves no path to incrementally update a long-lived system to benefit from any of those improvements.
Suppose we have an application that runs on 2025’s most popular platform and in ten years we’re porting it to whatever new platform is popular in 2035. Personally, I’d like to know that all the business logic and database queries and UI structure and whatever else we wrote that was working before will still be working on the new platform, to whatever extent that makes sense. I’d like to make only some reasonably necessary set of changes for things that are actually different between the two platforms.
If we can’t do that, our only other option is a big rewrite. That is how you get a Python 2 to Python 3 situation. And that, in turn, is how you get a lot of systems stuck on the older version for years, despite all the advantages any later versions might offer.
Sure it does? Or do I not understand? It pushes the work to the application, not the language.
And it works fine on rails. If you want to stay on an EOL rails, you're on your own, or you can buy ongoing security backports from at least 2 (that I know of, maybe more) vendors. LTS lives roughly 2 years and then you either upgrade or deal with eol yourself.
That Rust does not have standard implementations of commonly-used features (such as an async runtime) is problematic for supply chain security, since then everyone is pulling in dozens (or hundreds) of fragmented 3rd-party packages instead of working with a bulletproof standard library.
PHP is a fantastic resource to learn how to do proper backward compatibility and package management. By doing the exact opposite of whatever PHP does, mostly.
Most Rust programmers are mediocre at best and really need the memory-safety training wheels that Rust provides. Years of nodejs mindrot has somehow made pulling in random dependencies on irregular release schedules the norm for these people. They'll just shrug it off, come up with some "security initiative" and continue the madness.
Saying this as someone who is cautiously optimistic about Rust for my own work.
Since your comment starts with commentary on crates.io, I'll note that this has never been possible on crates.io.
> Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn’t settled in the beginning.
I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.
> package namespaces
See https://github.com/rust-lang/rust/issues/122349
> proof of ownership
See https://github.com/rust-lang/rfcs/pull/3724 and https://blog.rust-lang.org/2025/07/11/crates-io-development-...
https://rust-lang.github.io/rfcs/0940-hyphens-considered-har... was from 2015, and the other discussions I remember were around default style and the fact that cargo already blocks a crate when the normalized name is equal.
The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely-used but boring? I feel like attackers would get away with that for a period of time that could be weeks.
This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
In Go, access to the OS and exec requires certain imports, imports that must occur at the beginning of the file; this helps when scanning for malicious code. Compare this to JavaScript, where one could require("child_process") or import() at any time.
Personally, I started to vendor my dependencies using go mod vendor and diff after dependency updates. In the end, you are responsible for the effect of your dependencies.
I don’t believe I can do the same with Rust.
> In Go you know exactly what code you’re building thanks to gosum
Cargo.lock
> just create vendor dirs before and after updating packages and diff them [...] I don’t believe I can do the same with Rust.
cargo vendor
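So the Go workflow translates almost one-to-one (a sketch; the dependency name is illustrative):

    cargo vendor vendor/ && git add vendor/   # snapshot the current dep tree
    cargo update -p some-dep                  # take only the update you want
    cargo vendor vendor/
    git diff vendor/                          # review exactly what changed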
You can't, really, aside from full on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.
Requiring GPG signing of releases (even by just git commit signing) would help but that's more work for people to distribute their stuff, and inevitably someone will make insecure but convenient way to automate that away from the developer
Easy reason. The target for malware injections is almost always cryptocurrency wallets and cloud credentials (again, mostly to mine cryptocurrencies). And the overwhelming majority of stuff interacting with crypto and cloud, combined with a lot of inexperienced juniors who likely won't have the skill to spot that they got compromised, is written in NodeJS.
As for Windows vs the other OSes: yes, even the Windows NT family inherited the DOS and Win9x ecosystem and tried to maintain compatibility for users over security, up until it became untenable. So yes, the base _was_ bad when Windows was dominant, but it's far less bad today (which is why people target high-value targets via NPM etc., since it's an easier entry point).
Android/iOS is young enough that they did have plenty of hindsight when it comes to security and could make better decisions (Remember that MS tried to move to UWP/Appx distribution but the ecosystem was too reliant on newer features for it to displace the regular ecosystem).
Remember that we've had plenty of annoyed discourse about "Apple locking down computers" here and on other tech forums when they've pushed notarization.
I guess my point is that, people love to bash on MS but at the same time complain about how security is affecting their "freedoms" when it comes to other systems (and partly MS), MS is better at the basics today than they were 20-25 years ago and we should be happy about that.
Preventing the user from installing something that they want to install is another issue completely. I'm hesitant to call it exactly security, though I agree that it falls under the auspices of security.
As for "users intentionally installing malware", Windows in the early 00s had a bunch of fundamentally insecure deployment models like ActiveX controls and browsers (IE especially) were more or less swiss cheese in terms of security even outside the ActiveX controls.
Visiting the wrong webpage was often enough to get crap on your computer.
My view is that once you have bad native code running on your computer there's a large chance that it's game over (modern sandboxes like WASM were designed to enforce a provably safe subset, where regular kernel mistakes are shielded by another layer of abstraction that needs to be broken).
Even Linux has had privilege escalations every year as far as I know. Notarization/stores are just a way to try to keep check on what code runs on end-user computers that isn't sandboxed (and to allow revoking that code if found to be malicious). Maybe Linux is slightly safer still, but that's probably due to fewer old features in the kernel; Windows, for example, has recently gotten a rewritten font parser in Rust (the previous font parser was a common exploitation point that ran with too high a privilege).
Developers can feel free to not secure their computers or to sell their keys. But that doesn't mean npm should allow straight code pushes from their computers to everyone who has downloaded their library.
That and the package runtime runs with all the same privileges and capabilities as the thing you're building, which is pretty insane when you think about it. Why should npm know anything outside of the project root even exists, or be given the full set of environment variables without so much as a deny list, let alone an allow list? Of course if such restrictions are available, why limit them to npm?
The real problem is that the security model hasn't moved substantially since 1970. We already have all the tools to make things better, but they're still unportable and cumbersome to use, so hardly anything does.
> security model
Yep, some kind of seccomp or other kind of permission system for modules would help a lot. (E.g. if a 3rd-party library is parsing something and its API only requires a Buffer as input and returns some object, then it could be marked "pure"; if it supports logging then that could also be specified, and so on.)
Still, I think the "allow-scripts" section or whatever it's called should be named "allow-unrestricted-access-to-everything". Or maybe just stick "dangerously-" in front, I dunno, and drop it when the mechanism is capable of fine-grained privileges.
When I download a C project, I know that it only depends on my system libraries - which I trust because I trust my distro. Rust seems to expect me to take a leap in the dark, trusting hundreds of packagers and their developers. That might be fine if you're already familiar with the Rust ecosystem, but for someone who just wants to try out a new program - it's intimidating.
Though I will say, even as someone who works at a company that sells Linux distributions (SUSE), while the fact we have an additional review step is nice, I think the actual auditing you get in practice is quite minimal.
For instance, quite recently[1] the Debian package for a StarDict plugin was configured to automatically upload all text selected in X11 to some Chinese servers if you installed it. This is the kind of thing you'd hope distro maintainers would catch.
Though, having build scripts be executed in distribution infrastructure and shipped to everyone mitigates the risk of targeted and "dumb" attacks. C build scripts can attack your system just as easily as Rust or JavaScript ones can (in fact it's probably even easier -- look at how the xz backdoor took advantage of the inscrutability of autoconf).
[1]: https://www.openwall.com/lists/oss-security/2025/08/04/1
Any time I ever did the equivalent with NPM/node world it was basically unusable or completely impractical
Anyhow, here's a Claude.ai benchmark comparison: https://claude.ai/share/72d2c34c-2c86-44c4-99ec-2a638f10e3f0
Claude doesn't know this, of course, because it can only read superficial summaries posted on the internet and has zero real experience actually using this dumpster fire.
In .NET you can cover a lot of use cases simply using Microsoft libraries and even a lot of OSS not directly a part of Microsoft org maintained by Microsoft employees.
Even some of the Microsoft.* namespaces have properly moved into the BCL SDKs and no longer show up in dependency lists, even though Microsoft.* namespaces originally meant non-BCL first-party.
- Packages are always namespaced, so typosquatting is harder
- Registries like Sonatype require you to validate your domain
- Versions are usually locked by default
My professional life has been tied to JVM languages, though, so I might be a bit biased.
I get that there are some issues with the model, especially when it comes to eviction, but it has been "good enough" for me.
Curious on what other people think about it.
But realistically, I think the sum total of compromises via package managers attacks is much smaller than the sum total of compromises caused by people rolling their own libraries in C and C++.
It's hard to separate from C/C++'s lack of memory safety, which causes a lot of attacks, but the fact that code reuse is harder is a real source of vulnerabilities.
Maybe if you're Firefox/Chromium, and you have a huge team and invest massive efforts to be safe, you're better off with the low-dependency model. But for the median project? Rolling your own is much more dangerous than NPM/Cargo.
> The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good idea security-wise.
There was no decision in the case of C/C++; package managers were just not a thing languages had at the time, so the language itself (especially C) isn't written in a way that accommodates one nicely.
> Ambivalent about Go: they have a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.
Any code you download and compile is running code on the dev machine, and Go does have tools to run code in the compile process too.
I do, however, like the default namespacing by domain: there is no central repository to compromise, and forks of any defunct libs are easier to manage.
I really agree, and I feel like it's a culture difference. Javascript was (and remains) an appealing programming language for tinkerers and hobbyists, people who don't really have a lot of engineering experience. Node and npm rose to prominence as a wild west with lots of new developers unfamiliar with good practices, stuck with a programming environment that had few "batteries included," and at a time when supply chain attacks weren't yet on everybody's minds. The barriers to entry were low and, well, the ecosystem sort of reflected that. You can't wash that legacy away overnight.
Rust in contrast attracts a different audience because of the language's own design objectives.
Obviously none of this makes it immune, and you can YOLO install random dependencies in any programming language, but I don't think any language is ever going to suffer from this in quite the same way and to the same extent that JS has simply due to when and how the ecosystem evolved.
And really, even JS today is not the JS of yesteryear. Sure there are lots of bad actors and these bad NPM packages sneak in, but also... how widely are all of them used? The maturation of and standardization on certain "batteries included" frameworks, rather than ad hoc piecing stuff together, has reduced the likelihood of going astray.
1) No one forces you to use dependencies with large number of transitive dependencies. For example, feel free to use `ureq` instead of `reqwest` pulling the async kitchen sink with it. If you see an unnecessary dependency, you could also ask maintainers to potentially remove it.
2) Are you sure that your project is as simple as you think?
3) What matters is not number of dependencies, but number of groups who maintain them.
On the last point, if your dependency tree has 20 dependencies maintained by the Rust lang team (such as `serde` or `libc`), your supply chain risks are not multiplied by 20, they stay at one and almost the same as using just `std`.
Rust has already had a supply chain attack propagating via build.rs some years ago. It was noticed quickly, so staying pinned to the oldest thing that works and has had no CVE pop up in `cargo audit` is a decent strategy. The remaining risk is that some more niche dependency you use is, and always has been, compromised.
It's also very confusing (and I think those attack vectors benefit exactly from that), since you have a dependency, but the dep itself depends on another version of the same dep.
Building a basic CapacitorJS / Svelte app, as an example, results in many deps.
It might be a newbie question, but: is there any solution or workflow where you don't end up in this dependency hell?
Maybe I'm being a bit trite but the world of JavaScript is not some mysterious place separate from all other web programming, you can make bad decisions on either side of the stack. These comments always read like devs suddenly realizing the world of user interactions is more complicated and has more edge cases than they think.
Being incredibly strict with TS compiler and linter helps a bit.
Which is worse: writing potentially vulnerable code yourself, or having too many dependencies?
Finding vulnerabilities and writing exploits is costly, and hackers will most likely target popular libraries over your particular software, much higher impact, and it pays better. Dependencies also tend to do more than you need, increasing the attack surface.
So your C code may be worse in theory, but it is a smaller, thus harder to hit target. It is probably an advantage against undiscriminating attacks like bots and a downside against targeted attacks by motivated groups.
What would actually stop this is writing compilers and build systems in a way that isolates builds from one another. It's kind of stupid that all a compiler really needs is an input file, a list of dependencies, and an output file. Yet they all make it easy to root around, replicate and exfiltrate. It can be both convenient and not suffer from these style of attacks.
But the basic takeover... no, it usually won't affect any Debian style distro package, due to the release process.
That's a strong statement that I can see aging very badly.
Things like cargo-vet help as does enforcing non-token auth, scanning and required cooldown periods.
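For reference, the cargo-vet loop looks like this (a sketch; `init` exempts your existing tree so adoption can be incremental):

    cargo install cargo-vet
    cargo vet init      # records current deps as exemptions
    cargo vet           # fails when an unaudited dep/version shows up
    cargo vet certify   # record your own audit after actually reading the diff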
The safest code is the code that is not run. There is no lack of attacks targeting C/C++ code, and Odin is just a hobby language for now.
Must read: https://wiki.alopex.li/LetsBeRealAboutDependencies
TL;DR: ditch crates.io and copy Go: decentralized packages based directly on source repositories, plus an extended standard library.
Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
On the other hand, C / C++ style dependency management is even worse than Rust's... Both in terms of development velocity and dependencies that never get updated.
Don't make me tap the sign: https://news.ycombinator.com/item?id=41727085#41727410
> Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
They add a layer of convenience. C/C++ are missing that convenience because they aren't as composable and have a long tail of pre-package manager projects.
Java didn't start with packages, but today we have packages. Same with JS, etc.
My real worry, for myself re the parent comment, is: it's just a web frontend. There are a million other ways to develop it. The sober, cold risk assessment is: should we, or should we have, and should anyone else, choose something npm-based for new development?
I.e. not a question about potential risk for other technologies, but a question about risk and impact for this specific technology.
I installed the package, obviously I intend to run it. How does getting pwned once I run it manually differ from getting pwned once I install it? I’m still getting pwned
`#![no_std]`
(These are arguably good things in other contexts.)
The alternative that C/C++/Java end up with is that each and every project brings in its own Util, StringUtil, Helper or whatever class that acts as a "de facto" standard library. I personally had the misfortune of having to deal with MySQL's [1], Commons' [2], Spring's [3] and indirectly also ATG's [4] variants. One particularly unpleasant project I came across utilized all four of them, on top of the project's own "Utils" class that got copy-and-pasted from the last project and extended for this project's needs.
And of course each of these Utils classes has their own semantics, their own methods, their own edge cases and, for the "organically grown" domestic class that barely had tests, bugs.
So it's either a billion "small gear" packages with dependency hell and supply chain issues, or it's an amalgamation of many many different "big gear" libraries that make updating them truly a hell on its own.
[1] https://jar-download.com/artifacts/mysql/mysql-connector-jav...
[2] https://commons.apache.org/proper/commons-lang/apidocs/org/a...
[3] https://docs.spring.io/spring-framework/docs/current/javadoc...
[4] https://docs.oracle.com/cd/E55783_02/Platform.11-2/apidoc/at...
And what is wrong with writing your own util library that fits your use case anyway? In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency. No one sane will add a third-party git submodule, wire it to the main Makefile, just to left-pad a string.
Yeah, that's why I said that this is the other end of the pendulum.
> In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency.
Oh I'm aware of that. My point still stands - that comes at a serious maintenance cost as well, and I'd also say a safety cost because you're probably not wrapping your homebrew StringUtils with a bunch of sanity checks and asserts, meaning there will be an opportunity for someone looking for a cheap source of exploits.
In full generality, pretty hard. If you're just dealing with ASCII or Latin-1, no problem. Then add basic Unicode. Then combining characters. Then emojis. It won't be trivial anymore.
Well, if you're in C/C++, you always risk dealing with null pointers, buffer overruns, or you end up with use-after-free issues. Particularly everything working with strings is nasty and error-prone if one does not take care of proper testing - which many "homegrown" libraries don't.
And that's before taking the subtleties of character set encodings between platforms into account. Or locale. Or any other of the myriad ways that C/C++ and even Java offer you to shoot yourself in the foot with a shotgun.
And no, hoping for the best and saying "my users won't ever use Unicode" or similar falls apart on the first person copying something from Outlook into a multi-line paste box. Or someone typing in their non-Latin name. Oh, and right-to-left languages, don't forget about these. What does "pad from left" even mean there? Is the intent of the user still "at the beginning of the string itself?" Or does the user rather want "pad at the beginning of the word/sentence", which in turn means padding at the end of the string?
There's so much stuff that can go horribly horribly wrong when dealing with strings, and I've seen more than my fair share just reading e-mail templates from supposed "enterprise" software.
Totally 100% agree, though tools like cargo tree make it more of a tractable problem, and running vendored dependencies is first class at least.
The one I am genuinely most concerned about is Golang. The way dependencies are handled leaves much to be desired; I'm honestly surprised that there haven't been issues.
Every time I fire up "cmake" I chant a little spell that protects me from the goblins that live on the other side of FetchContent to promise to the Gods of the Repo that I will, eventually, review everything to make sure I'm not shipping poop nuggets .. just as soon as I get the build done, tested .. and shipped, of course .. but I never, ever do.
I understand that there's been some course correction recently (zero dependency and minimal dependency libs), but there are still many devs who think that the only answer to their problem is another package, or that they have to split a perfectly fine package into five more. You don't find this pattern of behavior outside of Node.
The medium is the message. If a language creates a very convenient package manager that completely eliminates the friction of sharing code, practically any permutation of code will be shared as a library. As productivity is the most important metric for most companies, devs will prefer the conveniently-shared third-party library instead of implementing something from scratch. And this is the result.
I don't believe you can have packaging convenience and avoiding dependency hell. You need some amount of friction.
It’s essentially remote execution a la carte.
Now will you trust that AI didn't include its own set of security issues and will you have the ability to review so much code?
Libraries will be providing raw tools - sockets, regex engines, cryptography, syscalls, specific file-format libraries.
LLMs will be building the next layer.
I have built successfully running projects now in Erlang, Scheme, Rust - I know the basic syntax of two of those, but I couldn't have written my deployed software in any of them in the couple of hours of prompting it took.
For the Scheme one it had to write a lot of code from first principles and warned me how laborious it would be - "I don't care, you are doing it."
I have tools now I could not have imagined I could build in a reasonable time.
If anything, blind reliance on LLMs will make this problem much worse.
Of course using more JSR packages does start to add more reason to prefer Deno to Node. Also, there are still some packages that are deno.land/x/ only (sort of the first version of JSR, but with no npm cross-compatibility) worth checking out. For instance, I've been impressed with Lume [1], a thoughtful SSG that's sort of the opposite of Astro in that it iterates at a slow, measured pace and doesn't try to be a kitchen sink, but is more of a workbench with a lot of tools that are easy to find. It's deno.land/x/ only for now, for reasons I don't entirely agree with, but I can't deny that JSR can be quite a step up in publishing complexity for not exactly obvious gain.
[0] https://jsr.io/
If this were true, wouldn't there have been at least one Maven attack by now, considering the number of NPM attacks that we've seen?
From what I remember (a few years old, things may have changed) they required devs to stage packages to a specific test env, packages were inspected not only for malware but also vulnerabilities before being released to the public.
NPM on the other hand... Write a package -> publish. Npm might scan for malware, they might do a few additional checks, but at least back when I looked into it nothing happened proactively.
Maven is also a bit more complex than npm and had an issue in the system itself https://arxiv.org/html/2407.18760v4
Perhaps its package owners do.
And I am genuinely thinking to myself, is this making using npm a risk?
edit: but if that's also compromised earlier... \o/
Attack an important package, and you can get into the Node and Electron ecosystem. That's a huge prize.
Serious answer: no.
I think I'm going to just use a static site generator, maybe add some WASM modules built with a language that has a sane package manager and enjoy my life instead of getting involved with this cluster of a show.
But in reality it has nothing to do with node/js. It’s just because it’s the most used ecosystem. So I really don’t understand the argument of not using node. Just be mindful of your dependencies and avoid updating every day.
* Many dependencies, so much you don't know (and stop caring) what is being used.
* Automatic and regular updates, new patch versions for minor changes, and a generally accepted best practice of staying up to date on the latest versions of things, due to trauma from old security breaches or big migrations after not updating for a while.
* No review, trust based self-publishing of packages and instant availability
* Opaque pre/post-install scripts
The fix is both cultural and technological:
* Stop releasing for every fart; once a week is enough, only exception being critical security reasons.
* Stop updating immediately whenever there's an update; once a week is enough.
* Review your updates
* Pay for a package repository that actually reviews changes before making them widely available. Actually, I think the organization behind NPM should set that up; there are trillion-dollar companies using the Node ecosystem who would be willing and able to pay for some security guarantees.
I know this is a controversial approach, but it still works well in our case.
"require": { "php": ">=8.0",
"ext-mbstring": "*",
"bcosca/fatfree-core": "3.9.1",
"phpmailer/phpmailer": "6.9.3",
"ruler/ruler": "0.4.0",
"matomo/device-detector": "6.4.7" }
1. https://github.com/tirrenotechnologies/tirreno
Does it require discipline and a project not run by developers who just learned to program? You betcha.
So yes, in comparison, modern vanilla PHP with some level of developer discipline (as you mentioned) is actually quite suitable, but unfortunately not popular, for low-dependency development of web applications.
With Swift on iOS/macOS for instance it’s not strange at all for an app to have a dependency tree consisting of only 5-10 third party packages total, and with a little discipline one can often get that number down to <5. Why? Because between the language itself, UIKit/AppKit, and SwiftUI, nearly all needs are pretty well covered.
I think it’s time to beef up both JavaScript itself as well as the platforms where it’s run (such as the browser and Node), so people don’t feel nearly as much of a need to pull in tons of dependencies.
But that's not true? I initialize a project locally, there are zero dependencies by default, and like I did five years ago, I can still build backend/frontend projects with a minimal set of dependencies.
What changed is what people are willing/OK with doing. Yes, it'll require more effort, obviously, but if you want things to be built properly, it usually takes more effort.
I agree that in any case, it's the courage/discipline that comes before the language choice when creating low-dependency applications.
The ones that most people use and some people complain about, and the ones that nobody uses and people keep advocating for.
It also ignores the central question of whether NPM is more vulnerable to these attacks than other package managers, and should therefore be considered an unreasonable security risk.
They are built for programmers, not users. They are designed to allow any random untrusted person to push packages with no oversight whatsoever. You just make an account and push stuff. I have no doubt you can even buy accounts if you're malicious enough.
Users are much better served by the Linux distribution model which has proper maintainers. They take responsibility for the packages they maintain. They go so far as to meet each other in person so they can establish decentralized root of trust via PGP.
Working with the distributions is hard though. Forming relationships with people. Participating in a community. Establishing trust. Working together. Following packaging rules. Integrating with a greater dynamic ecosystem instead of shipping everything as a bloated container whose only purpose is to statically link dynamic libraries. Developers don't want to do any of that.
Too bad. They should have to. Because the npm clusterfuck is what you get when you start using software shipped by totally untrusted randoms nobody cares to know about much less verify.
Using npm is equivalent to installing stuff from the Arch User Repository while deliberately ignoring all the warnings. Malware's been found there as well, to the surprise of absolutely no one.
An eco-system, if it insists on slapping on a package manager (see also: Rust, Go) should always properly evaluate the resulting risks and put proper safeguards in place or you're going to end up with a massive supply chain headache.
Sorry?
No, I'm the guy that does write all of his code from scratch so you're entirely barking up the wrong tree here. I am just realistic in seeing that people are not going to write more code than they strictly speaking have to because that is the whole point of using Node in the first place.
The Assembly language example is just to point out the fact that you could plug in at a lower level of abstraction but you are not going to because of convenience, and the people using Node.js see it no different.
JS is a perfectly horrible little language that is now being pushed into domains where it has absolutely no business being used (I guess you would object to running energy infrastructure on Node.js and please don't say nobody would be stupid enough to do that).
Node isn't fine it needs a serious reconsideration of the responsibilities of the eco-system maintainers. See also: Linux, the BSDs and other large projects for examples of how this can be done properly.
Perfect is the enemy of good; dependency cooldown etc is enough to mitigate the majority of these risks.
Familiarity breeds contempt.
I think JS is great. It's simple, anybody can use it.
TypeScript is excellent too. The structural type system is very convenient.
It's not going to replace Rust in cases where performance is essential or where you want strict runtime type checking or whatever, but for general use and graphical applications JS seems like a great pick.
I often hear people complain about JS, but really, how is it any worse than say Python?
You can't just scale out a team without assessing who you are adding to it: what is their reputation? where did they learn?
It's not quite the same questions when picking a library but it is the same process. Who wrote it? What else did they write? Does the code look like we could manage it if the developer quits, etc.
Nobody's saying you shouldn't use third party dependency. But nobody benefits if we pretend that adding a dependency isn't a lot like adding a person.
So yeah, if you need all of posthog without adding posthog's team to yours, you're going to have to write it yourself.
Thanks! Now, I will also tell this to developers.
A bit? A proper input validator is a lot of work.
To my understanding, there's less surface area for problems if I have a wrapper over the one or two endpoints some API provides, which I've written and maintain myself, over importing some library that wraps all 100 endpoints the API provides, but which is too large for me to fully audit.
When it comes to frontend, well I don't have answers yet.
You need standalone dependencies, like Tailwind offers with its standalone CLI. Predators go where their prey is. NPM is a monoculture. It's like running Windows in the 90's; you're just asking for viruses. But 90% of frontend teams will still use NPM because they can't figure anything else out.
There are some things that kind of suck (working with time - will be fixed by the Temporal API eventually), but you can get a lot done without needing lots of dependencies.
And maybe don't update your dependencies very often.
I also switched to Phoenix using Js only when absolutely necessary. Would do the same on Laravel at work if switching to SSR would be feasible...
I do not trust the whole js ecosystem anymore.
Take Ruby - even before a certain corporation effectively took over RubyCentral and rubygems.org, almost two years ago they added a 100,000-download limit. That is, after that threshold was passed, the original author was deprived of the ability to remove the project again - unless the author resigns from rubygems.org. Which I promptly did. I could not accept any corporation trying to force me into maintaining old projects. (I tend to remove old projects quickly; the licence allows people to fork them, so they can maintain them if they want to, but my name can not be associated with outdated projects I already abandoned once newer releases were available.) The new corporate overlords running rubygems.org, who keep on lying about how they "serve the community", refused to accept this explanation, so my time at rubygems.org came to a natural end. Of course this year it would be even easier, since they changed the rules to satisfy their new corporate overlords anyway: https://blog.rubygems.org/2025/07/08/policies-live.html
I think we have given the Typescript / Javascript communities enough time. These sort of problems will continue to happen regardless of the runtime.
Adding one more library increases the risk of a supply-chain attack like this.
As long as you're using npm or any npm-compatible runtime, this remains an unsolved, recurring issue in the npm ecosystem.
I'm getting tired of the anti-Node.js narrative that keeps going around as if other package repos aren't the same or worse.
There are plenty of npm features to help assess packages and prevent unintended updates, but nothing replaces due diligence.
Please, no.
It is an absolutely terrible eco system. The layercake of dependencies is just insane.
This is something you also need to do with package managers in other languages, mind you.
People use Node because of the availability of the packages, not the other way around.
That is not why I use Node. Incidentally, I also use Bun.js, and pnpm for most package management operations. I also use Typescript instead of raw JS.
I use Node and these related tools fundamentally because:
- I like the isomorphism of the code I write (same language for server and client)
- JS may have many warts, but IMO it has many advantages other languages lack, it is rapidly improving, and TS makes it even more powerful and the bad parts manageable. One thing that has stuck with me over the many years of using JS/TS is just how direct and free of ceremony everything is. Want a functional style? It supports it to some extent without much fuss. Want something akin to OOP? You can use object literals with method-style functions, "constructors" that are regular functions, even no-fuss prototypal inheritance, if you want to go that far. Also, no need for any complicated dependency injection (DI) framework; you can implement pure DI with regular functions, etc. I don't get why you hate JS/TS so much.
- I use Bun.js as an alternative to Node that has more batteries included, so that I can limit my exposure to too many external packages. I add packages only if I absolutely need them, and I audit them thoroughly. So, no, although I may use some packages, I am not on the Node ecosystem just because I want to go on a package consumption spree.
- I use pnpm for installing and managing packages, and by default it prevents packages from taking any actions during installation; I just get their code.
Outside that, the issue is not unique to Node.js.
No, they haven't.
I should know, I check those companies for a living. This is one of the most often flagged issues: unaudited Node.js dependencies. "Oh but we don't have the manpower to do that, think about how much code that is".
If they haven’t, it would be ethically dubious for you to not report it.
> If they haven’t, it would be ethically dubious for you to not report it.
I can report all I want, someone needs to act on that report for it to have an effect.
There are people out there who think that some static analysis tool plugged into their CI/CD pipeline is the equivalent of a code audit.
What does get flagged though is not getting employee permission for putting photos on the 'team' page. It's good they flag that. I'd rather they also went in much deeper on tech issues.
I've reviewed 270 companies to date. I have yet to find a single one that had audited the source code they imported. It's not untypical to find an installation that automatically updates a whole raft of dependencies during the build phase. And absolutely nobody looks at that code until something breaks.
I would love to see the equivalent of a linux distro, a curated set of packages and package versions that are known to be compatible and safe. If someone offered this as a paid product businesses would pay for it.
I know it's not foolproof, but I can't believe how often people run code they haven't read where it can make a huge mess, steal secrets, etc. I'll probably get owned someday, I'm sure, but this feels like a bare minimum.
How does this work? Every single npm package has tons of dependency tree nodes
You have to make sure you're not putting any secrets in the container environment.
The reality here is this is the sort of attack SELinux should be good at stopping (it's not because no one uses SELinux, the policies most commonly used don't confine the user profile in a useful way, and a whole bunch of tools love ambient credentials in environment variables).
How does this work exactly? Containers still need env vars and access to databases and cloud environments. Without these, the container is just a useless isolated pod.
The image itself isn't the same image that the app gets deployed in, but is a portable dev environment with everything needed to build and run my apps baked in.
This comes with some nice side effects, like being able to instantly spin up clean work environments on my laptop, someone else's, or a remote VM.
It's not going to stop attacks, but it will limit the blast radius a lot.
I don't run much on the root OS of my dev machine; basically everything is in a container or VM of some kind, but that's more so that I can reproduce my environment by copying a VMDK than an effort to limit what the container can do to itself and the data it has access to. Yeah, even with root access to a VM guest, an attacker won't get my password manager, personal credit card, socials, etc. that I only use from the host OS... But they'll get everything that the container contains or has access to, which is often a lot of data!
Of course, this is not a real defense on its own; it's just good practice to limit blast radius, much like not giving everybody admin rights.
Even a properly containerized app will still have these things, because you need things like environment variables (that contain passwords, api keys, etc) for your app to function.
That's a lot of work though, and increases the difference between your local dev environment and prod.
Really? What if your application needs to write to Github?
Containers do not contain.
Most attacks on popular packages last at most a few months before detection.
https://about.gitlab.com/blog/gitlab-discovers-widespread-np...
Currently reverse engineering the malicious payload and will share our findings within the next few hours.
Why does this community in particular still insist on preemptively updating all deps, always; on running complicated extra hooks alongside package installation; and on pretending this all is good engineering practice? ("Look, we have so plenty of things and are so busy, thus it must be good")
Why is a certain kind of mindset typical of this community?
Why did the Node creator abandon his creation years ago?
Why, oh why?
Your attempt to make it personal does not compute.
The website is a mess (broken links, broken UI elements, no about section)
There is no history on webarchive. There is no information outside of this website, and their "customers" are crypto exchanges and some Japanese payment provider.
This seems a bit fishy to me - or am I too paranoid?
Hey PostHog! What version do we need to avoid?
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
If you make sure you're on the latest version you should be good.
For example I installed `posthog-react-native` version `4.12.4` which is between the `4.11.1` version which is compromised and the safe to install version `4.13.0`. Is that version compromised or not?
Discussion on HN last time: https://news.ycombinator.com/item?id=45326754
1. Don't
Or copy that repo’s markdown into an llm and ask it to map to the pip ecosystem
> "'No Way to Prevent This,' Says Only Nation Where This Regularly Happens" is the recurring headline of articles published by the American news satire organization The Onion after mass shootings in the United States.
Source: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
Well before the npm attacks were a thing, we within the Rust project had discussed at length using wasm sandboxing for build-time code execution (and also precompiled wasm for procedural macros, but that's its own thing). However, the way build scripts are used in the Rust ecosystem makes it quite difficult to enforce a sandbox while also enabling packages to build foreign code (C, C++, invoking make, cmake, etc.). The sandbox could still expose methods to e.g. "run the C compiler" to the build scripts, but once that's done they have arbitrary access to a very non-trivial piece of code running in a privileged environment.
Whereas for JavaScript, a package rarely invokes anything but other JavaScript code at build time. Introduce a stringent sandbox for that code (kinda Deno-style, perhaps?) and a large majority of packages are suddenly safe by default.
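Deno's permission flags give a feel for what that could look like: the script can touch nothing by default, and each capability has to be granted explicitly (build.ts and the host name are illustrative):

    # read only the project dir, write only to ./dist, network only to one named host
    deno run --allow-read=. --allow-write=./dist --allow-net=registry.example.com build.ts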
Another example: all Debian packages are published to unstable, but cannot enter testing for at least 2-10 days, and also have to meet a slew of conditions, including that they can be and are built for all supported architectures, and that they don't cause themselves or anything else to become uninstallable. This allows for the most egregious bugs to be spotted before anyone not directly developing Debian starts using it.
You can definitely do this.
To be honest, you just end up with the same thing via dependabot/renovate.
You can specify a dependency version range in Maven artifacts. But the Maven community culture and default tooling behaviour is to specify exact versions.
You can specify an exact dependency version in npm packages. But the npm community culture and default tooling behaviour is to specify version ranges.
Even if a maintainer uses a bot to bump dependency versions, most typically they will test if their package works before publishing an updated version, and also because this release work is manual (even if the bot helps out), it takes some time after the dependency is released for upstream consumers of it to endorse and use it. Therefore, nobody consuming foo 1.0.4 will use dependency bar 2.3.5 until foo 1.0.5 is released... whereas an npm foo 1.0.4 with bar dependency "^2.3.0" will give its users bar 2.3.6 from the very moment bar 2.3.6 is released, even without a foo 1.0.5 release.
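To make the cultural difference concrete (com.example/bar and the versions mirror the example above):

    <!-- Maven convention: an exact version, changed only by editing the pom -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>bar</artifactId>
      <version>2.3.5</version>
    </dependency>

    // npm convention: a caret range, resolved to the newest matching release at install time
    "dependencies": { "bar": "^2.3.0" }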
Hate to break it to you, but for targeting enterprises, Java Maven artifacts would be a MASSIVE target. They are just harder to compromise because NPM is such shit.
This adds a bit more overhead to typo squatting, and a paper trail, since a domain registrar can have identity/billing information subpoenaed. Versus changing a config file and running a publish command...
This does not prevent said package from shipping with malware built in, but it does prevent arbitrary shell execution on install and therefore automated worm-like propagation.
https://bootstrappable.org/
https://reproducible-builds.org/
https://github.com/crev-dev
I maintain that the flexibility in npm package versions is the main issue here.
You still need some out-of-band process to pull upstream updates and aside from a built-in “cool down” (until you merge changes) I see that method as having a huge amount of downside.
Yes, you sidestep malicious versions pushed to npm but now you own the build process for all your dependencies and you have to find time to update (and fix builds if they break) all your dependencies.
Locking to a specific version and waiting some period of time (cool-down) before updating is way easier and just as safe IMHO.
[redacted bullshit!]
I'm confused on this. I would imagine it would protect/help you as long as releases are immutable which they are for most package managers (like npm).
> Vendoring literally just means grabbing the source code from origin and commit it to your repo after a review.
Hmm, I don't think it always necessarily means grabbing the source, it can also mean grabbing the built artifacts in my experience.
My biggest issue with vendoring dependencies is it allows for editing of said dependencies. Almost everywhere I've worked that vendored dependencies (copied source or built versions in and committed them) felt the siren song of modifying said dependencies which is hell to deal with later.
I personally don't have a problem with the general ability to change vendor code. The question is whether you want it in an specific case or not. If you update frequently then certainly not. But that decision should be deliberate team policy.
Fair, in the instances I ran into it was code that was downloaded and unzipped into a "js-library-name" folder but then the code was edited, even worse, the `.min.js` version wasn't touched, just the original one which led to some fun when someone "helpfully" switched to the min versions that didn't have the edit. IMHO, if you want to edit a library then you should be forking and/or making it super obvious that this is not "stock" library X. We also ran into issues with "just updated library Y" and only later realizing someone had modified the older version.
But yes, if it's deliberate and obvious then I don't have an issue modifying, just as long as the team understands it's "their" code now and there is no clean upgrade path in the future.
The difficulty to make changes obvious is same for forks and vendored commits, imho. You can write big warnings in commit messages, that's it I guess. Which kind of boils down to deliberate team policy again. But I generally prefer monorepos for various reasons.
You can vendor your left-pad, but good luck doing that with a third-party SDK.
As a SW developer, you may be able to limit the damage from these attacks by using a MAC (like SELinux or Tomoyo) to ensure that your node app cannot read secrets that it is not intended to read, conns that it should not make, etc. and log attempts to do those things.
You could also reduce your use of external packages, until slowly, over time, you have very few external dependencies.
Add
ignore-scripts=true
to your .npmrc.
Can't they just jam the malware into the package itself? It runs with the same permissions on my machine (in unit tests, node servers, etc).
Install scripts are being actively exploited, so blocking them will reduce your exposure. Install scripts will also run anywhere that runs npm ci, npm install, etc., including build pipelines.
> Can't they just jam the malware into the package itself
Yes. Disabling install scripts won't safeguard you from all attack vectors.
But in some cases it could help for that. For instance, if the package runs in the browser and the payload requires file-system access, etc., then the attack can’t execute in the browser. And if in addition it was added to a life-cycle script, it would be mitigated.
At any rate, it’s worth having `ignore-scripts=true` because NPM life-cycle scripts are a common target (e.g., this one targets `preinstall`).
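The flag form works too when you can't touch the .npmrc, and it's easy to verify the setting took effect (standard npm commands):

    npm ci --ignore-scripts          # one-off: skip life-cycle scripts for this install
    npm config get ignore-scripts    # prints "true" once the .npmrc entry is in place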
AsyncAPI is used as the example in the post. It says the Github repo was not affected, but NPM was.
What I don't understand from the article is how this happened. Were the credentials for each project leaked? Given the wide range of packages, was it a hack on npm? Or...?
> it modifies package.json based on the current environment's npm configuration, injects [malicious] setup_bun.js and bun_environment.js, repacks the component, and executes npm publish using stolen tokens, thereby achieving worm-like propagation.
This is the second time an attack like this has happened; others may be familiar with this context already and share fewer details and explanations than usual.
Previous discussions: https://news.ycombinator.com/item?id=45260741
Yes, if you depend on an infected package, sure. But then I'd expect not just a list, but a graph outlining which package infected which other package. Overall I don't understand this at all.
In fact, do this for all risky tools [2].
1 - https://github.com/ashishb/dotfiles/blob/067de6f90c72f0cf849...
2 - https://ashishb.net/programming/run-tools-inside-docker/
Good point. Here's the improvement that works for me:
https://github.com/ashishb/dotfiles/commit/fe4fb15fe867bf77a...
Would you prefer doing those tricks or exposing everything on your machine to random npm packages?
The e18e community are reducing dependencies in popular libraries and building tools to prevent and reduce the impact of such attacks. Join if you want to help out! https://e18e.dev/
Just this morning, after trying to make the case over the past year, we had a change landed to remove more than a dozen dependencies from typescript-eslint! https://bsky.app/profile/benmccann.com/post/3m6fcjax7ec2h
Yay!
>Discord
...ew.
The solutions that are effective also involve actually doing work, as developers, library authors, and package managers. But no, we want as much "convenience" as possible, so the issues continue.
Developers and package authors should use a lockfile, pin their dependencies, be frugal about adding dependencies, and put any dependencies they do add through a basic inspection at least, checking what dependencies they also use, their code and tests quality, etc.
Package managers should enforce namespacing for ALL packages, should improve their publishing security, and should probably have an opt-in verified program for the most important packages.
Doing these will go a long way to ameliorate these supply chain attacks.
And it has not yet been demonstrated at PyPI/NPM scale, either.
Last time, my perception was also that publishing security is a weak point. If at least heavily used packages were forced to go through manual security steps for publishing, it would help quite a bit, as long as the measures are safe.
This is probably a common problem. Has anyone gotten verdaccio to enforce cool-down policies?
I also waste a ton of time because post-install scripts are disabled. Being able to cut them off from network access, and just run a local server with 2-4 week cool-down would help me sleep better at night + simplify the hell out of my build.
It's much easier to demonstrate a problem (twice!) than to convince a herd that there is a problem.
I hope that other languages with similar package manager (looking at you, cargo) take note.
12 years ago NPM went down due to a very similar issue, and getting stuff working again wasn't easy.
Someone got on GitHub and said something along the lines of "Get it together or get forked."
I asked my manager if this was real or just a random guy talking. My manager correctly said it's just talk.
But it's different now. NPM is owned by Microsoft. One of the world's biggest companies should be able to sort things out.
Ohh well. Can't fix it.
It’s basically a lightweight CLI tool you can run directly inside any local project:
npx sha1-hulud-scanner
Repo is here:
https://github.com/developerjhp/sha1-hulud-scanner
It’s not meant to be a full security product — just a simple “first-pass” detector that helps catch unexpected checksum strings or injected artifacts before they slip into CI. Feedback and contributions are welcome!
I think hijacked NPM packages are just the tip of the iceberg.
com.foo.bar
That would require domain verification, but it would add significant developer friction.
Also mandatory Dune reference:
"Bless the maker and his water"
I guess it's not going to happen...
https://gist.github.com/considine/2098a0426b212f27feb6fb3b4d...
It checks yarn.lock for any of the above. Maybe needs a tweak or two but you should be able to run from a directory with yarn.lock
Also package manager should not run scripts.
In this Turing-equivalent world, you can only know what actually executes (e.g. eval, fetch) by actually executing all the code in the package and seeing what functions got executed. Then the problem is the same as virus analysis; the virus can be written to only act under certain conditions, and it will probe (e.g. look at interpreter fingerprints, get the time of day, try to look at innocuous places in the filesystem or network, measure network connection times, etc.), so that it can determine it is in a VM being scanned, and go dormant for that time.
So the only thing that actually works is if node and other JS evaluators have a perfect sandbox, where nothing in a module is allowed (no network, no filesystem) except to explicit locations declared in the module's manifest, and this is perfectly tracked by the language, so if the module hands back a function for some other code to run, that function doesn't inherit the other code's network/fs access permissions. This means that, if a location is not declared, the code can't get to it at scanning time nor install time nor any time in the future.
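To make that concrete, a hypothetical per-module permission manifest - to be clear, nothing like this exists in npm today - might look like:

    {
      "name": "some-module",
      "permissions": {
        "net": ["api.example.com"],
        "fs": { "read": ["./data"], "write": [] }
      }
    }

Anything not listed would be denied, no matter which code path asks for it.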
This still leaves open the door for things like a module defining GetGoogleAnalyticsURL(params) that occasionally returns "https://badsite.com/copyandredirect?ga=...", to get some other module to eventually make a credential-exfiltrating network call, even if it's banned from making it directly or indirectly...
Also, detecting obfuscated code sounds like an interesting and challenging task.
Literally scanning for just "eval(" is entirely insufficient. You have to execute the code. Therefore you have to demand module authors describe how to execute code, e.g. provide a test suite, which is invoked by the scanner, and require the tests to exercise all lines of code. Provide facilities to control the behaviour of functions outside the module so that this is feasible.
This is a lot of work, so nobody wants to do it, so they palm you off with the laziest possible solution - such as literally checking for "eval(" text in the code - which then catches zero malware authors and wastes resources providing help to developers caught as a false positive, meanwhile the malware attacks continue unabated because no effective mechanism to stop them has been put in place.
Reminds me of the fraudster who sold fake bomb detectors to people who had a real need to stop bomb attacks. His detectors stopped zero bomb attacks. https://www.bbc.co.uk/news/uk-29459896
Interestingly enough, this is the premise for a lot of security in the physical world. Broken windows theory, door locks as a form of security in the first place, crimes of opportunity, etc.
But one should consider that in tech, the barrier to entry is a little higher, so maybe there are fewer 'dumb' criminals (or they don't get very far).
- https://socket.dev/blog/introducing-socket-firewall
- https://github.com/lirantal/npq
- https://bun.com/docs/pm/security-scanner-api
source: https://github.com/bodadotsh/npm-security-best-practices?tab...
([0-9a-z]{18})
That leads me to another point. Devs have to take responsibility for their code/projects. Everyone wants to blame npm or something else but, as software developers, you have to take responsibility for the systems you build. This means, among many other things, vetting the code your code depends on and protecting the system from randomly updating itself with code you haven't even heard about.
If you must use npm, containerize/VM it? treat it as if you're observing malware.
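A minimal version of that, assuming a stock Node image: run the install in a throwaway container that can only see the project directory, with no tokens or host environment mounted.

    docker run --rm -v "$PWD":/app -w /app node:22 npm ci --ignore-scripts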
minimumReleaseAge strikes a good balance between protecting yourself against emerging threats like Shai-Hulud and keeping your dependencies up-to-date.
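For reference, in recent pnpm versions this is a single setting; the value is in minutes, and 10080 (one week) here is just an example:

    # pnpm-workspace.yaml
    minimumReleaseAge: 10080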
Because you asked: you can get another layer of protection through Socket Firewall Free (sfw), which prevents dependencies known to be malicious from being installed. Socket typically identifies malware very soon after it is published. Disclaimer: I’m the lead dev on the project, so obviously biased — YMMV.
So github has some tools available to mitigate some of the problems tied to it. Probably not perfect for all use cases. But considering the current scale, it doesn't seem to have any effect, as enough publishers seem not to care.
I think npm should force higher standards on popular packages.
If I want more stability for my OS I can choose Debian-stable rather than Ubuntu-nightly.
But for npm, there doesn't seem to be the same choice available. Either I sign up to the fire-hose or I don't.
I can choose to only upgrade once a month, but there's a chance I'm still getting a package that dropped 5 minutes before.
This made some of my more forward-thinking coworkers nervous because what if this happened after we went live? So we started a repeating story called “upgrade dependencies” and assigned it round-robin once a month to someone on each application. Every time someone got it for the first time they would ask me, “but upgrade what?” Whatever you want, but preferably something that hasn’t been updated in a while.
For IP and security reasons we were already on vendored dependencies, so it was pretty straightforward to tell what was old. But that made “upgrade immediately” problematic if fixes weren’t back ported far enough and we didn’t want that live.
minimumReleaseAge: 43200
My devs don't have access to production keys at all (and would never need them).
That's a wake up call to harden your operations. NPM Tokens, AWS/GCP/Azure credentials have no reason to be available in environments where packages may be installed. The same goes for sensitive environment variables.
At minimum, whatever you are working on should be built in Docker. The package installation then happens during the image-build step. Yes, it's easy to break out of the isolation environment, but I am betting this malware does not.
NPM tokens should exist in some configuration/secret-management solution, not in your home directory. Devs have no business holding NPM tokens. The same goes for sensitive environment variables; they have no business existing on dev laptops, or even in the pipeline build steps (where package installation should happen).
AWS etc. credentials/tokens are harder to secure, since there are legit reasons for them to exist on dev laptops.
And if it's a "professional" setting, the company could hire a part-time developer for writing the sandbox.
I do not pass through /proc, as it contains not only information about processes (which is not a concern), but a lot of identifying information about the machine. However, a lot of modern software, due to poor coding and not following standards, crashes or exhibits various bugs without /proc, so I wrote a minimal Python /proc emulator using FUSE: https://pastebin.com/s49V5nVL . /proc doesn't allow bind-mounting its subdirectories somewhere else, so the only option was FUSE (or writing a kernel module). As you see, with Linux you learn something new every day, like it or not. The emulator is not perfect - it doesn't expose thread information, and, for example, the Grand Central Dispatch library [1] terminates the application because of this. Telegram uses the library and crashes randomly because of this. I expected better quality from Apple and Swift.
Most importantly, I run the app as a different user (Android-style), because historically Linux was built around the idea of protecting users from each other, and I don't trust namespaces as much. I clear the environment variables as well. Ideally, I should also be closing file descriptors to prevent leaks (for example, you can easily leak a file descriptor to your Wayland server, Dbus or something else).
To use Xorg, I grant access to the app's user. The X server has no issues with sharing access. I am not sure if one can use Wayland with different users (it uses unix sockets). Pipewire doesn't allow its use by different users, so I have to add a pipewire-pulse plugin listening on localhost, and configure ALSA inside the sandbox to connect to that pulseaudio server. Obviously, the app in the sandbox has no direct access to audio devices in /dev.
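A stripped-down sketch of the separate-user part of this setup (user name and app path are illustrative):

    sudo useradd --create-home appuser     # dedicated user for the untrusted app
    xhost +SI:localuser:appuser            # grant only that user access to the X server
    sudo -u appuser env -i HOME=/home/appuser DISPLAY=:0 /usr/bin/someapp    # cleared environment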
Dbus is also necessary to display tray icons, but you should never grant direct access to it because it is almost like giving the root password. I use xdg-dbus-proxy [2] which I had to patch because originally it only supported running the sandboxed app under the same user. You can still find my patch in open pull requests.
I also did some experimentation with passing a camera through as a fake video4linux device, so that the app cannot know the real camera model, but it is not working great; for example, the Gnome Cheese app cannot see it, because it needs not only a kernel device but something more to detect cameras.
The sandbox cannot use hardware acceleration. It would not be easy to add because every GPU vendor uses its own directories, device files, and /sys entries to interact with the hardware, and I think you can get the full list only by reverse engineering. Furthermore, I am not sure it would be safe.
This setup is not perfect, and looks like a bunch of random utilities glued with shell scripts (which is not uncommon in Linux world). If I had more time, I would write something better, that can dynamically add and revoke privileges, can provide fake serial numbers and even emulate a working network which sadly loses 100% of sent packets.
I looked into using Flatpak, but they seem to grant full access to /sys, /proc and generally I don't feel that they care much about protection from malware and fingerprinting by legitimate software. Also, as I understand, they run the app as the same user which is not secure. Probably their primary goal is making installation easier but not making the use of software safer. I am not going to use Docker because it officially says that it is not a security tool.
Pnpm also blocks preinstall scripts by default.
> Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI).
https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
There's nothing wrong with pinning dependencies and only updating when you know for sure they're fixing a zero-day (as it will be public at that point).
This is why it's so important to get to know what you're actually building instead of just "vibing" all the time. Before all the AI slop of this decade we just called it being responsible.
All this would have mitigated this incident if an npm install was done during the window when this update was live and not yet pulled. The npm install would continue as normal on the last known good version, and the newer vulnerable version would simply not exist on the mirror.
"Our security engineering team is investigating the matter and thus far has concluded that while some public Postman NPM packages were infected, (1) Postman as an app is not compromised, and (2) our production cloud services are also not compromised."
Going forward, use WASM if you really want to make an SPA (and think about that choice), where the source language is not something that ties into the JS dependency ecosystem. Ban it and burn it with fire for anything on the backend, for christ.
Like the previous variant, it has credential harvesting, self-replication and GitHub-public-repository-based exfiltration.
Double base64 encoded credentials being exposed using GitHub repositories: https://github.com/search?q=%22Sha1-Hulud%3A%20The%20Second%...
The answer is really, don't.
NPM and the JS eco-system has really gone down a path of zero security and they're paying the price for it.
If you really need libraries from NPM and whatnot, vendorize them so you're relying on known-safe files and don't arbitrarily update them without re-verification.
Would be good to see projects (like those recently affected) nudging devs to do this via install instructions.
We really don't need more package managers other than the ones provided by your operating system, but I dunno, maybe it's just me.
Including headers isn't remotely "simple". There are so many considerations in linking, .so version compatibility, architecture and instruction-set issues, building against multiple versions on the same system. Or if you want to feel frustrated in a single word: GDAL (IYKYK)
And that's only where #include is even applicable. That is not gonna fly for any interpreted language - JS in this case, but also python, ruby, php.
I'd say issues with instruction sets are a minor edge case; you could argue that having layers of abstraction gives even more room for fault.
That said, I am not familiar with GDAL, but from some google-fu it seems relatively heavy. I also don't have as much familiarity with compiling C++ programs as I do with writing C programs, and C, with its smaller footprint, tends not to give me as many problems.
Again, this is from personal experience.
npm audit
and npm audit --fix
Or if you want to know the version of a package you have installed: npm ls some-pkg
- A new version of this dependency is published
- A CI somewhere for another NPM package uses this new dependency version in a build, which triggers propagation by creating a new modified version of that package?
- And so on...
Am I getting this right?
Just `git clone git@github.com:openai/codex.git`, `cd codex-rs`, `cargo build --release` (If you have many cores and not much RAM, use `-j n`, where n is 1 to 4 to decrease RAM requirements)
Implementing everything yourself probably won't cut it.
Copying a dependency into your code base and maintaining it yourself probably won't yield much better results.
However, if a dependency were part of the version control, devs could at least do a code review before installing an update.
That wouldn't help with new dependencies that come in with issues right from the start, but it could help prevent new malware from slipping in later.
A setup like that could benefit from a crowd-sourced review process, similar to Wikipedia.
I think, Nimble, the package manager of Nim, uses a decentralised registry approach based on Git repos. Something like that could be a good start.
My ultra hot take: there are only¹ two² programming ecosystems suitable for serious³ work:
- .net (either run on CLR or compile as an AOT standalone binary)
- jvm
The reason why is that they have a vast and vetted std lib. A good standard lib is a bigger boost than any other syntactic niceties. __
1. I don't want other programming languages to die, so I am happy if you disagree with me. Other valid objection: some problems are better served by niche languages. Still, both .net and java support a plethora of niche languages.
2. Shades of gr[e|a]y, some languages are more complete out of the box than others.
3. cf «pick boring tools»
I don't ask you to judge whether you like it; I'm just saying that you can totally make a professional WebUI within the dotnet stdlib.
Don't take my word for it, take a dive. You wouldn't be the first to have to adjust their view.
For example, this section is just about the built-in web framework asp.net: https://learn.microsoft.com/en-us/aspnet/core
______
1. This might be a poor example, as .net has NodaTime and the jvm has Joda-Time as 3rd-party libs, for if one has really strict needs. Still, the builtin DateTime constructs offer way more than what Python had to offer.
2. Don't get me started on the ORM side of things. I know, you don't have to use one, but if you do, it had better do a great job. And I wouldn't bat an eye if the ORM were not in the standard library, but boy was I disappointed in Python's ecosystem. EF Core comes batteries-included and is so much better that it isn't fun anymore.
Let me know if I look at the wrong place.
typo after the listed affected packages
That seems a bit silly. Even on the beefy boi I used to work on, a 10MB hiccup in deployable size would have been sufficient to make me look.
I released one of the packages I work on last night so of course this drew my eye. I assume checking the unpacked size hasn’t gotten ridiculous confirms that your code is not infected yeah? And looks like it’s past time for me to set up a separate account for release management.
The ones that got my attention are the @ensdomains/* packages, which are legit and are probably in every Ethereum/EVM/blockchain-related app for the resolution of decentralized domain names.
A quick search shows the Ledger hardware wallets use those libs too [0].
So I guess they weren't just after API keys.
- [0] https://github.com/search?q=org%3ALedgerHQ%20%40ensdomain&ty...
As it arguably would have reduced impact
(I'm one of the Renovate maintainers and have recently pushed for this to be more of a widely used feature)
Monocultures where everyone pulls and builds with every brand new thing for the most minor changes is dangerous.
At some point, someone has to pay for an organisation whose job will be to review the contents of all of these modules.
Maybe one could split the ecosystem into "validated" and "non-validated" stacks, much like we have stable and dev branches?
The people validating would of course attach their own identity to build trust. And so companies (legal persons) should do that.
Really looking forward to a deeper post-mortem on this.
That's a good thing, in my book.
SHA1-Hulud the Second Coming – Postman, Zapier, PostHog All Compromised via NPM
https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting...
I would love to know why bubblewrap is a superior alternative.
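For context, a typical bubblewrap invocation looks something like this (a rough sketch; the bind mounts and symlinks vary by distro):

    bwrap --unshare-all --share-net \
          --ro-bind /usr /usr --ro-bind /etc /etc \
          --symlink usr/lib64 /lib64 --symlink usr/bin /bin \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          --setenv HOME /tmp \
          npm ci --ignore-scripts

Unlike Docker there is no daemon or image to manage, and pointing HOME at a tmpfs keeps ~/.npmrc tokens out of the package's reach.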
Here's mine https://github.com/ashishb/dotfiles/blob/067de6f90c72f0cf849...
1. Show me how you would escape Docker
2. Show me npm packages doing this in the wild
https://github.blog/security/supply-chain-security/our-plan-...
I'm guessing no one yet wants to spend the money it takes for centralized, trusted testing where the test harnesses employ sandboxing and default-deny installs, Deterministic Simulated Testing (DST), or other techniques. And the sheer scale of NPM package modifications per week makes human in the loop-based defense daunting, to the point that only a small "gold standard" subset of packages that has a more reasonable volume of changes might be the only palatable alternative.
What are the thoughts of those deep inside the intersection of NPM and cybersecurity?
instead we've got this absolute mess of bloated, over-engineered junk code and ridiculously complicated module systems.
if you run `npm i ramda` it will set this to "ramda": "^0.32.0" (as of this comment).
that ^ means install any newer version that is a feature or patch release.
so when a package is released with malware, they bump the version to 0.32.1 and everyone just installs it on the next npm i.
pinning your deps "ramda": "0.32.0" completely removes the risk assuming the version you listed is not infected.
the trade off is you don't get new features/patches without manually changing the version bump.
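If you want that behaviour by default, npm itself can be told to stop writing ranges (a standard npm setting):

    npm config set save-exact true    # `npm i ramda` now records "ramda": "0.32.0" instead of "^0.32.0"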
I see that as a desirable feature. I don’t want new functionality suddenly popping into my codebase without one of my team intending it.
personally i pin all mine because, if you don't, a new version could be pulled in during a pipeline, and then your local version is not the same as the one in docker etc.
pinning versions is the only way to be sure that the version I am running is the same as everyone elses
Shinhan is a large Korean bank, and this administrative-area GeoJSON util seems to be embedded in many Korean gov services.
shinhan-limit-scrap
korea-administrative-area-geo-json-util
The number and range of affected devices may be reduced with any number of package manager level workarounds, but NOT the impact of attacks once any succeeds. For this, you NEED the above.
All libraries should strive to depend only on it.