# Run with restricted file system access
node --experimental-permission \
--allow-fs-read=./data --allow-fs-write=./logs app.js
# Network restrictions
node --experimental-permission \
--allow-net=api.example.com app.js
Looks like they were inspired by Deno. That's an excellent feature. https://docs.deno.com/runtime/fundamentals/security/#permiss...

The "proper" place to solve this is in the OS, where it has been solved already, including all the inevitable corner cases.
Why reinvent this wheel, adding complexity, bug surface, maintenance burden and whatnot to your project? What problem does it solve that hasn't been solved by other people?
Deployments that need to configure OSes in a particular way are difficult (the existence of Docker, Kubernetes and Snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. And OSes sometimes don't provide a solution once you go beyond a single machine.
If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.
Do One Thing (and do it well).
A special domain specific scheduler microservice? One of the many Cron replacements? One of the many "SaaS cron"? Systemd?
This problem has been solved. Corner cases ironed out. Free to use.
Same for ENV vars as configuration (as opposed to inventing yet another config solution), file permissions, monitoring, networking, sandboxing, chrooting, etc. The amount of broken, insecure or just plain inefficient DIY versions of stuff already handled by the OS that I've had to work around is mind-boggling. It causes a threefold loss: the time taken to build it, the time not spent on the business domain, and the time to then maintain and debug it for the next fifteen years.
[0] https://www.karltarvas.com/macos-app-sandboxing-via-sandbox-...
Also, modern software security is really taking a hard look at strengthening software against supply-chain vulnerabilities. That looks less like a traditional OS and more like a capabilities model, where you start with a set of limited permissions and, even within the same address space, it's difficult to obtain a new permission unless you're explicitly given a handle to it (arguably that's how all permissions should work, top to bottom).
The problem with the "solutions" s.a. the one in Node.js is that Node.js doesn't get to decide how eg. domain names are resolved. So, it's easy to fool it to allow or to deny access to something the author didn't intend for it.
Historically, we (the computer users) decided that the operating system is responsible for domain name resolution. It's possible that today it does that poorly, but in principle we want a world where the OS takes care of DNS, not individual programs. From an administrator's perspective, it spares the administrator the need to learn the capabilities, the limitations and the syntax of every program that wants to do something like that.
It's actually a very similar thing with logs. From an administrator's perspective, logs should always go to stderr. Programs that try to circumvent this rule and put them in separate files / send them into sockets etc. are a real sore spot for any administrator who's spent some time doing the job.
Same thing with namespacing. Just let Linux do its job. No need for this duplication in individual programs / runtimes.
Comprehensive capability protection is needed so that you actually need a token to do something privileged, even within the process. What that looks like: the OS shows a file dialog and gives the process a descriptor (with a random ID) to that file. Similarly, network I/O would need a privileged descriptor the OS hands to the application. Then an attacker has to fully compromise the process and find the token before they can perform privileged actions.
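As a rough illustration, the browser's File System Access API (Chromium-only at the time of writing) already works this way: the user picks a file in an OS dialog and the page only ever receives an opaque handle to that one file. A minimal sketch, run from a page script:

// Capability-style file access: no paths, only handles the user granted.
const [fileHandle] = await window.showOpenFilePicker();
const file = await fileHandle.getFile();             // read access flows from the handle
const writable = await fileHandle.createWritable();  // write access triggers its own grant
await writable.write(await file.text());
await writable.close();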
I dunno how GP would do it, but I run a service (a web app written in Go) under a specific user and lock down what that user can read and write on the FS.
For networking, though, that's a different issue.
Meaning Windows? It also has file system permissions at the OS level that are well-tested and reliable.
> not all Node developers know or want to know much about the underlying operating system
Thing is, they are likely to not feel up for understanding this feature either, nor write their code to play well with it.
And if they at some point do want to take system permissions seriously, they'll find it infinitely easier to work with the OS.
Just locally, that seems like a huge pain in the ass... At least you can suggest containers, which have an easier interface around them, generally speaking.
nothing. Except for "portability" arguments perhaps.
Java has had security managers and access restrictions built in, but they never worked very well (and are quite cumbersome to use in practice). And there have been lots of bypasses over the years, patchwork fixes, etc.
Tbh, the OS is the only real security you can trust, as it's as low a level as any application would typically go (unless you end up in driver/kernel space, like those anti-virus/anti-cheat/CrowdStrike apps).
But platform vendors always want to NIH and make their platform slightly easier while still presenting a similar level of security.
This is my thought on dotenv libraries too. The app shouldn't have to load environment variables, only read them. Using a dotenv function/plugin like the one in omz is far preferable.
The argument often heard, though, is "but Windows". If Windows lacks env (or cron, or chroot, etc.), the solution is to either move to an environment that does support it, or introduce some tooling only for the Windows users.
Not build a complex, hierarchical directory scanner that finds and merges all sorts of .env .env.local and whatnots.
In dev I often do use .env files, but I use zenv or a loadenv tool or script outside the project's codebase to load these files into the environment.
Tooling such as xenv, a tiny bash script, a makefile, etc. that devs can then replace with their own if they wish (a Windows user may need something different from my zsh built-in), and that isn't present at all in prod, or when running in k8s or docker compose locally.
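As a sketch of the idea (a hypothetical dev-only wrapper script; prod and k8s inject env vars themselves, and the app itself never touches .env parsing):

#!/usr/bin/env bash
# run-dev.sh -- load a plain .env file into the environment, then start the app
set -euo pipefail
set -a          # export every variable sourced below
. ./.env        # plain KEY=value lines, no merging magic
set +a
exec node app.js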
A few years ago, I surfaced a security bug in an integrated .env loader that partly leveraged a lib and was partly DIY/NIH code. A dev had built something that would traverse up and down the file hierarchy to search for .env.* files, merge them at runtime and reload the app if it found a new or changed one. Useful for dev. But in prod, an uploaded .env.png would end up in a temp dir that this homebuilt monstrosity would then pick up. Yes, any internet user could inject most of the configuration into our production app.
Because a developer built a solution to a problem that had long been solved, if only he had researched it a bit longer.
We "fixed" it by ripping out thousands of LOCs, a dependency (with dependencies) and putting one line back in the READMe: use an env loader like .... Turned out that not only was it a security issue, it was an inotify hogger, memory hog, and io bottleneck on boot. We could downsize some production infra afterwards.
Yes, the dev built bad software. But, again, the problem wasn't the quality; it was the fact that it was built in the first place.
I've been trying to figure out a good way to do this for my Python projects for a couple of years now. I don't yet trust any of the solutions I've come up with: they are inconsistent with each other and feel very prone to me making mistakes, due to their inherent complexity and the lack of documentation that I trust.
For a solution to be truly OS-generic, it's likely better done at the network level, e.g. by putting your traffic through a proxy that only allows traffic to certain whitelisted/blacklisted destinations.
With proxies the challenge becomes how to ensure the untrusted code in the programming language only accesses the network via the proxy. Outside of containers and iptables I haven't seen a way to do that.
OS-generic filesystem permissions would be like an OS-generic UI framework: inherently very difficult and ultimately limited.
Separately, I totally sympathise with you that the OS solutions to networking and filesystem permissions are painful to work with. Even though I'm reasonably comfortable with rwx permissions, I'd never allow untrusted code on a machine which also had sensitive files on it. But I think we should fix this by coming up with better OS tooling, not by moving the problem to the app layer.
Whilst this is (effectively) an Argument From Authority, what makes you assume the Node team haven't considered this? They're famously conservative about implementing anything that adds indirection or layers. And they're very *nix focused.
I am pretty sure they've considered "I could just run this script under a different user"
(I would assume it's there because the Permissions API covers many resources and side effects, some of which would be difficult to reproduce across OSes, but I don't have the original proposal to look at and verify)
I often hear similar arguments for or against database-level security rules. Row-level security, for example, is a really powerful feature and, in my opinion, is worth using when you can. Using RLS doesn't mean you skip checking authorization rules at the API level though; you check authorization in your business logic _and_ in the database.
If you don't know what a DNS search path is, here's my informal explanation: your application may request to connect to foo.bar.com or just to foo, and if your /etc/resolv.conf contains "search bar.com", then these two requests are the same request.
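A hypothetical /etc/resolv.conf illustrating the idea:

# /etc/resolv.conf
nameserver 10.0.0.53
search bar.com corp.example   # unqualified names get these suffixes appended
# With this in place, connecting to "foo" tries foo.bar.com first,
# so "foo" and "foo.bar.com" end up being the same request.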
This is an important feature of corporate networks because it allows macro administrative actions, temporary failover solutions, etc. But if a program is locked down with Node.js restrictions by someone who doesn't understand this feature, none of these operations will be possible.
From my perspective, as someone who has to perform ops/administrative tasks, I would hate it if someone used these Node.js features. They would get in the way and cause problems, because they are toys, not the real thing. An application cannot deal with DNS in a non-toy way; that's a task for the system.
I also wouldn't really expect it to, though; that depends heavily on the environment the app is run in, and if the deployment environment intentionally includes resolv.conf or similar, I'd expect the developer(s) to either use a more elegant solution or configure Node to expect those resolutions.
In other words: Node.js doesn't do anything better, and actually does some things worse. No advantages, only disadvantages... so why use it?
For example, the problem of "one micro service won't connect to another" was traditionally an ops / environments / SRE problem. But now the app development team has to get involved, just in case someone's used one of these new restrictions. Or those other teams need to learn about node.
This is non-consensual DevOps being forced upon us, where everyone has to learn everything.
This leads to the Node.js teams having to learn DevOps anyway, because the DevOps teams otherwise do a subpar job of it.
Same with frontend builds and such. In other languages I've noticed (particularly Java/Kotlin) that DevOps teams maintain the build tools and the configuration around them for the most part. The same has not been true for the Node ecosystem, whether it's backend or frontend.
If an existing feature is used too little, then I'm not sure if rebuilding it elsewhere is the proper solution. Unless the existing feature is in a fundamentally wrong place. Which this isn't: the OS is probably the only right place for access permissions.
An obvious solution would be education. Teach people how to use Docker mounts right. How to use chroot. How Linux's chmod and chown work. Or provide modern and usable alternatives to those.
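For instance, a container-level equivalent of the restrictions in the article's example might look something like this (the image name and paths are hypothetical):

# Read-only root filesystem; ./data readable, ./logs writable
docker run --read-only \
  -v "$PWD/data":/app/data:ro \
  -v "$PWD/logs":/app/logs \
  --tmpfs /tmp \
  my-node-app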
Also, I'd bet my monthly salary that the Node.js implementation of this feature doesn't take into account multiple corner cases and configurations that are possible at the system level. In particular, I'd be concerned about the DNS search path, which I think would be hard to get right in a userspace application. Also, what happens with /etc/hosts?
From an administrator's perspective, I don't want applications to add another (broken) layer of manipulation to the discovery protocol. It's usually a very time-consuming and labor-intensive task to figure out why two applications that are meant to connect aren't connecting. If you keep randomly adding more variables to this problem, you are guaranteed to have a bad time.
And, a side note: you also don't understand English all that well. "Confusion" is present in any situation that needs analysis. What differs is the degree to which it's present. Increasing confusion makes analysis more costly in terms of resources and potential for error. The "solution" offered by Node.js increases confusion but offers nothing in return, i.e. it creates waste. Or, put differently, it is useless and, by extension, harmful, because you cannot consume resources, produce nothing, and still be neutral: if you waste resources while producing nothing of value, you limit the resources available to other actors who could potentially make better use of them.
That's a cool feature. Using jlink for creating custom JVMs does something similar.
That's a good feature. What you are saying is still true though, using the OS for that is the way to go.
PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.
Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!
In both the Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.
The whole idea of a hierarchical directory structure is an illusion. There can be all sorts of cross-links and even circular references.
How can we offer a solution that is as low or lower friction and does the right thing, instead of security theater?
At least we could consider this part of a defense in depth.
We humans always reach for instant gratification. The path of least resistance is the one that wins.
I don't understand this sort of complaint. Would you prefer that they had never worked on this support at all? What exactly is your point? Airing trust issues?
So what? That's clearly laid out in Node's documentation.
https://nodejs.org/api/permissions.html#file-system-permissi...
What point do you think you're making?
You seem to be confused. The system is not bypassed. The only argument you can make is that the system covers calls to node:fs, whereas some modules might not use node:fs to access the file system. You control what dependencies you run in your system, and how you design your software. If you choose to design your system in such a way that you absolutely need your Node.js app to have unrestricted access to the file systems, you have the tools to do that. If instead you want to lock down file system access, just use node:fs and flip a switch.
> need to demonstrate security compliance.
Edit: Actually, you can even get upload progress, but the implementation seems fraught due to scant documentation. You may be better off using XMLHttpRequest for that. I'm going to try a simple implementation now. This has piqued my curiosity.
Note that a key detail is that your server (and any intermediate servers, such as a reverse-proxy) must support HTTP/2 or QUIC. I spent much more time on that than the frontend code. In 2025, this isn't a problem for any modern client and hasn't been for a few years. However, that may not be true for your backend depending on how mature your codebase is. For example, Express doesn't support http/2 without another dependency. After fussing with it for a bit I threw it out and just used Fastify instead (built-in http/2 and high-level streaming). So I understand any apprehension/reservations there.
Overall, I'm pretty satisfied knowing that fetch has wide support for easy progress tracking.
const supportsRequestStreams = (() => {
  let duplexAccessed = false;

  // If the browser supports request streams, it must consult the `duplex`
  // option for a stream body and will NOT serialize the stream, so no
  // Content-Type header gets inferred.
  const hasContentType = new Request('http://localhost', {
    body: new ReadableStream(),
    method: 'POST',
    get duplex() {
      duplexAccessed = true;
      return 'half';
    },
  }).headers.has('Content-Type');

  // Supported = duplex was read AND the stream body wasn't stringified
  // (stringifying would have produced Content-Type: text/plain).
  return duplexAccessed && !hasContentType;
})();
Safari doesn't appear to support the duplex option (the duplex getter is never triggered), and Firefox can't even handle a stream being used as the body of a Request object: it ends up converting the body to a string and then setting the content type header to 'text/plain'. It seems my original statement that download, but not upload, is well supported was unfortunately correct after all. I had thought that readable/transform streams were all that was needed, but as you noted I had overlooked the important lack of duplex option support in Safari/Firefox [0][1]. This is definitely not wide support! I had way too much coffee.
Thank you for bringing this to my attention! After further investigation, I ran into the same problem you did. Firefox failed for me exactly as you noted. Interestingly, Safari fails silently if you use a TransformStream with file.stream().pipeThrough([your transform stream here]), but it fails with a message noting the lack of support if you specifically use a writable stream with file.stream().pipeTo([your writable stream here]).
I came across the article you referenced but of course didn't read it completely. It's disappointing that it's from 2020 and no progress has been made since. Poking around caniuse, it looks like Safari and Firefox have patchy support for similar behavior in web workers, either partial or behind flags. So I suppose there's hope, but I'm sorry if I got anyone's hopes up too far :(
[0] https://caniuse.com/mdn-api_fetch_init_duplex_parameter [1] https://caniuse.com/mdn-api_request_duplex
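For reference, the upload-progress technique being discussed looks roughly like this. It only works where request streams are supported (currently Chromium over HTTP/2 or QUIC), and `url`, `file`, and `onProgress` are placeholders:

// Sketch: track upload progress by piping the file through a TransformStream
// before handing it to fetch(). Requires request-stream support (duplex: 'half').
let sent = 0;
const progressBody = file.stream().pipeThrough(new TransformStream({
  transform(chunk, controller) {
    sent += chunk.byteLength;
    onProgress(sent / file.size);   // e.g. update a progress bar
    controller.enqueue(chunk);      // pass the bytes through unchanged
  },
}));

await fetch(url, {
  method: 'POST',
  body: progressBody,
  duplex: 'half',                   // required when the body is a stream
});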
Does xhr track if the packet made it to the destination, or only that it was queued to be sent by the OS?
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.
However, if you want to lean in that direction where it is a helpful addition, they recently added some tRPC integrations that let you add oRPC alongside an existing tRPC setup, so you can do that or support a longer-term migration.
Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.
I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.
I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.
[1] https://hono.dev/docs/guides/validation#zod-validator-middle...
For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
I found that keeping the frontend & backend in sync was a challenge, so I wrote a script that reads the schemas from the backend and generates an API file in the frontend.
1. Shared TypeScript types
2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety
3. RTK (redux toolkit) query style: codegen'd frontend client
I personally prefer #3 for its explicitness: you can actually review the code it generates for a new/changed endpoint. It does come with the downside of more code, and as the codebase gets larger you start to need a cache so you don't regenerate the entire API on every little change.
Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.
For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works, rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.
The server validates request bodies and produces responses that match the type signature of the response schema.
The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.
It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
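A minimal sketch of that shared-contract pattern using zod (all names here are hypothetical):

// shared/models.ts -- imported by both server and client
import { z } from 'zod';

export const CreateUserRequest = z.object({ name: z.string(), email: z.string().email() });
export const CreateUserResponse = z.object({ id: z.string(), name: z.string() });

export type CreateUserRequest = z.infer<typeof CreateUserRequest>;
export type CreateUserResponse = z.infer<typeof CreateUserResponse>;

// client.ts -- the client validates the server's responses against the contract
export async function createUser(body: CreateUserRequest): Promise<CreateUserResponse> {
  const res = await fetch('/api/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(CreateUserRequest.parse(body)), // validate on the way out
  });
  return CreateUserResponse.parse(await res.json());     // and on the way back in
}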
async function fetchDataWithAxios() {
try {
const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
console.log('Axios Data:', response.data);
} catch (error) {
console.error('Axios Error:', error);
}
}
async function fetchDataWithFetch() {
try {
const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
if (!response.ok) { // Check if the HTTP status is in the 200-299 range
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json(); // Parse the JSON response
console.log('Fetch Data:', data);
} catch (error) {
console.error('Fetch Error:', error);
}
}
{ throwNotOk, parseJson }
They know that's 99% of fetch calls; I don't see why it can't be baked in.
The following seems cleaner than either of your examples. But I'm sure I've missed the point.
fetch(url).then(r=>r.ok ? r.json() : Promise.reject(r.status))
.then(
j=>console.log('Fetch Data:', j),
e=>console.log('Fetch Error:', e)
);
I share this at the risk of embarrassing myself, in the hope of being educated. You'd probably put the code that runs the request in a utility function, so the call site would be `await myFetchFunction(params)`, as simple as it gets. Since it's hidden, there's no need for the implementation of myFetchFunction to be super clever or compact; prefer readability and don't be afraid of code length.
So treating "get a response" and "get data from a response" separately works out well for us.
const data = await fetch(url).then(r => r.json())
But it's very easy obviously to wrap the syntax into whatever ergonomics you like. await fetch(url).then(r => r.json())
const data = await (await fetch(url)).json()
It's designed that way to support doing things other than buffering the whole body; you might choose to stream it, close the connection early etc. But it comes at the cost of awkward double-awaiting for the common case (always load the whole body and then decide what happens next).
let r = await fetch(...);
if(!r.ok) ...
let len = response.headers.get("Content-Length");
if(!len || new Number(len) > 1000 * 1000)
throw new Error("Eek!");
var data = await fetch(url).then(r => r.json());
Understanding Promises/A (thenables) and async/await can sometimes be difficult or confusing, especially when mixing the two like above. Code doesn't need to be concise, it needs to be clear. Especially back-end code, where code size isn't as important as on the web. It's still somewhat important if you run things on a serverless platform, but there it's more important to manage your dependencies than your own LOC count.
[1] convenient capability - otherwise you'd use XMLHttpRequest
2. You don't need to use axios. The main value was that it provided a unified API that could be used across runtimes and had many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.
You can obviously do that with fetch but it is more fragmented and more boilerplate
I haven't used it but the weekly download count seems robust.
The fetch() change has been big only for the libraries that did need HTTP requests; otherwise it hasn't been such a huge change. Even in those it's been mostly about removing some dependencies, which in a couple of cases let me reduce the library size by 90%, but this is still Node.js, where that isn't as big a deal as it would have been on the frontend.
Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...
"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).
> tc39 keep making are idiotic changes to spec like `deffered import` and `with` syntax changes
I'm holding judgment on deferred imports until I figure out what use cases it solves, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`) and how hard it was to debug them at times and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is such a huge improvement over that alone. The fact that it is also a single syntax that looks mostly like normal JS objects for other very useful metadata attribute tools like bringing integrity checks to ESM imports without an importmap is also great.
Today, no one will defend ERR_REQUIRE_ESM as good design, but it persisted for 5 years despite working solutions since 2019. The systematic misinformation in docs and discussions combined with the chilling of conversations suggests coordinated resistance (“offline conversations”). I suspect the real reason for why “things do and don’t happen” is competition from Bun/Deno.
It's interesting to see how many ideas are being taken from Deno's implementations as Deno increases Node interoperability. I still like Deno more for most things.
Now with ESM if you write plain JS it works again. If you use Bun, it also works with TS straight away.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
Also, "fetch" is lousy naming considering most API calls are POST.
const post = (url) => fetch(url, {method:"POST"})
That said, there are npm packages that are ridiculously obsolete and overused.
`const { styleText } = require('node:util');`
Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
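Basic usage, per the linked docs:

const { styleText } = require('node:util');

// styleText(format, text) accepts a single format or an array of formats.
console.log(styleText('green', 'ok'));
console.log(styleText(['bold', 'red'], 'something went wrong'));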
const vars = {
  text: {
    angry    : "\u001b[1m\u001b[31m",
    blue     : "\u001b[34m",
    bold     : "\u001b[1m",
    boldLine : "\u001b[1m\u001b[4m",
    clear    : "\u001b[24m\u001b[22m",
    cyan     : "\u001b[36m",
    green    : "\u001b[32m",
    noColor  : "\u001b[39m",
    none     : "\u001b[0m",
    purple   : "\u001b[35m",
    red      : "\u001b[31m",
    underline: "\u001b[4m",
    yellow   : "\u001b[33m"
  }
};
And then you can call that directly like: `${vars.text.green}whatever${vars.text.none}`;
Using a library which handles that (and a thousand other quirks) makes much more sense.
The real issue is invented here syndrome. People irrationally defer to libraries to cure their emotional fear of uncertainty. For really large problems, like complete terminal emulation, I understand that. However, when taken to an extreme, like the left pad debacle, it’s clear people are loading up on dependencies for irrational reasons.
It's easy to solve though: simply assign empty strings to the escape-code variables when the output is not an interactive shell.
If you want to do it yourself, do it right, or defer to one of the battle-tested libraries that handle this for you, plus the additional edge cases you didn't think of (such as NO_COLOR).
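A rough sketch of that baseline (far from exhaustive; real libraries handle many more edge cases):

// Disable colors when not writing to an interactive terminal,
// or when the user has opted out via NO_COLOR (https://no-color.org).
const useColor = process.stdout.isTTY && !process.env.NO_COLOR;

const green = (s) => (useColor ? `\u001b[32m${s}\u001b[39m` : s);

console.log(green('this is only colored on an interactive shell'));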
Typically JS developers define that as assuming something must be safe if enough people use it. That is a huge rift between the typical JS developer and organizations that actually take security more seriously. There is no safety rating for most software on NPM and more and more highly consumed packages are being identified as malicious or compromised.
If you do it yourself and get it wrong, there is still a good chance you are in a safer place than if you completely throw dependency management to the wind or make wild guesses about what has been vetted by the community.
Also, I'm guessing if I pipe your logs to a file you'll still write escapes into it? Why not just make life easier?
cjk→⋰⋱| | ← cjk space btw | |
thinsp | |
deg°
⋯ …
‾⎻⎼⎽ lines
_ light lines
⏤ wide lines
↕
∧∨
┌────┬────┐
│ │ ⋱ ⎸ ← left bar, right bar: ⎹
└────┴────┘
⊃⊂ ⊐≣⊏
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯====›‥‥‥‥
◁ ◿ ◺ ◻ ◸
Λ
╱│╲
╱ │ ╲
──┼──
╲ │ ╱
╲│╱
V
┌ ─┏━━┳━━━━━━━┓
│ ┃ ┃ ┃
├ ─┣━━╋━━━━━━━┫
│ ┃ ┃ ┃
└ ─┗━━┻━━━━━━━┛
┌ ─ ┬ ─ ┐
├ ─ ┼ ─ ┤
└ ─ ┴ ─ ┘
┌───┬───┐
├───┼───┤
│ │ │
└───┴───┘
.
╱│╲
↘╱ │ ╲ ↙
╱ │ ╲
→‹───┼───›←
╲ │ ╱
↗ ╲ │ ╱ ↖
╲│╱
↓↑
╳
.
╱ ╲
╱ ╲
╱ ⋰ ╲
╱⋰______╲

1. Node has built in test support now: looks like I can drop jest!
2. Node has built in watch support now: looks like I can drop nodemon!
(I haven't had much problem with TypeScript config in node:test projects, but partly because "type": "module" and using various versions of "erasableSyntaxOnly" and its strict-flag and linter predecessors, some of which were good ideas in ancient mocha testing, too.)
In the end it's just tests; the syntax might be more verbose, but LLMs write it anyway ;-)
They’re great at mocking to kingdom come for the sake of hitting 90% coverage. But past that it seems like they just test implementation enough to pass.
Like I’ve found that if the implementation is broken (even broken in hilariously obvious ways, like if (dontReturnPancake) return pancake; ), they’ll usually just write tests to pass the bad code instead of saying “hey I think you messed up on line 55…”
The problem isn't in the writing, but the reading!
I’m sure there’s a package for jest to do that (idk maybe that’s what jest extended is?) but the vitest experience is really nice and complete
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
[1]: https://nodejs.org/api/single-executable-applications.html
[1]: https://notes.billmill.org/programming/javascript/Making_a_s...
[2]: https://github.com/llimllib/node-esbuild-executable#making-a...
I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…
I assure you, at scale this belief makes infra fall apart, and I’ve seen it happen so, so many times. Web devs who have never thought about performance merrily chuck huge JSON blobs or serialized app models into the DB, keep clicking scale up when it gets awful, and then when that finally doesn’t work, someone who _does_ care gets hired to fix it. Except that person or team now has to not only fix years of accumulated cruft, but also has to change a deeply embedded culture, and fight for dev time against Product.
Not that I find it particularly egregious, but my Rust (web-server) apps, not even optimized, are easily under 10 MB.
Go binaries weigh 20 MB, for example.
It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:
npx postject hello NODE_SEA_BLOB sea-prep.blob \
--sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
Look up dialectical hedging. Dead AI giveaway.
An alternative framing I've been thinking about: there's clearly something bad about leaving in the bits that obviously lower the signal-to-noise ratio for all readers.
Then throw in the account being new, and, well, I hope it's not a harbinger.*
* It is and it's too late.
https://hbr.org/2025/08/research-the-hidden-penalty-of-using...
I'm not speculating - I have to work with these things so darn much that the tells are blindingly obvious - and the tells are well-known, ex. there's a gent who benchmarks "it's not just x - it's y" shibboleths for different models.
However, in a rigorous sense I am speculating: I cannot possibly know an LLM was used.
Thus, when an LLM is used, I am seeing an increasing fraction of the conversation litigating whether it is appropriate, whether it matters, whether LLMs are good; and since anyone pointing it out could be speculating, the reaction now hinges on how you initially frame the observation.
Ex. here, I went out of my way to make a neutral-ish comment given an experience I had last week (see other comment by me somewhere down stream)
Let's say I never say "LLM" and instead frame it as "Doesn't that just mean it's a convention?" and "How are there so many game-changers?", which the audience can see is an obvious consequence of using an LLM, and yet it also looks like you're picking on someone (are either of those bad writing? I only had one teacher who would ever take umbrage at somewhat subtle fluff like this).
Anyways, this is all a bunch of belly-aching to an extent; you're right, and it's the way to respond. There's a framing where the only real difficulty here is critiquing the writing without looking like you're picking on someone.
EDIT: Well, except for one more thing: what worries me the most when I see someone using an LLM, incapable of noticing the tells and incapable of at least noticing that the tells are weakening the writing, is... well, what else did they miss? What else did the LLM write that I have to evaluate for myself? So it's not so much the somewhat-bad writing (90%+ still) that bothers me: it's that I don't know what's real, and it feels like a waste of time even being offered it to read if I have to check everything.
Placing a value judgement on someone for how the art was produced is gatekeeping. What if the person is disabled and uses an LLM for accessibility reasons as one does with so many other tools? I dunno, that seems problematic to me but I understand the aversion to the output.
For example maybe it's like criticising Hawking for not changing his monotone voice vs using the talker all together. Perhaps not the best analogy.
The author can still use LLMs to adjust the style according to criticism of the output if they so choose.
Help yourself to learn about how individuals with disabilities use technology to communicate. Shaming someone for using a tool purely based on speculation is problematic for that reason alone
It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.
And it sounds like you have the same surreal experience as me...it's so blindingly. obvious. that the only odd thing is people not mentioning it.
And the tells are so tough, like, I wanted to bang a drum over and over again 6 weeks ago about the "It's not X, it's Y" thing; I thought it was a GPT-4.1 tell.
Then I found this under-publicized gent doing God's work: a ton of benchmarks, one of them being "Not X, but Y" slop, and it turned out there were 40+ models ahead of it, including Gemini (expected, crap machine IMHO) and Claude, and I never would have guessed the Claudes. https://x.com/sam_paech/status/1950343925270794323
IME the only reliable way around it when using an LLM to create blog-like content is to have actual hard lists of slop to rewrite/avoid. This works pretty well if done correctly. There are actually not that many patterns (not hundreds, more like dozens), so they're pretty enumerable. On the other hand, you and I would still be able to tell if only those things were rewritten.
Overall the number one thing is that the writing is "overly slick". I've seen this expressed in tons of ways but I find slickness to be the most apt description. As if it's a pitch, or a TED presentation script, that has been pored over and perfected until every single word is optimized. Very salesy. In a similar vein, in LLM-written text, everything is given similar importance. Everything is crucial, one of the most powerful X, particularly elegant, and so on.
I find Opus to have the lowest slop ratio, which this benchmark kind of confirms [1], but of course its pricing is a barrier.
The forest is darkening, and quickly.
Here, I'd hazard that 15% of front page posts in July couldn't pass a "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention (i.e. if you could show it was LLM output, it'd make people look stupid, so there was a disincentive to do this).
The latest round of releases was smart enough, and has diffused enough, that we have seemingly reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.
I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.
[0] - https://www.youtube.com/watch?v=cIyiDDts0lo
[1] - https://blog.platformatic.dev/http-fundamentals-understandin...
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
Hoisting/import order, especially when trying to mock in tests.
Whether or not to include extensions, and which extension to use, .js vs .ts.
Things like TS enums will not work.
Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
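A minimal sketch of what that looks like (the file and symbol names are hypothetical):

// tsconfig.json (only the relevant option shown)
{
  "compilerOptions": {
    "rewriteRelativeImportExtensions": true
  }
}

// src/app.ts -- the .ts extension is accepted and rewritten to .js on build
import { helper } from './helper.ts';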
What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.
TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.
Sometimes I also read the proposals, https://github.com/tc39/proposals
I really want the pipeline operator to be included.
https://github.com/sindresorhus/execa/blob/main/docs/bash.md
We once reported an issue and it got fixed really quickly. But then we had trouble connecting via TLS (MySQL on Google Cloud Platform) and, after a long time debugging, found out the issue was actually not in Deno but in rustls, which Deno uses. Even as a known issue in rustls, it's still hard to find if you don't already know what you're searching for.
It was then quicker to switch to nodejs with a TS runner.
Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market over the past several years, it seems like Node development got kicked into high gear.
new Error("something bad happened", {cause:innerException})
The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.
They've also been around for years as another poster mentioned.
The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.
It's definitely awesome but doesn't seem newsworthy. The experimental stuff seems more along the lines of newsworthy.
Yes. It's been around and relatively stable in V8/Node.js for years now.
I highly recommend the `erasableSyntaxOnly` option in tsconfig because TS is most useful as a linter and smarter Intellisense that doesn't influence runtime code:
Also hadn't caught up with the `node:` namespace.
1. new technologies
2. vanity layers for capabilities already present
It’s interesting to watch where people place their priorities given those two segments
Such as?
Why should it matter, beyond correctness of the content, which you and the author need to evaluate either way?
Personally, I'm exhausted with this sentiment. There's no value in questioning how something gets written; only the output matters. Otherwise we'd be asking the same about pencils, typewriters, dictionaries and spellcheck in some pointless pursuit of purity.
Now the existence of this blogpost is only evidence that the author has sufficient AI credits they are able to throw some release notes at Claude and generate some markdown, which is not really differentiating.
try {
// Parallel execution of independent operations
const [config, userData] = await Promise.all([
readFile('config.json', 'utf8'),
fetch('/api/user').then(r => r.json())
]);
...
} catch (error) {
// Structured error logging with context
...
}
This might seem fine at a glance, but a big gripe I have with Node/JS async/promise helper functions is that you can't tell which promise returned or threw an exception. In this example, if you wanted to handle the `config.json` file not existing, you would need to somehow know what kind of error the `readFile` function can throw, and somehow manage to inspect it in the 'error' variable.
This gets even worse when trying to use something like `Promise.race` to handle promises as they are completed, like:
const result = Promise.race([op1, op2, op3]);
You need to somehow embed the information about what each promise represents inside the promise result, which is usually done through a wrapper that injects the promise value into its own response... which is really ugly.

// Parallel execution of independent operations
const [
{ value: config, reason: configError },
{ value: userData, reason: userDataError },
] = await Promise.allSettled([
readFile('config.json', 'utf8'),
fetch('/api/user').then(r => r.json())
]);
if (configError) {
// Error with config
}
if (userDataError) {
// Error with userData
}
When dealing with multiple parallel tasks where I care about their errors individually, I prefer to start the promises first and then await their results after all of them have started. That way I can use try/catch or be more explicit about resources:

// Parallel execution of independent operations
const configPromise = readFile('config.json', 'utf8')
const userDataPromise = fetch('/api/user').then(r => r.json())
let config;
try {
config = await configPromise
} catch (err) {
// Error with config
}
let userData;
try {
userData = await userDataPromise
} catch (err) {
// Error with userData
}
Edit: added examples for dealing with errors with allSettled.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
If the config file not existing is a handleable case, then write a "loadConfig" function that returns undefined.
probably 70 to 80% of JS users have barely any idea of the difference because their tooling just makes it work.
Sure, but Bun's implementation is a confusing mess a lot of times. I prefer them separate.
Note: This is no shade toward Bun. I'm a fan of Bun and the innovative spirit of the team behind it.
Also CommonJS does not support tree shaking.
Edit: the proof of my point resides in the many libraries which have an open issue because, even as ESM, they don't support tree shaking.
Bundlers can treeshake it, it is just harder to do, and so it hasn't always been a priority feature. esbuild especially in the last few years has done a lot of work to treeshake `import * as` in many more scenarios than before. Sure it isn't treeshaking "all" scenarios yet, but it's still getting better.
But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.
expect(bar).toEqual(
expect.objectContaining({
symbol: `BTC`,
interval: `hour`,
timestamp: expect.any(Number),
o: expect.any(Number),
h: expect.any(Number),
l: expect.any(Number),
c: expect.any(Number),
v: expect.any(Number)
})
);
Instead of a .env.example (which quickly gets out of date), it uses a .env.schema - which contains extra metadata as decorator comments. It also introduces a new function call syntax, to securely load values from external sources.
I also don't think node:test is great, because in isomorphic apps you'll end up with two testing syntaxes.
I think the permissions are the core thing we should do, even if we run the apps in docker/dev containers.
Aliases are nice, e.g. node:fetch, but I guess that will break all isomorphic code.
A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem
> Top-Level Await: Simplifying Initialization
This feels absolutely horrible to me. There is no excuse for not having a proper entry-point function that gives full control to the developer to execute everything that is needed before anything else happens. Such as creating database connections, starting services and connecting to APIs, warming up caches and so on. All those things should be run (potentially concurrent).
Until this is possible, even with top-level await, I personally have to consider node.js to be broken.
> Modern Testing with Node.js Built-in Test Runner
Sorry, but please do one thing and do it well.
> Async/Await with Enhanced Error Handling
I wish we had JVM-like logging and stack traces (including cause nesting) in Node.js...
> 6. Worker Threads: True Parallelism for CPU-Intensive Tasks
This is the biggest issue. There should be really an alternative that has builtin support for parallelism that doesn't force me to de/serialize things by hand.
---
Otherwise a lot of nice progress. But the above ones are bummers.
Maybe it needs a compile-time macro system so we can go full Java and have magical dependency-injection annotations, Aspect-Oriented Programming, and JavascriptBeans (you know you want it!).
Or maybe it needs to go the Ruby/Python/SmallTalk direction and add proper metaprogramming, so we can finally have Javascript on Rails, or maybe uh... Djsango?
Rather than keeping the logical focus on making money, it wastes time on shuffling code around and being an architecture astronaut, with the main focus on details rather than shipping.
One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.
I often wonder about a what-if, alternate history scenario where Java had been rolled out to the browser in a more thoughtful way. Poor sandboxing, the Netscape plugin paradigm and perhaps Sun's licensing needs vs. Microsoft's practices ruined it.
I've seen it used for over 25 years by the Austrian national broadcaster, based (at least originally) on Rhino, so it's also mixed with the Java you love. I fail to see the big issue when it's been working just fine for such a long time.
Cobalt was a mistake.
> Perhaps the technology that you are using is loaded with hundreds of foot-guns
"Modern features" in Node.js means nothing given the entire ecosystem and its language is extremely easy to shoot yourself in the foot.
I have found this to not be true.
In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...
What's the downside?
The breadth of npm packages is a good reason to use node. It has basically everything.
Dotnet is batteries included. It has all the features you'll need, almost. If you need something else, the packages you find are just much higher quality.
I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.
Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, etc...
That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".
I used to agree, but when you have libraries like MediatR, MassTransit and Moq that have gone or are looking to go paid, I'm not confident the wider ecosystem is in a much better spot.
It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...
But yes there are downsides. But the biggest ones you brought up are not true.
This is the first I'm hearing of this, and a quick Google search found me a bunch of conflicting "methods" just within the NestJS ecosystem, and no clear indication of which one actually works.
nest build --webpack
nest build --builder=webpack
... and of course I get errors with both of those that I don't get with a plain "nest build". (The error also helpfully specifies only the directory in the source, not the filename! Wtf?) Is this because NestJS is a "squishy scripting system" designed for hobbyists who edit API controller scripts live on the production server, and this is the first time it has actually been built, or... is it because webpack has some obscure compatibility issue with a package?
... or is it because I have the "wrong" hieroglyphics in some Typescript config file?
Who knows!
> There's this thing called worker_threads.
Which are not even remotely the same as the .NET runtime and ASP.NET, which have a symmetric threading model where requests are handled on a thread pool by default. Node.js allows "special" computations to be offloaded to workers, but not HTTP requests. These worker threads can only communicate with the main thread through byte buffers!
In .NET land I can simply use a concurrent dictionary or any similar shared data structure... and it just works. Heck, I can process a single IEnumerable, list, or array using parallel workers trivially.
"But yes there are downsides. But the biggest ones you brought up are not true."
My point is... what you said is NOT true. And even after your reply, it's still not true. You brought up some downsides in your subsequent reply, but again, your initial reply wasn't true.
That's all. I acknowledge the downsides, but my point remains the same.
Try it for yourself:
> node
> Promise.all([Promise.reject()])
> Promise.reject()
> Promise.allSettled([Promise.reject()])
Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.
process.on("uncaughException", (e) => {
console.log("uncaughException", e);
});
try {
const r = await Promise.all([
Promise.reject(new Error('1')),
new Promise((resolve, reject) => {
setTimeout(() => reject(new Error('2'), 1000));
}),
]);
console.log("r", r);
} catch (e) {
console.log("catch", e);
}
setTimeout(() => {
console.log("setTimeout");
}, 2000);
Produces:

alvaro@DESKTOP ~/Projects/tests
$ node -v
v22.12.0
alvaro@DESKTOP ~/Projects/tests
$ node index.js
catch Error: 1
at file:///C:/Users/kaoD/Projects/tests/index.js:7:22
at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)
setTimeout
So, nope. The promises are just ignored.

I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It had existed for quite a while, so I didn't think it was something they would fix, and I didn't keep track of it.
Maybe a bug in userspace promises like Bluebird? Or an older Node where promises were still experimental?
I love a good mystery!
https://chrysanthos.xyz/article/dont-ever-use-promise-all/
Maybe my bug was something else back then and I found a source claiming that behavior, so I changed my code and as a side effect my bug happened to go away coincidentally?
If you did something like:
const p1 = makeP1();
const p2 = makeP2();
return await Promise.all([p1, p2]);
It's possible that the heuristic didn't trigger?

Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.
Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.
Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done his before" pedigree.
Because its Node.js compat isn't perfect, and so if you're running on Node in prod for whatever reason (e.g. because it's an Electron app), you might want to use the same thing in dev to avoid "why doesn't it work??" head scratches.
Because Bun doesn't have as good IDE integration as Node does.
- isolated, pnpm-style symlink installs for node_modules
- catalogs
- yarn.lock support (later today)
- bun audit
- bun update --interactive
- bun why <pkg> helps find why a package is installed
- bun info <pkg>
- bun pm pkg get
- bun pm version (for bumping)
We will also support pnpm lockfile migration next week. To do that, we’re writing a YAML parser. This will also unlock importing YAML files in JavaScript at runtime.
(closing the circle)
Online writing before 2022 is the low-background steel of the information age. Now these models will all be training on their own output. What will the consequences of this be?
Just because a new feature can't always easily be slipped into old codebases doesn't make it a bad feature.
Yes, it’s 100% junior, amateur mentality. I guess you like pointless toil and not getting things done.
No idea why you think otherwise, I’m over here actually shipping.
While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.
All I should ever have to do to import a bunch of functions from a file is
"import * from './path'"
anything more than that is a solution in search of a problem
Feels unrelated to the article though.
Hopefully this helps! :D
That also happens automatically, it is abstracted away from the users of streams.
Streams can be piped, split, joined, etc. You can do all these things with arrays, but you'll be doing a lot of bookkeeping yourself. Also, streams have backpressure signalling.
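A small sketch of what that buys you in Node (the file names are made up): pipeline() wires the stages together and propagates backpressure and errors for you.

import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';
import { pipeline } from 'node:stream/promises';

// Copy and compress a large file chunk by chunk. If the destination is slow,
// backpressure automatically pauses the reads; no manual bookkeeping needed.
await pipeline(
  createReadStream('input.log'),
  createGzip(),
  createWriteStream('input.log.gz'),
);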
Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.
There is some diminishing point of returns, say like, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.