I am so glad that Java decided to go down the path of virtual threads (JEP 444, JDK 21, Sep 2023). They decided to put some complexity into the JVM in order to spare application developers, library writers, and human debuggers from even more complexity.
x = number:foo(number:x, string:y)
It's absurd. The type system should be responsible for keeping track of the async status of the function, and you should get that when hovering over the function in your IDE. It does not belong in the syntax any more than the above does, and it's an absolutely terrible reason to duplicate all of your functions and introduce these huge headaches.
This does not introduce function coloring.
You are merely pointing out the effects of pre-existing function coloring, in that there are two related symbols, Symbol.dispose and Symbol.asyncDispose.
Just like there is Symbol.iterator and Symbol.asyncIterator.
All functions have color (i.e. particular categories in which they can be expressed) but only some languages make it explicit. It's a language design choice, but categories are extremely powerful and applicable beyond just threading. Plus, Java and thread based approaches have to deal with synchronization which is ... Difficult.
(JavaScript restricts itself to monadic categories and more specifically to those expressible via call with continuation essentially)
The only language I know that navigates this issue well is Purescript, because you can write code that targets Eff (sync effects) or Aff (async effects) and at call time decide.
Structured concurrency is wonderful, but my impression is we're doing all this syntactic work not to get structured concurrency, but mostly to have, like, multiple top-level request handlers in our server. Embarrassingly parallel work!
It's only when you do something wacky like try to add a whole type system to a fully duck typed language that you run into problems with this. Or if you make the mistake of copying this async/await mechanism and then hamfistedly shove it into a compiled language.
And compiled languages don't have more trouble with this than JavaScript. Or rather, JavaScript doesn't have fewer issues on this front. The color issue is an issue at the syntactic level!
Likewise Promise.resolve() on a promise object just returns the original promise. You can color and uncolor things with far less effort or knowledge of the actual type.
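That idempotence is easy to demonstrate in plain Node (nothing here beyond standard Promise semantics):

```javascript
// Promise.resolve is idempotent: given a promise it returns that same
// promise; given a plain value it wraps it in a new one.
const p = Promise.resolve(42);
const q = Promise.resolve(p);
console.log(p === q); // true: same object, no extra wrapper

const r = Promise.resolve("plain");
console.log(r instanceof Promise); // true: the value got "colored"
```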
Try running this code. It’ll print “bar” first and then “foo”, even though the function only awaits a string literal and the caller doesn’t await anything at all.
const foo = async () => console.log(await "foo");
foo();
console.log("bar");
Unless you literally mean awaiting non-awaitable type which...just doesn't make sense in any statically typed language?
Async is a decent syntax for simple tasks but that simplicity falls apart when composing larger structures and dealing with error handling and whatnot. I find it more difficult to understand what's going on compared to explicit threading.
Do you have a concrete example? It has just never really been an issue for me since async/await (callback hell was a thing though).
const defer = f => ({ [Symbol.dispose]: f })
using defer(() => cleanup())
That only just occurred to me. To everybody else who finds it completely obvious, "well done", but it seemed worthy of mention nonetheless.

This is notably necessary for scope-bridging and conditional registration, as `using` is block-scoped, so
if (condition) {
using x = { [Symbol.dispose]: cleanup }
} // cleanup is called here
But because `using` is a variant of `const`, which requires an initialisation value that it registers immediately, this will fail:

using x; // SyntaxError: using missing initialiser
if (condition) {
x = { [Symbol.dispose]: cleanup };
}
and so will this:

using x = { [Symbol.dispose]() {} };
if (condition) {
// TypeError: assignment to using variable
x = { [Symbol.dispose]: cleanup }
}
Instead, you'd write:

using x = new DisposableStack;
if (condition) {
x.defer(cleanup)
}
Similarly, if you want to acquire a resource in a block (conditionally or not) but want the cleanup to happen at the function level, you'd create a stack at the function toplevel, then add your disposables or callbacks to it as you go.

class Connector {
constructor() {
using stack = new DisposableStack;
// Foo and Bar are both disposable
this.foo = stack.use(new Foo());
this.bar = stack.use(new Bar());
this.stack = stack.move();
}
[Symbol.dispose]() {
this.stack.dispose();
}
}
In this example you want to ensure that if the constructor errors partway through, then any resources already allocated get cleaned up, but if it completes successfully, then resources should only get cleaned up once the instance itself gets cleaned up.

The problem in that case is if the current function can acquire disposables and then error:
function thing(stack) {
const f = stack.use(new File(...));
const g = stack.use(new File(...));
if (something) {
throw new Error
}
// do more stuff
return someObject(f, g);
}
rather than being released on exit, the files will only be released when the parent decides to dispose of its stack.

So what you do instead is use a local stack, and before returning successful control you `move` the disposables from the local stack to the parent's, which avoids temporal holes:
function thing(stack) {
const local = new DisposableStack;
const f = local.use(new File(...));
const g = local.use(new File(...));
if (something) {
throw new Error
}
// do more stuff
stack.use(local.move());
return someObject(f, g);
}
Although in that case you would probably `move` the stack into `someObject` itself as it takes ownership of the disposables, and have the caller `using` that:

function thing() {
const local = new DisposableStack;
const f = local.use(new File(...));
const g = local.use(new File(...));
if (something) {
throw new Error
}
// do more stuff
return someObject(local.move(), f, g);
}
In essence, `DisposableStack#move` is a way to emulate RAII's lifetime-based resource management, or the error-only defers some languages have.

TL;DR: the problem if you just pass the DisposableStack that you're working with is that it's either a `using` variable (in which case it will be disposed automatically when your function finishes, even if you've not actually finished with the stack), or it isn't (in which case, if an error gets thrown while setting up the stack, the resources won't be disposed of properly).
`.move()` allows you to create a DisposableStack that's a kind of sacrificial lamb: if something goes wrong, it'll dispose of all of its contents automatically, but if nothing goes wrong, you can empty it and pass the contents somewhere else as a safe operation, and then let it get disposed whenever.
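For anyone without a DisposableStack-capable runtime, the ownership-transfer semantics can be sketched with a hypothetical stand-in (MiniStack and the Symbol.for("dispose") key are inventions here, not the real API):

```javascript
// Hypothetical MiniStack: a toy stand-in for DisposableStack (not the real
// spec class), just to illustrate the use/move/dispose contract.
class MiniStack {
  #callbacks = [];
  use(resource) {
    // Assumes a made-up Symbol.for("dispose") key so this runs on any Node.
    this.#callbacks.push(() => resource[Symbol.for("dispose")]());
    return resource;
  }
  move() {
    const next = new MiniStack();
    next.#callbacks = this.#callbacks;
    this.#callbacks = []; // emptied: disposing `this` is now a no-op
    return next;
  }
  dispose() {
    for (const cb of this.#callbacks.reverse()) cb(); // LIFO, like the spec
    this.#callbacks = [];
  }
}

// The sacrificial-lamb pattern: if setup throws, disposing `local` cleans
// up; if it succeeds, move() transfers ownership and `local` becomes inert.
const log = [];
const resource = { [Symbol.for("dispose")]: () => log.push("closed") };
const local = new MiniStack();
local.use(resource);
const moved = local.move();
local.dispose();         // no-op: contents were moved
console.log(log.length); // 0
moved.dispose();
console.log(log);        // ["closed"]
```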
> Integration of [Symbol.dispose] and [Symbol.asyncDispose] in web APIs like streams may happen in the future, so developers do not have to write the manual wrapper object.
So for the foreseeable future, you have a situation where some APIs and libraries support the feature, but others - the majority - don't.
So you can either write your code as a complicated mix of "using" directives and try/catch blocks - or you can just ignore the feature and use try/catch for everything, which will result in code that is far easier to understand.
I fear this feature has a high risk of getting a "not practically usable" reputation (because right now that's what it is) which will be difficult to undo even when the feature eventually has enough support to be usable.
Which would be a real shame, as it does solve a real problem and the design itself looks well thought out.
Developers are quite used to writing small wrappers around web APIs anyways since improvement to them comes very slowly, and a small wrapper is often a lesser evil compared to polyfills; or the browser API is just annoying on the typical use path so of course you want something a little different.
At least, I personally have never seen a new language feature that seemed useful and thought to myself "wow, this is going to be hard to use".
I suspect it's going to be less common in frontend code, because frontend code normally has its own lifecycle/cleanup management systems, but I can imagine it still being useful in a few places. I'd also like to see a few more testing libraries implement these symbols. But I suspect, due to the prevalence of support in backend code, that will all come with time.
using disposer = new DisposableStack;
const resource = disposer.adopt(new Resource, r => r.close());
This is still simpler than try/catch, especially if you have multiple resources, so it can be adopted as soon as your runtime supports the new syntax, without needing to wait for existing resources to update.

import { SomeStreamClass as SomeStreamClass_ } from "some/library"
export class SomeStreamClass extends SomeStreamClass_ {
[someSymbol] (...) { ... }
...
}
I have not blown my foot off yet with this approach but, uh, no warranty, express or implied. It's been working excellently for me so far though.
So far I've only ever been using a private symbol that only exists within the codebase in question (and is then exported to other parts of said codebase as required).
If I ever decide to generalise the approach a bit, I'll hopefully remember to do precisely what you describe.
Possibly with the addition of providing an "I am overriding this deliberately" flag that blows up if it doesn't already have said symbol.
But for the moment, the maximally dumbass approach in my original post is DTRT for me so far.
function DisposableImageBitmap(bitmap) {
bitmap[Symbol.dispose] ??= () => bitmap.close()
return bitmap
}

using bitmap = DisposableImageBitmap(await createImageBitmap(image))
Or if you want to ensure all ImageBitmap instances conform to Disposable:

ImageBitmap.prototype[Symbol.dispose] = function() { this.close() }
But this does leak the "trait conformance" globally; it's unsafe because we don't know if some other code wants their implementation of dispose injected into this class, if we're fighting, if some key iteration is going to get confused, etc.

How would a protocol work here? To say something like "oh, in this file or scope, `ImageBitmap.prototype[Symbol.dispose]` should be value `x`, but it should be the usual `undefined` outside this scope"?
(edit: changed to ImageBitmap)
Welcome to the web. This has pretty much been the case since JavaScript 1.1 created the situation where existing code used shims for things we wanted, and newer code didn't because it had become part of the language.
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
[Symbol.dispose]()
is very weird in my eyes. This looks like an array which is called like a function and the array contains a method-handle.
What is this syntax called? I would like to learn more about it.
https://www.samanthaming.com/tidbits/37-dynamic-property-nam...
Also in the example is method shorthand:
https://www.samanthaming.com/tidbits/5-concise-method-syntax...
Since symbols cannot be referred to by strings, you can combine the two.
Basically, there isn't any new syntax here.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Using a Symbol as the method name disambiguates this method from any previously-defined methods.
In other words, by using a Symbol for the method name (and not using a string), it's impossible to "name collide" on this new API, which would accidentally mark a class as disposable.
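The uniqueness is easy to see in plain Node:

```javascript
// Two symbols with the same description are still distinct keys, so a
// library-defined symbol can never collide with user string properties.
const a = Symbol("dispose");
const b = Symbol("dispose");
console.log(a === b); // false

const obj = { dispose: "just a string property", [a]: "symbol-keyed" };
console.log(obj.dispose); // "just a string property"
console.log(obj[a]);      // "symbol-keyed"
console.log(obj[b]);      // undefined: different symbol, different key
```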
The premise is that you can always access an object's properties using indexing syntax as well as the normal dot syntax. So `object.foo` is the equivalent of `object["foo"]` or `object["f" + "o" + "o"]` (because the value inside the square brackets can be any expression). And if `object.foo` is a method, you can do `object.foo()` or `object ["foo"]()` or whatever else as well.
Normally, the key expression will be coerced to a string, so if you did `object[2]`, this would be the equivalent of `object["2"]`. But there is an exception for symbols, which are a kind of unique primitive value that is always compared by identity. Symbols can be used as keys just as they are, so if you do something like
const obj = {}
obj.foo = "bar"
obj[Symbol("foo")] = "bar"
console.log(obj)
You should see in the console that this object has a special key that is a symbol, as well as the normal "foo" attribute.

The last piece of the puzzle is that there are certain "well known symbols" that are mostly used for extending an object's behaviour, a bit like __dunder__ methods in Python. Symbol.dispose is one of these - it's a symbol that is globally accessible and always means the same thing, and can be used to define some new functionality without breaking backwards compatibility.
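For instance, the well-known Symbol.iterator is what for...of and spread consult, so implementing it opts a plain object into iteration (a sketch in plain Node):

```javascript
// A plain object made iterable by defining the well-known Symbol.iterator.
const range = {
  from: 1,
  to: 3,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      next: () => current <= last
        ? { value: current++, done: false }
        : { value: undefined, done: true },
    };
  },
};
console.log([...range]); // [1, 2, 3]
```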
I hope that helps, feel free to ask more questions.
const key = "foo";
const obj = { [key]: "bar" };
console.log(obj.foo); // prints "bar"
Someone more knowledgeable will join in soon, but I'm pretty sure it was derived from:
const x = { age: 42 };
x[Symbol("name")] = "joe"; // <--- this
so it makes a lot of sense.

const o = {}
o["foo"] = function(){}
o["foo"]()
let key = "foo"
o[key]()
key = Symbol.dispose ?? Symbol.for('dispose')
o[key]()
o[Symbol.dispose]()
If the code is
obj.function()
they are notating it as `function()`.

If the code is
obj[Symbol.dispose]()
they are notating it as `[Symbol.dispose]()`.

Symbol.dispose is a symbol key.
> obj[Symbol.dispose]()
> they are notating it as `[Symbol.dispose]()`.
So
`obj[Symbol.dispose]()` is the same as `[Symbol.dispose]()`? That doesn't seem right, because we might also have `obj2` or `obj3`. How does JavaScript know that `[Symbol.dispose]()` refers to a specific object?
The parens are just the method definition shorthand, so it’s a shorter way of writing
[Symbol.dispose]: function()
Bracketing was introduced because JavaScript was originally defined to use bare keys, so

foo: bar

defines an entry with the key `"foo"`, rather than an entry whose key is the value of the variable `foo`. Thus to get the latter you use

[foo]: bar
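Concretely:

```javascript
// Bare keys are taken literally; bracketed keys are evaluated expressions.
const foo = "actualKey";
const bare = { foo: 1 };       // key is the literal string "foo"
const computed = { [foo]: 1 }; // key is the variable's value, "actualKey"

console.log("foo" in bare);           // true
console.log("actualKey" in computed); // true
console.log("foo" in computed);       // false
```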
myObj["myProperty"]
If it's a function then it could be invoked,
myObj["myProperty"]()
If the key was a symbol,
myObj[theSymbol]()
Library that leverages structured concurrency: https://frontside.com/effection
async (() => (e) { try { await doSomething(); while (!done) { ({ done, value } = await reader.read()); } promise .then(goodA, badA) .then(goodB, badB) .catch((err) => { console.error(err); } catch { } finally { using stack = new DisposableStack(); stack.defer(() => console.log("done.")); } });
Intentionally or unconsciously, much of the work is about ensuring there will always be demand for more work. Or else there's a risk of naturally falling apart over time. Why would you build it that way!?
https://soundcloud.com/snickerbockers/corporate-it-webdevs-f...
Ok, point taken.
(async (e) => {
await doSomething()
while (!done) {
({ done, value } = await reader.read())
}
promise
.then(goodA, badA)
.then(goodB, badB)
.catch(err => console.log(err))
.finally(() => {
using stack = new DisposableStack()
stack.defer(() => console.log('done.'))
})
})()
But more importantly, this isn't even close to anything a reasonable JS dev would ever write.

1. It's not typical to mix await and while(!done); I can't imagine what library actually needs this. You usually use one or the other, and it's almost always just await:
await doSomething()
const value = await readFully(reader)
2. If you're already inside an async IIFE, you don't need promise chains. Just await the stuff as needed, unless promise chains make the code shorter and cleaner, e.g.:

const json = await fetch(url).then(r => r.json())
3. Well-designed JS libraries don't usually stack promise handlers like the {good,bad}{A,B} functions you implied. You usually just write code and have a top-level exception handler:

using stack = new DisposableStack()
stack.defer(() => console.log('done.'))
try {
const goodA = await promise
const goodB = await goodA
const goodC = await goodB
return goodC
}
catch(e) {
myLogErr(e)
}
// finally isn't needed, that's the whole point of DisposableStack
4. We don't usually need async IIFEs anymore, so the outer layer can just go away.

The "example code" (if we can call it that) just used goodA and goodB because it tried to make things look crazy, by writing complete nonsense: none of that is necessary, we can just use a single, awaiting return:
try {
return await promise;
} catch(e) {
handleYourExceptions(e);
}
Done. "await" waits until whatever it's working with is no longer a promise, automatically either resolving the entire chain or, if the chain throws, moving us over to the exception-catching part of our code.

People write Haskell for a living, after all.
I prefer Factor[1] over Forth, however. Maybe you'll like it!
> 2 3 + 4 * .
There's a lot more there to mentally parse than:
> (2 + 3) * 4
It's the same as when Rob Pike decries syntax highlighting. No, it's very useful to me. I can read much quicker with it.
It's the same principle behind how we use heuristics to much more quickly read words by sipmly looking at the begninnings and ends of each word, and most of the time don't even notice typos.
Some people prefer:
2 3 + 4 *
Some other people prefer: (* 4 (+ 2 3))
And some other people prefer: (2 + 3) * 4
I personally find the last one easier to read or understand, but I have had my fair share of Common Lisp and Factor. :D

Syntax highlighting is useful for many people, including me. I can read much quicker with it, too. I know of some people who write Common Lisp without syntax highlighting, though. :)
2 .... hundreds of words .... +
where the operands of + are 2 and the result produced by the hundreds of words! Which could also be:
.... hundreds of words .... 2 +
which would be a lot easier to read!

If you're writing Forth, it likely behooves you to try to adhere to the latter style of chaining, where you take everything computed thus far and apply a small operation to it with a simple operand. Not sure if it's always possible:
... complex numerator ... ... complex denominator ... /
Now find the division between the numerator and denominator among all those words.

Yes, this is why you are supposed to have short words. You should factor out the complex parts into short, self-contained, and descriptively named words, which is going to make your code much easier to read, test, and maintain.
For example:
Instead of:
a b + c d + * e f + g h + * /
You should probably have:

: compute-numerator a b + c d + * ;
: compute-denominator e f + g h + * ;
: compute-ratio compute-numerator compute-denominator / ;
Most (if not all) Forth books mention this as well.

What does the compiled version of `: compute-numerator a b + c d + * ;` look like? I imagine at the very least that there has to be a call to some run-time support routine to insert a compiled thunk under a name into the dictionary.
If you're concerned about polluting the global dictionary, a common idiom is (which you already know):
\ Define and forget immediately if temporary
: tmp-numerator a b + c d + * ;
tmp-numerator
FORGET tmp-numerator
or alternatively, you can isolate temporary definitions in a separate vocabulary:

VOCABULARY TMP-WORDS
TMP-WORDS DEFINITIONS
: numerator 1 2 + 3 4 + * ;
: denominator 5 6 + 7 8 + * ;
ONLY FORTH ALSO TMP-WORDS ALSO DEFINITIONS
: compute-ratio numerator denominator / . ;
compute-ratio
ONLY FORTH DEFINITIONS
TL;DR: Defining intermediate words adds entries to the dictionary, but this happens at compile time, not runtime. There's no additional runtime overhead. Naming conventions, FORGET, or vocabularies can mitigate dictionary pollution/clutter, but still, factoring remains the standard idiom in Forth.

Note: In some native-code-compiling or JIT-based Forth implementations, definitions may generate machine code or runtime objects rather than the simple CFA chains I mentioned, but even in these cases, compilation occurs before runtime execution, and no dynamic thunk insertion happens during word calls.
I hope I understood your comment correctly. Please let me know!
async (() => (e) {
try { await doSomething();
while (!done) { ({ done, value } = await reader.read()); }
promise
.then(goodA, badA)
.then(goodB, badB)
.catch((err) => { console.error(err); }
catch { }
finally { using stack = new DisposableStack();
stack.defer(() => console.log("done.")); }
});
(indentation preserved as posted by OP – I don't understand how somebody can code like this either :-)

JS, like HTML, has the special property that you effectively cannot make backwards-incompatible changes ever, because that scrappy webshop or router UI that was last updated in the 90s still has to work.
But this means that the language is more like an archeological site with different layers of ruins and a modern city built on top of it. Don't use all the features only because they are available.
The JavaScript syntax wasn't great to begin with, and as features are added to the language it sort of has to happen within the context of what's possible. It's also becoming a fairly large language, one without a standard library, so things just sort of hang out in a global namespace. It's honestly not too dissimilar to PHP, where the language just grew more and more functions.
As others point out there's also some resemblance to C#. The problem is that parts of the more modern C# are also a confusing mess, unless you're a seasoned C# developer. The new syntax features aren't bad, and developers are obviously going to use them to implement all sorts of things, but if you're new to the language they feel like magical incantations. They are harder to read, harder to follow, and don't look like anything you know from other languages. Nor are they simple enough that you can just sort of accept them, type the magical number of brackets and silly characters, and accept that it somehow works. You frequently have no idea of what you just did or why something works.
I feel like JavaScript has reached the point where it's a living language, but because of its initial implementation and inherent limits, all these great features feel misplaced and bolted on, and provide an obstacle for new or less experienced developers. JavaScript has become an enterprise language, with all the negative consequences and baggage that entails. It's great that we're not stuck with half a language and we can do more modern stuff; it just means that we can't expect people to easily pick up the language anymore.
Do you have any examples?
Very specifically, I was also looking into JWT authentication in ASP.NET Core and found the whole thing really tricky to wrap my head around. That's more of a library issue, but I think many of the usage examples end up being a bunch of spaghetti code.
Have you never worked with any other language which lets you do these?
var say = (string s) => Console.WriteLine(s);
or

struct Lease(DateTime expiration)
{
public bool HasExpired => expiration < DateTime.UtcNow;
}
or

var num = obj switch
{
"hello" => 42,
1337 => 41,
Lease { HasExpired: false } => 100,
_ => 0
};
You'll see forms of it in practically every (good) modern language. How on Earth is it confusing?

Authentication is generally a difficult subject; it is a weaker(-ish) aspect of (otherwise top-of-the-line) ASP.NET Core. But it has exactly zero to do with C#.
That JavaScript has progressed so much in some ways and yet is still missing basic things like parameter types is crazy to me.
The hard part is that types are very hard/complex, with many tradeoffs.
A standard that maintains strong backwards compatibility and is interpreted consistently is hard (see Python).
>> JS has progressed but it still lacks types, it seems crazy to do serious programming work in a language that doesn’t have such basic things
> Serious programming work in JS is done in TypeScript
https://docs.rs/isodd/latest/isodd/
https://docs.rs/leftpad/latest/leftpad/
I bet you can find something similar in all modern package managers.
> This crate is not used as a dependency in any other crate on crates.io.
Like Bash, Python, Ruby?
That ruby doesn't have types is also bizarre to me, but Ruby also sees monkey patching code as a positive thing too so I've given up trying to understand its appeal.
And of course, actually knowing the language you use every minute of the day because that's your job helps, too, so you know to rewrite that nonsense to something normal. Because mixing async/await and .then.catch is ridiculous, and that while loop should never be anywhere near a real code base unless you want to get yelled at for landing code that seems intentionally written to go into a spin loop under not-even-remotely unusual circumstances.
Maybe not love it, but you really won't have a choice.
Hence many either had, or ended up growing, means of lexical (scope-based) resource cleanup, whether:
- HoF-based (Smalltalk, Haskell, Ruby)
- dedicated scope / value hook (Python[1], C#, Java)
- callback registration (Go, Swift)
[1]: Python originally used destructors thanks to a refcounting GC, but the combination of alternate non-refcounted implementations, refcount cycles, and resources like locks not having guards (and not wanting to add those with no clear utility) led to the introduction of context managers
E.g. in Ruby you can lock/unlock a mutex, but the normal way to do it would be to pass a block to `Mutex#synchronize` which is essentially just
def synchronize
lock
begin
yield
ensure
unlock
end
end
and called as:

lock.synchronize {
# protected code here
}
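The JavaScript analogue of that HoF pattern is just a wrapper function with try/finally; a sketch using a made-up ToyLock (purely illustrative, not a real API):

```javascript
// HoF-based cleanup: the wrapper guarantees unlock runs whether the body
// returns normally or throws. ToyLock is an invented illustration.
class ToyLock {
  locked = false;
  lock() { this.locked = true; }
  unlock() { this.locked = false; }
  synchronize(body) {
    this.lock();
    try {
      return body();
    } finally {
      this.unlock(); // runs on both normal return and throw
    }
  }
}

const mutex = new ToyLock();
try {
  mutex.synchronize(() => { throw new Error("boom"); });
} catch {}
console.log(mutex.locked); // false: released despite the throw
```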
The dispose methods, on the other hand, are called when the variable goes out of scope, which is much more predictable. You can rely on, for example, a file being closed or a lock being released before your method returns.
JavaScript is already explicit about what is synchronous versus asynchronous everywhere else, and this is no exception. Your method needs to wait for disposing to complete, so if disposing is asynchronous, your method must be asynchronous as well. It does get a bit annoying though that you end up with a double await, as in `await using a = await b()` if you're not used to that syntax.
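Roughly, `await using a = await b()` desugars to a try/finally that awaits the async disposer. A manual sketch (the `??` fallback mirrors the idiom used elsewhere in this thread, for runtimes that predate Symbol.asyncDispose; `open` is a made-up resource factory):

```javascript
// Fall back to a registered symbol on runtimes without Symbol.asyncDispose.
const kAsyncDispose = Symbol.asyncDispose ?? Symbol.for("asyncDispose");

// A made-up async resource whose cleanup itself must be awaited.
async function open() {
  return {
    closed: false,
    async [kAsyncDispose]() { this.closed = true; },
  };
}

async function main() {
  const a = await open(); // first await: acquiring the resource
  try {
    // ... use a ...
  } finally {
    await a[kAsyncDispose](); // second await: disposing it
  }
  return a.closed;
}

main().then(closed => console.log(closed)); // true
```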
As for using symbols - that's the same as other functionality added over time, such as iterator. It gives a nice way for the support to be added in a backwards-compatible way. And it's mostly only library authors dealing with the symbols - a typical app developer never has to touch it directly.
https://waspdev.com/articles/2025-04-09/features-that-every-... https://waspdev.com/articles/2025-04-09/features-that-every-...
But even Mozilla doesn't recommend using them because they're quite unpredictable and might work differently in different engines.
function processData(response) {
const reader = response.body.getReader();
try {
reader.read()
} finally {
reader.releaseLock();
}
}
So that the read lock is lifted even if reader.read() throws an error.

Does this only hold for long-running processes? In a browser environment, or in a CLI script that terminates when an error is thrown, would the lock be lifted when the process exits?
When a process is forcibly terminated, the behavior is inherently outside the scope of the ECMAScript specification, because at that point the interpreter cannot take any further actions.
So what happens depends on what kind of object you're talking about. The example in the article is talking about a "stream" from the web platform streams spec. A stream, in this sense, is a JS object that only exists within a JS interpreter. If the JS interpreter goes away, then it's meaningless to ask whether the lock is locked or unlocked, because the lock no longer exists.
If you were talking about some kind of OS-allocated resource (e.g. allocated memory or file descriptors), then there is generally some kind of OS-provided cleanup when a process terminates, no matter how the termination happens, even if the process itself takes no action. But of course the details are platform-specific.
The order of execution for unhandled errors is well-defined. The error unwinds up the call stack, running catch and finally blocks, and if it gets back to the event loop, it's often dispatched by the system to an "uncaught exception" (sync context) or "unhandled rejection" (async context) handler function. In Node.js, the default error handler exits the process, but you can substitute your own behavior, which is common for long-running servers.
All that is to say: yes, this does work, since the termination handler is called at the top of the stack, after the stack unwinds through the finally blocks.
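The unwind-through-finally ordering is easy to confirm:

```javascript
// Finally blocks run as the error unwinds, before any outer handler sees it.
const order = [];
function inner() {
  try {
    throw new Error("boom");
  } finally {
    order.push("finally"); // runs first, during unwinding
  }
}
try {
  inner();
} catch (e) {
  order.push("caught: " + e.message);
}
console.log(order); // ["finally", "caught: boom"]
```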
I adopted it for quickjs-emscripten (my QuickJS-in-wasm thingy for untrusted code in the browser) but found that differing implementations between the TypeScript compiler and Babel led to it not being reliably usable for my consumers. I ended up writing this code to try to work around the polyfill issues; my compiler will use Symbol.for('Symbol.dispose'), but other compilers may choose a different symbol...
https://github.com/justjake/quickjs-emscripten/blob/aa48b619...
There is exactly zero reason to introduce a new variable binding for explicit resource management.
And now it doesn't support destructuring, etc.
It should have been
using (const a = resource()) {
}
Similar to for-of.

[1] https://github.com/tc39/proposal-explicit-resource-managemen...
[2] https://github.com/tc39/proposal-explicit-resource-managemen...
Yes, and this trivially solves that!
for (const { prop1, prop2 } of iterable)
^ ^
Destructuring Iterable
using (const { prop1, prop2 } of disposable)
^ ^
Destructuring Disposable
No ambiguity. Very clear.

> That is introducing a new variable binding
No. const, let, var are variable bindings with rules about scope and mutability.
using adds to that list. And for the life of me I can't remember what it says about mutability.
using-of would keep that set.
> strictly more verbose in all cases for no particular benefit.
See above.
Additional benefit is that the lifetime of the object is more clear, and it's encouraged to be cleaned up more quickly. Rather than buried in a block with 50 lines before and 50 lines after.
> This was already litigated to death and you can see the author's response in your links.
Absolutely. The owners unfortunately decided to move forward with it.
Despite being awkward, subpar.
Especially as in the one case where it was useful to create an explicit scope, I could do that with regular blocks, something like
console.log("before")
{
using resource = foo()
console.log("during", resource)
}
console.log("after")
Having used Python's `with` blocks a lot, I've found I much prefer JavaScript's approach of not creating a separate scope and instead using the existing scoping mechanisms.

try (var reader = getReader()) {
// do read
} // auto-close
The original proposal references all of Python's context manager, Java's try-with-resource, and C#'s using statement and declaration: https://github.com/tc39/proposal-explicit-resource-managemen...
public void AMethod() {
//some code
using var stream = thing.GetStream();
//some other code
var x = stream.ReadToEnd();
//some more code not using the stream
} //the stream is disposed when the scope exits, even on error (if it was initialized)
You can still do the wrap if you need more fine-grained control, or do anything else in the finally.

You can even nest them like this:
using var conn = new SqlConnection(connString);
using var cmd = new SqlCommand(query, conn);
conn.Open();
cmd.ExecuteSql();
Edit: hadn't read the whole article, the javascript version is pretty good!

* If you accidentally use `let` or `const` instead of `using`, everything will work but silently leak resources.
* Objects that contain resources need to manually define `dispose` and call it on their children. Forgetting to do so will lead to resource leaks.
It looks like defer dressed up to resemble RAII.
https://github.com/typescript-eslint/typescript-eslint/issue...
https://github.com/tc39/proposal-explicit-resource-managemen...
I imagine there will eventually be lint rules for this somewhere and many of those using such a modern feature are likely to be using static analysis via eslint to help mitigate the risks here, but until it’s more established and understood and lint rules are fleshed out and widely adopted, there is risk here for sure.
https://github.com/typescript-eslint/typescript-eslint/issue...
To me it seems a bit like popular lint libraries just going ahead and adding the rule would make a big difference here
Anyway, I didn't say it was "inferior to defer", I said that it seemed more error-prone than RAII in languages like Rust and C++.
Edit: Sorry if I'm horribly wrong (I don't use C#) but the relevant code analysis rules look like CA2000 and CA2213.
It is, but RAII really isn't an option if you have an advanced GC, as it is lifetime-based and requires deterministic destruction of individual objects, and much of the performance of an advanced GC comes from not doing that.
Most GC'd language have some sort of finalizers (so does javascript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...) but those are unreliable and often have subtle footguns when used for cleanup.
Further C# has destructors that get used as a last resort effort on native resources like file descriptors.
True, I was going to mention that, but I saw that JS also has "finalization registries", which seem to provide finalizer support in JS, so I figured it wasn't a fundamental difference.
The problem they are trying to solve is that the programmer could forget to wrap an object creation with try. But their solution is just kicking the can down the road, because now the programmer could forget to write "using"!
I was thinking that a much better solution would be to simply add a no-op default implementation of dispose(), and call it whenever any object hits end-of-scope with refcount=1, and drop the "using" keyword entirely, since that way programmers couldn't forget to write "using". But then I remembered that JavaScript doesn't have refcounts, and we can't assume that function calls to which the object has been passed have not kept references to it, expecting it to still exist in its undisposed state later.
OTOH, if there really is no "nice" solution to detecting this kind of "escape", it means that, under the new system, writing "using" must be dangerous -- it can lead to dispose() being called when some function call stored a reference to the object somewhere, expecting it to still exist in its undisposed state later.
Why might a programmer persuade themselves of that? Because otherwise "using" has no benefit at all, beyond a slightly sweeter syntax for wrapping the function body in "try ... catch (x) { for (o of objs) o.dispose(); }”.
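For reference, a single `using` declaration behaves roughly like an explicit try/finally around the rest of the block; this is a sketch of that desugaring, not the exact spec semantics (the `getResource`/`trace` names are illustrative):

```javascript
Symbol.dispose ??= Symbol("Symbol.dispose");

// Hypothetical factory returning a disposable resource.
function getResource(trace) {
  return { [Symbol.dispose]() { trace.push("disposed"); } };
}

// `using res = getResource(trace);` followed by the body behaves
// roughly like this explicit try/finally:
function demo(trace) {
  const res = getResource(trace);
  try {
    trace.push("body");
    // ... use res ...
  } finally {
    res?.[Symbol.dispose](); // runs even if the body throws
  }
}

const trace = [];
demo(trace);
console.log(trace); // ["body", "disposed"]
```

The sugar buys ordering and exception safety for free, but only if the programmer remembers to write `using` in the first place, which is the can-kicking being described above.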
> I feel it doesn't make sense to conflate resource management with garbage collection
Memory is just a resource, one that conveniently doesn't require anything to be done urgently when its lifetime ends (unlike, say, locks). GC is a system for managing that resource -- or any resource with the same non-urgency property. For example, you can imagine a GC-based resource management system for a pool of DB connections.
> You shouldn't assume you have the lock just because you have a reference to the manager resource.
Some languages make it necessary to code in this way, where you need to always check if something is in a valid state before doing something with it, but that's unfortunate, because there are languages (like C++, which is horrible in so many other ways) where you can maintain the invariant that, if you have a reference to a lock, the associated resource is locked. The general idea -- Make Invalid States Unrepresentable -- is a fantastic way to improve code quality, so we should always be looking for ways to incorporate it into languages that don't yet have it. Parse Don't Validate is the same underlying idea.
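A sketch of that invariant using the dispose protocol (the `Mutex` class and names here are illustrative, and the guard is disposed explicitly where `using` would do it implicitly): the protected value is only reachable through a live guard, so "has a reference but doesn't hold the lock" becomes unrepresentable.

```javascript
Symbol.dispose ??= Symbol("Symbol.dispose");

// Hypothetical single-threaded mutex: acquire() returns a guard that
// is the only path to the protected value.
class Mutex {
  constructor(value) { this._value = value; this._locked = false; }
  acquire() {
    if (this._locked) throw new Error("already held");
    this._locked = true;
    return {
      value: this._value,                        // reachable only via the guard
      [Symbol.dispose]: () => { this._locked = false; },
    };
  }
}

const counter = { n: 0 };
const m = new Mutex(counter);
const guard = m.acquire();       // with `using guard = m.acquire()` ...
guard.value.n += 1;
guard[Symbol.dispose]();         // ... this release would be implicit
console.log(counter.n); // 1
```

This is only a structural sketch; real JS has no blocking locks, but the shape carries over to things like Web Locks or connection checkouts.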
Though another problem is that the spec does not clearly specify when an object may be collected or allow the programmer to control GC in any way, which means relying on FinalizationRegistry may lead to leaks/failure to finalize unused resources (bad, but sometimes tolerable) or worse, use-after-free bugs (outright fatal) – see e.g. https://github.com/tc39/ecma262/issues/2650
They’re basically a nice convenience for noncritical resource cleanup. You can’t rely on them.
I was replying to this:
> would very explicitly cause the GC to have semantic effects, and I think that goes strongly against the JS philosophy.
Do you disagree that a finalizer provides for exactly that and thus can not be "strongly against the JS philosophy"?
> For this reason, the W3C TAG Design Principles recommend against creating APIs that expose garbage collection. It's best if WeakRef objects and FinalizationRegistry objects are used as a way to avoid excess memory usage, or as a backstop against certain bugs, rather than as a normal way to clean up external resources or observe what's allocated.
And the point that this kind of thing is against the JS philosophy is pretty explicit:
using readerResource = {
reader: response.body.getReader(),
[Symbol.dispose]() {
this.reader.releaseLock();
},
};
First, I had to refresh my memory on the new object definition shorthand: in short, you can use a variable or expression to define a key name by using brackets, like: let key = "foo"; { [key]: "bar" }, and secondly you don't have to write { "baz": function(p) { ... } }, you can instead write { baz(p) { ... } }. OK, got it.
So, if I'm looking at the above example correctly, they're implementing what is essentially an interface-based definition of a new "resource" object. (If it walks like a duck, and quacks...)
To make a "resource", you'll tack on a new magical method to your POJO, identified not with a standard name (like Object.constructor() or Object.__proto__), but with a name that is a result of whatever "Symbol.dispose" evaluates to. Thus the above definition of { [Symbol.dispose]() {...} }, which apparently the "using" keyword will call when the object goes out of scope.
Do I understand that all correctly?
I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
Using Symbol is just weird. Disposing a resource has nothing to do with Symbol's core purpose of creating unique identifiers. Plus it looks fugly and is definitely confusing.
Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
Maybe I'm missing something.
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Here are the well-known symbols that my NodeJS 22 offers when I `Symbol.<tab>`:
Symbol.asyncDispose
Symbol.asyncIterator
Symbol.dispose
Symbol.hasInstance
Symbol.isConcatSpreadable
Symbol.iterator
Symbol.keyFor
Symbol.length
Symbol.match
Symbol.matchAll
Symbol.replace
Symbol.search
Symbol.species
Symbol.split
Symbol.toPrimitive
Symbol.toStringTag
Symbol.unscopables
more specifically, javascript will call the [Symbol.dispose] when it detects you are exiting the scope of a "using" declaration.
__proto__ was a terrible mistake. Google “prototype pollution”; there are too many examples to link. In a duck-typed language where the main mechanism for data deserialization is JSON.parse(), you can’t trust the value of any plain string key.
those methods could conflict with existing methods already used in other ways if you’d want to make an existing class a subclass of Resource.
The core purpose and original reason why Symbol was introduced in JS is the ability to create non-conflicting but well known / standard names, because the language had originally reserved no namespace for such and thus there was no way to know any name would be available (and not already monkey patched onto existing types, including native types).
> Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
`Symbol.iterator` called by `for...of` is literally the original use case for symbols.
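Concretely, `for...of` looks up the well-known symbol rather than any string-named method, which is exactly the pattern `using` reuses with `Symbol.dispose`. A minimal custom iterable:

```javascript
// An object made iterable by defining the well-known symbol method.
const countdown = {
  from: 3,
  [Symbol.iterator]() {
    let n = this.from;
    return {
      // The iterator protocol: next() returns { value, done }.
      next: () => (n > 0 ? { value: n--, done: false }
                         : { value: undefined, done: true }),
    };
  },
};

const seen = [];
for (const x of countdown) seen.push(x); // keyword drives the symbol lookup
console.log(seen); // [3, 2, 1]
```

No string-keyed method like `countdown.iterator()` could be used here without risking collisions with existing properties, which is the whole point of symbols.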
> I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
Genuinely: what are you talking about.
They added the get and set keywords to plain JS objects to identify getters and setters. So just add a dispose keyword. Like this:
const obj = {
log: ["a", "b", "c"],
get first() {
return this.log[0];
},
dispose cleanup() {
// close resource
}
};
Much cleaner.
And you’re now wasting an entire keyword on a property with a fixed name, and code bases which already use that name with different semantics are unable to add `using` compatibility.
LOL. What a great line.
It's getting pretty ridiculous, I agree. I don't understand the need for so many shorthands for one. All they do is make code illegible for the sake of saving a few keystrokes. Using the Symbol object that way is just ugly.
`[Symbol.dispose]()` threw me off
Not really. Both are ways to perform deterministic resource management, but RAII is a branch of deterministic resource management which most GC'd languages can not use as they don't have deterministic object lifetimes.
This is inspired by similar constructs in Java, C#, and Python (and in fact lifted from C# with some adaptation to JS's capabilities), and insofar as those were related to RAII, they were a step away from it, at least when it comes to Python: CPython historically did its resource management using destructors which would mostly be reliably and deterministically called on refcount falling to zero.
However,
1. this was an issue for non-refcounted alternative implementations of Python
2. this was an issue for the possibility of an eventual (if unlikely) move away from refcounting in CPython
3. destructors interact in awkward ways with reference cycles
4. even in a reference-counted language, destructors share common finaliser issues like object resurrection
Thus Python ended up introducing context managers as a means of deterministic resource management, and issuing guidance to avoid relying on refcounting and RAII style management.
It's not very ergonomic so I never tried to use it anywhere.
So in this case, rather than a generic `using` built on the even more generic `try/finally`, you should probably have built a `withFile` callback. It's a bit more repetitive, but because you know exactly what you're working with it's a lot less error prone, and you don't need to hope there's a ready-made protocol.
It also provides the opportunity of upgrading the entire thing e.g. because `withFile` would be specialised for file interaction it would be able to wrap all file operations as promise-based methods instead of having to mix promises and legacy callbacks.
Granted, you could also just import * from './low-level.wat' (or .c, and compile it automatically to WASM)
`defer` also doesn't need to change the API of thousands of objects to use it; instead now you have to add a method to any resource-like object, and for things that are not objects, you can't even use this feature.
Neither statement is true.
> `defer` also doesn't need to change the API of thousands of objects to use it
Callbacks can trivially be bridged to `using`:
using _cb = {[Symbol.dispose]: yourCallbackHere};
There is also built-in support for cleanup callbacks via the proposal’s DisposableStack object.
> or for things that are not objects
This is javascript. Everything of note is an object. And again, callbacks can trivially be bridged to using
using disposer = new DisposableStack;
disposer.defer(yourCallbackHere);
So with using there's a little collection of language features to learn and use, and (probably more importantly), either app devs and library devs have to get on the same page with this at the same time, or app devs have to add a handful of boilerplate at each call site for wrappers or DisposableStacks.
On the frontend I suspect it'll take a bit longer to become ubiquitous, but I'm sure it'll happen soon enough.
`using` is mostly more convenient, because it registers cleanup without needing extra calls, unlike `defer`.
And of course you can trivially bridge callbacks, either by wrapping a function in a disposeable literal or by using the DisposableStack/AsyncDisposableStack utility types which the proposal also adds.
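To make the defer-bridging concrete, here is a minimal model of DisposableStack's `use`/`defer` semantics. This is a sketch of the proposal's described behavior (LIFO disposal), not the real built-in; the class name is made up to avoid clashing with runtimes that already ship it:

```javascript
Symbol.dispose ??= Symbol("Symbol.dispose");

// Minimal model of DisposableStack: collects cleanup work and, on
// disposal, runs it in reverse (LIFO) registration order.
class MiniDisposableStack {
  #callbacks = [];
  use(resource) {
    // Track a disposable resource and hand it back unchanged.
    this.#callbacks.push(() => resource[Symbol.dispose]());
    return resource;
  }
  defer(fn) {
    // Bridge a plain cleanup callback, go-defer style.
    this.#callbacks.push(fn);
  }
  [Symbol.dispose]() {
    while (this.#callbacks.length) this.#callbacks.pop()();
  }
}

const order = [];
const stack = new MiniDisposableStack();
stack.defer(() => order.push("first registered"));
stack.use({ [Symbol.dispose]: () => order.push("second registered") });
stack[Symbol.dispose](); // `using stack = ...` would do this implicitly
console.log(order); // ["second registered", "first registered"]
```

The LIFO order matters: resources acquired later often depend on ones acquired earlier, so they must be released first.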
async, await, let, var, const, try, catch, yield are all meaningful and precise keywords
"use" "using" on the other hand is not a precise word at all. To any non c# person it could be used to replace any of the above words!
If we keep going down these roads, Rust actually becomes the simpler language as it was designed with all of these goals instead of shoe-horning them back in.
I think JavaScript should remain simple. If we really need this functionality, we can bring in defer, but as a 1:1 copy of what is in golang. This in-between of Python and golang is too much for what JavaScript is supposed to be.
I definitely think that the web needs a second language with types, resource management and all sorts of structural guard rails. But continuing to hack into JavaScript is not it.
It IS the mother of all super villain computer languages.
We have to stop being hypocrites now.
JS: drop but we couldn't occupy a possibly taken name, Symbol for the win!
It's hilariously awkward.
You're about a decade late to the party?
That is the entire point of symbols and "well known symbols", and why they were introduced back in ES6.
Resource scoping is important feature. Context managers (in python) are literally bread and butter for everyday tasks.
It's awkward not because of Symbol, but because it introduces new syntax tied to existing implicit scopes. That's kind of fragile, based on Go experience. Explicit scoping is way more predictable.
(This paragraph is getting off topic, but still...) Below is my exact interface from a .d.ts file. The reason for that file is that I like typed languages (i.e. TypeScript), but I don't want to install stuff like node-js for such simple things. So I realised VS Code can/will check JS files as TS on the go, so in a few spots (like this) I needed to "type" something - and then I found some posts about the Svelte source code using JSDoc to type their code base instead of TypeScript. So that's basically what I've done here...
export global {
interface Window {
MyThing?: {remove: ()=>any}
}
}
So chances are that in the places you could use this feature, you've probably already got an "interface" for closing things when done (even if you haven't defined the interface in a type system).
It differs from try/finally, C# “using,” and Java try-with-resources in that it doesn’t require the to-be-disposed object to be declared at the start of the scope (although doing so arguably makes code easier to understand).
It differs from some sort of destructor in that the dispose call is tied to scope, not object lifecycle. Objects may outlive the scope if there are other references, and so these are different.
If you like golang’s defer then you might like this.
It's nothing like go's defer: Go's defer is function-scoped and registers a callback, using is block-scoped and registers an object with a well defined protocol.
> It differs from [...] c# “using,”
It's pretty much a direct copy of C#'s `using` declaration (as opposed to the using statement): https://learn.microsoft.com/en-us/dotnet/csharp/language-ref....
This can also be seen from the proposal itself (https://github.com/tc39/proposal-explicit-resource-managemen...) which cites C#'s using statement and declaration, Java's try-with-resource, and Python's context managers as prior art, but only mentions Go's defer as something you can emulate via DisposableStack and AsyncDisposableStack (types which are specifically inspired by Python's ExitStack).