If we think we need a more thoroughly virtualized machine than traditional operating system processes give us (which I think is obvious), then we should be honest and build a virtualization abstraction that is actually what we want, rather than converting a document reader into a video editor…
I'm going to assume you're being sincere. But even the crustiest among us can recognize that the modern purpose for web browsers is not (merely) documents. Chances are, many folks on HN in the last month have booked tickets for a flight or bought a home or a car or watched a cat video using the "document browser".
> If we think we need a more thoroughly virtualized machine than traditional operating system processes give us (which I think is obvious)...
Like ... the WASM virtual machine? What if the WASM virtual machine were the culmination of learning from previous not-quite-good-enough VMs?
WASM -- despite its name -- is not truly bound to the "document" browser.
Would you install a native app to book a flight? One for each company? Download updates for them every now and then, uninstall them when you run out of disk space, etc.?
I can ask the same question about every other activity we do in these non-native apps.
Unfortunately several of them are glorified webviews.
I am old enough to have lived through the days when the Internet meant a set of networking protocols, not a ChromeOS platform.
And in those days hard disks were still bloody expensive, by the way.
Isn't your phone providing a sandbox, a distribution system, a set of common runtime services, etc to get these native apps functional?
You don't have to squint to realize that these things we call "document browsers" are doing a lot of the same work that Apple/Google are doing with their mobile OSes.
All the OS frameworks that are available across most operating systems that don't fragment themselves into endless distributions?
My dear Lord! What world are you living in?
Take a look at all of the "mobile apps" you installed on your phone and tell me which of those would ever devote any resources to making an apt/rpm repository for their desktop applications.
Even the ones that want to have a desktop application can not figure out how to reliably distribute their software. The Linux crowd itself is still at the flatpak vs AppImage holy war. Mark Shuttleworth is still beating the snap horse.
The Web as a platform is far from ideal, but if it weren't for it I would never have been able to switch to Linux as my primary base OS, and I would have to accept the Apple/Microsoft/Google oligopoly, just like we are forced to in the mobile space.
Seems like your preferred world is the totalitarian "choose any color you want as long as it is black" one, where everything is perfectly optimized and perfectly integrated into a single platform.
Two questions:
1) What is the primary OS for your desktop?
2) Would you sincerely make the argument that a world where everyone submits to a single design (Apple-style) would be better than an "organic" world where the barrier of entry is lower, but less "optimal"?
Idk, I have a feeling they would be anti systemd too
But for some reason this takes 20M lines of code, which creates a moat that prevents browser competition.
I am still shocked Google has not rubbed two brain cells together and built a serious Google ChromeOS version for developers with a real desktop environment and real access to Linux, and keeping the browser as sandboxed as they have. I would spend top dollar on such a laptop. Heck it could come with an easy way to install Android Studio, and native apps for things like Hangouts or whatever they call it now.
https://donhopkins.medium.com/alan-kay-on-should-web-browser...
>Alan Kay answered: “Actually quite the opposite, if “document” means an imitation of old static text media (and later including pictures, and audio and video recordings).”
"virtual machine" is clearly not
that said, i love WASM in the browser, high time wrapping media with code to become "new media" wasn't stuck solely with a choice between JS and "plugins" like Java, Flash, or Silverlight
it's interesting to look back at a few "what might have been" alternate timelines, when the iPhone was intended to launch as an HTML app platform, or Palm Pre (under a former Apple exec, the "pod-father") intended the same with WebOS. if a VM running a web OS shows a PDF or HTML viewer in a frame, versus if a HTML viewer shows a VM running a web OS in a frame...
we're still working on figuring out whether new media and software distribution are the same.
today, writing Swift, or Nim, or whatever LLVM, and getting WASM -- I agree with you, feels like a collective convergence on the next step of common denominator
* note: those are all documents and document workflows with skeuomorphic analogs in the same headspace, and newspaper with "live pictures" has been a sci-fi trope for long enough TV news still can't bring themselves to say "video" (reminding us "movie" is to "moving" as "talkie" was to "talking") so extending document to include "media" is reasonable. but extending that further to be "arbitrary software" is no longer strictly document nor media
Try to distribute an installer on Windows that isn't signed with an expensive EV certificate, for instance. It's scare-popup galore.
Not to mention the closed gardens of the Apple and Google Stores which even when you get in, you can be kicked out again for absolutely no objective reason (they don't even need to tell you why).
> then we should be honest and build a virtualization abstraction that is actually what we want,
This is not in the interest of Microsoft, Google or Apple. They can't put the open web back into the box (yet, anyway), but they will not support any new attempts to create an open software ecosystem on "their" platforms.
And this comes from someone who started with Flash, built actual video editing apps with it, and for the last 25 years has built applications with an "it's not a web app, it's a desktop app that lives in a browser" attitude [1].
Even with Flash we often used a hybrid approach where you had two builds from the same codebase: a lite version running in the browser and an optional desktop app (AIR) with full functionality. SharedObjects and LocalConnection made this approach extremely feasible, as both instances were aware of each other and you could move data and objects between them in real time.
The premise is great, but it was never fully realized. Sure, you have a few outliers like Figma, but building a real "desktop app" in a browser comes with a lot of quirks, and the resulting UX is just terrible in most cases.
[1] just to be clear, there's a huge difference between web page and web app ;D
Word and LibreOffice "documents" can run embedded macros. Emacs has `org-mode`, which can call out to any programming language on your $PATH. A PDF document is produced by running code in a stack-based virtual machine. Even fonts have bytecode instructions embedded inside them that are used for hinting.
If by "document" you mean "static text with no executable bits", then only plain text files can truly be called documents. Everything else is a computer program, and has been since the dawn of computing.
imo when you start talking about dynamic documents the distinction starts to blur but it should be fine if it's just a few parameters that are meant to be manually updated... beyond that "document" seems like the wrong term (and tech)
those artificial distinctions are essential and perfectly practical as they can convey expectations just fine
GP is correct in that the browser has generalised to a point it has clear drawbacks for its original intended purpose, but that is just a fact of life at this point
IMO, html should have scaled back from 5.0 to the feature-set of 4, if not 3, with mass deprecations. Beyond that, it shouldn't be called html even if the main browsers carried on adding features and interoperable OS-like characteristics, so people could see beforehand whether they were visiting hypertext documents or application sites, because certainly most of the web right now could not reasonably be called "hypertext"
but that isn't the way it was handled and tbh it was to be expected
There have been various attempts to build "Internet operating systems" that are little more than a browser (that's what Chrome OS was intended to be in the beginning, I thought), but Windows and its pre-internet legacies are so entrenched in PCs and corporate life that nothing's ever gonna change there until Microsoft makes the entirety of Windows an app.
I do agree that we tend to run a lot in a web-browser or browser environment though. It seems like a pattern that started as a hack but grew into its own thing through convenience.
It would be interesting to sit down with a small group and figure out exactly what is good/bad about it and design a new thing around the desired pattern that doesn't involve a browser-in-the-loop.
"What if we made a new WASM-based platform for apps, separate from the browser?"
Heh, reminds me of those boxes Sun used to make that only ran Java. (I don’t know how far down Java actually went; perhaps it was Solaris for the lower layers now that I think about it…)
I do miss the Solaris 10/OpenSolaris tech though. I don’t know anything that comes close to it today.
dtrace/zones/smf/zfs/iscsi/... and the integration between them all was top notch. One could create a zone, spin up a clone, do some computation, trash the filesystem and then just throw the clone away... in very short time. Also, that whole loop happened without interacting with zfs directly; I know that some of these things have been ported but the ports miss the integration.
eg: zfs on Linux is just a filesystem. zfs on Solaris was the base of a bunch of technology. smf tied much of it together.
eg: dtrace gave you access all the way down to individual read/write operations per disk in a raid-z and all the way up to the top of your application running inside of a zone. One tool with massive reach and very little overhead.
Not much compels me to go back to the ecosystem; I've been burned once already.
Secure? Debatable. Functional? Not really.
For example, try accessing a security key and watch the fun. Sure, if you access it exactly the way Google wants you to, things kinda-sorta work, sometimes. If you don't want to obey your Google masters, good luck getting your Bluetooth or USB packet to your security key.
And because you are "secure", you can't store anything on the local hard drive. Oh, and you can't send a real network packet either. All you can do is send a supplication request to a network database owned by somebody else that holds your data--"Please, sir, can I have some more?". The fact that this prevents you from exporting your data away from cloud-centered SaaS providers is purely coincidental, I'm sure. </sarcasm>
So in the name of security we just kneecap the end users--if the users can't do anything, they're secure, right? Diana Moon Glampers would be proud.
No one is saying it solves every single use case and it doesn't need to.
Same way webdevs ate desktop app devs' lunch because they had no idea how to innovate on decade-old ideas.
To sum up, no matter how well you are positioned for something on paper, if you won't do it, someone else will.
But if you really need more than 4GB of memory, then sure, go ahead and use it.
On x86-64, the start of the linear memory is typically put into one of the two remaining segment registers: GS or FS. Then the code can simply use an address mode such as "GS:[RAX + RCX]" without any additional instructions for addition or bounds-checking.
This multi-memory setup reminds me of the array juggling I had to do back then. While intellectually challenging, it was not fun at all.
[1] https://devblogs.microsoft.com/oldnewthing/20070801-00/?p=25...
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html
Unfortunately the obvious `__attribute__((mode(...)))` errors out if anything but the standard pointer-size mode (usually SI or DI) is passed.
Or you may be able to do it based on x32, since your far pointers are likely rare enough that you can do them manually. Especially in C++. I'm pretty sure you can just call "foreign" syscalls if you do it carefully.
Especially how you could increase the segment value by one or the offset by 16 and you would address the same memory location. Think of the possibilities!
And if you wanted more than 1MB you could just switch memory banks[1] to get access to a different part of memory. Later there was a newfangled alternative[2] where you called some interrupt to swap things around but it wasn't as cool. Though it did allow access to more memory so there was that.
Then virtual mode came along and it's all been downhill from there.
[1]: https://en.wikipedia.org/wiki/Expanded_memory
[2]: https://hackaday.com/2025/05/15/remembering-more-memory-xms-...
Schulman’s Unauthorized Windows 95 describes a particularly unhinged one: in the hypervisor of Windows/386 (and subsequently 386 Enhanced Mode in Windows 3.0 and 3.1, as well as the only available mode in 3.11, 95, 98, and Me), a driver could dynamically register upcalls for real-mode guests (within reason), all without either exerting control over the guest’s memory map or forcing the guest to do anything except a simple CALL to access it. The secret was that all the far addresses returned by the registration API referred to the exact same byte in memory, a protected-mode-only instruction whose attempted execution would trap into the hypervisor, and the trap handler would determine which upcall was meant by which of the redundant encodings was used.
And if that’s not unhinged enough for you: the boot code tried to locate the chosen instruction inside the firmware ROM, because that will have to be mapped into the guest memory map anyway. It did have a fallback if that did not work out, but it usually succeeded. This time, the secret (the knowledge of which will not make you happier, this is your final warning) is that the instruction chosen was ARPL, and the encoding of ARPL r/m16, AX starts with 63 hex, also known as the ASCII code of the lowercase letter C. The absolute madmen put the upcall entry point inside the BIOS copyright string.
(Incidentally, the ARPL instruction, “adjust requested privilege level”, is very specific to the 286’s weird don’t-call-it-capability-based segmented architecture... But it has a certain cunning to it, like CPU-enforced __user tagging of unprivileged addresses at runtime.)
Isn’t that an arbitrary string, though? Presumably AMI and Insyde have different copyright messages, so then what?
If the search doesn’t succeed or if you’ve set SystemROMBreakPoint=off in the [386Enh] section of SYSTEM.INI[1] or run WIN /D:S, then the trap instruction will instead be placed in a hypervisor-provided area of RAM that’s shared among all guests, accepting the risk that a misbehaving guest will stomp over it and break everything (don’t know where it fits in the memory map).
As to the chances of failing, well, I suspect the original target was the c in “(c)”, but for example Schulman shows his system having the trap address point at “chnologies Ltd.”, presumably preceded by “Phoenix Te”. AMI and Award were both “Inc.”, so that would also work. Insyde wasn’t a thing yet; don’t know what happened on Compaq or IBM machines. One way or another, looks like a c could be found somewhere often enough that the Microsoft programmers were satisfied with the approach.
At least most people design non-overlapping segments. And I'm not sure wasm would gain anything from it, being a virtual machine instead of a real one.
With 64-bit addresses, and the requirements for how invalid memory accesses should work, this is no longer possible. AND-masking does not really allow for producing the necessary traps for invalid accesses. So every access now needs a conditional beforehand to validate that it is in-bounds. The addresses cannot be trivially offset either, as they can wrap around (and/or accidentally hit some other mapping).
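A conceptual sketch of the difference (illustrative JS, not actual engine code; in wasm32 the "check" is really an unmapped guard page caught by a signal handler):

    // wasm32: the engine reserves the full 4 GiB index space plus guard
    // pages up front, so a 32-bit index can never escape the reservation
    // and an invalid access faults on an unmapped page "for free".
    function load32(heap, addr) {
      return heap[addr >>> 0];   // no explicit bounds check emitted
    }

    // wasm64: a 64-bit index can point anywhere, so the engine has to
    // emit an explicit compare-and-branch before every access.
    function load64(heap, addr, memSize) {
      if (addr >= memSize) throw new WebAssembly.RuntimeError("out of bounds");
      return heap[addr];
    }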
The biggest contributor to pointer arithmetic is offset reads into pointers: what gets generated for struct field accesses.
The other class of cases are when you're actually doing more general pointer arithmetic - usually scanning across a buffer. These are cases that typically get loop unrolled to some degree by the compiler to improve pipeline efficiency on the CPU.
In the first case, you can avoid the masking entirely by using an unmapped barrier region after the mapped region. So you can guarantee that if pointer `P` is valid, then `P + d` for small d is either valid, or falls into the barrier region.
In the second case, the barrier region approach lets you lift the mask check to the top of the unrolled segment. There's still a cost, but it's spread out over multiple iterations of a loop.
As a last step: if you can prove that you're stepping monotonically through some address space using small increments, then you can guarantee that even if theoretically the "end" of the iteration might step into invalid space, that the incremental stepping is guaranteed to hit the unmapped barrier region before that occurs.
It's a bit more engineering effort on the compiler side.. and you will see some small delta of perf loss, but it would really be only in the extreme cases of hot paths where it should come into play in a meaningful way.
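For the loop case, the hoisted check looks roughly like this (illustrative JS; in a real engine the failing access would trap on the unmapped barrier region rather than throw):

    // One explicit check per unrolled chunk of 4 accesses: if `i` itself
    // passed the check, then i+1..i+3 are either in bounds or land in
    // the trailing barrier region, which traps on its own.
    function sum4(heap, start, count, memSize) {
      let total = 0;
      for (let i = start; i < start + count; i += 4) {
        if (i >= memSize) throw new WebAssembly.RuntimeError("out of bounds");
        total += heap[i] + heap[i + 1] + heap[i + 2] + heap[i + 3];
      }
      return total;
    }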
Why does it need to trap? Can't they just make it UB?
Specifying that invalid accesses always trap is going to degrade performance, that's not a 64-bit problem, that's a spec problem. Even if you define it in WASM, it's still UB in the compiler so you aren't saving anyone from UB they didn't already have. Just make the trapping guarantee a debug option only.
Seems like they got overly attached to the guaranteed trapping they got on 32-bit and wanted to keep it even though it's totally not worth the cost of bounds checking every pointer access. Save the trapping for debug mode only.
Maybe. Bugs that come from spooky behavior at a distance are notoriously hard to debug, especially in production, and it's worthwhile to pay to avoid that.
With 64-bit pointers, you can't really reserve all the possible space a pointer might refer to. So you end up doing manual bounds checks.
Can't bounds checks be avoided in the vast majority of cases?
See my reply to nagisa above (https://news.ycombinator.com/item?id=45283102). It feels like by using trailing unmapped barrier/guard regions, one should be able to elide almost all bounds checks that occur in the program with a bit of compiler cleverness, and convert them into trap handlers instead.
Yeah, certainly compiler smarts can remove many bounds checks (in particular for small deltas, as you mention), hoist them, and so forth. Maybe even most of them in theory?
Still, there are common patterns like pointer-chasing in linked list traversal where you just keep getting an unknown i64 pointer, that you just need to bounds check...
To operate on any other size, you need to insert extra instructions to mask addresses to the desired size before they are used.
Sounds about right. Guess 512 GiB of memory is the minimum to read email nowadays.
For video editing, 4GiB of completely uncompressed 1080p video in memory is only 86 frames, or about 3-4 seconds of video. You can certainly optimize this, and it's rare to handle fully uncompressed video, but there are situations where you do need to buffer this into memory. It's why most modern video editing machines are sold with 64-128GB of memory.
In the case of Figma, we have files with over a million layers. If each layer takes 4kb of memory, we're suddenly at the limit even if the webapp is infinitely optimal.
How is that data stored?
Because (2^32)÷(1920×1080×4) = 518 which is still low but not 86 so I'm curious what I'm missing?
So glad you asked. It's stored poorly because I'm bad at maths and I'm mixing up bits and bytes.
That's what I get for posting on HN while in a meeting.
(2^32)÷(1920×1080×4×3×2) = 86
Wow!
By the way, you can now generate WASM via the Dlang compiler LDC [1].
[1] Generating WebAssembly with LDC:
* Motivation
- Efficient support for high-level languages
- faster execution
- smaller modules
- the vast majority of modern languages need it
* Approach
- Pay as you go; in particular, no effect on code not using GC, no runtime type information unless requested
- Don't introduce dependencies on GC for other features (e.g., using resources through tables)
[1] https://github.com/WebAssembly/spec/blob/wasm-3.0/proposals/...
- https://github.com/WebAssembly/design/issues/1397
- https://github.com/WebAssembly/memory-control/issues/6
This is a crucial issue, as the released memory is still allocated by the browser.
Shrinking the memory object shouldn't require any special support from GC, just an appropriate API hook. It would, as always, be up to the application code running inside the module to ensure that if a shrink is done, that the program doesn't refer to memory addresses past the new endpoint.
If this hasn't been implemented yet, it's not because it's been waiting on GC, but more that it's not been prioritized.
1. Different languages have totally different allocation requirements, and only the compiler knows what type of allocator works best (e.g. generational bump allocator for functional languages, classic malloc style allocator for C-style languages).
2. This perhaps makes wasm less suitable for usage on embedded targets.
The best argument I can make for this is that they're trying to emulate the way that libc is usually available and provides a default malloc() impl, but honestly that feels quite weak.
<html>
<body>
<div id="root"></div>
<script type="application/wasm" src="./main.wasm"></script>
</body>
</html>
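For comparison, this is roughly the bootstrap JS that such a tag would make unnecessary (main.wasm and its exported main are placeholders):

    // Fetch, compile, and instantiate the module, then call its entry point.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("./main.wasm"),
      {} // imports would go here
    );
    instance.exports.main();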
Would be great for high performance web applications and for contexts like browser extensions, where the memory usage and performance drain is real when multiplied over n open tabs. I'm not sure how code splitting would work in the wasm world, however. V8 could be optimized to reduce its memory footprint if it detects that no JavaScript is running, or wasm-only applications could use an engine like wasmer and bypass V8 entirely.
Another factor is that web technologies are used to write desktop applications via Electron/similar. This is probably because desktop APIs are terrible and not portable. First class wasm support in the web would translate to more efficient desktop applications (Slack, VSCode, Discord, etc) and perhaps less hate towards memory heavy electron applications.
<!doctype html>
<wasm src="my-app.wasm">
Why not just do the whole DOM out of your WASM?
OTOH you still need to start a wasm runtime first, then import the WASI module into the wasm host.
P.S.: used to tinker with wasmtime and wasmi to add wasm support to my half abandoned deno clone ;) I learned this the hard way
Killing JavaScript was never the point of WASM. WASM is for CPU-intensive pure functions, like video decoding.
Some people wrongly thought that WASM was trying to kill JS, but nobody working on standardizing WASM in browsers believed in that goal.
"However, the NDK can be useful for cases in which you need to do one or more of the following:
- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
- Reuse your own or other developers' C or C++ libraries."
And I would argue WebGL/WebGPU are probably better suited, given how clunky WebAssembly tooling still is for most languages.
Hard to believe it can compete with V8 JIT anytime soon. Might be easier to integrate fast vector libraries within javascript engines.
As others have pointed out, the JS component interface is defined in a language called WebIDL:
https://firefox-source-docs.mozilla.org/dom/webIdlBindings/i...
How it works in Chrome (Blink) is that a compiler uses this IDL to generate a wrapper between V8 and the native code: a V8 object that holds onto the native object reference.
Once V8 cleans up the Js object, the native code, holding a weak reference to the native objects, detects that it has become unreachable and cleans that up.
In any case, the object lifetime is encapsulated by the V8 Isolate (which is, depending on how you look at it, the lifetime of the HTML document or the heap), so it'd be perfectly fine to expose native references, as they'd be cleaned up when you navigate away/close the page.
Once you support all the relevant types, define a calling convention, and add an appropriate verifier to Wasm, it'd be possible to expose all native objects to Wasm, probably by generating a different (and lighter weight) glue to the native classes, or delegating this task to the Wasm compiler.
Of course, if you wanted to have both JS and Wasm to be able to access the same object, you'd have to do some sort of shared ownership, which'd be quite a bit more complicated.
So I'd argue it'd make sense to allow objects that are wholly-owned by the Wasm side or the JS side, which still makes a ton of sense for stuff like WebGL, as you could basically do rendering without calling into JS.
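(The wrapper-lifetime pattern described above can be sketched with JS's own primitives; freeNative is a hypothetical native-side cleanup hook:)

    // When the JS wrapper becomes unreachable and is collected, the
    // registry callback releases the underlying native object.
    const registry = new FinalizationRegistry((nativeHandle) => {
      freeNative(nativeHandle); // hypothetical native cleanup
    });

    function wrapNative(nativeHandle) {
      const wrapper = { handle: nativeHandle };
      registry.register(wrapper, nativeHandle);
      return wrapper;
    }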
I'm going out on a limb here, but I'd guess since this multi-memory support has landed, that means a single webassembly instance can map multiple SABs, so it might be the beginnings of that.
The whole problem with the DOM is that it has too many methods which can't be phased out without losing backwards compatibility.
A new DOM wasm api would be better off starting with a reduced API of only the good data and operations.
The problem is that the DOM is still improving (even today), it's not stabilized so we don't have that reduced set to draw from, and if you were to mark a line in the sand and say this is our reduced set, it would already not be what developers want within a year or two.
New DOM stuff is coming out all the time; even right now we have two features coming out that can completely change the way developers could want to build applications:
- being able to move dom nodes without having to destroy and recreate them. This makes it possible to keep the state inside that dom node unaffected, such as a video playing without having to unload and reload it. Now imagine if that state can be kept over the threshold of a multi-page view transition.
- the improved attr() api which can move a lot of an app's complexity from the imperative side to the declarative side. Imagine a single css file that allows html content creators to dictate their own grid layouts, without needing to calculate every possible grid layout at build time.
And just in the near future, things are moving to allow html modules, which could be used with new web component apis to remove the need for frameworks in large applications.
Also language features can inform API design. Promises were added to JS after a bunch of DOM APIs were already written, and now promises can be abortable. Wouldn't we want the new reduced API set to also be built upon abortable promises? Yes we would. But if we wait a bit longer, we could also take advantage of newer language features being worked on in JS like structs and deeply immutable data structures.
TL;DR: It's still too early to work on a DOM api for wasm. It's better to wait for the DOM to stabilize first.
That is the trend we face nowadays; there is too little stable stuff around. Take macOS, the OS of a trillion-dollar company, not an open source project without funding.
Stable is a mirage, sadly.
The goal behind the argument is to grant WASM DOM access equivalent to what JavaScript has so that WASM can replace JavaScript. Why would you want that? Think about it slowly.
People that write JavaScript for a living, about 99% of them, are afraid of the DOM. Deathly afraid like a bunch of cowards. They spend their entire careers hiding from it through layers of abstractions because programming is too hard. Why do you believe that you would be less afraid of it if only you could do it through WASM?
I see good use cases for building entirely in html/JS and also building entirely in WASM.
The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.
It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.
Say I really want to write front end code in Rust*: does there just need to be a library that handles the JS DOM calls for me? After that, I don't ever have to think about JavaScript again?
yes, e.g. with Leptos you don't have to touch JS at all
But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.
And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.
Aren't the framework functions closely related to the DOM properties and functions?
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
It is manageable if you avoid JS/wasm round trips, but if you assume the cost is near zero you will be in for an unpleasant surprise.
When I have to do a lot of different calls into my wasm blob, I am way, way faster batching them: making one call into wasm that then gets all the data I want and returns it.
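As a sketch of that batching pattern (get_item and get_all_items are hypothetical exports):

    // Slow: one JS<->wasm crossing per item.
    const out = [];
    for (let i = 0; i < n; i++) out.push(instance.exports.get_item(i));

    // Faster: one crossing. Wasm writes all n i32 results into linear
    // memory and returns a pointer; JS reads them back in bulk.
    const ptr = instance.exports.get_all_items(n);
    const items = new Int32Array(instance.exports.memory.buffer, ptr, n);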
Most code already is so horribly inefficient that I can't imagine this making a noticeable difference in most scenarios.
If you think JavaScript has problems I have bad news about the DOM…
You can get rid of JS, but that won't help much because it's just a small language interfacing with a huge, old, backwards compatible to 20+ years ago system that is the DOM.
For most languages, I don't think it is even feasible to compile to wasm in a way that doesn't include a splash screen for any non-trivial application, which is simply unacceptable for most web content. And that is even before all the work required in user-land to support browser primitives (like URL routing, asset loading, DOM integration, etc).
So I can foresee this unlocking "heavy duty productivity apps" to run in the browser: things like video editors or Photoshop using a web-first GUI (meaning DOM elements) without significant amounts of JS. But for general web content consumed by the masses, I find it unlikely.
I expect the real "javascript death" will mean a completely new language designed from the ground-up to compile to WASM and work with browser APIs.
Web development moves faster than any software in history. A lot of this is on the back of JS being ergonomic/fast to code and having very good performance and a lot of what makes it good at this is also what coders familiar with other languages dislike.
What other language has that right combination of really fast development and good performance?
You would probably have to go outside the mainstream to something like Scheme or StandardML to get the ergonomics and performance, but that would upset a whole other group of people.
After that is an even bigger problem. If everyone adopts different languages, different frameworks for those languages, and different user-facing APIs for the WASM APIs, then finding devs for your frontend team who can be productive quickly suddenly becomes impossibly hard.
Part of the web Javascript security model is that you cannot see into garbage collection. So if you have some WASM-y pointer to a DOM element, how do you handle that?
I think with GC properly in, people might come at this problem again, but with pre-GC WASM this sounds pretty intractable
> When is WASM finally going to be able to touch the DOM?
Coming from a web background, and having transitioned to games / realtime 3D applications...
Fuck the DOM, dude. The idea of programming your UI via not one but TWO DSLs, and a scripting language, is utter madness. In principle, it might sound good (something something separation of concerns, or whatever-the-fuck), but in reality you always end up with this tightly coupled garbage fire split across a pile of different files and languages. This is not the way.
We need to build better native UI libraries that just open up a WebGL context and draw shit to that. DearIMGUI can probably already do like 85% of what modern webapps do.
Anyways .. /rant
> Using them for GUI applications is wild and obviously a bad idea.
I agree it's wild, but "bad idea" implies there are better options for cross-platform applications with the same ease of distribution and accessibility as the web.
This is what Flutter does. It works well, but you do lose some nice things that the browser/DOM provides (accessibility, zooming, text sizing/selection, etc). There’s also a noticeable overhead when the app + renderer is initially downloaded.
I’m with you. Main blocker I’ve seen to “just use ImGui for everything” (which I’d love to adopt), is if I run ImGui in WASM the keyboard doesn’t open on mobile. This seems possible in theory because egui does it.
Even though running ImGui on mobile via WASM isn’t the primary use case, inevitably the boss or someone is going to need to “just do a quick thing” and won’t be able to on mobile, and this seems to be a hard ceiling with no real escape hatch or workaround.
One of those scenarios where, if we have to use a totally different implementation (React or whatever) to handle this 1% edge case, we might as well just use that for the other 99%.
1. Opening the native keyboard and plumbing those events through to the WASM runtime sounds pretty easy. It's probably not cause modern software, but conceptually it should be trivial.. right??
2. In terms of 'the boss' wanting to do 'that one weird thing' that there isn't a library/plugin/whatever for in DearImgui land: if dev time for everything else gets faster, then the 10x cost of that small corner case can be absorbed by the net win. Now, I'm pretty sus on the claim everything else gets better today, but we can certainly imagine a world where they do, and it's probably not far away
I think this is the roadblock, that there isn’t always a way to pop open the keyboard programmatically. Rather, the mobile keyboard only pops up when there’s a DOM input element detected. So it would need a hidden input layered on top of the ImGui WASM app and mapping coordinates, or would need an HTML input element overlayed on top of every text input.
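A sketch of that overlay workaround (sendTextToWasm stands in for whatever forwards text into the app; note that mobile browsers generally only show the keyboard if focus() happens inside a user gesture):

    // Visually hidden <input> focused when the canvas UI activates a
    // text field; focusing it is what makes the soft keyboard appear.
    const proxy = document.createElement("input");
    proxy.style.cssText = "position:fixed;top:0;left:0;opacity:0;height:1px";
    document.body.appendChild(proxy);

    function openKeyboard(widgetId) {
      proxy.value = "";
      proxy.focus();
      proxy.oninput = () => sendTextToWasm(widgetId, proxy.value); // hypothetical
    }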
It'll be interesting to see what the second non-browser-based WASM runtime to fully support 3.0 will be (I'm guessing wasmtime will be first; I'm not counting Deno since it builds on v8). Garbage collection seems like a pretty tricky feature in particular.
Does anyone know how this 3.0 release fits into the previously announced "evergreen" release model?[2]
> With the advent of 2.0, the Working Group is switching to a so-called “evergreen” model for future releases. That means that the Candidate Recommendation will be updated in place when we create new versions of the language, without ever technically moving it to the final Recommendation state. For all intents and purposes, the latest Candidate Recommendation Draft[3] is considered to be the current standard, representing the consensus of the Community Group and Working Group.
[1] https://webassembly.org/features/
Wasmtime already supports every major feature in the Wasm 3.0 release, I believe. Of the big ones: garbage collection was implemented by my colleague Nick Fitzgerald a few years ago; tail calls by Jamey Sharp and Trevor Elliott last year (with full generality, any signature to any signature, no trampolines required!); and I built our exceptions support which merged last month and is about to go out in Wasmtime 37 in 3 days.
The "3.0" release of the Wasm spec is meant to show progress and provide a shorthand for a level of features, I think, but the individual proposals have been in progress for a long time so all the engine maintainers have known about them, given their feedback, and built their implementations for the most part already.
(Obligatory: I'm a core maintainer of Wasmtime and its compiler Cranelift)
The wasm features page says it is still behind a flag on wasmtime (--wasm=gc). Is that page out of date?
Our docs (https://docs.wasmtime.dev/stability-tiers.html) put GC at tier 2 with reason "production quality" and I believe the remaining concerns there are that we want to do a semi-space copying implementation rather than current DRC eventually. Nick could say more. But we're spec-compliant as-is and the question was whether we've implemented these features -- which we have :-)
Given that Wasm is designed with formal semantics in mind, why is the DX of using it as a target so bad? I used binaryen.js to emit Wasm in my compiler and didn't get the feeling that I was targeting a well-designed instruction set. Maybe this is a criticism of Binaryen and its poor documentation, because I liked writing short snippets of Wasm text very much.
In our compiler (featured in TFA), we chose to define our own data structure for an abstract representation of Wasm. We then wrote two emitters: one to .wasm (the default, for speed), and one to .wat (to debug our compiler when we get it wrong). It was pretty straightforward, so I think the instruction set is quite nice. [1]
[1] https://github.com/scala-js/scala-js/tree/main/linker/shared...
My major pain point was the documentation. The binaryen.js API reference¹ is a list of function signatures. Maybe this makes sense to someone more experienced, but I found it hard to understand initially. There is no explanation of what the parameters mean. For example, the following is the only information the reference provides for compiling an `if` statement:
Module#if(condition: Expression, ifTrue: Expression, ifFalse?: Expression): Expression
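For what it's worth, here's roughly how I ended up using it, pieced together from the wiki and examples (so treat it as a best guess rather than gospel):

    const binaryen = require("binaryen");
    const mod = new binaryen.Module();

    // Body of: (x) => x ? 1 : 2
    const body = mod.if(
      mod.local.get(0, binaryen.i32), // condition
      mod.i32.const(1),               // ifTrue
      mod.i32.const(2)                // ifFalse (optional)
    );
    // addFunction(name, params, results, varTypes, body)
    mod.addFunction("choose", binaryen.i32, binaryen.i32, [], body);
    mod.addFunctionExport("choose", "choose");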
In contrast, the Wasm instruction reference on MDN² is amazing. WASI again suffers from the same documentation issues; I didn't find any official resource on how to use `fd_write`, for example. Thankfully I found this blog post³.

Wasm feels more inaccessible than other projects. The everyday programmer shouldn't be expected to understand PL research topics when they are trying to do something with it. I understand that's not the intention, but this is what it feels like.
1. https://github.com/WebAssembly/binaryen/wiki/binaryen.js-API
2. https://developer.mozilla.org/en-US/docs/WebAssembly/Referen...
That is, most work in Binaryen is on improving wasm-opt which inputs wasm and outputs wasm, so any toolchain can use it (as opposed to just JS/TS).
But if someone had the time to improve the JS/TS bindings that would be great!
Multiple WASM memories and Clang's/LLVM's address space feature sound like they should be able to solve that problem, but I'm not sure if it is as trivial as it sounds...
We buried far pointers with DOS and Win16 for a good reason..
It's not much different than dealing with all the alignment rules that are needed when arranging data for the GPU.
If there can be a solution that works for more languages: great. I mostly want this for Go. If it means there will be some _reasonable_ limitations, that's also fine.
It defines a framework that allows modules to communicate with structured data types, by allowing each module to decide how to map them to and from its linear memory (and in the future, the runtime GC heap)
In your case you would be able to define WIT interfaces for your Go types and have your compiler of choice use them to generate all the relevant glue code
Some of the least fun JavaScript I have ever written involved manually cleaning up pointers that in C++ would be caught by destructors triggering when the variable falls out of scope. It was enough that my recollections of JNI were more tolerable. (Including for go, on Android, curiously).
Then once you get through it you discover there is some serious per-call overhead, so those structs start growing and growing to avoid as many calls as possible.
I too want wasm to be decent, but to date it is just annoying.
Open source CAD in the browser.
Oops, I did not read that before going ham in the editor. It seems that the files are stored inside the emscripten file system, so they are not lost. I could download my exported 'test.stl' with the following JavaScript code:
// Read the exported file back out of Emscripten's in-memory filesystem
var data = FS.readFile('test.stl');
// Wrap the bytes in a Blob and download it via a temporary <a> element
var blob = new Blob([data], { type: 'application/octet-stream' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'test.stl';
a.click();
Thank you for maintaining it!
For 2 of those 3 use cases, I think it's not technically the optimal choice, but I think that future may actually come. Congratulations and nice work to everyone involved!
There are comments in there about waiting for a polyfill, but GC support is widespread enough that they should probably just drop support for non-GC runtimes in a major version.
I wonder what language this GC can actually be used for at this stage?
In general though most regular C# code written today _doesn't directly_ use many of the features mentioned apart from references. Libraries and bindings however do so a lot since f.ex. p/invoke isn't half as braindead as JNI was, but targeting the web should really not bring along all these libraries anyhow.
So, making an MSIL runtime that handles most common C# code would map pretty much 1-1 with Wasm-GC; some features like refs might need some extra shims to emulate behaviour (or compiler specializations to avoid too-bad performance penalties from extra object creation).
Regardless of what penalties, etc. go in, the generated code should be able to be far smaller and far less costly compared to the situation today, since they won't have to ship both their own GC and implement everything around that.
It's definitely true that you could compile some subset of C# applications to WasmGC but the mismatch with the language as it's existed for a long time is painful.
My argument is that code that goes on the web side would mostly adhere to the subset; many of the ref cases can be statically compiled away, and what remains is infrequent enough that for most users it'll be a major win to avoid lugging along the GC, etc.
> Wasm GC is low-level as well: a compiler targeting Wasm can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm.
There are already a lot of misunderstandings about wasm, and I fear that people will just go "It supports GC, so we can just port python/java/c#/go etc."
This is not a silver bullet. C++ or Rust are probably still going to be the way to go.
Relying on the GC features of WASM will require writing code centered around the abstractions for the compiler that generates WASM.
What's the value proposition of WASM GC if not this?
But those languages still need to carry around some runtime of their own, and I don't think it's obvious how much a given language will benefit.
Also, there will just be a special version of those language runtimes which probably won't be supported in 10 years' time. Just like a lot of languages no longer have up-to-date versions that can run on the Common Language Runtime.
> This is not a silver bullet. Cpp, or rust are probably still going to be the way to go.
I don't think that's necessarily true anymore. But as you say, it depends on the compiler you use and how well it utilizes what is there. Jetbrains has big plans with Kotlin and Wasm with e.g. compose multiplatform already supporting it (in addition to IOS native and Android).
So yes, Java, C#, etc. will work better (if you look at the horrible mess the current C# WASM export generates, it basically ships with an inner platform containing a GC), and no, it will explicitly not speak with "javascript" objects (you can keep references to JS objects, but you cannot call JS methods directly).
The one in particular I have in mind would be to put WASM on graphical calculators, in order to have a more secure alternative to the ASM programs (it's possible nowadays to write in higher-level languages, but the term stuck) that could work across manufacturers. Mid-range has RAM on the order of 256 KiB, but a 32-bit core clocked at more than 200 MHz, so there's plenty of CPU throughput but not a lot of memory to work with.
Sadly, the closest thing there is for that is MicroPython. It's good for what it does, but its performance and capabilities are nowhere near native.
If it has less than 64 kB of memory how is it going to run a WASM runtime anyway?
And even cheap microcontrollers tend to have more than 64 kB of memory these days. Doesn't seem remotely worth the complexity.
We're working on WASM for embedded over at atym.io if you're interested.
There is WARDuino (https://github.com/TOPLLab/WARDuino and https://dl.acm.org/doi/10.1145/3357390.3361029).
For a runtime that accepts Wasm modules using a large fraction of the functionality, there is going to be a RAM requirement of a few KiB to a few tens of KiB. There seems to be a branch or fork of Wasm3 for Arduino (https://github.com/wasm3/wasm3-arduino).
If you are willing to do, e.g. Wasm -> AVR AOT compilation, then the runtime can be quite small. That basically implies that compilation does not happen on device, but at deployment time.
What security implications are there in graphical calculators in terms of assembler language?
It's a flawed idea and has led to an arms race, where manufacturers lock down their models and jailbreaks break them open. Even NumWorks, who originally had a calculator that was completely unprotected and used to publish all of their source code on GitHub, had to give in and introduce a proprietary kernel and code signing, in order to stop custom firmwares and applications from accessing the LED and stop countries from outlawing their calculators.
Unless I'm mistaken, it's been on life support for the past 15 years. It's probably more heavyweight and firmware size/Flash usage is a concern. I don't think performance would be on par with WASM and there are use-cases where that really matters (ray tracing rendering for example). I'm also not sure there are many maintained, open-source implementations for it out there. I've also heard stories that it was quite a mess in practice because it was plagued by bugs and quirks specific to phone models, despite the fact that it was supposed to be a standard.
I'd gladly be proven wrong, but I don't think Java ME has a bright future. Unless you were thinking of something else?
<sets alarm for three years from now>
See you all for WASM 4.0.
Direct DOM access doesn't make any sense as a WASM feature.
It would be at best a web-browser feature which browser vendors need to implement outside of WASM (by defining a standardized C-API which maps to the DOM JS API and exposing that C API directly to WASM via the function import table - but that idea is exactly as horrible in practice as it sounds in theory).
If you need to manipulate the DOM - just do that in JS, calling from WASM into JS is cheap, and JS is surprisingly fast too. Just make sure that the JS code has enough 'meat', e.g. don't call across the WASM/JS boundary for every single DOM method call or property change. While the call itself is fast, the string conversion from the source-language-specific string representation on the WASM heap into JS strings and back is not free (getting rid of this string marshalling would be the only theoretical advantage of a 'native' WASM DOM API).
It's neither directly related to the web, nor is it an assembly syntax.
It's just another virtual ISA. "Direct DOM access for WASM" makes about as much sense as "direct C++ stdlib access for the x86 instruction set" - none ;)
WebASM is an assembly-like dialect, after all.
So you either create a very concrete JS library that translates specific WASM data into IO actions, or one that serializes and deserializes everything all around but can be standardized.
At this point, none of those options are much more capable than Java applets... Or, in fact, if you put a network call between the WASM and the JS, you won't even add much complexity.
Java applets allowed loading and calling into native DLLs via JNI, so they were definitely much more capable than WASM, but also irresponsibly unsafe.
In your own WASM host implementation you could even implement a dlopen() and dlsym() to load and call into native DLLs, but any WASM host which cares about safety wouldn't allow that (especially web browsers).
The bottleneck is in the DOM operations themselves, not javascript. This is the reason virtual-dom approaches exist: it is faster to operate on an intermediate representation in JS than the DOM itself, where even reading an attribute might be costly.
WASM isn't going to magically make the DOM go faster. DOM will still be just as slow as it is with Javascript driving it.
WASM is great for heavy-lifting, like implementing FFMPEG in the browser. DOM is still going to be something people (questionably) complain about even if WASM had direct access to it. And WASM isn't only used in the browser, it's also running back-end workloads too where there is no DOM, so a lot of use cases for WASM are already not using DOM at all.
…proceeds to explain why it does make sense…
E.g. the "DOM peeps" would need to make it happen, not the "WASM peeps".
But that would be a massive undertaking for minimal benefit. There's much lower-hanging fruit in the web-API world to fix (like, for instance, finally building a proper audio streaming API, because WebAudio is a frigging clusterf*ck). And if any web API would benefit from an even minimal reduction of JS <=> WASM marshalling overhead, it would be WebGL2 and WebGPU, not the DOM. But even for WebGL2 and WebGPU the cost inside the browser implementation of those APIs is much higher than the WASM <=> JS marshalling overhead.
(I also want this feature, to drive DOM mutations from an effect system)
Out of curiosity, what issues do people have with WebAudio since audio worklets became widely supported?
With audio worklets this callback runs in a separate audio thread, and with the (deprecated) ScriptProcessorNode this callback runs on the main thread.
E.g. a "good" web audio API replacement would only offer a callback that runs in a separate audio thread plus a convenience function call which allows to push small sample-packets from the main thread to the audio thread (at the cost of increased latency to avoid starving) - this push-function would basically be the replacement for ScriptProcessorNode.
In general, see here for a pretty good overview why WebAudio as a whole is a badly designed API: https://blog.mecheye.net/2017/09/i-dont-know-who-the-web-aud...
TL;DR: WebAudio's original design requires a lot of complexity and implementation effort for use cases that are not relevant to most of its users - and all that effort could be used instead to implement a much smaller and focused web audio API that covers actually relevant use cases.
Specifically for audio worklets: those mainly make sense when the entire audio stream generation can happen on the audio thread.
But if you need to generate audio on the main thread (such as in emulators: https://floooh.github.io/tiny8bit/), unless you want to run the entire emulator in the audio thread, you need an efficient way to communicate the audio stream which is generated on the main thread to the audio thread. For this you ideally need shared-memory multithreading, and for this you need control over the COOP/COEP response headers, and for this you need control over the web server configuration (which excludes a lot of popular web hosters, like Github Pages).
For this situation (generate sample stream on browser thread and communicate that to the audio thread) you're basically re-implementing ScriptProcessorNode, just less efficiently and limited by COOP/COEP. So at the very least ScriptProcessorNode should be un-deprecated.
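A minimal sketch of what that re-implementation looks like (assumes a cross-origin-isolated page so SharedArrayBuffer is available; the Atomics-based read/write cursors a real ring buffer needs are omitted):

    // drain-processor.js: audio-thread side. The main thread writes
    // samples into the shared Float32Array; process() drains them.
    class DrainProcessor extends AudioWorkletProcessor {
      constructor(options) {
        super();
        this.ring = new Float32Array(options.processorOptions.sab);
        this.readPos = 0;
      }
      process(inputs, outputs) {
        const out = outputs[0][0]; // first channel of first output
        for (let i = 0; i < out.length; i++) {
          out[i] = this.ring[this.readPos];
          this.readPos = (this.readPos + 1) % this.ring.length;
        }
        return true; // keep the processor alive
      }
    }
    registerProcessor("drain-processor", DrainProcessor);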
From the point of view of someone who doesn't do web development at all, and to whom JS seems entirely cryptic: This argument is weird. Why is this specific (seemingly extremely useful!) "web thing" guarded by a specific language? Why would something with the generality and wide scope of WASM relegate that specific useful thing to a particular language? A language that, in the context of what WASM wants to do in making the web "just another platform", is pretty niche (for any non-web-person)?
For me, as a non-web-person, the big allure of WASM is the browser as "just another platform". The one web-specific thing that seems sensible to keep is the DOM. But if manipulating that requires learning web-specific languages, then so be it, I'll just grab a canvas and paint everything myself. I think we give up something if we start going that route.
Many important libraries have been written in C and only come with a C API. To use those libraries in non-C languages (such as Java) you need a mechanism to call from Java into C APIs, and most non-C language have that feature (e.g. for Java this was called JNI but has now been replaced by this: https://docs.oracle.com/en/java/javase/21/core/foreign-funct...), e.g. C APIs are a sort of lingua franca of the computing world.
The DOM is the same thing as those C libraries, an important library that's only available with an API for a single language, but this language is Javascript instead of C.
To use such a JS library API from a non-JS language you need an FFI mechanism quite similar to the C FFI that's been implemented in most native programming languages. Being able to call efficiently back and forth between WASM and JS is this FFI feature, but you need some minimal JS glue code for marshalling complex arguments between the WASM and JS side (but you also need to do that in native scenarios, for instance you can't directly pass a Java string into a C API).
https://component-model.bytecodealliance.org/
In my opinion it's an overengineered boondoggle, since "C APIs ought to be good enough for anything", but maybe something useful will eventually come out of it. So far it looks like it mostly replaces the idea of C APIs as lingua franca with "a random collection of Rust stdlib types" as lingua franca, which at least to me sounds utterly uninteresting.
The good news is that you can use very minimal glue code with just a few functions to do most JavaScript operations
I disagree. The idea of doing DOM manipulation in a language that is not Javascript was *the main reason* I was ever excited about WASM.
Anyway, I am quite sure that you could almost completely get rid of JS glue code by importing the static Reflect methods and a few functions like (a,b)=>a+b for the various operators; add a single array/object of references to hold refs, and you can do pretty much everything from wasm by mixing imported calls
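Something like this handle-table sketch, for instance (strings would additionally need a decoding import; all names here are illustrative):

    // JS side: wasm sees only integer handles into this table.
    const refs = [globalThis];            // handle 0 = globalThis
    const alloc = (v) => refs.push(v) - 1;

    const importObject = {
      js: {
        get:  (h, keyH)        => alloc(Reflect.get(refs[h], refs[keyH])),
        set:  (h, keyH, valH)  => Reflect.set(refs[h], refs[keyH], refs[valH]),
        call: (fnH, thisH, aH) => alloc(Reflect.apply(refs[fnH], refs[thisH], [refs[aH]])),
        add:  (a, b)           => a + b,  // one of the "operator" imports
        drop: (h)              => { refs[h] = undefined; },
      },
    };
    // WebAssembly.instantiate(module, importObject) then lets the wasm
    // side script the DOM purely through these handles.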
...is already possible, see for instance:
https://rustwasm.github.io/docs/wasm-bindgen/examples/dom.ht...
You don't need to write Javascript to access the DOM. Such bindings still call JS under the hood of course to access the DOM API, but that's an implementation detail which isn't really important for the library user.
https://hacks.mozilla.org/2018/10/calls-between-javascript-a...
The only thing that might be expensive is translating string data from the language-specific string representation on the WASM heap into the JS string objects expected by the DOM API. But this same problem would need to be solved in a language-portable way for any native WASM-DOM-API, because WASM has no concept of a 'string' and languages have different opinions about what a string looks like in memory.
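Concretely, that glue-side marshalling looks something like this (illustrative sketch):

    const utf8 = new TextDecoder();
    // Copy `len` bytes out of the wasm heap at `ptr` and decode them.
    // A fresh JS string is allocated on every call, which is exactly
    // the cost being discussed here.
    function getString(instance, ptr, len) {
      const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);
      return utf8.decode(bytes);
    }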
But even then, the DOM is an inherently slow API starting with the string-heavy API design, the bit of overhead in the JS shim won't suddenly turn the DOM into a lightweight and fast rendering system.
E.g. it's a bit absurd to talk about performance and the DOM in the same sentence IMHO ;)
_Telling the browser how you want the DOM manipulated_ isn't the expensive part. You can do this just fine with Javascript. The browser _actually redrawing after applying the DOM changes_ is the expensive part and won't be any cheaper if the signal originated from WASM.
https://krausest.github.io/js-framework-benchmark/current.ht...
If there ever is a WASM-native DOM API, WASM GC should help a lot with that.
I sometimes feel like js is too magic-y; I want plain boring golang and to write some dom functions, preferably without using htmx.
Please give us more freedom! This might be the most requested feature, and it was how I came across wasm in the first place (a Leptos video from some youtuber I think, sorry if I forgot)
WASM is just an extremely expensive toy for browsers until it supports DOM access.
The whole JS ecosystem evolved to become a damn good environment to write UIs with; people don't realize the massive complexity this environment evolved to solve over decades.
Wasm is not now and will never be a magic "press here to replace JS with a new language" button. But it works really well for bringing systems software into a web environment.
1. Non browser application (lightweight cloud, plugins, sandboxing)
2. Performance kernels (like compiling a game/rendering engine or AI stuff)
3. Compiling js-like applications from other languages (eg blazor wasm and others)
The only case where DOM access would be useful is 3, and even there 90% of the gains are already available from the JS string builtins proposal, which avoids the copying and re-encoding.
Direct DOM access is otherwise mostly a red herring
https://github.com/WebAssembly/custom-page-sizes/blob/main/p...
So far the book only covers WebAssembly 1.0 though. We'll likely publish an update to cover the (few) new features in 2.0, but WebAssembly 3.0 is a pretty big update. Garbage collection and typed references especially add quite a lot to the spec. But, they also make a lot more things possible, which is great.
The spec itself is also very readable and accessible. Everything in the spec that's described formally is also described in plain language. I can't think of any other spec that's quite as nice to read.
>Typed references. The GC extension is built upon a substantial extension to the Wasm type system, which now supports much richer forms of references. Reference types can now describe the exact shape of the referenced heap value, avoiding additional runtime checks that would otherwise be needed to ensure safety.
Why is an assembly-like dialect dealing with concepts at a much higher level than itself?
To WASM, isn't it all just pointers into one big heap, like every other assembly?
Unlike any of the proposals which became part of Wasm 3.0, the component model does not make any changes to the core Wasm module encoding or its semantics. Instead, it's designed as a new container encoding which contains core Wasm modules and adds extra information alongside each module describing its interface types and how to instantiate and link those modules. By keeping all of these additions outside of core Wasm, we can build implementations out of any plain old Wasm engine, plus extra code that instantiates and links those modules and converts between the core Wasm ABI and higher-level interface types. The Jco project https://github.com/bytecodealliance/jco does exactly that, using the common JS interface exposed by every web engine's Wasm implementation. So we can ship the component model on the web without web engines putting in any work of their own, which isn't possible with proposals that add to or change core Wasm.
> This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
While we do need a default “text mode” (HTML), JS is not the answer for a common language, and it's holding everything back.
I feel like the webdev ecosystem took a wrong turn after the fall of Flash and missed a huge opportunity. For whatever reason people opted both for sticking with JS and for backward compatibility. I can clearly see an alternative universe with a browser that accepts both a <script> with JS and one with another, modern language, where it's up to developers to choose whether to transpile/recompile the code for backward compatibility or go all in on new features.
I mean, TypeScript is cool, but it's reinventing the wheel; we already had that with ES4 over 15 years ago, and WASM is still a second-class citizen for hackers.
I was there when Macromedia took JS as their scripting language for animating GIFs. People soon started doing silly things, and everyone realized it was not fit for the task. We had AS2, which added some syntactic sugar and was exactly the same thing as early TypeScript: an escape hatch that let you use OOP while the resulting code was still just plain old JS.
But meanwhile Macromedia worked on a whole new thing called AS3. It was codified as ES4, and it's the reason JS has all those "private" and "abstract" keywords reserved.
Heck, we even had redtamarin for running AS from the console! And there was Alchemy, which is the grandfather of WASM.
We were that close to having a compiled JS++.
But times were different; we were still in the middle of the browser wars, and the standard was eventually scrapped because no one was interested in JS at the time.
And then Apple decided to kill Flash.
And for some reason the community decided to burn the library of Alexandria and spend the next 10 years on reinventing the wheel.
Sure, V8 is a world wonder, no question about it; the fact that JS can run that fast is just amazing. And people finally wrapped their heads around TS and realized that types are useful. Wonderful. But I really think we've lost 10 years running in circles and producing smart-looking docs while the tech slowly gets back to where it was 15 years ago.
Could be nitpicking but in the PDF (https://webassembly.github.io/spec/core/_download/WebAssembl...), there's a passage that says:
> 32-bit integers also serve as Booleans and as memory addresses. (under 1.2.1 Concepts)
while 64-bit integers are not mentioned. Could it be an oversight, or did I misunderstand?
IPC overhead is so bad in NodeJS that most people don't talk about it, because the workarounds are just impossibly high-maintenance. We reach straight for RPC instead and downplay the stupidity of the entire situation. Kind of reminiscent of the Ruby community, which is perhaps not surprising given the pedigree of so many important node modules (written by ex-Rails devs).
Really nice new set of features.
The whole magic about CL's condition system is to keep on executing code in the context of a given condition instead of immediately unwinding the stack, and this can be done if you control code generation.
Everything else necessary, including dynamic variables, can be implemented on top of a sane enough language with dynamic memory management - see https://github.com/phoe/cafe-latte for a whole condition system implemented in Java. You could probably reimplement a lot of this in WASM, which now has an unwind-to-this-location primitive.
Also see https://raw.githubusercontent.com/phoe-trash/meetings/master... for an earlier presentation of mine on the topic. "We need means of unwinding and «finally» blocks" is the key here.
That would be pretty rad!
I appreciate that it is a potential security hole, but at least put it behind a flag or something so it can be turned on.
This was not necessary... what a mistake, especially EH...
https://github.com/emscripten-core/emscripten/blob/main/syst...
Garbage collection is a small part of the Go runtime, but it's not insignificant.
Skimming this issue, it seems like they weren't expecting to be able to use this GC. I know C# couldn't either, at least based on an earlier state of the proposal.
So most GC languages being ported to WebAssembly already have a GC; what then is the benefit of using a provided GC?
On the other hand, I see GC as a feature that could become part of any modern CPU. Then the benefit would be large, as any language could use it and wouldn't have to implement its own at all anymore.
I'd think porting an existing GC to WASM is more effort than using WASM's GC for a GC'd language?
Whereas with a manual GC, if you had a JS object holding a reference to an object on your custom heap, and your heap held a reference to that JS object (with indirections sprinkled in to taste) but nothing else referenced either, that'd result in a permanent memory leak: both heaps would have to consider everything held by the other as GC roots, so you'd still be forced to manually avoid cycles despite only ever using GC'd languages. Wasm GC avoids this problem entirely.
WASM is and will always be the greatest technology of the future. It will never be the greatest technology of the present.
https://webassembly.org/features/
That isn't updated for Safari 26, but by that table Safari 18 is only missing 3 standardized features that Chrome supports, with a fourth that is disabled by default. So what's the point of your comment? Just to make noise and express your ignorance?
Apple took over distribution to prioritize its App Store cut, which crippled/slowed open-web PWA and WASM adoption.
A: look slow compared to other engines that supported it
B: implement it
Now, things like exception handling and tail calls probably aren't shimmable via JS, but at this point they don't gain much from being obstructionists.
- memory64
- multiple memories
- JSPI (!!)
I recently explored the possibility of optimizing qemu-wasm in the browser [0]... and it turns out that the most important features were the ones Safari doesn't implement.
Say you have a WASM module: straight-line code that builds a stack and runs quickly because, apart from overflow checks, it can just truck on.
Now add this JS-Promise thing into the mix:
A: How does a JS module now handle the call into the Wasm module? Classic WASM calls were synchronous; should we change the signature of all Wasm functions to async?
B: Do WASM-internal calls magically become Promises and awaits (which would take a lot of performance out of WASM modules)? If not, we now have a two-color function world that needs reconciliation.
C: If we do some magic where the full frame is paused and stored away, what happens if another JS function then calls into the WASM module and awaits, and then the first one resumes? Any stack inside the Wasm memory now has potential race conditions (and potentially security implications). Sure, we could put locks on all Wasm entries, but that could cause other unintended side effects.
D: Even if all of the above is solved, there are still the low-level issues of stack management for WASM-compiled code.
Looking at the mess that is emscripten's current solution to this, I really hope this proposal gets very well thought out, and not just railroaded in because V8's compiler happens to support it.
1: It has the potential to affect performance of all Wasm code just because people writing QEMU etc. are too lazy to properly abstract resource loading to cooperate with the Wasm model.
2: It can become a burden on the currently thriving Wasm ecosystem with multiple implementations (honestly, stuff like Wasm-GC is less disruptive even if it includes a GC).
JSPI-based coroutines are much faster than the old Asyncify ones (my demo shows that).
As for your core message: I'm just a user, but if Google engineers were able to implement it, then it is possible to implement it securely. I remember Google engineers arguing with Apple engineers in GH issues, but I'm not on that level; I just see that JSPI is already implemented in Chrome, so you can't tell me it's not possible.
EDIT: By "safari" here I actually mean WebKit.