I have added year indicators to my blog (such that old articles have a prominent year name in their title) and a subscribe note (people don’t know you can put URLs into a feed reader and it’ll auto-discover the feed URL). Each time, the number of people who email me identical questions goes down :)
Anyway, thanks for blogging!
Locking down features to have a unified experience is what a browser should do, after all, no matter the performance. Of course there were various vendors who tried to break this by introducing platform-specific stuff, but that's also why IE, and later non-Chromium Edge, died a horrible death.
There were external sandbox escapes such as Adobe Flash, ActiveX, Java applets and Silverlight, but those external escapes were often sandboxes of their own, despite all of them being horrible ones...
But with the stabilization of asm.js and later WebAssembly, all of them are gone with the wind.
Sidenote: Flash's scripting language, ActionScript, is also directly responsible for the generational design of Java-ahem-ECMAScript later on, and of TypeScript too.
I feel like I am the only one who absolutely loved ActionScript, especially AS3. I wrote a video aggregator (chime.tv[1]) back in the day using AS3 and it was such a fun experience.
1. https://techcrunch.com/2007/06/12/chimetv-a-prettier-way-to-...
There is universal hate for Flash because it was used for ads and had shitty security, but everyone I know who actually used AS3 loved it.
At its peak, with Flex Builder, we also had a full-blown UI editor where you could just add your own custom elements designed directly in Flash ... and then it was all killed because Adobe did not dare to open source it, or put serious effort of its own into improving the technical base of the Flash Player (which had acquired lots of technical debt).
That's only one side of it. Flash was the precursor to the indie/mobile gamedev industry we have today (Newgrounds, Miniclip, Armor Games), before smartphones became ubiquitous. Not to mention some rather creative websites, albeit at the cost of accessibility.
Flash's only fault was that its creators were gobbled up by Adobe, who left it in the shitter and ignored the complaints people had about its security issues.
IIRC, they couldn't open source Flash due to its use of a number of 3rd party C/C++ libraries that were proprietary.
Adobe's license with these 3rd parties permitted binary-only distribution so it would have meant renegotiating a fresh license (and paying out $$$) for an EOL codebase that had enormous technical debt, as you also acknowledge in your last sentence.
Maybe it was the standalone flex player instead of the web Flash player?
(Flash for desktop, with file access)
So what would have taken a day or two back when Flash was available is now taking a week of hand-writing tweens and animations in raw TypeScript, one layer at a time.
Since I happened to write the first canvas-based interactive scene graph code that Grant Skinner partially ripped off to create EaselJS, and since I'm sure he's making a fine living from Adobe licensing it, it's especially galling that I'm still paying for a CC license and this is what I get when I want to use a GUI to make some animations to drop into a game.
It's the first time I've done a 2D game since 2017, and I had over a decade of experience building games in Flash/AIR before that. It's just mind-blowing how stupid and regressed Adobe's tooling has become in the past few years, and how much harder it is to do simple things that we took for granted in the heyday of Flash authoring. There really is still no equivalent workflow or even anything close. I guess that post-Flash, there aren't enough people making this kind of web game content for there to be a solid route without using Unity or something.
Pixi is great for anything that is a texture; then it is really fast. Otherwise it is not a Flash replacement.
I do not use it for vector animations; spritesheets or a webm video overlay are what I would use in your case.
But... it bothers me as well, of course, having a full video of raster graphics for what could be just some vector and animation data.
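For the spritesheet route it isn't much code. A rough sketch with Pixi v7-era names; the sheet file and the "explode" animation name are placeholders:

    import { Application, Assets, AnimatedSprite } from "pixi.js";

    const app = new Application({ width: 800, height: 600 });
    document.body.appendChild(app.view);

    const sheet = await Assets.load("explosion.json");            // packed spritesheet + its .png
    const anim = new AnimatedSprite(sheet.animations["explode"]); // placeholder animation name
    anim.animationSpeed = 0.25;
    anim.loop = false;
    app.stage.addChild(anim);
    anim.play();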
I might try Spine, I've heard some positive things. They'll still end up as textures in pixi but maybe at least they can pack well.
I never really worked with it, but it seems whenever it comes up here or on Reddit, people who did, miss it. I think the authoring side of Flash is remembered very positively.
For a while RIM/Blackberry was using Adobe Air - on the Playbook and also the built-in app suite in the lead up to the launch of BB10. The latter never saw the light of day though, a late decision was made to replace all of the built-in apps with Qt/Cascades equivalents (if I remember right this was due largely to memory requirements of the Air apps).
Silverlight was nice, pity it got discontinued.
Thus it isn't as if the browser plugins story is directly responsible for its demise.
Not that I'm not excited about the possibilities in personal productivity, but I don't think this is the way--if it was, we wouldn't have lost, say, the ability to have proper desktop automation via AppleScript, COM, DDE (remember that?) across mainstream desktop operating systems.
I have a DDE book somewhere, with endless pages of C boilerplate to exchange a couple of values between two applications on Windows 3.x.
If not using WinUI 3.0, or Windows ML with CoPilot+, there is no reason to submit oneself to the pain of using CsWinRT or C++/WinRT bindings with lesser tooling than their UWP counterparts.
The large majority of new APIs since Vista are based on traditional COM, with the biggest exception being UMDF, which in version 2.0 rolled back the COM-based API of version 1.0 to a C-based one.
And today this is... not sufficient. What we require today is to run pieces of software protected from each other. For quite some time I tried to use Unix permissions for this (one user per application I run), but it's totally unworkable. You need a capabilities model, not a user-permission model.
Anyway I already linked this elsewhere in this thread but in this comment it's a better fit https://xkcd.com/1200/
Unix permissions remain a fundamental building block of Android's sandbox. Each app runs as its own unix user.
Tautology is tautology.
> but Java applets were removed from the browsers
Java applets provided more scope compared to the browser itself, not less. They're not really comparable to seccomp or namespaces.
> hosters who will hand off user account to a shared server
There's lots of CI or function runners that expose docker-like environments.
They are comparable because they provided a restricted sandbox to execute untrusted code.
> There's lots of CI or function runners that expose docker-like environments.
These are running inside VMs.
Java applets were killed off by MS's attempt at "embrace, extend, extinguish" (bundling an incompatible version of Java with IE) and Sun's legal response to this.
On http://co-do.xyz/ you can select a directory and let AI get to work inside of it without having to worry about side effects.
The File System Access API is the best thing that has happened to the web in years. It makes web apps first-class productivity applications.
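For anyone who hasn't tried it, the core of it is only a few calls. A rough sketch; Chromium-only, gated behind a user gesture and a permission prompt, and the file name is just a placeholder:

    const dir = await window.showDirectoryPicker();                       // user picks a real folder
    const file = await dir.getFileHandle("notes.txt", { create: true });  // placeholder file name
    const writable = await file.createWritable();
    await writable.write("hello from a web app\n");
    await writable.close();                                               // changes land on the user's disk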
They won’t be first-class as long as native UI still has the upper hand.
Outside those sorts of spaces it's hard to name a popular piece of software that is still genuinely native rather than a wrapped webapp.
cough CapCut cough
The Chrome team's new, experimental APIs are a separate matter. They provide additional capabilities, but many programs can get along just fine without them, since they don't strictly need them in order to work, if they would ever even end up using them at all. A bunch of the applications in the original post fall into this category. You don't need new or novel APIs to be able to hash a file, for example. It's a developer education problem (see also: hubris).
Firefox and Safari are generally very conservative about new APIs that can enable new types of exploits.
At least Firefox and Safari do implement the Origin Private File System. So while you can't edit files on the user's disk directly, you can import the whole project into the browser, finish the edit, and export it.
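The import step is pretty small. A sketch, assuming createWritable() is available on OPFS file handles (Safari has historically only offered the worker-only sync access handles, so hedge accordingly):

    // Copy user-picked files into the Origin Private File System so they can be edited in-browser.
    async function importFiles(fileList) {
      const root = await navigator.storage.getDirectory();   // OPFS root for this origin
      for (const file of fileList) {
        const handle = await root.getFileHandle(file.name, { create: true });
        const writable = await handle.createWritable();      // Safari may need sync access handles in a worker instead
        await writable.write(file);
        await writable.close();
      }
    }
    // e.g. importFiles(document.querySelector('input[type="file"]').files)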
What I find most compelling about this framing is the maturity argument. Browser sandboxing has been battle-tested by billions of users clicking on sketchy links for decades. Compare that to spinning up a fresh container approach every time you want to run untrusted code.
The tradeoff is obvious though: you're limited to what browsers can do. No system calls, no arbitrary binaries, no direct hardware access. For a lot of AI coding tasks that's actually fine. For others it's a dealbreaker.
I'd love to see someone benchmark the actual security surface area. "Browsers are secure" is true in practice, but the attack surface is enormous compared to a minimal container.
For the same reasons given, NPM programmers should be writing their source code processors (and other batch processing tools) to be able to run in the browser sandbox instead of writing command-line utilities directly against NodeJS's non-standard (and occasionally-breaking) APIs.
1) WebContainers allow Node.js frontend and backend apps to be run in the browser; this is readily demonstrated by the (now sadly unmaintained) bolt.diy project.
2) jslinux and the x86 Linux examples allow running a complete Linux environment in WASM, with two-way communication. A thin extension adds networking support to Linux.
So in principle it's possible to run a pretty full-fledged agentic system with the simple UX of visiting a URL.
My eventual goal with that is to expand it so an LLM can treat it like a filesystem and execution environment and do Claude Code style tricks with it, but it's not particularly easy to programmatically run shell commands via v86 - it seems to be designed more for presenting a Linux environment in an interactive UI in a browser.
It's likely I've not found the right way to run it yet though.
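The most scriptable route I've seen so far is driving it over the emulated serial console rather than the screen. A sketch; the constructor, method and event names are from memory of the v86 examples, so treat them as assumptions:

    // Run a shell command in a v86 VM over serial and collect the output.
    const emulator = new V86Starter({ /* wasm_path, bios, cdrom/hda image, autostart, ... */ });
    let output = "";
    emulator.add_listener("serial0-output-byte", (byte) => {
        output += String.fromCharCode(byte);    // naive: assumes single-byte output
    });
    emulator.serial0_send("ls -la /root\n");    // "type" the command plus Enter
    // ...then watch `output` for the next shell prompt to know the command finished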
I see the datestamp on this early test https://fingswotidun.com/tests/messageAPI/ is 2023-03-22. Thinking about the progress since then, I'm amazed I got as far as I did. (To get the second window to run its test you need to enter aWorker.postMessage("go") in the console.)
The design used IndexedDB to make a very simple filesystem, plus a transmittable API.
The source of the worker shows the simplicity of it once set up; https://fingswotidun.com/tests/messageAPI/testWorker.js in total is just:
    importScripts("MessageTunnel.js"); // the only dependency of the worker

    onmessage = function(e) {
        console.log(`Worker: Message received from main script`, e.data);
        if (e.data.apiDefinition) {
            // wire up the host-provided API over the transferred MessagePort
            installRemoteAPI(e.data.apiDefinition, e.ports[0]);
        }
        if (e.data == "go") {
            go();
            return;
        }
    };

    async function go() {
        const thing = await testAPI.echo("hello world");
        console.log("got a thing back", thing);
        // fs is provided by installRemoteAPI
        const rootInfo = await fs.stat("/");
        console.log(`stat("/") returned`, rootInfo);
        // fs.readDir returns an async iterator that awaits on an iterator on the host side
        const dir = await fs.readDir("/");
        for await (const f of dir) {
            const stats = await fs.stat("/" + f.name);
            console.log("file " + f.name, stats);
        }
    }
I distinctly remember adding a service worker so you could fetch URLs from inside the filesystem, so I must have a more recent version sitting around somewhere. It wouldn't take too much to have a $PATH analog and a command executor that launched a worker from a file on the system if it found a match on the $PATH. Then an LLM would be able to make its own scripts from there.
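A rough sketch of that executor, reusing the fs API from the worker above (readFile is hypothetical here, the test only exposes stat/readDir, and I'm assuming stat rejects for missing paths):

    const PATH = ["/bin", "/usr/bin"];            // $PATH analog inside the IndexedDB filesystem
    async function exec(command, args) {
        for (const dir of PATH) {
            const candidate = dir + "/" + command + ".js";
            try { await fs.stat(candidate); } catch { continue; }   // not in this dir, keep looking
            const source = await fs.readFile(candidate);            // hypothetical helper
            const url = URL.createObjectURL(new Blob([source], { type: "text/javascript" }));
            const worker = new Worker(url);
            worker.postMessage({ args });
            return worker;
        }
        throw new Error("command not found: " + command);
    }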
It might be time to revisit this. Polishing everything up would probably be a piece of cake for Claude.
Isn't webcontainers.io a proprietary, non-open-source solution with paid plans? Mentioning it at the same level as open source, auditable platforms seems really strange to me.
Could this be used with arbitrary local tools as well? I could be missing something but I don't see how you could use a non remote MCP server with this setup.
At the risk of sounding obvious:
- Chrome (and Chromium) is a product made and driven by one of the largest advertising companies (Alphabet, formerly Google) as a strategic tool for its business model
- Chrome is one browser among many; it is not a de facto "standard" just because it is very popular. The fact that there are a LOT of people unable to use it (iOS users) even if they wanted to proves the point.
It's quite important not to conflate some experimental features put in place by some vendors (yes, even the most popular ones) with "the browser".
There are many useful things that can only be implemented for Chromium: things like the filesystem API mentioned in this post, the USB devices API used to implement various microcontroller flashing tools, etc. Users can have multiple browsers installed, and I often use Chromium as essentially a sandboxed program runtime.
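The WebUSB side of those flashing tools boils down to a handful of calls. A sketch; the vendor ID, configuration, interface, and endpoint numbers are just example values:

    const device = await navigator.usb.requestDevice({ filters: [{ vendorId: 0x2341 }] }); // example vendor ID
    await device.open();
    await device.selectConfiguration(1);
    await device.claimInterface(0);
    await device.transferOut(2, new Uint8Array([0x01, 0x02, 0x03]));  // endpoint 2, example payload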
Chrome adds these features because it is responding to the demands of web developers. It's not web developers' fault if Firefox can't or refuses to provide APIs that are being asked for.
Mozilla could ask Claude to implement the filesystem API today and ship it to everyone tomorrow if they wanted to. They are holding their own browser back; don't let them also hold your website back. As for browser monoculture, there are many browsers built on top of the open source Blink that are not controlled by Google, such as Edge, Brave, and Opera, just to name a few of the many.
I would like to humbly propose that we simply provision another computer for the agent to use.
I don't know why this needs to be complicated. A nano EC2 instance is like $5/m. I suspect many of us currently have the means to do this on prem without resorting to virtualization.
https://developer.chrome.com/blog/persistent-permissions-for...
On my desktop Chrome on Ubuntu, it seems to be persistent, but on my Android phone in Chrome, it loses the directory if I refresh.
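On the desktop the trick is that handles are structured-cloneable, so they can be stashed in IndexedDB and reused across visits. A sketch using the idb-keyval helper; the key name is arbitrary:

    import { get, set } from "idb-keyval";

    async function getProjectDir() {
        let handle = await get("projectDir");                 // survives reloads on desktop Chrome
        if (!handle) {
            handle = await window.showDirectoryPicker();
            await set("projectDir", handle);
        }
        // The stored handle may still need its permission re-granted after a restart.
        if (await handle.queryPermission({ mode: "readwrite" }) !== "granted") {
            await handle.requestPermission({ mode: "readwrite" });
        }
        return handle;
    }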
So if you can work within the constraints there are a lot of benefits you get as a platform: latency goes down a lot, performance may go up depending on user hardware (usually more powerful than the type of VM you'd use for this), bandwidth can go down significantly if you design this right, and your uptime and costs as a platform will improve if you don't need to make sure you can run thousands of VMs at once (or pay a premium for a platform that does it for you)[1]
All that said I'm not sure trying to put an entire OS or something like WebContainers in the user's browser is the way, I think you need to build a slightly custom runtime for this type of local agentic environment. But I'm convinced it's the best way to get the smoothest user experience and smoothest platform growth. We did this at Framer to be able to recompile any part of a website into React code at 60+ frames per second, which meant less tricks necessary to make the platform both feel snappy and be able to publish in a second.
[1] For big model providers like OpenAI and Anthropic there's an interesting edge they have in that they run a tremendous amount of GPU-heavy loads and have a lot of CPUs available for this purpose.
That said. It's a good start.
That's enough reason for me to say, f** no. Google will try as hard as possible to promote this even if it's not technically the best solution.
Browsers as agent environments open up a ton of exciting possibilities. For example, agents now have an instant way to offer UIs based on tech governed by standards (HTML/CSS) instead of platform-specific UI bindings. A way to run third-party code safely in WASM containers. A way to store information on disk with enough confidence that it won't explode the user's disk drive. All this basically for free.
My bet is that eventually we'll end up with a powerful agentic tool that uses the browser environment to plan and execute personal agents, or to deploy business agents that don't access system resources any more than browsers do at the moment.
Generated via ChatGPT, this canvas shows a basic pyramid and has sliders that you can use to change the pyramid, and download the glTF to your local machine. You can also click the edit w/ ChatGPT and tweak the UI however you're able to prompt it into doing.
https://chatgpt.com/canvas/shared/697743f616d4819184aef28e70...
It's easily explained by the fact that all the javascript code is exposed in a browser and all the network connections are trivially inspectable and blockable. It's much harder to collect data and do shady things with that level of inspectability. And it's much harder to ban alternative clients for the main paid offer. Especially if AI companies want to leave the door open to pushing ads to your conversations.
I know there are lots of good arguments why docker isn't perfect isolation. But it's probably 3 orders of magnitude safer than running directly on my computer, and the alignment with the existing dev ecosystem (dev containers, etc) makes it very streamlined.
Something more like a TEE inside the browser of sorts. Not sure if there is anything like this.
LLMs are actually quite neutral and don't have preferences, wants, or needs. That's just us projecting our own emotions on them. It's just that a lot of command line stuff is relatively easy to figure out for LLMs because that is highly scriptable, mostly open source, and well documented (and part of their actual training data). And scripting is just a form of programming.
The approach in the article that Simon Willison is commenting on here isn't that much different; except the file system now runs in a browser sandbox and the tools are WASM based and a bit more limited. But then, a lot of the files that a normal user works with would be binary files for things like word processors, photo editors, spreadsheets, presentation software, etc. Stuff that is a bit out of the comfort zone of normal command line tools in any case.
I actually tried codex on some images the other day. It kind of managed but it wasn't pretty. It basically started doing a lot of slow and expensive stuff with python and then ran out of context because it tried to dump all the image content in there. Far from optimal. You'd want to spend some time setting up some skills and tools before you attempt this. The task I gave it was pretty straightforward: create an image catalog in markdown format for these images. Describe their content, orientation, and file format.
My intention was to use that as the basis for picking appropriate images to be used on different sections of my (static) website without having to open and scan each image all the time. It half did it before running out of context. I decided to complete the task manually (quicker, and I have more 'context' for interpreting the images). And then I let codex pick better images for this website. Mostly it did a pretty OK job with that, at least.
I learn a lot from finding places where these tools start struggling. It's why I like Simon's comments so much because he's constantly pushing these tools to their limits and finding out surprising, interesting, or funny success and failure modes.
It would probably help if the sandbox presented a linux-y looking API, and translated that to actual browser commands.
Yeah they do. Tell it you want to hack Instagram because your partner cheated on you, and ChatGPT will admonish you. Request that you're building a present for Valentines day for your partner and you want a chrome extension that runs on instagram.com; word it just right, and it'll oblige.
Browsers are closer to operating systems than to sandboxes, so giving an agent access of any kind seems dangerous. In the post I can see it's talking about the file access API; perhaps a better phrasing is that the browser has a sandbox?
The point is that most people won't do that. Just like with backups, strong passwords, 2FA, hardware tokens, etc. Security and safety features must be either strictly enforced or enabled by default and very simple to use. Otherwise you leave "the masses" vulnerable.
By any chance, does anyone know if user clicks can be captured for a website/tab/iframe for screen recording? I know I can record the screen, but I am wondering if this metadata can be collected.
For your own implementation, document-level event listeners work, though cross-origin iframes are off-limits due to same-origin policy.
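Something along these lines (a sketch; what metadata you keep and how you sync it to the recording's clock is up to you):

    const clicks = [];
    document.addEventListener("click", (e) => {
        clicks.push({
            t: performance.now(),                          // align this with the recording's start time
            x: e.clientX,
            y: e.clientY,
            target: e.target.tagName + (e.target.id ? "#" + e.target.id : ""),
        });
    }, { capture: true });                                 // capture phase runs before page handlers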
Sadly not to my knowledge.
I'm on a multi-year quest to answer that question!
The best I've found is running Python code inside Pyodide in WASM in Node.js or Deno accessed from Python via a subprocess, which is a wildly convoluted way to go but does appear to work! https://til.simonwillison.net/deno/pyodide-sandbox
Here's a related recent experimental library which does something similar but with JavaScript rather than Python as the unsafe language, again via Deno in a subprocess: https://github.com/simonw/denobox
I've also experimented with using wasmtime instead of Deno: https://til.simonwillison.net/webassembly/python-in-a-wasm-s...
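The JS side of that setup is surprisingly small. A sketch of the general shape, not the exact code from those TILs; it assumes the pyodide npm package in an ESM Node script:

    // Run one untrusted Python snippet and print a JSON result for the parent process.
    import { loadPyodide } from "pyodide";

    const untrusted = process.argv[2] ?? "1 + 1";
    const pyodide = await loadPyodide();
    const result = await pyodide.runPythonAsync(untrusted);
    console.log(JSON.stringify({ result: result?.toString() ?? null }));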
* Multithreaded support
* Calling subprocesses
* Signals
* Full networking support
* Support for greenlets (say hi to SQLAlchemy!) :)
It requires a small effort in wasmer-js, but it already works fully on the server! :)

Browser sandboxes are swiss cheese. In 2024 alone, Google's threat researchers tracked 75 zero-days exploited in the wild, with browsers among the most frequently targeted software.
Browsers are the worst security paradigm. They have tens of millions of lines of code, far more than operating system kernels. The more lines of code, the more bugs. They include features you don't need, with no easy way to disable them or opt-in on a case-by-case basis. The more features, the more an attacker can chain them into a usable attack. It's a smorgasbord of attack surface. The ease with which the sandbox gets defeated every year is proof.
So why is everyone always using browsers, anyway? Because they mutated into an application platform that's easy to use and easy to deploy. But it's a dysfunctional one. You can't download and verify the application via signature, like every other OS's application platform. There's no published, vetted list of needed permissions. The "stack" consists of a mess of RPC calls to random remote hosts, often hundreds if not thousands required to render a single page. If any one of them gets compromised, or is just misconfigured, in any number of ways, so does the entire browser and everything it touches. Oh, and all the security is tied up in 350 different organizations (CAs) around the world, which if any are compromised, there goes all the security. But don't worry, Google and Apple are hard at work to control them (which they can do, because they control the application platform) to give them more control over us.
This isn't secure, and there's really no way to secure it. And Google knows that. But it's the instrument making them hundreds of billions of dollars.
Also the double iframe technique is important for preventing exfiltration through navigation, but you have to make sure you don't allow top navigation. The outer iframe will prevent the inner iframe from loading something outside of the frame-src origins. This could mean restricting it to only a server which would allow sending it to the server, but if it's your server or a server you trust that might be OK. Or it could mean srcdoc and/or data urls for local-only navigation.
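Roughly the shape of it (a sketch; the CSP and sandbox values here are illustrative, and the exact policy depends on what the inner content actually needs to do):

    // Outer frame: sandboxed with no allow-top-navigation, and a CSP whose frame-src
    // decides where the inner frame is allowed to load or navigate.
    const outer = document.createElement("iframe");
    outer.setAttribute("sandbox", "allow-scripts");    // no allow-top-navigation, no allow-same-origin
    outer.srcdoc = `
      <meta http-equiv="Content-Security-Policy" content="frame-src data:">
      <iframe sandbox="allow-scripts"
              src="data:text/html,<h1>untrusted content</h1>"></iframe>`;
    document.body.appendChild(outer);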
I find the WebAssembly route a lot more likely to be able to produce true sandboxen.
A large part of the web is awful because of all the things browsers must do that the operating system should already be doing.
We have all tolerated stagnant operating systems for too long.
Plan 9's inherent per-process namespacing has made me angry at the people behind Windows, MacOS, and Linux. If something is a security feature and it's not an inherent part of how applications run, then you have to opt in, and that's not really good enough anymore. Security should be the default. It should be inherent, difficult to turn off for a layman, and it should be provided by the operating system. That's what the operating system is for: to run your programs securely.
Can you believe that if you download a calculator app it can delete your $HOME? What kind of idiot designed these systems?
The problems discussed by both Simon and Paul, where the browser can absolutely trash any directory you give it, are perhaps the paradigmatic example of where git worktree is useful.
Because you can check out the branch for the browser/AI agent into a worktree, and the only file there that halfway matters is the single file in .git which explains where the worktree comes from.
It's really easy to fix that file up if it gets trashed, and it's really easy to use git to see exactly what the AI did.
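In practice that's just a couple of commands (branch and path names here are placeholders):

    git worktree add -b agent-work ../myproject-agent   # fresh branch in a separate directory
    # point the browser/agent at ../myproject-agent, then inspect what it did:
    git -C ../myproject-agent status
    git -C ../myproject-agent diff
    git worktree remove ../myproject-agent              # clean up afterwards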
It seems he started his blog in 2003: https://simonwillison.net/2003/Jun/12/oneYearOfBlogging/
As for the "browser is the sandbox" approach: running untrusted code in the user's browser increases the risk of an unintended RCE via a sandbox escape, which can be done in Chrome [0]. WASM is not going to save you either [1].
[0] https://www.ox.security/blog/the-aftermath-of-cve-2025-4609-...
And then you see the recent vulnerabilities in opencode, for example. The current model is unsustainable.
It would be great if desktop Linux adopted a better security model (maybe inspired by Android). So far we got this https://xkcd.com/1200/ and it's not sufficient
We are soon going to release a new technology, built on top of the same stack, to allow full-stack development completely in the browser. It's called BrowserPod and we think it will be a perfect fit for agents as well.