It's the only usable form of reference! I want all the details to be presented in a reference. Where else?
> low-level tools are terrible too
It seems to me the author is confusing lack of familiarity with lack of existence. There are lots of fantastic tools out there, you just need to learn them. They don't know them, so conclude they don't exist.
> We could have editor plugins and language servers to help beginners along
We already have all that.
I guess it's like a dictionary: it's only useful if you know the word you want to look up, rather than reading through every definition until you find the function/library/ability that you want. I do agree though, when I need to look something up, I do want it in great detail - it just isn't a very good learning resource.
> It seems to me the author is confusing lack of familiarity with lack of existence. There are lots of fantastic tools out there, you just need to learn them. They don't know them, so conclude they don't exist.
Can you give some examples? The author made a compelling argument about how easy the browser debugger is to use. I would be very interested in something similar.
> We already have all that.
I've only seen these for simple python applications or web development, never in any 'low level' space. And certainly not for doing anything interesting in the low level space (something that is not just a C++ language tutorial).
You can see this with languages like Rust and Go—they're some of the first low-level programming languages with actually good tooling, and, as a result, they're becoming very popular. I can pull down some code, run `cargo build`, and not have to worry about the right libraries being installed on my system and whether I've generated a Makefile. Packages are easily searchable, hosted on popular platforms like GitHub, and I can file bugs and ask questions without having to join an obscure mailing list or deal with an unfriendly community.
If you want your language/library/framework/layer in the stack to become popular, make the tooling good and make it easy for folks to get their questions answered. Everything else will follow.
This is exactly what I'm trying to build. I'm writing a library on top of Qt that makes writing native code as easy as writing React code! I would say it's even easier, since we throw all the constraints of browsers and web APIs out of the way.
Like for example if you're an automotive engineer, you can't go ahead and put in the thickest beams made out of the strongest steel on hand, because the resulting car would weigh 20 tons and cost $300k. To add to that, it would probably drive like crap, and wouldn't even protect the driver that well.
In engineering, even a 10% waste is frowned upon. Here I outlined a 10x waste. And I don't think a Reddit comment taking 200ms to open or close is a 10x waste; it's a couple of orders of magnitude more.
Why is it that, despite tons of very highly paid software 'engineers' (more highly paid than any other kind) working on websites like Reddit (hardly the only example of this trend), the result ends up like this?
It's not a lack of resources, or constraints, or the pace of development (I don't remember the Reddit site changing all that much in the past decade).
I think if software engineers had the mindset of other engineers, this would be considered ridiculous and unimaginable, the way a 20-ton car is for automotive engineers.
I'd phrase it differently: Civil engineering is fundamentally about understanding trade-offs within hard constraints. You have materials of known strength, known wear, and known properties (compression vs shear). It's boring by default because the physics don't budge.
A lot of software engineering, web and SaaS development in particular, hasn't had to confront comparable resource limitations. For decades we've had (for practical purposes) an infinitely fast calculator, a practically infinite supply of memory for active working data, and an infinite number of those calculators to chain together. So, without constraints, people have just run wild.
But here's where it gets interesting from my perspective: when you point out the resulting bloat (200ms to open a Reddit comment), many developers will defend it not as a technical failure but as correct business prioritisation. "User hardware is cheap, developer time is expensive" or "users will upgrade their devices anyway"—essentially externalising the performance cost onto users rather than absorbing it as an engineering constraint.
That's the fundamental difference. An automotive engineer can't build a 20-tonne car and tell customers to buy stronger roads. But we absolutely can ship bloated software and tell users to buy faster computers, more bandwidth, better phones. And for a long time, we've got away with it.
The question is whether that's still sustainable, or whether we're approaching the limits of what users will tolerate.
There is no 'Reddit 2' substitute product (or indeed for lots of software), and network effects tend to dominate, so your benchmark is 'is it bad enough so people would rather use nothing than your product', which is a very low bar to clear.
We can see this works in reverse: developer tools, CLIs, and local apps where network effects don't apply (ripgrep over grep, esbuild over webpack) performance actually matters and gets rewarded. Developers switch because they can switch without losing anything. But Instagram users can't switch to a lighter alternative without abandoning their social graph.
This is why the "developer time is expensive, user hardware is cheap" argument only works in the absence of competition. In genuinely competitive markets, efficient code becomes a competitive advantage worth investing in. The fact that it's "not worth optimising" is itself evidence of market power, not sound economics.
Your automotive analogy actually understates it: imagine if switching to a better car meant your old car's passengers couldn't ride with you anymore, and that's closer to what we're dealing with.
A civil engineer might work on a major bridge that costs a billion dollars to build. An automotive engineer might work on a car with cumulative production costs in the billions of dollars. An aeronautical engineer might work on a plane with a $100 million price tag.
The engineer’s job there is to save money. Spend a week slimming down part of that bridge and you’ve substantially reduced costs, great! Figure out how to combine three different car parts into one and you’ve saved a couple of dollars on every car you make, well worth it.
Software doesn’t have construction costs. The “engineer” (I have the word in my job title but I hesitate to call us that) builds the whole thing. The operating costs are often cheap. Costs like slow rendering are paid by the customer, not the builder.
In that environment, it’s often not a positive ROI to spend a week making your product more efficient. If the major cost is the “engineers” then your focus is on saving them time. If you can save a week of their time at the cost of making your customers wait 50ms longer for every action, that is where you see your positive ROI.
When software contributes to the cost of a product, you tend to see better software work. Your headphones aren’t running bloated React frameworks because adding more memory and CPU is expensive. But with user-facing software, the people who pay the programmers are usually not the people who pay for the hardware or are impacted by performance.
Meanwhile, Google and Apple look for whatever ways they can to improve battery life on their phones.
But for many other developers, this isn’t going to save money or increase sales, so the incentives are more indirect.
E.g., develop a generic user interface framework which makes it very quick to produce a standard page with a series of standard fields but at the same time makes it very painful to produce a non-standard layout. After that is done it is 'discovered' that almost all pages are non-standard. But that 'discovery' could also have been made in five minutes by talking to any of the people already working for the company....
Another example: use an agent system where lots of agents do almost nothing, maybe translating one enum value to the corresponding value of another enum type. Then discover that you get performance problems because agent traffic is quite expensive. At the same time, the typical endless Java typing sets in because of the enormous amount of agent boilerplate. And the agents that actually do something useful become god classes, because basically all the non-trivial logic ends up there....
Not quite. The path to high level always involves abstractions that fit the problem. There is still room for a decision to replace high-level with low-level in some very specific bits of a hot path, but that decision also takes into consideration the tradeoffs of foregoing straightforward high-level solutions for low-level versions that are harder to maintain. The sales pitch to push code that is harder to maintain requires a case that goes way beyond performance arguments.
>Building it yourself might sound crazy, but it’s been done successfully many times before—for example, Figma famously built their app from scratch in WASM and WebGL, and it runs shockingly well on very large projects.
Yes, let's hear more about this. "Collapsing Reddit comments could have been like 180ms faster" isn't very convincing to smart, ambitious people deciding what they want to be about. Find more examples like Figma and get people to believe that there's still lots of room for up and comers to make a name for themselves by standing on their performance, and they'll take care of the learning and building themselves.
It's fairly compelling to an audience who spends a lot of time browsing reddit, however
Because of this, I'm really looking forward to PanGUI stepping up (https://www.pangui.io/); their UI framework is very promising and I would start using it in a heartbeat when the beta actually releases!
That's the browser. Native UI development failed because it didn't want to lose money on cross-platform compatibility, security, or the user onboarding experience.
The web is fast enough for 99% of UIs, the story is not about using web, the story is about using the web poorly. old.reddit is not qt.
But apps made with Qt as an end product don't suck, I think. Qt is a fully featured, modern, high-quality framework.
The alternative to cross platform frameworks that do not feel completely native on all platforms is to use browsers for desktop apps which do not feel native on any platform. They do not even have similar UIs to each other.
We would be better off using imperfect cross platform frameworks rather than sticking everything in the browser.
I think part of the reason this happens is that users accept it because they are used to web apps so do not expect consistency.
This is exactly what we're trying to do with Slint (https://github.com/slint-ui/slint). It’s a native, cross-platform UI framework for desktop and embedded (Rust/C++/Python/JS), with no browser runtime.
It sounds like a clever idea.
Also, I really wish they'd opted for a more general language like C# rather than Dart - but that was inevitable, since Google needed to make use of their Dart language after they failed to standardize it on the web (and I suspect they didn't want to use a language developed by Microsoft, of all companies).
C# is one of the worst choices they could have made at the time.
I hoped someday Flutter might be mature enough for desktop development, but so far they've focused most of their efforts on mobile and I don't think this will change in the future.
I really don't think there is any broad future for Flutter. Requiring adoption of a new programming language makes an already uphill battle even steeper, and the way they insist on rendering websites in a single giant canvas is... ugh
That's not consensus. I very much reject a "desktop framework". Qt has its own abstractions for everything from sockets to executing processes and loading images, and I don't want that. It forces one to build the entire app in C++, and that's because, although open-source, its design revolves around the needs of the paying customers of Trolltech: companies doing multi-platform paid apps.
I want a graphical toolkit: a simple library that can be started in a thread and allows me to use whatever language runtime I want to implement the rest of the application.
> I hoped someday Flutter might be mature enough for desktop development
Anything that forces a specific language and/or runtime is dead in the water.
Yes, that is the consensus of why Qt sucks - it's a massive framework that tries to do everything at the same time with a massive toolset of in-house libraries. This is inherently tied to the revenue model of the Qt Company - sell custom modules that work well with the Qt ecosystem at a high enterprise-level price. I also wish to just use the "good" parts of Qt but I can't, since it already has a massive QtCore as its dependency.
However, there is still no cross-platform framework except for Qt that can actually do the most important things a desktop framework needs: an actual widget editor, styling and theming, internationalization, interop with native graphics APIs (though I have gripes with their RHI system), etc. That's why I'm rooting for PanGUI (https://www.pangui.io/) to succeed - it pretty much checks all the boxes you listed, but it's still WIP and in closed alpha.
> I hoped someday Flutter might be mature enough for desktop development

> Anything that forces a specific language and/or runtime is dead in the water.
Yeah, but at the time I thought this was at least better than wrangling with Qt / QML. You can write the core application logic ("engine" code) in C++ and bind it with Dart. I've already seen some companies go a similar route with C# / WPF.
In my university days I was very much into GUIs, and I've written apps with wxWidgets, plain Gtk 1 and 2, GNOME 2, Qt, Tk, GNUstep and even some fairly obscure ones like E17 and FLTK. For my tastes, the nicest ones were probably GNOME 2, Elementary and wxWidgets. Especially GNOME 2, which had a simple builder that let me create the basic shell of an app, with some horizontal and vertical layout boxes that I could later "hydrate" with the application logic.
They say it's in beta and it seems anyone can sign up for the beta.
Ftfy.
When the DOM is not enough, there's already WebGL and WASM. A vanishingly small sliver of use cases can't saturate human senses with these tools, and the slowest, jankiest websites tend to be the least deserving of them (i.e., why is Jira slow? It's literally a text box with a handful of buttons on the side!).
Despite me agreeing with your overall point, this is such a ridiculous comment to make. You and I both know Jira is much much more than that. Reductive things like this just turn off people who would otherwise listen to you.
Someone needs to build Qt’s successor, probably with more beginner-friendly declarative semantics (akin to HCL or Cue) and probably with syntax closest to YAML or Python (based on learning curve, beginner readability etc).
The backend will probably have to be something written in Zig (likely) or Nim (capable, less likely) and will probably have to leverage OpenGL/Metal, WebGL and WASM.
Obviously a massive undertaking, which is why I think the industry has not reached consensus that this is what needs to happen. The less ideal options we have now often get the job done.
- WesAudio has a VST plugin for audio applications: https://slint.dev/success/wesaudio-daw
- LibrePCB 2.0 is migrating its code from Qt to Slint and should be released soon. https://librepcb.org/blog/2025-09-12_preview_of_next_gen_ui/
- krokiet: https://github.com/qarmin/czkawka/blob/master/krokiet/README...
For example, take Dear ImGui, a C++ UI framework with a kind of data binding that generates vertex buffers which can be uploaded directly to the GPU and rendered.
It supports most of the fancy layout stuff of CSS afaik (flexbox etc.), yet it's almost as low-level as it gets.
The code is also not that much harder to write than React.
The specific examples in the article are about UI.
I agree that the UI ecosystem is a big and slow mess, because there is actually a LOT of complexity in UIs. I would even argue that there is often more complexity to be found in UIs than in backends (unless you are working on distributed systems, or writing your own database). On the backend, you usually just need parallelism (95% of jobs are just parallel-for, map-reduce kinds of things).
But in UI, you need concurrency! You have tons of mutable STATE flying around that you need to synchronize - within UI, across threads or with the backend. This is /hard/ - and to come back to the point of the article - the only low-level language that I'm familiar with that can do it well and reliably is Rust.
In Rust, my absolute favorite UI framework is egui. It is based on immediate-mode rendering (maybe you're familiar with Dear ImGui), rather than the old, familiar-but-complex retained mode. It's really interesting stuff; I recommend studying it! Dear ImGui has a wonderful quote that summarizes this well:
> "Give someone state and they'll have a bug one day, but teach them how to represent state in two separate locations that have to be kept in sync and they'll have bugs for a lifetime." -ryg
We use egui in https://minfx.ai (Neptune/Wandb alternative) and working with it is just a joy. Emilk did such a fantastic job bringing egui about! Of course it has its flaws (the most painful being layout), but other than that it's great!
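The immediate-mode idea is easy to sketch. The toy below is hypothetical JavaScript, nothing like egui's or Dear ImGui's real APIs, but it shows the core trick: the UI is re-declared from application state every frame, so there is no retained widget tree that can drift out of sync with that state.

```javascript
// Immediate-mode sketch (illustrative; real frameworks are far richer).
// Widgets are plain function calls that append draw commands and report
// this frame's interaction; all state lives in one place, the app state.
function label(frame, text) {
  frame.drawList.push({ kind: 'label', text });
}

function button(frame, caption, clickedCaption) {
  frame.drawList.push({ kind: 'button', caption });
  return clickedCaption === caption; // "was this widget clicked this frame?"
}

// One frame: takes the app state plus this frame's input, returns the
// draw list and the (possibly) updated state.
function runFrame(state, clickedCaption) {
  const frame = { drawList: [] };
  label(frame, `count: ${state.count}`);
  if (button(frame, '+1', clickedCaption)) {
    state = { count: state.count + 1 };
  }
  return { state, drawList: frame.drawList };
}

let out = runFrame({ count: 0 }, null); // frame 1: no input
out = runFrame(out.state, '+1');        // frame 2: user clicks "+1"
out = runFrame(out.state, null);        // frame 3: UI re-derived from state
console.log(out.drawList[0].text);      // "count: 1"
```

There is nothing to invalidate or patch: the label is simply rebuilt from `state.count` on every frame, which is exactly the "one location for state" discipline the ryg quote above is advocating.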
These bootstraps essentially speedrun software history, and so they tell us a lot about how we got here, and why we write the things we write. But they also create the perfect game: writing greenfield alternative bootstraps. The shortest, most readable bootstrap is proof of the best abstractions, the best way of doing things.
It's a chance to finally put the sort of software stack / tech tree stuff on a more apples-to-apples basis.
And as a bonus if you control both slices of bread it's much easier to change the sandwich filling as well! (Though if the original sandwich-builder wasn't careful you might find some sticky residue left over on your bread… maybe someone should take this metaphor away before I do more damage.)
What did we gain exactly? Reddit is better at displaying videos and images now. But it's slower despite faster hardware.
Everyone always wants a frontend framework that "just works" - sounds a lot like a free lunch to me! You have to manage the state and updates of your application at some point - the underlying software can't just "guess" what you want. But I'm always like a broken record when these React hate / <insert frontend framework here> hate threads show up - most of the confusion derives from a lack of basic understanding of what problems these frameworks solve in the first place.
If everyone fails to read framework release notes then the problem is frameworks. If you change so quickly and often that almost no developer bothers to keep up to date then you are the problem, not the developer.
"we at Handmade community" - and no link to that community anywhere
The blog itself? Two posts a year, and the 2025 posts aren't even on the blog itself (just redirects).
Yes, tooling and toolmaking should be promoted - but promotion itself should also be accessible somehow?
It would be nice if every language and library had a great working repl and a Jupyter Lab kernel and good mdn-like documentation and w3schools-like tutorials.
Here's the manifesto: https://handmade.network/manifesto
Also, the Reddit comparison is great, but I wish he had talked about why the slop is there in the first place.
I'm pretty sure new reddit isn't optimized for speed, it's optimized for analytics and datamining.
I bet they use all those backend calls to get really granular session info. When something is super slow, it's not that it's unoptimized, but rather it's optimized for money over user experience.
there's no reason to blame it for the types of websites being made either, it doesn't really provide enough functionality to influence the type of site you use it on
Off the top of my head: $() CSS parsing and DOM traversal was way slower than querySelector or getElementById, both of which predate jquery by years. Every $('.my-class') created wrapped objects with overhead. Something like $('#myButton').click(fn) involved creating an intermediate object just to attach an event listener you could’ve done natively. The deeper the method chaining got the worse the performance penalty, and devs rarely cached the selectors even in tight loops. It was the PHP of Javascript, which is really saying something.
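The wrapper overhead is easy to sketch. This toy is hypothetical and nothing like jQuery's real implementation, but it shows why uncached `$()` calls in a loop hurt: every call allocates a fresh wrapper object, while caching the selector pays that cost once.

```javascript
// Toy sketch of the wrapper-object overhead pattern (NOT real jQuery).
let allocations = 0;

class Wrapper {
  constructor(elements) {
    allocations += 1;          // count each wrapper we create
    this.elements = elements;
  }
  addClass(name) {
    this.elements.forEach(el => el.classes.add(name));
    return this;               // chaining returns the same wrapper
  }
}

// Stand-in for $(selector); a real call would also parse the selector
// and traverse the DOM before wrapping the result.
const fakeDom = [{ classes: new Set() }, { classes: new Set() }];
const $ = () => new Wrapper(fakeDom);

// Uncached: one wrapper allocated per loop iteration.
for (let i = 0; i < 1000; i++) {
  $().addClass('highlight');
}
console.log(allocations);      // 1000

// Cached: the fix devs rarely applied — one wrapper, reused.
allocations = 0;
const $cached = $();
for (let i = 0; i < 1000; i++) {
  $cached.addClass('highlight');
}
console.log(allocations);      // 1
```

The native equivalent (`getElementById` plus `addEventListener` or `classList`) skips both the selector parsing and the intermediate object entirely.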
By the early 2010s most of the library was dead weight, since everyone had started shipping polyfills, but people kept plopping down jquery-calendar like it was 2006.
(I say this as someone who has fond memories of using Jquery in 2007 to win a national competition in high school, after which I became a regular contributor for years)
You have that backwards – jQuery predates querySelector by years.
The reason why getElementById is fast is because it’s a simple key lookup.
absolutely correct this is because a lot of the shit jquery did was good and people built it into the browser because of that
putting jquery into a site now would be insane but at the time it pushed forward the web by quite a leap
> New Reddit was a React app
Many such cases. React is basically synonymous with horrible lag and extreme bloat to me. Its name is the highest form of irony.
I'm really not sure why JS frameworks in general are so popular (except to facilitate easy corporate turnover), when the browser already gives you a pretty complete toolset that's the easiest to use out of any GUI library in existence. It's not low level by any means.
Granted something like an <include html component> feature is desperately missing from the html spec, but there are lightweight solutions for it.
yeah this is pretty much 1. an incorrect implementation and/or 2. an incorrect take
and easily solvable with a bit of 'render auditing' / debugging
But you can also just... update the right DOM element directly, whenever a state changes that would cause it to be updated. You don't need to create mountains of VDOM only to throw it away, nor do you need to rerender entire components.
This is how SolidJS, Svelte, and more recently Vue work. They use signals and effects to track which state is used in which parts of the application, and update only the necessary parts of the DOM. The result is significantly more performant, especially for deeply nested component trees, because you're just doing way less work in total. But the kicker is that these frameworks aren't any less high-level or easy-to-use. SolidJS looks basically the same as React, just with some of the intermediate computations wrapped in functions. Vue is one of the most popular frameworks around. And yet all three perform at a similar level to if you'd built the application using optimal vanilla JavaScript.
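A minimal signal/effect sketch (hypothetical, not the actual internals of SolidJS, Svelte, or Vue) shows the idea: an effect records which signals it reads, and a write re-runs only the effects that depend on that signal, so in a UI each effect can update one DOM node directly.

```javascript
// Minimal signal/effect sketch (illustrative, not any framework's real code).
let currentEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (currentEffect) subscribers.add(currentEffect); // track the reader
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach(fn => fn());                   // re-run dependents only
    },
  };
}

function effect(fn) {
  currentEffect = fn;
  fn();                 // first run records which signals were read
  currentEffect = null;
}

// Usage: only the effect reading `count` re-runs on count.set();
// in a framework the effect body would set one element's textContent.
const count = signal(0);
const name = signal('ada');
let countRuns = 0;
let nameRuns = 0;

effect(() => { count.get(); countRuns += 1; });
effect(() => { name.get(); nameRuns += 1; });

count.set(1);
count.set(2);
console.log(countRuns, nameRuns); // 3 1 (1 initial run + 2 updates vs 1 initial run)
```

No virtual DOM is built or diffed anywhere in this scheme; the dependency graph recorded at first run tells the framework exactly which leaf updates to perform.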
> The web has complexity also of client/server with long delays and syncing client/server and DOM state, and http protocol. Desktop apps and game engines don’t have these problems.
Massively multiplayer games consistently update in under 16ms.
What part of hiding a comment requires a HTTP round trip? In 200ms you could do 20 round trips.
* These articles always say that hardware is amazing but software sucks. Let's not forget that hardware has its problems. Intel's management engine is a pile of complexity: https://www.zdnet.com/article/minix-intels-hidden-in-chip-op.... The x86_64 instruction set is hardly inspiring, and I imagine we lose a pile of performance because it fails to adequately represent the underlying hardware. (E.g. there are hundreds of registers on modern CPUs, but you can't access them directly and just have to hope the hardware does a good job of register allocation.)
* Languages unlock performance for the masses. Javascript will never be truly fast because it doesn't represent the machine. E.g. it doesn't have distinct integer and floating point types. Rust represents the machine and is fast, but is not as ergonomic as it could be. OxCaml is inspiring me lately as it's an ergonomic high-level language that also represents the machine. (Scala 3 is also getting there with capture checking, but that is still experimental.) If we want more performance we have to give a way to efficiently write code that can be turned into efficient code.
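The "no distinct integer type" point is easy to demonstrate: every JavaScript number is an IEEE-754 double, so integer precision runs out at 2^53, and the closest thing the language offers to declaring a machine representation is a typed array.

```javascript
// Every JS number is an IEEE-754 double, so adjacent integers past 2^53
// collapse onto the same representable value.
const limit = Number.MAX_SAFE_INTEGER;   // 2^53 - 1
console.log(limit + 1 === limit + 2);    // true: 2^53 and 2^53 + 1 collide

// Engines optimize "small integers" internally, but code has no way to
// ask for an i32 or f64 directly; typed arrays are the nearest escape hatch.
const ints = new Int32Array([2147483647]);
ints[0] += 1;                            // wraps like a machine i32
console.log(ints[0]);                    // -2147483648
```

A JIT has to guess and guard these representations at runtime, which is exactly the cost that machine-representing languages avoid.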
Sure, x86 is an absolute mess, but I don't think it's a primary bottleneck. High-end x86 CPUs still beat high-end ARM CPUs by a significant margin on raw performance. And even supposing x86/ARM are bottlenecks... they're bottlenecks at double-digit billions of ops per second.
> Languages unlock performance for the masses. Javascript will never be truly fast because it doesn't represent the machine.
C# and Go are already really fast (https://github.com/ixy-languages/ixy-languages) languages for the masses and at this point you can compile most things to WASM to get them run in the browser.
I would expect thousands of opens/closes per second, probably an order of magnitude or two more. The LCD's data bandwidth and our retinas' sensitivity would be the decisive bottlenecks at far slower speeds.
TL;DR: CPUs, which are not getting slower, are not the reason newer software implementations often get slower.
I think you missed the point of what I'm saying.