Which is to say, the interface exploration comes part and parcel with the agent tooling, in my experience
This comment is a little orthogonal to the content of the article, but my experience made what they wrote click with me
On top of that, Hyrum’s law doesn’t go away just because your software has explicit contracts. In my experience, as more people start losing their agency over the code they generate, the system accumulates implicit cruft over time and other code starts depending on it.
Also, reliability is a reasoning problem, so operational excellence becomes scrappy with this way of working. It works for some software, but I don’t know whether this push to YOLO it is actually a good thing. C-levels at many big companies are putting immense pressure on everyone to adopt these tools and magically increase productivity while decreasing headcount to please investors. It’s not going too well so far.
A good interface doesn’t magically make vibe-coded implementations with little oversight usable. Rewriting it over and over again in the same manner and expecting improvement is not the kind of engineering I want to do.
Where it gets messy is when your "disposable" layer accumulates implicit contracts. A dashboard that stakeholders rely on, an export format someone's built a process around, a webhook payload shape that downstream systems expect. These aren't in your documented interfaces but they become load-bearing walls.
The discipline required is treating your documented contracts like the actual boundary - version them properly, deprecate formally, keep them minimal. Most teams don't have that discipline and end up with giant surface areas where everything feels permanent.
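One lightweight way to enforce that boundary is to project every outbound payload onto an explicit whitelist of documented fields, so internal extras can never leak out and become load-bearing. A minimal sketch (the field names here are hypothetical, not from the article):

```python
# Sketch: emit only the documented fields of a webhook payload, so
# internal/debug fields can't leak out and become implicit contracts.
# Field names are illustrative assumptions.

DOCUMENTED_FIELDS = {"id", "event", "created_at"}

def to_public_payload(internal: dict) -> dict:
    """Project an internal record onto the documented contract."""
    missing = DOCUMENTED_FIELDS - internal.keys()
    if missing:
        raise ValueError(f"missing documented fields: {sorted(missing)}")
    # Keep the surface minimal: anything undocumented is dropped here.
    return {k: internal[k] for k in DOCUMENTED_FIELDS}

payload = to_public_payload({
    "id": 42,
    "event": "order.paid",
    "created_at": "2024-01-01T00:00:00Z",
    "debug_trace": "...",  # internal-only; must not reach consumers
})
assert "debug_trace" not in payload
```

The point isn't this particular helper, it's that the projection happens in one place, so the documented contract is the only thing downstream systems can ever see.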
the problem is not in documenting the subset of a giant surface you intend to support; the problem is having a giant surface!
I wish!
So how can you keep generating disposable software on this layer?
And what you mostly want to change in software is adding new features or handling more usage. In most cases, doing that requires changes to the data store and the “hand-crafted core”.
So what part in practice will be disposable and how often will it be “generated again”?
Maybe for simple small stuff, like how fast Excel sheets are being made, changed and discarded? Maybe for embedded software?
Well... If your "users" are paying customers of an XaaS subscription service, then there's probably little need and/or room for a disposable UI.
But if you're building something for internal processes with maybe 2-3 users at most, then you might want something that doesn't turn into launching an under-budgeted project that could be a full-blown SaaS product on its own.
In general I agree with you, just not at the extreme.
It’s better to be on top of the interface than the implementation. But at the same time, brittle systems caused by overly rigid interfaces are a real thing. In that case, interface exploration can be done by a human with assistance from LLMs, but allowing an LLM to arbitrarily explore and make changes to the interface sounds like a recipe for disaster.
Want a dashboard from an API with OpenAPI docs or from a SQL database with a known schema, or a quick interactive GUI that highlights something in `perf stat` data? Unleash Claude.
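For the `perf stat` case, the throwaway GUI mostly reduces to parsing the tool's machine-readable output first. A sketch, assuming `perf stat -x,` CSV output where the first column is the counter value and the third is the event name (the exact column layout varies by perf version, so check yours):

```python
# Sketch: parse `perf stat -x,` CSV output into a dict of counters,
# the kind of thing a disposable dashboard would be generated around.
# Assumed column order: value, unit, event name, ... (perf-version dependent).
import csv
import io

SAMPLE = """\
1234567,,instructions,1000000,100.00
890123,,cycles,1000000,100.00
"""

def parse_perf_stat(text: str) -> dict:
    counters = {}
    for row in csv.reader(io.StringIO(text)):
        # Skip blank lines and non-numeric values like "<not counted>".
        if len(row) < 3 or not row[0].strip().isdigit():
            continue
        counters[row[2]] = int(row[0])
    return counters

stats = parse_perf_stat(SAMPLE)
ipc = stats["instructions"] / stats["cycles"]  # e.g. the metric to highlight
```

Everything downstream of the parse (charting, highlighting) is exactly the disposable part: regenerate it at will, since the contract is just `perf`'s output format.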
Shifting quality expectations are a result of the load of crappy software we experience, not a change in what we want from software. In other words, it's not a good thing that we're allowed to ship crap because people "expect it"; it simply means most software is crap. It's something we should work against by producing less slop, not more.
Today you have to have it write some software to accomplish a task, and it's pretty obvious what's going on. But when the AI itself becomes the UI, that matters less: are the steps to complete a task the goal, or is it the end result?
The only thing really stopping the commodification of software is the development of said software.
My work is in formal verification, and we’re looking at how to apply what we do to putting guard rails on AI output.
It’s a promising space, but there’s a long way to go, and in the meantime, I think we’re about to enter a new era of exploitable bugs becoming extremely common due to vibe coding.
I vibe coded an entire LSP server — in a day — for an oddball verification language I’m stuck working in. It’s fantastic to have it, and an enormous productivity boost, but it would’ve literally taken months of work to write the same thing myself.
Moreover, because it ties deeply into unstable upstream compiler implementation details, I would struggle to actually maintain it.
The AI took care of all of that — but I have almost no idea what’s in there. It would be foolish to assume the code is correct or safe.