That's very simple. The balanced path depends directly on how much the requirements and assumptions are going to change during the lifetime of the thing you are building.
Engineering is helpful only to the extent you can foresee the future changes. Anything beyond that requires evolution.
You are able to comment on the complexity of that large company only because you are standing 50 years in the future from when those things started to take shape. If you were designing it 50 years back, you would end up with the same complexity.
Nature's answer to this is to consolidate and compact. Everything that falls onto Earth gets compacted into solid rock over time by the huge pressure of its weight. All complexity and features are flattened out. Companies undergo similar dynamics, driven by pressures over time, not by big-bang engineering design up front.
The first is too ambitious and ends in an unmaintainable pile around a good core idea.
The second tries to "get everything right" and suffers second system syndrome.
The third gets it right, but now only for a set of central business needs. You learned, after all. It is good exactly because it does not try to get _everything_ right like the second did.
The fourth patches up some more features to scoop up the B and C priorities and calls it a day.
Sometimes, often in BigCorp: the creators move on and it slowly deteriorates from being maintained...
> The most prevalent one, these days, is that you gradually evolve the complexity over time. You start small and keep adding to it.
> The other school is that you lay out a huge specification that would fully work through all of the complexity in advance, then build it.
I think AI will drive an interesting shift in how people build software. We'll see a move toward creating and iterating on specifications rather than implementations themselves.
In a sense, a specification is the most compact definition of your software possible. The knowledge density per "line" is much higher than in any programming language. This makes specifications easier to read, reason about, and iterate on—whether with AI or with peers.
I can imagine open source projects that will revolve entirely around specifications, not implementations. These specs could be discussed, with people contributing thoughts instead of pull requests. The more articulated the idea, the higher its chance of being "merged" into the working specification. For maintainers, reviewing "idea merge requests" and discussing them with AI assistants before updating the spec would be easier than reviewing code.
Specifications could be versioned just like software implementations, with running versions and stable releases. They could include addendums listing platform-specific caveats or library recommendations. With a good spec, developers could build their own tools in any language. One would be able to get a new version of the spec, diff it against the current one, and ask AI to implement the difference, or to discuss what is needed for you personally and what is not. Similarly, it would be easier to "patch" the specification with your own requirements than to modify ready-made software.
Interesting times.
We have yet to see a largely LLM-driven language implementation, but it is surely possible. I imagine it would be easier to tell the LLM to instead translate the Java implementation to whatever language you need. A vibe-coded language could do major damage to a company's data.
[0] https://iceberg.apache.org/spec/ [1] https://lists.apache.org/thread/whbgoc325o99vm4b599f0g1owhgw...
This is a really good observation and I predict you will be correct.
There is a consequence of this for SaaS. You can imagine an example SaaS that one might need to vibecode to save money. The reason it's not possible now is not because Claude can't do it; it's because getting the right specs (like you suggested) is hard work. A well-written spec will not only contain the best practices for that domain of software but also all the legal compliance BS that comes along with it.
With a proper specification that is also modular, I imagine we will be able to see more vibecoded SaaS.
Overall I think your prediction is really strong.
> The WHATWG was based on several core principles, (..) and that specifications need to be detailed enough that implementations can achieve complete interoperability without reverse-engineering each other.
But in my experience you need more than a spec, because an implementation is not just something that implements a spec; it is also the result of making many architectural choices in how the spec is implemented.
Also, even with detailed specs, AI still needs additional guidance. For example, a couple of weeks ago Cursor unleashed thousands of agents with access to web standards and the shared WPT test suite: the result was total nonsense.
So the future might rather be like a Russian doll of specs: start with a high-level system description, and then support it with finer-grained specs of parts of the system. This could go down all the way to the code itself: existing architectural patterns provide a spec for how to code a feature that is just a variation of such a pattern. Then whenever your system needs to do something new, you have to provide the code patterns for it. The AI is then relegated to its strength: applying existing patterns.
TLA+ has a concept of refinement, which is kind of what I described above as Russian dolls but only applied to TLA+ specs.
Here is a quote that describes the idea:
> There is no fundamental distinction between specifications and implementations. We simply have specifications, some of which implement other specifications. A Java program can be viewed as a specification of a JVM (Java Virtual Machine) program, which can be viewed as a specification of an assembly language program, which can be viewed as a specification of an execution of the computer's machine instructions, which can be viewed as a specification of an execution of its register-transfer level design, and so on.
Source: https://cseweb.ucsd.edu/classes/sp05/cse128/ (chapter 1, last page)
One issue is that a spec without a working reference implementation is essentially the same as a pull request that's never been successfully compiled. Generalization is good but you can't get away from actually doing the thing at the end of the day.
I've run into this issue with C++ templates before. Throw a type at a template that it hasn't previously been tested with and it can fall apart in new and exciting ways.
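A minimal, hypothetical illustration of that failure mode: the template below works for every container-like type it has been tried with, and its implicit requirements only surface, as an error deep inside the template body, when a new type shows up.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical example: the only "spec" for Container is whichever types this
// template happens to have been tested with. The size() requirement is implicit.
template <typename Container>
std::size_t half_size(const Container& c) {
    return c.size() / 2;  // assumes c has a size() member returning an integer
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    std::string s = "hello";
    std::cout << half_size(v) << " " << half_size(s) << "\n";  // prints: 2 2

    // Throw an untested type at it and the failure shows up as a compile
    // error inside the template body, not as a violated contract at the call site:
    // half_size(42);  // error: request for member 'size' in an int
}
```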
Everything W3C does. Go is evolving through specs first. Probably every other programming language these days.
People already do that for humankind-scale projects where there have to be multiple implementations that can talk to each other. Iteration is inevitable for anything that gains traction, but it still can be iteration on specs first rather than on code.
AI assistance would seem to favor the engineering approach, as the friction of teams and personalities is reduced in favor of quick feasibility testing and complete planning.
But if anything, all development is the search for the requirements. Some just value writing them down.
Designers need malleability; that is why they all want digital design systems.
It was discussed here intensively just 2 days ago.
How rapidly has business software changed since COVID? Yet how many skyscrapers remain partially unoccupied in big cities like London, because of the recent arrival of widespread hybrid working?
The buildings are structurally unchanged and haven't been demolished to make way for buildings that better support hybrid working. Sure, office fit-outs are more oriented towards smaller simultaneous attendance, with more hot desking. Also, a new industry boom around team-building socials has arrived. Virtual skeet shooting or golf, for example.
On the whole, engineered cities are unchanged, their ancient and rigid specifications lacking the foresight to include the requirements that accommodate hybrid working. Software, meanwhile, has adapted and, as the OP says, evolved.