Processes, tools, and diligence seem the most apparent path. Perhaps rehash the 50-year-old debate over professionalization while AI vibe coding is barking at the door, because what could possibly go wrong with even less experience doing the same thing and expecting a different result.
If you want to do that on your own time, that's fine - but the purpose of a job is economic. Of course you should write software of some reasonable quality, but optimizations have diminishing economic returns. Eventually, the returns are lower than the cost (in time, money, etc) of further optimizing, and this break-even point is usually at a lower level of quality than engineers would like it to be. Leadership and engineering managers know this and behave accordingly.
One can be skeptical about the implied claim that leadership/management knows what it is doing beyond delivering at the (arbitrarily) set time. One definition of Quality is to satisfy a need entirely at the lowest cost in the shortest time, but more often than not, the last term gets 90% of the attention.
Do they? I’ve been fighting against the tide for years until I understood that all the quality-this and quality-that doesn’t matter. Sure, it sucks to be on the receiving end of buggy software, but this is where you vote with your money. At work? Finish the task with the least amount of resources and move on.
The whole ballgame is making sure you have no low quality people on your team.
The quality of your team is more-or-less a pre-existing background variable. The question is whether a team of comparable quality takes longer to produce quality software than hacked-together software, and the answer appears to be "yes". The only way out of this is if optimizing more for code quality *actually helps you recruit better engineers*.
I can put a little data to that question, at least. I run a recruiting company that does interviews, so we have data both on preferences and on apparent skill level.
I went and split our data set by whether an engineer indicated that emphasis on code quality was important to them. Our data suggests that you can probably make slightly better hires (in a raw technical-ability sense) by appealing to that candidate pool:
- Candidates who emphasized quality were slightly (non-significantly) more likely to pass our interview, and
- Candidates who emphasized quality were slightly (non-significantly) less likely to have gotten a job already
The effect is pretty small, though, and I doubt it outweighs the additional cost.
Your scenario may be true in some cases, but in general, more quality in software will cost more time & effort & money. If you isolate the experimental variable of "quality" to a particular single dev team, that team will require more time & effort to produce higher quality. The main contributor to higher quality and reliability is tests.
E.g. SQLite is considered "high quality and bulletproof". A big reason is that SQLite's test code is ~590x the LOC of the core database engine: https://www.sqlite.org/testing.html
If someone finds a rare bug, the SQLite team adds it to the tests to prevent future regressions.
Same situation with NASA's higher standards of software quality for space missions. A famous article describes the extensive tests they do: https://archive.is/HX7n4
Tests like unit tests, fuzz tests, Red Team adversarial tests, chaos monkey failure tests, etc... all require extra engineering time. Most companies don't want to pay the extra costs or extend the timelines to include all those tests.
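To make the regression-test pattern concrete, here's a minimal Kotlin sketch of pinning a reported bug as a permanent test (the function and the bug are hypothetical, JUnit 5 assumed; this shows the general pattern, not SQLite's actual harness):

    import org.junit.jupiter.api.Test
    import kotlin.test.assertEquals

    // Hypothetical function under test: an early version threw on blank input.
    fun parseNumbers(s: String): List<Int> =
        if (s.isBlank()) emptyList()
        else s.split(",").map { it.trim().toInt() }

    class RegressionTests {
        // Pin the once-failing input so the bug can never silently return.
        @Test
        fun `blank input returns an empty list instead of throwing`() {
            assertEquals(emptyList(), parseNumbers("   "))
        }
    }

Every bug report becomes one more permanent entry in the suite, which is how a test suite grows to many times the size of the code it guards.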
Key word is ‘can’. And it takes far more time and money to assemble a “quality” team.
I've watched many businesses appreciate the benefits of software quality (happy customers, few incidents, fast feature turnaround) without ascribing it to anything in particular.
Then, when it went away, they chalked up the problems to something else, throwing fixes at it which didn't work.
At no point in time did they accurately perceive what they had or what they lost, even at the point of bankruptcy.
Part of the problem is that the absence of bugs, incidents and delays just feels normal, and part of the problem is that most people are really bad at detecting second-order effects and applying second-order fixes. E.g. they think "another developer will fix it" or "devs just need to dedicate more time to manual QA".
Conversely, because it's so hard to see, I think it can make a really good competitive moat.
I don't think we'll reach this promised land™ until incentives re-align. Treating software as an assembly line was obviously The Wrong Thing judging by the results. The problem is: how can we ever move to a model that rewards quality, perhaps similar to (book) authors and royalties?
Owner-operator SaaS is about as close as you can get but limits you to web and web-adjacent.
Get a couple of shredded guys and gals to show off how fit they are so everyone feels guilty about snacking past 8PM.
Sell another batch of “how to do pushups” followed by “how to do pushups vol. 2” and “pushup pro, this time even better”.
When in the end, normal people don't get paid for getting shredded; they get paid for doing their stuff.
I just constantly feel like I'm not a proper dev because I mostly skip unit tests - but on the other hand, over the last 15 years I've built a couple of systems that worked and brought in value.
(The answer btw: Because nobody would be able to explain to a jury/judge that 80% or whatever is enough)
Obviously, this assumes you write enterprise grade code. YMMV
But still, the cottage industry of "clean code" keeps pushing me into self-doubt and shame.
However, you should want to build quality software because building quality things is fulfilling. Unfortunately, certain systems have made the worship of money the be-all and end-all of human experience.
The QE engineers and the development engineers were in entirely separate branches of the org chart. They had different incentive structures. The interface documentation was the source of truth.
The release cadence was slow. QE had absolute authority to stop a release. QE wrote more code than development engineers did with their tests and test automation.
They did TDD for a long time, they wrote Clean Code™, they organised meetups, sponsored and went to conferences, they paid 8th Light consultants to come teach (this was actually worth it!) and sent people to Agile workshops and certificates.
At first, I was like "wow, I am in heaven".
About a year later, I noticed so much repetition and waste of time in the processes.
Code was at a point where we had a "usecase" that calls a "repository" that fetches a list of "ItemNetworkResponse" which then gets mapped into "Item" using "ItemNetworkResponseToItemMapper" and tests were written for every possible thing and path.
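For a sense of scale, here's a hypothetical Kotlin reconstruction of that layering (class names taken from the description above, everything else assumed). Four types and three layers to move two fields from the network into the domain, each layer getting its own mocks and test suite:

    // Network DTO and domain model: identical fields, separate types.
    data class ItemNetworkResponse(val id: String, val title: String)
    data class Item(val id: String, val title: String)

    interface ItemApi {
        fun fetchItems(): List<ItemNetworkResponse>
    }

    // An entire class whose only job is to copy two fields.
    class ItemNetworkResponseToItemMapper {
        fun map(response: ItemNetworkResponse) = Item(response.id, response.title)
    }

    class ItemRepository(
        private val api: ItemApi,
        private val mapper: ItemNetworkResponseToItemMapper,
    ) {
        fun getItems(): List<Item> = api.fetchItems().map(mapper::map)
    }

    class GetItemsUseCase(private val repository: ItemRepository) {
        operator fun invoke(): List<Item> = repository.getItems()
    }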
They had enterprise clients, were charging them nicely, paying developers nicely, and pocketing extra money thanks to "safety buffers" added by engineers, managers, and salespeople alike, basically doubling the length of any project for "safety".
The company kept to their "high dev standards" which meant spending way more time, and thus costing way more, than generic cookie-cutter agencies would cost for the same project.
This was great until every client wanted to save money.
The company shut down last year.
ThoughtWorks and companies like them do work, but they're heavily reliant upon heavy-duty sales. Delivery at high quality is necessary but not sufficient.
In 2025 I think the only thing that makes sense is having SDETs embedded in development teams.
Software development and quality assurance should be tightly integrated and should work together on ensuring a good product. Passing builds over a wall of documentation is a recipe for disaster, not good quality software.
lol, fire the business analysts and let tech writers do their job. Sounds like some kind of VC-backed black company (i.e. a sweatshop).
Personal Quality Coding practices have been around for as long as software has been a thing. Way back when, Watts Humphrey, Steve McConnell, and Steve Maguire wrote books on how to maximize personal Quality. Many of their techniques still hold true, today.
But as long as there are bad people managers and short-sighted execs, you'll have shit quality; regardless of who does the work.
It seems to be socially associated with the Handmade Hero and Jon Blow Jai crowd, which is not so much concerned that their software might be buggy as that it might be lame. They're more concerned about user experience and efficiency than they are about correctness.
This is not at _all_ my interpretation of Casey and JBlow's views. How did you arrive at this conclusion?
> They're more concerned about user experience and efficiency than they are about correctness.
They're definitely very concerned about efficiency, but user experience? Are you referring to DevX? They definitely don't prize any kind of UX above correctness.
And stability is important, but not critical. The main way they want to achieve it is by making errors very obvious so they can be caught easily in manual testing. So C++-style UB is not great, since you may not always catch it, but crashing on reading a null pointer is great, since you'll easily see it during testing. Also, performance concerns trump correctness: paying a performance cost to get some safety (e.g. enforcing array bounds checks) is lazy design; why would you write out-of-bounds accesses in the first place?
IMHO this group's canonical lament was expressed by Mike Acton in his "Data-Oriented Design and C++" talk, where he asks: "...Then why does it take Word 2 seconds to start up?!"[0]. See also Muratori's bug reports which seem similar[1].
I think it is important to note, as the parent comment alludes, that these performance problems are real problems, but they are usually not correctness problems (for the counterpoint, see certain real time systems). To listen to Blow, who is actually developing a new programming language, it seems his issue with C++ is mostly about how it slows down his development speed, that is -- C++ compilers aren't fast enough, not the "correctness" of his software [2].
Blow has framed these same performance problems as problems of software "quality", but this term seems to share the same misunderstanding as "correctness", and therefore looks to me like another equivocation.
Software quality, to me, is dependent on the domain. Blow et al. never discuss this fact. Their argument is more like: what if all programmers were like John Carmack and Michael Abrash? Instead of recognizing that software is an economic activity, and that certain marginal performance gains are often left on the table because most programmers can't be John Carmack and Michael Abrash all the time.
[0]: https://www.youtube.com/watch?v=rX0ItVEVjHc
[1]: https://github.com/microsoft/terminal/issues/10362
[2]: https://www.youtube.com/watch?v=ZkdpLSXUXHY
The argument made there is that "software quality" in the Uncle Bob sense, or in your domain-dependent version, is not necessarily wrong but is at the very least subjective, and should not be used to guide software development.
Instead, we can state that the software we build today does the same job it did decades ago while requiring much vaster resources, which is objectively problematic. This is a factual statement about the current state of software engineering.
The theory that follows from this is that there is a decadence in how we approach software engineering, a laziness or carelessness. This is absolutely judgemental, but it's also clearly defended, based not on gut feel but on these observations about team sizes and hardware usage versus actual product features.
Their background in video games makes them obvious advocates for the opposite, as the gaming industry has always taken performance very seriously; it is core to the user experience and marketability of games.
In short, it is not about "oh it takes 2 seconds to startup word ergo most programmers suck and should pray to stand in the shadow of john carmack", it is about a perceived explosion in complexity both in terms of number of developers & in terms of allocated hardware, without an accompanying explosion in actual end user software complexity.
The more I think about this, the more I have come to agree with this sentiment. Even though the bravado around the arguments can sometimes feel judgemental, at its core we all understand that nobody needs 600MB of npm packages to build a webapp.
At least for Casey, his case is less that everyone should be Carmack or Abrash and more that programmers, through their poor design choices, often prematurely pessimise their code when they don't need to.
But between the sparse website, invite-only and anonymous organizers, it just feels like it's emphasizing the reactionary vibes around the Handmade/casey/jblow sphere. Like they don't want a bunch of blue-haired antifa web developers to show up and ruin everything.
Glad to see they got Sweden's own Eskil Steenberg though. Tuning in for that at least.
There's a reason web developers, and the ecosystem/community around them, are the butt of many jokes. I don't think it's at all surprising that the injection of identity politics into the software industry has had a negative effect on quality.
If it had any effect, it would be negligible compared to offshoring and weak incentives.
That's a pretty broad claim. This conference could be in response to a perceived negative effect on quality, but claiming that as a fact seems hard to back up to me
It's a clever political tactic coz a 50 year old white male middle manager at Microsoft trying to become a board member on an open source foundation would face a lot more hostility than a 20-something girl who pushes all of the diversity buttons.
It mirrors the rather successful marketing strategies for a string of movies including the Ghostbusters reboot and Barbie, among others. I.e. "There's a certain kind of person who doesn't like our latest corporate offering...". Who wants to be that person?
https://handmade.network/blog/p/8989-separating_from_handmad...
https://handmadecities.com/news/splitting-from-handmade-netw...
This reads like "Oh some people are meeting, so this must actually be about ME".
You write this like this is a bad thing.
I just came to a conference to learn some cool new tech, but instead got lectured about my transphobia, told that my database is systemic discrimination, and that my HDD being named "slave" means I burn crosses in my free time, even though I have zero family relations to anything American.
I mean this screams fun right from the get go.
Anyway, I’ll watch the twitch stream from across the pond.
I would expect this conf to expand on those types of concepts and strategies.
Why would they need to do that? Is that even a goal or something that this conference is addressing at all?
I would guess the same way humans do.
Put brain in creative mode, bang out something that works
Put brain in rules compliance mode and tidy everything up.
Then send for code review.
My question is how far does it go - are the gains going to peter out, or does it keep going or even accelerate? Seems like one of the latter two thus far.
I feel like this comes about because it's the optimal strategy for doing robust one-shot "point fixes", but it comes at the cost of long-term codebase health.
I have noticed this bias towards lots of duplication eventually creates a kind of "AI code soup" that you can only really "fix" or keep working on with AI from that point on.
With the right guidance and hints you can get it to refactor and generalise - and it does it well - but the default style definitely trends to "slop" in my experience so far.
All I found is a Twitch tagline that reads "Software is getting worse. We're here to make it better."
I sometimes wonder if there could be an optimal number of microservices. As far as I know, no one has connected issue data to the number of microservices before. Maybe there's an optimal number, like 8, that leads to a lower number of bugs and faster resolution times.
I am going to keep saying this, if your main tagline/ethos is broken by your website you have failed.
* On mobile the topics are hidden unless you scroll over them. You also can't read several of the topics without scrolling sideways as you read.
* The background is very distracting and disrupts readability.
* None of your speakers have links to their socials/what they are known for.
* > Who are the organizers? Sam, Sander and Charlie.
  * Ah yes, my favourite people... At least hyperlink their socials.
Really that’s the core of it
> In a charming small town
I don't see how anyone can be "for" quality and not talk about how quality can be assessed. Where are the talks about that?
However, I would be interested in establishing a union for technologists across the nation. Drive quality from the bottom up, form local chapters, collectively bargain.
Quality is a measurement. That’s how it works in hardware land, anyway. Product defects - and, crucially, their associated cost to the company - are quantified.
Quality is not some abstract, feel good concept like “developer experience”. It’s a real, hard number of how much money the company loses to product defects.
Almost every professional software developer I’ve ever met is completely and vehemently opposed to any part of their workflow being quantified. It’s dismissed as “micromanagement” and “bean counting”.
Bruh. You can’t talk about quality with any seriousness while simultaneously refusing metrics. Those two points are antithetical to one another.
1. It is partly because the typical metrics used for software development in big corporations (e.g., test coverage, cyclomatic complexity, etc.) are such snake oil. They are constantly misused and/or misinterpreted by management, which causes developers a lot of frustration (see the sketch after this list).
2. Some developers see their craft as a form of art, or at least an activity for "expressing themselves" in an almost literary way. You can laugh at this, but I think it is a very humane way of thinking. We want to feel a deeper meaning and purpose in what we do. Antirez of Redis fame has expressed something like this. [0]
3. Many of these programmers are working with games and graphics and they have a very distinct metric: FPS.
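On point 1, here's a minimal Kotlin sketch of why coverage in particular is such an easy metric to game (all names hypothetical): the test below executes every line, so a coverage tool reports 100%, yet nothing is verified.

    import org.junit.jupiter.api.Test

    // Hypothetical function: parses a discount percentage, clamped to 0..100.
    fun parseDiscount(input: String): Int = input.trim().toInt().coerceIn(0, 100)

    class DiscountTests {
        // Every line of parseDiscount runs, so coverage reads 100% --
        // but there are no assertions, and a broken clamp would still pass.
        @Test
        fun `touches the code without checking it`() {
            parseDiscount("42")
        }
    }

Managed to the number, that counts as success; as evidence of quality it says almost nothing.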
Quality is not a "real, hard number" because such a thing would depend entirely on how you collect the data, what you count as data, and how you interpret the data. All of this is brimming with controversy, as you might know if you had read more than zero books about qualitative research, epistemology, the philosophy, history, or practice of science. I say "might" because of course, the number of books one reads is no measure of wisdom. It is one indicator of an interest to learn, though.
It would be nice if you had learned, in your years on Earth, that you can't talk about quality with any seriousness while simultaneously refusing to accept that quality is about people, relationships, and feelings. It's about risks and interpretations of risk.
Now, here is the part where I agree with you: quality is assessed, not measured. But that assessment is based on evidence, and one kind of evidence is stuff that can be usefully measured.
While there is no such thing as a "qualitometer," we should not be automatically opposed to measuring things that may help us and not hurt us.