- Work at a hedge fund
- Every evening, the whole firm "cycles" to start the next trading day
- Step 7 of 18 fails
- I document Step 7 and then show it to a bunch of folks
- I end up having a meeting where I say: "Two things are true: 1. You all agree that Step 7 is incorrectly documented. 2. You all DISAGREE on what Step 7 should be doing"
I love this story as it highlights that JUST WRITING DOWN what's happening can be a giant leap forward in terms of getting people to agree on what the process actually IS. If you don't write it down, everyone may go on basing decisions on an incorrect understanding of the system.
A related story:
"As I was writing the documentation on our market data system, multiple people told me 'You don't need to do that, it's not that complicated'. Then they read the final document and said 'Oh, I guess it is pretty complicated' "
We’re going to write down what Step 7 currently is/does. No, now is not the time to start discussing what it ought to do. Please let us just get through sorting out what Step 7 currently is. Yes, some people do it differently. That’s why we hit a snag. Let’s just pick one of those wrong ways, document it, and do it all wrong together. We’ll fix it as a separate step. Now isn’t the time to fix it, as much as it feels like a convenient time to.
Seems like horses for courses to me: I can imagine my very happy healthy teams needing to operate in either mode, depending on the specific problem. I also can imagine us needing the person closest to the problem to tell us which direction applies.
(To your point though, I also can imagine that any type of pressures like these would really bring out the dysfunction in “toxic” teams.)
In my experience, the refining never happens.
The other way, you've blocked the process until every subcommittee of the committee assigned to fix the process has delivered their Final Report Draft 8 FINAL (1) (13) (1).docx. And that could be preventing an entire department from working at all.
Imagine that you have been slaving away for a low salary under an abusive boss who constantly promises but never delivers. If shit hits the fan and you are desperately needed, this is the perfect time to talk and solidify improvements. The game does not run on gratitude.
The same rule unfortunately also applies to relationships.
> subcommittee of the committee assigned to fix the process
That bit is the problem.
Writing stuff down is great since it provides a baseline to agree upon, and later additions to the team will take it as given and not start to discuss minutiae and bog down discussions into nothingness. And if some point really is worth discussing, it shouldn't be hard to find support to change it. I've heard some wild misunderstandings of how things were based on how they were being done, and now I never want to do anything of any significant size without there being a clear and obvious process to it.
In Charlie Beckwith's book about Delta Force [0] there is a line where he says (paraphrasing):
"The SAS never wanted to write down what their role was and what tasks they were trained for. Why? Because they didn't want to get pigeonholed into a role. ... They also never wrote down their SOPs b/c the argument was that 'if you can't keep it in your head, you shouldn't be in the Regiment'. At Delta, we were going to write down our mission AND write down our SOPs."
Step 7 in a process which already has defined end-goals though? The fact that there were disagreements in the first place baffled me. The fact that it was impossible to write anything down about it without invoking heaven's wrath made me quit.
1) Design smart(er) requirements, i.e. beat up the ask and rewrite the problem statement correctly. 1B: every requirement has a person's name attached who is traceable/responsible for its inclusion, not a department.
2) delete features you don’t need or which are hedges (if you aren’t adding back 10% of the time, then you aren’t deleting enough)
3) simplify or optimize. This step must come after 1 and 2 so you aren’t wasting effort optimizing the wrong thing
4) accelerate
5) automate
This way it is very clear where AI plugs in, and more importantly, WHEN it plugs in.
Also, plenty of times people try to run this process backwards, with poor outcomes.
In the world of Business IT, we get seduced by the shiny new toy. Right now, that toy is Artificial Intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning. Executives are frantically asking, "What is our AI strategy?"
But here is the hard truth:
There is no such thing as an AI strategy. There is only Business Process Optimization (BPO).
This is well-expressed, and almost certainly true for an overwhelming majority of companies.
Or on a bigger scale, look at FB/social media and society. There is definitely, without a doubt, a boundary. They interact and overlap.
The saying "you can't solve social problems with technology" usually means, at least in the places I have heard/used it: "If your workforce fights a process (be it because the process is stupid, the tools are slow, incentives don't align with policy, whatever), and especially a control step, no amount of mandatory tech enforcement of that process step will yield better results." At best you get garbled data because someone hit the keyboard to fill in mandatory fields; sometimes the process now runs OUTSIDE of the system by informal methods because 'work still needs to be done'; at worst, you get a mutiny.
You have to fix the people(s' problems) by actually talking to them and take the pain points away, you do not go to 'computer says no' territory first.
In my experience, no org problem is only social, and no tech problem is merely technical. Finding a sustainable solution in both fields is what distinguishes a staff engineer from a junior consultant.
I work on a SaaS platform as an engineer. We'll have some people from customer A asking for a bunch of fields to be made mandatory, only for people from that same company to be nagging six months later that the fields make our platform suck. Well, no: their process and requirements suck. We didn't come up with which fields are mandatory.
> no org problem is only social, and no tech problem is merely technical.
I was going for "the intersection is clearly nonempty" but maybe the better argument is "the intersection is pretty much everything."
Almost all of the tech debt we have was introduced by leadership guidance to ignore. And all additional debt to manage it or ameliorate it (since problems don't just go away) is also guidance from leadership to fast track fixes.
What happened to the days where software engineers were the experts who decided tech priority?
Outside of a very small number of firms that were notable precisely for being led in a way that enabled that, often by engineers who were themselves still hands-on, those days never existed. Even there it was "business leadership that happened to also be engineers, making decisions based on business priorities informed by their understanding of software engineering", not "software engineers in their walled-off citadels of pure engineering". And in successful firms it usually involved considerable willingness to accept tech debt, just as business leadership is often not shy about accepting financial debt.
Business leadership is not shy about accepting financial debt when business leadership has decided it should accept financial debt. Technical leadership should ostensibly not be shy about accepting technical debt because business leadership has decided it should accept technical debt. The distribution of agency and responsibility in the two situations is different.
My takeaway was that the project was doomed because it was named wrong. Should have been called Business Process Design.
They are now owned by private equity. I can only wonder what madness they would have wrought with AI.
They tried to implement a system whereby a customer has a single customer number. Between mergers, acquisitions, and shutdowns it was impossible to keep the numbers straight and track history. It impacted rates, contracts, sales commissions, division revenue, everything. In the end they gave everyone a new number while still using the old ones.
There are some people who insist on spamming out splog posts in that style; some of them think they are blogging, not splogging. Maybe they have good intentions, but that style screams "SPAM!", and unfortunately the people writing it don't understand how it comes across.
The processes suck because of decades of corner cutting and "fat" trimming while the executives congratulate themselves for only making the product a biiiit worse in exchange for a 0.0005% cost reduction, before then offsetting any gains by giving themselves all the money that would've gone to whatever is now dead.
Repeat this process for 30 years and you have companies like Microsoft that can barely ship anything that works anymore, and our 4 Big Websites frequently just fail to load pages for no explicable reason, Amazon goes down and takes 1/3 of the internet with it, and AI companies are now going to devour the carcass of our internet and shit it back to us in LLM waffle while charging us money for the privilege to eat it.
I do agree on execs congratulating themselves afterwards though. It was obscene last year. This year it was mildly muted.
Not really, because solving those problems with headcount defeats the point. Part of the definition of those kinds of problems is that solutions involving headcount are invalid.
Not sure what you mean here. "Fighting" as in "seeking to prevent", or "putting up with", or what exactly? Is this supposed to be bad because it's exploitative, or because it's a poor use of the smart person's time, or what exactly?
There are many sorts of struggle. There is the struggle of managing essential complexity, and also the struggle, especially in the pre-product phase, of getting consensus over what is "essential" [1]. When it comes to accidental complexity, you can just struggle following the process, or struggle to struggle less in the future via some combination of technical and social innovations, which can themselves backfire into increased complexity.
Google can afford to use management techniques that would be impossible elsewhere because of the scale and profitability of their operations. Many a young person goes there thinking they'll learn something transferable but the market monopolies are the one thing that they can't walk out with.
[1] Ashby's law https://www.edge.org/response-detail/27150 best exemplified by the Wright flyer which could fly without tumbling because it controlled roll, pitch and yaw.
In fact, if an AI strategy becomes business process optimization, I'd say that company's AI strategy is successful.
There are too many AI strategies today that aren't even business process optimization, are detached from the bottom line, and are just pure FOMO from the C-suite. Those probably won't end well.
On the other hand, I have seen process stifle above average people or so called “rockstars”. The thing is, the bigger your reliance on process, the more you need these people to swoop in and fill in the cracks, save the day when things go horribly wrong, and otherwise be the glue that keeps things running (or perhaps oil for the machine is more apt).
I know it’s not “fair”, and certainly not without risk, but the best way I have (personally) seen it work is where the above average people get special permissions such as global admin or exemption from the change management process (as examples) to remove some of the friction process brings. These people like to move fast and stay focused, and don’t like being bogged down by petty paperwork, or sitting on a bridge asking permission to do this or that. Even as a manager, I don’t blame them at all, and all things being equal, so long as they are not causing problems, I think the business would prefer them to operate as they do.
In light of those observations, I have been wrestling a lot with what it says about process itself. Still undecided.
In big corporate environments, ‘around average’ process would be a radical improvement. We are stuck in the reality where standing up a Service Now form is considered great progress.
I doubt there's much that can be done about the specific process to minimise the problems for the rockstars without also causing problems further down the ladder, short of just making exceptions like you said. It's probably just an emergent behaviour of processes like this, which are intended to raise average quality: you pull up the bottom floor, but the roof gets lower as a result. You can find similar problems in schooling.
This is a case of bad process. No process is perfect, but the whole point of process is so when things go wrong they don't go horribly wrong, and that you don't need rockstars to fill in the cracks. It should be making your rockstars faster because the stuff they need others to take care of gets done well. Unnecessary friction that slows people down is generally a sign of management mistaking paperwork for process.
Is it slow and annoying to jump through these hoops? Without a doubt! I’ve also seen people on the other side of the process who are very frustrated that they can’t just escalate when they know devs would want to hear about it. But it’s not acceptable for people to get woken up every week because the new support engineer filed a customer error as a global outage, and smart people tried and failed to put a stop to it through training. I don’t know what the alternative could be.
Like, we recently had an incident where someone just pasted "401 - URL" into the description and sent it off. We recently asked someone to open the incident through the correct channels. We got a service request "Fix" with the mail thread attached to it in a format we couldn't open. We get incidents "System is slow, infrastructure is problem" from random "DevOps" people.
Sadly, that is the crap you need to deal with. This is the crap that grinds away cooperative culture by pure abuse. Before a certain dysfunctional project was dumped on us as "Make it Saas", people were happy to support ad-hoc, ambitious and strange things.
We are now forced by this project to enforce procedure and if this kills great ideas and adventures, so be it. The crappy, out-of-process things cost too much time.
>Processes that rely on unstructured data are usually unstructured processes.
I appreciate someone succinctly summing up this idea.
- Your process interacts with an unstructured external world (physical reality, customer communication, etc.)
- Your process interacts with differently structured processes, and unstructured data is the best transfer protocol you can agree on (could be external, like data sources, or even internal between teams with different taxonomies)
- Your process must support a wild kind of variability that is not worth categorizing (e.g. every kind of special delivery instruction a customer might provide)
Believing you can always solve these with the right taxonomy and process diagram is like believing there is always another manager to complain to. Experienced process design instead pushes semi-structured variability to the edges, acknowledges those edges, and watches them like a hawk for danger.
We should ABSOLUTELY be applying those principles more to AI... if anything, AI should help us decouple systems and overreach less on system scope. We should get more comfortable building smaller, well-structured processes that float in an unstructured soup, because it has gotten much cheaper for us to let every process have an unstructured edge.
"Ask the vendor this set of 10 compliance questions. We can only buy if they check every box." is a structured process based on structured data.
Both kinds of processes have always existed, long before modern technology. Though only the second kind can be reliably automated.
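The structured version really is trivially automatable. A toy sketch (the questions and answers here are invented for illustration):

```python
# Toy sketch of the structured vendor-compliance check described above.
# The question list is made up; a real one would come from compliance.
COMPLIANCE_QUESTIONS = [
    "Encrypts data at rest",
    "Supports SSO",
    "Has a SOC 2 report",
]

def can_buy(vendor_answers: dict[str, bool]) -> bool:
    """We can only buy if the vendor checks every box."""
    return all(vendor_answers.get(q, False) for q in COMPLIANCE_QUESTIONS)

print(can_buy({"Encrypts data at rest": True,
               "Supports SSO": True,
               "Has a SOC 2 report": True}))   # True
print(can_buy({"Encrypts data at rest": True}))  # False: missing boxes
```

The unstructured version ("read this vendor's security whitepaper and decide if it feels risky") has no equivalent three-line automation, which is the whole point.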
Leaders think <buzzy-technique> is a good way to save money, but <buzzy-technique> is actually something that requires deeper investment to realize more returns; it is not a money saver.
I have seen a smattering of instances along the way where the act of defining requirements forced companies to define processes better. Usually, though, companies are unwilling to do this and instead will insist on adding flexibility to the automation tooling, to the point where the tool is of no help.
Which leads us to turning into a different team: we have to go figure out what the process engineering even is, which means becoming a bigger expert than they are at the process they want us to make tooling for.
I'm now in the process of trying to hand off chunks of the work I do to run my business to AI (both to save time but also just as my very broad, practical eval). It really is all about documentation. I buy small e-commerce brands, and they're simple enough that current SOTA models have more than enough intelligence to take a first pass at listings + financials to determine whether I should take a call with the seller. To make that work, though, I've got a prompt that's currently at six pages that is just every single thing I look when evaluating a business codified.
Using that has really convinced me that people are overrating the importance of intelligence in LLMs in terms of driving real economic value. Most work is like my evaluations - it requires intelligence, but there's a ceiling to how much you need. Someone with 150 IQ points wouldn't do any better at this task than someone with 100 IQ points.
Instead, I think what's going to drive actual change is the scaffolding that lets LLMs take on increasing numbers of tasks. My big issue right now is that I have to go to the listing page for a business that's for sale, screenshot the page, download the files, upload that all to ChatGPT and then give it the prompt. I'm still waiting for a web browsing agent that can handle all of that for me, so I can automate the full flow and just get an analysis of each listing sent to me without having to do anything.
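The flow I'm doing by hand could be sketched as a tiny pipeline like this. Every function here is a placeholder, not a real API; the placeholders mark exactly where the missing scaffolding (a browsing agent, a file fetcher, an LLM call) would have to plug in:

```python
# Hypothetical pipeline for the manual listing-evaluation flow above.
# None of these helpers exist yet for me; each one is a scaffolding gap.
def evaluate_listing(url, fetch_page, download_files, run_prompt, prompt):
    page = fetch_page(url)             # today: manual screenshot
    files = download_files(url)        # today: manual download + upload
    return run_prompt(prompt, page, files)  # the six-page eval prompt

def scan_listings(urls, **tools):
    # The end state: every new listing gets analyzed with zero clicks.
    return {url: evaluate_listing(url, **tools) for url in urls}
```

Once something can reliably fill in `fetch_page` and `download_files`, the intelligence in `run_prompt` is already more than sufficient.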
I have learned to be careful of "too much process", but I find that the need for structure never disappears.
AI deals well with structure. You can adjust your structure to accept less-structured data, but you still need the structure, for after that.
Just maybe not too much structure[0].
The useful framing is not “where can we bolt on AI” but “what does the system look like if AI is a first-class component.” That requires mapping the workflow, identifying the decision points, and separating deterministic steps from judgment calls.
Most teams try to apply AI inside existing org boundaries.
That assumes the current structure is optimal. The better approach is to model the business as a set of subsystems, pick the one with the highest operational cost or latency, and simulate what happens if that subsystem becomes an order of magnitude more efficient. The rest of the architecture tends to reconfigure from that starting point.
For example, in insurance (just an illustration, not a claim about any specific firm), underwriting, sales, and support dominate cost. If underwriting throughput improves by an order of magnitude, the downstream constraints shift: pricing cycles compress, risk models refresh faster, and the human-in-the-loop boundary moves. That’s the level where AI changes the system shape and acts beyond the local workflow.
This lens seems more productive than incremental insertion into existing silos.
One example among many: in our SDLC process, we now have test cases and documentation which never existed before (coming from a startup).
Here’s your AI strategy: every few months re-evaluate agent fitness and start switching over. Remember backstops and canaries.
Details:
Businesses usually assign responsibilities to somewhat flaky employees, with understanding there will be a percentage of errors. This works ok so long as errors don’t fluctuate wildly and don’t amplify through the system. Most business processes are a mess and that works ok.
Once agents become less flaky and there are enough backstops to contain occasional damage business will start switching.
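The backstop/canary idea above can be sketched in a few lines: route a small share of tasks to the agent, and shut it off if its observed error rate drifts past the human baseline. All the names and thresholds here are hypothetical:

```python
import random

# Hypothetical canary gate for gradually switching work to an agent.
HUMAN_BASELINE_ERROR = 0.05   # assumed historical human error rate
TOLERANCE = 1.5               # agent may be at most 1.5x worse
MIN_SAMPLE = 20               # need this many agent tasks to judge

class CanaryRouter:
    def __init__(self, canary_share=0.1):
        self.canary_share = canary_share
        self.agent_tasks = 0
        self.agent_errors = 0

    def use_agent(self) -> bool:
        """Send only a canary share of tasks to the agent; stop
        entirely if its error rate exceeds the backstop threshold."""
        if self.agent_tasks >= MIN_SAMPLE:
            rate = self.agent_errors / self.agent_tasks
            if rate > HUMAN_BASELINE_ERROR * TOLERANCE:
                return False  # backstop tripped: revert to humans
        return random.random() < self.canary_share

    def record(self, error: bool):
        self.agent_tasks += 1
        self.agent_errors += int(error)
```

The point is that the gate contains the damage: a flaky agent only ever sees the canary share, and a degrading one gets cut off automatically rather than amplifying errors through the system.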
But I don't blame them. Process optimization is hard. If a new tool promises more speed, without changing the process, they are ready to pour money at that.
I recently did a pilot project where we reduced the time for a high friction IT Request process from 4-day fulfillment to about 6 business hours. By "handling text and unstructured data", the process was able to determine user intent, identify key areas of ambiguity that would delay the request, and eliminate the ambiguity based on data we have (90%) or by asking a yes/no question to someone.
All using GCP tools integrating with a service platform, our ERP and other data sources. Total time ~3 weeks, although we cheated because we understood both the problem and process.
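The control flow amounts to something like the sketch below. Function names and data sources are invented; the real pilot used GCP tooling wired into the service platform and ERP:

```python
# Rough sketch of the request-triage flow: classify intent, find the
# fields that would stall the request, resolve each from existing data
# where possible, otherwise fall back to a single yes/no question.
def triage(request_text, known_data, classify, ask_yes_no):
    intent = classify(request_text)            # e.g. an LLM intent call
    escalated = []
    for field in intent["ambiguous_fields"]:
        if field in known_data:                # the ~90% case: data we hold
            intent[field] = known_data[field]
        else:                                  # the ~10% case: ask a human
            intent[field] = ask_yes_no(field)
            escalated.append(field)
    return intent, escalated
```

Most of the 4-day latency was those ambiguous fields bouncing back and forth by email; resolving them up front is the entire win.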
For many processes that have just suddenly changed, somewhat subjective evaluations can be made reliably by an AI. At least as reliably as was being done before by relatively junior or outsourced staff.
Replacing low-level employees relying on a decision matrix playbook-type document with AI has a LOT of applications.
> The intelligence (knowing what a "risk" actually means) still requires human governance.
Less and less. Why do you trust a human who’s considered 5000 assessments to better understand “risks” and process the next 50 better than the LLM who has internalized untold millions of assessments?
> There is only Business Process Optimization (BPO).
Exactly, that's the fundamental truth. The shiny tool of the day doesn't change it at all.
What's the prompt for that one? ;)
What does it bring?
AI won't take a shoddy process (say, your process for reviewing and accepting forms from patients) and magically make it better if you don't have an idea of what "better" actually entails.
"Improving a system requires knowing what you would do if you could do exactly what you wanted to. Because if you don't know what you would do if you could do exactly what you wanted to, how on earth are you going to know what you can do under constraints?"
- Russ Ackoff
Did you read the example? Human bias in the cancer detection phase is gone. AI eliminated it.
My name isn't Russ. Russ Ackoff was a business process optimization leader from the last century -- a contemporary of Deming and the Toyota school etc.
Do you understand the treatment process, here? I don’t ask that to be shitty, but I feel like you’re hand-waving away the entirety of the process because image detection is interesting.
It smells like a “disrupt healthcare” statement, of which there are many and of which none have any basis or value.