It was this fully automated airport, where check-in is self-service and you only interact with computers.
Eventually, when I inserted my boarding pass, I got a printed slip back saying they had to change my seat from an aisle seat to a middle seat.
I then tried to find someone to talk to the entire way, but computers can only interact in the ways their UI was designed for, and no programmer had accounted for or cared about my scenario.
The ground attendant couldn't have done anything, of course, because it wasn't within the scope of her job, and this was a part of Germany where "nice" is not one of the stereotypes.
Eventually I got a survey a week later, but about a different leg of the flight -- so could I really complain there? That leg was fine. I had a paranoid wonder whether that was intentional.
I arrived at the train station at night after a six-hour train journey. The German Railways app showed my final leg departing in 45 minutes. I waited in the cold, sitting in the station building because it was warmer there. Five minutes before departure I went out on the platform. The local display showed no train, even though the app still listed it. I had waited for nothing.
Syncing the app with the train station? Somebody else's problem.
In half an hour there should be a replacement bus for another cancelled train. There are no signs, in the app or at the station, indicating where that bus is to be found. You just need to know.
Putting up signs for replacement buses, for a service degradation that was planned long ago and has already been running for two months? Somebody else's problem.
An old man asks whether the bus will get him there in time to catch the train connection at its destination. The bus driver bitches at him for even asking -- not his job. Somebody else's problem.
Training the bus driver that, as an official replacement for a train, he needs to know that? Clearly also somebody else's problem -- anybody's but German Railways'.
It's pitch black outside and the windows are opaque with moisture, so I can't tell where we are, even though I was born in the area and lived here for 18 years. The bus driver makes no announcements about the stops, and there is no display. Knowing when to request a stop to get off? Somebody else's problem.
The bus is ice cold for an hour. When an old lady gets off and tells the bus driver that it was freezing the whole journey, he asks, "Well, what can you do?" Bewildered, she answers, "Turn on the heating?" He didn't expect that. He seemed to think that everything except driving was somebody else's problem.
This is just one night's bus journey. In the week that followed, I also had my SIM card deleted and a parcel lost. Documenting all the "somebody else's problem" I encountered on their customer support hotlines is, for now, somebody else's problem for me.
I have experienced this many times. Thankfully the bus drivers here in Hungary are pretty helpful (well, in my county at least), and worst case: you ask other passengers who also happen to be friendly. When it is pitch black outside and the windows are opaque due to moisture, it is not only your problem, but everyone else's, and people often find a way to cooperate and work together.
The legal form doesn't determine whether something is state owned or private.
If we are discussing tendencies of "privatized vs public", it's hard to ignore that factor. Public entities that historically worked well weren't just masses of 600 subcontractors.
So, sure, you can have a rail system self-fund as long as you let it build whatever the fuck it wants at stations and funnel all the profits back into the train line.
I agree that there are many big business bureaucracies, but they tend to be in areas closely linked to the state/government because of heavy regulation: banks and insurance, for example. They tend to survive despite their terrible efficiency because it is far too costly for a small actor to enter their business. Any of their real competitors is already big enough that the bureaucracy is well established...
Being a bus driver used to be a decent job for semi-retired construction workers, and such.
But then privatization hit, and over the last 20 years the niceness has drained away. Drivers are even trained to disregard customers, and penalized otherwise. It's insanely inhumane.
And the causal effect is very clear, there can be no doubt about it. It's not the bus driver's fault.
Honestly, it's German politics doing precisely this that's part of the problem: flippant diagnoses, too broadly applied, from afar.
The more focused a company is (the more reliant it is on its core service) the more accountable it can be. I'd argue many companies are if anything more accountable than the government. It doesn't have to be true, but I'd argue it often is.
Sadly, I don't expect any of this to get better with robots and LLMs and the like. We will be crying out to reach a human sooner rather than later, and my hope is that this far cry will eventually bring us to the dawn of a new era where you actually have people in the loop, just for humanity's sake.
Ease of use maybe, although my parents and grandparents would like to argue differently. They are not as quick to work their smartphone, and the ticket machines are being removed everywhere to be replaced by apps that are much cheaper to run. This works fine for the younger generations, but older and less tech-savvy people are getting left behind.
Quality though, no way. Every single time I have given ÖPNV a chance in the last 3-4 years, I either arrived late to varying degrees or didn't arrive at all without switching to some alternative mode of transport along the way. It doesn't even matter whether I tried local routes (Frankfurt and Darmstadt) or longer inter-city connections to Munich or Leipzig; it's all completely broken. People in my company routinely book connections several hours earlier than they need to be places to have a chance of arriving on time, and often are still late. Trains are overbooked, connections are late or cancelled altogether, seat reservations don't work more often than they do, WiFi on the trains never works... Many, many things have to change for me to reconsider my default of taking the car everywhere, and I don't think they will on any relevant timeline.
at a second read (and thought) you are absolutely right, and there is a major moral to take from this story: it may go against human nature to remove legacy UIs for as long as there are old users willing to stick with them. like banning bicycles that don't run on batteries -- can you imagine? -- because they'd be slower than other bikes!
we can definitely argue that a person in their right mind, whatever their age, should be able to choose to stay with a certain interface, if this does not incur massive costs. where I live you can still buy paper tickets from the driver, even though pay-as-you-go is the de facto choice for many, of all ages. today I saw a girl from an ethnic minority buy a paper ticket rather than be penalized. everyone knew what had happened, and I believe they were 100% human and much appreciated her getting a paper ticket at the last minute from the driver.
after the Butlerian Jihad.
You have something mixed up there.
Quality is mediocre. The trains are often delayed, which is a problem with the size of the network and cascading failures. Once they do get to A, they get from A to B just fine, the seats are okay, the luggage space is okay, etc. The DB Navigator app is useful for finding alternative routes but it won't tell you whether your ticket is valid for them. It will tell you if the delay is so long that you're allowed to use any route.
The train is late. The lounges suck or are tied to a complex system of ticket tiers that seemingly don't correlate to price. You bought a specific seat but the train was changed so now no assigned seat and lol on a refund. And fuck you if you're crossing borders.
Germans travel a good amount by car for good reason [1]. When I'm in Germany, I tend to drive between cities because the alternative is burning several hours in buffers and delays.
[1] https://ec.europa.eu/eurostat/statistics-explained/index.php...
It sounds like this was the main point of failure. I’m not sure it can be considered an error in the system. I’d consider the risk inherent in traveling in a country without knowing its language.
The absolute arrogance of particularly French and German speakers is staggering. All of us from smaller countries and language spheres speak at least 2-3 languages, often more at a basic level, but they scoff at anyone who visits that didn't happen to learn theirs. Contrast this with Spanish and Italian speakers, where my experience is that they are often not great at English, but very much willing to try. To add, I've also never met an American who wasn't willing to do their best to help out.
When somebody is asking for help in a language you don't understand, your obligation as a human being is to do your best, if nothing else to help them find someone who does understand one of the likely several languages you have in common. Not everyone who speaks English at you thinks less of you because you aren't good at it, and everybody is just doing their best.
It’s worse than France in this regard.
The level of arrogance and lack of empathy and service is beyond limits.
That part seems really hard for me to believe. The only time you should get charged at all is for prank calling. In fact, if you call and then decide you don't need EMS after all, they will come anyway, because they need to check on every call. And you will not get charged for that.
Not sure how random my selection process was, but that certainly wasn't my experience when I lived in Germany a few years ago. Maybe in big cities, yes. But even in the burbs, chances are you have to look for the metaphorical needle in the haystack to find someone speaking English. Your best bet might just be teenagers and young adults.
What THE FUCK is it with the expectation that everybody has to understand and speak in-glitch? Employ a local guide. Too expensive? Bad luck. Entitled little .....
What does "high" mean in this context? I experienced what I would call the inverse Danish maneuver: the Germans obviously understood English, because they often answered our English questions correctly -- in German.
In Denmark if a Dane understands what you said in Danish but you have a definite accent they will often answer your question in English.
Maybe Germanic cultures are geared towards the rude.
If I'm talking to an Italian and trying to explain something in English and they don't understand, then I try with a combination of my broken Italian and hand signals, not obdurately sticking to English -- that's being a jerk.
At the same time, yes Danes have a high English literacy, but switching to English when someone is talking to you in Danish is rude no matter how you slice it.
https://www.mylondon.news/news/zone-1-news/london-undergroun...
Make a good faith effort to get your problem addressed, and record the fact that you've done so to use in your hearing if it gets that far. Then just file the claim. Generally they fold immediately, and this way you incentivize actual customer service in the only language they understand.
what country is this "small claims court" in? And are you sure this country's small claims works the way your country does?
Hiding the customer service number. Making an FAQ that is missing the common but time-consuming questions. Chatbots instead of people.
I remember when Amazon sent me a package once: it said it was delivered, but it was nowhere to be found. There was no way to get help. They did have an FAQ at the time that said to check in the bushes.
What was annoying was the search auto-complete had many variations of "package not found says delivered"
Now, it is a little more filled out but still.
* not that I could tell if they were LLMs
I go through the usual hoops: press 1 for English, "we detected an account linked to the number you're calling from, is that what you're calling about?" ... Press 1 for support, press 1 for Internet, "no outages detected in your area. Most problems can be solved by rebooting your modem. Press 1 if you want to try rebooting." (Pause)... "Thank you for your call" click
First off, rebooting doesn't solve my problem. But I guess I have to try anyway?
So I call back, this time I do pick to reboot, and get "your modem will reboot in the next few minutes, and could take up to 10 minutes to come online. If things still aren't working, try our online support chat"
So, basically there doesn't seem to be any phone technical support (with a human), at all.
Also, rebooting is offensive to me as a programmer. Kernel updates and memory leaks are the only reasons you should ever need to reboot. How absolutely shitty is modem firmware that the ISP actually spent the time to build this reboot system out?? (Never mind that I personally don't feel I've ever had a modem/ISP problem actually solved by rebooting.)
Made me wonder if I should have switched.
I have had problems solved by rebooting the modem several times. One time the fix was "reboot the modem and access point in the proper order"; naively rebooting them both at the same time didn't help, and only phone support solved it.
> Also, rebooting is offensive to me as a programmer.
Hmm, I might be desensitized from too much programming in Erlang. It's a given that your program will encounter bugs or strange data and that parts WILL be restarted; better to account for that, and plan what each small part should do on restart from the very start of writing your program.
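For anyone unfamiliar with the Erlang "let it crash" idea, here is a minimal sketch of the supervisor/restart pattern -- in Python for brevity rather than actual Erlang/OTP, and with made-up names (`supervise`, `flaky`) purely for illustration:

```python
def supervise(task, max_restarts=5):
    """Run task; on a crash, restart it, up to max_restarts times ("let it crash")."""
    for _ in range(max_restarts + 1):
        try:
            return task()
        except Exception:
            # A crash is expected, not exceptional: fall through and
            # restart the part from a clean, known-good state.
            continue
    raise RuntimeError("restart limit exceeded")

# A task that fails on its first two calls, then succeeds --
# standing in for "bugs or strange data" hitting one small part.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient fault")
    return "ok"

print(supervise(flaky))  # prints "ok" after two restarts
```

The point is that the recovery behavior (what to do on restart, how many times to try) is designed up front, rather than bolted on after the first crash.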
> So, basically there doesn't seem to be any phone technical support (with a human), at all.
Because it's cheaper. Those who don't offer support can offer lower prices. When people shop for trinkets, they only have information about what is advertised; there is no good information about the quality of the device or of the support, and a high price doesn't always mean better support. So they just go for the lower price and hope not to suffer too much.
Why is rebooting offensive to you? State is hard; resetting your system to a known state can fix many issues.
I have a Linux computer running a public server that has not been restarted in three and a half years. This is what I expect.
Every time I have to reboot my work laptop because work pushed some updates, or reboot my Windows machine because it is running unreasonably slow, I am reminded that inconsiderate assholes have become lazy and are OK with polluting the whole system and mismanaging state and resiliency -- as if, when the equivalent of the microwave had an error, the only solution were rebooting my house. We can do better.
I restart my Linux desktop every few weeks, when the kernel updates.
For a reliable server, you want to exercise the restart ritual somewhat regularly, because when anything goes wrong (eg with the hardware), you might have to restart anyway, so you want to be sure that this works.
This surprises me - as a programmer you should realise that reboots can often help. Cache invalidation is one of the notoriously hard CS problems and an awful lot of systems will start fresh on reboot.
> (Never mind that I personally don't feel like I've ever had a modem/isp actually problem solved by rebooting)
My current ISP is better, but my previous ISP cycled IP addresses at 2am (and lost connectivity for about 30 seconds at the same time) on a Friday night. I would semi-frequently be up playing games at that hour, and it was about 50/50 as to whether devices on my network would survive the blip. Rebooting the router had a 100% success rate.
I currently (unfortunately) have a google wifi mesh system. It works great, except about once a month it reports that absolutely everything is fine, all tests pass from my mobile device, but my laptop has no internet connectivity. Rebooting fixes it just fine.
> How absolutely shitty is modem firmware that the ISP actually spent the time to build this reboot system out?
Firmware is still software, like it or lump it. Modem firmware has been shitty for a long time. A major ISP [0] in the UK had an issue with their firmware that caused massive latency spikes under load. Also, power loss happens sometimes. The modem/router has to be able to turn on in the first place, so a "reboot" is just going through that process again. It's attempting to return to a "last known good" state.
[0] https://community.virginmedia.com/t5/Forum-Archive/Hub-3-Com...
>So, basically there doesn't seem to be any phone technical support (with a human), at all.
I wish everything had support chat. IMO it's much less hassle than having to call. It's usually trivial to get through the first layer of automated support and get a human on the line.
Support chat is universally shitty. My mobile provider's website only works if you keep the browser window open, and times out if you go away for 2 minutes. The replies often take more than 2 minutes. I can only access "certain" information about my account if I'm on mobile data, except my carrier's website doesn't work if I am out of data (granted, I am on a super-budget mobile network, but still). In the last 18 months, the chat experience has been taken over by LLMs, which just act as full-text search over the doc pages that don't solve my problem.
I still choose web chat over any other method of interaction though.
Since at least in that scenario, there were humans in the Bureaucracy that could (but didn’t particularly) feel bad.
In this scenario, no humans need to be directly involved, which allows the scope and scale to be even more Dystopian.
Many parts of gov’t aren’t far off, and those are the really scary ones.
https://www.imdb.com/title/tt0088846/mediaviewer/rm557755905...
Meow!
So this might be the reason you had to change seats.
That book is now almost old enough to have a programming job.
This is a strong reason that corporations should not be considered people. People are long-lived entities with accountability and you can't just create or destroy them at will.
It's the profitable course.
Also, after they do something, there is typically a recourse path provided by that org for you to protest their decisions, and if that doesn't resolve favorably you can also sue them.
Which differs from the article because the corporation doesn't provide any protest path nor did it have to publish any memo/etc describing how they're going to downsize cleaning for cost-savings. But you can still sue them (but good luck showing damages over an unclean room)!
This. The problem with "voting with your wallet" is that you can't vote "no", you can only vote "yes" or abstain from voting altogether.
Ambrose Bierce already hit the nail on the head in 1911:
"Corporation, n. An ingenious device for obtaining individual profit without individual responsibility."
It has long baffled me that this isn't talked about more -- I guess everyone is just so used to it. As far as I'm concerned, the entire concept of "fining a company" should be abolished and replaced with the criminal prosecution of those who did the illegal thing.
This notion is currently being contested
I think a good example of the dichotomy here is Starlink. On one hand, it's an incredibly useful service that often has a positive impact. On the other hand, a private corporation is just polluting our low earth orbit with thousands of satellites.
It's not clear to me where exactly the right balance for something like this should be, but I do think that, as of today, we're too far on the laissez-faire side.
Seems like a terrible example to me. I'm no fan of Musk, but I don't see how that is "polluting".
They provide an excellent service. They're a minor hindrance for astronomy, true, but I think it would be hard to make a good case that a few people having a good view of the sky is more important than millions having good communications.
Then there's the fact that there's nothing really special about Starlink. It's merely one of the first users of cheap rocket launches. It could be somebody else, or 1000 different entities launching smaller numbers; in the end the effect on astronomy would be the same.
I didn't say there was, and this isn't about Musk. I'm just using Starlink as an example, my point is not about Starlink.
"I don't see how that is polluting"
Starlink satellites create light pollution and disrupt radio frequencies. Astronomers are already running into issues with research due to the light from Starlink satellites. There's also the issue of reentry. We now have a Starlink reentry almost every single day, which is at least damaging to the ozone layer, and very likely causing other issues.
But like I said, this is not about Starlink. It's just an example to illustrate accountability sinks having both positive and negative effects.
There's no accountability sink to speak of here. "Accountability sink" in the article's meaning means that accountability got obscured, something bad happened (eg, lies on TV, terrible customer service), yet nobody can be clearly blamed for giving the order.
Here, it's Musk's invention, and he's clearly to blame for it. In fact Musk has a propensity to take more credit than he deserves, so it's almost the opposite from a sink really.
This article is more about the phenomenon where decisions are removed by multiple degrees. The locus of decision making is either obscured or non-existent, creating plausible deniability. This is often done by rewarding activities that don't obviously create harm but nevertheless require causing harm to carry out.
It's analogous to the Fox example in the article, where somebody at the top says, "we want high viewership." They don't want their employees to lie to their audience, and they don't force them to lie to their audience.
Does the Fox leadership at some point become aware that "lying to the audience" is a result of their performance goals, just like the decision makers at Starlink become aware that light pollution is a result of their goals? They very likely do. Does that make them feel accountable for the negative side effects? Probably not, because they didn't tell anyone to lie and pollute the skies, somebody else did that.
Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.
https://www.penguinrandomhouse.com/books/241363/weapons-of-m...
"A computer can never be held accountable, therefore a computer must never make a Management Decision." IBM presentation, 1979
Hence coders are the new managers; managers just funnel the money around, a job which can be automated.
= Presentation, 21st Century
A computer is not alive. A computer system is a tool that can do harm. It can be disconnected or unplugged like any tool in a machine shop that begins to do harm or damage. But a tool is not responsible. Only people are responsible. Accountability is anchored in reality by personal cost.
= Notes
Management calculates the cost of not unplugging the computer that is doing harm. Management often calculates that it is possible to pay the monetary cost for the harm done.
People in management will abdicate personal responsibility. People try to avoid paying personal cost.
We often hold people accountable by forcing them to give back (e.g. community service, monetary fines, return of property), by sacrificing their reputation in one or more domains, by putting them in jail (they pay with their time), or in some societies, by putting them to death ("pay" with their lives).
Accountability is anchored in reality by personal cost.
IBM in 1979 was not doing anything differently than in 2024. They were just more relevant.
To really foul things up requires scalability
An algorithm has no concept of consequences (unless programmed to be aware of such), and the more plausibly whoever wrote it can deny knowledge of the resulting consequences, also the more whoever wrote it can avoid consequences/accountability themselves. After all, we can tell Soldiers or Clerks that ‘just following orders’ is no excuse. But computers don’t do anything but follow orders.
Most people/organizations/etc have strong incentives to be able to avoid negative consequences, regardless of their actions or the results of their actions.
Everyone around them has strong incentives to ensure negative consequences for actions with foreseeable negative outcomes are applied to them.
Sometimes, organizations and people will find a way for the consequences of their actions to be borne by other people that have no actual control or ability to change actions being performed (scapegoat). Accountability ideally should not refer to that situation, but sometimes is abused to mean that.
That tends to result in particularly nasty outcomes.
What I read is yes, the point is revenge. If I can offer you a different way of preventing harmful activity, apparently you're not interested. There has to be some unpleasant consequences inflicted, you insist on it.
I think you should reconsider.
Ideally, the stick never gets used. We aren’t dealing with ideals, however, we have to deal with reality.
On any sufficiently large scale, an inability/lack of will to use the stick, results in wide scale malfeasance. Because other constrains elsewhere result in wide scale push to break those rules/boundaries/structures for competitive reasons.
No carrot magnifies the need to use the stick, eh? And turns it into nothing but beatings. Which is not sustainable either.
It has nothing to do with revenge. But if it makes you feel more comfortable, go ahead and call it that.
It's ensuring cause and effect get coupled usefully. And it is necessary for proper conditioning and learning. One cannot learn properly if there is no 'failure' consequence, correct?
All you need to do to verify this is, literally, look around at the structures you see everywhere, and what happens when they are or are not enforced. (Aka accountability vs a lack of it).
Suppose I’m a bad actor that creates an unfair algorithm that overcharges the clients of my company. Eventually it’s discovered. The algorithm could be fixed, the servers decommissioned, whatever, but I’ve already won. If the people who requested the algorithm be made in that way, if the people who implemented it or ran it see no consequences, there’s absolutely nothing preventing me from doing the same thing another time, elsewhere.
Punishment for fraud seems sane, regardless of whether it’s enabled by code or me cooking some books by hand.
The evolutionary function certainly encourages it, correct?
Ignoring that means that not applying consequences makes one actually culpable in the bad behavior occurring.
Especially if nothing changed re: rules or enforcement, etc.
I don't see what the problem is. There's malice, there's negligence, and there's accident. We can figure out which it was, and act accordingly. Must we collapse these to a single situation with a single solution?
Sure! But OP mentioned a phenomenon where some companies can hide behind algorithms and reject responsibility. If machines cause damage, people can easily find whose fault it is, but sometimes the same approach does not work for software.
The original point seemed to me to be "we can't use computers because they're not accountable". I say, we can, because we can do fault analysis and fix what is wrong. I won't say "we can hold them accountable", to avoid the category error.
If your algorithm kills someone, is the accountability an improvement to the algorithm? A fine and no change to the algorithm? Imprisonment for related humans? Dissolution of some legal entity?
Algorithms are used by people. An algorithm only enables "harmful and unaccountable systems" if people, as the agents imposing accountability, choose not to hold the people acting by way of the algorithm accountable, on the basis of the use of the algorithm -- but that really has nothing to do with the algorithm. If you swapped in a specially designated ritual sceptre for the algorithm in that sentence (or, perhaps more familiarly, allowed "status as a police officer" to confer both formal immunity from most civil liability and practical immunity from criminal prosecution for most harms done in that role), it would function exactly the same way: what enables harmful and unaccountable systems is humans choosing not to hold other humans accountable for harms, on whatever basis.
"The Unaccountability Machine," based on Mandy's summary in the OP, argues that organizations can become "accountability sinks," making it impossible for anyone to be held accountable for the problems those organizations cause. Put another way (from the perspective of their customers), they eliminate any recourse for problems which the organization ought, in theory, to be able to address, but can't because of its form and function.
"Weapons of Math Destruction" argues that the scale of algorithmic systems often means that when harms arise, those harms happen to a lot of people. Cathy argues this scale itself necessitates treating these algorithmic systems differently because of their disproportionate possibility for harm.
Together, you can get big harmful algorithmic systems, able to operate at scale which would be impossible without technology, which exist in organizations that act as accountability sinks. So you get mass harm with no recourse to address it.
This is what I meant by the two pieces being complementary to each other.
> Davies gives the example of the case of Dominion Systems vs Fox News, in which Fox News repeatedly spread false stories about the election. No one at Fox seems to have explicitly made a decision to lie about voting machines; rather, there was an implicit understanding that they had to do whatever it took to keep their audience numbers up.
Rupert Murdoch conceded under oath that "Fox endorsed at times this false notion of a stolen election."[1] He knew the claims were false and decided not to direct the network to speak about it otherwise.
Communications from within Fox, by hosts, show they knew what they were saying was false.[2]
These two examples clearly fit the definition of lying [3].
The "External Links" section of Wikipedia gives references to the actual court documents that go into detail of who said what and knew what when [4]. There are many more instances which demonstrate that, indeed, people made explicit decisions to lie.
[1] https://www.npr.org/2023/02/28/1159819849/fox-news-dominion-...
[2] https://www.nbcnews.com/politics/elections/dominion-releases...
[3] https://www.dictionary.com/browse/lie
[4] https://en.m.wikipedia.org/wiki/Dominion_Voting_Systems_v._F...
It happened without coordination and later on wasn't stopped by the people in management, either.
It was number-2 all the way up.
So, your point is entirely irrelevant.
That is wildly different from reporting on what is demonstrated at DEFCON's Voting Village. What are you trying to pull?
Your logic is flawed at the core. With that train of thought you can infer anything.
Why trust voting at all, then? It can be manipulated.
All voting systems can be manipulated, there's no need to make it so easy though.
[my recommendation: try to build a high trust society instead of patching epicycles onto the side of a low trust one]
A lot of corporations now seem to have a structure where the org chart contains the following pattern:
- a "management layer" (or several of them) which consists of product managers, software developers, ops people, etc. The main task of this group is to maintain and implement new features for the "software layer", i.e. the company's in-house IT infrastructure.
Working here feels very much like working in a tech company.
- a "software layer": This part is fully automated and consists of a massive software and hardware infrastructure that runs the day-to-day business of the company. The software layer has "interfaces" in the shape of specialized apps or devices that monitor and control the people in the "worker's layer".
- a "worker's layer": This group is fully human again. It consists of low-paid, frequently changing staff who perform most of the actual physical work that the business requires (and that can't be automated away yet) - think Uber drivers, delivery drivers, Amazon warehouse workers, etc.
They have no contact at all with the management layer and little contact, if any, with human higher-ups. They get almost all their instructions through the apps and other interfaces of the software layer. Companies frequently dispute that those people technically belong to the company at all.
Whether or not those people are classified as employees, the important point (from the management's POV) is that the software layer serves as a sort of "accountability firewall" between the other two layers.
Management only gives the high-level goal of how the software should perform, but the actual day-to-day interaction with the workers is exclusively done by the software itself.
The result is that any complaints from the worker's layer cannot go up past the software - and any exploitative behavior towards the workers can be chalked up as an unfortunate software error.
The only thing that changed is that instructions and procedures are now often executed by software and hardware, not by actual human beings. Hence the software engineering wing, in addition to the usual (sorry for the lack of a better word) "meat programmers", a.k.a. organisational execs.
Interestingly, the end result customers get has not changed, despite many people painting it that way. People still get the same cup of coffee or taxi ride, just quicker/cheaper/marginally better. But such incremental improvements were achievable in the business world before the IT era by the same exact means: internal product management and improvement of org procedures, applied to people and processes instead of pieces of software.
I still think there is some difference in kind, not just degree: A human operational exec at least has to engage with the workers personally, witness the conditions they are working in, is exposed to complaints, etc. Even the most uncaring foreman is therefore forced into a position where he is subjected to accountability. He also has personal contact with the upper layer and can pass on that accountability to his higher-ups.
In contrast, a software layer is physically unable to hear complaints and to pass them back up the chain. Because it's not a human, it cannot take accountability itself - however, it can still give higher-ups plausible deniability about "not having known" about problems. (A knock-on effect is also that it will prevent workers from even attempting to communicate the problem, because no one wants to talk to a wall)
Therefore it creates an accountability sink where there was none in the old structure.
(None in theory at least, of course there were enough other ways to be shielded from accountability even before computers)
You’d be surprised how often decisions are made without ever seeing people at work, or communicating with them in any meaningful way. There are managers who do engage first-hand, but they are not the real decision makers; they just relay reports on the situation and context upstream and execute on the decisions of others. Relay of accurate first-hand information from workers to execs almost never happens.
As one of the neighbor threads accurately highlighted: this is by design, on both the customer side and the worker's side. Customers get vouchers, workers get retainers, and among both there is a calculated percentage of people facing what they see as an "accountability sink", which is in reality a machine intentionally designed that way.
And the process of getting the money is reasonably straightforward.
Not always. The airlines often lie. Fortunately, I had used a debit card for the purchase, which made it possible to recover my funds through a chargeback.
As a screen-reader-using person who cannot use pen and paper without assistance, I was once quite enamored by them, but I've changed my stance a bit.
The thing about pen and paper is that it accepts anything you put in, and it's up to a human to validate whether what you put in makes any sense. Computers aren't like that: if they tell you that the numbers in your application have to match up, you need to lie to the government to make them match up, even if you're a weird edge case where the numbers should, in fact, be slightly off and "inconsistent" with each other.
I called the local govt office responsible for this specific program, and they essentially told me to lie to the government in not so many words. Their system is centrally managed; they have no power to introduce updates to it. They wish they could fix it, but even they aren't empowered to do so.
There is a flag on my LinkedIn account that bars me from getting a "follow-me" link on my profile.
No one on their support team knows why. No one knows since when. No one knows when it will change.
We are already living in this world.
Judge, Jury and Executioner Firing Squad Limited Liability Organisation
Humans like to sleep at night. An emergent property of our rule of law is that it exists in a way that reduces the moral culpability of any individual. A policeman, a jury member, a judge, an inspector, an executioner, a jailer: they all exist in very neat boxes. These boxes allow them to sleep at night. Surely the judge has few qualms going by the recommended mandatory minimum after a jury verdict, and the jury is assured the judge will provide a fair sentence; the executioner, carrying double the potential moral hazard, is doubly certain that at least two other parties have done their due diligence.
It's not so much that these systems prevent a single actor from acting; it's more that they allow a series of hand-offs, so that by the time the jailer is slamming the doors shut, they are bereft of any investment in the morality of the outcome.
The firing squad: seven guns, all lined up, with just one loaded. The rest are blanks. Each man can sleep at night, regardless of whether the condemned man truly deserved death.
Large institutions, organizations, and objects at scale are fully inhumane.
I would rather have my jailer be my judge, and my executioner be each man or woman on the jury. Isolating each of these roles allows the individuals an almost powerless notion of "completing our task", as if all tasks completed would add up to a moral outcome.
Should juries be made to perform the whipping of an individual, or to institutionalize someone themselves, or the judge be forced to starve a prisoner in his cell, I find the outcomes would be different.
Letting juries perform executions and judges be responsible for the imprisonment of the guilty just creates a massive perverse incentive for sadistic individuals.
You wrongly assume that either judge or jury would be more empathetic if they were burdened with the weight of these things. Instead, you'll get the people you least want in charge of these things doing them.
Aside from the method of using blanks at executions, everything else about the system protects the convicted, not the people in the system itself.
Of course you can run a political party suggesting that this division of powers thing was a step in the wrong direction, or better yet, take a trip to any of the dozens of countries with superdictators.
If we eat meat, we should kill it ourselves.
I don’t think there’s anything inherently wrong with eating animals. But I have a particularly carnivorous friend who thinks hunting is for sociopaths, because he “loves animals”.
If I wouldn’t harvest it, I won’t eat it. And I definitely would be too timid to slaughter a freaking cow lol
It does feel different from market-bought meat though, at least for me.
If I wouldn’t kill a chicken because I would feel bad, then I shouldn’t be able to eat chicken without feeling bad. If I’m able to do that, it’s because a big food production system has falsely separated those concepts in my head. Kind of similar to the accountability sinks in the linked post.
Surely this line of reasoning requires the presence of omniscient judgement (i.e. the Abrahamic god) to make sense. Otherwise every gunman would (and should) assume practical responsibility.
I feel the same way about artificial intelligence: it's not new, it's all around us, digital computation merely crystallizes the concept. But shiny objects should not distract us from the much more general phenomenon.
The idea of "unaccountable" failures only makes sense if all of the following hold: (a) the problem is so systemic that an executive actually is accountable, (b) the executive is so far removed in the hierarchy from the line employees doing the work that nobody knows each other or sometimes even sits on the same campus, and (c) the levers available to the executive are insufficient for fixing the problem, e.g. the underlying root cause is a culture problem, but culture is determined by who you hire, fire, and promote, while hiring and firing are handled by "outside" HR who are unaccountable to the executive who is supposedly accountable. But really this is another way of saying that accountability is simply one level higher, i.e. it is the CEO who is accountable, since both the executive and HR are accountable to the CEO.
No, you have to have an astoundingly large organization (like government) to really have unaccountability sinks, where Congress pass laws with explicit intent for some desired outcome, but after passing through 14 committees and working groups the real-language policy has been distorted to produce the exact opposite effect, like a great big game of telephone, one defined by everyone trying to de-risk, because the only genuine shared culture across large organizations is de-risking, and it is simply not possible to actually put in place both policy and real-life changes to hiring, firing, and promotion practices in the public sector to start to take more risks, because at the end of the day, even the politicians in Congress are trying to de-risk, and civil servants burning taxpayer money on riskier schemes is not politically popular, though maybe it should be, considering the costs of de-risked culture.
The book's point is that while this _should_ be the case, all too often it's not. AFAIK, nobody has been charged with forging documents in the case of Wells Fargo cross selling. Not the counter clerks who directly responded to incentives and management pressure nor the executives who built that system.
This is exactly why being an executive of a large organization is so incredibly difficult to pull off well. Sure, you can let your assistant fill your calendar with a bunch of meetings you don't want to be in to spend 95% of the meeting listening, 4% being the arbiter who tells people what they already knew they needed to do but refused to do it until asked by someone in authority, and 1% saying you'll take it further up the ladder. You will also fail hard because you will be constantly blindsided by people either fucking up (at best) or gaming (at worst) the processes for which you are responsible. Small example litmus test: in organizations that use Jira, whether the executives are comfortable with JQL and building their own dashboards to tell them what they need to know, or whether they expect their direct reports to present their work. If it's the latter, how can an executive be surprised that their reports are always coming in with sunny faces and graphs going up and to the right?
That too many companies are not willing to hold executives accountable for processes that they are, in theory, supposed to be accountable for is an entirely different problem. The law proscribes, the officer arrests, and the judge presides, but all rests upon the jury to convict. If a company's "jury" is not willing to "convict", because the crime is one of negligence and not treason, then the company has larger problems and I'd like to short their stock, please.
So the only one with consistent power over all groups is an executive so high up the food chain (in some cases not even the CEO!) that they can plausibly claim ignorance.
If you have 1 candidate, it's an easy call; if you have 3 candidates, you can evaluate them in less than a week. If you have 200 candidates, you need to hire somebody to sift through the resumes and hold something like 5 rounds of interviews with everybody chiming in, and whoever pulls the trigger or recommends someone is now on the hook for their performance.
You can't evaluate all the information and make an informed decision; the optimal strategy is to roll a 100-sided die, but no one is going to be on the hook for that.
That's not how accountability works, in the traditional sense.
What you described is Person A (accountable for hiring) hiring person B (responsible for screening and evaluating candidates). Person A is still accountable for the results of Person B. If Person B hired a sh*t candidate, it still lands on Person A for not setting up an adequate hiring system.
Being accountable for something doesn't forbid you from delegating to other people. It is very common for 1 person to be accountable for multiple people's work.
it just so happened.
In the hiring example, perhaps person A stops being accountable for hiring someone successful in the role, and is instead accountable for successfully hiring a person B who is capable of hiring someone to fill the role.
Essentially this creates an accountability chain. If you want to describe a logical chain of accountability instead as an "accountability sink", then I'd go along with that.
It’s true that accountability chains can be difficult to keep track of and the longer they get, the blurrier they get.
The comments here are grossly oversimplifying this concept.
You can fire your hiring manager and pick another one if he fails too often, for example.
https://managementpatterns.blogspot.com/2013/01/pattern-shit...
I learned early on when I moved from development to management that a big part of my job was being accountable for everything my team did (short of outright sabotage). You don't hold junior devs accountable for anything, you do your best to monitor their work anytime they're working on something mission critical and to mentor them through the mistakes they make. Senior devs take on some or a lot of that monitoring and mentoring role, especially as the team size grows, but as their manager I am still accountable for any errors they make too (especially including letting errors from junior devs slip through).
Sometimes I think the most important part of my job is standing up before senior management and saying something like "My team made this series of decisions which resulted in the bad outcome we are here to discuss. I apologise and accept full responsibility. The team has learned from this, and we can assure you we will never repeat this mistake." And then deflecting and outright refusing to throw any of my team under the bus by naming them - to the point of being accused of insubordination occasionally.
(To be honest, I didn't internalise that quite early enough. There are probably a few apologies I should have made from back then...)
One way or another you gotta own the decisions you make and deal with it. Even if the decision is to let someone else make the decision.
The issue is that while absolving yourself of accountability does free you to scale in ways previously thought unimaginable, it doesn't mean you absolve yourself of responsibility. The cure is keeping accountability at the expense of scaling, which means operating at a much smaller scale than everything we have been doing.
Another way to think about it: if you said you would give me 1 million dollars but I had to fully own up to whatever 1000 random people do in the next 24 hours, I'd say that's a pretty raw deal. There's basically no chance that a million will cover the chaos that a few of those 1000 people could cause. What some people do is take the million and then figure out how to rid themselves of the responsibility.
Sure. And the article allows for that. You need to have "an account" that acknowledges that at the time you didn't, and couldn't, have enough information to completely de-risk the decision, but that you'd discussed and agreed that the 1/100 (or 1/5 or 1/10,000) risk of the bad outcome was a known and acceptable risk.
"where an account is something that you tell. How did something happen, what were the conditions that led to it happening, what made the decision seem like a good one at the time? Who were all of the people involved in the decision or event?"
> In The Unaccountability Machine, Dan Davies argues that organizations form “accountability sinks,” structures that absorb or obscure the consequences of a decision such that no one can be held directly accountable for it.
Why not just call it "no-consequence sinks"?
It's somewhat of an oxymoron to say "accountability" isn't working because there's no consequence. Without any consequence there is no accountability. So why call it accountability in the first place?
This article is describing something along the lines of "shared accountability" which, in project management, is a well known phenomenon: if multiple people are accountable for something, then no one is accountable.
If someone is accountable for something that they can't do fully themselves, they are still accountable for setting up systems (maybe even people to help) to scale their ability to remain accountable for the thing.
It’s all kinda mushy. Being accountable is hearing and knowing a story. I don’t see why that has to correlate with decision power.
The point of the article could be made much more clearly by talking about systems that leave decision makers not aware of the consequences of their decisions. All the anecdotes in the article fit that pattern.
I think people don’t use the language of decision-consequences because it doesn’t capture an emotional aspect they’d rather not say out loud. They want the decision maker to feel their pain, they want the decision maker to hurt.
Decision makers can be aware of how many unready rooms are caused by less cleaning staff, how many flights they’re cancelling. I’d actually bet they are. But that’s not enough, the harmed person wants to tell their story.
You're a neolithic farmer, and plant your barley, but that year there's a drought; you suffer the consequences, but who (or what) do you hold accountable?
In certain fields, there is a serious and distinct difference between Accountability, Responsibility, Consulting, and Informing.
Source: https://en.m.wikipedia.org/wiki/Responsibility_assignment_ma...
There’s a whole philosophy behind it. My spidey senses tingle when those words get misconstrued.
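For the curious, here is a minimal sketch (hypothetical task and role names, not from any real org) of the property a RACI matrix is meant to enforce: exactly one Accountable party per task, and any number of Responsible, Consulted, and Informed parties.

```python
# A toy RACI (Responsible, Accountable, Consulted, Informed) matrix.
# The defining rule: every task has exactly one Accountable party.
raci = {
    "hire_engineer": {
        "Accountable": "hiring_manager",        # answers for the outcome
        "Responsible": ["recruiter", "panel"],  # actually do the work
        "Consulted":   ["team_lead"],           # two-way input before decisions
        "Informed":    ["HR"],                  # one-way updates after decisions
    },
}

def accountable_for(task):
    # Unlike the other three roles, this lookup returns a single name,
    # which is exactly what makes "who is accountable?" answerable.
    return raci[task]["Accountable"]

print(accountable_for("hire_engineer"))  # hiring_manager
```

The asymmetry in the data structure (a single string versus lists) is the whole point: responsibility can be shared and delegated, accountability cannot.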
After all, is it the words or the ideas behind the words that matter more? We should always be trying to improve our language, but I'm worried if we prioritize words over meanings. I feel it's an important thing considering how language is constantly evolving.
> The fundamental law of accountability: the extent to which you are able to change a decision is precisely the extent to which you can be accountable for it, and vice versa.
No.
You can absolutely be accountable for something that you can’t change a decision about. Simple example: You’re a branding agency and you decide to rename X to Y. (No pun intended). The rebrand to Y fails. You’re accountable for the failure, but likely don’t have the ability to change anything by the time you know the results of your decision.
Edit: ok, fair, I agree. Bad example. A simpler example would be the person in the article continuing to point to the boss above them until there's no one left. The chain would break somewhere along the way, but the broken link is one of communication rather than accountability.
The information may not reach the person able to make a change. But that doesn't make them not accountable. If that person is unable to make a change because they're on vacation for a month without anyone filling in, that person is accountable both for the results AND for future results caused by not having someone monitor/reroute their accountability.
Amazon's concept of "two-way and one-way doors" is useful here. A two-way door decision lets you go back if the decision turns out to be bad, and can be made with significantly less scrutiny than a one-way door decision, which you cannot back out of after you've acted on it.
The branding firm certainly does not seem to have performed well, from the scenario you described. But accountability is not the same as performance or even culpability.
At its root, responsibility is about who responds, rather than who causes.
Shared accountability is spreading that risk around to a group (but I don't think it necessarily eliminates that accountability – you can fire an entire department if you need to).
The author's point, which I think is interesting, is that there are Bermuda triangles where accountability cannot occur, and that these can manifest naturally, outside of any traditional RACI.
‘An SEP is something we can't see, or don't see, or our brain doesn't let us see, because we think that it's somebody else's problem. That’s what SEP means. Somebody Else’s Problem. The brain just edits it out, it's like a blind spot.
The narration then explains:
The Somebody Else's Problem field... relies on people's natural predisposition not to see anything they don't want to, weren't expecting, or can't explain. If Effrafax had painted the mountain pink and erected a cheap and simple Somebody Else’s Problem field on it, then people would have walked past the mountain, round it, even over it, and simply never have noticed that the thing was there.’
>> a higher up at a hospitality company decides to reduce the size of its cleaning staff, because it improves the numbers on a balance sheet somewhere. Later, you are trying to check into a room, but it’s not ready and the clerk can’t tell you when it will be; they can offer a voucher, but what you need is a room.
This reads from the perspective of a person checking in. But it should read from the perspective of the person who made the decision.
The decision was made like this; On most days we have too many cleaners. If we reduce the cleaners we reduce expenses by x.
On some days some customers will need to wait to check in. Let's move check-in time from 1pm to 2pm (now in some cases to 4pm) to compensate; n% of customers arrive after 4pm anyway. We start cleaning early, so chances are we can accommodate early check-in where necessary.
Where there's no room available before 4pm, some % will complain. Most of those will be placated with a voucher [1], which costs us nothing.
Some small fraction will declare "they'll never use us again". Some will (for reasons) but we'll lose a few.
But the savings outweigh the lost business. Put some of the savings into marketing and sales will go up. Costs remain lower. More profit.
There is perfect accountability for this plan: the board watches to see if profits go up. They don't care about an individual guest with individual problems. The goal of the business is not to "make everyone happy". It's to "make enough people happy" to keep profits.
[1] The existence of the voucher proves this possibility was accounted for.
So accountability in this case is working, except for the customer who didn't get what they wanted. The customer feels frustrated, so from their perspective there's a failure. But there are other perspectives in play, and those are working as designed.
But we can also look for accountability in the political system. Maybe the hotel should be obliged by the law to pay real money instead of a voucher?
And even in the case where the company's decision is arguably just "bad," it still might not be a problem from the company's point of view.
Companies (including start-ups) create buggy products all the time and don't care, and aren't very responsive to requests for support, as long as money is coming in. I don't think they are using special accountability-flushing techniques. It takes real work, intention, experience, and power in a company to create feedback channels, and use them, and ensure that the customer has an experience of quality. It doesn't happen by magic or by default.
It handles signups, catering and housing services, grades, everything.
One example is that the grades are entered by professors and mistakes happen all the time, for everyone, due to the insane server load.
There's no one to complain to, because the excuse is always "it's the system, not us"
Accountability has always flowed downward. Under aristocracy you were never allowed to ask for support; only in modern civilisation has this improved. Middle management, the Clueless of the Gervais Principle, need their walls.
Don't be fooled by the decline of customer support in big orgs, like Google, Apple, or Amazon. They believe that support cannot scale, or if it's really needed, it needs to be outsourced to India or East Asia.
I disagree. They believe that support shouldn't scale with the size of the business, and should provide economies.
It can scale. I worked at two huge companies dominating the world markets, and we did fine with global support, but they say this is not their business model. Without competition they can do what they want, but customers prefer support and accountability. That's why most countries eventually came up with anti-trust legislation.
One of his examples is that you should make yourself unavailable for contact, when you suspect someone is trying to blackmail you.
That's exactly the same severing of a link as described in the article.
Maybe I'm missing something, but how often does blackmail happen that it rises to the level of needing strategic advice like "make yourself unavailable" ?
Who is Tom Schelling's audience?
Politicians setting policies for use of nuclear weapons during the cold war, IIRC. Among others, at least.
I read parts of that book many years ago, I recall the major theme is that voluntarily sacrificing control over the situation can be a powerful way to force the other party to do what you want. Like if you and me are playing "chicken", speeding towards each other and wanting the other to turn away first, you ripping out your steering wheel and throwing it out for me to see is a guaranteed way to force me to turn first and lose. This kind of stuff.
I guess it ties into the larger topic here in that you can avoid being held accountable if you remove the ability to make any choices yourself.
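That chicken dynamic can be sketched as a tiny payoff table (the numbers here are illustrative, not from Schelling's book):

```python
# Payoff matrix for "chicken": each player either Swerves ("S") or
# goes Straight ("D"). Entries are (row_player_payoff, col_player_payoff).
payoffs = {
    ("S", "S"): (0, 0),      # both swerve: mild mutual embarrassment
    ("S", "D"): (-1, 1),     # row swerves, column wins
    ("D", "S"): (1, -1),     # row wins, column swerves
    ("D", "D"): (-10, -10),  # head-on crash: worst outcome for both
}

def best_reply(opponent_move):
    # Pick the move that maximizes your own payoff, given the
    # opponent's move is fixed and visible.
    return max(["S", "D"], key=lambda m: payoffs[(m, opponent_move)][0])

# If I visibly throw out my steering wheel, my move is locked to "D".
# Your only rational reply to a committed "D" is to swerve:
print(best_reply("D"))  # S
```

The commitment move works precisely because it removes my own options: once you know I *cannot* swerve, your best reply flips from "go straight" to "swerve", which is the tie back to accountability: destroying your own ability to choose also destroys your exposure to the outcome.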
Parents, of course.
You might think I'm joking, but dealing with toddlers throwing tantrums is a prime example in some of his books.
The chain breaks when incentives aren’t aligned and there’s a cascade of crap rolling downhill that looks like bad decisions. When, in fact, as the article points out, the decision made didn’t take knock-on effects into consideration.
I’ve learned that seemingly poor or even terrible decisions almost always make sense in the context of when and where the decision was made.
Ideally, a consultant is hired for their specialist skills, rare experience, sage advice on niche topics, etc...
In practice, about half the work I do is to act as a lightning rod so that the guy with the power to sign the cheque for my time doesn't get fired if things go sideways. Instead, they can just blame me, shrug their shoulders, and hire another consultant.
I've had a customer where I got "fired" for an "error". My coworker replaced me. Then he was fired, and I replaced him. We alternated like this for years. Upper management just saw the "bad" consultants get fired for their incompetence, they never noticed that we were the same two guys over and over.
The modern company is a very very limited liability company:
- Cut corners so your jets crash and kill people (Boeing)?
- Cheat on emissions testing so your product kills people (VW)?
- Hush up drug trial results so you kill people (Pfizer)?
- Sloppy security leads to hundreds of millions of people's personal data being leaked (too many to mention)?
What happens to those in charge? Nothing. Perhaps they leave with a big golden handshake. If it's really bad, they get a don't do it again agreement with the Feds.
No accountability means no feedback/skin in the game. So nothing gets better.
But I don't think it is quite so black and white in the world. Because the legal system is also a way to give feedback to companies. And it can stop them in their tracks.
At the same time, though, I think it's a mistake to leave out the fact that, in many ways, modern society is just so fundamentally complex that we (as a society at large) deliberately forego demanding accountability because we believe the system is so complex that it's impossible to assign blame to a single person.
For example, given this is HN and many of us are software developers, how many times have we collectively supported "blameless cultures" when it comes to identifying and fixing software defects. We do this because we believe that software is so complex, and "to err is human", that it would be a disservice to assign blame to an individual - we say instead that the process should assume mistakes are inevitable, and then improve the process to find those mistakes earlier in the software lifecycle.
But while I believe a "blameless culture" is valuable, I think a lot of times you can identify who was at fault. I mean, somebody at CrowdStrike decided to push a data update, without verifying it first, that bluescreened a good portion of the world's Windows machines and caused billions in damages.
I just think that if you believe "accountability sinks" are always a bad thing, don't forget the flip side: would things always be better if we could always assign "root cause blame" to a specific individual?
Accountability, at least as presented here, is about feedback between those affected by a decision and those making it. In a "blameless culture", people are still held to account for their decisions and actions but are not blamed for the results.
I would argue that a blameless culture actually makes accountability sinks less likely to develop. In blameful cultures, avoiding accountability avoids blame, but that is not needed in a blameless culture.
Each of the two concepts we are talking about gets discussed under each label, so there is enough ambiguity in both words that this is true. However, choosing to use "blame" in the opposite sense from the one being used in this context adds nothing to the conversation.
- "Sin Eaters"
- Corporations, especially companies that are spun off and take on all the debt of the original company
- Voluntary stool pigeons (in criminal organizations, etc.)
- Certain religious martyrs
Government and the Civil Service are the biggest example. I guess it's time to re-watch "Yes Minister".
From Donella Meadows: “Intrinsic responsibility” means that the system is designed to send feedback about the consequences of decision-making directly and quickly and compellingly to the decision-makers.
This could change if technology could solve aggregation and analysis problem, making ready-made decision propositions to management. High risk of this mechanism just becoming another accountability sink, though.
Another solution is to build large organisations out of federated micro-orgs, where such intrinsic responsibility is feasible.
Transaction cost economics since the 1960's has been enumerating aspects like these, and showing they in fact determine the shape of business organizations and markets. Exported costs (implied in accountability sink) are mostly the rule rather than the exception.
What to do with them? A primary TCE finding is that if there were no transaction costs to adjudicating liability, it wouldn't matter from the social cost perspective where the liability lay (with the perpetrator or the victim) because they would adjudicate it down to their mitigation costs. As a result, the main policy goal for assigning liability (if you want to minimize the total cost to society) is actually to minimize and correct for adjudication transaction costs. (hence, no-fault divorce and car insurance)
The same dynamics are at play in the market and within organizations.
As a participant if your goal is your own profit, you can gain by making it harder to adjudicate and reducing the benefits thereof (hence binding arbitration, waivers, and lack of effective feedback). Doing so is becoming much simpler as virtual transaction interfaces and remote (even foreign) support afforded by software replace face-to-face interactions bound by social convention.
And who wouldn't want to? If you're head of customer support or developer relations, would you document your bugs or face the wrath of customers for things which can't change fast enough? You'd want to protect yourself and your staff from all the negativity. Indeed, with fixed salaries, your only way of improving your lot is to make your job easier.
To me the solution is to identify when incorporating the feedback actually benefits the participants. There, too, the scalability of virtualized software interfaces can help, e.g., the phone tree that automates simple stuff most people need and vectors complex questions to real people who aren't so harried, or the departing-customer survey querying whether it was price, quality, or service that drove one away.
You have to make accountability profitable.
Enforcing copyright law through an honest projection of 35mm film footage is a philanthropic endeavour. Making sure that every member of the production team, even the gaffers and stage hands, takes part in the exclusivity of re-capitalisation efforts, like the Fox complaint, is pure legalistic sleight of hand.
https://m.imdb.com/title/tt11383280/?lang=en&ref_=ext_shr_ln...
what you can have is a discussion about this or a blog post that is read by people and maybe some new subscribers, so no worries - all is not lost. :)
When pressed, the Redditor said that their friend was working on the mathematical theory for computers to control planes that have no power, such as under emergency-landing conditions. I.e., if the engine dies, the autopilot can help steer the plane onto a runway.
"No, your friend is working on precision glide bombs. The emergency landing thing is just marketing to make it palatable. She might not even know, but that's definitely what she'd doing."
That stuck with me: someone could be working on bomb technology and not even know it.
Talk about an accountability sink!
1. Macro level: Departments claim broad accountability.
2. Micro level: Pinpointing task ownership causes accountability to vanish.
3. Indefinite states: Ambiguous tasks linger without resolution.
4. Entanglement: Dependent tasks inherit this ambiguity.
This creates a system where responsibility exists in superposition, tasks remain unresolved, and accountability becomes increasingly delocalized.
As a bottom line, at least? You can't have the one without the other; they're just different aspects of the same shit.
By the way, this is what Bitcoin was set up to solve... notice it not being solved.
As organizations become more complex, it's difficult to understand the consequences of many high-level decisions. Unless great effort is made to gather feedback, it won't happen.
Not only that: the lack of immediate, human communication results in one-way feedback mechanisms, like suggestion boxes and surveys. Many companies clearly want to make this work, because we're constantly prompted and sometimes paid to fill out surveys. But the result is survey fatigue.
The person giving feedback needs to be reassured (by people, not machines) that their feedback matters, or they won't be bothered to do it. Often, it's socially awkward to give negative feedback, so people don't. And often, the employees directly on the scene have incentive to encourage customers to avoid negativity when they fill out surveys.
One way to show that feedback matters is to respond to complaints with some sort of assistance. In the example in the article, that's a voucher. Perhaps somewhere in the organization, that voucher counts as a cost, but it's pretty unsatisfying.
In some organizations, managers are encouraged to work at the support desk occasionally as a more immediate way to understand what's going on. (I remember reading about how Craig Newmark would do this for his website.)