> P_{total}(Success) = 1 - P_{3rdParty}(Failure)^{RetryCount}
By treating P_{3rdParty}(Failure) as fixed, they're assuming a model in which each try is completely independent: all the failures are due to background noise. But that's totally wrong, as shown by the existence of big outages like the one they're describing, and it's not consistent with the way they describe outages in terms of the time they are down (rather than purely the fraction of requests that fail).
In reality, additional retries don't improve reliability as much as that formula says. Given that request 1 failed, request 2 (sent immediately afterward with the same body) probably will too. And there's another important effect: overload. During a major outage, retries often decrease reliability in aggregate—maybe retrying one request makes it more likely to go through, but retrying all the requests causes significant overload, often decreasing the total number of successes.
I think this correlation is a much bigger factor than "the reliability of that retry handler" that they go into instead. Not sure what they mean there anyway—if the retry handler is just a loop within the calling code, calling out its reliability separately from the rest of the calling code seems strange to me. Maybe they're talking about an external queue (SQS and the like) for deferred retries, but that brings in a whole different assumption that they're talking about something that can be processed asynchronously. I don't see that mentioned, and it seems inconsistent with the description of these requests as on the critical path for their customers. Or maybe they're talking about hitting a "circuit breaker" that prevents excessive retries—which is a good practice due to the correlation I mentioned above, but if so it seems strange to describe it so obliquely, and again strange to describe its reliability as an inherent/independent thing, rather than a property of the service being called.
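For what it's worth, the circuit-breaker part is easy to sketch: cap the retries and stop calling the dependency entirely once failures look correlated. A rough, hypothetical illustration (the thresholds, backoff values, and fetch-based helper are all made up, not anything from the article):

```typescript
// Hypothetical sketch: a capped retry loop gated by a very simple circuit breaker,
// so a correlated outage doesn't turn into a retry storm against the dependency.
class CircuitBreaker {
  private consecutiveFailures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  allowRequest(): boolean {
    // While "open", reject immediately instead of adding load to a struggling dependency;
    // after the cooldown, let a probe request through.
    return this.consecutiveFailures < this.threshold ||
      Date.now() - this.openedAt > this.cooldownMs;
  }

  recordSuccess() { this.consecutiveFailures = 0; }

  recordFailure() {
    this.consecutiveFailures++;
    if (this.consecutiveFailures >= this.threshold) this.openedAt = Date.now();
  }
}

async function callWithRetries(url: string, breaker: CircuitBreaker, maxAttempts = 3): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (!breaker.allowRequest()) throw new Error("circuit open: not retrying");
    try {
      const res = await fetch(url);
      if (res.ok) { breaker.recordSuccess(); return res; }
      breaker.recordFailure();
    } catch {
      breaker.recordFailure();
    }
    // Exponential backoff with jitter so retries from many clients don't synchronize.
    await new Promise(resolve => setTimeout(resolve, Math.random() * 100 * 2 ** attempt));
  }
  throw new Error("all attempts failed");
}
```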
Additionally, a big pet peeve of mine is talking about reliability without involving latency. In practice, there's only so long your client is willing to wait for the request to succeed. If, say, that's 1 second, and you're waiting 500 ms for each outbound request before timing out and retrying, you can't even quite fit in 2 full (sequential) tries. You can hedge (wait a bit, then send a second request in parallel) for many types of requests, but that also worsens the math on overload and correlated failures.
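Hedging under a deadline is also easy to sketch; something like the hypothetical helper below (the timeouts are made up), which fires a second attempt only if the first hasn't answered within the hedge delay, and abandons both at the overall deadline:

```typescript
// Hypothetical hedged-request helper: primary attempt immediately, a duplicate after
// hedgeAfterMs if nothing has come back yet, and everything aborted at deadlineMs.
async function hedgedFetch(url: string, hedgeAfterMs = 200, deadlineMs = 1000): Promise<Response> {
  const controller = new AbortController();
  const deadline = setTimeout(() => controller.abort(), deadlineMs);
  const attempt = () => fetch(url, { signal: controller.signal });

  let hedgeTimer: ReturnType<typeof setTimeout> | undefined;
  const hedge = new Promise<Response>((resolve, reject) => {
    hedgeTimer = setTimeout(() => attempt().then(resolve, reject), hedgeAfterMs);
  });

  try {
    // First attempt to succeed wins; if both fail, Promise.any rejects.
    return await Promise.any([attempt(), hedge]);
  } finally {
    clearTimeout(deadline);
    clearTimeout(hedgeTimer);
    controller.abort(); // cancel whichever request is still in flight
  }
}
```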
The rest of the article might be much clearer, but I have a fever and didn't make it through.
Yes, that jumped out at me as well. A slightly more sophisticated model could be to assume there are two possible causes of a failed 3rd party call: (a) a transient issue, where the failure can be masked by retrying, and (b) a serious outage, where retrying is likely to find that the 3rd party dependency is still unavailable.
Our probabilistic model of this 3rd party dependency could then look something like
P(first call failure) = 0.10
P(transient issue | first call failure) = 0.90
P(serious outage | first call failure) = 0.10
P(call failure | transient issue, prior call failure) = 0.10
P(call failure | serious outage, prior call failure) = 0.95
I.e. a failed call is 9x more likely to be caused by a transient issue than by a serious outage. If the cause was a transient issue, we assume independence between sequential attempts, as in the article; but if the failure was caused by a serious outage, there's only a 5% chance that each sequential retry attempt will succeed. In contrast with the math sketched in the article, where retrying a 3rd party call with a 10% failure rate 5 times could suffice for a 99.999% success rate, under the above model (where the serious-outage failure mode produces a string of correlated failures) we'd need to retry 135 times after a first failed call to achieve the same 99.999% success rate.
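If anyone wants to check that 135 figure, a few lines reproduce it from the probabilities above (the 0.95^n outage term is what dominates):

```typescript
// Under the model above: P(still failed after n retries)
//   = P(first fail) * [ P(transient | fail) * 0.10^n + P(outage | fail) * 0.95^n ]
function pTotalFailure(retries: number): number {
  return 0.10 * (0.90 * Math.pow(0.10, retries) + 0.10 * Math.pow(0.95, retries));
}

// Smallest retry count that gets overall failure below 1e-5 (i.e. 99.999% success).
let n = 0;
while (pTotalFailure(n) > 1e-5) n++;
console.log(n); // 135
```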
Your points about the overall latency a client is willing to wait and about retries causing additional load are good; in many systems "135 retry attempts" is impractical and would mean "our overall system has failed and is unavailable".
Anyhow, it's still an interesting article. The meat of the argument and logic about 3rd party deps needing to meet some minimum bar of availability to be included still makes sense, but if our failure model considers failure modes like lengthy outages that can cause correlated failure patterns, that raises the bar for how reliable any given 3rd party dep needs to be even further.
I do worry about all the automation being another failure point, along with the IaC stuff. That is all software too! How do you update that safely? It's turtles all the way down!
One of the questions I frequently get is "do you automatically roll back". And I have to hide in the corner and say "not really". Often, if you knew a rollback would work, you probably could also have known not to roll out in the first place. I've seen a lot of failures that only got worse when automation attempted to turn the thing off and on again.
Luckily from an automation roll-out standpoint, it's not that much harder to test in isolation. The harder parts to validate are things like "Does a Route 53 Failover Record really work in practice at the moment we actually need it to work?"
Usually the answer is yes, but then there's always the "but it too could be broken", and as you said, it's turtles all the way down.
The nice part is that, realistically, the automation for dealing with rollout and IaC is small and simple. We've split up our infrastructure to go with individual services, so each piece of infra is also straightforward.
In practice, our infra is less DRY and more repeated, which has the benefit of avoiding the complexity that often comes from attempting to reduce code duplication. The ancillary benefit is that simple stuff changes less frequently, and less frequent changes mean less opportunity for issues.
Not surprisingly, most incidents come from changes humans make. The second-largest source of incidents is assumptions humans make about how a system operates in edge conditions. If you know these two things to be 100% true, you spend more time designing simple systems and attempting to avoid making changes as much as possible, unless it is absolutely required.
For example, for the failover regions (from the article) you could make a Pulumi function that parameterizes only the n things that are different per failover env and guarantee / verify the scripts are nearly identical. Of course, many people use modules / Terragrunt for similar reasons, but it ends up being quite powerful.
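As a rough sketch of the shape of that with Pulumi's TypeScript SDK (the resources and names here are hypothetical, just to show only the differing bits being parameterized):

```typescript
import * as aws from "@pulumi/aws";

// Hypothetical: only the things that actually differ per failover environment are parameters.
interface FailoverEnvArgs {
  region: aws.Region;
  hostname: string;
}

function failoverEnv(name: string, args: FailoverEnvArgs) {
  const provider = new aws.Provider(`${name}-provider`, { region: args.region });

  // Everything below is defined once, so the environments can't drift apart.
  const table = new aws.dynamodb.Table(`${name}-records`, {
    billingMode: "PAY_PER_REQUEST",
    hashKey: "pk",
    attributes: [{ name: "pk", type: "S" }],
  }, { provider });

  return { table, hostname: args.hostname };
}

const primary = failoverEnv("primary", { region: "us-east-1", hostname: "api.example.com" });
const standby = failoverEnv("standby", { region: "us-east-2", hostname: "api-standby.example.com" });
```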
We don't use the CDK because it introduces complexity into the system.
However, to make CloudFormation usable, it is written in TypeScript and generates the templates on the fly. I know that sounds like the CDK, but given the size of our stacks, adding an additional technology doesn't make things simpler, and there is a lot of waste that can be removed by using a programming language rather than raw JSON/YAML.
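As an illustration of the general approach (this is not Authress's actual code, just a sketch of the idea): small TypeScript functions return CloudFormation resource objects, and the assembled template is serialized at the end.

```typescript
import { writeFileSync } from "node:fs";

// Sketch: build CloudFormation resources with plain functions instead of hand-written YAML.
// Handler names, the role ARN, and the bucket parameter are placeholders.
function lambdaFunction(name: string, handler: string, roleArn: string) {
  return {
    [name]: {
      Type: "AWS::Lambda::Function",
      Properties: {
        Handler: handler,
        Role: roleArn,
        Runtime: "nodejs20.x",
        Code: { S3Bucket: { Ref: "DeploymentBucket" }, S3Key: `${name}.zip` },
      },
    },
  };
}

const template = {
  AWSTemplateFormatVersion: "2010-09-09",
  Parameters: { DeploymentBucket: { Type: "String" } },
  Resources: {
    ...lambdaFunction("ApiHandler", "index.handler", "arn:aws:iam::123456789012:role/example"),
    ...lambdaFunction("TokenVerifier", "verify.handler", "arn:aws:iam::123456789012:role/example"),
  },
};

writeFileSync("template.json", JSON.stringify(template, null, 2));
```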
There are cases where we have some OpenTofu, but for infrastructure resources that are customer specific, we have deployments that are run in TypeScript using the AWS SDK for JavaScript.
It would be nice if we could make a single change and have it roll out everywhere. But the reality is that there are many more states in play than what is represented by a single state file, especially when it comes to interactions between our infra, our customers' configuration, the history of requests to change that configuration, and resources with mutable state.
One example of that is AWS certificates. They expire. We need them to expire. But expiring certs don't magically update state files or stacks. It's really bad to make assumptions about a customer's environment based on what we thought we knew the last time a change was rolled out.
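A hypothetical example of what "ask reality instead of the state file" can look like with the AWS SDK for JavaScript: check the certificate's status and expiry in ACM right now before deciding whether anything needs to change (the 30-day window is an arbitrary choice for illustration).

```typescript
import { ACMClient, DescribeCertificateCommand } from "@aws-sdk/client-acm";

// Returns true if the certificate needs attention, based on its *current* state in ACM,
// regardless of what a state file or stack thought the last time we deployed.
async function certificateNeedsRotation(certificateArn: string, region: string): Promise<boolean> {
  const acm = new ACMClient({ region });
  const { Certificate } = await acm.send(
    new DescribeCertificateCommand({ CertificateArn: certificateArn })
  );

  const thirtyDaysMs = 30 * 24 * 60 * 60 * 1000;
  const notAfter = Certificate?.NotAfter;
  const expiringSoon = notAfter !== undefined && notAfter.getTime() - Date.now() < thirtyDaysMs;

  return Certificate?.Status !== "ISSUED" || expiringSoon;
}
```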
Pulumi or CDK are for sure more powerful (and great tools) but when I need to reach for them I also worry that the infra might be getting too complex.
But if you need to do something in a particular way, the tools should never be an obstacle.
You still end up having IaC. You can still have a declarative infrastructure.
But, as JSR_FDED's sibling comment notes & as is spelled out in the article, Authress's business model of offering an auth service means that their outage may entirely brick their clients' customer-facing auth / machine-to-machine auth.
I've worked in megacorp environments where an outage of certain internal services responsible for auth or issuing JWTs would break tens or hundreds of internal services and break various customer-facing flows. In many business contexts a big messy customer facing outage for a day or so doesn't actually matter but in some contexts it really can. In terms of blast radius, unavailability of a key auth service depended on by hundreds of things is up there with, i dunno, breaking the network.
AWS has very good isolation between regions and, while it relies on us-east-1 for control plane updates to Route 53, health checks and failovers are data plane operations[3] and aren't affected by a us-east-1 outage.
Relying on a single provider always seems like a risk, but the increased complexity of designing systems for multi-cloud will usually result in an increased risk of failure, not a decrease.
1. us-east-1, us-west-1, us-west-2, eu-west-1, ap-southeast-1, ap-southeast-2, ap-northeast-1 and sa-east-1; health checks default to using all of them.
2. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dn...
3. https://aws.amazon.com/blogs/networking-and-content-delivery...
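For concreteness, this is roughly what a health check plus failover record pair looks like, sketched here with Pulumi's TypeScript SDK (the zone and hostnames are placeholders):

```typescript
import * as aws from "@pulumi/aws";

const zone = new aws.route53.Zone("example", { name: "example.com" });

const primaryCheck = new aws.route53.HealthCheck("primary-check", {
  type: "HTTPS",
  fqdn: "primary.example.com",
  port: 443,
  resourcePath: "/health",
  failureThreshold: 3,
  requestInterval: 10,
});

// The PRIMARY answer is served while its health check passes; otherwise Route 53
// serves the SECONDARY answer (data plane behavior, no control plane call needed).
new aws.route53.Record("primary", {
  zoneId: zone.zoneId,
  name: "api.example.com",
  type: "CNAME",
  ttl: 60,
  records: ["primary.example.com"],
  setIdentifier: "primary",
  failoverRoutingPolicies: [{ type: "PRIMARY" }],
  healthCheckId: primaryCheck.id,
});

new aws.route53.Record("secondary", {
  zoneId: zone.zoneId,
  name: "api.example.com",
  type: "CNAME",
  ttl: 60,
  records: ["standby.example.com"],
  setIdentifier: "secondary",
  failoverRoutingPolicies: [{ type: "SECONDARY" }],
});
```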
In that case, no matter what we are using there is going to be a critical issue. I think the best I could suggest at that point would be to have records in your zone that round robin different cloud providers, but that comes with its own challenges.
I believe there are some articles sitting around regarding how AWS plans for failure, and how the fallback mechanism actually reduces load on the system rather than making it worse. I think it would require in-depth investigation of the expected failover mode to have a good answer there.
For instance, just to make it more concrete, what sort of failure mode are you expecting to happen with the Route 53 health check? Depending on that there could be different recommendations.
As far as the OP's point though, I'm going to probably assume that the health checks need to stay within/from AWS because 3rd party health checks could taint/dilute the point of the in-house AWS HC service to begin with.
One problem we've run into, which is the "DNS is a single point of failure" problem, is that there isn't a clear best strategy for "failover to a different cloud at the DNS routing level."
I'm not the foremost expert when it comes to ASNs and BGP, but from my understanding that would require some multi-cloud collaboration to get multiple CDNs to still resolve, something that feels like it would require both multiple levels of physical infrastructure and significant cost to actually implement correctly, compared to the ROI for our customers.
There's a corollary here for me, which is: stay as simple as possible while still achieving the result. Maybe there is a multi-cloud strategy, but the strategies I've seen still rely on having the DNS zone in one provider that fails over to or round-robins specific infra in specific locations.
Third party health checks have less of a problem of "tainting" and more just cause further complications: the more complexity you add to resolving your real state, the harder it is to get it right.
For instance, one thing we keep going back and forth on is "After the incident is over, is there a way for us to stay failed-over and not automatically fail back".
And the answer for us so far is "not really". There are a lot of bad options, which all could have catastrophic impacts if we don't get it exactly correct, and haven't come with significant benefits, yet. But I like to think I have an open mind here.
Proper HA is owning your own IP space and anycast advertising it from multiple IXes/colos/clouds to multiple upstreams / backbone networks. BGP hold times are like a dead man's switch and will ensure traffic stops being routed in that direction within a few seconds in case of a total outage, plus your own health-automation should disable those advertisements when certain things happen. Of course, you need to deal with the engineering complexity of your traffic coming in to multiple POPs at once, and it won't be cheap at all (to start, you're looking at ~10kUSD capex for a /24 of IP space, plus whatever the upstreams charge you monthly), but it will be very resilient to pretty much any single point of failure, including AWS disappearing entirely.
[1] But, it's DNS; the expectation is that some resolvers, hopefully very few of them, will cache data as if your TTL value was measured in days. IMHO, if you want to move all your traffic in a defined timeframe, DNS is not sufficient.
Love the deadpan delivery.
Had no idea that Route 53 had this sort of functionality
I'll try to add comments and answer questions where I can.
- Warren
Edit: This is a fantastic write-up by the way!
> [Our service can only go down] five minutes and 15 seconds per year.
I don't have much experience in this area, so please correct me if I'm mistaken:
Don't these two quotes together imply that they have failed to deliver on their SLA for the subset of their customers that want their service in us-east-1? I understand the customers won't be mad at them in this case, since us-east-1 itself is down, but I feel like their title is incorrect. Some subset of their service is running on top of AWS. When AWS goes down, that subset of their service is down. When AWS was down, it seems like they were also down for some customers.
We don't actually commit to running infrastructure in one specific AWS region. Customers can't request that the infra runs exactly in us-east-1, but they can request that it runs in "Eastern United States". The problem is that with scenarios that might require VPC peering or low-latency connections, we can't just run the infrastructure in us-east-2 and commit to never having a problem. And by the same token, what happens if us-east-2 were to have an incident?
We have to assume that our customers need it in a relatively close region, and at the same time we need to plan for the contingency that the region can be down.
Then there are the customer's users to think of as well. In some cases, those users might be globally dispersed, even if the customer's infrastructure is in only one major location. So while it would be nice to claim "well, you were also down at that moment", in practice customers' users will notice, and realistically, we want to make sure we aren't impeding remediation on their side.
That is, even if a customer says "use us-east-1", and then us-east-1 is down, it can't look that way to the customer. This gets a lot more complicated when the services that we are providing may be impacted differently. Consider us-east-1 DynamoDB being down, but everything else still working. Partial failure modes are much harder to deal with.
Truer words were never spoken.
(For what it's worth, for some of my services, 200ms is certainly an impact; not as bad as a 2-second outage, but still noticeable and reportable.)
This is where grey failures can come into play. It's really hard, often impossible, to know what the impact of an incident is on a customer without them telling you, even if you know you are having an incident.
In order to know that we are "down", our edge of the HTTP request path would need to be able to track requests. For us that is CloudFront, but if there is an issue before that (at DNS, at the network level, etc.) we just can't know what the actual impact is.
As far as measuring how we are down: we can pretty accurately know the list of failures that are happening (when we can know) and what the results are.
That's because most components are behind CloudFront in any case. And if CloudFront isn't having a problem, we'll have telemetry that tells us what the HTTP request/response status codes and connection completions look like. Then it's a matter of measuring from our first detection to the actual remediation being deployed (assuming there is one).
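A sketch of the kind of telemetry query that makes this possible (the distribution ID is a placeholder; CloudFront publishes its metrics to CloudWatch in us-east-1):

```typescript
import { CloudWatchClient, GetMetricStatisticsCommand } from "@aws-sdk/client-cloudwatch";

// Pull the recent 5xx error rate that CloudFront reports for a distribution.
async function recent5xxErrorRate(distributionId: string) {
  const cloudwatch = new CloudWatchClient({ region: "us-east-1" });
  const { Datapoints } = await cloudwatch.send(new GetMetricStatisticsCommand({
    Namespace: "AWS/CloudFront",
    MetricName: "5xxErrorRate",
    Dimensions: [
      { Name: "DistributionId", Value: distributionId },
      { Name: "Region", Value: "Global" },
    ],
    StartTime: new Date(Date.now() - 15 * 60 * 1000), // last 15 minutes
    EndTime: new Date(),
    Period: 60,
    Statistics: ["Average"],
  }));
  return Datapoints ?? [];
}
```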
Another thing that helps here is that we have multiple other products that also use Authress, and we can run technology in other regions that reports this information for those accounts (obviously it can't be for all customers). That can help us identify impact with additional accuracy, but it is often unnecessary.