> 17 Feb 2026 11:32 PST A rollout is going to prevent issuance from occurring. We will provide an estimate on when issuance will stop.
> 17 Feb 2026 12:14 PST Issuance is beginning to stop. A fix to resolve the issue will roll out in about 8 hours
Mozilla root store policy: https://www.mozilla.org/en-US/about/governance/policies/secu...
Chrome root store policy: https://googlechrome.github.io/chromerootprogram/
Apple root store policy: https://www.apple.com/certificateauthority/ca_program.html
Baseline Requirements: https://github.com/cabforum/servercert/blob/main/docs/BR.md
There are countless examples of non-compliant certificates documented in the Bugzilla component I linked above. A recent example: a certificate which was backdated by more than 48 hours, in violation of section 7.1.2.7 of the Baseline Requirements: https://bugzilla.mozilla.org/show_bug.cgi?id=2016672
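For anyone curious how that kind of backdating gets spotted: one rough heuristic is to compare the certificate's notBefore against the timestamps in its embedded SCTs, which approximate when the CA actually signed. A minimal sketch using Python's cryptography package, assuming the cert embeds SCTs and that the 48-hour limit from BR 7.1.2.7 is what you're checking (the file name is illustrative):

    # Rough backdating check: notBefore vs. embedded SCT timestamps.
    import datetime
    from cryptography import x509

    def backdating_slack(pem_bytes: bytes) -> datetime.timedelta:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        scts = cert.extensions.get_extension_for_class(
            x509.PrecertificateSignedCertificateTimestamps
        ).value
        # The earliest SCT timestamp is a decent proxy for the actual issuance time.
        issued_at = min(sct.timestamp for sct in scts)
        # Compare as naive UTC to avoid mixing aware and naive datetimes.
        return issued_at.replace(tzinfo=None) - cert.not_valid_before.replace(tzinfo=None)

    with open("leaf.pem", "rb") as f:   # illustrative file name
        slack = backdating_slack(f.read())
    print(f"notBefore precedes the earliest SCT by {slack}")
    if slack > datetime.timedelta(hours=48):
        print("Backdated beyond the 48 hours allowed by BR 7.1.2.7")

This only works for certs that carry embedded SCTs, and SCT timestamps can lag signing slightly, so treat it as a screening check rather than proof.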
There are countless Bugzilla reports of clearly unprofessional CAs trying to get away with doing whatever they want, getting caught, saying "it's no big deal", failing to learn the lesson, and eventually getting kicked out, much to the chagrin and bewilderment of their management, who are irate that some nerds on the Internet could ruin their business and fail to understand that following the scripture of the Internet nerds is the #1 requirement of the business they chose to run.
In my experience, in every Web PKI case where we found what seemed obviously to be either gross incompetence or outright criminality, there were also widespread technical failures at the same CA. Principals who aren't obeying the most important rules also invariably don't care about merely technical violations, which are easier to identify.
For example, CrossCert had numerous technical problems to go along with the fact that obviously nobody involved was obeying important rules. I remember at one point asking, so, this paperwork says you issue only for (South) Korea, but, these certs are explicitly not for Korea, so, what technical measure was in place to ensure you didn't issue them and why did it fail? And obviously the answer is they didn't give a shit, they'd probably never read that paperwork after submitting it, they were just assuming it doesn't matter...
"There is an ongoing incident that will force issuance to be halted."
Feels like they were alerted to some current problem severe enough that "turn it off now" was the right move. Breaking the baseline requirements somehow maybe?
still not an outage that would endanger anyone's ability to renew in time, but for small or extremely shitty CAs (and there are a lot of those) such an outage may last long enough to cause issues, in theory at least?
compared to, say, a roughly one-day downtime window over a 398-day cert lifetime: about 1/398 ≈ 0.25% chance that your rotation lands in the outage
let's pray you don't need to rotate when it's down...
Dan Geer famously said: "Dependency is the root cause of risk"...
PS: even stricter short-lived durations in some contexts:
Internal/Private: 1 – 7 days (corporate VPNs, internal apps)
Ephemeral: 5 minutes – 1 hour (Docker containers, CI/CD runners)
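For the ephemeral end of that range, nothing fancy is needed; a throwaway cert can be minted at container or runner start. A minimal sketch with Python's cryptography package, self-signed for simplicity (the hostname and the one-hour lifetime are just illustrative; a private CA would sign in the issuer's place):

    # Mint a throwaway certificate valid for one hour, e.g. for a CI runner.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    host = "runner-1234.internal"   # illustrative name
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, host)])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)            # self-signed here; an internal CA would sign instead
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now - datetime.timedelta(minutes=5))  # small clock-skew margin
        .not_valid_after(now + datetime.timedelta(hours=1))     # ephemeral: one hour
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))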
Effectively certificates are now a license to publish.
On mobile, user certs are pretty much ignored unless apps opt in. Even Firefox allows user certs (for now), but only via an obscure hidden config.
This means we can't use self-hosted services with official apps, even over a VPN, without getting a publicly trusted cert.
What do you mean by this? Any service that is designed to be self-hosted will have an app that accepts user-installed CAs. HomeAssistant, for example.
But... hopefully... people created overlapping windows of cert validity so there's always a valid cert available for their services and can tolerate the CA being out of action for 8(?) hours. Imagine if your TGS/Kerberos or AWS IAM IdP was down for 8 hours.
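That tolerance mostly falls out of renewing long before expiry instead of at the last minute. A minimal sketch of the kind of check a renewal cron might run, with certbot standing in for whatever ACME client actually does the renewal (host and threshold are illustrative):

    # Renew early so an 8-hour (or multi-day) CA outage never becomes an emergency.
    import socket, ssl, subprocess, time

    HOST = "example.org"      # illustrative
    RENEW_BELOW_DAYS = 30     # plenty of headroom before expiry

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = (expires - time.time()) / 86400
    print(f"{HOST}: {days_left:.1f} days of validity left")

    if days_left < RENEW_BELOW_DAYS:
        # Any ACME client works here; certbot is just an example.
        subprocess.run(["certbot", "renew"], check=True)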
But that didn’t stop YouTube and YouTube TV from going down hard. I imagine they’re provisioning ephemeral VMs or service instances and relying on them being able to get certs immediately, or something like that.
I don't want to buy tires, I want to learn about ______. The ads don't even make sense because they're irrelevant.
Give it another 10-20 years and your 2 hour podcasts will be 30 minutes of morning zoo DJ banter, 10 minutes of guests, and 1.5 hours of ads.
We’ll have reached peak 90s all over again. With any luck we’ll avoid recreating the conditions for another Nickelback and can stay in the weird zone where Trip Hop and pop punk could chart at the same time.
On the other hand, if ads etc. get too annoying: I have already run all my downloaded podcasts through Whisper to get transcripts with timestamps. Running some LLM over those to find ranges to delete would probably be quite easy. As a bonus I would be happy to also cut out all the filler repetitions that seem popular these days ("yes, X, I absolutely agree, [repeats everything X just said]"). Could probably cut 1-hour episodes to 20 minutes without losing any content.
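The cutting step is the easy part once you have ranges to drop. A rough sketch that assumes you have already derived (start, end) ad ranges in seconds from the Whisper transcript somehow (the file names and ranges are illustrative); ffmpeg does the splicing:

    # Drop a list of (start, end) ranges from an audio file and stitch the rest together.
    import subprocess

    SRC = "episode.mp3"                              # illustrative
    AD_RANGES = [(312.0, 395.5), (2710.0, 2860.0)]   # seconds, e.g. from an LLM pass over the transcript

    # Total duration via ffprobe.
    dur = float(subprocess.run(
        ["ffprobe", "-v", "quiet", "-show_entries", "format=duration",
         "-of", "csv=p=0", SRC],
        capture_output=True, text=True, check=True).stdout.strip())

    # Invert the ad ranges into the segments to keep.
    keep, pos = [], 0.0
    for start, end in sorted(AD_RANGES):
        if start > pos:
            keep.append((pos, start))
        pos = max(pos, end)
    if pos < dur:
        keep.append((pos, dur))

    # Extract each kept segment, then concatenate.
    parts = []
    for i, (start, end) in enumerate(keep):
        part = f"part{i}.mp3"
        subprocess.run(["ffmpeg", "-y", "-i", SRC, "-ss", str(start), "-to", str(end),
                        "-c", "copy", part], check=True)
        parts.append(part)

    with open("concat.txt", "w") as f:
        f.writelines(f"file '{p}'\n" for p in parts)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "concat.txt", "-c", "copy", "episode_trimmed.mp3"], check=True)

Stream copy keeps it fast but snaps cuts to frame boundaries; re-encoding instead of "-c copy" gives cleaner edit points if that matters.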
You have high hopes. The next YT tool will split anything long into 30-second reels, as brains will be completely incapable of focusing for longer.
At this point it is just YT Vertical Videos.
Also there's this woman who makes very funny shorts about software development, and long videos that are good but not as good. I look for her shorts too.
But nowadays I can admit there are a few, very few, content creators who make shorts that are very informative and straight to the point, covering a topic, giving you many facts, and letting you decide if you want to seek more. Sometimes it is nice to have the 30-second Coles Notes version versus a video stretched out to 10 minutes to be eligible for monetization.
BUT, and this is a big but, the shorts and similar video platform trends scare me as a parent. I can see how my kids find a 1.5 hour movie boring but can scroll endlessly through shorts. It might seem harmless to let your kid just scroll on YouTube, but from my perspective it's like an addiction: kids get that dopamine hit watching a clip and seconds later are watching something else. I've learned that it is very important to be aware of what your kids are becoming accustomed to and push them in the right direction.
I "loved" the style but I haven't found any actual radio on the internet of that style or a podcast. Not sure about name of movie but I do remember it being in the last 10-15 years.
(i'm that old)
So you're using snakeoil certificates and MITM proxies at work?
Although, if that is the case, I would expect it to impact basically every Google site.
Can't search for anything without being overwhelmed with shorts in the results, many unrelated to what I'm searching.
The logged-out experience is closer to the interests of the average person. So if you're not pruning (and saving) your interests, that's hardly surprising.
This is like the guy who goes to the doctor complaining of eye pain whenever he drinks tea. "Have you tried taking the teaspoon out?"
issuance flow has been undrained?
Admittedly a nitpick, but the tech industry has a tendency to invent new words when it could say the exact same thing in plain English and be better understood by a wider audience.
oof
For code signing certificates and EV certificates (and OV certificates, if those are even still alive), this is still the case.
It will forever be known as The Great Oops.
There are a few things that can cause tremendously widespread outages, essentially all of them network configuration changes. Actually deleting customer data is dramatically more difficult to the point of impossible - there are so many different services in so many different locations with so many layers of access control. There is no "one command" that can do such a thing - at the scale of a worldwide network of data centers there is no "rm -rf /".
Break your control plane, and you can't stop the propagation of poison.
Propagate the wrong trust bundle... everywhere.
Also, it's not about the delete command. It's about the automatic cleanup following behind it that shreds everything, or repurposes the storage.
If you didn't back it up yourself, it is gone forever.
The possibility that Google will unleash a malicious AI on their infrastructure, or develop a way to destroy a lot of data at scale quite efficiently, or some combination of the two, is far from zero.
Bear in mind, this "Little Oops" should also have been impossible: https://www.techspot.com/news/103207-google-reveals-how-blan...
"We deployed this private cloud with a missing parameter and it wasn't caught" is as different from "we wiped out all customer data" as hello world is from Kubernetes.
No one promised this "should be impossible". Did you confuse that with "we'll take steps to ensure this never happens again"?
You contend there's no global rm -rf for a global cloud provider, but clearly a missing parameter can rm -rf a customer in an irrecoverable manner.
The only half you're missing is... how every major cloud outage happens today... a bad configuration update. These companies have hundreds of thousands of servers, but they also use orchestration tools to distribute sets of changes to all of them.
You only need a command that can rm -rf one box if you are distributing that command to every box.
Now sure, there are tons of security precautions and checks and such to prevent this! But pretending it's impossible is delusional. People do stupid stuff, at scale, every day.
The most likely scenario is a zero day in an environment necessitating an extremely rapid global rollout, combined with a plain, simple error.
It's the sort of thing that used to keep me up at night.
But it can happen, and it only has to happen once. (Also FYI, telling me your work history just tells me you've drunk the koolaid, ain't proof you know more.)
Though I'm sure the major players are all over this risk which is why it hasn't happened.
Frankly, even with no CA redundancy, downtime would have to drag on for weeks to actually disrupt renewals. ACME certs usually get rotated after about two-thirds of their lifetime has elapsed, so the upcoming 45-day certs will still have about 15 days of wiggle room.
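For concreteness, the slack at the usual renew-at-two-thirds point for a few lifetimes (a trivial back-of-the-envelope calc):

    # Headroom left if renewal kicks in once 2/3 of the lifetime has elapsed.
    for lifetime_days in (398, 90, 45):
        renew_at = lifetime_days * 2 / 3
        slack = lifetime_days - renew_at
        print(f"{lifetime_days:>3}-day cert: renew around day {renew_at:.0f}, "
              f"~{slack:.0f} days of slack before expiry")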
https://zerossl.com/documentation/acme/
> By using ZeroSSL's ACME feature, you will be able to generate an unlimited amount of 90-day SSL certificates at no charge, also supporting multi-domain certificates and wildcards.
(Or an unrelated failure, of course)
1. Sales volume was lowest on weekends so if something went wrong it would affect fewer customers.
2. If something went wrong and I needed to revert, nobody was at work on weekends so it would not disrupt coworkers.
3. I always made it so reverting would be easy.
4. Most of my weekends were just relaxing at home, mostly doing online stuff (games, reading, videos) or doing offline stuff at my computer (programming my personal projects). It wasn't much of a bother at all to have an ssh open to something at work monitoring the new deployment for problems for the rest of Friday night and Saturday.
Now I'm wondering: if you rely on OCSP in a TLS client and the PKI is Google's, does it still work?
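One way to answer that empirically is to query the responder listed in the cert's AIA extension and see whether it still answers. A minimal sketch with Python's cryptography package plus urllib (the leaf/issuer file names are illustrative, and it assumes the cert actually lists an OCSP URL):

    # Ask the cert's own OCSP responder whether it still answers, and what it says.
    import urllib.request
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID
    from cryptography.hazmat.primitives import hashes, serialization

    leaf = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    aia = leaf.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    url = next(d.access_location.value for d in aia
               if d.access_method == AuthorityInformationAccessOID.OCSP)

    req = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA1()).build()
    http_req = urllib.request.Request(
        url, data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"})
    resp = ocsp.load_der_ocsp_response(urllib.request.urlopen(http_req).read())

    print("responder status:", resp.response_status)
    if resp.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print("certificate status:", resp.certificate_status)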
Some Google services are also down at the moment, unrelated to YouTube, so probably a failure in some shared infrastructure.
Your History, Subscriptions and search should all work. You should be able to see any creator's page if you go to it directly. The videos are all still watchable. It's primarily the home page and recommended videos that are having issues. Basically any place they recommend videos you haven't seen is broken right now, but the videos are still there and accessible.
I've tried via VPN from the U.S., U.K., Sweden, Germany, Russia, Colombia, etc. Same issue across the board.
Lots of excellent legal analysis, history, logistics, engineering content there.
It was initially founded by some of the most popular informational YouTubers, like CGPGrey, but he mysteriously left the project (I suspect one side wanted to be evil and the other side did not).
Supposedly a more holistic approach to video hosting with less oversight from the platform itself.
YouTube is demonetizing channels left, right, and centre.
ONE MILLION DOLLARS!
Looking forward to the post-mortem.