lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
[1] https://go-acme.github.io/lego/

https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe coded and abandoned (but at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.
(seems to be WIP https://github.com/caddyserver/caddy/issues/7399)
A properly configured DoH server (perhaps running Unbound) with a properly constructed configuration profile that includes a DoH FQDN with a proper certificate would not work on iOS.
The reason, it turns out, is that iOS insists that both the FQDN and the IP have proper certificates.
This is why the configuration profiles from big organizations like dns4eu and NextDNS work properly when, for instance, installed on an iPhone ... but your own personal DoH server (and profile) would not.
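With IP certificates that becomes solvable, since you can put both identifiers in one certificate. A minimal sketch with lego, mirroring the command up-thread (hostname, IP, and e-mail are placeholders, and whether your lego version accepts an IP identifier alongside a DNS name this way is an assumption to verify):

# One certificate covering both the DoH FQDN and the server's public IP as SANs,
# so iOS is satisfied whether it dials the name or the address.
lego --domains doh.example.com --domains 203.0.113.10 \
  --email you@example.com --accept-tos --http run --profile shortlived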
- 8 is a lucky number and a power of 2
- 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout
- 6 is the value of every digit in the number of the beast
- I just don't like 6!
There’s your answer.
6 days means that, over a long enough timeframe, the load will end up evenly distributed across the week.
8 days would result in things getting hammered on specific days of the week.
People will put */5 in cron and the result will be the same, because that's an obvious, easy, nice number.
And 160 is the sum of the first 11 primes, as well as the sum of the cubes of the first three primes!
> A regular 160-gon is constructible with straightedge and compass.
> 160 has a representation as a sum of 2 squares: 160 = 4^2 + 12^2
> 160 is an even number.
> 160 has the representation 160 = 2^7 + 32.
> 160 divides 31^2 - 1.
> 160 = aa_15 repeats a single digit in base 15.
200 would be a nice round number that gets you to 8 1/3 days, so it comes with the benefits of weekly rotation.
The CA/B Forum defines a "short-lived" certificate as 7 days, which has some reduced requirements on revocation that we want. That time, in turn, was chosen based on previous requirements on OCSP responses.
We chose a value that's under the maximum, which we do in general, to make sure we have some wiggle room. https://bugzilla.mozilla.org/show_bug.cgi?id=1715455 is one example of why.
Those are based on a rough idea that responding to any incident (outage, etc.) might take a day or two, so you need at least 2 days for incident response plus another day to re-sign everything; assuming the certificate or OCSP response is renewed midway through its lifetime, that means the lifetime needs to be at least 6 days, and then the requirement is rounded up by another day (to allow the wiggle room, as previously mentioned).
Plus, in general, we don't want to align to things like days or weeks or months, or else you can get "resonant frequency" type problems.
We've always struggled with people doing things like renewing on a cronjob at midnight on the 1st monday of the month, which leads to huge traffic surges. I spend more time than I'd like convincing people to update their cronjobs to run at a randomized time.
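For example, even a crude random sleep removes the worst of it (a sketch; the time and the 6-hour window are arbitrary, SHELL=/bin/bash is there so cron has $RANDOM, and certbot's packaged systemd timer already applies similar jitter on its own):

SHELL=/bin/bash
# Spread renewals across a random window instead of firing at the same minute everywhere.
17 3 * * * sleep $((RANDOM % 21600)) && certbot renew --quiet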
Fuzzing the lifetime of certificates would smooth out traffic and discourage hardcoded values, and most importantly, statistical analysis of CT logs could add confidence that these validity windows are not carefully selected to further a cryptographic or practical attack.
A https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number if you will.
* https://datatracker.ietf.org/doc/html/rfc9799
* https://onionservices.torproject.org/research/appendixes/acm...
I think acme.sh supports it though.
We also support ACME profiles (required for short-lived certs) as of v1.18, which is our oldest currently supported[1] version.
We've got some basic docs[2] available. Profiles are set on a per-issuer basis, so it's easy to have two separate ACME issuers, one issuing longer lived certs and one issuing shorter, allowing for a gradual migration to shorter certs.
[1]: https://cert-manager.io/docs/releases/ [2]: https://cert-manager.io/docs/configuration/acme/#acme-certif...
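Roughly what that looks like (a sketch based on the docs linked above; the profile field, e-mail, and solver are assumptions to check against your cert-manager version):

kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-shortlived
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com                # placeholder
    profile: shortlived                   # ACME profile, cert-manager >= 1.18
    privateKeySecretRef:
      name: letsencrypt-shortlived-account
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx         # placeholder
EOF

A second issuer without the profile (or with a longer-lived one) sits alongside it, and each Certificate just picks its issuerRef.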
Even getting people to use certificates on IPSEC tunnels is a pain. Which reminds me, I think the smallest models of either Palo Alto or Checkpoint still have bizarre authentication failures if the certificate chain is too long, which was always weird to me because the control planes had way more memory than necessary for well over a decade.
The real key is getting ESP HW offload.
Using the long ago past as FUD here is not useful.
This is no criticism, I like what they do, but how am I supposed to do renewals? If something goes wrong, like the pipeline that triggers certbot failing, I won't have time to fix it. So I'd be at a two-day renewal with a 4-day "debugging" window.
I'm certain there are some who need this, but it's not me. Also the rationale is a bit odd:
> IP address certificates must be short-lived certificates, a decision we made because IP addresses are more transient than domain names, so validating more frequently is important.
Are IP addresses more transient than a domain within a 45-day window? The static IPs you get when you rent a VPS aren't transient.
They can be as transient as you want. For example, on AWS, you can release an elastic IP any time you want.
So imagine I reserve an elastic IP, then get a 45 day cert for it, then release it immediately. I could repeat this a bunch of times, only renting the IP for a few minutes before releasing it.
I would then have a bunch of 45 day certificates for IP addresses I don't own anymore. Those IP addresses will be assigned to other users, and you could have a cert for someone else's IP.
Of course, there isn't a trivial way to exploit this, but it could still be an issue and defeats the purpose of an IP cert.
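A sketch of that cycle with the AWS CLI (lego stands in as a placeholder ACME client, mirroring the command up-thread; bash process substitution is assumed):

# Allocate an Elastic IP, obtain a cert for it, then hand the address back.
read -r ip alloc < <(aws ec2 allocate-address \
  --query '[PublicIp,AllocationId]' --output text)

# ...attach $ip to an instance that can answer the HTTP-01 challenge, then:
lego --domains "$ip" --email you@example.com --accept-tos --http run --profile shortlived

aws ec2 release-address --allocation-id "$alloc"
# The cert for $ip remains valid for its full lifetime even though the address
# can now be assigned to someone else -- hence the pressure for short lifetimes.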
6 days actually seems like a long time for this situation!
These changes are coming from the CAB forum, which includes basically every entity that ships a popular web browser and every entity that ships certificates trusted in those browsers.
There are use cases for certificates that exist outside of that umbrella, but they are by definition niche.
Hardly 'by definition niche' IMHO.
So great for E-Commerce, not so great for anyone else.
So no one that actually has to renew these certificates.
Hey! How long does a root certificate from a certificate authority last?
10 to 25 years?
Why don't those last 120 minutes? They're responsible for the "security" of the whole internet aren't they?
I believe Google, who maintain Chrome and are on the CA/B Forum, are an entity well known for hosting various websites (IIRC, it's their primary source of income), and those websites do use HTTPS.
In another comment someone linked to a document from the Chrome team.
Here’s a quote that I found interesting:
“In Chrome Root Program Policy 1.5, we landed changes that set a maximum ‘term-limit’ (i.e., period of inclusion) for root CA certificates included in the Chrome Root Store to 15 years.
While we still prefer a more agile approach, and may again explore this in the future, we encourage CA Owners to explore how they can adopt more frequent root rotation.”
https://googlechrome.github.io/chromerootprogram/moving-forw...
I am not kidding, but also the rest of your comment isn’t at all related to what I said.
Nobody is being forced to use 6-day certs for domains though, when the time comes Let's Encrypt will default to 47 days just like everyone else.
The people who innovate in security are failing to actually create new ways to verify things, so all that everyone else in the security industry can do to make things more secure is shorten the cert expiration. It's only logical that they'll keep doing it.
Yet
In your answer (and excluding those using ACME): is this a good behavior (that should be kept) or a lame behavior (that we should aim to improve)?
Shorter and shorter cert lifetimes are a good idea because they are the only effective way to handle a private key leak. A better idea might exist, but nobody has found one yet.
Thing is, NOTHING is stopping anyone from already getting short-lived certs and being 'proactive' and rotating through them. What it is saying is, well, we own the process, so we'll make Chrome not play ball with your site anymore unless you do as we say...
The CA system has cracks, that short lived certs don't fix, so meanwhile we'll make everyone as uncomfortable as possible while we rearrange deck chairs.
awaiting downvotes in earnest.
(The classic problem with self-signed certs being that TOFU doesn’t scale to millions of users, particularly ones who don’t know what a certificate fingerprint is or what it means when it changes.)
Though if I may put on my tinfoil hat for a moment, I wonder if current algorithms for certificate signing have been broken by some government agency or hacker group and now they're able to generate valid certificates.
But I guess if that were true, then shorter cert lives wouldn't save you.
Probably not. For browsers to accept this certificate it has to be logged in a certificate transparency log for anyone to see, and no such certificates have been seen to be logged.
This makes sense from a security perspective, insofar as you agree with the baseline position that revocations should always be honored in a timely manner.
TLS certs should be treated much more akin to SSH host keys in the known_hosts file. Browsers should record the cert the first time they see it and then warn me if it changes before its expiration date, or some time near the expiration date.
Obviously you might still be victim #1 of such a scheme... But in general the CA's now aren't really trusted anymore - the real root of trust is the CT logs.
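A rough sketch of that idea with openssl (host and pin directory are placeholders; a real tool would also track the recorded expiry date so legitimate rotations near it don't trip the warning):

host=example.com
seen=$(echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256)
pin="$HOME/.cert-pins/$host"
if [ ! -f "$pin" ]; then
  mkdir -p "${pin%/*}" && printf '%s\n' "$seen" > "$pin"   # trust on first use
elif [ "$seen" != "$(cat "$pin")" ]; then
  echo "WARNING: certificate for $host changed unexpectedly" >&2
fi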
the ENTIRE reason the short lifetime is used for the LE certs is that they haven't figured out how to make revoking work at scale.
Now, if you're on the latest browser you might be fine, but any and every embedded device has its root CAs updated only on software updates, which means compromise of a CA might easily give access to hundreds of thousands of devices.
And 200 is not "at scale". The list of difficulties in revoking roots is a very different list from the problem you're citing.
> any and every embedded device
Yes it's flawed but it's so much better than the previous nothing we had for detecting one of the too-many CAs going rogue.
This is great, and actually constructive!
I use a hack I put together, http://www.jofla.net/php__/CertChecker/, to keep a list (in JSON) of a bunch of machines (both HTTPS and SSH) and the last fingerprints/date it sees. Every time it runs I can see if any server has changed; it's just a heads-up for any funny business. Sure, it's got shortcomings, it doesn't mimic headers and such, but it's a start.
It would be great if browsers could all, you know, have some type of distributed protocol, i.e. a DHT, whereby there's at least some consensus about whether this cert has been seen by me or by enough peers lately.
Having a ton of CAs and the ability for any link in that chain to sign for ANY site is crazy, and until you've seen examples of abuse you assume the foundations are sound.
If I don't assign an EIP to my EC2 instance and shut it down, I'm nearly guaranteed to get a different IP when I start it again, even if I start it within seconds of shutdown completing.
It'd be quite a challenge to use this behavior maliciously, though. You'd have to get assigned an IP that someone else was using recently, and the person using that IP would need to have also been using TLS with either an IP address certificate or with certificate verification disabled.
Otoh, if you're dealing with browsers, they really like WebPKI certs, and if you're directing load to specific servers in real time, why add DNS and/or a load balancer thing in the middle?
I think a pattern like that is reasonable for a 6-day cert:
- renew every 2 days, and have a "4 day debugging window"
- renew every 1 day, and have a "5 day debugging window"
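Concretely, with lego that's just a daily job and a renewal threshold (a sketch; --days is lego's "renew when fewer than N days remain" knob, everything else is a placeholder):

# For a 6-day cert: renew once fewer than 4 days remain, i.e. roughly every 2 days,
# which leaves ~4 days to notice and debug a failed renewal.
15 4 * * * lego --domains example.com --email you@example.com --http renew --days 4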
Monitoring options: https://letsencrypt.org/docs/monitoring-options/
This makes me wonder if the scripts I published at https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... should have the expiry thresholds defined in units of hours, instead of integer days?
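Something like this would do hours instead of whole days (a sketch; GNU date is assumed for the -d parsing, and the 48-hour threshold is just an example):

host=example.com
not_after=$(echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
  | openssl x509 -noout -enddate | cut -d= -f2)
hours_left=$(( ($(date -d "$not_after" +%s) - $(date +%s)) / 3600 ))
[ "$hours_left" -lt 48 ] && echo "$host: certificate expires in ${hours_left}h"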
Which should push you to automate the process.
There would be no way of determining that I am connecting to my-organisation's 10.0.0.1 and not bad-org's 10.0.0.1.
i.e. https://10.0.0.1(af81afa8394fd7aa)/index.htm
The identifier would be generated by the certificate authority upon your first request for a certificate, and every time you renew you get to keep the same one.
Arguably setting up letsencrypt is "manual setup". What you can do is run a split-horizon DNS setup inside your LAN on an internet-routable tld, and then run a CA for internal devices. That gives all your internal hosts their own hostname.sub.domain.tld name with HTTPS.
Frankly: it's not that much more work, and it's easier than remembering IP addresses anyway.
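A minimal internal-CA sketch with openssl (all names are placeholders; you'd still distribute lan-ca.crt to every device's trust store, or reach for something like smallstep's step-ca to automate issuance):

# One-time: create the LAN root.
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout lan-ca.key -out lan-ca.crt -subj "/CN=LAN internal CA"

# Per host: key, CSR, and a cert signed by the LAN root (bash process substitution for the SAN).
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.home.example.com"
openssl x509 -req -in nas.csr -CA lan-ca.crt -CAkey lan-ca.key -CAcreateserial \
  -days 90 -out nas.crt \
  -extfile <(printf "subjectAltName=DNS:nas.home.example.com")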
> easier than remembering IP addresses
idk, the 192.168.0 part has been around since forever. The rest is just a matter of .12 for my laptop, .13 for the one behind the telly, .14 for the pi, etc.
Every time I try to "run a CA", I start splitting hairs.
1. Running a CA is more work than just setting up certbot for IP addresses, but not that much more
And that enables you to
2. Remember only domain names, which is easier than IP addresses.
I guess if you're IPv4-only and small it's not much benefit, but if you have a big or bridged network like wonderLAN or the promised LAN it's much better.
To use it, you need a valid certificate for the connection to the server, whose hostname does get broadcast in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.
For smaller sites, often hosting no more than one or two domains, there is hardly a non-distinctive hostname available.
With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.
Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns
> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
Actually the main benefit is no dependency on DNS (both direct and root).
An IP is a simple primitive, i.e. "is it routable or not?".
1) How to secure routing information: some says RPKI, some argues that's not enough and are experimenting with something like SCION (https://docs.scion.org/en/latest/)
2) Principal-agent problem: jabber.ru's hijack relied on (presumably) Hetzner being forced to do it by German law enforcement, based on the powers provided under the German Telecommunications Act (TKG).
Part of the issue with RPKI is it's taking time to fully deploy. Not as glacial as IPv6, but slower than it should be.
If there was 100% coverage then RPKI would have a good effect.
To be pedantic for a moment, ARIN etc. are registries.
The registrar is your ISP, cloud provider etc.
You can get a PI (Provider Independent) allocation for yourself, usually with the assistance of a sponsoring registrar. Which is a nice compromise way of cutting out the middleman without becoming a registrar yourself.
The biggest modern-era reason is direct access to update your RPKI entries.
But this only matters if you are doing stuff that makes direct access worthwhile.
If your setup is mostly "set and forget" then you should just accept the lag associated with needing to open a ticket with your sponsor to update the RPKI.
There are also these little things called DNS over TLS and DNS over HTTPS that you might have heard of? ;)
Erm? Do I have to spell out that I was pointing out that there is more than the "ephemeral services" being guessed at that could take advantage of IP certs?
VBA et al succeeded because they enabled workers to move forward on things they would otherwise be blocked on organizationally
Also - not seeing this kind of thing could be considered a gap in your vision. When outsiders accuse SV of living in a high-tech ivory tower, blind to the realities of more common folk, this is the kind of thing they refer to.
As a concrete example, I'll probably be able to turn off bootstrap domains for TakingNames[0].
Am I the only person that thinks this is insane? All web security is now at the whims of Google?
I don't think the root programs take these kinds of decisions lightly, and I don't see any selfish motives they could have. They need to find a balance between not overcomplicating things for site operators and CAs (they must stay reliable) while also keeping end users secure.
A lot of CAs and site operators would love if nothing ever changed: don't disallow insecure signature/hash algorithms, 5+ year valid certs, renewals done manually, no CT, no MPIC, etc. So someone else needs to push for these improvements.
The changes the root programs push for aren't unreasonable, so I'm not really concerned about the power they have over CAs.
That doesn't mean the changes aren't painful in the short term. For example, the move to 45 day certificates is going to cause some downtime, but of course the root programs/browsers don't benefit from that. They're still doing this because they believe that in the long term it's going to make WebPKI more robust.
There's also the CA/Browser Forum where rule changes are discussed and voted on. I'm not sure how root programs decide on what to make part of their root policy vs. what to try to get voted into the baseline requirements. Perhaps in this case Chrome felt that too many CAs would vote against for self-interested reasons, but that's speculation.
Some CAs will continue to run PKIs which support client certs, for use outside of Chrome.
In general, the "baseline requirements" are intended to be just that: A shared baseline that is met by everyone. All the major root programs today have requirements which are unique to their program.
Right, that explains it. So the use would be for things other than websites or for websites that don't need to support Chrome (and also need clientAuth)?
I guess I find it hard to wrap my head around this because I don't have experience with any applications where this plus a publicly trusted certificate makes sense. But I suppose they must exist, otherwise there would've been an effort to vote it into the BRs.
If you or someone else here knows more about these use cases, then I'd like to hear about it to better understand this.
mTLS is probably the only sane situation where a private PKI should be used.
* An outer SNI name when doing ECH perhaps
* Being able to host secure http/mail/etc without being beholden to a domain registrar
E.g.:
[1] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
[2] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
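For instance, querying the resolver over HTTPS directly at its address, which only works cleanly because 1.1.1.1 presents a certificate for the IP itself (endpoint and header as documented at the links above):

# DoH lookup straight to the IP, no resolver hostname involved.
curl -sH 'accept: application/dns-json' \
  'https://1.1.1.1/dns-query?name=example.com&type=AAAA'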
ECH needs the outer (unencrypted) SNI to be somewhat plausible as a destination. With ECH GREASE, the outer SNI is real and what looks like encrypted inner ECH data is just random noise.
For non-GREASE ECH we want to look as much like GREASE as we can, except that the payload isn't noise: it's the encrypted real inner SNI, among other things.
For local /network/ development, maybe, but you’d probably be doing awkward hairpin natting at your router.
Other than basically being a pain in the ass.
If you are supposed to have an establishable identity, I think there is DNSSEC back to the registrar for a name, and (I'm not quite sure what?) back to the AS for the IP.
Why? Even regular certs are handed out via IP address.
They retire challenges that were once acceptable. What happens if they require a real chain of trust? They retire HTTP, and domain names keep working via DNS/DNSSEC.
Making IP certs available with only HTTP challenges is going backwards.
But what risks are attached to such a short refresh?
Is there someone at the top of the certificate chain who can refuse to give out further certificates within the blink of an eye?
If yes, would this mean that within 6 days all affected certificates would expire, like a very big Denial of Service attack?
And after 6 days everybody goes back to using HTTP?
Maybe someone with more knowledge about certificate chains can explain it to me.
With a 30-day cert and renewal 10-15 days in advance, you get breathing room.
Personally I think 3 days is far too short unless you have your automation pulling from two different suppliers.
How many "top of chain" providers is letsencrypt using? Are they a single point of failure in that regard?
I'd imagine that other "top of chain" providers want money for their certificates and that they might have a manual process which is slower than letsencrypt?
But in general, one of the points of ACME is to eliminate dependence on a single provider, and prevent vendor lock-in. ACME clients should ideally support multiple ACME CAs.
For example, Caddy defaults to both LE and ZeroSSL. Users can additionally configure other CAs like Google Trust Services.
This document discusses several failure modes to consider: https://github.com/https-dev/docs/blob/master/acme-ops.md#if...
It depends. If the ACME client is configured to only use Let's Encrypt, then the answer is yes. But the client could fall back to Google's CA, ZeroSSL, etc. And then there is no single point of failure.
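A crude version of that fallback in shell, assuming a lego-style client (directory URLs are the public ones; ZeroSSL additionally requires EAB credentials, omitted here):

# Try ACME directories in preference order until one of them issues.
for dir in \
  https://acme-v02.api.letsencrypt.org/directory \
  https://acme.zerossl.com/v2/DV90
do
  lego --server "$dir" --email you@example.com --accept-tos \
       --domains example.com --http run && break
done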
ZeroSSL/HID Global seems to be quite multi-national though, and it’s owned by a Swedish company (Assa Abloy).
I don't know what kind of mitigations these orgs have in place if the shit really hits the fan in the US. It's an interesting question for sure.
The US has strong institutions which prevent the President or Government at large from controlling these on a whim. If those institutions fail, then they could all push out an update which removes all "top of chain" trusted certificate authorities other than ones approved by the US government.
In that situation the internet is basically finished as it stands now, and the OSes would be non-trustworthy anyway.
Fixing the SSL problems is the easy part, the free world would push its own root certificate out -- which people would have to manually install from a trusted source, but that's nothing compared to the real problem.
Sure, Ubuntu, Suse etc aren't based in the US, but the number of phones without a US based OS is basically zero, you'd have to start from scratch with a forked version of android which likely has NSA approved backdoors in it anyway. Non-linux based machines would also need to be wiped.
Absolutely not.
If the president attempted to force a US-based CA to do something bad they don't want to do, they would sue the government. So far, this administration loses 80% of the lawsuits brought against it.
And that's before more overt issues. Microsoft/Google/etc. could sue to stop the US from ordering them to comply, which is what they should do. But is the CEO really willing to risk their life for that? Be a terrible shame if their kids got caught up in a traffic accident.
It would be nice to at least have a very high-level contingency plan, because in the worst case I won't be able to google it.
You can also read a lot of anti-Trump articles and comments on countless web-sites, some under .com and some under other top-domains. As lunatic as Trump is, he hasn’t shut that down.
“Is the TLD root management really split up vertically”
AFAIK, yes, it is.
But if the global DNS somehow broke down, I guess you'd either have to find an alternative set of root servers or communicate outside of the regular Internet. Such an event would surely shock the global economy.
The majority of people use their own ISP or an anycast address from a US company (cloudflare, google, opendns). Quad9 is European.
However any split in the root dns servers signals the end of an interconnected global network. Any ISP can advertise anycast addresses into its own network, so if the US were to be cut off from the world that wouldn't be an issue per-se, but the breakdown of the internet in the western world would be a massive economic shock.
It wouldn't surprise me if it happens in the next decade or two though.
You could argue that The Don in charge of the US is in control of letsencrypt
He's not in control of letsencrypt or any other US-based CA.
It may not be well known, but Trump's administration loses about 80% of the time when they've been sued by companies, cities and states.
There's much more risk of state-sponsored cyber attacks against US companies.
A simple Windows-to-Linux migration is not enough. If certificates expire without a way to refresh them, you'd either need to manually touch every machine to swap root certificates or have some other contingency plan.
I would say that the WebPKI system seems to be quite resilient, even in the face of strong geopolitical tension.
The far bigger problem is the American government forcing Microsoft/Apple/Google to push out a windows/iphone|mac/android|chrome update which removes all CAs not approved by the American government.
Canonical/Suse may be immune to such overt pressure, but once you get to that point you're way past the end of the international internet and it doesn't really matter anyway.
2) App stores review apps because they want to verify functionality and compliance with rules, not just as a box-checking exercise. A code signing cert provides no assurances in that regard.
App store review isn't what I was talking about; I meant not having to verify your identity with the app store, and using your own signing cert, which could be used across platforms. Moreover, it would be less costly to develop signed Windows apps; it costs several hundred dollars today.
That's pretty reasonable, considering it is built in to all the major code signing tools on Windows, they perform the identity verification, and the private keys are fully managed by Azure. Code signing certs are required to be on HSMs, so you're most likely going to be paying some cloud CA anyway.
I owe you one @briHass :)
However, the very act of trying to make this system less impractical is a concession in the war on general-purpose computing. To subsidize its cost would be to voluntarily lose that non-moral line of argument.