I remember a time when having an HTTPS connection was for "serious" projects only, because the cost of the certificate was much higher than the domain. You'd go commando, and if it stuck you'd purchase a certificate for 100 bucks or something.
We were looking for an SSL provider that offered certs valid for more than one year AND supported ACME... for some reason we ended up with SSL.com, which did support ACME for longer-lasting certs; however, there were some minor incompatibilities between how kubernetes cert-manager implemented ACME and how SSL.com implemented it, and we ended up debugging SSL.com's ACME protocol implementation.
Fun. We should have just clicked once every 3 years; better than debugging third-party APIs.
No, I don't remember the details and they are all lost in my old work emails.
(Nowadays I think zerossl.com also supports ACME for >1 year certs? but they did not back then. edit: no they still don't, it's just SSL.com I think)
Why are (some) banks always completely clueless about these things? Validating ownership of the domain more often (and with an entirely automated provisioning set-up that has no human weak links) can only be a good thing.
Perhaps the banking sector will finally enter the 21st century in another ten years?
They have these really, really long lists of what needs to be secured and how. Some of it is reasonable, some of it is bonkers, there is way too much of that stuff, and it overall increases the price of any solution 10x at least.
But OTOH I can hardly blame them; failures can be catastrophic there, as they deal with real money directly and can be held liable for failures. So they don't really care about security so much as about covering their asses.
Some of it is truly bonkers and never was good practice, but much of the irritating stuff is simply out-of-date advice. The banks tend to be very slow to change unless something happens that affects (or directly threatens to affect) the bottom line, or puts them in the news unfavourably.
Of course some of it is bonkers, like HSBC and FirstDirect changing the auth for my personal accounts from “up to 9 case-sensitive alpha-numeric characters” (already considered bad practice for some years) to “6 digits”, and assuring me that this is just as secure as before…
I read it as “we have been asked to integrate an ancient system that we can't update (or more honestly in many cases: can't get the higher-ups to agree to pay to update), so are bringing our other systems down to the lowest common denominator”. That sort of thing happens too often when two organisations (or departments within one) that have different procedures merge or otherwise start sharing resources they didn't previously.
One of the practices was pathetic to the point of being funny: you had to input specific characters of your password (2nd, 4th, 6th, etc - the positions changed at each login) AND there was a short timeout. My children probably learned a few new words when I was logging in.
Some time later they silently removed the first one.
I wonder if this would be an opportunity for revenue for Let's Encrypt? "We do 90-day automated-renewal certificates for free for everyone. If you're in an unusual environment where you need certificates with longer validity, we offer paid services you can use."
I think there's still incentive alignment here. Getting people moved from the "purchase 1 year certificate" world (which is apparently still required in some financial contexts) into the ACME-based world provides a path for making a regulatory argument that it'd be easy for such entities to switch over to shorter-lived certificates because the ACME infrastructure is right there.
The only good thing about dealing with certificate resellers at the time was that they were really flexible in a lot of ways. We got our EV cert refunded, or "store credit", and used the money to buy normal certificates.
Extended Validation can still play a role in a corporation's IT control framework; the extended validation is essentially a check-of-paperwork that then doesn't need to be performed by your own auditor. Some EV certificates also come with some (probably completely useless) liability insurance.
[1] https://chromium.googlesource.com/chromium/src/%2B/HEAD/docs...
Warranties / insurance on SSL certificates typically only pay out if a certificate is issued improperly, often in conjunction with other conditions like a financial loss directly resulting from the misissuance. Realistically, any screwup serious enough to result in that warranty paying out would also result in the CA being abruptly removed from browser root certificate programs.
And another fun one unrelated to signing was when they tried to trademark "Let's Encrypt" in 2015.
But yes, it is not a common issue and effort would be better focused on improving site security in other ways. (unlike the rest of my comment, this line isn't sarcasm.)
There are some scenarios where you still have to employ EV certificates, e.g. code signing.
https://groups.google.com/a/chromium.org/g/security-dev/c/h1...
You'll still find people online claiming EV certificates are worth anything more than $0, but you can ignore them just as well.
Not in any jurisdiction I'm aware of, though it's a big world so it wouldn't shock me if some small corner of it has bad laws.
> and also obligatory for rolling out as MasterCard/Visa merchant by their anti-fraud requirements
PCI DSS does not require EV certificates.
They recognize neither LE nor AWS certs, only the big paid ones. Such an annoying process too - to pay, to obtain and to update the certs.
Nobody is like "Oh, the Jones Act ensures high quality ships" because it doesn't, the Jones Act just ensures that you're going to use those US shipyards, no matter what.
What about ZeroSSL, which is basically interchangeable with Let's Encrypt?
I'm really not a fan of it but I'm happier paying for a one year cert than doing that
If your DNS provider doesn't have an API, that seems like a separate issue but one that is well worth your organization's time if you're working in the enterprise!
(looking into setting this up for a bunch of domains at work)
Let's not talk about key delivery. We will get back the admin cost of all that in a year if we tunnel them through one of our LBs.
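For the DNS-API case mentioned above, a hedged sketch of what the fully automated path can look like (hypothetical domain and provider; assumes the certbot-dns-cloudflare plugin is installed and the credentials file holds an API token with DNS edit rights):

    sudo certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d example.com -d '*.example.com'
    # the plugin publishes the _acme-challenge TXT records itself, so port 80 never matters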
Let’s Encrypt is the best thing to happen to the web in at least a decade.
Before them I never used SSL for anything, because the cost/benefit ratio was just not there for my services.
Since then, I never not use it.
Glad this problem just got completely resolved.
Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:
https://letsencrypt.org/2015/09/14/our-first-cert/
In December of 2015 (~9 years ago today) it was made available to everyone, no invitation needed:
Can't believe it's been ten years.
TLS is fairly computationally intensive - sure, not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support accelerating the crypto, so it's hilariously slow.
It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost because websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore.... if my drives fail, that knowledge will be lost forever.
It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
It also discourages naive experimentation - sure, if you know how, you can MitM your own connection but for the not very technical but curious user, that's probably an insurmountable roadblock.
The biggest problem Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED - it wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.
It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
You also have to understand that you don't have any control over which networks your "fluffy kitten page" data will pass through - malicious groups have pulled off BGP hijacking multiple times.
So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining still to be done about how the Internet works.
Transport security doesn't make 0-days any less of a concern.
> It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.
> There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more assurance that your food wasn't tampered with during transport, but somehow you live with that. Similarly, the security of physical mail is 100% a legislative construct.
> You also have to understand that you don't have any control over which networks your "fluffy kitten page" data will pass through - malicious groups have pulled off BGP hijacking multiple times.
I don't but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.
> So saying "well, it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining still to be done about how the Internet works.
Not at all - unless you are also expecting them to have their fluffy kitten postcards checked for anthrax. In general, it is security people who often need to touch grass, because the security model they are working with is entirely divorced from reality.
I am going to cross the street in front of that speeding car because driver will be held liable when I get hit and die.
If there is not even a possibility of hijacking the traffic, a whole range of things just won't happen. And holding someone liable is not the solution.
Only if you are talking about actual events in which this is happening as a matter of course. Because that's what it is when ISPs inject ads into plain-text HTTP traffic: a matter of course. It's a bit more like saying that we don't have a way to effectively enforce our laws against maliciously reckless driving so we install a series of speed bumps on the road (it's still not quite the same thing because it doesn't make the reckless driving impossible but it does increase the cost).
But it's not like we're talking about agreeable activity here, anyway. This particular case against TLS sounds like a case that favors criticizing an imperfect solution to widespread negative behavior over criticizing the negative behavior. It seems reasonable to look at the speed bumps (which one may or may not find distasteful) and curse the reckless behavior of those who incentivized their construction.
But that analogy of course runs dry rather quickly, because you can look both ways when crossing a street - on the internet, as I mentioned, you cannot control where the data flows, and bad actors have already proven that they take advantage of this.
This is why it is not like an overpass that you can build where the need is - because for internet traffic the need is everywhere.
> Transport security doesn't make 0-days any less of a concern.
It does. Each layer of security doesn't eliminate the problem but does make the attack harder.
Mail and food are different in that there are not limitless scalable attacks that can originate anywhere around the globe.
It does make the actual execution of said attacks significantly harder. To actually hit someone's browser, they need to receive your payload. In the naive case, you can stick it on a webserver you control, but how many people are going to randomly visit your website? Most people visit only a handful of domains on a regular basis, and you've got tops a couple of days before your exploit is going to be patched.
So you need to get your payload into the responses from those few domains people are actually making requests from. If you can pwn one of them, fantastic. Serve up your 0-day. But those websites are big, and are constantly under attack. That means you're not going to find any low-hanging fruit vulnerability-wise. Your best bet is trying to get one of them to willingly serve your payload, maybe in the guise of an ad or something. Tricky, but not impossible.
But before universal https, you have another option: target the delivery chain. If they connect to a network you control? Pwned. If they use a router with bad security defaults that you find a vulnerability in? Pwned. If they use a small municipal ISP that turns out to have skimped on security? Pwned. Hell, you open up a whole attack vector via controlling an intermediate router at the ISP level. That's not to mention targeting DNS servers.
HTTPS dramatically shrinks the attack surface for the mass distribution of unwanted payloads down to basically the high-traffic domains and the CA chain. That's a massive reduction.
> The only people who can realistically MITM your connection are network operators and governments.
Literally anyone can be a network operator. It takes minimal hardware. Coffee shop with wifi? Network operator. Dude popping up a wifi hotspot off his phone? Network operator. Sketchy dude in a black hoodie with a raspberry pi bridging the "Starbucks_guest" as "Starbucks Complimentary Wifi"? Network operator. Putting the security of every packet of web traffic onto "network operators" means drastically reducing internet access.
> You have no more security that your food wasn't tampered with during transport but somehow you live with that.
I've yet to hear of a case where some dude in a basement poisoned a Sysco truck without having to even put on pants. Routers get hacked plenty.
HTTPS is an easy, trivial-cost solution that completely eliminates multiple types of threats, several of which either do major damage to their target or risk mass exposure, or both. Universal HTTPS is like your car beeping at you when you start moving without your seat belt on: kinda annoying when you're doing a small thing in tightly controlled environments, but it has an outstanding risk reduction, and can be ignored with a little headache if you really want to.
I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.
> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet, which means trusting people and trusting all the people is a bad idea.
Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!) so need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.
> It also discourages naive experimentation
When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.
However, I think there is no reason at all that a decentralized system couldn't be far, _far_ simpler for a user to set up (not to mention far more secure and private). Crypto gets a lot of hate on HN, but that seems to be mostly due to people's dislike of anything dealing with currency or financial systems that touch it. This is a despised opinion here, but I am still actually excited for crypto systems that solve real-world problems like TLS certs, DNS, et al.
Iroh seems like a _fantastic_, phenomenal system to showcase this idea. It allows for a very fast decentralized web experience built on modern pieces such as BLAKE3, QUIC, and so on, but doesn't really touch any financial stuff at all. It's simply a good system.
I hope we can slowly move to a system that uses the decentralized consensus algorithms created in the crypto space to remove the trust in (typically big, corporate, and likely backdoored) centralized entities that our system today _requires_ without any alternative.
Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.
There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
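For reference, the HSTS policy mentioned above is just a response header with a lifetime; a quick way to see what a site publishes (hypothetical domain; assumes curl is available):

    curl -sI https://example.com | grep -i strict-transport-security
    # e.g.  strict-transport-security: max-age=31536000; includeSubDomains; preload
    # max-age is the policy lifetime that can outlive a change of domain ownership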
In my ideal world
a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change and the transport security doesn't change the resource you are addressing. A separate protocol doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).
b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE (see the sketch after this list).
c) This mechanism would also securely inform browsers about what protocols and ciphers the server supports, to allow for backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.
d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
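A hedged sketch of the DANE idea from point b), under the assumption that the zone is DNSSEC-signed (hypothetical names): the server's key is pinned in DNS with a TLSA record, and the hash can be computed from the live certificate with openssl:

    # zone entry (3 1 1 = end-entity cert, subject public key, SHA-256):
    #   _443._tcp.www.example.com. IN TLSA 3 1 1 <sha256-of-the-server-public-key>
    # compute that hash from the server certificate:
    openssl x509 -in server.crt -noout -pubkey \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256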
If the website really isn't maintained, then it's only a matter of time until the server is part of a botnet. Setting up LE for a simple site takes half an hour once.
The fundamental problem is a question of trust. There are three ways:
* Well known validation authority (the public TLS model)
* TOFU (the default SSH model)
* Pre-distribute your public keys (the self-signed certificate model)
Are there any alternatives?
If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
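A minimal sketch of becoming your own root of trust with a self-signed certificate (hypothetical names); the catch is exactly the curation problem above, since the resulting cert then has to be distributed to and trusted by every client:

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj "/CN=my.internal.service" \
      -keyout my.internal.service.key -out my.internal.service.crt
    # my.internal.service.crt is what clients must import into their trust store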
The obvious alternative would be a model where domain validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership as that is the way they are used (mostly).
There is a risk that Let's Encrypt and other "good enough" solutions take us further from that. There are also many actors with an economic interest in the established model, both in the PKI business and consultants where law enforcement are important customers.
If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
Utilizing DNS, whois, or a purpose built protocol directly would alleviate the problem altogether but should probably be done by way of an updated TLS specification.
Any realistic migration should probably exist alongside the public CA model for a very long time.
There's issues with it, but it is an alternative model, and I could see it being made to work.
I don’t see how it has too many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.
I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
Interest is probably going to be low but not zero - I often play games long after they have been released and sometimes intentionally using older versions that are no longer supported by current mods.
If I do everything perfectly, but the CA I used makes some trivial error which, in the case of my certificate, has no real-world security impact? They can send me an e-mail at 6:40 PM telling me they're revoking my certificate at 2:30 PM the next day. Just what you want to find in your inbox when you get in the next day. I hope you weren't into testing, or staged rollouts, or agreeing deployment windows with your users - you'd better YOLO that change into production without any of that.
Even though it wasn't your mistake, and there's no suggestion you shouldn't have the certificate you have.
As far as the CA/B Forum is concerned, safety-critical systems that can't YOLO changes straight into production with minimal testing and only a few hours of notice don't belong on their PKI infrastructure. You'd better jump to it and fix their mistake right now.
Anyone whose certbot run was between 2pm and 6pm would get their cert revoked the next day at 2pm anyway - even if it was only issued 18 hours ago.
There's also a higher level question: Is this the web we want to be building? One where every site and service has to apply for permission to continue existing every 24 hours? Do we want a web where the barrier to entry for hosting is a round-the-clock ops team, complete with holiday cover? And if you don't have that, you should be using Facebook or Twitter instead?
The lack of understanding from us as technologists for people who had a working site and are now forced to choose between an oligopoly of site-hosting companies or having their site break repeatedly as TLS standards rotate is one thing that brings me shame about our community.
You can come up with all kinds of reasons to gatekeep website hosting: “they have to update anyway” even when updating means reinstallation of an OS, “it's not that hard to rotate” say people with deep knowledge of computers, “just get someone else to do it” say people who have a financial interest in it being that way.
Framing people with legitimate issues as weirdos is not as charming as you think it is.
Also the Kebap Shop probably has a form for reservation or ordering, which takes personal information.
True, they are all low risk things, but getting TLS is trivial (since many Webservers etc can do letsencrypt rotation fully automatically) and secure defaults are a good thing.
They’ve nearly all been lost to time now, though; if a shop has a web presence it will be through a provider such as “bokabord”, DoorDash, or Uber Eats (as mentioned), some of whom charge up to 30% of anything booked/ordered via the web.
But, I guess no MITM can manipulate prices… except, by charging…
If you care about the integrity of the conveyed information you need TLS. If you don't, you wouldn't have published a website in the first place.
A while back I saw a WordPress site for a podcast without HTTPS where people also argued it doesn't need it. They had banking information for donations on that site.
Sometimes I wish every party involved in transporting packets on the internet would just mangle all unencrypted http that they see, if only to make a point...
Like, "telnet textfiles.com 80", then "GET / HTTP/1.0", <enter>, "Host: textfiles.com", <enter><enter>, and you have the page.
What would be the point of making these unencrypted sites disappear?
I'd argue that that is a most likely objectively false statement and that the domain owner is in no position to authoritatively answer the question if it has ever served ads in that time. As it is served without TLS any party involved in the transportation of the data can mess with its content and e.g. insert ads. There are a number of reports of ISPs having done exactly that in the past, and some might still do it today. Therefore it is very likely that textfiles.com as shown in someones browser has indeed had ads at some point in time, even if the one controlling the domain didn't insert them.
Textfiles also contains donation links for PayPal and Venmo. That is an attractive target to replace with something else.
And that is precisely the point: without TLS you do not have any authority over what anyone sees when visiting your website. If you don't care about that then fine, my comment about mangling all http traffic was a bit of a hyperbole. But don't be surprised when it happens anyway and donations meant for you go to someone else instead.
If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.
If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
I therefore conclude that your interpretation is meaningless.
> "as shown in someones browser"
Which is different than being served by the server, as I believe I have sufficiently demonstrated.
> But don't be surprised when it happens anyway
Jason Scott, who runs that site, will not be surprised.
I agree it is not. That is why I didn't say that the original server served ads, but that the _domain_ served ads. Without TLS you don't have authority over what your domain serves, with TLS you do (well, in the absence of rogue CAs, against which we have a somewhat good system in place).
> If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
This is simply a compromised device.
> If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
This is an ISP giving you instructions to compromise your device.
> If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
No, in this case I am clearly no longer looking at the website, but asking a third-party to convey it to me with whatever changes it makes to it.
> If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
No, archive.org is then serving an ad on their own domain, while simultaneously showing an archived version of your website, the correctness of which I have to trust archive.org for.
> Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
Fair point. I should have said that I additionally expect the client device to be uncompromised, otherwise all bets are off anyway, as your examples show. The implicit scenario I was talking about includes an end-user using an uncompromised device and putting your domain into their browser's URL bar or making a direct HTTP connection to your domain in some other way.
They want the historical integrity, which includes the lack of data integrity that you want.
openssl s_client -connect news.ycombinator.com:443
and you can do the same. A simple wrapper, alias or something makes it as nice as telnet.

In practice, many pages are also intentionally compromised by their authors (e.g. including malware scripts from Google), and devices are similarly compromised, so end-to-end "integrity" of the page isn't something the device owner even necessarily wants (c.f. privoxy).
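A tiny sketch of such a wrapper for the s_client command above (assumes bash and openssl; the function name is made up):

    https_poke() {
        # usage: https_poke news.ycombinator.com [port], then type the request by hand,
        # e.g. "GET / HTTP/1.1", "Host: news.ycombinator.com", blank line
        openssl s_client -quiet -connect "$1:${2:-443}" -servername "$1"
    }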
The cryptography community would have you believe that the only solution to getting scammed is encryption. It isn't.
The NSA was installing physical devices at network providers that scoured through all the information - they did not have to have Agent Smith opening envelopes or even looking at them. Keep in mind criminals could do the same, just by paying off some employees at a provider, and also not all network providers are in countries where law enforcement works - and as mentioned, your data can go through any of those network providers.
If I send physical mail I can be sure it is not going through Bangkok unless I specifically send it to a destination that requires it to go there.
Nothing, really. But for physical mail the attacks against it don't scale nearly as well: you would need to insert yourself physically into the transportation chain and do physical work to mess with the content. Messing with mail is also taken much more seriously as an offense in many places, while laws are not as strict for network traffic generally.
For telephone conversations, at least until somewhat recently, the fact that synthesizing convincing speech in real time was not really feasible (especially not if you tried to imitate someone's speech) ensured some integrity of the conversation. That has changed, though.
And prices are more likely to be simply outdated than modified by a malicious entity. Your concerns are not based in reality.
It’s like a vaccine. We vaccinated most of the web against a very bad problem, and that has stopped the problem from happening in the first place. If 90% were still on http, way more ISPs would insert ads.
There are more than enough forgotten kebab shop pages now serving malware because they never updated WordPress; an out-of-date certificate warning is a very good "heads up, this site hasn't been maintained in 6 years".
If we're talking about hosting even a static HTML file without using a site hosting company, that already requires so much technical knowledge (domain purchasing, DNS, purchasing a static IP from your ISP, server software which again requires vuln updates) that said person will be able to update a TLS cert without any issue.
[citation needed]
There are plenty of organizations that actively scan the web for "malware" (aka anything that the almighty machine learning algorithms don't like) and are more than happy to harass the website owner and hosting company until their demands are met.
Security is ultimately a social issue. Technical means are only one way to improve it and can never solve it 100%. You must never lose sight of the cost imposed by technological security solutions versus what improvement they actually offer.
However, if you have already bought a domain name, the cost of setting up TLS is basically 0. You just run certbot and give it the domains you want certificates for. It will set up auto-renew and even edit your Apache/NGINX configs to enable TLS.
Sure, TLS standards rotate. But that just means you have to update Apache/NGINX every like 5 years. Hardly a barrier for most people imo.
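A minimal sketch of that certbot flow (hypothetical domain; assumes the nginx plugin is installed and ports 80/443 reach the host):

    sudo certbot --nginx -d example.org -d www.example.org
    # a renewal timer/cron entry is installed automatically; it can be exercised with:
    sudo certbot renew --dry-run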
certbot is a Python program, so better hope it keeps working - it definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API it needed.
The #1 cause of issues with my hobby website, darkscience.net, is that it refuses to negotiate with Chrome because the TLS suites are considered too old, yet in 2020 I was scoring A+ on the Qualys SSL report.
It's just time, time and effort, and it's mostly wasted.
The letsencrypt tools are really wonderful, just pray they don’t break, and be ready to reinstall everything from scratch at some point.
You could try out acme.sh that's written purely in shell. It's extremely capable and supports DNS challenge and multiple providers
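A hedged example of what that looks like with a DNS challenge (hypothetical domain and provider; dns_cf is acme.sh's Cloudflare hook and is assumed to read an API token from the environment):

    export CF_Token="<api-token>"        # placeholder credential, never commit it anywhere
    acme.sh --issue --dns dns_cf -d example.net -d '*.example.net'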
There is also https://github.com/srvrco/getssl which is a bash script. I lightly audited it years ago and it did not seem to upload your private keys anywhere... I've used it occasionally, but I don't let it run as root, so I need to copy the retrieved certs into the server config manually.
The larger point is that it's required for what amounts to a poster on a wall: yes, someone can come along with a pen and alter the poster, but it's not worth the effort to secure for most people, and it will degrade rapidly with such security too.
So, instead they turn to middlemen, or don’t bother.
There's a myriad of other issues, but it's not as easy as we claim.
certbot is not even close to the pinnacle of easy TLS setup. Using an HTTP server that fully integrates ACME and tls-alpn-01 is much nicer: tell your server what domain you use, and it automatically obtains a certificate.
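A sketch of that approach, assuming Caddy v2 on a publicly reachable host (hypothetical domain); the site address is essentially the entire TLS configuration:

    # Caddyfile contents:
    #   example.com {
    #       root * /var/www/example
    #       file_server
    #   }
    caddy run --config Caddyfile   # obtains and renews the certificate automatically via ACME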
There is regulation, like mandatory yearly inspections and anyone is only allowed to sell road worthy vehicles. These rules are rather strict, likewise for the driver's license. They aren't impossible to know or understand, but there's a lot of details.
However, when I take it to the shop, whether for that yearly inspection, regular maintenance, or because there's something apparently wrong with it, I never know what to expect in terms of time and money.
Oh, it needs a new thingamajig? I start to mildly sweat, fearing it to cost six hundred like the flux capacitor that had to be replaced last week/month/year and took two weeks to get shipped from another country. "Ninety cents, and we put it in place for no charge, it literally takes ten seconds", like, I love to hear the news, could have saved me from the anguish by giving a hint when I asked about the price! But need a new key? Starting from three hundred fifty, plus one hundred seventy for a backup copy. Like, where do these prices come from? Actually, don't tell me, I'm a software engineer. I know, I know.
I'll just wait until you want your car shop web pages up. Oh, for that you'll need PCI DSS and we can't do that other things because of GDPR. Sorry, my hands are tied here. That'll be four thousand plus tax, mister auto mechanic shop owner.
Safe transfer should be the default.
Your argument is akin to "I don't have anything to hide."
You just do it and don't think about it. Modern servers and services make this completely transparent.
The kebab guy doesn't need to worry about this as long as they're not fooled into buying from mala fide hosting companies who try to upsell you on something that should be the baseline.
While we might be able to find common ground in the statement that "safe transfer should be the default", we will differ on the definition of "safe".
Unfortunately these discussions often end up in techno-babble. Especially here on HN, where we tend to enjoy rather binary viewpoints without too many shades of gray.
Try being your own devils advocate: "What if I have something to hide?".
Then deal with that. Legitimately. Reasonably. Unless you are an anarchist, I assume that we can agree that we need authorities. A legal framework. Policing.
So I 100% support Let's Encrypt and what they have done to destroy the certificate racket. That is a force of good!
But I do not think it was a healthy thing that the browsers (and Google search results) "forced" the world defacto to TLS only.
Why? Look at the list of Trusted Root Certificates in the big OS and browsers. You are telling me only good guys are listed? None here are or can be influenced by state actors?
But that is the good kind of MITM? This then hinges on your definition of "safe transport". Only the anarchist can win against the government. I am not one.
It might sound like I am in the "I do not have anything to hide" camp. I am not that naive. But I am firmly in the "I prefer more scrutiny when I have something to hide". Because the measures the authorities needs to employ today are too draconian for my liking.
I preferred the risk of MITM at the ISP level to what the authorities need to do now to stay in control. We have not eliminated MITM. Just made it harder. And we forgot to discuss legitimate reasons for MITM because "bad".
This is not a "technical" discussion on the fine details of TLS or not. But should be a discussion about the societal changes this causes. We need locks to keep the creeps out but still wants the police to gain access. The current system does not enable that in a healthy way but rather erodes trust.
Us binary people can define clear simple technical solutions. But the rest of the world is quite messy. And us bit twiddlers tend to shy away from that and then ignore the push-back to our actions.
We cannot have a sober conversation unless we depart from the "encrypt everything" is technically good and then that is set in stone. But here we are: Writing off arguments as irrelevant.
They usually counter with “but SSH uses TOFU” because they don't see, and can't be convinced of, the problem of not verifying the server key fingerprint⁰. I can be fairly sure that I'm talking to the daemon that I've just set up myself without explicitly checking the fingerprint¹, but that particular side-channel assurance doesn't apply to, for example, a client connecting to our SFTP endpoint for the first time² to send us sensitive data.
--
[0] Basically, they get away with doing SSH wrong, and want to get away with doing HTTPS wrong the same way.
[1] Though I still should, really, and actually do in DayJob.
[2] Surprisingly few banks' tech teams bother to verify SSH server fingerprints on first connection. I know because the ones in our documentation were wrong for a time and no one queried the matter before I noticed it when reviewing that documentation while adding further details. I doubt they'd even notice the fingerprint changing unexpectedly, even though that could mean something very serious is going on.
Plus setting up letsencrypt isn't really that easy. Last time it was failing because I had disabled HTTP on port 80 entirely on my server… but letsencrypt uses that to verify that my website has the magic file. So I had to make a script to turn it on for 5 minutes around the time when the certificate gets renewed. -_-'
None of this is easy or quick, and people have other stuff to do than to worry about completely hypothetical attacks on their blog.
So, instead, use the other authentication methods. For example, DNS.
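A rough sketch of that DNS route (hypothetical domain): the CA hands you a token to publish as a TXT record under _acme-challenge, so port 80 never has to be open:

    certbot certonly --manual --preferred-challenges dns -d blog.example.org
    # check the record is visible before telling certbot to continue:
    dig +short TXT _acme-challenge.blog.example.org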
Google "isp injecting ads", well most of it is from 10 years ago - but that is because now we have TLS everywhere.
And it is not an attack on your blog but on the readers of your blog; your blog gets the blame, of course, if they get infected by malware or see adult ads.
It's nice that you can now get free TLS certs without having to resort to shady outfits like StartSSL. This allows any website to easily move to HTTPS, which has basically eliminated sensitive data (including logins) being sent over unencrypted connections.
On the other hand, this reinforces the inherently broken trust model of TLS certificates, where any certificate authority (and a lot of them are controlled by outright hostile entities) has the ability to issue certificates for your domain without your involvement. Yes, there are tons of kludges to try and mitigate this design flaw (CAA records, certificate transparency) but they don't 100% solve the issue. If not for LE, perhaps there would have been more motivation by now to implement support for a saner trust mechanism that limits certificate issuance to those entities who actually have the authority to decide over domain ownership, like with DNSSEC+DANE.
I'm also concerned with the (intentional) lack of backwards compatibility with moving sites to TLS, which is not just a one time TLS on/off issue but a continual deprecation of protocols and ciphers. This is warranted for things that need to be secure like banking or email but shouldn't really be needed to view a recipe or other similar static and non-critical information. Concerns about network operators inserting ads or other shit are better solved with regulation.
I would argue that LE has only highlighted these problems, and now actually causes people with power to worry about them.
There is a chance we would have gotten something better than TLS if the lack of LE kept certificates a pain. But that seems unlikely to me. Because the fundamental problem remains hard.
Does anyone remember how we renewed certificates before LE? Yeah, private keys were being sent via email as zip attachments. That was a security charade. And as far as I know, it was the norm among CAs (I remember working with several).
Thank you Let's Encrypt.
I generate the new key on the server as part of the csr creation process. I run it on the server itself so the key never leaves the server's internal storage.
The CSR gets sent off to GlobalSign (via a third party because #largeCompany), then a couple of days later I get the certificate back and apply it to the server.
Would love to use ACME instead, and store the key in memory (ramdrive etc), but these are the downsides of working for a company less agile than an oil-tanker
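A rough sketch of that flow (hypothetical names): the private key is generated on the server and stays there, only the CSR is handed to the CA or reseller:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout www.example.com.key \
      -out www.example.com.csr \
      -subj "/CN=www.example.com"
    # send www.example.com.csr off for signing; www.example.com.key never leaves this machine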
(I was only slightly involved with a couple of TLS certificates before then, and certainly they enforced the CSR approach, but maybe such terrible practice was more common in the real world that I knew.)
But the point still stands: the whole process was a nightmare, no automation, error prone, renewal easily forgettable...
The large companies could have had a staff to manage all that. I was just a solo developer managing my own projects, and it was a hassle.
You can sort of hack this together with scripting via things like Terraform, cron jobs, or whatever. But it gets ugly, and the failure mode is that your site stops working if for whatever reason the certificates fail to renew (I've had this happen), which, courtesy of really short certificate lifetimes, is of course often.
So, I paid the wildcard certificate tax a few days ago so I don't have to break my brain over this. A couple of hundred. Makes me feel dirty but it really isn't worth days of my time to dodge this for the cost of effectively < 2 hours of my time in $. Twenty minute job to issue the csr, get the certificate and copy it over to the relevant load balancers.
Internally, perhaps. And also on a small scale maybe with CA "resellers" who were often shady outfits which were in it for a quick buck and didn't much care about the rules.
But as a formal issuance mechanism I very much doubt it. The public CAs are prohibited from knowing the private key for a certificate they issue. Indeed, there was a fun incident some years back where a reseller (who had been squirrelling away such private keys) just sent them all to the issuing CA, apparently thinking this was some sort of trump card - and so the issuing CA just... revoked all those certificates immediately, because they're prohibited from knowing those private keys.
The correct thing to do, and indeed the thing ACME is doing, although not the interesting part of the protocol, is to produce a Certificate Signing Request. This data structure goes roughly as follows: Dear Certificate Authority, I am Some Internet Name [and maybe more than one], and here are some other facts you may be entitled to certify about me. You will observe that this document is signed, proving I know a Private Key P. Please issue me a certificate, with my name and other details, showing that you associate those details with this key P which you don't know. Signed, P.
This actually means (with ACME or without) that you can successfully air gap the certificate issuance process, with the machine that knows the private key actually never talking to a Certificate Authority at all and the private key never leaving that machine. That's not how most people do it because they aren't paranoid, but it's been eminently possible for decades.
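A quick way to sanity-check such a CSR before sending it off (assuming the hypothetical file from the sketch above); -verify checks the self-signature, i.e. proof that the requester holds the matching private key without revealing it:

    openssl req -in www.example.com.csr -noout -text -verify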
That sounds like a fun story. I'd love to read the post-mortem if it's public.
https://www.theregister.com/2018/03/01/trustico_digicert_sym...
[Edited: I originally said Trustico was out of business, but astoundingly the company is still trading. I have no Earthly idea why you would pay incompetent people to do something that's actually zero cost at point of use, but er... OK]
The claims from Trustico are very silly. They want their customers to believe everything is fine, and yet the only possible way for this event to even occur is that Trustico are at best incompetent. To me this seems like one of those Gerald Ratner things where you make it clear that your product is garbage and so, usually the result is that your customers won't buy it because if they believe you it's garbage and if they don't believe you they won't want your product anyway - but whereas Ratner more or less destroyed a successful business, Trustico is still going.
So funny that all of their security, vetting and endless verifications rest on a single passport photo sent over email to this day.
None of them I have ever heard of. Whatever that may mean.
Edit: On the whole list https://www.internethalloffame.org/inductees/all/ I spotted maybe seven names. Still a single digit percentage.
...but you’re missing the point of my comment, which is simply to acknowledge and honor (my late dear friend) Peter.
My point was not to criticize the achievements of the work of any of those people.
1. I was not actively aware that this hall exists
2. I am mostly critical of such awards in general. I have noted that several companies receiving the "Export company of the year" award here in this country (doesn't matter which one) have gone bust a couple of years later. I received the "hacker of the year" award at my workplace some years ago. It was supposed to hang with all previous awards in the cafeteria. I did not like that and "forgot" it at home. I quit the company a year later anyway.
Edit: Forgot that I worked for the "software product of the year" twice in my life. One needed heavy, painful architectural rework 3 years later. The other was Series 60. People old enough know how that went, killed a global market leader.
To explain the issue with HTTPS certificates simply: issuance is automated and rests on the security of DNS, which would be secured via DNSSEC, which most domains do not implement.
Trouble is even CAA entries won't help here (if you're spoofing A records, you can spoof CAA records too). DNSSEC might help against this, I don't know enough about DNS though.
Another type of attack is an IP hijack, which allows you to pass things like HTTP authentication (the normal ACME method), but won't bypass CAA records. You can't use Let's Encrypt to issue a cert - even if you own the IP address my A or AAAA records point to - if my CAA doesn't list letsencrypt as an approved issuer.
Another option is manual certificate issuance with a CA whose security model is better than yours, but not implementing DNSSEC leaves you open to other attacks.
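For reference, a CAA policy is just a DNS record that can be inspected directly (hypothetical domain):

    dig +short CAA example.com
    # an answer like  0 issue "letsencrypt.org"  tells every other CA to refuse issuance
    # for this domain, regardless of who controls the web server's IP address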
Generally speaking, setting up DNSSEC is probably a bad move for most sites.
Basing it on an open protocol, so it doesn't become a single point of failure, was a clever move that allows the idea to survive the demise of any single organization.
May there be many more such anniversaries.
But I guess automation and standards had to catch up in order for LE to securely set up their CA.
That said, I’m wondering why there aren’t 10 or so popular alternatives to LE, since that seems to be the landscape for domain registrars, for example.
In 2024, if your PaaS does not have automated encryption for deploys, I will never use it.
Do you really check your site has an EV every single time? Especially now browsers treat them the same?
If not, how do you know someone hasn't got a DV certificate for this specific visit?
Scott Helme has a thorough takedown of them, and that was 7 years ago when they were still a thing.
https://scotthelme.co.uk/are-ev-certificates-worth-the-paper...
EV and OV certificates, when they include DNS names, still require domain control validation anyway.
EV certs are generally manually verified. This means there’s a human factor in the middle of this process. DV certs can, and should, be fully automated.
Multi perspective validation is about to be required too: https://cabforum.org/2024/11/07/ballot-smc010-introduction-o...