https://arstechnica.com/security/2023/12/just-about-every-wi...
https://arstechnica.com/security/2024/07/secure-boot-is-comp...
SecureBoot would have been better off with certificates that never expire. That's not a problem in cases where users (or organisations) manage their own hosts, since they can just change the certificate when the previous one is no longer valid or has been leaked.
In practice, SecureBoot rolled out with a single CA for everyone, one controlled by Microsoft. This provides little value for anyone—restricting your computer to "only boot stuff signed by a third party" doesn't really protect from attackers in any way. They'll just boot into one of the many programs signed by MS. But because a single CA is used globally, you want expiration so as to roll them over every few years. But remember: there's no way to have a reliable clock. And so, we have the mess that we have.
The vast majority of Linux users could disable SecureBoot tomorrow and their system's security would not change in any meaningful way.
For example, you want your users' laptop hard drives to be encrypted, but you also have users who regularly forget their passwords? With BitLocker their hard drive can decrypt itself, so they only need to remember their Windows login, which you can reset remotely.
You give laptops to your field workers, who have full physical access and would love to play video games or access Netflix when work puts them in a hotel overnight with nothing to do? With Secure Boot you can keep your precious spreadsheets locked down, even if they're willing to boot from USB sticks or swap the hard drive.
And perhaps most importantly, it has "secure" in the name. So the corporation's IT security auditors will like to see it turned on even if they have only a vague understanding of what it does.
Anyway, it's good to hear that I probably don't have anything to worry about.
I'm not understanding how it's the desktop Linux users who have to deal with poor security.
On Linux Mint, if you run a program without granting any extra permissions it can: record your mic, record your camera, record your screen, steal your browser history/cookies/passwords, alias sudo or show a fake update dialog to collect the user's password and elevate to root, watch for a copied crypto address and replace it with a similar-looking one owned by the attacker, encrypt all of your files, send any sensitive pictures or documents to the attacker, etc.
A fifty-year-old file permission model is not good enough to combat the security problems modern users encounter.
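To make that concrete, here's a harmless sketch (the "profile" path is made up) showing why classic Unix permissions don't help here: they separate users from each other, not applications from each other, so any program you run can read everything you own.

```shell
# simulate a browser profile owned by the current user
mkdir -p "$HOME/demo-profile"
echo "session-cookie=abc123" > "$HOME/demo-profile/cookies.txt"
chmod 600 "$HOME/demo-profile/cookies.txt"   # "private" to the user

# any other program you run (a game, a script, a "free utility") executes
# as the same user, so mode 600 does nothing to stop it:
cat "$HOME/demo-profile/cookies.txt"
```

This is exactly the gap that per-app sandboxing (Flatpak portals, SELinux/AppArmor profiles, mobile-style permission prompts) is meant to close.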
Lot of good that will do you when Linux users will curl | bash most any garbage.
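For what it's worth, the safer habit is cheap: download first, read what you're about to run, and check it against a checksum published out-of-band if the project provides one. A sketch (the URL and checksum are placeholders):

```shell
# instead of: curl -fsSL https://example.com/install.sh | bash
curl -fsSL https://example.com/install.sh -o install.sh   # placeholder URL

# actually read what you're about to execute
less install.sh

# verify against a SHA-256 published somewhere other than the download host;
# sha256sum -c exits non-zero if the file was tampered with
echo "<published-sha256>  install.sh" | sha256sum -c -

bash install.sh
```

It doesn't protect against a malicious upstream, but it does stop the "the server was compromised yesterday" class of attack that a blind pipe into bash cannot.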
The Windows NT file permission system is far more advanced (and I'm not even including AppLocker or software whitelisting).
> thinking about whom to trust and primarily sourcing their software from the distro package manager
So "app store" is the wave of the future?
The days of Linux users using magic healing crystals to protect themselves from malware are long over. Most malware these days targets Linux servers. If you think chmod u+x is what is preventing your computer from catching digital AIDS I have news for you.
Same for Windows users who zoom through UAC prompts without reading.
> The Windows NT file permission system is far more advanced (and I'm not even including AppLocker or software whitelisting).
...and much more convoluted and easy to break, while most systems allow unfettered access to everything anyway. On the other hand, SELinux and AppArmor have provided transparent system isolation for decades now, and they are practically invisible to the user. If you want even more security, you can install an immutable distro.
> So "app store" is the wave of the future?
App stores are capitalist versions of software repositories, which have been around for more than 20 years now. Plus, these repositories are generally well vetted and observed by their maintainers.
> Most malware these days targets Linux servers. If you think chmod u+x is what is preventing your computer from catching digital AIDS I have news for you.
No, instead many sysadmins who know what they're doing depend on layered security provided by the Linux kernel and its surrounding tooling: containers, cgroups, namespaces, SELinux/AppArmor, package integrity checks, multiple limited users (with reduced capabilities as well), UNIX file permissions, and more.
If you think Linux only has file permissions for system security, I have news for you.
UAC is not a security boundary, so it is not relevant when talking about security.
>SELinux and AppArmor already provide transparent system isolation for decades
Only if they are set up, and most Linux distros only confine individual apps. So a brand-new app can still run wild.
>you can install an immutable distro.
Even immutable distros let people download new software off the internet and run it.
>Plus, these repositories are generally well-vetted and observed by their maintainers.
This has been shown to be false in practice due to the xz backdoor. Maintainers do not actually vet anything other than that the code is coming from the developer. Which is also what app stores do.
That is their excuse, but you don't seem to realize that this only makes it worse, because it means there is no boundary at all.
>If they are setup and most Linux distros only limit individual apps. So a brand new app can still run wild.
New apps will either be installed from a trusted repository (often with a MAC profile) or sandboxed by default from the Flatpak/Snap store. You don't seem to understand that the entire install process is different: on Linux you don't get your software from random sites found on Google between malware ads.
>This has been shown to be false in practice due to the xz backdoor
XZ has nothing to do with a lack of vetting, and even if it did, that would be an argument for vetting, because it got caught in testing.
This is absolutely false, it was not caught in any sort of regular testing whatsoever.
It was caught by - of all people - a Microsoft employee who noticed SSH logins were taking a split second too long. Not distro packagers. The packages were already staged in the testing branches of the distros they were targeting and could have easily made it into the LTS versions had this one curious MS guy not noticed.
LTS doesn't mean set in stone. Debian publishes fixes within 24 hours in most cases, even if the upstream doesn't provide any, plus some packages come with Debian's own security patches on top of upstream patches.
The Linux security landscape is very different from Windows' central "we'll patch it when we patch it" stance.
>The packages were already staged in the testing branches
Thanks for making my argument for me. It was also literally caught in (Debian) TESTING.
It does not matter who he works for, unless you believe a corporation owns its employees' time and achievements 24/7.
He noticed something off, tested it, looked at the source code (impossible on Windows ;) and reported the issue he found, which got fixed quickly and transparently (also impossible on Windows). Again, that is how FOSS should work and why it's superior to proprietary software.
Earlier versions of Windows were a much bigger threat to adoption of Windows 8 than Linux was.
Most people experience this via Windows, which automatically sets up that chain of trust so that you can know you've not had a rootkit injected somewhere. In other cases it may be Linux or something more exotic booting, and it requires some management by whoever is operating the device, but that comes with the benefit of knowing that if one of our devices has got to the point of decrypting its storage we can be reasonably confident that it hasn't been tampered with, and so we can trust it to send good data.
Code has bugs. There are any number of critical vulnerabilities in Linux, Windows, and macOS that have allowed bypass of all security features - does that mean all security features are security theatre?
The cost in terms of freedom/flexibility and reliability/longevity is very high. But we're told this is necessary; it's the only way to guarantee the security of the poor user. But if in practice the security wasn't actually guaranteed, for most motherboards over most years, due to pretty big dumb oversights ... was it worth the extreme costs? The cost of losing compatibility with older or newer software/hardware, of losing convenient repairs and recovery? Nope.
You sold your soul for "guaranteed security" of securing the entire boot and runtime from the lowest level hardware up ... and didn't really get it anyway.
They could've used a time stamping service to include a signed timestamp in the binary to compare the expiry date against, but that still leaves the system unbootable after the time stamping certificate expires in the far future.
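The timestamping scheme being described is essentially RFC 3161. As a sketch, here's how a timestamp request for a binary is built with stock openssl; the TSA URL in the comment is illustrative, and the actual network round-trip to a timestamping authority is omitted:

```shell
# hash the binary and wrap the digest in an RFC 3161 timestamp request
printf 'pretend this is a signed bootloader' > shim.efi
openssl ts -query -data shim.efi -sha256 -no_nonce -out shim.tsq

# a TSA (e.g. http://timestamp.digicert.com, illustrative) would return a
# signed reply embedding the time; inspect the request we built locally:
openssl ts -query -in shim.tsq -text
```

The firmware would then compare the signed time in the reply against the certificate's validity window, rather than trusting a local clock it doesn't have.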
Besides, a hacking group powerful enough to steal Microsoft's Secure Boot private key will likely be able to steal a timestamping private key from a certificate authority as well.
With custom hierarchies, it's a bit more compelling. But it's a lot of work to maintain.
EVERYBODY wants that! And I mean ABSOLUTELY EVERYBODY! Updates are now mandatory everywhere, in both Windows and Linux, and GPU manufacturers would LOVE to make the old cards obsolete, even if technically the new cards aren't much better.
So expect to see the old certificate invalidated quickly and automatically, in the name of security, of course!
Secure Boot is a fine thing if you're a huge corporation and want to harden laptops against untrustworthy employees, or you've got such a huge fleet of servers they go missing despite your physical security controls, or you're making a TiVo style product you want to harden against the device owners. But when the user is the device owner? Doesn't do much.
In the end what matters is always money. Always.
What brings more money? TiVo or buyer-owned device? You think 5% of technically competent potential buyers would make a difference when the 95% illiterate users will just replace the product no questions asked?
It started as a fight against piracy and half-competent users that break their own systems (and the company's systems too, like you said). But slowly the industry sees that there's more money to be made if the same technology can provide a believable argument in right-to-repair and planned-obsolescence court cases.
[1] https://github.com/melontini/bootloader-unlock-wall-of-shame
The reality is that PCs address the needs of a fundamentally different market than "TiVo"s or even mobile phones. While most could, and probably should, be using Secure Boot, no one seems eager to take away the option to disable it.
Microsoft perennially makes small movements in that direction. Reduced control over the OS and attempts to exert control over the software ecosystem. I assume they're still trying to push consumers towards Windows S mode devices.
Kernel mode anticheat that won't run on systems that aren't attested. Streaming platforms that won't serve up decent quality streams. Even if you don't notice the pot being boiled there are those of us that do.
Tangent: To me that sounds like a reference to the "frog boiling" story. This has been debunked [1], a healthy frog will not remain in a gradually heated pot of water. We need a better analogy for this.
Hello from 2013, and here you go!
https://wiki.ubuntu.com/ARM/SurfaceRT#Secure_Boot
https://openrt.gitbook.io/open-surfacert/common/boot-sequenc...
Plus, tablets are not PCs. People are happy with tablets and phones as locked devices. They are not happy with PCs as locked devices, and have not accepted such control, maybe outside the macOS ecosystem.
That said, not all general purpose computing devices are useful for all things. For example: you can, but probably aren't, going to use a mobile phone for a server. On the flip side: you can use a server to do your banking, but most people won't find it as convenient as using their phone for banking (even though banking from a stationary computer is far more convenient than it was in the days when you had to go to a branch). Likewise: mobile devices can be used for content creation, but I doubt that you would find many office workers jumping at the opportunity to use them in the place of a desktop or laptop. On the other hand: someone who is on the road a lot would probably appreciate their portability.
This sentence just makes me so sad
Throw in jail time for decision makers. Let's make markets honest with real incentives.
Do you own a phone that's easily rooted? Who else does?
What about your WiFi routers? Internet modem? AirTags? Smart home appliances?
Still, my point was not about running a rooted phone with an unlocked bootloader (the equivalent of Secure Boot disabled on a PC), but whether the possibility of doing so factors into your purchasing decision.
Full disk encryption on a device you have full control of is sufficient.
Containerization helps if you install untrusted apps.
Not having root helps if you install untrusted apps (either vulnerabilities/exploitable or malicious) as root.
Don't trust containers to have the same level of isolation as a VM.
It's still a problem if manufacturers force ExploitationOS on the device I bought, but it's not-as-bad when everyone can collaborate to disable the exploitation-parts.
It was even explicitly designed to prevent "tivoization." https://www.gnu.org/philosophy/tivoization.en.html
One just has to use it to prevent their software from being locked away from the end user
It's the same theory behind the issues with the Office toolbar. They found that people only use 5% of the buttons, but there is almost zero overlap among millions of users.
It's one of my interview questions these days: what device will I be issued?
If it's a Chromebook, I know that no matter what they say, they don't really care about the position.
On the other hand, I've seen execs/directors who barely turn on their PC get $10k monster laptops because they are considered important, while staff get recycled garbage equipment or a $1000-max-per-person equipment budget.
A decent Secure Boot implementation together with a BIOS/EFI password at least makes the life of US CBP or similar thugs wanting to use my devices against me much more difficult.
And no, that's not an imaginary threat, certainly not under this administration which has come under fire multiple times for first detaining and then deporting random tourists.
Certainly. Just one problem: Modern consumer BIOS interfaces are graphical and your GPU is off.
If you let arbitrary code run before you start checking, you don't have a secure boot chain.
Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put *asterisks* around it and it will get italicized.
Linux and Secure Boot certificate expiration - https://news.ycombinator.com/item?id=44601045 - July 2025 (265 comments)
https://techcommunity.microsoft.com/blog/windows-itpro-blog/...
worked perfectly on a fully updated Windows 11 24H2 installed on an old Surface Pro LTE i5-7300U that is perhaps unlikely to receive another firmware update...
So, dumb question: If the expiry dates are not enforced, why rotate the certificates at all? The only consequences of Microsoft introducing new keys seems to be that compatibility with old software and systems will over time become worse. But what's the upside - or the actual threat model this is defending against?
- on a mobo the motherboard provider signs the PK
- there's only one PK
- the PK signs one or more KEK, like "Microsoft Corporation UEFI CA 2011"
If that understanding is correct, can I add the new "Microsoft Corporation UEFI CA 2023" myself (the one that expires in 2038; I think that's its name) the same way I can enroll new keys in the dbx (say, my own signed keys)? If I add the new Microsoft key myself, shall it be as a KEK or in the dbx?
Will motherboard manufacturers release new firmware with the new Microsoft key already included? In that case, shall it be a KEK?
Basically, instead of thinking, as TFA suggests, "Let's not worry about anything, everything shall be fine and keep working because key expiration dates aren't enforced", can I proactively enroll the new Microsoft key myself?
P.S.: I don't drink the SecureBoot kool-aid, but something has to be said about having a Linux unikernel (kernel+initramfs) signed and enforced by SecureBoot. And SecureBoot does at least somehow work. Source: I modified one bit of my kernel, got a SecureBoot error, and the kernel refused to boot. You can try it for yourself.
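That one-bit experiment is easy to reproduce in userspace with a throwaway key. This is only a stand-in for the real Secure Boot toolchain (sbsign/sbverify and the db key), but it shows the same signature check at work:

```shell
# throwaway keypair standing in for a Secure Boot signing key
openssl genrsa -out sb.key 2048 2>/dev/null
openssl rsa -in sb.key -pubout -out sb.pub 2>/dev/null

# "sign" a fake kernel image
printf 'pretend this is vmlinuz' > kernel.img
openssl dgst -sha256 -sign sb.key -out kernel.sig kernel.img
openssl dgst -sha256 -verify sb.pub -signature kernel.sig kernel.img   # Verified OK

# change a single character; verification now fails,
# just as the firmware refuses to boot a modified kernel
printf 'Pretend this is vmlinuz' > kernel.img
openssl dgst -sha256 -verify sb.pub -signature kernel.sig kernel.img || echo "boot refused"
```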
As well as the new root certificates in db, which are used to decide whether signed code will execute or not, there will be a new signed Microsoft key for KEK. This isn't involved in the boot process, but is required for Microsoft to be able to sign further revocation updates. The article is discussing the db case, and if you want to ensure things signed only with the new key will boot on your system, you would want to add them to db.
Microsoft can sign a db update themselves (since there's a valid Microsoft key in KEK and db updates need to be signed with a key in KEK), but KEK updates need to be signed with PK. Microsoft doesn't own PK, so adding the new KEK requires the system vendor produce an update signed with their PK.
If you are in a position to enroll the new keys then you should enroll the new db keys if you want new binaries to be guaranteed to boot, and add the new KEK if you want to be able to apply future Microsoft-signed dbx updates.
Users should absolutely be able to install the db update by hand if they choose to, but it's late and I don't have the commands to hand. I'll write another post on this soon.