GnuPG for all its flaws has a copyleft license (GPL3) making it difficult to "embrace extend extinguish". If you replace it with a project that becomes more successful but has a less protective (for users) license, "we the people" might lose control of it.
Not everything in software is about features.
Seems like a legitimate difference of opinion. The researcher wants a message with an invalid format to return an integrity-failure message. Presumably the GnuPG project thinks that would be better handled by some sort of bad-format error.
The exploit here is a variation on the age old idea of tricking a PGP user into decrypting an encrypted message and then sending the result to the attacker. The novelty here is the idea of making the encrypted message look like a PGP key (identity) and then asking the victim to decrypt the fake key, sign it and then upload it to a keyserver.
Modifying a PGP message file will break the normal PGP authentication[1] (that was not acknowledged in the attack description). So here is the exploit:
* The victim receives an unauthenticated/anonymous (unsigned or with a broken signature) message from the attacker. The message looks like a public key.
* Somehow (perhaps in another anonymous message) the attacker claims they are someone the victim knows and asks them to decrypt, sign and upload the signed public key to a keyserver.
* The victim sees nothing wrong with any of this and actually does what the attacker wants, ignoring the error message about the bad message format.
So this attack is also quite unlikely. Possibly that affected the decision of the GnuPG project to not change behaviour in this case, particularly when such a change could possibly introduce other vulnerabilities.
[1] https://articles.59.ca/doku.php?id=pgpfan:pgpauth
Added: Wait. How would the victim import the bogus PGP key into GPG so they could sign it? There would normally be a preexisting key for that user, so the bogus key would for sure fail to import. It would probably fail anyway. It will be interesting to see what the GnuPG project says about this in their response.
(1) Rewrites the ciphertext of a PGP message,
(2) introduces an entirely new PGP packet
(3) that flips GPG into DEFLATE compression handling,
(4) and then reroutes the handling of the subsequent real message
(5) into something parsed as a plaintext comment.
This happens without any security warning; all you apparently get is a zlib error.
In the scenario presented at CCC, they used the keyserver example to demonstrate plaintext exfiltration. I kind of don't care. It's what's happening under the hood that's batshit; the "difference of opinion" is that the GnuPG maintainers (and, I guess, you) think this is an acceptable end state for an encryption tool.
The problem with PGP is that it's a Swiss Army Knife. It does too many things. The scissors on a Swiss Army Knife are useful in a pinch if you don't have real scissors, but tailors use real scissors.
Whatever it is you're trying to do with encryption, you should use the real tool designed for that task. Different tasks want altogether different cryptosystems with different tradeoffs. There's no one perfect multitasking tool.
When you look at the problem that way, surprisingly few real-world problems ask for "encrypt a file". People need backup, but backup demands backup cryptosystems, which do much more than just encrypt individual files. People need messaging, but messaging is wildly more complicated than file encryption. And of course people want package signatures, ironically PGP's most mainstream usage, ironic because it relies on only a tiny fraction of PGP's functionality and still somehow doesn't work.
All that is before you get to the absolutely deranged 1990s design of PGP, which is a complex state machine that switches between different modes of operation based on attacker-controlled records (which are mostly invisible to users). Nothing modern looks like PGP, because PGP's underlying design predates modern cryptography. It survives only because nerds have a parasocial relationship with it.
I really would like to replace PGP with the "better" tool, but:
* Using my Yubikey for signing (e.g. for git) has a better UX with PGP instead of SSH
* I have to use PGP to sign packages I send to Maven
Maybe I am a nerd emotionally attached to PGP, but after a year signing with SSH, I went back to PGP and it was so much better...
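For comparison, the two setups, roughly (the key ID and paths are placeholders, not my actual config):

    # GPG signing, with the key on the YubiKey's OpenPGP applet:
    git config --global user.signingkey <KEYID>
    git config --global commit.gpgsign true

    # SSH signing (Git 2.34+), the setup I eventually abandoned:
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub

The SSH variant also needs an allowed_signers file configured before `git log --show-signature` can verify anything, which is part of the UX friction.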
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
Which jurisdiction are you on about? [1] Pick your poison.
For example, the UK has a law forcing suspects to cooperate. This law has been used to convict suspects who weren't cooperating.
NL does not, but police can use force to have a suspect unlock a device using finger or face.
The available open-source options come nowhere close to the messaging security that Signal/Whatsapp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has criminalized them operating with a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
And there are lots of tools for file encryption anyway. I have a bash function using openssh, sometimes I use croc (also uses PAKE), etc.
I need an alternative to "gpg --encrypt --armor --recipient <foo>". :)
That's literally age.
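Roughly, as an untested sketch (the recipient string and file names are placeholders):

    # gpg --encrypt --armor --recipient <foo> becomes:
    age -a -r age1<recipient> -o msg.txt.age msg.txt

    # and the recipient decrypts with their identity file:
    age -d -i key.txt msg.txt.age > msg.txt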
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
This statement is generic and misleading. Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine AND desired.
> Would "fetch a short-lived age public key" serve your use case?
Sadly no.
> This statement is generic and misleading.
It may be generic, but it's not misleading.
> Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine.
What exactly do you mean by "long-lived"?
The "lifetime" of a key being years (for a long-lived backup) is less important than how many encryptions are performed with said key.
The thing you don't want is to encrypt 2^50 messages under the same key. Even if it's cryptographically safe to do that, any post-compromise key rotation will be a fucking nightmare.
The primary reason to use short-lived public keys is to limit the blast radius. Consider these two companies:
Alice Corp. uses the same public key for 30+ years.
Bob Ltd. uses a new public key for each quarter over the same time period.
Both parties might retain the secret key indefinitely, so that if Bob Ltd. needs to retrieve a backup from 22 years ago, they still can.
Now consider what happens if both of them lose their currently-in-use secret key due to a Heartbleed-style attack. Alice has 30 years of disaster recovery to contend with, while Bob only has up to 90 days.
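A sketch of what Bob Ltd.'s side might look like with age (file names hypothetical):

    age-keygen -o keys/2025-Q3.txt        # fresh keypair each quarter
    grep 'public key:' keys/2025-Q3.txt   # publish only this recipient line
    # this quarter's backups are encrypted only to the current recipient:
    tar cz /data | age -r age1<current-recipient> -o backup-2025-Q3.tar.gz.age

Old identity files can go into cold storage; only the current one ever needs to touch a networked machine.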
Additionally, file encryption, backups, and archives typically use ephemeral symmetric keys at the bottom of the protocol. Even when a password-based key derivation function is used (and passwords are, for whatever reason, reused), the password hashing function usually has a random salt, thereby guaranteeing uniqueness.
The idea that "backups" magically mean "long-lived" keys are on the table, without nuance, is extremely misleading.
> > Would "fetch a short-lived age public key" serve your use case?
> Sadly no.
*shrug* Then, ultimately, there is no way to securely satisfy your use case.
The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days. Rotation only helps if only the current operational key is exposed, which is an optimistic threat model you did not specify.
Additionally, your symmetric key point actually supports what I said. If data is encrypted with ephemeral symmetric keys and the asymmetric key only wraps those, the long-lived asymmetric key's exposure does not enable bulk decryption without obtaining each wrapped key individually.
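To make that structure concrete, a rough envelope-encryption sketch with openssl (file names and the RSA key are hypothetical, and a real design would use an AEAD and authenticate the wrapped blob):

    key=$(openssl rand -hex 32)   # ephemeral data key, used for exactly one backup
    iv=$(openssl rand -hex 16)
    openssl enc -aes-256-ctr -K "$key" -iv "$iv" -in backup.tar -out backup.tar.enc
    # only the small data key is wrapped under the long-lived public key:
    printf '%s:%s' "$iv" "$key" |
      openssl pkeyutl -encrypt -pubin -inkey backup_rsa.pub -out data.key.wrapped

Compromising the long-lived private key alone gets an attacker nothing until they also obtain each per-backup wrapped key.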
> "There is no way to securely satisfy your use case"
No need to be so dismissive. Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model. Real-world systems validate this: SSH host keys, KMS master keys, and yes, even PGP, all use long-lived asymmetric keys for confidentiality in non-ephemeral contexts.
And to add to this, incidentally, age (the tool you mentioned) was designed with long-lived recipient keys as the expected use case. There is no built-in key rotation or expiry mechanism because the authors considered it unnecessary for file encryption. If long-lived keys for confidentiality were inherently problematic, age would be a flawed design (so you might want to take it up with them, too).
In any case, yeah, your point about high-fan-out keys with a large blast radius is correct. That is different from "long-lived keys are bad for confidentiality" (see above regarding age).
I wrote this to answer this exact question last year.
Keys (even quantum safe) are small enough that having one per application is not a problem at all. If an application needs multi-context, they can handle it themselves. If they do it badly, the damage is contained to that application. If someone really wants to make an application that just signs keys for other applications to say "this is John Smith's key for git" and "this is John Smith's key for email" then they could do that. Such an application would not need to concern itself with permissions for other applications calling into it. The user could just copy and paste public keys, or fingerprints when they want to attest to their identity in a specific application.
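For example, a per-application key is already a one-liner, with no central keyring involved (paths and comment are made up):

    ssh-keygen -t ed25519 -f ~/.config/myapp/id_ed25519 -C "john smith's key for myapp"
    cat ~/.config/myapp/id_ed25519.pub   # paste wherever that app needs to trust you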
The keyring circus (which is how GPG most commonly intrudes into my life) is crazy too. All these applications insist on connecting to some kind of GPG keyring instead of just writing the secrets to the filesystem in their own local storage. The disk is fully encrypted, and applications should be isolated from one another. Nothing is really being accomplished by requiring the complexity of yet another program to "extra encrypt" things before writing them to disk.
I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
For most apps on non-mobile devices, there isn't filesystem isolation between apps. Disk/device-level encryption solves for a totally different threat model; Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
> I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
Basically everything in PGP/GPG predates the existence of "corporate security circles".
If you use cryptographic command line tools to verify data sent to you, be mindful of what you are doing and make sure to understand the attacks presented here. One of the slides is titled "should we even use command line tools", and yes, we should, because the alternative is worse; but we must be diligent in treating all untrusted data as adversarial.
Handling untrusted input is core to that.
In any case, I figured storing an SSH key in 1Password and using the integrated SSH agent with my ssh client and git was pretty nice and secure enough. The fact that the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but the hassle of setting up a new Yubikey was not quite worth it.
I’m sure 1Password is not much better than having a passphrase-protected key on disk. But it’s a lot more convenient.
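The client-side wiring is one ssh_config directive (the socket path shown is the macOS one, as far as I recall; check 1Password's docs for your platform):

    # ~/.ssh/config
    Host *
      IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"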
I’m still working through how to use this but I have it basically setup and it’s great!
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
I see this sentiment a lot, but you later hint at the problem. Any "replacement" needs to solve for secure key distribution. Signing isn't hard, you can use a lot of different things other than gpg to sign something with a key securely. If that part of gpg is broken, it's a bug, it can/should be fixed.
The real challenge is distributing the key so someone else can verify the signature, and almost every way to do that is fundamentally flawed, introduces a risk of operational errors or is annoying (web of trust, trust on first use, central authority, in-person, etc). I'm not convinced the right answer here is "invent a new one and the ecosystem around it".
This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.
This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"
(We’re also long past the point where key distribution has been a significant component of the PGP ecosystem. The PGP web of trust and original key servers have been dead and buried for years.)
What do you mean? Web of Trust? Keyservers? A combination of both? Under what use case?
If you have a website, put your keys in a dedicated page and direct people there
If you are in an org there can be whatever kind of centralised repo
Add the hashes to your email signature and/or profile bios
There might be a nice uniform solution using DNS and derived keys, like certificate chains? I am not sure, but I think it might not be necessary.
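For what it's worth, lookups along these lines already exist (the address and record name are placeholders):

    # HTTPS-based: gpg can locate keys via the mail domain (Web Key Directory):
    gpg --locate-keys someone@example.com

    # DNS-based also exists (RFC 7929 OPENPGPKEY records):
    dig +short OPENPGPKEY <sha256-of-localpart>._openpgpkey.example.com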
As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
Or at least, more realistically, to a few nerds. I think I signed 3-4 people's keys.
The process had - as they say - a low WAF.
GPG is terrible at that.
0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it can see Alice's signature.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.
After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.
I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.
The good way used to be using the path finder, some random website maintained by some random guy that disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hopes somebody you know signed one of those, and so on.
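Spelled out, the bad way looks something like this (key IDs are placeholders):

    gpg --verify release.tar.gz.sig release.tar.gz   # note the signing key ID
    gpg --recv-keys <KEYID>                          # fetch the signer's key
    gpg --list-sigs <KEYID>                          # list who signed that key...
    # ...then --recv-keys each of those and repeat, hoping to hit a key you trust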
And GPG is terrible at dealing with that, it hates having tens of thousands of keys in your keyring from such experiments.
GPG never grew into the modern era. It was made for people who mostly know each other directly. Finding a way to verify the keys of random free software developers isn't something it ever did well.
I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.
I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would known Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.
Archive link: https://web.archive.org/web/20251227174414/https://www.gnupg...
(PGP/GPG are of course hamstrung by their own decision to be a Swiss Army knife/only loosely coupled to the secure operation itself. So the even more responsible thing to do is to discard them for purposes that they can’t offer security properties for, which is the vast majority of things they get used for.)
(I think you already know this, but want to relitigate something that’s not meaningfully controversial in Python.)
(I think you already know this as well)
This is exactly analogous to the Web PKI, where you trust CAs to identify individual websites, but the websites themselves control their keypairs. The CA's presence intermediates the trust but does not somehow imply that the CA itself does the signing for TLS traffic.
Again, I must emphasize that this is identical in construction to the Web PKI; that was intentional. There are good criticisms of PKIs on grounds of centrality, etc., but “the end entity doesn’t control the private key” is facially untrue and sounds more like conspiracy than anything else.
On my web server, where the certificate is signed by Let's Encrypt, I do have a file which contains a private key. On PyPI there is no such thing. I don't think the parallel is correct.
With attestations on PyPI, the issuance window is 15 minutes instead of 90 days. So the private key is kept in memory and discarded as soon as the signing operation is complete, since the next signing flow will create a new one.
At no point does the private key leave your machine. The only salient differences between the two are file versus memory and the validity window, but in both cases PyPI’s implementation of attestations prefers the more ideal thing with respect to reducing the likelihood of local private key disclosure.
But also, that’s an implementation detail. There’s no reason why PyPI couldn’t accept attestations from local machines (using email identities) using this scheme; it’s just more engineering and design work to determine what that would actually communicate.
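For a rough feel of the flow, here is the same idea sketched with the sigstore CLI run by hand; PyPI's attestations actually happen inside Trusted Publishing CI, so treat the file name and identities as placeholders, not the PyPI mechanism:

    # OIDC login -> ephemeral keypair -> short-lived cert -> sign -> discard key
    sigstore sign dist/mypackage-1.0.0-py3-none-any.whl

    # verification binds the artifact to an identity, not a long-lived key:
    sigstore verify identity dist/mypackage-1.0.0-py3-none-any.whl \
      --cert-identity someone@example.com --cert-oidc-issuer https://accounts.google.com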
(This would of course require Codeberg to become an IdP + demonstrate the ability to maintain a reasonable amount of uptime and hold their own signing keys. But I think that's the kind of responsibility they're aiming for.)
Most people have never heard of it and never used it.
(So I agree that it’s de facto dead, but that’s not the same thing as formal deprecation. The latter is what you do explicitly to responsibly move people away from something that’s not suitable for use anymore.)
As they said, they were on it...
But trust in Werner Koch is gone. Wontfix??
People who are serious about security use newer, better tools that replace GPG. But keep in mind, there’s no “one ring to rule them all”.
> Don't.
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#en...
I’m not sure I completely agree here. For private use, this seems fine. However, this isn’t how email encryption is typically implemented in an enterprise environment. It’s usually handled at the mail gateway rather than on a per-user basis. Enterprises also ensure that the receiving side supports email encryption as well.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
Why do high-profile projects, such as Linux and QEMU, still use GPG for signing pull requests / tags?
https://docs.kernel.org/process/maintainer-pgp-guide.html
https://www.qemu.org/docs/master/devel/submitting-a-pull-req...
Why does Fedora / RPM still rely on GPG keys for verifying packages?
This is a staggering ecosystem failure. If GPG has been a known lost cause for decades, then why haven't alternatives ^W replacements been produced for decades?
It's a pretty great ecosystem, most hardware smartcards are surrounded by a lot of black magic and secret handshakes and stuff like pkcs#11 and opensc/openct are much much harder to configure.
I use it for many things, but not for email: encrypted backups, password manager, ssh keys. For some there are other hardware options like fido2, but not for all use cases, and not the same one for each use case. So I expect to be using gpg for a long time to come.
The attack on detached signatures (attack #1) happens because GnuPG needs to run a complicated state machine that can put processing into multiple different modes, among them three different styles of message signature. In GPG, that whole state machine apparently collapses down to a binary check of "did we see any data so that we'd need to verify a signature?", and you can selectively flip that predicate back and forth by shoving different packets into the message stream, even if you've already sent data that needs to be verified.
The malleability bug (attack #4) is particularly slick. Again, it's an incoherent state machine issue. GPG can "fail" to process a packet because it's cryptographically invalid. But it can also fail because the message framing itself is corrupted. Those latter non-cryptographic failures are handled by aborting the processing of the message, putting GPG into an unexpected state where it's handling an error and "forgetting" to check the message authenticator. You can CBC-bitflip known headers to force GPG into processing DEFLATE compression, and mangle the message such that handling the message prints the plaintext in its output.
The formfeed bug (#3) is downright weird. GnuPG has special handling for `\f`; if it occurs at the end of a line, you can inject arbitrary unsigned data, because of GnuPG's handling of line truncation. Why is this even a feature?
Some of these attacks look situational, but that's deceptive, because PGP is (especially in older jankier systems) used as an encryption backend for applications --- Mallory getting Alice to sign or encrypt something on her behalf is an extremely realistic threat model (it's the same threat model as most cryptographic attacks on secure cookies: the app automatically signs stuff for users).
There is no reason for a message encryption system to have this kind of complexity. It's a deep architectural flaw in PGP. You want extremely simple, orthogonal features in the format, ideally treating everything as clearly length-delimited opaque binary blobs. Instead you get a Weird Machine, and talks like this one.
Amazing work.
From what I can piece together while the site is down, it seems they've uncovered 14 exploitable vulnerabilities in GnuPG, of which most remain unpatched. Some of those have apparently been met with a refusal to patch from the maintainer. Maybe there are good reasons for this refusal; maybe someone else can chime in on that?
Is this another case of XKCD-2347? Or is there something else going on? Pretty much every Linux distro depends on PGP being pretty secure. Surely IBM & co have a couple of spare developers or spare cash to contribute?
A major part of the problem is that GPG’s issues aren’t cash or developer time. It’s fundamentally a bad design for cryptographic usage. It’s so busy trying to be a generic Swiss Army knife for every possible user or use case that it’s basically made of developer and user footguns.
The way you secure this is by moving to alternative, purpose-built tools. Signal/WhatsApp for messaging, age for file encryption, minisign for signatures, etc.
If there were a PGP vulnerability that actually made it possible to push unauthorized updates to RHEL or Fedora systems, then probably IBM would care, but if they concluded that PGP's security problems were a serious threat then I suspect they'd be more likely to start a migration away from PGP than to start investing in making PGP secure; the former seems more tractable and would have maintainability benefits besides.
That's mostly incorrect on both counts. One is that lots of mirrors are still http-only or http by default: https://launchpad.net/ubuntu/+archivemirrors
The other is that if you get access to one of the mirrors and replace a package, it's the signature that stops you. HTTPS is only relevant for MITM attacks.
> they'd be more likely to start a migration away from PGP
The discussions started ages ago:
Debian https://wiki.debian.org/Teams/Apt/Spec/AptSign
Fedora https://lists.fedoraproject.org/archives/list/packaging@list...
If you only need one specific version and you already know which one it is, then a cryptographic hash is a better way to verify the package, although it only covers that one version of that one package. Using an encrypted protocol (HTTPS or any other) alone will not help, although it will help in combination with other things; you will need to do other things as well to improve security.
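i.e. something like (the digest is a placeholder):

    echo "<known-sha256-digest>  package-1.2.3.tar.gz" | sha256sum -c -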
I haven't seen those outside of old mailing list archives. Everyone uses detached signatures nowadays, e.g. PGP/MIME for emails.
Edit: even better, it was both. There is a signature type confusion attack going on here. I still didn't watch the entire thing, but it seems that, unlike gpg, they do have to specify --cleartext explicitly for Sequoia, so there is no confusion going on in that case.
Fortunately, it turned out that there wasn't anything particularly wrong with the current standards so we can just do that for now and avoid the standards war entirely. Then we will have interoperability across the various implementations. If some weakness comes up that actually requires a standards change then I suspect that consensus will be much easier to find.
Because you're clearly presenting it as a defense of PGP, on a thread about a presentation delineating breaks in it that exploit exactly the kind of complexity the article you're responding to predicted would cause it to break.
Isn't this what ffmpeg did recently? They seemed to get a ton of community support in their decision not to fix a vulnerability