Signal on Android: Images sent to wrong contacts (github.com/signalapp)
654 points by jiripospisil on July 25, 2021 | 382 comments



Hi there, Signal-Android developer here. I updated the issue to reflect this, but this bug has been fixed. I was tracking it on a separate issue, and had forgotten to close this one.

We do, in fact, take issues like this very seriously. This bug was extraordinarily rare, and because we have no metrics/remote log collection, there was an initial period where we had to spend time adding logging and collecting user-submitted logs to try to track it down. As soon as we were able to pick up a scent, it was all we worked on, and we were able to get a fix out very quickly.


> This bug was extraordinarily rare, and because we have no metrics/remote log collection, there was an initial period where we had to spend time adding logging and collecting user-submitted logs to try to track it down.

Without telemetry, can you actually back up the claim that this issue was extremely rare?


Some details on how this assumption was made would be nice, but I think it's pretty obvious that any developer involved in a project can make a reasonable assumption of how rare a bug is depending on the technical details of what is required for the bug to happen. For example, if we say for the sake of argument that a hypothetical bug requires you to have more than ten contacts of the exact same name and these also need to share the same country and area code, one can assume this use case is very rare without knowing the exact number of users that this applies to, just based on common sense regarding how the application is normally used.

edit: The linked github issue says:

> The TL;DR is that if someone had conversation trimming on, it could create a rare situation where a database ID was re-used in a way that could result in this behavior. It was very difficult to track down, with earlier phases involving getting additional logging into builds. Once we had some more information, it did in fact become our top priority, a fix was made, and we got it out as quickly and as safely as possible. The fix itself should make it so that database issues like the one that caused this bug can't happen again.
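
To make that concrete, here is a minimal, runnable sketch of the failure mode (hypothetical schema, not Signal's actual code): without AUTOINCREMENT, SQLite can hand a deleted row's id to a new row, so a stale reference silently resolves to the wrong record.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE thread (_id INTEGER PRIMARY KEY, recipient TEXT)")
  db.execute("INSERT INTO thread (recipient) VALUES ('alice')")  # _id 1
  db.execute("INSERT INTO thread (recipient) VALUES ('bob')")    # _id 2

  # "Trimming" deletes the newest thread, then a later insert
  # re-uses bob's old _id for carol.
  db.execute("DELETE FROM thread WHERE _id = 2")
  db.execute("INSERT INTO thread (recipient) VALUES ('carol')")  # _id 2 again

  stale_id = 2  # an id some other table still references
  print(db.execute("SELECT recipient FROM thread WHERE _id = ?",
                   (stale_id,)).fetchone())  # ('carol',) -- wrong person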


> a hypothetical bug requires you to have more than ten contacts of the exact same name and these also need to share the same country and area code

Sounds like the village a part of my family tree comes from.


> just based on common sense regarding how the application is normally used

Just based on assumptions of how the application is used.

"falsehoods programmers believe" comes to mind. The uniqueness of names in your example could be false in some (or many? No idea) cultures.


> The fix itself should make it so that database issues like the one that caused this bug can't happen again.

I've said that a few times, and been wrong about it. :D

I don't know what they mean by "where a database ID was re-used", but I guess it had to do with caching, and cache invalidation is one of the hardest parts of computer programming.


> cache invalidation is one of the hardest parts of computer programming.

Yes, famously it is one of the two hardest parts of computer programming, the other being naming things, and off-by-one errors.


Why would anyone think that re-using IDs was a good idea?


I'm not defending the practice of re-using a database ID, but it's absolutely something that can happen (or change) as an oversight rather than an active choice from the developer.


Perhaps that’s why it was a bug?


> it's pretty obvious that any developer involved in a project can make a reasonable assumption of how rare a bug is

... is it? The fact that a bug exists means there's a logic gap. You can try to patch it with theory, but that's just adding assumption to a scenario created from broken assumptions. Also, the job of telemetry in incident reporting isn't to be vague - it's to add precision.


There's probably a ratio of bug-reports-to-occurrences that they're used to for different kinds of bugs. Ex: if user-visible security bugs have good report rates, say 100-1000 leaks per 1 report, and there are 10 reports, then that's 1K-10K incidents. This is harder in b2b, but in b2c, PMs should have a feel for it.


You’d be surprised how difficult it is to estimate the frequency that someone sees a bug. The only way to have a “feel for it” is to base it on… other data.

Say there is a bug that happens in the photo taking flow – you’d need to know how often people take photos in Signal. You’d think you could spitball something for that, but it is actually really hard. But if you log how often photos are taken, then that is a great starting point.

But further, lower level logic errors like this, especially ones involving race conditions, are even harder to pin down. That is why on iOS you can log “faults” which are non-fatal but very not-expected events:

https://developer.apple.com/documentation/os/logger/3551617-...

They generate reports with stack traces that you can use to a) judge the prevalence of an issue and b) see where it originated


>for the sake of argument that a hypothetical bug requires you to have more than ten contacts of the exact same name and these also need to share the same country and area code

This is only rare if you have a small social circle. My circle has multiple first name-collisions of at least 5 participants, but my circle is not very big and the area code is also quite small.

Some countries do not use area codes for mobile phone numbers, which are used for Signal, meaning country code is the only area limiting factor.


Well, he just wanted to give an example. One could also construct an example where the bug only occurs for people who have a rare sequence of unicode symbols (e.g. U+2600 U+2601 U+2602) in their username and a specific date (e.g. 05.04.1920) as their birthday.


Your argument depends on Signal implementing username support, because we do not support unicode in phone numbers.


My argument depends on an alternate universe in which signal supports it, everything else being equal, because why not.


Does that alternate universe support phone numbers with unicode characters too?


You are arguing against a hypothetical situation of your opponent's creation. This is like when Ross tried to beat Chandler at Cups


I go to bed now and dream of a universe in which NullPrefix is Ross and continues to argue endlessly.


I'll argue with anyone about anything. For free.


> I'll argue with anyone about anything. For free.

No, you won't.


Obviously that was a hyperbolic statement.


Don't argue with me!


I am curious how different that alternative universe needs to be for my argument to be invalid.


It's not that your argument was incorrect. It's that it is tangential to the parent comment. hnarn was not making the point of something being rare; they were demonstrating that a developer can estimate the rarity based on the conditions that trigger it. The rarity itself is beside the point.

Arguing over small semantic or circumstantial differences is often considered impolite.


It is a hypothetical scenario for the purpose of an example. It could have been literally anything, the point is for it to be rare.


Yes, but the fact that the scenario is hypothetical doesn't counter my argument that whatever the developer thinks is rare may not actually be rare.


I do not know why I keep replying to this... but regardless, here it comes: your counter-argument is irrelevant! It does not require any counter-argument, because it is hypothetical! He could have said anything else that is rare, something that you think is rare, too. Just assume he said something that you think is rare as well. It is a hypothetical scenario, so you can do that; in fact: you should have done that, because all that is important is that it is rare, whatever that is. Again, argument over what he said is rare or not is not needed. Just assume it is rare, or assume he said something you think is rare.

I hope this gets the point across. Yes, maybe what he said is not rare, but that is beside the point.


>for the sake of argument


Yes.

And for the sake of the same argument I made a counter-argument, stating that some circumstances initially believed to be rare are actually not that rare.


"For the sake of argument" does not mean "I invite you to debate this", it means the exact opposite: "let's assume this is correct/we agree/etc. for a while and debate what comes after". So in this case you are doing the exact opposite of what the phrase "for the sake of argument" is requesting.

https://idioms.thefreedictionary.com/for+the+sake+of+argumen...

> I know you want to go to Stanford, but just for the sake of argument, let's talk about what some of the other schools you got into have to offer.

This does not mean "I invite you to make a counter-argument as to why no other schools than Stanford are worth going to".


It sounds to me like NullPrefix accepted the hypothetical bug, for the sake of the argument, and then addressed the rarity assumptions made about that bug. "For the sake of argument" doesn't mean "No debating" it means "Let's assume some thing x is taken as a given and go from there."


The whole point was that "given this assumed rare bug, one can know it's pretty rare". Whether the example is actually rare enough is completely beside the point. If it's not rare enough for you, replace it. The argument comes next: given a rare bug, a developer can make an assumption of how rare it is. Debating the example of rareness is splitting hairs.


[flagged]


English is my second language.


There is no argument, and it did not require a counter-argument, because that is not the point. He could have used literally anything else, it just has to be rare.


If you know the reproduction steps and you can look at the code, you should be able to make an estimate of rarity.


Even with telemetry, would you be able to back up the claim that the issue was rare? All too often, people add telemetry to something in an attempt to find things like this, but they end up leaking things elsewhere and not accounting for them. Bad statistics are more dangerous than no statistics, and all that.


If this is the first instance of this bug being reported, then yes it's rare.


The rarity of a bug is independent from reports of it.


That also depends on the severity of the bug. :)


No, those are directly correlated.


I would like to think so. I used a program that is full of bugs, yet those are not reported[1], nor have they been fixed. :(

Edit:

[1] I did report one (or two? it has been a while) after some time (could have been months, because I could work around it in the meantime). They did not respond, nor did they fix it (so I did not report any other bugs I found). I recently re-installed it to give it another go, and the easy-to-fix bug still occurs. They just do not give a damn. Their website has been rebranded and everything, but this bug has not been fixed in over a year.

Edit #2:

Wow, I was way off with the "over a year". I checked the bug report of the one I remember reporting. It does seem like some other people have reported it too. Well, it is a bug from 2018, after all. Still open, no word from a developer. :D It is a bug in a crucial feature.

In any case, if for some reason I had to use this software again, I would never report any bugs because it is pretty much futile. They do not answer, they do not categorize bugs, they do not seem to check the bug reports at all. It would be a waste of time.


Then why not report them?


Because the bug happens when you're trying to get something done and you don't have 5 spare minutes to interrupt your flow in order to file a bug. You might make a mental note to report it later and then forget. Or, if the bug is annoying enough, you might just never use that program again and you don't care if it gets fixed. Or your wifi glitched at the same time the bug happened and you've no idea if it is a bug or the glitchy wifi. Or you don't know it's a bug until later and by that time you've no idea what you were doing to trigger it.

There's a million and one valid reasons people don't report bugs.


The reports of it are correlated to the rarity, except in instances where reports are not generated, such as apps that do not collect error telemetry.


Users can make reports manually, and you would even expect them to do so at a high rate in bugs like this. This implies a high correlation between bug reports and rarity.


Telemetry does not matter in this case. If a bug occurs often, then either the issue tracker will get this bug reported often, or there might even be a "known bugs" section.


Even with telemetry, how would you determine how many pictures (and presumably what content) were sent to whom?


You can fix the bug but emit a metric on the original code path to estimate the impact after the fix is made. It wouldn't tell you who sent what to whom, but it would give you a better idea of impact for an incident report or postmortem.
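
A sketch of that pattern (names and data shapes are hypothetical, not Signal's):

  from collections import Counter

  metrics = Counter()

  def send_attachment(message_thread_id: int, resolved_thread_id: int) -> bool:
      if message_thread_id != resolved_thread_id:
          # The old code path that caused the bug: count it instead of
          # acting on it, so post-fix impact can be estimated.
          metrics["attachment.thread_id_mismatch"] += 1
          return False  # fixed behavior: refuse to send
      return True

  send_attachment(2, 3)
  print(metrics["attachment.thread_id_mismatch"])  # 1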


I read it as: bugs of this severity are extremely rare. That claim doesn't require telemetry.


Can you provide a link to the commit that fixes it?

Shouldn't there have been an announcement to inform users what has been leaked and under which circumstances?

How can user A send an image to user B that neither of them took? Isn't everything end-2-end encrypted? Then how can unencrypted data from user C end up on the device of user B?


> Can you provide a link to the commit that fixes it?

If I understand the issue [0] correctly, these two commits should be the fix:

https://github.com/signalapp/Signal-Android/commit/e90fa05d6...

https://github.com/signalapp/Signal-Android/commit/b9657208f...

The former updates how recipients (or really threads, I suppose) are merged (the issue occurred when trimming threads) and the latter changes how thread ids are generated (they are now automatically incremented). Together they should prevent unrelated recipients (threads) from being merged.

[0]: https://github.com/signalapp/Signal-Android/issues/10247#iss...
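
For illustration, the behavioral difference AUTOINCREMENT makes in SQLite (a sketch assuming a simple thread table, not the actual schema change):

  import sqlite3

  db = sqlite3.connect(":memory:")
  # AUTOINCREMENT records the largest id ever issued (in sqlite_sequence)
  # and never re-issues a deleted row's id.
  db.execute("CREATE TABLE thread (_id INTEGER PRIMARY KEY AUTOINCREMENT, recipient TEXT)")
  db.execute("INSERT INTO thread (recipient) VALUES ('alice')")  # _id 1
  db.execute("INSERT INTO thread (recipient) VALUES ('bob')")    # _id 2
  db.execute("DELETE FROM thread WHERE _id = 2")
  cur = db.execute("INSERT INTO thread (recipient) VALUES ('carol')")
  print(cur.lastrowid)  # 3, not 2: stale references can no longer collide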


Using incrementing ids as your source of ownership is just asking for trouble. It means a programming error has a high probability of ids lining up and leaking resources. Guids make this practically impossible.


I wonder if there's any widely implemented programming pattern that would catch this better, e.g. ids consisting of a concatenated type code and id. Using GUIDs here would still hide the bug, not flag an error.


I don’t know if it’s widely implemented, but the way Stripe IDs resources comes to mind.
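
A rough sketch of that pattern (hypothetical helper names): the type prefix makes a cross-type mix-up fail loudly instead of silently resolving to the wrong row.

  import uuid

  def new_id(prefix: str) -> str:
      # e.g. "thr_1f8a..." for threads, "msg_..." for messages
      return f"{prefix}_{uuid.uuid4().hex}"

  def require(prefix: str, id_: str) -> str:
      if not id_.startswith(prefix + "_"):
          raise TypeError(f"expected a {prefix} id, got {id_!r}")
      return id_

  thread_id = new_id("thr")
  message_id = new_id("msg")
  require("thr", thread_id)       # ok
  try:
      require("thr", message_id)  # mix-up caught, not silent
  except TypeError as e:
      print(e)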


Interesting, I hadn't thought of that advantage of uuids/guids before.


I love the terrible commit name. "Updating recipient merging."


From the explanation [0], this looks like the commit [1].

I do have to say though, the commit messages are very barren and barely any reference an issue directly. My guess is that they develop on private branches with an internal issue tracker and decide not to reference issues as that would be confusing. It would help a bunch if they referenced the public issues and made it clear which issues they're working on. They don't seem to use Github Projects [2].

All in all, the communication / transparency of the project is lacking - even though I'm glad they're providing something usable. Hopefully Matrix will be able to provide something as easily usable as well.

[0]: https://github.com/signalapp/Signal-Android/issues/10247#iss...

[1]: https://github.com/signalapp/Signal-Android/commit/83086a5a2...

[2]: https://github.com/signalapp/Signal-Android/projects


If it’s a client bug that just switches out recipients, then the messages will just get encrypted for the wrong recipient.


No PR with a name that would suggest the fix in the client: https://github.com/signalapp/Signal-Android/pulls?q=is%3Apr+...

and no PR from the OP in the open-source part of the server: https://github.com/signalapp/Signal-Server/pulls?q=is%3Apr+i...


Edit: I think this issue was specific to the Android client; the desktop client has a totally different sqlite schema.

The child comment to your comment is deleted, but I think autoincrement IDs shouldn't be used under an ambient authority context.

It would make more sense to have IDs based on an LFSR or Feistel sequence, perhaps split into a master ID and a conversation sequence.

Autoincrement on this field makes off-by-one errors easy. Even just moving to guids and maintaining a proper parent-child relationship would have prevented this.

Or maybe there should be a database per conversation (set of all parties).

https://github.com/signalapp/Signal-Android/commit/83086a5a2...

https://github.com/signalapp/Signal-Android/commit/b9657208f...

Row IDs shouldn't have so much power.


The good thing about autoincrementing unsigned 64-bit integers is that 1) it's insanely fast, 2) SQLite does it automatically, and 3) seriously, why don't they have it on all tables yet? SQLite guarantees no collisions within a table.

Doing homegrown ID generation is how such bugs get introduced in the first place. Your application-level trigger got bypassed, oops. Your check was experimentally disabled and left like that for a year until people started noticing, oops.

If you make an SQLite-backed application and it has a bug like this, I can safely bet $500 it's not going to be SQLite that has the bug.

Just don't expose the numeric IDs in links, and generate your UUIDs in tables where you need to point to rows in an outside-accessible link. This is database 101, for crying out loud.
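
A sketch of that split between internal and external keys (hypothetical table, not a real schema):

  import sqlite3, uuid

  db = sqlite3.connect(":memory:")
  db.execute("""CREATE TABLE photo (
      _id  INTEGER PRIMARY KEY AUTOINCREMENT,  -- fast internal key
      uuid TEXT NOT NULL UNIQUE                -- the only id links expose
  )""")
  cur = db.execute("INSERT INTO photo (uuid) VALUES (?)", (uuid.uuid4().hex,))
  row = db.execute("SELECT uuid FROM photo WHERE _id = ?", (cur.lastrowid,)).fetchone()
  print(row[0])  # share this token externally, never the numeric _id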


I appreciate that this was a difficult and rare bug, but for an app that sells itself as 'secure', it feels like this isn't acceptable.

How can users be assured that this type of issue won't occur again?


What does it mean for it to "not be acceptable?" Accepting or not accepting the bug is not an option on the table. It happened. The options you have open to you are to use or not use the freely provided software, or donate or not donate to the foundation providing it. Users cannot be assured that future bugs will not occur. That is an assurance that is not available with most consumer software, and certainly not with software delivered for free by a not-for-profit.


If you are unclear, perhaps it would be useful to ask yourself what you would personally need from a product to consider it 'secure'.


Yes, Signal is a messaging app whose whole reason for existing is privacy, but it does not actually guarantee any privacy features; it just tries its best. Despite this "who cares if it works" premise, they have raised tens of millions of dollars.

I dream of a world where commercial software is held to the same standards as even the lowest quality commercial hardware.


Concretely, what would you like to see done? Should the foundation be hit with a fine or a lawsuit for this bug? Would that improve the overall security stance of the public at large?


I would like to see the foundation put their enormous investment into making a product that works correctly.

If that's too much to ask, then they really ought to be more public and forthcoming about the fact that Signal is not actually private or secure.


> How can users be assured that this type of issue won't occur again?

By not using software.

And I mean software in general, not this software in particular. You're basically asking for assurance that they won't have any more bugs, but no one can actually provide such an assurance in the real world.


Yes, they can.


> Yes, they can.

If they do, they're either incompetent, lying, or building something enormously expensive yet completely impractical for most if not all real world uses.


I don't understand this attitude. Where did we go wrong as a discipline where making products that actually work is such an outlandish proposition? No other consumer product industry would talk like this.


It comes from the halting problem. People can be more careful writing code, but it is impossible to be certain about all the things code will or won't do. Even very simple programs can have flaws that get found and fixed years later. It happens all the time.

We need to be open and honest about the possibility that our code may act in ways we don't foresee.


If this was actually true, we wouldn't have safety critical software systems that have been running for decades without fatal bugs.


> If this was actually true, we wouldn't have safety critical software systems that have been running for decades without fatal bugs.

Not hitting a bug is not the same as not having a bug. I'd bet money that whatever system you're talking about has bugs. Plus, the system may be far simpler than you assume.


> I don't understand this attitude. Where did we go wrong as a discipline where making products that actually work is such an outlandish proposition? No other consumer product industry would talk like this.

Part of it is cost/benefit ratio for the extra effort, part of it is market demands, part of it is lack of technology, part of it is unavoidable stuff like the halting problem.

It's also worth remembering that Signal is developed by a non-profit with a total of 36 staff members (if Wikipedia is correct). That means they have fewer developers, and even fewer Android developers.


This is a messaging app we're talking about here. There's nothing outrageously difficult or complex that hasn't already been done 25 years ago. If 36 people and $100 million in funding is not enough to make a messaging app that doesn't suck, what _is_ required and why is it more than that?


> This is a messaging app we're talking about here. There's nothing outrageously difficult or complex that hasn't already been done 25 years ago.

If you think it's so easy, be your own change and do it, and then we can judge the results.

> If 36 people and $100 million in funding is not enough to make a messaging app that doesn't suck, what _is_ required and why is it more than that?

You're assuming there's a solution of a certain form to get you what you want, but maybe it's your assumption that's wrong.

I mean, there are formally verified systems that might be like what you're asking, but they're both 1) very expensive, 2) extremely feature poor.


I'm busy, but tell you what, I'll do it for only $75 million. Maybe I should launch a Kickstarter.


I disagree. If that really is the choice, they should drop the secure moniker without further debate.

--

If you produce a product that claims to be secure, the onus is on you to back up those claims.

Off the top of my head, there are many ways to implement measures that can help encourage security going forward.

One of the benefits of coding in the open, and adhering to open standards and protocols, is transparency and the ability for all to interrogate the code.

Can that work? It's obviously partly also down to the culture of the product team. As another poster in this thread has highlighted, the commit messages are terse and not as helpful as they could be. Perhaps more openness re. intention would help.

Also, why are we finding out about this bug over 7 months after it was reported? Transparency regarding vulnerabilities needs to be at the forefront of the product's communications if the team really is serious about security.

In terms of isolating bugs: what kind of testing is in place? TDD, functional testing, beta testing?

There are so many avenues which _could_ be discussed in relation to my initial question.

Your response, unfortunately, does not provide anything helpful.


> One of the benefits of coding in the open, and ascribing to opens standards and protocols is transparency and the ability for all to interrogate the code.

They already do all of those things. https://github.com/signalapp/Signal-Android


In word but not in spirit. They stopped updating their repo for an entire year while integrating a crypto shitcoin in secret.

These actions betray trust, and trust is Signal's entire reason for being.


You need to reread my post.

The situation is far more complicated than your link to the GitHub repo would indicate.


Users are not entitled to a guarantee that this will never happen again, because Signal is free and open source software provided free of charge and without warranty.

The Android app is GPL licensed. The license clearly states:

> For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.

If you feel let down by open source software, you have many options available to you to make that software more reliable.


Moxie Marlinspike is famously belligerent against Signal forks, so no, the license does not really help here.


It does. You are allowed to fork or reimplement the Signal client, and even distribute it, with the official Signal servers configured, so long as you do not infringe the Signal trademark.

They don't like forks using their servers, but it's the users who connect to their servers, not the fork publisher. Those users are permitted to connect via the TOS, which is independent from the GPL.


They also refuse to open source their server software.

This is the problem in my mind. Open source in words, but not spirit.


Not only is it false as has already been pointed out, it doesn't matter at all. There is nothing stopping them from silently running a fork.


As far as I'm aware, I have to use the official server if I wish to communicate via the platform.

If this isn't true, I'd be very interested to learn more.



So are you telling me I can run my signal server and use it to communicate with others on the signal network?


You can run your signal server and use it to communicate with others on your signal network.

The openness or otherwise of the code has literally nothing to do with the terms of use of any service being run.


No.

The product _is_ the network. By restricting access to the network, access to the product is also curtailed.


You originally asked about running your signal server. Now you're talking about the network that's hosted through Signal Inc's servers.

So what's your point? Do you think everyone should have a right to connect to their servers however they like, and the service operator can't exert any control over that?


I am telling you that "They also refuse to open source their server software." is false.


Signal typically delay releasing the server source code, so the latest version of the server is not open source. In one case it took them almost a year (no public commits between 20 April 2020 and 6 April 2021).

https://github.com/signalapp/Signal-Android/issues/11101#iss...

Among the official reasons given was staying ahead of spammers. In this instance it was also speculated that the payment function which they were building into the server was to remain secret.

https://github.com/signalapp/Signal-Android/issues/11101#iss...


[flagged]


Are you trying to subtly accuse chithanh of something? If so, I would request that you do it explicitly instead of in a passive-aggressive way.

Regardless, their post contained only factual statements relevant to the discussion. I noticed no entitlement.


If you call it a "whim" if I point out that the latest version of the Signal server is proprietary software most of the time then so be it.

Or maybe you are referring to the other commenters who were entitled to expect that the one critical task of a crypto messenger is ensuring the confidentiality of communication, which has been broken by this bug (and at least one similar bug before it, https://github.com/signalapp/Signal-Android/issues/7909 ).

You can both be grateful that Signal is free and at the same time call out shenanigans of its owners. Just saying.


[flagged]


What a buzzkill. If I understand you correctly, we shouldn't have expectations or constructive criticism of stuff we don't directly pay money for. I think that's nonsense. Does this only apply to certain opinions?

You are not paying money for Signal. But by using it, and getting others to use it, you are definitely improving their position on the market by helping them become a monopoly. Money isn't everything.


> What a buzz kill. If I understand you correctly we shouldn't have expectations or constructive criticism on stuff we don't directly pay money for. I think that's nonsense. Does this only apply to certain opinions?

You can have constructive criticisms, but they have to actually be reasonable and constructive. It's not reasonable or constructive to expect their software to never have bugs (i.e. be outraged when they have one); to rag on them because they didn't fix some bug more quickly than they may have been able to; to throw around labels like "proprietary" just because they don't release something on your schedule; to be bitter that their vision is not your vision (e.g. they won't implement some client or server feature you want), etc. All of that is in this thread.

Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated. I don't know exactly why, but it probably involves some combination of contrarianism, being in a popular software category, and people who want to feel better for having picked a different team.

Signal isn't perfect software for me: I really wish they had an unencrypted message export feature, for instance. But I understand that doesn't fit into their vision, and instead of bitching about it, I just have it on my todo list to write my own (and have done some preliminary work on it).

> You are not paying money for Signal. But by using it, and getting others to use it, you are definitely improving their position on the market by helping them become a monopoly. Money isn't everything.

What now? Signal is nowhere near being a monopoly, and even if they were, they're a non-profit, which makes the idea far less threatening.


> It's not reasonable or constructive to expect their software to never have bugs

I think nobody demanded anything of that sort, that is just a strawman. What was actually demanded is that priority of such bugs be raised, and perhaps users be adequately warned about known defects that may compromise the confidentiality of their messages.

That Signal developers didn't have an idea what was going on for 6 months? And then it turned out to be similar to that other stale/invalid database bug where messages were sent to unintended recipients? Back then fixing only the bug at hand but not taking steps to ensure that the type of bug (wrongly matching expired message IDs to existing messages) won't happen again? Doesn't paint them in the best light.

> to throw around labels like "proprietary" just because they don't release something on your schedule

I throw around the label "proprietary" because software which doesn't come with source code is in fact, proprietary. If Signal pushes new server code to production and keep its source to themselves, calling that still "open source" requires serious mental gymnastics.

> Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated.

No, besides your strawman the other commenters were all reasonable and constructive.


> I throw around the label "proprietary" because software which doesn't come with source code is in fact, proprietary. If Signal pushes new server code to production and keep its source to themselves, calling that still "open source" requires serious mental gymnastics.

It comes with source code, just not on your time-frame. That's still open source.

And if that's unacceptable to you, just use something else.

>> Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated.

> No, besides your strawman the other commenters were all reasonable and constructive.

We're going to have to agree to disagree there.

But if you want to (for instance) implement a plan so we can be "assured that this type of issue won't occur again" in Signal (https://news.ycombinator.com/item?id=27951759), be my guest. Or maybe you could develop a fork of it and show us your vision for it (including a source release schedule that's satisfying to you, a federated protocol, and all the other demands in this thread).


> We're going to have to agree to disagree there.

Everyone can of course have their own opinions. But they cannot have their own facts, a discussion does not work that way.

Whether one considers some statement as entitled, that is an opinion and we can disagree about this.

But whether a program is open source or not is a fact. It doesn't matter if the source code is going to be released a day or a year after the Signal server has been pushed into production; at that very moment the program is not open source. Your comment about my time-frame is irrelevant. In the GitHub issue 11101 link I posted above, Moxie admits to running versions of the server that are ahead of the public git repository. These are factually closed source, and your continuing to argue against that fact doesn't reflect well on you, nor on the ability to have serious discussions with you.


To be pedantic, only the ones that possess the binary need the source code for something to be open source. Since they did not publish the binary, and since they have the source code, we could say that it is actually open source software.


No, you are utterly incorrect.

If you produce a product that people depend on — and through which, you either intentionally or inadvertently cause damage to your user base — they have every reason to be upset.

If the product is open source, the user base luckily has the option to fix the problem.

This isn't true with Signal's model.


You seem to be confusing open source for a federated protocol.

Signal is not federated, as an explicit design decision.

The source code for the server and the client are both free software.


‘Many forms of secure messaging systems have been tried, and will be tried in this world of sin and woe. No one pretends that Signal is perfect or all-wise. Indeed it has been said that Signal is the worst messaging except for all those other forms that have been developed from time to time…’

Winston S Churchill, 11 November 1947


Sure, but there's a small theoretical difference with democracy. You have to live under some system of government. You don't have to use a secure messenger. You can choose to have sensitive conversations in person or not have them at all.

I agree that in practice, a lot of people are going to use their phones for relatively sensitive conversations, and in practice, Signal remains the best choice for doing so. But there are a few real threat models where the options aren't Signal vs. SMS / Google Chat / Discord / etc., the options are Signal vs. nothing. For instance, you could be a journalist deciding whether to ask clarifying questions to a government whistleblower via Signal or meet up with them in a park. You could be an activist/demonstrator under a repressive regime deciding whether to coordinate some action this weekend via Signal or hold off on it entirely and tactically preserve your freedom. And so forth.

For those people, if (and to be clear this is a big "if," while this issue is one serious piece of evidence it is nonetheless inconclusive) Signal isn't trustworthy, it doesn't matter if Signal is the least-bad of the options.

(Also, it's not like Signal is the only e2e messenger around. There's iMessage/FaceTime, for instance. Churchill's claim was that the abstract idea of democracy was good, not that any concrete implementation like the British government was good.)


> You can choose to have sensitive conversations in person or not have them at all.

I don't think this is fair. Most of the solution to this is "not having them at all." That's not a good solution and still doesn't solve your problem since you can still be listened to.

> There's iMessage/FaceTime, for instance.

Which also has gotten in trouble recently with Pegasus as there was a 0-click exploit in iMessage. Honestly, that is a far more serious issue than the one here. That being said, I still trust iMessage and that the devs are doing the best that they can. I just recognize that security is difficult and will always be a cat and mouse game. There is no such thing as perfect security.


I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options. The whistleblower might choose not to make contact, but if the alternative is making contact and immediately going to prison (because someone else on your contact list saw a classified screenshot from you and told the authorities), maybe that's better. The activists might choose not to protest, but if the alternative is being caught before they even start their protest (because your group was forwarding a little advertisement image or annotated map around among trusted people, and someone untrusted got it), maybe that's better.

Take Reality Winner, for instance (the mechanics of that case were entirely unrelated to secure messengers, but it makes a relevant example overall). The effect on the world of her whistleblowing seems to have been minimal, and the cost to her was significant. Was it worthwhile? If she had been told the risks of the government identifying her were higher and decided not to leak anything, wouldn't that have been a better outcome?

I'm not saying there's perfect security. Vulnerable users absolutely need to be making risk assessments and deciding what they're comfortable with, and we should be clear nothing is risk-free. I'm just saying my sense of Signal's risk, in absolute terms, is higher than it was before I learned about this, and that matters to vulnerable users, not just the fact that it probably remains the lowest-risk messenger of the various options.

I agree with you overall, and the Pegasus exploit does reflect badly on Apple (and probably should reflect more badly on them than seems to be happening).


> I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options.

For the average person, I do not believe this is true. For the non-average person, I believe you are correct but most of these people are aware and should be constantly trained.

I'm not saying you're wrong, I'm saying that there are two different conversations to be had and we need to know which one we're having. To me it looks like Signal is about as good as you get without loads of complication and for what it is meant to do.

> the Pegasus exploit does reflect badly on Apple

And same with this on Signal. I do believe we should hold these companies to high standards. But the point I'm trying to make is that these also aren't reasons to abandon the platforms completely (as many users are suggesting here). That's throwing the baby out with the bathwater.


Yeah, to be clear, I only mean this from the point of view of non-average Signal users.

It's a little weird that Signal is both the "baseline security that everyone should have" product (a la HTTPS or WPA2) and the "you are literally hiding from the government" product. Of course, the target market for the latter, when you are not another government yourself, is by definition mostly illegal activity (whether or not the laws are justifiable), so it makes sense that there isn't a good product just for that.

In this particular case, it also complicates things that people who are literally hiding from the government also have normal ordinary conversations with lots of people, and it helps things for those ordinary conversations to happen on Signal, but this bug is particularly bad if you do that.

(I'm also not really sure where, say, people buying recreational drugs fit on the "average"/"non-average" axis. Is it a reasonable precaution to not text incriminating information to your drug dealer over Signal? It feels like it shouldn't be necessary, but I can see the argument for it.)


You make some fair points. But even from the eyes of those people, what is the alternative? Is iMessage guaranteed to not have any hidden exploits out there? And on the flip side, what do they lose out on by only having those conversations in person? Well, I'd argue that their world becomes a lot smaller, and their sources are instantly at a higher risk.


If iMessage were sending photos to the wrong people (even with extremely low probability) for over half a year, there would be serious negative publicity to Apple for it, even if they had never implemented end-to-end encryption. Apple also has more software testers and more willingness to use telemetry. So while there are no 100% guarantees, I think the incentives are aligned with iMessage at least as well as they are with Signal.

Apple suffered negative publicity from the 2014 iCloud photo leaks, even though those were "just" phishing and not a vulnerability/bug in the strict sense. Tim Cook had to give statements to the media, and in fact Apple stepped up its phishing protection by pushing two-factor authentication and notifying users about additional iCloud logins.


Probably by installing some hardened memory on your phone to prevent cosmic bitflips or the like.

Unless you are going to go to NASA lengths of hardware reliability, you can't really hope much for the software that has to deal with the issues of... how many different android phones are there?


The available evidence indicates that this was due to a logic error, not to cosmic rays and/or Android ecosystem diversity.


> How can users be assured that this type of issue won't occur again?

Actually, this very type of issue (sending messages to wrong recipients due to stale/invalid database entries) has occurred previously.

https://github.com/signalapp/Signal-Android/issues/7909


> How can users be assured that this type of issue won't occur again?

By writing code defensively. Despite the other comments, it's possible.

The key is to be redundant: for example, off-by-one errors are very common when accessing a set of indexed items by number.

Yet you can split the set (e.g. an array) into multiple ones to make it more unlikely that you pick the wrong item (e.g. pictures vs. users).

You can also "tag" the outgoing image with some attributes, e.g. the recipient and a sent/not-sent flag.

You can cross-check and stop if something is inconsistent. Many other things are possible e.g. to protect from RAM bit flips.

It's not a matter of language or tooling, it's a matter of mindset.
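
A minimal sketch of such a cross-check (hypothetical names, not Signal's code):

  from dataclasses import dataclass

  @dataclass
  class OutgoingImage:
      image_id: int
      intended_recipient: str  # redundant tag written when the send is queued

  def send(image: OutgoingImage, resolved_recipient: str) -> None:
      # Cross-check the redundant tag against whoever the rest of the
      # pipeline resolved; stop rather than deliver to the wrong person.
      if image.intended_recipient != resolved_recipient:
          raise RuntimeError(f"refusing to send image {image.image_id}: "
                             f"tagged for {image.intended_recipient!r}, "
                             f"resolved to {resolved_recipient!r}")
      print(f"sent image {image.image_id} to {resolved_recipient}")

  send(OutgoingImage(1, "alice"), "alice")   # ok
  # send(OutgoingImage(2, "alice"), "bob")   # would raise instead of leaking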


> By writing code defensively. Despite the other comments, it's possible.

I'm going to tack onto this and suggest that signal drastically slow down the pace of feature development. They don't have the same profit motivations other companies have. The messaging market, at least in its current state, is largely known. These both give Signal an advantage in that they can slow down and harden the product while baking security into their DevOps practices as a first class citizen.

Tl;dr: Signal needs to slow down.


The answer is that you can’t be assured. Act as you will with that info.


They could choose to use a less transparent company's software, so that they never become aware of rare issues like this in the first place?


From the explanation of the bug on github it seems to me like this is a client-side database issue and nothing was actually leaked. Database ids were reused so random images that were previously received were displayed in newly received messages.

Is this correct? If it is then it's probably worth mentioning.


My understanding is that the database issue caused your client to send pictures A, B and C to person X, when you were trying to send picture C to person X (where A and B are pictures that were previously sent to someone else).


The person reporting the issue specifically said that they couldn't find those pictures on their phone and don't remember ever sending them to anyone.

The recipient also wouldn't be able to find those images anywhere else because they have chat trimming enabled. The result is that because a newly received message happened to share the id of an old deleted message, the new message is now displaying pictures from the old message.

This does require the recipient to have received those pictures and also not remember them but I believe it is easier to forget a random picture you received than one you sent.

Again, this is me speculating with very limited knowledge of client internals but it makes sense to me. I would like to see a developer confirm this.


> we were able to get a fix out very quickly.

I'm not sure if 8 months can be categorized as fast... The issue was posted on Dec 4, 2020, and the fix (5.17) was released on July 21.

Also, sounds like quite a big issue considering that Signal is all about privacy...


Selective quoting?

"As soon as we were able to pick up a scent, it was all we worked on, and we were able to get a fix out very quickly."


Oh sorry, I interpreted it differently. Though it still doesn't change anything: they dragged out investigating this issue for months and only put major work behind it when they "pick[ed] up a scent". Although they knew about it from day one (one Signal staff member replied on the same day the issue was posted).


Yeah, but he explained that they could not track down the bug without the luxury of default user tracking and metrics. So what can you do when you have a non-reproducible bug but limited resources (and other problems around)? You wait for the bug to show up again and then have more data to work with.


What you do is you tell everyone about the issue so they can take the appropriate steps to protect themselves, whatever that might be (don't send sensitive photos, don't use Signal to do it, only use Signal on iOS, etc.); they did not seem to do that.


Yes, this is a valid point.

I only referred to the actual bug fixing.


Question: do you guys have a software or product security team? I suggested the roles to workwithus @ Signal on 5/18/2018 and have never seen a public follow-up in the form of career postings.

Asking because such a team may be best equipped to serve as both the support and internal accountability function for security while minimizing business conflicts when engineering is facing challenges integrating security into DevOps natively.

At this point, it's probably warranted; the last time I asked was when Signal was seeing its spree of XSS defects in the desktop app. If Signal has one, a simple "yes" will suffice, but without a reply, I have to assume not.


Given Signal's raison d'être, I would think nearly their entire team is the "security team".

I'm not being entirely facetious either - security is the USP of the product, I really would expect security knowledge and a feeling of responsibility for the product's security to be pervasive throughout the whole team.


> we were able to get a fix out very quickly.

Is 6 months really what Signal considers quick for a bug that leaks private data?


Selective quoting?

"As soon as we were able to pick up a scent, it was all we worked on, and we were able to get a fix out very quickly."


It's not at all selective. This should have been "all they worked on" from the moment they got several confirmations, not from the moment people beat them over the head with data. If they couldn't fix it they should have pulled the app.

This is a company that aggressively markets itself to people needing privacy, and mistakes can ruin lives. And before you say it, they have tens of millions of dollars in funding.


Well yes, maybe they should have put more people on it from day one. But even though they have solid funding, that doesn't mean they can throw it out the window.

And non-reproducible bugs can be hard, even when you throw money at them.

But your quote was almost a textbook example of selective quoting, because you made it sound like they claimed a quick fix when it really took over 6 months. But they did not say this; they said that "once they pick[ed] up the scent" they delivered a quick fix. This is something very different.


"But even though they have solid funding, doesn't mean they can throw it out the window."

This is a product which is advertised as private, marketed extensively toward people requiring privacy. Knowing they're accidentally sending images to the wrong people is a HUGE, priority 1 problem.


"Knowing they're accidentally sending images to the wrong people is a HUGE, priority 1 problem."

It is. But I have no insight into all the other problems and bugs they have. Do you? There is never a guarantee of total safety. So should they focus all their resources on one problem that happens extremely rarely and miss a bigger problem that affects millions? I don't know if this was the case here. It might also be neglect, or not wanting to spread the image of Signal as being insecure while on a path of growth.


Does it change anything, though? They dragged out investigating this issue for months and only put major work behind it when they "pick[ed] up a scent". Although they knew about it from day one (one Signal staff member replied on the same day the issue was posted).


I find this quite concerning and am really wondering if there are any privacy advantages to Signal if these things happen.

Could you say something on:

1. Have any defense-in-depth measures been applied in the Android and other clients to make sure this issue does not happen again? E.g., an additional check when sending/encrypting a message to make sure it is absolutely meant for the person it is being sent to?

2. Why did it take 8 months to fix this if some users could reproduce it consistently?


One thing I wonder is: how could this happen at all? Given the E2E encryption in place, I would expect the incorrect recipient simply wouldn't be able to decode the image, since they have never exchanged keys with the sender?


Not a Signal developer, but I would imagine the same bug simply also encrypted the message with the eventual receiver's key (as opposed to the intended receiver's key), resulting in a message which the eventual receiver could decrypt but the intended receiver could not.


"How could this happen at all"

I remember that before Jelly Bean, Android would break up my SMS messages and send them to multiple people, and the Android alarm clock would drift by hours.

So how could this happen? Because software is hard.


It's not hard; it's bad. Do better.


Just wanted to say, thank you so much for your contributions. Signal is an amazing product, and things like these happen to the best of projects.


Not mentioning in this comment when the issue was fixed--two weeks ago in the code and days ago in production--is extremely dishonest. I have been contacted by literally everyone I have sent this to saying something akin to "dude says it was fixed quickly but they just didn't close the issue" despite my message with the link saying (exactly) "<- issue opened on december 3rd and only fixed last week", making me have to again point out "closed today, but only fixed in production last week" (fixed in the code two weeks ago, but that doesn't matter much)... at which point they are forced to do a double take and suddenly care.

I have worked on high-impact open source software--the iOS jailbreak ecosystem, writing some of the most core software for it, such as the mechanisms which support runtime code modification and which install the userland--with a much smaller team than Signal, and when you run into serious issues you need to disclose them, and you need to be super honest about the post-mortem. You shouldn't just kind of sweep the issue under the rug in the hope you can fix it before someone notices or it affects enough people to become a PR problem.

(On what is maybe a side note of my thesis here for a moment, but for completeness on the related issue of why I am so dissatisfied with these PR-like responses: you say it is "extremely rare", but not so rare that tons of people aren't reporting the issue happening at least to people they know; this is being used here as an excuse for why it was hard to find and fix, but is then being taken by some as "oh it was also unimportant": issues have to be additionally weighted by their impact, and this bug was clearly critical.)

The equivalent of this sort of thing I have run into is "there is like a one in a hundred thousand chance that you will experience catastrophic data loss from using my software", and I took those issues very seriously, as when you have tens of millions of users that's still a non-negligible number of people in the absolute: I considered every single person who would lose something like their camera roll on their phone to be a crushing defeat that I should internalize and take super personally, as I know the feeling of loss of important information and have enough empathy to assign it to my users.

(Hell: one time I actually hired someone to spend a bunch of time going back and building a tool that would take videos recorded by Cycorder (my video recorder for the original iPhone) that had been damaged by a bug I found in one version of the app that had led to some videos being misrecorded and lost--in a way that I think was even more random than "merely" if the power ran out while recording?--and repair them, to send to the almost no people who had taken videos of their family they realized only after an event weren't playable. This is different, of course, as this was after the fact, but a demonstration of the empathy I feel developers should have for their users: if I can do it with the tiny resources I had... anyone can do it.)

When you find reports of such an issue, you carefully track every single one of them down... but you also have a small time box, past which you need to disclose the issue to everyone: you put a large message on the download link of the product or update the homepage of the app to explain your status finding the issue and asking for help with leads, as it is important that people know that if this could affect them they can mitigate... maybe they don't send photos to anyone they couldn't afford sending to someone wrong, they use a different tool, or they switch to running Signal on iOS.

This doesn't seem to have happened? Hell: if anything, this issue seems to have been sufficiently boring to you that you didn't even close it the second you fixed it or keep people abreast in the issue on GitHub of when they could expect the fix you committed to roll into production. This is both an unacceptable communication style and level of empathy for a product trying to be as important as Signal (though sadly not terribly surprising on either count... it is this same lack of empathy that springs up when Signal has database corruption issues or lacks export tooling or spends its time undermining the wrong opponents or throws in a cryptocurrency--which I should want to celebrate as I am in that space!!--built on DRM tooling and without any warning or thought as to what it means for the one open source secure messenger... sigh).


> I was tracking it on a separate issue

Do you mind linking to that issue? Thanks!


[flagged]


> Edit: Anyone downvoting this also simply doesn't understand how serious this privacy leak really is. Shame on you too.

Wouldn't showing a warning suggesting not to post sensitive images while the bug is being investigated be better than straight up shutting down the service while solving the privacy issue?

Wouldn't closing the service have made people leave for other (probably worse) services, privacy-minded people and "regular users" alike, and have worse consequences for privacy than this bug?

(not currently a Signal user, and I did not downvote your comment)


It's pretty much impossible to distinguish rare bugs from user error if you don't have logging/telemetry.


Several years ago, when I worked at FB, I ran into a similar bug on an early internal version of a Messenger rewrite. Sent pictures to one chat, showed up in another.

My bug report on it kicked off an absolute maelstrom of dev activity and investigation. High level engineers showed up in the comments. Lots of immediate followup. The severity was clearly understood and resolving it was clearly prioritized.

I exclusively use Signal now, but the discrepancy between what I see here and what I saw there is pretty disheartening. This kind of bug is not only a massive privacy risk, but it also massively erodes user confidence and trust.


They've just posted an update saying that the issue was fixed on July 21. It's certainly good that it's fixed...

But that's still over 7 months before it was fixed, including a 2 month period where people were still bumping the issue asking for help with no response from maintainers (afterwards, the issue went quiet until ~2 weeks ago). And there was at least one other issue on the same problem a few months later that received no response [1].

I understand the team is probably understaffed given the vast number of open issues (1300+) they have, many with no response, and I can sympathize with the challenges of being a small team developing an app used by millions, but they probably need to figure out a better way to triage...

[1] https://github.com/signalapp/Signal-Android/issues/11137


It was fixed a long time ago and only closed recently; see the message from the dev.


No, the dev writes on GitHub that "this issue was fixed in 5.17 (which hit 100% production on 7/21)". Releases show 5.17.0 was released on July 15. They've also linked the commits that fix the bug - the fixes were committed 10 days ago.


I don't think Signal has many devs[0] and if you look at the contributors[1] you can see that Grayson is pretty much the only dev for the Android app. So seeing a second dev get involved is probably them freaking out.

[0] Personally I believe this is a big bump in the road for Signal, and it is why a lot of people are frustrated: promises about things like usernames (it is no longer early 2021), channels, and everything else. A few devs can only do so much. A dozen (maybe two dozen?) devs can still only do so much. How do you compete with other platforms like Telegram, which has hundreds of employees, or WhatsApp, with far more than that?

[1] https://github.com/signalapp/Signal-Android/graphs/contribut...


I mean, they invested a year into covert development of a crypto wallet inside Signal. Maybe that time could have been spent better.


From the commits I only really saw Moxie adding this and he hasn't been doing as much dev work in the last few years. So I don't feel that this took much away. It's hard to tell if it is a good move or not since Telegram and WA are both adding payments to their platforms and there is a need for feature parity. But regardless, MOB probably wasn't a good fit and we've seen no update since.

My complaint is more that Signal moves far too slowly. I'm not saying to move fast and break things; that's far from what I want. But I am saying maybe add a few more devs.


> But I am saying maybe add a few more devs.

Absolutely, then such an important issue probably wouldn't stay open for this long.


No, Signal does not get to play the limited resources card when they so firmly discourage 3rd parties from working on their project.


No, contrary to common belief, coordinating with multiple client projects to release features simultaneously and so on is not easier. The number of meetings does not drop, and the quality will not improve if you now have to check that multiple clients are safe. You won't magically get more eyes on your code when people are working on their code, not yours. And at that point you have to deal with people who think they have as much say as you do, because their fork is "equally important". And trying to explain to a non-cryptographer hobbyist why some change needs to be done, or why some feature can/should not be implemented, is not speeding things up.


Could you explain what Signal is doing to discourage contributions?


By not allowing 3rd parties apps to coexist with official signal app. (Using same servers)


Signal placing restrictions on who can use their service has nothing to do with whether or not people can contribute to the codebase.


It does. There is less incentive to work on a Signal client fork if it can't be used to interoperate with the Signal service.


That's a bit like saying there's less incentive to work on (for example) Elasticsearch, because you can't deploy your fork on Elastic Co's official managed service. It's nonsense.


There's a difference here between Elasticsearch and Signal, namely that the network effect is a very important factor with messaging apps.


Nor when they've received millions in grant money.


Not true


Bug report is eight months old now. I don't think they're freaking out much.


But the issue is fixed. Forgetting to close a bug report is different than not fixing the bug


True, but the issue was fixed in 5.17, which was released only 10 days ago [1]. For an issue opened December last year, that's still quite a lot of time before a fix could be found.

[1] https://github.com/signalapp/Signal-Android/commit/a47448b6c...


Try fixing a rare bug quicker without constant user metrics.


Yes, indeed.

This kind of bug is an argument for having metrics.


I'm not convinced. The bug is rare and requires a specific set of circumstances that not many people are going to hit. That is not an argument to collect metrics, or in other words, to change the entire paradigm of Signal (no collection of metadata). It does propose an argument for more audits, more eyes, and more care. But we do not expect Signal to be perfect, as no software is. Systematic failure, on the other hand, would create worry about Signal; an individual bug does not.


> I'm not convinced. The bug is rare and requires a specific set of circumstances that not many people are going to hit.

I don't think you would say the exact same thing if this happened to closed-source apps like WhatsApp or Discord and open-source apps like Telegram or Element. All of these apps have funding behind them and lots of resources to urgently address security issues when reported or discovered.

The same goes for Signal, and they knew about this issue and left it open and unfixed for months. They have $60M in funding, a fully open-source codebase, and full-time engineers working on the app, and the priority was a secret cryptocurrency project over a critical security issue.

Arguing over how 'rare' the bug was is pointless. There is no excuse for not prioritising critical security issues and leaving them unfixed for months, as these issues risk ruining their main selling point of privacy and security.

> It does propose an argument for more audits, more eyes, and more care.

Yet despite having a string of audits, it seems the priority for Signal last year was 'cryptocurrencies' and creating a new coin to be listed on an exchange, instead of fixing this 7-month-old critical issue that they knew about.


> I don't think you would say the exact same thing if this happened to closed-source apps like WhatsApp or Discord

You're right. Because I judge a project backed by a company worth hundreds of billions of dollars and with hundreds of developers differently than I judge a company with a few tens of millions and only a dozen developers. I'm not sure why any sane person would judge these with the same metric. 15 devs just can't do what 1500 can. I'm not sure why you think differently.


> Because I judge a project backed by a company worth hundreds of billions of dollars and with hundreds of developers differently than I judge a company with a few tens of millions and only a dozen developers.

Any project that can at least afford a string of external audits and proudly advertises multiple claims of high-quality security and privacy should be held to very high standards, especially if it is a serious project in security and privacy and not a toy or pet project.

Hence, I would expect all Signal engineers to be the best in their field, qualified to justify their compensation and to uphold these claims for Signal. The same goes for any serious secure messaging platform prioritising security and privacy.

The harsh reality is that serious projects and competitors with bold claims of security and privacy all get treated the same. No exceptions or passes. Otherwise it can't be considered a serious project or even recommended to users if they don't prioritise and fix critical issues urgently.

> I'm not sure why any sane person would judge these with the same metric.

So you're telling me that Telegram or Element are able to prioritise urgent and critical security issues much better than Signal could? Signal is a serious messaging app with bold claims of high-quality security and privacy, isn't it?


> So you're telling me that Telegram or Element are able to prioritise urgent and critical security issues much better than Signal could?

No, I'd say it is about the same actually. Telegram has a lot of hacks but HN doesn't throw a fit. A lot more serious ones, too. Signal never had an issue with leaking someone's physical location to any user (read: not a rare set of circumstances needed to reproduce). Besides, Telegram still isn't e2e by default, doesn't have e2e groups, and has no security audits. I'm not sure why this is in the same category as Signal. As for Matrix, well, it only recently enabled e2e, and the project is very small. Just because you don't know of a bug doesn't mean one doesn't exist. There's an old saying: "There are two types of software: those with bugs and those that nobody uses." (read: all software has bugs)


> Telegram has a lot of hacks but HN doesn't throw a fit.

It had the attention of HN. They seem to care about both Telegram's and Signal's flaws. Just as you highlight Telegram's 'security issues', there is no escaping Signal's 'security issues' being highlighted, and security researchers will do exactly the same. Once again, there are no exceptions.

> Besides, Telegram still isn't e2e by default, doesn't have e2e groups, and has no security audits. I'm not sure why this is in the same category as Signal.

I expect better from a 'secure alternative' that claims to focus on 'privacy and security' and that also proudly shows off its list of security audits. Despite all of that, they introduced their own cryptocurrency coin just to get it listed on an exchange and used in Signal, similar to Telegram's own cryptocurrency venture, which failed. [0] Combine that with the security issues in this post, one of which took half a year to fix, plus still using a phone number to log in, and it is no different from Telegram. They still haven't fixed this serious security issue either. [1]

The worst part of all of this is their prioritisation: instead of addressing these issues, they went in favour of creating a cryptocurrency coin just like Telegram, which most likely explains the 7 months it took to address that security issue. At this point, their claim of upholding privacy and security is already damaged by all of the above.

[0] https://www.theverge.com/2018/5/2/17312046/telegram-initial-...

[1] https://github.com/signalapp/Signal-Android/issues/10247#iss...


If this is the case, then we should just say:

Signal is not secure, because they have limited resources and cannot invest adequately in security.


Or perhaps we drop the pretence of anything being absolutely 'secure' or 'not secure' and have a more honest discussion about the different threats and where different products do better or worse? I'm sure WhatsApp is much better able to resource its security measures, yet being owned by Facebook and being closed-source diminishes its security in other ways.


I personally trust whatsapp, great product


This happens to me all the time in Messenger. Just locally, though. Like if I send an image and delete it from the phone, the app shows some other random image instead.


The bug seems to reuse images already present on your device, not send new images to other users.


The older software probably spoke XMPP, which meant people could just leave when it misbehaved. Signal has been against this from the beginning: it's against the ToS, and the owner has asked devs of alternative clients to stop developing them.

No "apps" are ever good.


> [..] his Signal randomly sending images to me that he didn't intend to, even without initiating the addition of any attachments on the GUI... he even sees one of my messages displayed on his side with a random image attached to it, as if i have sent that image to him, even though that image is not even present on my phone.

https://github.com/signalapp/Signal-Android/issues/10247#iss...

Yikes.

> [..] I've also recently had a probably unrelated issue where my mic was still audible to the other party after I hung up the call.

https://github.com/signalapp/Signal-Android/issues/10247#iss...

Double yikes.


> webworxshop opened this issue on 4 Dec 2020

Triple yikes.

Though it looks like the issue was finally closed minutes ago:

> Hi there, sorry, this issue was fixed in 5.17 (which hit 100% production on 7/21). There was another issue tracking this and it looks like I forgot to close this one.

Still, that's a lot of time for such a bug to exist!


> I've also recently had a probably unrelated issue where my mic was still audible to the other party after I hung up the call.

That one there is a cataclysmic security land mine. Absolutely unacceptable.

That was the last straw after [0]. I don't think I can recommend Signal at this time.

To Downvoters: So these bugs are all fine then? They are not security issues then? Not only having images being sent to the wrong contacts but also having the microphone still on after ending the call and being audible to the other party? That's fine right?

If this happened on any other messaging app, I would expect a massive outcry and urgency to fix these critical issues.

[0] https://news.ycombinator.com/item?id=27951076


Yes. Signal's only selling point is privacy. Both of these bugs are huge privacy breaches that kill its value proposition.

Which type of privacy breach is more likely to have tangible and direct negative effects on an average user's life - a nation state storing their communications in a database, Facebook graphing their contacts and using them for friend recommendations, or their friends/family/boss/acquaintances being sent random private photos from their phone and audio of private conversations they have in their home, without them knowing?

One of the main worries with companies having access to your unencrypted private data is that no matter how careful they are with it, it can still end up in the wrong hands. Signal is directly sending your data into the wrong hands.


I agree these bugs in Signal are serious. That said, your examples aren't great counters.

"a nation state storing their communications in a database" - The power differential and historic missteps of governments makes this ludicrous to think of as "OK" in comparison.

"Facebook graphing their contacts and using them for friend recommendations" - but, it's not just for friend recommendations and possibly more importantly, it's not just their users, is it? Not to mention it's ignoring the purposeful opinion-biasing they have openly taken part in to manufacture consent for any number of issues.

While these bugs are bad and should be prioritized for a fix, they are seemingly random. Sure, they can possibly be exploited (possible, though I haven't seen a proof of concept for purposeful exploitation), but random bugs vs. the clear and present danger shown by the current and historical precedents of governments and technocratic oligarchs? Methinks your trust may be a bit misplaced, or you're just being obtuse for the sake of obtuseness.


>The power differential and historic missteps of governments makes this ludicrous to think of as "OK" in comparison.

wasn't that kind of the point?


>Yes. Signal's only selling point is privacy. Both of these bugs are huge privacy breaches that kill its value proposition.

Absolutely agree. However, you should always look at the bigger picture

a) How was the issue handled? What was the priority? Did they try to downplay it? Was the type of vulnerability patched altogether?

b) If a) merits a change, what's the alternative with better overall security, UX, existing userbase, and track record?

Personally, I'd rather take a product with good public incident handling track record, than one without anything on public record.

>One of the main worries with companies having access to your unencrypted private data is that no matter how careful they are with it, it can still end up in the wrong hands. Signal is directly sending your data into the wrong hands.

The categorical label of "wrong hands" is unnecessarily ominous. A company with access to your private data can lead to that data being sold, or stolen by nation states / organized crime. Your nudes / sensitive documents ending up with the wrong friend on Signal is less dangerous, although it can be much more embarrassing. Your peer probably isn't going to sell them to the highest bidder (or was it the case that the recipient could be any Signal user? IIUC that was not the case).


Laurens Cerulus @laurenscerulus Feb 20, 2020

News: The EU Commission told its staff to start using @signalapp to chat to friends and contacts. The move is part of an effort to fix the holes in EU cybersecurity. Story (for Pros for now) https://twitter.com/bendrath/status/1230455295018766337


Why on Earth are people downvoting you? This is an absolute dealbreaker for any messaging app, much less one whose raison d'etre is privacy and secure messaging.


Probably because the common mindset here is that anyone can make a mistake, and that the person who did it learned their lesson and will never do it again.


And to verify, the "mistake" is seeming to not actually care that this was a serious bug for 7 months (cough while they launched MobileCoin)? That is an attitude issue--and one endemic to Signal (which, most charitably, simply doesn't have the resources to sufficiently care sometimes)--not a "mistake" I expect to be easily rectified.

(And as I mention on a nearby post: you don't have to fix it to widely disclose it; like, you don't work in the dark to fix an issue like this for seven months as, even if it were your "top priority": you quickly time box it and then disclose the issue so people can mitigate their exposure or help better crowdsource finding the information you need to fix the issue.)


It also sounded like a very difficult bug to track down, even as a top priority, requiring a combination of certain settings plus a rare database ID intersection.
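To get a feel for how a database ID intersection can happen at all: SQLite (the usual local store on Android) may hand a deleted row's id to a new row when the primary key is a plain rowid alias without AUTOINCREMENT, so any stale reference to the old id silently resolves to new data. A minimal sketch, with an invented schema that is not Signal's actual one:

    import sqlite3

    # Illustrative only: the table and columns are made up, but the
    # rowid-reuse behaviour is real SQLite semantics.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE attachment (id INTEGER PRIMARY KEY, path TEXT)")

    db.execute("INSERT INTO attachment (path) VALUES ('holiday.jpg')")
    stale_id = db.execute("SELECT max(id) FROM attachment").fetchone()[0]

    # Trimming deletes the row, freeing its id...
    db.execute("DELETE FROM attachment WHERE id = ?", (stale_id,))

    # ...and the next insert is handed the same id for different content.
    db.execute("INSERT INTO attachment (path) VALUES ('tax_return.png')")

    row = db.execute("SELECT path FROM attachment WHERE id = ?",
                     (stale_id,)).fetchone()
    print(row[0])  # 'tax_return.png' -- the stale id now names the wrong file

Anything still holding the old id (a cached message record, say) would now point at someone else's attachment.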

Combine that with not logging user behaviour heavily, for privacy's sake, and this becomes a very tough one to replicate.

All of which was addressed in the bug report.

The realities of software development on a large scale with a privacy focus are sometimes hard to grasp. Although I do admit 7 months for a production release is quite a long time, even factoring in the pandemic and mobile app Play Store release cycles.


So, did they disclose the issue? Were people using Signal warned somewhere that this was a known issue that they were hunting down? (I am guessing not as no one here has been like "oh yeah: everyone using Signal knew to be careful with this feature".) It being a difficult bug to fix doesn't mean that's your only recourse for something this serious.


Does any app push bug report notifications to users? Should Microsoft Windows or Google Chrome warn users every time there’s a bug that can compromise their whole system just by visiting a certain website or downloading random pieces of software only a tiny subset of users will ever be exposed to?

I get the motivation with a security/privacy critical app like Signal but this would also be a UX and customer support nightmare that IRL could grind a project to a halt.

Not to mention expecting users to know how to balance the risks of said bugs vs not using the app at all because they were scared off it. Back to using far less secure options.

I think having public forums to report and track the bugs for more advanced users is probably the right balance.

The better solution is internal fixes and triaging the serious bugs appropriately so they get the attention they need. Instead of just offloading highly technical information barrages to average users.

Temporarily blocking features until a patch is released is something that could make sense, but again, only in certain circumstances. You can turn off photo sharing here, but in other cases it's not so straightforward without crippling the entire app for a rare bug. It's a difficult balancing act without a uniform solution.


Usually when there are serious bugs in Windows, users do get notified.

Latest example: https://msrc.microsoft.com/update-guide/vulnerability/CVE-20...

Released Jul 1, 2021

Workarounds:

Option 1 - Disable the Print Spooler service

Option 2 - Disable inbound remote printing through Group Policy


Let's be honest: if Telegram or WhatsApp made that mistake, all of that mindset would beat it to death and then jump on its corpse for three days straight.


And bring it up in every conversation even slightly tangential to either of them =)


So which app would you recommend?


Matrix.


This isn't a valid argument.


Yeah you see that question mark at the end of the sentence? It's a question.


Stopped doing calls on Signal after my Android contacts started telling me that they see active calls minutes after I successfully hung up from my side (iOS). Not to mention the countless UX bugs.

I’m back on WhatsApp and not telling anyone anymore to move to Signal or any app whatsoever. I’m done.


The first issue was fixed and just closed, and it seems like it was very difficult to track down.


It's fixed in 5.17, and that's the release number I see on the Google Play Store.

Unfortunately not on my Ubuntu 18.04 LTS, though this is in no way Signal's fault (but maybe the desktop version doesn't have that bug?):

    $ apt-cache policy signal-desktop
    signal-desktop:
      Installed: 5.10.0
      Candidate: 5.10.0
      Version table:
     *** 5.10.0 500
            500 https://updates.signal.org/desktop/apt xenial/main amd64 Packages
            100 /var/lib/dpkg/status
         5.9.0 500
            500 https://updates.signal.org/desktop/apt xenial/main amd64 Packages
         5.8.0 500
            500 https://updates.signal.org/desktop/apt xenial/main amd64 Packages



I don't think so:

    $ apt-cache policy signal-desktop-beta
    signal-desktop-beta:
      Installed: (none)
      Candidate: 5.11.0-beta.1
      Version table:
         5.11.0-beta.1 500
            500 https://updates.signal.org/desktop/apt xenial/main amd64 Packages


It was closed 4 minutes ago; when was it fixed? ETA: Ah, 4 days ago (more than half a year after it was opened).


While I suppose the protocol is not at fault and it's a UI and client bug, it's still a huge problem.

Just today I was thinking "it's been weeks since they moved the GIF button to a different place, but there's still the old button at the old place, and when you click on it there's a pop-up saying 'wrong, the button is somewhere else now'".

Why even keep the old button in the old place?

And it led me to thinking "what else could be wrong/buggy in the UI and the UX that is not obvious to them?".

edit: according to this comment https://news.ycombinator.com/item?id=27951648 there is only one dev working on the Android client? Hats off to that person, it's incredible.

So I should have written: And it led me to thinking "what else could be wrong/buggy in the UI and the UX that they haven't had time to catch and fix yet?".


> Just today I was thinking "it's been weeks since they moved the GIF button to a different place, but there's still the old button at the old place, and when you click on it there's a pop-up saying 'wrong, the button is somewhere else now'".

That's actually a UX FEATURE. Google Maps did exactly the same thing when they reorganized and moved the toggle between the maps/satellite/traffic layers.

The answer is pretty straightforward: Users get used to a certain UX. If you do a reorganization (to introduce new features, to improve performance, whatever), and move a button, a meaningful percentage of your users won't read the blog post, the release notes, or hunt for a new "Intuitive" location for the feature. They'll just assume you removed the feature and either panic or get mad.

Leaving the old button in the old place is actually kind of a clever way to "Deprecate" a UX feature.


Yep, this was the intent. It's a relatively common pattern. We kept it that way for a few releases so people could see it, and we removed the button in the latest release.


I agree. I'm an app dev myself, and the weird Maps gesture relayout got me dozens of times... Can't imagine how confused my mum or some not-phone-savvy person is.


> Leaving the old button in the old place is actually kind of a clever way to "Deprecate" a UX feature.

That would drive me demented in 5 minutes.


Which would force you to learn the location of the new feature, and in the next version the button would be gone.


It has certainly not trained me to reach for the new way since it was introduced, considering I strangely still reach for the familiar and visible old way every time, despite being treated with "lol, no". But I may be the exception.


Yeah, I actually like this UI feature. I would have been very frustrated if they had just changed it over on Google Maps. I use Timeline all the time and it was important to me.


This underscores why it’s important to allow third party clients to connect. When only the first-party client is allowed, the failings of its UI drag down the core, too — it doesn’t matter how good the core is if it’s permanently mated to a half-baked UI.


I'd go further than this.

It underscores why open standards and protocols are essential for a more secure world.


With third-party clients, you don't avoid this. The failings of a third-party client will still reflect on the whole of the core product as well. Articles would still talk about a bug like this as being a Signal issue, even if it were limited to a third-party app.

Additionally, I find it quite a stretch to call the app “half-baked”. I think it’s pretty great.


I don't see this happening. As long as the buggy app doesn't have more than 33% market share, I doubt the media would write it down as the system as a whole having the bug. Though I am regularly disappointed by lazy tech articles from otherwise good media.


And when third parties can connect, the protocol can't evolve, because every change starts as "good to implement" and it takes an enormous amount of time, resources, and influence to make it "mandatory to implement". As always, it's a delicate balance between security and ease of use, and Signal has always been up front in favoring the former.


That would be the case if they’d standardize the signal protocol. Letting third parties connect has nothing to do with that. Signal can still change their API any time they want.


I replied in another comment, but letting third parties connect widens the surface area of the potential attacks. If that bug happened in a third party, from a security pov you must block them from accessing the service, at which point you need to decide if you want to police every single possible client or focus on one and make it widespread.


And Microsoft could in principle choose to do a hard break with all backwards compatibility in the next version of Windows. But they won't. Eventually, the third-party base may be so large and varied, that it becomes quite painful to actually make such a change even if the letter of the law says you can.


In which world does Microsoft conform to open standards or protocols?

Your point simply doesn't make sense.


Microsoft's Windows APIs are, in some sense, an "open platform".

They don't accept third party contributions, but they are thoroughly documented and their whole purpose is to support third party developers.

Because that's their goal, they won't ever break that backwards compatibility.

That, I think, is what the GP was getting at - the Signal team does not want to wind up being constrained in the ways that MS is by supporting third-party developers.

Making their protocol an open standard would have that effect, because they could no longer unilaterally change the protocol as they see fit. They would be constrained by the need to support and think about all the other stakeholders who rely on the standard.

If you control the client, the server, and the internal-only standard, in really bad situations, you can just push out an update that fixes the apocalypse in a backwards-incompatible way and drop support for all previous clients.

This is not hypothetical - see the KRACK attack from 2017 for an example where an open standard was found to have a security flaw (https://www.krackattacks.com/).

We all got very lucky that the flaw could be mitigated in a backwards-compatible way.


Yes, this is about right. I was struggling to express the idea. Thank you.

Win32 is an open API in a certain sense. Of course it's not open as in open source as in consensus and cooperative standards as in FLOSS, at all. But. It's well documented. Anyone can target it. It's stable.

Textbooks on Windows Vista programming 15 years ago are almost completely applicable to Windows 10 development. Even software that was targeted at a reasonable subset of the Windows 95 API, whether binary or source, is still going to run on Windows 11 without change in many cases. You get notice of what parts of the API they're gonna break in the next version (usually) too.

In the olden days, this is what an "open platform" usually meant. It's what the open in OpenWindows, OpenUnix, OpenStep was about: an open API. As opposed to a possibly undocumented, unstable API you only used under special contract and arrangement with the supplier. Often there was no actual prohibition (legal or technical), but there'd be no support, and it'd break randomly without notice.

As you say, it's a potentially enormous commitment, and organizational issue, to provide that sort of long-term API stability. And you can box yourself in regarding development and design choices.


This doesn't have to be the case. Look at how Stripe does it with their API, which would be a disaster if older versions just stopped working. Versioning is doable even with chat apps.


But Signal isn't just a chat app; it's an app with a very strong focus on security, and you can't have backward compatibility with security. Otherwise you end up with some servers still implementing SSLv3 years after its due date, or GPG with settings that make it insecure by default. You must force everyone in the ecosystem to use the latest version of the API, but even that is not enough: if there's an issue with the client (the issue in the article could happen with a third party), you must find a way to force it to upgrade; if they don't want to, it's better to block them, but at that point, if you need to choose where to best spend your resources, you might as well block everyone else.

Opening the service to other clients widens the potential surface area of attacks. It must be considered with a lot of care.


> You must force everyone in the ecosystem to use the latest version of the API

Couldn't Signal just announce a "flag day"[0] in advance and say that their servers would block connections from clients that don't support a specific version of the API by that date? For non-essential upgrades, the API change should be announced well in advance, but client developers might be given just a few days' notice before security patches become mandatory. (Given the circumstances of this bug, though, perhaps several months of leeway would be more fair.)

[0] https://en.wikipedia.org/wiki/Flag_day_%28computing%29
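The gate itself is the easy part; the hard part is the ecosystem coordination. A minimal sketch of what server-side flag-day enforcement could look like (the version format and cut-off here are invented, not Signal's actual API):

    # Hypothetical flag-day gate: reject clients older than the announced
    # minimum version. Names and numbers are invented for illustration.
    MIN_SUPPORTED = (5, 17, 0)  # e.g. first client release containing a fix

    def accept_client(version_header: str) -> bool:
        """Parse a 'major.minor.patch' string and enforce the cut-off."""
        try:
            version = tuple(int(part) for part in version_header.split("."))
        except ValueError:
            return False  # malformed version strings are rejected outright
        return version >= MIN_SUPPORTED

    assert accept_client("5.18.2")
    assert not accept_client("5.16.9")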


They could, but again it's not just about the API; the client itself must also be secure enough. That means enforcing security in some way on third parties, or just blocking them until they've solved all issues. And in practice, if you let people migrate at their own pace, they just won't migrate until clients complain.


It's not at all clear that the messaging ecosystem would be better if Signal could say "We won't let you send this message to your friend because we think their client isn't as secure as ours."

I'm sure there are cases where a third party client would be less secure and Signal would be justified in refusing to relay messages to/from that client, but what about situations, like the current one, where it is Signal itself that is insecure? To be consistent, shouldn't Signal have to block their own client until they fixed the issue?

Even if you accept the idea of Signal having an (inconsistently applied) veto over which apps are allowed to use its infrastructure, how far do you take it? Should they deliberately brick official Signal clients running on versions of iOS or Android that are deemed insecure? Should they require that the phone manufacturer is "trustworthy", so that it doesn't risk containing Chinese government spyware?

Taken to extremes, Signal would need to come with its own antivirus scanner or support some sort of remote attestation of all the versions of all the software running on the phone, to prevent messages being leaked through side-channels. And what would that achieve, other than pushing people to use other, less secure apps?

I think it is a dangerous distraction for Signal's threat model to worry about anything other than making their own app secure, and making sure that the protocol supported by their servers is secure.


This is the same app that only 6 months ago had an outage lasting more than a full day[0] - an outage that to my knowledge remains unexplained. The protocol is one thing but these clowns are obviously not careful or reliable engineers.

[0] https://www.theverge.com/2021/1/17/22235707/signal-back-app-...


Have you ever experienced millions of new users starting on your service in the space of a few days?

Predicting the future is hard.

Calling a team "clowns" because they didn't make the right guesses when they did capacity planning and had a day with downtime as a result of meteoric growth seems unfair to me.


I love Signal and have advocated for its use. But I have to say that this issue is trust-breaking.

I lovingly forgive the occasional bug or unpolished feature and I understand that the team behind Signal are human and that programming is hard. But sending messages to the wrong people is very high on the list of things a messenger should never ever ever do!

Having an issue like this remain open for 7.5 months hints at a systemic issue, which is probably related to Signal being underfunded/understaffed. But regardless of the reason, and of everyone's good intentions, the fact remains that similar issues can and probably will happen again, and may again take months to fix.


FWIW the problem did not remain open for 7.5 months (GitHub issue did, but not the problem). The dev is in the thread and explains.


Bug reported 2020-12-04, fixed release tagged 2021-07-15 (if I'm identifying the commit correctly, same day the fix is merged, which one would hope for a high-priority bug like that). That's technically 7.3 months, not 7.5, true, but ...


Signal has been adding lots of silly social-media-like features lately; it's not surprising that they are messing up the core value prop. I'm shopping for a new encrypted messenger. They used to say every program expands in scope until it can read email; now every app expands until you can add Snapchat filters to your selfies.


Matrix is a solid replacement. Element isn't as easy to use but it's coming along. Quality-of-life features normal users expect like stickers, gifs, etc. are woefully lacking, but the important stuff (y'know, actual messaging) is solid.

The most important thing to me is if Element screws up like Signal and starts pushing a shitcoin, I can swap clients without affecting my network.


Also, Matrix supports other client implementations than Element, like https://fluffychat.im/


Fluffychat is my go-to for getting normies on matrix.


I can't possibly pitch that name to anyone but a few preteens. Even teenagers won't be interested.


Matrix is anything but a Signal replacement. It's convoluted, with really questionable UX for personal communication.

I think Signal (in its current form and direction) is a lost cause, but so is Matrix.

Matrix is anyway chasing Slack, not WhatsApp. I think that's a smart move.


Matrix is a protocol, Element is a Slack-like app on Matrix.

It sounds like you have issues with Element as a replacement for Signal, which I totally get, but I think it's worth distinguishing Element from Matrix.

Element being the only full-featured Matrix app hopefully won't always be the case and it's Matrix's express goal to change that. Element is a single reference implementation -- if it's successful it could spawn many others supporting different UIs and use-cases, which we're already seeing with Fluffychat sporting a Signal-like chat interface.

I'm pushing my family and friends to use Matrix because that's the direction I want the world to go (open protocol with many different clients and servers communicating), not because we're already there today.


I should have been clear about that. Yes, I am aware that Matrix and Vector>Riot>Element are different.

In fact, Matrix as a protocol is even less of a Signal replacement. People get perturbed by a tiny amount of sign-up friction, never mind the idea of self-hosting. But if you assume, as I did, that for the sake of considering Matrix/Element as a replacement for Signal we stick to the matrix.org server, then that essentially makes Matrix comparable as a Signal replacement.

Also, when people talk about Matrix being adopted by the masses, they are usually talking about Matrix on matrix.org (or that's my understanding, which may be incorrect as well).


> Also, when people talk about Matrix being adopted by the masses, they are usually talking about Matrix on matrix.org (or that's my understanding, which may be incorrect as well)

No idea how you got that understanding. With respect to adoption the server does not matter at all (though many consider it advantageous if it is not matrix.org)


It's not enough for Signal to work for tech people. You have to be able to convince your family and friends to use it; it's a network-effects problem. They are adding features so that ordinary people can have private communications.


ordinary people don't give a flying f** about being able to send someone cryptocurrency via their messaging app.


Though many of their competing apps support this feature. Facebook, Apple, WeChat, etc.


But everyone wants to send money or money-equivalents. I use Venmo almost every other day.


But why roll that in to a chat app? Why not use your bank's app (or if you live in a country with a 19th century banking system like the USA, something like Venmo)?


I guess because your chat app is also your contacts management app, even if by accident.


Because I have performed out of band verification for contacts in my chat app.


If that's the case, why did they remove SMS import?

The pitch went from "just install it, you will be more secure and won't notice the difference" to "you will lose all your messages".


And its MMS support is both gravely broken and has been declared a non-priority (I don't have the link handy, but also don't care if folks think I'm lying)

I don't know that this incident will cause me to uninstall Signal, but it for sure is going to get my recurring donation cancelled

If I fix the MMS for my carrier *again* and then that fix is ignored or rejected, that's when I'll uninstall it


I use delta-chat (chat over IMAP) and it's fantastic.

Decentralized / federated e2e chat, running on the Internet’s most well-known, resilient, universally supported, self-hostable infrastructure: email.


How does Delta deal with new chats, fallback, and other standard email client stuff? Does it allow mixing encrypted and unencrypted messages? What happens if I use an alternative mail client, will I still be able to read email from people after the project dies and the clients stop working?

I don't think people will appreciate it when I suddenly start using email as a standard communications method, but it's worth a shot. The client looks like a copy of Telegram (a good thing, in my opinion) and everyone already has email, so I'm willing to give it a shot, I guess. I'm currently in the Matrix camp, but it's not like that's a protocol many non-techs use in the real world.

I'm just wary of using email for this because of all of the previous failures to secure email, like PGP, S/MIME and variations thereof.


> Does it allow mixing encrypted and unencrypted messages? What happens if I use an alternative mail client, will I still be able to read email from people after the project dies and the clients stop working?

Yes. It uses the Autocrypt standard, which exists independently of Delta Chat and has standalone software, plus plugins for various email clients. I use a plugin for mutt so I can read my Delta Chat messages without the official client. You can issue new key pairs or turn off e2e as you like.
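For anyone who hasn't seen Autocrypt in the wild: the key material rides along in a single mail header on every outgoing message, roughly like this (the address is a placeholder and the base64 keydata is truncated):

    Autocrypt: addr=alice@example.org; prefer-encrypt=mutual; keydata=mQGNBF...

Receiving clients remember the key they saw for each address (trust on first use) and can then encrypt replies.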

> I don't think people will appreciate it when I suddenly start using email as a standard communications method, but it's worth a shot

Fair, though perhaps technically simpler than getting others to download Yet Another Chat App™.

> ...previous failures to secure email, like PGP, S/MIME and variations thereof.

Did PGP and S/MIME fail? Services like ProtonMail use that tech to great effect. IMO it never hit the mainstream because mainstream mail providers etc. want to read your email. There's some argument about usability, but after using ProtonMail I don't buy those arguments.


> Did PGP and S/MIME fail? Services like ProtonMail use that tech to great effect. IMO it never hit the mainstream because mainstream mail providers etc. want to read your email. There's some argument about usability, but after using ProtonMail I don't buy those arguments.

In my personal experience, I've never seen anyone use PGP extensively for more than a month or so. The lack of PGP in most common mobile mail clients certainly doesn't help; switching email apps is annoying.

I don't think wanting to read your email is the main reason companies haven't implemented PGP. It's perfectly possible to do PGP server-side for free mail accounts like Gmail or Outlook, with proper PGP support in Outlook, Thunderbird, Apple Mail, etc.

PGP is complex, especially for people who don't know cryptography, and the lack of cloud sync of private keys and central account management makes it much harder to use than modern chat applications even for non-novice users.

I've never used ProtonMail and I don't know anyone who does, so my experience may not be representative. Then again, the fact I don't know anyone who uses ProtonMail might also indicate that PGP still hasn't gained that much market share.


It seems to use E2E encryption with some trust-on-first-use protocol called Autocrypt:

https://delta.chat/en/help#encryption


I really like the idea of Delta Chat, except that I cannot bring myself to allow it access to my full IMAP mailbox. And most providers do not offer a way to scope access to only selected folders.


You can use a separate mailbox just for delta chat


Sure. But that throws away the advantage of already having an identity.


Every time something like this comes up, I say something like, "Who wants to switch to Matrix (ie, Element, and before that, Riot Chat)?"

But then, I myself don't end up doing it, largely because of the network effect on Signal.

I think we need to just remember to always keep 3-5 of them open so we can have some horizontal evolution.


I've tried Element. I think Signal is probably easier to set up and use for most non-technical people in comparison.


Personally I want to have secure communications with _all_ my friends and family, not just the paranoid tech nerds and people I can pressure into it.

If this is what the masses want, give it to them. Encryption only works if people use it.


Threema seems great so far. Not many users though, sadly. Element (matrix.org) is good as well, but it's unpolished and also lacks users.


Threema could get on Matrix and it would be awesome - or federate with it. Maybe Wire as well. Such excellent apps.

However, finding contacts just by phone number is really a killer feature.


Matrix, in non-federated mode, is by far the toughest messaging protocol to crack.

It's still dependent on proper UI engineering, though, which is where Signal failed.


>by far the toughest messaging protocol to crack.

Citation needed.


It's necessary if you want to attract mainstream users, who could not care less about e2e security but value stickers, filters, and stories above all else.


I hate the new ability to add emoji responses to message chat bubbles. It has turned my conversations (especially group ones) into a Facebook like experience where everyone expects a cry face emoji or heart on everything they say. It gives me that same feeling of dread I used to get when I had a Facebook. I just want to send and receive text messages, not be engaged constantly to my phone.


Which makes it like using Slack!

I for one would rather they spent less time on these ‘features’.


The only possible competitor (from a network effect perspective) is Matrix/Element.

The UX is completely unpolished and at least 5 years behind Signal.


While I hate these features as much as everybody else on HN, they are likely necessary if they ever want the app to go mainstream.


Threema is worth a shot. Signal looks promising but isn't quite there yet.


Crap, I meant Session looks promising, not Signal.


Delta Chat starts from reading email and then works backwards: https://delta.chat/en/


Cool, my contacts can keep using their gmail account and leak all communication metadata to Google without me having any say on that if I want to talk to them. Where can I sign up?


Maybe gently encourage your contacts to use another provider?


Nah, it's easier to get them onto Signal when a large portion of their peers is already there. Plus Signal is more secure than Delta Chat, which isn't even forward secret, isn't audited, and uses 30-year-old PGP.


Is Wire still around, and does somebody know if it's good? I remember reading about it because it used Haskell, but I never tried it out.


I used it and found it excellent, fully featured (chat/voice/video), use phone or email as identifier, and supported on all devices (iOS, Android, Mac, Win, Linux).

I don't understand why it didn't take off - seems like a modern version of Betamax/VHS to me.


They can't even be bothered to do 2FA


Which messenger apps can be bothered to do 2FA?


Signal requires a PIN and an SMS code. WhatsApp has the option to add a password in addition to the SMS code. There are probably others.


Wouldn't XMPP fit the bill?


Several years ago I had this same issue occur in Facebook Messenger. I was using a pretty slow outdated device even for the time. I went to take a picture to send with the in-app camera. I actually pressed send before the picture rendered on my screen and somehow what was sent was not the picture I took, but a picture of some man's forehead who neither of us had ever seen before. It seemed like a pretty huge bug that could be a serious problem if anyone could reliably recreate it, which I could not. I went about trying to report it but ran into so many problems and broken links searching for Facebook's bug reporting that I gave up. Here's hoping it's been fixed, though I haven't used Messenger for at least a few years now anyway.


It also happened to Skype around 2011ish. IIRC it was so frequent that I simply stopped using the software until a fix was released.


so that's where the picture of my forehead went to. give it back!



Signal is becoming a joke that we should reconsider using, and it now has dangerous bugs that come close to compromising people's privacy.

Has this app/service really been audited properly?

We now need to consider serious alternatives to get behind, like Element [0] or Session [1], but I am open to user-friendly alternatives other than Signal (at worst, even Quill [2] or Delta Chat [3]).

[0] https://element.io

[1] https://getsession.org

[2] https://quill.chat

[3] https://delta.chat


> Has this app/service really been audited properly?

Yes, repeatedly: https://community.signalusers.org/t/wiki-overview-of-third-p...

Edit: that said, this did make me revisit a question I asked signal via their Careers portal a long while back. Reposted here: https://news.ycombinator.com/item?id=27952315


And yet serious bugs like this slip through the net. A simple benchmark for any chat app is that it must not show messages to the wrong people, which is exactly what Signal is doing.

I would expect an app with repeated audits to have had this bug fixed already.


At least the main audits are clearly described as auditing internal components, so it's not surprising app-level errors aren't covered by them.


Yet this bug was left open for months while users were experiencing this privacy issue.

How can I recommend a chat app that does this and still claims to be privacy-focused? One that does not respond to urgent bugs with urgency?


So what ingenious method should they have deployed to prevent all programming errors beforehand, and how should they have handled it? Advise users to fall back to non-E2EE SMS?

The team deployed logging as fast as they could, successfully detected the issue as soon as it happened again, and deployed a fix as fast as possible. What should they have done?

If you only recommend chat apps with a perfect track record, you're basically recommending chat apps with an internal policy of not disclosing vulnerabilities, and ones that downplay any revealed vulnerabilities.


I don't think anyone takes issue with the fact that the mistake was made or that it was really hard to track down. Shit happens.

How you handle it is everything. No communication and no warning to users about a critical bug that substantially risked their privacy for 7 months is unacceptable for an app that calls itself secure, full stop.


I'm not saying you should recommend Signal, just pointing out that "there are audits, why does it have such bugs" doesn't tell the entire story.


> just pointing out that "there are audits, why does it have such bugs" doesn't tell the entire story.

So? Isn't that the point, though? Shouldn't regular audits have caught this issue? I thought this being open source would have made that even easier.

Which leads me to believe a team with ~$60M in funding was unable to fix this issue with any urgency.

Remember, this issue was open for half a year with users noticing it. No matter how you slice it, this issue does not give me any more confidence in Signal being secure.


>So? Isn't that the point, though? Shouldn't regular audits have caught this issue? I thought this being open source would have made that even easier.

You have it the wrong way. Testing, audits, and open source are all best practices. They should be done. None of them are guarantees of security.

Open source is not a guarantee of finding all bugs; it's a necessity, to allow anyone to look for bugs (and backdoors).

Audits cannot be passed; they can only be failed. Kind of like how RNG tests cannot be passed, only failed. Example: use SHAKE256 to extrude a keystream from the fixed initial value 0x00. It will not be secure, but it will pass any statistical test.
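To make that concrete, here's the two-liner with Python's stdlib XOF: the output will sail through statistical randomness tests, yet anyone can regenerate it, so it provides zero security.

    import hashlib

    # Keystream extruded from a fixed, publicly known seed: it looks
    # statistically random but is cryptographically worthless.
    keystream = hashlib.shake_256(b"\x00").digest(64)
    print(keystream.hex())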

>this issue does not give me any more confidence in Signal being secure.

No application can actively prevent a bug like this. As the author of a high-assurance comms system, see what I wrote under its threat model:

"If hardware such as computers/optocouplers user has bought is pre-compromised to the point it actively undermines the security of the user, TFC (or any other piece of software for that matter) is unable to provide security on that hardware."

This also applies to software issues that actively undermine the security of the user. So the thing is, a software bug that outputs sensitive data to the wrong contact cannot be absolutely prevented. You would need a friendly MITM-guard node running a Google-grade image recognition algorithm to detect that you're trying to output a legal document to the wrong client, or a nude to not-your-SO.

Again, bugs are unavoidable; what matters is the incident response, and whether Signal is actively trying to protect you from everyone, including themselves.

Another PoV: If you punitively fire people that get caught in social engineering pentests, you're replacing a person who now has real-life experience with social engineers, with someone who may or may not have such experience.

Sure, if the person fails multiple times, it's time to let them go, but Signal's reaction is an indication of a good employee who takes personal responsibility for making sure it won't happen again.

I'm extremely careful about what I recommend, and I have serious trouble agreeing with your assessment that a rare bug's report staying open for six months is of serious concern. It wasn't being sat on for six months, but you're very keen on giving that idea. Would you care to elaborate?


Completely irrelevant.

Nobody mentioned anything about 'guarantees'; this is a matter of urgency and priorities.

I don't care if this was a 'rare' issue. Signal knew it was open for half a year, and what were they doing? Testing cryptocurrency payments.

If security was really that important to Signal, where was the urgency there?

If this was any other app that did this (especially Facebook) you'd rain down on them like a ton of bricks.


>If security was really that important to Signal, where was the urgency there?

If the cause is a random database key collision, you obviously can't discover it immediately. You have no idea what's causing it, so you have to add logging.

>and what were they doing? Testing cryptocurrency payments.

Yeah I'm sure they just decided to abandon their core value because they wanted to hurry a feature they had advertised to no-one, and were thus in no rush to deploy.

If this were an actual issue, I wouldn't care if it were my own app; I would pour a truckload of bricks on top of it.


The bug wasn't open for months; the dev just forgot to close the issue, and he's in the thread.


Where is the original issue for this, if that is the case?

Otherwise, I see Signal developers closing duplicates [0] in favour of the main issue [1], which leads me to believe it was open for months.

Even if the bug was fixed on 7/21 on both the original thread and this main one, the issue was still open for months.

[0] https://github.com/signalapp/Signal-Android/issues/11137

[1] https://github.com/signalapp/Signal-Android/issues/10247


He forgot to close it 10 days ago.


The protocol has been reviewed plenty of times. The rest of the app has not. At least according to your list.


Quill? Why would we want to use Quill, which needs all of our private data?

Have you not seen, over at the Apple App Store, what the Quill app is sucking off of your phone?


Which is why I said "at worst". Please read.

Could you recommend a better chat app that is user-friendly enough for regular people, that is not Signal or WhatsApp, and that is cross-platform?

Quill are at least working on E2E, are not introducing a cryptocurrency like Signal, and don't require your phone number.


>are at least working on E2E

So they're in the process of moving it from 1995 to 2004. That's great!

If all you have against Signal is an opt-in payment feature and a username issue that's being worked on, but you want to offer as a solution a product that is, for now, completely insecure by design (but it's being worked on), you're in dangerous waters. This is especially condemnable because the issue with Signal here is confidentiality of sensitive data.

You'd replace a one-in-a-billion database key collision problem with 100% of content leaking to the service provider, one that literally offered the Telegram defense: "The AES256 key is on a DIFFERENT computer." It's not. By definition of how computers work, it cannot be. The database key sits in the RAM of the database server doing the database commits. The CPU can't perform AES operations without the key, and the key isn't being quantum-teleported from another machine's RAM into the registers of the computer doing the encryption. These guys have no idea how computer security works, yet you deem them worthy of your attention. This makes me question your expertise on the subject matter too.


Calm down, I only said "at worst" for Quill; it seems like now I have to question your reading skills.

> If all you have on Signal is opt-in feature for payment, and an issue of usernames that's being worked on...

Opt-in or not, I don't think I want cryptocurrencies in my chat app. Look what happened to Keybase after that. Usernames of some kind should have been there from day one; we don't need any more phone number leaks.

Also, don't forget that Signal cannot end calls properly, and the recipient is still able to listen after the call has ended. Very bad. [0]

[0] https://github.com/signalapp/Signal-Android/issues/10247#iss...


That is idiotic. It's strictly, objectively worse than Signal and not even acceptable in the worst case.


How can you be so sure if you haven't tried it?

I'd rather use a chat app that takes security matters seriously and urgently and will eventually have E2E.

What's more 'idiotic' is prioritising bolting on a cryptocurrency [0] over fixing urgent security issues, leaving them unfixed for months, all while claiming to be private and secure, and requiring your phone number on top of that.

[0] https://www.wired.com/story/signal-mobilecoin-payments-messa...


I agree that adding a crypto coin the way they did is idiotic as well. But I wouldn't ditch an encrypted app for one that will 'eventually' be E2E encrypted. matrix.org is E2E encrypted and secure now, and it's being used by France and Germany.


Matrix / Element is unfortunately just far too technical for end users and the general population, but it is better than IRC.

The naming was a mistake: for example, you referred to the protocol name, 'Matrix', instead of the client name, 'Element'. A naming issue like that risks confusing lots of people. Other than that, I have already mentioned it as an alternative.


>I'd rather use a chat app that takes security matters seriously and urgently and will eventually have E2E.

Deploying a messaging app without E2EE being the first four characters on the security design paper, even before the product name, is the opposite of taking security matters seriously.


>serious alternatives

How is Quill Chat that's proprietary and not E2EE a serious alternative?

Element's UX is behind Signal's, but at least it's E2EE by default.

Session is a Signal fork with bad metadata protection: there are 60 entities owning Loki nodes, and the top three players own 80% of them.

Delta chat leaks metadata to email providers, and PGP has no forward secrecy or deniability.

Element is the only one that's even remotely fixing the issue.

The issue here was client-side, and no architectural design, not even a hardware system, can categorically prevent the "wrong contact receives plaintext message" class of vulnerability.

The fix is now in place, and I'll eat my shorts if they don't have a unit test in place to detect reintroduction of this issue.
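
For what it's worth, such a test needn't be fancy. A minimal sketch of the property in Kotlin (hypothetical table and column names, plain JUnit plus the xerial sqlite-jdbc driver; this is not Signal's actual test suite):

    import org.junit.Assert.assertNotEquals
    import org.junit.Test
    import java.sql.DriverManager
    import java.sql.Statement

    class TrimmedIdReuseTest {
        @Test
        fun trimmedIdsAreNeverReused() {
            DriverManager.getConnection("jdbc:sqlite::memory:").use { conn ->
                val st = conn.createStatement()
                // The property under test: ids must be monotonically
                // increasing and never recycled after a DELETE.
                st.execute("CREATE TABLE thread (_id INTEGER PRIMARY KEY AUTOINCREMENT, recipient TEXT)")
                st.execute("INSERT INTO thread (recipient) VALUES ('alice')")
                val trimmedId = lastRowId(st)
                st.execute("DELETE FROM thread WHERE _id = $trimmedId") // simulate conversation trimming
                st.execute("INSERT INTO thread (recipient) VALUES ('bob')")
                assertNotEquals(trimmedId, lastRowId(st))
            }
        }

        private fun lastRowId(st: Statement): Long =
            st.executeQuery("SELECT last_insert_rowid()").use { rs -> rs.next(); rs.getLong(1) }
    }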


> Delta chat leaks metadata to email providers, and PGP has no forward secrecy or deniability.

Signal leaks metadata to the Signal server. Deniability is useless. Forward secrecy is cool as long as you're not using more than one device.


Signal publishes their responses to any subpoenas they receive; they only ever respond with the timestamp of when the user in question signed up and when they were last active, which I guess implies that's all the data they have?

https://signal.org/bigbrother/eastern-virginia-grand-jury/


I've been on Threema ever since I learned that WhatsApp did not even use TLS. It's a great chat app and nothing else (which is a feature in my book).



A rather vague explanation. Sounds a bit evasive.

Where is the commit that fixes it?



Can we get a better understanding of the root cause and blast radius?

You say, "if someone had conversation trimming on, it could create a rare situation where a database ID was re-used in a way that could result in this behavior."

Is this someone user A or user B? Where is this database and what is it storing? Are these images previously sent or received from either A or B, or are they possibly from some thread between users C and D? How does this agree with end-to-end encryption?

How can you expect people to use your product with a bug this severe and no analysis of the impact or a statement as to who might have been affected?


A lot of people are talking about the bug but not the fix.

Is anyone else incredibly surprised that the fix was just adding AUTOINCREMENT to the primary key columns of two tables [0][1]? Not having these as AUTOINCREMENT seems like an incredible oversight to me. In what common scenario would you want a setup like that?

[0] https://github.com/signalapp/Signal-Android/commit/83086a5a2...

[1] https://github.com/signalapp/Signal-Android/commit/b9657208f...
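
For context on why leaving AUTOINCREMENT off can bite (this is general SQLite behavior, not a claim about Signal's exact schema): an INTEGER PRIMARY KEY without AUTOINCREMENT is an alias for the table's rowid, and a new row typically gets max(rowid)+1, so deleting the newest rows (say, via conversation trimming) frees those ids for immediate reuse. A minimal Kotlin demo with hypothetical table names, using the xerial sqlite-jdbc driver:

    import java.sql.DriverManager

    fun main() {
        DriverManager.getConnection("jdbc:sqlite::memory:").use { conn ->
            val st = conn.createStatement()

            // Plain INTEGER PRIMARY KEY: the id aliases SQLite's rowid,
            // and a new row gets max(rowid)+1, so a deleted max id returns.
            st.execute("CREATE TABLE msg (id INTEGER PRIMARY KEY, body TEXT)")
            st.execute("INSERT INTO msg (body) VALUES ('a'), ('b'), ('c')")
            st.execute("DELETE FROM msg WHERE id = 3") // e.g. trimming the newest row
            st.execute("INSERT INTO msg (body) VALUES ('d')")
            st.executeQuery("SELECT id FROM msg WHERE body = 'd'").use { rs ->
                rs.next()
                println("without AUTOINCREMENT: id = ${rs.getInt(1)}") // 3 again: reused
            }

            // With AUTOINCREMENT, SQLite keeps a high-water mark in
            // sqlite_sequence and never hands out an old id again.
            st.execute("CREATE TABLE msg2 (id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")
            st.execute("INSERT INTO msg2 (body) VALUES ('a'), ('b'), ('c')")
            st.execute("DELETE FROM msg2 WHERE id = 3")
            st.execute("INSERT INTO msg2 (body) VALUES ('d')")
            st.executeQuery("SELECT id FROM msg2 WHERE body = 'd'").use { rs ->
                rs.next()
                println("with AUTOINCREMENT: id = ${rs.getInt(1)}") // 4: never reused
            }
        }
    }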


A non-techie relative of mine told me about images being sent to the wrong people, and the recipients asking why they had sent the photo. I first assumed it was user error, but apparently it was quite a data leak on Signal's side.


Well, from a glass-half-full perspective, this gives a user perfect plausible deniability about any illegal content found in Signal on their phone, or about a message claiming to be from them to another user.

Maybe it's a feature, not a bug? :)


That's a lot of faith that you'll never log anything sensitive, given that users' debug logs are made public when they want to report an issue.


I'm rooting for Delta Chat [1], which puts a nice chat UI on top of email. It is such a brilliant and simple solution. It is decentralized, unlike Signal, which recently had big reliability problems when new users flooded in.

[1] https://delta.chat


I had to dig a little bit to get a better view of "will this make a mess of my email if I try/test it without commitment?"; the tl;dr is basically:

(a) there is a DeltaChat subfolder in your IMAP storing the messages;

(b) the app looks for a "Chat-Version" header on emails to know to move them to the folder from (a) (and you can set a server-side rule to do the same);

(c) a number of popular email providers (IMAP is used) are listed with some notes to help you get started: https://providers.delta.chat/; and

(d) it's using the Autocrypt/PGP standards and you can apparently import your existing PGP key if you want

It's all in the FAQ or other docs; I'm just highlighting the things I wanted to know straightaway, before accidentally making a mess just to give it a try.


Isn't this going to pollute my mail server with thousands of individual chat "emails" instead of a few large emails?


Dealing with thousands of messages in a folder somewhere is not really a problem for mail servers.


Which unfortunately suffers from the same metadata leaks as email.


Is this some kind of cache bug? Pretty serious, whatever it is, and judging by the linked issue, they either haven't taken it seriously, or worse, they aren't able to pinpoint the bug.


Re-use of message IDs: conversation trimming could free up a database row ID, and a later insert could be handed the same ID.


All facts aside about how it's now resolved and only surfaced using a certain setting, etc...

This is an absolutely horrific bug, worse than even an encryption snafu. Can you imagine depending on Signal's privacy features, possibly with your life, and encountering this bug?

Fuck - this could ruin someone who hasn't even done anything wrong.

If I knew this bug existed and I was on this team, I would have been in all out panic mode all these months. Literally shitting my pants.


One among many reasons I use different apps for different people in my life. My partner is the only person I message on one, my friends on another, my family only over text messages, and my coworkers only over email or phone calls.


I've never thought of doing this, but I often feel a pang of uncertainty whenever I open my phone's share sheet.

Like.. when I pick person A, is the app going to screw it up and send it to person B somehow?

I honestly have nothing life-ruining going on, but huge embarrassment if a mistake were made? Definitely.

This bug is a worst fear realized.


I also make sure that if I'm using a platform with more than one person each contact must be visually distinct. For example, all my friends are on WhatsApp and they each have their own chat background image. You can't slip up that way.


I like Signal, but having to explain to my parents why they can't use it on their Android tablet, or why they can't register without a phone number, makes me reconsider whether I should just use other software.


How can an unencrypted copy of some media end up at the wrong user? Isn't it supposed to be end-to-end encrypted, especially when stored on the Signal servers?


The chat client misinterprets something and attaches the wrong file to the message. The encryption works fine; it's the app's business logic that failed.

E2EE won't protect you from a client accidentally encrypting and submitting files in the wrong chats.
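
To make that concrete, here is a purely hypothetical Kotlin sketch (these names and structures are illustrative, not Signal's actual code) of why E2EE can't catch this class of bug:

    // The point: the attachment lookup happens *before* encryption, so if a
    // recycled row id resolves to another conversation's blob, E2EE will
    // faithfully encrypt the wrong bytes to the chosen recipient.
    data class Attachment(val rowId: Long, val bytes: ByteArray)

    class AttachmentStore {
        private val byRowId = mutableMapOf<Long, Attachment>()
        fun put(a: Attachment) { byRowId[a.rowId] = a }
        fun get(rowId: Long): Attachment? = byRowId[rowId]
    }

    fun encryptFor(recipientKey: String, plaintext: ByteArray): ByteArray = plaintext // stand-in for the real ratchet
    fun transmit(recipientKey: String, payload: ByteArray) { /* network send */ }

    fun sendWithAttachment(store: AttachmentStore, recipientKey: String, attachmentRowId: Long) {
        // If this id was freed by trimming and later re-assigned to an
        // attachment from a different chat, the lookup silently succeeds
        // with the wrong content, and nothing downstream can notice.
        val attachment = store.get(attachmentRowId) ?: return
        transmit(recipientKey, encryptFor(recipientKey, attachment.bytes))
    }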


But what exactly went wrong with Signal here?

Could someone remotely instruct my Signal client to share media? Previously sent files, or arbitrary files?


They would have to compromise your client, which is in no way different from compromising your device. The NSO/Pegasus systems do just that: they allow arbitrary command execution, which includes sending any file on your phone to any contact over Signal. There's nothing software can do to protect against that. If you need a 100% guarantee something doesn't leak over electronics, don't store it electronically. Ask the Russians: https://www.theguardian.com/world/2013/jul/11/russia-reverts...


The app accidentally attached seemingly random media to messages. The other end has no control over what images they receive when. There was no hack or remote control at play, just a bug.


It was a bug in the client, which encrypted and sent the message to the wrong user. If it had been a bug in the server that messed up routing, the wrong recipient would have been unable to read the message.


I have no horse in this race. I don't use Signal or any of its competitors, so allow me to ask some basic questions.

Could some users explain why you currently use Signal, and why you would continue to do so? It seems to me that not only this bug, but more importantly the laissez-faire resolution of it, is the opposite of what a privacy-focused app should do.

Based on their homepage, it looks like they're proud of the fact that Snowden uses the app. I'm curious whether he, as a person with "real shit" to hide, still does.


Personally, I use it because it was the best choice a couple of years ago, and I (somewhat recently) managed to convince family to use it too.

However, after this, I am probably going to set up my own Matrix server and use that as much as possible, encrypted of course.


I live in an authoritarian country, so yes, Signal is the answer; my country fears it and is trying to block it. They don't fear WhatsApp, Telegram, or Facebook, but they do fear Signal.

Oddly enough, I never encountered this bug myself.


I have had random Signal contacts and phone numbers appear in my Signal installation. It was scary to see other people's phone numbers mixed in. Despite all the PR they do (and the paid trolls on social media), I still don't understand the fascination with this "non-profit" "e2ee" application. There are better options out there. Why stick with a buggy app that can't get the basics right?


This is a good reminder that no matter how security and privacy-focused a project is, we still don't know how to reliably develop software without bugs - and all it takes is one bug to negate all of those security and privacy features.


Fixed in 5.17, though I only have 5.16.1 available to me, despite the fact it was supposedly available a few days ago.


My Android phone has 5.17.3. What platform are you on?


iOS


This bug seems to only have affected Android. 5.17 is still in beta for iOS.


Thanks for the clarification:-)


Looks like this was fixed [1]. It doesn't fill me with confidence that this type of issue can occur at all, though.

[1] https://github.com/signalapp/Signal-Android/issues/10247#iss...


I'm not up on my Signal protocol, but with PGP, sending an encrypted message to the wrong person would result in them getting a message they cannot decipher. If a bug in Signal allows a third party to decrypt a message intended for another person, does that mean the Signal servers see plaintext messages?


Fixed on 7/21. Forgot to close the issue:

https://github.com/signalapp/Signal-Android/issues/10247


Can you explain exactly how auto-incrementing IDs were the crux here? Were they overflow-wrapping, or did the tables simply not use them?


Looking at the comments here, the amount of badmouthing of Signal for argument's sake feels suspiciously like some organization's handiwork.


The bugfix comment says:

> The TL;DR is that if someone had conversation trimming on, it could create a rare situation where a database ID was re-used in a way that could result in this behavior.

How is this bug even possible with E2E encryption?

If picture.png exists on user A's phone and gets sent to user B, shouldn't it be client-side encrypted in such a way that user C, even if they receive it via some database ID screwup, is unable to view it (because it was encrypted with user B's public key)?


It's probably not using per-chat keys for the local database, for the sake of keeping it indexable.


But it's showing up on both sides of the conversation. (Both users' devices show the same wrong, unsent pic.)


And they say that I don't value privacy because I use Telegram and not Signal... In reality, Telegram may not be end-to-end encrypted like Signal, but I never recall it doing a thing like that. This points to poor attention to the security of the application, and poor testing.


So let me see if I got this straight...

You will never use an app that HAD a 0.000000001% chance of sending a file from your phone to the wrong peer over a 100% end-to-end encrypted channel...

but...

You knowingly use an app that leaks 100% of your group chats (including attachments) and 100% of your 1:1 desktop messages to the service provider, who can be bought or hacked at any time without you (or them) knowing, and that doesn't provide any kind of active protection mechanism against bugs similar to this one...

...on the grounds...

...that such bug hasn't happened, yet?

Is that what I'm reading?


If it was a 0.000000001% chance, it was great luck to see it happen!

> that leaks

To who?

I think we have to stop with this nonsense of end-to-end encrypting everything for no reason. Back in the day we only used plain-text protocols (one is still in use, email, and all the old chat protocols like IRC and XMPP were plain text) and nobody cared. Now we all need end-to-end encrypted messages. For what? The absurd thing is that we want end-to-end encryption to talk with our friends about last night's football game, but we receive our bills, medical reports, bank balances, and all sorts of official documents via email, which is not end-to-end encrypted. And nobody seems to care even about using GPG to encrypt email (something that has been around basically forever). But a chat app has to be end-to-end encrypted, because it's cool.

I don't care about encryption. I use Telegram because it has more features, thanks to the fact that messages are stored on the server: I don't have to waste space on my phone for stuff that gets sent (in fact, I usually use Telegram to send large files, or even just to transfer them between my devices), chats are synced in real time on every device, I can log in on another device whenever I want, and I can use it without installing a client on a computer.


I've not experienced or heard reports of this myself. I'll ask my friend group to do a comparison of our shared room and see if there are any problems. To be fair, I mostly use groups, so maybe the behavior is limited to 1:1 messaging?


Do users see wrongly sent images in their outbox/sent?

Did the transfer always happen immediately, or with a delay?

Did the transfer or chat always have to include a GIF?


I was talking about Signal an hour ago, wondering whether it's available on PinePhones or the Librem 5 (which seems very unclear), and now this happens on Android devices.

Does this mean that not only can I not yet recommend a PinePhone or Librem 5, but that I can't even recommend Signal to current Android users because of this issue?


Email. Why are we still trying to push these instant messaging apps that are a privacy and security nightmare? (I realise email has security issues too).


Email has weaker E2E encryption than these IM solutions, even with GPG; too much metadata is leaked. However, the decentralised nature of email is one crucial advantage it has over these apps.


I agree about the E2E encryption weaknesses. However, since I can send email from my own email server to another email server without it touching a third party (not counting the ISPs and DNS servers), E2E is not such a massive issue.

I can't make phone or video calls over email, but for text, small files, and images it's perfect (and given how long email has been around, it goes to show how good it is).


You could in theory (though this is like putting plasters on a colander) relocate some of the MIME metadata (Subject:, To:, From:) into the email body and then encrypt it.

So basically: obfuscate the MIME headers and use some kind of guid@domain-style addresses for the MTA routing, as sketched below.
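
To illustrate the shape of it, here's a minimal Kotlin sketch (purely illustrative: Base64 stands in for real encryption such as PGP, and the relay addresses are made up):

    import java.util.Base64

    // Sketch only: Base64 stands in for real encryption (e.g. PGP).
    fun encrypt(plaintext: String): String =
        Base64.getEncoder().encodeToString(plaintext.toByteArray())

    // The real headers travel inside the encrypted body; the outer
    // envelope exposes only opaque guid@domain routing addresses.
    fun wrap(realFrom: String, realTo: String, subject: String, body: String): String {
        val inner = "From: $realFrom\nTo: $realTo\nSubject: $subject\n\n$body"
        return "From: 7f3a9c1e@relay.example\n" +
            "To: b42d88aa@relay.example\n" +
            "Subject: (encrypted)\n\n" +
            encrypt(inner)
    }

    fun main() {
        println(wrap("alice@example.org", "bob@example.net", "Lunch?", "Noon, usual place."))
    }

The MTA still has to see the opaque addresses to route anything, which is exactly why this is plasters on a colander.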


"Weaker E2EE" as in "PFS is not commonly used with email". As for the metadata, no metadata is leaked that signal does not also leak.


>As for the metadata, no metadata is leaked that signal does not also leak.

Signal does not leak to third-party servers whom I talk to. There are encrypted comms to the Signal server, and that's it. If I talk to someone with a Gmail account, Google now has access to 100% of my metadata with that contact. I trust Signal more than I trust Google with my metadata.

Also, there's precedent from TWO court cases that Signal doesn't collect metadata about its users. Show me one email vendor with that kind of real-life proof.


Metadata is leaked only to your server and to the server of the person receiving the email, just like Signal. The only difference is that with email you get a maximum of two servers, while with Signal you get one.

> I trust Signal more than I trust Google with my metadata.

Fair enough, I will agree with this.


I really enjoy the email chains that have about 1 MB of HTML for a footer. The chain ends up being hundreds of MB for about six paragraphs of text. Very inefficient.



