
I appreciate that this was a difficult and rare bug, but for an app that sells itself as 'secure', it feels like this isn't acceptable.

How can users be assured that this type of issue won't occur again?




What does it mean for it to "not be acceptable"? Accepting or not accepting the bug is not an option on the table. It happened. The options open to you are to use or not use the freely provided software, and to donate or not donate to the foundation providing it. Users cannot be assured that future bugs will not occur. That is an assurance not available with most consumer software, and certainly not with software delivered for free by a not-for-profit.


If you are unclear, perhaps it would be useful to ask yourself what you would personally need from a product to consider it 'secure'.


Yes, Signal is a messaging app whose whole reason for existing is privacy, but it does not actually guarantee any privacy features; it just tries its best. Despite this "who cares if it works" premise, they have raised tens of millions of dollars.

I dream of a world where commercial software is held to the same standards as even the lowest quality commercial hardware.


Concretely, what would you like to see done? Should the foundation be hit with a fine or a lawsuit for this bug? Would that improve the overall security stance of the public at large?


I would like to see the foundation put their enormous investment into making a product that works correctly.

If that's too much to ask, then they really ought to be more public and forthcoming about the fact that Signal is not actually private or secure.


> How can users be assured that this type of issue won't occur again?

By not using software.

And I mean software in general, not this software in particular. You're basically asking for assurance that they won't have any more bugs, but no one can actually provide such an assurance in the real world.


Yes, they can.


> Yes, they can.

If they do, they're either incompetent, lying, or building something enormously expensive yet completely impractical for most if not all real-world uses.


I don't understand this attitude. Where did we go wrong as a discipline where making products that actually work is such an outlandish proposition? No other consumer product industry would talk like this.


It comes from the halting problem. People can be more careful writing code, but it is impossible to be certain about all the things code will or won't do. Even very simple programs can have flaws that get found and fixed years later. It happens all the time.

We need to be open and honest about the possibility that our code may act in ways we don't foresee.
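
To make that concrete, here is a classic example (a sketch in Kotlin, not anything from Signal's codebase): the textbook binary search below is about as simple as nontrivial code gets, yet essentially this version hid a latent overflow bug for decades, including in the JDK's own implementation, where it was only reported and fixed in 2006.

    // Textbook binary search: tiny, widely reviewed, and still it
    // carried a latent bug for decades.
    fun binarySearch(a: IntArray, key: Int): Int {
        var low = 0
        var high = a.size - 1
        while (low <= high) {
            // Bug: low + high overflows Int for arrays longer than
            // about 2^30 elements, yielding a negative index and a crash.
            val mid = (low + high) / 2
            // The overflow-safe fix: val mid = low + (high - low) / 2
            when {
                a[mid] < key -> low = mid + 1
                a[mid] > key -> high = mid - 1
                else -> return mid
            }
        }
        return -1 // key not present
    }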


If this were actually true, we wouldn't have safety-critical software systems that have been running for decades without fatal bugs.


> If this were actually true, we wouldn't have safety-critical software systems that have been running for decades without fatal bugs.

Not hitting a bug is not the same as not having a bug. I'd bet money that whatever system you're talking about has bugs. Plus, the system may be far simpler than you assume.


> I don't understand this attitude. Where did we go wrong as a discipline where making products that actually work is such an outlandish proposition? No other consumer product industry would talk like this.

Part of it is cost/benefit ratio for the extra effort, part of it is market demands, part of it is lack of technology, part of it is unavoidable stuff like the halting problem.

It's also worth remembering that Signal is developed by a non-profit with a total of 36 staff members (if Wikipedia is correct). That means they have few developers overall, and even fewer Android developers.


This is a messaging app we're talking about here. There's nothing outrageously difficult or complex that wasn't already done 25 years ago. If 36 people and $100 million in funding is not enough to make a messaging app that doesn't suck, what _is_ required and why is it more than that?


> This is a messaging app we're talking about here. There's nothing outrageously difficult or complex that wasn't already done 25 years ago.

If you think it's so easy, be the change you want to see and do it yourself, and then we can judge the results.

> If 36 people and $100 million in funding is not enough to make a messaging app that doesn't suck, what _is_ required and why is it more than that?

You're assuming there's a solution of a certain form to get you what you want, but maybe it's your assumption that's wrong.

I mean, there are formally verified systems that might be like what you're asking for, but they're both 1) very expensive and 2) extremely feature-poor.


I'm busy, but tell you what, I'll do it for only $75 million. Maybe I should launch a Kickstarter.


I disagree. If that really is the choice, they should drop the secure moniker without further debate.

--

If you produce a product that claims to be secure, the onus is on you to back up those claims.

Off the top of my head, there are many measures that could help improve security going forward.

One of the benefits of coding in the open, and adhering to open standards and protocols, is transparency and the ability for all to interrogate the code.

Can that work? It's obviously partly also down to the culture of the product team. As another poster in this thread has highlighted, the commit messages are terse and not as helpful as they could be. Perhaps more openness regarding intent would help.

Also, why are we finding out about this bug over 7 months after it was reported? Transparency regarding vulnerabilities needs to be at the forefront of the product's communications if the team really are serious about security.

In terms of isolating bugs: what kind of testing is in place? TDD, functional testing, beta testing?

There are so many avenues which _could_ be discussed in relation to my initial question.

Your response, unfortunately, is not providing anything helpful.


> One of the benefits of coding in the open, and adhering to open standards and protocols, is transparency and the ability for all to interrogate the code.

They already do all of those things. https://github.com/signalapp/Signal-Android


In word but not in spirit. They stopped updating their repo for an entire year while integrating a crypto shitcoin in secret.

These actions betray trust, and trust is Signal's entire reason for being.


You need to reread my post.

The situation is far more complicated than your link to the GitHub repo would indicate.


Users are not entitled to a guarantee that this will never happen again, because Signal is free and open source software provided free of charge and without warranty.

The Android app is GPL licensed. The license clearly states:

> For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.

If you feel let down by open source software, you have many options available to you to make that software more reliable.


Moxie Marlinspike is famously belligerent against Signal forks, so no, the license does not really help here.


It does. You are allowed to fork or reimplement the Signal client, and even distribute it, with the official Signal servers configured, so long as you do not infringe the Signal trademark.

They don't like forks using their servers, but it's the users who connect to their servers, not the fork publisher. Those users are permitted to connect via the TOS, which is independent from the GPL.


They also refuse to open source their server software.

This is the problem in my mind. Open source in words, but not spirit.


Not only is it false, as has already been pointed out; it doesn't matter at all. There is nothing stopping them from silently running a fork.


As far as I'm aware, I have to use the official server if I wish to communicate via the platform.

If this isn't true, I'd be very interested to learn more.



So are you telling me I can run my own Signal server and use it to communicate with others on the Signal network?


You can run your own Signal server and use it to communicate with others on your Signal network.

The openness or otherwise of the code has literally nothing to do with the terms of use of any service being run.


No.

The product _is_ the network. By restricting access to the network, access to the product is also curtailed.


You originally asked about running your own Signal server. Now you're talking about the network that's hosted through Signal Inc's servers.

So what's your point? Do you think everyone should have a right to connect to their servers however they like, and the service operator can't exert any control over that?


I am telling you that "They also refuse to open source their server software." is false.


Signal typically delays releasing the server source code, so the latest version of the server is not open source. In one case it took them almost a year (no public commits between 20 April 2020 and 6 April 2021).

https://github.com/signalapp/Signal-Android/issues/11101#iss...

Among the official reasons given was staying ahead of spammers. In this instance, it was also speculated that the payment function they were building into the server was meant to remain secret.

https://github.com/signalapp/Signal-Android/issues/11101#iss...


[flagged]


Are you trying to subtly accuse chithanh of something? If so, I would request that you do it explicitly instead of in a passive-aggressive way.

Regardless, their post contained only factual statements relevant to the discussion. I noticed no entitlement.


If you call it a "whim" when I point out that the latest version of the Signal server is proprietary software most of the time, then so be it.

Or maybe you are referring to the other commenters who were entitled to expect that the one critical task of a crypto messenger is ensuring the confidentiality of communication, which has been broken by this bug (and at least one similar bug before it, https://github.com/signalapp/Signal-Android/issues/7909 ).

You can both be grateful that Signal is free and at the same time call out shenanigans of its owners. Just saying.


[flagged]


What a buzz kill. If I understand you correctly, we shouldn't have expectations of, or offer constructive criticism about, stuff we don't directly pay money for. I think that's nonsense. Does this only apply to certain opinions?

You are not paying money for Signal. But by using it, and getting others to use it, you are definitely improving their position on the market by helping them become a monopoly. Money isn't everything.


> What a buzz kill. If I understand you correctly, we shouldn't have expectations of, or offer constructive criticism about, stuff we don't directly pay money for. I think that's nonsense. Does this only apply to certain opinions?

You can have constructive criticisms, but they have to actually be reasonable and constructive. It's not reasonable or constructive to expect their software to never have bugs (i.e. be outraged when they have one); to rag on them because they didn't fix some bug more quickly than they may have been able to; to throw around labels like "proprietary" just because they don't release something on your schedule; to be bitter that their vision is not your vision (e.g. they won't implement some client or server feature you want), etc. All of that is in this thread.

Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated. I don't know exactly why, but it probably involves some combination of contrarianism, being in a popular software category, and people who want to feel better for having picked a different team.

Signal isn't perfect software for me: I really wish they had an unencrypted message export feature, for instance. But I understand that doesn't fit into their vision, and instead of bitching about it, I just have it on my todo list to write my own (and have done some preliminary work on it).

> You are not paying money for Signal. But by using it, and getting others to use it, you are definitely improving their position on the market by helping them become a monopoly. Money isn't everything.

What now? Signal is nowhere near being a monopoly, and even if they were, they're a non-profit, which makes the idea far less threatening.


> It's not reasonable or constructive to expect their software to never have bugs

I think nobody demanded anything of that sort; that is just a strawman. What was actually demanded is that the priority of such bugs be raised, and perhaps that users be adequately warned about known defects that may compromise the confidentiality of their messages.

That Signal developers had no idea what was going on for 6 months? And that it then turned out to be similar to that other stale/invalid database bug where messages were sent to unintended recipients? That back then they fixed only the bug at hand, without taking steps to ensure that this type of bug (wrongly matching expired message IDs to existing messages) won't happen again? None of this paints them in the best light.

> to throw around labels like "proprietary" just because they don't release something on your schedule

I throw around the label "proprietary" because software which doesn't come with source code is, in fact, proprietary. If Signal pushes new server code to production and keeps its source to itself, calling that still "open source" requires serious mental gymnastics.

> Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated.

No, besides your strawman the other commenters were all reasonable and constructive.


> I throw around the label "proprietary" because software which doesn't come with source code is, in fact, proprietary. If Signal pushes new server code to production and keeps its source to itself, calling that still "open source" requires serious mental gymnastics.

It comes with source code, just not on your time-frame. That's still open source.

And if that's unacceptable to you, just use something else.

>> Signal, for whatever reason, seems to attract a lot of entitled complaints like the ones I enumerated.

> No, besides your strawman the other commenters were all reasonable and constructive.

We're going to have to agree to disagree there.

But if you want to (for instance) implement a plan so we can be "assured that this type of issue won't occur again" in Signal (https://news.ycombinator.com/item?id=27951759), be my guest. Or maybe you could develop a fork of it and show us your vision for it (including a source release schedule that satisfies you, a federated protocol, and all the other demands in this thread).


> We're going to have to agree to disagree there.

Everyone can of course have their own opinions. But they cannot have their own facts; a discussion does not work that way.

Whether one considers some statement as entitled, that is an opinion and we can disagree about this.

But whether a program is open source or not is a fact. It doesn't matter if the source code is going to be released a day or a year after the Signal server has been pushed into production; at that very moment, the program is not open source. Your comment about my time-frame is irrelevant. In the GitHub issue 11101 link I posted above, Moxie admits to running versions of the server that are ahead of the public git repository. These are factually closed source, and your continuing to argue against that fact doesn't reflect well on you, nor on the prospect of having serious discussions with you.


To be pedantic, only those who possess the binary need the source code for something to be open source. Since they did not publish the binary, and since they themselves have the source code, we could say that it is actually open source software.


No, you are utterly incorrect.

If you produce a product that people depend on, and through it you intentionally or inadvertently cause damage to your user base, they have every reason to be upset.

If the product is open source, the user base luckily has the option to fix the problem.

This isn't true with Signal's model.


You seem to be confusing open source for a federated protocol.

Signal is not federated, as an explicit design decision.

The source code for both the server and the client is free software.


‘Many forms of secure messaging systems have been tried, and will be tried in this world of sin and woe. No one pretends that Signal is perfect or all-wise. Indeed it has been said that Signal is the worst form of messaging except for all those other forms that have been developed from time to time…’

Winston S Churchill, 11 November 1947


Sure, but there's a small theoretical difference with democracy. You have to live under some system of government. You don't have to use a secure messenger. You can choose to have sensitive conversations in person or not have them at all.

I agree that in practice, a lot of people are going to use their phones for relatively sensitive conversations, and in practice, Signal remains the best choice for doing so. But there are a few real threat models where the options aren't Signal vs. SMS / Google Chat / Discord / etc., the options are Signal vs. nothing. For instance, you could be a journalist deciding whether to ask clarifying questions to a government whistleblower via Signal or meet up with them in a park. You could be an activist/demonstrator under a repressive regime deciding whether to coordinate some action this weekend via Signal or hold off on it entirely and tactically preserve your freedom. And so forth.

For those people, if (and to be clear this is a big "if," while this issue is one serious piece of evidence it is nonetheless inconclusive) Signal isn't trustworthy, it doesn't matter if Signal is the least-bad of the options.

(Also, it's not like Signal is the only e2e messenger around. There's iMessage/FaceTime, for instance. Churchill's claim was that the abstract idea of democracy was good, not that any concrete implementation like the British government was good.)


> You can choose to have sensitive conversations in person or not have them at all.

I don't think this is fair. Most of the solution to this is "not having them at all." That's not a good solution and still doesn't solve your problem since you can still be listened to.

> There's iMessage/FaceTime, for instance.

Which also has gotten in trouble recently with Pegasus as there was a 0-click exploit in iMessage. Honestly, that is a far more serious issue than the one here. That being said, I still trust iMessage and that the devs are doing the best that they can. I just recognize that security is difficult and will always be a cat and mouse game. There is no such thing as perfect security.


I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options. The whistleblower might choose not to make contact, but if the alternative is making contact and immediately going to prison (because someone else on your contact list saw a classified screenshot from you and told the authorities), maybe that's better. The activists might choose not to protest, but if the alternative is being caught before they even start their protest (because your group was forwarding a little advertisement image or annotated map around among trusted people, and someone untrusted got it), maybe that's better.

Take Reality Winner, for instance (the mechanics of that case were entirely unrelated to secure messengers, but it makes a relevant example overall). The effect on the world of her whistleblowing seems to have been minimal, and the cost to her was significant. Was it worthwhile? If she had been told the risks of the government identifying her were higher and decided not to leak anything, wouldn't that have been a better outcome?

I'm not saying there's perfect security. Vulnerable users absolutely need to be making risk assessments and deciding what they're comfortable with, and we should be clear nothing is risk-free. I'm just saying my sense of Signal's risk, in absolute terms, is higher than it was before I learned about this, and that matters to vulnerable users, not just the fact that it probably remains the lowest-risk messenger of the various options.

I agree with you overall, and the Pegasus exploit does reflect badly on Apple (and probably should reflect more badly on them than seems to be happening).


> I think what I'm trying to get at is that incorrectly believing you have access to a secure messenger can be worse than acting as if you don't, if those are your options.

For the average person, I do not believe this is true. For the non-average person, I believe you are correct but most of these people are aware and should be constantly trained.

I'm not saying you're wrong, I'm saying that there are two different conversations to be had and we need to know which one we're having. To me it looks like Signal is about as good as you get without loads of complication and for what it is meant to do.

> the Pegasus exploit does reflect badly on Apple

And same with this on Signal. I do believe we should hold these companies to high standards. But the point I'm trying to make is that these also aren't reasons to abandon the platforms completely (as many users are suggesting here). That's throwing the baby out with the bathwater.


Yeah, to be clear, I only mean this from the point of view of non-average Signal users.

It's a little weird that Signal is both the "baseline security that everyone should have" product (a la HTTPS or WPA2) and the "you are literally hiding from the government" product. Of course, the target market for the latter, when you are not another government yourself, is by definition mostly illegal activity (whether or not the laws are justifiable), so it makes sense that there isn't a good product just for that.

In this particular case, it also complicates things that people who are literally hiding from the government also have normal ordinary conversations with lots of people, and it helps things for those ordinary conversations to happen on Signal, but this bug is particularly bad if you do that.

(I'm also not really sure where, say, people buying recreational drugs fit on the "average"/"non-average" axis. Is it a reasonable precaution to not text incriminating information to your drug dealer over Signal? It feels like it shouldn't be necessary, but I can see the argument for it.)


You make some fair points. But even from the eyes of those people, what is the alternative? Is iMessage guaranteed to not have any hidden exploits out there? And on the flip side, what do they lose out on by only having those conversations in person? Well, I'd argue that their world becomes a lot smaller, and their sources are instantly at a higher risk.


If iMessage were sending photos to the wrong people (even with extremely low probability) for over half a year, there would be serious negative publicity to Apple for it, even if they had never implemented end-to-end encryption. Apple also has more software testers and more willingness to use telemetry. So while there are no 100% guarantees, I think the incentives are aligned with iMessage at least as well as they are with Signal.

Apple suffered negative publicity from the 2014 iCloud photo leaks, even though those were "just" phishing and not a vulnerability/bug in the strict sense. Tim Cook had to give statements to the media, and in fact Apple stepped up its phishing protection by pushing two-factor authentication and notifying users about additional iCloud logins.


Probably by installing some hardened memory in your phone to prevent cosmic bitflips or the like.

Unless you are going to go to NASA lengths of hardware reliability, you can't really hope for much from software that has to deal with the quirks of... how many different Android phones are there?


The available evidence indicates that this was due to a logic error, not to cosmic rays and/or Android ecosystem diversity.


> How can users be assured that this type of issue won't occur again?

Actually, this very type of issue (sending messages to wrong recipients due to stale/invalid database entries) has occurred previously.

https://github.com/signalapp/Signal-Android/issues/7909


> How can users be assured that this type of issue won't occur again?

By writing code defensively. Despite the other comments, it's possible.

The key is to be redundant: for example, off-by-one errors are very common when accessing a set of indexed items by number.

Yet you can split the set (e.g. an array) into multiple ones to make it less likely that you pick the wrong item (e.g. pictures vs. users).

You can also "tag" the outgoing image with some attributes, e.g. the recipient and a sent/not-sent flag.

You can cross-check and stop if something is inconsistent. Many other things are possible e.g. to protect from RAM bit flips.

It's not a matter of language or tooling, it's a matter of mindset.
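
As a minimal sketch of that tagging-and-cross-checking idea (in Kotlin; every name here is hypothetical, invented for illustration, and not Signal's actual code), the outgoing attachment carries its intended recipient from composition time, and the send path refuses to proceed if the recipient resolved from the database at send time disagrees:

    // Hypothetical sketch of "tag and verify" before sending.
    data class OutgoingAttachment(
        val attachmentId: Long,
        val intendedRecipientId: Long, // tagged when the user picks a recipient
        var sent: Boolean = false      // redundant sent/not-sent flag
    )

    class SendException(message: String) : Exception(message)

    fun send(attachment: OutgoingAttachment, resolvedRecipientId: Long) {
        // Cross-check: the recipient looked up in the database at send
        // time must match the recipient the attachment was tagged with.
        if (attachment.intendedRecipientId != resolvedRecipientId) {
            throw SendException(
                "Recipient mismatch for attachment ${attachment.attachmentId}: " +
                "tagged ${attachment.intendedRecipientId}, resolved $resolvedRecipientId"
            )
        }
        // Redundant flag check: never transmit the same attachment twice.
        check(!attachment.sent) { "Attachment ${attachment.attachmentId} already sent" }
        transmit(attachment)
        attachment.sent = true
    }

    fun transmit(attachment: OutgoingAttachment) {
        println("Sending attachment ${attachment.attachmentId}") // stub for the network call
    }

The point is redundancy: the same fact (who this image is for) is recorded in two places and compared, so a stale or mismatched database row fails loudly instead of silently delivering the image to the wrong person.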


> By writing code defensively. Despite the other comments, it's possible.

I'm going to tack onto this and suggest that Signal drastically slow down the pace of feature development. They don't have the same profit motivations other companies have. The messaging market, at least in its current state, is largely known. Both of these give Signal an advantage in that they can slow down and harden the product while baking security into their DevOps practices as a first-class citizen.

Tl;dr: Signal needs to slow down.


The answer is that you can’t be assured. Act as you will with that info.


They could choose to use a less transparent company's software, so that they never become aware of rare issues like this in the first place?



