Hacker News
A Statement on Recent Events Between Signal and the Anti-Censorship Community (github.com/net4people)
243 points by 1una on Feb 9, 2021 | 275 comments



The answer from Moxie to these people: https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec...

I think that says it all.

I'm also a bit concerned that "security researchers" don't seem to understand the threat model. Signal has never claimed to be able to hide that it was being used. The TLS proxy is only meant to help circumvent censorship, not obfuscate its protocol. And indeed, as a temporary solution, it's not ideal even to circumvent censorship. But they're apparently working on something better, and all this distraction is not helping.


> Signal has never claimed to be able to hide that it was being used.

From their blog post a few days ago, I thought it claimed just that:

> Unlike a standard HTTP proxy, connections to the Signal TLS Proxy look just like regular encrypted web traffic. There’s no CONNECT method in a plaintext request to reveal to censors that a proxy is being used. Valid TLS certificates are provisioned for every proxy server, making it more difficult for censors to fingerprint the traffic than it would be if static self-signed certificates were used instead. In short, everything is designed to blend into the background as much as possible.

They should probably make that post less reassuring and list the exact risks.

https://signal.org/blog/help-iran-reconnect/


My read on that statement was "the censors can't just /dev/null anything with a plaintext CONNECT".
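To make that concrete, here's a toy sketch (hypothetical hostname; real DPI boxes do far more than prefix matching) of why a plaintext CONNECT is trivial to censor while a TLS handshake is not:

```python
# Illustration only: a classic HTTP proxy sends its CONNECT request in
# cleartext before any tunnel exists, so a censoring middlebox can match
# and drop it with a one-line prefix check. A TLS connection never exposes
# such a method line on the wire.
PLAINTEXT_PROXY_REQUEST = (
    b"CONNECT chat.example.org:443 HTTP/1.1\r\n"
    b"Host: chat.example.org:443\r\n\r\n"
)
TLS_CLIENT_HELLO_PREFIX = b"\x16\x03\x01"  # TLS handshake record header

def censor_would_drop(first_bytes: bytes) -> bool:
    return first_bytes.startswith(b"CONNECT ")

print(censor_would_drop(PLAINTEXT_PROXY_REQUEST))  # True: the method is visible
print(censor_would_drop(TLS_CLIENT_HELLO_PREFIX))  # False: nothing to match on
```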

Given its broad user base, it wouldn't hurt for Signal to clearly state "we can't keep the fact of the communication private, only the contents".

That would short-circuit attempts to gain notoriety by pointing out obvious facts and calling them vulnerabilities. It's also common sense for anyone who knows their way around a puter, but that's not Signal's median user.

"Privacy" products are a market for lemons, and Signal's public messaging should strive to insulate its users from FUD.


I currently live in a country without such strong protections for individuals, and after discussing this with a friend, I feel there is a disconnect with many of the HN posters who don't live in countries where such concerns are very common. This is not a judgement of "who has it worse", but from my observation, many important elements are missing from the discussion for those who have never had to consider that their posts/comments might land them, their family, or both in jail.

Everyone in my country of residence uses Telegram; not because it's secure, but because for non-serious chats, it's convenient.

This is a statement/truth that I think a LOT of people don't quite get: Pavel Durov and his team might push that Telegram is secure, but no one uses Telegram for security, because nothing about its security ensures a circle of trust.

This is true for any messaging app. The general consensus I've encountered is that any application you can readily pull from the AppStore/GooglePlay, so can adversarial persons. If they really want to target you for some reason, it's as simple as getting hold of your friend and unlocking their phone, knowing that the same protections that people in the United States have don't apply worldwide.

Signal et al. can have the world's most amazing crypto, but it means nothing if the person behind the unlocked phone is an adversary, and I think a threat vector that is missed in the 200+ post discussion here is that in many parts of the world, __this is a real threat vector__. A password/second screen/whatever is not a guarantee of protection! It's a speed bump, and how resilient you are to the person driving over the speed bump determines how effective that bump is.

So, regarding the article: when I read that people are concerned about Signal's statements not outlining the threat model for countries where the circle of trust might literally be a matter of life or death, yeah, I side with the concerned persons. Signal has impressive tech, but again, that tech means __nothing__ if the person behind the unlocked device is adversarial.


What technology can address that threat model? Signal certainly doesn't try to.


Disclaimer: I'm not a security expert by any means. Also, I am NOT from a country "where such concerns are very common".

So I feel like I am missing something here.

I don't want to minimise empathy towards living under such a society, but if The Adversary can obtain your friend's unlocked phone, what do you want to have happen?

A sham-password for Signal that sends out a "I've been caught." message to all contacts? So you can "fat finger" the password in front of your captor?


Ignoring the rest of the noise this seems to be the critical question here.


> Signal has never claimed to be able to hide that it was being used. The TLS proxy is only meant to help circumvent censorship, not obfuscate its protocol.

I don't understand. How would you circumvent censorship of the protocol without obfuscating the protocol?

It seems to me that signal has never claimed to be able to hide that it was being used... until now?

But thanks for posting the thing from Moxie, it does sound quite reasonable.

What would be useful to me and presumably other HN readers is a clear summary of the tech involved, readable by an audience that is technical but not expert in security/circumvention. The people complaining could be spending time on that, to educate users and developers, instead of doing... whatever they are doing. That seems to have turned into a much less interesting argument about etiquette or something.


Reading the config in the TLS repo, it seems to me that the censorship is at the domain level.

So I guess they're trying to pop up as many endpoints as possible to circumvent that.

I don't know the details about the network block though, so I might be mistaken. But the nginx config in that repo is purely a TLS proxy. Nothing magical happening there at all, just an entrance node to the main signal network
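The basic shape of such a relay can be sketched with nginx's stream module (a hedged reconstruction of the idea, not the repo's exact file; hostnames, paths, and the internal port are assumed):

```nginx
# Sketch of a TLS-in-TLS relay. The outer hop terminates the proxy's own
# TLS so the connection looks like ordinary HTTPS to this server; the
# payload it decrypts is itself a TLS session to Signal, which the inner
# hop forwards by SNI without ever decrypting it.
stream {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/letsencrypt/live/proxy.example.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/proxy.example.org/privkey.pem;
        proxy_pass 127.0.0.1:4433;
    }

    map $ssl_preread_server_name $upstream {
        chat.signal.org  chat.signal.org:443;
        default          127.0.0.1:9;   # nothing listens here; unknown traffic is dropped
    }

    server {
        listen 127.0.0.1:4433;
        ssl_preread on;      # peek at the inner ClientHello's SNI, no decryption
        proxy_pass $upstream;
    }
}
```

The key property is that the proxy operator only ever shuffles opaque bytes; the inner TLS session still terminates at Signal's servers.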


A lot of people get confused about this with p2p type stuff. At ZeroTier we constantly have to explain to people that it is not Tor, and we get bug reports about how "people can see my IP!" Of course they can see your IP. You are communicating directly with them.

End-to-end encryption means content privacy but not necessarily meta-data privacy or anonymity.


"You were blocked because you know that we don't use GH for discussion, but came here anyway and started opening fake PRs so that you could post and harass other people on GH."

That is a very odd statement.


Reading through the PR in the link, I can see where Moxie is coming from. There's very little actual discussion happening, mostly just flamewar-lite. Maybe the other PR's/issues/forum posts are better.


What’s odd about it?

I do wonder how Moxie knew this guy knew that GH is not used for discussion. Maybe the only way to tell is to see that there are no other active discussions?


How do they harass people if they don't use GitHub for discussion? What is a "fake pull request"? Why is the project on GitHub if they don't use it?


Why does it need your phone number? Seems pretty weird for a “secure” program. And why does it use AWS? Isn’t that subject to all kinds of privacy risks including National Security Letters?

Why isn’t Signal just a Free and open source, infrastructure-less p2p solution? Maybe the goal isn’t really security or privacy after all...


> Why does it need your phone number?

Great question! It's a good way to make it easy for general-purpose users with limited technical expertise to adopt, use, and find one another.

> Seems pretty weird for a “secure” program.

You're right! It's definitely weird, but it's also understandable as a tradeoff in favor of less technically adept users. It's not one I'm in love with, but I think it makes sense.

> And why does it use AWS? Isn’t that subject to all kinds of privacy risks including National Security Letters?

The risk from NSLs depends a lot on what is hosted. If it's opaquely encrypted blobs, there's minimal risk. And where could things be hosted that wouldn't be subject to privacy risks from a government of some sort?

> Why isn’t Signal just a Free and open source, infrastructure-less p2p solution?

That's such a good idea that Signal is already a Free and open source solution!

That said, nothing is ever actually infrastructure-less, just like no data store is actually schema-less. There's just explicit infrastructure and implicit infrastructure. Implicit p2p infrastructure is not immune to governments or NSLs, and is often subject to more by virtue of being in more countries.


[flagged]


Discoverability, familiarity, and usability are good reasons that many find convincing. I understand that some people might disagree.

Personal insults aren't called for. Please stop.


[flagged]


The major group Signal wants to market to is normal people, and they've stated repeatedly that they optimize for that over maximum security with what they consider worse usability.

And even though I disagree with the focus on phone numbers and wish they'd prioritized a model that makes them optional, I do understand the network effect argument for including it. It's kind of annoying that the alternatives that do it better have a hard time, but I also have to admit that it proves Moxie's point to a degree.


Normal people have no issues creating user handles on Reddit and Discord and Twitter. Again, the only reason to require a phone number is because Rosenfeld wants it.


No, because using phone numbers gives you contact discovery through the phone book "for free" (with further privacy implications Signal has discussed at length). There's also an entire argument around it providing a social graph independent of service infrastructure, which is important for some aspects of user freedom - again something that has been discussed publicly at length, both by Moxie and by other players in the wider messenger ecosystem (many of which at least partially disagree and have made different tradeoffs - but generally acknowledge the tradeoff exists).
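To make the "for free" part concrete, here's a toy sketch of naive hash-based contact discovery (illustration only, NOT Signal's actual protocol: truncated hashes over the small phone-number space are trivially brute-forceable, which is precisely why Signal built a more elaborate private contact discovery system):

```python
import hashlib

def digest(number: str) -> str:
    # Truncated hash of an E.164 phone number (toy scheme)
    return hashlib.sha256(number.encode()).hexdigest()[:20]

# Server side: hashes of numbers registered on the service (made-up numbers)
registered = {digest(n) for n in ["+15551230001", "+15551230002"]}

def discover(address_book):
    # Client uploads hashes of its phone book; the intersection with the
    # registered set yields contacts already on the service.
    return [n for n in address_book if digest(n) in registered]

print(discover(["+15551230001", "+15559999999"]))  # only the registered number matches
```

This is why a phone-number identifier bootstraps a network so effectively: no invitations, no handle exchange, just the address book the user already has.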


How many casual users have you personally convinced to switch to some E2E-encrypted messaging app?


What is stopping someone from running signal on a dedicated burner if they're worried about it?

Different people have different threat models and I think asking for a phone number for the reasons posted above is acceptable for most people.


There really is no such thing as a dedicated burner. You don't even need NSA-level threat vectors: most phone/SIM purchases in Western countries exfiltrate tons of user data.

Wi-Fi is even worse, not better.


I think the best way to get a burner account is to use the free services to receive text messages online. These are enough to create an account, and you can then set a pin to prevent any takeover, assuming you synchronize your account every week.


Using phone numbers as identifiers for encrypted messages is the core feature of Signal. It was marketed from day one as a drop in SMS replacement. Initially it even used SMS as the transport for encrypted messages. It was literally called "TextSecure".

You can find any number of infrastructure-less p2p solutions. The number of users they have compared to Signal might be illuminating.


Quick answer: Find the right one to blame, please.

If you think that merely letting the authorities know your phone number is registered on Signal is dangerous enough for you to be arrested, you should not use Signal.

Signal, like any other software, cannot solve politics or dictatorship. Signal is a chat app, not a magical tool, even if it is helpful for those objectives. That's what we mean when we say "security is layered".

So, if your government has unlimited resources (that is to say, they can simply arrest and sentence you if they *think* you *may* be using Signal, Telegram, WhatsApp, Tox chat, ..., without judicial review), then maybe Signal is not your biggest problem.


It seems like the phone number is used mainly for matching you up to your contacts, and secondarily used for a first level of authentication. Signal has always encouraged independent verification of folks' public keys for sensitive communication.

Whether or not AWS is risky, I don't think Signal has any increased risk hosting its infrastructure there vs. any other service. The whole point is that comms are end-to-end encrypted from handset to handset, so any data in Amazon's hands is encrypted.


Seems like using a phone number as an account identifier is a huge risk to privacy. Has Rosenfeld admitted this? It’s just weird to require a phone number unless you’re talking about some big tech botnet like Facebook or Google.


The Rosenfeld stuff is weird, man.


It seems that a couple of security researchers from this community felt that Signal's implementation of a TLS-in-TLS proxy to allow its use in censored Iran didn't live up to their standards (it can be detected by censors and blocked). However, after Signal rejected this issue, they turned toxic and were prevented from posting anymore [1].

The above post is their reaction, which feels more like them lashing out rather than attempting to uphold the greater values of the anti-censorship community. I feel that it doesn't benefit anyone that they behaved this way, choosing to attack the Signal team and the reporter of the article below, rather than resolving the issue productively while allowing the community to continue focusing on their mission.

[1] https://www.bleepingcomputer.com/news/security/removal-notic...


It's more important how we all feel about each other and our drama than the fact that there isn't currently an easily available, obvious way to have private secure conversations.

Your "they are not being constructive enough" is actually very unconstructive, because it drags the conversation into more drama.

The tone is not more important than the facts. It never is.

I'm not suggesting you have some ulterior motive to deflect from the facts. Anyone could have written this reaction.

The top comment on a thread like this is always the same: talking about tone. I don't mean this offensively (I'm sure I've done it myself at times), but it feels like theater. Like a journalist asking a question they know they won't get an answer to. Talking about drama is the same as participating in it.

So what should we use instead of signal?


Tone can often be more important than facts.

At one point in my career I had a somewhat public facing role. I made a tough decision that aggravated a user, who decided to send me several death threats. Suddenly that tough decision wasn’t so tough anymore. Any possible resolution was gone.

These situations involve people. We aren’t fact machines.


> Tone can often be more important than facts.

Exactly: https://www.edge.org/response-detail/27181


That is really useful:

> Russell Conjugation (or “emotive conjugation”) is a presently obscure construction from linguistics, psychology and rhetoric which demonstrates how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions.

> Years later, the data-driven pollster Frank Luntz stumbled on much the same concept unaware of Russell’s earlier construction. By holding focus-groups with new real time technology that let participants share emotional responses to changes in authoritative language, Luntz was lead to make a stunning discovery that pushed Russell’s construction out of the realm of linguistics and into the realm of applied psychology. What he found was extraordinary: many if not most people form their opinions based solely on whatever Russell conjugation is presented to them and not on the underlying facts.


And the last sentence:

> (Humans) fear (that) authentic emotions will get us into trouble with our social group, and so continue to look to others to tell us what is safe to feel.

By e.g. paying attention to whether the speaker said "whistle blower" or "snitch"


brilliantly explored in 'yes prime minister': https://www.youtube.com/watch?v=G0ZZJXw4MTA



This was very helpful. I'd never heard of 'Russell conjugation' before.


Good read. Formalizes what Fox News etc. do to everything they spout out.


The entire mainstream media no longer just reports the facts, but now has to mix in the reporters' opinions and feelings too...


I think the point is that these people just sound like they are barking up the wrong tree. They're bitching at a non-profit org that gives away its services for free for not doing things exactly the way they want, and then getting self-righteously upset when said non-profit bans them for acting in an abusive manner.

> The tone is not more important than the facts. It never is.

This is 100% wrong. Tone does matter. If you want someone to do something for you, acting entitled and insulting them usually isn't going to get you where you want to go. Unfortunately straight facts don't sway hearts and minds. That is just how human psychology works. I wish it were different, but wishing does not make it so (speaking of facts!).

The Signal team does not owe these people a way to conduct private secure conversations. Yet they are working on it anyway, because they believe it's the right thing to do. And I bet it's pretty demotivating for a bunch of people to come and tell them that they're doing it wrong and their current interim efforts are useless. No one is owed an explanation or dialogue from the Signal team, and behaving aggressively in order to demand one is about the most unproductive thing they could do.


> The tone is not more important than the facts. It never is.

This is an error software engineers sometimes make.

When working with human beings, tone matters. Tone always matters. "Nature cannot be fooled," but presenting facts with the wrong tone can lead to them being discarded, harming the project and/or people involved. You get better outcomes recognizing that people make better decisions when they aren't emotionally tilted.

The successful projects operated by people who don't know how to interact with other people are significant outliers (and in some cases, their creators and maintainers have recanted their past approach as counter-productive, ref. https://arstechnica.com/gadgets/2018/09/linus-torvalds-apolo...).


The thing about Mother Nature is her dependability, not only can she not be fooled, she's the firmest conceivable foundation upon which to build. When you're depending upon tone that's never more than a subtle shift of tone from disaster. "Four legs good, Two legs bad" becomes "Four legs good, Two legs better" so easily.

I agree with you that tone matters, but I think that's a bad thing, a weakness or vulnerability. We should take "tone matters" into consideration the same way we'd take "OCSP without stapling results in a query to the CA for each leaf certificate examined, thereby harming privacy" into consideration. Can we prevent it? Can we mitigate the resulting harms? We definitely shouldn't celebrate it.


> not only can she not be fooled, she's the firmest conceivable foundation upon which to build

I agree.

What do we do with that observation when we then observe that human beings care so deeply about how they're being interacted with by other human beings? We are products of nature, after all.


>The tone is not more important than the facts. It never is

I think this framing is wrong. Tone and facts are both important (often equally so) and must both be addressed in parallel tracks.

If someone rudely raises concerns about the security of your product, it's fine to ban them as long as you also address their claims of insecurity. You can kill a community by not addressing claims of technical flaws and you can kill a community by not enforcing standards of conduct within it.


> The tone is not more important than the facts

Of course it can be. If your tone is so bad, that nobody listens to you or implements things that you want, then it doesn't matter how right you are on anything.

Getting stuff done and solving problems relies on way more things than just being right.

> Im not suggesting you have some alternative motive to deflect the facts.

Ok but by making these comments you are also deflecting from real problems that having a bad tone causes.


People respond poorly to abuse. This is basic human nature, trying to fight against this is a fool’s errand.


I agree; it is a chain of poor responses to abuse. These people probably considered the original response they got (along with having their pull request deleted) abusive, which is why they responded the way they did.


I don't really have any particular opinion on Moxie or Signal; I've never interacted with either the product, the community, or the person. But I will say in the abstract that many founders drastically underestimate how much extra headache poor community management will cause them in the long run.


> there isn't a currently easily available obvious way to have private secure conversations.

Ricochet[1] works really well. It uses Tor hidden services to communicate. Your Ricochet ID is your onion address. To add a contact, you input their Ricochet ID and a short message, and Ricochet connects to their onion address and sends a contact request. If the contact request is accepted then you'll each show up as a contact on each other's client and can chat whenever you want.

Tor is really perfect for this; you can't get more private or censorship-resistant than Tor.

The UI is currently not great, but that's not a protocol problem.

The biggest problem with Ricochet is that hardly anyone is using it.

Private secure conversations: yes.

Easily available: yes.

Obvious: sadly not, for most people :(.

[1] https://ricochet.im/
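The "your ID is your onion address" model is simple at the wire level too: a client just asks the local Tor SOCKS proxy to connect to the contact's onion name. A minimal sketch (the contact address and service port here are made up; real Ricochet layers its own handshake on top):

```python
import struct

def socks5_connect_request(host: str, port: int) -> bytes:
    # SOCKS5 CONNECT with a domain-name target (ATYP=0x03). Tor resolves
    # .onion names itself, so the client never performs a DNS lookup.
    name = host.encode("ascii")
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)

# Hypothetical contact; a real Ricochet ID encodes the peer's onion address.
request = socks5_connect_request("exampleonionaddress.onion", 9878)  # port assumed

# To actually chat you'd open a TCP connection to Tor's local SOCKS port
# (commonly 127.0.0.1:9050), send the b"\x05\x01\x00" no-auth greeting,
# read the method reply, then send `request` and talk through the circuit.
print(request.hex())
```

Because both ends are onion services, there is no server in the middle to subpoena or block at the DNS level; the rendezvous happens entirely inside the Tor network.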


Doesn't the security of Tor depend on the proposition "Surely my opponent would never operate a bunch of exit nodes"? That has always been my impression, and it seems like a problem when your opponent is a state actor.


Hidden services don't use exit nodes.


You could first charitably strengthen their argument by silently correcting “exit nodes” to “nodes”. The core point stands.


I don't think the core point does stand.

1. To deanonymise a hidden service connection you need to observe the traffic of all of the nodes in the circuit.

2. OK, let's say your adversary controls all of the nodes in the circuit and deanonymises the endpoints. Now what? You're no worse off than you would be if you weren't using Tor in the first place, so it's not an argument against Tor at all. All it's saying is "the absolute worst case of using Tor is no worse than the best case of not using Tor".


> the absolute worst case of using Tor is no worse than the best case of not using Tor

While this is true, I just wanted to point out that one does not, in fact, need *all* the nodes. It is possible to perform traffic analysis and infer which nodes are used by a certain user even if the attacker only controls a part of them. [1]

While this of course doesn't change the fact that using Tor is a good idea, one should not be lured into a false feeling of security when using it.

[1]: https://murdoch.is/papers/oakland05torta.pdf


I think we are both talking about tone. While you are saying that expressing emotions and the drama is important in a discussion over Signal's future, I believe that such conduct only drives a wedge into it. These issues are emotional and affect important freedoms, but while expressing them is important, doing it in such a high-profile, damaging way can only bruise egos and create even more tension. Instead, both parties should sit down and take a long, serious look at their grievances and how they will address them.


How exactly do both parties sit down to discuss their grievances when the incumbent party is clearly banning the party with a different perspective?


They're banning the other party for their abusive language and behaviour, for their unsubstantiated, bad-faith claims of suppression and for misusing project resources. On top of the fact that they're not listening to why their assertions are incorrect.

Any party acting in such a belligerent, infantile manner is going to be banned since they have proven they cannot act like grown-ups in a grown-up setting.


That's a fair point and I agree with you. Something I've been wondering as of lately, what can we (as a society) do to move off the edge of high emotions? I feel as if it's a common theme anywhere I look.


It's a very tough problem to solve, especially given how social media algorithms stir outrage and throw civility away for higher engagement numbers.


We must start reading the rules of the online places we visit, as a start, and obeying them. If we don't agree with the rules, don't like "codes of conduct"? Fine, we do not participate there at all.

It's their house and we abide by their rules.

If we break a rule and it's pointed out, then we apologise and goto 10: read and follow the rules. We do not throw tantrums, we do not cry "censorship! suppression!".

We act in good faith: if we post a thread, open an issue, submit a PR, and it is closed, then we do not simply repeat our action. Whether we agree with the closure or not, repeating is an attempt at evasion and a smack in the face of those running the place. Either of these two behaviours then invite us to be banned, because we have acted in bad faith.

We do not immediately and vocally assume that an act we don't like is a personal attack against ourselves or our values. If our post is "hidden by the community", this does not mean "the leadership of the project is orchestrating an agenda against us". It means our peers have found our conduct distasteful, and it is a very loud alarm that we must heed: we have behaved outside of the expected conduct and our peers found it distasteful, unhelpful, insulting. If a web site algorithm has prevented us from posting a link or an image because our account is new or has triggered anti-spam measures, we do not post elsewhere about how we're being persecuted.

We invite like-minded people to join the discussion when they have innovative ideas, when they can add material to a discussion that has not yet been supplied, an angle that has not been addressed, or a concept that has been misunderstood. We never ping our friends to jump on our bandwagon, shouting the same things over and over again. Perhaps if a concern is dismissed out of hand, then more voices can be constructive, but they must conduct themselves with civility and be particularly aware that they need to add to the discussion, not add pressure.

If a counterpoint is given to something we passionately believe in, then to discuss is to use logic and data to refute it. In the ideal, we ask ourselves to argue for this counterpoint: perhaps it is entirely valid? What we must refrain from is reading a fair and polite counterpoint and immediately treating it as an attack or a dismissal. This prompts a counter-attack and we are no longer discussing - we are now detracting from the point. When we make our issue or improvement a negative, it reflects back upon us. Who wishes to discuss with a party that cannot cope with rational disagreement? In addition, we must resist the urge to simply exaggerate our cause: to state an incorrect point more loudly does not make it correct, it just antagonises those who disagree - the very people we are trying to get to see our reasons, our solutions, our problems.

Once we have broken the rules, assumed and publicised bad faith, breached expected conduct, ignored the ire of our peers, evaded bans, repeated actions which were turned down, called on our friends to flame and troll, replied to constructive criticism with louder voices, manipulated the conversation with hyperbole and outright refused to listen to the possibility we may be wrong...then we hold a beacon above our heads, advertising that we are incapable of joining a rational debate and seek not to improve anything but only be told we are right and righteous.

I say this not to you, but to answer your question: if anyone reads what has transpired in this matter, and then asks your question, they need to very deeply analyse their behaviour because it is unacceptable in any civilised society.


I'm sorry, but we don't read replies longer than 140 characters or that use the word "persecuted". Please create a new account and re-submit your argument in the form of a haiku.

Having made rules is not sufficient for those rules to be just. Rules are not themselves authority bearing - nor can one side be upset when they make obnoxious rules and get push back. When you respond to criticism of those rules by deleting the criticisms... well it's clear you are no longer hosting an open forum and instead trying to shut down speech you don't like.

The posters did not use insults, they did not attack the people behind Signal - they pointed out that the statement regarding the proxies was false (which it factually was) and that the circumvention that Signal gave was likely insufficient for most users. Shutting down a potentially serious security bug because it's in the wrong spot or because you don't like the tone is bullshit - it tells me that you as a person care more about tone policing than keeping your users safe. When you're doing battle against nation-states who like to jail their dissidents, you don't get to reap half-successes.

This isn't a child's baseball game, this is a situation where lives are at risk. "Sorry, we really tried to put out the fire, but your yard sign made me upset and I had to go write in my journal instead of doing my job."


> Having made rules is not sufficient for those rules to be just

Quite. And if one doesn't think the rules are just, then simply don't play the game. However, rules such as "don't spam an issue", "don't spam a PR", "don't insult others", "please use the forum for this discussion" strike me as being simple, sensible and just rules. Which rules are unjust, in this context?

> Rules are not themselves authority bearing - nor can one side be upset when they make obnoxious rules and get push back.

In a dictatorship - such as a web site forum - the rules are, in fact, authority-bearing. Since a user or their content can be removed at the whim of an operator, that authority is proven. This entire dramatic performance has come about because the entirety of one "side" is upset that they've been subjected to pushback for breaking the rules, and that the authority of those rules has been exercised.

> When you respond to criticism of those rules by deleting the criticisms... well it's clear you are no longer hosting an open forum and instead trying to shut down speech you don't like.

You are misrepresenting what happened here. A user committed misconduct (of the sort that most projects would react badly to) and their offending material was deleted because it was an unhelpful duplicate placed in the wrong forum. Such content can only be deleted because it is... unhelpful, duplicate, and in the wrong forum. All that was needed was for the discussion to move to where it was expected. GitHub projects are not open forums and the PR was not speech.

> The posters did not use insults, they did not attack the people behind signal

I refute this statement with the following: https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec... https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec... https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec... https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec... https://community.signalusers.org/t/why-signal-blocked-me-fr... (more a threat than insult, I guess) https://community.signalusers.org/t/why-signal-blocked-me-fr... https://community.signalusers.org/t/why-signal-blocked-me-fr...

> Shutting down a potentially serious security bug

They were not shut down to begin with - they were simply asked to post in the correct forum. Once they started their abusive behaviour they had to be shut down because they couldn't behave themselves.

> This isn't a child's baseball game, this is a situation where lives are at risk. "Sorry, we really tried to put out the fire, but your yard sign made me upset and I had to go write in my journal instead of doing my job."

I'm not sure what you're trying to achieve here, other than proving one of my latter points.


> I refute this statement with the following..

If any of those so much as raise an eyebrow, you must be the most sheltered darling on the entire internet. "Moxie and Signal is shit"? Really? I get called worse names in online gaming by kids.

> ..their offending material was deleted because it was an unhelpful...

Their offending material was a security issue! A fair number of people seem to share their concerns. If it's in the wrong place then move it, and if it's a duplicate then close it and add a link to the original where the conversation is happening. If you can't handle basic moderation of your forum, then stop using your damned forum and maybe use GitHub issues like every other project.

> In a dictatorship...

Yeah, Signal can throw a tantrum, take their toys and go home. So can we, their userbase and the people who recommend it. Right now I'm one of those people who can be reached on Signal and who recommends it to others - and if Signal can't find a way to appropriately receive feedback, then I'm no longer going to be doing that.

> Once they started their abusive behaviour they had to be shut down..

No. They didn't. Signal staff could have literally just responded: "Hey, thank you for the report, we're examining this now and will update as we can. Please mind the language." That's literally all it would have taken. Instead Signal continues to stick its head in the sand and ruin its relationship with its users.


> The tone is not more important than the facts. It never is.

We don’t live in the same world. Without proper tone, my message is never received. And yours is?


> So what should we use instead of signal?

Threema is one alternative.


I recently bought Threema and I can only say that I like it more than Signal. Now it even has local (meaning your images don't leave your phone) object detection in images, global search in chats, etc. The only thing that's missing is usage on multiple devices and a native desktop client - but the app itself is great so far.


Or Session


Telegram?


> rather than resolving the issue productively

Unfortunately it's not possible to productively resolve issues with the Signal team, something you can find documented again and again.

(My own experience: I had to justify the user impact of 30+ sec freezes on every sent message, confirmed by multiple people. Bug was closed wontfix.)

This is a known thing with Moxie and the culture he's created at Signal and it's unfortunate that he's still starting drama with everyone instead of doing any self-reflection.


FWIW, my experience with Signal sadly confirms this. There’s a critical issue for years with the iOS app that there’s no way to backup or otherwise extract your chat logs (contrary to the usual behavior of iOS apps which automatically backup to your computer or iCloud), no warning of this when you first install, and almost no communication from developers on the subject for years despite huge numbers of complaints.

They hide behind the shield of being volunteers to justify not addressing or communicating about any user concerns, but they also want to play in the big leagues and have hundreds of millions of users who would otherwise be using other chat platforms.


> They hide behind the shield of being volunteers to justify not addressing or communicating about any user concerns

I agree this lacking feature is an important matter, but the Signal team have explained why the iOS app doesn't have a backup facility. Saying there's "no communication" is not true.

I'm not sure what the purpose of saying "hiding behind the shield of being volunteers" is. Are you implying they're lying and that they simply don't care? Perhaps that they're raking in their paycheck whilst leaving the volunteers to martyr themselves against complaining users? Neither is a helpful accusation.


Not no communication. Almost no communication. They have occasionally given explanations for why the feature is difficult to implement, and occasionally given explanations for why they think the feature shouldn’t exist at all even if it could be implemented (despite it existing for Android users). They have never clearly communicated what their intent is: Will they implement it? If so, when? And they have never clearly warned users of this sharp edge in advance of installing software which will hurt them if they care about not losing control over their own data.

By “hiding behind the shield of being volunteers”, I’m not implying anything about them lying about anything. I’m saying that they have explicitly, on multiple occasions, indicated that it’s bad form for users to feel entitled to certain dealbreaker issues being fixed, or even to feel entitled to communication about whether those issues will be fixed and on what schedule. And their reason for believing users are not entitled to anything from them is that they are just volunteers.


But where are they supposed to do this additional communication? Surely they can't go reading and responding to every thread online that discusses Signal - there are just so many of them. On GitHub, too, issues often get duplicated or drowned in comments.

(Although I strongly disagree that they should be saying when they are going to implement it, as that's only setting themselves up for failure: unless it's almost ready, there's just too many things that can influence your roadmap.)

For what it's worth, they did provide another update on this one month ago, saying that they do intend to implement it (and thus, presumably, think it can and should be done): https://www.reddit.com/r/technology/comments/kt91qk/signal_p...

> Thanks, we know this is a big deal and think about it a lot. We're working on ways to do it that would be privacy preserving, and in the mean time we've got the p2p device transfer you mention.


> But where are they supposed to do the more communication? Surely they can't go reading and responding to every thread online that discusses Signal - there's just so many of them.

They could put out an official statement on their web site about the matter that everyone can reference. "We intend to do this and here's the way we intend for it to work, and we expect it to take roughly 1 year ±6 months to implement. Here's the GitHub issue to track our progress."

Or they could post something on this thread on their official forums which has 18.7k views, 731 likes, and 384 replies: https://whispersystems.discoursehosting.net/t/ios-backup-kee...

This isn't rocket science; plenty of other organizations have ways of disseminating information to millions of people so that everyone knows what's up. I don't expect the White House Press Secretary to speak to me personally, but I do expect her to answer questions from reporters and make official statements about matters that huge numbers of people care about.

Either way, there needs to be some acknowledgment that this is not just a nice-to-have feature request, but that things are actively, terribly broken for certain users at the moment. They should not be working on aesthetic features like Stickers when something is so fundamentally broken. They should be acknowledging their users' pain, apologizing for having screwed up, and emphasizing that they appreciate the priority of this matter.

And until the issue is fixed, it would also take approximately 0 effort for them to warn users about this prior to installing or using the app, so that users can opt out in the meantime if they want control over their data.

>> Thanks, we know this is a big deal and think about it a lot. We're working on ways to do it that would be privacy preserving

Thanks, that's a small step in the right direction which I hadn't seen. Still, it comes after years of being almost entirely mum on the subject, and "think[ing] about it a lot" isn't terribly great comfort to users who have been stuck in the lurch for literally years. How long are they going to be thinking about it? When do they start taking action? What does "privacy preserving" mean?


> This isn't rocket science; plenty of other organizations have ways of disseminating information to millions of people so that everyone knows what's up.

Is that so? Do people know when WhatsApp is going to add feature x or address bug y?

> Thanks, that's a small step in the right direction which I hadn't seen. Still, it comes after years of being almost entirely mum on the subject

Does it? Or is it possible that you also hadn't seen all earlier instances where they made statements like that? It's just that that sounds very possible to me, given how many different issues there are that affect many different people.

(In addition, the other questions you mention seem unanswerable, unless it really is their one and only number-one priority, which seems unlikely given e.g. events like the outage not too long ago.)


As they've claimed, this is a security vulnerability rather than a software bug. If they really think it's a vulnerability, they should report it to Signal's security email: https://support.signal.org/hc/en-us/articles/360007320791-Ho...


But even then, there's really no point in trolling the PR section of GitHub other than griefing. Just fork the thing and make a better Signal if you believe so strongly that there's no hope with Moxie at the helm.


Even if one thought that this would help the people who need help on this matter, you can't really fork Signal as it is today, I think. Or at least not whatever Signal is running on its servers, because that is very unlikely to be the software in its public repo, which hasn't been updated in almost a year. And even for a while before then, most of the commits were version bumps with no visible changes to the code.

If anything, that's another problem with Signal that's not getting enough attention (that I've seen): it claims to be open source, but as of now, it doesn't seem to be. At least not on the servers.


Not only that, but Signal has indicated that third-party clients are not welcome to use their servers. So even if you contented yourself with forking the client, you can't use it.

https://github.com/LibreSignal/LibreSignal/issues/37#issueco...


"Just fork it" isn't useful if they believe this is putting people in danger right now.


What could have been the more productive way? If their issues are closed (and Signal does not seem interested in discussing this) and they feel like this is actively putting people's lives in danger, I feel they should call this out.


Following the project's guidance on interaction, especially when directed to it specifically. Remaining cordial when engaged on the technical aspects, rather than throwing one's toys out of the pram the moment one is challenged. Avoiding excessive exaggeration of the issues as a tool to amplify one's point of view. Not immediately stomping around the project's spaces, hurling insults and factually incorrect accusations, and orating about how one must be correct, rather than engaging in reasoned debate. Not drumming up a playground of like-minded people to assail those who disagree with one.

Nowhere in technical communities is this behaviour tolerable, productive or successful. This affair is painfully cringey to watch; it reads like a sugar-induced temper tantrum by a class of kindergarteners screeching at an adult that their juice cartons should be a different shape because corners are dangerous.

It would have been more productive if the group had not embarrassed themselves with every single action they've made.


>If their issues are closed (and Signal does not seem interested in discussing this)

Signal merely asked that they post on community.signalusers.org instead of Github.

>they feel like this is actively putting people's lives in danger

That's obviously bullshit though; this can't possibly put people's lives in more danger than using Signal without a proxy a week ago would've.


> this can't possibly put people's lives in more danger than using Signal without a proxy a week ago would've.

I see one reason it could: it singles out the people who do "need" to use it. It could even catch people who did not use it before but think it's undetectable now. Signal implies it can't be detected, at least to non-technical readers.

>Unlike a standard HTTP proxy, connections to the Signal TLS Proxy look just like regular encrypted web traffic. There’s no CONNECT method in a plaintext request to reveal to censors that a proxy is being used.

https://signal.org/blog/help-iran-reconnect/
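The "no plaintext CONNECT" point in the quoted post is concrete: a traditional HTTP proxy handshake is trivially machine-detectable, which is the specific fingerprint a TLS proxy avoids. A toy sketch of what a censor's filter would match on (an illustration, not any real DPI product):

```python
def looks_like_plain_http_proxy(first_bytes: bytes) -> bool:
    """Return True if a flow opens with an HTTP proxy CONNECT request.

    A plaintext proxy handshake begins with something like
    b'CONNECT chat.signal.org:443 HTTP/1.1', which middleboxes can
    match byte-for-byte. A TLS connection instead opens with a
    ClientHello record (first byte 0x16), which on its own looks like
    any other HTTPS traffic.
    """
    return first_bytes.startswith(b"CONNECT ")

print(looks_like_plain_http_proxy(b"CONNECT chat.signal.org:443 HTTP/1.1\r\n"))  # True
print(looks_like_plain_http_proxy(b"\x16\x03\x01\x02\x00"))  # False: indistinguishable opener
```

Of course, as the surrounding discussion notes, avoiding this one fingerprint says nothing about SNI inspection, active probing, or traffic analysis.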


If the mere use of Signal is banned, traffic analysis tools and DPI can be used to identify users and bring them the unwelcome attention of the regime's well-staffed secret police. I'm sure the Chinese are selling them surveillance tech, and if not, Iranians are quite capable of developing it themselves.

It’s not a simple issue to resolve. WireGuard is better in that it only establishes a flow if authenticated, but UDP traffic is a giveaway.

The bug reporters reacted immaturely to being asked to submit the report on the Signal forum instead of GitHub, but Signal hiding behind a CoC to avoid discussing substantive issues is not a good look.


> I’m sure the Chinese are selling them surveillance tech, and if not Iranians are quite capable of developing it themselves.

Actually it seems more likely that it's US-built censorship tools -- specifically BlueCoat, which was detected in 2013[1]. BlueCoat claimed they didn't sell the hardware to Iran because it would violate sanctions but that's not much consolation for the people who are being surveilled using their tools.

[1]: https://www.washingtonpost.com/world/national-security/repor...


Here is how we, responsible security researchers, do things. Follow these steps:

1. Is this a security vulnerability, or simply a bug? If it's just a bug, send it to GitHub Issues or to the user forum, according to the maintainer's instructions (Signal uses the forum instead of issues). If it's a security vulnerability, go to step 2.

2. Is there a secure channel to contact the software provider, or can the provider set one up? For Signal, the best way is to open an issue saying "hey, we found a vuln, is there a PGP pubkey we can trust?". If they haven't provided one after 14 days, go to step 4b. If they have, go to step 3.

3. Contact the provider and tell them what the vulnerability is and how to fix it. Now it's the provider's responsibility to drive the bug-fix process. If they fix it, deliver it, and tell you their customers are all safe now, go to step 4a. If anything else happens (e.g. they refuse and insist this is not a bug), or 90 days pass, whichever comes first, go to step 4b.

4. Finally:

4a. In this case the vendor fixed everything and patches have been delivered, so whatever the vendor thinks about it, you can just write a blog post saying "I found a vulnerability in some software, here is the PoC". If you have a CVE number, congrats, now you can write an article about it. Everything is done, and you can hunt the next bug if you want.

4b. In this case, either the vendor does not want to fix the bug, failed to fix it in time, failed to manage their software in time, or simply doesn't care about you. This is the vendor's failure, not yours. So now you can write a blog post saying "here is a 0-day, try it if you want, have fun".

This is the general ruleset for how we do things. The word "productive", especially when used to mean doing a job quickly, is sometimes at odds with our primary objective. We fuzz and dig for vulnerabilities to *make users safer*, not to *be productive*. To protect users, protect ourselves, and protect everyone from evil-maid attacks, we responsible security researchers all agree to follow these rules, so that everyone can benefit from vulnerability findings. If I've failed to explain what responsible disclosure is, look it up on Wikipedia. Most teams follow this rule, including Google's Project Zero, MSRC, Amazon's bug bounty, Bugcrowd, and thousands of other platforms/teams.

Let's go back to the topic: why do I think these people are gangsters?

1. They directly published the full exploit, not even just a simple PoC. This goes far beyond the basic consensus. Once they did that, none of the rules above apply, because they are no longer responsible security researchers. I don't think they deserve any CVE numbers or credit from any vulnerability program; if anything, they deserve a warrant from the FBI or China's MPS, since this is simply criminal behavior.

2. Closing an issue does not mean ending a conversation. Signal's team clearly said they should go to the forum, but they simply did not follow that rule. Signal also has a bounty e-mail (https://support.signal.org/hc/en-us/articles/360007320791-Ho...), but these gangsters just ignored it; otherwise they would have been filling that mailbox with PGP-signed reports.

3. They claim this is a vulnerability, but they are not treating it as one, since they apparently did not consider releasing the PoC a risk for users. Fun fact: user security is the weapon they wield in every article they have published, including the one at BleepingComputer (https://www.bleepingcomputer.com/news/security/removal-notic...).

4. In a private Chinese group, one of the author's followers commented on this event: "They should just use V2Ray for that", and the author replied in agreement: "Why build your own software instead of using good old ones?". That is enough for me to believe they are not acting in good faith toward Signal, or toward users of Signal.

Let's leave it there and go find more vulnerabilities in the GFW instead of Signal. This is just an amusing joke, presented to you by some V2Ray authors to promote their own software.


For starters, don't name your proof-of-concept repo "fuck-signal-tls-proxy"?


Start their own app that's better.

The fork option is there and always has been.


It would take years of effort and years of time to get people to switch to $BetterSignal. It is a last resort.


So they would rather steal the hard work of others. Doesn't make me any more sympathetic to them.


This isn't putting anybody's life in danger - to my rough understanding the only thing detection of a proxy allows for is its takedown. I doubt the Iranian government has the resources or will to trawl their entire net for these proxies and trace their physical locations. What I meant by resolving the situation in a more productive way entails taking a step back and considering the situation outside this Twitter and Github row.

Both the Signal team and this anti-censorship BBS strive towards the same values, and the only thing drama and indignation does is to crack and weaken the effect of the community as a whole. The public sparring should stop and longer-term dialogues should be held to consider everyone's points and come to a conclusion that reasonably satisfies all sides. Depending on emotional investment this may be tough to do at the moment, but down the line it will do wonders for increasing cohesion and productivity.


People have been arrested for merely using an anti-censorship proxy in my side of the world. There is a real danger, even if you have never witnessed one.


> I doubt the Iranian government has the resources or will to trawl their entire net for these proxies and trace their physical locations.

The proxies are necessarily run outside of Iran, as Signal is blocked inside of Iran. I think the (tenuous) argument is that the government could see that user X is connecting to proxy host Y (outside of Iran), and then themselves connect to proxy host Y to verify it's a Signal proxy, and then take action (including potential violence) against user X for connecting to it (and presumably block further connections from within Iran to proxy host Y).

It's overblown, I think.

> Both the Signal team and this anti-censorship BBS strive towards the same values, and the only thing drama and indignation does is to crack and weaken the effect of the community as a whole.

This is precisely why I'm so curious about why this happened. It's easy to dismiss it as simple douchebaggery, but at least one of the accounts harassing Moxie on Twitter about it has the classic hallmarks of a sockpuppet, and the whole over-the-top PGP signing thing (and opening of multiple issues, and seeking press) makes me think this is a bit more of a coordinated smear campaign.


>It's overblown, I think.

https://freedomhouse.org/country/iran/freedom-net/2019

>Several harsh prison sentences were handed down during the reporting period in retaliation for online activities. Mostafa Abdi, an editor of the news site Majzooban Noor, was sentenced to 26 years in prison and 74 lashes in August 2018. Five other journalists at the outlet received sentences ranging from 7 to 12 years (see C3).


Did they draw the government's attention because they were connecting to banned websites or because they were running a high-profile news outlet?

It seems unlikely to me that the Iranian government would be able to prosecute even a small fraction of instances of the former, whereas there are only a small number of high-profile news outlets at any given time.


Due process is not really a thing in Iran. If the Government wants you in prison, they will throw you in on flimsy pretexts. Which is why journalists need to be extra careful: they don’t just need to hide their activities but need to hide any trace or hints of participating in those activities. Outspoken people will be monitored closely, and even an attempt to access signal (say) could be used by the Government to imprison the user.


Some were jailed just for being Instagram "influencers". So it doesn't seem to be just high-profile news outlets.


Signal is end-to-end encrypted. The only thing the network surveillance would be able to determine is that you were connecting to Signal via an open Signal proxy, not the people you were talking to, or the content of your messages.


When proxying, who you're talking to can be determined from large-scale network surveillance. You look for patterns of messages sent from your device and messages of the same size received by another device immediately after.
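As a toy illustration of the size-and-timing correlation just described (hypothetical flow logs, not real capture data), a global observer only needs per-message sizes and timestamps at two vantage points:

```python
from datetime import datetime, timedelta

# Hypothetical flow logs: (timestamp, payload size in bytes), as a
# network observer might record them leaving one host and arriving
# at another. Contents are encrypted; sizes and timing are not.
sent = [
    (datetime(2021, 2, 9, 12, 0, 0, 100000), 1378),
    (datetime(2021, 2, 9, 12, 0, 5, 300000), 912),
    (datetime(2021, 2, 9, 12, 0, 9, 0), 1378),
]
received = [
    (datetime(2021, 2, 9, 12, 0, 0, 350000), 1378),  # shadows sent[0]
    (datetime(2021, 2, 9, 12, 0, 5, 520000), 912),   # shadows sent[1]
    (datetime(2021, 2, 9, 12, 0, 7, 0), 640),        # unrelated traffic
]

def correlate(sent, received, window=timedelta(seconds=1)):
    """Pair each outgoing record with an incoming record of identical
    size observed within `window` afterwards."""
    matches = []
    for ts, size in sent:
        for tr, size_r in received:
            if size_r == size and timedelta(0) <= tr - ts <= window:
                matches.append((ts, tr, size))
                break
    return matches

print(correlate(sent, received))  # two of the three sent messages pair up
```

Real attacks are statistical rather than exact-match, but this is why padding messages to fixed sizes and adding delay/cover traffic are the standard countermeasures.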


The whole point is that, in countries where Signal is forbidden, the mere fact of connecting to it can lead to big trouble.

However, I acknowledge that the problem the Signal team solves here is orthogonal to that of hiding the users.


Yes? For some people, just that fact would reveal a need to use Signal, and it could be used in prosecution. Especially at the moment, when most people would not yet be using it.

Revealing the fact that one uses Signal can be an issue in itself.


> classic hallmarks of sockpuppets

I doubt it, the two main researchers behind it both have an extensive history of contributions on Github. The correct explanation is most likely the simplest: egos mixed with typical programmer idealism proved to be a Molotov cocktail that flared into drama.


I'm in the process of re-reading the book, How To Win Friends and Influence People. It's an older book but the discussion on human behavior and utilizing that to influence people is still useful.

I found the behavior and statements around this to be the kind that make the situation worse rather than make it better. They appear to be working against their own goal and may not realize it.


Seems to me it was Moxie attacking, not them.


I recommend putting their "anti-censorship community" in scare quotes. I'm anti-censorship, but I'm not a member of their group. Their behavior insults me. I'm not represented by them.


They even talk about their own inappropriate behaviour in this statement:

>2021-02-06 12:00 @DuckSoft sended a pull request that adds the PoC to Signal TLS proxy's repository. It has since been deleted and both @DuckSoft and @studentmain were banned by the Signal organization on GitHub in the afternoon. A repost by @U-v-U was later closed and locked.


I do not see any evidence of this in said quote.


You don't see why making such a pull request would be inappropriate? Do you understand what pull requests are for?

Does this look like an attempt at productive contribution to you? https://github.com/signalapp/Signal-TLS-Proxy/pull/15

Is this a good patch? It just drops a random file into the repo. https://github.com/signalapp/Signal-TLS-Proxy/commit/40f4d9d...

These people decided to abuse the pull request system after being asked to use https://community.signalusers.org/ instead of GH issues to discuss their concerns.


This has nothing to do with my post.


Of course it does, unless you were replying to the wrong comment in the first place.

What was it you did not see any evidence of?


I think that it was you who replied to the wrong comment. I did not see any evidence of them talking about any inappropriate behavior of their own nor did I see them saying anything inappropriate in the quote that you posted.


You have a weird definition of "inappropriate".


You have to be pretty thick if you can’t understand how those pull requests were inappropriate.


You have to be pretty thick if you can’t understand what my post was about.


>2021-02-06 12:00 @DuckSoft sended a pull request that adds the PoC to Signal TLS proxy's repository.

This is inappropriate. Pulling in a random PoC to the repo is not how you’re supposed to use PRs. Issues exist for this purpose, but theirs had already been removed.

>It has since been deleted [...] A repost by @U-v-U was later closed and locked.

Reposting the inappropriate PR is also inappropriate.


I do not find either of these to be inappropriate.


They are, along with all of the misbehaviour this group is perpetrating. Acting in this way will result in nothing but derision and bans from any organisation (and prospective employer, for that matter) because it is childish and unproductive.


How is reposting content that was previously removed by maintainers not inappropriate? Signal made it very clear that this stuff should be posted on their community forums, not github.


Look at this statement, it says it all:

"We are the underdogs, doing the real work, and yet unappreciated by many people."

This is the number one reason why people's tone gets sharper and sharper in online "communities", and often they are 100% right.

Most online "communities" devolve into cliques, where the powerful gang up on dissenters. Often the dissenters indeed do a lot of real work behind the scenes, while 80% of the powerful are well spoken parasites.

The powerful then resort to censorship, which escalates the situation.

In this case, who cares about resolving issues "productively" if people's lives are at stake?


Moxie - and the Signal team - seems to have a real issue taking feedback from outside experts. See the way he has been completely dismissive of the IME vulnerability highlighted by Naomi Wu and others.

I remember back when it was TextSecure - I tried to raise some usability and security issues. First I was ignored, then dismissed, then - a few years later - they implemented some of the changes.

I still use Signal. But the way the project is run is, dare I say, arrogant and dismissive.


This is exactly why I moved away from Signal: a combination of ignoring users' concerns and confusing/inaccurate communication about the security and privacy of their users. That's absolutely not what you want from a service people depend on for privacy.

If you're promoting your service to people who risk their lives and freedom by using it, you need to make it 100% clear to them what their risks are. Today I still run into people who have no idea that Signal is storing their profile information and their contacts on Signal's servers, and that opting out of setting a PIN will not prevent that. Signal still hasn't updated its privacy policy to reflect it either (it still states "Signal is designed to never collect or store any sensitive information.")


> Signal still haven't updated their privacy policy to reflect it either (it still states "Signal is designed to never collect or store any sensitive information.")

They don't need to update their privacy policy because they never have access to the profile information.

Technically, the encrypted profile information and your messages (while in transit to your contacts) are stored on their servers in exactly the same way. The only difference is that messages are deleted afterwards, whereas your profile is stored permanently until you decide to change it. That doesn't make the profile information any less secure, though. Yes, maybe in 20 years someone will be able to break AES-256 (or whatever symmetric encryption algorithm they use), and then the stored ciphertext version of your profile information might be valuable. Personally, I doubt it. But even if I turn out to be wrong: the possible attack vector against your profile information is the same as for messages. After all, tomorrow Signal could be convinced by an intelligence agency to permanently store all your encrypted messages from now on, and then the exact same risks of AES-256 getting broken would apply.

Conclusion: When Signal says they're not collecting any sensitive information, they mean that they themselves don't have access to any such information because it gets encrypted. This is the promise of end-to-end encryption. They're not promising anything beyond that.

In particular, they can't promise that the encryption will never get broken. No one knows. And no one in their right mind would promise anything like that. But at least they do everything to mitigate that risk by openly publishing all their cryptography algorithms for peer review and actively participating in scientific research surrounding that topic.
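The model being described (the server stores the same kind of opaque ciphertext for profiles as for in-transit messages) can be sketched in a few lines. This is a toy construction for illustration only: a SHA-256 counter keystream with an HMAC tag stands in for the AES-based scheme Signal actually uses, and it must never be used for real data. The operative property is that the key is derived on-device and never uploaded.

```python
import hashlib
import hmac
import os
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter keystream. Illustration only, NOT a real
    cipher; a production client would use AES here."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_profile(key: bytes, profile: bytes) -> bytes:
    """Client-side: encrypt-then-MAC. The returned opaque blob is all
    the server ever stores."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(profile, keystream(key, nonce, len(profile))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt_profile(key: bytes, blob: bytes) -> bytes:
    """Client-side: verify the tag, then decrypt."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    assert hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest())
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)  # derived on-device, never uploaded
blob = encrypt_profile(key, b"Alice | +15550123")
# A subpoena for the server's storage yields only `blob`, which is
# useless without `key`.
print(decrypt_profile(key, blob) == b"Alice | +15550123")  # True
```

Whether storing such blobs still counts as "collecting sensitive information" is exactly the terminological dispute in this thread.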

> I still run into people who have no idea that Signal is storing their profile information and their contacts on signal's servers

The precise meaning of the phrase "Signal is storing their profile information on [their] servers" and what the average person will actually understand by it are two entirely different things: most people will think that Signal stores that profile information in cleartext on its servers - because that is the current status quo with almost all popular online platforms - when in reality this is not the case.

Normally, I would be saying at this point: Please stop spreading FUD. But I do agree with your statement that

> If you're promoting your service to people who risk their lives and freedom by using it you need to make it 100% clear to them what their risks are.

Signal could indeed do a better job here. In view of the above, however, I have the feeling the risks weren't really clear to you, either? (No offense)


>> They don't need to update their privacy policy because they never have access to the profil information.

Irrelevant. Their policy states that their software is "designed to never collect or store any sensitive information" when, in fact, it does. Where and how they store the sensitive information they collect is entirely beside the point. Assumptions about what people may or may not think that means don't really matter: as written, it's a straightforward and 100% false statement.

> The only difference is that messages are deleted afterwards whereas your profile is stored permanently until you decide to change it. That doesn't make the profile information any less secure, though.

Yes it does. Regardless of the fact that it's encrypted, it isn't as secure as you might think. See this thread for details: https://community.signalusers.org/t/proper-secure-value-secu...

> After all, tomorrow Signal could get convinced by an intelligence agency to permanently store all your encrypted messages from now on and then the exact same risks of AES-256 getting broken would apply.

Funny you should mention that, because it turns out Signal was handed a subpoena back in 2016 demanding that they hand over subscribers' names, phone numbers, and contacts. At the time, they were very proud to say they told them "Too bad, we don't have any of that data". In their own words:

"We’ve designed the Signal service to minimize the data we retain about Signal users, so the only information we can produce in response to a request like this is the date and time a user registered with Signal and the last date of a user’s connectivity to the Signal service."

"Notably, things we don’t have stored include anything about a user’s contacts (such as the contacts themselves, a hash of the contacts, any other derivative contact information), anything about a user’s groups (such as how many groups a user is in, which groups a user is in, the membership lists of a user’s groups), or any records of who a user has been communicating with."

Today, they are collecting and storing exactly that data: users' names, photos, numbers, and contacts. Yes, it's encrypted, but that's when something like this comes into play: https://community.signalusers.org/t/sgx-cacheout-sgaxe-attac... assuming they haven't forced Intel to leave a backdoor for them already or they don't want to take the time to brute-force a PIN.


I see where you're coming from. Ultimately, it comes down to a question of terminology. You say that a service storing the ciphertext of sensitive information is "collecting" this information. I, in turn, maintain it's at least not that clear, given that "collecting personal information" has a completely different meaning in the context of online platforms that don't offer end-to-end encryption.

This is why I think that your original statement that

> Today I still run into people who have no idea that Signal is storing their profile information and their contacts on signal's servers

is at least highly misleading, and you're doing those people a disservice by being similarly vague as the Signal website (albeit in the opposite way).

In any case, I agree that the statement you're quoting from their website,

> Signal is designed to never collect or store any sensitive information.

should be worded much more carefully (as should a lot more information on their website). Nevertheless, it should be noted that that particular statement is not part of their ToS / Privacy Policy and rather just an introductory statement.

> Regardless of the fact that [the profile information is] encrypted, it isn't as secure as you might think. See this thread for details: […]

Like you, I've been very concerned about Signal relying on SGX enclaves and I'm still extremely disappointed by the way they have been handling this topic. In fact, I've sent them multiple messages over the past year, asking them how come they trust SGX so much and what they've taken away from the Signal PIN UI/UX debacle. (I still think it's very poor UX to name a passphrase which should be as long as possible a "PIN".) Unfortunately, time and again they chose not to respond.

Nevertheless, the questionable security of SGX enclaves only comes into play if you choose to activate the Signal PIN feature and choose an insecure PIN. Obviously, this is still a huge red flag as the majority of users will do just that. But at least if you don't use Signal PINs you're good – in the sense that the app chooses a random lengthy passphrase for you. So yes, the encrypted profile still gets stored on their servers but, again, the attack vector is the same as in the case of messages getting stored during transmission.
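To put rough numbers on why the "PIN" naming matters, here's a back-of-the-envelope entropy comparison. The secret formats below are illustrative assumptions for the sake of the sketch, not Signal's actual KDF parameters:

```python
import math

def secret_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random secret of `length`
    symbols drawn from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# Illustrative comparison (hypothetical formats, not Signal's parameters):
pin_bits = secret_bits(10, 4)           # 4-digit numeric "PIN": ~13 bits
passphrase_bits = secret_bits(26, 20)   # 20 random lowercase letters: ~94 bits

# A 4-digit PIN has only 10,000 possibilities. No key-derivation function,
# however slow, can rescue a secret with that little entropy from an
# offline brute-force attack once the enclave protection fails.
```

The point being: calling the secret a "PIN" nudges users toward the ~13-bit end of that range, which is exactly what makes the SGX question matter.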

Overall, you might think I'm contradicting myself – arguing both in favor and against Signal at the same time. And you would be right. Unfortunately, Signal is still by far the best tool we have for secure communication these days. (Where "best" is defined as "striking the best balance of versatility, mainstream acceptance and security".)


Maybe, but I think that the way these researchers reacted when their criticism wasn't heard doesn't benefit anybody. By ratcheting up the tension and participating in an internet catfight with the Signal team, they produced a net loss for the anti-censorship community.

If the Signal team does indeed have a real problem with taking feedback and criticism, a better approach might've been to gather support and enter into long-term negotiation over Signal's future relationship with the community. This would make its development a lot smoother and prevent anger and bruised egos from building in the future.


The owners and maintainers of the product get to decide how to handle issues like this one. But I'm not convinced that an "internet catfight" is a good enough reason for shutting down the conversation as completely as was done. I am aware that it's totally unfair that the Signal owners should have to deal with this kind of behavior and not take strong measures like they did... I don't really know what a good resolution would have been. But it's also a fact that if a system is broken in some way, then that hard truth remains true even if it's wrapped in layers of shit. The ultimate losers will be the users.


> You were blocked because you know that we don't use GH for discussion, but came here anyway and started opening fake PRs so that you could post and harass other people on GH.

> …If you want to discuss anything about circumvention or any other aspects of Signal in a way that is respectful to the rest of the community, please join in on the forums.

https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec...

That does not to me seem like "shutting down the conversation completely".

(As if there is even a way to do that on the internet if you were to try!)


Like I said, it's totally up to the project owners to decide how they want to interface with the community. But the concrete result of that decision was just to move the discussion to a GitHub issue on another repo.


He was banned on the forums too.



I'm just quoting the parent article. Can y'all read it before commenting and making accusations?


The timeline is incorrect.


Do you have a source for that?



At this point, a clear demonstration of how Signal's current design fails to keep users safe forwarded to the New York Times, WaPo, and Fox News would be a lot more valuable than shitposting via the Github PR system.


The New York Times published an article just a week ago claiming Signal was "problematic" for affording anonymity. I would assume WaPo feels the same way. No benefit in forwarding to either institution.


I've submitted 10+ Signal bug reports (none security related) going back to the TextSecure days. I've never had any rude or dismissive responses from the team, but I have had my issues hijacked by other people who were overly demanding or rude.


What should Signal do about "the IME vulnerability"? They can't possibly defend against compromised phones.

Why call it "the IME vulnerability" anyway? This isn't about a vulnerability, we're discussing compromised phones. "IME vulnerability" seems designed to make this sound like a Signal issue, which it isn't.


There are two practical options.

1. Bundle an open-source IME to be used when in incognito mode.

2. Warn users when they switch to incognito that their IME may still be recording the words they type.

This isn't just about compromised phones. A 3rd party keyboard doesn't have to respect the incognito flag.


"Important: Keyboards and IME’s can ignore Android’s Incognito Keyboard flag. This Android system flag is a best effort, not a guarantee. It’s important to use a keyboard or IME that you trust. Signal cannot detect or prevent malware on your device."

https://support.signal.org/hc/en-us/articles/360055276112-In...

Sure, the app should say that too; I'm not sure if it does.

Also, the small team of developers can only fix so many things at a time. There are ~50M more users today than a month ago, so there's bound to be more maintenance work, which will slow down implementing new features.


That was only added 19 days ago - after months of people (politely) asking for it to be acknowledged as a serious concern.

https://github.com/signalapp/Signal-Android/commit/0a29ffcf4...


Shouldn't Signal then also warn or refuse to work on Android versions with known vulnerabilities? Or if there are apps installed on the device with the accessibility permission?

Where would you say the line should be drawn?


There's some missing nuance here. Naomi Wu documented this much better than my summary, but the short version is that you need an IME keyboard for Chinese text entry, and the only one that's any good (and so, has a huge install base) is an application created and owned by a corporation with strong ties to the Chinese government.

When there's a security rake-in-a-darkened-shed that a large fraction of your users will step on, with a demonstrable risk to their life and liberty, I think reasonable people can agree that we're standing on the "hey, maybe we should at least pop a dialog about this" side of the line.

It took Moxie well over a year to come to the same conclusion, and then in a really lazy way as documented by the commit upthread. I'm starting to see him as a particularly unreasonable person.


First, I speak Chinese; I understand what the IME thing is about. I agree that the "Incognito Keyboard" flag is a miscommunication: it should say "Politely ask my IME not to use my input to make smart suggestions". But IMO it is more of an OS issue than an application issue. Android decided this should be called "IME Incognito Mode", but in reality it is not enforced and is merely a hint to the IME. Maybe in addition to calling out Signal, we should also try to convince Google to change the name?

As for "pop a dialog about this": I don't know, that's an interesting idea, but it is hard to draw the line if you pursue this route.

For example, did you know that Tencent QQ bundles a full-blown endpoint security solution trying to "protect their users" and warn them their computing environment is compromised? To the point that it installs a kernel driver to do the detection. Most of my tech-savvy Chinese friends believe this is bad, not only because of the possible privacy issues but simply because it is not a messaging app's duty to ensure the user has a safe computing environment. Surely Signal can pop up a dialog about the IME concern, but what's next? When somebody brings up an interesting cross-app side-channel leak on Android, should Signal scan the installed package list and try to flag any "suspicious app"?


"But what's next" is a slippery slope argument that isn't interesting. This particular issue is unique, and it makes sense to warn about it, and there's no really good reason not to when doing so is so trivially accomplished.

Hemming and hawing for over a year about whether to add a one-line warning about a vulnerability that compromises a secure messaging app, for any reason, but especially because of a shitty, arguably sexist tone argument, is not behavior I want to see from people who make security tools.


I feel like this is somewhat disingenuous.

IME keylogging is a known, serious, and frequently exploited issue that affects a substantial portion of Signal users. Signal's "Incognito Keyboard" setting didn't mention that the flag can be ignored, which was misleading and dangerous.

But yes, warning about accessibility settings if there's evidence of that being an attack vector seems like a good idea. I don't know about unsupported Android versions.

https://twitter.com/RealSexyCyborg/status/134995902394088652...


> if there's evidence of that being an attack vector seems like a good idea

Actually *most* Android malware uses accessibility APIs to perform malicious actions; random example from a quick Google search: https://medium.com/axdb/%EF%B8%8F-dissecting-defensor-a-stea... . That's simply because this is the most convenient way to perform malicious actions on Android without an exploit. Sure, you have to convince the victim to grant the permission, but with a nice lure people usually just fall for it.

It is much, much more prevalent than malicious IMEs. Now help your "freedom in danger" friends by raising this as a security vulnerability with the Signal developers, plz! /s


> frequently exploited

Do you happen to have a source for this? There’s lots of speculation out there, but I’ve never seen anyone claiming to have proof of this being frequently exploited.


Yes, I linked it above.


You did not. Realsexycyborg has no idea if this is being exploited or not, she’s only speculating.


In fact it might not be such a bad idea to warn about those things, and perhaps show a short list of other possibly privacy-compromising settings.


What would you consider to be an acceptable length of time for a feedback cycle for an understaffed organization who gives away their services for free? I think "months" can be entirely reasonable.

At this point you're not complaining about the end result -- they did actually implement something as a result of the feedback -- you're just complaining about the time it took them to do so. Which is IMO pretty silly, as the Signal folks make their own decisions about prioritization, and they're not at all beholden to the people who don't pay them for their service for any kind of schedule guarantees.


It took well over a year for them to address it, and only after blocking many of the people who raised the issue and denying it was an issue. The nature of the problem meant that people were being detained because Signal was misrepresenting the degree of protection it could reasonably provide.


I think a swift acknowledgement is useful.

Moxie could have said "gosh, that sounds like a serious issue. Let us investigate it." Instead, he ignored the women reporting it, sent snarky DMs about the people involved, and stonewalled any attempt to discuss it.

The fix was made - as far as I can tell - without any engagement with the community affected by the problem.

Signal received $50m in funding a few years ago. If they're understaffed, something is awry.


That... Sounds exactly like taking external feedback?


>Bundle an Open source IME to be used when in incognito mode

Is there a good open source IME? I thought Apple/Google/Microsoft haven't been able to ship a decent one and most people use Baidu's.

> 2. Warn users when they switch to incognito that their IME may still be recording the words they type.

Is a blanket "Your phone might be compromised, we can't help you if it is." warning actually useful? This doesn't really provide the user with any actionable information.

>This isn't just about compromised phones.

This is 100% about compromised phones running malicious keyboard apps.


I don't think this framing of the issue is helpful in this case. The people who have installed a custom keyboard likely did so for a tangible benefit, they may not have understood the warning from the phone at installation time or they may have forgotten about the warning entirely. I think it is unreasonable to characterize these phones as "compromised" or these keyboard applications as "malicious". While some keyboard apps are truly out to get people, this isn't the case for all of these applications and they meet a real need (i.e. foreign language keyboards).

As you say, a blanket warning that the customer's phone may be compromised is unhelpful. Warning customers who have a custom keyboard of the risks those keyboards pose (similar to the warning Android displays at custom keyboard install time)[0] could go a long way towards educating customers.

Signal markets itself as a one stop solution to privacy issues. I think it makes sense that they should outline the areas where they cannot, in fact, assure the customer's privacy.

[0]: https://support.swiftkey.com/hc/article_attachments/11501105...


I think jsiepkes addressed this quite well in a sibling comment:

>Shouldn't Signal then also warn or refuse to work on Android versions with known vulnerabilities? Or if there are apps installed on the device with the accessibility permission?

>Where would you say the line should be drawn?


I don't see how this is related... In this case they are talking about known vulnerabilities on specific Android versions. Here we are not talking about a specific vulnerability but about the way Android works and how custom keyboards function.

Perhaps the accessibility permissions are relevant. If Signal could detect these settings and warn the customer if these settings are egregiously open, that would be a valuable feature in my opinion. To me, support for older versions of Android sound like an entirely different discussion.

Signal does include the "incognito" function[0]; Signal is already taking some steps to address the issue. However I'd argue that many people have likely forgotten that they ever installed a custom keyboard and if it was pre-installed on their phone they may not be aware of it.

[0]: https://support.signal.org/hc/en-us/articles/360055276112-In...


I use AnySoft for English and used to use Trime for Chinese. I now use SwiftKey (not open source) for Pinyin.

What activists have been saying - and you should speak to them, not me - is that a warning is better than lulling people into a false sense of security.

Again, your phone may not be compromised but your IME could still be malicious.

The fact that Moxie and his team won't even engage with the people who originally brought this up is somewhat vexing.


>is that a warning is better than lulling people into a false sense of security.

But in the end any such warning is meaningless as it can't possibly be acted upon.

>Again, your phone may not be compromised but your IME could still be malicious.

If you're using a malicious keyboard app I think it's fair to say that your phone is compromised.


It can be acted on: you can realize that you probably shouldn't talk about everything using Signal despite the person urging you to install it swearing that it's secure. (which was the exact event that was given as a reason to add this: some journalist telling Chinese students(?) to use Signal to talk to them freely)


If your keyboard sends everything you type to the state, and there are no usable alternative keyboards, what realistic actions can you take?

a) type nothing anywhere on your phone: send only emojis, 'gifs', and voice notes?

b) learn to read and write a language with keyboards that don't phone home; or transcribe your written language to an alphabet with a keyboard that doesn't phone home

c) buy an expensive phone with an OS supplied keyboard that doesn't phone home (assuming such phones exist?)

d) learn Android development and input method theory and build a new keyboard for yourself

Are any of these actions actually feasible for the general population?


Knowing what you can't do safely is important. How is "don't send 'incriminating' messages to that journalist through a phone, or if you do be aware you might be monitored and there might be consequences" not a realistic action in the scenario?


Should Signal then come with a blanket warning “Do not trust Signal!”?


A brief explanation of the threat model (i.e. what it aims to protect against and what it doesn't) would probably be more useful. "Do not trust people that tell you Signal is perfectly secure" is true but probably doesn't lead to the right user behavior. I'm not claiming communicating these things well is easy.

(and obviously a bunch of the blame lies with people that do uncritically push Signal, if you are journalist not misleading your "sources" is important, but again they need to be educated too and it's not surprising that's not happening perfectly - efforts in that would also have been a reasonable response IMHO. And of course this is based on the assumption that the events have been presented somewhat accurately)


Why is it the responsibility of the Signal developers to do people's threat modeling for them?

If your situation is that you need to communicate things that could get you killed or imprisoned, you should be using a burner phone that has pretty much nothing installed on it but Signal (or whatever app you choose to use for secure comms). You should also be using a third-party OS/ROM that you can be pretty sure hasn't been backdoored by a local telco or government, or a device that you've managed to import from abroad that likely doesn't have local modifications.

I would assume that most people do not do this, and yet somehow expect Signal to magically make the entire stack below it secure, which is a ridiculous expectation.


It shouldn't do their threat modeling, but Signal is being widely promoted as "secure" (with little to no further caveats) by and to people without detailed tech knowledge, and it is in an excellent position to inform their "threat modeling". As a principle, "people should know better" somewhat works for expert tools; tools for non-experts should, where possible, let users know what they are not aware of. (What exactly that means and where the limits are is, as said, a non-trivial question.)


[flagged]


Why does her view on something completely unrelated matter to the facts? Why are you even combing through her Twitter history? This is clearly a personal attack "just because". Disgusting.


[flagged]


We've banned this account for using HN primarily for political and nationalistic battle and ignoring our many requests to stop. This is standard HN policy. Regardless of which sides you're battling for or against, it nukes this site for the curious conversation it's supposed to exist for.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.

https://news.ycombinator.com/newsguidelines.html


Sorry, where's the vulnerability in _signal_ here?

The TLS proxy is not sufficient. Marlinspike addressed this in their incredibly childish PR [0]:

>As we said in the blog post, it is nothing more than a simple TLS proxy as an interim solution to help people while we're working on something more scalable and more robust

I'm not so sure they made it clear they were working on another solution in that blog post [1], but it's a known problem that proxies can be fingerprinted. I don't see the value add here, and I can't read this as anything other than "boo hoo, we weren't listened to" (which is not surprising, given their behavior).

[0]: https://github.com/signalapp/Signal-TLS-Proxy/pull/15#issuec...

[1]: https://signal.org/blog/help-iran-reconnect/
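For context, one class of fingerprinting concern raised against simple TLS proxies is active probing: a host that only relays traffic for a handful of Signal hostnames behaves differently from a general-purpose web server when you poke it. A rough sketch of the idea follows; the probe logic, hostname, and decision rule are my own illustrative assumptions, not the researchers' actual test suite:

```python
import socket
import ssl

# Assumption: an SNI the proxy is expected to forward vs. one it isn't.
SIGNAL_SNI = "chat.signal.org"
UNRELATED_SNI = "example.com"

def handshake_ok(host: str, sni: str, timeout: float = 5.0) -> bool:
    """Return True if `host` completes a TLS handshake when presented
    with server name `sni`. Certificate validation is disabled because
    we only care about connection behavior, not cert validity."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni):
                return True
    except (OSError, ssl.SSLError):
        return False

def looks_like_signal_proxy(accepts_signal: bool, accepts_unrelated: bool) -> bool:
    """A host that serves Signal hostnames but rejects everything else
    matches the behavioral profile of a dedicated relay rather than a
    normal web server."""
    return accepts_signal and not accepts_unrelated

# Usage sketch (requires network access; "proxy.example.net" is hypothetical):
#   verdict = looks_like_signal_proxy(
#       handshake_ok("proxy.example.net", SIGNAL_SNI),
#       handshake_ok("proxy.example.net", UNRELATED_SNI),
#   )
```

The point is not that this exact probe works against the deployed proxy, but that a censor with the ability to connect to suspected hosts can cheaply test for proxy-like behavior, which is why "looks like regular encrypted web traffic" on the wire is not the whole story.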


Why is Signal positioning itself as a solution when Rosenfeld admits it’s not ready?


They're not. They released something as a stopgap measure that will help some, but not all, people in Iran get back on the app, because their better, longer-term solution is not ready, and they believed that people there needed at least something in the short term.


That something could easily get them killed.


They must already have been killed then. Clearly there were relevant numbers of people using Signal before the TLS proxy was published.


Yes, indeed, I'm baffled that the people from Signal who dismiss these critics think that the only people possibly "endangered" are the proxy owners.

It does not cross their mind that the users are immediately endangered too.

Don't they understand that it is very easy to identify the proxy users once the Signal proxies themselves are detected?

I'm here replying on the top level to this comment, because I think this is very important: https://news.ycombinator.com/item?id=26076113

Edit:

Actually it is because it is a different problem they are trying to solve.

What Signal is solving by these additional proxies is to avoid being blocked. So this is orthogonal to avoiding the detection of users.

The real way to avoid detection of users is going through something like Tor.

Edit 2:

The real problem is that, in countries where Signal is blocked, it is ALSO forbidden and illegal.

If it were just blocked and not forbidden, there would be nothing wrong with working around the blocking.

But actually permitting users to work around the blocking when it is illegal is not helping them, unless there is also a way to hide them. Otherwise it helps them commit (overtly) the crime for which they risk serious consequences.


Exactly. According to NGOs, people get lashed and jailed for online activities. After Signal has been blocked, being detected could actually endanger people's lives.

https://freedomhouse.org/country/iran/freedom-net/2019 https://freedomhouse.org/country/iran/freedom-net/2020


Was the better solution here that signal does nothing?


They could have prioritized the development (or scheduled the release/deployment) as:

1/ integrate some Tor-like system, and 2/ the amateur proxies feature.

Actually they can still do it, by temporarily deactivating the amateur proxies feature and putting it on standby while the Tor-like system is being implemented.

The very least they can do is stop denying that, now that they have already deployed 2/, developing 1/ has become an emergency.


Possibly, or at least make it very clear to users that they can still be detected.


Signal seems to get a lot of unfair criticism. I think this is at least partly because they made something a lot of people actually do use. This would otherwise be quite a rarity in cryptography.

This ‘statement’ is quite weird. Is it normal to declare oneself an oppressed minority over a github issue?

I feel like we should be a bit more charitable to people who make things. Otherwise nobody will make anything anymore...


I agree. If you don't like it, create a fork? It is open source.



Fork the client and the server then. Yes, I've seen from other comments that the server repo is apparently rarely updated. If that's significant to getting a working client, probably fork the client from earlier; most likely, if you get a significant number of users, you're going to need to get really familiar with the server environment anyway.

Running a server environment is probably time-consuming and expensive, but that's kind of why the people running the servers get to set the rules.


That then means you can't communicate with others running the normal Signal client. Signal is not a federated protocol.


I would phrase that differently. You can't use your (forked) client to communicate with others who are not running your client. With a forked server, and a forked client, you're welcome to add federation, but Signal doesn't want it. But then again, the complaint is that people don't like how the Signal project is doing things.

Maybe running your own network will help explain why they do things the way you don't like. Maybe running your own network will help explain how running things the way you like can actually work.


> That then means you can't communicate with others running the normal Signal client. Signal is not a federated protocol.

You also can't communicate with WhatsApp users directly either.

If your fork of Signal is better, then you should have no difficulty convincing people to switch to it. Just because the software is open source, you're not entitled to connect to and use someone else's service in any way you like.


Right, but the idea and comment I'm addressing is "If you don't like it, create a fork".

This "do it yourself" mantra doesn't apply to Signal, and they have a history of rejecting community work. It's much more valuable to advocate for changes and fixes to the main app.

> If your fork of Signal in better, then you should have no difficulty it convincing people to switch to it.

This isn't true. Signal has a marketing budget and millions of downloads of inertia.


Sure, and they can fork the server (which I know isn't completely up to date) as well. No one is entitled to the work and resources of the Signal team. If people don't like where they are going, they are free to fork and build and maintain an alternative on their own.

Yes, convincing the Signal team to address their particular issues would be a much easier path for them, but a) again, they are not entitled to anything, and b) attempting to convince by acting childishly and ignoring the Signal team's wishes is not a great way to convince anyone of anything.


A fork won't tell people who don't use it that the original project they forked is dangerous to use. The proxy shouldn't be there in the first place unless it actually worked. I'm not saying I agree (or disagree) with the current issue's writers, but this isn't the first time Signal has put its head in the sand when a problem was pointed out. It has become a well-known pattern of Moxie's.


> Our community have been silent for too long. We are the underdogs, doing the real work, and yet unappreciated by many people. Our opinions are underrepresented. That's what makes me believe that we must speak out this time, that we should release a joint statement, to condemn Signal's dismissive and irresponsible attitude to the anti-censorship community, and to call for our unity as a community and their immediate action on the matter.

What an entitled, self-serving, narcissistic framing. Even if their technical claims are 100% correct, they have almost no credibility issuing propaganda like this. Yikes.


That seems to be the new thing today. Everyone trying to frame themselves as an oppressed victim of some higher power.

If they think wasted hours programming solutions that don't get adopted by OSS makes them some special oppressed group, then they must be new to this whole thing. That's such a common scenario in OSS and hacker culture in general it's comical. There used to be a special pride in doing the thankless work, especially in infosec.

Or maybe I’m just getting old and the new social media/political culture status quo has brewed up some entitled people where victimhood is the common currency.


I think both Moxie and Signal have to be more open to criticism instead of hiding behind either a CoC or a reactive/elitist mindset.

They can't eat their cake and have it. If they advise vulnerable groups to use their technology, then they're morally obligated to explore and mitigate any and all issues brought to the table.

Signal has lots of funding, so getting "insulted" is not an option — in my view that only applies to FOSS maintainers who work for free.


Signal should place someone who is somewhat technical between the customers of Signal and the development team. Many companies do this and it ensures that the person handling communication with the public has the time and energy to do so. I don't think this would materially change any outcomes but a couple messages to passionate customers along the lines of "everything you say is super interesting and I am listening, we as a company value your feedback" could go a long way.

And, of course, someone who is a bit more diplomatic may have better luck getting some of these issues across to the development team in an impartial manner.

Why is the lead Signal developer responding to the public on GitHub and Twitter? Is it really helping the project? At this point I'd argue that it's actually hurting the project, as we see more of these pointless and public flame wars. Others have pointed out the similarity between this situation and the IME keyboard kerfuffle a couple weeks back.


Even if I agree with the principles of the anti-censorship people, to be an activist to apply pressure on Signal for features instead of forking and building solutions is suspicious to me. Signal does a great job of frustrating mass interception, which I think was its original point.

Inventing new criteria and re-framing their product as inadequate for this scope change as an activism play seems insincere. We can expect this kind of pressure to be applied to all BDFL-run software projects, as I think there is an emerging organized play to insert new governance over foundational internet software.


> instead of forking and building solutions

What would you fork? The signal server code that hasn’t been updated in almost a year[1]?

If that is truly the same code that we use with Signal today, would your fork work with this same network? Or would it be its own 1-server network all alone?

[1]: https://github.com/signalapp/Signal-Server


Either fork the code, or fork a new effort that implements the things you want, and then share it with people who also want it.

That these people think it is more viable to co-opt an existing product using organizing pressure for their ends than to build one someone actually wants and share it is indicative of their strategy and attitude. Project leaders need to recognize this tactic coming from afar and then exercise their prerogative to reject meta- and political ploys. Sure, talk to users, get features, but pressure? Treat it like a weed.


> That these people think it is more viable to co-opt an existing product using organizing pressure for their ends than to build one someone actually wants and share it is indicative of their strategy and attitude

Ironic, considering that Signal itself co-opted the existing network of SMS to build their product on top of. Even having the signal app on android manage regular SMS messages.

Point being that products aren’t created in a vacuum and they need network effects to be successful. In fact, that was a design philosophy of Signal from the start. Paraphrasing: “Don’t let people choose, but rather give them the best defaults”


I think we should ask this guy how he built it:

https://www.reddit.com/r/signal/comments/l5dug8/signal_serve...


Has the year-old code somehow "gone bad"? At worst they can roll back to year-old versions of the mobile clients as well, and start with that as a base.

Would it be better/easier to have an up-to-date server? Sure, but they don't have it, and that's life.


Signal is open source in the same way pfsense is: it is impossible to actually build everything current from publicly available source.


Yeah, Moxie is the #1 reason I don't promote Signal to my friends as an alternative to WhatsApp. I find his attitude to third-party clients very bad too. They could have added a lot of usability to the Signal ecosystem.

If I move to something else it has to be fully open, not just the source of the app but the network too. Moxie is just creating another walled garden. A lot fewer microphones hanging in the trees than WhatsApp, but still a walled garden.


"Who we are [...] V2Fly maintains V2Ray, a proxy and routing tool that helps people behind China's GFW and Iran's Internet firewall stay connected to the internet."


Yes, that's some of them. I was more impressed that some of the shadowsocks team signed the statement. That's great software I use every day. And it's software other people use in life threatening circumstances.


> some of the shadowsocks team signed the statement

And hilariously enough also demonstrated that they don’t know how to use PGP.


Signal seems like a magnet for toxic avengers. It's really unfortunate because every negative interaction has a cost. It doesn't matter how valid what "net4people" is claiming because how they're saying it is unacceptable. The Signal team has its reasons for not adopting their recommendations. That's enough.


Reminds me of the way that signal handled RealSexyCyborg's report of how 3rd party keyboards often leak data.


By blocking people that abuse them and by having rational debate drowned out by drum beats? I agree.


No, do not put words in my mouth please.


Do I have it right that the anti-censorship team refused to take the trivial step of copy/pasting their original issue onto the forum, as the project suggested?


If you see their timeline and screenshots here [0], it says they weren't allowed to post in the forum.

[0] https://github.com/net4people/bbs/issues/60


All I see there is the automatic hold which they took a screenshot of, apparently one minute after it was issued!

What happened in minute 2 to the present? Did Signal ever approve them to post on the forum? They don't say.


I'm not 100% sure I'm right, but I think that at some point they were denied access to the forum because of spam protections or moderation.


As I wrote in a comment[1] in their other attention-seeking post[2], they keep talking about "risks" and "vulnerability".

There's no exploit or vulnerability here (despite their use of the "PoC" and "responsible disclosure" terms that apply to such things). The fact that you can detect a Signal proxy as a Signal proxy isn't a vulnerability; if it gets censored you're no worse off than you were if that proxy didn't exist: the main Signal servers are censored in Iran already. Indeed, this is the Signal circumvention proxy working precisely as designed.

As I understand it, these people got banned from the Signal forum for spreading this FUD there, too. Predictably, they started accusing Signal of some coverup. They managed to get an interview to further publicize their FUD, but eventually reason prevailed and that was pulled by the author, too.

Sometimes I really wonder about the motives and identities behind the people causing such massive and unnecessary drama and fear in the community surrounding the only mainstream, reliable, end-to-end encrypted messenger out there. iMessage and WhatsApp both got their end-to-end crypto backdoored en masse via plaintext backup/escrow systems, but Signal remains generally safe and secure (provided general endpoint security practices are followed). These sorts of FUD attacks make me wonder why they're happening, and about the motives and incentives of the people behind them.

One of the people harassing Moxie about it on Twitter has <50 followers and an account that's only ~2 years old, with only a handful of posts in that time. My money's on sockpuppets.

1: https://github.com/net4people/bbs/issues/60#issuecomment-775...

2: https://github.com/net4people/bbs/issues/60


> The fact that you can detect a Signal proxy as a Signal proxy isn't a vulnerability; if it gets censored you're no worse off than you were if that proxy didn't exist: the main Signal servers are censored in Iran already. Indeed, this is the Signal circumvention proxy working precisely as designed.

There is more risk than just "if it gets censored". If the proxy can be detected, so can users of that proxy. If users of a proxy can be detected, they can be punished for that.

To what extent this actually happens, I am not sure, so the severity of this vulnerability is unclear to me. What is clear is that this is a vulnerability. Circumventing blocks tends to be illegal. If we want to help people circumvent such blocks, we need to help them avoid being caught as well. After all, we want to help against the blocks because we believe the blocks to be immoral.


> What is clear, is that this is a vulnerability. Circumventing blocks tends to be illegal. If we want to help people circumvent such blocks, we need to help them from being caught as well.

Nah, circumventing a block doesn't imply obfuscation of any kind. Signal's normal server connections are not obfuscated, there is no reasonable expectation that a connection to it via a proxy would be, either.

It seems like people are considering this a vulnerability because accessing Signal (via a proxy or otherwise) in Iran is illegal (as I understand it).

It doesn't seem like people would view this as a vulnerability if that weren't legally the case, so I don't think that points to this being a vulnerability in the software.


Signal put forward the TLS proxy as a way to circumvent specifically the block on Signal. This suggests they want people to be able to access Signal despite it being illegal.

The feature is 'access Signal in Iran despite it being illegal'. If that is your pitch, then 'being detectable as doing the illegal thing we want to enable' counts as a vulnerability for that feature.

Whether software has a vulnerability or not cannot be determined without knowing how that software is meant to be used.


> Sometimes I really wonder the motives and identities behind the people causing such massive and unnecessary drama

In this case, it's pretty boring. They are just a group of "your average power users" or "wannabe programmers" in their high school or junior years who happened to be born in China and so had some exposure to anti-censorship. Being in their overconfident period of life, they pass off various myths they don't really understand as truth. The community is quite toxic, but usually they don't cause trouble outside of their own circle; when it happens, I don't know how to deal with them either.

They also misuse words like "vulnerability" or "responsible disclosure" because some of them have read a lot of news about security research and thought it was extremely cool, but have no idea what it actually means.


I've seen a growing number of anti-Signal posts and activism lately, mostly surrounding the (well-documented) design tradeoffs that Signal makes for usability and privacy (such as opting to use phone numbers for usernames, to avoid having to store contact lists/social graphs on Signal servers), or their famous decision not to federate/interoperate.

Perhaps it's just criticism growing in lockstep with Signal's overall growth and notoriety, and there aren't any concerted efforts to discredit Signal and sow doubt about using it because it's harder for the intelligence agencies to surveil. I'd like to live in that world.

I'm not sure that I do.


One major reason for the push back against Signal promotion is that it does not represent any sort of federated protocol. It is a complete silo. So if it did become popular it would eventually be a serious problem and would need to be fought against.


That isn't a reason to push against Signal, it is a reason to push for a federated solution.

Not your point, but federation is often heralded as a must-have feature for communication around here, yet we already have that: XMPP does it, even with encryption. Or email with PGP. Or even the old TextSecure code, as Signal/TextSecure started out federated, but, surprise, that didn't help adoption.


> It is a complete silo. So if it did become popular it would eventually be a serious problem and would need to be fought against.

This is an explicit design decision. It used to federate, and they found that to be terrible, so they stopped, and now it's better.

Maybe you should find something else to fight against.

https://signal.org/blog/the-ecosystem-is-moving/

Worth noting: Google Talk used to federate, via XMPP. They found that almost nobody actually used the federation functionality, and something like 99%+ of the federation traffic was inbound spam.

Federated protocols aren't very good, and don't evolve.


> Google Talk found that almost nobody actually used the federation functionality,

Compared to the number of GTalk users, a bunch of self-hosted users probably didn't count for much.

> and something like 99%+ of the federation traffic was inbound spam

I heard this too, not sure if it was the only reason.

Too bad Google doesn't know how to manage spam. Good thing they don't have any other federated communication products. /s

> Federated protocols aren't very good, and don't evolve.

What makes you say this? XMPP has come a long way since 1999 and is still evolving to this day. Even email is slowly evolving. Is Matrix not evolving? HTTP isn't really federated, but Moxie also mentions the web being stuck on HTTP/1.1, because HTTP/2 and HTTP/3 don't exist.


>So if it did become popular it would eventually be a serious problem and would need to be fought against.

It already is.


>use phone numbers for usernames, to avoid having to store contact lists/social graphs on Signal servers

Why can't the user just be expected to deal with their contact list? The phone company doesn't store your address book either.

This is a terrible excuse. They require the phone number as an anti-spam/moderation measure and hide behind privacy.


> Why can't the user just be expected to deal with their contact list?

They are; they let the user maintain their own list of phone numbers for their contacts, precisely like the phone company in your example.

> They require the phone number as an anti-spam/moderation measure and hide behind privacy.

This is (inaccurate) speculation from ignorance. Signal, unlike almost every other phone-number-using-service on the planet, does not block burner/disposable/voip numbers from being used with the service.


Then why not let users sign up with a random unique ID number that they can share with their contacts?


Usernames are coming.


Did you know Signal, like Tor, was financed by an offshoot of the CIA?

https://www.opentech.fund/results/supported-projects/open-wh...

https://pando.com/2015/03/01/internet-privacy-funded-by-spoo...

Now if I were an Iranian dissident, I would be reasonably confident Signal is designed to withstand the Iranian regime’s interception efforts (but not necessarily traffic analysis). If I were someone on the US government’s shitlist like Edward Snowden or Julian Assange, my calculus would be entirely different. (Yes, I know Snowden has endorsed Signal)


Except Snowden said he uses the app everyday and is still alive. OTF money doesn't come with strings attached, if you can _prove_ otherwise, you might actually have something valuable to contribute to this conversation, but now it's just words that should be dismissed as conspiratorial.


What interests me more is Signal's principled stance on censorship. If non-tech people ever come to Signal in numbers, the moderation problem will inevitably arise. Would they censor things that we currently have public consensus about, like CP, terrorism, etc.?


I doubt that this will ever be an issue, because Signal is a messaging application, on which censoring/moderation is thus irrelevant. It's not a social network (contrary to e.g. Telegram which has tons of SNS features).

Let's hope it will remain just a messaging/videocall app.


This ignores demonstrated harm* from the combination of human behavior and low-effort large-scale communications.

* https://en.wikipedia.org/wiki/Indian_WhatsApp_lynchings


Moderation is still irrelevant.

You don't solve the problem you refer to with moderation, you solve it by privacy laws and their enforcement, because publishing the contact details of somebody without their consent is a crime.

The problem exists since long before WhatsApp: you have the same issue when an official or a celebrity gets their phone/email flooded...


They would have no other choice but to add group and social features. That's what non-tech people come for.


Group feature is already present and is a different thing than Telegram's channels or Facebook's groups.

On Signal it is a group of your contacts, so it remains private conversation.

There is nothing publicly said, and it is not open for strangers to participate.

I don't see any compelling reason for Signal to evolve towards more SNSish groups, to the contrary, by remaining in the current state they avoid the costly conundrum of moderation.

> That's what non-tech people come for.

I disagree; people come to Signal for what it is. Arguably even more people would come if there were SNS features, but on the business/feasibility side (Signal is still a modest-size open-source project), it would not be worth the cost and endless legal trouble of moderation in all the different countries with all their different laws.

Most importantly, introducing SNS features would entail moderation, which would defeat the very purpose of Signal's existence (since the contents of messages are encrypted and private).

In the end, fewer people might actually come to Signal if it launched SNS features.


Signal is end-to-end encrypted. They have no way of discerning CP from non-CP as it transits their service.



And a very different tone in the discussion.


I would have thought that most any "large" complaint regarding "Your Open Source software doesn't do what I want!" could be resolved with "Well, you do it then."

Doubly so if the complainants claim to be experts of some kind.


Signal's architecture makes it incredibly prone to censorship on multiple levels. Rather focus your energy on something which is not as architecturally prone to censorship such as Matrix or XMPP.


why is instant messaging so important? why can't people use eg an encrypted tor bridge to send and receive encrypted emails? or is a mobile phone cheaper/more practical than a laptop in such a situation?


PGP lacks forward secrecy. E.g. the Iranian government can collect every PGP-message you ever send, and if and when they compromise your private key, they can retrospectively

a) decrypt your entire message history, even if you've deleted it from your endpoint

b) prove that you're the author of every message, because only your private key can be used to craft the digital signatures.

Signal solves both problems. For dissidents' communication, PGP is hard to use and incredibly dangerous even when used correctly. It needs to be killed with fire and buried next to nuclear waste in a container made of Beskar or something.
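The forward-secrecy idea can be sketched with a toy hash ratchet. This is only the flavor of Signal's symmetric-key ratchet, not the actual protocol (no Diffie-Hellman ratchet, no HKDF), but it shows why deleting old keys actually helps:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a per-message key and the next chain key.

    One-way: knowing chain key N tells you nothing about chain key N-1,
    because SHA-256 can't be run backwards.
    """
    message_key = hashlib.sha256(chain_key + b"message").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain

chain = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    mk, chain = ratchet(chain)
    keys.append(mk)

# After the loop, only `chain` (the *current* chain key) survives in
# memory. An attacker who seizes it can derive future message keys, but
# keys[0..2] are unrecoverable once deleted: that's forward secrecy.
# With a static PGP key, one private key decrypts everything, forever.
assert len(set(keys)) == 3  # every message got a distinct key
```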


>decrypt your entire message history, even if you've deleted it from your endpoint

But how many people actually delete their old messages? If they don't, then forward secrecy doesn't help: they get your messages when they get your key material.

Encrypted instant messaging is inherently less secure than something that can be performed offline, like encrypted email, because the key information is exposed all the time. So with encrypted email it is much less likely that your key information will be exposed in the first place. An instant messenger on a phone can normally be defeated simply by grabbing your unlocked phone from your hand and scrolling through your old messages.

>prove that you're the author of every message, because only your private key can be used to craft the digital signatures.

A private key that, in the case of, say, PGP, does not have to be associated with any particular identity at all. Also, PGP offers actual deniability by simply not signing the message in the first place, while, say, Signal only offers a particularly weak version of forgeability[1], which is problematic in general.

[1] https://articles.59.ca/doku.php?id=pgpfan:repudiability#forg... (see Forgeablity Light)


> But how many people actually delete their old messages?

I don't know, because I'm not in this position, but I would really really hope that someone who is having conversations that could get them killed or thrown in jail by an oppressive government would be using disappearing messages, or at least setting things up so messages are auto-deleted after some fairly short amount of time.

> A private key that in the case of, say, PGP does not have to be associated with any particular identity at all.

No, but presumably you will have possession of that private key. If you realize that the authorities are closing in on you, you can destroy your copy of the key, but if you're caught unexpectedly, they can tie your possession of the key to the messages.


Your private encryption key can only be used to decrypt the messages sent to you. Anyone with your public key can create a message to you. Without a signature there is actually no cryptographic proof that you created a particular message.

The really interesting thing about the PGP case is that you can do this and still ensure that no one mounts a MITM attack on your messages. The encryption key is signed by the signing key (identity), so you can verify the identity while still leaving no identity lying around.
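The "no authorship proof" point can be shown with textbook RSA (tiny fixed primes, no padding — purely an illustration, utterly insecure for real use). Encryption uses only the recipient's public key, so a ciphertext carries no trace of who made it:

```python
# Bob's toy key pair: public (n, e) is known to everyone; d only to Bob.
p, q = 61, 53
n = p * q                      # 3233
e = 17                         # public exponent
d = 2753                       # private exponent: (e * d) % 3120 == 1

def encrypt(m: int) -> int:    # anyone can do this with the public key
    return pow(m, e, n)

def decrypt(c: int) -> int:    # only Bob can do this
    return pow(c, d, n)

# Two different "senders" encrypt the same message to Bob. Both
# ciphertexts decrypt fine, and nothing in them identifies the sender:
# encryption alone proves nothing about authorship. Only a signature
# (made with a *sender's* private key) would, and PGP lets you skip it.
from_alice = encrypt(65)
from_mallory = encrypt(65)
assert from_alice == from_mallory   # identical: sender leaves no trace
assert decrypt(from_alice) == 65
```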


You do not have to sign anything when you use PGP for encryption.


Does HN recommend a particular "getting started with Matrix" guide?


Offtopic, but what's with all the PGP signatures? One message is literally just "this message is signed with my key", followed by a key and a previous key. Is this a meta joke, automated signing (like signed emails), or am I tripping?!


This threw me off too. PGP in these contexts isn't a proof of any meaningful properties, and reads mostly as theater/additional drama.

Using a non-repudiatable signing system to promote claims about how a proxy is easy to detect comes off as very hinky to me, to use a technical term.


This is a community with a strong focus on security - they're proving their identity when they post to add their agreement.


I don't mean to argue, but I believe GitHub's account system with 2FA should be more than secure enough. If it's not, then why even start a bbs there? Why not just use a signed & encrypted email chain? Seems trivial, especially for people who want to appear to be security professionals.


I work in account security, not for GH, but another platform.

Account security with 2FA is a long way from foolproof. Accounts get compromised all the time, especially by phishing or malware.

That's why my company's internal emails are all PGP encrypted and signed, even with managed accounts and YubiKey authentication.

When it really, really matters, you need more than 2FA.


Yes, of course, I agree! Where I disagree is the notion that putting some PGP keys in a github issues comment is going to prevent anything :/

Edit: Like, if I had hacked one of their accounts, what's keeping me from commenting there and just copy-pasting the key they used before, or generating a new one? Are they going to check?


They easily could if they wanted to, which is the point.

If at any point in the future, someone wanted to say, "Well, so-and-so may not really have been the one who posted it," or, on the other hand, one of the signers later wanted to renege and say they didn't really sign it, it's going to be a lot harder for anyone to buy that the account credentials and PGP privkey were stolen and used than just that someone somehow spoofed a post from an account.

It's like the difference between posting a +1 retweet and having a signed document notarized. One of those is a lot harder to claim was faked/unauthorized later.


It's proof that it IS them who posted that comment.


It is proof that someone with a copy of the private key posted that comment. Also this https://xkcd.com/1181/


But that's exactly the point isn't it? I mean, using pgp because you don't trust the communication channel. And you would still have the same issue with a mailing list.


And many of them provided digitally-signed statements such as "I agree with this article", which anyone can simply copy and paste into any other discussion, at any time. I'd suggest that implies a dangerously shallow understanding of the tools they're using.
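The replay problem is easy to demonstrate. Here's a toy sketch using an HMAC as a stand-in for a real detached signature (hypothetical key; a PGP detached signature over the same bare string has exactly the same property):

```python
import hashlib
import hmac

signing_key = b"alice-private-key"  # hypothetical

def sign(msg: bytes) -> bytes:
    return hmac.new(signing_key, msg, hashlib.sha256).digest()

def verify(msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(msg), sig)

# Alice signs a context-free statement once...
sig = sign(b"I agree with this article")
# ...and that exact signature verifies wherever anyone pastes it,
# in this thread or any other, today or in five years:
assert verify(b"I agree with this article", sig)

# Binding the statement to its context defeats the replay:
bound = b"I agree with net4people/bbs issue 60, signed 2021-02-09"
bound_sig = sign(bound)
assert verify(bound, bound_sig)
assert not verify(bound, sig)   # the context-free signature can't transfer
```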


It's a LARP, not about a strong focus on security. These people do not normally sign their comments.


It's useful when e.g. Roger Dingledine who represents the entire Tor Project wants to make a public statement. It's pseudo-intellectual BS when some random GitHub account with edgy repos like https://github.com/studentmain/fuck-signal-tls-proxy does it.


Here's how moxie feels about people using PGP. https://moxie.org/2015/02/24/gpg-and-me.html


No, they're cosplaying security/encryption experts, in an effort to have their attempt at seeking attention seem less like the farce that it is.


My first reaction was that surely this must be satire, because the last thing I ask myself reading that post is "but how do I know these 2FA GitHub accounts are not hacked?!" ...


PGP means it's serious!


William Barr would have them all in jail.


Elon Musk should tweet about Matrix.

Signal team seems completely irresponsible here.

Censorship in countries where this app could help puts opponents lives at risk and already led to executions.


+1 for Matrix. Signal is a honeypot.


It looks more and more like it.

I even wonder now if they don’t have ulterior motives.


A group of security researchers who:

* Publish the exploit before the vendor knows of it

* Publish the exploit before the vendor has delivered a patch

* Send their own opinion to every media outlet possible (including ycombinator) without mentioning the full event, using new accounts to look more neutral

* Disrespect other people

* And also have their own "secure" software (v2fly, v2ray, ...)

Okay, looks like we need to have a new definition of "security researcher".

I think Signal did what they should do when communicating with those "trick or treat" guys: treat me with fame, or I'll trick you with a PoC. Is there a shorter word for this...? Oh, there is: robber.


> * Publish the exploit before the vendor knows of it

> * Publish the exploit before the vendor has delivered a patch

It's called full disclosure and it is the only ethical way to handle it.


"Full disclosure is the only ethical way to handle it".

haha


Censorship and privacy are important issues. So is civilised online debate, and communities learning to work together in a nice way.

I admire the people that put in time and energy to create a safer future for us all.

Hope that this is not going to be taken the wrong way, but whenever I read such threads (and again - I respect all the people involved, their efforts and the importance of this issues) - I can't help but being reminded with this: https://www.youtube.com/watch?v=a0BpfwazhUA



