This old post from Moxie Marlinspike in 2012 about having the worst material possessions made a huge impact on me for some unclear reason. Fun read. https://moxie.org/2012/11/27/the-worst.html
Ahh, I'm from the camp of "the worst". My girlfriend is from that "the best" camp, but doesn't have too much money. So we can't use the slightly more expensive aluminum foil, because it's for "special occasions". She got upset because I slightly damaged the cheapest workshop vacuum (it's the cheapest, but it happened to be rather good quality) by using it for heavy-duty tasks (and saved 50x its price by doing those tasks myself). Is this the life one should live? To be a slave to your possessions? Do you really need the best tools from the start? I started with the cheapest possible ones, and if they break too soon, I'll get better ones. But for now, the almost-cheapest ones have fully earned their worth.
I don't find that these ideas contrast as much as they initially appear. Curtis advocates optimizing for some absolute best when shopping for stuff (which, as a hobbyist product reviewer, I find a bit absurd; few products are one-dimensional enough for that). Marlinspike talks about the value of unplanned life experiences.
There's a middle way for a lot of product categories: find an enthusiast community and get their recommendation for a low-cost and beginner-friendly product. It's often listed in their FAQ. The gap between enthusiast-grade and mass-market products in some categories is huge, while the gap in price may not be.
This was an entertaining read. I'm probably more of an adherent of the worst than the best. The ideal is probably somewhere in the middle, as it usually is. I bought my kid a cheap bike and it fell apart while he was riding it. So then I bought him a more expensive bike.
I've bought a lot of junk, and I've spent more time trying to fix or replace it than I'd like to admit. In the future I'd rather spend the upfront time finding the stuff that's not junk, so I can spend more time with my family and less time running to the store for a new toaster after mine blows up. Just my 2 cents.
There is something liberating in having cheap stuff: I wasn't scared to park my old beat-up Jetta in sketchy parts of town; who's dumb enough to steal that POS?
But there is also an enjoyment to be had with nice stuff. Just be aware that that enjoyment often comes with a "duty" to care for it. The trade-off might be worth it, might not. That care might even be part of the enjoyment: my dad _really_ enjoys cleaning, waxing and polishing his Mini Cooper S (a special edition John Cooper Works model).
I do like the sentiment, and there's a lot to agree with in the post, but I think he strawmanned Dustin a bit: I don't think Curtis was saying that possessions should be "things we live in service to." I think most people can appreciate what Dustin is saying about owning a high-quality object that won't break on you when you need it (even if the cutlery might be a bit extreme).
It's great that you're learning a bunch by buying the worst motorcycle or boat, but you probably don't want to buy the worst lifejacket or helmet.
> The best means waiting, planning, researching, and saving until one can acquire the perfect equipment for a given task. Partisans of the best will probably never end up accidentally riding a freight train 1000 miles in the wrong direction...
Also, and I'm being a bit tongue-in-cheek here: with people trusting Moxie with their privacy, I hope he's past his phase of (metaphorically) riding freight trains without checking maps, and is spending some time waiting, planning, researching and saving.
Perrin told me that, despite appearances, ‘Moxie leads from the front, and he just leads by doing. One of his favorite quotes is “The only secret is to begin.” If you want to get good at something or do something, you just do it, and you figure it out along the way.’
The fundamental point is that he's doing this for good reasons, beyond the trivia in the issue lists of a git-backed repo: he believes in what he is doing. And he's doing it in ways that cryptographers I respect relate to: it's visible work and it's open to critique.
I have my own kibitzes about stuff down in the weeds. I think the decision to make a cellphone/SMS identity key in recruitment, and to have one device per identity, is a design issue for me and my use case, but I understand this is not a black/white thing, and there were noises made in 2019 about moving to a different model of identity. I am content to wait, but there is the vague meta-question of whether the ranking of this kind of idea gets exposure too: how do we know the thinking around identity and recruitment?
Cryptography is a black-and-white field. Either a crypto algorithm is deemed 99.9999999999+% physically unbreakable, or it's something less and it doesn't work.
Read some advanced cryptography papers and you'll discover it's not quite that simple. There are exotic hardness assumptions: are those mathematical problems really hard to solve or not? They aren't classical, natural problems like discrete log. Then there are the odd threat models; plenty of algorithms out there claim to be proven secure in the honest-but-curious model. That threat model isn't entirely naive ("honest but curious" adversaries do appear in the real business world, at least in the imaginations of business leaders), but most people assume the point of cryptography is to keep you secure against arbitrarily malicious adversaries.
Except that it's not true.
Even proven mathematical algorithms can be broken if they're not implemented correctly, or side channels might be discovered along the way.
Signal’s recent (not even yet out for many) implementation of mention-only notifications has reinvigorated my investment in them.
I am a bit more confident now that usernames will eventually come.
But the fact that it’s so slow to change is quite ironic, considering that the whole point is “the ecosystem is moving”. Well apparently it moves like molasses.
Not if said moving means spending all your time fixing existing things, leaving little time to add new ones. I don't know how true this is, though.
Re: mention-only notifications, I definitely agree: if they exist now for groups, this makes Signal quite an attractive alternative to other messengers. I'm using Matrix for some things now, but notifications are set globally (not per client/device). I'd like to keep all channels on mobile on mention-only (I try to minimize mobile notifications in general), but when I have the desktop (or web) app open I don't mind getting all notifications from particular channels. That lets me keep up with conversations in real time. And if I want to not be disturbed, I just close the app or disable notifications globally (macOS). With Matrix (at least the Element.io clients) this is not possible, so I end up switching the notification settings back and forth...
> You can now get someone’s attention with @mentions in groups. You can mention anyone in a group message simply by typing “@” and selecting them from the picker. […] And you can configure the group’s notifications in Group Details to only notify you when you’re mentioned.
I don't know enough about Moxie Marlinspike or Signal to really have an opinion, and was hoping to maybe learn something by reading this but I'm afraid this is more hagiography than profile.
Yeah this might be appreciated way more by those who have some kind of "connection" with/to him already. I first learned about Moxie by watching his sailing documentary Hold Fast[0] (I'm an avid sailor in a similar low-tech gritty way as him) although I just thought he was some random dude. It was only years later, after having used Signal for some while that I actually made the connection.
I have lots of respect for the man and love his thinking, writing and outlook on things. His blog[1] has a lot of good reads.
What is it about cryptography that creates a tendency toward hagiographies?
During the Snowden leaks, the same thing happened with Jacob Appelbaum and Julian Assange. There was no critical analysis of what they were actually doing, or of a lot of the advice they were giving people.
It's almost like the field somehow requires a de facto head who conveniently fits the media stereotype.
It's frustrating to see Signal's reputation undermined in technical circles by the shortsighted zeitgeist.
Signal's constitutional emphasis on usability supports user demographics that no other security product can attract. My elderly relatives use Signal now instead of Skype. This drew in other family members who just wanted to video chat with grandma and grandpa. Matrix will never win markets like this.
When tech nerds nitpick Signal's implementation, they ignore that the unfederated nature of Signal limits the damage these decisions can cause. Thanks to Signal's security posture, global protection from weak ciphers, buffer overflows, and even SGX, is just one software update away. This even protects you from faults in your contacts' clients! Like the key agility that makes the Axolotl Ratchet so superior to GPG, update agility makes Signal infinitely superior to every existing or proposed federated network.
The Signal group has an uncompromising commitment to user privacy and a pointed security philosophy. Signal has no competition in its usability class, making hypothetical protections from other products worse than useless for user cohorts who will probably switch back to Skype or SMS. Signal's detractors do a terrible disservice to the people they dissuade from using it.
However, their decision to force PINs broke Signal for one of my non-technical contacts, to this day (despite them supposedly offering an opt-out now). This person opens Signal, gets some weird confusing modal dialog, and is now stuck.
There is supposed to be a way to opt out without having to set and remember a PIN now, but I was unable to guide this person to find it, and I can't exactly travel there to see what's on the screen and find the damn button (or figure out why it isn't showing up). So this person can no longer use Signal, and we had to fall back to WhatsApp. Which leaks more information, but has a killer feature that Signal now lacks: it allows me to communicate with this person!
(For those unaware: Signal made a really boneheaded decision a couple months ago, against all specific criticism that they were receiving - they introduced PINs that would then be used to back up at least metadata to the server, protected under a questionable scheme whose security assumptions only work if SGX is unbreakable. To force users into this, they blocked access to the messenger and your messages, so you had to set a PIN and upload your data if you wanted to talk to your friends and didn't have a contact for them on a different messenger.)
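To make the concern concrete, here is a rough sketch of the general shape of such a scheme. This is a hypothetical illustration, not Signal's exact construction (the real design reportedly also involves a secret held inside the SGX enclave plus server-enforced guess limits); the point is just what "metadata encrypted under a key stretched from a short PIN" looks like:

    import os, json
    from argon2.low_level import hash_secret_raw, Type              # pip install argon2-cffi
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    def key_from_pin(pin: str, salt: bytes) -> bytes:
        # Stretch a short numeric PIN into a 32-byte key (parameters are illustrative).
        return hash_secret_raw(pin.encode(), salt, time_cost=3, memory_cost=64 * 1024,
                               parallelism=1, hash_len=32, type=Type.ID)

    def encrypt_backup(pin: str, metadata: dict) -> dict:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = key_from_pin(pin, salt)
        blob = AESGCM(key).encrypt(nonce, json.dumps(metadata).encode(), None)
        return {"salt": salt, "nonce": nonce, "blob": blob}   # what gets uploaded

    backup = encrypt_backup("7341", {"contacts": ["+15551234567"]})

The criticism is that once the blob leaves your device, everything protecting it ultimately hangs on the entropy of that PIN plus whatever the enclave-side guess limiting is actually worth.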
Intentionally breaking a product like this also breaks trust. I can no longer rely on Signal for "just wanting a video chat with Grandma and Grandpa". I can no longer trust that Moxie will make reasonable decisions, and not e.g. tomorrow break my ability to communicate with people unless I agree to upload my message history.
OK, this is shit. I was not under the impression that this PIN was used to encrypt user data for storage somewhere else. That was not made clear to me.
It would have been more clear if you had clicked the link to their web page and spent time reading a whole long document while you had other things to do.
If I'm not mistaken, this PIN also allows people to hijack your account. That's kind of the point; you export here and you import on your next phone.
But I'm not sure, because the whole thing is so badly explained!
> When tech nerds nitpick Signal's implementation, they ignore that the unfederated nature of Signal limits the damage these decisions can cause.
It limits the damage that some decisions can cause, but exacerbates others. Signal only allows the first-party client to connect to its network; if the developers were legally compelled to add a backdoor into that client, users would have few options.
Its security depends on a single company being perpetually trustworthy, free of influence, and supported. Having used many chat platforms that have been shut down/acquired/etc in the past, that’s not a bet I’m willing to take.
I’d also contest the idea that Matrix can never have a client as usable as Signal, but I’ll agree that there isn’t one yet.
> I’d also contest the idea that Matrix can never have a client as usable as Signal, but I’ll agree that there isn’t one yet.
After multiple years of being around and not having managed it even once (or come close), I'm inclined to bet they won't. Or they'll just focus on being enterprise software (which I think they are doing, and must do if they want to earn money).
Of all the IM apps, Matrix is my favourite in spirit and least favourite in usability. Every attempt to introduce friends and family has failed miserably. The irony is, I myself was not convinced, and I was the one trying to convert them.
I mean, keep in mind that while Element is effectively the reference client, it's also still just one client. It's not trying to be a Signal-style messenger, it's trying to be something like Slack - and at that I'd say it succeeds.
There are other clients in development that are closer UI-wise to your typical mobile messenger, e.g. https://dittochat.org/
Having spent a decent amount of time reading and attempting to implement small parts of the horrendous mess that is the Matrix protocol, I'd bet decent money that there will never be any reasonably complete alternatives to Element. Ditto looks nice, but is very barebones.
> It's not trying to be a Signal-style messenger, it's trying to be something like Slack - and at that I'd say it succeeds.
I'm of the completely opposite opinion: I find Element absolutely atrocious compared to Slack in terms of usability. The mobile app is decent though. And I don't know at what pace Element is being improved; I've only been using it for a couple of weeks.
That said Slack has been going downhill UI/UX-wise for years now.
Indeed it's a protocol. A horrendous, reinvent-everything, excessively complex mess of a protocol, that the developers of the reference server have gone on record saying that they don't expect anyone else to be able to successfully implement.
> the developers of the reference server have gone on record saying that they don't expect anyone else to be able to successfully implement.
Despite this, there are multiple alternative server implementations. Dendrite is the next-gen server from the same core team, while Construct and Conduit are fledgling servers from different groups.
Neither Construct nor Conduit are complete implementations, and may never be. The protocol developers were actively discouraging them at one point too.
> It limits the damage that some decisions can cause, but exacerbates others. Signal only allows the first-party client to connect to its network; if the developers were legally compelled to add a backdoor into that client, users would have few options.
Moxie has explicitly said several times that third-party clients connecting to the main Signal servers are actively not supported and has threatened to start blocking them or enforcing the Signal trademark if they get big enough. (I tried to find a link to the exact comments he's made, but the threads involved have hundreds of comments and I'm on my phone -- he said this in one of the huge Google Play issues.)
Signal is OSS and you can start your own fork & network if you want to.
App publishing platforms not having a good binary signature verification system is the orthogonal issue that you're bringing up, that would in many ways apply to matrix for most users too. Most will never bother to sideload it.
>Signal is OSS and you can start your own fork & network if you want to.
Resources and network effects make this a much much less likely endeavor than in federated systems. I can reasonably (and do!) self-host Matrix and XMPP servers for myself and a few friends; I cannot reasonably host a Signal server for everyone I might want to possibly contact via Signal. (And that's setting aside the effort involved in convincing everyone I might possibly want to contact to use my Signal server. It's just not going to happen.)
EDIT: Oh, and I don't think the VoIP side of the Signal server is FOSS? At least I believe that used to be the case.
>App publishing platforms not having a good binary signature verification system is the orthogonal issue that you're bringing up, that would in many ways apply to matrix for most users too. Most will never bother to sideload it.
If the main client developers are pressured to put a backdoor into their client, I would expect non-backdoored forks to rapidly pop up in the stores.
If the platform provider is also legally pressured to not allow any non-backdoored clients into the store... well, I hope that never happens, but if we get to that point I hope more people would bother to learn more and start sideloading things. Maybe wishful thinking on my part.
Not to mention that in a centralized system only a single entity has to be manipulated to turn over the data, whether it's through court orders, laws, extortion, corruption (as happened with Skype, for example), or a combination of the above. In a decentralized system, the work of centralizing has to be done by the people who want access to the data in the first place. They have to ring more bells, which creates more people who know about the spying. It's a much harder job, and there is no longer any "easy" access to almost all the data.
Signal disallows federation from forks (it has been attempted). "Using signal" requires using the official build, and that requires closed-source libraries and services (e.g. Google Play Services).
Signal doesn't require Google Play Services. It will use Play Services if installed for notifications, but works fine without them, and it can be installed on LineageOS etc.
In fairness to GP, this only became true a few years ago and Signal has not advertised this (nor the fact that you can now download the Signal APK from their website and it auto-updates outside of the play store[1]). Folks who opted to not use Signal some time ago likely had no way of finding this out, and Moxie's comments on this in ~2014 gave the impression that it would never happen.
My approach is to use Signal for organizing surprise birthday parties. (or at least I did before the world went to shit.) "We need SUPER SECURITY to make sure Rob doesn't find out what's up! So we're all using Signal to plan... Join up!"
Reddit is (was?) open source without federation, just like Signal. Reddit used to be pro free speech. Now they are banning people and communities like the_donald and bragging about how deplatforming works. People can create their own little ghettos, but it's inconsequential and no one uses them.
It's not a stretch to imagine a repeat: that Moxie will brag in 10 years about how much he is banning people and how well it works. I wonder who will be unpopular in 10 years..
EDIT: Or he might hang himself like Aaron Swartz.
> Thanks to Signal's security posture, global protection from weak ciphers, buffer overflows, and even SGX, is just one software update away.
Conversely, thanks to Signal's centralized model, implementations of backdoors are also one software update away. By the time the "nerds" find out, it'd probably be far too late and lives could be at stake.
It's unfortunately the nature of the beast that being half-hearted about security does not yield a half-secure product, or a product that's fully secure against half the hostile actors; it yields a product that only gives the presumption of safety, which is far more dangerous.
I use many messaging services in my life as security absolutism leads to a very miserable, very paranoid life, but my expectations are accordingly tempered when I use them, and I let my contacts know my expectations too. Everyday chat? Sure. Sensitive, personal info? Maybe, depends on the exact topic. Trade/state secrets (if I were to handle them)? Hell no.
If Signal's security boils down to reputation and community trust, why not just use WhatsApp or Facebook Messenger or really any chat product where the makers claim it's secure and private?
My primary concern about Signal and usability is the insistence on using phone numbers to identify a person as a user ID.
Anything that relies on SS7/PSTN working correctly is a very bad design choice in my opinion.
Giving out your phone number to possibly untrusted contacts is also a bad decision because it opens up opportunities for scams, spam and social-engineering attempted hijacks of accounts (SIM swap attacks etc).
Signal is moving to allow users registering without a phone number. The infamous PIN was a necessary step to achieve that and other features, such as proper group management, which has just been released.
> update agility makes Signal infinitely superior to every existing or proposed federated network.
Translation: Signal's ability to forcefully change the software and protocol out from under users without any consent (much less informed consent) renders it functionally immune to any criticism or review, because any aspect of the protocol could be changed ('improved') at a moment's notice.
I think Signal is a fine insecure messenger. If you mistake it for a cryptographic security product, you are making a significant mistake. The fact that it uses cryptographic techniques internally just means that it isn't grossly incompetently constructed, but that doesn't mean it achieves any particularly strong notion of security against any particular attack model.
I strongly recommend people use it (at least on devices which are already compromised by mystery-meat binary updates that could be remotely backdoored at any time)... just don't think it's going to provide you with substantial security against active attackers (much less state-level attackers, or Signal themselves).
You mention Signal is fine as an insecure messenger. I'd be very curious if you know of an app that is a fine _secure_ messenger. Do you know of one, or is your point that nothing that's made generally available can be considered secure?
If you can convince your contacts to use it, and they're on platforms with good clients, XMPP with OMEMO ticks all the boxes, and has since 4-5 years ago.
It's properly federated, with a diversity of client and server implementations (something that will likely never happen with Matrix due to the excessive complexity and poor design of the protocol), servers are easy to deploy and don't need a lot of system resources, and it has strong security.
The main issue is that there are only a few really good clients, since network effects convinced everyone to use inferior (but better marketed) messaging systems. Conversations (on Android) is excellent, though.
You mention the "excessive complexity" of the matrix protocol, so I am a bit surprised that you advertise XMPP as an alternative. I never have looked at either protocols, but I remember that in the days when jabber looked like the next thing, it was strongly criticised for being overly complex and in particular using xml being unsuitable for a chat protocol. So I'm wondering if something changed and if you can I've examples of what is complex in matrix but easy in XMPP?
The use of XML is a bit unfortunate, but these days there are a huge range of mature XML libraries for any language, with a variety of different styles of API to suit the programmer / program design, so the use of XML doesn't really affect implementation that much. Not to mention there are decent XMPP libraries for many languages, so only the application-specific logic needs to be implemented.
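To illustrate the point about XML tooling, here is a minimal sketch that parses a simplified, hand-written message stanza with nothing but the Python standard library; the namespace is about the only XMPP-specific wrinkle (the addresses are made up):

    import xml.etree.ElementTree as ET

    # A simplified chat message stanza, roughly as it might appear on the wire.
    stanza = """
    <message xmlns="jabber:client"
             from="alice@example.org/phone"
             to="bob@example.org"
             type="chat">
      <body>Hello, Bob!</body>
    </message>
    """

    NS = {"jc": "jabber:client"}
    msg = ET.fromstring(stanza)
    print(msg.get("from"), "->", msg.get("to"))
    print("body:", msg.find("jc:body", NS).text)

Real clients would of course use a proper XMPP library (slixmpp, Smack, etc.) rather than raw ElementTree, but the wire format itself is not the hard part.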
XMPP has a clear design based around general concepts, that make it useful for a wide variety of messaging-type applications. The criticism I hear often seems to come from people approaching it as an IM protocol, and then being upset that they need to learn some general XMPP concepts to understand how IM is implemented on top of XMPP. The reality is that it's a general protocol, which can be used to implement a variety of different real-time messaging applications. That generality can cause people to think it's overcomplex if they approach it as if it's designed purely for IM.
You'll often find XMPP as the underlying tech for IoT real-time communication, or for signaling/presence (as an alternative to SIP) for videoconferencing (for example, Jitsi Meet uses XMPP for signaling). In this way, XMPP is quite widely used, but often not in a way that people are aware of.
By contrast, Matrix appears to attempt a clean-sheet design without learning any of the lessons of the many protocols that came before it. The protocol is heavy on special cases, has many serious inefficiencies (many coming from an apparent desire to make the protocol "more Web"), and especially on the server side requires a large amount of implementation work simply to get to a basic level of functionality (as opposed to XMPP, where a simple server for a basic set of functionality can be written in a very small amount of code, especially if you make use of an existing XML library).
I've written code for both protocols (as well as many other protocols with similar goals) and will do my best to avoid touching Matrix ever again.
>Translation: Signal's ability to forcefully change the software and protocol out from under users without any consent (much less informed consent) renders it functionally immune to any criticism or review, because any aspect of the protocol could be changed ('improved') at a moment's notice.
What percentage of users, globally, do you suppose are qualified to make informed consent on arbitrary patches to $secure_messenger of your choosing?
Informed consent for a drug doesn't mean that you personally understand molecular biology. It usually means that you've received and understand factual information about the tradeoffs from hopefully neutral parties who are acting in your best interest and can weigh those considerations based on your own priorities and preferences. And that you can make the choice you make free of coercion, or otherwise it isn't consent at all.
Signal's software management practices make the system largely opaque even to experts, and what review does happen comes at a significant lag. This is evidenced by the substantial number of times that signal has had to back-pedal on a change after substantial backlash. Even where it isn't opaque, there is no consent: the software stops running if you don't accept all changes, and seldom are changes introduced as optional features (default-on or otherwise).
Centralized solutions vs federated solutions is like monarchy vs republic.
Under monarchy, the power of the monarch is greatly amplified. A great monarch can accomplish beneficial reforms at a scope and speed not achievable by any republic. A bad monarch can inflict damage at a scope and speed unprecedented in any republic.
Equally, a centralized network can achieve colossal heights of UX, security, speed, architectural cleanliness, etc.; Moxie wrote a lot about that. Or it can fall into bottomless chasms of poor UX, poor functioning, and insecurity (examples are many; take maybe ICQ by the early 2000s). By construction, there's no way for a part of the network to avoid the common fate, try to find a recourse, etc.
A republic (and a democracy) severely limits the power of the officials it elects; more, it intentionally pits different branches against each other in a system of checks and balances. This prevents it from achieving grandiose visions in a short time, most of the time, even when the best officials are elected. It also prevents it from turning into hell, most of the time, even if some egregiously bad officials are elected to the highest offices.
Same with federated networks: they offer a much less coherent UX, are more complex, less powerful, slower, and harder to upgrade, but they are also resilient against a bad actor who grabs power, for they lack a SPOF and have a healthy nonzero amount of distrust between parts. If one part rots, others remain, and new ones can be made.
So yes, bet on the centralized system and its brilliant benevolent dictator if you think the dictator will outlive your need to use the system. Bet on Moxie, Steve Jobs, or King Arthur to live effectively forever.
Or bet on an open standard with a half-dozen implementations of varying quality if you think that things will inevitably go sour in some of them during your lifetime, or whatever is the time scope for your use of the system.
There is no single right answer, no Goldilocks solution. Choose your poison.
I’m not even sure how you get there. Signal can’t move my messages from my iPhone to a new laptop; it’s just all gone unless I was fortunate enough to have been manually making unencrypted copies of the database on the laptop, and I’m technically literate.
Saying the usability mode of: “you lose your history when you get a new device unless it happens to fall into a very narrow window” isn’t user friendly. And it has nothing to do with geeky nitpicking.
> The Signal group has an uncompromising commitment to user privacy
Would you say that outing people who are signal users and putting them in signal contact lists without their permission is 'uncompromising commitment to user privacy?'
Far from "compromise", Signal implements the only privacy conscious possibility here.
Without automatic contact discovery, "Signal private messenger" would be forced to default to not-private for all contacts until manual confirmation could occur. Adding UX friction to the primary use case would hinder adoption among non-security-enthusiasts. With protection for the casual user eliminated, the truly sensitive communications of those motivated for privacy would stick out like a sore thumb, revealing many more bits of metadata than just "this number had Signal once."
Elite users who care about being "outed" simply for Signal participation can take steps to mitigate. The rest of us need automatic contact discovery for Signal to not be useless.
Alternatively, users could be given the option to opt out of automated contact discovery, so that most users get the current behaviour but people who don't want others to find out they've started using Signal can avoid being announced (there are dozens of fairly serious situations where this could be very bad -- dismissing them like you did by calling those users "elite" betrays that you don't actually care about the issue being described).
I mean, the fact that Signal actively notifies you when a contact registers for Signal is a complete misfeature (yes, this is basically a cosmetic complaint). It actively outs people who don't want to be outed and it's a useless notification to all of your contacts if you don't care about it.
I imagine that getting caught using Signal could be dangerous, whether to an "elite user" planning a one way trip from Hawaii to Russia with a layover in HK, or even to a minor with an abusive parent-over-shoulder. In some countries, a single boolean of metadata could put you in jail or worse. A future Signal update (usernames?) may address this use case.
But update or no, it's irresponsible to proffer these scenarios as relevant to the typical westerner, especially given the abysmal privacy of the popular alternatives.
If you're reading this in English, Signal is the most effective tool for private communications available to you.
> But update or no, it's irresponsible to proffer these scenarios as relevant to the typical westerner, especially given the abysmal privacy of the popular alternatives.
Have you considered the (sadly) common case of an abusive partner who is likely to see your sudden usage of Signal as an indication that you are speaking to other people about their abuse, or at the least asking others for help?
I don't think these scenarios are nearly as rare as you think they are. These cases do definitely happen in English-speaking countries -- not that I understand your argument here in the first place (Signal is available world-wide and has been translated to many other languages, what does language have to do with anything?). Not to mention that Signal themselves don't warn you about this behaviour at all.
(Yes, you could argue that an abusive partner has many other avenues through which to discover that you're asking for help -- but that isn't a defense of providing more avenues for an abuser to discover you're asking for help. Again, Signal actively notifies all of your contacts that you've started using Signal -- which is above and beyond the transparent upgrade from SMS to Signal messages you mentioned.)
Signal isn’t optimized for the use case where you need to use it to ask for help in an abuse scenario.
But isn’t that scenario a weak one? How would you know to install signal and reach out to a specific person for help if you hadn’t already asked for help?
>Like the key agility that makes the Axolotl Ratchet so superior to GPG,
I am not sure how you can usefully compare a thing originally intended to secure email to a thing intended to secure IM.
The ratchet only provides forward secrecy anyway. My take on that is that it is a pointless thing for something like email and pointless in practice for most IM applications. Most IM users keep their old messages and practically all email users keep their old messages...
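For readers who haven't seen the construction, here is a minimal sketch of the symmetric-key half of the ratchet idea (a big simplification of the actual Double Ratchet, which also mixes in fresh Diffie-Hellman outputs): each message key is derived from a chain key, the chain key is immediately advanced, and the old one is thrown away, which is where the forward secrecy comes from.

    import hmac, hashlib

    def kdf(chain_key: bytes, label: bytes) -> bytes:
        return hmac.new(chain_key, label, hashlib.sha256).digest()

    class SymmetricRatchet:
        def __init__(self, shared_secret: bytes):
            self.chain_key = shared_secret

        def next_message_key(self) -> bytes:
            message_key = kdf(self.chain_key, b"msg")
            # Overwrite the chain key; once the old value is gone, past message
            # keys can no longer be re-derived from a captured device state.
            self.chain_key = kdf(self.chain_key, b"chain")
            return message_key

    # Both sides start from the same agreed secret and stay in sync.
    alice, bob = SymmetricRatchet(b"\x00" * 32), SymmetricRatchet(b"\x00" * 32)
    assert alice.next_message_key() == bob.next_message_key()

Which is exactly the point above: deleting keys buys you little in practice if both endpoints keep the decrypted history around anyway.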
There is nothing particularly wrong with Signal. Some of the criticism might be a reaction to the constant promotion of it over all other things, even when (as in this case) the things being compared are fundamentally different.
It should be noted that, while cool from a cryptographic perspective, this security primitive is practically useless in any scenario where you might think it would be useful:
* The burden of evidence to introduce chat logs in legal proceedings is much lower than you would think. Often screenshots of Facebook messages are used, or SMS records (which are also easily spoofed). A judge or jury will likely not entertain the possibility the messages were fabricated unless you have substantial evidence to the contrary.
* If you're scared of national security actors discovering you've leaked some information, the fact there is a record of the secret information in the other person's phone is more than enough for them to black bag you. (And let's be honest, lack of evidence has never stopped the USA from black bagging people.)
* Generally speaking, the probability of the chat record being faked is so small that no reasonable person will entertain the possibility it was faked (even though it could've been).
Not to mention there are all sorts of forensic methods to prove that a particular person wrote some message using fingerprinting, and ultimately Signal LLC does actually know whether you sent a message or not and who to (though admittedly they claim they don't keep metadata logs).
Now, if you're dealing with an anonymised chat system with this property then it could be argued it is practically useful. But at that point the anonymisation is doing most of the heavy lifting.
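For context, the primitive being debated here is deniability (repudiation): because message authenticity in a Signal-style session rests on symmetric MAC keys that both parties hold, either party could in principle have produced any given message and tag. A toy sketch of that symmetry:

    import hmac, hashlib

    shared_mac_key = b"\x01" * 32   # both Alice and Bob hold this after key agreement

    def tag(message: bytes) -> bytes:
        return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

    # A real message from Alice...
    real = (b"meet at 6", tag(b"meet at 6"))

    # ...and a forgery Bob could fabricate after the fact. Nothing about the tag
    # distinguishes the two, which is the entire cryptographic content of the claim.
    forged = (b"the heist is on", tag(b"the heist is on"))

    assert hmac.compare_digest(tag(real[0]), real[1])
    assert hmac.compare_digest(tag(forged[0]), forged[1])

Whether that symmetry would ever sway a judge, a jury, or a three-letter agency is exactly what the list above disputes.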
> The burden of evidence to introduce chat logs in legal proceedings is much lower than you would think. Often screenshots of Facebook messages are used, or SMS records (which are also easily spoofed). A judge or jury will likely not entertain the possibility the messages were fabricated unless you have substantial evidence to the contrary.
This is true for now. It seems likely that fabrication of screenshots etc will start to appear in legal cases, and as soon as it does, the burden of proof will shift.
For that to have any hope of working, you need easily available tools that can fake a chat database. Currently, I'm not aware of any such tools, and I'm not even sure that you can import a faked chat log into Signal on Apple iOS.
In court none of that is likely to hold up anyway, as evidence like chat databases is pretty much assumed to be true until proven otherwise.
I agree that an emphasis on usability is required, and that is why Zoom now rules. My elderly relatives use Zoom exclusively, partly because that is what they are invited to use for the 80th birthday celebrations that seem to happen every week. But also because they don't have smart phones; they are somewhat comfortable with a desktop computer, but smart phones are just unusable due to lack of dexterity, poorer eyesight and worse user interfaces. I can't see a mobile-only solution winning family use cases. Maybe it will do better with business.
My biggest annoyance with Signal is that when I send an SMS, people with Signal on an Android will not reply back with an SMS but with a Signal message. It doesn’t make any sense that I don’t get replies on the same channel as I send.
This is a big usability no-go for me and the reason I'm not using Signal anymore. Don't interfere with other messaging systems like SMS.
The only way to avoid this seems to be to delete your Signal account. If you just uninstall the app, these contacts will still try to reach me (reply to my messages), but the messages will not be delivered as SMS as long as my account exists.
Users can explicitly send SMS messages (long press the send button and select "insecure SMS"), but this option isn't sticky -- I think if the app restarts it will again default to Signal messages. As you say, unregistering your Signal account[1] is the only permanent solution.
I have to agree. I've convinced a non-technical group of friends to use this over SMS for group chat, and their experience so far has been delightful. Even without getting into encryption, they're delighted at its support for threading, any emoji as a reaction, and its non-corporate origins.
I imagine it was even more frustrating for the eavesdroppers when they could no longer listen in on the top secret discussions with your grandparents. The eavesdroppers will not give up easily though; grandparents everywhere must stay vigilant.
You're revealing your own lack of aspiration while condescending toward older people.
Some people have prominent positions or intend to do important things in their life and they don't want to see personal conversations being leaked under the threat of blackmail.
Marlinspike is the guy who wrote a protocol with a built in MITM functionality, and had enough chutzpah to call it end-to-end encryption, and said that UI feedback for key changes will "confuse users."
What he wants is a bad, weak encryption scheme. It's like the previously discredited idea: "let's put SSL/TLS everywhere, but trust self-signed certs because otherwise we will confuse users".
Could Signal use phone number pairs to robustly implement a "make me visible to" discovery method?
It could certainly implement contact matching based on number pairs instead of just numbers, but I haven't reasoned through whether it could robustly (and efficiently, I suppose) keep other parties from enumerating the number pairs.
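A sketch of what pair-based discovery could look like, under the (hypothetical) assumption that both sides upload a hash of the pair rather than a raw contact list; the enumeration worry from the previous paragraph is flagged in the comments:

    import hashlib

    def pair_token(my_number: str, their_number: str) -> str:
        # Both parties derive the same opaque token for the pair, so the server
        # can match them without being handed either raw contact list.
        a, b = sorted((my_number, their_number))
        return hashlib.sha256(f"{a}|{b}".encode()).hexdigest()

    # Alice opts in to being visible to Bob; Bob does the same for Alice.
    alice_upload = pair_token("+15551230001", "+15551230002")
    bob_upload   = pair_token("+15551230002", "+15551230001")
    assert alice_upload == bob_upload   # the server sees matching tokens

    # Caveat: phone numbers are low entropy. Anyone who already knows one number
    # of interest can grind through the ~10^10 candidates for the other side,
    # so a plain hash alone doesn't make this "robust" against enumeration.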
I applaud the man for trying really, really hard to make a difference, and succeeding. I look forward to the day when Signal doesn't require a phone number. Telegram, while requiring a phone number, doesn't require you to disclose it to receive messages. Others having your phone number opens you up to all kinds of other attacks, which are quite bad: SIM swapping and SS7 attacks. SS7 vulns can disclose your real-time location and more.
Sigh. That Stack Exchange answer is incredibly old and points to the flaws everyone knew in MTProto 1, which has been superseded by MTProto 2 for years. MTProto 2 is based on standard crypto primitives that not a single human being has found fault in.
Please do your research before putting this stuff out there in the future: it spreads unnecessary fear, uncertainty and doubt.
You are welcome to develop your argument and point out where in MTProto 2 you find fault or why using standard crypto primitives isn't enough and what you'd like to see from MTProto 2 to secure it in your mind.
The gp comment literally just posted a link to a deprecated comment that painted MTProto 1 as (rightfully) insecure.
That was not thorough research, and it attaches the FUD associated with an unused protocol to MTProto 2, which, I'd again like to remind you and everyone, not a single security researcher or anyone else has found fault in.
The point the parent is making is that ‘not a single researcher has found fault’ is not actually a valid baseline for considering a cryptographic protocol to be secure.
We have to assume that all protocols are insecure by default. That is standard practice.
Only protocols which have withstood serious scrutiny and incentivized adversaries can be considered to have a level of security.
So if you want to claim MTProto 2 to be secure, it isn’t enough to say ‘it uses standard primitives’. It also isn’t enough to say ‘nobody has found any bugs yet’.
You need to demonstrate that it has received sufficient attention from motivated attackers before you can claim it is secure. That’s just how it is with crypto.
Great. Go ahead and demonstrate that Signal Protocol has received sufficient attention from motivated hackers to your satisfaction. Telegram has an open invitation to crack their protocol for 100k USD. Nobody's collecting. I guess those high-value targets using Telegram must be worth more than 100k.
I don't need to demonstrate anything other than what I already have: when would we know a protocol has received significant attention from motivated hackers? I'll hazard a guess and say it's when that protocol is broken and the break is discovered by a security researcher or someone on the crisis response team.
I'm not claiming MTProto 2 is secure. Neither am I claiming Signal Protocol is secure. Undoubtedly both protocols have undiscovered vulnerabilities. I am directly refuting and admonishing a user for linking to a Stack Exchange comment that addresses a previous protocol with almost nothing in common to the present one.
Eh, thanks for the info on MTProto2. They made poor decisions initially, and doubled down on them, and thus I didn't (and still don't) think it worth my time to follow their changelog.
The contests are not remotely sufficient. Here's a good article:
"I’ll repeat it again: If you want to show that a system is secure, give the adversary as much power as possible, and if they still can’t break it, the security is good."
Secure protocols take a lot of expert analysis to build consensus that they're worthwhile. That requires engaging with the community of researchers on their own terms and using established best practices.
Finally, Telegram's server code is still closed source and thus can't be reasonably audited. Could an attacker (or state actor) with server access decrypt stored messages? Maybe! That's why default e2e encryption with well-audited, open-source code is the gold standard...
Is there a reason you continue to link to outdated articles? This one is from 2013(!) We're getting close to a decade now.
> and thus I didn't (and still don't) think it worth my time to follow their changelog.
I strongly advise you not to comment on topics you don't find worth following. You directly harm a project that has done a lot of good (for protestors in Hong Kong, Belarus, and Russia) with your ignorance.
Now you're putting words in my mouth. I did indeed say MTProto 2 is based on standard primitives and that no one has publicly claimed it insecure or vulnerable. I did not say to trust MTProto 2 any more than you would trust the Signal Protocol; that is, given the information we have on known vulnerabilities, one appears about as good as the other.
“You are welcome to develop your argument and point out where in MTProto 2 you find fault or why using standard crypto primitives isn't enough and what you'd like to see from MTProto 2 to secure it in your mind.”
The only person putting words in someone’s mouth is you - I never compared MTProto 2 to signal. You keep claiming that you never said MTProto 2 is more secure than signal. I never said you did.
The idea that being based on standard primitives is enough is a commonly held but dangerous piece of cryptographic reasoning.
> You claimed that the reason to trust MTProto 2 because it is based on standard primitives and because no security research had yet found a bug.
Those are the words you put in my mouth. I don't trust MTProto 2. When did I say that I trusted MTProto 2?
This whole argument was me putting bad research to shame and then challenging another user, not you, to develop their argument.
But you wanted to back them up with something about concentrated efforts from serious adversaries, which is an impossible argument to refute, because neither of us has the insider knowledge to know what these protocols have endured and survived; only knowledge of their successful exploitation would be on the menu for public consumption.
So please, do bring something of greater meat to the table if you have something to share.
Directly from the first three sentences of your link:
> Paying people rewards for finding security flaws is not the same as hiring your own analysts and testers. It’s a reasonable addition to a software security program, but no substitute.
Why is it, whenever Signal is brought up on Hacker News, we get inundated with people who object to the core decisions of the Signal project? Would Signal really be better if, instead of having a secure messenger available to the masses, it spent massive amounts of time implementing the things these people want? No.

I would be comfortable recommending Signal (or WhatsApp) to a nontechnical friend and communicating with them on it, with the expectation of a certain level of privacy. I spend a lot of time on Hacker News and consider myself a very technical person, and I'm not sure I trust myself to use Matrix in a forward-secure way where it's at right now.

If everybody spent the time they spend complaining about Signal working on getting Matrix or whatever to a point where it was usable... well, frankly I don't think it'd be much better off than it is right now, but it seems more likely to bring about results to me than endlessly lobbying to have Moxie do _the thing he thinks cannot be freaking done right_. Right now, Signal exists and can be used securely (given certain common threat profiles) by the typical smartphone user. I'm really tired of people comparing the security that Signal offers to the security of imagined hypothetical messengers.
A great deal of human communication is dedicated to signalling high rank/superiority, or demonstrating familiarity/intimacy.[1] In the case of HN, very few people know much about Moxie, so the only useful signal they can convey is expertise. Many people come here because of their technical or product development background/interests, and the way they show expertise is by second-guessing technical, user interface, and other issues.
As a result of this combination of constraint and desire, we get a bunch of comments where HNers talk about how they'd make Signal (as well as every other product) better/more useful.
I think this is an uncharitable view, and while it might be true in some cases, it is certainly not always the case. Many times the people who point out issues with Signal do it not because they want to show off, but because they honestly are frustrated that no product seems to meet their needs, and Signal has specific issues that matter to them. I honestly believe “Signal is stupid for relying on phone numbers and SGX” is really just “I don’t trust these things, they have a track record of having issues, I would really like to use this service, and I am sad that you chose to do this”. Hacker News is often not very good at conveying what it is trying to say, but I remain optimistic that it’s more than an intellect-measuring contest.
I remember the outrage quite well when Facebook started spamming ads to the phone numbers of people who were forced to give Facebook phone numbers for "security" purposes and were promised ads would never be shown using those numbers. Or when Jack Dorsey's Twitter account was hacked because of SMS 2FA.
Lastly, phone numbers are general identifiers used in the search boxes of various data collection tools. Maybe you can search by the Threema ID as well, but that requires the tool to be a tiny bit more sophisticated, and that means the people who like to invade privacy are a bit more frustrated.
That isn't "smartness signalling" or whatever, that's a real concern.
The phone number was the only thing that was preventing them from keeping metadata such as usernames on their servers (until they introduced the PIN). It was a tradeoff between two concerns.
I really don't think folks like nullc (one of the prominent critics here) need (or care) to signal rank here. Saying that's driving the criticism is an easy way to write it off.
He's got a master mariner's license, which to me is way cooler than any of the computer stuff: legally captain a merchant ship of any size, of any type, operating anywhere in the world.
Signal only has any userbase due to advocacy; compared to the secure-enough-for-most WhatsApp which has literally billions of users and is the default choice, every user of Signal had to be convinced to install it.
So I think it's unfair to bash people who criticise the project. The 'drag' they apply to wider adoption is still minuscule.
Put it this way: why does Signal exist when WhatsApp is good enough? Wouldn't Mr Rosenfeld be better putting all that effort and ingenuity into a truly innovative new messenger?
> why does Signal exist when WhatsApp is good enough?
WhatsApp sends all your contacts to Facebook instead of the Signal Foundation, and unlike Signal, doesn't use SGX to keep Facebook from knowing what they are.
WhatsApp encourages people to back up their messages to either iCloud or Google Drive, which breaks the E2E encryption. There is no way to tell whether the other party is doing this, and just one person in a group chat can accidentally (or on purpose) back up the chat at any time.
Signal allows chat backups, but it's not the default option and the UX deliberately discourages this behaviour. They are also encrypted with a password, which offers more security.
> No. I would be comfortable recommending Signal (or WhatsApp) to a nontechnical friend and communicating with them on it
I kind of like Signal. It seems like an honest effort for a really great goal and the dedication as well as some of the things they have achieved seems seriously impressive!
As for WhatsApp, you do realize that they upload all your chats to the cloud, unencrypted, easily retrievable by you and whoever you chat with, and that getting rid of them takes effort on both sides?
You are aware that Facebook probably uses your metadata to feed their algorithms?
Edit: my point is that as engineers and technologists it is so easy to be blinded by the rock-solid technical implementation of an (important) part and forget that security depends on all parts of the puzzle.
- End-to-end doesn't matter if the endpoints by default upload everything to cloud storage unencrypted. If it is going to end up unencrypted in Google's cloud, you might as well use Gmail. That way you at least won't make your metadata accessible to Facebook at the same time.
- End-to-end only means so much when an adversary is running the routing: they cannot know the contents of your messages as they fly through their networks, but if you don't trust Facebook, do you really want them to know who you talk to and when?
They have built a Tor-like onion network and use it as the backend for their messenger (forked from Signal). It's end-to-end encrypted and doesn't have centralised servers.
Session is exciting but it's currently broken. Group chats don't work properly with messages frequently getting lost and not delivered to all users. We've tried to switch several times for groups that need encrypted group chat with disappearing messages, group admin, and usernames instead of phone numbers (basically making them the only game in town) but it just doesn't work right 100% of the time which is a deal breaker.
Incidentally, does anyone know what happened to apps like WhisperCore from 2011? Are there new apps like that on Android or iOS? Is AdGuard DNS about the closest thing to that now?
Other than Matrix, what are some other alternatives or upcomers in the messaging field I should try?
Matrix still has the high burden of me setting up my own server and maintaining it. Until their P2P solution is released, they have their own metadata problem.
Tox (clients include uTox and qTox) has an... interesting history, but is technically interesting (true p2p), and worked fairly well when I last used it (around 2018).
Wire was my favorite... much more featureful than Signal, at least when I was testing them a couple of years back. They also had some not-so-nice things to say about the Signal folks, but that's mostly gossip.
I was vouching for Signal for all my friends for years, supported them with donations, and was really rooting for them. However, I regrettably can't trust in moxie having the best intentions with Signal anymore. Lost count of the number of times this has been recycled on HN so I'm not taking the time to formulate this extremely well but:
* For the longest time, requires phone number as identifier. When asked to remove this restriction, the reply is "so we can bootstrap off the phone's native contact application". But Android and iOS contacts have natively supported e-mail addresses on contacts since forever? And those could be optional? Every time the conversation has gone this far with anyone involved in Signal, silence/ghosting.
* The whole PIN requirement debacle - where further amounts of metadata would be uploaded to Signal's servers (encrypted of course, with the PIN as passphrase of the key). Suddenly the app wouldn't start and users were locked out of all their Signal conversations (even reading them) without setting a PIN, no way to circumvent it. We were told this would be a strict requirement to be able to solve the above. After a lot of backlash they rolled it back after a week or so?
* Hostility towards alternative implementations.
* There's an open issue on GH for verified builds. Since 2015.
Signal may be good today. What about tomorrow? I think the only durable and realistic solution is federation (because let's face it, pure P2P for the mainstream is not realistic).
Anyone feeling similarly should check out matrix.org. It may not be ready for the masses yet. But if we consider how we communicate in 5, 10, 20 years from now, it becomes obvious that we can't rely on a single actor as provider, regardless of how good they are today. The only way we get there is with people like the ones frequenting HN getting involved by using it, contributing, filing bugs, hosting home servers and using the clients.
These are all exact reasons I have never used Signal.
Additionally I find concerning the choice to bet the farm on SGX which has been repeatedly broken with repeated opportunities to extract the keys, at which point cracking small numeric pins is trivial.
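To make "trivial" concrete, here is a toy sketch assuming an attacker who has already extracted the enclave-held material and the encrypted blob, so that only the PIN's own entropy is left; the KDF and cipher here are illustrative, not Signal's exact construction:

    from itertools import product
    from argon2.low_level import hash_secret_raw, Type              # pip install argon2-cffi
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography
    from cryptography.exceptions import InvalidTag

    def crack_numeric_pin(salt: bytes, nonce: bytes, blob: bytes, digits: int = 4):
        # Only 10**digits candidates: even a deliberately slow memory-hard KDF
        # turns this into hours of compute at worst, which is why the guess
        # limiting has to live somewhere the attacker can't reach.
        for candidate in product("0123456789", repeat=digits):
            pin = "".join(candidate)
            key = hash_secret_raw(pin.encode(), salt, time_cost=3, memory_cost=64 * 1024,
                                  parallelism=1, hash_len=32, type=Type.ID)
            try:
                return pin, AESGCM(key).decrypt(nonce, blob, None)
            except InvalidTag:
                continue
        return None  # not a numeric PIN of this length

Longer alphanumeric passphrases change that math, but most users asked for a "PIN" will pick four to six digits.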
Moxie simply promised they always patch right away and never take the keys when they have such opportunities to do so.
I do believe him, today, but I don't believe he is above being threatened or coerced tomorrow or that he won't get replaced by someone that cooperates better with state powers as happened with VK and countless others.
I also took issue with Signal having central network metadata chokepoints that his DCs or ISP could run timing heuristics on, even if he does not.
I tried to engage with Moxie about these issues but he dismissed each as things we will mostly have to accept because he feels it is not possible to create and rapidly improve a privacy focused communication tool for the masses that isn't produced and controlled by a single large company.
Moxie is brilliant, and his contributions to cryptography and bringing education on privacy and end to end encryption to millions can't be overstated.
The problem is that Moxie and Signal are struggling to engineer best efforts under an assumption they treat like a law of nature: no end-to-end encrypted messaging platform can succeed without complete centralization.
I operate under a different assumption: No centralized platform will ever escape censorship and control by the government it operates under.
Moxie has said before he is happy to be proven wrong.
I hope as Matrix and others continue to prove him wrong with his own cryptography research that he comes around and rejoins the fight for a decentralized internet with irrevocable privacy.
You can't coherently be alarmed by Signal's use of SGX while at the same time endorsing systems that use no cryptography whatsoever to protect the metadata Signal uses SGX for.
> You can't coherently be alarmed by Signal's use of SGX while at the same time endorsing systems that use no cryptography whatsoever to protect the metadata Signal uses SGX for.
You can be: it isn't difficult to argue that Signal overstates the security properties of their solution.
It's not an incoherent position to say that it's more important to be conservative (or at least accurate) about the claimed properties than it is to provide epsilon more security.
If Signal claimed that this metadata had no privacy from them (and the hosts running their systems), but when you dug into the protocol or detailed tech docs you found that they're using SGX to minimize risk, then I think there would be nothing to fault about the addition of SGX. In that case it would be purely additive security.
I’m not sure that that’s the only conclusion you can draw. Some systems choose not to handle that metadata at all, which often makes them a worse service, but they make that choice after looking at the landscape and deciding not to rely on SGX. The other plausible argument is that Signal's use of SGX is better than nothing, but by selling it as more secure than it is they do more harm than good.
Some people choose not to turn their servers on at all. It makes for a worse server, but it's very secure.
Security is always about tradeoffs and compromises. A system that is very, very good, but not perfect and widely used is far preferable to a system that is perfect but used by no one.
I don't need a phone number, nor do I need to upload my contacts' phone numbers[1], in order to use Matrix.
[1]: I work around this by running Signal in a work profile with no contacts. So it's... kinda sorta optional, but in practice there are many caveats and complications that make it effectively not optional. Certainly not when we're claiming it for average users, which is Signal's argument for using it.
I think there’s a difference between what you’re saying, which seems to be that you personally don’t upload your contact metadata to Matrix, and what I asked / the previous commenter was describing, which is the idea that there are messaging platforms which have decided to not support storing this kind of metadata.
Notably, last I looked, Signal does allow users to opt out of SGX metadata storage, though the initial implementation didn’t cleanly allow for that.
You can optionally add your public identifiers to Matrix (phone, email, etc), which lets people search for you via identity providers.
What Signal does is 1) require your phone number, 2) automatically upload every phone number in your contacts, and 3) notify people already on Signal who have previously uploaded your number that you have joined.
It's a world of difference. And a huge breach of implied trust for multiple people I've known, who joined and immediately uninstalled when they found out a bunch of random people were notified.
As noted above, I’m pretty sure that Signal’s current state allows a user to choose whether their contacts are uploaded.
But again, the point I made originally, and which you replied to, isn’t about what users may or may not do. It’s whether any messaging platforms have resolved to not store metadata, rather than attempting (as Signal has) to only store it securely.
We can debate whether people agree with Signal’s approach to the tech or the UX, but Matrix doesn’t bother; if you give them your metadata, they’re just storing it without any attempt at E2E encryption.
You're really missing the point here. The core Matrix protocol doesn't handle contact discovery at all.
An entirely separate system exists to facilitate contact discovery. You have to explicitly choose to publish an identifier to it. You choose the server you want to publish to. You choose if, which, and when to query such servers. You choose what to query them with.
A server implementation can (in theory) support whatever arbitrary thing it wants to for an identifier. The fact that email addresses are commonly supported is already _significantly_ better than being forced to use a phone number.
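For the curious, this is roughly what the opt-in side of that separate system looks like from a client's point of view. This is a sketch based on my reading of the v2 Identity Service API; the authentication step and error handling are omitted, and matrix.org is only an example of a server you might choose:

```python
import base64
import hashlib
import requests  # third-party: pip install requests

IDENTITY_SERVER = "https://matrix.org"   # your choice; can differ per contact

def lookup(medium: str, address: str) -> str | None:
    # Ask the identity server how it wants identifiers hashed.
    details = requests.get(
        f"{IDENTITY_SERVER}/_matrix/identity/v2/hash_details").json()
    pepper = details["lookup_pepper"]
    digest = hashlib.sha256(f"{address} {medium} {pepper}".encode()).digest()
    hashed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    # Only now, and only if you choose to, does anything leave your device.
    resp = requests.post(
        f"{IDENTITY_SERVER}/_matrix/identity/v2/lookup",
        json={"addresses": [hashed], "algorithm": "sha256", "pepper": pepper},
    ).json()
    return resp.get("mappings", {}).get(hashed)   # e.g. "@alice:example.org"

print(lookup("email", "alice@example.org"))
```

Nothing in the core protocol forces this step; you pick the server, the identifiers, and the moment of the query.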
In the interest of being an informed member of this discussion, I went to try out Matrix.
Per the top recommendation from matrix.org, I downloaded the Elements app for iOS. When I opened it, the very first thing it did was ask me if it could access my contacts for sharing them with my (as yet unchosen?) identity server.
Then, I registered an account and clicked the “People” tab. I was immediately prompted to enable contact discovery, via a modal called “Contact Discovery”. It specifically calls out finding contacts by email and phone number. There was never an option presented to select my own identity server, or change what info is published.
If “contact discovery” isn’t part of the core offering, it certainly makes a great attempt to look like it is. The only choice I had available to me is to decline sharing my metadata, which is the same option Signal makes available for SGX.
I get that due to the magic of open source and federation, Matrix implementations could technically use random numbers or favorite colors or some other identifier, and I’m sure there are other less recommended clients which let me pick my own identity server or only share certain slices of data. But if we can’t agree that the flow recommended on the website is the one that the vast majority of users are going to follow, there’s not much room for a productive discussion.
> If “contact discovery” isn’t part of the core offering, it certainly makes a great attempt to look like it is.
You are looking at only a single implementation here. The point is that this is a set of federated protocols that has been specifically designed to provide freedom of implementation in this regard.
> But if we can’t agree that the flow recommended on the website is the one that the vast majority of users are going to follow, there’s not much room for a productive discussion.
I never claimed otherwise? We seem to be talking past each other.
You say you followed the top recommendation provided to you and that the end result was essentially equivalent to Signal. Since you appear to approve of how Signal handles things, that hardly seems like a complaint to me?
The key difference, of course, is that for Signal that's the only option. If the central authority changes it tomorrow, then tough. Matrix, on the other hand, provides you the freedom to select (or build, or patch, or whatever) a client that meets your needs. It's providing a superset of what Signal is offering.
Except this isn’t the same as how Signal handles things. Signal uses SGX specifically so that they can store metadata without exposing it to the server. Matrix’s contact discovery spec doesn’t.
My initial question, earlier up the chain, was in regards to this dilemma: Signal didn’t upload contact metadata to their servers for years, until they implemented SGX to allow them to do so securely. By contrast, every other messaging platform (Matrix included) just wrote up a spec for contact metadata storage that doesn’t address protections from the server operator.
We’ve got back and forth about the upsides of federation and open source clients and such, but as I’ve tried to point out, none of that is relevant to the actual question I asked, or the upstream point in the thread: that Signal’s spec didn’t include contact uploads at all until there was a way they could store them securely, and other message platforms just skipped straight to storing them.
In practice I have no way of preventing Signal from turning off SGX any time they feel like it because they aren't open and federated. They could push a software update and I'd be none the wiser until a security researcher noticed and pointed it out!
I guess it would be nice if Matrix integrated some sort of remote attestation support into the identity server protocol. Maybe they should have done so from the start, maybe not. At this point I don't see their offering as being less secure than Signal's though, just slightly different.
It's also worth noting that SGX (and AMD's competitive offering) has been broken an embarrassing number of times at this point. From my perspective, secure storage by the server is more or less orthogonal to any given spec or implementation. You either provide data to a server as cleartext or ciphertext. The server does with it what it will - you as the user have effectively no control over that.
Also note that with Matrix, you have the option of running your own home server (or use one provided by someone you personally trust) and communicate seamlessly with people on other home servers. The dynamic is completely different.
Oh, and any/every device/client is a first-class citizen, which means you can run it on more than one phone at the same time, or use it without a phone at all.
Matrix lets people host their own servers and in doing so put the metadata anywhere they are comfortable.
One can have metadata for sensitive internal corporate channels stay on a network they own, in whatever country they want, while still being able to chat with outside parties on matrix.org or other servers. Several friends host their own servers, and more recently Matrix P2P is rapidly maturing to drop the need for servers at all for many use cases.
Federated systems allow people like me to choose to host a server for my own family in my own home closet rack, so data shared within my family never leaves our network.
Still others can host matrix as a Tor hidden service where it will be very expensive to even learn who to target.
If all participants are using Tor and not using identifying information like a phone number, then bulk deanonymization becomes very expensive, particularly if many small highly targeted social circles roll their own.
Signal gives users no choice but to trust one single-point-of-failure setup, built under a one-size-fits-all threat model, that funnels all IP metadata to one place, which can leak information regardless of any encryption.
I don't want a world where one central party holds all communications metadata in one place under one legal jurisdiction with one proprietary memory isolation technology under one threat model.
Moxie here likes to point out that email demonstrates all the things that can go wrong with federated systems, but if we had chosen the popular alternative of letting a single party take over this whole class of communication, we could all still be using AOL.
Internet messaging will outlive us all, and if we advocate everyone lock up their communications with one (even benevolent) dictator, it won't end well when the next dictator is not so benevolent.
See: pretty much every social network in China and Russia that is now under state control in spite of early promises of privacy by founders.
Here's something I've never understood about federated approaches to secure messaging:
With a centralized approach like Signal, I have to trust one single provider, i.e. the Signal Foundation running the servers, with my (meta)data. Sure, in an ideal world I'd rather not trust anyone. Fair enough. With a federated network, however, I generally have to trust every single host that any of my contacts has decided to sign up with, as my metadata will necessarily leak to those hosts when I'm communicating with those contacts in question. And while I personally put in a lot of time and effort into making sure I only use software and internet platforms that protect my privacy as best as possible, most of my contacts don't. The NSA would have an easy time setting up a rogue host.
Now imagine that, in addition, there weren't just one single version of the Signal app, but dozens of forks by dozens of developers. The security risks would get even greater and I would now also have to make sure that my friends use the right (secure) version.
Am I the exception here? Does everyone here only have friends who are IT security experts and know how to tell apart trustworthy hosts and app developers from non-trustworthy ones?
On a side note, another reason why I no longer strongly believe in the idea of decentralization is the following: there was a CompSci paper a few years ago that argued that in any human-made, initially decentralized network/graph (whether digital or analog; whether a graph of company relationships or mankind's social graph) there will eventually appear "supernodes" which have a disproportionately high number of edges/connections to other nodes. (In the same vein, even in federated networks like email, some hosts will become bigger than others and will eventually take over most of the users and traffic.) In short: there is no such thing as full decentralization. Nodes will necessarily "accumulate" and gather around local centers. (If anyone finds/knows that paper: please let me know!)
I think perfect really is the enemy of good in this case. Attack vectors will presumably always exist. I look upon any argument against openness - be it software licensing, protocol federation, data export, binary blob usage, or anything else - with _extreme_ skepticism.
> ... I generally have to trust every single host that any of my contacts has decided to sign up with ...
In general, you should only have to trust a particular host with communications going to the specific contact that uses it.
> Am I the exception here? Does everyone here only have friends who are IT security experts ...
No, I think you're just looking at the problem the wrong way.
With a centralized model, you generally have to unconditionally trust the central authority.
A federated system offers much more flexibility. Metadata is likely to be spread piecemeal across multiple hosts and network paths, making it much more difficult for an adversary to analyze in the general case. Instead of a blanket trust decision, you make a per-contact decision based on the nature of the interaction you intend to have with them. If you feel the need, you can even self host and insist that a particular contact register with _your_ server to communicate with you.
If you absolutely can't trust someone to make good decisions with security critical information, it's unlikely Signal can do much to change the threat they pose to you in a real world scenario anyway. At the end of the day you're ultimately choosing how much trust to place in the person you're communicating with regardless of which system you use to do so.
> ... there will eventually appear "supernodes" which have a disproportionally high number of edges/connections to other nodes ...
That's certainly an interesting dynamic but it's hardly an argument against federation. Centralization is literally the worst case in that model (ie a single node and nothing else). In a federated network you have a choice of whether to make use of such supernodes. If it really matters, I can (for example) refuse to communicate with a particular contact regarding some sensitive topic via (say) gmail. A centralized system doesn't provide that option at all.
I think we need to agree on a common threat model, because right now I'm getting the impression that you and I are assessing things from very different angles. If any of the 3-letter agencies decide to target John Doe specifically and expend significant resources on this task, I think we can agree that chances are they will succeed. Neither a centralized nor a federated approach will protect John from that because, unless John is a security expert and very paranoid and careful in everything he does online, there will be many other attack vectors.
Signal's goal, however, is to protect the masses and, thus, society as a whole, by protecting as many people's privacy as possible. The things that are at stake here are not the data and the social graph of a handful of individuals, but the data and social graph of society as a whole. (I'm sure I don't have to mention the implications for democracy and social order.)
My perspective, therefore, was that of an average user. The reason I mentioned my personal situation and the fact that I myself spend quite a bit of time on making sure my systems are safe, was to emphasize that if the situation is already overly complicated for me, it will be much worse for the average user.
> In general, you should only have to trust a particular host with communications going to the specific contact that uses it.
I don't think the word "only" is appropriate here. Let's say John Doe has roughly ~1000 contacts. This means that, in the worst case, he would have to research ~1000 hosts and their privacy policies. The "accumulation effect" I mentioned previously will reduce that number quite a bit, but there are still going to be dozens of different hosts. I doubt we could expect John to look into each and every one of them before he gets in touch with his contacts. (Note that this gets even worse when people can freely switch between hosts, as suggested e.g. by other replies to my comment, as John will then have to repeatedly redo the checking.) Therefore, I think it is reasonable to expect that a significant number (millions, if not billions) of users will end up residing on insecure and untrustworthy hosts, and their social graph and metadata – and possibly even their data (see below) – won't be protected at all.
> With a centralized model, you generally have to unconditionally trust the central authority.
In the federated model, John has to do that, too. Sure, he can self-host but how many people are actually going to do that? Put differently, the vast majority of all acts of communication in the network is going to get routed not through self-hosted nodes but through the servers of providers whose trustworthiness is at least questionable. I hope you will agree that, for society as a whole, this exacerbates the trust problem you mentioned.
> A federated system offers much more flexibility. Metadata is likely to be spread piecemeal across multiple hosts and network paths, making it much more difficult for an adversary to analyze in the general case.
A federated system also makes it much easier for adversaries to enter the game, as they don't have to compromise a well-known provider like Signal that is under the close scrutiny of the public. Instead, they can just create new hosts (just as they do in the case of the Tor network). What's worse, in this case they won't just be able to access their users' social graphs but very likely also the content of their messages, as the key discovery problem is usually solved by hosts distributing their users' public keys.
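One mitigation that works regardless of who distributes the keys is plain trust-on-first-use pinning on the client side: remember the first key you saw for a contact and refuse, or at least warn loudly, when a host later hands you a different one. A minimal sketch of the idea, not any particular client's actual implementation:

```python
import json
from pathlib import Path

# Minimal trust-on-first-use (TOFU) sketch: cache the first public key seen
# for each contact and complain loudly if the host ever serves a different one.

PIN_FILE = Path("pinned_keys.json")

def load_pins() -> dict[str, str]:
    return json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}

def check_key(contact: str, key_from_host: str) -> bool:
    pins = load_pins()
    if contact not in pins:
        pins[contact] = key_from_host             # first sight: pin it
        PIN_FILE.write_text(json.dumps(pins))
        return True
    if pins[contact] != key_from_host:
        print(f"WARNING: key for {contact} changed; verify out of band "
              "before sending anything sensitive.")
        return False                              # do NOT silently re-encrypt
    return True
```

It doesn't tell you whether the first key was honest, but it does turn a silent substitution later on into something the user can notice.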
I'm arguing that the contact themselves, not the node they use, is the primary threat.
> Signal's goal, however, is to protect the masses and, thus, society as a whole, by protecting as many people's privacy as possible.
Sure, by positioning themselves as the only node, which you are then forced to trust. That hardly seems like an acceptable solution to me.
The chance of a given mainstream node being actively malicious seems unlikely to me. Maybe that's misguided, but at least (as you point out) there will be relatively few big ones to research. Non-mainstream nodes don't pose a significant threat by virtue of having little to no use (and so seeing little to none of your traffic and contact graph).
> ... the vast majority of all acts of communication in the network is going to get routed not through self-hosted nodes ... I hope you will agree that, for society as a whole, this exacerbates the trust problem you mentioned.
Actually, it seems to diminish the problem to me. Instead of everything going through one central authority it's now being split across multiple actors. No single entity has access to the complete picture anymore. Moreover, you as the user have the freedom to avoid such supernodes if you feel the need. Yes, that will likely introduce usability hurdles, but at least you have the freedom to do so (as compared to a strictly centralized model).
> Instead, they can just create new hosts (just like they do in the case of the Tor network).
Doesn't this attack model fail to account for how users go about selecting and using servers? If I join the Mozilla instance to chat with them, that wasn't an arbitrary choice - I selected the instance that the organization I want to communicate with is using. So the other party, which I'm going to communicate with one way or another, is the real threat here.
As far as MITM attacks go, that scenario seems to get a bit out into the weeds cryptographically. I'm not sure how viable a large scale attack would be here - presumably at some point odd traffic patterns would become noticeable? And you still have the issue of a malicious node that actively attacks all traffic needing to somehow manage to grow its user base to a significant degree.
> With a federated network, however, I generally have to trust every single host that any of my contacts has decided to sign up with
I can see why, at first glance, this seems strictly worse because there are more entities to trust, but this is not a static system.
As an analogy, think about Facebook. Let's say that your friends all hate Facebook, but continue to use it, because for them the inconvenience and switching costs of moving to a different social network (perhaps one you are offering to host for them) is higher than the cost of continuing to use Facebook.
With a federated network, not only can you encourage (or demand) your friends to use a host that you approve of, but also, the very fact that people can move from one host to another (and there isn't a single basket containing all the eggs) means that these hosts are less likely to risk their reputation or be attacked in the first place.
> even in federated networks like email, some hosts will become bigger than others and will eventually take over most of the users and traffic
It seems you are arguing both that federated systems lead to there being too many nodes to trust, and also that in practice only a few big nodes will be trusted. I may be over-simplifying your points there, and I do think there are challenges with federated networks (particularly when federation is deliberately broken due to spamming/abuse/politics) but it's worth having a clear SWOT analysis here.
Anyway, my point, as before, is that a network coalescing around a few big nodes is not a problem as long as there is portability of accounts between them. I would rather trust 3 big providers who fear losing users to a more secure platform, than 1 monopolistic provider that's too big to fail.
> ... the very fact that people can move from one host to another ...
I think it's worth explicitly spelling out that federation reduces the cost to switch hosts to near zero in many cases. There's no technical reason you can't use a different host on your end per contact in Matrix (for one to one messaging at least). It would be absurd, but you could do it provided that your client supported it.
If it's really needed, federation combined with open source software means that tooling can be adapted to facilitate your particular security and workflow requirements. (Not that you should need it generally, but the freedom is always there if you do.)
> federation combined with open source software means that tooling can be adapted to facilitate your particular security and workflow requirements
In theory, yes. In practice, however, it gets very complicated. (See e.g. Jabber.) You end up with a situation where only experts are able to set up a secure system for themselves, whereas the average user can't even accurately assess how well her/his privacy is currently protected.
I agree, I never meant to imply that maintaining your own fork is simple. I was merely highlighting that the freedom to do so while still interoperating with everyone else is there. Centralization generally deprives you of that ability.
It's analogous to open source vs proprietary software. End users shouldn't generally need to modify the software they use. Doing so isn't typically easy or straightforward for the vast majority of people. But if you ever need to do so, the option is there.
Yes, but it should be noted identity servers are only used for associating phone numbers and email addresses to Matrix IDs. Matrix IDs themselves are federated just like email addresses (@cyphar:cyphar.com is mine, and it's hosted on my own homeserver at cyphar.com).
If you don't register your phone number or email address (which last I checked is not part of the default account creation flow) then it really makes no difference. I agree they should be federated (and they have been working on that among many other things) but it's not like every user's identity is centralised.
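To illustrate how that federation works, this is roughly how a server figures out where to deliver traffic for a Matrix ID, using nothing but the domain in the ID itself. A simplified sketch; SRV records, ports, and error handling are glossed over:

```python
import requests  # third-party: pip install requests

def homeserver_for(matrix_id: str) -> str:
    # "@cyphar:cyphar.com" -> server name "cyphar.com"
    server_name = matrix_id.split(":", 1)[1]
    try:
        # A domain may delegate its Matrix traffic elsewhere via .well-known.
        resp = requests.get(
            f"https://{server_name}/.well-known/matrix/server", timeout=5)
        return resp.json().get("m.server", server_name)
    except (requests.RequestException, ValueError):
        return server_name   # no delegation published; talk to the domain itself

print(homeserver_for("@cyphar:cyphar.com"))
```

There's no central directory to consult; the ID carries everything needed to find its home.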
Optional security isn't security. Especially with the sort of metadata that is in _someone else's phone_. Basically, everybody who has my phone number probably has it in their iOS/Android contacts. I can't opt out of _them_ using a bad identity server.
But you already can't opt out of them sharing their entire address book with the latest sketchy app they downloaded. What's the difference?
They either have your phone number and other contact details in their phone or they don't. They either make good decisions or they don't. You choose how much to trust them and what with. Federation and third party implementations of identity servers for one particular app changes absolutely none of that.
You can be alarmed by and critical of Signal's insistence that it begin storing your data under any scheme without an option of "hey, just don't store my data, thanks".
I have some concerns around the Signal foundation:
The initial $50M in funding was a loan, not a donation, from Brian Acton to the new nonprofit Signal Technology Foundation. By the end of 2018, the loan had increased to $105,000,400, which is due to be repaid on February 28, 2068. The loan is unsecured and at 0% interest.[5]
The Foundation is completely controlled by Brian Acton, who is the sole "member" of the nonprofit, with the right to appoint or remove every member of the Board of Directors. The Board consists of Acton and Marlinspike. Acton is also the President.[5]
Is there a reason to suspect Brian Acton’s motives? I’m always skeptical of celebrity-hackers, but I don’t know why I should be worried about Acton in particular.
> The Foundation is completely controlled by Brian Acton, who is the sole "member" of the nonprofit
It doesn't seem unusual to have a single member nonprofit. However, is it a usual thing to loan to your own nonprofit foundation at 0%? Without a plan to generate positive cashflow into the foundation, would he forgive this loan to offset future taxes?
There is a reason why Signal is relatively mainstream-successful (by crypto product standards) and PGP wasn't. It's because it's willing to make tradeoffs in the name of usability (while still emphasizing security).
A secure messenger product nobody uses helps nobody's security.
Many of the security design flaws in signal have had little to no direct impact on usability.
For example, for years we asked for a simple mechanism to view a user's key and mark it as verified, to prevent MITM -- even one buried in a menu for advanced users (who could at least act as canaries against widespread interception). Not only was the request turned down, it was usually responded to with vigorous personal attacks against the requesters.
Subsequently, once a fingerprint validation method was added, it was gratuitously bound to be pairwise, making it largely unusable for things like posting a PGP-signed Signal fingerprint that any of your contacts could use to transfer trust from an existing key. Requests for a non-pairwise version, even as just some advanced thing that grandma never sees... again, hostility.
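For what it's worth, the non-pairwise scheme being asked for is easy to sketch: sign your fingerprint once with a long-lived key your contacts already trust, and anyone holding that key can verify the fingerprint without a pairwise ceremony. A toy illustration using PyNaCl, not anything Signal actually supports:

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Toy illustration of non-pairwise trust transfer: sign your messenger
# fingerprint with a long-lived identity key instead of verifying it
# pairwise with every contact.

long_term_key = SigningKey.generate()            # e.g. a PGP-like identity key
fingerprint = b"safety-number-or-key-fingerprint-published-by-me"

signed_blob = long_term_key.sign(fingerprint)    # post this anywhere public

# Any contact who already trusts long_term_key.verify_key can check it:
try:
    long_term_key.verify_key.verify(signed_blob)
    print("fingerprint verified via existing trust in the long-term key")
except BadSignatureError:
    print("fingerprint does NOT match; possible MITM")
```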
The constant logic-bombed, auto-expiring software that makes Signal effectively open source in name only. Etc.
I think Signal is fine as an insecure messenger: unencrypted communication protocols should have no place on the internet today. But to advance it as a security tool almost certainly puts people's lives and freedom at risk.
Common usage of Signal has zero security against MITM: users aren't effectively notified that a contact's keys have changed -- and given how often users lose/wipe phones, perhaps that's really the best that can be done for normie-friendly software.
If that's how it is, that's how it is. Some resistance against passive monitoring is still a critical upgrade. But don't call it secure: if you do people will do and say things using it that they wouldn't otherwise.
> Many of the security design flaws in signal have had little to no direct impact on usability.
By far the biggest complaint I read about Signal is "It uses phone numbers, and I don't like that". While I agree that I'd prefer to be able to make up my own unique/random identifier, there's absolutely no doubt in my mind that the downside of phone numbers was far far outweighed by the benefits of usability. Most of my friends use Signal, I seriously doubt I'd have been able to help make that come true if the contact discovery features - along with their side channel privacy leaks - didn't work as well as they do. In retrospect, if I were trying to kickstart a global-scale network of secure messenger users - I'd make exactly the same choice today as Moxie did back then.
> users aren't effectively notified that the contacts keys have changed
That's just not true. It's become a running joke amongst my friends that Signal Safety Numbers changing is how we know when people broke or upgraded their phones.
I assume it's a defensible question of priorities.
In spite of the many issues to complain about signal it's still a lot better than what $random_person would likely use otherwise.
Do we do the most good for the world by under-representing Signal's weaknesses and overstating its security, in order to maximize the number of people using it over worse choices and to cheerlead its development? Or do we do the most good by being frank and critical of its limitations (at least some of which are pretty nitpicky), potentially at the cost of that discussion causing people to keep using clearly worse alternatives?
Personally, I resolve this conflict like this: outside of a techy venue I don't go into any details about Signal's limitations -- I tell people they should use it, but not to assume it keeps them private from governments, Google, or other powerful parties.
And even there, if I send a message to you and on receipt the software finds that your key has changed, my client sends it again to the new key. Which means that even if I were savvy enough to catch the key-change notice, it's too late: a critical secret may already have been disclosed to a MITM by the time I see it.
I've also heard that Signal on android leaks message text because of google's keyboard auto-prediction feature. They should implement their own keyboard or find a way to disable text prediction - and educate their users.
In general I would not expect much privacy from a stock Android device on any chat tool considering they shamelessly ship lots of third party blobs carriers demand be included, some of which like OMA-DM toolkits have sweeping permissions on your device.
On Apple this is less true, but it is also a centralized black box that grants you no freedom, and apps can be banned at any time with little recourse.
You could use something other than Android like PostmarketOS or Librem5 however odds are Signal won't create clients for those platforms considering Moxie has stated publicly he prefers distribution through proprietary channels in order to collect usage stats. Moxie has also stated he will actively fight any third party clients that try to join his network, so we won't likely see many of those attempted for alternative platforms either.
> some of which like OMA-DM toolkits have sweeping permissions on your device.
Fortunately this seems to be going away.
For those not aware of what you're talking about: this is the proprietary 'rootkit' that providers like Sprint had to use to activate CDMA modems on many of their devices that did not come with a CSIM. Because it does a lot with the phone's modem, it needs tons of permissions on the device. Verizon Wireless and Google Fi also had similar APKs that were required to provision phones and update PRL (preferred roaming list) information.
In the US, all modern devices on Verizon are CDMA-less, and new Sprint customers are usually activated on the T-Mobile network (with fallback to GSM/WCDMA, not CDMA).
On Android, Signal sets the flag on the text entry box that disables IME personalization, the same way that the text entry box works in Chrome's incognito mode.
Whether or not Google respects it is not clear, but it does show a little incognito mode icon on the keyboard.
In older versions of Android it'll look like this (https://www.androidpolice.com/2017/07/14/gboard-6-4-brings-i...), while newer versions of Android will have an icon on the top left of the keyboard, before the stickers, gif, clipboard, and cog buttons.
This is with the default Google keyboard (gboard), your mileage may vary with other keyboards.
I won’t bother reading this, it seems to be a standard time waster.
Coincidentally I saw this yesterday https://youtu.be/IWMZ17Iyu3o
“What’s wrong with Signal etc. “
You haven't even glanced at the article, which tells us a lot about the fascinating life of a very interesting person, and to add insult to injury, you then link to a video that does nothing but state the obvious (with much melodrama) for 17 minutes. Frankly, this is the laziest dismissal I've ever seen.
The New Yorker's house style is to use a dieresis to indicate when adjacent vowels do not form a diphthong. There's a syllable-break in between the Os of "coöperate", as opposed to between the Os of "chicken coop", so it gets a special marker.
Just to complete the trifecta, the URL date is presumably that of the print issue in which this article will appear. This "cover date" is usually a week or so ahead of the actual printing date: https://en.wikipedia.org/wiki/Cover_date
The diereses over the o (or any doubled letter) is an old-fashioned way of indicating a syllable break, rather than a longer vowel sound (as in “good”). It’s the New Yorker’s house style.
Also, I don’t see any spelling mistakes in the sentence you mention...
> Marlinspike is the C.E.O. of Signal, the end-to-end encrypted messaging service, which he launched in 2014; he is also a cryptographer, a hacker, a shipwright, and a licensed mariner.
What's all the hype about Signal? It's a bad application.
Let's see. Install it. Oh, it starts with a screen about Terms and confidentiality. But... why does Signal get me to agree to things if it's so much about privacy? Are they... reading our chats?
OK, next. Oh, it needs access to Contacts and Media to improve communications. Right -- because nothing like that sweet social graph. But I can skip this one.
Oh, now it wants my phone number!
Experiment complete. Let's uninstall this app.
I fail to see how this is any better than Whatsapp.