Cryptanalysis of GPRS Encryption Algorithms GEA-1 suggest intentional weakness (iacr.org)
487 points by anonymfus on June 16, 2021 | 107 comments



Excerpt from the abstract:

"This paper presents the first publicly available cryptanalytic attacks on the GEA-1 and GEA-2 algorithms."

[..]

"This unusual pattern indicates that the weakness is intentionally hidden to limit the security level to 40 bit by design."

So in other words: GPRS was intentionally backdoored.


> So in other words: GPRS was intentionally backdoored.

Note that this high level insight isn't really a contribution of the paper, given that the authors of the algorithm basically admitted this themselves. Excerpt from the paper:

    It was explicitly mentioned as a design requirement that “the algorithm
    should be generally exportable taking into account current export
    restrictions” and that “the strength should be optimized taking into
    account the above requirement” [15, p. 10]. The report further contains
    a section on the evaluation of the design. In particular, it is mentioned
    that the evaluation team came to the conclusion that, “in general the
    algorithm will be exportable under the current national export
    restrictions on cryptography applied in European countries” and that
    “within this operational context, the algorithm provides an adequate
    level of security against eavesdropping of GSM GPRS services”
Basically, export regulations from that era implied that you had to make your algorithm weak intentionally. The main contribution of the paper was to give a public cryptanalysis and point out the specific location of the flaw. I think it's highly interesting. Another excerpt:

    Further, we experimentally show that for randomly chosen LFSRs, it is
    very unlikely that the above weakness occurs. Concretely, in a million
    tries we never even got close to such a weak instance. Figure 2 shows
    the distribution of the entropy loss when changing the feedback
    polynomials of registers A and C to random primitive polynomials. This
    implies that the weakness in GEA-1 is unlikely to occur by chance,
    indicating that the security level of 40 bits is due to export
    regulations.
Also, even though operators don't deploy GEA-1 any more, according to the paper many semi-recent phones still support GEA-1 (e.g. iPhone 8, Samsung Galaxy S9, OnePlus 6T, ...). The paper describes a possible downgrade attack that allows the session key to be obtained, which can then be used to decrypt prior communication that happened under more secure encryption algorithms (section 5.3). This is big. The paper's authors managed to get a test added to the conformance test suite that checks that GEA-1 is not supported, so hopefully future phones will drop GEA-1 support.
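For anyone who hasn't met the building block here: GEA-1 is built from linear feedback shift registers, whose output is governed by their feedback (tap) polynomials, which is exactly what the quoted experiment randomises. A minimal Fibonacci LFSR sketch in Python, using a textbook 16-bit maximal-length tap set purely for illustration, not GEA-1's actual 31/32/33-bit registers or polynomials:

    def lfsr_bits(state, taps, nbits):
        """Fibonacci LFSR: emits one output bit per clock.
        state: non-zero initial register contents (nbits wide)
        taps:  bit positions XORed together to form the feedback bit
        """
        while True:
            out = state & 1
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = (state >> 1) | (fb << (nbits - 1))
            yield out

    # Illustrative 16-bit register with a well-known maximal-length tap set
    # (x^16 + x^14 + x^13 + x^11 + 1); the real GEA-1 registers differ.
    gen = lfsr_bits(0xACE1, taps=(0, 2, 3, 5), nbits=16)
    print([next(gen) for _ in range(16)])

The paper's point is that for registers A and C, the particular choice of polynomials (together with the initialisation) collapses the joint state space, something that essentially never happens with randomly chosen primitive polynomials.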


>Note that this high level insight isn't really a contribution of the paper

The problem with this statement is that nobody outside of the design staff understood how the algorithm was weak, or (AFAIK) precisely what the criteria for "weak" actually were. Moreover, after the export standards were relaxed and GEA-2 had shipped, nobody came forward and said "remove this now totally obsolete algorithm from your standards because we weakened it in this way and it only has 40-bit security", which is why it is present in phones as recent as the iPhone 8 (2018) and may be vulnerable to downgrade attacks.

There are some stupid ways to weaken a cipher that would make it obvious that something was weak in the design, e.g., just truncating the key to 40 bits (as IBM did with DES from 64->56 bits, by reducing the key size and adding parity bits). The designers didn't do this. They instead chose a means of doing this that could only be detected using a fairly sophisticated constraint solver, which may not have been so easily available at the time. So I don't entirely agree with this assessment.
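(To spell out the DES aside: a DES key is carried as 8 bytes, but the low bit of each byte is an odd-parity bit derived from the other seven, so only 56 of the 64 bits are secret. A minimal sketch, assuming the usual least-significant-bit parity convention:)

    def set_des_parity(key8: bytes) -> bytes:
        """Force odd parity in the low bit of each byte of an 8-byte DES key.
        The parity bits are redundant, so a 64-bit DES key carries only
        56 bits of entropy."""
        out = bytearray()
        for b in key8:
            data = b & 0xFE                      # top 7 bits are key material
            ones = bin(data).count("1")
            out.append(data | ((ones + 1) & 1))  # low bit makes the count odd
        return bytes(out)

    print(set_des_parity(bytes(range(8))).hex())

The point being: anyone looking at the spec can see at a glance that only 56 (or 40) bits matter. The GEA-1 weakening leaves all 64 key bits apparently in play.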


> nobody outside of the design staff understood how the algorithm was weak

And, as I mention, pointing that out was a contribution of the paper.

> or (AFAIK) precisely what the criteria for "weak" actually were

I think this 40 bit limit is well documented for other encryption algorithms. I couldn't find it in any (old) US regulation text though after a cursory search.

    The "U.S. edition" supported full size (typically 1024-bit or larger) RSA public keys in combination with full size symmetric keys (secret keys) (128-bit RC4 or 3DES in SSL 3.0 and TLS 1.0). The "International Edition" had its effective key lengths reduced to 512 bits and 40 bits respectively (RSA_EXPORT with 40-bit RC2 or RC4 in SSL 3.0 and TLS 1.0)
https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...


> And, as I mention, pointing that out was a contribution of the paper.

Maybe I didn't make it clear. The open question prior to this paper was not "precisely how did the algorithm implement a specific level of security", the question was: what is that specific level of security? This was totally unknown and not specified by the designers.

Notice that the specification doesn't define the desired security, in the same way that it defines, say, the key size. It just handwaves towards 'should be exportable'. I can't find a copy of the requirements document anymore, but the quote given in the spec doesn't specify anything more than that statement.

>I think this 40 bit limit is well documented for other encryption algorithms. I couldn't find it in any (old) US regulation text though after a cursory search.

In the United States (note: GEA-1 was not a US standard) some expedited licenses were granted to systems that used effective 40 bit keys. In practice (for symmetric ciphers) this usually meant RC2 and RC4 with explicitly truncated keys. GEA-1 does not have a 40-bit key size -- a point I made in the previous post. It has a 64-bit key size. Nowhere does anyone say "the requirement for this design is effective 40-bit security": they don't say anything at all. It could have had 24 bit security, 40 bit security, 56 bit security or even 64-bit security.

ETA: Moreover, there is more to this result than 40-bit effective keysize. A critical aspect of this result is that the attackers are able to recover keys using 65 bits of known keystream. The uncompromised GPRS algorithms require several hundred (or over 1000) bits. Note that these known plaintext requirements are somewhat orthogonal to keysize: capturing 65 bits of known keystream is possible in protocols like GPRS/IP due to the existence of structured, predictable packet headers -- as the authors point out. Capturing >1000 bits may not be feasible at all. That's really significant and interesting, and not the result one would expect if the design goal was simply "effective 40-bit key size". One has to wonder if "ability to perform passive decryption using small amounts of known plaintext" is also included in the (missing) security design requirements document. I bet it isn't.
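(To make the known-keystream requirement concrete: with a stream cipher, any plaintext bytes the attacker can predict reveal the keystream at those positions by a single XOR. A toy sketch with made-up frame bytes:)

    # Stream ciphers XOR a keystream into the plaintext, so any bytes the
    # attacker can predict (fixed protocol headers, padding, ...) hand over
    # the keystream at those positions for free.
    def recover_keystream(ciphertext: bytes, known_plaintext: bytes) -> bytes:
        return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

    # Hypothetical captured frame; the first 9 bytes (72 bits, i.e. more than
    # the 65 bits the GEA-1 attack needs) are assumed to be a predictable header.
    captured       = bytes.fromhex("8f11a3c47be209d5f0")
    guessed_header = bytes.fromhex("450000540000400040")  # made-up IPv4-ish prefix

    print(recover_keystream(captured, guessed_header).hex())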


> In the United States (note: GEA-1 was not a US standard) some expedited licenses were granted to systems that used effective 40 bit keys. In practice (for symmetric ciphers) this usually meant RC2 and RC4 with explicitly truncated keys. GEA-1 does not have a 40-bit key size -- a point I made in the previous post. It has a 64-bit key size. Nowhere does anyone say "the requirement for this design is effective 40-bit security": they don't say anything at all. It could have had 24 bit security, 40 bit security, 56 bit security or even 64-bit security.

Really? How would you get 64-bit security from a 40-bit key?


GEA-1 doesn’t have a 40-bit key.


> Basically, export regulations from that era implied that you had to make your algorithm weak intentionally

I think the question then becomes, is the regulation still satisfied if the specifics of the intentional limitation/weakness/exploit are undocumented? It's likely moot these days, but curious nonetheless.


Your list of 2017-2018 phones suggests that manufacturers have already dumped GEA-1 on current hardware.

I kind of suspect that the weakness of GEA-1 is one of those industry secrets that everybody knows but nobody talks about.


I've taken the phones from a table in the paper. There weren't any more recent phones in the list. The listed phones are probably still common, even in the first world.


>operators don't deploy GEA-1 any more

in 1st world maybe


Yeah, maybe I omitted too much in the summary. The paper mentions that 2G is used in some places as a fallback, so operators still support it if the phone doesn't support newer standards.

In fact in Germany, the country of one of the paper's authors, precisely this is happening: 3G is being turned off while 2G is used as fallback for devices that don't support LTE. Apparently there are some industrial use cases of 2G that still rely on it. In Switzerland, they are instead turning off 2G and keeping 3G as the fallback.

IDK how the situation is in third world countries, but note that India at least is in the top ten when it comes to LTE coverage. https://www.speedtest.net/insights/blog/india-4g-availabilit...


Also, 2G is less battery hungry. We use it for our professional applications on Android.


That blanket statement is unwarranted. Due to various signal limitations, newer protocols can have fewer retransmissions and may use less transmit power.

Especially later revisions of 4G, and all of 5G, due to MIMO and signal power estimates for roaming. They may also allow the use of local WiFi to save power.

It would be nice to use newer protocols on lower frequencies for range but regulation is what it is.

Trust the phone to do the right thing.


Worldwide. Quoth the paper:

> According to a study by Tomcsányi et al. [11], that analyzes the use of the ciphering algorithm in GPRS of 100 operators worldwide, most operators prioritize the use of GEA-3(58) followed by the non-encrypted mode GEA-0(38). Only a few operators rely on GEA-2(4), while no operator uses GEA-1(0).


> so hopefully future phones will drop the GEA-1 support.

And they will move to exploiting backdoors in newer standards.


As were its other ciphers.

Even in their best-case effort, 64-bit keys were in the realm of supercomputer-level brute forcing by the late nineties, and a few more cipher quality degradations and key leaks were known on top of that.


At the time when digital cellular was first designed, the main objective was to design sufficiently strong authentication to stem the rampant theft of service then occurring on the analog cellular systems. In the US this was estimated at roughly $2 billion/annum.

Encrypting traffic for privacy purposes was less important. Prior analog cellular telephony systems were unencrypted, as were analog and digital wireline services. Thus, the privacy cryptography was intended to be strong enough to make it a little more inconvenient/expensive to eavesdrop on digital cellular than it was on analog cellular or wireline services without significantly impeding law enforcement.


You make very good points. End-to-end encryption was never the goal for cellular networks in the first place.


It may not have been the goal of governments or telephone companies, but I'm sure users would have appreciated it.

There's a reason this was done so discreetly.


All of the GSM algorithms were weak, and deliberately so. It is well known that the French wanted it that way, while the German side wanted a bit more safety on account of the ongoing Cold War. So compromises like the infamous A5 algorithm were created whose security was mainly based on the algorithm itself being kept a secret - the cardinal sin of cryptography.


I think stripping governments of the ability to influence cryptography was a very important step. They keep trying to regain access, so the fight is far from over.

Those governments that did probably violated their own laws; we are dealing with criminals here.


Too many coincidences for this to be by chance.

IBM had the Commercial Data Masking Facility, which did key shortening to turn a 56-bit cipher into a 40-bit one.

Now we've got this weird interaction which similarly reduces key length.

Seems pretty obviously intentional?


I agree, it is probably intentional, but I think it remains to be proven that it was malicious?

Maybe 40 bits was seen as sufficient at the time, but are there any engineering reasons to actually shorten the key intentionally? Does it improve the transfer rate in any way?

I can't think of any, but I'm no expert, so maybe somebody else can chime in?


Depending on your definition of "malicious," I think it clears that bar. The problem is not making a good-faith argument that 40 bits was sufficient (which was done to some extent for export-approved 40-bit SSL), but that it misleads people into believing they have 64-bit encryption while it only offers 40 bits of strength against anyone who is in on the backdoor.

And as far as the other half of your question, no, there's no possible benefit (other than to the backdoor owners) from a smaller keyspace, as it goes through the motions of encrypting with the larger one.
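(A deliberately contrived sketch of that point, not GEA-1's actual mechanism: the interface accepts a 64-bit key and the machinery dutifully runs with a full-width working key, but only 40 bits of the input ever matter, so anyone in on the trick searches 2^40 candidates instead of 2^64:)

    import hashlib

    def weakened_session_key(key64: bytes) -> bytes:
        """Contrived illustration: a 64-bit key is accepted at the interface,
        but only 40 bits of it ever influence the keystream, so a search over
        2**40 candidates covers every possible session key."""
        assert len(key64) == 8
        effective = key64[:5]   # 40 bits actually used; the rest is ignored
        # expand the 40-bit value back to a full-width working key, since the
        # cipher machinery expects 64 bits everywhere downstream
        return hashlib.sha256(effective).digest()[:8]

    k1 = bytes.fromhex("0011223344556677")
    k2 = bytes.fromhex("00112233445588ff")   # differs only in the ignored bits
    print(weakened_session_key(k1) == weakened_session_key(k2))  # True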


Thanks for the explanation!


At the time 40 bits was not considered a backdoor; it was considered a weakness that would allow folks like the NSA (with big budgets and intercept capabilities) to wiretap it the way they already could other communication systems.

In some situations, rather than designing new codecs, they would just weaken the key generation side. The IBM effort there was public, to allow for easier export, but the same approach could be used to hide the weakness, which in other settings may have been beneficial. It's possible, however, that the folks involved understood what was going on to a degree, but saw it as necessary to avoid export/import restrictions.

More recently I think places like China ask that the holders of key material be located in country and make the full keys available or accessible to their security services. Not sure how AWS squares that circle unless they outsource their data centers in China to a third party that can then expose the keys for China to use.


> Not sure how AWS squares that circle unless they outsource their data centers in China to a third party that can then expose the keys for China to use.

This is in fact precisely what they do. The Beijing region is operated by Sinnet and the Ningxia region is operated by NWCD. It's documented here: https://www.amazonaws.cn/en/about-aws/china/


AWS KMS is a regional service but China is still extra special.

With KMS actually the only thing that prevents encrypt/decrypt from anywhere on the Internet is the access control logic of the service. AWS of course hold the private keys for ACM certs too and could provide them upon request. (Side note: the KMS "BYOK" functionality is almost completely useless).

Realistically you should expect any cloud-hosted infrastructure to be fully transparent to relevant authorities. This is usually not a business problem.


> Not sure how AWS squares that circle unless they outsource their data centers in China to a third party that can then expose the keys for China to use.

If you check something like the Public Suffix List [1], you will notice that Amazon maintains separate com.cn domains for several of its services in China. Amazon doesn't appear to do that for any other country. It follows that AWS in China might well be isolated from AWS elsewhere.

[1] https://publicsuffix.org/list/public_suffix_list.dat


40 bits was "export level" encryption, i.e. stuff you could safely export to foreign enemy countries, because Western secret services were certain they could decrypt it if necessary. So this is malicious without a doubt.


Indeed. The question is where the malice lies. If I make a product for US use and export use, am I malicious for telling export customers that I'll only support a weaker key due to US law? Or is the malice in the US requiring that? Can we expect companies, especially international conglomerates, to give up on potential markets in order to protest a law?


OK, try this one:

If I make a product for global use and Russian use, am I malicious for telling Russian customers that I’ll cut out information on homosexuality by Russian law? Or is the malice in Russia requiring that? Can we expect companies, especially international conglomerates, to give up on potential markets in order to protest a law?

(This is not a caricature, this is more or less what a couple of Russian acquaintances working on YouTube at Google tell me. One could also say the same, mutatis mutandis, for China or India or Saudi Arabia, but I know much less about what’s happening there. Also intentionally not the most outrageous example, only the one that hopefully hits closest to home for a Western audience.)


I see the similarities. However, there's a difference between selling a technical solution and publishing social information. There's also a vast difference between doing special work to appease one target country so you can sell there compared to choosing whether or not to lose the entire world outside your home country because you can offer some protection but not the best protection to foreign buyers. 40 bits was not nearly as easy to break in 1998 as it is now.


Far simpler: An entity (here: the US) is malicious towards its enemies, makes malicious laws and commits malicious deeds to harm them. All the others are just proxies for that maliciousness (provided they don't have their own separate agenda there).


I don't necessarily think that you're wrong - but it's a moot point since the consequences are the same.

For example, I don't think James Comey was acting with malicious intent calling for universal crypto back-doors; I do however think that he was dangerously naïve and deeply wrong-headed.

No back-door will ever remain unbreached and by baking vulnerabilities into the specification you're paving the way for malicious manufacturers to exfiltrate your network's communications as they see fit.

There's a reason that 5G rollouts have national security implications and they could've been largely avoided (metadata aside).


The real interesting question is the proprietary crapto in 4G and 5G. How long until that backdoor is proven?


Pardon my ignorance, but what are GPRS and GEA-1, and what is their significance?


GPRS is the mobile data standard for GSM mobile phones. It's from the 2G era, and is old and slow. GEA-1 is an encryption algorithm used with GPRS.


So it’s no longer in use?


I'd be surprised if there are many phones that still use it, but I'd be equally unsurprised if there were a lot of embedded devices (utility substations, gas/electricity meters, security systems, traffic light controllers, etc.) that still use GPRS modems to phone home to monitoring systems. It's simple, lightweight, and supports PPP (for devices that use serial comms) and X.25 as well as IP.


I presume phones just use what the network tells them to use.


Even then, I'd expect (perhaps naively) that the vast majority of smartphone data transmissions these days are done on top of HTTPS, so breaking the first layer would only get you so far.


That’s true, except for conversations and SMS


Aah, I assumed that this used a different protocol. Then yeah, it's pretty bad.


No, GPRS is still around; the 2 and 3G networks aren't supposed to go away until late 2022 in the US, and then there's the rest of the world to consider.


We need to consider illegitimate carrier devices, too. Will the Stingray type devices stop supporting it? If the phones still fall back to it, it's still a threat.


In the US AT&T shut down their 2G service at the end of 2019. Only T-Mobile has GSM/GPRS service active and that is shutting down at the end of this year. It's 3G services that will be shutting down through the end of 2022/3.


Yeah, like I said, currently operating 2 & 3G networks are scheduled to go away by late 2022, and GPRS is still in service.


Kinda interesting, but looking at this table, 2G phaseouts will basically take more time in developed nations than elsewhere. Probably a big case of the technological leapfrogging in developing nations, because they tend to only have access to really cheap commodity stuff (so they end up getting it replaced more quickly). Very much the "fancy hotel wifi" problem IMO.

https://www.emnify.com/en/resources/global-2g-phase-out


I think it's a timeframe thing, too. 2G data network rollouts were 20-ish years ago, and infrastructure and handsets were expensive, so less extensive deployment in the developing world. Shifting those deployments forward by 5 years puts you into 3G timeframe.


Actually, because of legacy support issues and quality of service requirements, a lot of Emergency services are still using it instead of newer tech. Also, a lot of low data consumption IoT devices use older, but also cheaper, chipsets that use this for their telemetry backbone.


My phone switches to 2G in some parts of the house, and some places in the garden. And, when the weather gets really bad, or if there's a long power outage, 2G is the only option.

I live quite far out though, so I guess it won't apply to that many.


GPRS is the general packet radio service, used for (very slow) data and SMS transfer in combination with the old GSM protocol.


So, it encrypts sms messages and is still in use?


It doesn't encrypt SMS. It's over the air encryption for packet data (not calls or SMS) like WEP or WPA on WiFi. On GSM networks SMS was transmitted in leftover signaling space on voice circuits.



IIRC "SMS over GPRS" was an optional extension


For those who don't know what GEA-1 or GEA-2 is, it's used in mobile phone traffic.

There's a clearer write-up of this discovery and its implications here: https://www.miragenews.com/a-backdoor-in-mobile-phone-encryp...


Post-Snowden this hardly qualifies as a smoking-gun surprise, but it's nice to see a concrete example identified and examined. It's like nearfield archaeology: the difference between reading accounts of the use of Greek fire in some antique battle vs finding identifiable remains of the manufacturing process.


It should not be surprising that a system that is subject to lawful intercept requirements has weak encryption, especially pre-lift of the export ban.


This is really to intercept over the air (so not necessarily for fully 'legal' intercept but also intelligence services).

If law enforcement needs to wiretap a specific phone/SIM they only need to request it to the operator. Over-the-air encryption is irrelevant.

Nowadays operators can duplicate all packets and send copies to law enforcement/security services in real time so that they can monitor 100% of a given phone's traffic from their offices.


Exactly, this is for third-party access. Or at least I've assumed for a long time that all communications using standard tools have always been open to scrutiny.

It's nothing new either: for as long as there has been a postal service, your mail could be opened by order of this or that government official. Not even the Pony Express was immune to that ;-)

If you want greater secrecy do like the paranoid intel community has been doing since at least the cold war and exchange messages on seemingly disconnected channels - say 2 different newspaper columns, wordpress blogs or even websites made to look like robot-generated SEO spam, using code that was previously agreed in person, with everyone keeping hardcopies if necessary.


It was supposed to take 6 hours using Finland's fastest supercomputer at the time. So said my professor.


The paper mentions that a reference implementation of GEA/1 and GEA/2 was used to generate the keystream data, but does not mention how to get hold of it.

Does anyone here know? Or am I correct in assuming the implementation is kept under pretty heavy lock and key due to its proprietary nature?


Why the heck don’t consumers have a seat at the table while all the 5G technology is being developed? I want open protocols and publicly documented cryptosystems based on published protocols. Instead we are just enabling the surveillance state.


Consumers don't have anything useful to bring to this table.

Historically, the realisation that you need outside cryptographers (not consumers) if you actually want to do anything novel with cryptography† was slow to arrive.

Even on the Internet, for PGP and SSL there was no real outside cryptographic design input. In SSL's case a few academics looked at the design of SSLv1, broke it, and that's why SSLv2 is the first version shipped. Only TLS 1.2 finally had a step where they asked actual cryptographers "Is this secure?", and that step came after the design work was finished. TLS 1.3 (just a few years ago) is the first iteration where they had cryptographers looking at the problem from the outset, and the working group rejected things that cryptographers said can't be made to fly.

And TLS 1.3 also reflects something that was effectively impossible last century, rather than just a bad mindset. Today we have reasonably good automated proof systems. An expert mathematician can tell a computer "Given A, B and C are true, is D true?" and have it tell them either that D is necessarily true or that in fact it can't be true because -something- and that helps avoid some goofs. So TLS 1.3 has been proven (in a specific and limited model). You just could not do that with the state of the art in say 1995 even if you'd known you wanted to.
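(A tiny illustration of the flavour of that machine-checked reasoning, written in Lean; real TLS 1.3 analyses use dedicated protocol-verification tools and far richer models, this is only the "computer says yes or no" idea:)

    -- A toy, machine-checked instance of "given A, B and C, is D true?":
    -- if A ∧ B ∧ C holds and implies D, then D holds. The checker either
    -- accepts the proof term or rejects the file; there is no in-between.
    example (A B C D : Prop) (h : A ∧ B ∧ C) (himp : A ∧ B ∧ C → D) : D :=
      himp h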

Now, we need to get that same understanding into unwieldy SDOs like ISO, and also into pseudo SDOs like EMVco (the organisation that makes "Chip and pin" and "Contactless payment" work) none of which are really getting the cryptographers in first so far.

† "But what I want to do isn't novel". Cool, use one of the existing secure systems. If you can't, no matter why then you're wrong you did want to do something novel, start at the top.


I don't think that's true about SSL/TLS. SSLv2, the Netscape protocol, was a shitshow, but SSL3, its successor and the basis for TLS, has Paul Kocher's name on the RFC. The mixed MD5/SHA1 construction is apparently Kocher's doing.


If you're claiming that Kocher's involvement in SSLv3 is enough, note that SSLv2 which you called a "shitshow" is the work of Taher Elgamal, who is also a famous cryptographer.

I claim the correct approach isn't "We should hire a cryptographer", although that wouldn't hurt a lot of designs, but "We need a lot of cryptographers beating on this". Because of that problem about the easiest person to fool being yourself. That means the outside world needs a good look, and that's one reason the IETF was able to get there first: it's all on a mailing list in public view (well, these days it's on GitHub, but if you're allergic, that's summarised to the list periodically).

One of the hidden advantages TLS 1.3 has over SSLv2 is that of course today TLS is famous. If you're an academic in the area TLS 1.3 work was potentially a series of high impact journal papers, and thus would do your career good, whereas I can't think even Hellman (who had worked with both Elgamal and Kocher at Stanford) would have had a lot of time for SSL in the 1990s.


Right, so I guess I'm wondering how you reconcile your diagnosis of SSL/TLS needing input from cryptographers with the actual history of TLS. You claim, for instance, that TLS 1.2 was the first instance of the protocol that was actually vetted by cryptographers, which seems clearly not to be the case.


I could nitpick that I said cryptographers and we've seen one cryptographer for SSLv2 and one for SSLv3 https://www.youtube.com/watch?v=OwHGE7uhjco

But really that's fair. And it's even possible that the key difference was only ever that we learned along the way how to do this and so any bunch of fools might have developed TLS 1.3 knowing what we did by then, while not even a prolonged public effort could have made SSLv3 good. Perhaps if that's right in ten years every Tom, Dick and Harry will have a high quality cryptographically secure protocol that isn't just TLS...

But I think what I was getting at is that at last TLS 1.2 had a bunch of outside cryptographers critiquing it. It's just that they're too late because it was finished. Some of the things that today are broken in TLS 1.2 weren't discovered years later, they were known (even if not always with a PoC exploit at the time) at roughly the time it was published. Having such critiques arrive during TLS 1.3 development meant the final document only had the problems known and accepted by the group [such as 0RTT is inherently less safe] plus, so far, the Selfie attack. Not bad.


Consumers in this context doesn't literally mean consumers, it means advocates working in the interest of consumers. Advocates who would do things like insist that the cryptography be looked at by independent experts.


https://www.3gpp.org/about-3gpp/membership

e: also "3GPP TS 33.501" if you want to read about 5G NR encryption; it's open to us all, the time it takes to read through it all notwithstanding


How does "open protocols and publicly documented cryptosystems" help when the carriers are mandated by law to have backdoors so they can fulfill "lawful intercept" requests? You're better off treating it as untrusted and using your own encryption on top (eg. Signal).


Implementing a legally-mandated wiretap requirement like CALEA doesn't require you to break your protocol (i.e. the transport layer). It is implemented at the application layer, on the server. You can still have cryptographically secure communication between client and server while complying with wiretap laws.

If you're concerned about your government intercepting your communications with a warrant, there's not really anything you can do except move to an E2E encrypted app like Signal. But if you're OK with only being monitored if a judge signs a warrant, then the GP's suggestion helps.

These protocol backdoors are more dangerous than application-level wiretaps because anyone can find and use them; they might be private at first, but once they are discovered there's usually no way to fix them without moving to a new protocol (version).

Protocol breaks seem to me to be more in the category of "added by the NSA through subterfuge or coercion in order to enable illegal warrantless surveillance", which I find much more concerning than publicly-known processes with (at least in theory) established due process like CALEA wiretaps.

> You're better off treating it as untrusted and using your own encryption on top (eg. Signal).

But yes, this is a sensible approach to the world-as-it-currently-is.


Especially a secret judge, in a secret court.

I always consent to that.


It doesn't help against the local authorities. But it will help against criminals and foreign authorities. E.g. most of the worlds capitals are packed with IMSI-catchers and passive eavesdropping devices operated from embassies. This spying on foreign soil would be impossible if mobile phones were any good with regards to security.

And signal isn't really very helpful in this scenario, because it doesn't properly protect against MitM attacks.


How does signal fail to protect against MITM attacks? Given that it's end-to-end encrypted, wouldn't an attacker have to force a change of keys to MITM you? In which case you should be notified by signal that the keys were recently changed.


Signal only implements a very weak form of trust-on-first-use for keys. So there is no authentication and no security for a first contact. Subsequent communication can be protected by meeting in person and comparing keys, an option hardly anybody knows about. Signal doesn't ever tell you about this necessity and doesn't have any option to e.g. pin the key after manual verification or even just set a "verified contact" reminder.

Being warned about a changed key is only sensible at all if the key before it was verified. Otherwise, how do you know everything wasn't MitMed in the first place? Also, most users ignore the warning if the next message is "sorry, new phone, Signal doesn't do key backups", which everyone will understand and go along with, either because they don't know about the danger there, or because they know Signal really doesn't do shit to provide authentication continuity through proper backups.

Signal is only suitable for casual communication. Against adversaries that do more than just passive dragnet surveillance, Signal is either useless or even dangerous to recommend. It is intentionally designed just for this one attack of passive dragnet surveillance, nothing else. Please don't endanger people by recommending unsuitable software.
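(A generic sketch of the trust-on-first-use pattern being criticised here, not Signal's actual code or wire format: the first key seen for a contact is pinned silently, so an attacker who sits in the middle from the very first message never triggers a warning; only later key changes do:)

    # Generic trust-on-first-use (TOFU) pinning, illustrative only.
    pinned_keys = {}   # contact id -> key fingerprint

    def on_key_seen(contact, fingerprint):
        if contact not in pinned_keys:
            pinned_keys[contact] = fingerprint   # first contact: accepted silently,
            return "accepted (unverified)"       # even if it belongs to a MitM
        if pinned_keys[contact] != fingerprint:
            return "WARNING: key changed"        # only later changes are visible
        return "ok"

    print(on_key_seen("alice", "aa:bb:cc"))   # accepted (unverified)
    print(on_key_seen("alice", "aa:bb:cc"))   # ok
    print(on_key_seen("alice", "dd:ee:ff"))   # WARNING: key changed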


> So there is no authentication and no security for a first contact.

Note that the only alternative is to trust a third party to identify people to you. I guess you might have forgotten to mention that. Or, as seems more likely, you don't realise you're trusting a third party... But of course if you do trust a third party to identify people to you, you wouldn't need this Signal feature, so...

> Signal doesn't ever tell you about this necessity and doesn't have any option to e.g. pin the key after manual verification or even just set a "verified contact" reminder.

Signal does, in fact, explain how this works, provide a "Verified" flag you can set on contacts, and automatically prompt you if the Safety Number changes for contacts you've marked as verified, as well as removing the flag if that happens.

> Signal really doesn't do shit to provide authentication continuity through proper backups.

Leaving copies of your data around to enable "authentication continuity" aka enable seamless Man-in-the-Middle attacks is exactly opposite to Signal's actual goal here.


> Note that the only alternative is to trust a third party to identify people to you.

No, the proper alternative is blocking or discouraging sensitive communication until an in-person verification has taken place.

Also, you are always trusting a third party. You have to trust the Signal people (maybe), you have to trust Intel and their SGX (lol, look for some papers on those) and you have to trust your phone vendor. Proper security educates people about whom they are currently having to trust. Spinning it like no third party needs to be trusted for Signal to operate is dishonest.


Earlier you claimed that users will just ignore safety measures, and now you say that of course they'll obey them.

> You have to trust the Signal people (maybe), you have to trust Intel and their SGX

You don't have to trust either. SGX only gets involved if you are willing to trust it in exchange for having quality-of-life features which are optional. The sort of person who never verifies Safety Numbers probably should take that deal, the sort of person who needs Safety Numbers to protect them from the Secret Police should consider carefully.

The most important thing SGX is doing for you is making guesses expensive. If your Signal PIN is a 4-digit number then SGX's expensive guesses make it impractical for an adversary to just try all the combinations, but if your Signal PIN is 12 random alphanumerics then that's too many guesses to be practical anyway even without SGX.
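(Back-of-the-envelope numbers for that, assuming a made-up offline rate of 10^10 guesses per second once the SGX rate limiting is out of the picture:)

    guesses_per_second = 10**10   # assumed offline guessing rate

    for name, space in [("4-digit PIN", 10**4),
                        ("12 random alphanumerics", 62**12)]:
        seconds = space / guesses_per_second
        print(f"{name}: {space:.3g} candidates, "
              f"~{seconds:.3g} s (~{seconds / 31_556_952:.3g} years) to exhaust")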


Are there any reasonable case studies of individuals or groups being targeted by MitM attacks on Signal?


>And signal isn't really very helpful in this scenario, because it doesn't properly protect against MitM attacks.

I suppose it depends on where exactly the Middle here is, but for basic MitM of the physical network, if nothing else shouldn't the TLS connection to Signal's servers be sufficient?


Encryption in cellular systems is to protect over-the-air signals. It's irrelevant when it comes to 99% of legal interception because for that law enforcement simply asks the network operator to share the plaintext traffic from within the network.

If you want no one to be able to eavesdrop, then yes, you have to have your own encryption on top. These days a lot of data already goes through TLS, but for instance standard voice calls are obviously transparent to operators.


> Why the heck don’t consumers have a seat at the table while all the 5G technology is being developed?

A lot of these standards are generally created by industry consortia, and participation in standards setting is limited to companies who are members.

This isn't the IETF where any rando can join a mailing list and chime in on a proposed standard/draft.

IEEE (Ethernet) is somewhere in the middle: you have to be an individual member (though may have a corporate affiliation), and make certain time/attendance commitments, but otherwise you can vote on drafts (and many mailing lists seem to have open archives):

* https://www.ieee802.org/3/rules/member.html

* https://www.ieee802.org/3/


Because of money, and/or a mix of politics and imagined power.

In addition, the systems are vast and complex, too big for one person to capture and understand. This means you get into the area of design teams, business teams, politics and geo stuff by default. Even re-implementing the specification (or most of it) in FOSS is extremely hard, and that's with all the information being publicly available. Designing it is an order of magnitude harder.

Besides the systems in isolation, we also have to deal with various governments, businesses, and legacy and migration paths in both cases.

Ironically, because all of this and the huge amount of people involved, consumers _are_ involved in this. It's not like the GSM, 3GPP etc. don't use their own stuff.


> I want open protocols and publicly documented cryptosystems based on published protocols. Instead we are just enabling the surveillance state.

I think you just answered your own question.


We don't tend to vote in our own best interest, whether politically or economically.


Because telecoms have this government supported oligopoly.

It's always been fascinating to me how we have this parallel infrastructure between the open internet and the locked down telecoms. The free for all that is the internet has evolved much more robust protocols, but the telecoms continue to operate in their own parallel problem space, solving a lot of the same problems.

They also fight tooth and nail to prevent being dumb pipes.


I think you're mixing up a number of valid concerns which have different threat models: for example, the mass surveillance risks tend to involve things like carriers doing network traffic analysis or location monitoring, and things like open protocols or crypto don't really change the situation there since it doesn't depend on breaking encrypted traffic (unlike in the pre-TLS era when traffic wasn't encrypted).

Similarly, you can develop and document a crypto system but unless you're publicly funding a lot of adversarial research that doesn't prevent something like Dual_EC_DRBG being submitted in bad faith. I haven't seen any indication that the NIST team thought they were developing open standards — it's not like they sent a pull request from nsa/add-secret-backdoorz — and the level of effort needed to uncover these things can be substantial, requiring highly-specialized reviewers. That also hits all of the usual concerns about whether the cryptographic algorithm is secure but used in an unsafe manner, which can be even harder to detect.

The biggest win has simply been that these issues get a lot more public discussion and review now than they used to, and the industry has collectively agreed not to trust the infrastructure. Switching to things like TLS has done more to protect privacy than all of the lower level standards work and that's nice because e.g. a Signal or FaceTime user is still protected even if their traffic never touches a cellular network.


> something like Dual_EC_DRBG being submitted in bad faith.

That's a poor example, because everyone (who both knew anything about cryptography and actually bothered to read it) knew or suspected it was garbage. As [0] put it:

> > and Dual_EC_DRBG was an accepted and publicly scrutinized standard.

> And every bit of public scrutiny said the same thing: this thing is broken!

0: https://blog.cryptographyengineering.com/2013/09/20/rsa-warn...

see also: https://blog.cryptographyengineering.com/2013/09/18/the-many...


True: the angle I was focusing on is the demonstrated interest and willingness to spend time and credibility on it (see also Crypto AG). It’s safe to assume that will continue as society moves further online so we should assume well-funded concealed attempts to compromise implementations will continue.


> the angle I was focusing on is the demonstrated interest and willingness to spend time and credibility on it

Ah, fair enough. My point was that it's at least not demonstrated, if not outright untrue, that you need a lot of adversarial research to prevent something like Dual_EC_DRBG (specifically) being submitted in bad faith - you just need to actually bother to read the crypto specification you're considering adopting, and have the bare minimum competence to notice that there's no benefit to a number-theoretic design besides the ability to prove security relative to some presumed-hard task, and that there is no such proof.


The reason consumers don't have a seat at the table is that the techology has nothing to do with consumers, other than harvesting consumer's money and data.


"Consumers" as an opaque crowd wouldn't be able to judge anything this involved. The discussion partners we're talking about are mostly politicians, who are primarily educated in the field of convincing despite having no background at all, referring to higher authority et al. Our typical, maybe even scientifically educated consumer wouldn't be able in the least of realistically forming an opposing opinion, and even if she/he did, it would be a very hard sell against battle-hardened politicians.

This field is too complex even for quite a few if not most IT-centric professions.

From my point of view, we have to put the blame on us. We should treat anyone supporting invasive technologies (in the sense of subverting privacy and basic human rights) as an outcast.

The impression I get (maybe not primarily from HN) is the opposite. We'd not take that job (pay...), but still we appear to honor their efforts. Or we _do_ take that job because of pay on the opposite spectrum.


> Instead we are just enabling the surveillance state.

It's a sad state of affairs. I honestly believe people actually want this, or have at least been conned into wanting it.

Giving up liberty (in this case, privacy) for the guise of safety is all the rage these days


Because, it's purposely this way.


These algorithms are pretty weak but can still be used for securing communications with current phones.


If the security level is artificially limited to 40 bits as the article suggests, then it is not good for securing any communications. It was relatively easy to crack DES-56 at rest before the turn of the century. A teraflop machine can defeat 40 bit encryption on the order of seconds.

Edit: fixed the incorrect "RSA-56" encryption. Thanks graderjs.
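(Rough numbers behind that, assuming a single core testing on the order of 10^9 keys per second, which is conservative for a lightweight cipher and ignores how trivially the search parallelises across machines:)

    rate = 10**9   # assumed keys tested per second on a single core

    for bits in (40, 56, 64):
        seconds = 2**bits / rate
        print(f"{bits}-bit exhaustive search: ~{seconds:.3g} s single-core "
              f"(~{seconds / 3600:.3g} h)")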


I think you might mean DES-56, or RC5, but not RSA.


To be fair you could probably break 56 bit RSA with a pen and paper.


Pen and paper? I doubt it. You're welcome to try (show working ;p ;) xx) here's a sample RSA-56 problem I created for you:

39551945224675453 (56-bit semiprime)
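(Pen and paper is a stretch, but any laptop gets there with naive trial division; a sketch against the challenge number above, worst case around a minute of pure Python:)

    def smallest_factor(n: int) -> int:
        """Naive trial division; for a 56-bit semiprime the worst case is
        roughly 10**8 odd candidates."""
        if n % 2 == 0:
            return 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f
            f += 2
        return n  # n itself is prime

    n = 39551945224675453
    p = smallest_factor(n)
    print(p, n // p, p * (n // p) == n)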


I imagine that technically RSA-56 would be easy to crack as well.


Yes, good catch. I did mean DES-56. I got the encryption name confused with the company that issued the challenge. The RSA algorithm is completely different.


Also, note, GEA-2 doesn't seem to have the same kind of key weakening. I have no idea about the relative prevalence and configurability of those algorithms in the wild, though.


makes me wonder about GEA/3, GEA/4 and GEA/5.



