Hacker News
The second crypto war and the future of the Internet (weforum.org)
167 points by grey-area on Jan 22, 2015 | 70 comments



> If you’re using an up-to-date iPhone to send a text message to another iPhone user, and the little bubble containing your message is blue, then neither Apple, nor your ISP, nor any law enforcement agency tapping into the transmission line is likely able to read the contents of the message.

That is incorrect and dangerous advice to give users.

As has been discussed before, as long as Apple controls the public key exchange process, they have the capability to intercept and decrypt your messages if they wish, or are compelled to do so.
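To make that concrete, here's a toy sketch of why a blindly trusted key directory can insert itself into an "end-to-end" exchange. All names and parameters are hypothetical and deliberately insecure; this models the trust structure being discussed, not Apple's actual protocol:

```python
# Toy model of a provider-controlled key directory enabling interception.
P = 0xFFFFFFFB  # a small prime; real systems use far larger groups
G = 5

def keypair(seed: int):
    """Toy Diffie-Hellman-style keypair (illustrative only)."""
    priv = seed % P
    return priv, pow(G, priv, P)

def shared_secret(priv: int, other_pub: int) -> int:
    return pow(other_pub, priv, P)

directory = {}  # the provider's key-lookup service, trusted blindly

alice_priv, alice_pub = keypair(123456789)
bob_priv, bob_pub = keypair(987654321)
mallory_priv, mallory_pub = keypair(555555555)  # the provider/attacker

# The directory hands Alice the attacker's key instead of Bob's:
directory["bob"] = mallory_pub

# Alice "encrypts to Bob" with whatever key the directory returned,
# so the attacker derives the same secret and can read everything:
assert shared_secret(alice_priv, directory["bob"]) == \
       shared_secret(mallory_priv, alice_pub)
```

Since clients never see or verify each other's keys out of band, nothing in this flow alerts Alice that the substitution happened.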


Apple also controls the OS on both sides, and could silently push a "special" update to virtually anyone. If e.g. Abu Bakr al Baghdadi had an iPhone and was communicating with operations, and that phone could be identified, it'd be a race at NSA/TAO between compelling or technically subverting Apple to push such an update (if there is no other better solution), and the incoming Hellfire missile from CIA.

That said, iMessage's design and even implementation are utterly amazing compared to the normal quality of such systems from big companies. It is by far the most secure large-scale, easy-to-use communications system available today. I think Moxie-enhanced WhatsApp will probably surpass it when available, but that's an add-on application; iMessage is an out-of-the-box default.


No matter how good iMessage may be, it's still a walled garden. It only provides privacy for the relatively privileged owners of iOS devices.


> Abu Bakr al Baghdadi had an iPhone and was communicating with operations,

I don't know what is more stunning; the blatant racism, or the fact that nobody seems to care about it.


I don't know what is more stunning, that you don't know who the man is OR how to use a semicolon OR that nobody else seems to care about either.


How is that racist?


Anonbanker doesn't know who Abu Bakr al Baghdadi is. So, if you think it's just some made-up name, it does come across as racist.


DanBC speaks truth, and I stand with egg on my face for playing the racism card, when it turned out I was actually the racist one for assuming the name was a joke.


Yeah, it's the name of the leader of Islamic State. That NSA and CIA are competing to kill him is just a statement of fact (which he'd gladly repeat if asked); it's not particularly racist, and doesn't even say anything about agreeing with the targeting decision.


When Apple's privacy policy was updated recently, the warrant canary was removed. Reading between the lines, it looks bad :)


I'd never heard that term before. Here's the Wikipedia article for anyone interested:

https://en.wikipedia.org/wiki/Warrant_canary

That's a really interesting concept; thanks for bringing it to my attention.

If anyone's interested here's the first article I found discussing Apple removing their warrant canary:

https://gigaom.com/2014/09/18/apples-warrant-canary-disappea...


There's also the fact that iCloud backups are encrypted with keys that Apple controls. So even if messages are securely end-to-end encrypted in transit, Apple can produce any available iCloud backups.


As I understand it, Apple can subvert the privacy of future messages by silently adding themselves as another "device" for your messages, but this does not allow them to intercept and decrypt messages that have already been generated (i.e. before they add themselves). Correct?


I think the "silently" part is not possible with the current iOS codebase. They could patch that, but it's probably not even necessary since I occasionally have my desktop pop up as a "new device" again.

It loses auth for whatever reason (a bug? an OS update?), has me sign in again, and then a "new device" message with my desktop name pops up on all of my mobile devices. I really have no way of telling whether it is a reauth from my desktop or a third party that happened to know my desktop name.


What advice could one give that wasn't dangerous?


Assume that everything you send via iMessage can be intercepted [1]

[1] At least by an adversary with sufficient resources, leverage, and impunity (NSA, GCHQ, etc.)


Nobody has anything that would hold up against your caveat, with the possible exception of a few larger countries. If you are the named target of the NSA, they will find a way into your communications.

This war really has little to do with finding a way into a named person's communications; it has much more to do with finding the names of people whose communications they want to see.

For that, iMessage is devastating (assuming it is implemented and behaves as marketed, a pretty big assumption).


> If you are the named target of the NSA, they will find a way into your communications.

This is completely false. Use good cryptography properly, and make sure your system isn't hackable. PGP + Tails is a popular combination for Glenn Greenwald and others like him.


"Isn't hackable" isn't feasible for the layman, and I'd argue not even for security professionals. You can take measures to make a system difficult to hack, but to say those measures can make it unhackable is false as well.

There are just too many layers of abstraction to exploit to say a system isn't hackable.


Greenwald also used cryptocat at a time when its crypto could have been broken by an NSA intern in an afternoon. "Greenwald uses this" is perhaps not a ringing endorsement of a product's security.


Until an intelligence agency manages to bug your home or image your machine like it seems they sneakily did with Ross Ulbricht (by stealing his laptop).

And in a lot of countries you are automatically jailed for not handing over your encryption keys when asked.


Could you provide a citation for "automatically jailed for not handing over your encryption keys" - I have not heard that before.



The RIP Act in the UK. You can certainly be detained (witness what happened to Glenn Greenwald's partner at LHR).

Not automatically jailed, but you can be jailed if they demand the key and you don't hand it over.


Tails is definitely hackable through the browser.


Slightly less highfalutin: an adversary with access to a relevant Apple employee?


The difference is that most devices are now running strictly proprietary software. The crypto war and the "war on general purpose computing" mean that privacy is being attacked from two fronts, not just one.

In the '90s, you could distribute code for PGP in a book, and effectively circumvent regulation. Now, you can ban encryption apps in the walled garden and effectively shut down an entire genre of computation.


I agree with you, but the problem is that the OSS/free world is not willing to confront the simple fact that without a revenue model it is not possible to compete with walled gardens or proprietary software.

Sure you can build functionally good software, like PGP, but making something work is only maybe 25% of the way to delivering a product. The other 75% of the work is in getting the user experience right, packaging, supporting all the different platforms, etc. Walled gardens are winning because they deliver orders of magnitude better UX than open systems, and that's because they have a revenue model that can allow them to pay people to do the immense amount of work required to polish things to that extent.


I think that argument needs to be retired. Here are some recent examples of a revenue model that competes just fine with walled gardens, proprietary software, and proprietary/compromised hardware:

https://www.crowdsupply.com/purism/librem-laptop (successfully met its funding goals)

https://www.crowdsupply.com/kosagi/novena-open-laptop (arguably more niche and yet it grossed an incredible 305%)

http://ccc.de/ (not all revenue models need to produce hardware)


I think you're enormously under-estimating the cost of getting UX right.

Creating really beautiful UX is a brutal slog through the tar pits of hell. The amount of testing, testing, testing, testing, TESTING and endless revision that's required to iron out every little nit-picky detail of a user's interaction with a product is immensely time consuming and expensive. You need top-tier designers working for years with large teams of testers, beta users, etc.

For example: I could not imagine fielding a competent turn-by-turn directions app for a phone for less than $10-$15 million in funding, minimum. Then you'd need millions a year minimum in recurrent revenue to finance the endless polish and nit-picky improvements that would be required year over year to keep it competitive with the alternatives.

Tiny details matter. For example, when you start turn-by-turn directions on Google's Maps app for Android, it doesn't say anything right away if you're already going along the correct route. It's silent until it's time to turn. This caused me to assume something was wrong with it or my volume was down. Almost made me throw my phone out the window.

Think about what a tiny detail that is. Google, with all its funding and team, didn't catch that. Apple got it right: their Maps app starts talking immediately, letting you know it's working.

It's a new age -- the age of overall user experience. Features are now a commodity. They don't matter all that much. Getting something working is only step one of ten. The other nine steps are to get it working well.


Creating good UX is certainly an iterative process, but the millions-and-years claim is a bit dramatic. It is definitely possible to 'over-spend' on UX, for example Google Maps was nearly perfect 3 years ago, but is increasingly unusable because "we did nothing, everything is already awesome" doesn't look good on a progress report. The trick is achieving a happy balance of features and usability while not wasting time/money/goodwill in the endless pursuit of an idealized global optimum.


Sure, the UX nail can certainly be counter-sunk. But overall I think my estimates are pretty close.

The thing that makes UX really hard compared with features and capabilities is that there's no straightforwardly "right" answer that can be discerned Platonically in isolation.

UX is a matter of taste and people, not algorithms and code. It's not something that responds to the kinds of Moore's Law efficiency gains that can be achieved in the CS realm -- it requires the irreducibly expensive commodity of human time and effort.

The fact that this kind of time and effort investment is irreducibly expensive is why huge mega-corporations like Apple and Google have such an intrinsic advantage in this department. It's easy to understand why they dominate the mobile space -- when you're interacting with a mobile device, your appetite for poor UX is quite a bit lower than when you're sitting at a desktop.


> huge mega-corporations like Apple and Google have such an intrinsic advantage

They really don't. All it takes is one designer with good taste, or a developer frustrated with the status quo. Sure, big companies have the resources to do a lot of in-house testing and iteration, but that does not suddenly create taste.

> easy to understand why they dominate the mobile space

The reason for the current duopoly in mobile OSes has less to do with pure UX, and more to do with successful jockeying for market position across a multitude of factors (yes, including UX). WebOS? Windows Phone? Both arguably had/have superior UX, but still lost in the market.

Anyway, the OS vendor is irrelevant in this context, notwithstanding app investment due to market position. One can have horrible app UX on iOS, or terrific app UX on Android -- and Android is really not that great overall, especially with older and/or lower-end devices.

Moreover, there are plenty of successful third-party apps, regardless of platform, most of which haven't invested millions and years into UX.


For individual apps and sites, you're right. Individuals with good taste can rival and sometimes outperform the larger companies.

However, large companies like Apple can deliver an entire suite of software to users that has high quality, consistent UX.

This means consumers who aren't used to technology only have to learn a lot of things once. On the other hand, people who've grown up using Apple products already have an idea how to use new ones.

Individuals and small organizations can create some amazing things but large corporations definitely have some intrinsic advantages, especially when it comes to larger projects.


Is making the key options or UX variables open in Settings a way to short-circuit these testing expenses? Your OSS target audience should be more than comfortable customizing their own experience.


Well, features like privacy and security are not really a commodity.


This. I was coming here to write this post. Walled gardens also mean that it is difficult to ensure you are actually using crypto with no backdoor, versus the appearance of using crypto.


How can you tell when a closed source application is really secure? Is there a way to be 100% sure or does it always rely on a certain amount of trust in the developers/distributors? Sorry if it seems like a dumb question..


You can't, at least not practically. You could do debug/trace analysis and reverse engineer but that'd be a massive undertaking.

But it's also quite hard to tell if an open source application is secure. Do you have time to audit thousands of lines of impenetrable C/C++/Java/Go/etc. code? The rash of OpenSSL bugs we've had recently shows that "Linus's law" really doesn't work -- the fact that the code can be audited openly doesn't mean anyone is going to spend the time to actually do it. People are busy.

It comes down to the question of "how paranoid are you?"

If I were highly paranoid I'd take a defense in depth strategy -- layer multiple cryptosystems, possibly on multiple OSes/platforms, etc. But that level of paranoia comes at a price of course, both in setup time and overhead.

iMessage is pretty good for everyday use. PGP is probably better since you have greater transparency, but it is less convenient. SSH and SSL are also pretty good for everyday use. But I would trust none of those things to protect me from the NSA or a very well-funded criminal adversary.


Open source still has its security benefits. When an exploit is publicized, the "fix" for it can be highly scrutinized, and people can comment meaningfully on whether it actually fixes it, and can contribute their own. I remember looking at diffs and the accompanying analysis when heartbleed was out, and it was informative and reassuring.

When an exploit in closed source software is publicized, we have to take their word for it that it gets fixed.


For some types of security, a proof of correctness could be used. (This proof could be machine verified)

This would be expensive ( in time and effort spent), but is possible for some things.

With things like crypto though, AFAIK most crypto is based on certain assumptions which seem to be true, even after a high degree of investigation, but for which a proof has not been found. In these cases, it would not be feasible to prove the program to be "secure", but one could prove that the program correctly implements the algorithm.

But then there's also timing attacks and the like, and I don't know that there are tools for proving things about the running time of a program, so as to prove that the running time doesn't leak information?

But for some things, one can trust a program to be correct if it comes with a proof that it is correct, and you agree with what "correct" is chosen to mean.
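On the timing-attack point: while proving running-time properties of a whole program is hard, some leaks have well-known point fixes. A minimal Python illustration of constant-time comparison (the naive version is shown only for contrast):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so an attacker
    # measuring response time can guess a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest's running time does not depend on *where*
    # the inputs differ, closing that side channel.
    return hmac.compare_digest(a, b)

assert constant_time_equal(b"secret-tag", b"secret-tag")
assert not constant_time_equal(b"secret-tag", b"secret-taX")
```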


The first sentence sums up what's wrong with government reactions to crypto.

>[...] for the first time in history, anyone could encode and exchange a message that no law enforcement agency had the technical ability to intercept and decode.

Here's one famous counterexample: https://en.wikipedia.org/wiki/One-time_pad

People have always been able to communicate secrets. I don't underestimate the value of doing it conveniently, and obviously many people have been caught by lazily talking on a wiretapped phone, but the crypto war is only a war on convenience. There can't possibly be an effective war on communicating secrets unless people just give up trying. If governments want to fight it, they should at least be honest and say they're fighting convenient crypto, not crypto in general. They're fighting crypto that average people can access easily, crypto that makes everyday privacy achievable, not crypto used by coordinated criminals. At least it would make more sense and could lead to a reasonable debate.
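For reference, the one-time pad is only a few lines of code; the entire cost is a truly random key as long as the message, used exactly once. A minimal Python sketch:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    # The key must be truly random, as long as the message, and never reused.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption mirrors encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(key, ct) == b"attack at dawn"
```

The inconvenience is exactly the key management: both parties must exchange and protect a pad as long as all the traffic they will ever send, which is why the war on convenient crypto matters.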


Given the "code == speech" SCOTUS ruling, they may not be able to outlaw crypto tech itself, but they can damn sure throw you in a cage for using it, based on ISP monitoring and mandatory key disclosure.

As with guns, the criminals will simply ignore the law. Anti-crypto legislation will be aimed squarely at We The People.


I agree, but I also think if that happens, crypto will adapt to things like custom protocols built to be indistinguishable from regular traffic.

The great thing about code is that there's usually a way to change the rules. The scary part is that it leads to an endless arms race.


> crypto will adapt to things like custom protocols built to be indistinguishable from regular traffic.

The name for that is steganography, and it has been used since ancient times.

The best way to ensure the secrecy of a message is to prevent the adversary from learning of the message's existence. Second best is to provide a bunch of big dumb decoys for the real message to hide among.
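As a toy illustration of the idea, here's a deliberately naive least-significant-bit scheme in Python (trivially detectable in practice; real steganography works much harder to look like innocent traffic):

```python
def hide(cover: bytes, message: bytes) -> bytes:
    """Embed message bits into the least-significant bits of cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(cover: bytes, nbytes: int) -> bytes:
    """Recover nbytes of hidden message from the cover's LSBs."""
    bits = [b & 1 for b in cover[: nbytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(nbytes)
    )

stego = hide(bytes(64), b"hi")
assert reveal(stego, 2) == b"hi"
```

In a realistic cover (image pixels, audio samples), flipping the lowest bit changes nothing perceptible, which is why LSB embedding is the textbook starting point.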


Despite what Apple says, it can read your messages. The NSA could get a secret court order that says "Please silently disable encryption for user Dave Jones in your next software update and forward his messages to us. Tell no-one."

I don't see how Apple could avoid that - the government already uses secret legal means to make them do things they don't want to and prevent them from telling anyone. This is no different.


The Cloudflare CEO talks about "SSL added and removed here" [0].

Then I go to their website [1], and see that unfortunately they forgot to add this annotation in the description of their product [2].

Could anyone tell them please :)?

[0] https://agenda.weforum.org/wp-content/uploads/2015/01/prince...

[1] https://www.cloudflare.com/ssl

[2] https://www.cloudflare.com/images/ssl/ssl.png


They are very clear and upfront about it, and there are many cases where their simplest offering (the one with no encryption behind the edge server) is good enough.


Except the one with self-signed certs doesn't really make a difference, and even for the last one, as far as I know, they still decrypt and re-encrypt traffic with their own key to be able to do caching, etc.

Meaning it's still not end-to-end, which is what you would expect when you hear SSL. Where are they upfront about that?


I've created a little series of lessons to teach our Year 8 (12yo) students about cryptography, as a direct response to all this rubbish.

https://bournetocode.com/projects/8-CS-Cryptography/

If there is a second crypto war, I want my students to have a little bit of a head start.


This is cool, thanks for sharing. I'm passing it on to the relevant teachers at our school.


https://en.wikipedia.org/wiki/CipherSaber

This is the best relic from the past war. We may need a new one soon...


Are current ciphers even simple enough for something like that? RC4 is pretty trivial to implement, but I think of AES as being pretty complex.


Part of the point of CipherSaber was that not only is RC4 trivial to implement, but it is feasible to do so entirely from memory. I can't think of any other cipher that comes close to that, except maybe XXTEA (and its progenitors, TEA and XTEA).

RC4 is so trivial that several people (including myself) were able to write implementations that fit in a single tweet.
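For anyone curious, here is a straightforward Python version of RC4; fine for illustrating how small it is, but RC4 is broken and shouldn't be used to protect anything real:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Encryption and decryption are the same operation:
assert rc4(b"Key", rc4(b"Key", b"Plaintext")) == b"Plaintext"
```

The whole cipher is two short loops over a 256-entry table, which is what makes memorizing it, and tweeting it, feasible.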


All you need is a hashing algorithm (something like md5, but not md5, of course).

If you can memorize a hashing algorithm, then you can use the hash to verify larger pieces of software that you can't memorize.

So large, complex crypto doesn't ever need to be memorized - all you need is an easy to memorize hashing algorithm.
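A minimal Python sketch of that workflow, with hashlib standing in for the memorized algorithm (the filename and the out-of-band digest are hypothetical):

```python
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    """Hash a file in chunks so arbitrarily large binaries can be verified."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Verify a downloaded binary against a digest obtained out of band, e.g.:
# assert file_digest("gnupg-2.x.tar.bz2") == published_digest
```

The catch, of course, is that the digest itself must arrive over a channel you already trust; the memorized hash only moves the trust problem, it doesn't eliminate it.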


Salsa20 / ChaCha is pretty simple. Curve25519 is not exactly simple but it's pretty small -- small enough to put on a shirt.
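For a sense of scale, here is the ChaCha quarter round in Python: it is the entire mixing primitive (add, XOR, rotate), checked against the RFC 8439 test vector. This is just the core function, not a full cipher:

```python
MASK = 0xFFFFFFFF  # ChaCha works on 32-bit words

def rotl(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a: int, b: int, c: int, d: int):
    # The whole primitive is four add/xor/rotate rounds over four words.
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1:
assert quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567) == \
       (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
```

The full cipher just applies this function to the rows and diagonals of a 4x4 word state for 20 rounds, which is why it is considered one of the more memorizable modern designs.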


I wonder how a variant of XXTEA (perhaps with more rounds) would fare as a replacement. It's only a little more complicated to implement than RC4, but solves some of the issues. It acts as a wide block cipher which reduces the impact of not having authentication or IVs.


I feel this is the largest threat to freedom in the US in a long time, possibly ever. If the US has the ability to decrypt traffic from all of the major players I think, based on their history, they'll likely try to collect everything. All emails, purchases, internet searches, and so on would be cataloged. For example, if I applied for a government job in the future I could have my allegiance called into question for writing this comment. They would easily know who all but the most vigilant are based on information provided during purchases, correlated with IP addresses and cookie information.

The real criminals of course would use any number of other systems (say those based in other countries) which would not be collected in the dragnet. It would only be the average citizen who suffers.


Yeah. The irony I see in all of this is that the government did this to themselves. Government overcollection of information led to public need for encryption, which in turn led to the fact that the government can't collect information when it does have a legal mandate to do it.

The best answer here is: the government should obey the laws and not spy on citizens, and maybe then the citizens won't seek refuge.


I don't quite understand how the government thinks this will go. How do they think they are going to prevent encryption?

Also, does anyone else realize that they are essentially expecting to have no limitations? They want not only access to your person at any time (which is understandable to a certain extent, under the authority of a legal and proper warrant); they now also want to monitor and track every single thing you do and are, at all times, for future recall. Essentially, they want to be able to reference the experience of your life, approaching your memory.

So what stops the government from reading your memories, once that technology is fine tuned? We are seriously on a precipice here, because humanity's political and civil society has not caught up to the exponential advances in technology.

The last time such a mismatch existed, it resulted in WWI and WWII, before we emerged mumbling, "WTF just happened?"


> British Prime Minister David Cameron recently pledged that “modern forms of communication” should not be “exempt from being listened to.”

Why not? By the same logic, tables at all restaurants should be bugged.

Before technology, when people met, the only way to know what they were saying was to be there. The world survived that.


While certainly not a magic bullet, zero-knowledge systems are our best last line of defence against zero-day exploits. As the author is the CEO of the company that issued the infamous "Heartbleed Challenge", he should acknowledge that far from being some war of geeks vs spies, it's geeks vs malicious hackers. Yes, as far as Google's concerned, GCHQ fits that description, but it's silly to claim it's just about them. Even if you consider some intelligence agencies benign, there are a hundred-fold more that definitely aren't.

And the only underestimation happening here is the politicians'. If the cryptopocalypse were to happen tomorrow (i.e. David Cameron's fantasy), modernity would end as we know it. Financial markets wouldn't so much collapse as cease to exist. Hungarians recently took to the streets over a tax on data; what would happen if e-commerce became impossible? Imagine the impact on the rule of law if 10% of the economy disappeared overnight. Encryption isn't a threat to LE, it's the only thing protecting them from a nightmare.


BTW, the infamous 40-bit limit was set as part of a deal between the NSA and the SPA in 1992, and applied to the RC2 and RC4 algorithms.


CDMF (DES w/ 40-bit keys), too.


I am talking about what was guaranteed to be exportable under the "7-day" review process, though 40-bit was a good rule of thumb for other algorithms.


When everything on the wire uses strong crypto, the only useful network data will be metadata. It will be interesting to see how traditional network IDS/IPS evolve to handle this. The signatures will be useless when they can't parse plaintext packets. Of course, orgs can always attempt to terminate SSL and do things with the cleartext packets, but that can get dicey.

Strong encryption on the wire (in transit) also renews the focus on the endpoints. The data are plaintext in memory on the devices. If the endpoints can be compromised, the data-access problem is solved. Lots of people (both good and bad guys) are writing exploits to do just that.

When you combine network metadata with local device data, you've got a pretty complete picture.


Right. End-to-end encryption is just one of three key aspects. The second is hiding or randomizing metadata. Old-school Mixmaster nyms with Tor transport and alt.anonymous.messages as the inbox do a good job. Their fatal weaknesses are complexity and, consequently, a small anonymity set.

Services such as CounterMail, Tutanota and ProtonMail are taking user-friendly (albeit weaker and proprietary) approaches. DarkMail may eventually be a fundamental solution, but may be too ambitious.

The third is end-point security. It's crucial to use full-disk encryption, and to avoid being exploited. That involves both software and wetware hardening.

But even that's inadequate. It's essential to compartmentalize sensitive information into separate and isolated compartments. Isolation comprises all aspects: devices, network and Internet connectivity, communication partners, and so on.


I'd also add OpenPGP's anonymous-recipient feature (-R, or --throw-keyids, in GnuPG) to strip metadata. (Supported at least by GnuPG for encryption/decryption, and by the Mac version of PGP for decryption; for some reason, not on Windows.) Also, implementations like GnuPG will allow the recipient ID to be spoofed, but even fewer implementations support that.

I believe using this in conjunction with some garbage recipients (with the secret key thrown away immediately after creating the public key) will randomize metadata quite well for OpenPGP.

EDIT: clarified that the option switches are for GnuPG


The crypto war is not being waged by Apple or Google; that is ridiculous!


This time we should fear not loud laws but silent subversion. You can no longer tell the good guys from the bad; anyone you trust can betray you, forced to make pacts with bad people and hand over everything of yours.



