A Crypto Challenge For The Telegram Developers (thoughtcrime.org)
486 points by mjn on Dec 19, 2013 | 131 comments



For reference, here's a list (probably incomplete? (EDIT: and feel free to add!)) of ways this protocol is broken:

  1. There's no authentication at any point. The whole thing is trivially MITM-able.
  2. The RNG is Dual_EC_DRBG, which is backdoored.
  3. The RSA public key is small enough that an attacker of sufficient means could break it.
  4. The RSA plaintext is unpadded. Proper padding is critical for safe RSA encryption. See e.g. Bleichenbacher '98.
  5. RSA is used to encrypt semantic data. Dangerous for the same reasons as above.
  6. The hash function is broken. I'm not sure if this matters too much here, but I'm also not sure that it doesn't matter.
  7. The ciphertext seems to be restricted to messages of exactly 128 bits. It's not clear how, or whether, the plaintext is padded if it's too short, nor how the protocol handles a longer message. Both are open questions worth flagging.
And yet it's still (basically) safe against the kind of contest Telegram has outlined. Someone could win by factoring the RSA public key, but I'm not sure if that would be cheaper than the $200k prize. This vulnerability can also be mitigated trivially by using bigger RSA keys, making the protocol Telegram-secure.
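
To make the flaws concrete, here's a minimal sketch of the protocol in Python, assuming the details quoted elsewhere in this thread (32-byte secret, hash reduced to a 16-byte key, 128-bit message). It's an illustration, not Moxie's exact spec: os.urandom stands in for Dual_EC_DRBG, MD5 stands in for MD2 (hashlib doesn't ship MD2), and textbook RSA is just pow() on integers.

    import hashlib
    import os

    def encrypt_to_bob(bob_n: int, bob_e: int, message: bytes) -> tuple:
        # Flaw 1: nothing here authenticates bob_n/bob_e, so a MITM can
        # substitute their own key. Flaw 7: plaintext fixed at 128 bits.
        assert len(message) == 16
        super_secret = os.urandom(32)       # flaw 2: spec says Dual_EC_DRBG
        secret_int = int.from_bytes(super_secret, "big")
        # Flaws 3/4: 896-bit modulus, and no padding on the plaintext.
        key_exchange = pow(secret_int, bob_e, bob_n)
        message_key = hashlib.md5(super_secret).digest()  # flaw 6: spec says MD2
        ciphertext = bytes(m ^ k for m, k in zip(message, message_key))
        return key_exchange, ciphertext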


I don't understand 5. RSA is used there to encrypt a random value that is used as a KDF input. I get that the size of the random value, together with the lack of any padding and a poor choice of KDF, causes issues, but can you explain why we care about malleability (or did you mean something else) here?


You're right and I'm wrong. Mea culpa. I dashed these off quickly.

Unfortunately I can't edit anymore, so the erroneous #5 will have to stay there.

The main bad thing here is the null padding (covered in #4). This gives the attacker a lot of knowledge of the plaintext (the most significant bytes are all null), which can be used to decrypt if this format is validated on the other end. Bleichenbacher's attack only requires knowledge of one plaintext byte (the leading 02h), and we have many.
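
Concretely, null padding combines badly with a small public exponent. If the 32-byte secret is encrypted raw and e happens to be 3 (an assumption; the exponent isn't stated in this thread), then m^3 has at most 768 bits, which is less than an 896-bit modulus, so the "encryption" never wraps mod n and the secret falls out as an integer cube root. A sketch:

    def icbrt(c):
        # floor(c ** (1/3)) by binary search, in exact integer arithmetic
        lo, hi = 0, 1 << (c.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** 3 <= c:
                lo = mid
            else:
                hi = mid - 1
        return lo

    def recover_secret(ciphertext_int):
        # Valid only when m**3 < n, i.e. no modular reduction occurred.
        return icbrt(ciphertext_int).to_bytes(32, "big")

With a large e you'd be back to a Bleichenbacher-style oracle or to factoring, but the point stands: proper padding is what rules this whole class of attack out.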


I don't keep up on everything, but I thought Dual_EC_DRBG was used by nobody else for any real world crypto. Did these guys look over the Wiki page and decide it would be fun to be the first?


To be clear, I'm describing problems with Moxie's hypothetical broken protocol, not with Telegram. Telegram does not (as far as I know) use Dual_EC_DRBG.


They aren't using it, but Dual_EC_DRBG is fairly widely used actually. There was a false meme denying that this was the case, but RSA and others proved that wrong.


Dual EC was not particularly common. Here's a metric: name a couple of products that you or I use regularly that ever used it.

It's not as if you could design a system with Dual EC instead of HMAC DRBG and not know it; Dual EC requires bignum math. It is incredibly slow for a CSPRNG.


This is disingenuous. RSA used it as a default in a commercial crypto library, as you must know. End users won't generally be aware that it's being used in a product, but that doesn't mean it isn't out there.

It is unlikely that most developers changed the default unless it was having a noticeable impact on performance, which wouldn't be the case if it was just used for key generation.

http://www.wired.com/threatlevel/2013/09/rsa-advisory-nsa-al...

> In its advisory, RSA said that all versions of RSA BSAFE Toolkits, including all versions of Crypto-C ME, Micro Edition Suite, Crypto-J, Cert-J, SSL-J, Crypto-C, Cert-C, SSL-C were affected.

> In addition, all versions of RSA Data Protection Manager (DPM) server and clients were affected as well.

> “Every product that we as RSA make, if it has a crypto function, we may or may not ourselves have decided to use this algorithm,” said Sam Curry, chief technical officer for RSA Security. “So we’re also going to go through and make sure that we ourselves follow our own advice and aren’t using this algorithm.”

Here's someone who has dug up a decent amount of real-world products:

http://security.stackexchange.com/questions/43164/which-prod...


You probably don't use any product that uses BSAFE.

For whatever it's worth, for people who think this point is all part of some elaborate edifice of sticking up for NSA: I am now 99.9% convinced that Dual EC is in fact a backdoor, and while it's clumsy in a tradecraft sense (you can just look at it and see the problem), I've heard compelling scenarios in which it would have been effective.

I just don't think it's a backdoor that's relevant to modern software, or, even for its time (the early 00's), software that was in popular use.


What do you make of the list given in this other comment that includes copy machines and game consoles?

https://news.ycombinator.com/item?id=6940993


Dual_EC_DRBG is used:

http://security.stackexchange.com/questions/43164/which-prod...

> Since we know the RSA BSAFE library uses Dual_EC_DRBG (...) by default, I would guess that this would be the main vector.

> As for the use of BSAFE, I can easily find (hint: use your favourite search engine to search for the terms "This product includes" "RSA BSAFE") implementations, oddly skewed towards imaging and gaming devices: surprisingly many printer/copier/fax devices use BSAFE, though for unknown purposes, including Ricoh, Minolta, Océ/Canon, Brother, Fuji/Xerox, Epson... Your Playstation (PDF), PSP, or your Nintendo DS wifi (PDF). Software from Adobe, Hitachi, Oracle and HP. Some Nokia phones (PDF).


I thought the problem was the recommended implementation of the Dual EC PRNG: the specific point selection that was mandated as suitable for government use, i.e. the one that RSA used?


> 6. The hash function is broken. I'm not sure if this matters too much here, but I'm also not sure that it doesn't matter.

They are using SHA-1, which is indeed broken: not as broken as MD5 and its predecessors yet, but collisions are still expected to cost less than a generic birthday attack.

> The message key is defined as the 128 lower-order bits of the SHA1 of the message body (including session, message ID, etc.).

A reduced SHA-1, cut down from 160 to 128 bits, offers at best the 2^64 generic birthday bound for collisions, so it is not strongly collision resistant. I'm not sure what implication this has for this protocol, but if strong collision resistance is required this may be a point of weakness.
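
For reference, under that reading of the quoted spec the key derivation would look something like this (a sketch; "lower-order bits" is taken here to mean the trailing 16 bytes of the 20-byte digest, which is an assumption):

    import hashlib

    def message_key(message_body: bytes) -> bytes:
        # 128 lower-order bits of SHA1(message body)
        return hashlib.sha1(message_body).digest()[-16:]

Even treating SHA-1 as ideal, a 128-bit digest has a birthday bound of about 2^64 hash computations, before counting any SHA-1-specific shortcuts.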


Sorry, I'm afraid my original post wasn't very clear. I was describing potential problems with Moxie's intentionally weak protocol, not with Telegram.

The hash in question is MD2 used to reduce the 32-byte random secret to a 16-byte shared encryption key. MD2 is weak, but I'm not sure if it matters in this context. As I said above though, I'm also not sure that it doesn't matter.


I think you missed: “Both Alice and Bob now compute message_key = MD2(super_secret) (we know you like dated crypto, so we thought you’d like the MD2 hash function).”


tl;dr: moxie uses ancient, known broken crypto primitives (Dual_EC_DRBG, RSA with 896 bits, MD2 and XOR) to construct a chat protocol which is unbreakable if framed in the same way the Telegram developers did with their challenge. "If they can’t demonstrate a break in this obviously broken protocol using the same contest framework they’ve setup, then we’ll know that their contest is bullshit."

Also, a call to arms to improve the OSS TextSecure implementation.


I still don't get it.

If an insecure protocol with an insecure implementation can send messages that others can't read, how is it insecure?


The contest limitations rule out most of the likely attack vectors for breaking the protocol in the real world. It's like saying "Our bank vans are 100% secure. Just try stealing money from them without puncturing our tires or bribing one of our employees."


Thanks - this is the best analogy I've heard so far.


The problem is not whether the protocol is secure - the problem is that there's no way we can tell. Historically, that means that it's likely not to be secure.

With regard to this counter-challenge: the crypto here is known to be poor. If this counter-challenge cannot be broken, then it shows that the challenge issued by Telegram is no proof of security.

So in short,

* we don't know if no one can read the Telegram-enciphered messages,

* the challenge provides effectively zero evidence that it's secure,

* Telegram will proclaim loudly that no one has broken their crypto,

* non-specialists will be fooled by this.

If I haven't answered your question, perhaps you could be more explicit as to what you're not understanding.


This is an excellent answer. You get right to the core of it, quite clearly: this has naught to do with the crypto itself, but rather that:

> If this counter-challenge cannot be broken, then it shows that the challenge issued by Telegram is no proof of security.


Because the contest eliminates many vulnerabilities that exist in the real world.

The contest framework is identical to Telegram’s (no MITM perspective, no known plaintext, no chosen plaintext, no chosen ciphertext, no tampering, no replay access, etc)


Also no side channel observations.


In real life, a lot of breaks in crypto security come from sidechannel attacks, man-in-the-middle attacks, chosen plaintext attacks, etc. Just posting a small number of already-encrypted messages disallows all of these possibilities.


There are attacks that are possible in the real world that Telegram's (& this example) contest deny. It's like tying a person to a chair, and challenging them to run a marathon, to prove that humans can't run marathons. In the real world, people aren't tied to chairs and humans can run marathons.


It's only secure if attackers have < $2 million in resources, or if the contest expires before the ~2000 CPU-years an attacker with one PC would need. Or... and this is a big OR... they can't position themselves between you and the recipient.

e.g. You aren't using Wifi, your network is fully secured, no one has access to any router along the way, etc.

Let's put it this way: if you're using Telegram / MarlinSpikeGram and you and I are in the same coffeeshop, I can read your messages.


Well, not entirely the same framing.

For $200k one could probably brute-force an 896-bit RSA key. ;)


I don't actually think that's true. At least, not within the time limits they defined.

The $75k 896-bit RSA factoring prize went unclaimed for 20 years, for instance.


Presumably if you made the prize large enough you would get to the point where it would be economical and profitable to start fabricating your own hardware, like the EFF did for DES.

http://en.wikipedia.org/wiki/EFF_DES_cracker


Ship yourself custom ASICs in the few months allowed by the contest guidelines and you'd earn it. FPGAs are another possibility, but even that would be a bit of an undertaking in a short timeframe, and it might not actually get you much.

Also, the DES hardware effort was a somewhat different workload than prime factorization. Supercomputers are fairly well optimized for some of the types of matrix operations you'd need for the GNFS, which is generally the method of choice for factoring large numbers on a classical computer. Custom hardware isn't going to give you the huge boost you'd see for brute-forcing DES.

Custom hardware isn't a silver bullet, and it is a large engineering problem that usually takes more time than the Telegram contest allows.


This is true, but you could easily respond by increasing the RSA key size. This would make the protocol Telegram-secure without meaningfully improving its actual security profile in any way.


Yeah, when I actually started trying to back it up, I noticed I was probably a factor of 5 off.

Namely, if you look at the keylength.com values for asymmetric key sizes, 768 bits in 2009 should come close to the difficulty of 896 bits today. The RSA-768 challenge was broken in 2009 (http://eprint.iacr.org/2010/006), which cost them "the equivalent of almost 2000 years of computing on a single core 2.2GHz AMD Opteron". Renting that amount of time on Amazon EC2's $0.06/hour instances would be about $1 million.


>Amazon EC2

I'm not sure how they compare in practice, but it might be worth calculating how many hours an Amazon G2 instance would take, using their high-end graphics cards as CUDA processors. I think the cost per performance ratio is much lower, and that could change the equation in the other direction.


I really don't know how well the required operations execute on a GPU. I've only skimmed the RSA 768 paper to find that line.


Fair enough. I've read that using CUDA (or another GPU-based language) you can get at least 10x the GFLOPS of a 4-CPU Xeon [1], though, and RSA cracking should parallelize easily, if I'm understanding the process correctly. And the high-end NVidia cards in the G2 instances have 1,536 CUDA cores each. No, I'm not kidding. The one benchmarked in the link above is about 1/3 the GFLOPS of the one in the G2 instances.

And it looks like a reserved G2 instance is $0.65/hour (though it can be lower on the spot market and in the reserved instance marketplace). So if there's a 120x speed improvement over the "single core 2.2GHz AMD Opteron" (and that's assuming each core is as fast as the Xeon core above), for only 11x the cost... well, it gets a lot cheaper.

In fact, it ends up, if I haven't done my math wrong, at about $94,900 of full instance time (less if you get spot or reserved instances). [2] To win the $200k prize. Hmm....

[1] http://archive.benchmarkreviews.com/index.php?option=com_con...

[2] "the equivalent of almost 2000 years of computing on a single core 2.2GHz AMD Opteron": That's 17,520,000 hours. If the G2 instance gets you 120x performance improvement, that's 146,000 hours. At 0.65/hour, that's $94,900.


Even if Telegram's explanation did stand up to scrutiny and had been vetted by experienced cryptographers, the fact that its core code is closed source makes it utterly worthless from a security perspective. They can tout their own security all they like, but if no one else can independently verify it then it means nothing.

So far they've only published the source to their client, but their servers do all of the actual processing and cryptography.

All of Moxie's projects, on the other hand, have always been completely open source.


Open source doesn't imply trustworthiness and it's a very dangerous assumption to make.

Any open source system can be screwed with in a variety of ways. The simplest and most effective option is to publish both the source and the binaries, but build the latter from altered source. This will work in the vast majority of cases, because a lot of people make the ridiculous assumption that publishing the source automatically implies that the guy is good, open and trustworthy all over. And they won't bother verifying the binaries. Virtually everyone will assume that since it's open, there will be someone who will do the verification. Guess what? That someone will assume the same thing.

That's your good old social engineering. It's the humans that are exploitable, not the tech.

But let's say, unlikely as it is, that such a person materialized. Easy enough to run an independent build and verify the binaries, right? Sure. In theory. In a lot of cases, due to dependencies, it's either hard or nearly impossible to do. In other cases it translates into a non-trivial amount of work, which needs to be justified. I am aware of just one project - PGPfone - that published not just the code, but the exact build instructions to produce matching binaries. Everything else is just the "open source, trust us" model. And so the bottom line is that in a heck of a lot of cases you will not be able to produce matching binaries.
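
For what it's worth, the verification step itself is trivial once (if!) you have a reproducible build; the hard part is everything upstream of it. A sketch, with hypothetical file paths:

    import hashlib

    def sha256_of(path):
        # Stream the file so large binaries don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: your own independent build vs. the published binary.
    assert sha256_of("my-build/app.bin") == sha256_of("published/app.bin")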

Now, even if the binaries differ in just a few bytes, that is 100% enough to screw everyone over. This is done by messing with the initialization of an internal random number generator, which all crypto stacks have. All you need to do is make the PRNG (semi)predictable and the best crypto won't stand a chance, as there'll be no secrets.

In the end, if you are using pre-made binaries (and who doesn't?) that are not built by a trusted entity from a specific peer-reviewed snapshot of the sources, you have the exact same chances of running a flawed version regardless of whether its source is open or not. Except that in a closed source case you are likely to be more on guard for the surprises.


I am not saying open source == secure. Rather, closed source == impossible to know if secure. Making something open source doesn't instantly add a "secure" tag to it, but keeping it all closed leaves no chance for the tag to even appear.

With open source you can get intelligent, experienced experts to look through it. And if enough of them look through it and say it's good (for example, as many experts have done with the Bitcoin client and protocol), you can at least gain some degree of assurance, even if the possibility of a critical exploit being found in the future still always remains.

Also, you are right in that in this particular scenario, open source would only be the first step as you couldn't know if their servers are actually running the source they published. Open source + full end-to-end encryption and authentication are both required, as is the case with OTR and Moxie's project.


> This will work in the vast majority of cases, because a lot of people make the ridiculous assumption that publishing the source automatically implies that the guy is good, open and trustworthy all over. And they won't bother verifying the binaries.

Probably worth linking to Ken Thompson's Reflections on Trusting Trust paper, which illustrates exactly what you're saying with a hypothetical (or not) C compiler backdoor.

http://cm.bell-labs.com/who/ken/trust.html

"You can't trust code that you did not totally create yourself [...] No amount of source-level verification or scrutiny will protect you from using untrusted code"


For me, the following lines also stood out:

"I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect."

Presumably it would be possible to introduce a bug into every CPU manufactured that even the manufacturer is unaware of.


Even scarier, researchers have shown that there are ways of backdooring CPUs via transistor doping so that even if the manufacturer suspects a backdoor, it may still be very difficult for them to find it: http://www.techrepublic.com/blog/it-security/researchers-cre...

Full paper: http://people.umass.edu/gbecker/BeckerChes13.pdf


Interesting. The "doping" attack seems to be aimed specifically at random number generating circuits. Presumably such attacks could be definitively detected by using the RNG to generate a long sequence and checking that the distribution of probabilities is as expected [1]? Is my understanding correct?

[1] http://www.johndcook.com/Beautiful_Testing_ch10.pdf


Reading the paper, it sounds like the attacker can make the trojan arbitrarily hard to detect, in exchange for making the resulting encryption harder to crack.

In the attack they describe, Intel's hardware RNG is supposed to return the result of encrypting 128 random bits using a 128-bit random key with the AES cipher. So an attacker trying to guess the random number returned would have to guess 256 bits of unknown internal state. Instead, the attacker modifies the chip so it returns the result of a known key used to encrypt n random bits and 128-n known bits. They therefore only have to try 2^n options -- they can make guessing the random number as easy or hard as they want for themselves.

Now the AES part makes the results of the hacked chip appear random -- the results of AES(0), AES(1), AES(2) ... will look uniformly distributed across the 128-bit output range. (The whole point of a cipher is that it should be impossible to draw any conclusions about the inputs by examining a bunch of outputs, even if the inputs are predictable.) To detect it in software, we'd have to generate enough numbers to start to see a suspicious number of repeated results. So if the attacker sets n==2, they'll have a really easy time cracking the resulting encryption, but we'll easily detect it -- we'll quickly notice that the RNG always returns one of four numbers. On the other hand, if they set n==32, they'll have to try 2^32 options to crack the resulting encryption, but there will be few enough repeats that we'll give up testing before we notice anything is wrong. (Of course, if they're more paranoid and have better resources, they could go with n==64 or whatever -- it's like a dial they can use to set the difficulty where they want it.)
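
Scaled down so the brute force actually runs, the trojan described above looks something like this (a sketch using the pyca/cryptography package; n = 16 here, and the hardcoded key is made up):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    TROJAN_KEY = b"\x00" * 16   # known only to whoever doped the chip
    N = 16                      # bits of real entropy the trojan lets through

    def aes_block(key, block):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def trojaned_rng():
        # Output is an AES ciphertext, so it passes statistical tests,
        # but only N bits of the input actually vary.
        entropy = int.from_bytes(os.urandom(N // 8), "big")
        return aes_block(TROJAN_KEY, entropy.to_bytes(16, "big"))

    def attacker_recovers(output):
        # Knowing TROJAN_KEY, the "random" state falls in at most 2**N tries.
        for guess in range(1 << N):
            if aes_block(TROJAN_KEY, guess.to_bytes(16, "big")) == output:
                return guess
        raise ValueError("not from the trojaned RNG")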

The neat thing here, of course, is that this backdoor is only valuable if you know the bits that have been hardcoded into the chip. So it remains secure against everyone in the world, except the one three-letter agency that managed to modify the chip.

Or their private contractors and consultants, I suppose.


"but their servers do all of the actual processing and cryptography"

I realize 'tis the season to pile on the hapless Telegram folks, but I think that assertion is wrong.


Do you consider an HTTPS connection to be "their servers do all of the actual processing and cryptography"? Most people would (even though there are subtleties to reality), and Telegram certainly qualifies under whatever standard you apply to HTTPS.

The only real difference is the usage of client keys and a custom protocol.


It's really not even enough to open source their server software because you would still have to trust that they ran that open source software unmodified on their machines.


Yep, that's also absolutely true. You'd have no idea if their servers were actually running the code they published.

Moxie's TextSecure on the other hand does full end-to-end encryption, with no work done by any server. Same with OTR and similar message encryption plugins. It baffles me why people are sticking to the "everything on the server" route for applications that focus on user privacy and security in this day and age, especially when there are a lot of good alternatives that don't.


They're going the 'everything on the server' route because this application is meant to compete with messaging services rather than appease the 'small crowd of crypto enthusiasts' as they have already stated.


If you host your own server with their server software, you can verify it's running unmodified.


They don't provide the server-side code, nor a way to ensure that your client is talking to a specific server. The only authentication is via a 5-digit code sent via SMS, so it's presumably very easy to spoof and control messages. You need on average 50,000 attempts to log into another user's account (they are sent a "new device added" notification message, though), which could be done reasonably easily if you knew a person would have their phone off for a period of time when you wanted to attack their account.


Wait the crypto is done server side? I haven't really looked into Telegram much, but that's fucking silly. Lavabit all over again.


This can't be right. Server-side crypto? I.e. client transmits to the server in plaintext? Surely no one is that clueless about cryptography.


It's not done server side.


Eh, yes and no. It's not sent in plain text to the server. However, in the sense that it's 'Lavabit all over again', or at least a procedure that produces isomorphic problems, the encryption seems to be being done server-side:

--------------------------

'The difference between messages in Secret Chats and ordinary Telegram messages is in the encryption type: client-client in case of Secret Chats, client-server/server-client for ordinary chats. This enables your ordinary Telegram messages to be both secure and available in the cloud so that you can access them from any of your devices — which is very useful at times.'

https://telegram.org/faq#q-why-not-just-make-all-chats-secre...

--------------------------

I have trouble construing that other than as: you use the server's key to encrypt the message you send, and then the server encrypts it with the public key associated with the addressee's device at the time.

...

This seems to produce the same problem LB had in that you can be asked to turn over the keys to the messages on your server. And for similar reasons, you're doing encryption on your server.

#

Given that LB had that trouble, I also have difficulty seeing any non-malicious reason for solving the key distribution problem this way. Since you can set up secret chats that are end-to-end encrypted, you could equally have asked the client's devices to exchange a private key using each other's public keys.

I'm tempted to explain it away in terms of the user being expected not to be technically savvy enough to press the buttons on both devices when asked to do so. But the client knowing about and using encryption in the first place implies at least some sort of technical competence, which I would imagine is great enough to press two buttons when a prompt comes up saying something like:

'The device DEVICE NAME is asking for the key to read your messages.

If you did not try to set up your mail on another device within the last minute, press the REPORT ATTACK button.

Else press the ALLOW button.'

Still, I've overestimated the average intelligence of people before.


As mentioned at http://core.telegram.org/contestfaq, if contestants need more tools to interact with the traffic in order to crack Telegram, these will be provided in the next contest right after 1 March, 2014. The current contest has an important practical task: deciphering traffic that is being intercepted in real time. This is the basic concern of regular users like myself (me and lots of other people in Russia had to stop using WhatsApp because of easily decipherable intercepted traffic). If Telegram proves to be robust in this respect, more tools to manipulate traffic and wider contests with similar prizes are to follow. Like any startup's product, Telegram's contest starts by solving a basic but most important problem, then gradually grows in functionality and scope.

Telegram will always be interested in creating incentives for the crypto-community to check its security and provide feedback. So if you are waiting for tools to try, e.g., a MITM on Telegram and get your $200K, please stay tuned. It's @telegram on Twitter.


Thanks for sponsoring the Telegram product. (Even though I think what they are trying to do could be done much better.)

Could you please ask the Telegram team to post the exact contents of the first message that Paul sent to Nick, except with the secret email address X'ed out? I explained in https://news.ycombinator.com/item?id=6937631 that if the MT protocol is secure, then there is no risk in posting such a "known plaintext", so the Telegram team should have no problem posting it.


Does this mean that you were unable to recover Alice's message?


Alas, I am not a cryptographer and not even a member of the Telegram team. I'm just a guy who backs Telegram financially and proposed to start their contest. I described my motives behind it here https://news.ycombinator.com/item?id=6938622

As for your contest, I will make sure the Telegram team will have a look at it once they are awake. As far as I understand, you designed it to be similar to Telegram's contest. How do you send messages that affect traffic in real-time? How large is the prize? Is there a deadline?


Have you taken part in the Telegram contest design?

> How large is the prize?

I think the "prize" is obvious. Breaking this "unbreakable" 896bit-RSA + no auth + no signature + MD2 + XOR is a necessary condition for the Telegram contest to be taken seriously.


You can generate your own messages according to the scheme he gave (even using the same public key from Bob if you like), but they will not be aggregated into a public log.


This is counter-productive.

Whichever way you view Telegram, they haven't developed it to make a quick buck on the ignorance of the masses, nor are they in it to deceive people and entice them to use a knowingly broken crypto.

Granted, they have an attitude problem, they clearly have no experience talking to the crypto community, and they made a dumb move with this contest thing, but at the end of the day they and Moxie(s) are on the same damn side.

Antagonizing things further is just plain stupid.


They have a blindness and an arrogance that could prove fatal to anyone trusting them. Until they lose the arrogance and catch up with the published state of the art in crypto, they are dangerous and are likely to do more harm than good.

This is a very clear explanation of the limitations of their challenge and hopefully will open their eyes and help them on the road to getting a better understanding. If not it will help to limit the damage they can do by publicly clarifying the limitations and the lack of understanding that they currently have.


> Whichever way you view Telegram, they haven't developed it to make a quick buck on the ignorance of the masses, nor are they in it to deceive people and entice them to use a knowingly broken crypto.

The contest they set up actually makes me think they did. I am willing to see them pay out for it and thus prove me wrong, though.


Is there a decent “Crypto Not For Dummies But For Reasonably Competent Programmers Who Have Thus Far Taken It For Granted But Want To Get Up To Speed Fairly Quickly On Concepts And Implementation” text?


If you're open to an online course, there's a Stanford intro one coming up on Coursera - https://www.coursera.org/course/crypto


I did the course about 1 year ago (or maybe 2? not sure now).

The only thing I really remember off the top of my head is: don't implement your own crypto.

I guess I remembered the most important lesson.


The Matasano Crypto Challenge is a fun intro.


I wish it were still open. They want you to send them an email to start the challenge, mine went unanswered.


There's a great Crypto I course on Coursera that I would highly recommend.


"Practical Cryptography" is a good book.



NO! Applied Crypto is at times a fascinating book, but it is terrible for a developer who wants to learn the fundamentals of crypto and how to avoid the most common mistakes people make. It's also really old at this point.

A much better book is Schneier, Ferguson and Kohno's "Cryptography Engineering"[1]. It covers what makes for strong crypto primitives and why weaker ones are considered broken. Note that this book is starting to get a bit outdated (I don't remember it covering ECC at all, for example), but I don't know of any better one.

I also endorse the Matasano Crypto challenges and the Coursera Crypto class taught by Dan Boneh as excellent learning resources.

[1] : https://www.schneier.com/book-ce.html


Schneier himself semi-regrets this one I believe as it teaches enough to be dangerous but not enough to understand the risks you are creating. Implementing ciphers is one thing, knitting them together into a secure protocol is something very different and very challenging.


I'd recommend Cryptography Engineering over Applied Cryptography for programmers today. It has a lot of actionable info and is quite up to date.

https://www.schneier.com/book-ce.html


Dear makers and backers of Telegram:

Perhaps in response to my requests (https://news.ycombinator.com/item?id=6933179 , https://twitter.com/zooko/status/413552420522708993 , https://twitter.com/zooko/status/413552466748133376 ), your FAQ (http://core.telegram.org/contestfaq) now says:

-------

Q: Does Paul send the same message to Nick every day?

No, just as in real life, Paul‘s messages to Nick can be different each time. The only thing that doesn’t change is the secret email address in his daily messages.

Q: Could you provide an example of a Paul's message to Nick?

Sure. The message may look like “Hey Nick, so here is the secret email address for the bounty hunters – {here goes the email}”.

-------

There are some things that I don't understand about the structure of this contest. Why is the target secret an email address rather than a magic word like "squeamish ossifrage"?

I asked for “examples of the actual message”, and you posted a possible example, but what I meant to ask for was the exact text of one of the messages. Except, of course, with the target string (the email address) replaced by X's.

For redditors following along, getting a (partial) copy of the exact message that was sent would be an example of what cryptographers call (partial) "known plaintext". If your cryptosystem is secure against Known Plaintext Attack, then it doesn't matter if an attacker (me) gets copies of some of the messages. If your cryptosystem is insecure in this model, then your users have to be careful with what they type into their messages. For example, they might need to be careful not to cut and paste long strings from other sources, or to otherwise insert strings into their messages that their attacker might guess.

All good, modern cryptosystems are secure in the Known Plaintext Attack model! (And, in fact, all good, modern cryptosystems are secure in much more rigorous models in which attackers get more powers beyond peeking at plaintext.)

So if the makers of Telegram are confident in the security of their protocol, they should have no problem posting the complete, verbatim text of the first message that Paul sent to Nick, with the target email address replaced by "XXX"'s.
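
To spell out why a scheme can fail this test, here's what known plaintext does to a fixed-keystream XOR construction like the one in Moxie's counter-challenge (a sketch, assuming the hash-derived key is reused across messages):

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def break_with_known_plaintext(known_pt, known_ct, other_ct):
        keystream = xor(known_pt, known_ct)  # c = m XOR k, so k = m XOR c
        return xor(other_ct, keystream)      # decrypts every reuse of k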


Taylor Hornby has written a good introductory explanation of the Known Plaintext Attack model and the more powerful attack models, in the context of the Telegram cracking contest:

http://www.cryptofails.com/post/70546720222/telegrams-crypta...


A simple way to understand the gravity of this: the Nazis' Enigma machine was broken with a known-plaintext attack, a.k.a. a Turing Bombe break. Furthermore, it was the known plaintext of previously decrypted messages that was used in further attacks against new keys issued by the Nazis.


Somebody pointed out to me that this isn't reddit, but hackernews. Oops, sorry.


I have said this a couple of times in similar threads, but I think Threema [1] deserves a little more attention. Complete end-to-end encryption using NaCl. The interface they created is simple and gets the point across. Also, they're actually saying "don't trust us!", which ironically makes me trust them.

[1]: https://threema.ch/en/


Their protocol doesn't provide any forward secrecy. It uses the PGP protocol model, which is increasingly being seen as an architectural dead end (particularly given the recently revealed ciphertext recording capabilities of NSA):

https://whispersystems.org/blog/asynchronous-security/


That's not what their FAQ says: https://threema.ch/en/faq.html (scroll down, they specifically claim to provide forward secrecy)

Do you have additional information?


Yikes, that actually looks like potentially deceptive marketing to me.

> "Yes, Threema provides forward secrecy on the network connection. Client and server negotiate temporary random keys, which are only stored in RAM and replaced every time the app restarts (and at least once every 7 days). An attacker who has captured the network traffic will not be able to decrypt it even if he finds out the long-term secret key of the client or the server after the fact."

My reading is that they have an end-to-end secure protocol that does not provide forward secrecy, which happens to be routed through a server which uses HTTPS w/ an ephemeral cipher suite for the network transport, with a TLS session ticket that they rotate the key on every 7 days.

We should ask them for more details, but if true, that would be pretty deceptive of them.


Wow, ok. So, just to be clear, what you're saying is that you're interpreting their claims here as being exclusively related to the network transport; the underlying end-to-end protocol does not use ephemeral keys as far as you know.

If I'm understanding you correctly, and you're understanding them correctly, that is quite deceptive indeed.


Yes, that's what they're doing. I checked them out earlier this year and I remember being disappointed they didn't offer forward secrecy like OTR.



Hello, Mr. Moxie... I was just wondering if I understood what you said here... is the PGP model considered a dead end because of the "store it forever in Utah" model of the NSA? Excuse my ignorance, but don't all public key systems actually end up using a symmetric key to encrypt a message, and that key doesn't get re-used? How is that different from PFS? (If this is too grade school, a "RTFM" is cool...)


This looks like a better alternative, but unfortunately all of their code is closed source as well.

I completely understand the desire for developers to make certain applications closed source, but if your application's main selling point is user privacy and security, you really need to abandon that desire.


Funny. But actually, the simplest contest that accurately describes Telegram's insanity is simply this:

::Given an unknown function f and a single output y, compute the input x that maps to y.::

Ready? Here's the output: ROSEBUD. Now I'll give $100k to anyone who can tell me x. Good luck!


The function is known; they publish how their algorithm works. The problem is that their contest doesn't account for the main problem with their system: its vulnerability to MITM attacks.


Okay, then let f XOR the input with a randomly generated integer. There is no loss of generality here.


What's to stop Telegram tampering with the messages and just displaying random bytes in the 'output'? This would make it impossible to crack. You can't test the security of a system without 1 - full access to the system or 2 - complete trust in the people controlling the system (which we don't have).


They said if no one wins the contest they would publish the keys allowing anyone to decrypt the data, proving it was not garbage.


Ah, I missed this part. Thanks


Using an NSA backdoored RNG is pretty redundant. A cell phone cannot be secured against NSA. They'll just activate their keylogger and grab the plaintext before it has even been encrypted.


The goal is to prevent mass snooping on our private data. It's impossible to prevent an attacker with root access from getting your data, but on the other hand, they must invoke their root access in order to get your data.

More simply: right now, the NSA is vacuuming up everyone's data across all services. Your emails, your texts, your search history, certainly your metadata; basically everything. And the only reason it was possible for the NSA to do this to us is because security has historically been an afterthought.

TextSecure is the first step toward keeping our data free from prying eyes. It prevents the NSA from having default access to our texts. If the NSA wants your data, they'll have to deliver a keylogger to your specific phone in particular. That's very different from gathering everyone's comms all the time.

Telegram, on the other hand, offers no protection whatsoever against the NSA vacuuming up everything, because the NSA can simply MITM every Telegram conversation as they're initiated, just like the NSA MITM's CAs to decrypt your https traffic.

In summary: if you care at all about a world in which the NSA can't sift through all of your data, then use and promote TextSecure, because TextSecure offers protection against governments.

Now, I've said "NSA" about ten times here, but this is true for other governments too. Other governments have impersonated CAs, coerced CAs into issuing bogus security certificates, etc, to target people they deem to be political radicals. China tries very hard to do this. I'm sure there are plenty of governments worldwide who are all working on doing exactly this.

So it's not just the NSA. It's the entire future landscape of our data privacy. If you believe you have the right to electronic privacy, then use TextSecure, and make sure everyone knows the truth: Telegram offers no such privacy.


That's making the assumption that all phones in the US have NSA keyloggers on them, which is pretty unlikely.


While they probably don't have keyloggers on them from the start, it is with high probability child's play for them to push it to your phone over the air, and make your phone run it

http://www.osnews.com/story/27416/The_second_operating_syste...

"What makes it even worse, is that every baseband processor inherently trusts whatever data it receives from a base station (e.g. in a cell tower). Nothing is checked, everything is automatically trusted. Lastly, the baseband processor is usually the master processor, whereas the application processor (which runs the mobile operating system) is the slave. So, we have a complete operating system, running on an ARM processor, without any exploit mitigation (or only very little of it), which automatically trusts every instruction, piece of code, or data it receives from the base station you're connected to. What could possibly go wrong? "


Yep. And you can build your own (low-power) GSM base-station:

http://www.thinksmallcell.com/Technology/build-your-own-open...

So anyone with a little time on their hands can be that "trusted party" for everyone in radio vicinity.


They don't need keyloggers on every phone. Having them on a wide range of people's phones (say, people involved with various Occupy stuff) is a very real possibility.


consider that every factory-default mobile OS has vendor backdoors, if not ISP, firmware and hardware backdoors. no need to keylog everyone, just remotely take over on-demand using vendor backdoor.


ding ding ding!

you have won the prize! expecting anything to be secure on a mobile device is a serious mistake.

in some ways, textsecure and redphone actually induce behavior that puts people at risk: no amount of encryption can make a mobile device safe.

the only possible exception to this is a device that is built from zero and has fully in-house gsm stack, etc.


That gives me an idea: messaging apps shouldn't use the default OS keyboard, but write their own. In that case, NSA would need to target that messaging app specifically.


not going to buy you much if an attacker has dma, which is what a proper backdoor will give.


<Rant> After reading all the blogs and replies that are abuzz talking about Telegram, I realized they are the best guerrilla marketers I have seen in a while! They might as well throw away their PhD papers and stop calling themselves Engineers/Cryptographers/whatever... marketing monkeys...

</Rant>


I must be missing something, but isn't this easy to attack by exploiting the periodicity of the XOR function? Or is the message 32 bytes long as well?


The plaintext is the same length as the hash, so each byte of the hash is xored into only one other byte.


Ah, well, that's pretty secure, then.


If the prize was similar to this one, I think the challenge would be taken more seriously:

http://16s.us/software/FreeOTP/freeotp_challenge.txt

    * Prize

        One small Slurpee or its equivalent monetary value.


If they were to release the plaintext of Alice's (or, in their case, Paul's) message, wouldn't that include the secret email address?

FWIW, I agree the contest is a sham for the reasons moxie & others listed here and elsewhere.


The contest is only set up that way to make it look more secure, there's really no reason that you would have to prove yourself in that way. If there's an actual cryptographic break then a researcher can prove it without all of the "send email to this address" nonsense.


Two alternatives:

* Have a special message that does not contain the email address and make its contents public.

* Take whatever the first message happened to be, replace any characters in the email address with the character X, and release that. Giving away the length of the email address wouldn't do anyone any good.


this is a reminder that prizes or cash for breaking crypto products is a silly PR stunt. mega did the same thing, ended up paying out some money, then their product is "secure" by the same sort of argument. same deal with cryptocat and several other cryptoturds.

i do find it amusing to hear moxie ranting about how much better textsecure is when the license on it is such shit. can't argue with the fact that it's open source, but there is no point in contributing to the codebase due to the licensing.


The license on the TextSecure app is GPLv3. What would you like to do with TextSecure that this license prohibits?


Integrate it and distribute it with non-open-source software. So, any commercial use whatsoever.


You want to use other people's software, modify it and distribute it. Afterward, you then want to sue users who dare to modify or share your version?

How can anyone expect authors to accommodate this, when the license choice explicitly states the opposite?


You can absolutely integrate it and distribute it with non-open-source software. Look at all the GPL stuff in android, for example.


I understand the down votes, and kind of expected it. But this is the actual practical reality, guys. Large software companies avoid GPLv3 like the plague. If you want your software to be used widely, then you need to use BSD/MIT/Apache/etc.


Since when is Google and Oracle not defined as "large software companies"?

Companies that avoid GPLv3 (but not GPLv2) do so because of either the patent clause or the DRM clause. That is, they want to bypass the license with either legal restrictions or hardware restrictions.

This is only relevant for external products, and says nothing about internal use. The actual practical reality, dead seriously, is that GPLv3 software is used by most large software companies that exist in the world. It would surprise me if Microsoft did not have some Debian machines laying around somewhere hosting some website.


public key plz


Another guy butthurt over Telegram. As I read somewhere, the Telegram guys said that after 1 March 2014 they will somehow allow performing MITM in that crypto challenge.


If they allow man-in-the-middle attacks then the system is completely and demonstrably broken. The out-of-band public key verification uses deterministic images to confirm that the keys are the same, which can be easily forged given the relatively bad comparison engines in use (humans describing what a pixelated 16px image looks like). At no point is the real key shown to the user, so it's impossible to verify that they're identical through a description.

http://telegram.org/img/key_image.jpg

I wager that if they did allow a tampering eavesdropper in their bounty contest, it would be in the same conversation that has already done a key exchange and verification, making it yet more snake oil. You can hardly call something secure when you don't allow real-world MITM attacks in testing.


How is the key image impossible to describe?

There are only 4 possible colors per cell. You just describe it like 0,1,2,3,2,0, etc., just as if you were reading off the real key.
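
Right: an 8x8 grid with 4 colors per cell carries exactly 2 bits per cell, 128 bits total, so reading the cells off as digits 0-3 is reading off a full 128-bit fingerprint. A sketch (the actual cell ordering in Telegram's image is an assumption):

    def image_cells(fingerprint: bytes):
        assert len(fingerprint) == 16            # 128-bit fingerprint
        bits = int.from_bytes(fingerprint, "big")
        # 2 bits per cell, row-major: 8 rows of 8 color indices 0-3
        return [[(bits >> (2 * (8 * r + c))) & 0b11 for c in range(8)]
                for r in range(8)]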


From just asking around, most people described the key image to me in terms of the darkest portions and ignored the parts that were lighter. It's easier for the example image to say that there's a dark L and a lighter X shape in the center and then assume that the rest is the same. At least, that's how people did it when I presented it to them as a challenge.


> There's also not that many possible images anyway, there's 8 rows with 8 columns, and 4 possible colours for each pixel. Even not assuming any fuzzy matching (human comparison) it's still very possible to generate keys with colliding image hashes.

Uh, 4^(8*8)=2^128 is a pretty large number.


So it is, I've removed that. I need to sleep more.


Man-in-the-middle attacks are so easy and cheap to set up. Just use a few wireless access points, pop them up around town, and install something like Jasager and a 3G dongle. Phones like to connect to known networks and will happily connect to your rogue access point if you tell them that you are exactly the AP they are looking for.

So, any system that claims to be secure must factor in MitM.

More information on this, and how easy it is to trick devices, can be found at Troy Hunt's website [0] and at Wifi Pineapple [1].

[0]: http://www.troyhunt.com/2013/04/the-beginners-guide-to-break...

[1]: https://wifipineapple.com/


you sure seem to like Telegram. Also please don't use "butthurt".


People need to stop posting his shit here, it's basically linkbait he's using to pimp his Whisper service. He's the worst kind of troll.


Moxie? He is kind of a real expert in everything crypto and, instead of using the phrase "military grade encryption", WhisperSystems actually explains what they do and how they do it.

His posts are very well written and understandable, even for non-pros (with a pinch of sarcasm, but that's how I like it). So, where exactly is he trolling?


I think you'd be more likely to find him trolling on the ocean.

http://www.blueanarchy.org/holdfast/


... looking for herring. Red ones, preferably :)



