Hacker News
Indistinguishability Obfuscation from Well-Founded Assumptions (quantamagazine.org)
197 points by theafh on Nov 10, 2020 | 88 comments



Fully homomorphic encryption?!

Reads article

Yup! Gosh, I can't wait for it to become remotely practical.

It's been a few years -- wow, almost a decade -- but I remember a paper that claimed to prove that "no fully homomorphic program can be written such that the secret key was guarded in all cases," i.e. it's theoretically possible to trick an FHE program into giving up its own key, unless your program is basically trivial.

It was one of those papers that you run across, then never see again, and wistfully wish you'd saved a reference to. Tantalizing idea, if true, but probably not true.

EDIT: I just went back to finish the article, and they linked to it: https://link.springer.com/chapter/10.1007/3-540-44647-8_1 Welp. Looks like it was a good paper!


The article isn't about FHE; it's about iO. As you point out, there's no reduction from FHE to iO, but there is one going the other way. Worth noting, though, that when Boaz et al. wrote that paper, even FHE wasn't known to exist. As for the iO result: I'm not sure whether you were saying you thought it wasn't solid because of the Boaz paper, but my understanding from people in the field is that it's expected to hold up, and of course Boaz is quoted in the article.


Can someone break down in simple terms what the difference is between FHE and iO? What can you do with FHE that you can’t do with iO? A concrete example would help tremendously.

I always mentally substituted FHE with “the ability to encrypt a function such that no attacker can reverse engineer how that function was implemented,” i.e. “input go in, output go brr.” Crucially, in FHE this is true even if you yourself are the one executing the program and can observe everything the program is doing.

Or... in whatever I thought FHE was.


In FHE, you need some sort of key to decrypt the result. In obfuscation, the result comes out in clear text. General (black-box) obfuscation is impossible, but iO gives you basically the best possible result: given any other method of computing the same function (for some complexity parameter of the function in question), you will always be able to learn at least as much from that method as from the obfuscated function. In other words, iO gives you a circuit that is as hard to reverse engineer as the hardest circuit of a given size that computes the same function, but it doesn't give you a guarantee on how hard that actually is.


Can you use iO to decrypt an FHE ciphertext and return an obfuscated output only if the ciphertext matches some value?

The iO circuit would need the FHE decryption key and the hidden output. If there were exactly one FHE ciphertext for any plaintext (which can't work, otherwise you could distinguish FHE values and break it), then you could simply implement this by supplying <target FHE ciphertext> XOR <obfuscated secret>.
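To make that XOR trick concrete, here's a toy Python sketch (all names are made up, and it only "works" in the hypothetical one-ciphertext-per-plaintext world described above; the secrecy argument is one-time-pad-style and assumes the target ciphertext looks uniformly random):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_blob(target_ct: bytes, secret: bytes) -> bytes:
    # Publish blob = target_ct XOR secret. On its own the blob reveals
    # nothing about `secret` to anyone who doesn't know target_ct
    # (assuming target_ct is indistinguishable from random bytes).
    return xor_bytes(target_ct, secret)

def query(blob: bytes, candidate_ct: bytes) -> bytes:
    # Yields `secret` exactly when candidate_ct == target_ct;
    # any other candidate just produces an unrelated byte string.
    return xor_bytes(blob, candidate_ct)
```

With `blob = make_blob(target, secret)`, calling `query(blob, target)` returns `secret`, while any other input returns garbage.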

Is there a scheme where iO could do this, or is it impossible because any obfuscation would necessarily require both the FHE decryption key and the obfuscated secret?


Suppose I have a function which returns input + 42. I wish to obfuscate that function so that no one can determine how I arrived at the result, even if they themselves are executing the program.

In simple terms, why is FHE different from iO? I’ve read your reply a few times, but I don’t understand the concrete distinction. What is the “best possible result” of input + 42 obfuscatability?

Note that I am using + 42 as a trivial example; please mentally substitute with “for example, a bunch of matrix operations to compute a super secret formula result.”


Former crypto PhD student here. Here's the formal definition. FHE means that, given an encryption Enc(x) of plaintext x, you can compute Enc(f(x)). Note that without the decryption key, Enc(f(x)) is just as "useless" as Enc(x) (i.e., it looks totally random). FHE would be useful if you wanted Google to filter all of your emails for spam, but didn't want them reading your emails. Google would receive Enc(email) and compute Enc(isSpam(email)) and then forward Enc(email) and Enc(isSpam(email)) to your email client, which would then decrypt both and either send `email` to spam or not. Note that in this scenario, Google learns nothing about `email`, not even if it is spam or not.
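The Enc(x) → Enc(f(x)) property can be illustrated with a toy, completely insecure, and only additively homomorphic Python sketch (every name here is made up; real FHE schemes like Gentry's are vastly more involved):

```python
import secrets

class ToyAdditiveHE:
    """Toy additively homomorphic scheme over Z_n. NOT secure and NOT
    real FHE -- it only illustrates that someone without the key can
    compute on ciphertexts and get back Enc(x + y)."""

    def __init__(self, n=2**32):
        self.n = n
        self.key = secrets.randbelow(n)  # secret additive mask

    def enc(self, x):
        # The ciphertext carries a counter so decryption knows how
        # many copies of the mask to strip after homomorphic addition.
        return ((x + self.key) % self.n, 1)

    def add(self, c1, c2):
        # Anyone can add two ciphertexts without knowing the key.
        return ((c1[0] + c2[0]) % self.n, c1[1] + c2[1])

    def dec(self, c):
        val, count = c
        return (val - count * self.key) % self.n
```

Usage: the key holder encrypts 20 and 22, an untrusted party adds the ciphertexts, and only the key holder can decrypt the result to 42.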

Here's what iO means: given two different circuits C_0 and C_1 that both compute a function f, the obfuscations iO(C_0) and iO(C_1) are indistinguishable (to a polytime adversary). In the email example, you could have a circuit C that computes isSpam over encrypted emails, but with a decrypted result. That is, C(Enc(email)) = isSpam(email). But note that this leaks information to Google, and the security definition of iO does NOT guarantee that Google does not learn anything about your secret key (the obfuscation might not hide the secret key at all).

Definitionally, they're very, very different things and have very different security guarantees. The weird thing about iO is that the security definition doesn't immediately appear to actually secure much, so it doesn't seem very useful. However, it turns out that iO is an incredibly powerful primitive because it can be combined with other things like pseudorandom generators to build up a lot of other primitives. For example, you can combine secret-key encryption and iO to get public-key encryption in a very elegant way.


You can use FHE to encrypt functions, but the more standard use is to encrypt inputs and evaluate plaintext functions (e.g. in a two-party sort of protocol; see e.g. [1]). Regardless, the important difference from obfuscation is that once you're done computing, the output from FHE is still encrypted. Somebody has to have a key to decrypt it, and that key can most likely also be used to decrypt the function. With iO, the result comes out as plain text.

[1] https://juliacomputing.com/blog/2019/11/encrypted-machine-le...


FHE:

- I encrypt data Encrypt(key, data) = ciphertext1

- you operate on encrypted data f(ciphertext1) = ciphertext2

- I decrypt the result to obtain Decrypt(key, ciphertext2) = f(data)

for iO it's more like: I give you the algorithm to encrypt your data: some_algorithm(data) = Encrypt(key, data)

but you can't figure out what "key" is, even though you have the code and it's not some remote service


iO doesn't guarantee that the adversary can't figure out what "key" is. It turns out that this is possible, but it requires some very careful applications of a special kind of pseudorandom generator.


I haven't fully absorbed this, but isn't this kind of the opposite of FHE? In FHE you are hiding the input and output. Here you are hiding the function (or, in the PHFE application, the function and part of the input).


Hmm. I thought “fully homomorphic encryption” == “hiding the function,” not merely input and output.

Have I been using that term incorrectly? It’s possible. If so, what’s the canonical acronym for “encrypting a function”?


FHE is the ability to encrypt some data and give it to a machine, which is then able to perform sensible operations on it without decrypting it (and thus ever learning the plaintext).

The term you are looking for is "obfuscation". Either black-box obfuscation (which was shown to be impossible) or the indistinguishability obfuscation, which is what the article talks about.


You can encrypt a function with FHE, but the applications of that are limited because the output is encrypted as well. We've had provably secure FHE since Gentry's 2009 paper.

Obfuscation roughly means "encrypting a function." But here be dragons: since the adversary gets non-black-box access to the implementation of the function (a circuit), and can also learn as many (input, output = f(input)) pairs as they want, it becomes impossible to say that the adversary cannot learn anything else about f in the general case (intuitively, consider a higher-order function f: you could learn a lot about f by computing f(f)).


Reading the abstract of the "paper" (book) that you linked, it looks like what they are attempting to prove is that there are classes of functions for which obfuscation is impossible. My guess is that iO could still be attempted by simply guarding against these classes (which may even be difficult to create in the real world).

However, that book does look daunting. I am curious if the book hints at more potential problems than the abstract indicates.


Unlocked paper: https://moscow.sci-hub.se/2256/6ffcc08f6e40c37f84b5287edf9a5...

I wish I had endless free time. It would be so, so fun to dive into this. Because you’re right — I got the sense that there were more potential problems than was previously appreciated. It would be fascinating to know exactly what problems there are, and whether they’re fundamental problems or “dodgeable.”


Some of the use cases presented sound like they'd be perfect for DRM.


Here is a short list of the downsides. I’m sure there are more. Tech is always double-edged, it seems.

- presenting a false source as corresponding to a claimed binary build of it

- allowing pervasive backdoors in binary blobs that resist scrutiny by security researchers


I don't think that game developers/media companies/etc care about the two things that you have mentioned. All they want is un-crackable DRM.


Non-interactive media will always have the analog hole; no getting around that.

https://en.wikipedia.org/wiki/Analog_hole


iO isn't going to be used in video games for a long time, if ever. For one, even if it had no overhead, it'd completely ruin any optimizations that the devs needed for the game to run in the real world. But likely iO will add a huge overhead, degrading performance by at least a quadratic factor or more. iO also doesn't consider things like syscalls or hardware instructions, which might break the security model anyway.

Video/music DRM wouldn't (couldn't?) change from what it is now: the plaintext bits have to be surfaced to the user at some level (so that it can be displayed/played), so they can simply be ripped at that point.


Not even theoretically possible, as far as I can tell. If iO is protecting your decryption key, then any indistinguishable circuit would also include your decryption key in some form.

I'm having trouble thinking of cases where iO gives you any actual guarantees that matter, because of this property.


If you're interested, check out the paper called "How to Use Indistinguishability Obfuscation" by Sahai and Waters. They were able to prove some very surprising things -- including how to hide a private key by way of iO.


Why would you need to use iO on everything? Wouldn't it be enough to use it on computationally inexpensive but vital parts?


I feel like I'm in way over my head here, but how is it even possible to have mathematically uncrackable DRM? Surely at the end it's going to come down to a comparison instruction (or a few thousand of them), no?


So you want to modify those instructions to crack the DRM? Which of them? There are a lot of instructions (say, your few thousand) and even more combinations of them; you can't find another combination of instructions that would work.


I guess my problem is that I don't see a fundamental difference between this approach and traditional obfuscators/packers/virtualizers/whatever. They can also bloat a simple comparison (for example) into many seemingly useless but interconnected instructions. I just don't understand how it's possible to make a DRM scheme that can't be cracked.


user identification and telemetry come to mind


I think the second one applies but I don't see how indistinguishability obfuscation could allow for the first.


(sorry for late reply.) There was a bit in the OP that said in passing, "[f]or instance, it enables the creation of “deniable” encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent".

So candidate plaintexts that are plausible sources for the obfuscated blob appears to be a capability of this approach.


Thanks, now I see the connection.

As there's a whole other thread going on right now about deniable encryption (in the context of e-mail), I think it's worth pointing out that the recipient still typically knows whether or not a particular encryption method is deniable. Such methods would presumably not be a good idea to use for software distribution.

In other words, the deniability is an intentional construction that could be used if parties feel they have a reason to use it (like in a messaging application, as tptacek and Matthew Green are arguing for). However, it's not something that automatically applies to all signature methods merely because of the existence of indistinguishability obfuscation. A software signature method that is constructed to have a deniability property is probably not one that a software user should accept. :-)

In my view, which I've expressed elsewhere, software releases should move in the exact opposite direction by including advertisements of their existence in a public log (whether that's a blockchain, a widely-distributed git repository, a Certificate Transparency log, or whatever).


I'm no crypto expert and trying to follow along, but can't make this leap:

> But iO is stronger than it sounds. For example, suppose you have a program that carries out some task related to your bank account, but the program contains your unencrypted password, making you vulnerable to anyone who gets hold of the program. Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password. After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.

Why does existence of another mask the secret? Even if you couldn't distinguish the programs, couldn't you just try both possible secrets?


Suppose there exist two programs that do the same task, with the following properties:

- Knowing program A allows an attacker to recover your password.

- Program B does not contain any information about your password.

Eve knows that these programs exist and that they have these properties. Eve is now given two obfuscated programs O1 and O2. They were produced by applying iO to A and B, but Eve knows neither A nor B, nor how O1 and O2 map to O(A) and O(B).

If it were possible for Eve to somehow obtain your password from, say (without loss of generality), O1 in any way, then Eve could deduce that O1 must be O(A). It could not be O(B) because neither O nor B contain information about your password.

Since we assert that indistinguishability obfuscation makes it impossible to draw this kind of conclusion, it must be impossible to obtain your password from O(A) if B exists.


I know nothing about the whole topic but could you explain:

Why does any of those programs contain the password anyway if it is not needed for achieving the goal?

If it is not needed, flow analysis, tree shaking and const evaluation could get rid of it.

If it is needed, how would the program not containing it achieve the goal?


Maybe the password is not the best example. I am also not an expert and just took it from the article.

As someone explained elsethread:

> iO gives you a circuit that is as hard to reverse engineer as the hardest circuit of a given size that computes the same function, but it doesn't give you a guarantee on how hard that actually is

https://news.ycombinator.com/item?id=25048220

To answer your thoughts though:

> If it is not needed, flow analysis, tree shaking and const evaluation could get rid of it.

Those are methods you could try to apply, but there is no reason why this would be sufficient in the general case.

> Why does any of those programs contain the password anyway if it is not needed for achieving the goal?

Maybe the problem your program is trying to solve is significantly easier given the knowledge of a secret. As long as you know that it's possible (but potentially harder) to solve the same problem without the secret, with iO you can distribute your obfuscated binary and nobody can recover the secret.

For example (if I understand correctly), if your secret program contained an oracle to reverse SHA256 hashes[0], then applying iO to it you could be sure that nobody can recover this secret oracle, because applying iO to a different program that uses brute-force search should yield an indistinguishable obfuscated program. I doubt that's how it actually works (because oracle and brute force are probably not the same "circuit hardness"), but I hope you see the point.

[0] Before someone complains about lack of bijectivity, read this as "find the smallest integer/bitstring that has the same hash."


The pre-requisite is that there are at least two possible implementations of the same program- this seems to imply that for every possible input, program A and program B must have the exact same output. Then, it becomes impossible to tell whether the obfuscated code comes from program A or program B. Did I understand this correctly?

But this in general seems relevant only for algorithms- two algorithms can produce, provably, the same output for each of their inputs, one being better than the other in terms of performance. Fine. But then the performance of the encrypted version could actually point to the used source code, so unless it's obfuscated away it still can be used to determine the origin of the obfuscated code. But then if the performance is also obfuscated, what reasons are left to encrypt the code?


You only need to prove that the other program is possible, you don't need to actually write it (or even know how to write it). In other words, such a proof can even be non-constructive.


> If it is not needed, flow analysis, tree shaking and const evaluation could get rid of it.

That's harder than it sounds. You can construct simple, iterative programs like `while(f(i++)){} return password;` whose behavior depends on hard problems like the Riemann Hypothesis (or easier problems which automated theorem provers still struggle with). Tree-shaking will be unfruitful, but if you can otherwise prove that an equivalent program exists that doesn't leak the password then iO gives an implementation.
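A toy Python analogue of that pattern (names made up; a bounded Goldbach search stands in for a predicate that no simple static analysis can decide without effectively running the search itself):

```python
def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_holds(n):
    # True if the even number n >= 4 is a sum of two primes.
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

def guarded_secret(limit=1000):
    # The `return password` line is live code only if no even number
    # up to `limit` violates Goldbach's conjecture. A tree-shaker
    # can't prove it dead (or live) without doing the math.
    password = "hunter2"  # hypothetical secret
    n = 4
    while n <= limit:
        if not goldbach_holds(n):
            return None
        n += 2
    return password
```

Of course this bounded version is decidable in principle; the comment's unbounded `while(f(i++))` version makes dead-code elimination equivalent to settling the conjecture itself.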

One interesting additional property is that you don't need to come up with the program which doesn't leak a password; you just need to prove that one exists, which potentially might be substantially easier.


The important point here is that iO, once fully realized, can be used to derive other cryptographic primitives. Fully homomorphic encryption has been doing the rounds for a while now, but it's too slow for practical use - I wonder if this will change that.


This is confusing:

>For decades, computer scientists wondered if there is any secure, all-encompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets.

Obfuscate executable code? Somehow I don't think the original paper is talking about that.

<skims through the beginning of the paper>

They are?

But... how do you execute it then?


In short: Math.

To elaborate:

Sure, your computer has to know all the numbers and instructions it executes. But that doesn't require your computer or you to understand what those numbers or instructions mean.

The meaning can be guarded by encryption. For example, if you are given a ROT13 encrypted text, you could uppercase all the letters without understanding the message. You perform an operation on the ciphertext without being able to decipher it, and the result (once decrypted) is exactly the same as if the operation had been applied to the plaintext. This is what fully homomorphic encryption (FHE) is about.
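That ROT13 example is directly runnable; a tiny sketch using Python's built-in rot_13 codec as the stand-in "encryption" (uppercasing commutes with ROT13, so operating on the ciphertext gives the same result as operating on the plaintext):

```python
import codecs

def rot13(s):
    # ROT13 is its own inverse: encrypting twice returns the original.
    return codecs.encode(s, "rot13")

msg = "attack at dawn"
ct = rot13(msg)            # "nggnpx ng qnja" -- unreadable ciphertext
upper_ct = ct.upper()      # the operation is performed on the ciphertext
result = rot13(upper_ct)   # decrypting yields the uppercased plaintext
assert result == msg.upper()
```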

Going further, you can also have the effect of the instructions themselves be "hidden" by encryption. As a very simple example, imagine a programming language where all you can do is multiply numbers with constants (and there are different "instructions" for multiplying with different numbers). Your unencrypted program might be "multiply by 3, then multiply by 5, then multiply by 2". In encrypted form, this might become "multiply by 21, then multiply by 35, then multiply by 14" - that is, every factor is itself multiplied by seven, leading to different instructions. Someone running this encrypted program on (possibly homomorphically encrypted) data will not be able to figure out your original program (unless they know the "key" - that every factor was multiplied by seven). But when you get back the result, all you need to do is divide it by 7x7x7 to get back the output of the original program. And all the "difficult" computations (the actual instructions) have been performed on somebody else's machine without them knowing your original program (or your plaintext data, if the entire thing is also fully homomorphic).

I hope this example somewhat illustrates the concept (even though it is of course flawed and entirely trivial). Just imagine significantly more and better math instead of the simple operations (e.g. multiplication as only instruction, "decryption" by division etc.) used above. And of course, you can't just slap FHE on top independently, it needs to be built into the system. But the idea is the same.
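The multiply-by-seven example above, written out as a literal (and, as stated, entirely insecure and trivial) Python sketch with made-up names:

```python
SECRET_FACTOR = 7  # the "key"; only the program's owner knows it

def obfuscate(program):
    # A "program" here is just a list of multiplier instructions;
    # blinding each factor by the key turns [3, 5, 2] into [21, 35, 14].
    return [op * SECRET_FACTOR for op in program]

def run(program, x):
    # An untrusted machine executes the blinded instructions without
    # learning the original program.
    for op in program:
        x *= op
    return x

def recover_result(y, num_instructions):
    # The owner strips one factor of 7 per executed instruction.
    return y // SECRET_FACTOR ** num_instructions
```

So `recover_result(run(obfuscate([3, 5, 2]), 1), 3)` equals `run([3, 5, 2], 1)`, even though the executing machine only ever saw `[21, 35, 14]`.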


Looks smart theoretically. Practically, I suppose it would be useful in a bunch of corner cases where people are extra paranoid about their IP. Or at least you could make money selling this to people who are extra paranoid about their IP.

For normal utility programs, I don't know. I mean, the data has to be unencrypted at some point for a human to make sense of it. What use is a fully obfuscated email client if you can't read your mail with it?


The point is that you can send your data and program (both encrypted) off to the cloud to run on 100% untrusted devices and get back a result that only you can possibly decipher. But all the expensive computation happened outside of systems you control.


Ah now it makes sense. Thanks.


> Obfuscate executable code? Somehow I don't think the original paper is talking about that.

Why not?

Technically it's talking about polynomial-sized circuits, but that is roughly the same thing.

To quote the paper, the property they are looking for:

"(T, γ)-Indistinguishability: For every two ensembles {C_{0,λ}}, {C_{1,λ}} of polynomial-sized circuits that have the same size, input length, and output length, and are functionally equivalent, that is, ∀λ, C_{0,λ}(x) = C_{1,λ}(x) for every input x, the following distributions are (T, γ)-indistinguishable: {iO(1^λ, C_{0,λ})}, {iO(1^λ, C_{1,λ})}"


> But... how do you execute it then?

You still end up with executable code - I think the key thing is you get it to a form where you can't tell the difference between your function (after obfuscation) and any other function that gives the same output for the same input (after obfuscation). So you could still try giving it every possible input to figure out what it does, but if it takes even a couple of 64-bit integers, you're going to spend the rest of your life doing that.


Welcome to the joys of fully homomorphic encryption. Enjoy your stay. And wonder why more people aren’t talking about it.

Because you can execute it!


> Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password.

I wonder if this will be the Achilles' heel. If there were a way to perform a task without needing a secret, you would not be using homomorphic encryption; you could just do regular computation, since there is no secret. However, if the only way to perform the computation is with the secret, then indistinguishability obfuscation won't help you. To me it seems that iO is useful only for computations that didn't need homomorphic encryption in the first place, because they already had a way to do the computation without the secret.


There have already been immense applications of iO in crypto, constructing things we didn't know how to construct from plain assumptions. This includes primitives which seem, at first glance, not to have a "secret-less" equivalent.


About an hour ago, the IAS uploaded Huijia (Rachel) Lin's talk on this:

https://youtu.be/oBhQ58ntjdI



Is this basically the learning parity with noise problem, but instead of a fixed-size vector, the noisy output is generated by a PRNG with a pre-determined secret seed? How can this be used for authentication or encryption?


@dang, it may be good to de-editorialize the title of this article. Best to simply say that indistinguishability obfuscation has been achieved.


He usually prefers the subtitle:

A cryptographic master tool called indistinguishability obfuscation has for years seemed too good to be true. Three researchers have figured out that it can work.

Reduced to 80 characters it is:

12345678901234567890123456789012345678901234567890123456789012345678901234567890

Three researchers have figured out how indistinguishability obfuscation can work


The current title ("iO from Well-Founded Assumptions") is the title of the paper.


Yeah, that's much better now - it definitely wasn't the case when it was submitted, though!


I remember that the original title was the title of the press article "Computer Scientists Achieve ‘Crown Jewel’ of Cryptography".

Sometimes the mods change them.


I can see the malware/ransomware business really wanting this. It would make it impossible to circumvent or decrypt their warez.


It's already impossible if they implement (plain ole' public-key) encryption properly.


Doesn't seem like they deal with the 10^6× slowdowns you encounter with HE. That seems inherent to all the obfuscating computation.


Nit: Hiding what the heck you're talking about doesn't make for better journalism.

If you're going to spend 9 paragraphs talking about the controversy of whether such a thing is possible or not, dedicate at least 1 sentence at the start to explain it. Otherwise it's just confusing because as a reader, I have no idea why I should care or what's even worth caring about here.

Recommendation to anyone who hasn't read the article yet: start with the part called "The Crown Jewel".


Yeah I will wait for a more digestible summary of what this actually does. This is one of the OLD reference papers: https://eprint.iacr.org/2016/257.pdf

I guess once I figure out what bootstrapping iO for C means, maybe I'll have a chance of understanding the paper. But I don't even know what bootstrapping means in this context.

I have no idea what:

"Indistinguishability obfuscation" is (it seems like some kind of mathematical inverse to Monty Hall but I doubt that's a clean take)

"polynomial size circuits" are (are they just talking about the number of variables in an equation? What does "size" and "circuits" mean here?)

"constant-degree graded encoding schemes" are (blu... bluh... uh...)

or what any of the above mean "in the plain model" (or what even is a "plain model").

That's the first 1/3rd of the first sentence of the abstract.

I don't have a problem with that, clearly it's an extremely specialised field so I don't mind being ignorant here, but the OP article doesn't really seem to help at all with the understanding.

The OP article has a nice picture with hidden lock boxes further down the page but this just feels like the whole quantum entanglement mystery all over again, where the analogy is more confusing than the observed phenomena.

I'm looking at a bunch of replies here like "isn't this just simply [x] with [slight y difference]" and making huge calls on the impact to society and I'm here wondering if I'm the only dumb person who doesn't get it.

I'll just add this is extremely interesting but it feels like this is going to take a few years for me to absorb what it means, unless I focus all my energy on it, which I'm not convinced I should just yet.


Not just a nit - I think this is a major point. It is my biggest pet peeve, and I immediately back off from articles written in such a manner.

If writers write a good introduction paragraph, it gauges my interest and I am more likely to read.

Otherwise, I don't read out of spite or protest.


Who was the famous author that recommends throwing out your first chapter when writing a novel?

That it's important for you to write as an author, but it doesn't serve the reader well.


I don't think this is fair. This is more of a magazine article than a newspaper article. It's totally valid to draw a general reader in with answering the question of why to even care about this problem in the first place, and then fill in the technical details, which are, in any case, going to take some space to explain, given how complex the idea may be.


It’s because journalists have no clue what they’re talking about. This is true for all journalism but you only notice it when it’s about something you’re familiar with.


If you are going to diss the journalist, you should back it up with specific examples.


This particular journalist has qualifications that suggest she probably does know what she's talking about.

From https://www.quantamagazine.org/authors/erica-klarreich/:

> About the author

> Erica Klarreich has been writing about mathematics and science for more than 15 years. She has a doctorate in mathematics from Stony Brook University and is a graduate of the Science Communication Program at the University of California, Santa Cruz.


Gell-Mann amnesia has been widely discussed on HN and ought to be reason enough for the OP to make the claim.


There is a big difference between lots of journalism being bad and this specific piece being bad.

90% of everything is shit, but if someone posted their app in a Show HN thread and I commented that it was shit on the basis that statistically most apps are shit, I would be rightfully downvoted to hell. This is no different.


I mean journalists in general.


Just in time for the DoJ and the EU to tell them to intentionally cripple it for law enforcement purposes.


> For decades, computer scientists wondered if there is any secure, all-encompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets.

As someone who works in cryptography, I'd say this isn't "The Crown Jewel" of cryptography.

Cryptography to me is about helping queer people live their best life even in countries where they're hated.

Cryptography to me is about providing assurance that the software you're running was produced by the people you thought produced it, and that the built binary is reproducible from the source code, which technical people can also inspect to assert the absence of backdoors.

Obfuscation and copy-protection are non-goals for me. Fully homomorphic encryption has uses in machine learning and cloud services, but I find them at odds with the goals I care about.


This is a pretty cynical take. I don't think we can possibly foresee all the advances that practical fully homomorphic encryption could provide. It could make it easier for under siege communities in repressed 3rd world countries to run fully online programs that can't be shutdown or blocked. There are numerous ways in which it could be helpful or harmful.

The point is that we don't know what's going to come out ahead of time.


> It could make it easier for under siege communities in repressed 3rd world countries to run fully online programs that can't be shutdown or blocked.

How can you expect people being targeted by hostile nation states to trust obfuscated closed-source software to protect them from said nation states?

Binary obfuscation techniques to evade signature detection are a dime a dozen. The only time this intersects with homomorphic encryption is when you're dealing with DRM and other privacy-adverse technologies.


> Binary obfuscation techniques to evade signature detection are a dime a dozen.

Fully encrypted programs that can perform computations while running on an untrusted platform are not. I think you may have a little more reading to do.


Fully encrypted programs can do things without the end user having any way of verifying that they're not backdoored or behaving otherwise maliciously.

(As a reminder, I'm not saying that FHE is inherently harmful. I take issue with their marketing spin on FHE as the "Crown Jewel" is all. The motivations for companies to develop FHE are almost always in line with DRM, which is, strictly speaking, adverse to user freedom.)


It would be good to clarify that this article (and quanta in general) is about the theoretical side of cryptography, which often has little overlap with applications of cryptography, especially when you're at the cutting edge of theoretical research like this work is.

However, that doesn't mean that iO isn't useful for tasks that you might care about. For example,

* deniable encryption (https://eprint.iacr.org/2013/454.pdf),

* better constructions of post-quantum NIZKs (which can help with ensuring that a remote server is behaving honestly),

* proving results about the hardness of finding Nash equilibria (https://ieeexplore.ieee.org/abstract/document/7354468)


Arguably aren't those security engineering goals more than cryptography goals?


I'm really not interested in splitting hairs over taxonomy.

Real-world cryptography and security engineering intersect a lot.


I would call both of those the same thing, and neither of them is cryptography.


Is your argument "real-world cryptography isn't cryptography", or am I misunderstanding?

EDIT: I can't reply due to rate-limiting, so here's my reply to your reply below:

-----

> My argument is "real world cryptography" is a euphemism for security engineering for people who want to sound more badass.

It's not a euphemism.

Take a look at the accepted papers at the aptly named Real World Cryptography symposiums: https://rwc.iacr.org/2020/program.html

Breaks on OCB2. Lattice attacks on TPMs. MPC research. Privacy-preserving telemetry.

I'll accept that you may personally have only seen security engineering work under the guise of "real-world cryptography", but to call it a euphemism is erroneous.

-----

> Well if someone wrote an article about the Poincaré conjecture calling it the "crown jewel" of math, and someone else said "actually I consider the primary goal of math to be in helping build low-cost housing and other practical issues," then they're talking about two different things. Both are math, but to say that the Poincaré conjecture isn't important to math because you use math primarily for different things is misleading.

All of the proposals for homomorphic encryption I've seen to date (which, for transparency, were mostly in the realm of encrypted databases) don't provide any protection against chosen-ciphertext attacks. If [F]HE isn't IND-CCA2 secure, which is the minimum bar for encryption that should be used outside of academic settings, then calling it the crown jewel only serves to hype up a technology that will fail to protect users.
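To make the CCA concern concrete, here's a minimal sketch (using a hypothetical toy scheme, not any real FHE proposal): if encryption is additively homomorphic, a chosen-ciphertext adversary can maul the challenge ciphertext into a *different* ciphertext, query the decryption oracle on it, and undo the mauling on the plaintext:

```python
# Toy additively homomorphic scheme: Enc(m) = (m + k) mod N with secret k.
# Hypothetical minimal construction, purely to show why homomorphism
# is incompatible with the IND-CCA2 game.
N = 2**32
key = 123456789  # the secret key

def enc(m):
    return (m + key) % N

def dec(c):
    return (c - key) % N

# Challenge ciphertext; the CCA2 adversary may not query it directly.
challenge = enc(42)

# Attack: shift the ciphertext by 1 (a new, legal oracle query),
# decrypt, then undo the shift on the plaintext.
mauled = (challenge + 1) % N
assert mauled != challenge
recovered = (dec(mauled) - 1) % N
assert recovered == 42  # full plaintext recovery
```

The same trick works against any scheme whose ciphertexts can be transformed in a way that predictably transforms the plaintext, which is the whole point of homomorphic encryption.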


Well if someone wrote an article about the Poincaré conjecture calling it the "crown jewel" of math, and someone else said "actually I consider the primary goal of math to be in helping build low-cost housing and other practical issues," then they're talking about two different things. Both are math, but to say that the Poincaré conjecture isn't important to math because you use math primarily for different things is misleading.


My argument is "real world cryptography" is a euphemism for security engineering for people who want to sound more badass.

Making an app with encryption has about as much to do with cryptography as making a React app has to do with Turing machines.

Don't get me wrong, it's important work, but let's not pretend it's remotely the same type of work.


FHE can be made IND-CCA secure in practice (i.e. for specific systems), please stop spreading misinformation. It's impossible in the general case for obvious reasons. Also, TFA isn't about FHE, it's about iO.

I know and have worked with the organizers of RWC and they've done a lot of the kind of work that you're dismissing here (which would be like telling Diffie and Hellman to piss off in the 70s because public-key crypto isn't IND-CCA2 secure without additional assumptions).


> I know and have worked with the organizers of RWC and they've done a lot of the kind of work that you're dismissing here (which would be like telling Diffie and Hellman to piss off in the 70s because public-key crypto isn't IND-CCA2 secure without additional assumptions).

I'm decrying the sensationalist writing in this news story, especially the headline, not dismissing anyone's work.

Maybe you should ask your RWC organizer friends whether your take is proportionate to what I'm doing.


I'm a former cryptographer and the headline and news story aren't sensationalist by any means. iO has been considered the "Crown Jewel" of cryptography (by cryptographers) ever since the 2013 iO candidate by Sahai, Waters, Gentry, et al, and Sahai/Waters' followup "How to Use Indistinguishability Obfuscation." From a cryptographer's perspective, being able to use iO is like having a superpower.


Re: IND-CCA2 security for FHE; you can have constructions which achieve it in suitably restricted settings (e.g. from iO: https://cybersecurity.springeropen.com/articles/10.1186/s424...).

But in general the goal of FHE is at odds with CCA2 security. The latter requires non-malleability, but the point of FHE is to malleate the ciphertext in useful ways.
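A toy textbook Paillier sketch shows both sides of this tension in one place (insecure illustrative parameters, not a real FHE scheme — Paillier is only additively homomorphic): multiplying ciphertexts adds plaintexts, and that same property lets anyone without the key shift the plaintext by a known delta, which is exactly the malleability CCA2 forbids.

```python
import math
import random

# Textbook Paillier with tiny, insecure parameters (illustration only).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse mod n

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(5), enc(7)

# The useful homomorphism: ciphertext product = plaintext sum.
assert dec((c1 * c2) % n2) == 12

# The flip side: anyone can malleate c1 to shift its plaintext by 3
# without ever seeing the secret key -- a textbook CCA2 violation.
assert dec((c1 * enc(3)) % n2) == 8
```

So "fix FHE by making it non-malleable" is a contradiction in terms for the general case; the constructions that achieve CCA-style guarantees do so by restricting *who* can malleate or *which* transformations are accepted.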



