That’s a deep straw man of what I said, to the point of being non-constructive mockery. You’re being dishonest to claim I suggested trusting the spy agencies.
Rather, I pointed out that they have a real mission, and they’re going to spend effort accomplishing it. But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year. So, if we create a mechanism by which they can do that without owning every device, we can align our goal of protecting most devices with theirs of owning a few.
This in turn increases security for nearly everyone, because powerful agencies no longer have the same motivation to cause harm — and might be persuaded to help. After all, it’s in their interest to prevent large remote compromises — just not a higher priority than maintaining their own access.
Further, the best way to actually restrain them is through a change in government policy, which will only happen when the government believes there’s an alternative solution.
> But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year.
Things like Room 641A show that the government doesn't even need to engage the courts to collect data on millions of people — and further, that they are not limiting their collection efforts to a few hundred or a few thousand devices a year.
> But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year.
This is patently untrue. Snowden and many others have made it absolutely, unambiguously clear that national spy agencies do wish to gather and collect every bit of information possible about their own law-abiding citizens, as well as those who are not.
The PRISM program alone shows the NSA's intent to collect as much information as possible. This, in conjunction with the Utah Data Center, makes it pretty apparent what the goal was.
The ones that courts with jurisdiction over the relevant company were willing to issue warrants on behalf of — that is, those our regular legal and political systems decide on.
For a big multinational like Apple, that’s probably a fair number. But it’s also harder to hide that they’re doing it, and lets us bring political pressure on them for their misdeeds. In the end, it’ll be the major powers who can — the US, Europe, China, etc.
My point is that it’s never going to be the case that technologists get to unilaterally decide that for all of society.
So you support China being able to hack any phone globally with a warrant in a Chinese court? Isn't that literally what Huawei are accused of facilitating?
No, I support China being able to spend an appreciable amount of time to crack each phone they have physical possession of, following a court order. I never suggested the ability for remote compromise (what Huawei is accused of), and my exact point is that we can create a cost to cracking each phone — in hashing power and time spent — if we compromise on the topic. No such cost exists now, because they achieve access via other means.
The combination of requiring physical possession and appreciable hashing time per crack is a two-layered response to mass surveillance. That’s the whole basis of the compromise I’m proposing: calling their bluff, and enabling warrant cracks as political cover to shut down mass surveillance and cracking as unnecessary.
The "hashing time per crack" reminds me of the old 40-bit export encryption, which was mandated to have short, crackable keys. Or even the previous attempt at doing this with the "LEAF". Are you proposing some sort of work function within the crypto enclave? I.e., you leave the device brute-forcing for a few days and it spits out the key?
The "appreciable time" is going to be subject to constant downward pressure, both politically and technologically.
> political cover to shut down mass surveillance and compromise as unnecessary
Mass surveillance is not going to go away without huge cultural reform of the security services. They don't take concessions.
I was proposing each phone be loaded with a unique “export key” while being built. For about 1 dollar of GPU time, you can force an attacker to take a month of continuous hashing on similarly performing hardware. (Okay, people with fancy ASICs could get that down to a week... but that still seems like a big win.)
I agree that it would be a consistent political battle — but it’s already that, and it’s clear people with power are getting fed up with technologists attempting to impose their ideology without compromising with other social needs.
That’s what prompts laws about wiretapping or mandated backdoors.
I haven’t heard the same arguments about safes that I do about encryption — and the reason is that there’s an understood bypass if they gain access and have the time and money.
By compromising and allowing targeted cracking, we split the faction pushing for backdoored phones, solve most of the issues, and give ourselves a viable path to accomplish something rather than being forced into backing down or completely compromising systems. Further, by being willing to compromise, we gain a voice on shaping how that discussion looks — rather than largely being excluded.
>Mass surveillance is not going to go away without huge cultural reform of the security services. They don't take concessions.
FakeComments was mostly talking about targeted surveillance, but I agree with him/her in spirit, since I believe that mass surveillance is not going to go away. Ever. So you can either yell futilely into the wind as it happens over your objections, up to, including, and perhaps going beyond a swarm of camera-bearing networked nanodrones coating the planet, or you can try to nudge it towards happening on slightly preferable terms.
It's not going to happen on "slightly preferable terms". Either those with political power in society want surveillance, or they don't. If they do, they'll take all that they can get, and the only thing cooperating with them will do is make it all happen faster - the moment you provide a "compromise" surveillance scheme to them on a silver platter is the moment when they'll start devising how to get around the remaining limits.
So you simultaneously think the other side is so powerful that you cannot even compromise with them without being pushed back further, yet also so weak that you believe you can achieve a total victory without conceding any points? How do you reconcile that? Or do you just resign yourself to fighting for a purer goal, knowing you will lose without achieving it regardless?
Total victory, no. Total victory would be strong encryption free to use by everybody without fear of persecution. That looks increasingly unlikely.
However, I do not believe they can effectively enforce any encryption bans. Thus, people who need encryption will still have access to it. And as far as I am concerned, my duty (as a software engineer) is to ensure that it remains the case, even if using it becomes illegal.
That attitude sounds like a myopic focus on the software at the expense of everything else.
Say that in the year 2160 you have perfect, unbreakable encryption on your pocket computer. How will you use it?
With a touchscreen or keyboard, allowing microscopic cameras to see you input it or read the thermal signatures off your input device afterwards? With your face or voice that are continuously being recorded from hundreds if not thousands of angles? Plugging in the future equivalent of a yubikey that someone can just steal from you? You're lucky if fMRIs don't become good enough to just pluck the information out of your brain as you think it. Of course, the master key is most important but all of these concerns apply to the data being protected as well.
The real thing that can never be effectively enforced is privacy. People who need encryption can have access to it or not. It matters not one whit. Our duty (as people) is to push society in a direction where this change feels less catastrophic, not to fight a Caligulan war against the sea.
> With a touchscreen or keyboard, allowing microscopic cameras to see you input it or read the thermal signatures off your input device afterwards? With your face or voice that are continuously being recorded from hundreds if not thousands of angles? Plugging in the future equivalent of a yubikey that someone can just steal from you? You're lucky if fMRIs don't become good enough to just pluck the information out of your brain as you think it.
You're basically describing a totalitarian Panopticon. A society like that should be fought by all means available, including physical force, so the question of legality of encryption is somewhat moot at that point.
>"It is the common fate of the indolent to see their rights become a prey to the active. The condition upon which God hath given liberty to man is eternal vigilance; which condition if he break, servitude is at once the consequence of his crime and the punishment of his guilt." – John Philpot Curran: Speech upon the Right of Election for Lord Mayor of Dublin, 1790. (Speeches. Dublin, 1808.) as quoted in Bartlett's Familiar Quotations
You create a hash chain, then use the final result as an encryption key of your secret (in this case, the key for the data), then store only the start of the chain and encrypted secret.
The only way to retrieve the secret is to recompute all the hashes, from the start, to recreate the key and decrypt the data.
So it’s secure unless you believe there’s a weakness in the underlying encryption or hash function.
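A minimal sketch of that scheme (Python, with toy parameters; SHA-256 is the hash, and a one-time XOR mask stands in for whatever real symmetric cipher a deployment would use):

```python
import hashlib
import secrets

def hash_chain(seed: bytes, length: int) -> bytes:
    """Apply SHA-256 `length` times in sequence -- inherently serial work."""
    h = seed
    for _ in range(length):
        h = hashlib.sha256(h).digest()
    return h

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Creator: run the chain once, use its end to mask the real secret.
seed = secrets.token_bytes(32)
chain_key = hash_chain(seed, 100_000)     # toy length; a real lock would use far more
secret = secrets.token_bytes(32)          # e.g. the data-encryption key
record = (seed, xor(secret, chain_key))   # store only the chain start + masked secret

# Recovery: the only way back is to recompute every hash from the start.
seed2, masked = record
recovered = xor(masked, hash_chain(seed2, 100_000))
assert recovered == secret
```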
Further, you can parallelize this via encrypting the start of chains with other chains — giving a significant advantage to the chain creator: they can do 1000 chains in parallel, but unlocking requires decrypting them sequentially. At that ratio, if you want decryption to take a month of steady hashing, you need only do a little under 1 hour of hashing yourself. 1 hour of 1 GPU is about a dollar of expense, and has more than 1000 parallel tracks.
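The creator/unlocker asymmetry can be sketched like this (Python; toy segment counts — the point is that the `ends` list can be computed on as many cores as you have, while the unlocker must walk the segments strictly in order, since each seed is masked by the previous segment's output):

```python
import hashlib
import secrets

def hash_chain(seed: bytes, length: int) -> bytes:
    h = seed
    for _ in range(length):
        h = hashlib.sha256(h).digest()
    return h

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

SEGMENTS, SEG_LEN = 8, 10_000   # toy numbers; think 1000 segments on a GPU

# Creator: all segments hash in parallel, so wall-clock cost is one segment.
seeds = [secrets.token_bytes(32) for _ in range(SEGMENTS)]
ends = [hash_chain(s, SEG_LEN) for s in seeds]          # parallelizable step
record = [seeds[0]] + [xor(seeds[i], ends[i - 1]) for i in range(1, SEGMENTS)]
final_key = ends[-1]

# Unlocker: each seed only appears after finishing the previous segment,
# forcing the work to be sequential: SEGMENTS * SEG_LEN hashes in a row.
h = record[0]
for i in range(SEGMENTS):
    out = hash_chain(h, SEG_LEN)
    if i + 1 < SEGMENTS:
        h = xor(record[i + 1], out)
assert out == final_key
```

At a 1000:1 segment count, an hour of parallel hashing for the creator forces roughly 1000 hours — about 40 days — of serial hashing on comparable hardware for the unlocker.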
My suggestion would be that Apple create a chain for each phone and load it with that phone-specific wrapping key, which the phone uses to wrap the actual encryption key before returning it. The only way to decrypt that key is to get the necessary chain information from Apple — and the SE requires a signed request before it will emit the wrapped key at all.
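Under that proposal, the per-device flow might look like the sketch below (Python; `emit_wrapped_key` and all the names are invented for illustration — this is a hypothetical design, not Apple's actual one, and XOR stands in for a real cipher):

```python
import hashlib
import secrets

def hash_chain(seed: bytes, length: int) -> bytes:
    h = seed
    for _ in range(length):
        h = hashlib.sha256(h).digest()
    return h

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

CHAIN_LEN = 50_000  # toy; in practice tuned so recovery takes about a month

# Manufacturer: a per-device chain, run once at build time; retains only the seed.
factory_seed = secrets.token_bytes(32)
export_key = hash_chain(factory_seed, CHAIN_LEN)  # burned into the device

# Device: holds the export key and its own data key; never sees the seed.
device_data_key = secrets.token_bytes(32)

def emit_wrapped_key(request_signed: bool):
    """Stand-in for the Secure Enclave checking a signed warrant request."""
    if not request_signed:
        return None
    return xor(device_data_key, export_key)

# Investigator: obtains the seed under court order, then must redo the
# entire chain before the wrapped key can be unmasked.
wrapped = emit_wrapped_key(request_signed=True)
recovered = xor(wrapped, hash_chain(factory_seed, CHAIN_LEN))
assert recovered == device_data_key
```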
Yeah, uhh, hey — I appreciate your enthusiasm, but this literally is not how modern public-key encryption works, or will ever be changed to work. No cipher in general use has any sort of realistic time bound on being cracked or computed.
I appreciate you trying to correct me, but I never said this was an instance of “public-key encryption”, whichever version you mean.
This is a scheme by which you can intentionally create a key that can be re-generated in a fixed amount of time, and use it as part of normal symmetric encryption to protect a secret. One usage is creating intentionally crackable schemes, such as protecting other signing keys in a way you can later crack if you need to. This allows a device, such as a phone, to emit a masked secret that we have cryptographic guarantees will still take time to recover.
Hash chains for time locking are a studied mechanism, and though they predate cryptocurrencies, they’re deployed in several kinds of applications there. A second usage is storing paper copies of master signing keys in a safe: even in the event of a robbery, the key cannot be exposed before a certain period of time has passed — giving you time to rekey your system. (Generally, people use multipart keys instead, because they’re less cumbersome to recover; however, if you only have one secure location, multipart keys don’t help. Hash chains still do.)
So it’s literally how (part of) modern cryptography works.
Better yet, scrap the network of cameras and employ a load of uniformed, unarmed police officers wearing body cams. (The cameras should default to deleting footage after a few hours, unless the officer presses a button to save it, and all arrests not recorded due to "missing" footage are deemed invalid).
This helps communities to feel the reassurance of a trusted police presence, creates local jobs, and provides a decentralised alternative to having a single network controlled by a few hidden, unaccountable individuals. Putting a human conscience behind every single camera seems like a good way to prevent tyranny and encourage whistle-blowers.
Yep, just shoulder surf the passcode the user enters on the train, or read their fingerprints/retinal patterns off their body and print it out onto your key material.
> Rather, I pointed out that they have a real mission, and they’re going to spend effort accomplishing it. But their mission isn’t to own every device — it’s to own a select few, probably on the order of hundreds or thousands a year. So, if we create a mechanism by which they can do that without owning every device, we can align our goal of protecting most devices with theirs of owning a few.
> This in turn increases security for nearly everyone, because powerful agencies no longer have the same motivation to cause harm — and might be persuaded to help. After all, it’s in their interest to prevent large remote compromises — just not a higher priority than maintaining their own access.
> Further, the best way to actually restrain them is through a change in government policy, which will only happen when the government believes there’s an alternative solution.
Perhaps you could try responding to the point?