In the 1960s, the ecological notion that "there is no 'away' to throw things", first stated by Barry Commoner, finally became widespread as regards pollution.
In the 2010s, the informational notion that "there is no 'other' informatics ecosystem" to which security, privacy, or surveillance practices and principles apply is slowly dawning.
> "there is no 'other' informatics ecosystem" on which security, privacy, or surveillance practices and principles apply
SIPRNet?
(I would argue that the fact that governments try to have their own air-gapped packet-switched networks for secure communications is a large part of the reason that governments don't invest much in making the regular Internet secure.)
SIPRNet was one of the networks accessed by Bradley Manning, convicted of leaking the video used in WikiLeaks' "Collateral Murder" release[6] as well as the source of the US diplomatic cables published by WikiLeaks in November 2010.[7]
In 1995, I convinced the program manager responsible for SIPRNet that they should not use HOSTS.TXT files and random IP addresses they had just pulled out of their ass.
Ultimately, the argument that won them over was that they wanted to connect it to the "real" Internet once work was completed on the multi-level secure gateway that the NSA was developing.
I convinced them that there would be no way for them to communicate with the real owners of those IP addresses, and that they would need to use the DNS to communicate with the hosts and domains on the other side of that gateway.
It was a major face-palm moment for me. But I was glad that I was the DISA.net Technical POC at the time, that my boss (the DISA.net Admin POC) trusted me, and that I had built a good reputation by helping them get the first CERT inside DOD up and running within a week on ASSIST.mil, back when there was just the one NIC for the entire Internet and root zone updates happened only once a week.
Thank $DEITY for that MLS gateway the NSA was developing, because otherwise SIPRNet would probably still be using HOSTS.TXT files and random IP addresses pulled out of their ass.
This was also the event that convinced me I needed to get out of DISA quickly, because I couldn't keep saving the entire agency from seriously brain-damaged decisions like that one.
So: the point being that despite having a secure network, the immediate impetus was to connect it to the insecure one?
And that without IP and network isolation, HOSTS.TXT was an utterly meaningless obfuscation / network isolation mechanism (which was apparently not ... apparent ... to them)?
The only bureaucracy more SNAFUd than government and military is private sector. You just don't hear so much about it except through lawsuits and leaks -- no congressional investigations or constituent concerns. Representation has some advantages.
SIPRNet (the secure version) has a parallel network for unclassified traffic (NIPRNet; e.g. surfing foxnews, open-source intelligence, checking your gmail).
If I had to take a wild guess, an executive at some point said "why do I NEED to switch between these two terminals? Can't it all be secure on just one of them? I just want to be able to reply to outside emails on the SIPRNet."
And thus it happened, or probably something like that. Never underestimate the "executive factor".
... largely as a consequence of several factors: interconnectedness, device development costs, functional flexibility, and winner-take-all market-capture dynamics.
Interconnectedness means that even if your secure, classified, compartmentalised, encrypted, logged information begins or exists on bespoke kit, there are extraordinarily good odds that it will transit or reside on other systems either on its way there or after being created. Networks are complex, with many components, and ensuring all kit is fully certified and cleared is all but impossible.
The costs of developing new devices are both falling (Moore's Law) and rising (hardware, firmware, driver, and software design are all rapidly rising in complexity). In virtually all cases, it is tremendously cheaper to begin with COTS (commercial off-the-shelf) hardware or software than to do a ground-up, greenfield, clean development. And where classified development exists, keeping up with improvements in consumer-grade kit is either impossible or remarkably expensive.
This claim rests on publicly available information, and it may well be that there are some exceptions. A few datapoints at least point to the likely costs. The Xen hypervisor has an advantage over VMware in that, by serving as a "shim" over the Linux kernel, it inherits the full class of Linux-supported devices. VMware, at least through the early 2010s, was reliant on its own device development and featured a highly restricted HCL (hardware compatibility list).
The list of publicly-known top-500 supercomputers consists entirely of Linux-based systems. There is no public investment in either proprietary or bespoke supercomputer operating systems. Any classified work would have limited leverage from publicly-available development. Several of the publicly-known supercomputers are engaged in highly-sensitive classified work. Evidence suggests limited, or exceptionally expensive, alternatives, if any.
Functional flexibility: what is still being called a "phone" is in fact a general-purpose computer. And, for what it's worth, a general-purpose surveillance-capitalism, state-surveillance, and APT-surveillance platform, but I digress.
A slim glass-fronted slab provides voice comms, text comms, email, camera, calendar, notekeeping, Web access, geolocation, mapping, directions, e-book access, and a myriad of applications (though many are of highly dubious utility). Both Android and iOS offer command-line access, though of varying completeness, reliability, and utility. (Termux now offers over 1,200 packages; the equivalent on Apple's iOS remains fairly nascent.) This includes multiple development environments and potential well beyond the limitations of the native Android and iOS platforms (already quite extensive).
A single "do everything" tool, that's sufficient for most of those tasks, will replace special-purpose bespoke tools in practice. It's an inevitable Desire Path (https://en.wikipedia.org/wiki/Desire_path). As a practical matter, workforce, corps, or agent discipline will be broken, and such devices will be used.
Winner-take-all market dynamics arise from several mechanisms, most especially positive-feedback network-effect loops of manufacture, development, sales and supply channels, developer ecosystems, and marketshare. Which means that not only will consumer-grade devices dominate, but a very small number of device or platform variants will dominate. Even highly-capitalised and capable firms frequently fail when attempting to dislodge established incumbents: Microsoft Mobile in devices, Google+ in social networks, Intel's Itanium in CPU design, Apple in cloud services, Amazon's Fire Phone. My point isn't the specific products; it's that these are the biggest players in the tech world taking on an entrenched incumbent and failing spectacularly.
Niche security projects are effectively playing in this field. They do not have to compete in the market with commercial offerings, but they do have to compete for talent, mindshare, tools, skillsets, and concepts. And they will all but certainly have to either interoperate with, or take into consideration the functions and features of, consumer-grade kit.
Upshot: the worlds aren't separate; closed secure-systems development competes poorly; and effective practice will blur all boundaries regardless.
There is no 'other' informatics ecosystem. We've got to make the one we've got healthy.
These are closely related to the points I was trying to make in November of 1992, during what I would call Crypto Wars I, in my letter to the editors of the Communications of the ACM.
Though what I'm immediately addressing is the notion implicit in Barr's proposal that there are two separate universes of information tech. There really isn't.
The question of whether infotech presents a fundamentally different "data physics", or if it's just an extreme form of one we've had for some time (information has an inclination to wander, digitised information more so), isn't entirely clear. That however is another question I've put some thought into.
No clear conclusions as yet, though I find myself preferring paper for many forms of recordkeeping.
For a while the President was tweeting from a consumer-grade Android (!) phone, likely made in China, in the Oval Office. It makes me wonder how many groups pwned that phone and have access to critical national security information.
I used to work for a company that serviced a very secure site (military).
The rule was NOTHING electronic went in that wasn't accounted for, and NOTHING ever left. We sent people onsite with what were effectively disposable laptops that were single use items and never left the location.
Showing up at the gate with anything extra was said to be a very bad idea. It never happened, so I don't know what would have happened.
You drove to the site in a car with only what you needed in it: your license, keys, and the equipment you were scheduled to bring. Smartphones weren't ultra-common yet, but they were absolutely forbidden. The car itself was searched too, even though it was parked far from anything sensitive. You were warned that anything suspicious would not be in the car when you came back. (Like the extra-stuff rule, I don't know of anyone that happened to, as nobody was foolish enough to drive out with anything but a clean rental car.)
I honestly think that is the only security that makes sense. That should be the status quo for some areas of the White House too, IMO. At least as far as meetings etc.
Maybe one day we get physical switches that power off cameras and mics, but hard to trust anything until that day....maybe not then.
Norbert Wiener, following WWII, noted that the secret, classified treatment of military R&D proved a bigger impediment to the Allies than to their enemies. There were multiple independent efforts investigating the same or similar technologies that didn't know of one another and could not share experiences. At the same time, much of the information either leaked out or was otherwise available to Axis forces (or the Soviets), though development lead-time made this a fairly minor concern.
J. Robert Oppenheimer had similar observations over the Manhattan Project, and there's a famous anecdote by Richard Feynman of travelling to Oak Ridge, where uranium processing and enrichment was occurring, and realising that plant practices were at severe risk of resulting in critical masses of refined material. That wouldn't have vapourised the plant, but could have resulted in deaths, meltdowns, and extensive radioactive contamination. Feynman had to fight to make this information available to plant management and workers, so that the US wouldn't unintentionally sabotage its own efforts.
There's a great William Tenn Cold War short story about this: a race to the moon ahead of the enemy. Spoiler: the Army's crash mission reaches the moon only to discover that indeed, they weren't the first; the enemy already had a base. Who was the enemy? The Navy.
I have friends that work in defense. Their experience is basically like yours - physically and electronically isolated clean rooms where you do the actual work. Nothing goes in. Nothing comes out.
Also, one of the Snowden disclosures was that apparently the NSA can power on the cameras and mics of Android phones even when the phone is off. I don't know exactly how it works, but it does, probably based off the residual current drawn from the battery and the circuitry that handles the power button. Smartphones are basically never safe, although I've heard iPhones are significantly safer than Android.
> Also, one of the Snowden disclosures was that apparently the NSA can power on the cameras and mics of Android phones even when the phone is off. I don't know exactly how it works, but it does, probably based off the residual current drawn from the battery and the circuitry that handles the power button. Smartphones are basically never safe, although I've heard iPhones are significantly safer than Android.
I mean, even if you could turn on the mic/camera, I would think you'd still need to save the output to storage for later retrieval, which pretty much requires the OS to be running.
Otherwise, maybe, you could somehow transmit the data but that would require the ability to communicate with the device via Bluetooth/WiFi and for the data from the camera to be passed to the wireless interface.
Unless a device was designed with that sort of functionality, I'm not sure how the NSA could just turn that on.
They would likely compromise the device via its baseband, and patch the operating system to give the illusion of a complete shutdown when the user attempts to power it off.
"The baseband has DMA access and so can get root at any time" is a myth that won't seem to go away no matter how many times security professionals refute it. On iPhones and all but maybe the cheapest Android phones, the baseband is exposed to the CPU as a USB device, with all the usual memory protections. You cannot patch the device over the baseband (unless there's an existing vulnerability in the USB stack or something).
USB device drivers are for sure bullet-proof, just like the baseband controllers are bullet-proof....
Jumping from one exploit target to another in a chain is pretty much how all modern exploits work. Saying that they would need to find an exploit in the USB driver (or some other hardware or software interface) not only states the obvious but, more importantly, misses the point: it's far more plausible than most engineers intuitively believe.
The depth of modern exploit chains is incredible, and while the conceptual difficulty has gone up, the pace hasn't seemed to abate. That is clear evidence that our intuition about "too difficult", and about the elasticity of the exploit supply and demand curves, is woefully inadequate for accurately gauging risk.
Part of the problem is that the complexity of the systems grows at the same time as improvements in security and correctness. Sometimes it grows faster, sometimes slower, but it's dynamic.
Cursory web searches seem to point away from the USB device theory. Usually the modem is integrated on the main SoC. You can't find mentions of USB associated with the recent Qualcomm onchip modems, for example.
Do you have links to any of the refutations? I recall hearing that the iPhone separates the baseband, but I thought it was otherwise usually on the same chip.
I believe that there are circuits within the baseband hardware that cannot be powered off, short of completely removing the battery. And even then, there are capacitors and other technology that will store power for those circuits for a period of time.
I believe that this has actually been pretty well established in the community, but I don’t have the evidential links immediately at hand.
Even working in prisons they are strict on equipment. I have a mate who does air-con work, and he had to account for every tool he took in to make sure they all went out with him.
>Even working in prisons they are strict on equipment
The tone of incredulity in your comment suggests this practice is extraordinary, when it would be considered SOP; tools and crims behind bars make the most unlikely bedfellows.
With cameras and mics, they need to be removable plug-in items. It may make a device a millimeter thicker, but I'm sure the truly security-conscious wouldn't mind.
The environment described is pretty standard for SCIF/SAPs. The White House has different rules, primarily because some of the people there are Too Important™ to be bothered. The one time I got to visit the White House, I got a tour of the Situation Room, and I was physically uncomfortable about the fact that I was allowed to bring my phone even though there wasn't a meeting in progress (though I did get to spin around in Obama's chair, so I got over my discomfort).
What's being described is a SCIF or SAPF. It could be just about anything classified going on in there, from intelligence analysis or weapons design to project management or ongoing network operations. There's no real way of knowing from the description.
It was a military-related site, remote, and our people were actually blindfolded from the gate all the way to the equipment needing service; what you saw was indistinguishable from a small data center.
They had a pretty good system for keeping a tight ship, at least as far as threats from outside equipment.
> For a while the President was tweeting from a consumer-grade Android (!) phone, likely made in China, in the Oval Office. It makes me wonder how many groups pwned that phone and have access to critical national security information.
You mentioned tweeting, was he doing NATSEC-relevant work from the phone or was he using an official device for those purposes?
I don’t think it’s possible to get high level pols to forgo their devices and follow good security hygiene. It’s in their nature to be communicative and available.
One would hope for a little continuity in the Presidential Office's security department. Why would they reset their security at the beginning of each administration, only to have to learn the same lessons the hard way again?
As I understand it he wasn't doing NATSEC-relevant work from the phone, but on Android spyware can turn on the microphone and camera remotely. During the Cold War our adversaries would've given anything to slip a bug into the Oval Office; now every hacker group who can pwn an Android phone has one.
>but on Android spyware can turn on the microphone and camera remotely.
Seems like an unsubstantiated claim. Also, Android is a broad umbrella term where security varies widely across implementations and devices. Do you have any concrete information about reference Android devices (i.e., Pixels)?
It absolutely isn't. Evidence supporting this claim is easily found online, both in leaked documents and from other first- and second-hand sources.
My guess is that besides whatever the microphone, camera, and radios on that device can capture, there's no classified information on it. The microphone and camera can be dealt with by keeping the phone in a somewhat sound-proofed box, or just powered off.
I would hope there's no classified data access via WiFi at the WH, so perhaps that device's radios are of little importance. I'm not going to address the possibility of the radios being used to gather emanations, but keeping it on for very limited time periods is a mitigation.
(Yes, even so, the radios and microphone could perhaps be used for clever side channels for exfiltrating data during the times when it's powered on, but I'm sure there's lots of those side channels. This is partly why we have SCIFs, so again, not an issue.)
In any case, provided all he does with that device is tweet, provided it's powered off and locked away when he's not tweeting, provided the device is never ever taken into a SCIF, and provided there's no access to classified data over WiFi at the WH, I'd tolerate the President (whoever it might be) tweeting from a consumer-grade mobile device. That said, my preference would be for the President to use a room set aside for the purpose, with wired devices maintained for tweeting.
The twitter account itself is of relatively low value.
Of course, perhaps he and his staff are quite careless with that device. Perhaps that device has been the source of many leaks! I thought NSA wouldn't let the President conduct the nation's business on an insecure device, so I assume he only uses it for tweeting; thus the carelessness would be about the microphone, camera, and radios.
> Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military.
The case seems to be that the government and military have almost no special product offerings, so they use consumer tech. Therefore, weakening consumer tech weakens the government and military. This is not a robust argument, imo.
The stronger argument is about how a whole economy would spring up, filling office building after office building with full-time hackers trying to dox, blackmail, MITM, or steal from every non-banking, non-crypto-approved communication in the world. The internet would eventually just die off as a communications platform as the public completely lost trust in it (though not politicians - they would be approved to use the secure channels and would not understand the issue).
I like the general argument that's being made in the article about encryption available to consumers being the same, and of the same importance as military encryption, but I've got to disagree with military electronics no longer being the bleeding edge.
Especially in areas such as RF, optics and positioning, the military still has access to stuff the general market can only dream of.
Do you have any examples of that? In my experience the military tends to be extremely conservative and prefers well-proven designs. For example: the RCA 1802, a processor launched in 1976, is still being manufactured, mostly because of its use in military applications (the guidance system of the Tomahawk cruise missile, among others).
The military is always at the bleeding edge of military applications and form factors, I think that is what gets people confused.
I mean, the small Cassegrain telescopes in missiles are probably bleeding-edge for that size and weight, for instance. However, I think this fact falls into the "weird flex, but ok" category.
Gorgon Stare springs immediately to mind. I certainly can't afford a multi-day-in-the-air drone carrying a 30 lb, multi-million-megapixel wide-area camera [0].
It's not a single multi-million-megapixel camera.
"ARGUS is essentially 368 five-megapixel smartphone cameras clustered together." That's a 1,840-megapixel (1.84-gigapixel) camera, which is not that extreme and in line with several of these images: https://petapixel.com/tag/gigapixel/
A lot of confusion arises because it's a combination of multiple different cameras that capture wide angles and infrared, plus a separate camera that can focus on areas of interest within the field of view. So, if the entire image were at maximum resolution you would get into insane territory, but that's not how it works.
Obviously the military has "toys" I can't afford; however, do you see any fundamental technologies in that imaging system that are not available to consumers? Even the quoted development cost of $15 million is not out of range for a well-funded tech startup.
Why can't we have both security and court-ordered access? What would be the problems if we had 5 HSMs that are air-gapped and located at secure facilities? Encrypted applications (such as WhatsApp, Apple's Messages, etc.) would be required to transmit escrow keys encrypted with the public keys of those 5 HSMs. After a valid court order, a law enforcement official would have to physically go to one of the secure locations with those encrypted escrow keys, which would then be decrypted by the HSM. This way, everyone could have secure communication while still allowing legitimate law-enforcement searches when ordered by a judge.
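For concreteness, here is a minimal sketch of what the sender's side of such a scheme might look like, using Python's `cryptography` package. The function names and the five-key setup are hypothetical, taken from the proposal above, not from any real system:

```python
# Hypothetical escrow envelope encryption, sketching the proposal above:
# the message key is wrapped once for the recipient and once for each of
# the five escrow HSM public keys. Illustrative only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_with_escrow(plaintext: bytes, recipient_pub, escrow_pubs):
    msg_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(msg_key).encrypt(nonce, plaintext, None)
    return {
        "nonce": nonce,
        "ciphertext": ciphertext,
        "recipient_wrap": recipient_pub.encrypt(msg_key, OAEP),
        # Anyone holding ANY of these five private keys (or a copy, or a
        # coerced duplicate) can recover msg_key -- this is the weak point.
        "escrow_wraps": [pub.encrypt(msg_key, OAEP) for pub in escrow_pubs],
    }

# Demo keys; in the proposal these would live in air-gapped HSMs.
escrow_keys = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
               for _ in range(5)]
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
env = encrypt_with_escrow(b"hello", recipient.public_key(),
                          [k.public_key() for k in escrow_keys])
```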
No, I would not call that secure communication, because it can be accessed by someone other than the intended recipient. The number of hoops they have to jump through is irrelevant.
It is not possible to have communication that is secure by definition while also facilitating a backdoor. The two concepts are strictly mutually exclusive.
In the United States you have the amazing freedom to communicate in any language you desire (including one unintelligible to a would-be snoop), and the State cannot force you to translate your communications just because they want to hear them. We should not be so eager to give up that freedom in our digital lives.
I'm surprised that Americans aren't pushing to make private and secure communication a constitutional right. Will it make it harder for law enforcement to go after "bad guys"? Yes. Just as the second, fourth, and fifth amendments do today.
The 4th amendment already says that. Papers and effects cleanly extend to communication in all forms, even those not imagined today.
Courts sometimes say otherwise, but they have to use twisted, convoluted reasoning that doesn't really stand up to the written letter of the constitution. Given that, nothing more we add to the constitution would be immune from the same treatment, so what is the point of another amendment?
The 4th amendment grants access if a court signs a warrant. The unique challenge of encrypted communications is that there is no way to access encrypted data without a person giving up their key.
You're correct to identify "giving up the key" as the important step. Furthermore, 4A is very clear that should only happen "upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." The state can't cheat by pretending it has all of this for the entire populace ahead of time. The literal text of the Constitution makes key escrow impossible.
LE doesn't need a key if they can just break the door down. So all that's needed is a key escrow system set up to broadcast when a key is accessed, so unauthorized access can be prosecuted.
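One way to read "broadcast when a key is accessed" is as an append-only transparency log. A toy sketch (hypothetical names; a real system would use something like a signed Merkle log). Note it only makes accesses that go through the log auditable; it cannot detect copies made outside the logging path:

```python
# Toy append-only, hash-chained access log for the "broadcast when a
# key is accessed" idea. Auditors can replay the chain and detect any
# tampering with past entries; bypassing the log entirely is undetected.
import hashlib, json, time

class AccessLog:
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32  # genesis digest

    def record(self, key_id: str, warrant_id: str) -> str:
        entry = {"key": key_id, "warrant": warrant_id, "ts": time.time()}
        blob = json.dumps(entry, sort_keys=True).encode()
        self.head = hashlib.sha256(self.head + blob).digest()
        self.entries.append((entry, self.head.hex()))
        return self.head.hex()  # this digest is what gets broadcast

log = AccessLog()
print(log.record("escrow-key-0042", "warrant-2019-0137"))
```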
The "key escrow system" contains no keys, because only a chump would give up her or his key before she or he has to give it up. As addressed above, 4A specifies very particular circumstances for that, circumstances which do not apply to most citizens.
Crypto is classified as a munition for export purposes; I doubt you can get a ruling that it qualifies for 2A protection. The law is like that: it can classify a thing as one thing for one purpose, and another thing for another purpose. If they wanted to ban crypto, they could classify it as a Schedule I drug, and it'd make about as much sense as classifying it as a munition.
2A isn't upheld any more than 4A. States will just require a license for crypto and make it "may-issue". RSA will be classified an "assault algorithm" and banned in California. You'll have to pay a $200 tax stamp for each key, message capacity will be limited to 10 characters, and if your ex-wife claims you hit her then the sheriff can confiscate your hard drives until you convince a judge you didn't.
Access to the escrowed keys may also be conveniently reclassified in the future.
And that assumes the law is even followed: you can't trust that "court ordered access" will remain only court-ordered. Law enforcement agencies have violated laws in the past as a matter of policy.
These people couldn't stop the OPM database -- blackmail on all of their sensitive personnel -- from getting hacked, and that's something they wanted to protect. Now imagine how good a job they would do of protecting something they resented, like the escrow mechanism in your proposal.
Once the United States has a backdoor key, every other nation will demand one as well, and there won't be any grounds to refuse. Which creates two problems:
- Obviously you can't reuse the same master key, so now you need 180+ keys, meaning 180+ backdoors into the system. It is ridiculous to expect that all of these will remain secure; the United States can't even secure the OPM database, so even ours will probably be leaked, and countries with more bribe-prone law enforcement will give theirs up even faster, and then everyone is pwned.
- Law enforcement are frequently the bad guys. YMMV on how often this is the case in the United States but it's certainly inarguably the case in many places abroad. Mandatory backdoors means no possibility for secure communications about dissidents and "inconvenients" in those countries.
Universal escrow == universal access. Leaking == global compromise.
There's a place for personal recovery-key quorums, where multiple parts are joined to create an alternate recovery key, but that involves key management for each such key served, which is a Very Large Problem.
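For what it's worth, the "multiple parts joined" idea is classically done with Shamir secret sharing. A toy k-of-n sketch over a prime field, illustrative only, not production code:

```python
# Toy k-of-n secret sharing (Shamir, over GF(PRIME)) to illustrate
# recovery-key quorums. Real systems need an audited, constant-time
# implementation; this just shows the mechanism.
import secrets

PRIME = 2**127 - 1  # Mersenne prime; secret and shares live in GF(PRIME)

def split(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, e, PRIME) for e, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = the secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = split(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```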
Might be possible to take it on, but the underlying problem of identity remains: The question "who are you?" is the most expensive one in information technology. No matter how you get it wrong, you're fucked.
Deny access to the right party: fucked.
Allow access to the wrong party: fucked.
The only advantage of security-based DoS over security-based unintended disclosure is that denied access doesn't propagate. Published data cannot be unpublished, at least not at any reasonable cost:assurance level.
Because of the key management issues, most seriously-proposed data-backdoor systems revolve around one or more of:
- Workfactor reduction, in which known keys or values are used in key generation. The resulting keys are weak to anyone who knows the inputs' secret elements, in practice at least state-level actors (see the sketch after this list).
- Specific escrow keys. No workfactor, just key access. Widespread key access is a Very Bad Day.
- Specified access accounts. Like above, but worse.
- Specific system bypass. Alternate paths to data access on systems.
- Alternate data submission. Various "phone home" mechanisms or intercepts of in-the-clear transmissions, either through design or through software/device compromise.
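To make the first item concrete, here is a toy illustration of workfactor reduction; all names are hypothetical, and the 40-bit split deliberately echoes 1990s export-grade crypto:

```python
# Toy workfactor reduction: 88 of the 128 key bits are derived from
# material the authority can reconstruct, so outsiders face a 2^128
# search while the authority faces only 2^40. Illustrative sketch.
import hashlib, os

authority_known = os.urandom(11)  # 88 bits the escrow side can reconstruct
true_entropy = os.urandom(5)      # only 40 bits actually unknown to it

session_key = hashlib.sha256(authority_known + true_entropy).digest()[:16]

def authority_recover(known: bytes, key_matches) -> bytes:
    # The authority brute-forces only the 40 unknown bits (~10^12 tries,
    # hours on commodity hardware; infeasible at 2^128 without `known`).
    for guess in range(2**40):
        candidate = hashlib.sha256(
            known + guess.to_bytes(5, "big")).digest()[:16]
        if key_matches(candidate):  # e.g. trial-decrypt a known ciphertext
            return candidate
    raise KeyError("not found")
```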
As a practical matter, alternate information channels (usually metadata), public data, standard detection, bug/zeroday exploitation, and various sideband attacks (Maginot compromise: don't go through, go around) tend to be used, though cryptographic attacks have some utility.
> but still allow legitimate law enforcement searches when ordered by a judge
You can already do that by voluntarily providing access to your devices to anyone you want, including law enforcement. Other people don't necessarily share your ideology and accept that there could exist "legitimate law enforcement searches" of their private communications.
Stop trying to build scenarios where key escrow solutions are technically sound. They are not. This is an intractable problem that is not solved by these half-cocked technical measures. Key escrow cannot, by definition, be secure, and wasting time trying to invent solutions just confuses the matter, weakens security, and leaves us all vulnerable. Basically: Ssssssh, or the politicians might actually believe this fantasy.
There is basically no limit to the amount of money that it would be worth paying for a government to get those HSMs, since they could decrypt nearly all the world's communication. And the country in which they are hosted is included in the threat model. I would not want to be the one tasked with attempting to keep it secure.
It's a slippery slope - there would undoubtedly be a creeping escalation, with more and more agencies and people given access, and the requirements to get access relaxed. Before you know it, the CIA is hoovering up everything, local police departments have unfettered access and the local environmental agency can easily request access to anything for "reasons".
Governments have shown time and again that they abuse any power they get - I wouldn't trust them with this.
What if you simply claim to encrypt it twice, but instead place random noise where the "encrypted for gov't" data should be? If the system works as intended, it shouldn't be possible to attempt decryption (and thus verify whether you encrypted it properly) without a search warrant based on probable cause, since the key material would be in escrow and not accessible to anybody, including law enforcement and intelligence agencies, before a warrant is served.
So you can't have proactive detection of that, and everybody who wants to encrypt communications with criminal intent can and will continue to do so, with the same consequences as right now: they can be pressured to reveal the keys, but their communications are otherwise secure. The process would risk the privacy of honest citizens (in case the escrow system is broken) while not hampering the bad guys at all.
I'm assuming that bad guys can modify the software they use, which seems to be a reasonable assumption supported by practice.
And given the proposed rules of key escrow, the government has to assume that they use it as-is, because doing traffic inspection to verify that they really do so is impossible without a specific warrant. So whenever they do get a warrant and get the keys out of escrow, that's the first moment they can tell: "ah, we actually can't decrypt Bob's messages, because he's not using that key."
So the proposed naive system of simply "encrypt the message twice so that either the intended recipient or the government with 5 HSMs can decrypt it" won't work. You can have a more complicated system that works around my objections above, but that would be a different system with other drawbacks. Cryptosystems are very hard in general; all the small details matter. A random proposal that hasn't undergone significant expert analysis has almost a 100% chance of being fundamentally flawed, and doing reasonable escrow will have all kinds of "interesting" consequences and potential attacks. I am not aware of any public proposal for the specific details of a mass-escrow system that would have reasonable consequences.
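A minimal sketch of the cheat described above (hypothetical names, Python's `cryptography` package assumed): the cheater's blob has the same length and the same uniform look as a genuine escrow wrap, so nothing short of decrypting with the escrow private key, i.e. after a warrant, can tell the two apart.

```python
# Sketch of the "fake escrow blob" cheat: a compliant client wraps the
# message key to a (hypothetical) escrow public key, while a cheater
# attaches random bytes of the same length. A network observer without
# the escrow private key cannot distinguish them.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

escrow_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_pub = escrow_priv.public_key()

message_key = os.urandom(32)
honest_blob = escrow_pub.encrypt(message_key, OAEP)  # 256 bytes
cheat_blob = os.urandom(len(honest_blob))            # also 256 bytes

# Both blobs look like uniform 256-byte strings; only decryption with
# the escrow key (i.e., after a warrant is served) exposes the cheat.
```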
Or whoever has hacked the source control / build environment to replace the "5 magic HSM public keys" with one or more of their own public keys. See the Juniper incident with Dual-EC DRBG (https://eprint.iacr.org/2016/376.pdf).
This story should be repeated whenever anyone brings up 'solutions' involving key escrow. Bruce warned us in 2006 that this was a backdoor; ten years later, we found that not only had it been implemented by Juniper, the backdoor itself had been backdoored by unknown (and potentially malicious) actors. Really, this should be the last word on why key escrow and cryptographic backdoors in general are a terrible, terrible idea.