Hacker News
UC Berkeley to build open-source secure enclave using RISC-V (hackaday.com)
160 points by walterbell on Dec 14, 2018 | hide | past | favorite | 51 comments



Cool. HSMs are a notable step up from secure execution environments like TrustZone and SGX because they make side-channel attacks that take advantage of the shared hardware much more difficult.

That said, the HSM itself eventually becomes a target. So the more stuff you offload to your enclave, the larger the attack surface becomes, and you've just moved the problem over.

And I am further skeptical about how much this improves on the status quo. Apple delivered a secure enclave that was essentially a black box. Google delivered one and is apparently open-sourcing the firmware at some point. Now, this is going to give us an enclave with open-source hardware. But in a world where there are known, novel attacks against the integrity of microchips that would make it difficult to verify even in the presence of microscope images of a decapped chip, I'm not sure if this really gives us that much reassurance that the chip does what it says. And even if it did, we would never know for sure that it is free of security-sensitive bugs.

I still think this is great. Again, a step above SGX and TrustZone in my opinion. But this article makes it sound like a panacea, which I don't think is the case.

(Case in point: people have successfully attacked devices secured with enclaves, including iPhones and, I assume, Pixels. Doesn't matter how badass your gate is if it isn't surrounded by an equally strong fence. These devices should keep encryption secrets secure, but they don't 'stop hackers.')


HSMs have been around for donkey's years. SGX is interesting because now you can run e.g. a whole database inside an enclave on commodity CPUs with competitive performance.


Arm has already announced crypto-processors, which I assume aren't very expensive. Every chip maker could use one as an HSM now.

https://developer.arm.com/products/system-ip/security-ip/cry...

https://developer.arm.com/products/system-ip/security-ip/cry...


So the chip-maker would have their hardware protect the permanent secret key, and use ARM's coprocessor to do the cryptographic heavy-lifting? Sounds reasonable.

Is that the first high-level crypto coprocessor then? I'm surprised it's taken this long.


I always thought the purported advantage of SGX was being able to run important bits of code with ensured integrity, using attestation, even on an insecure machine. If you ran a whole database in SGX or TrustZone, you just open yourself up to traditional security vulnerabilities alongside the usual side-channel attacks you can run against them.


Well yes, but AFAIK remote attestation for specialized security processors has been around for a while. You are of course right that you still have a large attack surface, plus side channels etc. But at least you can't just snoop on the RAM.
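The attestation flow mentioned here can be sketched as a toy. Assumptions: a symmetric device key shared with the verifier stands in for the real asymmetric machinery (SGX quotes, TPM attestation identity keys), and all names are illustrative.

```python
# Toy remote attestation sketch. A symmetric device key, provisioned at
# manufacture and known to the verifier, stands in for real asymmetric
# attestation keys. Illustrative only.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # provisioned at manufacture (assumption)

def measure(code):
    """The enclave hashes the code it loaded -- its 'measurement'."""
    return hashlib.sha256(code).digest()

def attest(code, nonce):
    """Device side: report the measurement, MACed with the device key
    over the verifier's nonce to prevent replay."""
    m = measure(code)
    tag = hmac.new(DEVICE_KEY, m + nonce, hashlib.sha256).digest()
    return m, tag

def verify(expected_code, nonce, m, tag):
    """Verifier side: check the measurement matches the code we expect
    and that the MAC proves it came from the real device."""
    if m != measure(expected_code):
        return False
    good = hmac.new(DEVICE_KEY, m + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, good)

nonce = os.urandom(16)
m, tag = attest(b"trusted firmware v1", nonce)
assert verify(b"trusted firmware v1", nonce, m, tag)
assert not verify(b"tampered firmware", nonce, m, tag)
```

The nonce is what makes the report fresh: without it, an attacker could replay an old, valid report recorded before the code was tampered with.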


I am probably very uninformed, but I was hoping we might see the kernel + userland running entirely from a secure enclave. As I understand it, this would make cold boot attacks more difficult? I'm sure it's like using a sledgehammer to close a narrow attack surface, though. From a performance point of view it's probably not a great trade-off.

At minimum I'd want full-disk encryption programs (veracrypt) and biometric authentication services running from an enclave. Lenovo does this for their fingerprint reader.

It would be cool to have the facilities to say "everything run by this user runs in the enclave", but I think the argument is the same as FDE - don't invite the possibility of a leak by selective encryption. All or nothing.


That's very restrictive in terms of general purpose computing. It makes a lot of sense in some other cases though.


>But in a world where there are known, novel attacks against the integrity of microchips that would make it difficult to verify even in the presence of microscope images of a decapped chip, I'm not sure if this really gives us that much reassurance that the chip does what it says

I find myself saying this a lot lately but I really think that stuff like this is going to lead us to chip fabs being national strategic industries in the short/medium term future.


When were fabs not a strategic asset?! It's the whole reason Taiwan still exists.


Plug alert: I am working with a company that's building a full implementation of the Sanctum system using custom extensions of RISC-V (https://eprint.iacr.org/2015/564.pdf). Critical features, like a root-of-trust key, entropy generation, and cache isolation, are missing from the vanilla RISC-V Keystone approach. It offers a measure of security worth having, but we're taking it all the way. We're working with MIT prof Srini Devadas. You can read more about it here if you're interested: https://medium.com/gradient-tech/announcing-gradient-crypto-...

Also we're hiring! https://www.gradient-tech.co/jobs


That's great - will it be open sourced in some way? SiFive demonstrated RISC-V secure boot using (closed) Rambus IP for the RoT.


Some version at least will be, yes. RE: RoT + RISC-V: cache isolation appears to be missing here, so side-channel attacks using cache timing and other methods still work.


Awesome! It was unclear to me where the Rambus IP began and SiFive's core took over in the demo.

Keystone using existing RISC-V extensions is exciting to see, but it's frustrating that the Hack a Day article seems to blur where it begins and ends (at least today). The Keystone presentation notes that the RoT is derived from Sanctum, and their docs indicate that you need to bring your own entropy and key storage, neither of which is made clear in the blog post.


Hardware Security Modules (HSMs) are in plentiful supply; they're not new, and the OP seems to have swallowed a bit of the Apple Kool-Aid, given the description of the M7 as 'revolutionary'.

This is simply another HSM implementation, and no, they don't stop hackers; there are plenty of other exploits to go for.

(For one example: https://www.nxp.com/products/identification-and-security/sec...)


I was brainwashed by the "Security Now" podcast: I will trust an HSM that is open for any security researcher to extensively study and review more than typical "security by obscurity" implementations.


this is a far cry from an HSM. an HSM provides much better protection than an enclave, but it can do much less.


Tagged memory is the biggest game changer for security IMO. RISC-V has a proposal IIRC, but ARM has a product (ARMv8.5) that looks like it will really raise the bar.


Can you point me to some resources on this?


ARM: https://community.arm.com/processors/b/blog/posts/arm-a-prof...

RISC-V: https://www.lowrisc.org/docs/tagged-memory-v0.1/tags/

You can think of this feature like a hardware ASan. Instead of using ASan as a design tool, you could use it as a security feature.

Capitalizing on this feature requires compiler and C lib changes. I'd bet that the support should be available in popular open source compilers before ARM ships.
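To make the "hardware ASan" analogy concrete, here's a toy software model of tagged memory. It is not ARM's actual design: real MTE stores a random 4-bit tag per 16-byte granule and carries the pointer's tag in unused upper address bits, whereas this sketch assigns tags sequentially so the demo is deterministic.

```python
# Toy software model of tagged memory ("hardware ASan"): one small tag
# per 16-byte granule, checked against the tag carried in the "pointer"
# on every access. Illustrative only; not ARM's design.
GRANULE = 16

class TaggedHeap:
    def __init__(self, size=1024):
        self.mem = bytearray(size)
        self.tags = [0] * (size // GRANULE)   # one tag per granule; 0 = free
        self.brk = 0
        self.next_tag = 0

    def malloc(self, n):
        """Bump allocator: tag the granules and return a tagged 'pointer'."""
        ngran = -(-n // GRANULE)              # round up to whole granules
        self.next_tag = self.next_tag % 15 + 1
        base = self.brk
        for g in range(base // GRANULE, base // GRANULE + ngran):
            self.tags[g] = self.next_tag
        self.brk = base + ngran * GRANULE
        return (self.next_tag, base, n)

    def free(self, ptr):
        """Clear the tags of freed granules, so stale pointers fault."""
        tag, base, n = ptr
        for g in range(base // GRANULE, (base + n - 1) // GRANULE + 1):
            self.tags[g] = 0

    def load(self, ptr, off):
        """Every access compares pointer tag vs memory tag (the hw trap)."""
        tag, base, n = ptr
        if self.tags[(base + off) // GRANULE] != tag:
            raise MemoryError("tag check fault")
        return self.mem[base + off]

heap = TaggedHeap()
p = heap.malloc(16)
q = heap.malloc(16)            # adjacent allocation gets a different tag
heap.load(p, 0)                # in-bounds access: fine
try:
    heap.load(p, 16)           # overflow into q's granule -> fault
    assert False
except MemoryError:
    pass
heap.free(p)
try:
    heap.load(p, 0)            # use-after-free -> fault
    assert False
except MemoryError:
    pass
```

Note that, as with real tagging schemes, an overflow that stays inside the same 16-byte granule is not caught; the protection is granule-level and (in hardware, with random tags) probabilistic.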


Thanks, they look pretty interesting. Do you know what granularity the ARM tagging supports (e.g. byte level)? Would be interesting to see performance numbers - although I guess the hw is not available yet?


IIRC the ARM tagging would protect ~8 (maybe 16?) byte regions. More info on this is at [1] [2] -- note that some of this discusses a software-only approach (with likely higher performance impact).

I think the majority of the performance cost for HWASan will come from the additional bits stored to memory, but it was "modest" (< 5%?). The couple of additional CPU instructions during memory allocation and free are probably of negligible impact.

[1] https://arxiv.org/abs/1802.09517

[2] https://clang.llvm.org/docs/HardwareAssistedAddressSanitizer...


> This is a game changer for security.

uh, no. it might be a game changer for open source, though. maybe.


Ya, secure enclaves have been available for a while now (and for decades in the form of HSMs), but AFAIK none have been open source.


It should be noted (and this is one of the biggest concerns with open hardware) that you often cannot change said hardware -- after all, it's a physical object. But this means that if the hardware has keys baked into it which you cannot change, it can be rendered non-free even though you have the chip design. So while secure enclaves will be very useful for many things, and I am hoping we get some even more ambitious uses of them, I would be very wary if industry starts reinventing Intel's BootGuard with RISC-V (BootGuard has Intel's firmware keys burned into hardware, rendering free firmware impossible to use on such platforms).

All that's necessary is a way to enroll new keys, and when new keys are enrolled all the secure storage is wiped. This allows for free use of your own hardware, while also keeping secrets safe from attackers.
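A minimal sketch of that policy, assuming a hypothetical enclave API where taking ownership is always allowed but destroys everything sealed under the previous owner:

```python
# Sketch of the "enroll new keys => wipe secrets" policy: re-owning the
# device is permitted, but doing so destroys everything protected under
# the old owner's key, so an attacker who enrolls learns nothing.
# Hypothetical API; a real enclave would seal with derived keys, not
# compare raw key bytes.
import os

class Enclave:
    def __init__(self):
        self.owner_key = None
        self.sealed = {}                  # secrets held for the owner

    def enroll(self, new_key):
        """Anyone can take ownership, but enrollment wipes storage."""
        self.sealed.clear()               # the crucial step
        self.owner_key = new_key

    def seal(self, name, secret, key):
        if key != self.owner_key:
            raise PermissionError("not the enrolled owner")
        self.sealed[name] = secret

    def unseal(self, name, key):
        if key != self.owner_key:
            raise PermissionError("not the enrolled owner")
        return self.sealed[name]

e = Enclave()
alice = os.urandom(16)
e.enroll(alice)
e.seal("disk-key", b"top secret", alice)
mallory = os.urandom(16)
e.enroll(mallory)                         # attacker takes ownership...
assert "disk-key" not in e.sealed         # ...but the secret is gone
```

The design choice is that ownership transfer and secrecy are decoupled: you never need to prevent re-enrollment, only ensure it is destructive.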


Umm, I'm pretty sure it has that? Why would you make an HSM with a set of predefined unchangeable keys? That's retarded.


No, this is actually not retarded if you need to store program code externally and want to make sure that this code is unaltered. The Xbox 360's early boot did this as an anti-piracy measure, IIRC: a minimal on-chip program would load and verify the first couple of blocks of externally stored firmware before executing them.
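That verify-before-execute pattern can be sketched like so. For simplicity, this toy bakes a hash of the firmware itself into the "ROM"; a real scheme bakes in a hash of the vendor's public key and verifies a signature, so the firmware can still be updated. All names are illustrative.

```python
# Toy sketch of verify-before-execute boot: an immutable on-chip loader
# holds a baked-in digest and refuses to run external firmware that
# doesn't match. Illustrative only.
import hashlib

# "Burned into on-chip ROM" at manufacture (assumption for the sketch).
TRUSTED_DIGEST = hashlib.sha256(b"firmware v1: init mmu; jump kernel").digest()

def boot_rom(external_flash):
    """Minimal on-chip program: hash the externally stored firmware and
    only 'execute' it if the digest matches the baked-in value."""
    if hashlib.sha256(external_flash).digest() != TRUSTED_DIGEST:
        return "halt: unverified firmware"
    return "executing: " + external_flash.decode()

assert boot_rom(b"firmware v1: init mmu; jump kernel").startswith("executing")
assert boot_rom(b"pirated firmware").startswith("halt")
```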


Yes, if you, the manufacturer, want to make sure that nobody but you can alter the data. If you're providing cryptographic hardware for somebody else, they're gonna want to set their own.


In my experience, that's not usually a requirement. From an implementation perspective, you just say 'this certificate is the root' and trust it explicitly without needing access to the key on the HSM. From a security perspective, adding the ability to set your own key is a risk (it adds RW storage to a previously immutable system, and changes the model from 'no one ever sees the key' to 'no one ever sees the key as long as it was written correctly and any non-HSM-stored copy is deleted'), and doesn't really improve the security of the system - you trust the HSM manufacturer already, seeing as it's their hardware.


Clearly we run in different circles. In my experience keys need to be generated regularly and rotated on a predetermined schedule. Immutable systems are kind of the antithesis of that.


You might/should rotate the leaf keys regularly, but root keys are a massive pain to rotate (when you change a root key, you need to reprovision the corresponding cert onto every machine, as they're the root of trust for your PKI). You can buy a new HSM every 3 years to rotate those root keys, which fits fairly well into most organisations' decommissioning cycles.
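A toy model of why root rotation hurts, with a keyed hash standing in for real asymmetric signatures (illustrative only): every relying machine pins the root out of band, so a new root does nothing until each machine is reprovisioned, while rotating a leaf only touches the issuer.

```python
# Toy PKI model: machines pin a root key; leaves are cheap to rotate,
# roots are not. The "signature" is a keyed hash for brevity; real PKI
# uses asymmetric signatures and certificates.
import hashlib

def sign(key, msg):
    # Toy "signature": a keyed hash.
    return hashlib.sha256(key + msg).digest()

class Machine:
    def __init__(self, pinned_root_key):
        self.pinned = pinned_root_key      # provisioned out of band

    def accepts(self, leaf_cert, sig):
        # Trust only material signed by the pinned root.
        return sign(self.pinned, leaf_cert) == sig

root_v1, root_v2 = b"root-2018", b"root-2021"
fleet = [Machine(root_v1) for _ in range(3)]

leaf = b"leaf cert for service A"
assert all(m.accepts(leaf, sign(root_v1, leaf)) for m in fleet)

# Rotating the leaf is cheap: only the issuer re-signs.
new_leaf = b"leaf cert for service A, v2"
assert all(m.accepts(new_leaf, sign(root_v1, new_leaf)) for m in fleet)

# Rotating the root is not: until every machine is reprovisioned,
# anything signed by the new root is rejected fleet-wide.
assert not any(m.accepts(leaf, sign(root_v2, leaf)) for m in fleet)
fleet[0].pinned = root_v2                  # reprovision one machine
assert fleet[0].accepts(leaf, sign(root_v2, leaf))
```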


Root keys are more of a pain to swap out, but AD does a pretty good job of handling the grunt work of distributing them across the organization. And the leaf keys should live on HSMs as well. You might be able to get by with a hardcoded root key if you're willing to accept the tradeoff, but that situation isn't going to work for the rest.


AD and similar things def help, but it's still hard, especially once your deployment gets big or complicated - at the last place I worked we had tons of linux hosts + google cloud load balancers + aws load balancers + a tiny AD domain + various others. I think once you hit a certain size, it's incredibly hard to keep homogeneity.

Leaf keys on HSMs is interesting. Personally, I view HSM- and TPM-stored keys as being about giving the device an immutable identity. Leaf keys are usually a bit more complicated - in the environments I work in, leaf keys are often tied to a service, not the host. Their short-lived nature also reduces the impact of exfiltration.

Anyway, yeah, depends on the deployment and the org.


If you do this, you weaken your device against hostile tampering as well, don't you? The key update mechanism becomes part of the attack surface.


Yes, but if implemented in a secure manner the key update mechanism presents a smaller risk than that of a hardcoded key becoming known to an adversarial party.


As I mentioned, Intel BootGuard has firmware keys burned into the CPU so that only firmware signed by Intel can be used as a bootloader. This is done to avoid boot-kits, but results in users not being able to flash their own firmware on their devices (so CoreBoot cannot be used on OEM machines with newer Intel CPUs). Note that enclaves can be used for things other than HSMs, and could be used in far deeper parts of the stack than just as a HSM interface.


Technology cannot stop hackers while still letting authorized users in, as long as humans remain vulnerable to "You have a request from SexyMama69, enter your password to receive video call" and other forms of social engineering.

Secure enclaves are only really useful for allowing people who don't own the device to prevent those who do own it from taking certain actions.


They're used for more than that, but I agree it's unfortunate that DRM is a major use case.


'LeifCarrotson didn't say "DRM", though it's kind of implied as a part. Third parties preventing the device owner from operating the device is a cornerstone of information security, justified by observing that regular folks are very prone to self-pwning if a little social engineering is applied (plus it's hard in general to distinguish user-initiated actions from software-initiated ones). It's important to remember that legitimate security needs and user freedoms start to conflict pretty much as soon as the device in question gets connected to the Internet.


Also known as "authoritarianism", and the security community seems to be full of it. After all, isn't it more secure if the user can't do anything at all and the device controls the user?

Let's not forget that DRM also goes beyond preventing consumption of media (although that is a big application): it can be, and is, also used to force users to accept unwanted behaviour of software. Win10's deeply embedded telemetry is one example: it would be relatively easy to patch out if it weren't for the files being signed and "protected" by "secure" boot, and the constant barrage of updates (not all of which fix remote unattended attacks, which IMHO are the ones that actually matter). The situation is much worse on mobile devices.


> Also known as "authoritarianism", and the security community seems to be full of it.

This, exactly. Maybe that's déformation professionnelle of a kind.


Isn't this highly similar to MIT's Sanctum from 2015, which was also open source and also ignored the hardware changes needed to make meaningful guarantees about side-channel robustness? Useful if you want to avoid hardware mods, but the enclave model is not new. https://eprint.iacr.org/2015/564.pdf and open-source code via MIT license here: https://github.com/pwnall/sanctum


Well, it's the next evolution, by many of the same people.


I don't understand how this differentiates itself from ARM TrustZone.

You can already create a system with a dedicated HSM and run your own trusted operating system using the features of the processor. With Intel SGX you are somewhat stuck using the HSM provided by Intel. With the BSD licensing, aren't we left in the same place as with ARM processors, except that producing a RISC-V processor is less expensive with regard to licensing?


Is TrustZone open source? It's in TFA: the differentiator is that this is open source. Enclaves per se have existed for a long time now.


My guess is that this is true as an accident, but once they have real market traction the honeymoon will be over, since it is very likely that the tools and software will have substantial bugs early on, not even counting hardware errata; one of the Intel/AMD/POWER strong points is that they have huge verification libraries that RISC-V vendors will not have. A slightly different memory model will also expose issues that happen to work on x86.


Also addressing this: "Open-Source RISC-V Hardware And Security" https://semiengineering.com/security-and-open-source-hardwar...



No.

I recommend using the original title: "RISC-V Will Stop Hackers Dead From Getting Into Your Computer"

Even though it's still very fallacious.


No it will not.


Being a different platform means that hackers will have to adapt to it. That was the argument for the Macintosh: it didn't get Windows viruses because it wasn't Windows.



