Private Key Extraction from Qualcomm Hardware-Backed Keystores (nccgroup.trust)
305 points by griffinmb on April 24, 2019 | 56 comments



https://www.qualcomm.com/company/product-security/bulletins#...

That's pretty much all the snapdragons in modern Android phones (page is not letting me copy+paste them here).

Has QC put out a patch yet?

EDIT: The April security patch looks like it took care of it:

https://source.android.com/security/bulletin/2019-04-01

EDIT 2: And of course, my Samsung Galaxy S8+, despite having received an update in April, is only at the March 1st security patch level. So I'm likely vulnerable until Samsung's next update.


Yes. From the article:

> Recommendation

> Qualcomm has already designed and distributed a patch to address this issue. Ensure that your devices are running the most recent firmware version.

Also, here's the list from that page: IPQ8074, MDM9150, MDM9206, MDM9607, MDM9650, MDM9655, MSM8909W, MSM8996AU, QCA8081, QCS605, Qualcomm 215, SD 210/SD 212/SD 205, SD 410/12, SD 425, SD 427, SD 430, SD 435, SD 439 / SD 429, SD 450, SD 615/16/SD 415, SD 625, SD 632, SD 636, SD 650/52, SD 712 / SD 710 / SD 670, SD 820, SD 820A, SD 835, SD 845 / SD 850, SD 8CX, SDA660, SDM439, SDM630, SDM660, Snapdragon_High_Med_2016, SXR1130


Thanks, I was hoping to figure out when it was distributed. But your comment encouraged me to search for the CVE, and it looks like it was fixed at the 2019-04-05 Android security patch level:

https://source.android.com/security/bulletin/2019-04-01


For vendor patches, you really can't trust that patch-level value in any way... I'm afraid there is no real way to check, except trying the attack.

Qualcomm patches are not distributed as part of AOSP security patches, and are not tested for Google certification, so there is really no reason for the reported patch level to be accurate, except possibly on Pixels.


I remember reading that phone manufacturers sometimes update the patch version but don't pull in all the patches, presumably because it's too much effort to integrate them into their forked codebases.


Do firmware updates typically get distributed along with OS updates on Android, or is there some other process you'd need to use to patch those devices?


They're delivered along with normal updates.


>(page is not letting me copy+paste them here)

`user-select: none` in the `.slick-slider` CSS rule. User-hostility at its finest. Sigh...


Is it possible to change those rules? I’ve come across this before and it is frustrating to say the least.


Open the developer tools in your browser, go to the Styles panel, and uncheck the rule(s) causing you problems.


That's assuming the Galaxy S8+ is using Qualcomm's keymaster; I'm not too sure.

Considering how focused Samsung is on security, I wouldn't be surprised if they used an external secure element like Google does on the Pixel 3.


Not the best response from the vendor:

> March 19, 2018: Contact Qualcomm Product Security with issue; receive confirmation of receipt

> April, 2018: Request update on analysis of issue

> May, 2018: Qualcomm confirms the issue and begins working on a fix


I worked for a big company that made some really popular networking equipment. Of all companies, you would expect them to handle security... fairly OK.

Yet they struggled to get headcount for people to respond to security researchers, and despite the executives having seemingly been trained, there was a regular "Oh man, I contacted legal, we should sue this guy!" type email every few months.

Meanwhile they had a separate technical support team who knew how to respond to customers in a timely fashion and make people feel like they were being listened to, but for some reason the security side had to reinvent the wheel and fail repeatedly at dealing with security researchers, as if nobody had ever done basic customer service before. I was on the support team, I sat next to the security guy(s), and I would show them what to do and how to keep a customer or security researcher on track. It wasn't rocket science, but nobody thought to teach them that.

And that was beyond training engineering to stop with the "well you're using it wrong" type responses.

The scale of incompetence in the security field is astounding, as a lot of folks with security written all over their resume don't know jack squat. And the scale of incompetence in just DEALING with security researchers is also bizarro-world terrible, even among companies that should know better.


> security written all over their resume don't know jack squat.

While working at a F500, I found the company credentials of a security consultant from Thales on GitHub... ahem.


The head of security for Panera Bread seemed to become upset / confused when a security researcher asked about exchanging PGP keys... he previously worked at Equifax.


Par for the course: "security" industry people, especially the higher-ups, rarely make an effort to secure their communications.

The best you can hope for is that they use Signal; good luck getting their contact details or verifying keys with them, though!


Could it be because features sell products, and security doesn't (yet)?

If their customers (which are not you) and they themselves feel that security is only a cost and not a selling point, then they won't work much on it. After all, they already sold those products and have orders for more.


I think security just isn't an instinctive priority in most organizations when it comes to engineering operations. I also don't often blame the engineers for the results, as they're simply not given the time / budget.

It's a cultural thing that just hasn't taken hold (normally I hate using "cultural" but it seems to fit here).


One solution to this problem is to post a PGP key on your security response web page. Then whoever reports problems will encrypt the message such that it is only readable by the proper people inside your organization (because they control that PGP key, and regular support, suits, etc. don't).
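For illustration, the reporter's side of that workflow looks roughly like this sketch, using the python-gnupg wrapper (it shells out to a locally installed gpg binary; the key file name and recipient address below are made up):

    # Sketch of encrypting a vulnerability report to a vendor's published PGP key.
    # Assumes python-gnupg and a local gpg install; file name/recipient are made up.
    import gnupg

    gpg = gnupg.GPG()

    # Import the public key posted on the vendor's security response page.
    with open("vendor-security-pubkey.asc") as f:
        gpg.import_keys(f.read())

    report = "Steps to reproduce the key extraction..."
    encrypted = gpg.encrypt(report, recipients=["security@vendor.example"])
    assert encrypted.ok

    # The armored ciphertext is only readable by whoever holds the private key --
    # the security team, not regular support or legal.
    print(str(encrypted))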


Certainly for communication that is the right way to go.

Having said that, the suits have to be in the loop to some extent; they just need to be able to control those "I don't like this, sue them" instincts and understand how to better channel that energy.

Security needs an executive-level person who can work directly with the other executives to push things, if only because the inclination to hide or not fix things is so common. Security just isn't part of a lot of engineering teams' mindset / time budget.


I think it will take at least another 20 years before most of the big companies have people who are real security experts!


Stuff changes. Today's young Turks, once you factor in the time and energy it takes to get to the point where they would be listened to, will have antiquated skills too.

Old people can certainly keep up! But corporate climbers often can't (I do know of exceptions).


Yeah, it will be a while. I've spoken to some newly minted security experts who are really good... and security executives who, I swear, struggle to operate a computer.


I mean, this seems realistic. Product Security is contacted, the issue goes into some queue somewhere. Eventually it becomes a priority and the correct engineers have to be found to understand the researcher's paper and replicate the issue to their satisfaction.


Here’s what could have happened:

March 20: Confirmation of receipt of issue, boilerplate response detailing expected next steps

And as far as we know, this might have actually happened! Maybe it wasn’t deemed interesting enough to include on that timeline.


It actually does say "confirmation of receipt" on March 19 (the same day it was submitted).


Does this allow someone to decrypt a stolen device?

I moved from an iPhone to a Galaxy S9 about a year ago because I was getting fed up with Apple's hardware problems, and I wanted to try Android again.

I convinced myself that I was able to secure the Android phone as long as I always bought the newest one and kept it up to date.

But decryption after loss is an untenable scenario for me. I had read that Qualcomm's TrustZone has had software exploits in the past, but I didn't think it would happen again.

Is there any way to trust that the data on my Android device is safe? If I lost it today, someone could keep it around for a while until the next exploit drops. Has Apple ever had an exploit of this nature?


"I had read that qualcomm's trustzone has had software exploits in the past, but I didn't think it would happen again"

Of course it's going to happen again, given the abysmal state of security in QSEE, Qualcomm's implementation of TrustZone. I used to do software/firmware security reviews at Google, and let me tell you that what Gal found at [1] would never have passed my reviews had Qualcomm had a similar internal security process in place. This is one of the many reasons Google realized they couldn't trust vendors and rolled out their own security chip, Titan M: https://www.blog.google/products/pixel/titan-m-makes-pixel-3... So, if you want a secure phone, buy a Pixel 3 or later.

[1] http://bits-please.blogspot.com/ : there are so many WTF moments, like Qualcomm not revoking trustlets, never sanitizing arguments passed to QSEE syscalls, etc, etc


Short answer: no

Longer answer: The article describes how to leak the key, but it assumes you can generate many signatures successfully. To use the key at all (and thus sign with it), you need to provide the password to the TEE, or it will refuse any operation on the key. If the TEE is properly written (I'd say Qualcomm's is), the keymaster key that encrypts the key for storage is itself encrypted based on the password, so nothing can leak: the stored key is never decrypted, not even for the TEE itself.
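Here's a toy sketch of that password binding, just to make the idea concrete. This is not Qualcomm's actual keymaster scheme; the PBKDF2 / AES-GCM choices and all names are illustrative:

    # Toy sketch: bind a stored key blob to the user's password plus a
    # device-unique hardware secret. Not Qualcomm's actual scheme.
    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_key(key_blob: bytes, password: bytes, hw_secret: bytes) -> bytes:
        salt = os.urandom(16)
        # Key stretching: each password guess costs a full KDF run.
        kek = hashlib.pbkdf2_hmac("sha256", password + hw_secret, salt, 1_000_000)
        nonce = os.urandom(12)
        return salt + nonce + AESGCM(kek).encrypt(nonce, key_blob, None)

    def unwrap_key(wrapped: bytes, password: bytes, hw_secret: bytes) -> bytes:
        salt, nonce, ct = wrapped[:16], wrapped[16:28], wrapped[28:]
        kek = hashlib.pbkdf2_hmac("sha256", password + hw_secret, salt, 1_000_000)
        return AESGCM(kek).decrypt(nonce, ct, None)  # fails without the right password

Without the password, the TEE can't even decrypt the blob, so there is no signing oracle to probe.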

Also, unless you have unlocked your bootloader, the attacker would need to be able to launch their own software while the device is at the boot lockscreen. That requires finding an additional security flaw in the bootloader and/or early Android boot.


Don't ever trust anyone to keep your sensitive data encrypted by default. Make sure you get something like TrueCrypt (open source, tested by a large segment of the population, by security experts from the open-source world, etc.) that is truly secure and doesn't have any backdoors, and use that to lock up your data in an encrypted container. Make backups of those containers in the cloud and sleep like a baby.


Uhhh... what about when I take a photo with my Android phone?

Forgive my ignorance, but I don't believe it's secure to use TrueCrypt anymore, and I didn't even think it was possible to use a volume on Android, let alone an automatically encrypted volume.

I'm worried about thugs blackmailing me, not state actors.


Don't have cloud sync and sharing enabled by default. Also, use tag-stripping software to erase the geolocation from the pictures you take. And if you take sensitive pictures (children in your house, sexy time with your SO, police committing a crime, etc.), definitely move them to an encrypted container and use a wiping tool to remove them from your normal storage.


And yeah, I used TrueCrypt as an example because it is the most recognizable name in this space, but in my particular setup I use VeraCrypt, as pointed out by someone else. I wasn't aware that VeraCrypt had gotten so popular. Before VeraCrypt I used Jetico BestCrypt containers, but those weren't open source.


To chime in on TrueCrypt, and perhaps someone can help elaborate: I understood that it went offline and the maintainer called it quits under suspicious circumstances? VeraCrypt was the successor, IIRC.



The idea that it's safer to trust TrueCrypt than the platform's enclave secret system because enclaves sometimes have vulnerabilities strikes me as pretty weird, since the big difference between TrueCrypt and an enclave-based system is that TrueCrypt doesn't have an enclave to begin with.


Doesn't seem that weird. The enclave has your secret in it, and comes attached to the storage it's protecting. Steal storage, extract key, done.

My software FDE does not keep my password in it. There is nothing to extract after stealing. I will happily stipulate though that this requires a solid password and key stretching.


... if and only if it is off. Which is probably not a great assumption with a phone.

A DIY mitigation might be to convert a phone to having only an external battery on a long cable, which stays in your other pocket.

Philosophically I do agree with where you're coming from with contemporary devices insisting on baking in privileged keys. It's unfortunate that we're forced to choose between the two models.


Good point. Didn't really consider live or "cold boot" attacks.


[flagged]


I'm not confident from this comment that we're using the same definition of "enclave", and, to be clear, I'm using the one that applies to this actual article.

Your comment was also extraordinarily creepy in a way totally uncalled for on this thread.


Possibly stupid question: If only a few bits of the nonce are needed to recover the key, what's preventing iteration over all possible values of those "few bits"?


One of the main assumptions in cryptography is "if I can recover one bit of the private key, I can probably recover all of them." This is just a manifestation of that well-founded assumption.


The relevant paper here is: https://www.hpl.hp.com/techreports/1999/HPL-1999-90.pdf

It's not actually faster if you have to try all of them :)


Likely they can recover something like `private_key XOR nonce`. If you know a few bits of the nonce, you can reveal bits of the private key. But this is not something you can brute-force.

In other words, the nonce bits leak information about the private key bits. You can't magically create this information by trying all possible values.
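To make concrete why nonce bits matter at all: if an attacker learns an entire nonce k, the private key falls straight out of one signature's algebra; the lattice step in the paper exists only because they get just a few bits of each k. A toy sketch with made-up numbers (in a real signature, r would be the x-coordinate of k·G, but the algebra below doesn't care):

    # ECDSA signing: s = k^-1 * (z + r*d) mod n, with private key d and nonce k.
    # Rearranged: d = (s*k - z) * r^-1 mod n -- so a fully known nonce reveals d.
    n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551  # P-256 order

    d = 0xC0FFEE          # "secret" private key (toy value)
    k = 0x123456789ABC    # "secret" per-signature nonce (toy value)
    z = 0xD15EA5E         # message hash
    r = 0xABCDEF          # stand-in for the x-coordinate of k*G

    s = pow(k, -1, n) * (z + r * d) % n      # the attacker sees (r, s, z)

    d_recovered = (s * k - z) * pow(r, -1, n) % n
    assert d_recovered == d                   # knowing k fully reveals the key

With only a few known bits of each k, the same relation turns into one hidden-number-problem inequality per signature, which is what the lattice reduction described below solves.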


store something known in the secure storage...


It's a few bits over many signatures then fed into a lattice reduction algorithm.


Cool, so I guess at some point, you should just brute force the rest of the unknown bits.

E.g. if you’ve figured out 255 of the 256 bits, don’t keep running through more signatures because you’re likely to get bits you already know.


In this particular case they are using bits leaked from the nonce, which changes with each execution, and they feed those bits into hidden-number-problem inequalities solved with lattice reduction [1-3]. They aren't exactly brute-forcing the remaining unknown bits, but they do need to run the lattice reduction multiple times over different samples of their guesses until it works.

In other settings, yes, you can still gather leaked bits and use them to reduce the search space until it is small enough to brute force.

I've had my own elliptic curve implementation broken through differential power analysis, due to leakage from a conditional branch just like the one in this paper. It leaked only a single bit, and it wasn't even a bit of the key or the nonce, but the least significant bit of an intermediate value, in the same way the LSB was leaked in this paper. All I did was select one of two results, both of which were computed using the same number of operations every time, and discard the other based on whether a result was odd or even. Simply branching is generally enough to leak. After running many operations using the same key on different data, you could reduce the search space to one that could be brute forced in a few minutes. Both the OP's implementation and mine tried to use some side-channel countermeasures, but leaking this single bit was enough. You simply cannot use any conditional branching or memory addressing based on secret information [4-5]. Even being aware of this, it was not obvious how to avoid this conditional branch while implementing the algorithm (the math can be reformulated to avoid it).
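For reference, the standard fix (see the BearSSL link, [5] below) is to compute both candidate results and pick one with masks instead of a branch. A pseudocode-style sketch; Python bignums are of course not actually constant-time, this just shows the masking pattern you'd write in C:

    MASK64 = (1 << 64) - 1  # pretend we're working with 64-bit words

    def ct_select(bit: int, a: int, b: int) -> int:
        """Return a if bit == 1 else b, without a data-dependent branch."""
        mask = (-bit) & MASK64          # all-zeros if bit == 0, all-ones if bit == 1
        return (a & mask) | (b & ~mask & MASK64)

    # Leaky version branches on secret data:  result = a if secret_lsb else b
    # Branchless version computes both and selects by mask:
    assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
    assert ct_select(0, 0xAAAA, 0x5555) == 0x5555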

We did find that randomized projective coordinates [6] significantly helped in our case (we had already implemented this, but it was disabled by default). This probably only makes it more difficult to extract the secret, and it should itself be implemented in constant time.
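A rough sketch of what that countermeasure [6] does: lift each affine point into Jacobian coordinates with a fresh random blinding factor, so the internal values processed during a scalar multiplication differ on every run even for the same point. The coordinates below are stand-ins, not a real curve point:

    import secrets

    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff  # P-256 field prime

    def randomize(x: int, y: int) -> tuple:
        """Affine (x, y) -> Jacobian (lam^2*x, lam^3*y, lam), lam random nonzero."""
        lam = 1 + secrets.randbelow(p - 1)
        return (x * lam * lam % p, y * lam * lam * lam % p, lam)

    def to_affine(X: int, Y: int, Z: int) -> tuple:
        """Undo the blinding: (X / Z^2, Y / Z^3)."""
        zi = pow(Z, -1, p)
        return (X * zi * zi % p, Y * zi * zi * zi % p)

    x, y = 0x1234, 0x5678                    # stand-in coordinates
    r1, r2 = randomize(x, y), randomize(x, y)
    assert r1[:2] != r2[:2]                  # internal representations differ (w.h.p.)
    assert to_affine(*r1) == to_affine(*r2) == (x, y)   # but it's the same point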

[1] https://www.nccgroup.trust/globalassets/our-research/us/whit...

[2] https://www.youtube.com/watch?v=QCsvBFFLsqE

[3] https://www.nccgroup.trust/us/our-research/return-of-the-hid...

[4] https://cr.yp.to/highspeed/coolnacl-20120725.pdf

[5] https://www.bearssl.org/constanttime.html

[6] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.5...


"Only a few bits of each nonce are known for around 100 signatures", This is the challenge that we can't.


Could this allow bootloader unlocking, custom ROMs, etc. on an otherwise locked device (e.g. the S7)? I tried the engineering bootloader, but the battery management was horrible.

I'll avoid updating until I know more.


I guess it depends on which public key the device is willing to accept updates from. This exploit gets you the per-device keystore private key, which is not going to be used by vendors to sign builds. But perhaps there's an option somewhere to allow running firmware updates if the payload is signed by the device's keystore key; I don't know.
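For illustration, the gate an updater enforces looks roughly like this sketch (using the Python cryptography library; the function and key names are hypothetical). The point is that verification happens against a vendor build key baked into the device, not the per-device keystore key this attack extracts:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def accept_update(payload: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
        """Flash the payload only if it verifies against the vendor's build key."""
        pubkey = serialization.load_pem_public_key(vendor_pubkey_pem)
        try:
            pubkey.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
            return True    # signed by the vendor's build key
        except InvalidSignature:
            return False   # a leaked per-device key cannot forge this signature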


>We demonstrate this by extracting an ECDSA P-256 private key from the hardware-backed keystore on the Nexus 5X.

Did the fixes make it to the Nexus 5X? It has been EOL since December 2018, though the CVE (CVE-2018-11976) dates from 2018.



The BearSSL link is the important one. For one, because they could literally have copy-pasted the code from there and avoided the issue outright. And second, because it demonstrates how useless Qualcomm's approach of throwing an entire Indian city's worth of developers at a problem is when it comes to security. It just takes one.


Are there any interesting practical consequences from this in common apps?


Considering how some carriers refuse to unlock bootloaders, this may well be the only option some of us have to restore bricked phones, other than paying Google 250 bucks to reflash them.



