Passphrases are always going to be the strongest, but you can have more than six digits in your PIN code.
Select "Custom Alphanumeric Code" in Passcode Options[1], but only enter digits using the keyboard. iOS will display a PIN pad on the lock screen that will accept any number of digits[2].
I picked this up from the delicious iOS 11 security whitepaper[3].
This is plainly brilliant. I just did that, and the interface doesn't seem to give any clue about the expected number of digits, meaning an attacker has no way to even estimate the time needed to unlock. One could only figure out that the password's complexity increased after exhausting all attempts with fewer digits (which will already take a lot of time).
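To put a rough number on that point, here's a back-of-envelope sketch. It assumes purely random digits, Apple's stated ~80 ms per on-device guess, and no lockout delays; the range of lengths tried is my own assumption.

    # Back-of-envelope: search space when the attacker doesn't know the PIN length.
    # Assumes random digits and ~80 ms per on-device guess; ignores lockout delays.
    total_candidates = sum(10**k for k in range(4, 13))  # lengths 4 through 12
    seconds = total_candidates * 0.08
    print(total_candidates)             # 1,111,111,110,000 candidates
    print(seconds / (86400 * 365))      # roughly 2,800 years of worst-case guessing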
It would be nice to have a panic code one could type in, making the phone appear unlocked while hiding apps the user has marked as secret, and optionally sending a security/tracking alert. This could also thwart a cracking device by making the phone look cracked at a shorter code while giving up nothing actually useful.
Many fingerprint systems have this for access. A "Duress" Finger. If you use that finger, the system can be set up for a silent alarm, full on alarm, lockdown or whatever configuration you need.
I would gladly give up my passcode if I were mugged. The main issue is either smarter, more mischievous criminals, identity thieves and the like, or bad governments.
It would be terrible if all airports had these and could crack you on demand. I can imagine them holding you until the device cracks it.
That XKCD comic was wrong and one of the rare truly stupid and actively misinformative ones. Security measures are always about responding to threat scenarios and economics, and must in turn be analyzed in context. The general use paradigm for an average device involves many different threats and in turn separate measures for each one. FDE, for example, is only about protecting from cold attacks; warm/hot ones are just plain out of scope.
Similarly, authentication systems (including passcodes and biometrics) are purely about ensuring that only authorized people may gain access with whatever permissions they're authorized for and no more. Their threat scenario and usage does not involve intent. Authentication "primitives" can be used as part of a counter-intent security measure, but by themselves they only determine a "who", not a "why". Strong authentication backed by strong encryption is a standalone good and a required foundation for more complex stuff, but that's it. The "wrench" attack is an intent-based threat scenario: the person performing the access is in fact authorized, and there is nothing buggy, malfunctioning, misdesigned, weak, or wrong in any way with an authentication system allowing it. It would take an intent-reactive security measure to deal with.
Regrettably, there are still no mainstream smartphones that implement this as far as I know (though at times it's been possible to roll your own to some extent via jailbreaking on iDevices, and I assume on rooted Android as well). Which is really too bad, because Apple in particular has a lot of very good tools at this point to do a really, really good implementation. A classic measure would be coercion/distress codes, i.e., alternate passcodes an operator can enter that cause the device to perform different actions than a straight authentication (these can range from a full deletion to more subtle actions like a silent alarm). Apple could use Touch ID/Face ID to make this even more user friendly: merely "use this finger vs. that finger" or "make this facial expression vs. that one" as triggers. They (and anyone else with a secure hardware stack) could also implement temporal and spatial trigger options, which would be very useful.

Finally, the iOS design is decently placed to allow relatively easy and more selective view filters on apps and data, because of how heavily everything is siloed. The silos have been intensely frustrating a lot of the time compared to standard computer access patterns, but in this instance they could be a real strength. Imagine being able to set a "travel view" before a trip so that sensitive apps simply vanish until GPS indicates we've arrived at our destination, or we've connected to a specific network, or whatever. Or there could be distress views that react to a facial expression or code or finger and then permanently hide everything but a cleaned, minimal data and app set, including your cloud stuff, until you get home and enter a special unlock code (or for a set amount of time, or whatever).
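As a purely hypothetical sketch of that coercion-code idea (the codes, salt, iteration count, and actions below are all made up; nothing like this exists in iOS or Android today):

    import hashlib, hmac

    SALT = b"per-device-secret"       # stand-in for a device-unique secret

    def digest(code: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", code.encode(), SALT, 100_000)

    NORMAL = digest("482910")         # the real passcode (illustrative)
    DISTRESS = digest("738201")       # the alternate "duress" passcode (illustrative)

    def handle_entry(code: str) -> str:
        d = digest(code)
        if hmac.compare_digest(d, NORMAL):
            return "unlock"                       # normal full access
        if hmac.compare_digest(d, DISTRESS):
            return "unlock_filtered_and_alert"    # hide sensitive silos, fire a silent alarm
        return "reject"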
I think intent systems/view triggers will (or at least should) be the next big leap forward for helping our personal information devices get not just more secure and better for privacy but more productive. I'd like to see that start to show up in future versions of iOS and Android more than nearly anything else.
Fine, but a wrench, or something analogous, is the most serious threat model for most smartphones. On Apple devices, at least, the Find my iPhone feature mitigates most others.
But why not take it a step further and present the standard keyboard? That would be a slight inconvenience, with a massive gain in confusion for anyone trying to guess their way in.
I had a bug recently on my Windows Phone after adding my corporate email account to Outlook. I assume it was some sort of group policy being erroneously applied, but it turned out to be a massive inconvenience.
Mostly because the unlock screen displayed the standard keyboard[1], yet it simply wouldn't allow me to create a PIN with anything other than numeric characters[2]. So every time I needed to unlock my phone I had to swipe the screen up, switch the keyboard over to number/symbol mode, and enter my PIN using the small row of numbers there.
In the end, all I had to do was change my pin, and from that moment on, it only ever displayed the standard number pad for unlocking.
I have a 15+ character password that is a mix of alphanumeric and punctuation. I have to enter it at startup and also once per day, which isn't a huge deal.
Naturally I use fingerprint/smartwatch authentication throughout my normal day, but once the phone is off, good luck getting access to its contents.
If this actually works there has to be some huge, embarrassing vuln in Apple's Secure Enclave Processor on par with the "CTS Labs" AMD secure coprocessor hoopla that hit the news just this week.[1][2]
The SEP is supposed to enforce a time delay between passcode attempts to prevent this sort of brute forcing. The timer could be defeated in older models by cutting power at just the right time, but Apple's whitepaper says it's supposed to survive restarts now.[3]
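For reference, the escalating delays the whitepaper describes look roughly like this (values as I recall them from Apple's iOS Security Guide):

    # Seconds the Secure Enclave forces you to wait after the Nth failed attempt;
    # an opt-in wipe ("Erase Data") can follow the 10th failure.
    DELAY_AFTER_FAILURE = {
        1: 0, 2: 0, 3: 0, 4: 0,
        5: 60,          # 1 minute
        6: 5 * 60,      # 5 minutes
        7: 15 * 60,     # 15 minutes
        8: 15 * 60,     # 15 minutes
        9: 60 * 60,     # 1 hour
    }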
Based on the screenshots it looks like it can load custom firmware on the iPhone. That's bad.
It seems like they don't have the exponential delays, but they do have delays. Why else would it take 3 days to unlock the phone if it has a 6-digit passcode?
Apple's security paper says it would take more than five and a half years to brute-force a six-character lowercase alphanumeric passcode at 80 ms per iteration. I suspect that's still correct here (if “3 days or more” is for six numeric digits).
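A quick back-of-envelope at the ~80 ms-per-derivation figure (ignoring lockout delays and the 10-attempt erase option) is consistent with both numbers:

    PER_GUESS = 0.08                    # seconds per on-device derivation
    numeric_6 = 10**6 * PER_GUESS       # 80,000 s: about 22 hours worst case
    alnum_6 = 36**6 * PER_GUESS         # ~1.74e8 s: about 5.5 years (lowercase + digits)
    print(numeric_6 / 3600, alnum_6 / (86400 * 365))

So roughly 22 hours worst case for six random digits; "up to three days" would fit once some residual per-attempt delay is added on top.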
If this new time delay is via an off-chip RC circuit (one of the few ways for a timer to keep working with the power off, since the capacitor acts like a battery), then it could be defeated by swapping out the components. RC circuits can be made on-chip, but large-value resistors and capacitors are very expensive to place on a die.
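For a sense of scale (the component values here are arbitrary, just to illustrate the point): an RC delay's time constant is tau = R * C, so stretching delays across a power cut takes very large parts.

    # tau = R * C, in seconds, with R in ohms and C in farads.
    R = 1e9        # 1 gigaohm: far larger than anything practical on a die
    C = 1e-6       # 1 microfarad
    tau = R * C    # 1000 s, roughly 17 minutes per time constant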
Sure, but it does require physical access for at least a couple of hours, up to multiple days, if you don't use a predictable PIN code. And from what I gather it doesn't even threaten a passphrase.
It supposedly threatens 6 digit passcodes, the default that 99.9% of iOS users will use. And the whole point of a passcode is to protect against physical access.
Time delays only provide a false sense of security. In theory I could always cut open the casing and just plug wires straight into the eMMC or whatever you have in there. Your time-delay UI is useless if I just bypass it and wire straight into the hardware.
Of course that's non-trivial EE work, but the point is it's possible for someone with enough money and the right equipment. What would make it intractable is to ditch the idea that a 4-digit PIN is protecting you from anything. There's simply not enough entropy in that.
Time delays are useful protection over a network, but not when the attacker has physical console access, e.g. to a phone. At that point proper cryptography and mathematics are the only good protection.
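To put numbers on the entropy point, assuming uniformly random choices (which real users rarely make):

    from math import log2
    print(log2(10**4))    # 4-digit PIN:                      ~13.3 bits
    print(log2(10**6))    # 6-digit PIN:                      ~19.9 bits
    print(log2(62**10))   # 10-char mixed-case alphanumeric:  ~59.5 bits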
I mean, I don't think they're dumb enough to put unencrypted data out on busses that you can just probe. All the encryption is done inside the same package as the CPU; if you read the RAM or flash directly, you'll just get garbage because it's encrypted.
Anyway, to break the encryption involves finding bugs in the software that runs in the secure zone (the same way you'd defeat a time delay in a networked application) or by opening up the CPU die and figuring it out with an electron microscope (perhaps while the CPU is running).
Ultimately, software is going to be the productive attack vector. While the TrustZone docs emphasize "small" for the amount of code that runs in the secure zone, no programmer can ever do "small" on a deadline, so I wouldn't trust it to provide any actual security. But the hardware infrastructure is there to be pretty secure against any adversary that doesn't have a lot of time or money, if the programmers do their job correctly. (I don't trust it because of the time schedules involved for these SoCs: one or two a year, with a lifetime of maybe a couple of years. Nobody is going to spend the money to do the hard work like looking for bugs just so that someone can't rip Netflix video feeds. The engineering work would be more expensive than whatever their contract with the DRM vendors says they have to pay if they get hacked, which is the only incentive to be secure... and I doubt Apple would sign one of those.)
You misunderstand the SEP. It contains an externally unreadable private key baked in at manufacturing time that encrypts protected data. Your "wires" would read garbage. The iOS security white paper is worth a read.
Perhaps a nation-state actor could shave down the processor and read that key with a SEM or some crazy thing, but that's literally how far the design is supposed to have pushed iOS security. Which is what makes this hack so embarrassingly bad (if confirmed).
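Loosely, in ordinary terms (this is an analogy, not Apple's actual construction; the function name and iteration count are made up): the passcode is tangled with a device-unique UID key that never leaves the Secure Enclave, so every guess has to run on that specific device and each derivation costs on the order of 80 ms.

    import hashlib, os

    DEVICE_UID = os.urandom(32)   # stand-in for the fused, externally unreadable UID key

    def derive_wrapping_key(passcode: str) -> bytes:
        # Iteration count tuned so one derivation takes roughly 80 ms on-device.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 200_000)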
I think I read somewhere that some secure coprocessors incorporate physical defenses that will destroy keys if you try to shave them down or physically tamper. So yeah. Hard.
> Perhaps a nation-state actor could shave down the processor and read that key with a SEM or some crazy thing, but that's literally how far the design is supposed to have pushed iOS security. Which is what makes this hack so embarrassingly bad (if confirmed).
Shaving down the processor is hard but still possible. A government with a billion-dollar budget can probably do it if they want, practicing on a million throwaway phones in the process. Apple's design is also known as security by obscurity (https://en.wikipedia.org/wiki/Security_through_obscurity).
I much prefer an open design and security by high-entropy keys and well-established practices in the cryptography. I'd feel much safer using open cryptography libraries and strong passwords than arbitrarily and blindly trusting some secretive engineers at Apple.
Apple's design is super-hard to break but possible.
My Linux laptop, on the other hand -- the hard drive is standard SATA. Have at it. I don't care if you have a billion dollars, you'll have a hard time breaking into my data for probably at least a few decades.
That is just totally false. Apple does not rely on security through obscurity. They may not publish the code for the SEP but they do have a whitepaper that lays out the key architectural features.[1]
The only reason your Linux laptop would be more secure than an iPhone is if you were using a high-entropy key to unlock it every time you wanted to use it -- and the iPhone wasn't. That's it. But remembering high entropy keys without storing them in some less secure manner is so inconvenient and failure-prone for most people that it's actually a less secure design than Apple's SEP-assisted approach. And you could opt-in to a high entropy key on an iPhone if you wanted to, so even that is a false comparison.
One of the points of having a secure enclave is that it can enforce things like time delays. I doubt that the time delay in question is only enforced by the UI outside the enclave.
However, it does mean that an iPhone’s security cannot be ensured if it falls into a third party’s hands.
That was and will always continue to be true. Even secure cryptoprocessors of the type used in smartcards and HSMs can be cracked with enough determination and time. There are companies in China who will read and clone them for surprisingly little money.
It has always amused me somewhat how scared of governments some people are (or at least the impression articles like this give), while at the same time completely accepting and trusting being herded and controlled by the companies they purchase these locked-down computers from. Anything you truly want to keep secret should be encrypted by systems you have knowledge of, with a key that only you know, or even better -- not leaving your brain at all.
Unfortunately, the IP-Box 2 became widely available and was almost exclusively used illegitimately, rather than in law enforcement.
If by "illegitimately" you mean third-party repair shops... I know Apple doesn't like that, but the whole *-box series are aimed at the mobile repair industry (a huge business in China), not law enforcement.
The fact that the methods of accessing the device are so secret seems very prone to a challenge under court rules of evidence.
The accused has a right to know exactly how evidence was obtained and whether the chain of custody was broken; just hiding behind an NDA isn't going to cut it.
Is it just me or does the price point seem extremely low?
They have a device that should be in high demand globally, and maybe one competitor.
And they are charging 15-30k, for basically unlimited usage??
You can't tell me federal law enforcement wouldn't pay at minimum ten times that amount for metered usage...
That was my thought as well - but on the flip side, by dealing in quantity they are a lot more likely to have one leak and be reverse engineered, and thus have Apple render them all useless.
It's certainly an interesting problem of profit maximization!
Unless the vulnerability is in the CPU/DMA/whatever and not easily patched. Everyone assumes that Apple has no idea what it is; maybe they are keenly aware and it's just not fixable.
> The cheaper model isn’t much of a danger if stolen—unless it’s stolen prior to setup—but at 4″x 4″x 2″, the unlimited model could be pocketed fairly easily, along with its token, if stored nearby. Once off-site, it would continue to work.
Presumably even the cheaper model could be reverse engineered to reveal the exploit used. But once it becomes known, it would be patched.
> (e) Law Enforcement, Intelligence, and Other Government Activities.— This section does not prohibit any lawfully authorized investigative, protective, information security, or intelligence activity of an officer, agent, or employee of the United States, a State, or a political subdivision of a State, or a person acting pursuant to a contract with the United States, a State, or a political subdivision of a State.
I hate this stuff. I want to secure my device and not have the govt or companies steal it, I want to control my device. Still, it's fascinating to learn about.
Did no one think that when they take someone's phone for 5 minutes at the border, they could be doing this to it?
>>It can take up to three days or longer for six-digit passcodes, according to Grayshift documents, and the time needed for longer passphrases is not mentioned.
So yeah, up to three days for six-digit passcodes. If you have a longer passcode with letters and special characters, you could wait a long, long time.
Can GreyKey or anything else really bypass the unlock attempt counter of an iPhone set to erase itself after 10 unsuccessful attempts? Have they found a way to replace the firmware that executes that erase procedure? In that case, only password complexity can save you. But no evidence is shown that proves they can accomplish this.
Humans being abysmal PIN and password generators, a decent fraction of phones can probably be unlocked within 5 attempts by just trying 123456, 123123, 111111, 654321, 000000. Unless/until the phone forces the user to learn rather than select a PIN that's probably going to remain the biggest vuln.
On the other hand, perhaps Apple could secretly partner with a law enforcement team and purchase one for themselves. $15k and $30k are literally nothing to Apple with their warchest in the tens of billions.
The ‘offer’ isn’t illegal - going through with it would be though, for both sides. Grand theft and receiving stolen goods. Both not great, plus you’d be actively acting against the law enforcement system which would ensure a zealous prosecution.
It's a realistic point though. Illegal industrial espionage definitely happens and with the resources of a huge multinational corporation it becomes easier to conceal behind a wall of secrecy and misdirection. I doubt Apple would do it, but it also wouldn't surprise me and they definitely could.
All they have to do is discreetly obtain the device in question and have a few good engineers quietly pick it apart for a few weeks to figure out how it works. They then patch the vulnerability in a regular update, claiming they discovered it as part of normal procedure, and nobody takes notice.
Edit: the legality of the device itself is kinda interesting to me. Like, even if it is doing something illegal (like using stolen code or something), how would Apple prove it as long as it was only sold to law enforcement? If the police aren't asking too many questions and Apple can't legally acquire one, how do they prove it? I suppose they'd have to gather enough circumstantial evidence to get a judge to issue a subpoena, but things get a bit dark and fuzzy.
I’m not a lawyer, but quoting from the Wikipedia article the DMCA “also criminalizes the act of circumventing an access control, whether or not there is actual infringement of copyright itself.” You could argue that you hold copyright on for example a photo you’ve taken, and the passcode is the access control method.
I’m also not a lawyer, but I have no reason to believe that would be an arguable case; the fact that an access control method incidentally makes gaining access to copyright materials harder doesn’t make it a copyright protection method.
Not a lawyer, but copyright protections methods may be implemented as an access control system. For example, I could sell content to iPhone users under the assumption that third parties won't be able to copy or access said content. More to the point, things like Apple Music, the App Store or the iTunes Store all could be argued to rely on the access control mechanisms as part of the DRM system.
> An iPhone typically contains all manner of sensitive information: account credentials, names and phone numbers, email messages, text messages, banking account information, even credit card numbers or social security numbers. All of this information, even the most seemingly innocuous, has value on the black market
My phone has no banking information, credit card information, Social Security numbers, or email accounts that can be used to recover or reset access to any online service. Why? Because I don't trust my phone.
But aside from all that, all that information is already on the black market. There have been too many breaches (Equifax, just to name one) to think otherwise.
I wonder: Apple has hundreds of billions in overseas cash. Why don't they go after Cellebrite and Grayshift and offer the owners something to the tune of 1-2 billion US$ in hard cash? Given the reputation hit once this knowledge becomes widespread, a couple billion dollars are pocket change.
>The cheaper model isn’t much of a danger if stolen—unless it’s stolen prior to setup—but at 4″x 4″x 2″, the unlimited model could be pocketed fairly easily, along with its token, if stored nearby. Once off-site, it would continue to work. Such a device could fetch a high price on the black market, giving thieves the ability to unlock and resell stolen phones, as well as access to the high-value data on those phones.
If this gets stolen and put on the black market, that would be a good thing. Because then Apple can buy one, figure out what vulnerabilities it's using, and patch them.
This seems at odds with Apple’s claims about holding the device encryption keys in a secure coprocessor that only releases them in response to a valid passcode, and self-destructs the keys if too many passcodes are tried.
It's not at odds with it - it's pretty obviously using a vulnerability to run a crack against the passcode. Once the passcode is found, that is used to unlock the phone and thus the Secure Enclave.
I agree it’s not at odds with it, but it’s not even that simple - the passcode is enforced by the Secure Enclave itself. It’s not a case of “try passcodes until you find the right one then tell the SEP” - it has to be exploiting a vulnerability in the SEP itself, assuming what we know of the design and attack is true.
Technically, this does not absolutely have to be a SEP vuln. It's possible that the PIN is being stored (not properly zeroed after use) somewhere outside the SEP, e.g. in a pin-pad entry buffer or something. Often criminals do not think to, or are unable to, turn off their phone when being arrested. Furthermore, the iPhone battery is not removable, and some portions of its DRAM - in theory anyway - could persist even when the phone is "off". Apple has (or at least, prior to public outrage, had) a fairly loose definition of "off" for other aspects of the system, such as Bluetooth.
I thought iPhones were electronically secure; it seems they are not. I thought the FBI had to just do some X-ray of a chip to read some ROM.
Sometimes I wonder if real security is really and theoretically possible, or if it's just engineers who never manage to achieve it because designers want things to be usable for consumers.
Whatever happens, it doesn't seem that truly secure, consumer-oriented devices exist. I wonder if there are Android devices that do a good job at this, and what the status of Android security is in general; I would guess it's not better.
Are you willing to pay $500k for a phone? Is there a vendor willing to put in a $20 million R&D investment so you can buy one? How many more phones would they sell? If you were Ed Snowden, would you even trust that company?
Are you going to buy a safe to keep family photos in it?
What does it even mean for you to have absolutely secure phone if you are going to be hit by a bus tomorrow?
Yes it is, and has been done. Subject to [edit: a] security policy (NOT defined by implementation) and a meaningful statement of threat (what are you protecting against?)
I mean, if they truly broke an iPhone lock, then it means they had to be tampering with a real Apple device (not a dummy) in order to make their device work. Therefore they violate Apple's TOS, which I am sure forbids any sort of backdooring. I doubt Apple will go after a rogue Chinese jailbreaker sitting in mom's basement trying to make a name for him/herself, but here we have a for-profit incorporated business that makes 100% of its money by breaking Apple's devices.
On the other hand, if this is all just some sort of marketing gimmick, or the device has never been truly tested on an iPhone, then I am sure Apple can go after them for attempting to shame iOS/iPhone by making users think their devices are less secure than they actually are, which could hit Apple's bottom line.
Select "Custom Alphanumeric Code" in Passcode Options[1], but only enter digits using the keyboard. iOS will display a pin pad on the lock screen that will accept any number of digits[2].
I picked this up from the delicious iOS 11 security whitepaper[3].
[1] https://i.imgur.com/KEEC71B.png [2] https://i.imgur.com/YrgQA5s.png [3] https://www.apple.com/business/docs/iOS_Security_Guide.pdf