Remote iPhone exploitation part 2: a remote ASLR bypass (googleprojectzero.blogspot.com)
285 points by weinzierl on Jan 9, 2020 | 71 comments



This exploit, like many before it, makes use of NSKeyedUnarchiver.

It's a class which allows arbitrary types to be serialized on the wire.

I think it's time that Apple had a new policy that no NSKeyedUnarchiver will ever go from one iPhone to another. They could do that by having a random 64 bit number in the header, and if the header doesn't match the local device, refuse to unarchive.

Any software that breaks would then have to find a new way to encode the specific data it needs to send over the wire. Specifically, require it to send a specific serialized class structure (i.e. something with a schema) rather than something general purpose.
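Apple's secure-coding API already gets partway there. A minimal Swift sketch (the Payload class and its single field are invented for illustration) shows how unarchiving can be restricted to an explicit allow-list of classes rather than arbitrary types:

    import Foundation

    // Hypothetical message payload; any class decoded off the wire must be
    // listed explicitly and must adopt NSSecureCoding.
    final class Payload: NSObject, NSSecureCoding {
        static var supportsSecureCoding: Bool { true }
        let text: String

        init(text: String) { self.text = text }

        func encode(with coder: NSCoder) {
            coder.encode(text, forKey: "text")
        }

        init?(coder: NSCoder) {
            guard let text = coder.decodeObject(of: NSString.self, forKey: "text") else { return nil }
            self.text = text as String
        }
    }

    func decode(_ data: Data) throws -> Payload? {
        // Only Payload (and the plain value types it decodes) may appear in
        // the archive; anything else makes the unarchiver throw instead of
        // instantiating it.
        try NSKeyedUnarchiver.unarchivedObject(ofClass: Payload.self, from: data)
    }

That still doesn't tie an archive to the originating device, as proposed above, but it does remove the "arbitrary types on the wire" part of the attack surface.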


Using NSKeyedUnarchiver (and other language serializers) on the wire is also a great way to make sure your product is going to be extremely hard to port to other platforms. Even the plist version is not easy to handle outside Apple world.

I've consulted for many companies that needed to expand beyond the Apple horizon, only to find that their whole infrastructure is based on Apple's binary formats, which aren't well documented and aren't easily (or reliably) readable on other platforms.

The pragmatic move in these cases is to use formats that aren't platform dependent: JSON, Cap'n Proto, msgpack and others.
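For the Apple side of such a migration, Swift's Codable makes the swap fairly mechanical. A rough sketch using an invented Message type:

    import Foundation

    // Invented wire type for illustration.
    struct Message: Codable {
        let id: Int
        let body: String
    }

    func roundTrip() throws {
        let msg = Message(id: 1, body: "hello")

        // JSON instead of an NSKeyedArchiver plist: the bytes can be read by
        // anything with a JSON parser, not just Foundation.
        let wireBytes = try JSONEncoder().encode(msg)
        let decoded = try JSONDecoder().decode(Message.self, from: wireBytes)
        print(decoded.body)
    }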


> your product is going to be extremely hard to port to other platforms

I'm sure this isn't a particularly troubling issue for Apple.


I'm sure they'd like the ability to port a service like iMessage to Android, just to give themselves options in case a big competitor starts eating their user base with cross-platform support.


Apple could just build their frameworks for other platforms–they've done it before.


It's always interesting to read about these things. It's amazing to me how hard it is to find a bug like this.

Also interesting: Zerodium would pay up to $2 million for something like this, and even more now on Android.


It'd be capped at $1.5 million [1] since he didn't get persistence, but still very impressive.

[1]: https://zerodium.com/program.html


> [1]: https://zerodium.com/program.html

Warning: scrolljacking... Firefox should really fix that bug, like they did with the "new" popups.


If you use NoScript the site is perfectly viewable and has no scrolljacking.


I have never seen one instance where it was beneficial to the user... some of these programmers (a minority) really think they know better.


What's the business model of Zerodium? Do they sell the exploits on the black market somehow? How are they able to pay $2M unless they expect to extract an even bigger bounty from Apple (in which case the security researcher could just go to Apple herself)?


This is the first I've heard of them, but they seem to be based in DC and have government clients. I'm guessing they are funded by the US government to acquire vulnerabilities to use against targets, as well as to keep them from being sold to other countries first.


That sounds grossly unethical. While the US may keep these vulnerabilities from being sold to other countries first (which might still happen), it probably also acquires them to attack/spy on its own citizens; there have been precedents for this, and no indication of a policy change to stop it.


What do you think the CIA and CyberCom do for a living?

Buying vulns/exploits is just a little government outsourcing.


Ethically, comparing to the CIA is about the lowest bar possible.


They do entertaining things as long as you stay comfortably more than ten feet from the end where freedom is expelled at supersonic speeds, though: from buying titanium sheets from communists to build the fastest airplanes to spy on communists, to letting their own psychos spike unsuspecting grad-school elites to experimentally manufacture terrorists.


> it probably also acquires them to attack/spy on its own citizens

Or possibly, "to spy on foreign nationals and to share with FVEY partners - who'll do the spying on US citizens, because doing it themselves would be illegal"... :sigh:


Asking other countries to spy on US citizens is also illegal. You are spreading a rumor that has no basis in fact.


Commercial vulnerability brokers don't pay out up front; they pay in monthly tranches, and stop paying when the vulnerability stops working. It's not an apples-to-apples comparison with bounty payouts.


What's the incentive to go to Cellebrite rather than Apple? Cellebrite offers more, but Apple pays a lump sum, from what I can tell.


I guess that's what stops you from selling it to multiple orgs.


CCC video that explains this pretty well -

https://media.ccc.de/v/36c3-10497-messenger_hacking_remotely...


Man, security research is seriously elite. It's like a mixture of arcane magic, invention, and treasure hunting. I wonder how much time it takes to find vulns like this.


Months of dedicated research, utilising a wealth of knowledge that comes from spending a not-insignificant portion of your career researching such bugs.


Google is so good at finding fatal iPhone security bugs, maybe Apple should pay them. ;-)


They seem to already be incentivized sufficiently, though.


Obviously a business decision; it is in Google's interest to protect their services and official apps on all the platforms they support.


... and publicize vulnerabilities in their main competitor.


Right, because they should instead publicize all their discoveries except for the Apple ones. Because obviously.


That’s not what I said. There’s an incentive for them to find Apple vulnerabilities.


Apple doesn't spam the internet with ads.


Apple doesn't need to have the same business model to be a competitor in the mobile OS space.


This is why I consider address space randomization to be a cosmetic fix that doesn't solve the real problem (a memory safety violation) but gives an excuse for not fixing it. It just increases the cost of attack development beyond the script-kiddie level. Most of today's attackers seem to be competent and well-funded, either by nation-states, organized crime, or the advertising industry.


That's like saying you don't like crash barriers on the side of a highway because they don't address the real problem of cars crashing.

Yes, ASLR doesn't address the root bugs. Yes, ASLR can be bypassed. But it still lowers the overall risk and so is worth having around.


No, it's not. Crash barriers are a defense against random events. Attackers are non-random and purposeful. Several defensive layers, all with holes, only protect against inept attackers.


I suspect the number of inept attackers is rather high.


Great. So it does actually help.


You're making their point.

You're saying it protects against "inept" attackers; that lowers risk.

No need to nitpick a simple analogy.


His comment is a textbook example of the kind of nihilistic antagonism so common in the security industry.


There can be only one response to these people. Stop using computers. Computers are hugely complex, and no amount of software level shenanigans can amount to anything other than a crash barrier.


> nihilistic antagonism so common in the security industry

Thank you, that is a very nice label for an annoying phenomenon.


ASLR isn't a complete exploit mitigation technique; it just hinders attackers by usually requiring them to pair their exploit with another one. On many platforms the limited number of random bits makes it clear that it's not a perfect protection against all attacks.
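Back-of-the-envelope only (the bit counts below are made up for illustration, not any particular platform's figures): with n bits of slide entropy, a blind brute force needs roughly 2^(n-1) guesses on average, which is why low-entropy ASLR is considered weak.

    // Rough arithmetic only; real entropy varies by platform, process type
    // and memory region.
    for bits in [8, 16, 24, 32] {
        let expectedGuesses = 1 << (bits - 1)  // ~2^(n-1) tries on average
        print("\(bits) bits of ASLR entropy -> ~\(expectedGuesses) guesses")
    }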


No, most of today’s attackers are not competent or well-funded, but you don’t hear about them because they’re not news.

ASLR changes the game: instead of only needing a control flow vulnerability, you also need an info leak and a way to get the leaked address back into the exploit.

That’s a nontrivial improvement.
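A toy way to see what the attacker is missing (Swift on a Darwin system, purely illustrative): the load address of a well-known symbol moves around because of the slide, and that is exactly the kind of address an exploit has to leak before it can redirect control flow.

    import Darwin

    // Print where a libSystem symbol ended up in this process. The address
    // changes between boots (for the shared cache) or launches (for the main
    // binary), so an attacker can't just hardcode it.
    if let handle = dlopen(nil, RTLD_NOW), let addr = dlsym(handle, "malloc") {
        print("malloc loaded at \(addr)")
    }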


> Most of today's attackers seem to be competent and well-funded

By volume, no. There are enormous numbers of low-level and mid-level hackers out there working for any number of interests. You have to be a pretty elite level of organised crime before your attacks reach the level achieved by nation states.

Security design is to some extent a cost tradeoff - the more you spend on security the more the attackers are going to have to spend on attacking you. If you're dealing with insensitive data, and are only going to be of interest to script kiddies, then you can spend only a little. If you're a widely deployed device, and people use it to store sensitive data that might be of interest to nation states, then you have to spend enormously more to mitigate against it.

Layered security or "defence in depth" is how anyone in the field will tell you to design secure devices these days. ASLR doesn't do anything on its own; it just amplifies the cost of any other attack, as do other mitigations like W^X. No layer is ever 100% secure in the real world, but 5 layers that are each 99.9% secure make for a difficult-to-attack system.


Learn what "defense in depth" means.


I agree. This is why Rust and Go don't need it. Address space randomization is an attempt by the OS to work around language problems (specifically C's). Nothing against C, I love it, but people really need to move away from it where possible.


Rust and Go can both have C-based libs linked into their address space, thus exposing them to the same problems ASLR diminishes (I accidentally wrote "solves" here; ASLR solves nothing, it just makes exploits harder. It's a holey raincoat...). Also, both language runtimes and compilers might have flaws. "Don't need" is dangerous hubris imho, of the same kind that was the downfall of Java security. Lots of Java exploits were actually just broken linked versions of libjpeg and others.

No single measure will ever be sufficient, one should never become complacent like that...


Since at least Rust depends on the system C library, it gets ASLR whether it needs it or not.


Plenty of languages don't need it, even some older than C.

This all comes down to C designers ignoring the security best practices of other systems programming languages, and UNIX clones having mainstream adoption.

The Morris worm (1988) wouldn't have been possible in regular Modula-2 (1978), Ada (1983), Mesa (1976), ESPOL/NEWP (1961), PL/I (1964), PL/S (1974) or BLISS (1970) code, just to cite a few of them.

Yet C17 still doesn't provide language features to prevent another Morris worm, and the security Annex was made optional instead.


> ignoring the security best practices of other systems programming languages

Can you please expand on this point?


Bounds checking, explicit unsafe blocks, proper strings, explicit type conversions, proper enumerations, checked arithmetic.
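Swift happens to demonstrate most of that list in a few lines; this is offered only as an illustration of the features themselves, not of how the historical languages below expressed them:

    // Bounds checking and checked arithmetic are the default; opting out is
    // explicit and greppable.
    let values = [1, 2, 3]
    print(values.count)    // 3
    // values[3]           // traps at runtime instead of reading out of bounds

    let x: Int8 = 120
    // let y = x + 10      // rejected: overflow is an error, not silent wraparound
    let wrapped = x &+ 10  // wrapping arithmetic has to be spelled out

    // Raw memory access is confined to clearly marked unsafe constructs.
    withUnsafeBytes(of: wrapped) { raw in
        print(raw.count, "byte(s), first byte:", raw[0])
    }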

For example, ESPOL/NEWP was the first systems programming language with explicit unsafe blocks, 8 years before C came into the world.

So: hardware that was much weaker than the PDP-11, yet capable of supporting such security features.

Burroughs B5000 OS is still being developed, nowadays known as Unisys ClearPath MCP, and its top selling feature is being a mainframe system for customers whose security is the top priority.


> Burroughs B5000 OS is still being developed, nowadays known as Unisys ClearPath MCP, and its top selling feature is being a mainframe system for customers whose security is the top priority.

What's special about it?


It's implemented in a systems programming language that has everything security-related that C lacks.

> Bounds checking, explicit unsafe blocks, proper strings, explicit type conversions, proper enumerations, checked arithmetic.

Plus capability-based security access, and unsafe blocks taint binaries, requiring admin permission before they can be executed.


Are these OS or hardware features?


Language and OS features.

Recent hardware versions use Xeons.

Again, just features missing from C, and available in other languages.

Another notable feature: it did not make use of any Assembly, back in 1961, because all CPU features are exposed as compiler intrinsics.



So... as a dumb consumer, how do I protect myself from this?


Always promptly install security updates. Other than that, there is no protection from this specific class of vulnerabilities.

There are other good security practices a slightly savvier user can follow, like running a soft firewall and installing fewer apps, but they wouldn't mitigate this vulnerability.


This sounds nice in theory, but the real world says there is a greater-than-zero chance that a quickly released patch has a negative effect on your device, up to and including bricking it. As a long-time Apple user, I never install version X.0. I wait until at least X.0.1.


I think you're mistaking the instability of Apple's x.0 releases for "negative effects" of security patches.

x.0 releases do a lot more than patching. Security updates are much smaller and more frequent than even x.x.1 updates


There is no such thing as a "security patch" for iOS. If they release an update, they are releasing features as well. How quickly we forget the debacle that was iOS 13. This was not a smooth deployment[0]: 13.0.0 (9/19/19), 13.1 (9/24/19), 13.1.1 (9/27/19), 13.1.2 (9/30/19), 13.1.3 (10/15/19).

I wait a couple of days to make sure updates are not bricking devices. Waiting just a week saw 3 updates, and a 4th within 10 days. Some of those updates left the device with seriously crippled capabilities.

[0] https://en.wikipedia.org/wiki/IOS_version_history#iOS_13


Oops, I was talking about macOS releases, not iOS. The numbers don’t match between the two


There is no protection against adversaries that have the resources it takes to stockpile exploits like these.

Feel good knowing that you are probably not important enough for them to target.


I wonder if Jamal Khashoggi considered himself "important enough"?

Or his Canadian friend Omar Abdulaziz?

https://www.washingtonpost.com/world/middle_east/khashoggi-f...


If not, they should have.


Project Zero usually releases the details 90 days after reporting the bug, hopefully giving Apple ample time to patch it. So assuming they have, simply being on the latest version means you are safe.


> This is the first blog post in a three-part series that will detail how a vulnerability in iMessage can be exploited remotely without any user interaction on iOS 12.4 (fixed in iOS 12.4.1 in August 2019).

> The vulnerability was first mitigated in iOS 12.4.1, released on August 26, by making the vulnerable code unreachable over iMessage, then fully fixed in iOS 13.2, released on October 28 2019.

(From the previous blog post in the series.)


That's also assuming Project Zero found this first...


I guess it depends on whether the question was the more general "how do I protect myself from zero days" or "how do I protect myself from this specific zero day". For the latter: yes, someone could have been exploiting it in the past, but it has now been patched for some months, and if you're up to date you should be safe.


Don't use any machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming.

Assume every electronically programmable device can be, has been, and will be compromised, now or at some point in the future.


Maybe they should have a class detailing how to debug iOS or macOS processes.



