This exploit, like many before it, makes use of NSKeyedUnarchiver.
It's a class which allows arbitrary types to be serialized on the wire.
I think it's time Apple adopted a new policy that no NSKeyedUnarchiver archive will ever travel from one iPhone to another. They could do that by putting a random 64-bit number in the header: if the header doesn't match the local device, refuse to unarchive.
All software that this breaks would then have to find new ways to encode the specific data it needs to send over the wire. Specifically, require it to send a specific serialized class structure (i.e. something with a schema), rather than something general purpose.
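A minimal sketch of the proposed check, assuming a hypothetical 8-byte nonce prefixed to each archive (the header field and its size are my invention for illustration, not part of the real NSKeyedArchiver format):

```python
import secrets

# Hypothetical per-device secret, generated once (e.g. at first boot).
DEVICE_NONCE = secrets.token_bytes(8)  # random 64-bit value

def archive(payload: bytes) -> bytes:
    # Prefix every archive with this device's nonce.
    return DEVICE_NONCE + payload

def unarchive(blob: bytes) -> bytes:
    # Refuse to unarchive anything not produced on this device.
    if blob[:8] != DEVICE_NONCE:
        raise ValueError("archive was not created on this device")
    return blob[8:]
```

Anything arriving from another device (or an attacker) would carry the wrong nonce and be rejected before any deserialization logic runs.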
Using NSKeyedUnarchiver (and other language-native serializers) on the wire is also a great way to make sure your product will be extremely hard to port to other platforms. Even the plist version is not easy to handle outside the Apple world.
I've consulted for many companies that needed to expand beyond Apple's horizons, only to discover that their whole infrastructure was built on Apple's binary formats, which aren't well documented and can't be read easily (or reliably) on other platforms.
The pragmatic move in these cases is to use formats that aren't platform dependent - things like JSON, Cap'n Proto, msgpack, and others.
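The combination both comments argue for - a portable format plus an explicit schema - can be as simple as JSON with strict shape validation. A sketch (field names are illustrative, not from any real protocol):

```python
import json

# A fixed, explicit schema: exactly these fields, exactly these types.
def encode_message(sender: str, body: str) -> bytes:
    return json.dumps({"sender": sender, "body": body}).encode("utf-8")

def decode_message(blob: bytes) -> tuple:
    obj = json.loads(blob)
    # Validate the shape explicitly; reject anything unexpected instead
    # of instantiating whatever types the sender asked for.
    if (not isinstance(obj, dict)
            or set(obj) != {"sender", "body"}
            or not all(isinstance(v, str) for v in obj.values())):
        raise ValueError("message does not match schema")
    return obj["sender"], obj["body"]
```

The receiver decides what it will accept; the sender has no say in which types get constructed. That inversion is exactly what general-purpose unarchivers give up.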
I'm sure they'd like the ability to port a service like iMessage to Android, just to give themselves options in case a big competitor starts eating their user base with cross-platform support.
What's the business model of Zerodium? Do they sell the exploits on the black market somehow? How can they pay $2M unless they can extract an even bigger bounty from Apple (at which point the security researcher could just go to Apple herself)?
This is the first I’ve heard of them but they seem to be based in DC and have government clients. I’m guessing they are funded by the US government to acquire vulnerabilities to use against targets as well as defend from being sold to other countries first.
That sounds grossly unethical. While the US somehow defends the vulnerabilities from being sold to other countries first (which still might happen), it probably also acquires them to attack or spy on its own citizens: there have been precedents for this, and no indication of any policy change to stop it.
They do entertaining things, as long as you stay comfortably more than ten feet from the end where freedom is expelled at supersonic speeds: from buying titanium sheets from communists to build the fastest airplanes to spy on communists, to letting their own psychos spike unsuspecting grad school elites to experimentally manufacture terrorists.
> it probably also acquires them to attack/spy on its own citizens
Or possibly, "to spy on foreign nationals and to share with FVEY partners - who'll do the spying on US citizens, because doing it themselves would be illegal"... :sigh:
Commercial vulnerability brokers don't pay out up front; they pay in monthly tranches, and stop paying when the vulnerability stops working. It's not an apples-to-apples comparison with bounty payouts.
Man, security research is seriously elite. It's like a mixture of arcane magic, invention, and treasure hunting. I wonder how much time it takes to find vulns like this.
Months of dedicated research, utilising a wealth of knowledge that comes from spending a not-insignificant portion of your career researching such bugs.
This is why I consider address space randomization to be a cosmetic fix that doesn't solve the real problem - a memory safety violation - but gives an excuse for not fixing the real problem.
It just increases the cost of the attack development beyond script-kiddie level. Most of today's attackers seem to be competent and well-funded, either by nation-states, organized crime, or the advertising industry.
No, it's not. Crash barriers are a defense against random events. Attackers are non-random and purposeful. Several defensive layers, all with holes, only protect against inept attackers.
There can be only one response to these people. Stop using computers. Computers are hugely complex, and no amount of software level shenanigans can amount to anything other than a crash barrier.
ASLR isn’t a complete exploit mitigation technique; it just hinders attackers by usually requiring them to pair their primary bug with another exploit. On many platforms the limited number of random bits makes it clear that it’s not a perfect protection against all attacks.
No, most of today’s attackers are not competent or well-funded, but you don’t hear about them because they’re not news.
ASLR changes the game in that instead of just needing a control-flow vulnerability, you also need an info leak and a way to incorporate the leaked address into the exploit.
> Most of today's attackers seem to be competent and well-funded
By volume no. There are enormous numbers of low-level and mid-level hackers out there working for any number of interests. You have to be a pretty elite level of organised crime before your attacks reach the level achieved by nation states.
Security design is to some extent a cost tradeoff - the more you spend on security the more the attackers are going to have to spend on attacking you. If you're dealing with insensitive data, and are only going to be of interest to script kiddies, then you can spend only a little. If you're a widely deployed device, and people use it to store sensitive data that might be of interest to nation states, then you have to spend enormously more to mitigate against it.
Layered security or "defence in depth" is how anyone in the field will tell you to design secure devices these days. ASLR doesn't do anything on its own; it just amplifies the cost of any other attack, as do other mitigations like W^X. No layer is ever 100% secure in the real world. But 5 layers that are 99.9% secure make for a difficult-to-attack system.
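The arithmetic behind that last sentence, with the important caveat that it assumes the layers fail independently (real layers rarely do, which is the grandparent's objection):

```python
# If each of n independent layers stops an attack with probability p,
# the chance of slipping past all of them is (1 - p) ** n.
def breach_probability(p_per_layer: float, layers: int) -> float:
    return (1 - p_per_layer) ** layers

# Five 99.9%-effective layers: 0.001 ** 5 = 1e-15, one in a quadrillion.
# This is a best case: correlated failures (one bug class defeating
# several layers at once) make the real number much worse.
odds = breach_probability(0.999, 5)
```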
I agree. This is why Rust and Go don't need it. Address space randomization is an attempt by the OS to work around language problems (specifically C's). Nothing against C, I love it, but people really need to move away from it where possible.
Rust and Go both can have C-based libs linked into their address space, thus exposing them to the same problems ASLR diminishes (I accidentally wrote "solves" here. ASLR solves nothing, it just makes exploits harder. It's a holey raincoat...). Also, both language runtimes and compilers might have flaws. "Don't need" is dangerous hubris imho, of the same kind that was the downfall of Java security. Lots of Java exploits were actually just broken linked versions of libjpeg and others.
No single measure will ever be sufficient, one should never become complacent like that...
Plenty of languages don't need it, even some older than C.
This all comes down to C designers ignoring the security best practices of other systems programming languages, and UNIX clones having mainstream adoption.
The Morris worm (1988) wouldn't have been possible in regular Modula-2 (1978) code, nor in Ada (1983), Mesa (1976), ESPOL/NEWP (1961), PL/I (1964), PL/S (1974), or BLISS (1970), to name just a few.
Yet C17 still doesn't provide language features to prevent another Morris worm, and the security Annex was made optional instead.
For example, ESPOL/NEWP was the first systems programming language with explicit unsafe blocks, 8 years before C came into the world.
So hardware much weaker than the PDP-11 was nonetheless capable of supporting such security features.
Burroughs B5000 OS is still being developed, nowadays known as Unisys ClearPath MCP, and its top selling feature is being a mainframe system for customers whose security is the top priority.
Always promptly install security updates. Other than that, there is no protection from this specific class of vulnerabilities.
There are other good security practices a slightly savvier user can follow, like running a soft firewall and installing fewer apps, but they wouldn't mitigate this vulnerability.
This sounds nice in theory, but the real world says there is a greater-than-zero chance that a quickly released patch has a negative effect on your device, up to bricking it. As a long-time Apple user, I never install version X.0. I wait until at least X.0.1.
There is no such thing as a pure "security patch" for iOS. If they release an update, they are releasing features as well. How quickly we forget the debacle that was iOS 13. This was not a smooth deployment[0]:
13.0.0 9/19/19
13.1 9/24/19
13.1.1 9/27/19
13.1.2 9/30/19
13.1.3 10/15/19
I wait a couple of days to make sure updates are not bricking devices. Waiting just a week saw 3 updates, and a 4th within 10 days. Some of those updates seriously crippled the device's capabilities.
Project Zero usually releases the details 90 days after reporting the bug to the vendor, hopefully giving Apple ample time to patch it. So assuming they have, simply being on the latest version means you are safe.
> This is the first blog post in a three-part series that will detail how a vulnerability in iMessage can be exploited remotely without any user interaction on iOS 12.4 (fixed in iOS 12.4.1 in August 2019).
> The vulnerability was first mitigated in iOS 12.4.1, released on August 26, by making the vulnerable code unreachable over iMessage, then fully fixed in iOS 13.2, released on October 28 2019.
I guess it depends whether the question was the more general "how do I protect myself from zero days" or "how do I protect myself from this specific zero day". For the latter: yes, someone could have been exploiting it in the past, but right now it's been patched for some months, and if you're up to date, you should be safe.