A Look at iMessage in iOS 14 (googleprojectzero.blogspot.com)
528 points by kccqzy on Jan 29, 2021 | 79 comments



This is essentially a negative research result: an investigation into whether there was anything more to explore here that could produce the really juicy outcome, a true compromise, which instead shows that a set of highly qualified researchers couldn't find one, at least for now. I applaud more of this. Although it is not an infrequent practice among the top security engineers, I feel it sets a precedent for research broadly.

"Nothing to be found here as far as we know" is still a result that deserves publishing. Let's make it more of the norm.


Having been in this silly industry of hacking for 20 years, I really wish publishing negative results became more normalized. There are orders of magnitude more unpublished stories of not finding vulns than there are of finding vulns. It just goes to show how much of the industry really is flashy/stunt hacking.

p.s. Samuel, who published the research OP posted, is one of the best hackers I've ever met; he helped me code the interview challenge test that we still use (it's that good!)


Security researchers publish informative blog posts from time to time that don't deal with a specific vulnerability; this is one of them. Of course, being Project Zero, these deep dives usually turn up a bug or two (for example, this document on Apple's PAC implementation, which is probably one of the best resources describing it, stumbles upon a bug fixed by Apple while it was being written: https://googleprojectzero.blogspot.com/2019/02/examining-poi...), but in this case it does seem like they just happened to not find anything. Whether that is because they couldn't probe very deeply due to the complete rewrite, or because the new architecture and use of a memory-safe language raised the bar (or a combination of both), is not clear, but the point I wanted to make is that these blog posts are useful regardless of whether there is an exploit PoC attached. People may not want to write them, but they certainly stand on their own as good reference material.



> Nothing to be found here as far as we know

I disagree; it's a fantastic, approachable dissection of what Apple did to mitigate some nasty security holes. Worth the price of entry by far.


I feel like you are claiming researchers do not do it.


I read it as GP arguing we don't do enough of it.


Yes, this was my intent. It's not that security researchers don't publish this kind of thing... it's just that there are a million rabbit holes to explore, and probably only like 0.1% of those dead ends get written about.


My English may be failing me here, but doesn't

> I feel it sets a precedent for research broadly

imply it's some sort of first time?


Generally, precedent means first "well-known" time, or first time by a respected person/institution/company. I don't know enough about this field to know if this is the first time Google has published a negative result.


This is a fair point to consider. Perhaps my claim was a bit overbroad.


This is actually pretty old news (10+ years old), but it still amazes me that iOS and macOS sandboxing rules are written in a dialect of Scheme.


Me too! I wonder why that is?

I suspect that they wanted to use conditional expressions or something without having to use a "real" programming language with loops or I/O, which risks freezing the interpreter and leaking data.


It's actually a DSL written in Scheme that is then used to generate the sandboxing rules, so loops aren't possible. I had the chance to speak with the guys working on it about a year ago. macOS security is pretty neat stuff.


Interesting! Is it actually a dialect of Scheme, or just a custom language with Lisp syntax? S-expressions are a fairly nice way to create small custom rules languages or configuration languages, as they are very easy to parse.


It's TinyScheme, apparently.
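For a flavor, the on-disk profiles (on macOS they have historically lived under /System/Library/Sandbox/Profiles) look roughly like this; a simplified sketch, since the exact operation names vary by OS release:

    ;; Deny everything by default, then allow specific operations.
    (version 1)
    (deny default)

    ;; Allow reading system libraries and one specific file.
    (allow file-read* (subpath "/usr/lib")
                      (literal "/etc/hosts"))

    ;; Allow looking up a single mach service by name.
    (allow mach-lookup (global-name "com.apple.system.logger"))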


Thanks!


I vaguely remember that some settings / control-panel-thingy was written in Scheme on Windows. That was at least 20 years ago, though, so it’s probably gone by now.


There are components in Windows that still exist from the NT days, so I wouldn't be surprised if some of these were still there.


> The bug was fixed in iOS 14, for example due to the rewrite of large parts of the iMessage processing pipeline in Swift

One of the key points.


They’re not out of the woods yet:

> However, it is worth noting that while the high level control flow logic is written in Swift, some of the parsing steps still involve the existing ObjectiveC or C implementations. For example, XML is being parsed by libxml, and the NSKeyedArchiver payloads by the ObjectiveC implementation of NSKeyedUnarchiver.
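To make the remaining attack surface concrete, this is the sort of C entry point that stays in the loop; an illustrative sketch, not BlastDoor's actual code:

    #include <libxml/parser.h>

    /* Parse an untrusted buffer via libxml2's C API. XML_PARSE_NONET
       disables network fetches during the parse, but any memory-safety
       bug in the parser itself is still a C bug, Swift wrapper or not. */
    xmlDocPtr parse_untrusted(const char *buf, int len) {
        return xmlReadMemory(buf, len, "message.xml", NULL, XML_PARSE_NONET);
    }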


Agreed, it is still a first step.

rantmode=on

While I complain about how constrained the NDK happens to be, I imagine that is exactly Google's goal: to force everyone to write as little C and C++ code as possible.

Microsoft seems to be the worst case in this regard: despite the whole security talk, WinDev is pretty much doubling down on C and C++.

I find it ironic that Azure Sphere sells security, yet only supports C in its SDK.

rantmode=off

My experience with security advocacy is that it only works when seeing is believing.

So the initial Swift rewrite might also be there to prove a point to the team, with libxml and NSKeyedArchiver replacements coming further down the roadmap.


> Microsoft doubling down on C/C++

I’m getting the opposite feeling. Just last week they released a way to expose all Windows APIs past and present to C# and Rust, with theoretically any language supported on that same framework.

I can’t speak to the Azure API you mentioned but clearly Microsoft sees the value in non-C languages.


It is not an API; it is an IoT device sold as the ultimate achievement in IoT security.

Microsoft does see the value; the WinDev unit, not so much.

"Porting the Clipboard sample to C++/WinRT from C#—a case study"

https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-a...

> Unmatched Native Performance
>
> WinUI is powered by a highly optimized C++ core that delivers blistering performance, long battery life, and responsive interactivity that professional developers demand. Its lower system utilization allows it to run on a wider range of hardware, ensuring your sophisticated workloads run with ease.

https://microsoft.github.io/microsoft-ui-xaml/

It is like the left hand undoes what the right one is trying to do regarding security improvements.


They should add Rust as a .NET language to be able to optimize between languages. Somebody did an experiment of translating LLVM output to CLR IR, and it looked like it had potential.


> Microsoft seems to be the worst case in this regard, despite the whole security talk, WinDev is pretty much doubling down on C and C++.

From what I understand, MS is gradually moving away from C/C++: https://thenewstack.io/microsoft-rust-is-the-industrys-best-...


Very, very gradually.

Check the praise for C and C++ over here:

https://microsoft.github.io/microsoft-ui-xaml/

https://techcommunity.microsoft.com/t5/internet-of-things/de...

Or how XNA was killed in the name of DirectXTK.

Meanwhile, the MonoGame, Unity, WaveEngine and, for IoT, Meadow (what Sphere should have been) efforts are all driven by third parties.

Joe Duffy mentions in his RustConf talk that even with Midori proving it was capable of handling production loads (it even powered parts of Bing for a while), it was a no-go for the Windows team.

Apparently the culture change in WindowsDev will take a couple of generations.


XNA was written in C++, just like DirectXTK could be exposed to C#.


> Historically, ASLR on Apple’s platforms had one architectural weakness: the shared cache region, containing most of the system libraries in a single prelinked blob, was only randomized per boot, and so would stay at the same address across all processes.

Historically ASLR on Apple’s platforms has had many weaknesses, mostly stemming from poor randomization across many different subsystems :P

What I’m also really curious to know is what the performance implications will be of isa PAC…
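For reference, clang exposes PAC through intrinsics, and isa signing amounts to signing the class pointer with a discriminator derived from the object's address. A rough sketch of the idea (arm64e-only; the key and discriminator choices here are illustrative rather than guaranteed to match the ObjC runtime's):

    #include <ptrauth.h>

    /* Sign a class pointer so a forged isa value copied from another
       object fails authentication when the runtime authenticates it. */
    void *sign_isa(void *cls, const void *obj) {
        return ptrauth_sign_unauthenticated(
            cls,
            ptrauth_key_process_independent_data,
            ptrauth_blend_discriminator(obj, 0x6AE1));
    }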


Even if it turns out to be a bit slower, it is the pursuit of speed at the expense of bounds checking that has brought us here, although other ill-fated OSes proved that it didn't have to be that way.


Exploits like this are much more complicated than a lack of bounds checking; after all, most web exploits start by defeating the supposedly memory-safe JS interpreter.


Because they trigger bugs in the underlying engine code written in C and C++.


If your JIT interpreter is written in a safe language it can still have memory bugs because it’s generating and executing assembly directly.


It might, just like the microcode, firmware and hardware design can have such bugs.

Safe languages don't magically make all code impossible to exploit, they just reduce the attack surface to logical errors, instead of having to deal with UB and memory corruption as well.

The same way that helmets and belts don't save people from dying in all kinds of accidents, yet they surely help reduce the mortality numbers.

I enjoy languages with helmets and belts.


That sounds like a challenge.


I am posting this as an independent comment.

There isn't a "popular" CPU architecture that is 100% PIC for all of the allowed address ranges. There are "optimizations" the compiler can choose for limited address "reaches". Please consider independently compiled libraries that are aggregated into something larger, specifically in languages that permit subclassing and inheritance. How can they be moved within an address space?

If there is a hierarchy of address availability, this means that ASLR cannot be implemented in a pure way: it must involve runtime fix-ups if the base address changes. If the length of an instruction must change because of the addressing mode, what happens? How can you do ASLR? Do you rewrite all instructions (is there space? was space allocated?) or always use the most expensive addressing mode?


Compilers know how to generate code that is position independent. Apple requires such code on the iPhone. Therefore, all code that runs is slidable using ASLR. Whether you want to call that "the most expensive address mode" or not (it's really not all that expensive) is up to you. (Also, note that arm64 has fixed-length instructions regardless.)


The major architectures and ABIs are all capable of producing and loading fully position-independent code. The performance impact varies; for x86_64 and aarch64, instruction-pointer-relative addressing reduces the number of relocations needed by quite a lot.

Languages that support subclassing and inheritance don’t represent any special hazard to relocation; ultimately these are built out of data and function symbols that need to be supported even if you’re only compiling C.
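As a concrete example of why this is cheap on arm64, a global access in position-independent code compiles down to a pair of PC-relative instructions and needs no load-time fixup at all. A small sketch (the assembly in the comment is paraphrased and approximate):

    /* pic.c: build with `clang -O2 -fPIC -c pic.c` */
    static int counter;

    int bump(void) {
        /* On arm64 this becomes roughly:
               adrp x8, counter@PAGE          ; PC-relative page address
               ldr  w9, [x8, counter@PAGEOFF]
               add  w9, w9, #1
               str  w9, [x8, counter@PAGEOFF]
           so the code can slide anywhere without rewriting instructions. */
        return ++counter;
    }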


Slap, slap, slap. I do not think you did a complete or accurate analysis.

Randomization is NOT free. Randomization either requires that the values are pre-computed before install (which means delivering and pre-computing N different versions) or it is computed on-device.

If the randomization is computed on-device, how do you validate that the binary or a library has not been "substituted" - persistent malware, APT?

The "compute on device" was a feature of very old macOS versions - it was annoying and took quite a lot of resources.

"TOTAL ASLR" depends on a CPU arch if it fully endorses over all addresses position independent code and data (Q: homework for ARM, x86_64...). If the ABI allows violations of this you cannot glide / slide all code and data addresses without significant runtime costs. This will likely result in a compromise.


I believe my analysis is both accurate and complete; actually, I would dispute many of the things you mention. "Baking in" randomization into a binary before installing it is rare; on Apple's platforms ASLR is done at runtime. I am well aware that ASLR does not come for free; it requires position-independent code to function and support from the kernel to do randomization (for image base addresses and anonymous mmaps) and a dynamic linker aware of how to apply relocations. On iOS PIE code has been required for many years, and the various OS subsystems are not only aware of ASLR but ensure validity of code signatures regardless of load address. I suspect the expensive process that you are referring to is shared cache prebinding, which was never a thing on iOS AFAIK and is no longer used on macOS either.

To be clear, I am not complaining about a lack of ASLR where it would be prohibitive, such as mapping the shared cache at a different address for every process (which, unless done carefully, would kill the benefits of it being in shared memory as the pages would all be dirtied). I am talking more about various instances where Apple has generally used very poor slides for reasons that aren't all that great, leading to the randomization being easy to break.


> mapping the shared cache at a different address for every process (which, unless done carefully, would kill the benefits of it being in shared memory as the pages would all be dirtied)

Assume foo.dylib and bar.dylib are system libraries, both live in the shared cache, and foo links to bar. Both are loaded and mapped to a running user-space process.

If foo links to bar, then there must be a symbol table somewhere in physical memory with an entry that points to bar’s TEXT.

That symbol table is part of the shared cache, right? Doesn’t it already follow that bar’s TEXT needs to be at the same virtual address in every process?


> That symbol table is part of the shared cache, right? Doesn’t it already follow that bar’s TEXT needs to be at the same virtual address in every process?

Yes, and this is how the shared cache works. If you wish to map the shared cache elsewhere there would have to be another copy of it in memory, which is why this would be a massive pessimization if done without designing for it. Perhaps you might have some idea as to what would need to change to make this not be as bad.
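This is easy to observe: print the address of something in the shared cache next to something in the main binary; within the same boot, the former stays put across runs while the latter gets a fresh slide on each launch. A quick sketch:

    #include <stdio.h>

    int main(void) {
        /* printf's code lives in the dyld shared cache: same address in
           every process until reboot. main lives in this PIE binary:
           re-randomized on every launch. */
        printf("printf: %p  main: %p\n", (void *)printf, (void *)main);
        return 0;
    }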


Got it, thanks!


> "Baking in" randomization into a binary before installing it is rare

OpenBSD relinks the kernel with new randomization on every boot ("KARL").

Fun stuff, but probably a nightmare to make it work with signature checking :D


Sorry, no. This is not about the main executable; this is a misdirection. The comments you make about the "main binary" are 100% correct. But they do not address my comment at all.

My comments were entirely about the shared cache region and how it can be moved.

I again ask you to do the homework: please calculate how it could be done better given the address spaces involved. Going from 32-bit to 64-bit, address spaces have gotten better, but this does need to be taken into consideration given the size of the object involved and the API / CPU instructions available (please consider all of the addressing modes available to the ABI on all of the currently supported platforms).

[added]

It is likely the individual objects that compose the shared cache region are compiled independently. There are lots of individual objects! Resolving the dependencies of the composite shared object is likely expensive.

Historically, I observe that the contextual data available to a static or dynamic linker has been very constrained, which makes relinking / relocating objects a challenge.


Please assume good faith–I am familiar with how the shared cache works and my comments apply equally to either the main executable or the shared cache. My point is that the shared cache is not very well randomized at all–for its size, the region it has to slide around in is fairly small. From the original blog post detailing the ASLR break (https://googleprojectzero.blogspot.com/2020/01/remote-iphone...) the number of probes required to find the mapping is quite small, purely because it can only be located within a single 4 GB region for whatever reason. On a system with terabytes of virtual address space, this is a strange choice.

Oh, and because you mentioned it: yes, the shared cache has many objects in it. The process of making it is fairly involved, especially since many optimizations go into it (string deduplication, perfect hashing for Objective-C runtime metadata, shortening intra-cache procedure calls, …) But this is all done once when it is built by Apple's B&I, so it's not a problem on-device.


Sorry, I got animated and committed the faux pas of familiarity. I forgot that this medium is not the same as a lively discussion between people who "know" each other - I see lots of your posts and the thought and depth of your knowledge.

There has to be some reason.


I really have no issue with assuming familiarity, the part I was concerned about was you calling my comments a "misdirection" when it was not my intention to be misleading.

And I am sure there is a reason; I just have not seen anything that indicates one good enough to be worth drastically reducing the quality of the ASLR they provide.


Stop setting homework questions in your posts, please. If you have something relevant to say, why not say it yourself instead of roleplaying a teacher delivering work to your students.


You are right, it is rude, sorry. Will do better.


The original article from December:

> In July and August 2020, government operatives used NSO Group’s Pegasus spyware to hack 36 personal phones belonging to journalists, producers, anchors, and executives at Al Jazeera. The personal phone of a journalist at London-based Al Araby TV was also hacked.

> The phones were compromised using an exploit chain that we call KISMET, which appears to involve an invisible zero-click exploit in iMessage. In July 2020, KISMET was a zero-day against at least iOS 13.5.1 and could hack Apple’s then-latest iPhone 11.


Does anyone know if there's a cross-platform library that offers services similar to BlastDoor and a sandbox for apps to run sensitive code in?


Did I read this right and see that the exponential backoff by launchd would basically cause a DoS for the recipient?


The attacker in this situation already has the ability to deny service to the recipient, except that they have also had the ability to use the crashes as an ASLR oracle. The backoff attempts to prevent that from being possible.


That caught my eye too. I guess you would need a way of reliably being able to crash the service, but that seems like a relatively low bar. Presumably the impact would also depend on how much work the service does.


Better than the alternative.


Well, the attacker could probably DoS the victim anyway by rapidly sending crashing messages. The update just means the attacker only has to send a message every 20 minutes.
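The throttling itself is simple; something along these lines (an illustrative sketch of launchd-style exponential backoff, not Apple's actual code or constants):

    /* Restart delay doubles per consecutive crash, capped at 20 minutes,
       so using crashes as an ASLR oracle slows to a crawl. */
    static unsigned restart_delay_seconds(unsigned consecutive_crashes) {
        const unsigned cap = 20u * 60u;   /* 20-minute ceiling, in seconds */
        unsigned delay = 1;               /* 1s after the first crash */
        while (consecutive_crashes-- && delay < cap)
            delay *= 2;
        return delay < cap ? delay : cap;
    }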


Finally, we are seeing more applications pushing chunks of themselves into sandboxed sub-processes.

Now if only there was a nice open-source, cross-platform way to do this..... hint..... hint......

:-)


Open source aside, it would be great if any developer could do this on iOS. Unfortunately -- in contrast to macOS -- Apple keeps the XPC API on iOS private, so that it is impossible for "normal" app developers to properly sandbox the more critical and exposed parts of their application.

Only Apple can provide the described security for their messenger, but other messengers which face similar challenges (rendering untrusted content) cannot protect themselves.


Apple doesn't allow subprocesses for 3rd-party apps on iOS generally as far as I know.


Perhaps I’m too dense to get the hint?


What I mean is, it would be really nice if someone would build such a toolkit - I know the code exists inside things like Chrome and Firefox, but I'm not aware of any standalone libraries that make it easy to

(a) create a sandboxed child process with some code inside it

(b) talk to that sandboxed child process (e.g. Firefox and Chrome both have IPC libraries)

(c) handle the resulting crashes nicely, restarting the child process when necessary
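On macOS at least, a bare-bones version of (a), (b) and (c) can be cobbled together from fork(), a pipe, and the deprecated-but-still-present sandbox_init(). A minimal sketch under those assumptions, nothing like the hardened IPC layers Chrome and Firefox ship:

    #include <sandbox.h>   /* sandbox_init, kSBXProfilePureComputation */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                    /* (a) the sandboxed worker */
            close(fds[0]);
            char *err = NULL;
            if (sandbox_init(kSBXProfilePureComputation,
                             SANDBOX_NAMED, &err) != 0)
                _exit(1);
            /* ...parse untrusted input here, then report back... */
            const char *msg = "ok";
            write(fds[1], msg, strlen(msg));  /* (b) talk over the pipe */
            _exit(0);
        }

        close(fds[1]);                     /* the broker side */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("worker: %s\n", buf); }

        int status;                        /* (c) notice crashes, restart */
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) fprintf(stderr, "worker crashed\n");
        return 0;
    }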


Sadly not possible on iOS. Even on macOS, creating a good sandbox profile is non-trivial (and technically unsupported by Apple).


Guile has its own sandbox. You can probably also use LuaJIT.


[flagged]


I'm no admin, but I wanted to ask you to refrain from commenting like this; it's probably against the site's guidelines and brings nothing to the discussion.

Per the guidelines: "Please don't comment about the voting on comments. It never does any good, and it makes boring reading."


HN upvotes what HN wants to see. Apple is one of the biggest companies in the world with products that a large majority of HN commenters use, and this blog post is in-depth technical praise / validation written by a representative from one of their biggest competitors. It feels like classic top-post material to me.


> This thing has only 190 upvotes

190 upvotes isn't "only". Things can be at the top with 20 upvotes if they come in fast enough, and all younger submissions that could outcompete it are at noticeably lower numbers.

> ridiculous 12 comments

Irrelevant for ranking.


Many comments tend to set off the flamewar detector and cause post downranking.


Yes, but that's not happening with far fewer comments than points.


It’s a conspiracy of course. Now go buy your Apple stock, sit back and enjoy the profit /s


The only thing faster than an Apple post going to the top is a comment questioning this weird pattern: -1 in one minute on a 12-comment post.

Not bad


Please don't post like this. You're breaking the site guidelines badly.

https://news.ycombinator.com/newsguidelines.html


Woah


At some point we should expect the world to experience a zero-day worm that works its way through the entire iMessage ecosystem, because a majority of the users are all on the latest version...

I wonder when that will happen


I've read that three times and I still can't parse it. Why would being on the latest version make a zero day worm more likely?


I read it as a way of saying that the fragmentation of iOS versions is way lower than on Android. So if a "latest iOS"-only zero-day were found, it would affect a lot of users, as they tend to update quickly.


Complicated-to-exploit bugs often rely on specific code and data being in particular places, or even on the number of cycles a code block takes, so even if the same bug has existed for years you might have to write a separate exploit for each version.

And you have to add a version check if a failed attempt will crash the software, or if you only get one attempt. Sometimes the hardest part is finding out which version of the exploit to use, and that may even have to be done manually.


Sorry for the confusion. It seems that in a software monoculture, where everyone has the same iOS version, a zero-day (which in theory will almost certainly always exist in a complex system) would travel more effectively than in an ecosystem with lots of OS versions.

E.g. a zero-day exploit that spreads through a zero-click iMessage message could travel the world in a few seconds.


The days of massive internet worms are behind us. Or at least mostly.

Security is vastly better than it used to be, and good zero-day exploits are guarded jealously. Making a big worm is a quick way to expose a zero-day.

The people who have access to these zero-day exploits keep them tight for targeted attacks, where they are less likely to be discovered.



