Hacker News
Android Bug Superior to Master Key (saurik.com)
114 points by piratebroadcast on July 26, 2013 | hide | past | favorite | 21 comments



Perhaps an interesting historical note from Gosling on why Java has such limited primitives:

"almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is. Things like that made C complex. The language part of Java is, I think, pretty simple"

I do agree with the sentiment. All that type coercion in C is insane, but removing unsigned doesn't seem like the fix. Perhaps if languages stopped doing automatic conversions, that would solve the problem.


I think the real problem with the logic is that it equates "reasoning about unsigned logic is hard" with "unsigned logic isn't required". Sadly it is. There are solutions that can only be expressed cleanly in unsigned logic, and others that can only be done signed.

Now, clearly there are more problems out there that need a sign bit and comparison operators that honor it than there are problems that don't. So Java made the "right" choice about which one to eject.

Still, hard code remains hard code, and there is no free lunch. Hard code is going to have bugs, and designing your language to make hard code harder (in the interest of making easy code easy) is probably a bad tradeoff in hindsight.


> "almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is."

This seems like a large exaggeration, unless I am one of those people. Correct me if I am wrong, but the arithmetic itself is simple (i.e. plain binary addition). What trips people up is when they start to mix unsigned and signed arithmetic. Even then, it's only that they don't know which operand gets converted to the other's type (or don't know the consequences of such a conversion; e.g. how right shift differs between signed and unsigned numbers). That is a very simple thing that takes all of one Google round trip to figure out.


Far too many C programmers think that unsigned ints are an appropriate type to use simply because the quantity in question can never be negative, e.g. the length of a list or the size of a memory allocation. What they fail to appreciate is that almost inevitably such values will get used in subtraction operations down the road, and then your chances of having a corner-case bug grow exponentially.

I doubt more than 1% of non-security-hardened extant C programs are completely safe when any unsigned quantity in the program may acquire a value with the high bit set.


> Far too many C programmers think that unsigned ints are an appropriate type to use simply because the quantity in question can never be negative, e.g. the length of a list or the size of a memory allocation.

size_t is the type for such things, and the standard defines it to be unsigned. I would like to point out that since the standard defined it that way, it is not the fault of the average programmer.

> What they fail to appreciate is that almost inevitably such values will get used in subtraction operations down the road, and then your chances of having a corner-case bug grow exponentially.

C has always taken a hands-off approach: if programmers want to shoot themselves in the foot, they very well can. Furthermore, not using an unsigned int is not going to help, for two reasons: you might very well need the entire range provided by the 32 bits, and somewhere down the line you will eventually need to cast to size_t anyway when you call malloc. The solution is simply to test for overflow/underflow where appropriate, or to choose a different language.


The C standard specifies a good proportion of bad ideas, though mostly out of backward compatibility.

Just consider size_t and its friend ptrdiff_t. Suppose you're trying to copy a slice out of an array. If the array indices are size_t, the length of your slice is ptrdiff_t, which you cannot then safely pass to malloc.

There are two solid solutions: (a) limit allocations to the positive range of the signed integer type (e.g. 2GB in 32-bit apps), or (b) use the next signed type up. C, being a systems-level language, wants neither the limitation of (a) nor the platform dependence of (b). The result is that it is very hard to get right for average programmers who are not used to writing OSes and standard libraries.

It's no theoretical issue. When I was working on the Delphi compiler, I had to audit the runtime library to eliminate integer overflow issues. It wasn't trivial to catch them all, and they can become security bugs with medium difficulty. It was eye-opening enough to make me wary.


A growing concern IMO is that there are many situations in C where you can hit undefined behavior with signed arithmetic, where the behavior would have been perfectly defined (if undesirable) with unsigned logic.


It's interesting to see that the old reader class had a method 'readShortLE' that read an unsigned LE short and returned it as a signed integer; the 'readShort' being called in the new buggy code reads a signed short and returns it as a signed integer. Unfortunate misuse of naming (in the first case, naturally). I certainly would not have guessed the difference were I just reading the code.

It's unfortunate that this breakage all started because the original (correct, it seems?) code was too slow due to method call overhead. If people hadn't felt the need to optimize, this bug probably wouldn't be there - but I guess the Dalvik VM can't inline in that scenario, or something.


The code they have right now is slower with respect to method calls than all previous implementations. The C code worked. The low-memory Java that replaced it worked. The Java version that used fewer method calls worked. It was a person who came in later, whose job was not optimization, who broke the reader while trying to fix a correctness bug.

One argument is that the code had become sufficiently complex due to the optimization that it was bound to confuse him. Another is that the other bug (the CRC one) was introduced during the optimization sweeps and could have been handled elsewhere. (So your point largely stands, but I wanted to correct the timeline. I agree that spending time fixing inlining to help the entire codebase would have stopped the chain of events that led here, and possibly not just by accident.)


By comparison, Windows 8 app packages (.appx) signatures involve signing hashes of the entire zip file minus the signature file itself. http://blogs.msdn.com/b/windowsappdev/archive/2012/12/04/des...

If anyone knows how iOS .ipa packages are signed, I'd love to hear it.


Interestingly, they are doing both; this is similar to how SignApk (which is used to sign Android update.zip files) works. Microsoft's first layer of "sign all of the files" looks very similar to jarsign: both embed files inside of the zip file that include the signature, and both have a signature that signs a manifest that in turn is a list of hashes.

The only differences at that level of the signature process (before the whole file signature) are that 1) jarsign has one further indirection (whose purpose is to allow some metadata to be hashed, as well as to support multiple people signing different parts of the file) and 2) the Microsoft manifest signs blocks of files, not entire files (for efficiency).

That article does not describe the second layer of signature verification (the one that signs the whole file) but does mention it a few times. FWIW, as I bring up in an aside of my article, Android's update.zip files made a mistake in how they attached the signature to the end of the zip file, and ended up with a signing bug back in 2008 ;P.


Speaking as someone with no Android or security background, this was an exceedingly well-written article. Big thumbs-up to saurik.


Open always wins


The folks at Android Police posted about it weeks ago: http://www.androidpolice.com/2013/07/11/second-all-access-ap...

It also was already patched. Good write-up, though.


saurik explains in his introduction why his article is different (and potentially worth reading even if you've read other articles):

> Later articles were written about this bug, each examining the same exploit technique, coming to the conclusion that this vulnerability offered less abilities than the Master Key exploit, due to very narrow and unlikely requirements that the original signed application package must satisfy.

> In this article, I document different exploit techniques for this bug that do not have these limits; in fact, the second exploit described here provides much more capability than the Master Key exploit, allowing arbitrarily complex packages to take on the signatures from arbitrary signed packages. Finally, I look at the history of this bug in Android, showing when it was introduced.


Correct. In fact, in my section "Serious Limitations", I directly reference the article from Android Police and quote from its at-the-time understanding of the vulnerability.


A commit to a repository does not equal a patch to the half billion vulnerable devices in the field.


Yes, captain obvious? What do you suggest?


Instead of minimizing or dismissing serious problems as 'already patched', hackers could help by warning consumers about the proliferation of Android malware. If sales were jeopardized, vendors would start caring about updates.


So your suggestion is to pass the buck onto "vendors"? Who are they and what can they do about it? Think outside the box!


You don't seem to have any understanding of how Android works. The buck does stop at vendors since they are the ones who ship OS updates to consumers.

Vendors don't prioritize updates because their customers don't understand enough to make this part of their purchasing decision, so there is little incentive for them to spend resources on it.

Minimizing this major problem doesn't help.

And instead of saying 'think outside the box', why not actually contribute some out-of-the-box thinking?



