It's a bad move for Apple. A good relationship with the community of security researchers is crucial: they're talented folks and their research results grab headlines. It takes just a tiny amount of corporate humility and public thanks to win their respect and, in return, earn goodwill. Treating the community badly will ensure the next guy won't even try to cooperate.
Over the last several years, Microsoft's MSRC has balanced this very well. Google has done well recently, too. Lots of clued-in people in both places.
I'd agree more if he hadn't submitted a working exploit to their store, and gotten it approved, without telling them about it.
Edit: Now, I don't disagree that just banning him from the program isn't a great idea, or that pulling the app and having someone from the security team send him an email would have been a better one. But it's hard to call this a bad move on Apple's part.
You prove DDoS vectors exist by DDoSing your own site, or one you have permission to work on. Same with SQLi vulnerabilities. If you want to report a vulnerability you've found to a company, include a working exploit in your report, but don't run it. If the company ignores you or tries to brush the vulnerability off, that's where it gets hairy and responsible disclosure comes into play.
We don't know what his level of communication was with Apple, but it doesn't appear that he notified them before testing this exploit. Had they refused to address the issue or otherwise brushed him off, this would be a reasonable escalation. The same story on r/netsec [1] links to a Forbes article [2], which claims he notified Apple three weeks ago. That's not a ton of time.
Ultimately, he very much violated their ToS and Apple is well within their rights to give him the boot. Whether that was a smart decision on their part remains to be seen.
Since the only way to install software on iOS devices is through the store, it's important to demonstrate the attack vector through which access can be gained.
It indicates both a security flaw in the platform itself and a security flaw in the App Store approval process; both should be highlighted.
Since he has control over pricing, couldn't he have submitted it with a free price tag and changed it to something insanely high once accepted? That way no sane person would buy it, and he'd still prove his point.
He _had_ to submit an app and get it in for this to work, of course; otherwise the whole exercise would have been moot. And it's a good wake-up call to everyone. Unfortunately, security awareness sometimes only gets traction when you make a splash.
Otherwise, while I think you've got a point (he could have used pricing to ensure no one ran his app), that isn't the issue here. The disclosure is. No one is contending he did something evil with his code, it's that Apple is mad about his code and disclosure. I don't think making it unlikely to be purchased would have helped.
For one, he could have submitted it and then have it "held for developer release" — at which point he told them about it. There's no reason he had to have it actually in the App Store here, even if he wanted to test the approval process.
^^This. And he [1] probably told them immediately afterwards, since otherwise they still wouldn't have known. As he says: he regularly submits bugs.
[1] Or perhaps someone beat him to it: he may not have seen the acceptance mail before someone already noticed the app? I'm not familiar with the exact process: do you need to give final approval or can the app be in the store for a while without you knowing it?
This hardly qualifies as an exploit. While it allows the app to do something it's not supposed to do, the ability to download and execute additional executable code doesn't actually violate security. The new code is still restricted to the app's sandbox and can't do anything that the original app couldn't potentially have done directly.
It easily qualifies as an exploit, given that Apple's app store model is based on the fact that each app is reviewed beforehand to ensure various properties, including the property that the app does not contain spyware, etc. If Apple approved a harmless app, and then said app downloaded code that snooped on the user's calls or asked for their credit card number, that's an exploit.
First - I think just general manners, as well as established protocol, would have the security researcher let Apple know ahead of time what he would be doing. A simple email sent prior to uploading this code would have been sufficient to cover his bases - I'm surprised he didn't do that.
Second - unless I'm mistaken - his proof of concept was more a violation of Apple's TOU; it didn't really attempt to copy credit card numbers or snoop on users' calls, so, in that sense, it wasn't an exploit.
Net-Net - nobody comes out of this looking good, but Apple makes it clear that they are prepared to back up the language of their Developer TOU with actions.
Part of the security of the app store is the review process. "It's possible to download and execute code" is neat, "it's possible to download and execute code and the app store reviewers don't catch that" is much more impressive.
Nothing in the App Store review process will allow them to catch a zero-day exploit. Coming up with a zero-day exploit in iOS is very impressive - but, by definition, once you have it, the App Store review process isn't going to catch it.
Yep. There's no deep check of what your code contains, only a fairly superficial check of what it actually does. You can include nearly anything in your app (perhaps lightly obfuscated) as long as it doesn't show its face during the review.
Depends on your level of paranoia and willingness to rely on the network. The server has the advantage of letting you turn it on and off at will, but a timer will work even if the user has no internet connection or your server gets confiscated by the FBI.
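To make that concrete, here's a minimal sketch of the kind of gate being described, in Swift with made-up names and a placeholder URL and date - not Miller's actual technique, just an illustration of why behavior-based review misses this sort of thing. The questionable path stays dormant until a remote flag flips or a hard-coded date passes:

    import Foundation

    // Illustrative only: a dormant code path gated behind a remote flag,
    // with a date-based fallback that needs no network at all.
    struct FeatureGate {
        // Hypothetical endpoint; returns the literal text "on" once flipped.
        let flagURL = URL(string: "https://example.com/flag.txt")!
        // Fallback trigger: activate automatically after this (placeholder) date.
        let activationDate = ISO8601DateFormatter().date(from: "2030-01-01T00:00:00Z")!

        func isEnabled(completion: @escaping (Bool) -> Void) {
            if Date() >= activationDate {
                completion(true)   // timer path: works offline, survives a seized server
                return
            }
            URLSession.shared.dataTask(with: flagURL) { data, _, _ in
                let flag = data.flatMap { String(data: $0, encoding: .utf8) }
                completion(flag?.trimmingCharacters(in: .whitespacesAndNewlines) == "on")
            }.resume()             // server path: can be switched on and off at will
        }
    }

Either trigger looks inert while a reviewer is poking at the app, which is exactly the "check behavior, not content" problem.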
I think it qualifies as a great exploit. You totally go around the Private API checks that Apple does. And there is a lot you can do with those APIs that is potentially evil. Even in the sandbox.
Code signing is a control intended to restrict which software can run to only those apps that have been granted the right to run.
Your second question is a good one, but given its context, it is unrelated. If Apple signs a Python interpreter, they do so at their peril, for obvious reasons.
Yes, and it's still only running an app which was granted the right to run, it's just that this app now has some extra code in it. Since Apple doesn't really inspect the contents of the apps it signs anyway, this grants no extra capabilities.
Unfortunately, if he'd submitted an exploit and not been banned, we'd see more people criticizing Apple for favoritism in enforcing the rules.
They deserve that criticism, and it's true, but I can see where they would prioritize actually enforcing those rules, especially in a big, publicly visible incident.
Obviously the best choice from HN's moral point of view is to be more open, more even-handed and less draconian about rules in the first place. But failing that, I can see why they try for "even-handed" over "less draconian," given their own priorities.
When you submit a security-related bug report to Apple (granted, my experience dates from '99-2005), you get:
A/ ignored (mail auto-reply: "we might fix it, don't tell anyone or we'll go after you")
B/ the bug doesn't get fixed for 2 or 3 years
C/ the bug gets fixed, but you get no credit
I don't know why this is being downvoted. Apple is notoriously horrible at fixing vulnerabilities reported by the general public, unless they're downright critical.
In fairness, many of the bugs which enable jailbreaking also represent serious security problems. For instance, the various iterations of web-based exploits fundamentally do represent remote code execution, a serious bug in any browser environment. On any other platform, we'd classify them exclusively as security vulnerabilities; however, on iOS, the user has to take advantage of security vulnerabilities to break into their own system.
Not necessarily. Remote exploits, definitely, but entirely local jailbreaks that require booting the phone into a specialized firmware-loading mode don't actually impact the user's security, just Apple's anti-tampering guards against the user.
Wrong. The first jailbreak was done because the iPhone trusted the restore mode commands coming from iTunes. The protocol was totally reworked so that the iPhone would only run some canned scripts. This did nothing to improve device security (it pretty much only enabled the jailbreak), but Apple fixed it fast.
FWIW, I've submitted a couple of (relatively minor) ones in the last couple of years. They were each fixed in the next update and I was credited in the security release notes.
I don't know about the timeframes you quoted, but the Apple security advisories do credit the researchers. See some of the entries here:
http://support.apple.com/kb/HT5002
Submitting a security bug report to the Chromium project was a delight compared to submitting one to Apple. It was obvious that the engineers working on Chromium cared about the problem and were competent. On the other hand, I might as well have been reporting the Apple bug to a brick wall or a black hole.
Odd use of the word "competent". Are you implying Apple personnel aren't competent because they didn't send a message saying "thank you" with gold stars all over it?
Hold on here. Is Apple expected to know Charlie Miller is a "security guru", and even if they did, why should he be treated any differently? Security researchers should be held to the same standard as regular developers when reporting bugs/flaws.
RTM was convicted of a crime because of his curiosity, and here we have a security researcher who knowingly put users at risk. If you ask me, Mr. Miller got off lightly.
He did not put users at risk. This vulnerability allows apps to download and execute new code, but that new code is still subject to the app's sandbox. This vulnerability is interesting from a research standpoint, but has zero actual consequences to the security of iOS.
Not sure I agree with this. Less scrupulous developers might use this to download code that does things that are bad for users, even from within the sandbox. For example, it could download code that reports your usage habits to third parties, or saves your CC number.
Surely you don't think that having arbitrary code placed in the iOS App Store isn't a security risk, do you? Once malicious code has been approved in the store, an attacker need only find a way to break out of the sandbox, which I am sure is possible.
Reviewers check behavior, mostly not content. It's easy to hide code and activate it later. If you can break out of the sandbox, you don't need to download code to exploit that.
In his demo video, he shows a metasploit interpreter downloading the address book. He mentioned it was a different payload, but I don't recall if he said it was a different application.
If it was the same app, then does that imply the sandbox for a stockmarket app allows access to the address book?
Nowhere in that article do I see them state that the downloaded code is able to escape the sandbox. They certainly imply it pretty heavily, but I can only assume that's due to general cluelessness, or less charitably a desire to sensationalize the story.
Everyone at Apple who does security knows of Charlie Miller. The guy has a PhD and hacks Apple products and wins prizes and writes research papers, etc. If they don't know of him I'd be very surprised.
Isn't it considered good security-research practice, and just "good manners", to notify the company beforehand and give them a chance to fix the problem before going public and pulling stunts like abusing it openly, making sure they're publicly humiliated with their pants down?
Judging from the article, he did neither - so don't run crying about "that's so rude".
1. This "guy" apparently didn't try very hard, at all, to cooperate, as evidenced by him putting the exploit itself in the App Store before notifying Apple about it, in direct violation of the dev guidelines.
What good is it to have such guidelines at all if you display in public that you won't enforce them?
2. Microsoft is doing a great job at this? So are we to assume that their security is therefore superior?
It's rude, given that according to the article he withheld details of the exploit to give Apple time to fix the bug, but the decision is understandable since he did violate the developer agreement. I'm not so sure about "interfering with Apple's software and services", but his activities seem to be covered under "hiding features from [Apple] when submitting them."
The situation is only understandable from a 'blindly following the rules' perspective. If Apple makes it 'illegal' to probe their AppStore, then only black hats will be the ones doing the probing.
How are you supposed to test whether or not Apple will discover a vulnerability during their AppStore approval process if you are going to tell them that one exists?
In this case, he didn't only probe the approval process, but he also released the app containing the exploit into the store for public consumption. Apple's process allows for submitting an app for approval without releasing it into the store once it has been approved.
If the exploit potentially allows downloading and running of unsigned code after release in the app store, how else could one prove that it is in fact a hole, other than by releasing it into the app store to confirm the behavior?
Apps that you load onto the device yourself from Xcode are still signed, and are still governed by the sandboxing rules. You can demonstrate that the exploit works in your app by loading it on via Xcode, at which point the only difference submitting it to the AppStore makes is proving that it gets past the AppStore submission process (which isn't the interesting part about this exploit).
You cannot install, via Xcode, the very same signed "Distribution" binary that you submit to the App Store. The closest you can get is one signed for "Ad Hoc" distribution, but even those binaries interact with the OS differently than a "Distribution" binary does. In-app purchasing, for example, differs between the two.
That said, this guy broke the legal agreement that we partly rely on for trusted computing in iOS. He can be thankful if he doesn't get sued, and he should have gone about it differently if not willing to face the minimal consequences of violating the legal agreement.
You cannot, but the apps installed via Xcode have the same restrictions. In-app purchasing is at a much higher level than the kernel-enforced sandboxing rules that this exploit was affecting.
Putting the exploit in the App Store isn't particularly polite either and doesn't seem to serve any purpose other than generating some publicity for the researcher. It'd be different if he believed Apple wasn't going to fix it or that the exploit was being used or was about to be used in malicious apps - but he doesn't claim that was his motivation.
It seems he was pretty sure it was going to work: there's nothing magical about the App Store, and he'd found a way to get around the code-signing checks. I'm sure that once the vulnerability was fixed, he'd get credit. It's just that this sort of thing won't get you in Forbes.
I personally don't really think there's anything at all wrong with a bit of harmless, nerdy limelight-seeking to boot, if that's what he was doing. Acting like he was somehow mistreated is what seems a bit iffy.
The problem is Apple could claim, "In our app verification process we can ensure such an exploit could never make it to the app store." The only way to test the full-scope of a vulnerability is to test it in a real world scenario, which means keeping it from Apple.
Unfortunately, I know of no other way to do it, unless companies like Apple create security groups that work with people like Charlie and give them an exemption to submit without notifying other parties at Apple.
If that's the problem, it's a different problem. If I'm reading the article right, he did submit the exploit, and companies like Apple do have channels to receive and respond to vulnerabilities and to credit people who find and report them. There's nothing in the information released so far that suggests he was, in fact, facing such a problem.
That's a bit too charitable to Apple, I think. Yes, the decision is covered by the terms of the agreement - they can do what they did. But since the result of their decision is 1) bad press and 2) increased risk of security holes, it's not "understandable" unless you think Apple is run by morons...
I think the risk is primarily bad press. It's not really a "security hole" for apps to add additional runnable code from an external website, when apps can currently contain pretty much anything at all (as long as they don't link to forbidden symbols). Remember that Apple does not see source code, and relies completely on app developers to behave, beyond a few perfunctory checks.
And Apple has made it abundantly clear that they don't care about bad PR in the security community. So there's really no downside to cutting out Charlie Miller, in Apple's eyes. The winner here is Charlie Miller's career.
I really doubt it. To be blunt, Apple is an existence proof that security on consumer products doesn't provide business value in proportion to its cost. Keeping users safe is seldom worth investing in.
That's sort of a strange thing to say considering that iOS and app store sandboxing are doing more to innovate in the security department than any other consumer device manufacturer.
He's got great skills, and NSA training is as good as it gets, but he explicitly violated the rule to not download and run code from a server, to see if the rule would be enforced. They enforced it, just as he'd known they would. There was no point to his doing that other than to get headlines.
No, he explicitly violated the rule in order to test the hypothesis that a security hole he'd uncovered would allow unsigned code to be downloaded after release into the app store and run on the device.
The sane response to this would be "Oh, we better fix that. Thanks. We're removing your app BTW." The Apple response was typical of a bureaucracy.
But he did more than just test it and remove the app from the store. He left it up there and people presumably were downloading it. Another Forbes article[1] says:
> But the researcher for the security consultancy Accuvant argues that he was only trying to demonstrate a serious security issue with a harmless demo, and that revoking his developer rights is “heavy-handed” and counterproductive.
But as he demonstrated in his YouTube video, it wasn't just a harmless demo: he had a shell that he could run on the phone of anyone who downloaded his app.
Further, he didn't even have to put it up for sale except to perform his publicity stunt. The code-signing behavior doesn't change when you are developing on your device locally, so he could have submitted the app without ever putting it on sale: if the exploit worked in dev it would work in the store, and if they approved it he really didn't have to test it at all. Of course he's a curious guy - I think we can all relate to and appreciate that - so he could have chosen not to release it to the store on approval, then, if approved, put it up for sale only long enough to try it out, and then removed it from sale again.
I don't agree that they should have terminated his account, but neither are they really that out of line in doing so. I also don't think he would have opened a shell to anyone else's phone but the fact remains that he still had the ability to do so.
The lesson I would take away from this is that Apple should provide a mechanism for security vulnerabilities to be reported officially so that researchers don't have to engage in these sort of dubious activities.
Whether they listen to the reports or not is another matter.
Charlie is one of the founders of the controversial "no more free bugs" movement.
The amount of skill necessary to identify AND exploit bugs is so great that the bug reports themselves have value, far beyond attribution in the patch notes and a T-shirt. This is especially true when there is in fact a black market of bad people willing to pay good money for 0-day vulns.
Thus, reporting vulns that way doesn't necessarily make sense. Charlie's walking a fine line: he is not a BadGuy, but he also isn't giving away security consulting to companies with $200 billion market capitalizations. Apple should pay him good money to look at this stuff. Otherwise, it's going to be only BadGuys.
"Whether they listen to the reports or not is another matter." - It's kind of the point: the instinct of a bureaucracy that is not serious about security is just to keep things quiet in the belief that no noise means no problem. Schneier's excellent essay, "Full Disclosure of Security Vulnerabilities a 'Damned Good Idea'", observes that this reflex is in fact economically rational.
"I don't think they've ever done this to another researcher. Then again, no researcher has ever looked into the security of their App Store. And after this, I imagine no other ones ever will," Miller said in an e-mail to CNET. "That is the really bad news from their decision."
Take your wrist-slap like a man, sir.
Apparently the grand are also prone to self-aggrandizement. I have a lot of respect for Miller's skills, but he's not the only smart person taking a hard look at App Store security.
He's certainly not the only researcher looking at the app store, then again, he needs to play the victim a little bit right now if he wants to get public support. Public support and media attention may very well be his only ticket back into the developer program.
Downrank all you want. Nothing about this move means "Apple now has a bad relationship with security researchers." It just means Apple doesn't want Charlie Miller showing people how to side-load arbitrary code into their sheep's-clothing apps.
What Miller did was clearly a violation of the Dev Program Contract that he signed. There is indeed no flexibility when it comes to putting trojans in the store.
I feel like if this were an Android flaw, I'd see it in the title. Miller was booted from the dev program for discovering a major flaw in iOS. A hacker can get full access to the phone and personal data just by having someone download an app from the App Store. Definitely worth mentioning in the title.
There's always this flip-side to reporting security findings. I don't know the details of Charlie Miller's exploit; however, had he gone through the process of informing the vendor (in this case Apple) and then allowing sufficient time to address the issue, perhaps a showdown could have been avoided (I'm assuming that he hadn't).
People, however, also forget that there are other pressures facing info-sec researchers, such as pressure from management at the company where they work to 'publish' and/or present their findings under the company banner. Often this irks vendors, because vulnerabilities are used to promote the researcher's (or their employer's) interests.
That said, Microsoft, Google and Facebook have very transparent processes & expectations for submitting vulnerabilities.
He didn't do anything harmful though. He just demonstrated the capability.
It's the equivalent of making a word document pop up calc.exe. This proves that you've broken the security, but doesn't cause harm.
You have to actually do it to be taken seriously, especially with unreceptive vendors like Apple.
There is really no reason to behave in a hostile way towards a researcher that does this. He's telling you about the problem.
In the security industry, there are many people who seek and find vulnerabilities. Some of them report them, and some of them keep them private and exploit them secretly to attack people's property, or privately sell to others who do the same. Selling these to the underground is big business now.
Let's subtract the people who report things from the above equation (because we ban and vilify them). Now what does your ecosystem look like?
Maybe he expected a "thanks for showing us this vulnerability, we've pulled your app from the store and are working on a fix to the problem", which would have been the sane response.
He could also have just sent them an email about it. Instead he put a malicious app on the store and announced a talk at a security conference. Diplomacy was never his strong suit.
All kinds of nasty things have slipped through to the users. There have been multiple remote root exploits for iOS in the wild for weeks at a time and nobody really cared. There would be no fallout.
I agree that it's easy to see why they don't take kindly to this sort of thing, but it should also be easy to see why they should take kindly to it.
It's like I come to your party as a guest and steal all your stuff. A white hat would instead knock at the door and kindly point you to the loophole, rather than just exploiting it ...