Oh my, I feel so sorry for Apple security engineers right now. I'm curious about the motivation to release such a serious, and complex, 0day on New Year's Eve!? Makes me wonder who brushed this guy the wrong way, or if he just wants to watch the world burn.
That said, seriously impressive work and I give him props.
I remember waiting until January to report a security issue with some trivial setuid application, a few years back, specifically so I could claim the first CVE of the year.
I was unlucky and it didn't work out, but come December I do wonder if I should dedicate more time to audits...
If I had to guess, he waited a month, got blown off, and said, "Fuck it."
Also -- I feel like the underlying bug, i.e. relying on values in a shared, volatile variable, is the sort of thing that source code could be systematically tested for, both mechanically and by humans. But my strong impression is that Apple doesn't care about macOS and definitely doesn't put the A team on it. Or even the B team.
I had actually submitted to the ZDI, but had written the exploit & write-up in the first place mainly because I like hacking rather than for money. I figured I'd see what offers I'd get anyway, but once I had spent all the time on the write-up, I mainly wanted people to see that, and the amount offered wasn't enough to convince me otherwise. I might've published this earlier even, but my December was kinda busy, first with the v0rtex exploit and then with 34C3.
And an engineer from Apple's security team contacted me a bit after the release - they had found the bug a while ago, but hadn't verified the subsequent patch, which actually didn't fix it. And a while ago I tweeted this https://twitter.com/s1guza/status/921889566549831680 (try diff'ing sources to find it :P). So they do have people on it.
I also told that person to extend my condolences to whoever has to come in and fix that now, but they basically said that there's nothing to apologise for and that they (the team) really like such write-ups. So... I guess I'm not that evil?
And I neither wanna watch the world burn, nor did anyone brush me the wrong way - I didn't publish this out of hate, but out of love for hacking. If you're concerned about skids hacking you now: they'd first need to get code execution on your machine. If you're concerned about people who can do that, then those can also get kernel r/w without me, so... nothing really changed for the average user.
PS: Yes, it's really me. Will add keybase proof if my karma gets >= 2. Edit: done, see my profile.
The write-up you did on this vulnerability (not to mention the discovery of the vulnerability and coming up with a working exploit) is really top notch. Thanks for taking the time to compose such a high-quality explanation and walk-through.
I found it by looking through IOHIDFamily's source, hoping to find a low-hanging fruit affecting iOS.

In total? A lot, probably way too much... I had found it in February and started to write an exploit in April. Next to my studies, exams, the Phœnix Jailbreak and Apple trying to mitigate tfp0, it took me until August to get a fully working exploit, at which point I figured I'd wait for High Sierra. And that actually broke a bunch of stuff (heap layout assumptions, ROP gadgets, kernel symbols, ...), so I had to fix these.

In October I started working on the write-up, but when I got to the part about the info leak, I had written that it was most likely possible while having no demo for it. I didn't wanna leave an empty claim standing there like that, so I ended up taking another month to get the "leak" binary working and basically write a second exploit. By that time it was early November - the write-up with its graphs took some time, and before I knew it, December had started (at which point I was finally done).

All in all, probably 200-250h - but it was a hard-to-exploit bug (IMO), I did way more than necessary, and when I started I still had rather little knowledge of XNU and needed a lot of time to learn how most stuff worked. Especially everything from the "leak" part was later really useful for v0rtex, whose initial version then took me just one and a half days - without that earlier work, it would've taken me at least a couple of weeks.
That was my thought too. As an Apple customer (5+ Macs and 10+ i-Devices), I'd feel way better knowing that Apple cares about macOS security enough to hire skilled engineers.
I'm a total non-engineer/developer, but I'm increasingly interested in what guys like you think about software QA as it relates to security.
Today's Apple does a lot of security posturing in hardware/platform architecture: full disk encryption, the iOS secure enclave thingie, the secure enclave's subsequent inclusion in Touch Bar MacBook Pros to control the webcam, iOS defaulting to non-networked sandboxing for third-party keyboards, etc.
Do you think macOS/iOS development should perhaps slow down from a yearly release cycle, delaying releases in favor of continuous big reworking, starting with XNU?
With a very rudimentary outsider perspective on QA, it just seems insane to keep pushing big OS changes yearly.
Nah, it doesn't fix the actual vuln - it's just that my exploit doesn't work out of the box, because the "hid" binary uses a hardware info leak which no longer works the way I was doing it. If you patched the "leak" and "hid" binaries together, you'd still get kernel r/w on 10.13.2.
Needs to be running on the host already (nothing remote), achieves full system compromise by itself, but logs you out in the process. Can wait for logout though and is fast enough to run on shutdown/reboot until 10.13.1. On 10.13.2 it takes a fair bit longer (maybe half a minute) after logging out, so if your OS logs you out unexpectedly... maybe pull the plug? And maybe don't download & run untrusted software until the bug is patched (or, you know, ever)? Also, any decent antivirus shouldn't take long to add this to their malware definitions.
Not sure if this is HN-level, but... I hope it's understandable.
You can just press and hold the power button to shut a MacBook off. I assume this is a hardware-level interrupt, as I’ve never seen it fail. Granted, not quite as satisfying as physically pulling the plug!
Apple actually creates some signatures in house with “XProtect”, but I’m not sure they do the same for raw privesc exploits. I’m also not sure how thorough they are with their signature creation...
Some of them. But mostly to prevent forwarding Windows malware. Most corporate-managed stuff has endpoint protection, and most end users are covered by Gatekeeper, XProtect and the standard Google Safe Browsing thing (or whatever it's called). And since most basic users simply use webmail, that vector is covered as well. It's not as bad as it once was.
Looks like a time-of-check to time-of-use (TOCTOU) type vulnerability. User-controlled memory is read by kernel space and used later - the time between the read and the use can be exploited to overwrite it from user space, typically by winning a race condition.
I haven't finished reading this one thoroughly, but one classic example of the pattern is the kernel performing access checks on user requests: user space asks for something it is permitted to do, and the kernel reads the request and proceeds to perform the access check. Meanwhile, user space writes something different - this time something privileged - to the memory area that specified the action. The kernel comes back having successfully performed the access check for the old action, and now executes the privileged action from the overwrite.
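A minimal C sketch of that pattern, with a hypothetical request struct and helpers (illustrative only, not the actual IOHIDFamily code):

    #include <stdint.h>

    struct request {
        uint32_t action;   /* lives in memory userspace can still write */
    };

    extern int  is_permitted(uint32_t action);  /* assumed access check */
    extern void perform(uint32_t action);       /* assumed privileged op */

    /* Vulnerable: reads req->action twice; userspace can change the
       value between the check and the use. */
    void handle_request(volatile struct request *req)
    {
        if (!is_permitted(req->action))  /* time of check */
            return;
        /* ...race window: attacker overwrites req->action here... */
        perform(req->action);            /* time of use */
    }

    /* Fixed: snapshot the value once, then check and use only the copy. */
    void handle_request_fixed(volatile struct request *req)
    {
        uint32_t action = req->action;
        if (is_permitted(action))
            perform(action);
    }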
j00ru/Project Zero used a modified Bochs (Bochspwn) to detect double-fetch patterns and find similar vulnerabilities in the Windows kernel.
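The core heuristic behind that is simple enough to sketch: log every kernel-mode read of a user-space address, and flag any address read twice within the same syscall. A toy version in C - the hook names here are hypothetical; a real tool would wire them into the emulator's memory-access callbacks:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_READS 4096

    static uint64_t reads[MAX_READS];
    static int n_reads;

    /* Call at syscall entry: start a fresh per-syscall read log. */
    void on_syscall_entry(void)
    {
        n_reads = 0;
    }

    /* Call on every kernel-mode read of a user-space address: a second
       read of the same address within one syscall is a candidate
       double fetch worth auditing. */
    void on_user_memory_read(uint64_t addr, uint64_t pc)
    {
        for (int i = 0; i < n_reads; i++) {
            if (reads[i] == addr) {
                printf("possible double fetch of 0x%llx at pc 0x%llx\n",
                       (unsigned long long)addr, (unsigned long long)pc);
                break;
            }
        }
        if (n_reads < MAX_READS)
            reads[n_reads++] = addr;
    }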
It's actually neither TOCTOU nor double fetch - it's not checking anything at all, it's just a write-then-fetch, using shared (untrusted) memory to store trusted information. Sure, it's similar in nature, but it doesn't really fit any common name...
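For contrast with the TOCTOU sketch above, here's a minimal illustration of that write-then-fetch shape (again hypothetical code with an invented shared-memory layout, not the real IOHIDFamily source):

    #include <stdint.h>

    struct shared_page {      /* mapped into both kernel and userspace */
        uint64_t saved_ptr;   /* kernel writes it, but so can userspace */
    };

    /* The kernel stores a pointer it trusts... */
    void kern_save(struct shared_page *shm, uint64_t ptr)
    {
        shm->saved_ptr = ptr;
    }

    /* ...and later fetches and dereferences it without any check. If
       userspace replaced saved_ptr in the meantime, this becomes a
       write to an attacker-chosen kernel address. No check ever
       happens, so there is no "time of check" - the bug is storing
       trusted data in untrusted memory at all. */
    void kern_restore(struct shared_page *shm)
    {
        *(uint64_t *)(uintptr_t)shm->saved_ptr = 0;
    }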
I'm actually not too surprised. I briefly delved into Mac kernel programming in the early days of Hackintoshing (in an attempt to write some missing device drivers), and the amount of complexity there seemed excessive --- to someone who had done previous kernel work in Windows and Linux. The overall impression I had was "far too many moving pieces". And as everyone should know, complexity hides bugs.
"Responsible Disclosure" is an Orwellian term concocted by vendors to control the actions of independent vulnerability researchers who work without real compensation, using information freely available to consumers, in competition with malicious attackers.
The term you're looking for is "Coordinated Disclosure". Yes, Coordinated Disclosure would involve sending the bug to Apple and waiting for them to publish it.
If you'd like to complain that this disclosure is irresponsible, fine. But try not to do it using the vendor's marketing term, because it's not up to them to decide what is and isn't "responsible". Other reasonable people --- myself included --- will probably disagree with you, and say that getting information out to people as comprehensively as possible is usually the most responsible thing you can do with a security bug.
I don't like the term you made up. But fine, I think this is "Irresponsible Disclosure." Is that better? Did anything change?
Vendors and non-vendors alike are all responsible for good security, and that includes working together to make this happen. If you are working against vendors because of some preconceived notion that they are "evil," that's not a good thing.
If it turns out that the author did submit to the vendor and worked together to minimize damage then I'll retract my statement. Until then I think it's irresponsible, not just "uncoordinated".
The problem is that allowing the vendor to define what is responsible - a definition which these days seems to be expanding into giving them unlimited time to fix it - is to allow them to take unlimited time to fix it.
Cooperation or even coordination takes willingness from both parties. Let's look at the actual page Apple has on reporting security issues [0]:
"When we receive your email, we send an automatic email as acknowledgment. If you do not get this email, please check the email address and send again. We will respond with additional emails if we need further information to investigate a security issue."
Something seems a bit off here. I would have expected a human to get back within a few working days for a serious security problem. That might be covered in the auto-response email, but I wouldn't be surprised if it weren't.
"For the protection of our customers, Apple generally does not disclose, discuss, or confirm security issues until a full investigation is complete and any necessary patches or releases are available."
Does this extend to the security researcher that reports the vulnerability? If so, that's probably why there was no coordination.
They admitted they never contacted Apple product security, which means they never notified Apple to begin with. That month you see at the top of the writeup appears to be how long they waited for ZDI before deciding to publish, not how long they waited for Apple to fix it.
So what? They owe Apple nothing. They owe you nothing.
Unless you are taking requests from random HN commenters for software you'd build for them for free, I suggest you rethink your suggestion that highly skilled researchers donate charity labor to the largest corporation in the world.
This has nothing to do with owing Apple anything and instead has to do with not intentionally compromising the security of millions of innocent people around the world.
And before you interpret this to mean never disclosing publicly, that’s not what I’m saying. But no matter what your opinion is on the best way to handle disclosure, releasing a 0day without any attempt whatsoever to notify the vendor is highly irresponsible and immoral.
Is 3 years considered quickly enough? How about 3 years for a remotely-exploitable problem? According to the Telegraph (http://www.telegraph.co.uk/technology/apple/8912714/Apple-iT...), "Apple was informed about the relevant flaw in iTunes in 2008, according to Brian Krebs, a security writer, but did not patch the software until earlier this month [Nov 2011], a delay of more than three years."
It seems to me that nobody but Apple has a responsibility to its users. The public at large certainly doesn't owe Apple (or any other software proprietor) specific performance regardless of whether they report what they've found publicly or when.
Apple is also not being nice to its users by denying them software freedom: most of macOS is proprietary, and the aforementioned bug concerned iTunes, a proprietary media player. So no matter how technically savvy and willing users are, they're not allowed to diagnose and fix the problem, prepare a fixed copy of the changed files, and help their community by sharing copies of the improved code.
"Responsible disclosure" is indeed propaganda that benefits the proprietor in a clumsy attempt to divert blame for a product people paid for with their software freedom as well as their money.
By framing, you imply without explicitly stating that "coordinated disclosure" means "unlimited time", but that's not the time frame under discussion.
I consider 24 hours notice bare minimum responsible disclosure, and 1 business day in the operating timezone of the company as a polite courtesy to the human beings who have to respond to uncoordinated security disclosures with emergency builds of their product.
What do you consider the bare minimum notice sufficient to respect the human beings who use the products we find vulns in? One business day? One hour? Zero seconds?
(Siguza, I'd also love to hear from you on this question, if you're willing to share. I know Apple said they don't need advance notice but if they hadn't, and offered no guidance, what would you have chosen?)
That would be technically impossible, since you had no prior participation in this thread. I would have happily answered questions about my choice, but if your only question is “r u trolln” then there really is very little to say.
Rabble-rouse all you like, but unless you respond with whatever your personal bare minimum delay is, you risk being perceived as the troll in this exchange.
> I consider 24 hours notice bare minimum responsible disclosure
...it seems rather unfair of you to have a go at my reaction. But somewhat incredibly, it appears you are serious.
I don't have a bare minimum delay - I think the vulnerability discoverer should coordinate a 'sensible' and 'fair' disclosure with the vendor. What 'sensible' and 'fair' mean really depends - how serious is the vulnerability? How many systems are affected? How quickly can the vendor patch, test and document a fix? How quickly can the fix be distributed?
It's a stretch to imagine a scenario where 24 hours is in any way sensible, fair or responsible. I'd be intrigued to know your reasoning.
This made me think of judgement/judgment, so I looked it up there too. It's apparently mixed enough everywhere that there's no single localized spelling. It always makes me pause to think about which form is correct whenever I have to write it.
Thanks. I thought I was aware of most differences, but clearly there is always more to learn with respect to the differences between the two localizations.
> and that includes working together to make this happen
It also includes disclosure if the vendor drags their feet and does nothing, which is a very common response.
I have no preconceived notion that vendors are "evil," but I sure as hell have a preconceived notion that they're as lazy as they think they can get away with.
Well, it has been disclosed to black hats - just not for money. To end users the result is the same; black hats have an unpatched 0day to play with, and we have no mitigations to deploy.
The end result is not the same. You know about the bug as well, whereas if the bug + exploit had been sold to black hats, they could use it without your awareness.
This would be a less serious problem if vendors pushed out fixes faster.
Well, we do have mitigations. They're not good ones, but they're mitigations:
- Be more paranoid about allowing r/w direct access to your computer.
- Be prepared to power off or otherwise halt your machine if (on 10.13.2) you see unexpected logouts or similar.
- Safeguard your data and/or consider moving it off of the machine or not using it in some situations.
None of those are great things to rely on or to have to do. A real working patch or detection mechanism would definitely be better. But that's not the same as "no mitigations" whatsoever.
It’s not « preconceived »: vendors offer very low bounties that don't even cover the time spent on one issue (and none at all for macOS, I think), and the reason researchers work with them is more the drawbacks of going black hat than the rewards of staying white hat.
Why should a consumer be left in the dark while malicious attackers may discover (or have already discovered) the same hack? The consumer should know right away that the system is not secure and take action (like not using it any more until a patch is released). That would be responsible.
If you knew something was dangerous, would you leave your family/friends in the dark just to give the company time to fix it?
Just to back you up: representatives of the Chaos Computer Club agree with you on that, and also agree that no vendor has a right to coordinated disclosure.
But they also argue that you should at least try to coordinate disclosure the first time, and only publish future bugs directly if a vendor doesn’t cooperate. And they suggest coordinating disclosure at least with the club, so that it's handled via the club's official press communications and they can offer legal protection too (very often, vendors will just sue the researcher).
This information is from a recording of the 34C3 year-in-review and PC-Wahl talks.
You're correct, but I think it's worth going into a bit more detail about the tradeoffs involved in the concept of "responsibility" as it applies to security research and exploit discovery, because a few clashing objectives shape what the right choice is in each case, assuming the goal is to minimize harm to all parties (if someone just wants to sell to the highest bidder, none of this applies). On one hand, a vendor-provided, well-tested full patch is of course the ultimately desired permanent fix.
But what I think many people forget when they get "responsible disclosure" in their minds is that there are often bandaids users can apply to protect themselves immediately, regardless of whether a patch is ready, so long as they know about the bug. And since it's always possible, and generally unknowable, that someone else has already found the exploit and is using it, there is extra, hard-to-calculate risk. Releasing without a patch may lead to some users getting exploited, but it could also protect some users, or at least let them minimize the harm. Once the bug is known in the wider community, it is also easier to check whether it has been selectively deployed anywhere. The lag between notification and vendor patching is itself a risk (and of course there is lots of room for perverse incentives in all of this).
So the real core issue with coordinated disclosure is that there is no Right Answer in general; any choice may help one group at the expense of another. Many researchers and organizations try to split the difference with standardized policies that seem to strike a balance, perhaps with occasional exceptions when something is serious enough. But ultimately it really is up to the discoverer, and it's wrong to insist they conform to what the vendor finds desirable, particularly since the responsibility for the blunder ultimately lies with the vendor. It's a hard area, and researchers should be respected for the work they do, on their own terms.
tptacek's position is well thought out and defended, by himself and others, and generally includes what you are covering here when expanded in detail. I feel for him, because he's put in the position of being the mouthpiece for this all too often and it must get tiring repeating himself. Sillysaurus did a summary of his prior comments on this a while back and added some additional resources[1].
I have found myself in total agreement in one instance, and then a week or two later a big exploit came out that made me really wish some patches had made it out first.
What I think it comes down to is that any vendor needs to assume the exploit can come out any minute after notification, and act accordingly (if it's important, they better damn well get it patched quick). Any researcher should assume that if they act like an asshole and aren't accommodating in some way, they'll get raked over the coals by at least some of the technical public. As tptacek noted, coordination is best, and that requires a dialogue.
Also worth noting is that sometimes there is no patch. Some security problems are of a degree where the entire process is fundamentally flawed, and in those cases there's little to be gained by waiting for the vendor, unless the vendor is working to notify all clients and recommend they cease use of the affected service or product. If, for example, you identify a flaw in how a protocol is defined, and almost all implementations are affected, the only responsible thing to do might be to publish publicly. Otherwise you're just favoring some groups over others in some way or another.
The "responsible" disclosure term is a pretty devious piece of marketing, because it creates situations such as this very thread.
Its use carries an implicit catch: anything that does not meet the narrow definition of "responsible" is the opposite. Without naming names, there are vendors that have been pretty terrible at handling their end of "responsible" disclosure and appear to be getting worse, down to not acknowledging that there is a problem, or even that they have been notified of one.
The alternative to disclosing a vulnerability is non-disclosure and, frankly, that's what some vendors mean when they say "responsible".
This looks like the usual greyhat posturing to me. He could have gone with "coordinated disclosure" and talked to Apple. Or he could have sold a local vulnerability for whatever that fetches on the black market. But he thought that boasting about it on the tech web would benefit his personal brand more than either of those routes, so this happened.
So what? He found the vulnerability, he wrote the exploit, he gets to decide how it is "disclosed", and anyone else's opinion of his choice is irrelevant.
I don't understand why Apple doesn't have a well-funded bug bounty program. You would think that companies would welcome people finding bugs in their software. Hell, they could give away free MacBook Pro laptops, phones, and iPads along with CASH!!!
Would you agree that if such a program were really well known and earned hackers a fair amount, people would not end up throwing this stuff online (even on Twitter)? Because in these cases, their PR team is the only one that gets affected.
"...turned out Apple PR channel is much more responsive than product security [...] No wonder nowadays people just throw security issues on Twitter right? What a world we live in."
Apple's financial reports and behavior the last few years make it clear their priorities are not with the Mac, but with iOS. For better or worse, Apple is a smartphone company that happens to make computers on the side.
> For better or worse, Apple is a smartphone company that happens to make computers on the side.
Apple made computers and operating systems long before it entered the mobile market. The first part is right, but the second part is historically inaccurate.
Yeah but to phrase it pointedly, to what degree does Apple consider Macs to be basically development kits for iOS apps? It's very clear that iPhones make the big cash.
This is stupid though. macOS is the operating system for its flagship desktop/laptop devices. The lack of a bug bounty for macOS (or even an invite-only one) seems really bizarre.
Maybe if more people would disclose such vulnerabilities “irresponsibly”, vendors would develop their software more responsibly. Just my 2 cents ¯\_(ツ)_/¯
Comments like these really rub me the wrong way. If it were some other company, one that actually has such a program or isn't as evil and uncooperative in as many ways as Apple, I could even agree with the sentiment (but not the wording). But as noted in the comments: they fucking don't have one for macOS. And as noted in other comments, they'd known about this bug for a while anyway, and it requires local code execution before it can do anything in the first place.
They deny problems until a shit tornado actually starts somewhere, and they absolutely love to control the narrative. They are clearly PR-first: they lobby and fight against right-to-repair efforts because they want to "guarantee the quality of authorized repair", sell expensive proprietary software on proprietary hardware (ostensibly to provide absolute perfection by controlling the entire stack), run slogans like "it just works", "light years ahead", "touch of genius" and "say hello to the future", and care enough about current social outrages (USA-specific ones, that is...) to do truly silly crap like the water pistol emoji or removing a historic game that featured a Confederate flag.
They are also secretive as hell (i.e. zero social media presence/interaction, YouTube comments turned off on their channel to prevent criticism, engineers under stricter NDAs than at Google, Microsoft, etc.), instantly fired an engineer over a mere iPhone X video her daughter filmed on the Cupertino campus, and actually sent police to raid the house of a Gizmodo reporter and confiscate his stuff (which apparently breaks journalist protection laws, both federal and state, but oh well, it's the USA and Apple; money talks, bullshit walks) when he wrote about a leaked iPhone 4G prototype. They ran idiotic, misinforming (but funny, so apparently it's okay!) ads in the past like the "Macs don't get viruses" one, yet still managed to have bugs like the infamous "goto fail" or "password stored in the hint" (which are frankly insane to me).
If they cared, they could do as little as silently tossing a few thousand dollars per big bug, creating sentimental/bragging-rights rewards (like Knuth's cheques), etc., but they chose not to.
They should be thankful that, between "sell 0days on the dark net", "do what Apple says for free" and "post online for cred", actual security hackers pick the last and not the first (I would too, out of the principle of not being a criminal, but there is clearly something wrong with a company's image if we have to fall back on a person's moral compass or even the criminal justice system for any choice).
To add to the problem, a lot of their fans are outright rabid: they plunder the Apple stores at each product release, and criticizing Apple anywhere near them online gets you called a Microsoft/Android shill, a hater, poor, and worse, with Apple's PR fluff regurgitated at you.
If someone lives lavishly in a multi-billion-dollar ivory tower that has a diamond mine under it, surrounded by their cultists, and won't even toss you the leftovers when you help them out, what do they expect?
They are a crazily rich, global, long-established company, and people should stop making excuses for them. They are the world pinnacle of the technology business in every sense, and if they can't deliver, they deserve the criticism. The poor guys working for free on OpenSSL, for a bazillion platforms and single-digit thousands in donations per year (and zero compensation from all the corporate freeloaders in the Fortune 500), got torn a new asshole the size of the English Channel over Heartbleed, yet Apple keeps getting excused for not being able to get their shit together (by their own choice, like not having a bug bounty after all the bugs of 2017), all while holding a reputation as saints and geniuses and controlling everything down to the tiniest detail - shops, repairs, hardware, software, components and information.
That would have been withholding, not disclosure. You need to disclose the vulnerability to those who are vulnerable for it to be disclosure, and nothing else is responsible.
Can a company the size of Apple not afford 24x7 security resources? Given their installed user base, I don't think that's unreasonable. Security doesn't have a holiday.
I would claim there is a very high likelihood that the person having to work all night to fix this on New Year's Eve is not the same person who prioritizes tech-debt payoff vs. new features.