Disclosure: Another macOS privacy protections bypass (lapcatsoftware.com)
384 points by sillysaurusx on June 30, 2020 | 106 comments



> I'm not interested in waiting years for a bounty. I can't speak for anyone else, but my personal experience is that the Apple Security Bounty Program has been a disappointment, and I don't plan to participate again in the future.

Yeah... Hint to Apple: when somebody discloses something responsibly, you respond with a proper process and understand that you get 90 days in most cases, as is industry standard (that's typically how Google Project Zero operates).

Otherwise don't be surprised when exploits end up sold for much more on a black market because nobody wants to cooperate with you.

Think we need a few more 0-days that cost Apple a few bad PR spins (as well as others like Valve) to make them wake up and actually have a proper process.


It's such a tough culture to change. I sat next to a member of the security team at a large networking equipment maker (who you would hope would have a clue about security, but not so much) for a while. Even after numerous meetings and agreements, the dude was still on conference calls where various executives would first reach for the lawyer's phone number upon someone from the outside reporting a security problem. Talking these guys down from even the most absurd actions was a regular occurrence ... let alone getting them to just respond properly / graciously.

It was so bad at one point that the legal team would contact the security team to get background anytime someone notified them of anything security related... just to avoid the obvious PR disaster that would be suing someone who was genuinely being helpful. (The key there was that the legal counsel was a good guy and had an appreciation for the damage a poorly timed legal threat could do.)


So the bossman can't make a distinction between human-society lawyers and computer-logic lawyers?


The problem is they identify it as a criminal/legal issue not as a software/hardware issue.


That level of ignorance should disqualify someone from running a tech company.


We're stuck with this until we decide it's time to start regulating ourselves.


Yeah, so let's merge those categories into a single one within corporate, something like "internal externalized product business risks", so that neither the boss nor the classification is wrong

(and ideally cut/reroute/raise the pay according to requisite nursing for idiot bossman)


I tried as well, not for the bounty (I didn't get one) but because I thought I should responsibly disclose the bugs I had found in sandbox enforcement. I gave them a demo, and they sat on it for a while (to be fair, if you emailed them they'd respond quickly saying they were still working on it, unlike things submitted through Feedback Assistant). Then I got an email a month later saying that they had fixed my specific case for App Store apps and couldn't see what the problem was, as they had apparently pushed a fix just after I first submitted my bug without telling me or posting the source code for dyld (a fix which happened to break some apps, so those got their bundle IDs hardcoded into AMFI as exceptions).

They seemed to only be interested in the App Store, which I obviously couldn't test, but I submitted a slight variation to work around the changes within a day, and a month later another variation that made exploiting the bug even easier. After the last one they asked for an extension to work on it, which I granted; when that started expiring I sent them an email asking what had become of the bug. This time they said they had fixed it in the App Store and wanted to credit me with a fix; there was no mention of sandbox enforcement outside of the App Store.

Still, as they had run past the discretionary disclosure period and said they had fixed the things they cared about, I went ahead and published a writeup…but it was a bit annoying that they didn't even assign me a CVE or something to link to so people could get more information, so I'm not sure why they were asking for details about me…


Out of curiosity why would it matter if they "assign you" a CVE?

If your intentions were purely altruistic then the CVE matters not. Just make a blog post with the details and link there.

But if perhaps on some level you were after recognition then yes I can see the desire for a CVE...


Because I can give people a CVE and they can find it easily, rather than "oh, it's actually a blog post I wrote; oh, you didn't find it by search; here is the link". It also absolves me of coming up with a descriptive name, reduces confusion if I ever find another sandbox escape, and centralizes the bug so that future researchers can do things like look up all the bugs for a component or build on it by seeing what's been done in the past. Same reason you might want your book to have an ISBN and show up in a library catalogue, for example.


A late addition: in the absence of all that, I do fall back on blog posts. I have written one for that bug, of course, but surely you can find it ;)


Again, what's the point of "handing someone a CVE"? Literally just make a post somewhere on the internet and let people see it there. If a CVE shows up, add it to the top. If not, oh well. It should be 100% about getting the right information out there, shouldn't it?

I've spent a lot of time involved in responsible / coordinated disclosure and it's always like this. The person disclosing "just wants to help", but the reality is the vast majority want (or feel entitled to) something for their efforts, be it their name behind a CVE, a bounty, or recognition of some type. Perhaps even the admiration of the company (wow, thanks we are impressed you found that!). That's totally normal and fine, I just wish more people would admit it.


As I mentioned, there is a huge amount of value in having a centralized, consistent database of bugs that you can refer to. It's the same thing as naming a new species: ok, sure, it's cool that you found it, but by naming it everyone can now call it that and use that name to find it, rather than using some ad-hoc non-descriptive name. And this is really important for systems like macOS that see multiple bugs in the same component every update: it's important to be able to distinguish them. Getting a CVE number isn't really that difficult anyways; you can basically get one for anything, like "denial of service" (you found a way to crash an app, congrats).

FWIW, Apple should not be impressed I found that, they should be impressed it took someone so long to show them that their sandboxing strategy on macOS is totally broken. It's nice that they wanted to credit me and all, but they fixed the issue in a way that didn't fully resolve the problems that I brought up, which I consider worse than not crediting me or not paying me a bounty.


There is probably no market at all for the privacy bypass where you make an illicit copy of Safari and arrange for it to be run so you can access files in ~/Library/Safari. This is simply not a high-severity bug. A good find, and an interesting writeup, but it's no RCE.


Perhaps, but for Apple it’s (or should be) a major headache when apps from the App Store steal and upload people’s browsing histories, as they were doing for a while until this change came in. Clearly they care enough about it that they were willing to piss off developers when it rolled out, so it might as well work right…

Also do note that Apple does claim to offer a bounty for such access, regardless of what the market is willing to pay. So clearly it is valuable to them.


> Perhaps, but for Apple it’s (or should be) a major headache when apps from the App Store steal and upload people’s browsing histories, as they were doing for a while until this change came in.

Isn't this what Apple's approval process is supposed to be for? Or is that only good for preventing Apple-Tax Evasion nowadays?


App Store Review is a joke on macOS.


It's different than on iOS?


I’d say it’s worse, which is saying something considering a jailbreak once shipped in the App Store on iOS.


I can confirm this! Back in 2017, I was 17 and found a flaw that affected the privacy of all browsers and required a web standards change. Google/Chromium (Andrew R. Whalley) paid me a good bounty for that, but Apple (mac, ios, iphone, watch) didn't pay. I didn't expect Mozilla to pay obviously since they were non profit.


> I didn't expect Mozilla to pay obviously since they were non profit.

Despite being a nonprofit organization, Mozilla's annual revenue is around half a billion dollars[1] and they do offer a client bug bounty of up to $10,000[2].

[1] https://en.wikipedia.org/wiki/Mozilla_Corporation

[2] https://www.mozilla.org/en-US/security/client-bug-bounty/


When the December 2014 git vulnerability was disclosed to the git maintainers (I can't find the CVE number now, but this was the GitHub blog post about it: https://github.blog/2014-12-18-vulnerability-announced-updat..., and this was the HN discussion at the time: https://news.ycombinator.com/item?id=8769667; tl;dr: git only checked for and protected the all-lowercase .git directory from being overwritten by clones/checkouts, which becomes tricky on a case-insensitive fs), they contacted both Microsoft and Apple to make sure they fixed all the corner cases of their case-insensitive file systems. Microsoft was very cooperative at the time and pulled in someone from the NT kernel team to make sure the fix covered all the corner cases; Apple, on the other hand, didn't respond. Even after the patched version of git was released, Apple didn't do a hotfix release of its Xcode tools (which is how they officially distribute git on Apple's systems). They only included the fix in their next scheduled beta release of Xcode.
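To make the bug class concrete, here's a minimal sketch (hypothetical, and it assumes a case-insensitive filesystem such as the macOS default):

    # all of these names resolve to the same metadata directory:
    git init -q demo && cd demo
    ls -d .git .GIT .Git
    # an unpatched git would happily check out a tree containing paths like
    # ".GIT/hooks/post-checkout", overwriting the real hooks, which then run
    # arbitrary code on the victim's next checkout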


What was the exploit? I've been sitting on one that's essentially as old as the web, but I figured it was pointless to try to alert browsers about it because too much functionality rests on it.


Don't tease us - what was your exploit? Just from the description, I'm happy taking a guess that this is going to involve somehow exploiting the browser cache (e.g. via timing how long it takes to request an image from another domain that may or may not have previously been cached) to determine whether the user has visited a page. That was first written up by academics decades ago, and has since been independently invented by several people (me included), but wasn't fixed as of whenever I last checked a couple of years ago.

Is my wild guess right? :)


No, though that is a good one.


> What was the exploit?

Just file a security issue on bugs.chromium.org with demo/instructions. If it's a good one, they'll pay well.

I don't wanna give it out since it's publicly linked to my twitter and I've bad mouthed Apple here... (I don't publicly speak against them for obvious reasons)


>Think we need a few more 0-days that cost Apple a few bad PR spins

I doubt it, "The exploit was used only against oppressed ethnic people"[1] was good enough to give Apple clean chit last time.

[1]https://arstechnica.com/information-technology/2019/09/apple...


Says who? The link you posted is one of many calling out Apple for their poor response.


He's referring to Apple's official response, which was basically along the lines of: "Nothing to worry about, everyone! The exploit was only used on the 12 most popular websites for Uyghur-related content. Therefore you have nothing to worry about."


Well, it was nowhere near "good enough"; it was just the best they could come up with.


> My sample exploit uploads some of your private data (your Top Sites, for example) to a server that I control, because that's an easy thing to do when I can run any JavaScript I want. Note that I'm not really collecting any data, as http://lapcatsoftware.com/test/ is a dead link. I used http so that you can see the private data being sent in a packet trace.

Please don't do this when reproducing exploits. Yes, it's just source code, and yes, the url is dead. But it's still source code that, when compiled, will grab your real Safari data and attempt to upload it to a url that could be switched on.

There's a difference between repro'ing an exploit and weaponizing an exploit. alert(1) is generally the best thing to aim for, or even alert(some user data) to illustrate the point. Whereas upload(some user data, my server) is a bit too close to the moral equivalent of `rm -rf $HOME/*`: all of these illustrate the point, but rm'ing a homedir is generally not a great thing to have in your repro; ditto for uploading real user data.

exmaple.com was explicitly reserved for scenarios like this, which would accomplish the same thing safely.

It may seem like a pedantic point, but it was something I was taught as a pentester: don't weaponize exploits; simply reproduce them. So my instincts kicked in, and it's impossible not to mention it. (That said, I don't mean to make a fuss – it's not a big deal in this case.)

By the way, I found this post via the author's twitter: https://twitter.com/lapcatsoftware I've been following them for about a year now, and their tweets on Mac programming have been quite informative.


Another problem with this demonstration is that by exfiltrating the data via HTTP, even if it ends up in /dev/null on the other side, the user’s ISP and anyone else in a position to sniff packets along the route can now know the user’s top sites.


Why would you run _any_ code with a known exploit on a personal machine with your personal data. At minimum make a new user and run it on there for heaven's sake.


No expert, but even example.com isn't totally safe, right? According to the RFC (https://tools.ietf.org/html/rfc6761), while these domains are reserved, nothing should treat them specially, so there's nothing to stop people having them routed somewhere else at the DNS level.


AFAIK the domain is controlled by the IANA, so it's probably not too big of a security risk. If you really want to be paranoid you can set the site to be 0.0.0.0 or attacker.invalid.


0.0.0.0 is localhost, which is a great way to allow other software running on your computer to take the data and send it elsewhere. It's not a great IP address to use for "invalid".

Yes, 0.0.0.0 really resolves to 127.0.0.1 on Linux at least. Try it.
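For anyone who wants to try it, a minimal sketch (assumes python3 and curl are installed):

    # bind a server to loopback only, then connect to it via 0.0.0.0
    python3 -m http.server 8080 --bind 127.0.0.1 &
    curl -s http://0.0.0.0:8080/ >/dev/null && echo "0.0.0.0 reached localhost"
    kill %1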


Not saying 0.0.0.0 should be used in this case, but if you have data sniffing software listening on port 80 or 443 without your knowledge (or really any port at all, but especially a privileged port), you probably have much bigger things to worry about than some ad hoc PoC sending data there.


attacker.invalid is good.

So are <anything>.invalid, <anything>.test, or <anything>.example. They're all reserved and don't route.
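Easy to confirm from a shell; with a compliant resolver (and even at the root servers) these names come back empty:

    # reserved per RFC 2606 / RFC 6761, should never resolve:
    host attacker.invalid   # expect: Host attacker.invalid not found: 3(NXDOMAIN)
    host foo.test           # likewise NXDOMAIN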


Maybe on your system, but there is nothing in the standards which forbids them from being routable.


But by default they won't be routable. I guess you can shoot yourself in the foot and have entries in your /etc/hosts point to your attacker's site, but that's kind of hacking yourself.


The standard forbids (well, "should nots", which is the best you get most of the time) .invalid. from routing. It's supposed to be filtered at the nameserver API level and everything after that (cached DNS, authoritative DNS, registrar, etc.)


Which RFC is that? There is nothing like that in the original RFC that defined the .invalid gtld. https://tools.ietf.org/html/rfc2606


RFC 6761, which is linked both upthread and as a supersession in the header of your own link.


Or use .local


.local is a bit problematic since there are RFCs explicitly stating it should only be resolved by multicast DNS, which has its own security issues, and fallback lookup is also somewhat undefined.


I agree. In my opinion, it would be useful if every browser and API client did something special with that domain. To the best of my knowledge, this is not the case. AFAIK I could easily plug that domain into my unbound servers, which my SSL MITM proxy utilizes, and route traffic to it.

If the client is not aware of a special domain, then you are dependent on the infrastructure to behave in a predictable manner. Attackers can alter that behavior.


"example.com"?


example.com (among others) is reserved per https://tools.ietf.org/html/rfc2606

It's very unlikely that those reserved domains will be controlled by someone nefarious in the foreseeable future.

I've made it a habit to use them instead of something like "insertyourdomain.com" in example configurations or dummy data for tests (where I perhaps need a valid-looking domain). In the unlikely event that something is ever sent to it, example.com is a much better choice than just some random string, since someone could register it and accept that traffic.


GP commenter said (note the typo):

> exmaple.com was explicitly reserved for scenarios like this


Aaah... Sorry.

I'm guessing this is also why we should perhaps avoid even "example.com" URLs if it's really bad if someone receives the traffic :-)


> my goal is to show that Apple's debilitating lockdown of the Mac is not justified by alleged privacy and security benefits. In that respect, I think I've proved my point

If every attempt to improve something were disproven by the presence of flaws, it would disprove all attempts to do anything with software ever. I get that people don't like the macOS privacy protection efforts, but that's no reason to construct a logical fallacy.


It's interesting that you chose to omit "over and over again" from the end of the quote. I would also mention this:

> There are two fundamental flaws in TCC that make this exploit possible

We know that TCC is a major burden for legitimate Mac apps. But is it a major burden for malware? That's the question, and it seems to me the answer is no. There are so many holes in this system, it only stops the good developers who wouldn't stoop to using the countless hacks readily available to malware developers.


> We know that TCC is a major burden for legitimate Mac apps.

It's a burden for me as a user!

My home theater setup is basically just a Mac connected to a projector. Every button on my Harmony remote runs an Applescript. Many of them start with lines like:

    tell application (path to frontmost application) to
Every single time a new application is in front when I run a new script, Mojave and newer pop up a dialog asking if I want to allow my own script to control the front app, which means I need to get up off my chair and grab a mouse to click the button. When I edit a script, it usually resets all of the approvals.

I make very heavy use of Applescript for all sorts of things on my computer. It's one of the things that has kept me on Mac over the years, because there is no broadly-supported equivalent on Windows.

I get the sense that no one at Apple uses Applescript much, though, because if they did, they wouldn't have added an impossible-to-disable feature which renders it effectively useless.


On the theory that I may as well check, just in case something helps —

Does the Harmony process request Apple automation permissions, and is the Harmony process enabled for it if so? (Whatever the parent process of the scripts you're launching is, i.e. Harmony.app in the chain Remote button -> Harmony.app -> Your Apple.scpt)

Does exiting the Harmony process and all scripts, purging all of your events decisions with `tccutil reset AppleEvents`, and then restarting the Harmony process and running a script result in any improvements?


No guarantee this will work, and I don't have a machine to test on in front of me, but does that still occur if you add your script to either (in order of likelihood) the Automation, Developer Tools, or Accessibility groups in the Security & Privacy -> Privacy preferences?


Automation and Accessibility, no. Automation is indeed the relevant panel, but the whitelist is per app being controlled. There's no way I can tell macOS to let my script control any app in the Automation panel, nor can I even approve apps ahead of time.

Is Developer Tools new in Catalina, or do I need to install Xcode or some such in order for it to appear? I never saw it in Mojave.

Fwiw, at one point I had a 250 rep bounty on this StackExchange question, and got nothing. :(

https://apple.stackexchange.com/questions/339509/edit-tcc-db...


Your argument is much better presented here, and it makes a lot of sense. While I'm not sure whether I agree or not, it does help me understand the viewpoint you're coming from. Thank you for taking the time to reply! I would now paraphrase my current understanding as (correct me if I'm wrong):

'The endless bugs in TCC demonstrate that its burden is not worth the costs to developers.'

What was written in the post did not lead me to understand this, even including the quantity/repetition modifier "over and over again". I think the missing piece for me is the cost to developers bit — without that, it reads as "the bugs prove that this isn't worth the privacy improvement", with that it reads as "the bugs prove that the cost to developers isn't worth the privacy improvement".


It was honestly more of an expression of frustration in the article than an argument. I'm pessimistic that I can do anything to stop the iOS-ification of the Mac.


Locks on your house only protect you from people who use doors. I'm not sure this argument holds up either.


But imagine if you only locked your front door and left your back door completely unlocked all the time. The front door lock would stop honest visitors from entering your home, but they probably didn't need to be stopped anyway, because they would knock before entering. Whereas criminals will neither knock nor use the front door.


We're totally in agreement regarding TCC, but this analogy has a lot of issues. A criminal could also break a window, or—even easier—pick the lock, because the locks on most houses can be trivially broken.

There's a couple reasons locks work IRL despite this, one of which is that they don't really stop honest visitors. You don't usually want anyone coming into your house that you haven't let in yourself, unless they're family members with keys.


Yeah, I don't think these "door lock" analogies are helpful for either side of the argument. The situation with a computer operating system is not analogous.


I think the biggest issue is that it’s been bolted on macOS, so it works nowhere near as well as it does on iOS. I’m sure you’re aware of the many other cases where there’s been holes in the macOS version of some enforcement because it was added later and without considering how it might fit ;)


I commented this on the Big Sur thread last week:

> I was really hoping they'd take some time to address Catalina's glaring issues — its slowness, bugginess, just general sloppiness — and instead they did the opposite.

I didn't include "security exploits" in that list because of course they're going to fix security exploits, right? Especially one that they're already aware of, and can reproduce — they wouldn't just sit on it in favour of making the UI glossier, right?

Apple have just been getting so much wrong lately. This pretty much dashes any hopes I had of ever upgrading, because even if this particular flaw gets fixed by the time Big Sur comes out, there are almost certainly others that they've ignored. I expected to eventually have to grit my teeth and upgrade so I wasn't behind on security updates, but I guess that's not the case. Ugh, now I have to figure out how to mute that annoying (1) in System Preferences. I've heard they keep making it come back now.


I shared your concerns prior to trying Big Sur, but in practice I'm somewhat shocked by how much I like it. Big Sur is very snappy on my aging 2015 MacBook Air, and I've encountered far fewer bugs than I'd expect in an initial developer preview, which bodes well for the final release.

I like the visuals too, at least compared to Yosemite-era; Leopard-era still wins out overall. Using the OS has made me feel much better about the Mac as a platform, although I'm certainly still nervous.


Out of curiosity, would you say it's an improvement coming from Catalina? My confidence in the Mac was really shaken by Catalina but I'm genuinely ready to be impressed by Big Sur.


> My confidence in the Mac was really shaken by Catalina.

Same! However, I've never actually used Catalina for a significant period of time, so I can't compare them directly. I downgraded back to High Sierra after just using Mojave for a few weeks, partly because I was frustrated with TCC breaking my scripts, but also because I'd noticed Mojave was slower and buggier than High Sierra. When Catalina came out and the problem reports started rolling in, I resolved not to touch it with a ten-foot pole.

I only installed Big Sur because I wanted to try out the new design. I was fully expecting a dumpster fire, and I wouldn't have even blamed Apple for it, given that Big Sur is currently an early developer preview (I was only able to download it by hacking Apple's catalog URLs). I did not expect to actually like Big Sur.

There are a couple of odd bugs that I expect to get ironed out by the fall—for example, the menu bar sometimes shows a wifi-disconnected icon even when the internet is working fine. On the whole though, I think the current build of Big Sur would make for a fine daily driver. (Although it would probably still be a bad idea if you're working on something important!)

More than anything else—and I know this will be very hardware-specific—I just can't overstate how fast Big Sur feels. It's really as if I got a new computer. I will note that I did a clean install, but I do those regularly anyway, and they don't help all that much.


Thanks for writing this! Honestly, it does make me feel better about the whole situation.


It's getting even worse: with the most recent security update for Mojave I can no longer block Catalina from the available updates using `sudo softwareupdate --ignore "macOS Catalina"`. It always shows up now in System Preferences -> Software Update. Anyone know how to block it effectively?


> My personal opinion is that macOS privacy protections are mainly security theater and only harm legitimate Mac developers while allowing malware apps to bypass them through many existing holes such as the one I'm disclosing, and that other security researchers have also found. I feel that if you already have a hostile non-sandboxed app running on your Mac, then you're in big trouble regardless, so these privacy protections won't save you. The best security is to be selective about which software you install, to be careful to avoid ever installing malware on your Mac in the first place. There's a reason that my security research has focused on macOS privacy protections: my goal is to show that Apple's debilitating lockdown of the Mac is not justified by alleged privacy and security benefits. In that respect, I think I've proved my point, over and over again. In any case, you have the right to know that the systems you rely on for protection are not actually protecting you.

Substitute "security" for "privacy" and you immediately see why the argument is flawed. "Other people and I have found bugs that bypass security features, ergo security features are security theater and only harm legitimate developers. Better have a free-for-all OS and be mindful of what you install." (Yes, you should be mindful of what you install, regardless.)


Right! Who needs seatbelts or airbags on a car with great brakes -- just be careful and don't drive into things.

</s>


This is not a useful analogy. You won't die or be horribly disfigured sitting at your keyboard. The tradeoffs of driving a car vs using a Mac are very different. They're not comparable.


It's strange that the people who disagree with my "security theater" comment don't seem to be alarmed about the bug I disclosed, which is a very serious one if you really do believe that macOS privacy protections are important.


> macOS privacy protections are mainly security theater and only harm legitimate Mac developers while allowing malware apps to bypass them through many existing holes such as the one I'm disclosing

So this is just another abuse of the term "security theatre." Door locks, for example, are legitimate security. There are plenty of lock picking sets on the market. But that does not mean door locks do not work. TCC works because a casual user isn't trying to override it. But if it didn't exist then users would have no protection at all. It still comes down to this: the person sitting behind the computer is in complete command of it. And security knowledge is the best security you can buy.


> It still comes down to this: the person sitting behind the computer is in complete command of it.

No I'm not. If I had complete command, I could actually turn off the damn prompts so my own scripts worked.


If you are physically in front of it and have admin privileges there probably is a non-trivial way to disable those prompts. Disabling SIP and all sorts of protections is involved.


I've tried! I keep SIP off anyway, so it's certainly possible, but Apple neglected to actually give us a mechanism for turning off TCC, and I'm not an OS developer. There isn't a boot flag, or a process you can disable, or anything like that.


Admin privileges are not involved in any way.


Security theatre works for Apple.

Customers buy the products and can logically say "I value my security so I buy Apple products".


Minor correction for the author: TCC was not introduced in Mojave. It's actually present on the 10.9 ("Mavericks") machine I'm using right now, where it controls which apps are allowed to use UI scripting. Like on modern systems, this is all stored in a database called "tcc.db", and it can be reset with the Terminal command "tccutil".

This 10.9 incarnation of TCC is much weaker, however. It doesn't apply to most Apple Events—only UI scripting—and once the user whitelists an application, that application can control any other app on the machine. Also, because SIP wasn't introduced until 10.11, there was originally nothing to stop an application with admin privileges from editing tcc.db directly and whitelisting itself—as Dropbox later did!
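For the curious, you can poke at this from Terminal. A sketch (note that on 10.14+ reading the database itself requires giving your terminal Full Disk Access, and the schema varies by OS version):

    # inspect the per-user grants database:
    sqlite3 ~/Library/Application\ Support/com.apple.TCC/TCC.db \
        'SELECT service, client, allowed FROM access;'
    # reset all Accessibility (UI scripting) approvals so apps re-prompt:
    sudo tccutil reset Accessibility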


Wait, so you can just duplicate an app that has more privileges than your app, modify it, and run it to exploit its access?

This is a pretty glaring security issue actually - after reading this, it seems like Apple's choice to track app permissions / security exceptions by the app's bundle ID and not its file path was a pretty big mistake.

I wonder if this is a case of iOS security engineers working on macOS, forgetting that app bundle IDs aren't enforced by a central install flow on the platform?


File path is wrong, too. What should be checked is the bundle’s code signature.


It does check the code signature. However, it's not a "deep check". The problem with doing a deep check, including all of the app's Resources, is that this can be very resource intensive, depending on the app. It's the reason why Xcode takes forever to "verify" on first launch. If there was a deep code signature check on every TCC check, you would see a lot of very long pauses.
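You can get a feel for the cost from the command line. A sketch (the deep, strict variant re-verifies every nested bundle and resource, so try timing it on something large):

    # quick static check of a bundle's seal:
    codesign --verify --verbose=2 /Applications/Safari.app
    # full recursive verification, the expensive operation TCC avoids:
    time codesign --verify --deep --strict /Applications/Xcode.app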


You can guarantee that the system apps haven't been tampered with, at their system file paths, because of System Integrity Protection. But all bets are off if you make a copy of a system app elsewhere on the disk.


Right, I meant “deep code signature” rather than “executable code signature”, thanks for the correction. I think macOS has a thing where it only checks the former the first time you launch an app and not after that, so you can scribble all over the resources and the system won’t care. Presumably this was thought to not be a big deal, but you showed a pretty good example of how you could launch a data-only attack on the privileges associated with the program :)


But surely they can do better than this? This really is a bad flaw.

At the least, couldn't they maintain a cache of verified signatures, based on the hash of the file? Then on subsequent loads, they could just hash the file and see if the hash was cached. Not as safe as checking on each load, mind, but surely a big improvement over checking it once and blindly assuming no changes!

I mean, if this was Windows it would be absolutely huge - they'd be ridiculed in infosec and HN circles alike, and IT teams across the globe would be nervously scrambling to get the patch applied before they got pwned.

It seems like Apple is getting off too lightly here, IMO.


> My personal opinion is that macOS privacy protections are mainly security theater

I really disagree. This feature has revealed that Google Drive by default spies on my ~/Downloads folder - something it has no business doing, nor did I ever intend for it to do.

I actually love the many things Apple are implementing to improve user privacy from third-party apps and services right now. To an extent, their privacy brand actually is real, and helpful.

Just don't expect any privacy from Apple themselves (and by extension the government). At least they help reduce Google and Facebook's rampant surveillance. That's some good news, in today's era of tech.


This is a scary exploit. If by modifying a bundle's resources you can coax an app to do something it shouldn't then an attacker can make a copy of the app with the malicious resources and assume the full privilege of the app.

Or phrased differently. All installations of the same app have the same privilege.


Shouldn't 'Safari.app/Contents/Resources/HTMLViewController.js' be codesigned? Why does Safari launch at all if this file was modified?


IIRC once an app is launched the resources are not checked again. Let me see if I can look up if this is documented somewhere.
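You can demonstrate the gap yourself; this is essentially the article's trick (a sketch, with the resource path taken from the writeup):

    # copy Safari somewhere writable and scribble on a resource
    cp -R /Applications/Safari.app /tmp/Safari.app
    echo '/* tampered */' >> /tmp/Safari.app/Contents/Resources/HTMLViewController.js
    # static verification notices immediately:
    codesign --verify --verbose=2 /tmp/Safari.app
    #   expect: "a sealed resource is missing or invalid"
    # ...yet the copy still launches, since only executable pages are
    # validated at runtime:
    open /tmp/Safari.app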


Wouldn't that kinda defeat the whole purpose of signing the app?


¯\_(ツ)_/¯


Whenever I read an independent disclosure in an area I'm unfamiliar with, such as this, involving the failure of the vendor to engage with the security researcher, I'm always presented with a parsing dilemma: is the vendor justified in ignoring the exploit because it is not really as severe as the author makes it out to be, or is the author correct and this is a serious vulnerability which the vendor has ignored/let slip through the cracks?


I'm not even claiming the vulnerability is serious. But when there's a bug bounty involved, the question is whether the vendor is ever going to pay, or the vendor is just leading the bug reporter on to keep it quiet for as long as possible.


Apple doesn't pay a bounty until after they release a fix. So if Apple decides to de-prioritize a bug, you never get paid. But Apple doesn't tell you if they've de-prioritized a bug; they just want you to remain silent forever, and they say "We're still investigating."


I don't understand how big companies are always penny-pinchers in this area. Anyone got a clue?


My armchair theory is that there is a bit of hubris involved: in-house developers are hesitant to admit they've been 'bested' by a stranger on the internet, and so they downplay the issue or even just have blinders on against recognizing its severity in the first place. Though to be fair, I think the flipside is also at times true: the independent security researcher believes the anthill they've found is a mountain.


"I didn't expect Mozilla to pay obviously since they were non profit." In 2017 Mozilla made a ton of money off google. Yes, Mozilla makes their money selling your search.


This sounds like a bug in Safari more than OSX, tbh.


No, Safari is just an easy way to exploit this bug.

In principle, it works with any app that can be convinced to run arbitrary code by changing its resources. What Safari is doing isn't wrong and wouldn't cause an issue if TCC would check the entire app, including resources.

It's true that you can't use this to copy the privileges of _any_ app, only of those that have this property.


It works with any app that can be convinced to run arbitrary code.

That limits the scope very much. TCC shouldn't have to check the entire app's signature.

Safari should verify its own resources, is what I am saying.


> It works with any app that can be convinced to run arbitrary code.

I guess any Electron apps, and apps using a webview for their local resources, do it too.


The developer claims otherwise, that you can do the same with any app: https://twitter.com/lapcatsoftware/status/127794139284581171...


Can you get a licensed copy of OSX without Safari? If not, it's part of the OS.


You can update Safari independently of the OS. It even ships off the read-only system partition by default.


There is a separation of responsibilities, even though those apps are Apple apps.

Basically, if you can do it with any app, it's an OSX problem; otherwise it's the app's fault.

Even Apple's guides state:

You must also verify that the file you intend to read from or write to is the same file that you created.



