Hacker News
Apple has pushed a silent Mac update to remove hidden Zoom web server (techcrunch.com)
1358 points by coloneltcb on July 10, 2019 | 532 comments



If you would like to force this update you can do so via the terminal:

softwareupdate -ia --include-config-data

It will show up as MRTConfigData if you look under Apple Menu -> About This Mac -> System Report -> Software -> Installations. The latest version is 1.45, which was updated today and includes the Zoom mitigations.


This also installs a lot of stuff people may not want. You can install only the designated package with:

    softwareupdate -i MRTConfigData_10_14-1.45 --include-config-data


> MRTConfigData

If I'm correct, MRT stands for Malware Removal Tool.

I'd feel pretty bad if anything I worked on had to be uninstalled by that.


Well, it’s literally malware.


You are correct, the -a flag will install all updates that are available from Apple. Thanks!


What updates would people not want to install? Just curious, not arguing.


I like to know precisely what's installed and running on my devices. It does not make sense to install the latest HP printer software, or new Apple Pro Codecs, just to patch a particular security flaw related to Zoom.


Software Update Tool

MRTConfigData_10_14-1.45: No such update
No updates are available.


I am also getting this error, and suspect it is because I’m on 10.12.6. If you run

    system_profiler SPInstallHistoryDataType | grep -A5 MRTConfigData

you should see your latest version. For me, it’s 1.42. Not sure how to get the update yet though. Will update this comment once I figure that out.

Update: According to this macworld article, there is a Zoom patch out that fixes this. https://www.macworld.com/article/3407764/zoom-mac-app-flaw-c...

There are also commands at the bottom to manually kill the zoom localhost and disable it. I have opted to run those commands regardless:

  pkill ZoomOpener; rm -rf ~/.zoomus; touch ~/.zoomus && chmod 000 ~/.zoomus

  pkill RingCentralOpener; rm -rf ~/.ringcentralopener; touch ~/.ringcentralopener && chmod 000 ~/.ringcentralopener


What does the chmod do there? Removing a file counts as a write to the directory, at least on Linux, so I'm thinking chmodding the dummy file wouldn't do much.


The idea is to prevent the Zoom software from ‘repairing’ the ‘damaged’ app by overwriting it with the malware.

I would also set the ‘user immutable’ flag. If you want to go even further, set the ‘system immutable’ flag (see ‘man chflags’).


Yes, sure, but I question whether these permissions would do anything to prevent that. They would reject an open() call on the file, but these paths are expected to be directories so that would never happen, and it doesn't stop an unlink().
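The objection above can be checked with a quick portable sketch (throwaway filename, run in a temp directory; this is just an illustration, not the Zoom client's actual behavior): mode 000 does block reuse of the path, but it doesn't stop the owner from unlinking the dummy file.

```shell
# Sketch: chmod 000 blocks use of the path, but not deletion by the owner.
cd "$(mktemp -d)"
touch .zoomus && chmod 000 .zoomus
# Recreating the directory fails while the dummy file occupies the path:
mkdir .zoomus 2>/dev/null && echo "mkdir ok" || echo "mkdir blocked"
# ...but the owner can still unlink the dummy and start over:
rm -f .zoomus && echo "unlink ok"
```

So the trick raises the bar for an automatic "repair", but a process running as the same user could still remove the dummy and recreate the directory, which is where the chflags immutable flags come in.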


Use

    softwareupdate -l --include-config-data

to list available updates, then adjust the command with the MRTConfigData version Apple provides for you.


Thank you for this. HN comments always make me realize I don’t know enough about the macOS command line. My primary system is Linux, so I miss out on these macOS-specific commands day to day. But I would love to pick up a few if there’s a book or document.

Is there a book you can recommend? Or did you pick these up over the years?


To check it with fewer mouse clicks:

  system_profiler SPInstallHistoryDataType | grep -A5 MRTConfigData


Helpful! Thanks...


Any details about --include-config-data? Doesn't seem to be documented in the help message.


This blog post[1] has a good explanation of ConfigData updates. The flag would appear to force the install of new Gatekeeper configuration updates.

>To help distinguish Gatekeeper and XProtect updates from other updates in the software update feed, Apple marks them as being ConfigData updates.

>Marking these updates as ConfigData cues the App Store to not display these as available software updates in the App Store’s list of software updates. These updates are meant to be under Apple’s control and to be as invisible as possible.

[1] https://derflounder.wordpress.com/2014/12/27/managing-automa...


Thank you!


softwareupdate --history --all | grep MRT

MRTConfigData 1.45 2019-07-11, 13:10:59 softwareupdated


This means there might have been another side to this story: Zoom's change of heart might have been forced by Apple, not the public backlash.

Apple: Hey, your app poses a threat to macOS security. We're going to remove your server app with the built-in macOS anti-virus.

Zoom: Oh crap. Okay, give us 2 sprints to release a new version that removes it.

Apple: We're killing it in 48 hours.

...

Zoom, after an all-nighter: HEyyy users, we have a patch for youu


I've been on the Catalina beta since the week of WWDC, and Zoom hasn't worked for me until the update today.

The loading modal would come up, but the app window would never open and I would have to force kill the app entirely, since I couldn't close the modal.

I suspect that Apple had already closed the loophole on Catalina, which is why it wasn't working.

So they had probably noticed it weeks ago.


I believe the issues running Zoom were due to tightening of security in Catalina for screen capture.


I had the same experience on Catalina, couldn't launch. Explains why I couldn't reproduce the bug from the Medium article too. Uninstalled it before the update.


HN 2 hours since story broke:

  rm -rf ~/.zoomus


That doesn't solve the problem because the webserver continues to run even if you have uninstalled Zoom.

Source: https://medium.com/bugbountywriteup/zoom-zero-day-4-million-...


Quote: "Then you can delete the ~/.zoomus directory to remove the web server application files."


You'd still have to kill the PID.


Wild speculation.


Wouldn't say so.

The people who found the vulnerability were at least talking to the people at Chrome and Firefox. Quote from their blog post (https://medium.com/bugbountywriteup/zoom-zero-day-4-million-...): "Apr 10, 2019 — Vulnerability disclosed to Chromium security team.

Apr 19, 2019 — Vulnerability disclosed to Mozilla FireFox security team."

So it's not entirely unrealistic that the other browser manufacturers also got a notice.


More like a good joke


But highly plausible.


That's pretty epic. Apple continues to make big, brave moral gestures (like when they yanked Facebook and Google's enterprise certs earlier this year, or killed long-term tracking cookies in Safari overnight).

Makes me happy to be a customer. Hope they keep enforcing their own rules and protecting their users' privacy and security in this fearless manner.


I don't think disabling the enterprise certs was particularly moral, Facebook and Google were flagrantly violating the terms of the enterprise program. Apple also apparently didn't even notice (or didn't care) until articles about it started getting a lot of attention.

Apple definitely does make some commendable decisions, but I think it's also important to distinguish between bravery and what Ben Thompson calls "Strategy Credits" (https://stratechery.com/2013/strategy-credit/):

> Strategy Credit: An uncomplicated decision that makes a company look good relative to other companies who face much more significant trade-offs.


> Apple also apparently didn't even notice

Do they have any information about enterprise apps? As I understand it, Apple never phones home with app info (such as the identifier, name, etc.) when verifying or installing enterprise-signed apps, so the only thing they know is probably the IP address requesting the verification and how often Apple devices perform it.

Considering FB and Google have many employees in all different parts of the world, it wouldn't be too suspicious to see a good amount of diversity between GeoIP regions.

Correct me if I'm wrong about what info Apple collects about enterprise apps.


As far as I can see this is correct. Even if devices are enrolled in Apple's enterprise MDM program, the administration staff are the ones who get to see which applications are installed on the iDevice, not Apple. And I really do not think they are so preoccupied with this that they want to actively scan IP addresses for suspicious behavior (of which there probably isn't any to begin with).

Anyway, I wholeheartedly agree with you here and I think Apple genuinely had no knowledge of this activity until news outlets reported on it. Or if they did, it did not make its way to the higher-ups who revoke developer certs.


Going forward, Apple will require that companies provide their enterprise apps to be audited.


I see them adding something like the macOS "notarization" requirement to iOS enterprise apps.


Indeed a company's "morals" are better exposed when it has to make inconvenient choices.


I would say true morals lead to structuring your company in such a way that you don’t have to rely on business people making ethical decisions moment to moment, because they won’t.


As nice as that sounds, I think it requires an impossibly perfect prediction of future events. You face ethical decisions whenever you have power or limited resources.


No. You can bend your business model towards transactions you are comfortable with, without perfect future vision, or even a clear strategic understanding of how that might happen.

In fact, the world around you will bend to meet your values whether you’re even aware of it. And that includes any companies you run.

The world does extend beyond your knowledge of it.


You really need to build a company that values that right down to its core. It has to be embedded so deep into the hiring process that you only select for people who share that value, and it has to be easy to let go of people who don’t fit.

Otherwise, it only takes one person to short-circuit that value to set the ball rolling on a shift towards lower standards.

You need to run a super tight ship, which I think is not as hard as it sounds until you put VC, investment, and shareholders into the mix. You at least need to be super diligent about those people you bring in who are not accountable to you, but you are accountable to them.

Basecamp is an amazing example of a company that has succeeded without compromising itself a jot. They do all kinds of things that we might consider unthinkable because they won’t budge on their values. Probably the one company I’d drop everything to work for if I had a chance at getting through their hiring process.


The same forces that require several levels of management make it increasingly difficult to enforce ethical decisions. Basically, when no one person can keep track of all the moving pieces, you get splits around what individuals think is acceptable behavior. The larger an organization grows, the more things tend to diverge, with different branches often having wildly different perspectives.

This tends to further degrade as new employees are added, and whatever the original vision was continues to erode over time, especially as both the times and even the business model change.


> make it increasingly difficult to enforce ethical decisions

Nonsense - this is a solved problem. You simply need to remove the ability to make defined classes of bad decisions by binding the company's future decision making capability with a Ulysses pact[1]. Cory Doctorow gave a good talk[2] about using Ulysses pacts in the tech industry.

>> It's not that you don't want to lose weight when you raid your Oreo stash in the middle of the night. It's just that the net present value of tomorrow's weight loss is hyperbolically discounted in favor of the carbohydrate rush of tonight's Oreos. If you're serious about not eating a bag of Oreos your best bet is to not have a bag of Oreos to eat. Not because you're weak willed. Because you're a grown up. And once you become a grown up, you start to understand that there will be tired and desperate moments in your future and the most strong-willed thing you can do is use the willpower that you have now when you're strong, at your best moment, to be the best that you can be later when you're at your weakest moment.

>> The answer to not getting pressure from your bosses, your stakeholders, your investors or your members, to do the wrong thing later, when times are hard, is to take options off the table right now.

This shouldn't be a problem for anybody who actually wants the moral outcome. Why would anybody insist on preserving the option to behave badly in the future unless that bad behavior is part of their future plans?

[1] https://en.wikipedia.org/wiki/Ulysses_pact

[2] https://www.youtube.com/watch?v=zlN6wjeCJYk (transcript: https://d3j.de/2016/06/24/cory-doctorow-how-stupid-laws-and-... )


They don’t insist on preserving the option, they simply can’t predict the choice.

Companies lack unified decision making. The founders can’t predict every moral choice any employee will make. And for any large organization the CEO has no idea what most of the day to day decisions involve.

Consider something as simple as an old brick company setting up an email server for the first time. That opens up a host of choices management likely had zero idea even exist.


It's easy to talk about "not preserving the option to behave badly" in the abstract, but actually doing it requires perfect foresight (predicting every way anything could be used for evil) and often a lot of implementation difficulty (because you need to completely foreclose on the bad options without impeding good or neutral ones).


Yes, and all of those are decisions you can go along with or reject. Deciding how big an organization you will join is one of many ways you apply your ethics.


That’s another issue. As an organization is viewed as less ethical, only less ethical people join, creating a downward spiral.


Hopefully the unethical organization develops a reputation for being unethical. Then only unethical people will patronize said organization. Ideally, unethical behaviour leads to marginalization, though obviously that does not always happen in practice.


Samsung are a strong counter-example. They've been caught red-handed doing all sorts of truly nasty stuff. Pretty much anything Apple has even been suggested as doing, they've been enthusiastically up to their eyeballs in, and a lot more, but they seem to just shrug off the negative press like water off a duck's back.

I think once a company develops a reputation like that, most people just get desensitised to it. Also there is a degree to which lower cost products get a pass, because hey, they must be cutting corners somewhere.


Yeah, I agree, but I think that is just a way of stating that morals are impossible to perfect.


Hmm maybe - what I have in mind is that you could run something undeniably good e.g. a hospital, and you will still face hard ethical decisions about how you handle uncertainty, apply power or allocate your resources. Doing good things just isn’t easy!


You can certainly form a company whose line of business minimizes how many ethical questions it will face; I'd consider that to be ducking out and ultimately less moral than entering a business where there are genuinely tough ethical questions that you will need to take positions on (and inevitably sometimes get wrong).


How is that even possible?


For example, choosing between an easily upgradeable, environment-friendly product and a box of glued components with no user-replaceable parts, so that customers need to buy a new item in the line sooner.


I'm not convinced that morality and self-interest are mutually exclusive. Very often the best decision for a given entity to make is a moral one.

We should still reward/praise companies who make decisions that are morally superior to their competitors, regardless of whether the morality itself was a primary motivation.


I personally believe morality and self interest strongly overlap over the long term, but short term they are largely independent conditional on the probability of getting caught...

Humans might have too short lifespans, memories and limited rationality for the long term benefit of morality to be strongly in our individual self interest though... one of the possible benefits of anti-aging and cognitive enhancement tech is it might incentivize us to be more moral all other things equal as a side effect.


Yeah, I'm not sure I attribute Apple and Tim Cook's latest stances to strong moral fortitude. I think it's more corporate 101:

1) Public sentiment is hammering companies for perceived privacy violations

2) Our business model does not rely heavily on selling user data

3) Make public statements about how much we value privacy at literally no cost to us

4) Get in a good dig at our competition at the same time


Perhaps. But I also find it easy to buy that a guy who grew up gay in Alabama could think privacy is of fundamental importance.


Indeed. But I also think that Zuckerberg, Bezos, Page & Brin all value their own privacy. They just don't value your privacy that much.


Zuckerberg definitely does; there are pictures of him with tape over his webcam, microphone taped, etc.


[flagged]


Those are very different things. The former is doing; the latter is showing.


I would be reticent to praise them quite so effusively, though I do think they're the best of the big tech companies currently. I'll be watching the development of this suit with great interest: https://time.com/5596033/lawsuit-apple-selling-itunes-listen...


I think that case is kind of a stretch. I read the complaint and there are two arguments:

1. Lists of people who have purchased [music genre] from iTunes & listened on Pandora is for sale by data brokers, and

2. App developers (they specifically call out Pandora) who use the MediaFramework API have access to iTunes library metadata that they can then collect.

I haven’t looked at Apple’s Developer Agreement recently but I suspect Pandora (and potentially others) hasn’t complied with the terms.


I do appreciate Apple's overall stance and actions regarding privacy, but I see this as a very practical action for their own self-interest. Apple has staked its reputation on privacy. The headline-level summary of this incident is that Mac users are exclusively affected by Zoom's bug/security hole. Among consumers, unauthorized access to the webcam is the epitome of modern invasion of privacy. All it takes is one Apple user to be victimized for Apple's reputation on privacy to be as much of an ongoing punchline as Samsung's exploding phones.


Why not both?


Glad to see that Cook hasn't altered this, as Apple has always had security as a focus. It was one of the reasons I got my wife using Macs years ago; I never had to support her or worry about what website had managed to install ad malware.

We recently went back to PCs, and it was immediately obvious we needed wall-to-wall antivirus protection, which was not always the case on Macs.


Never forget PRISM.


I don't think killing those enterprise cert was a moral gesture. They were just enforcing their walled garden.


From what I hear from people in the ad business, Safari is still 100% trackable even without those cookies; it's just a bit harder to set up.


Wouldn't it be EPIC if apple produced a statement saying

"We are not going to keep any data at all about you unless we are forced to do so legally. We are bound by that contract with you when you purchase our device."

Then followed that with

"We're going to make it as hard as humanly possible for anyone else to collect and keep data about you if you own one of our devices. Including both legal and technical solutions and we will sue them for breach."

As it is, we're praising "least worst", which is effing awful. Apple's excrement stinks less than some others', so eat it up!


Well that's what Apple might do if they really were in the business of being paid by customers to serve those paying customers and nobody else.

But of course Apple literally wrote the book on selling their customers as the product to third parties. They've been wildly successful at it; Microsoft and IBM look on in envy at how they've managed to get away with it.

Since it has been massively profitable for them to turn their customers into their product, they see no reason to change and I guess why should they? Profit maximisation is their business, yours and my health and welfare is only of interest in service to maximising profit. If they did anything else they might be guilty of securities fraud(!) So yeah, they can be completely horrific and still win the PR battle because others seem even worse.

I know these statements of fact are always jarring for people noticing them for the first time, especially if they quite like the machines (I do) and quite like liberal democracy and free-market economics (again, I do!), and more so because this utter hideousness is our best option right now: there is no option even remotely on the same planet as good. It is thoroughly depressing all round.


I still think Apple products are built on human rights abuses. I am currently trying to parse their most recent conflict minerals disclosure. It doesn't explicitly say "yes" but also doesn't clearly say "conflict-free" either.


Note that conflict-free sourcing at this time is so hard as to be practically impossible. Fairphone, a company and phone founded explicitly with the goal of producing a phone without conflict minerals, still isn't conflict-free, and it's not for lack of trying.

Sure, Apple has more leverage, considering their size, but that also comes with its own set of problems. Plus, their customers have nowhere to go in protest - all other phones are full of conflict minerals too.


I understand it's hard to make conflict-free computers.

I feel sick when apple says they are deeply committed to upholding human rights, while they continue manufacturing electronics, because I need authenticity. I would like Apple to use more of their resources to figure out how to do conflict-free consumer electronics.


> I would like Apple to use more of their resources to figure out how to do conflict-free consumer electronics.

I would like that as well, but I understand how that's difficult for them to do, too: making public that you're working on it also makes public the deficiencies you currently have in that area - something many consumers are not aware of, and which they may think applies only to you.

That's why initiatives like Fairphone's are good. That said, I've followed their blog [1] for a while, and occasionally they've been part of initiatives in which other phone manufacturers have taken part as well (I recall something about Nokia and Congo). I think they just don't publicise that, for the reasons I outlined above.

[1] https://www.fairphone.com/en/blog/


I’m sure you also love to complain about the problems at the Foxconn ‘Apple factory’. Which in reality builds products for all manufacturers.


I just wanted to express to rawrmaan that I felt disturbed about his calling apple brave and moral when many aspects of their supply chain don't appeal to my sense of justice.


So "everyone else does it" is a valid defense?

Apple charges $1k for their monitor stands. I think they can afford to build their stuff at a factory that doesn't use modern slavery.


I agree that Apple is incredibly greedy and hypocritical, but in the end, working at Foxconn is just another underpaid job. I think the term "modern slavery" should be reserved for people whose passport has been taken away, who are trapped on a fishing boat or in a brothel for life, who have been tricked into accepting debt etc.


So “selling your items for cheap” is a valid defense?


According to the article, "Apple said the update does not require any user interaction and is deployed automatically." There's nothing moral about using "silent updates" (updates the user has no opportunity to decide whether to adopt).

Apple certainly wasn't looking out for their users' privacy and security when they let an iTunes bug go unfixed for 3 years (see http://www.telegraph.co.uk/technology/apple/8912714/Apple-iT... for more). That bug was said to allow government spying. Apple's iPhone back door lets Apple delete a user's apps (per http://www.telegraph.co.uk/technology/3358134/Apples-Jobs-co...), but Steve Jobs said it was okay because we can trust Apple ("Hopefully we never have to pull that lever, but we would be irresponsible not to have a lever like that to pull."). Back doors aren't moral; they exist to grant another party power over the device the user bought and should own.

The root of all of this is the power of proprietary software (software the user can't inspect, share, or modify, and in some particularly restrictive cases can't always run). Proprietary software is unjust power over the user. There's nothing moral about proprietary software.


Requiring user confirmation for updating malware signatures would make them a lot less effective.

And in any case, there is a checkbox in the software update preferences labelled "Install system data files and security updates" which presumably allows you to opt out of these critical security updates.

And if you really wanted to have the zoom backdoor server run on your system, you could probably just strip the code signature and run it manually. Apple isn't stopping you from running whatever software you want on the Mac. Apple is helping all those users that don't follow Hacker News to keep their Mac safe.


>Requiring user confirmation for updating malware signatures would make them a lot less effective.

That seems highly unlikely to me. Do you have evidence to support that assertion?

On first use: "Do you want us to automatically remove apps we think might damage your system? Y/n"

Don't users need a notification, at least, to inform their choices when installing software?

I guess Apple would rather you just mindlessly relied on them, however, so anything that lets users know that Apple's system exposed them to risk is going to be avoided.


> Do you have evidence to support that assertion.

Every relative who never installs updates. I ask them why they are on an old version with major security holes that were on the news, but they just don't care. They always click "later".


You can turn it off if you don't like it. If one doesn't know enough to turn it off, one probably shouldn't be turning it off.


> There's nothing moral about using "silent updates"

Sorry, but this is absurd. Automatic security updates are a necessity. And no user reads through all the changelogs of all updated software (except on extremely critical systems).

Maybe you wanted to argue for the ability to downgrade and to disable updates?


There's no call to write in such patronizing ways.

It should be up to the user to decide whether to take on updates, regardless of what you think, because it's their computer and not yours, and you each deserve control over the computers you own. Just as freedom of speech means people will sometimes say things you disagree with, software freedom means not everyone will keep up with updates. But not offering software freedom is unethical, and neither Zoom nor Apple is distributing free software. Apple has a clear record of using the power of a proprietor to expose its users to harm (more examples at https://www.gnu.org/proprietary/malware-apple.html ) and this story is an example of how Zoom apparently does as well.

What you and other posters are tellingly refusing to address is the immorality of software nonfreedom. As I wrote before, this is the core of the issue.


> It should be up to the user to decide whether to take on updates, regardless of what you think because that's their computer and not yours and you each deserve control over the computers you own.

Which is why the user can CHOOSE to have automatic updates. Or not to. The default when buying a new Mac is that automatic updates are enabled, because that’s the product Apple wants to sell and that they believe most of their users want to buy. It’s secure, it’s practical, it’s fun.

If you want to be your own IT department you simply deactivate all or some automatic updates. If you want a secure computer and trust Apple you leave it on.

I don’t see how this is a big moral question at all. Let people organize their computing needs in a way that’s safe and practical for them, not in the way that’s safe and practical for you.


>There's nothing moral about using "silent updates" (updates the user has no opportunity to decide whether to adopt).

There's nothing accurate about this description.

The user can turn off all update checking, or use the granular permissions to just turn off silent security updates.

>To allow macOS to update automatically, go to System Preferences > Software Update, then check Automatically keep my Mac up to date. The Mac offers some more granular update options than iOS. If you click Advanced…, you see a number of options:

https://www.intego.com/mac-security-blog/everything-you-need...

If you only want to turn off silent security updates, the option to uncheck is "Install system data files and security updates".
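For the terminal-inclined, the same toggles are stored in the com.apple.SoftwareUpdate preference domain; a hedged sketch (macOS only, and the key names are as commonly documented for this macOS era, so verify them on your own system):

```shell
# Inspect the toggles behind System Preferences > Software Update (macOS only).
# ConfigDataInstall governs silent config-data updates (XProtect / MRT / Gatekeeper);
# CriticalUpdateInstall governs silent security updates.
defaults read /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall
defaults read /Library/Preferences/com.apple.SoftwareUpdate CriticalUpdateInstall

# To opt out of the silent config-data installs (needs admin rights):
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall -bool FALSE
```

Unchecking the checkbox in System Preferences should flip the same keys.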


Every browser and most other important software now does auto-updates with no user interaction. ESPECIALLY for security issues.


Apple, or anyone else, should not be silently pushing changes to my computer without my explicit consent, especially for unrelated things.

What Apple did here is also a dark pattern. We cannot commend them and normalize this behavior.

This is a dictatorial one-sided decision by Apple. What else can they do? Can nation state governments compel Apple to push stuff silently? Can this system be abused by hackers?

Why are we dependent on the good moral behavior of Apple business decision makers for the well-being of our digital lives? Haven't we learnt anything at all from all the incidents in the recent past w.r.t trust in corporate benevolence?


It's also on macOS license agreement:

"By using the Apple Software, you agree that Apple may download and install automatic updates onto your computer and your peripheral devices. You can turn off automatic updates altogether at any time by changing the automatic updates settings found within System Preferences."


It is enabled by default and if you don’t like that you can disable this behavior.

https://support.apple.com/en-us/HT204536


We still have not heard anything official from Apple. Based on other comments here, this removal happened via the Malware Removal Tool (MRT), which is itself a hidden tool. If so, then Apple needs to declare Zoom malware. For reference, Apple defines malware here: https://support.apple.com/en-in/guide/mac-help/mh27449/10.14....

On the other hand, Apple itself is guilty of not addressing a Gatekeeper vulnerability in time (it has still not fixed this bug): https://9to5mac.com/2019/05/25/macos-gatekeeper-vulnerabilit...


> We cannot commend them

> Why are we dependent

Why are you using "we"? I for one am quite happy how Apple manages Gatekeeper.


That boat sailed when Google Chrome launched 10+ years ago. Users don't care that applications come built in with the ability to download a binary and execute it without the user's permission. The market has chosen convenience over security. Yes, it's a dark pattern, and Apple uses dark patterns themselves (e.g. to trick you into updating iOS). It's going to be a monumental uphill battle to change this trend.


It's been rather disturbing to see this whole thing play out --- I'm not taking sides here, but Apple "flexing its arms" in this manner shows that it is willing and has the power to go beyond policing its App Store and such (which while I do not like, I feel it does have the right to) and involve itself in the affairs of third-party software which it did not originally install. (This is subtly different from updating things like OS files, for example. Some other comments here suggest that the installation of this update is controlled by a setting described as being for system related updates, which a user would expect to leave his/her third-party software alone.)

You may agree with its decision this time, but will you always agree? Apple's wielding of power in this way is likely to attract the attention of groups such as copyright/IP lobbyists, which have an immense desire to have all "non-authorised" files/software erased from all users' machines.

As the saying goes, "two wrongs don't make a right".

In any case, the idea of the OS/platform vendor meddling with third-party software that it doesn't like just feels wrong. I know Apple has historically held tight control over its mobile platforms, but the Mac is meant to be different.

I am not an Apple customer, and I now feel even more reluctant to become one.


lol, they removed what would be called horrific spyware if it weren't made by Zoom, and you're over here with some lofty criticism about possible implications years into the future

any OS (and many other apps) that updates itself has the power to do what you're afraid of, and much more.

plus, I don't really see a bright line between system-level software and an app when apps can access your video cam, mic, and all your files - basically your whole computer.


That's it right there. This isn't some gray area, questionable thing like that time they pushed a David Bowie song onto people's iTunes. Remember that? People completely lost their minds over it, and I agree with the sentiment.

This isn't third-party anything. No one even knew this was running on their machine and it was demonstrably abusable. Good riddance!


As one of those people who got really annoyed by it, here are two reasons why:

One: On the family iPad we had at the time, we hadn't ever uploaded music to it from the family library, because there wasn't enough space for it all. Whenever anyone opened control panel and accidentally pressed "play", something by U2 (with a not-appropriate-for-little-children album cover) would come on.

Two: It was hard to remove that darn album. I couldn't figure out how to get it to go away for the life of me.

There's also a fundamental difference between someone adding something like Clippy to your desktop (or a U2 album) and someone saying "you need to fix your stuff or you get kicked out."


a David Bowie song

It was an entire U2 album, a far greater offense.


It’s blasphemous to equate these artists.


If it was a Bowie song, people probably would've been happy.


any OS (and many other apps) that updates itself has the power to do what you're afraid of, and much more.

There's an ocean of difference between can and will.

plus, I don't really see a bright line between system-level software and an app when apps can access your video cam, mic, and all your files - basically your whole computer.

The setting ostensibly refers to the operating system, i.e. macOS, which I have no problems with Apple modifying if you've enabled that option, and which their EULA probably has a clause about. But from a legal perspective, modifying a third-party application which Apple does not own and did not install seems an overreach; unless their EULA explicitly grants them the right to do anything they want with the files of the system it's installed on, they could find themselves in legal trouble. (That notorious CFAA and the like.)


I’m quite happy with it, as I don’t see millions of people removing some hidden directory.

No more zoom for me.


The point is precisely NOT to think only about this one case, as many others here seem to be focusing (or Zoom-ing in...?) on, but to consider how far you are willing to let Apple exercise its power over your computer.

Would you let it scan all your files and delete e.g. "suspected images of child abuse" (to use an old cliche)? Suspected copyrighted material or fragments thereof? "Extremist" content, or content which is contrary to current social norms? How authoritarian does it have to get before you start being creeped out?


This is a classic "parade of horribles" argument. I do not find them compelling, personally.

If Apple starts being abusive, they'll get their hand slapped. If they don't, they don't.

There's no better company positioned to do anti-malware than the vendor of the OS itself. Which is why Apple and Microsoft both do it. You can disable updates on both platforms if, for some reason, you don't want anything to change on your system without your explicit action (pros and cons to that, obviously). But for most end users, the tradeoff of control vs. security is a very easy one, since the average user is in no way qualified to secure their own system or audit the code that runs on it.


You can take any capability and stretch it out to some absurd extreme. What if apt-get whatnot trashed your entire computer? What if buses started hunting pedestrians for sport? It's a line of inquiry that prioritizes handwringing over insight.


Has anyone ever asked bus companies to start hunting pedestrians for sport? The answer is clearly no.

Contrary to that, the demands from governments and others for tech companies to "take responsibility" and become enforcers of all sorts of perceived virtues is reaching a crescendo.

And it's not just about clearly dangerous things like child porn or terrorism. The UK government seriously demands the takedown of "harmful but not illegal" content.

Just think about that concept of "harmful but not illegal" for a moment and you'll see that the sort of overreach that userbinator is talking about is anything but "some absurd extreme".


I still have a lot of trouble seeing how Apple removing a critical vulnerability - a completely mundane act with plenty of precedent from both Apple and others - is some clarion call that, if left unheeded, will have Siri judging everyone's hentai collection next. Why this - preventing myriads of users from becoming cam-people - of all things? Why not, say, every Chrome autoupdate ever?


> I still have a lot of trouble seeing how Apple removing a critical vulnerability

You have that trouble because you are focusing on the "critical vulnerability" part and ignoring the fact that Apple decided to uninstall a program they had nothing to do with from your computer without your consent.

The intentions might be noble, the implications however are less so.


But they didn't uninstall 'a program'. The product itself was unaffected. And, again, this was done in consultation with the makers of the 'program' who had screwed up badly enough to be unable to fix the problem themselves. Nothing happened here that doesn't regularly happen when all sorts of things update.


These are unimportant details; the issue is Apple modifying people's computers silently, with the users themselves having no knowledge of or say in it.

Replace this instance with something that you disagree with (imagine Apple removing VPN software from Chinese customers due to demands from China, or "fixing" existing VPN software with backdoors that enable Chinese authorities to wiretap Chinese people) and see what the issue is here.

(whether that example would happen or not is irrelevant; I'm making it to help you see the issue in a context I think you'd disagree with Apple about, not for you to argue over whether it would happen)


You brought up the details, inaccurately, to now tell me the details don't matter. You can understand, I hope, how this starts to feel like an exercise in eel juggling.


I didn't bring up details; I explicitly said in my first reply to you to ignore the specifics of this case, i.e. the details, and to look at what happened without them.


> something something Apple, a US-based company who prides itself on privacy helping China spy on people.

Apple engineers go to China. Anything they do to help the Chinese government can immediately affect their own workers. If they did that, and a bunch of people with Apple devices got thrown in jail or whatever, their stock and moral standing would suffer serious blowback.

Google Chrome has a thing that pops up when it thinks you might be getting attacked or phished by somebody. I wouldn't mind if OS X terminated my connection and said "Hey, we don't think this is safe" to me, especially if it was something that the average person isn't likely to notice and that can cause damage to them. (Also, in China, relative to the US, the stakes for everything are generally higher: the US probably tracks you around; China for sure does that, and is actively nabbing people a lot more frequently, too.)


I already wrote

> (whether that example would happen or not is irrelevant; I'm making it to help you see the issue in a context I think you'd disagree with Apple about, not for you to argue over whether it would happen)

It is in its own paragraph. That China part wasn't meant to be debated, it was meant as an example of an event that if it happened would make you disagree with Apple. The important part of this example is you disagreeing with Apple, not the reason why.


> terminated my connection and said "Hey, we don't think this is safe"

That's not equivalent. Equivalent would be doing something the user doesn't realise is happening. The point is about user agency: keeping users uninformed and, for those who do get the information out-of-band, unable to exercise their own control over the situation.


>the users themselves having no knowledge or any say about it.

This is not true. You can disable all the automatic updates in System Preferences.
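
For reference, the same toggles can be inspected and set from the terminal. This is a sketch assuming the 10.14-era preference key names (`ConfigDataInstall` appears to be the one covering these silent XProtect/MRT definition updates); verify the keys against your own release:

```shell
# Read the current automatic-update settings (key names are
# 10.14-era assumptions; verify on your own macOS release).
defaults read /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled
defaults read /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall

# Turn off only the silent config-data (XProtect/MRT) installs:
sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate ConfigDataInstall -bool false
```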


That is a nuclear option, and the issue isn't getting updates; the issue is them being silent and not offering any control over that. See my other replies about Windows Defender for what I meant by that.


What do you mean? They're silent because they're malware updates. You can turn those off.


>uninstall a program they had nothing to do with from your computer without your consent.

Incorrect. Users who are vulnerable to this had already decided to uninstall Zoom and that's why it was a vulnerability. Zoom had decided to ignore the user's wishes and leave their server behind so that it could re-install the software. Apple's update simply enforces the users' past decision to uninstall the application.


The problem is that Apple appears to have made an exception to its own rules in this particular case. If I understand correctly, they used a first party system update mechanism to change third party software.

It's like Google making an ad hoc decision to use Chrome autoupdate to silently patch a particularly bad vulnerability in Microsoft Word just because they can.

So what is the principle behind this kind of exception? It's simply this: If it's bad enough, normal rules can be suspended and anything goes. It's like declaring a state of emergency. It's not normal or mundane.

Now the question becomes what is bad enough and who gets to decide what is bad enough? People will point to incidents like this and ask questions like: Why was the San Bernardino attack not bad enough for Apple to suspend its usual rules? Why can people store tons of pirated music on their Macs without Apple taking action? Why does Apple allow criminals to hide behind end-to-end encrypted messaging software?

If Apple has decided to take responsibility for the security of all third party software on macOS then they should say so. They should change the rules instead of breaking them in an ad hoc fashion.

Then we can all decide whether or not we want to hand total control to Apple (and to those who have control over Apple).


This is untrue. The update process was part of XProtect, the malware definition/signature system built into macOS that's part of Gatekeeper [1]. It dates back to Mac OS X 10.5 Leopard and was expanded in Mac OS X 10.6 Snow Leopard (the Gatekeeper GUI was introduced in OS X 10.8 Mountain Lion and backported to Mac OS X 10.7.5 Lion). Updates were historically issued via minor OS updates, but Apple started doing silent updates to the XProtect definition list a number of years ago, as a way to target popular/growing strains of malware (which were often installed via cracked apps).

There were a few instances in the last few years where the repos or built-in update systems of legitimate programs were compromised and bundled malware (and in one case, ransomware) along with their apps. In those cases, Apple also silently updated XProtect to remove the malware.

In this case, just because this was a webserver and not something more traditional like a trojan doesn't mean that it isn't still malware. The Risky Business podcast asserted, before Apple jumped into action, the existence of an RCE that it says Zoom knew about for months. Given that the only way to remove the webserver is to update Zoom (something that won't help any user who has already uninstalled Zoom, which kindly left the insecure webserver behind), this type of update makes perfect sense -- especially since Zoom itself is removing the server from its own application bundle.

This was malware, pure and simple. It wasn't third party software. It was malware left behind/included with a third-party app. It's not as if Apple removed the Zoom app -- it removed the piece of malware Zoom was including alongside its app. The fact that Zoom was including this malware as a way of bypassing Apple's access control in Safari (God forbid the user have to click a button confirming they want to open a meeting) is beside the point -- this was malware.

Additionally, users can turn off the auto system updates and they can disable Gatekeeper entirely.

I understand the broader concern of an OS maker being able to remove files a user chose to install -- but this is a very unambiguous case of malware. Just because the RCE wasn't actively exploited doesn't mean it wasn't malware.

[1]: https://en.wikipedia.org/wiki/Gatekeeper_(macOS)
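
If you want to see which of these silent definition updates your own machine has received, the install history can be queried from the terminal (same approach as the `system_profiler` command posted elsewhere in this thread; the grep pattern is just an illustrative guess at the relevant package names):

```shell
# List recently installed security-definition packages
# (XProtect / MRTConfigData / Gatekeeper config data).
system_profiler SPInstallHistoryDataType \
  | grep -B 1 -A 4 -E 'XProtect|MRTConfigData|Gatekeeper'
```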


I understand why Apple did it and the additional context you provide does change my opinion somewhat in Apple's favor, but I disagree about Zoom being malware because malware is made in bad faith to introduce functionality the user never intended to use.

What Zoom did was negligent and incompetent, but I don't see that there was malicious intent. I do agree, however, that what they tried to do is unacceptable even if implemented competently.


I think when you refuse to address a reported security issue related to something you installed (without the user's knowledge and without a way for the user to easily remove it) as a way to bypass an access-control pop-up, and insist that it's a feature, not a bug, until the public/other disclosures force you to remove it, the intent is malicious.

But even if it weren’t — and we can agree to disagree on the intent — the second the RCE is popped, it becomes a massive security issue and it becomes traditional malware. As I said, I’m convinced Apple would do the same thing if this was something left behind or associated with Java or Flash.


Malicious intent is the only thing that separates malware from a regular security issue. So if we disagree on intent we have to keep disagreeing on whether or not it's malware.

But I will admit that I'm starting to see the question of Zoom's intent a bit differently after thinking about what you have said.


Lying to users about the uninstallation is pretty icky intent. It's weird to make this about the sanctity of user choice and just repeatedly ignore that bit on top of coming up with a thoroughly inaccurate narrative about the nature of Apple's response.


I didn't ignore that bit. You didn't bring it up in your responses to me.

Instead you defended Apple fixing security issues in third party software (as I understood it without user consent) and you compared any concerns about that with concerns about buses intentionally running over pedestrians.

So apparently our debate took a wrong turn, and that wasn't entirely my fault, although I will take some of the blame.

I agree that Zoom's intent (and even more so their methods) is icky. So perhaps we should have focused on that, because I can understand the reasoning that this makes Apple's actions look far more justified than I initially thought.


It's not... it is malicious. They wanted to circumvent OS/browser behavior and user protection (the prompt to open Zoom). To hack around this they installed malware to get things done. It is exactly the same as doing something that wouldn't pass the App Store checks.

It is actually very competent of them, except for the security part.


The problem is that Apple appears to have made an exception to its own rules in this particular case. If I understand correctly, they used a first party system update mechanism to change third party software.

I don't see anything in the article that suggests this - as I read it, it pretty much says the opposite. What else have you read that outlined these rules and the exception Apple made?


The article says "Apple said the update does not require any user interaction and is deployed automatically."

As far as I know, there is no system-wide update mechanism for third party software not distributed through the Mac App Store that does not require any user interaction. So apparently they (ab)used the system update mechanism.


There is nothing in the article that suggests 'abuse', let alone rules, which is what you said. It's not entirely clear exactly what was done here, but it was probably an update to the Malware Removal Tool, which does what you can surmise from the name, without user interaction.


I care far more about the facts than about what's in the article.

Zoom is clearly not malware. It just has a bug. Is updating regular third party software documented behaviour of macOS? If so then I agree that it is not abuse. Otherwise Apple has some explaining to do.


I wouldn't call it a bug. Zoom deliberately engineered their app so it opened a security threat, accessible from any website on your browser, on your local machine without the user's knowledge. Then they reinstalled their software after the user had uninstalled it. Again, deliberately engineered that way.

That is not a bug.


Zoom's intention was not to introduce a security vulnerability. That's why I'm calling it a bug.


Their intention was to bypass an inbuilt security measure. So no, maybe they didn't mean to add a vulnerability, but they did mean to reduce the security of the system as a whole.


Sure, facts are important. You started here:

The problem is that Apple appears to have made an exception to its own rules in this particular case. If I understand correctly, they used a first party system update mechanism to change third party software.

I don't think any of these are established facts and I don't understand how you, a fellow fact-fancier, haven't acknowledged that before breezily moving on to a discussion of the precise definition of the term 'malware'.


If you read the quote carefully, you will see that I did use rather cautious language, and I did that exactly because I wasn't sure that I knew all the facts.


No, they installed an update to XProtect's signatures to say "this is malware, remove it if detected" -- something the company has done many times in the past.


No, it prioritizes thinking about clear boundaries ahead of time, while you're able to think clearly about the issues. "Hard cases make bad law" as the lawyers say. Ethicists do this sort of thing all the time.

There's a line between the OS and third-party software. There's a line between malicious software and accidentally vulnerable. Apple has just shown that it is willing to cross both those lines. Where is the line at which Apple will stop?


In what way did Apple cross a line? Platform vendors have automatically removed malware for many years. In this particular case, Apple, in consultation with the vendor, removed a particularly nasty vulnerability. The software itself was left alone. In fact, because the software was written so poorly, the vendor didn't even have the ability to address the problem - only Apple could. It's even odder to bring up ethics - should Apple have knowingly left zillions of users exposed to this?

To make this look scary, you have to misrepresent what Apple actually did and then extrapolate to some frightening hypothetical to end up at nothing more than a risk inherent in all self-updating software.

If the position is 'all self-updating software is an unreasonable risk', fine. But at least argue that unvarnished, and I imagine to most people, extreme and impractical view instead of trying to dress it up as some novel and intricate argument about morality and creeping authoritarianism.


Let's ask a reasonable question: who wants the Zoom web server running on their system, providing a backdoor?


I didn't want my window to be broken. I still wasn't happy when my landlord came in and fixed it without giving me any notice.


You have to extend some goodwill to a company that invested millions of dollars, and absolutely critical space in its handheld tech, simply for security (I'm referring, of course, to the Secure Enclave).

Point to me any other manufacturer who has gone to those lengths to protect their users. There was no reason for Apple to develop that tech. No one else in that space did, but they developed it anyway.

Can they do all this stuff? Sure, but I don't think they will. It does not seem to be in their interest.


I'd rather put Apple in control of my computer than any other random software vendor. This is exactly why all Windows systems are full of crapware and grinding their disks out of the box.


Exactly. What is the other option? Show my mom how to use the terminal to delete hidden dot directories?

I also purged Zoom. They've blown it in the trust department. They'd better start working on a web app, because that is as much access as they'll get from me in the future.


They didn't remove the application. They removed the mechanism by which that application was circumventing user intervention (which doesn't actually remove the application). It's almost like a company refusing to take chipless credit cards to stop fake credit card use. They didn't do anything to the credit cards themselves to make the valid ones invalid. They just updated their policies to exclude the ones that could be abused.


It's not spyware; this was not something that was intended to be abused. It's insecure software, and it's very common - you're running plenty of it right now.


> It's not spyware, this was not something that was intended to be abused

Do you have a source for that? It doesn't peek around my computer a little and/or send back any telemetry? I'm being serious, I'd like to know.

I had to install Zoom in school in 2014, I ended up uninstalling it the next week and reformatted after the quarter. I’m with Apple here. It’s shit insecure non-consenting software that wastes battery 99.99% of the time.


> I had to install Zoom in school in 2014

This is a good point; we shouldn't act as though users are necessarily making an informed choice or meaningfully consenting to all the software that's on their computers. Lots of people are forced to install software at economic gunpoint (and probably can ill afford a separate computer to isolate it on).

You can't depend on users and the marketplace to select against insecure software. The market is too distorted to function that way; the people forcing others to use shitty software are often isolated from the consequences themselves, so there's no effective feedback loop to stop it. Having the OS vendor step in is really the only good solution in the short term.


IIRC, part of the functionality included silent background updates from a domain that nearly expired, and was only renewed when pointed out to them during the discovery of this.


Thanks, that cuts through a lot of the fluff.

The part that freaks me out is you can’t uninstall it.

“The undocumented web server remained installed even if a user uninstalled Zoom.”

I'm not sure if this is common. When Sony got caught with their XCP rootkit (I'm not sure if they called it that at the time), you had to fill out an "uninstall request" form on their site with your email and location [0]. I'm not sure if the uninstaller fixed the vulnerability.

So maybe "rootkit" might describe this, if the vulnerable webserver is privileged. In Sony's case, the side effects were unintentional (though their history with DRM is egregious). I think Zoom is just a polluted MVP in production.

[0] https://web.archive.org/web/20051104044919/http://cp.sonybmg...


It's not spyware, but it's user-hostile, insecure, undocumented software that can't be uninstalled by the usual process.

Most of the insecure software that I run has enough grace to not silently leave behind a web server to automatically re-install itself after I dumped it in the trash can.
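
For anyone who wants to check their own machine: the disclosure reported the hidden server listening on localhost port 19421 under the process name ZoomOpener (treat both as assumptions taken from the write-up, not something I've verified):

```shell
# Is the leftover ZoomOpener web server still running?
lsof -nP -i :19421    # port reported in the disclosure
pgrep -lf ZoomOpener  # daemon process name

# If it is, the cleanup commands circulating in this thread:
pkill ZoomOpener; rm -rf ~/.zoomus; touch ~/.zoomus && chmod 000 ~/.zoomus
```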


You have no idea what "most insecure" software does; if you did, you wouldn't be running it. Lots of insecure software fails in spectacularly unexpected ways.

https://www.macworld.co.uk/news/iphone/facetime-bug-hack-369...

This is a horrifying bug. Is FaceTime malware? Or do developers with earnest intentions sometimes write buggy code?


Buggy code is common, silently installing and running a web server is not.


The webserver is an implementation detail. It's not uncommon for desktop application software to use native code to provide UX enhancements that are not possible within the browser alone. If the server had been designed securely it would not be an issue; it would just be leftover cruft, which is also pretty common.


This is effectively a press release to the Russian and Chinese governments that Apple could unilaterally remove all VPN software from Macs in a territory if they were compelled to.

If you don't think that's scary then you're just incapable of long-term thinking.


What? The blowback from that would be huge. Why would they do that?

Yes, they can do that. So can Microsoft with Windows. So can Google with Android. Will any of them do that? Hopefully not, at the very least. Will all of them do that? Probably not- there's money to be made being the last company standing that actively protects privacy.


> What? The blowback from that would be huge. Why would they do that?

For the same reason Apple removed VPN apps from the Chinese iOS app store and multiple news organisations. Because they feel that the money from China is worth obeying oppressive regimes. I really don't think the backlash would be any worse.


There are reports there was another RCE that Zoom didn't/wouldn't fix. This is what Gatekeeper and the built-in anti-malware engine are supposed to do -- remove malware. If you don't want this feature, you can turn it off, but this is a sane default and a good thing.

Apple didn’t flex anything here, it removed malware from its users computers.

https://twitter.com/riskybusiness/status/1148824808236318721


If "malware" is going to include any software with security bugs, then Gatekeeper should just rm -rf the whole drive.


Zoom was literally reinstalling itself after being uninstalled. It's basically the definition of malware.


I was waiting for this comment.

> You may agree with its decision this time, but will you always agree?

Yes, I will. At least I am not going to lose sleep over it until Apple does abuse that power.

I am actually even more happy to be an Apple user knowing that the mothership said hell naw to the naw naw naw naw to this horseshit Zoom has been pulling.

If I were Apple, I'd be taking this as a personal slight against my entire user base.


> At least I am not going to lose sleep over it until Apple does abuse that power.

What if Apple abuses that power in ways that not everyone sees as "abuse", yet they are affected by it?

The reason you are not already seeing this act as abuse is that you happen to agree with it. What if you didn't agree? What if you were in the minority? What if the reason you were in the minority was that the majority simply didn't have the necessary understanding, experience and/or knowledge to see the issue you see?

If something can be abused, it will be abused; it isn't a matter of if, it is a matter of when. And with that in mind, it is better to try to avoid the abuse than to wait for it to happen and see what you can do after the fact.


This same line of reasoning could apply to any Windows or Mac OS update that disables and removes known viruses and malware.

Is it appropriate for Microsoft and Apple to push updates that disable and remove those from infected computers? If so, what is the significant difference?


No, and there is no difference, because "infected" in their eyes is not necessarily so in the user's. To give a clear example, AVs have had a long history of false-positive problems with things like cracks, keygens, and demoscene productions. Even then, for the most part (though I have not used a persistent AV for a long time), I believe they still tell you first and then let you decide what to do when they find something.

I remember many years ago when the first widespread worms for Windows started circulating. All MS did was publish news and a removal tool. It was publicised greatly, but the ultimate choice was left to the owners of the computers, and that's how it should be.

All the big tech companies (and even a lot of the smaller ones) are becoming increasingly authoritarian, and that's the most concerning thing about this.


Windows Defender doesn't remove stuff silently: it shows you notifications, provides control to exclude parts of your computer you know can misfire, and lets you revert the actions it takes.


Apple has had this ability for YEARS. They have not abused it yet.


> but Apple "flexing its arms" in this manner shows that it is willing and has the power to go beyond policing its App Store and such (which while I do not like, I feel it does have the right to) and involve itself in the affairs of third-party software which it did not originally install.

I love this. This is why I'll keep buying Apple.


> You may agree with its decision this time, but will you always agree?

To be honest I'm kinda sick of this argument. Someone brings this argument up _every single time_ a tech company takes action against something malicious. It's a strawman argument at best and at worst a way to give people an out on acting against something that could harm the user.

> Apple's wielding of power in this way is likely to attract the attention of groups such as copyright/IP lobbyists, which have an immense desire to have all "non-authorised" files/software erased from all user's machines.

This will never happen.


How is it a strawman? It's not even hypothetical; has everyone already forgotten the constant dramas from the era in which Steve Jobs insisted iPhone apps would be banned if they weren't of "very good quality" or whatever the BS wording was? And when he went on his personal moral quest against porn?

This seems like overreach to me. It's annoying, but it's not like the Zoom app was silently letting people watch me for hours through my webcam without anyone noticing - the app opens a full-screen video-sharing GUI, for goodness' sake. Is being joined to a VC without wanting it when I click a link annoying? Sure. It also serves the attacker no real purpose and thus has never actually happened in the wild. It's also easily fixed. This is a storm in a teacup.

Moreover, it seems from the last discussion of this on HN that video-call firms do this for a good reason: lots of users get confused by bad Safari permission GUIs and end up locking themselves out of the app by cancelling the URL-open prompt without thinking (which is apparently persistent!). Then they can't join the call. So the only reason these firms are using such a bad workaround to begin with is that Apple screwed up their user interfaces: why is this not on Apple to fix?

This appears to send a message to Mac devs that a single troublemaking blogger can cause Apple to kneejerkingly nuke features in your app overnight, regardless of whether you are fixing them, whether they're serious or not or whether it will result in legions of confused and stuck Mac users. Not a great message.


> Someone brings this argument up _every single time_ a tech company takes action against something malicious.

The argument isn't about taking action against something malicious, the argument is about the implications of being able to take that action and what sort of power the company has and whether it should have it in light of past abuses (not necessarily by that company, but this is totally irrelevant since companies are made up of people that come and go; they do not have a single "brain" or morality).

> This will never happen.

You cannot guarantee that.


> > This will never happen

> You cannot guarantee that.

Can you guarantee that it will happen within the next, say, 5, 10, 15, or 20 years? Somewhere within there, the devices we're currently using will likely be replaced, and the landscape will have changed.

What I personally care about, privacy-wise, is the present and near future- will my family be safe on the internet with what I've set up for the next five years? Probably. Will the computer I'm using to type this reply on be replaced within a decade? Probably so. Will my family get a new PC within the next ten years? Yes.

Can you 100% guarantee that the Government of the United States will be intact in twenty years? No, the threats from Russia and China (both nuclear countries) and North Korea (armed or not, they're still dangerous), and space asteroids and epidemics and terrorists and politics and civil wars are not zero.

Can you 100% guarantee that California won't sink into the ocean in 100 years? That would make for some really bad real estate investments, yet people still buy and sell and build there.

People are still living in California, trading with each other, the US Government is still stable, and Apple is currently upholding and protecting user privacy. Also, we still have electricity and the internet. Now is a great time to be alive.


And this sort of widespread short-sighted behavior is why over time we lose nice things that in the past we took for granted.


> Some other comments here suggest that the installation of this update is controlled by a setting described as being for system related updates, which a user would expect to leave his/her third-party software alone.

The setting is called "Install system data files and security updates": it's not just system components.


I suppose that depends on the application of the transitive property of English grammar: "Install (system) (data files and security updates)" or "Install (system data files) and (security updates)"


Removing a malware app is installing a system security update.


Would you happen to have any examples outside of this particular update where an OS security update goes and changes something that the OS didn't itself create? Genuinely curious if this is a thing. The word "system" can be scoped any which way (see system of systems) but typically means "part of the OS" in this context.


Microsoft does this monthly with the Malicious Software Removal Tool which comes down as part of Windows Update.


Isn't it a security update of a system data file (the malware removal tool configuration)?

And it is a security update meant to annihilate a serious malware threat, not to mess with legitimate third party software.


I'm so glad that Apple pushes updates to protect me as a user. Why would I want to be vulnerable to unwanted webcam access?

I am an Apple customer, and I now feel even happier about being one.


> Why would I want to be vulnerable to unwanted webcam access?

You shouldn't. But at the same time I do not think it is a good idea for Apple to be able to silently remove arbitrary applications they had nothing to do with from your computer.

At the very least they should ask the user about it or quarantine the software and inform the user about it. AFAIK this is what Windows Defender does when it finds malicious files.


I agree that this entire thing has been disturbing. Leaving behind a web server after you perform an uninstall by Zoom is unbelievable in my opinion. It's not even the fact that there was a vulnerability that makes me angry, it's the idea that you say uninstall, and it knowingly leaves a web server running on your machine.

Then Apple can push a silent update to simply kill software on your machine which, as I understand it, wasn't installed through the App Store.

In this case I may be happy that it's no longer running, but the whole thing is disturbing. Looking at the Security & Privacy settings on my MacBook, I see nothing about running any anti-virus or anti-malware. The closest setting I can see that might cover this is under Software Update, where I have the option to automatically install system data files and security updates.

It's kind of a stretch for me to consider the ability to kill some software Apple might construe as malware at anytime the same thing as a "security update". To me, a security update would patch Apple code which had a vulnerability.

Where do I tell Apple to whitelist software in the future they might not like which I've chosen to install not going through the App Store?

It's actually news to me that I'm running Anti-X on my Mac, I didn't think I was.

Considering the fact that I have to learn new places for all the buttons every time Microsoft gets bored and changes things for the "better" I'm really disappointed.

System76 is looking better and better.


I'm fine with this. I can always leave Apple and go to Linux if they do something drastic.


After years of pouring thousands of dollars into Apple's coffers you expect that Linux will still be there when you need it...

Real principles would involve switching now.


a) (s)he did set out a principle: that (s)he'd switch if Apple did something (s)he found personally egregious. Isn't that a sufficiently reasonable principle to follow?

b) Linux seems pretty solid, no...?

c) I'd argue that a better position might be 'support Linux now, in case you need it in the future'.


I like Apple products and software. Why would I switch?


I find it pretty hard to make sense of an argument that frames the removal of trivially exploitable, extremely privacy-violating malware from users' systems as a 'wrong'. The software itself continues to work and was minimally affected, if at all. If quickly intervening to protect your customers' privacy from an egregious threat is wrong, I want all my vendors to be wrong.


Given that it exposed Apple's users to possible privacy violations and Apple's actions didn't remove any function from Zoom (it operated fine, if installed when this action took place), I'd say I'm happier as an Apple customer.


They worked with Zoom to kill the zombie servers which were left behind after Zoom was uninstalled. Not really flexing. Zoom accidentally created malware, and Apple killed it using the same mechanisms they would use to kill other malware.


That's being pretty kind to a company that bypassed a system setting designed to respect user permissions by writing an always-running, insecure web server, then refused to remove that web server, which would even reinstall the app after you removed it.

Oh, and a company that defended the insecure web server up until the moment the public outrage exploded and/or the RCE it had willfully ignored was about to be revealed.

Oh, and a public company at that, one that's trying to convince businesses to use its product as their primary video chat system.

Apple worked with Zoom insofar as Apple cleaned up Zoom’s mess because of Zoom’s poor/unethical software practices.


How do you “accidentally” write software that can reinstall itself?


Logically this suggests not that Apple should stop removing malicious software but perhaps should have user visibility and control over the process.

For example, if you classify potentially unwanted programs, from annoying toolbars to ransomware, on a scale of 1-3, it might be reasonable to provide a checkbox letting the user choose between being warned of a harmful program and given the option to uninstall, or having removal happen automatically for non-critical cases.

If the default is on then 99% of users will be protected.

Arguably stuff like ransomware shouldn't be optional lest the malware set the option.


Doesn't Windows distribute automated tools via Windows Update from time to time to remove malware _and_ PUPs (Potentially Unwanted Programs)? I'd argue the Zoom web server qualifies as a PUP, since the user _uninstalled_ Zoom and the web server can _reinstall_ it.


This sort of mentality means that nontechnical users will never be protected. Someone needs to take an active role in security and this kind of action is part of it. If the app developer is skirting the issue, Apple should be the company to step up. Who else will solve an issue on this scale? The developer should have stepped up and fixed it; instead they chose to dismiss it.

Apple made the right call for this instance, especially after the completely insufficient excuses given by Zoom’s CIO.


I'll never understand this argument:

> Apple's wielding of power in this way... the idea of the OS/platform vendor meddling with third-party software...

This is THEIR app store for THEIR operating system. Why in the world would they not be allowed to control their software's features or third party integrations? It reminds me of the ridiculous argument over Windows setting IE as its default browser (and I've been a web dev since the late 90s).


> This is THEIR app store for THEIR operating system.

On MY computer.

> Why in the world would they not be allowed to control their software's features or third party integrations?

Because it is not THEIR computer but MY computer.


> On MY computer.

But Apple didn't install macOS on your computer. You chose to use THEIR platform.


Actually they did, as macOS comes preinstalled; however, this is irrelevant. The computer and the software running on it must be under the control of the user, otherwise it is the user who is under the control of the software.


This isn't about the App Store, I thought I made that clear in my original comment. While I disagree with the decisions it makes, I think Apple certainly has the right to control its App Store.

> It reminds me of the ridiculous argument over Windows setting IE as its default browser

What do you mean by that? Instead you reminded me that saying "$our_competitor's product is not secure, so we've helpfully removed it and recommend you use $our_equivalent instead" is likely to run afoul of antitrust laws.


I'm actually quite happy to have Apple police its App Store and offer patches that eliminate security holes and malware. There's nothing wrong with Apple trying to make your computer safer and less prone to exploitation, in my opinion.


How do you feel about Windows Defender?


Windows Defender will not silently remove the software completely; it will quarantine it and make sure you know about its actions (I mean, it even makes sure you know it exists when there are no issues, via its constant "Windows Defender scanned your system and found no issues" notifications).

At this point it is up to the user to decide what to do, and most non-technical users will leave it at that (and won't know what else to do), which should keep them safe.


The same way I feel about antivirus in general: they have tons of false positives that turn them into an effective form of censorware (cracks, keygens, demoscene stuff, Hello World programs[1][2], etc.) and are very unlikely to actually protect you from a 0-day. If I happen to have a file I want to run but feel suspicious and can't be bothered analysing it myself, which is a very rare case indeed, I upload it to one of the services that scans it with a dozen or more AVs to see what they think. Even then I will not use my main OS to try it first.

That said, in the context of my original comment, AVs are a bit of a special case because their sole and expected purpose is to detect and remove software they don't like.

[1] https://www.csoonline.com/article/3216765/security/heres-why...

[2] https://stackoverflow.com/questions/22926360/malwarebytes-gi...


Or the Malicious Software Removal Tool distributed and run regularly by Windows Update?


I think the difference here is that Apple explicitly does not have an "anti-malware" configuration section in the macOS System Preferences. There's configuration for automated system updates, which most technical people understand to mean security patches and things equivalent to Windows hotfixes and service packs.

I am with the "two wrongs don't make a right" people here. Zoom was reckless and their casual disregard for the initial security report left a very bad taste in my mouth. I'm now highly unlikely to use one of their products willingly. But having Apple initiate unattended removal of software that a person willingly installed on their own workstation machine is also unacceptable, unless they specifically opted in and checked "enable" on something that is very similar to windows defender.


If Apple removed a piece of ransomware that you installed would you have a problem with that?

Do you think anyone would have installed Zoom if they knew that it would allow any random website to activate your camera?


No, and I'm 100% in agreement with the need for it to be removed, it was clearly malware.

The question I see is really that Apple doesn't inform its users of the existence of this feature, unless you really search for it. Having something as simple as a functional-equivalent to Windows Defender with its own icon in Control Panel, which is fully enabled in the default operating system installation, should be sufficient.

Personally, unless I am specifically aware of the existence and enabled status of some anti-malware application, I don't think it's a good precedent to set for operating system vendors to start silently removing software from peoples' machines. Really all it should take is apple making people aware of the feature's existence.


While I agree with you that I wouldn't want any such thing operating silently on my own machine, I increasingly wonder if it's not the sensible default for a lot of people.

For a bunch of my family members, even simple errors mean almost nothing to them. They'll stop what they're doing and wait for help even on an error that (it seems to me) they could have simply read and addressed themselves. They've never examined the system tray, and dismiss any popups that come from it. Making them aware of systems like this only serves to confuse, because they don't really understand the problem it's addressing in the first place.

Machines for power users aren't going away. There are more operating systems than you can shake a stick at, and the number keeps growing. But for a lot of users information can be paralysing, and I wonder if having a strongly managed and simplified system akin to a phone isn't a better idea.


One could speculate about whether that is a conscious decision, since it would weaken the illusion that only Windows gets malware.


> Do you think anyone would have installed Zoom if they knew that it would allow any random website to activate your camera?

Yes, I'm quite confident that millions of "normal users" would still have installed Zoom knowing that.


> There's configuration for automated system updates, which most technical people understand to mean security patches and things that are equivalent to windows hotfixes and servicepacks.

Microsoft do exactly the same, and have done for over a decade now:

> Malicious Software Removal Tool is a freely distributed virus removal tool developed by Microsoft for the Microsoft Windows operating system. First released on January 13, 2005, it is an on-demand anti-virus tool ("on-demand" means it lacks real-time protection) that scans the computer for specific widespread malware and tries to eliminate the infection. [...] The program is usually updated on the second Tuesday of every month (commonly called "Patch Tuesday") and distributed via Windows Update, at which point it runs once automatically in the background and reports if malicious software is found.

https://en.wikipedia.org/wiki/Malicious_Software_Removal_Too...

> Having something as simple as a functional-equivalent to Windows Defender with its own icon in Control Panel

MSRT is independent from Windows Defender.


How well I remember everyone abandoning Microsoft Windows back in the Windows 2000 era when Microsoft first started using Windows Update to push out their monthly Malicious Software Removal Tool, he said sarcastically.


[flagged]


Could you please not slide back into breaking the site guidelines like this? I don't want to ban you, but you've done it repeatedly recently.


Stop pretending this is a conversation, I have no say here...


[flagged]


There is only Gentoo. So much control, so free. I've had to manually approve licenses to install fonts. A few other trade-offs, but I control and own all the bits.


It's weird to call Linux "safe" because it lacks an optional anti-malware service.


If you can't see a difference between deleting files not authorized by the person who bought, installed, and uses the OS, and deleting files not authorized by third parties, I don't know what to tell you.


Well said. Apple loves to flex muscle as a show of force - virtue signaling - when there is some great drama going on. It is terrifying to know they have covert remote root code execution on Macs and iPhones at all times, which they may use without your consent or even knowledge. To me, that is a greater security risk, and the reason I will not use their products in favor of open source operating systems only.

It saddens me to see how soon people forget that Apple participates in PRISM surveillance and has literally been criticized for human rights violations in China, yet invite them to control their computers. I expect people will down-vote this comment, but expressing my disgust for such an authoritarian world view is more important to me, and should be recognized as such by the hacker community. Unfortunately it seems too many of us will sacrifice autonomy and control for the promise of centrally-administered, ominous "security" working in the shadows.


The RCE is a given for anything that has automatic updates enabled, and IMHO isn't really the focus of this issue; the main concern is with the scope of what they are allowed to "update", and the legal ramifications thereof.

Yes, Apple does have the power to change every bit on your hard disk if you let it. Things like EULAs are supposed to govern to what extent they can use that power.


> Yes, Apple does have the power to change every bit on your hard disk if you let it. Things like EULAs are supposed to govern to what extent they can use that power.

Please help me understand this. Where in the EULA or elsewhere have users allowed Apple to remotely install a U2 album, or delete third-party software? What of the fact the EULA itself can change at any time without any posted notice? Or worse, what if Apple is forced or breached to deliver a "security update" which steals personal information or bricks your device? Why is this not a legitimate concern to technical professionals? Or, where is the documentation which at least explains this behavior to put curious minds at ease?


It's been really interesting to see how quickly the original Zoom response of "there's nothing wrong with this, everybody does it" ended up being reversed.

I wonder if there's a known exploit for the Zoom server specifically, or if Apple discovered one while looking into it. It seems strange for them to go to these lengths in this case when it sounds like other software has been using a similar technique too. Maybe it's just the reinstallation aspect that makes Zoom's case exceptional?


It was the combination of the vulnerability in the Zoom client combined with this behaviour in the web server:

"Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day."


In the news segment of this week's episode of Risky Business[0], one of the hosts mentions (starting around 3:40) he has some information that there was an RCE disclosed to Zoom back "some months ago". He further says that @Jlleitschuh (the person reporting the web server issue earlier this week) got 90% of the way to finding it. So...yeah, speculation only, but maybe Apple became aware of this and dropped the hammer.

[0]: https://www.risky.biz/RB547/


They also discuss a case where a user uninstalls Zoom but does not remove the web server, remaining vulnerable forever, because the fix from Zoom will never reach them. That explains the Apple involvement.
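A user in that state can also clean up by hand. This sketch is based on the commands quoted from the macworld article earlier in this thread; the `ZoomOpener` process name and the `~/.zoomus` path come from that report, not from independent verification:

```shell
# Manual cleanup of a leftover Zoom web server, per the macworld commands
# quoted earlier in this thread. ZoomOpener / ~/.zoomus are taken from
# that report.
pkill ZoomOpener 2>/dev/null || true   # stop the daemon if it is running
rm -rf "$HOME/.zoomus"                 # remove the hidden app directory
touch "$HOME/.zoomus"                  # replace it with an empty file...
chmod 000 "$HOME/.zoomus"              # ...that nothing can recreate or write
ls -ld "$HOME/.zoomus"                 # verify: a zero-permission plain file
```

Leaving a zero-permission file in place of the directory is what blocks the "reinstall without interaction" path: the opener can no longer write its payload there.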


After that PR spin in response to threat disclosure in the original article, I am very skeptical of Zoom's PR machine.

> Zoom spokesperson Priscilla McCarthy told TechCrunch: “We’re happy to have worked with Apple on testing this update. We expect the web server issue to be resolved today. We appreciate our users’ patience as we continue to work through addressing their concerns.”

Yeah, I bet.


Yeah, I'm quite curious now if Zoom's reversal and decision to remove the server was because Apple informed them they were going to forcibly remove it like this.

It looked like they decided to remove the server themselves (or at least, as a response to pressure), but maybe they didn't actually have a choice at all.


Yeah, exactly. Software you've uninstalled gives away permission to use your camera to a remote web page? That's malware, and would get most apps banned permanently.


I got all my colleagues to uninstall Zoom and related artefacts. I'm sure I wasn't the only one. They probably have stats that show this. Too late for me. We will just never use it again unless all other improving options suddenly fail (which is highly unlikely). This isn't because I hate Zoom or and annoyed at their response, it's because it's too late. I've uninstalled it and most likely won't have a reason to reinstall it ever. Reverse inertia.


Other software used similar techniques for starting calls, but I believe that Zoom is unique in that calls can turn the camera on without any user interaction.

Zoom also didn't reverse their decision until there was a huge amount of public backlash.


>I wonder if there's a known exploit for the Zoom server specifically

There is, though it's not public yet.


I always wondered why the zoom app required root permissions, which is why I never installed it in the first place. What would a video conferencing app ever need root permissions for?! Now we know: a backdoor.

Thank God for Apple laying down the law. This is why I happily pay premium prices...


I can't verify since I don't have it installed but I see no reason why this webserver would need root permissions. If it's asking for root it must be for something else.


Quite a few apps ask for root during installation. But now you have me wondering which apps ask for root and which don't. Would be neat if there was a huge app registry website that could show this. Name and shame the ones that ask for root..


Let me introduce you to the nice folks over at Objective See.. https://objective-see.com/products.html

They have a bunch of cool little apps (that are free) like BlockBlock that let you know when things are happening you wouldn't have otherwise allowed.

For example, BlockBlock warned me randomly about 30 minutes ago about an app that was being silently installed in the background.. something I hadn't seen before called MRT.app.

Turns out - that was Apple silently updating the OS to protect against Zoom. Wouldn't have known if it weren't for these apps.


> Let me introduce you to the nice folks over at Objective See.. https://objective-see.com/products.html

The "nice folks" at Objective-See is Patrick Wardle, a former NSA rootkit expert who would like nothing more than to install various closed-source components on your computer.


I'm not necessarily a fan of Patrick Wardle, but his software is open source: https://github.com/objective-see/


Some software components are open source, others are not.


That’s funny. Windows’ own tool for these matters is called... MRT.exe


Maybe it stands for Malware Removal Tool on Windows as well.


IIRC "Malicious Software Removal Tool" but close enough I guess.


Wow, neat stuff. Several machines ago, I used to run Little Snitch, but never got past the trial. At some point I searched for free alternatives but didn't find anything. LuLu seems to be very similar, so I'm giving it a try!

Funny how both companies have "objective" in their names.


Dropbox is one. It always prompts me for my password whenever it launches. I always cancel. It works perfectly without it.


They ship a pkg installer package that pretty much always elevates, even if it doesn’t actually need it. While it could in theory install as the user, I don’t even think the Installer app supports this configuration anymore (at least last time I tried I wasn’t able to).


Maybe an improved ability to remain persistent after uninstall, and to reinstall the app?


> If it's asking for root it must be for something else.

Occam's Razor: asking for root was probably just the path of least resistance for the Zoom developers.


Huh? Why is it ok for Apple or anyone to do silent installs on my computer? As a customer, why am I getting this information from YC/Techcrunch and not Apple?

What else have they pushed like this? Is there a transparent log? Can we verify if their track record is clean? How many times have they silently broken and fixed their own things? How do we know they won't abuse this?

Isn't this the same dark pattern that we criticized zoom for? Did I consent to Apple doing silent editorial changes to my system?

For having exercised this editorial privilege, will Apple take accountability for every thing that is done by every app on my computer?

It seems like we are being slow-boiled into accepting outrageous things as normal.


I don't want to come across as confrontational, but I find this kind of response exhausting. I do not want control of everything on my computer. I don't have time or expertise to decide on whether to accept each and every security update, particularly ones that involve a web server which was installed by stealth and which isn't removed when the app is uninstalled. I want to outsource these kinds of decisions to people more qualified than me, and if I don't have to pay extra for those people (beyond the extra expense of buying Apple products) all the better.

If you want complete control over your computer you have the choice of getting yourself a PC with some flavour of *nix, and combing through each update as it comes. I really don't like the future of Apple that you seem to want. Apple has made missteps, sure - like that idiotic U2 album - but I actively want things like this to happen, and I imagine the vast majority of Apple users do too (if they actually ever think about it).


Do you recognize any middle ground between the current state and your extreme scenario of poring through every update?


Honestly, for my purposes (I'm technical director of a digital media agency), not really. I install updates automatically (apart from first-version releases of macOS), and appreciate the gatekeeping aspect of the App Store for everyday software.

When it comes to public-facing servers that we run it's a different matter of course, but then I'm performing (and delegating) the same task that I want Apple to perform for my MacBook Pro.


It's the same response as the one when Dropbox launched: "It's trivial to spin up a server, set up a file sync, blah blah blah, why do we need Dropbox?"

I'm the same as you. The whineyness exhausts me. Let the market decide. I just want shit to work.


> What else have they pushed like this? Is there a transparent log?

Yes.

    sh-3.2# softwareupdate --history
    Display Name                                       Version    Date
    ------------                                       -------    ----
    Safari Technology Preview                          87         07/10/2019, 21:40:18
    Gatekeeper Configuration Data                      171        07/03/2019, 14:00:23
    Safari Technology Preview                          86         07/02/2019, 01:27:12
    MRTConfigData                                      1.42       06/29/2019, 11:52:13
    Gatekeeper Configuration Data                      170        06/29/2019, 11:50:33
    Safari Technology Preview                          85         06/13/2019, 14:48:10
    Safari Technology Preview                          84         06/10/2019, 00:51:57
    TCC Configuration Data                             17.0       06/05/2019, 07:04:21
    Gatekeeper Configuration Data                      167        06/04/2019, 04:17:26
    Safari Technology Preview                          83         05/30/2019, 19:48:10
    iTunes Device Support Update                                  05/15/2019, 16:27:15
    Safari Technology Preview                          82         05/15/2019, 16:27:15
    macOS 10.14.5 Update                                          05/15/2019, 16:27:15
    Gatekeeper Configuration Data                      166        05/14/2019, 02:36:07
    Safari Technology Preview                          81         05/03/2019, 00:47:53
    MRTConfigData                                      1.41       05/02/2019, 06:36:59
    XProtectPlistConfigData                            2103       05/02/2019, 06:36:37
And the list goes all the way back to when I bought my Mac. Among the list above, only the Safari Technology Preview updates and the macOS 10.14.5 update are initiated manually, as far as I can remember.


Apple has done silent updates for Gatekeeper, the macOS code-signing/file-quarantine/light anti-malware framework, ever since Yosemite. These are done silently unless you explicitly disable all updates, and I think they only show up as visible items when you manually list updates using the softwareupdate command-line tool. AFAIK these are just config files and hash databases or similar.

You can view the history of these installs by running softwareupdate --history | grep -E "^MRT|^Gatekeeper"

I see an average of 2 updates per month.

FWIW Apple can also mark something as visible (but install automatically) with a certain config file key. A blog mentions this as being done for patching the NTP bug from a while back:

> Marking these updates as ConfigData cues the App Store to not display these as available software updates in the App Store’s list of software updates. These updates are meant to be under Apple’s control and to be as invisible as possible.

> Meanwhile, an automatically installed software update like OS X NTP Security Update 1.0 shows up as a normal software update, but has extra keys in its catalog listing to mark it as a critical update whose automatic installation is set to occur as soon as possible.

https://derflounder.wordpress.com/2014/12/27/managing-automa...


> These are done silently unless you explicitly disable all updates

You can turn off silent security updates from System Preferences without affecting software update checks for other components.


It seems that unticking this checkbox causes those updates to be disabled completely (and not just converted into visible updates in the Software Updater that you have the option of installing, like I thought it would). I had that option unticked and my last "MRTConfigData" update was a year out of date.

The option is called "Automatically: Install system data files and security updates" in the Software Update advanced settings.


> Isn't this the same dark pattern that we criticized zoom for? Did I consent to Apple doing silent editorial changes to my system?

Because as opposed to what Zoom did, this is a feature that you can turn off if you want in "Updates" preference pane.


There is an undisclosed RCE that prompted Apple to act. https://twitter.com/riskybusiness/status/1148819622558236673...


Someone mentioned this above, too: https://news.ycombinator.com/item?id=20407699

Makes the story MUCH worse in my opinion. An unpatched RCE that they left open until someone else got 90% of the way there and went public with it.


It sounds like silent updates from Apple without automatic updates turned on is also an undisclosed RCE - or an Apple backdoor, depending on how fine a point you wish to put on it.

Being my OS or hardware vendor does not entitle you to permanent RCE on the machine that now belongs to me.

Unless of course this is just a XProtect rules update or a Gatekeeper CRL update, then ignore what I said.


It is.


From the article, this sounds like it was a Gatekeeper change, de-whitelisting the signature, rather than an update per se.


Of note, Apple has had its own malware detection and removal system in place since the Snow Leopard / Mountain Lion timeframe. Since this article speaks to removal, it's sounding like the Zoom local server may have had its signature added to that system.


So the local server is not a regular piece of software with a vulnerability; it is now considered malware?


Beyond the unsolicited reinstallation (== malware) behavior other commenters rightly mentioned, the entire existence of the vulnerable server was a hack to work around a Safari security feature. Zoom wanted to eliminate an extra user click, required by Safari to confirm that it was OK to invoke a local application based on the public zoom link. This server was an implementation of that security "workaround".

That makes this server at least doubly malware. And "vulnerability" understates the case: a negligent implementation that utterly disregarded any security concerns should be considered beyond the pale. That times 1000 for a major software vendor like Zoom.


The server was intentionally left behind, and running, by the "uninstaller". The server would respond to requests by reinstalling the intentionally uninstalled software.

That's malware.

The server itself was deliberately added to work around a Safari security feature that was designed specifically to prevent what they wanted: allowing arbitrary web content to open an app without user consent. They literally added an always-on, persistent server to avoid a security dialog.


> The server was intentionally left behind, and running, by the "uninstaller"

FWIW, I don’t think there was an uninstaller? What I’ve seen people describe is that dragging the .app file to the trash wouldn’t remove the server, since it was installed in a different folder.

There was no uninstaller until the Zoom update this week added an uninstall option to the menu.


I think that is the point.

MacOS users expect an app that does not come bundled with an uninstaller to be "uninstalled" by dragging the .app bundle to the trash. This generally leaves behind metadata, but that is not a big deal as it is just data - not code.

Leaving behind code that continues to execute after the user has removed the application without an option to uninstall it is obviously something that never should have shipped in the first place from a customer trust perspective.


FWIW I am not sure how dragging the .app to the trash would uninstall other things installed by the app.

If I drag photoshop to my trash instead of using Adobe’s uninstaller, I’m pretty sure that leaves Creative Cloud junk running on my machine.

Now, zoom did mess up by not having a proper uninstaller shipped with their app, I think a lot of other Mac apps do fail at this too though.


The server was deliberately left running: if it was unintentional it would just error out if Zoom had been removed. Instead it downloaded and installed the client again. That’s fairly clearly designed to override the user’s attempt to remove the software, and is exactly the kind of thing malware does.

There is a real problem many (often primarily windows) apps have this bizarre desire to “install” content randomly scattershot across the OS. There is no reason to do this on Mac OS. OS X happily supports multiple binaries and services per bundle.

If you write an app that needs an uninstaller on OS X, and you aren’t needing to install some kind of driver, your app is doing things it should not be doing, and does not even need to do.


It is perfectly supported to leave daemons in your app’s bundle. There was no requirement to install the executable to a different directory. They appear to have done it to ensure the web server could silently reinstall the app after the user deleted it.


Yes, as is, dragging apps to the trash cannot do that.

Apple should probably implement an API that allows developers to tell the OS what other stuff should be removed if the app is dragged to the trash.

To prevent devs from removing other people's stuff, you could require that the subcomponents need to be cryptographically signed with the same key as the main app bundle.

edit: or, an even simpler solution (potentially): have a fairly basic API via which devs can install subprograms elsewhere on the system. When you use this API to install something, it adds the thing installed to an OS-level registry. When the user drags the app to the trash, the OS checks the registry and removes anything that was added by the application.


The real solution is to not install anything outside of your app bundle in the first place. In this case, instead of sticking a plist in ~/Library/LaunchAgents, you can use an API to add a login item, pointing to a helper .app inside the main app bundle:

https://developer.apple.com/library/archive/documentation/Ma...

Then the system will automatically disable the login item if the app is removed.

Edit: It seems like Zoom was using a login item, but using the "shared file list" API instead of the newer (but still dating back to 10.6) SMLoginItemSetEnabled.

An alternative is to just make your daemon check for itself whether the main app has been removed, and delete itself if so.
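That self-check idea is simple to sketch: on each launch the helper tests whether the main app bundle still exists, and cleans itself up if not. All paths and names below are hypothetical, not Zoom's actual locations:

```shell
# Hypothetical helper-daemon self-check: if the main app bundle is gone,
# the helper should remove itself rather than reinstall anything.
should_self_remove() {
  # An app bundle is a directory; if it no longer exists, the user
  # deleted the app and the helper has no business running.
  [ ! -d "$1" ]
}

if should_self_remove "/Applications/HypotheticalApp.app"; then
  status="app removed: helper would unload its launch agent and delete itself"
else
  status="app present: helper keeps running"
fi
echo "$status"
```

In a real helper, the "removed" branch would `launchctl unload` its own launch agent plist and delete its files, the opposite of what Zoom's server did.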


No, on OS X there is literally no reason to move anything from out of your application bundle. Copying the server out of the application bundle had one effect: it makes removing the app not remove the server.

There are APIs to register for launch, there are launchd service plists, etc., which all correctly, and by default, handle the user deleting your app bundle.

There are already “cryptographic mechanisms to detect modification of your app”: it’s the platform code signing mechanism, and Xcode will produce signed binaries by default.

Finally, your post comes across as saying that you believe leaving the server was unintentional due to a lack of APIs (ignoring that removing an application is a sign that the user doesn’t want your app to run any code). Their “unintentionally” persistent server explicitly checks for the app being removed, and if it had been, would redownload it and install it without user consent.


I think your edit is describing a package manager :D Or the Mac App Store... (But yes, it would be good if this feature were available for apps that can’t come through the App Store)


There was a reference to an uninstaller in one post, but also it is expected that uninstalling an app on macOS is simply a matter of deleting the app package.

But again, this was deliberately subverted by the client copying the server out of the app package and into a hidden folder in the user's home directory. Again, this is an intentional choice mimicking malware: the only reason to move the server binary is to break the standard app removal process.

But I maintain that the smoking gun for this being intentional is that the server will download and reinstall the client software, which only makes sense if you expect it to run when the user has uninstalled/removed your software.


Yes, because even if you purposely uninstalled the application, it would reinstall if you visited a link, even by accident, without really notifying you that it did so.

So yes, malware.


That's Apple's walled garden: even when they allow you to sideload applications, they still have the ultimate decision. Of course it's not malware, but probably enough users have vulnerable software which could be remotely exploited that they decided to blacklist it.


A program that surreptitiously reinstalls software when you uninstall it is by definition malware.

A piece of software that lets any website activate your camera without your permission is a security vulnerability.


Malware is software written to harm the user. Their purpose was not to harm users; they wanted to make their service more convenient for users. Bugs are bugs, every product has bugs, and many products have security bugs. That does not make them malware.


Intent doesn’t matter, only results. The result is that unless you like random websites being able to activate your camera without your permission, it did harm users. It wasn’t a “bug”. They purposefully hacked around a security feature. Do you really think it was a “bug” that it reinstalled itself?


> Intent doesn’t matter only results.

Then you should call Chrome malware, because there were vulnerabilities with remote code execution (and there will be similar vulnerabilities), so every website could install anything. But that is absurd.

> The result is that unless you like for random websites to be able to activate your camera without your permission, it did harm users.

First of all, you need hard data that this vulnerability was exploited in the wild. Otherwise it did not harm users, it only opened a way for malicious websites to harm users. And, again, every vulnerability could be counted as malware by that definition, which makes the term malware meaningless.

> Do you really think it was a “bug” that it reinstalled itself?

It is intended behaviour. And I don't see anything drastically bad about it. If you're opening their website with the corresponding link, you want to use that service. In order to use that service, you have to run additional software, and they are making it easier for you to run it. Only a few years ago every browser supported Java applets, and with Java applets every website could run arbitrary code on your machine. And that feature was actually used a lot. Does it make all services which used that feature to overcome browser weaknesses malware? I don't think so.

They probably should have communicated better about that aspect and provided a proper uninstaller for security-conscious users. And not introduced those vulnerabilities in the first place, of course. But the world is not perfect.


> First of all, you need hard data that this vulnerability was exploited in the wild

So would it also be okay for a credit bureau to post all of your information to a website? Would that be okay until it was “exploited in the wild”?

Are you really saying it’s okay to run knowingly insecure software until there are reports of it being exploited?

> Only a few years ago every browser supported Java Applets and with Java Applets every website could run arbitrary code on your machine.

Java applets ran in a sandbox.


> So would it also be okay for a credit bureau to post all of your information to a website? Would that be okay until it was “exploited in the wild”?

I'm not saying that it's okay. I'm just saying that it's an overreaction to call their software malware.

> Are you really saying it’s okay to run knowingly insecure software until there are reports of it being exploited?

Yes, it's okay. When the Windows security team receives a vulnerability report, they do not shut down all Windows systems in the world until the bug is fixed. Those systems continue to run insecurely until they get the update.

> Java applets ran in a sandbox.

Signed Java applets do not run in a sandbox and have full access to the computer. That's why they were widely used for things that JavaScript did not have access to, e.g. using USB secure tokens for website authentication.


There is tons of malware that doesn't harm the user. Take crypto mining for example: it raises the temperature and the electricity bill, but those things aren't per se harmful (beyond the financial cost). Or how about a botnet client that harms some other entity but not the user who owns the machine it's on? Or adware? The list goes on and on... calling this nefarious behavior "harm" is a huge stretch, but calling it "malware" is not.


Stealing electricity is harm. It’s theft no different than if someone stole your wallet.


> A program that surreptitiously reinstalls software when you uninstall it is by definition malware.

Not quite. Malware is short for "malicious software" and malice has a specific legal definition: the intent to harm. By your standard, removing a Mac system daemon only to have it re-installed by a software update would classify the entire OS as "malware" - which is an unfair, emotionally-driven characterization.


Microsoft could do the same with Windows Defender, with justification, had the Windows version exhibited the same malicious behavior.


Windows Defender wouldn't do it silently though, it would make sure you know what happened and get more information about it - even reverse its actions.


More likely this was done via a signature update to xprotect, which is essentially a background antivirus process in macOS.


Since the update is called "MRTConfigData" - and that has to do with XProtect according to https://discussions.apple.com/thread/250079600 - you're probably right.


Doesn't appear so, current XProtect version remains at 2103 which was released a couple months ago now.


The update from today updates the MRT configuration data.


I haven’t checked, but are you looking at the version of the binary itself, or the MRT signature files it uses?


Checked XProtect.meta.plist inside the app bundle, forcing an update check with softwareupdate has no impact either.

Perhaps it’s a staged rollout, but at least on my iMac there’s no sign of updated signatures.


Yeah you’re just looking at the version of the xprotect binary, not the malware signature data files, which don’t live in the bundle and get updated more regularly.

They also ship silently via system_installd, you’re not going to see anything in the software update GUI


The signature files also live inside the XProtect.app bundle, unless in true Apple fashion they’ve got other stuff that’s lurking elsewhere in /System that I can’t locate.


Looks like the ZoomOpener app is still present on my machine but it’s not running anymore and it shows as unticked in the Login Items preferences.


I wonder if this was the real reason behind the Zoom backflip. It certainly cannot be good for business if your app gets marked as malware.

Seriously though, they should have owned the mistake, apologised and reversed their decision rather than handling it with a PR spin. It is a great product but somehow it has left me with little trust for zoom. It's probably still not too late.

Does anyone know if BlueJeans et al. are also removing this? Or does it require public shaming, like the Zoom case?


Wasn't there once a company with the motto "Don't be evil."? If that motto is abandoned, Apple should claim it, since they genuinely try to do their best to live by it.


Not so fast. E.g. have you read "History will not be kind to Jony Ive"[1]?

Apple has its share of evil. Another example is preventing people from repairing their own Apple devices.

--

[1] https://www.vice.com/en_us/article/ywyjmw/history-will-not-b...


Apple only does so when it's also convenient to their bottom line. They provide the Chinese government backdoor access to iMessage, remove VPN apps from their store to enable censorship, and have we all forgotten they are a PRISM partner? These actions seem pretty "evil" to me.


> They provide the Chinese government backdoor access to iMessage

No. What gave you this idea? iMessage is end-to-end encrypted. The keys are managed by the devices themselves. There is no facility to backdoor or intercept the messages.

Apple acts as a registration server, notifying your devices when a new device signed in as you joins the pool but the devices themselves tell you when this has happened. That’s all client-side. If the server didn’t tell the client about a new peer it would never encrypt a copy of the message for that peer and that peer wouldn’t get the messages.


> iMessage is end-to-end encrypted. The keys are managed by the devices themselves. There is no facility to backdoor or intercept the messages.

That is only a half-truth. Apple controls the key infrastructure; they could replace your keys with arbitrary ones under demand, coercion, or compromise by any number of bad actors. The software is closed source, making it impossible to verify any claims made otherwise. If they truly valued privacy, why not open source iMessage, allow users to verify iMessage keys, hire an independent third party to audit their infrastructure, or all of the above?

Moreover, Apple has moved iCloud infrastructure to Chinese data centers to enable spying on millions of innocent people. They have removed apps from their store which circumvent Chinese censorship. These are truly shameful acts which have appropriately drawn criticism from human rights watch organizations.

https://techcrunch.com/2018/02/25/apple-moves-icloud-encrypt...

https://www.cnbc.com/2018/02/24/apple-moves-to-store-icloud-...

https://www.nytimes.com/2017/07/29/technology/china-apple-ce...

https://blog.cryptographyengineering.com/2013/06/26/can-appl...


I find it disappointing that we blame companies operating in China and not the real forcing function for all this: the Chinese government


The government is bad, so is a trillion dollar American company choosing to collaborate with the government by enabling spying on their users just so they can make even more money. They're enabling a government to track down & torture/murder dissidents


Regarding them as distinct actors is a mistake. Both enable each other: Apple enables pervasive surveillance; in turn China enables huge profits. How bottomless is the guile of a corporation willing to put lives at risk to make some money?


Imagine saying this with a straight face.


They are basically solving a self-inflicted problem. The real issue there is the fact that macOS doesn't provide a standardized way to completely uninstall an app.


It does provide a standard way to uninstall an app: drag it to the trash can.

I would say that what Zoom did was have their app intentionally install malware that bypassed this normal uninstall.

I do think that it would be great to have a more thoroughly sandboxed idea of what an “app” is on the desktop, though.


Snapd, flatpak, appimage all can do this on Linux. Even docker/singularity can sort of do the same for some, if you pass through all the necessary devices and sockets from the host. When you remove the app (or container) all the files it brought with it or created during runtime are now gone.

Even the regular Linux package managers like apt, dnf, pacman track which files were installed by which packages, so they can be removed when the package is uninstalled. The downside to these is they don't track files created at runtime so a lot of the config or cache files created can be left over if the package itself doesn't remove them.
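That tracking mechanism, and its blind spot for runtime files, can both be shown with a toy manifest (a self-contained sketch; real dpkg/rpm bookkeeping is more elaborate, but the idea is the same):

```shell
# Toy manifest-based uninstall: install-time files are recorded and
# removed exactly, but a file the program creates at runtime is
# invisible to the manifest and survives the uninstall.
PREFIX=$(mktemp -d)

# "Install": create a file and record its path in the manifest.
mkdir -p "$PREFIX/bin"
touch "$PREFIX/bin/tool"
echo "$PREFIX/bin/tool" > "$PREFIX/manifest.txt"

# The program runs and writes a cache file the manifest knows nothing about.
touch "$PREFIX/tool.cache"

# "Uninstall": remove only what the manifest lists.
while read -r path; do rm -f "$path"; done < "$PREFIX/manifest.txt"

[ ! -f "$PREFIX/bin/tool" ] && echo "tracked file removed"
[ -f "$PREFIX/tool.cache" ] && echo "runtime cache left behind"
```

The leftover `tool.cache` is exactly the class of file apt/dnf/pacman leave behind unless the package's own scripts clean it up.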


A sandboxed macOS app would offer similar protections as what snapd, flatpak, etc provide on Linux.

> Even the regular Linux package managers like apt, dnf, pacman track which files were installed by which packages, so they can be removed when the package is uninstalled.

Technically speaking, Zoom could have abused dpkg post-install scripts, or pulled similar tricks, to install their malware server and leave it behind after the package was removed. Linux distributions aren't immune to these shenanigans.


> Technically speaking, Zoom could have abused dpkg post-install scripts, or pulled similar tricks, to install their malware server and leave it behind after the package was removed. Linux distributions aren't immune to these shenanigans.

This is correct, but such a package should not make it into the distribution's package repositories.


And likewise Zoom didn’t make it to the AppStore...


The install scripts in rpm/deb can do all sorts of stuff that doesn't get tracked/reversed. There are package linters to help detect some of this, but it's largely a faith-based endeavor either way.


> all the files it brought with it or created during runtime are now gone.

How would this work with apps that create things that a user would expect to persist, like downloads (kept after uninstalling a browser) or office documents (kept after uninstalling the office suite), or media production apps, IDEs, etc.?

It could have some rule like "let it be if it's in the user home directory" or "only remove stuff in these system directories" but it seems kind of fragile, like what if you use a text editor to create a system config file and then uninstall the text editor?


> How would this work with apps that create things that a user would expect to persist, like downloads (kept after uninstalling a browser) or office documents (kept after uninstalling the office suite), or media production apps, IDEs, etc.?

I can't see how it would. Making the uninstall remove all the files created at runtime is the wrong solution to a real problem, which is better solved by forcing all packages to be self-contained and making their installation an idempotent operation. If we're able to do this, there's not much to clean up afterwards, and we ensure there's no privileged malware left (such as Zoom's one) as both the package install scripts and runtime would be able to run unprivileged.

I use NixOS, an OS built on Nix, a functional package manager that does this, and being able to set my system to a known state without going through the steps of formatting and reinstalling is really nice.


Such a system would not allow the application to write to any place outside of its sandbox or a designated user document volume. The system can't be touched. If you want to create a system config file, you are responsible to break the glass and move it and then all bets are off.


I can imagine an allowance to "break the glass" within the context of the app so long as the app invokes an obvious-to-the-user common dialog (like the typical file>save / file>open / choose-a-folder), but there would be no "breaking the glass" in an invisible / programmatic manner. That sounds quite nice.


That’s exactly how the macOS AppStore apps work. Your app gets permission only to those files that were intentionally opened by the user via the system open dialog.


Malware intends to harm you, with intent to steal or destroy or hold hostage your information, data, and computing resources.

Does the ZoomOpener app have malicious intentions towards you, your computing resources, or your data? Not just “could it be unintentionally exploited” - note the unintentionality! - but specifically “this was intended to harm”.

If not, then you need to reconsider your use of the word “malware” - a word shortened from “malicious software” - and find a better way to describe software design patterns that aren’t compatible with “drag it to the trash” but also aren’t intended to harm.

Adobe Creative Cloud, any $$$$ audio software DRM, and HP printer drivers all use similar patterns of “can’t just drag it to the trash”, and are all similarly annoying to remove - but they are not malware, any more than this ZoomOpener is.


> The real issue there is the fact that macOS doesn't provide a standardized way to completely uninstall an app.

And Windows does? Uninstallers are completely at the behest of application developers. No consumer OS but iOS actually provides any sort of true app level sandboxing.


> And Windows does?

Control Panel “add and remove programs” usually works?

There’s no equivalent on Mac. Yes, dragging the app to the trash is a thing but that leaves behind content in ~/Library/caches, ~/Library/Application Support, and ~/Library/Preferences . It’s been somewhat of an issue with Mac ever since they first put a hard drive on the original ones back in the 80s...

Edit: I literally cleared several GB of junk out of my application support folder left behind by just one app today, so it’s fresh on my mind. OmniDiskSweeper is great for finding this stuff.


> Control Panel “add and remove programs” usually works?

All that does is launch the app’s uninstall process. The app is free to leave whatever crap it wants to on your system.


OK but on Mac, dragging the app to the trash doesn’t launch an uninstall process, so it will always leave the crap behind.


Your app is supposed to be self-contained....


Then what in the world is the huge Application Support folder for?

And why is it filled with stuff from Apple programs?


The point is, all of those files are generally still left behind on Windows as well. Uninstalling a program in Windows is roughly equivalent to dragging it into the trash in macOS.


> all of those files are generally still left behind on Windows as well.

Citation needed. That was the case in the 90s, these days most apps (device drivers aside) uninstall fairly cleanly with only settings/configurations stored in the registry still resident afterwards. Startup programs remaining after an uninstall is straight-up a bug.


> these days most apps (device drivers aside) uninstall fairly cleanly

Citation needed.


Windows installers are declarative and data-driven. An installer script does not simply use the file copy operations a usual app would use during runtime.

Instead an installer is driven by a number of "database" tables that specify the installer actions in a declarative way.

There are several benefits to this approach. The declarative actions are reversible and the uninstall actions can thus be inferred automatically. This removes the burden on install authors to create a script to reverse the install operations.

This includes registering services/daemons (will unregister on uninstall), registering protocol handlers, filetype/program associations, desktop shortcuts etc.

The declarative approach also allows the installer to roll back in case of an error during installation. This even includes "undeleting" files that were deleted. (https://docs.microsoft.com/en-us/windows/win32/msi/rollback-...)

The author of the install script can (for non-Store installers) escape the declarative model and execute a specific program during the install process. This is rarely needed, though. So the default is that the uninstaller will uninstall by completely reversing the install actions.

https://docs.microsoft.com/en-us/windows/win32/msi/windows-i...
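The core idea, actions as data whose inverse can be derived mechanically, fits in a few lines of shell (a toy sketch, not real MSI tables, with invented paths):

```shell
# Toy declarative installer: each action is data ("create PATH"), so
# the uninstall is derived by reversing the same action list, with no
# hand-written uninstall script.
PREFIX=$(mktemp -d)
ACTIONS="create $PREFIX/bin/tool
create $PREFIX/share/tool/data"

# Install: interpret each declarative action.
echo "$ACTIONS" | while read -r op path; do
  mkdir -p "$(dirname "$path")"
  touch "$path"
done

# Uninstall: invert each action from the same data.
echo "$ACTIONS" | while read -r op path; do
  rm -f "$path"
done

echo "files remaining: $(find "$PREFIX" -type f | wc -l)"
```

Because the uninstall is computed from the install data rather than written by hand, it can't "forget" a file the way an imperative uninstall script can.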


No, dragging an app from "Applications/" into Trash is equivalent to dragging a program folder from "Program Files\" into Recycle Bin. Actually that's exactly what it is.


Have you checked ProgramData, Local, and Roaming recently? Or your registry?

Also, Add/Remove only works if the program adds itself there. And remove only works if the program added an uninstaller. Also, even then, it still leaves crap behind.


At least there is a convention to have an uninstaller. If you create the installer through conventional toolboxes the uninstaller comes for free. That's how most Windows programs have both.


macOS has the same convention. Conventional toolboxes are mostly not-Windows and not-Microsoft toolboxes, indicating that even if it's a convention, it's apparently not a Windows-native convention.


Doesn't "Add and Remove Programs" just run whatever custom Uninstaller came bundled with the program? Is there anything it can do with programs that don't come with an uninstaller?


Yes: the annoying ones open up iexplore.exe and ask you to fill out a survey of why you're uninstalling them.


"add and remove programs" is still relying on the application developer to get it right. I've seen plenty of apps leave behind crap.


> OmniDiskSweeper is great for finding this stuff.

iTrash [1] is also worth mentioning. It uses the Levenshtein distance algorithm [2] to find all of the junk related to an app.

[1] http://www.osxbytes.com

[2] https://en.wikipedia.org/wiki/Levenshtein_distance


I've been using AppCleaner [1] for years and it's awesome. And it's free.

https://freemacsoft.net/appcleaner/


That's a cool little app. But at some point, I became very wary of running free, closed-source binaries from the Internet.

What do you think about this one?


Why do they put that stuff in there instead of keeping it in the .app folder?


Flatpak, Appimage, etc.

Zoom is even available in Flathub, and that's how I use it.


> No consumer OS but iOS actually provides any sort of true app level sandboxing.

Android does, too.


Which is completely frustrating, because Mac is totally in the position of using its built-in capabilities to deal with this. The Mac Bundle (.app) format could solve this entirely. All application specific data should be written inside of the bundle folder, so that when you delete the app, you delete the thing entirely.

I mean, maybe you need a "user data" bundle of sorts tied to the specific application. If you delete the app, it deletes all the user data bundles as well.

The default installer and bundle runners should be controlling the process. "XYZ App is attempting to write data files outside of its bundle location. These may not be cleaned up if you delete the application. Do you want to continue?"

The unix permissions system and the Mac bundle format should completely solve this problem. I honestly just don't get why this still happens. Doesn't iOS at least get this right?


As a Mac app developer, you mostly have to go out of your way to install things[1] outside of the app bundle, and you typically don't want to because that's only extra maintenance you have to do to update / version those files. Most apps don't, and that's why most apps actually are effectively removed when you drag the bundle into the trash.

There is already a concept of a "user data" directory for the app, which is determined just like on iOS. There are also other directories for things like caches that the system can clear if it's low on disk space.

Of course they could sandbox Mac apps just like on iOS, and Mac App Store apps essentially already work that way.

I can assure you, though, that any barrier they put in the way of letting non-App Store apps run however they always have will be met by strong resistance. It's easy to point to this occurrence with Zoom and call it unreasonable, but prompts like what you're describing will undoubtedly disrupt what other people see as totally valid use cases and ruin the UX.

[1] By "things", I mean things like a separate web server process. Apps do store files in app-specific folders like "Application Support" by convention, but not for separate processes.


Thanks for the reply, really good stuff.

But what I don't get is why even have an "Application Support" directory at all. There is absolutely nothing of value to me (as a user) in having files stored there. It's just one more place I have to look to clean up after an application is deleted. So dumb, and it adds zero value.

I'll put my files into Documents (or whatever). And you (as an application developer) put your files in your app bundle directory. That should be the contract for most (all?) user space applications.

I agree my prompt idea is generally poor and wouldn't work, it was mostly just for discussion purposes. But the mechanics of a fix for this are in place, rogue daemons that can't be deleted are just unacceptable.


> But what I don't get, why even have an "Application Support" directory at all.

The bundle isn't normally writeable by the app itself. It's generally good security practice to not have your app capable of rewriting what itself can do, and iOS is the same way. You can't write into your app bundle, so anything at all that you want to persist needs to go somewhere else (typically in "Application Support").

That would include mostly anything the app persists to disk that the user didn't explicitly choose, like an sqlite database or something.

> But the mechanics of a fix for this are in place, rogue daemons that can't be deleted are just unacceptable.

I see where you're coming from, but I think this is really the only balance Apple can strike here. That balance is essentially that non-App Store apps/installers are not sandboxed and can mostly do whatever they want, but Apple can step in with X-Protect (like they did here) and remove anything egregious.

They've also moved forward recently with the concept of notarizing, which will still allow apps/installers to do whatever they want, but they'll at least need to be validly signed by a verifiable private key that Apple can then revoke if you (as the developer who signed things with it) do anything egregious.

Again--it seems unreasonable in this case, but there are undoubtedly numerous very popular unsandboxed Mac apps that do sketchier things for more valid use cases. Any prompts or restrictions are going to be very disruptive and prompt a new wave of blog and HN posts about Apple trying to kill app distribution outside the App Stores.


Again, good reply thank you. Just to continue a little more...

> The bundle isn't normally writeable by the app itself. It's generally good security practice to not have your app capable of rewriting what itself can do, and iOS is the same way. You can't write into your app bundle, so anything at all that you want to persist needs to go somewhere else (typically in "Application Support").

I don't buy this. If you are writing "anything at all" into Application Support, this is no different than writing to the bundle. An executable binary written into Application Support is just as effectively the same thing as an executable binary written into the bundle itself. I don't see a difference between a program rewriting its own code vs. writing an executable into another location.

There's no added security and no added user value to writing into Application Support vs. the App Bundle itself.

Not arguing with you, I appreciate you explaining the current state and conventions. Thanks for the discussion.


Ok, but an SQLite dB isn’t an executable. So what should an app do with that?


Mac OS is a multi-user operating system. Most applications that are installed are global to the system although each user also has their own Applications folder. The Application Support folder resides in the user's Library folder and contains information that the app needs when running for that particular user. Storing information in the .app bundle would affect every user on that computer.


You're saying that an application can't write user specific information into the bundle and sort that out? There's no difference between these two (hypothetical) file paths:

  /Applications/SomeApp.app/users/taftster/user.specific.data
  /Users/taftster/Library/Application Support/SomeApp/user.specific.data
These two file paths are effectively the same. And when the "global" application gets deleted, I most definitely want all the user data deleted with it as well.

And no, I'm not talking about saved output (documents, etc.) that are generated by the application. I'm saying there is just no need for Application Support at all; it adds no value and is just used by convention.


It’s a design choice, and of course it’s arguable, but it has its (good) reasons.

By design choice, user data of any app should only be readable and modifiable by the active user. If user data is stored somewhere under the home folder, this mechanism comes for free for the app developer and the OS handles it. If you wanted that same level of privacy in the app bundle, developers would have to handle it themselves (and well).

But then what happens when you uninstall? Either you are blocked because you can't remove data belonging to another user, or you somehow require escalation to admin privileges to remove all users' data at once, which might or might not make the other users happy if they were unaware of the uninstall...

With the current data model (pretty much shared across the whole UNIX world and even Windows), if user A deletes the app but user B still needs it, you just have to reinstall to keep user B unharmed.

Your proposal is not bad if you optimize for disk space reclaimed at removal; it just happens that (quite rightfully) OS vendors chose to optimize for data persistence and security instead.


> /Applications/SomeApp.app/users/taftster/user.specific.data

This doesn't work well for a few reasons:

- The Unix permission model makes it difficult to set that up in a way that doesn't allow someone to access someone else's user-specific data, or otherwise tamper with the app.

- Some setups sync home directories across the network, so everything related to a particular user needs to be under their home directory. Applications, on the other hand, would be part of the read-only OS image.

- If you delete a user in System Preferences, it gives you the option to delete their data or archive it. This works by deleting or archiving their home directory; things would be much messier if their data were scattered all over the disk.

- It breaks the concept of an app bundle as an immutable, sealed tree that can have its validity checked via code signature.

- If you're searching through your disk to see what's using up disk space, mingling user data with the app itself makes it harder to distinguish between the two.

- If the app decides to start storing its data in iCloud and syncing it across devices, it wouldn't make any sense to have it within the app bundle: among other things, you don't want the app's immutable data (i.e. what's in the app bundle today) to count towards your storage quota, and the data may be shared between, e.g., the macOS and iOS version of an app. (Application Support itself is not synced to iCloud, though.)

> And when the "global" application gets deleted, I most definitely want all the user data deleted with it as well.

Do you?

- What if you're just upgrading the app? If you download a new app bundle, drag it into Applications, and tell Finder to overwrite, it will delete everything in the old bundle.

- What if you're migrating to a new Mac? You may prefer to reinstall applications manually instead of copying everything, but that doesn't mean you want to lose your data.

- Even if you do want to delete the app entirely, does that actually mean you want to lose your data from that app? What if you're planning on reinstalling it in the future? iOS does automatically delete data when deleting an app, but it's not at all obvious to me that that's the best behavior.


All these are fair and good points. But all of these are still workable problems that should be very much in the capable hands and constraints of the operating system and "app bundle runner" (call it) to deal with.

Your points are all solved by using the user's home directory, very true. But the problem simply remains -- I think this is my main point -- that the "App bundle" has failed the user by not allowing for cleanup of everything that the application has created. If an application can simply write any executable into any user directory it wants, there's no code signing or integrity checks on the app itself that matters.

Applications should be treated as hostile, just like a user is treated in a multi-user system.

I think it's a failure of an operating system to not be completely in control of the limitations and installation of any program. It's also a failure of the developer community to not stand up and insist on this too. A sand-boxed model is what we should all be striving for here. Force the bad actors out.

I'm not a Mac developer (obviously). But I am an old Unix neckbeard. So I get all of your points; Unix invented this problem.

In a multiuser system, individual users are treated as hostile. Going forward, so too should applications. That's the failure in all of this (and it's been with us a long time now). Our security model is based on not trusting users, but in trusting applications. This thinking was born in the 60's when users couldn't install/execute any random download.

It's interesting that the unix model of security is hurting us more today than helping. The time for sandboxed applications is definitely overdue.


For the record, sandboxed macOS apps do exist (as someone mentioned upthread) and have a design somewhat similar to what you're describing, but uglier for the sake of backwards compatibility. For each app you have a directory like /Users/foo/Library/Containers/com.some.bundleid/Data, which is an entire virtualized home directory, containing not just a Library subdirectory but also Desktop, Documents, Downloads, etc. The latter directories shouldn't actually be used, but they're there in case some legacy code tries to access them. When an app presents an open or save dialog, the dialog is out-of-process and unsandboxed, so the user can pick a file from their real home directory or anywhere else; once they do so, the app is automatically granted the ability to access that particular file.


You're trading one problem for another. What if I want to delete a user instead? Now I have that user's crap in every application bundle.

However, the OS should perhaps insist on a particular location within "Application Support/" that each app can write to, and when the application bundle is deleted, provide a way to delete those support files as well, either for that user alone or for all users with permission (this can be a system-wide configuration).


I would argue, I think most systems have more quantity and churn of applications than users. Meaning, it would be better to pay some overhead to deal with "user's crap" in every application bundle than to deal with "application crap" in every user's home.

Your second paragraph though is probably closer to a realistic solution. That is, the OS provisions an Application Support directory and restricts the application to using it exclusively. Any application uninstalls can (via admin prompting or configuration settings) then also delete the support directories as well.


You absolutely wouldn't want user data stored in app bundles. You can back up all user data by backing up /Users/ or /home/ on Linux.

As an application developer, I want to be able to drop a new bundle over the old version and NOT LOSE ANY USER DATA. The only way to do that is separate them completely.


In addition, users can run applications from a CDROM or a read only USB stick, where the app would not be able to write anything.


True, for many uses it would be cleaner to write application data to the bundle, even though this isn't commonly writeable. As someone who develops professional apps for Mac, I can think of a few circumstances where this definitely won't work, or at least will introduce other compromises or require a whole lot of extra effort from developers and/or users:

- You want to uninstall/re-install an app without removing the application's data

- The application's data can become large (think raw audio or video libraries) and users request to store it on a separate disk

- Users want to personally organize the data they make with your app (by project, client, personal/work, etc.), or use it with other apps

- Your "installer" is just a zip of the .app bundle and there is no obvious opportunity to assume admin privileges and make the bundle writeable

Most of these could be solved by having a separate "sandbox" a la iOS or MAS that can be moved or, at the user's option, remain on disk when uninstalling; as far as I know neither system offers these capabilities.


Most of the time, the app's directory in Application Support has the per-user configuration files. And games put their saves there. In both cases, I really don't mind the files staying. I can always change my mind, reinstall the app and resume where I left off.


[self-reply, sorry]

For example, Steam games stored in Application Support. Why?? If I install a game from Steam, it should be installed somewhere in the Steam app bundle. When I delete the Steam app, I delete everything related. So dumb.


Yeah but this is clearly a special case as Steam is not AppStore distributable AND it is basically an alternative to the AppStore.

They clearly could have made other choices and they have their own logic, but it's clearly not Apple's role to oversee how their competitors operate.

Antitrust regulators, geeks, and the media would instantly gather pitchforks to run at Apple if it even dared to hypothetically mention it.


I'd argue that linux distros have this power, too, and they haven't either (unless you use a snap, which has compatibility and performance issues).

If you uninstall a .deb or .rpm or AppImage, the files you wrote into XDG_CONFIG_HOME (defaults to ~/.config) won't magically get cleaned up.

I'd love to be wrong here, BTW! I've had several PhotoStructure users try to reset their configuration by uninstall/reinstall, but that just removes the files in the installer, it doesn't do anything to files in user directories (and I'd be really surprised if that was ever a thing). Can you imagine the havoc from `apt remove vscode` and having it remove user's keybindings, extensions, and anything else?


> ~/.config) won't magically get cleaned up.

> I'd love to be wrong here,

Isn't this what apt purge appname does? Or is something missing?

From the apt-get docs:

purge - purge is identical to remove except that packages are removed and purged (any configuration files are deleted too).


That would only purge config files that came with the package (which would live in /etc/). Config files in the homedir of a user are not managed by apt.


Thanks, I was wrong; not sure why I believed that it worked. So in the end Linux needs a CCleaner-like tool as on Windows. It should also include a browser cache cleaner; I sometimes find a few GB of space in Chromium localstorage/cache and have to hunt it down and delete it manually.


> The default installer and bundle runners should be controlling the process. "XYZ App is attempting to write data files outside of its bundle location. These may not be cleaned up if you delete the application. Do you want to continue?"

If you do that, the entire system stops working. Everyone will just click "ok" and then still gets mad when uninstalling doesn't fully clean things up.


So users who want crapware can get it, and users who want a clean secure system can get it, and app developers are pressured to build apps correctly. And App Store/Gatekeeper can prevent misbehavior for apps distributed through Apple's friendly marketplace. Win-win-win.


Well, that's fair. But hopefully in the process of getting mad, it starts to reflect negatively on the application vendors and/or Apple directly. Maybe that will be enough for them to change.

Maybe the app bundle runner should be logging files written outside of the bundle folder? Then the uninstall process will wipe those out?


Wipe out all the things that you create with the app? All the text you created when you uninstall a text editor, the photos you touched up and saved under a new name, the audio recordings you made?


> All application specific data should be written inside of the bundle folder, so that when you delete the app, you delete the thing entirely

Not even Apple follows that ideology though; I can't be the only one who has had to delete the gigs of GarageBand data from ~/Library/Application Support on an under-specced company laptop's 128GB SSD


Nor does Windows for that matter. Running installshield with some command line parameter doesn't count. Linux package managers come close, but not all third party apps are installed like that.


Having a standard installer toolkit that comes with the OS and is used by many OS updates, along with a centralized uninstall UI, is nothing?


> Having a standard installer toolkit that comes with the OS and is used by many OS updates, along with a centralized uninstall UI, is nothing?

If we are talking about OS updates, OSX has the same thing.

You are not required to use Windows Installer. And even if you do, you are not guaranteed that everything will be removed, be it due to malice or incompetence.

Not even Linux can guarantee that. Something like the Nix package manager would be closer to what's required. Plus a sandbox.


Funny, it is also self-inflicted because Safari inspired Zoom to do this hack by breaking the correct behavior of protocol links.

> This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting.

https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...


I don’t understand how a “you’re about to jump out of the app” confirm panel is breaking protocol links. I actually want this behavior for zoom and any other app...


The first time you use that protocol, of course a warning is appropriate. To prompt the user on _every_ external protocol click seems.. hostile to the concept of linking


It needs to happen at least for every combination of source domain and protocol. Otherwise websites can drive-by open zoom, reminders, or whatever other app you have installed to achieve some marketing or malware goal (or just DoS your computer).

Edit: Once you consider social sites with user-submitted content, like reddit, it might be best if you’re prompted every time.


So then you have a protocol link to a .vbs file and you tell all of your contacts how much you love them.....


It adds an extra, confusing step, that is not necessary most of the time.


I think "breaking the correct behavior" might be a bit of a misleading term -- it's obvious that in some contexts we would want to be warned about the context switch and in others we'd be annoyed by it. I could totally understand why Zoom would, for frequent users, want the users not to get an annoying dialogue. On the other hand, if I'm a rare user or don't know a program is installed on my system, the context-switch dialogue would be super useful, alerting me that something is happening.

So I think you could argue Apple might want to let you override that, I don't know what the language is and whether there's a "click here to skip this next time" box on the dialogue. It's possible they got the annoyance versus security tradeoff wrong.

But imho they didn't break the correct behavior any more than Microsoft "broke the correct behavior" of privilege escalation by adding a dialogue box with UAC in Windows Vista.


> But imho they didn't break the correct behavior any more than Microsoft "broke the correct behavior" of privilege escalation by adding a dialogue box with UAC in Windows Vista.

Well, it did a shitton, didn't it? To this day, most people I know disable UAC because of how annoying it is.


Zoom deliberately circumvented Apple's standard package format, contrary to security interests. Zoom is basically behaving like a virus here.


Neither does Windows or Linux. I don't think you can have such a standardized way on a flexible general-purpose OS.


What Apple should do, though, is provide an API that developers can hook into, wherein, when the user drags the app to the trash, it can also uninstall anything else the app placed elsewhere on the system.


They do! The app developers didn't follow the guidelines.


Oh really? Cool!

I've been out of the MacOS app dev scene for too long, it seems.


It really does provide a very nice and effective standard way to bundle components needed by an app all together and to manage them appropriately.

The developers in this case didn't bother trying to use it.


The bigger question -- what other desktop apps have similar, latent daemons hanging around? I'm always wary of installing stuff like this (e.g. zoom, go2meeting, teamviewer).

Anyone know of other sneaky apps to avoid?


Razer gaming keyboard drivers spin up a webserver for controlling the chroma, which I've always found scary. (Using the much more reasonable community open source drivers that don't do that.)


Why in the world would a keyboard driver need to run a webserver? Client software should just be able to call driver functions directly in order to configure the keyboard. It sounds like they hired a web developer to write their driver configuration tool and didn't give any architectural constraints or have someone managing the project who knows best practices or security principles.


I don’t have the keyboard, but it’s my understanding that application developers can customize the lights on the keyboard. For example, if you die in the game, your keyboard turns red.

To do that you need IPC, and a JSON endpoint is the most popular form of RPC. If the server listens on localhost, I don’t see any issue with it - any issue you would have with IPC, you would have with this style of RPC.

Now they could have provided a library to communicate directly with the keyboard - but I think the drawback was game developers didn’t want to integrate it into their games.


> Now they could have provided a library to communicate directly with the keyboard

They could have also opened a named pipe. Much cleaner, faster, less overhead than a web server, and way more secure (last time I checked, a website could not simply perform a request on a named pipe via JavaScript. With a local web server however...).


> they hired a web developer to write their driver configuration tool

This is the real problem behind all of these cases of "why the heck is tool/driver/app/whatever X running a web server locally?" - the market is full of developers only knowing HTTP, and when someone just has a hammer, every problem looks like a nail.

There is a real shortage of devs who know about all the other IPC techniques supported by modern OSes (of which practically all of them are much faster, lower latency, less overhead-y, more secure and come with less unintended side effects than a local web server).


Most devs of most OS X desktop apps are convinced their junk is important enough to pollute LaunchAgents with and none ask for permission. Be a "normal" user, install software you think is useful and you'll end up running a hosting service for a thousand "latent" daemons and "helper" programs.

Not as serious as leaving an httpd around and then letting sites hot-mic you with it -- obviously -- but on par in terms of a few select adjectives.


I had toyed with using Little Snitch, https://www.obdev.at/products/littlesnitch/index.html, to let me know what connections each program is making, but after like a day it was just too complex to get going.

Wonder if the Little Snitch list of processes had the Zoom daemon.


I can’t find a reference off the top of my head, but I recall a Logitech mouse(!) also installed a local web server.


Dropbox


I don't know about the rest of you, but since crap like Zoom runs just fine in Firefox, that's where I'm keeping it. Ironically, I trust the browser's sandboxing way more than the vendor's app, which inevitably seem to open up my computer to some crazy vulnerability or phone home with my personal data or some other nonsense. I feel, perhaps wrongly, that I have more control over what the browser executes and what (web) applications can access, so Zoom, BlueJeans, Slack, Discord, and the rest are getting trashed.


And why did Apple shut down the app regardless? Was it not properly patched? Did Apple not care?


Because Zoom’s patch will only help users still using and updating Zoom, while those who have uninstalled Zoom are still vulnerable (because the uninstall leaves the web server behind)


"(because the uninstalled leaves the web server behind)"

For cripes sake...


It bothers me that people weren't more upset about this part.


I’m sure Apple is as upset as anyone else. It effectively breaks their sandbox model so they’ll probably be working hard on a way to plug that hole gracefully.


>It effectively breaks their sandbox model

The sandbox only applies to software devs that want to use it or those that wish to sell through the Mac App Store. I don't think Zoom is in the MAS at all (I don't see it in a quick search anyway), and a standalone installer is free to do whatever it wants and can convince users to go along with (up to and including, in principle, bypassing SIP though since that significantly raises the effort bar I've only ever seen niche stuff request it). And it's completely legitimate to want to run a server on your system too, there is no hole. Zoom simply acted as malware, taking actions without user permission.


SIP cannot be disabled by anything running in the current boot session. Once the root volume is mounted, the SIP flags are set in the mountpoint. The root volume obviously cannot be unmounted while it is booted from.


Not after Catalina.

Future versions of macOS will require signed software (notarized as per Apple terminology), even outside of the store.

See what's new in security at WWDC.


I don't fully understand the difference. But signed is different from notarized. Notarized means you uploaded the binary to Apple. Previously, you can sign without doing that.

I haven't looked enough to understand what is gained by notary. Does Apple want to search your binary for maliciousness or rulebreaking (potentially even at a later date) so that it might revoke the notarization/signature?


In order to avoid repeating myself, from WWDC 2019:

"Advances in macOS Security"

https://developer.apple.com/videos/play/wwdc2019/701/

"All About Notarization"

https://developer.apple.com/videos/play/wwdc2019/703


Thanks.

Some of this stuff seems a tad disingenuous. Like preventing debugging. The debugger APIs on Mac already pop up a password prompt, limiting the usability in malware (and actual use, like trying to debug over ssh). Meanwhile, a culture of producing separate binaries for debug and for end users (debug builds lacking optimization, allowing additional permissions) is in my experience a great way to fail to reproduce legit customer-facing bugs during development and have greater difficulty diagnosing them when they occur on a real live user machine.


As far as I understand, notarization is intended to catch malware before it can be distributed. The traditional signing mechanism can protect users against malicious software because Apple can pull certificates used to sign malware.


Notarized≠signed. I suggest you watch the videos that you've posted; they go into detail about the changes in macOS Catalina and when they apply.


On the go now, I will post the video minutes and slide pages afterwards.


Signed is not the same to Sandboxed on macOS, afaik.


It's not, but sandboxing requires code signing.


Watch the security talk. macOS is on the path to adopt iOS permissions model and long term roadmap is to apply the sandbox to everything, with the option to explicitly disable it on per-case basis.

A path similar to how Windows 10 is now converging the Win32 and UWP sanbox models, or how ChromeOS sandboxes GNU/Linux processes.


> macOS is on the path to adopt iOS permissions model and long term roadmap is to apply the sandbox to everything

I do not see where this is mentioned.


I will show it later then.


That will be the MacBook's death.

Many open source projects will not participate in this.

If this kills brew you will also lose a lot of devs.


iPhone and iPad don't seem to have suffered from lack of open source projects.

Neither do game consoles or the large population using Windows based systems.

I never cared for brew on the occasional moments I get to use Apple computers, XCode and default tooling is more than enough.

Which is like what the large majority of developers targeting Apple devices actually care about.


The MacBook is a general purpose computing platform. The iPhone and iPad are not. Locking down the Mac will make it unusable for many, many people. It will indeed be the death of the platform, as most devs abandon it entirely.


As a Mac user who develops high performance scientific applications portable between all UNIXen (Linux/Mac/BSD), I still can write my code pretty easily on the platform.

I personally don't use Homebrew et al., but I have a Linux VM which handles that stuff pretty well.

I didn't develop a Mac specific "application" though.


The large majority of devs that buy Macs aren't UNIX FOSS devs, but rather devs that care about the Apple platform.


My experience has been the opposite. Of all the people I've worked with using Macs, all of them were developing cross-platform open source software. I've yet to meet a single developer making MacOS applications.


They would be better off sponsoring OEMs that try to keep BSDs and GNU/Linux hardware alive then.

On my Mac circle it is all about store apps and Web apps (Java/.NET Core based).


This doesn't match my experience so far, could you point me to your sources for this claim?


Just like you, my experience so far.

Then again, I only hang around with Mac devs that target iDevices and macOS, using Objective-C, Swift, C++ and Web.


This doesn't kill package managers like Homebrew.


There is no real sandbox model on Macs if you don’t go through the Mac App Store, only code signing to detect that the app hasn’t been tampered with and to validate the author.


Until Catalina, which will more aggressively use permissions and require all software to be notarized.


Neither statement is true.

There is still the same control-click to open non-signed software, and there is still no aggressive permission model outside of the App Store.


There is, but even the control click will only allow you to open signed software. Unless you build the software yourself (I'm not sure how homebrew still works) you cannot run it if it's not been notarized by Apple.

Firefox was broken on Catalina for a while, even though the main app was notarized. Some internal binary wasn't notarized, and no amount of control clicking would get Firefox to work until Mozilla notarized everything in the build.


For users who know what they are doing:

https://forums.macrumors.com/threads/unsigned-apps-catalyst-...

  sudo spctl --master-disable


The users who really know what they're doing are going to refuse to disable system integrity protection. I paid a shitload of money for the T2 chip, secure signed boots, a virus-free environment and complete peace of mind from malware. No way I'm turning that off on a work machine.

I have a Raspberry Pi for hacking, I'm happy to root the hobby computers, not the work ones.


That's why I find postings like this dangerous. If an author is asking someone to run a command, they really need to explain what the command does and what the tradeoffs are.


The command doesn’t do anything by itself, from what I can tell. It just enables the option to run unsigned code.


I find it weird that you don't expect root privileges on devices you do work with.

It seems like this conflates the notion of having root privileges with turning off security. There is no meaningful connection between the two save in situations where there is no meaningful way to control said security layers save destroying them.

For example, refusing to boot a bootloader that isn't signed doesn't require your OEM to hold the only possible key that can be used to sign said bootloader.


Which are a very tiny percentage of typical Mac users.


As it should be. The vast majority of users should only run signed software. That leaves the ones who know what they are doing a way to bypass it.


You mean those who, after doing that, just run "curl | sh", as has become trendy among the younger UNIX generation?


I would not be surprised at all if this episode has some concrete ramifications in 10.15. It could take the shape of something akin to iOS’ location permissions for applications that want to run a server, or even a first-party framework for accomplishing the thing Zoom was trying to do, but it’s probably safe to say what Zoom was doing won’t be possible next year.


Because macos has no decent concept of package management or containerization.


I mean. The issue at hand was that they purposely left the webserver behind to auto reinstall if a zoom link was clicked. This was an intended feature, and the same could have been done on Linux or Windows. Package management or containers are irrelevant to this conversation.


A package manager would typically have removed the web server, too.


Sure, the "package managers" on Linux, Windows, and macOS all behave in pretty similar fashions: a manifest of files that the installer knew at install time. That doesn't stop a program from installing anything else at run time, or even in the installer (since they can define what to remove in a lot of cases). This wasn't an "accident"; it was purposely left behind with the intention of being used to onboard users easily even after they removed the client. This would have pretty much been an issue on every platform (had it been implemented on other platforms). And please, don't tell me "but Docker!" Docker, at present, isn't really usable with GUI applications yet.
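
The manifest behavior described above can be sketched in a few lines of shell. This is a toy illustration, not any real package manager; all the paths and filenames are made up:

```shell
# Toy model of manifest-based uninstall: the package records the files
# it installs, and "uninstall" removes exactly those files and no more.
tmp=$(mktemp -d)
mkdir -p "$tmp/opt/app" "$tmp/home/.app"
touch "$tmp/opt/app/bin" "$tmp/opt/app/lib.so"
printf '%s\n' "$tmp/opt/app/bin" "$tmp/opt/app/lib.so" > "$tmp/manifest"

# At runtime the program drops a helper the manifest knows nothing about:
touch "$tmp/home/.app/opener"

# Uninstall: walk the manifest, remove only what it lists.
while read -r f; do rm -f "$f"; done < "$tmp/manifest"

ls "$tmp/home/.app"   # the runtime-dropped helper survives the uninstall
```

The runtime-dropped helper outliving the uninstall is exactly the mechanism by which Zoom's web server outlived the app.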


And please, don't tell me "but Docker!" Docker, at present, isn't really usable with GUI applications yet.

But Flatpak! Flatpak applications can be sandboxed and you can install/remove applications as one unit.


Is Flatpak still open to the issues outlined at http://flatkill.org/?

If so, it doesn’t seem much better.


A package manager would be designed to remove everything except user data. Intentionally hostile behavior would be unaffected. One might hope that the packager, the person that is, would have refused to include software from incompetent or hostile developers.

That is something an app store has a problem with, due to sheer volume and a default-allow policy pending mostly automated checks.


As far as I know most traditional package managers only remove files and folders declared in the package. Not files installed somewhere else during the install script or created by the binary when it runs.


Files installed elsewhere during the install script are treated as config files and can be purged, no?


that is explicitly incorrect. macos has a very nice and effective way of packaging all the components needed by an app into tidy bundles.

the problem in this case is that the developers just couldn't be assed even trying.


Not really. If you have a standard app that can be dragged into the trash, sure. But if you have a kernel extension, a launch daemon, or any application data stored locally, you cannot clean up after yourself without a custom uninstaller. Windows is far ahead with its centralized Add & Remove Programs area.


Not really. If you have a standard app that can be dragged into the trash, sure. But if you have a kernel extension, a launch daemon, or any application data stored locally, you cannot clean up after yourself without a custom uninstaller.

This is false. Kernel extensions can be part of the application bundle and will be unloaded and removed when the bundle is removed.

Installing KEXTs in an application bundle allows an application to register those KEXTs without the need to install them permanently elsewhere within the system hierarchy. This may be more convenient and allows the KEXT to be associated with a specific, running application. When it starts, the application can register the KEXT and, if desired, unregister it on exit.

For example, a network packet sniffer application might employ a Network Kernel Extension (NKE). A tape backup application would require that a tape driver be loaded during the duration of the backup process. When the application exits, the kernel extension is no longer needed and can be unloaded.

Source: https://developer.apple.com/library/archive/documentation/Da...

You can also launch agents that are part of your application bundle:

https://developer.apple.com/documentation/servicemanagement/...

It is true that you cannot remove application data, but that is a feature (maybe users want to retain the data) and also does not happen in e.g. Linux package managers.


I am not sure it is false if you want it loaded on startup.


No, the dev intentionally circumvented the bundling and containerization that exists, and macOS couldn't prevent it. That said, Windows and Linux can't prevent it either, in common configurations.


macOS has a very good concept of package management and containerization. It's just optional because people get even more up in arms when their old software doesn't work any more.

Apple also makes a fantastic computing platform with very good mandatory isolation, namely, iOS. If you're interested in isolation in preference to compatibility with traditional desktop software, an unjailbroken iPad Pro with Smart Keyboard is a pretty good option.


I don't think it's just old software that might break. One must also consider the software not yet to be written. If the isolation is too constraining for some type of application that really needs a privilege, and isolation is mandatory, then some amount of innovation will just have to happen somewhere else or not at all.

Remember old school Mac was full of hacks upon hacks upon hacks, many by third parties, and there was cool stuff in there too. The App Store mentality has caused everyone to overreact and think that every third party app on the planet will turn into the worst conception of Win98 era malware overnight if unconstrained, when this is just one outcome among many possible.


Probably to not allow it to run if the users didn't update Zoom.


I wonder what happens now to the Product Owner who decided it was OK to install hidden web server on user machines?


Probably nothing, and mutual lamentation from product and marketing people about the loss of their easiest customer retention strategy.


eng has responsibility here for going forward with this


This was likely a product manager decision, implemented by some hapless kid fresh out of college.


Has anyone checked that `dpkg --purge zoom` does the right thing, on debian/ubuntu?


Keep in mind dpkg never removes user data.

So that said, I checked, and it removes everything zoom installs that isn't in a user directory, plus (there is an extra script that does this):

    remove_folder "/opt/zoom"
    remove_folder "$HOME/.zoom/logs"
    remove_folder "$HOME/.cache/zoom"
Which is stupid since it's removing this from root, who probably never ran zoom.

Note it removes logs from .zoom, but not the directory itself. Which is good, since there might be user data in there (chat logs, and recordings).

Unlike on Macs, there is no hidden web server.

There is also (yes, it's commented out):

    #logged_in_users=$(who -q | head -n 1)
    #sorted_users=$(echo "$logged_in_users"|tr " " "\n"|sort|uniq|tr "\n" " ")
    #for user in $sorted_users;do
    #       echo "removing $(grep -w ^$user /etc/passwd | cut -d ":" -f6)""/.zoom..."
    #       remove_folder "$(grep -w ^$user /etc/passwd | cut -d ":" -f6)""/.zoom"
    #       echo "removing $(grep -w ^$user /etc/passwd | cut -d ":" -f6)""/.config/zoomus.conf..."
    #       remove_file "$(grep -w ^$user /etc/passwd | cut -d ":" -f6)""/.config/zoomus.conf"
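
The commented-out loop above is essentially "for each logged-in user, look up their home directory in /etc/passwd and remove their Zoom state." The passwd lookup can be done more cleanly with awk than with grep|cut. A sketch in dry-run form against a sample passwd-format file (this is my own illustration, not the actual Zoom script):

```shell
# Sketch: resolve each user's home directory from a passwd-format file
# and report the per-user Zoom paths that would be removed.
passwd_file=$(mktemp)
printf '%s\n' 'alice:x:1000:1000::/home/alice:/bin/sh' \
              'bob:x:1001:1001::/home/bob:/bin/sh' > "$passwd_file"

for user in alice bob; do
    home=$(awk -F: -v u="$user" '$1 == u { print $6 }' "$passwd_file")
    # Dry run: a real uninstaller would rm -rf these paths instead.
    echo "would remove $home/.zoom and $home/.config/zoomus.conf"
done
```

On a real system you'd read `/etc/passwd` (or `getent passwd`) instead of a temp file, and you'd still want to leave chat logs and recordings alone, as the actual script does.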


Thanks.


I was thinking to myself “it is too bad Apple can’t just disable this like they could have on iOS, cause I suspect most people I know with Macs would be vulnerable to it and it is next to impossible to explain to a nontechnical user how to actually uninstall this”.

Kudos to Apple for nuking this malware.


This explains why I couldn't find it on my system... was scratching my head when the .zoom folder wasn't on either system that had the Zoom.app.

Strong work Apple.


Apple is punishing Zoom, because they explicitly built this mess to get around Safari appropriately prompting users to decide whether they wanted to open the app on each meeting join. If you are a Safari user, there was never a vulnerability. You’d be prompted. Why is no one talking about Chrome and Firefox’s lax security posture here? It’s frustrating.


Note that Dropbox also opens up three servers on your Mac, though when you exit the app they go away, so they are arguably discretionary. I assume they are for LAN syncing, though I don't know why that would require three ports.

They're blocked in my Little Snitch anyway, so no problem.


Disturbs me somewhat that Apple has a way to silently push changes to laptops without user interaction.


You can disable it in Preferences -> Software Update -> Advanced -> Install system data files and security updates

From there you can manage the updates manually from the command line with the `softwareupdate` command. e.g. `softwareupdate --list --include-config-data` will show available updates, `softwareupdate -ia --include-config-data` will install them, etc.


This has been in place since 10.6 Snow Leopard. It's part of their built-in anti-malware system (MRT + XProtect + Gatekeeper).

It's no different than a virus scanner auto-updating its signatures.


Except that if you had auto-updates disabled, you'd be somewhat surprised if the virus scanner could still update its signatures.


As one of the posters already mentioned, it can be easily disabled through the OSX GUI by changing the setting in Preferences -> Software Update -> Advanced -> Install system data files and security updates


Windows update does the same, no?


Windows Update leaves behind a lot of logs with KB entries, so I don't think they're trying to do anything secretly. If Microsoft changes your software, you know about it.


Nor is Apple. See other comments on this thread.


When I was at Microsoft, Windows' policy was only to kill applications like this with the explicit consent of the manufacturer (i.e. they were usually asking us to do it because they are unable to patch themselves, not the other way around), and only with a very specific version range.


Think of it as a malware scanner, like Windows Defender


Where can I find the technical details regarding this patch?

Another application I've been running for years won't start-up anymore and is logging:

> Thu 11 Jul 2019 23:12:59 AEST Waiting for web server to come up

It may be coincidental, but would be good to know what Apple changed.


I had a prospective client/customer, just today, schedule a Zoom meeting for tomorrow. I was aware of the issue this week, but I wasn't really going to make a lot of noise with someone who was just a prospect.

I figured.. this has probably evolved already to an acceptable situation.

So the calendar invite came in, I started up zoom to see what it would do. There's an update available, where zoom says they're abandoning the local web server.

Upgraded, should be good for tomorrow.

Came here, noticed the "softwareupdate -i MRTConfigData_10_14-1.45 --include-config-data" command and ran it. Checked last update -- back in June, to 1.42 ...

Updated that, ready to go.

Business continues, maybe I'll delete zoom another day, but not just yet.


No wonder Apple took this step!

Many users are unable to enable the video feature even after applying the recent patch released by Zoom. Also, Zoom has become a security joke/conversation topic when starting con calls!


It bothers me that I see an increasing number of apps that run a local web server. I've got half a dozen apps, mostly development tools like pgAdmin, that force me to run the app and then access the UI through a browser.

How many such apps am I running that I don't know about? And how many of them are exposing my system to malicious web sites, or to curious people in my office on the same subnet? I wish I knew.


This whole saga and how it played out is the final nudge I needed in my decision to move completely from Android/PC to Apple


Why do web browsers even allow access to localhost? Seems like developers just use this to abuse/violate user preferences anyway.

I think I'd be happy with a popup once-per-tab asking me for permission for a web page to talk to a local web server... Might even be okay if it's scary (Are you a developer?)


I appreciate Apple's handling of these issues, but is there a local log or something where you can see all such "secret" updates?

Edit: Apparently there is:

https://news.ycombinator.com/item?id=20409200


Does anyone know if issues like this would only affect the current user account?

I currently have a separate limited user account just for meetings, and that’s where I install various meeting apps. So in my case is there any way to know if Zoom or WebEx would install stuff on all accounts?


This is why I don’t trust apps outside the App Store as much.

If we can’t trust an app that is the cornerstone of a 25 billion dollar business (Zoom’s market cap) not to install malware, then I don’t know.

I want the trust through the App Store.

PS. I love Zoom, and find it to be the best conference solution out there.


I haven't been following this issue, but i am very wary of Zoom since they automatically turn your camera on and broadcast your stream as soon as you click a Zoom link in your web browser which seems like a major issue to me...


You can judge how a company functions internally by how they respond externally.

Zoom’s initial response to this incident was shameful. They basically said “that’s how our app works. F U”

I am moving away from Zoom.

Any suggestions? Preferably open source


Zoom should be investigated by the US government.

https://news.ycombinator.com/item?id=20408502


I'm so glad they're killing it too. Zoom did a horrible job handling this. (yes, I'm assuming the reported timeline is correct).


Please, Apple, give me a way to disconnect my microphone and webcam on an OS level so apps can't randomly access it.


Doesn’t that already exist under Preferences > Security & Privacy?

Afk rn so i might be misremembering the name of the setting.


TIL! Thanks! This is exactly what I wanted.


Wrt webcam I use a sliding webcam cover to block it on a hardware level.


Right, but when you try to report a busted macOS API to Apple that breaks your app, you have to walk on egg shells...


Now they should push an update to forbid apps from doing this in the first place, or at least not without asking the user.


How do these kinds of silent updates work?


This is an update to Apple's Malware Removal Tool (MRT) blacklist.


Zoom should just move to a webRTC based setup with no plugins or anything. Wouldn’t that make things easier?


Why wasn't Microsoft susceptible to this? Did the Microsoft Windows 10 firewall stop this?


What I'm more surprised about is how Apple can giveth and quickly take it away remotely.


Once again, a popular site that is completely GDPR non-compliant. To opt out of tracking you have to go through six layers of obfuscation. And don't take a wrong turn, or you will just come to walls of text, meant to do nothing but make you throw up your hands and give up. Or you can just opt in to everything with one click.


looks good to me. Do more and more at that pace.


i love apple for these kind of things


Who cares?


I imagine Apple has known about this daemon for a long time from its OS analytics.


What OS analytics?

Apple gathers various anonymous metrics, yes, but I don't think they collect information on arbitrary web servers running on Macs.


Crash reports include running processes.

What do you think the anonymous metrics are if the process list and open sockets are excluded?


Crash reports do not include information from processes other than the one that crashed. They also don't include open sockets. If the web server crashed and produced a crash log, maybe that'll get sent, but I don't know if Apple even collects crash logs from non-MAS apps anyway (what would they do with them? They collect crash logs from MAS apps in order to provide them to the developer).


Zoom? Thought they were talking about the zoom gesture in MacOS. I’ll move right along then ...


So they installed without further confirmation a silent update that removes a silent installer? No irony?


Not really, no. Apple's update is just a configuration change to XProtect, which is the anti-malware system. It's not an OS patch, it's just like any third-party malware system auto-updating signatures.


etrecheck found this for me maybe 2 months ago. coincidentally right in the disclosure window!

i tried etrecheck on a lark. at the time i found it unremarkable. oh, i have this leftover dingle here, thanks etrecheck, i'll just remove it then. but otherwise i wasn't screaming etrecheck from on high.

now i am!!


Everything about the etrecheck website screams "system optimizer scam!" and this comment does nothing but reinforce that feeling.


If you try EtreCheck, you’ll find that it has nothing in common with system optimization scams. EtreCheck has no (or very few) magic “fix it” buttons. It only finds issues that appear to be problematic and (optionally) gives the user instructions on how to fix those issues. EtreCheck is definitely an end-user tool. It is designed for people who don’t know what software they have installed. It will even provide people with anti-scam tips if it detects that they might have installed scam apps in the past. For other users, it could still be helpful in listing partially uninstalled software like what is being discussed with the Zoom issue.

The latest version of EtreCheckPro 6.0.2 has more features that might appeal to very tech-savvy users. It has a storage analysis feature to help find how your disk is being used since Apple’s own tools are notoriously bad at this. It also has a graphical view of the analytics data that macOS automatically collects. You won’t find this analytics display in any other tool.


it's a very simple tool, to be sure. but come on, it's not like it's steve gibson wares ...

the free version is perfectly adequate. it's simply an information gathering and reporting tool. anyone could write this tool themselves -- the mechanics of it are beyond simple. but like all sysadmin tasks, gathering the requirements is the hard part.


Alas, EtreCheck is far from simple. Early versions were little more than a wrapper around system_profiler. But the current version is very sophisticated (~40K lines of ObjC, C, XSL, XML, HTML, and JS) and has some one-of-a-kind features. (see above)


> Apple said the update does not require any user interaction and is deployed automatically.

I think this scares me just as much. #singlePointOfFailure #rootKit


This isn't an "update" in the traditional sense and you can turn this off from System Preferences.


My coworker owes me lunch; I said they would yank the Zoom app for breaking the App Store's ToS (close enough hahaha). Apple can't be very happy with public companies breaking their platform, especially in the name of "UX", which is supposed to be (and is) their differentiator.


Except Zoom isn’t available via the Mac App Store, so there’s no ToS for them to have broken.


My MacBook Pro froze this morning...the mouse moved, but I couldn't interact with anything. After a few mins, I hard rebooted it, and it worked fine after that. I'm not sure if it was related to this update, but it's the first time that this has ever happened, so it's a little bit of a coincidence.


I had the exact same symptom, also for the first time it'd ever happened.


This has been widespread for the last week and a half, but nobody knows why. Unrelated to zoom.


Looking at my logs, I see a gpu reset at the time the problem occurred:

Event: GPU Reset RCS Ring is: - busy - in the ring <-- Appears hung

I suspect the automated GPU reset didn't quite work, as nothing was redrawing properly.


It's a coincidence.



