Zoom fixes major Mac webcam security flaw with emergency patch (theverge.com)
326 points by azernik on July 9, 2019 | 146 comments



Stories like this are wonderful evidence of the effectiveness of public disclosure of security vulnerabilities, and are always heartwarming to see. Remember, 90-day disclosure windows are just a courtesy.


This is why I consider bug bounty programs problematic: they've been co-opted from a system for managing responsible disclosure into a system for containing and managing non-disclosure.


Bug bounty programs can be great tools for rewarding researchers, securing products, and introducing new and amateur researchers who may never have reported a bug before to community standards.

But like all things, they can also be used to keep software insecure and hide issues by buying off researchers.


Mmm. This post by Matthew Garrett is good on this: https://mjg59.dreamwidth.org/52432.html


Glad to see the company is changing course, but I’m not sure it would have happened without the public shaming. I want companies to fix things because something is insecure and it endangers the public, not because they have their feet to the fire.

I know companies don’t always respond right the first time, I know I haven’t, but Zoom had over 90 days to consider their responses and possible options / software changes. Instead, they were dismissive of the entire thing and only changed course after loud public pressure.


> but I’m not sure it would have happened without the public shaming.

It wouldn’t. From the article:

> The move is a surprise reversal of Zoom’s previous stance, in which the company treated the vulnerability as “low risk” and defended its use

They’re backpedaling because of the bad press, not because they think this is better for users. And if they don’t believe what they did was wrong (if they did, they would have never done it or would have fixed it previously), it’s just a matter of time until they pull other crap like this. This is not the only user-hostile behaviour of their app[1], it’s just the most egregious we know of.

[1]: https://news.ycombinator.com/item?id=20390613


What's scary is that they have a bounty program, but it comes with a gag clause.


Most corporate bounty programs are going to include an NDA and require following their release schedule. No corporate legal department is going to sign off on a bounty program that would both pay third parties for bugs and allow outside researchers to unilaterally decide when to disclose the bug to a wider audience.


They include an NDA, but a time-limited one - i.e. they require the researcher to give them a period of time (usually 90 days or more) to create, test, and deploy a fix, after which time the researcher can publish. Zoom's NDA was a permanent gag order, which puts no pressure on the company to actually fix the issue and doesn't alert laggard users that they need to update their software.


They weren't going to fix it without the bug being made public.


This seems par for the course for their support. I tried to report that their signup form automatically, silently deletes spaces from your password (!?). After a painful process of trying to explain the issue, it was summarily ignored.

They didn't really seem to understand that it was a bug.


Is noisily deleting passwords acceptable in your eyes?

(i.e. "Your password contains spaces, which is disallowed by our policy. Please try again.")


It's annoying in either case. Passwords should be any string I want! You're just going to hash it anyway.

I found it particularly egregious that Zoom's form auto-trims any spaces from the end of the string - so they are deleted as you type with no feedback (unless you happen to be watching the dots flicker).


> You're just going to hash it anyway

Wow, you're optimistic :)


I remember when I was starting out with SQL databases, someone managed to hack the site using SQL injection. So I made a SQL sanitization function, but soon enough someone complained that they couldn't have escape characters in their password. =) Nowadays I always use a library that parameterizes all SQL variables to avoid SQL injection.
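
For illustration, a minimal sketch of the parameterized style (Python's sqlite3 here; the table and values are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")

    # Hostile input is harmless as a bound parameter: the driver never
    # splices it into the SQL text, so no sanitization function is needed.
    name = "alice'; DROP TABLE users; --"
    conn.execute("INSERT INTO users (name, password_hash) VALUES (?, ?)",
                 (name, "fakehash"))
    print(conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchone())

The same placeholder approach also means passwords with escape characters, quotes, or spaces pass through untouched.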


Does it matter? I mean is there another way of entering this string which preserves the spaces, or is deleting them just part of the hash function?


Yes, you can paste a string with internal spaces. I guess you can also disable JS and type whatever you want. Passwords with spaces work absolutely fine, too - it's just the signup form that is broken.


They probably had too many people accidentally copy-pasting strings with spaces into the form. Like the good old "double click to select a word" also picking up the space after the word.

The reason I can empathize with your complaint is that it's highly unlikely they're able to keep those restrictions consistent across all password forms and login methods.


And this is why anyone who trusts that organization in any way moving forward is a fool.


Nah, leadership can change; see Microsoft and Apple.


The Microsoft telemetry spyware was completely opaque and they kept changing it to work around users blocking it until they were shamed into publishing almost everything they collect. One still can't turn it off.

Apple usually needs to be shamed into admitting to and repairing any broken hardware design. They had to be sued in multiple countries to stop misleading customers to buy AppleCare and allow them to use the warranty guaranteed by law.


That’s more of a reason not to trust anything ever. If leaders change for the worse, your investment in the company gets screwed no matter how well they’d done previously. And that investment can be stocks or it can be data, to give an example which you can’t just pull.


Leaders influence company culture but it's also a self-feedback loop where leaders that fit the company culture end up being leaders in the first place. To break that feedback loop and change course is usually a conscious choice for a company. Even then leadership change and direction at the top is only one of the many signals. It's entirely possible for Zoom's CEO to be a security minded person and the PM/Infosec person who reviewed the security report decided it's not a flaw worth patching.


Slow degradation to industry standards is the norm not the exception. That accelerates with growth as the original culture gets diluted.


Microsoft and Apple have barely changed in all the ways they are bad, though. Specifically, Microsoft has just moved to a different place in the embrace, extend, extinguish cycle. Give it a few years and everyone will hate them again (and maybe be surprised that it happened at all) because they did something unethical.


I think that we'll never see the same feelings about Microsoft as existed in its heyday precisely because those feelings weren't just about what Microsoft tried to do, but actually about what it did. Microsoft will (probably) never again enjoy the hegemony it once did; so, however evilly it acts, it'll never be able to translate its evil deeds into the same impact that they once had.


Companies are not interested in not endangering their users. They only care about making money. So you have to make endangering their users negative on their bottom line.

This is a perfectly workable system that does not require any party to not be selfish. It's a much better system than those that require somebody to be 'good' rather than 'rational'.


Yeah this, like Superhuman, is about damage control, not a fundamental desire to do the right thing.


This is typically what laws are for. Sadly, those of us in the Land of the Free will probably have to wait a decade or two to get anything reasonable, so I fully expect this trend to continue as-is for now.


Tech executive changes stance after very public embarrassment that could impact their bottom line. If they didn't get the backlash, they would have kept their course.

There's not really much "willing to accept responsibility" here as far as I'm concerned.


Exactly. My company is actively shopping for a conferencing tool, and Zoom just ensured that it's eliminated.


Anger is certainly justified, but it should give way to reconciliation once the offending party truly repents.

Do we want a world of people who change their ways, even if for somewhat impure reasons, or a world in which no one ever does because it's pointless?


We should definitely treat people as human beings and understand we're all fallible, we should try to forgive mistakes, and we should hope people learn when they get something wrong. That said, if someone does something to make us stop trusting them it's on them to show they've changed; it requires effort from both parties.

There is no reason to extend this philosophy to corporations though. We can decide to use a competitor instead. No one needs to show loyalty to a supplier who screwed up.


They haven’t repented yet, though. Read their response.


I don't understand, is there something specific you're referring to? From the article it looks like they're even removing the local web server.


No patch? What the actual fuck


The thought process is that if they did this bad thing, only walked back the bad thing when it became public, and are maintaining that they didn't think the bad thing was that bad in the first place...what other nefarious things will that product team do?


> Do we want a world of people who change their ways, even if for somewhat impure reasons, or a world in which no one ever does because it's pointless?

You forgot another option: not making the mistake in the first place.


I've seen no evidence of true repentance here.


Let us be quite clear here:

"Farley maintains that the relative security risk of the vulnerabilities that security researcher Jonathan Leitschuh disclosed yesterday were not as severe as Leitschuh made them out to be."

That's the CIO still unwilling to accept what a poor decision it was to subversively install a web server on users' computers. They need to shut the fuck up unless the words coming out of their mouth are "we're sorry and we'll do better in the future".


Let us be clear. Running a local helper agent that accepts properly formatted requests (including authn/authz) to provide valid, expected functionality is a perfectly valid architectural choice for a full-fledged desktop computer, and we shouldn't throw out this capability.

The mistakes I see here are:

- UX dark patterns – making uninstall hard/duplicitous

- The helper process having security vulnerabilities – accepting unauthenticated requests, providing unnecessary privileged operations like update/reinstall, etc.

- Giving the meeting host control over participants' video on/off

- Not acknowledging the mistakes quickly and fixing them fast; being defensive and using the 'others do it too' excuse

Also, given the internal fight for resources/prioritization, and the plain philosophical misalignment between fixing security vulnerabilities vs. UX funnel optimization (reducing the number of clicks), in a company like Zoom I am not at all surprised that the UX side always won and the security side lost, and that it took public pressure to shift the balance. Anyone here who has been in this situation knows what I'm talking about.

Unless the cost equation changes, it is hard to get business users to change their priorities. From their perspective, they didn't understand what the heck their internal security guy was talking about. It would have been one person or security team they normally don't interact with. So why would they listen to that guy over the UX product guy they interact with daily, who they see as the one who built the hockey-stick growth in their customer NPS scores, and who wasn't happy about adding the extra click back?

So the only workable answers I see are public outrage like this (still not very scalable or consistent) and, better yet, legal protections/regulations that make it extremely expensive for companies to ignore this stuff.


Running a local helper to take control from the browser is absolutely a bad architectural choice. The browser doesn't allow websites to open local programs without going through a user confirmation process for damned good reasons, and Zoom decided to put a lot of effort into circumventing that security measure to save users a click and so boost their conversion rates by a few percentage points.


It really sounds like what we need is:

“Always allow zoom.us to open ‘Zoom’” within browsers.

Even Spotify runs a local web server for this.


This is inherently a security problem because web sites can open URLs without user awareness or deceive users about the content of said URLs.

On Ubuntu, xdg-open phrases the checkbox as something like "always allow X program to handle foo:// URLs?", which is probably not comprehensible to the average user; more accurate phrasing would be "always allow websites to open X program?" Which I think indicates why I'm so skeptical that this is a good option to give to users.


Yep, those definitely are the dark patterns.


Helper agents are dark patterns.

Unless installing an always running service on my device is directly related to the intended functionality of your software, setting one up is unwelcome and deceptive. Especially when it is done to work around existing security controls.


Absolutely. If the user indicates they don't want your software running anymore, it should stop.

In Zoom's case, if the user exits the app, the web server keeps running. When the user uninstalls the app, the web server still keeps running.

The user twice said "I don't want Zoom's software running on my computer," and both times Zoom ignored the user's request.

This behavior is both unethical AND malicious.

Edit: wrote up more thoughts on this: https://salibra.com/p/viral-growth-doesnt-mean-writing-virus...


I don't want the web server running when Zoom is running either.

Video conferencing has nothing to do with a web server or any server listening to ports.

When I install a video conferencing client its only function should be me initiating a connection.


Then why don’t you switch to webex?


I have never been in the position to choose other than voicing my opinion, all video conferencing sucks for some reason or another, and it has never been anywhere near the most important thing.


I disagree with declaring all helper agents as dark patterns.

From a regular user point of view, it would be acceptable to have a helper agent as long as it follows:

- it uses the platform-provided background process methodology (for example, launchd launching your process on demand when its socket is hit),

- and it is made clearly apparent that such a thing is installed on your system (say, via a system preferences panel, a status bar icon menu, and an in-app preferences panel),

- and it cleanly uninstalls as part of a simple, standard, regular uninstall.

And from a technical/security point of view, it would be acceptable if it:

- has the minimal necessary privileges and proper separation of concerns,

- does only what it needs to provide user-expected functionality and doesn't do random egregious things,

- has secure ways to allow only expected/authorized callers to talk to it (see the sketch after this list),

- and does not violate any platform guidelines or try to circumvent protections.
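
To make the authorized-caller point concrete, here's a minimal sketch (not Zoom's actual design; the header name and port are made up) of a local helper that only answers callers presenting a shared secret:

    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Shared secret handed to the companion app out-of-band, e.g. via a
    # file readable only by the current user (an assumption for this sketch).
    TOKEN = secrets.token_hex(16)

    class Helper(BaseHTTPRequestHandler):
        def do_GET(self):
            # Reject any caller that doesn't present the secret, including
            # random web pages poking at localhost.
            if self.headers.get("X-Helper-Token") != TOKEN:
                self.send_response(403)
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    # Bind to loopback only; never 0.0.0.0.
    HTTPServer(("127.0.0.1", 18080), Helper).serve_forever()

A web page can't attach a custom header to a cross-origin request without a CORS preflight the helper never approves, so drive-by pages are locked out of anything with side effects.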


> From a regular user point of view, it would be acceptable

It would not be. Stop pretending that acquiring consent from a statistical model counts as acquiring consent from the actual user.

The things you wrote may make it acceptable for you, but they certainly ain't sufficient for me.


> It would not be. Stop pretending that acquiring consent from a statistical model counts as acquiring consent from the actual user.

I don't know what you are referring to here. Care to elaborate?

> The things you wrote may make it acceptable for you, but they certainly ain't sufficient for me.

This isn't about individual taste. Nothing I wrote above was about my personal taste. My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.


> I don't know what you are referring to here. Care to elaborate?

You make assumptions about individual user's consent from whatever bulk experiences you might have measured. Either that, or you didn't even measure anything and therefore you're just making things up about what's "acceptable."

> This isn't about individual taste.

Who said anything about taste, it's about individual boundaries.

> My point was about differentiating between the OS provided valid architectural mechanisms vs surreptitious dark patterns applied on top of it by an application developer.

First, that's a word salad. Second, after untangling it, I'm pretty sure you mean "if there's a mechanism in the OS that enables this then it's okay", in which case that's even more absurd than the usual "if it's legal then it's okay". Look, even Zoom's "let's leave a tray icon there when you thought you quit the app, without putting up a honking huge notice that you just did that like a decent app usually does" is more about having a way to disavow ("see, we did leave a notification, lol") than about actually ethical design. That's the _essence_ of a dark pattern.

Seriously, though, you're being creepy and advocating pushing people's boundaries here.


But for what?

My caveat is that a helper service is acceptable when it is doing something necessary for the basic function of the software.

Virus scanners, file sync, and things which are obviously servers fit the bill. Not much else I can think of does.


LibreOffice has an agent that preloads Java binaries to make its startup time comparable to MS Office. There are valid uses for startup agents; please get over yourself.


I don't want that.

I don't want installing an office suite to permanently take away a percentage of my computer's resources.


That’s the meat of it: Zoom wanted an app feature macOS said was a no-no, so they coded up an insecure workaround. On iOS that would get your app pulled, at the least.


I want an operating system with a permissions model which specifically forbids this kind of thing.

My Linux desktops are also always full of processes whose purpose I have to dig to figure out; unless I build my own distribution, it's hard to make anything feel satisfactorily under control.


So how does your OS differentiate between Apache and a local helper?


Non-OS-provided applications are installed as packages and given package-level permissions which are easily audited and revocable (without forcing an uninstall).

Apache has permission to start at boot, run in the background, and listen to 0.0.0.0:80,443. Photoshop has permission to write to files in $HOME, and connect to network services while the application is running optionally with explicit permissions for each access. Adobe's update service can be disabled with a click.


Well, Windows pops open a huge GD window that lets you decide firewall rules if it notices a change.


Helper agents are an integral part of said security controls, e.g. for XPC, privilege separation, etc.


> Unless the cost equation changes, it is hard to get business users to change their priorities

With GDPR getting teeth (see recent fines of BA and Marriott) for security breaches, I think this is the beginning of that cost equation changing.

But also bear in mind this is a company who have someone with the title Chief Information Security Officer. If alarm bells didn’t start ringing for that person when this vulnerability was reported, then they likely aren’t the right person for the job. Especially as Zoom have customers in the EU so that person is also likely their nominated Data Protection Officer and should therefore be well aware of the privacy requirements imposed by GDPR and the penalties for a privacy breach (which someone secretly recording webcam footage would surely qualify as).

As for local helper agents accessible from the internet, you only need to browse Google Project Zero to see what a bad idea that is.


Of the main facets of the problem, the vulnerability bothered me less than their obviously poor attitude towards fixing it in a responsible timeline, and that bothered me less than the discovery that they were running an always-active webserver to assist call launches and reinstallation.

Is that a common thing that programs do? Should I be expected to portscan myself frequently to see if software is unexpectedly running web servers? How much battery am I losing to this stuff?


The Verge article mentions it's reasonably common and names some programs that do it.

From the article, a tweet

--------

They are far from alone, a quick `lsof -i | grep LISTEN` shows that I have: Spotify, Keybase, KBFS, iTunes, Numi, https://t.co/MVSAJgN9yY… All running locally listening web servers.

— Matthew Gregg (@braintube) July 9, 2019


Did they just imply that every listening socket is a web server?


They are mixing apples and oranges indeed.


The Spotify one is most likely for Spotify Connect. I'm guessing (although not sure; someone could verify it) that Spotify Connect requires some sort of authorization to work.

Edit: here you go http://cgbystrom.com/articles/deconstructing-spotifys-builti...


> running an always-active webserver

It's one thing to run a webserver while your software is running.

It's quite another to leave it installed and running even after the user has uninstalled your application.

And to actively evade the user's attempts to remove the webserver component. Until this update, if you removed ZoomOpener from your Login Items and via `rm -rf ~/.zoomus`, it would miraculously reappear every time you participated in another Zoom meeting. (To stop this, you had to touch .zoomus as a file or otherwise make it harder to recreate as a directory. But if they had chosen to, Zoom could have coded around these countermeasures thus leading to an arms race, at least for a while.)
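
The countermeasure, as a sketch (the ~/.zoomus path is the one from the disclosure; the rest is illustrative):

    import os
    import shutil

    path = os.path.expanduser("~/.zoomus")

    # Remove the ZoomOpener directory if it exists...
    if os.path.isdir(path):
        shutil.rmtree(path)

    # ...then occupy the name with a plain file so the directory can't be
    # silently recreated the next time a Zoom link is opened.
    if not os.path.exists(path):
        open(path, "w").close()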


Or be like me and kill all zoom processes after you leave because you are afraid someone may be watching your next wank session


Not only is it common, there was an article about "how to write performant electron apps" at the top of HN last week explaining exactly why you should do that.


While Electron apps have the ability to introduce a security nightmare (just like every desktop app framework really), the authors do try to teach Electron users how to make the apps they develop a bit more secure - https://github.com/electron/electron/blob/master/docs/tutori...


> How much battery am I losing to this stuff?

Unless they coded something very stupidly, a listening socket that nobody connects to is not going to be on the CPU. It will be asleep waiting to be woken up by actual activity.

Not sure if any operating system would use that socket as a reason not to enter a low power state but I kind of doubt that.
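
A toy demonstration, assuming nothing pathological in the implementation: the accept() below parks the process in the kernel, and top/Activity Monitor will show it at ~0% CPU until something actually connects.

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # loopback only, ephemeral port
    srv.listen()
    print("listening on port", srv.getsockname()[1])

    # accept() blocks inside the kernel; until a client connects the process
    # is asleep, so a quiet listener costs essentially no CPU (or battery).
    conn, addr = srv.accept()
    conn.close()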


> Unless they coded something very stupidly

In this particular case, I don't think we can exclude that possibility.

In general of course I agree with you.


Yeah, lots of programs do that. The problem is (1) what kinds of things the daemon did, and (2) how they reacted to the disclosure with PR bullshit.


I've noticed that the Intel software update scan tool is a web page now, I presume it is a client side application that communicates with a local web server.

But I could be wrong.


This is why full disclosure is so effective. Nothing else works quite like dropping a full PoC and details of an exploit publicly to light a fire under their ass to fix it.


Well it’s an argument for responsible disclosure - you tell them, give them plenty of time to fix it, and publish.

But responsible disclosure absolutely does not mean “no disclosure”. It means give them a chance to fix it. If they choose not to you disclose so that people know that they need to take steps to protect themselves.

The important thing is that the disclosure must become public. It doesn’t matter that they pushed an update, as none of the victims who had deleted/“uninstalled” zoom will get the update, and without the update they’ll still be running the server.

The only way anyone would know about it is with the details being public.

I’m waiting for Apple to use XProtect to kill the server on all machines, as that’s the only true solution for the uninstalled victims.


Responsible disclosure doesn’t mean anything. It’s an obsolete term. You’re referring to coordinated disclosure.

https://blogs.technet.microsoft.com/ecostrat/2010/07/22/coor...


This case is actually a really great demonstration of why this often-repeated claim is false. This was responsible, but not coordinated, disclosure.


Only in the literal sense. A commonly cited issue with the term "responsible disclosure" is that it frames the discoverer as the one responsible for their actions, even though those actions benefit the public. On this viewpoint, the vendor can argue that it is not "responsible" to ignore the vendor, even though the vendor itself is being unreasonable. The term "coordinated disclosure" was invented to fix this abuse. You can't literally interpret "responsible" or "coordinated" without this context.


There’s nothing false about it. The term “Responsible Disclosure” was invented by a vendor as a self-serving PR tool to cover their own ass. “Responsible” is subjective and loaded, so it’s not useful terminology, and professionals in the field do not use it.


The security flaw isn't even the outrageous part. It's the secretly installed web servers that don't even remove themselves when you uninstall the app that make them scum.


That's not a bug, it's a feature! The server can "seamlessly" (surreptitiously) re-install the app when hitting one of their "join room" links.

At least some of the reason for putting it there in the first place is so it can hang around and be insecure in case they want to use it later.

The really insidious part is that users who uninstalled it previously won't receive this update removing it now. And a vanishingly small percentage of those will see this and respond by removing it.

I certainly won't ever hit the install button on a video conferencing browser extension ever again. If Zoom was doing this I have zero confidence in


Dropbox also does this, and so do a lot of other popular programs.

Run `lsof -i | grep LISTEN` to find out what servers are running on your machine.


In my case there isn't anything I didn't expect. Also, closing the apps that were running (e.g. RabbitMQ) actually stopped the web servers.


> It's the secretly installed web servers that don't even remove themselves when you uninstall the app that make them scum.

To be fair, dragging an "app" to Trash does not constitute un-installation. It was a poor design decision to implement features using a local web server, but let's not be so quick to attribute covert, malicious intentions.


> To be fair, dragging an "app" to Trash does not constitute un-installation.

Dragging an app to Trash MUST constitute uninstallation. If it doesn't, it is a bug.

Leaving configuration files in home folder for easier on-boarding after a reinstall is not the same thing as leaving a self replicating rootkit running all the time.

> It was a poor design decision to implement features using a local web server, but let's not be so quick to attribute covert, malicious intentions.

Zoom, with all its advertised features, works like a charm from the user's standpoint. It is not that easy to craft such a seamless video conferencing app, which makes me believe the team behind it is formed of really experienced people. If this assumption is true, "the poor decision" is actually the true intention, and is probably a feature in case Zoom needs to install extra software on my device when their business needs change. It feels more like a backup plan than a dirty hack.

In my humble opinion, experts ignoring the ethical consequences of such a decision are dangerous to society, and their intention can be considered malicious if not criminal.

I'm sick of seeing the blame always placed on the business people. Unless taken hostage and forced to act without consent, the developer should be equally responsible. We don't treat murderers, burglars, and scammers the same when they work under a boss.

P.S: I probably strawmanned your answer to express my own opinion. English isn't my native language, sorry if my words sounded offensive.


There are plenty of Mac apps where dragging the app to the trash doesn’t uninstall them.

Unlike windows which has the add/remove programs control panel there isn’t really a standardized way to uninstall things on Mac. (I think you can make a .pkg uninstaller but it’s rare to see that)

I went through the launch agents and launch daemons on my personal computer a few months ago and found plenty of obsolete stuff that was hanging around even after I no longer had whatever the associated app was installed.


Erm yes it does. That's how it is supposed to work on Mac and that's how it would work if they didn't sneakily install a web server.


The latest patch they rushed out is basically just implementing uninstall the way it should have worked all along.


RingCentral Meetings still vulnerable:

lsof -i :19424

https://www.ringcentral.com/whyringcentral/company/pressrele...


For those unaware, RingCentral white-labels the zoom.us product as their meeting solution.


Ironically, RingCentral's convention schwag includes a stick-on laptop lens shutter.


> But we also recognize and respect the view of others that say they don’t want to have an extra process installed on their local machine. So that’s why we made the decision to remove that component — despite the fact that it’s going to require an extra click from Safari.

Am I reading this correctly, their CIO believes it's the "extra process" that people are concerned about -- not the webcam vulnerability?


Amusingly enough, they never actually describe that part as a vulnerability in their blog post. It's a "concern" about a "seamless join process." The word only gets used with regards to the DOS vulnerability, which is only part of the problem. I get the need to at least try and spin things, but it's kind of obvious in this example. And given how people tend to get antsy when they start thinking about possibly being spied on through their webcams, downplaying it is probably counterproductive.


Amusingly enough, the standard menu position for Quit on the Mac is at the bottom of the menu… _now_ the Uninstall Zoom option is at the bottom of the menu, making it easy to accidentally invoke if you're used to selecting the last item there to quit. (I happened to have my hand on the mouse rather than keyboard at the time… normally I'd just cmd-Q).


Wow, I just checked and you are absolutely correct. How is it this hard for companies to check the Apple HIG for this type of thing?


This is why no researcher should sign an NDA after doing volunteer work for a for-profit corp.

If the reporter had agreed to the NDA required for the bug bounty, Zoom could have - and based on their earlier responses, would have - continued to ship this malware. And because the researcher had signed an NDA, they wouldn't have been able to inform the at-risk public.


Will shenanigans like this (declaring a security breach as not a security breach) be caught and fined under GDPR? According to the regulation, companies need to declare breaches within 72 hours, without undue delay, but Zoom left this unpatched for months!


As a non-lawyer forum commentator I can say with absolute correctness that it will (or will not) maybe apply.

More seriously: I would guess no, as the GDPR is concerned with data collection and compromise, but I can’t imagine they store all the video they forward.

Of course I wouldn’t be surprised if someone sues them in the US (but given that the US sees companies as people for rights, but not punishment I imagine that they’ll be fine).


“Can say”? Did you mean to write “cannot say”?


it doesn't matter, that's the joke


Someone would have to prove that their personal information was stolen or misused for GDPR enforcement to be relevant.


The earlier discussion is at https://news.ycombinator.com/item?id=20387298.


Specifically: the OP is just the news that Zoom agreed to make the changes the security community demanded. The previous discussion is the root discussion of the issue itself and of the Zoom response more generally.


With their incompetent behavior they've now put a big target on their back for security researchers. In the end that's good for consumers, since more issues will be found and promptly fixed. So kudos to the reporter!


What about users who had previously uninstalled the Zoom client? Must they now reinstall Zoom in order to be able to fully remove it? Surely there are users that won’t perform the manual update and will remain vulnerable indefinitely.


Although I agree having an active webserver with dubious security controls is a problem, the vulnerability as we know it today installs Zoom... and this new version of Zoom uninstalls the webserver. So it is (or at least could be) a self-patching vulnerability.


I believe rm -rf ~/.zoomus should do the trick


I'm confused. Does the patch now make it to where if you drag the app to the trash, it actually uninstalls?


A macOS "app" is just a directory with an executable binary and some convenient helper files for Finder. Dragging it to the trash does not remove artifacts, such as logs, supporting binaries, even methods of persistence, which may get placed somewhere else on the filesystem as part of a typical installation. This is not unique to Zoom or even Apple operating systems.


One thing I'd like to see (and this is definitely doable on macOS side) is that if you trash a .app, the OS would automatically revoke all permissions to it.

If this were done then even with the sneaky reinstalling, the user would be alerted by a system dialog requesting access to their webcam.


If you have a kext kernel driver implementing generic access to video input subsystems (think v4l), then why exactly would the user be alerted if the developer didn't add that feature?

Read: anyone can write a kernel driver for macOS; you are too trusting of your software vendors. Get a hardware switch.


> anyone can write a kernel driver for macOS

No, they can't. With SIP you won't be able to install it if it is not signed (and only a very few developers have certificates for kernel extensions), and in any case you will be alerted about it. Also, there are plans to completely disallow kernel extensions in the release after Catalina (since drivers can now run in userspace, I imagine userspace drivers will not get access to pre-installed hardware).


However, if you're going to automatically install a daemon, you can and should make that daemon check whether the originating app still exists, and automatically uninstall itself if not.

(Better still, reconsider whether you really need a daemon in the first place...)
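
A rough sketch of that idea (the app path, launchd label, and file locations are all hypothetical; a real agent would be signed and installed by the app):

    import os
    import subprocess
    import sys

    APP_PATH = "/Applications/Example.app"    # hypothetical parent app
    AGENT_LABEL = "com.example.helper"        # hypothetical launchd label
    PLIST = os.path.expanduser(
        "~/Library/LaunchAgents/" + AGENT_LABEL + ".plist")

    def self_destruct_if_orphaned():
        """If the app that installed us is gone, delete our files and unload."""
        if os.path.exists(APP_PATH):
            return
        for f in (PLIST, os.path.realpath(sys.argv[0])):
            try:
                os.remove(f)
            except OSError:
                pass
        # Unloading terminates this process, so it goes last.
        subprocess.run(["launchctl", "remove", AGENT_LABEL], check=False)

    self_destruct_if_orphaned()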


According to one of the updates in their response blog post (https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...), you need to choose it through the Zoom menu bar:

> We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server. Once the patch is deployed, a new menu option will appear that says, “Uninstall Zoom.” By clicking that button, Zoom will be completely removed from the user’s device along with the user’s saved settings.


First, does the patch actually fix it? Per the original report, they "patched" it once before with the flaw remaining.


I wonder if instead of the usual 90-day notice a slightly better approach would be an initial partial public disclosure of the issue, without divulging the actual exploit, and the fact that it had been communicated to the company so that a public countdown of the 90-day window can happen.

The exploit can then be divulged to the public, automatically, on the expiration of the 90-day window, regardless of whether it's fixed or not, as that may also be educational.

For example, after this Zoom issue other companies will hesitate to use a localhost webserver, but if the issue had quietly been fixed by Zoom other companies may still have been tempted to use similar approach.


More often than not announcing the existence of a vulnerability is enough to motivate people to find it. It’s much easier to find something that you know is there than to just experiment blindly.


Fair enough, but that can be countered by very minimal disclosure in the beginning, just that a vulnerability exists and it has been notified.

Even if we don't do that, I think we should at least reveal the issue after it's been fixed in all the cases so that other entities can learn from that.


Yes. I learned the security release process for a major open source project, and it became trivial to determine when vulnerabilities were patched; knowledge of their build process gives you an exact window to exploit said 0-day.


There isn't a one-size-fits-all solution to this type of situation. Giving the company a reasonable time[1] to fix it is a good guideline to start with, but determining the best approach requires making several judgment calls.

Is the exploit already being used in the wild? Should affected users be doing something asap to protect themselves? How quickly could someone transform your disclosure into something malicious? How much risk are we forcing the users to bear while we wait for a self-imposed, arbitrary time limit to expire? Is there an actual benefit to be gained by delaying public disclosure? (Is the company using the time responsibly to implement a fix, or are they denying the problem exists?)

Handling a public disclosure means adapting to the specific risks of the situation.

[1] I would say 30 days is more appropriate, with the option to delay the public announcement if the company has handled the situation well and is working on the fix but needs more time for a legitimate technical reason.


I uninstalled this software yesterday and it's not going back on my machine.

I have the same criticisms as others do about a company like Zoom that only responds to security issues after they wait-and-see if it will impact the bottom line. And that quick peek behind the curtain where their own employees view this as a "PR crisis" (their exact words in the article) rather than something more tells me everything I need to know about their leadership's DNA. Buyer beware.


This is why I love the App Store and forced sandboxing. I hope Adobe will finally use the App Store for once. Glad that Office is using it. No more buggy updater apps.


Adobe has been using the App Store for years[1], but not for the Creative Suite apps.

[1]: https://apps.apple.com/us/developer/adobe-inc/id331646274


For iOS because they have no other choice. I’m talking about macOS and their crapware updater and licensing apps


Scroll down. You’ll see they also have apps on the Mac App Store.


I could be wrong, but it looks like they are now doing a server-side redirect to their custom zoommtg:// URI protocol instead of making a call to the localhost server. Couldn't anyone still drop this on their website and force you to join through a redirect, just as Zoom does? I don't see how that particular concern of the disclosure can be avoided unless browsers force confirmation, as Safari has done.
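
The pattern, as a toy sketch (the zoommtg:// URL format here is my guess from observed links):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class JoinRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            # Any site can answer with a redirect into a custom URI scheme;
            # whether the user is asked first is up to the browser/OS.
            meeting_id = self.path.strip("/") or "123456789"
            self.send_response(302)
            self.send_header("Location",
                             "zoommtg://zoom.us/join?confno=" + meeting_id)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8000), JoinRedirect).serve_forever()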


macOS as a whole forces confirmation on deep links. The old solution skipped the OS confirmation dialog.


Hmm, but I'm not getting a confirmation prompt in Firefox or Chrome? Visiting a Zoom link in either of those browsers takes me directly into the meeting.


Safari 12 forces the confirmation, not all browsers.


The Zoom security team has a lot to answer for on this one. They subverted built in security to only then fall victim to the very thing that would have been stopped by what they subverted.

Good on Zoom to do a rapid course reversal here although naturally trust is now damaged given they only came to their senses under strong public pressure. Also a good case study of how putting “user experience” over security can come back to burn you.


The more I think about it, the more I come to the conclusion that we need a mixed computing paradigm.

For many tasks (probably most and certainly for most people) the iOS model is the best.

It is, however way too restrictive for a number of use cases. Prohibitively so.

Imagine two very different and isolated environments. Terminal, compilers, file managers in one, most other software in the other. With perhaps shared folders between them.


And then every clown shop would insist that you install their agent program inside the trusted partition.


I want a web browser with fewer functions and permissions that is aligned with my interests to dominate most of my computer interactions. That would handle most things that most people do on their computers.


Qubes OS is an operating system which pursues exactly these design goals, and unlike iOS, it is free and open source.


Does anyone know if this issue (and the BlueJeans one) would only affect the current user account?

I currently have a separate limited user account just for meetings, and that’s where I install various meeting apps. So in my case is there any way to know if it affected all my accounts or just the one?


Yawn... Call me when they actually go through their corporate ladder and fire everyone who thought this was a good idea in the first place. If those same managers are still in place, there's no reason for me to ever trust this software again.


Good to see that they’ve turned around and fixed this issue the right way. I was gonna stop using Zoom otherwise.


You should anyway. They had 90 days to do this, they didn't.

Incompetent company. Incompetent management.


Dat IPO lyfe.

Investors got their money, what’s the problem here?


Is there any scope for legislation mandating that cameras have a hardware button to turn off?


Cheat, repeat until caught, then lie.


They should have kept making guitar effects instead of bugging other people's computers. Or chosen a name that wasn't already taken.



