Hacker News
Vulnerability in the Mac Zoom client allows malicious websites to enable camera (medium.com/jonathan.leitschuh)
1937 points by mplanchard on July 8, 2019 | 456 comments



Zoom’s response to this[1] is a wonderful example of how not to respond to security issues. It includes the classic tropes:

* Our users don’t care about security.

> Our video-first platform is a key benefit to our users around the world, and our customers have told us that they choose Zoom for our frictionless video communications experience.

* We have no way of knowing if this has been exploited in the wild, so it’s probably fine

> Also of note, we have no indication that this has ever happened.

* Other products have the same vulnerability

> We are not alone among video conferencing providers in implementing this solution.

* We decided not to fix it

> Ultimately, Zoom decided not to change the application functionality

And also a lovely one I haven’t seen before:

* We tried to buy the researcher’s silence, but he refused

> Upon his initial communication to Zoom, the researcher asked whether Zoom provides bounties for security vulnerability submissions. Zoom invited the researcher to join our private paid bug bounty program, which he declined because of non-disclosure terms. It is common industry practice to require non-disclosure for private bug bounty programs.

1. https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...


> Zoom invited the researcher to join our private paid bug bounty program, which he declined because of non-disclosure terms. It is common industry practice to require non-disclosure for private bug bounty programs.

Is an NDA really "common industry practice" for bug bounty programs? I know NDAs are common for pen-testing but it seems like an odd (and kind of dishonest) requirement for a bug bounty program.


Some kinds of NDA terms are not unheard of, like a 1-3 month period in which to work on a fix, during which disclosures won't go out.

That said, there's a slight disconnect between Zoom's two statements here. The first is that the researcher declined out of concerns over Zoom's NDA. The second is that NDAs are common. What this doesn't say is that Zoom's NDA is cookie-cutter or what the specific terms are.

If I were to guess, Zoom was using some unusual NDA and attempting to buy permanent silence.


Thanks for the explanation. That makes sense and seems pretty reasonable. The company should certainly have the opportunity to fix the vulnerability before it's made public and could be exploited.

> If I were to guess, Zoom was using some unusual NDA and attempting to buy permanent silence.

Considering that Zoom ultimately decided not to correct the issue I suspect you're right.


From the Medium post:

> - Offered and declined a financial bounty for the report due to policy on not being able to publicly disclose even after the vulnerability was patched.


I'd have to guess this as well. I have dealt with a number of public and private bounties, and not one of the researchers has ever rejected an NDA or refused to give us time to remediate before disclosing to third parties. Unless you count Tavis tweeting critical findings, I guess.

And to be fair, none of the private bounties I've engaged has been over some massively critical bug that impacted privacy or could hijack parts of client systems. I could see that if the researcher worked with Zoom and didn't feel they took it seriously, he would refuse the NDA and just disclose, given the impact this has.


The researcher makes it clear that they rejected the NDA because it was a permanent gag on any discussion (even after patching). With that in mind, and this clearly being an intentional design, I can see why it might come off as Zoom not taking the issue seriously.


Will they pay the rest of your team and your spouse as well? "I've already sent these results to a few colleagues around the world to test out, but don't worry, they won't disclose anything for 90 days".


Also, they seem almost entirely focused on "unwittingly joining a meeting" as the real problem here, ignoring the fact that they made the extremely poor choice of exposing a dodgy control API on your Mac to the entire internet. What are the odds there are no bugs in this shitty little HTTP server they snuck onto everyone's machine? The fact that they came within five days of losing control of one of the domains that has the power to install arbitrary code on every Mac running this thing is absolutely insane, and they should be asking themselves 1) how that happened, and 2) how utterly screwed they would have been if they had lost control of that domain.

In a more amusing alternate universe, someone discovered the zoomgov.com vulnerability, waited until it expired, snapped it up, then published an "update" that uninstalls zoom entirely. In a nastier one, they used this idiotic design flaw to pwn every zoom client machine out there.


It's exactly how you want to respond if you plan on sharing it publicly on Twitter in the hopes of fooling those not in tech.

If my mom stumbled onto that article, she would likely think they perfectly explained everything (well... she would likely contact me, but still).

Given this news is already not sticking near the top of Hacker News and is barely reported elsewhere, it feels like they are already getting away with it for the most part.


Problem is, your mom is not in their market; their paying customers have paid IT people who do pay attention.

And hence Zoom just caved: https://www.theverge.com/2019/7/9/20688113/zoom-apple-mac-pa...


> All first-time Zoom users, upon joining their first meeting from a given device, are asked whether they would like their video to be turned OFF. For subsequent meetings, users can configure their client video settings to turn OFF video when joining a meeting.

> Additionally, system administrators can pre-configure video settings for supported devices at the time of install or change the configuration at anytime.

TBH, they're not as dismissive as you're making them sound


That part just doesn’t seem very responsive. Unless Zoom is recommending that everyone should turn it OFF, and urgently releasing a patch to make OFF the default, why does it matter that the vulnerability is in an optional feature rather than a mandatory one?


The Zoom admin for an org can switch cameras to default Off

I agree it should be the default, though if you're worried you can open your Zoom app and change the default as well


That is a pre-existing feature, and while it mitigates one specific aspect of the issue, it doesn't represent a security-focused response. Yes, I am saying that's not good enough: an appropriate, non-dismissive response would commit to writing code to deal with the issue raised, subject to the industry-standard 90-day embargo, depending on how much importance they place on their users' security.


Hi I'm the author, AMA

Or come hang out in the party chat!

Use the exploit to join: https://jlleitschuh.org/zoom_vulnerability_poc/zoompwn_ifram...


Stayed on that call for over 3 hours and I just have to say that it was one of the best experiences I've had on the internet in years.

People behaved pretty well considering it was a random public Zoom call (except for a few trolls, but nothing really bad).

It just felt like the internet of yore where random people would meet and chat and just be nice to each other.

Lots of interesting topics, people from all over the world, lots of surprised faces, random camera sights out the window, someone with a unicorn mask...

It was a blast. Thank you Jonathan for a great time!


Thanks for being cool!


I listened for a long time, learned a lot as well.

This made me think: is there any website that facilitates public conferences like this on Zoom-like clients? Basically, a bunch of people interested in a certain topic could join and chime in, going from topic to topic. It could be a very healthy discussion. People could post and schedule meetings, and essentially anyone who wants to learn could join. I do listen to podcasts often, but such meetings would be pretty different from podcasts. Does this already exist?


I am not aware of anything like what you describe, but I did see some people in that Zoom call suggesting the creation of Discord and/or Slack channels.

However, what I fear is that they will become like any other modern forum in that you will need heavy moderation, people will try to troll, etc.

The beautiful thing about Jonathan's call was its spontaneity, I think, and that everyone was so excited to talk about the vulnerability that the group had a single focus.

I might be too cynical, so maybe it's a good idea, and if someone suggests a place/site/forum to have this kind of discussion I would definitely try it out.


Interestingly, I implemented every mitigation listed in the article: kill the web server process, remove `~/.zoomus` and replace it with an empty file to prevent it being re-created, remove Firefox's content type action for Zoom, and disable video turning on when Zoom launches. When I visit a Zoom join link or the PoC link above, Firefox prompts me to open the Zoom client to join the meeting, and when I click "Open Link" the client opens just as it should and joins the meeting.
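For anyone who'd rather script the `~/.zoomus` part of that mitigation, here's a minimal Python sketch. The function name is mine; the remove-then-replace-with-empty-file trick is the workaround from the article. Run against your real home directory at your own risk:

```python
import shutil
from pathlib import Path

def block_zoom_opener(home: Path) -> Path:
    """Remove ~/.zoomus and leave an empty regular file in its place.

    Because a plain file now occupies the path, the Zoom client cannot
    re-create the directory and silently reinstall its web server there.
    """
    zoomus = home / ".zoomus"
    if zoomus.is_dir():
        shutil.rmtree(zoomus)       # equivalent to: rm -rf ~/.zoomus
    zoomus.touch(exist_ok=True)     # equivalent to: touch ~/.zoomus
    return zoomus

# block_zoom_opener(Path.home())  # uncomment to apply on your own machine
```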

This seems to confirm that the seamless user experience doesn't actually require the presence of the web server. If you don't have the client installed, the page can prompt you to download it, just as it does the very first time you install it. You can ask your browser to remember the link association so you aren't prompted for which app should open the link going forward. These are minor steps, even for a regular user, and ones with which most users are likely already familiar.

To me this further illustrates that the web server is truly just a ploy on Zoom's part to keep their hooks in users' systems, and have a way in that the user isn't privy to. Any other excuse they are giving about "enhanced experience" is dubious at best and deceitful at worst.


It seems the web server "is a workaround to a change introduced in Safari 12": https://news.ycombinator.com/item?id=20389668

> You can ask your browser to remember the link association

If that's true in Safari, then a web server is using dynamite to kill a fly.


If you want to see some part of this fixed, please UPVOTE this issue:

https://github.com/mozilla/standards-positions/issues/143


Have you checked for similar vulnerabilities in competing products such as GoToMeeting and WebEx? They have the same basic features.


RingCentral Meetings uses the zoom.us engine, but the local server runs on port 19424 instead. I'm able to replicate the issue on it.

PoC: http://localhost:19424/launch?action=join&confno=3535353535
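For the curious, the PoC is nothing more than an unauthenticated GET with the meeting number in the query string. A quick sketch pulling it apart (port and parameters as reported in this thread):

```python
from urllib.parse import urlsplit, parse_qs

# RingCentral PoC endpoint from above; Zoom's own server reportedly
# answers the same /launch?action=join&confno=... shape on port 19421.
poc = "http://localhost:19424/launch?action=join&confno=3535353535"

parts = urlsplit(poc)
params = parse_qs(parts.query)
print(parts.port, params["action"][0], params["confno"][0])

# Any web page can fire this same GET via an <img> or <iframe> tag,
# which is exactly why an unauthenticated localhost API is dangerous.
```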


I can confirm that this vulnerability exists in RingCentral for macOS, version 7.0.136380.0312.

I was taken into Miguel's meeting, but since the host wasn't present, it simply let me know it was waiting for him. (It also had a friendly notice: "Your video will turn ON automatically when the meeting starts".)

I've changed my settings in Video > Meetings, just like in Zoom, to turn off my vid when joining. Also confirmed that the server is running on port 19424 (via terminal command 'lsof -i :19424').


In my case it's 19421 as written in the article.


For RingCentral or Zoom? Could be because I have both on my machine.


Zoom


Yes, my comment was about RingCentral Meetings


Sorry, never heard of that, and since the rest of the story was so similar, it didn't really register in my brain as something entirely different.


BlueJeans video installs a nasty daemon that runs at boot too. I'll never attend a BlueJeans meeting again.


Not if you just use it through the browser, which is more stable than their app.


Anyone know what port the Bluejeans server is running on and/or how to kill it in a manner similar to the Zoom workaround?


    BlueJeans 423 [...] TCP localhost:18171 (LISTEN)

    $ nc 127.0.0.1 18171
    GET / HTTP/1.0

    HTTP/1.1 200 OK
    Content-Length: 23
    Server: Swifter 1.3.3

    BlueJeansHelper Service


Removing BlueJeans from your machine is a little more involved because they actually used launchd. First, list the loaded jobs:

    launchctl list

Then you need to find where the plist files are (e.g. com.bluejeans.app.detector.plist).

You can disable an entry listed by launchctl:

    launchctl disable user/<your uid>/com.bluejeans.app.detector

You can also unload it if you find the actual file:

    launchctl unload ~/Library/LaunchAgents/com.bluejeans.app.detector.plist

There were a couple of differently named BlueJeans agents.
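Since there were several differently named agents, a tiny helper to enumerate them can save some squinting. This is an illustrative sketch (the function name is mine, not part of any real tool):

```python
from pathlib import Path

def find_vendor_agents(agents_dir: Path, vendor: str) -> list[str]:
    """Return LaunchAgent plist names that mention a vendor, e.g. 'bluejeans'."""
    return sorted(p.name for p in agents_dir.glob("*.plist")
                  if vendor.lower() in p.name.lower())

# Typical use on a Mac (not run here):
# find_vendor_agents(Path.home() / "Library/LaunchAgents", "bluejeans")
```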


Wow, didn't know that. I rarely use BlueJeans, but I guess I will uninstall it anyway.


> You can confirm this server is present by running lsof -i :19421 in your terminal.

Might be good to specify what the output would be if the vulnerability is present or not, like this:

"If the server is running on your machine, you'll get a line specifying which process is listening to that port. If the command returns empty, your machine is not vulnerable."


Huh, I'm on Windows and it auto-joined the meeting too, with video enabled. I wonder if this is because at some point in the past I opened a Zoom meeting and allowed Chrome to open the Zoom URI in the Zoom app?


We need "allow only for this session" (or tab) in the permissions popup bar.


Great chat, I think you were right when you said all vulnerabilities should have a video conference for Q&A after release. It was really helpful to get a better understanding of the platform and the threats facing it.


I asked Zoom support about this and they sent me to this page: https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...

The key thing here is they think this is a fair trade-off because Safari asks if you want to open Zoom.

> This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting. We feel that this is a legitimate solution to a poor user experience problem, enabling our users to have faster, one-click-to-join meetings. We are not alone among video conferencing providers in implementing this solution.

I do not believe that this is a fair trade-off given that any website can act on this locally installed server.

EDIT: I think they need to be made aware that this isn't acceptable. My reply to their support team: I do not believe this is a fair trade-off - allowing any arbitrary web site local control of privileged software installed on my machine - because Safari offers a security prompt (specifically so that any arbitrary web site does not gain control of privileged software on my machine). I will be switching ~/.zoomus/ZoomOpener.app off, and considering other options until it has been fixed.


I realised I had a paid account, so I've cancelled that too. And I've also reported them to Apple, after seeing that the ZoomOpener app reinstalls the client - which is completely and utterly unacceptable.


Yeah, this seems like it must violate some Apple TOS, right? The uninstaller leaves behind a local webserver, that can't possibly be allowed.


Yeah, when I read this, I said WAT.

How on earth does Apple allow this? I'm not excusing Zoom, but this is Apple's fault.


I'm not sure how this can be construed as Apple's fault (and I've never owned any Apple products). A general-purpose OS runs what the user installs. This is purely on Zoom for backdooring the system. I'm not sure how many Mac users bother running ps every once in a while, but it seems like it wouldn't be that hard to detect either.

That said, I have to say Zoom is the only businessy meeting client I've used that doesn't require jumping through hoops on Linux. Maybe I should check if there are any devious backdoors installed on my system...


Apple only lets you install verified applications by default. Zoom is in the damn App Store.

The whole point of making the App Store a walled garden is so that these things don't happen. If an App Store app can install a server on your machine that remains there, reinstalls the app after it has been deleted, and can be used to spy on you via the camera or DDoS you, then... the App Store sucks.


> I think they need to be made aware that this isn't acceptable.

Oh, definitely. I cancelled my subscription because of this, but I wonder if the reason will make it through the corporate fog.

What is worrying is that more and more companies think it is fine to install "helpers", "openers" and other cruft. I recently removed several, and I still have to use software that scares me sometimes (DYMO web printing, Brother web printing). This should not be considered OK.


> I wonder if the reason will make it through the corporate fog

I really doubt it. Given the change-control policies of huge corps and how awful it is to get anything new in or get rid of anything, they'll just toe the Zoom party line and keep it.


What a glorious response. “Your product is broken.” “We know, we did it on purpose, and we’re proud of it!”


Not on my machine. I uninstalled with AppCleaner, visited a Zoom link in Safari, and the software reinstalled and started without my interaction.


> This vulnerability leverages the amazingly simple Zoom feature where you can just send anyone a meeting link (for example https://zoom.us/j/492468757) and when they open that link in their browser their Zoom client is magically opened on their local machine. I was curious about how this amazing bit of functionality was implemented and how it had been implemented securely. Come to find out, it really hadn’t been implemented securely. Nor can I figure out a good way to do this that doesn’t require an additional bit of user interaction to be secure.

Does anybody understand (and have a moment to explain) why the author says this is difficult to do securely? macOS has a simple facility for handling custom URL schemes, so my impulse would be to have `https://zoom.us/j/492468757` do a server-side redirect to a URL like, say, `zoomus://492468757`, which would launch Zoom locally using the OS's built-in services. This wouldn't require a third-party daemon of any sort, and would just be a regular application that the user could trivially uninstall.

Is there a security hole there that I'm missing? Or have I misunderstood the author's point?
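To make the question concrete, the redirect the parent describes could look roughly like this, sketched with Python's stdlib http.server. The zoomus:// scheme and /j/ path are the examples from the comment, not anything Zoom actually ships:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def join_redirect(path: str) -> str:
    """Map a join path like /j/492468757 to a custom-scheme URL."""
    meeting_id = path.rsplit("/", 1)[-1]
    return f"zoomus://{meeting_id}"

class JoinHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/j/"):
            # 302 to the custom scheme; the OS (not a local daemon)
            # decides whether an installed app handles zoomus:// links.
            self.send_response(302)
            self.send_header("Location", join_redirect(self.path))
            self.end_headers()
        else:
            self.send_error(404)

# To try it locally (not run here):
# HTTPServer(("127.0.0.1", 8000), JoinHandler).serve_forever()
```

The point of the sketch: the server side is trivial, and the confirmation step lives in the browser/OS rather than in a hidden localhost daemon.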


A custom URI wouldn't work as seamlessly as Zoom's UX team would have liked. If you hadn't installed Zoom, either a nasty message would tell you the protocol wasn't supported, or it would redirect you to a Google search.

Their answer was to send people to a URL they controlled that walked you through the install process as easily as possible, but the problem they needed to solve was determining whether you needed an install or just a redirect to the app.

They broke so many security rules just to shave off a few inconvenient seconds, and those seconds raised them to the top.


Am I the only one seeing the pattern here? Most security loopholes I have witnessed have existed in the name of providing a better user experience.


This is the security-versus-usability tradeoff, and it is as old as the hills.


Yeah, it's a tradeoff by nature. This applies to security in general, not just computers. Having to unlock the door to your house when your hands are full with shopping is annoying, but the alternative is leaving your house unlocked all the time and trusting nobody will walk in.

Depending on the context (location, is there usually someone home anyway, value of stuff within the house) you may or may not find the tradeoff makes sense and voluntarily opt for the worse 'UX'.


See also: Boeing 737 Max


As in, security against stalling led to a UX disaster that caused planes to dive into the ground?

I'd argue the moral of that story was to redesign the plane, instead of piling on hacks to save costs in the short run.


As I understand it, they tried to design a new plane that wouldn't require pilots to be re-trained on how to use it, if they'd already been trained on an older model. That's the UX I'm referring to.


Certainly a (bad) trade-off, but I wouldn't classify it as UX. It's more of a safety vs sales trade-off.


The fun thing is users mistakenly recognise the tradeoff as a sign of the security. If it was annoying it must be secure. Why would somebody waste my time for no purpose? See also placebo effect - of course I feel better, you gave me pills and I took them, duh, it's medicine.


This is the pattern: applications continue to be deeply flawed and heavily advertised as long as they can be bought for a billion by IBM/Microsoft/Google/Facebook/TechOverlordOfTheYear, and finally get into a stable enough state that they can be part of the infrastructure by the time a full-featured open source version emerges.


Ah, yeah, the flow for when the app isn’t installed makes particular sense (at least as a motivation for why someone would implement something so awful). Thanks!


If you want to really break down their viewpoint on the situation, let's translate their PR statement line by line:

> Zoom believes in giving our customers the power to choose how they want to Zoom.

Zoom believes if their app isn't convenient to use, their customers have the power to leave their ass, as they are in an incredibly competitive market.

> This includes whether they want a seamless experience in joining a meeting with microphone and video automatically enabled, or if they want to manually enable these input devices after joining a meeting.

This includes making sure that they aren't asked to provide confirmation to access their camera/microphone, which impedes the convenience of the app for all participants. Fewer clicks means less thinking.

> Such configuration options are available in the Zoom Meeting client audio and video settings.

Stop complaining about this as we have given ourselves a legally compelling user defined control hidden in a single tab deep within our preferences.

> However, we also recognize the desire by some customers to have a confirmation dialog before joining a meeting.

We can tell you aren't going to drop this.

> Based on your recommendations and feature requests from other customers, the Zoomteam [sic] is evaluating options for such a feature, as well as additional account level controls over user input device settings. We will be sure to keep you informed of our plans in this regard.

We don't care. We have lots of users, and lots of success having this option turned on by default. The support costs alone telling non-technical people how to turn on their cameras don't make it worth it.


Oh, come on. There is no easy way to send people without the app to an installer page; that is the issue. And that is something every single person wants.


Good point. Maybe macOS/iOS should have a feature where, just like registering a custom scheme that can launch an already installed app, such as zoomus://123456789, software vendors can register an install URL to which users who don't have the app installed will be directed. Let the OS handle security, where it belongs, and still make the first-install user experience good.


Bad behavior for unknown protocols is not a macOS-specific problem. Instead of registering things with Apple, a link to the handler should be included in the protocol link, and the OS should send the user there if a handler is not installed. Something like <a href="zoom://12345" handler="https://zoom.us/install">


Your proposal is the closest thing to the best solution I have seen. It still has at least several issues:

* When Zoom is already installed:

- should be able to handle most instances

- needs to account for version management, eg installed version zoom could still be version that is too old to process the uri correctly. Version could be in the uri.

* When Zoom is not installed:

- an information dialog needs to be somehow shown to the receiving user, asking them if they want to install 'Zoom'.

- that screen must include the 'uri' and validate certificates etc to prevent abuse (hence must necessarily be 'ugly' and not 'seamless')

- the language on that dialog has to be provided by the OS/Browser, not the software vendor, to prevent abuse. For similar reasons the Windows UAC dialog text can't be written by the vendor.

- the language employed by the OS/Browser must of necessity be fairly neutral, neither encouraging nor discouraging installation, to prevent abuse. This is necessarily at odds with the UI principle of leading the inexperienced user through clear steps to achieve their intended goal.

- the user of average-to-lower-quartile experience, as of 2019, for a product with a client base of 40 million+, is likely not in a position to meaningfully distinguish a legitimate Zoom install uri from a malicious / imposter one. Hence any popular software using this install-from-uri-handler becomes an appealing target for malicious actors to mimic, which they will.

- some proportion of users will likely install from malicious links, and whichever product (let's say Zoom for example) is the most likely software for malicious actors to masquerade as will become the name associated with the attack in the mind of the wounded public


Those are some interesting points. I'm not convinced that versions should be in scope for this sort of thing though. If I'm writing a protocol handler, I think it's my responsibility to make sure my software can update itself, and make the default behavior that it should check for updates if it is given a URI it doesn't understand.

Secondly, version checks assume that the user wants to run this specific protocol handler. I as the user might prefer to run an open source non-official zoom client. I think the OS should only be trying to help me if I don't have any handler.


The UA could go to the handler site which would be a landing page.


They have the opposite starting with Catalina and iOS: Universal Links, which let an app register to take the first pass at handling zoom.us URLs. Android has always had this with its intent system.


Was available long before Catalina


Well, presumably if that's the case, their ZoomOpener could simply be configured to respond that it exists. That would be enough to either direct the user to a download page or open the protocol-specific URI.

If I'm understanding it correctly, the reason it does more than that is to bypass the "protocol-specific URI opening" UX.


I'm unclear what subset of users are desktop only Zoom users that aren't also familiar with the same "Do you want to allow this app to access your camera/microphone?" dialogs on mobile devices. This can't be a large demographic, can it?


Ah, but that's an interesting question, right? Do they WANT to be asked? If you only had to make one click to join a meeting, doesn't that FEEL better?


In fairness, I find the fact that I need to tell WebEx to use my computer's audio every damn time I join a meeting quite annoying.

If only there was some happy middle ground between never asking and always asking ...


For me the problem isn't that it asks, it is that it forgets (and they don't have the same options consistently across hosting orgs).

I'd be totally fine with default-on voip sound - with a red, muted mic button and a bubble saying 'tap to unmute'.


> The UX team

You seem to imply that they have a UX team but not a security team, so nobody convinced anybody else that this wasn't a good idea.

Without genuine security orientation, even if an expert realizes there is a security problem, who wants to be the boring paranoid pessimist who wastes time and attempts to ruin products, only to be staved off by the efforts of more productive employees that focus on adding value?


A sustainable company isn't built on velocity, lack of conflict, and willful ignorance.

Decisions need to be made between strong opinions about the right path forward. There needs to be balance and respect between these aspects.

Reading the PR statement, I highly doubt the people who have those strong opinions about security are being given a fair voice. They are probably there, but they have zero power to change anything within their product.


> A sustainable company isn't built on velocity, lack of conflict, and willful ignorance.

> Decisions need to be made between strong opinions about the right path forward. There needs to be balance and respect between these aspects.

tell that to literally every VC


I think literally every VC isn't built to be sustainable, they are designed to randomly jab the marketplace for a good investment bet. I wouldn't even expect them to listen to this kind of advice, it doesn't apply :)


The article indicates they have a "Security engineer" who was OOO when the author first contacted Zoom.

So yeah, sounds like one human, and it sounds like she/he probably doesn't have much say.


I have some experience with this: you can use JavaScript on the https:// meeting link to detect whether the app protocol (zoom:// or whatever) exists. If the app protocol exists, go straight to the app protocol link; if it doesn't, prompt the user to download and install Zoom. The JS is a bit messy and requires a few different approaches, but it works on all popular browsers on Windows and Mac (Linux support wasn't needed, so I'm not sure).

Of course, the browser will pop up a confirmation dialog to ask if you want to open the Zoom app, but this is a feature, not a bug.


Citrix Workspace does exactly this and it works fine


The basic problem is that you enter a meeting by loading a URL, and loading URLs is something any website can do. There probably needs to be a confirmation step before joining a meeting.


A custom URL scheme would at least provide an opportunity to confirm launching Zoom, even if Zoom itself didn't confirm joining a particular meeting (which I agree it should).


They put this in place precisely to avoid Safari's "Do you want to open this in Zoom" confirmation prompt


Once you've deleted the localhost server they have running, they actually fall back to using the protocol.


Yes, verified. Terminated and deleted ~/.zoomus, links to join Zoom calls still open, but I am prompted to open Zoom first.


I didn't see that. For me it just failed, and to join a meeting now I need to open the Zoom client and copy the meeting ID manually. Not a big deal for me, just wondering...


Can you reliably delete the server?


Yes, you just need to

  rm -rf ~/.zoomus
  touch ~/.zoomus
The opener is the only thing in that directory.


Further down the post, the Zoom team said this feature exists because Safari doesn't support custom URL schemes.


I've opened up Safari and it properly asks me if I want to open Slack when using: slack://test


When I clicked the PoC link in Safari, it launched the Zoom app using a URL scheme. ("open in ...?" dialog put up by Safari)


Oh, yeah, I missed that. That’s patently false.


That’s just false. Safari opens custom URIs.


If Universal Links were supported on macOS we could get the best of both worlds.

The website basically presents metadata in a JSON file (in the .well-known directory) which Safari/iOS uses to launch the app if it is installed, and otherwise just renders the webpage [0].

The app contains information about which domains it allows itself to be opened from which would fix this issue.

[0]:https://developer.apple.com/library/archive/documentation/Ge...
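For reference, the .well-known metadata in question is the apple-app-site-association file. A minimal illustrative example follows Apple's documented format; "TEAMID.example.videoapp" is a made-up placeholder, not Zoom's real app ID:

```python
import json

# Shape follows Apple's documented apple-app-site-association format.
# The appID (team ID + bundle ID) and paths here are placeholders.
aasa = json.loads("""
{
  "applinks": {
    "apps": [],
    "details": [
      { "appID": "TEAMID.example.videoapp",
        "paths": [ "/j/*" ] }
    ]
  }
}
""")
print(aasa["applinks"]["details"][0]["paths"])  # prints ['/j/*']
```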


Universal Links will be supported on macOS Catalina. Reference: https://developer.apple.com/videos/play/wwdc2019/717/


Universal Links are better than their localhost webserver insanity, but don't really solve this. A malicious website can still redirect you to a zoom.us URL that will instantly join the meeting without confirmation.

The underlying problem is that they want a URL to join a conference call hosted by any random user and share your audio/video without confirmation. And it's simply not safe to trigger that kind of action from a URL.


Yes, I agree that's the underlying problem. Regardless of how the URL is opened it shouldn't behave that way.

However, I do think Universal Links don't work with redirects; consider: https://bit.ly/30oxOdO vs https://twitter.com/ycombinator (tap using Safari on iOS with Twitter installed).

EDIT: Turns out I was misinformed...


> why the author says this is difficult to do securely? macOS has a simple facility for handling custom URL schemes

So do all other operating systems, and this has been a thing for at least a couple of decades. This is not the problem.

The problem is that this feature is severely locked down in all modern browsers, precisely due to the security risks involved.

Relying on this feature in a critical user interaction path is a guaranteed way to get flooded with support-requests.

Disclaimer: have replaced custom protocol with other solution in end-user facing production projects.


Positively terrible... Kudos to this researcher. I liked Zoom when I used it a couple of times, but the reinstall “feature” is a huge violation of my trust. Software from the company behind it will not touch my system anymore. Too bad really, because properly working video chat is hard to find. The App Store model is not my favorite, but at times like these, a forced sandbox and inspection by a trusted third party start to look like the only way forward.


If you had a sandbox, you wouldn't even need anyone to inspect it - since all the app's files would be contained in one place, uninstalling it would remove everything, and there wouldn't be a way to leave a server behind.


This just in: Bad behavior is still bad behavior when it's possible to mitigate it on the user side.

Consider how many people use Zoom and don't even know that Hacker News exists.


Right, I agree! My point is that preventing this situation from happening in the first place, through better sandboxing restrictions, is both more fair and more effective than having each app be individually approved. If you try to mitigate this just with app review, then 1) you're going to miss apps that do bad things, and 2) It introduces huge conflicts of interest for the reviewer. But if you were to have effective sandboxing, it wouldn't be possible for Zoom or any other app to do this in the first place, so that you would be able to trust the apps that you install even if they haven't been reviewed.


What reinstall feature?



It silently reinstalls if you follow any Zoom meeting link.


For which loading an iFrame by visiting a website or opening an email is enough...


In response to all of the well-deserved criticism, Zoom just made two updates to their blog post[1] to announce that they will be completely removing the webserver for all macOS users in a new release tonight, and also adding an option prompt going forward:

JULY 9 PATCH: The patch planned for tonight (July 9) at or before 12:00 AM PT will do the following: 1. Remove the local web server entirely, once the Zoom client has been updated – We are stopping the use of a local web server on Mac devices. Once the patch is deployed, Mac users will be prompted in the Zoom user interface (UI) to update their client. Once the update is complete, the local web server will be completely removed on that device. 2. Allow users to manually uninstall Zoom – We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server. Once the patch is deployed, a new menu option will appear that says, “Uninstall Zoom.” By clicking that button, Zoom will be completely removed from the user’s device along with the user’s saved settings.

PLANNED JULY RELEASE: Additionally, we have a planned release this weekend (July 12) that will address another security concern: video on by default. With this release: 1. First-time users who select the “Always turn off my video” box will automatically have their video preference saved. The selection will automatically be applied to the user’s Zoom client settings and their video will be OFF by default for all future meetings. 2. Returning users can update their video preferences and make video OFF by default at any time through the Zoom client settings.

Edit: the new version is now released at https://zoom.us/download

[1]: https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...


> Remove the local web server entirely

Thank goodness. Sanity has prevailed.

You know you've blown it when the following appears in a buzzfeed article about your software:

> open the application called, “Terminal.” Copy and paste this text: lsof -i :19421. Press enter. You’ll get a string of mumbo jumbo. Underneath the text “PID,” copy the string of numbers underneath. Then type “kill -9” (without the quotes), add a space after -9 and paste the PID string of numbers. Press enter. The server has been killed.

:D


Verified that the patch removes the web server.

What I'd really like to see now is them addressing the fact that their initial response to this was terrible, as if whoever was making the decision had no idea how bad this design was from a security standpoint.


This whole thing reads like a security response driven by marketing and branding considerations. They put a lot of work into that seamless experience they're so proud of, apparently without security professionals being involved.

These factors point to a company that fundamentally doesn't take security very seriously. That's not a fast, easy, or cheap thing to change. I suspect it won't any time soon.


> We’re adding a new option to the Zoom menu bar that will allow users to manually and completely uninstall the Zoom client, including the local web server.

Including the local web server that definitely doesn't exist anymore anyway after this patch?


I uninstalled it via their new patch, but it doesn't remove all files. I think it's just caches and logs left, but who knows. If you want to purge this malware with fire you still need to follow the instructions at https://apple.stackexchange.com/questions/358651/unable-to-c...


HIPAA provides an effective strategy for holding Zoom’s feet to the fire in cases like this. Since the company markets compliant video conferencing for healthcare professionals, they are classified as a Business Associate. It is quite likely that a well-written complaint on the HHS Office of Civil Rights site would result in further investigation and regulatory action.


Software companies tend to be safe from this kind of thing (less every day, though). But they could lose their users.


Only insofar as people usually do not complain. I’ve worked with software clients on OCR investigations that were prompted by far less substantial complaints.


Not sure I follow the CORS angle. The linked stackoverflow question mostly seemed to be someone who was confused about how CORS works, and the issue in the Google Chrome tracker was closed as WontFix because they couldn't reproduce it and said it should work.

I'm nearly positive that CORS from localhost works OK. I set this up all the time for local development. For example, I run a client CRA app on localhost:3000 and an API on localhost:3001. The API sets the CORS headers and the CRA app can make requests to it.

If this is correct then I believe all Zoom needed to do is have their localhost application set CORS headers for their production domain. This would have allowed AJAX communication and only allowed it for Javascript running on their domain. Instead they did this totally hacky method that lets the whole world interact with the localhost server...

Maybe I missed something but if they could have done this the right way and didn't that is much worse IMO...
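To make that concrete, here's a minimal sketch in Python of what I mean by "set CORS headers for their production domain" — the port and allowed origin are placeholders, and this is obviously not Zoom's actual code:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://zoom.us"  # placeholder for their production origin


class CorsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get("Origin", "")
        self.send_response(200)
        # Echo the CORS header back only for the trusted origin; the
        # browser then blocks cross-origin JS reads from anywhere else.
        if origin == ALLOWED_ORIGIN:
            self.send_header("Access-Control-Allow-Origin", origin)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

    def log_message(self, fmt, *args):
        pass  # keep the example quiet


def start_server(port=19421):
    """Run the localhost server on a background thread; returns the server."""
    server = HTTPServer(("127.0.0.1", port), CorsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Note this relies on the browser enforcing CORS; it does nothing about non-browser clients on the same machine talking to the port, which is a separate problem.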


You're 100% correct, and while someone has pointed out the proper headers that need to be set on the bug report here: https://bugs.chromium.org/p/chromium/issues/detail?id=67743, it's been drowned out by people who don't seem to understand the issue:

http://williambert.online/2013/06/allow-cors-with-localhost-...

CORS is hard; I've struggled with it several times, and I'm not surprised an engineer gave up trying to fix it because of deadlines.


Can confirm, CORS (Origin: randomsite.tld to localhost) works just fine in Chrome.

If you have a CORS-enabled server on localhost you can make requests to it from http://www.test-cors.org


Am I right in thinking that CORS only applies to Javascript-initiated requests? This trick uses an embedded image to make the request.


That's correct, and part of my point. If they used CORS headers correctly it could both be secure and not require a crazy image hack.

The image hack seems like a lot of work to go through to make an app LESS secure.


I'm a bit confused, so CORS doesn't apply when trying to load an image?

If they set CORS to allow interaction from anywhere, why use an image and not load data with js?


CORS is set up to protect data from being given to a third party, e.g. JS requests obtaining and being able to observe data they shouldn't have access to. Since images are being loaded by the browser (second party), there is no such protection, since a third party should not be able to read them anyway (barring some other vulnerability). It's assumed the first party is correctly doing what it's supposed to, an example could be fetching an image from a cdn.


Hmm I still don't understand why they have to use the image hack. Since they control the server on localhost they can set the CORS headers to allow all domains, then JS from a site could access localhost right?


Yes. I don't think there is any good reason to use the image hack. Further, they could have made the CORS lock only the production zoom domain for better security...


A user on Reddit suggested the image url hack was a way to bypass mixed content blocking from the zoom https site to the local http server: https://www.reddit.com/r/programming/comments/cavblo/zoom_ze...

> One potential hiccup I encountered was that Firefox blocked my XHR request due to a policy against "mixed active content". This was because my origin site was accessed through an HTTPS connection and the localhost server was only HTTP. That's one potential reason Zoom might have opted to use their <img> garbage; since <img> elements are passive not active content, they could avoid using HTTPS on the localhost webserver. That's not a good excuse, but clearly they weren't interested in finding a good solution -- whatever the problem that prompted the <img> hack was.


Very interesting, thank you! This is definitely no excuse for not filtering the origins -- they just don't get it for free with passive content but still need to do it, or, since they are a native app, generate and install a cert -- but it could be the motivation for the decision to go this route, which is really useful to know.


Lots of this got me thinking so I wrote a bit of a piece on it: https://fosterelli.co/developers-dont-understand-cors


I'm trying to think of the real-world implications and how this would play out.

Normally this would be pretty obvious, wouldn't it? Users would see Zoom open into some weird meeting, and close it.

Presuming the exploit cannot avoid bringing the Zoom app to the foreground when it joins the meeting and activates the camera/mic. If it can do that and stay in the background, all bets are off.

In spite of its obviousness, it's still pretty darn scary --

Scenario 1: malicious website/app opens link while you're sitting there.

You're sitting in front of your computer, you see Zoom open, you're like "WTF?!", close that shit, uninstall Zoom; hopefully discover how to permanently remove it (it otherwise leaves a localhost http server running that can reinstall itself).

But crap the hijackers have, even with a few seconds of video: your face, your surroundings, the audio of your surroundings, all of which can increasingly be fingerprinted. That alone is very scary. Just to be in an unintentional meeting for a moment is very disturbing. A violation of sorts.

Scenario 2: malicious website/app delays opening the link until some threshold of mouse/KB inactivity is reached.

Activate the Zoom link and hope the person is AFK. Spy on their home/office/whatever. Also a violation.

Are there other scenarios I am missing?

Personal note 1: I'm happy I switched to a Linux laptop after finding last year's MBPs disappointing (and the TB revolting; I have a physical escape key!).

Personal note 2: I do actually like Zoom a lot, it's an awesome video conferencing app. But this should be fixed for Mac users.


I wonder if this works in an electron app (like Slack maybe) displaying it?

Maybe you could intentionally send this link to someone shown as inactive on Slack, and have the Slack webpage preview thing run enough javascript to pop open Zoom with the camera and mic running...

I'd test it myself, but I deleted Zoom and the sneaky localhost web server while I read the article...


It says that the server sends an image with certain dimensions back as an error code, so I wonder what you could do if you served some simple HTML that uses the local server as a meta tag that renders in the preview?

I imagine slack would do that on the client since it’s built on electron.

Unless it requires more than loading a URL.
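On the dimensions-as-status trick: a page can't read the image bytes cross-origin, but it can read the rendered width/height of an <img>, which is the whole side channel. Just to illustrate where those dimensions live, here's a sketch of pulling width/height out of raw PNG bytes (the actual mapping from dimensions to status codes is Zoom's and isn't documented here):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"


def png_dimensions(data: bytes):
    """Read (width, height) from the IHDR chunk of a PNG byte stream.

    IHDR is always the first chunk: after the 8-byte signature come a
    4-byte chunk length, the b"IHDR" tag, then big-endian width and height.
    """
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG")
    if data[12:16] != b"IHDR":
        raise ValueError("missing IHDR chunk")
    return struct.unpack(">II", data[16:24])
```

A browser does the equivalent parse internally and exposes the result as naturalWidth/naturalHeight, even for a cross-origin image.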


> Activate the Zoom link and hope the person is AFK. Spy on their home/office/whatever. Also a violation.

I think this is the most likely scenario. There are ways you could potentially delay it (e.g. they leave a tab open and you don't open the link until a certain time)


Scenario 1 extended: Add this into an ad or a popover for a porn site and potentially capture some very compromising footage.

Scenario 3: Add it as a tracking pixel in an email.

I guess there are all kinds of scenarios since it's an unsecured API that responds with an image. You can trivially embed it in anything that renders HTML.


Scenario 3: You want to be a jerk and put someone into a meeting they weren’t in to get them in trouble.


Well, the company and product are dead to me now; gonna hassle our CTO to switch. I just really hope there's some dev at Zoom who hated this whole installing-backdoors idea who's gonna have the greatest "I told you so" day at the office tomorrow.


We work with a lot of hospitals and find that video conferencing tools are often blocked by their IT departments for security reasons. I’m expecting Zoom to find itself on that list in short order.


Interesting to me that this would be your CTO’s decision.


He's who I directly report to, and he has the ears of everyone that matters, so it's really just my decision of who to pester to get it up the chain.


“On Mac, if you have ever installed Zoom, there is a web server on your local machine running on port 19421.”

...

“All a website would need to do is embed the above in their website and any Zoom user will be instantly connected with their video running. This is still true today!”


This server is still running on my machine despite having "removed" Zoom a few months ago (macOS).

Guess I was a bit naive in thinking just trashing the .app and immediate artifacts in Library would do the trick.

EDIT: I missed the .zoomus directory in my home folder that had the culprit. Funny enough Zoom's instructions on how to uninstall the app on macOS just points to documentation from Apple and wikiHow (???) with standard methods that don't fully remove Zoom.


I'm sure Zoom intentionally failed to tell you how to remove the web server. After all, if it's still running, then it's just that much easier to reinstall Zoom on your machine.


I’m surprised more enterprise IT orgs haven’t flagged this behavior, or simply made it impossible via local machine policies that would prevent running a web server.


.../.zoomus/ZoomOpener.app/Contents/MacOS/ZoomOpener


Does anyone know how this web server starts itself after restarting your machine? As far as I know, a `~/.zoomus` directory can't restart a web server after your machine restarts.


It doesn't start on boot, it starts on login. It appears as a Login Item named ZoomOpener in your local user account in the System Preferences -> Users & Groups.

Additionally, when you launch the main application, it will check to see whether ZoomOpener is running. If not it will boot it up. The main app will install and register ZoomOpener as a Login Item if necessary.
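You can do the "is it running" check yourself from outside the app; a quick sketch (the port is the one from the article, the function name is mine):

```python
import socket


def opener_is_running(port=19421, host="127.0.0.1", timeout=0.25):
    """Return True if something is accepting TCP connections on the
    port the ZoomOpener web server reportedly listens on."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If it still returns True after you thought you uninstalled Zoom, the opener is still there.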


I think it's because it runs on a port higher than 1024, so it doesn't need root privileges to start a web server on that port.


The most damning part of the conclusions:

> This being said, I also recommend that any researcher that finds a vulnerability in Zoom’s software does not directly report the vulnerability to Zoom. Instead, I recommend that researchers report these vulnerabilities via the Zero Day Initiative (ZDI). The ZDI disclosure program gives vendors 120 days to resolve the vulnerability, the ZDI will pay researchers for their work, and researchers have the ability to publicly disclose their findings.


On my Mac, I have uBlockOrigin installed in my browser and I have it configured to always block 3rdparty and 3rdparty frames and it prevents both the POCs completely.

I have one browser that I use for work email and video conference, where system grants access to camera/microphone to the browser and browser allows Google Meet to access camera.

I have another browser where system does not grant access to any of the devices - camera, microphone, USB etc - and I use that for web surfing.

And I strictly don't install any plugins for video calls. I have refused to join meetings where people try to make me install random binary software on my machine. There's always a phone call for such situations.

I feel better about dedicated apps on iPhone where again I can install and grant permissions before the call and then uninstall the app completely. On iPhone I don't do any web surfing. I have Firefox Focus for occasional emergencies to open the unknown web.


Refusing to install software for video calls is a good policy. Also, having a throwaway workstation for such things (also Skype, which is spyware of the worst kind) is useful too, for when it’s a pre-sales call and security can’t outweigh closing a six+ figure deal.


And I am in the crowd of Mac users who tape over their camera. When it comes to video conferences, at most I have ever seen the desktop shared. What type of work do you do that uses video for portions other than the presentation?


You weren't asking me, but I run into this same thing.

In my business -- project management software -- I'm in online meetings a LOT (say, 20 hours a week?) because everyone in my company is remote. We have never, ever used video. It just doesn't come up. Nobody wants it internally, and none of our customers ask for it in external meetings. I don't think any of them use it internally, either (and many of our customers are large, distributed organizations with offices all over the place).

This seems normal to me.

My neighbor is an IT VP for a health care concern. She travels a lot (30-40%), and when she's home she's in online meetings pretty much all the time. And in her company, video is ALWAYS included. I have no idea why, and neither does she; it's a cultural thing.

The upshot, though, is that I work in t-shirts and cargo shorts, and she has to be "office ready" even though she works at home. However, I will note that, if I run into her outside when she's walking the dog, it's not unusual to see her in a nice blouse, hair and makeup done, but wearing yoga pants or whatever. Which is its own kind of hilarious.


Sounds like false security though: at least the camera has a light (which, last I heard, is synced with the camera at the hardware level) so you know if someone's watching, but what you have no control over is the microphone.


Just audio is still an improvement over audio and video.


Video meetings are so much more effective than audio meetings. I went from a company that always uses video to a company that rarely uses video and the difference is huge.

Often the most important parts of meetings are nonverbal.


> what type of work do you do that uses the video for portions other than the presentation?

My department (of 400 people) is split between two cities. I am regularly in meetings with people from the other city.

Google Meet is a big part of our culture. It helps with team cohesion and collaboration to actually see each other's faces when we meet.

It is of course not _required_ but I really believe it is better than just audio.


To ubermonkey's point, some of this can be a company culture thing.

Certain teams at my workplace use webcams all the time, others never. My team leverages them quite a bit, as our team is all over the world. It helped solidify our team members not just as random voices on a phone line, but as actual people who we will likely never meet in person.


HN part of the unknown web?


Another small thing (big for me) Zoom does is register their app as the handler for `tel:` links every time you launch it, with seemingly no way to disable that. Companies that make themselves the default for something on your machine by force are not to be trusted.

I’m not surprised they start a web server from under their users, and that their response to the vulnerability was lacklustre.


You might be able to remove this with https://github.com/Lord-Kamina/SwiftDefaultApps


I’ve opted to remove Zoom instead.


Yep, they have lost my trust too, especially with their terrible response on the blog. And I don't trust that they simply won't remove my mitigations if I have that app on my system again.


Why isn't Zoom running fully in the web browser at this point? Meet does this, and as far as I can tell the quality is indistinguishable from Zoom's. Can someone with a better understanding of the underlying protocols shed light on why Zoom continues to ship a separate desktop app?


They do have a web client, but not WebRTC. See this reverse engineering of their protocol: https://webrtchacks.com/zoom-avoids-using-webrtc/

TBH I don't really understand their rationale. Nothing about it strikes me as "better" than WebRTC.


To allow meeting participants to use the web client to connect to a meeting requires the meeting host to explicitly enable the option in their advanced settings (it is disabled by default).


"as far as I can tell the quality is indistinguishable from Zoom"

In my experience, the quality is similar to Meet when all parties have great internet connections. But if one or more parties has high/variable latency or packet loss, then Zoom provides a much more smooth experience.


You can do so much better using a native app than a webapp. UDP transport is really important for real-time communication.



WebRTC typically does use UDP. But it certainly isn't as flexible as what you can do with a native app.


Not related to zoom, but I'm working with a team which uses 'highfive'. I can't, for the life of me, get the downloaded desktop app to ever work. There's this perpetual dance of "you need to be logged in" and "register now" and "log in". I was thinking it was something to do with the VPN, but it seems to be the same on or off. However, grabbing the full URL and pasting in to Chrome, works like a champ. I'd prefer to use the desktop app, but I can only get the browser version to work.


Why is the web client (in Chrome, etc.) so bad on Zoom? I mean, people are building Google Earth in the browser. It's not just the bad video experience - even the product experience is seriously broken.

For example, the default audio setting when you sign in to the web video client is to connect using PHONE AUDIO. In case you figure out how to click the tab to use computer audio... it breaks down a couple of times in asking for browser permissions (camera, mic). It is unusually bad for something that is supposed to be that good.

there are all these articles about the comparisons - https://webrtchacks.com/zoom-avoids-using-webrtc/

https://bloggeek.me/webrtc-vs-zoom-video-quality/

Hangouts still rules when it comes to web-based video conferencing. And for countries with massive Linux-based usage (like India), Zoom is not a very viable option.


Jitsi Meet is also an excellent web-based video conferencing tool. They also have a basic comparison of WebRTC vs Zoom [0], which actually has the same demonstration video as your second article. I posted a basic overview elsewhere in this thread [1], and I would strongly recommend it.

[0]: https://jitsi.org/news/a-simple-congestion-test-for-zoom/

[1]: https://news.ycombinator.com/item?id=20390149


How does it work? Because we do larger conference calls with a distributed team of 40 people. As in, is there a paid plan, etc.?

Also, does it work with mobile apps?


It's free. It has apps for iOS and Android and you can also phone in. I've never used it for such large meetings, but apparently it handles 100+ people fine [0], and they have a blog post about scaling [1].

[0]: https://www.callstats.io/blog/2017/10/09/jitsi-atlassian-web...

[1]: https://jitsi.org/jitsi-videobridge-performance-evaluation/


The Zoom client on Linux used to (?) have a nasty command injection. The URL for joining a meeting got passed to some bash reinvocation (so they could set the library path if my memory serves me). A specially crafted URL could execute commands on the system. I haven't been too interested in using Zoom since seeing that.


I hadn't heard of this, so I looked it up, and you are right: https://www.exploit-db.com/exploits/43354

At least that was patched. These sorts of issues are frustrating, because as a Linux user I really want to like Zoom -- I appreciate that they treat all platforms pretty equally (Mac, Windows, Linux, Android, iOS) with native apps. That is a rarity.


For the longest time the Linux client would just crash randomly. It also tends to heat up your laptop and use all of your cores at 100% if you're looking at someone's screen.

Just run `strace -f zoom 2> wtf.zoom` to see all of the shit it does (looks like it is polling for events like crazy).


I tested the repro steps and found that ZoomHelper was not listening, although it was installed. In this case I was prompted to download zoom.pkg rather than activate video.

I'm guessing it was because I have the macOS firewall set to strict (no listening ports).

Also, here's a nice tip to show all listening apps (good habit while cleaning up)

  lsof -i -s tcp:listen |awk '{print $1 " "$8" " $9}'|sort|uniq
  COMMAND NODE NAME
  Google UDP *:mdns
  SystemUIS UDP *:*
  SystemUIS UDP *:53611
  UserEvent UDP *:*
  WiFiAgent UDP *:*
  identitys UDP *:*
  rapportd TCP *:49152
  rapportd UDP *:*
  sharingd UDP *:*



Blog post too, which seems to be somewhat different: https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...


From the article:

"To shut down the web server, run lsof -i :19421 to get the PID of the process, then do kill -9 [process number]. Then you can delete the ~/.zoomus directory to remove the web server application files."


Does osx not have the fuser command? It lets you find and kill a process by its tcp port (also file handles) in one command.

On Linux I use something like 'fuser -k 19421/tcp' to kill server processes all the time. It is super useful when working with local dev servers etc!


It does, but unfortunately without the -k flag.


><img src="http://localhost:xxxxx/launch?action=join&confno=492468757"/...

So a browser allows a random remote website access to stuff running on the localhost interface? Is this a good idea? Stuff like camera access I can at least disable...


Yep. This[0] post[1] from a few months ago touched on this with more discussion.

[0] http://http.jameshfisher.com/2019/05/26/i-can-see-your-local...

[1] https://news.ycombinator.com/item?id=20028108


The browser allows anything according to the CORS configuration on the target server. Perhaps it would be a good idea to prompt for access to localhost/127.* resources.


Hosting a web server on localhost is the equivalent of adding a backdoor on your customer's machines. How does a product team even reach this decision?


By being blind to the implications.

The most charitable interpretation of the Superhuman read-receipt problem is kinda the same thing: they had an idea, thought it was good, and then did some deeply shitty things to make it work. And nobody at Zoom or at Superhuman had the organizational power to stop it.


Do you know if this vulnerability might manifest in other ways, and permit remote participants to force you into sharing or viewing your screen silently?

One of the first times I'd ever used Zoom was in a call with a startup trying to pitch my company on something. The remote participant said something later in the call that was uncannily prescient and related to notes I had in a separate application window. I wrote it off as coincidence, but the phrasing used (and the fact that it was an answer to a question I hadn't asked) seemed nearly verbatim to my written notes.


I really wish physical switches that cut power to mics and cameras were standard on everything.

I know that would be change from how hardware works / is designed now but it also seems like the only reliable line of defense.


Some computers have this, e.g. the Raspberry Pi.


And the upcoming Pinebook Pro comes with hardware privacy switches for the camera, microphones, and bluetooth/wifi.

For those who aren't familiar with Pinebooks, they're $99 ARM-based Linux laptops.


Nice to see some folks adopting a solid last line of defense.


This is crazy; here's a video of what happens after deleting it and then opening a Zoom URL: https://www.youtube.com/watch?v=DMY7Z9Fe0ic I had removed the app itself months ago, before testing this...


These meeting apps feel like the browser plugins of the 2000's. There are so many that do almost the same thing they now resort to seriously insecure methods to make sure you have theirs installed and never remove it.

Apparently one less click is a competitive advantage, whatever the cost.


I don’t get Mozilla and Chromium’s responses. I can think of few cases in which a website should be allowed to issue requests (CORS, img, or otherwise) to an address on the local network and none whatsoever in which a website should be able to contact localhost.

The fix seems straightforward. Require user permission to access the local network (subject to appropriate heuristics as to what “local” means). Require a config option and user permission to access localhost. Problem solved.


The problem with asking permission is dialog fatigue and similar.

As far as supporting local content: Historically a lot of terrible (read: Enterprise, H&R Block tax software, etc) apps are glorified webpages, coupled with a local server that provides things like FS access and malware installation. Those apps use a kludge of remote and localhost urls, and generally expect to work.

I suspect at this point though that browsers will just start going for the "no access to localhost" route as this practice is mercifully dying out (alas in favor of Electron apps shipping full, but out of date, browsers).

To me the bigger problem is: Zoom installed a server on a machine, without consent, with the ability to install software (without consent). Removing the browser's access to that service doesn't mean anything because an attacker can always just directly attack the server.

Even if the server locks connections to exclusively coming from localhost, they've provided a service that can install and launch software, which can therefore be used as a sandbox escape - e.g. a super-constrained network service gets compromised; the idea is that the service can't modify the filesystem or what have you, but now it can just connect to localhost and get a file written to disk.

People keep complaining about Apple "locking down the system", but it's because of developers like Zoom that Apple needs to do this: an average user is not going to see this post, and Zoom has clearly decided that it is in their interest to leave a service running that can install software for them.

I hope that Apple drops the XProtect hammer on the server binary, and the ban hammer on their signing cert.


Yes, but blocking browser access to localhost from non localhost pages would stop the attack by simply visiting a webpage.

It’s as much the fault of browsers for leaving the hole as Zoom for doing a shady job exploiting it.

Very disappointed at Mozilla for their meh response.


They have a web server on your machine. Even if the browsers blocked localhost requests, Zoom would find some other way to handle this, since they already have a server running on your computer.

I don't think the browser vendors are to blame here.


> The problem with asking permission is dialog fatigue and similar.

That’s why I suggested config option and permission. There’s no dialog fatigue if you never see the dialog.

That being said, there really ought to be a little menu of permissions that can be granted to a website such that the website cannot make it blink, flash, or otherwise draw attention to it. Crud like “allow push notifications” could go there. Granting push notification permission to a site is fine, but I don’t think sites should be able to ask for push notification permission.


In my experience if you tell users "to use this awesome thing, you need to enable this option", a reasonable portion will do it.

It sounds like what you're saying is that there should be a dialog, but only if you've already enabled a setting, which raises the question of "if this feature is so bad you don't want it exposed, why would you have it available at all?".


How to protect yourself:

"Disable the ability for Zoom to turn on your webcam when joining a meeting."

It's under Settings->Video in Zoom, check "Turn off my video when joining a meeting".


What if you have uninstalled Zoom? It seems that it leaves a web server on your machine that will re-install Zoom if it receives a request to join a meeting.


Really? That’s nuts. Makes you appreciate the iOS app model a bit more.

Everything sandboxed, delete an app and all traces of it are gone.


macOS is gradually adopting that starting with Catalina, e.g. System Extensions (that will replace Kernel Extensions) and DriverKit drivers too I assume, are installed with app bundles and uninstalled when the app is trashed.


Unfortunately neither of those would help in this case.


As far as I can tell you can't fix this and still be able to do web development on the same machine.


The whole reinstallation thing freaked me out, since I did try Zoom a while back, but apparently my own uninstall process kept the reinstallation hack at bay.

By this I mean:

I have no local web server running on 19421; and

Your link doesn't launch or reinstall anything for me.

Now, something I do that most people probably don't is periodically check StartupItems as well as the LaunchAgents and LaunchDaemons folders, so I can remove anything left over.

I do not mean to trivialize this problem, because what Zoom has done here is egregious and unforgivable, BUT is it accurate to say that the reinstall behavior depends on

1) usage of Chrome, and 2) the presence of a StartupItem / LaunchAgent / LaunchDaemon?

I ask because it didn't work for me, even though I still had the ~/.zoomus shit in place (obvs, I don't anymore).

I just want to make sure I understand it properly, and that I've taken the necessary steps to prevent Zoom's unwelcome return.


I use uMatrix, and I've seen localhost show up as a domain a site tried to connect to quite a few times. I never gave it too much thought since I block all non-first-party resources by default anyway, but I now realise it could indicate the use of tricks like this to attempt to communicate with some other process running on my computer. I'll now make sure to look closer whenever I see this. I bet Zoom isn't the only one doing things like this.


> Our users don’t care about security.

They're not wrong. Empirically, users explicitly preferred Zoom because it lacked the "ask the user" step before starting a session. Less security is a user-visible advantage.


Same problem Microsoft faced when it added "UAC" in Vista. Admittedly the implementation might not have been the best from a usability perspective but I think any attempt at implementing proper privilege management in Windows would have had many users complaining and not seeing the point.

I guess the lesson here is not to give your users bad habits for the sake of convenience otherwise it'll backfire if you ever want to do things right later. MS had everybody run as root for decades before they finally decided that it might not be such a great idea after all, and then they had to face annoyed users and bad publicity.

That being said I can't really imagine how having a non-intrusive "do you want to start the call" dialog before initiating the call can be considered a deal breaker. I assume you could even reduce that annoyance further by adding a "don't ask me again for this website/user/whatever" checkbox. Do you really think that would hurt Zoom significantly? I've never used their product so I can't really form an educated opinion.

This is especially stupid because I have no doubt that now that it's been made public some people will abuse the vulnerability, if only for fun.


It wasn't bad habits. Up to Windows XP, which introduced user separation to consumer-oriented Windows (NT and 2000 were meant for businesses, and businesses with networked PCs were really meant to use those), all personal computers were fully controlled by their users without any notion of privilege separation, a behavior that traces its lineage back to the original Altair 8800. Computers weren't networked, and those that were either ran a different OS (NT, Unix, whatever) and/or were controlled entirely by a single entity (a company). Or people just didn't care and used Windows 9x.

And honestly I do not think it is a bad habit even today. UAC is intrusive; the main reason you do not see it as much as in the past is because applications nowadays work around it: see how Chrome or even VS Code saves the executable files for their updates to your %APPDATA% folder (where regular data normally goes) to avoid the UAC annoyance of going through Program Files (which makes the UAC protection pointless), or how app stores like Steam change the permissions to "everything allowed" to be able to modify the folder contents.

People are using computers to do specific tasks they want to do, anything else is an annoyance and something they'll want to avoid.

Today's security issues come from things a lot of developers and companies simply do not want to acknowledge: trying to put everything online, connect all computers together, trying to have everything controlled by whoever writes the applications users use (putting everything online is a way to do that), trying to come up with monetization schemes where users pay nothing out of their own pockets, trying to make users pay subscriptions instead of one-off fees (the excuse is often that they have to somehow keep their servers going, willfully ignoring that the developers/companies are those who decided to make something run on a server in the first place and that by doing that they are the ones in control).

A lot of security issues would be gone if computers weren't so connected to each other. Sadly I do not see that happening any time soon, since no developer wants to give up that sort of control (some developers nowadays do not even know what it is like not to have it) and no company wants to get rid of the biggest excuse they have to ask for continuous payments.

Personal computers back in the 80s and 90s were very insecure, but that didn't matter because they weren't as connected as they are today. It isn't surprising that pretty much all famous security issues of the time (like the ILOVEYOU worm) happened exactly as that connectivity started becoming widespread.

I think the only hope there is is that the IoT craze will blow up in everyone's collective face and people will realize that it might not be such a good idea to connect everything after all. Sadly the more cynical side of me thinks that what will happen instead is the introduction of more draconian, user-hostile measures, with users losing ever more control to the big companies that control their devices and OSes in the name of security and usability (more like dumbability), and any voice against that will be marginalized as "you are a power user, you do not matter" (ok princess, then what are power users supposed to use after you lock down everything? - I guess the answer is somewhere between "expensive licensed workstations" and "nothing, now piss off").


I had viruses and antivirus software years before I had internet. Getting a virus was trivial in the 90's, when Windows had no security and any program could do anything.


Any program can do anything in modern Windows too, only special places like C:\Windows\System[32] are protected. I'm not against such protections since they can be easily overridden if needed and in day-to-day use they do not harm anyone nor affect negatively the usability of the system.

I'm not saying that we should go back to 90s entirely, we have a lot of good improvements over the years. I'm just hoping we'll tone down the "connect all the things" a bit since that is the main source of a lot of security issues.


I agree that less connectivity is better for security, which is why I think rushing to IoT-everything is premature.

However, unless a computer physically cannot be connected to the internet, it must implement all of the protections it can. Just having wifi disabled or the cable disconnected is a false sense of security.


The question is about the "all the protections it can" part - what does that imply? Because "all the protections" can include user hostile (not just in terms of usability) misfeatures that give control to OS vendors in the name of security even though the real purpose is controlling what the users can do with their own devices (for a variety of reasons, with stuff like market segregation and forced obsolescence being among the more benign ones).


All the protections that help the machine survive in non-compromised state in a hostile environment. I think of stuff like not giving random users permission to write over system files or give processes access to peripherals (camera, microphone) without explicit user consent.


Your comment is a bit ambiguous. Are you saying that even retail software could be considered a virus just because of what it can do on the system? Or was virus software making it onto the machine in other ways?


When I was a kid it was quite normal to pass around floppies and later CDs full of warez. These contained viruses more often than not especially since an infected machine would auto infect any writable media it got hold of.


In our computer lab we got viruses spread by disks.


I think you make good points but to sum it up: privilege separation wasn't needed pre-internet because vulnerabilities and computer viruses weren't that big of a problem back then.

>A lot of security issues would be gone if computers weren't so connected to each other.

I mean, sure, but having computers connected together is pretty damn amazing.

I'm actually drawing the opposite conclusion compared to yours: I think UAC doesn't go far enough. You need more finely grained permissions. That seems to be the trend too: Android, SELinux, OpenBSD's pledge... It's all about giving every process only the privileges it needs and nothing more.


Finely grained permissions mean bad UX, and as Android has shown, you gain nothing practical from them since people learn to ignore the prompts, pretty much like they learned to ignore the UAC warning, while on the other hand you lose the flexibility, functionality and openness of the entire system (all significant pillars for ensuring user control).

Note that I'm not saying to disconnect computers entirely; I'm saying to rely less on connected computers. Simple stuff: use LibreOffice or MS Office instead of Google Docs, use a desktop calendar and other tools instead of relying on "web apps", and instead of using a "cloud-based solution" for syncing data with your mobile phone, just connect it directly to your computer (via wifi, bluetooth, whatever - this is mainly a UX issue - but it doesn't have to round-trip through someone else's server). Stuff that makes you and your computer less reliant on the network.

Not everything can work like that of course, but then instead of trying to isolate applications from each other using fine-grained separation, we can simply treat the network itself as hostile and try to defend against it (e.g. applications that can access the network cannot access anything outside of a designated folder - the OpenBSD pledge approach, but forced on all applications that access the network). I think it is a much easier, more flexible, user-controllable and understandable approach than UAC on steroids or any other approach that relies on application segregation.

It does require a massive shift in developers' mindsets and in companies' profit incentives though, which is why I do not see such a thing happening.


> we can simply treat the network itself as hostile and try to defend from it (e.g. applications that can access the network cannot access outside of a designated folder - the OpenBSD pledge approach but forced on all applications that access the network)

Won't work. Malicious actors (both malware developers and companies with user-hostile business models) will start working around it, by for instance giving you two applications, one connected to the Internet and one not. The first application will be the C&C server, the second one will be the executor, and they'll talk with each other over e.g. files in first application's folder.

Trying to block that would pretty much hose all utility in having a general-purpose computer. You'll be back to the crappy UX of a smartphone.

I honestly don't know how to solve this conundrum. You can't solve it technologically, as you quickly hit the Halting Problem. You can't solve it socially, because for any power user benefiting from the modicum of interoperability you leave in, you get 10 regular people who can be trivially social-engineered into selfpwning their device. It seems that in the end, you'll either have to lock down computers to near uselessness, or live with the risk of bad actors exploiting them.


My comment isn't an ideal solution; it is what I consider a better solution given how things are treated nowadays.

Ideally users would be wary of what they do with their computers, but considering how the world devolved from "you should never use your real name and address online" to modern social media, this is yet another case where I do not see such an ideal happening.


Have you looked into the object capability model of permissions? https://en.wikipedia.org/wiki/Capability-based_security

This is exactly the type of problem it solves, usability with security.


I don't see how it solves the selfpwn problem - that is, for any capability I can explicitly grant if I know what I'm doing, someone else can grant it because a malicious actor nicely asked them to do it. If you take away the ability to grant the capability, you're reducing usability.


Yeah, that's really an unsolvable problem I guess. But you could at least make it clear to the user what some app is requesting. If it's requesting the root capability / ambient authority (basically access to everything) then that should be a big red flag.


This is what things like https://sandstorm.io and Google's Fuchsia OS are trying to solve. Of course it requires a huge shift in how you design applications, but it does not impose any burden on the user's side really. They just allow $APP access to some data or resource, and then it has access to only what it needs going forward, with no need to allow it every time (unless you revoke it). This can be done when the app gets installed, so there's no UX problem really.


My original comment was about having computers be less connected, because a large share of security issues and their implications today arise from connectivity, so I do not see what sandstorm.io is solving there.

I'm not familiar enough with Google's Fuchsia OS to judge, though I do remember reading some months (a year?) ago about a clash between its developers and Google's advertising team that ended with the developers compromising Fuchsia's design. Which brings me back to "let's not rely too much on connected stuff and prefer stuff we have control over, shall we?"


> Do you really think that would hurt Zoom significantly?

Zoom is a publicly-traded company now, so I am sure that adoption through convenience trumps a lot of other concerns.


They’re not incorrect. They are, however, wrong to think that users not caring about security means they don’t have to care either. Product makers have a duty of care beyond what their customers have.


> Less security is a user visible advantage.

No, less friction is a user-visible advantage, less security isn't user-visible, for most users, until sometime after the vulnerabilities exposed thereby are exploited and, when it becomes user-visible, is very much not considered an advantage.


Also, many users of Zoom are job applicants, so they're more likely to care less about security because they really need to be in that interview session.


This is not even remotely true. We use it every day at my workplace (education) - thousands and thousands of employees as well as students. All of our contemporary peer institutions do the same.



It probably helps if I explain what I linked...

Micro Snitch is a small MacOS toolbar application which runs in the background looking for system calls made to the camera or the microphone. It visually indicates when either are being used and logs the activity to a file for future review.


Do people who understand networking better than I do (i.e., almost everyone) want to explain how to universally prevent this localhost garbage? Like, some kind of firewall, combined with a simple command line trigger to open up a port when I actually want to? There's gotta be an open-source firewall for this kind of thing, right?

The notion that some random app can just spin up a server on localhost without my permission is completely insane. Also, this is why Gatekeeper, and the App Store "walled garden" are good---nothing should get the kind of permissions necessary to run a fucking localhost server that can reinstall a deleted app w/o user interaction!!


> The notion that some random app can just spin up a server on localhost without my permission is completely insane.

As far as I know any desktop app (userland code) can listen on a non-privileged port without permissions, on any desktop OS.

I’ve seen a few programs (like R) run web servers to provide documentation (although, the server only ran temporarily).
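A minimal sketch of that claim in Python 3 (purely illustrative, nothing to do with Zoom's actual server; the response string is made up):

```python
# An ordinary unprivileged process can bind a localhost port and serve
# HTTP with no permission prompt on any mainstream desktop OS.
# Port 0 asks the OS for any free high port -- no root needed.
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from localhost")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(body.decode())  # prints: hello from localhost
server.shutdown()
```

The same few lines are all an installer's helper process needs, which is exactly why a browser-side confirmation step is the only choke point left.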


Even if you have the camera disabled (and I never gave camera permissions to zoom to begin with), it will still join a random stranger's meeting, which will leak your name (or whatever name you have configured). This may be important for some.


Yup, it shocked me when I noticed. I changed the name and unchecked the 'remember name' feature, but it turns out that unchecking it will prefill the user account name, and next time the remember-name feature is checked again. So the only two choices are: either remember the name used last or, if you uncheck it, have it fill in the account name.


Zoom's UX has always come off as invasive. An application default that allows hosts to enable automatic camera join is an overstep, and the lengths they go to in order to facilitate this while ignoring long-standing, industry-standard appsec guidelines to prevent XSS are relatively unsurprising, yet hopefully not inconsequential to their enterprise customers.


Allowing the host to unmute participants is pretty invasive too. First time someone did that to me I was floored.


> Apr 10, 2019 — Vulnerability disclosed to Chromium security team.

> Apr 19, 2019 — Vulnerability disclosed to Mozilla FireFox security team.

Does anyone have any idea why there was a 9 day delay between disclosure to Chromium and Firefox teams?


I guess he reported to Chrome first because that’s what everybody uses, found their answer (or lack of answer) unsatisfactory, and then went to report to Firefox.


Here is how I fixed the problem for myself temporarily:

1. Quit Zoom.

2. Kill the ZoomOpener process.

3. cd ~; mv .zoomus/ .zoomus.off/

4. mkdir .zoomus && sudo chown root .zoomus; sudo chmod 600 .zoomus

Now, the Safari permission prompt will show up every time you click on a Zoom link.


It should be pointed out that an empty directory (even if owned by root) placed in your home directory can still be deleted by you, without requiring root. You need to place a file into the directory.

Or if you want something drastic, run

    chflags simmutable ~/.zoomus
as root. This will make sure that not even root can delete it.


That is actually true, just tested it. There is always something new to learn!


Yeah, because removing a file or empty directory only changes the table in the parent directory, you only need write permission on the parent directory.
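A quick self-contained demonstration of that point, assuming Python 3 and throwaway temp paths (permission bits stand in for root ownership here, but the parent-directory logic is the same):

```python
# Removing a directory entry only requires write permission on the
# PARENT directory; the entry's own permission bits don't matter.
import os
import tempfile

parent = tempfile.mkdtemp()               # user-writable scratch directory
locked = os.path.join(parent, "locked")
os.mkdir(locked)
os.chmod(locked, 0)                       # no permissions at all on the dir itself
os.rmdir(locked)                          # still succeeds: only parent's table changes
print(os.listdir(parent))                 # prints: []
os.rmdir(parent)                          # clean up the scratch directory
```

This is why the advice upthread is to put a root-owned file *inside* the placeholder directory: then you can't empty it, so you can't rmdir it.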


This worked for me...

In order to verify that the opener was running, I ran the following command:

    ps aux | grep zoom

To kill the opener I ran the following:

    killall zoom

Then I followed the rest of the instructions above to create a locked down version of the directory. You could also create a file called .zoomus instead (similar to the suggestions made farther down this comment thread).


UPDATE: no need to do this any more. Zoom actually conceded they were wrong and pushed out an update that removes the local webserver: https://imgur.com/gallery/INvYaH4 (from the discussion below in the thread).


It appears that two of Zoom's competitors, Blue Jeans and Webex, also use web servers on localhost:

https://twitter.com/anthonypjshaw/status/1148470933901864960


Has anyone torn down recent-era Macbooks to see if the camera LED is still hardwired to the camera power and a reliable indicator that can't be software disabled?


I don't know what you mean by "still". Apparently it was software-disableable up to the 2011 MBPs.


That's so terrible. Ugh.


I have Zoom installed on my Ubuntu 16.04 and its also vulnerable.

https://jlleitschuh.org/zoom_vulnerability_poc/zoompwn_ifram...


This is even crazier, because a webpage could load the iframe or img tag lazily, long after the page has been opened and is in the background because the user left the tab open, and the user would have no way of knowing which page is responsible for opening Zoom.

Furthermore, in Chrome, the webpage can set a timeout which brings the browser window back into focus. So for example, if Zoom usually takes about 1 second to open, then the browser could set the timeout for 1100ms, so that zoom is only visible to the user for a split second before it's backgrounded with their camera enabled. Either of the following will bring Chrome back to the foreground:

    setTimeout( function() { alert("Hi") }, 1100)

    setTimeout( function() { var win = window.open("https://www.google.com", '_blank'); win.focus(); window.close() }, 1100)
The latter is a little less of an alert to the user that something has happened, since it could be used to reload the current page without the offending image or iframe tag, which would look to the user like the page just randomly reloaded itself.


If you do this audio-only there isn’t a telltale LED on the camera to give away that you are doing it. I’m way more worried about audio bugging than a webcam (which really only has the user’s face)


That's a hard problem to solve. An audio alert that the microphone has turned on would obviously be rejected by consumers, unless maybe some fancy processing were used to remove that alert sound from the recording. But even so, an audio alert would presumably only play once while the indicator light remains illuminated the entire time.

On the other hand while an indicator light is good for the camera, it's not sufficient for audio. If the computer is facing away from me, then the camera can't see me so my inability to see the camera light isn't that huge of a deal. But audio goes around corners so I could be recorded by a computer not immediately in eyesight.

If there were some reasonable third sensory channel available for "out of band" communication, that would be ideal. But consumers will reject smell-alerts.


I'd much rather the Zoom conference couldn't do anything unless I let it.


A hacker could turn the light on and off before you even knew what to do about it. All they need is a picture of your face to do something nefarious.


This feels material, which is why I’m surprised there’s 0 movement in their stock price after hours. Why do you think that is?


Stock markets very rarely care about security, unless it's somehow front page news.


Investors would care if they thought Zoom’s customers cared enough to switch. I personally think this is terrible, but investors don’t seem to think Zoom’s customers care enough to change to a competitor, which makes me second guess whether or not this is actually a big deal to large customers.


my guess is this is of 0 importance to the stock market and large customers.

almost all video communications software works similarly to zoom, so that's that.


Tavis Ormandy joked about buying those dips because of this and I really don't think either of you are wrong and I might start doing it myself.


If a company’s stock drops after a major vulnerability, you should buy. Equifax exposed everyone’s financial data and their stock is back above their pre-breach price.


Can’t Zoom add their own simple prompt to the local server which gets confirmation from the user that they want to join the meeting? Just one more click and not “nasty”.

It could even be 4 different Join buttons:

- Video & Audio

- Video Only

- Audio Only

- No Video or Audio


Without the local webserver, they fall back to Safari's URL handler, which asks whether or not you want to start the application in question.

They went through a lot of trouble to implement this ridiculous solution to avoid the kind of thing you describe.


Which is why they're doubling down on not fixing it.


I mean _with_ their local webserver, can they implement their own, simple confirmation of some kind?


“This [local webserver] is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting.”

https://blog.zoom.us/wordpress/2019/07/08/response-to-video-...

According to Zoom the intended purpose of the local webserver is specifically to avoid the confirmation step.


Well - Safari asks you for confirmation. They built the local, exploitable web-server to avoid the confirmation message. Why would they go to that trouble, only to reimplement what they were trying to avoid?


No amount of security features can rival a small piece of black tape over the camera.

It also makes crypto-phishing (you've been recorded doing X, pay Y BTC) much harder to fall for. Whereas software can (and eventually will) be compromised, the attacker would have to physically access the machine to remove that tape.


One assumes that this activates the green camera light? It's not perfect but I can't imagine not noticing that it had come on.

Not that it helps if you're, ahem, in the middle of something when the nefarious 3rd party opens the line.


AFAIK the light is controlled by the camera's firmware. So unless someone is able to hack that, the light will always turn on with the camera. That said, even Zuckerberg has sticky tape over his camera.


Most of the affected users won’t be able to uninstall the Zoom client in a clean way:

https://apple.stackexchange.com/questions/358651/unable-to-c...

I could not get rid of the client in my process list for weeks and regretted installing it.

I will try the fix mentioned at the end of the article now (first killing the webserver).

They will have a hard time regaining users trust.


Just this morning we were on a Skype for Business call which, predictably, was a hot mess. We mentioned Zoom. I was on their home page. I was this close [ ... ] to installing it.

Now? I hope you're reading, Zoom. No chance now.


The bash script shown in Edit 2 and in the gist linked in the comment are guaranteed to wipe out unrelated files.


I'd be really happy to see:

a) Apple removing Zoom from the App Store (at least for a fixed amount of time before they patch* this nightmare up), and

b) Apple releasing an update to macOS that breaks Zoom's server completely (I know, I'm asking for too much here).

*Talking about patching is a bit of an exaggeration here, because this is not a bug; this is a fucking trojan disguised as a conferencing app. I'd truly block them from the App Store for that. As a fellow developer I'm writing this with a heavy heart, but the incompetence of Zoom's developers is enormous here. The CEO can say anything he wants, but I'm pretty sure it's impossible he was not aware of how the core of his product works. It's not even unethical; you really have to have no imagination to do something like this.

Also, there's a different issue: Macs seem to be pretty solid when it comes to security, but it looks like ANY installer can just spin up web servers on our machines and we won't even know? I'm just a simple developer, not a devops person; how can I prevent this from happening in the future? If they did it once they will do it again. And if not them, then someone else. Any hints? Should I scan my ports every morning and see what can go through every single one of them?
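For a one-off check you don't need a full firewall; a probe of a specific localhost port can be sketched in a few lines of Python (the `port_open` helper is my own name for it, and 19421 is just the port from this article):

```python
# Probe a localhost TCP port to see if anything is listening.
# connect_ex returns 0 when a connection succeeds (something answered)
# and an errno (e.g. ECONNREFUSED) otherwise.
import socket

def port_open(port, host="127.0.0.1", timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. the port Zoom's helper reportedly used:
print(port_open(19421))
```

For a full inventory rather than a single port, `lsof -i -P | grep LISTEN` (macOS/Linux) shows every listening process at once.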


It's been a while since I've been a Mac user but I used to use an app called 'Little Snitch' which would notify you about outbound traffic, perhaps it has a mode that can do something similar.


Why isn't the Windows client vulnerable? What have they/Microsoft done differently?


Safari asks you if you want to open an app that owns a URL scheme to prevent webpages from automatically triggering behavior you might not want.

Zoom decided they know better than the Safari team and decided to install this local webserver specifically to bypass the operating system's security policies, supposedly because "it is their key differentiator" or whatever.

Basically their product managers decided they wanted it to work a certain way and demanded someone do whatever nasty hacks were necessary to make it happen.

It turns out their nasty hack doesn't set the proper CORS policy so any random webpage can force you to join a meeting.

It also turns out they don't do what mac apps are supposed to do: keep this crap inside the app bundle so dragging the app to the trash effectively uninstalls everything. Instead they install to ~/.zoomus, don't document that fact, and if you hit a zoom link after "uninstalling" they automatically reinstall themselves.

Oh, and they let the registration for one of their domains expire and nearly lost control of it, which would make this an RCE because their client doesn't do anything to validate its update packages, as far as anyone can tell.

I think that about covers it?


You forgot one more thing: they don't distribute their crap as a regular self-contained .app; they give you a .pkg which asks for elevated privileges during installation (this is why I don't have it installed).


It's not too hard to extract the app bundle from the .pkg file. This is how I've always installed it. Do this from an empty directory, though, since it will just spray files everywhere...

* Use `xar` to extract the contents of the .pkg file:

    $ xar -xf Zoom.pkg
* Use `cpio` to extract the payload, which is in a file oddly named "Scripts":

    $ mkdir payload && cd payload
    $ cpio -i -d < ../Scripts
* The app bundle is compressed within a 7z archive, but the .pkg file contains a precompiled decompressor. Either use that or install your own (e.g. via Homebrew) to extract the app bundle:

    $ 7zr x zm.7z
Now you will have a directory called "zoom.us.app", which is the app bundle. Move this to wherever you want it to live, and now you've "installed" the app without running the scripts from the .pkg.

Importantly, note that the app will still exhibit the behavior discussed in the article. When you run it the first time, it will install ZoomOpener, which is the helper app that includes the web server. It will not install any browser extensions, however, which is the behavior I was originally trying to avoid by going through this procedure.


This is good info, thank you for sharing it.


I almost never run .pkg installers either. Maybe 1% of apps need elevated installation privileges.


Those little tricks to make things easier are why it's popular and why they're currently valued at $25,000,000,000 (though that should go down quite a bit tomorrow; still just an insane number for a company with $8m in annual profits).


Does this work in the Tor browser to launch the zoom client and expose your identity?

edited: after some research it is clear that this would work in the Tor browser. So if you are logged into Zoom using your real ID a malicious Tor site could launch the client and harvest your name. And if you are only using the browser bundle (and not routing all traffic through Tor) Zoom and/or Zoom+government could use this to expose the real IP of tor users.


"Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day."

So... what's the best way to really really uninstall Zoom client from our Mac?


It's mentioned in the article. After uninstalling the main application, you also have to kill the helper app named ZoomOpener. The article gives some Terminal commands to do this, but you should be able to find it in Activity Monitor if you're more comfortable there. Once you kill ZoomOpener, remove it from the list of Login Items in System Preferences -> Users & Groups. Lastly, delete the folder called .zoomus from your home directory. You can do this in Finder, but it'll be hidden by default, so you'll have to use Go -> Go to Folder... or some other trick to expose it.


Now go explain that to the folks in Marketing.


The folks in Marketing have folks in IT to do this for them.


I would highly recommend visiting the proof-of-concept link, which is a Zoom videoconference hosted by the author full of people testing it out.


The first time I installed Zoom, I opened up a pkg (an Installer package) and Installer said this package wanted to run a script to determine whether or not it can be installed. Usually this is for checking system version or some other harmless action. But for Zoom, this script immediately installed Zoom itself. I immediately thought something fishy was going on.


Ok here's the thing. Open Google Hangouts, or any other website that asks for permission to use your webcam, then close the tab. Go to the terminal and check if VDCAssistant is running using `lsof | grep -i VDC`; it reports that it is still running. I've had this issue since 2015, so I'm glad someone is talking about this now. Is it just me?


That seems like an OS daemon specific to the built in webcam.

https://www.cnet.com/how-to/fix-no-connected-camera-error-on...

> When you run a program that uses your Mac's webcam, OS X will launch a background process called VDCAssistant, which manages the connection and control of the camera. While this process should quit when the program stops using the camera, it may persist if an error occurs, and prevent future connections to the camera, either by the same program or by others.


What’s the issue? That VDCAssistant keeps running for a bit?


> Second, when Zoom is installed on a Mac device by the user, a limited-functionality web server that can only respond to requests from the local machine is also installed on the device. This is a workaround to a change introduced in Safari 12 that requires a user to confirm that they want to start the Zoom client prior to joining every meeting. The local web server enables users to avoid this extra click before joining every meeting. We feel that this is a legitimate solution to a poor user experience problem, enabling our users to have faster, one-click-to-join meetings. We are not alone among video conferencing providers in implementing this solution.

A workaround to legitimate Safari security improvements.

I hope the Wall Street Journal and CNBC skewer this company and shred the stock price.


Zoom is why my shiny 27" Retina iMac is decorated with a small square of black electrical tape.


It always makes me wonder why people keep tape on the camera sensor but don't care about microphones. Maybe I'm wrong, but I think the things you and other people around you say can be of more value to an attacker than what you do or what you look like.


I choose white for mine! It matches better I think. :)


Apologies in advance if someone has already commented about this, but it would appear that Zoom removed that local web server feature for the MacOS version a few days ago:

---

Current Release: July 9, 2019, Version 4.4.53932.0709

New and Enhanced Features

- General Features: Option to uninstall Zoom. Zoom users can now uninstall the Zoom application and all of its components through the settings menu.

Resolved Issues

- Removal of the local web server: Zoom will be discontinuing the use of a local web server on Mac, and it will be completely removed from the Zoom installation.

- Minor Bug Fixes

(https://support.zoom.us/hc/en-us/articles/201361963-New-Upda...)


> [UPDATE 2:35 pm PT, Tuesday 7/9] The July 9 patch to the Zoom app on Mac devices detailed below is now live. You may see a pop-up in Zoom to update your client, download it at zoom.us/download, or check for updates by opening your Zoom app window, clicking zoom.us in the top left corner of your screen, and then clicking Check for Updates.

--

Looks like Zoom has decided to remove the web server from the Mac client and pushed out an update directly to clients (before this, you couldn't get the Zoom client to check for updates automatically). The popup appeared after the meeting.

https://imgur.com/gallery/INvYaH4


Btw Zoom is also reinstalling itself one minute after uninstalling it. Made a video of it here: https://news.ycombinator.com/item?id=20390755


The sad thing is that Zoom will probably get away with not fixing this, or fixing it much later than they should.

If you look on Twitter, anyone that has complained about this (huge) vulnerability is being redirected back to their blog post.

To make things worse, most non-technical users that have caught onto these posts are replying to say "thanks for sorting it out!"

This wouldn't be the first time a company sweeps a data leak or vulnerability under the rug. I remember when the Panera Bread stuff kicked off, and all they had to do was bury their head in the sand and wait for the storm to pass. There's currently a lawsuit in progress, but will that happen for a vulnerability like this?


I always thought it was paranoid to keep tape over your webcam, but this makes a pretty good case for doing that as a last line of defense.

I also want to express my complete disbelief that Zoom basically installed a back door on all its users' machines. It's hard to imagine a competent engineer not understanding the security implications of building something like this. I have no special security expertise, so when I see an exploit that I can actually understand it scares the living daylights out of me. In this case just about anyone with a web page can trigger this Zoom vulnerability.


In this day and age, I wish Apple and other laptop manufacturers had a hardwired power switch on the camera.


Does anyone know of any working alternatives? We use Zoom a lot; it has the most hideous UI and the worst UX, but so far it's been the only video platform that reliably works with tens and hundreds of attendees.


Jitsi Meet [0] is the simplest video conferencing solution that I have ever come across. To use it you go to the website (or the mobile app) and that's it.

You have a free, private, end-to-end encrypted, efficient, multi-participant video chat which allows screen sharing and shared document editing. It works on every modern browser, you don't need to create an account, and you don't need to install an app (except maybe on mobile OS's). It's open-source and you can run your own server.

[0]: https://meet.jit.si/


appear.in has worked well for me in the past.


It looks like RingCentral phone / meeting service may be tied up in this also.

FTA, one step to clean this up is:

pkill "RingCentralOpener"; rm -rf ~/.ringcentralopener; touch ~/.ringcentralopener && chmod 000 ~/.ringcentralopener;

RingCentral and Zoom have a multiyear partnership.

https://www.ringcentral.com/whyringcentral/company/pressrele...


Searching my MacBook with lsof to see if I have this on my machine, I found that I did not have it. But there is a suspicious "Adobe Desktop Service" listening on localhost:15292. I wonder what sort of fun things that would enable a random website to run on my machine. I don't even use Adobe products, willingly, on this machine. Though I probably have installed at least one in the past.

I'm no infosec expert. If I wanted to figure out more about what this process was up to, how would I go about it?


One thing they were right about was this being "standard practice". Even Spotify is doing it: https://twitter.com/braintube/status/1148645026936827905.

So how does one remove these from their machine? I can kill the process, but it will just start again when I restart the machine. Also, how do they do this? I thought all startup items were shown under Login Items in System Preferences.


Sooo... this is still vulnerable?!


Yes. Try this link from the article to see it in action if you have (or had) Zoom installed: https://jlleitschuh.org/zoom_vulnerability_poc/

WARNING, this will open a video chat with random strangers, and will turn your webcam on. Consider yourself warned!


If you want to test it without using your real webcam, I recommend CamTwist [1]. The author is in the group video call now. I joined for a short minute, and was relieved to see that my real webcam wasn't being used.

Normally I use CamTwist so I can write subtitles on top of my video feed when chatting with my gran. It seems it's also a good layer of extra security!

[1] http://camtwiststudio.com/


WARNING, this will open a video chat with random strangers, and will turn your webcam on. Consider yourself warned!

Amusingly enough, this actually exists as a product:

https://en.wikipedia.org/wiki/Omegle

(Edit: just noticed it's already been around for over 10 years. That's rather amazing.)


HOLY SH*T! This is insane.


Personally, I do not think so.

I did a test with myself and a coworker. I’m using macOS 10.12; he’s using 10.14. We both have up-to-date Zoom clients.

In our Zoom clients, we both already had the “Turn off my video when joining a meeting” box checked.

I set up a meeting, with participant video set to On, as the article describes. I took the new Meeting ID, launched Zoom, and joined my new meeting. I then sent my coworker the join URL using Slack.

My coworker clicked on the link, which opened the URL in Safari. Safari asked my coworker if he wanted to launch Zoom. My coworker confirmed that yes, he wanted to launch Zoom.

My coworker’s Zoom client did _not_ automatically start video. I never saw video come in from him.


> In our Zoom clients, we both already had the “Turn off my video when joining a meeting” box checked.

I believe this is one of the mitigations, which is why it didn’t work.


That makes sense. But, I don’t remember ever turning on that checkbox.


This is the issue. It’s on by default.


I’m sorry, I am really confused.

The box says “Turn off my video...”. So, I think having it on by default is a good thing.


The default is that the box is unchecked - i.e. the Zoom client will, by default, automatically turn on your camera when you join a meeting. You can opt out of that behavior by checking this box, but that behavior is the default.


I just received this message prompting me to "Update now."

Release notes of 4.4.53932.0709:

## Remove local web server

- We are discontinuing the use of a local web server on Mac devices. Following the update, the local web server will be completely removed from the Zoom installation.

## Option to uninstall Zoom

- Zoom users can now uninstall the Zoom desktop application and all of its components through the settings menu


I just got an update! Good job Jonathan Leitschuh!

Release notes of 4.4.53932.0709:

Remove local web server

-We are discontinuing the use of a local web server on Mac devices. Following the update, the local web server will be completely removed from the Zoom installation.

Option to uninstall Zoom

-Zoom users can now uninstall the Zoom desktop application and all of its components through the settings menu


Huge kudos to the author for finding this security hole, reporting it responsibly, and then posting such a clear writeup.


Here's a small script you can run to mitigate the issues described in the article: https://gist.github.com/notmyname/824db39350e3d39496de2ea930...


How do you recommend uninstalling this?


From the article:

> To shut down the web server, run lsof -i :19421 to get the PID of the process, then do kill -9 [process number]. Then you can delete the ~/.zoomus directory to remove the web server application files.

> To prevent this server from being restored after updates you can execute the following in your terminal:

rm -rf ~/.zoomus

touch ~/.zoomus


note the last part, where the directory (~/.zoomus) is removed and touch creates a file at ~/.zoomus. I am assuming a re-install will fail because it cannot create the directory again when there is already an existing file at that path.

My guess is that if sometime in the future you want to use Zoom again, the install will fail until you remove the file ~/.zoomus
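That's exactly the trick: a plain file occupies the path, so nothing can create a directory with the same name. A quick demonstration of the mechanism in Python (using a temp directory rather than your real home):

```python
import os
import tempfile

# A plain file at a path prevents creating a directory of the same name,
# which is why the reinstall should fail after `rm -rf` + `touch`.
with tempfile.TemporaryDirectory() as tmp:
    blocker = os.path.join(tmp, ".zoomus")
    open(blocker, "w").close()          # stand-in for `touch ~/.zoomus`
    try:
        os.mkdir(blocker)               # what an installer would attempt
        blocked = False
    except FileExistsError:
        blocked = True
    print(blocked)  # → True
```

Of course, an installer running with elevated privileges could simply delete the blocker file first, so this is a speed bump rather than a guarantee.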


I did this: 1. killed it by process name, so the Zoom app will 2. fail to start its opener and 3. fail to reinstall it:

  killall ZoomOpener
  chmod -x .zoomus/ZoomOpener.app/Contents/MacOS/ZoomOpener
  sudo chown -R nobody:nobody .zoomus/ZoomOpener.app


Doing it that way results in a nuisance prompt from Zoom every time you launch it complaining that it can't launch the opener.

Here's a modified version that deletes the app, removes the LoginItem if it exists, and makes the ~/.zoomus directory unwritable, which achieves the same thing but avoids the nag:

    killall ZoomOpener
    osascript -e 'tell application "System Events" to delete login item "ZoomOpener"'
    rm -rf ~/.zoomus/ZoomOpener.app
    sudo chown -R nobody:nobody .zoomus


Thank you for sharing this. One small typo or formatting error: The last line is missing a ~/ and should be: `sudo chown -R nobody:nobody ~/.zoomus`


Ah, yes - thank you for that.


Interesting, I don't get this nuisance, even after their update.


Not sure why he didn't just give us

    kill -9 $(lsof -i :19421)


For the non-bash users among us?


There's nothing specifically bash about it, but here's what the components mean:

"kill" = kill running process

"-9" = kill as forcefully as possible

"$(...)" = command substitution: run the stuff inside the brackets and replace this term with the results (it will be the processes to kill in this case).

"lsof" = list open files (other things like ports and devices count as files on Unix systems)

"-i" = search for internet address

":19421" = local machine, port 19421

I think they're missing a "-t" on lsof, to make it output process IDs only ("terse mode") instead of a human-readable table:

  kill -9 $(lsof -t -i :19421)


That "command substitution" bit will fail on tcsh, say, last I checked.


Is lsof part of the default install of OSX? It isn't usually part of the base install on Linux (though obviously very easy to install)


although command substitution is in the POSIX sh spec and so not a bashism, $(...) doesn't work in e.g. fish


that works on all korn shell based posix shells.


macOS and iOS both support custom schemes, and have done forever. What feature does Zoom actually want?


Not having to cede control of the experience to the system, presumably.


Not having to cede control to the user. If the user wanted/wants your software they know how to install software at this point. They also have a much better mental model of what that means.


Yeah. That's why I bought a webcam cover. I highly recommend if you don't have one already.


It looks like it can also happen if you simply open an HTML email with Apple Mail: https://twitter.com/funjon/status/1148464952480374784


This is thoroughly disappointing, given that Zoom is among the few popular conferencing programs that aren't complete garbage for Linux users. Thankfully the Linux client doesn't appear to be affected AFAICT, but this is trust-shattering nonetheless.


Is anyone noticing that port `19421` no longer has anything listening on it? We are noticing some machines suddenly stop listening on this port even though they have not downloaded any update. Is Zoom patching the running web server without user interaction?


They reversed course a couple of hours ago: https://www.theverge.com/2019/7/9/20688113/zoom-apple-mac-pa...


The odd thing is we are seeing this on clients that have not updated. So is this something they are controlling remotely?


Surprised something of this magnitude seems to have 0 impact on stock price after hours.



Their response was not to fix it, which is unusual:

https://blog.firosolutions.com/exploits/zoomtozero/


A few of our team had the Zoom app installed, but no-one had a running webserver on that port. None had used the app in a long time though.

Was this maybe added in a recent version, and perhaps they just haven't updated?



I'm surprised that Mac doesn't have a built-in firewall that warns if an app installs something that listens on a port. They advertise OSX as being "secure by design."


The built-in firewall does exactly that. It may not do that for things that only listen on localhost, though.


FYI: I was able to reproduce the issue with the Linux client as well.


Why don't people just use the web based Zoom client? I do this exclusively, have done so for a few months now.

One of the main features of a browser is to provide a secure runtime.


There's a web client?

I'd guess the reason is that, if you don't have any of the native apps installed, and you click a Zoom meeting link, the browser will download the native client installer. There's no mention at all on the download page that there is a web client.


My job requires me to use Zoom for communications – is there any way I can run Zoom in a container or something, so that when I kill the app all traces of it are gone?


Note that the server persists after the app is removed, so there’s that as well.

Follow on: What are they talking about regarding custom url handlers? That’s a standard OS X feature...


Ok, so the article says

"According to the Zoom team, the only reason this localhost server continues to exist is that Apple’s Safari doesn’t support URI handlers."

Which is simply wrong. macOS (and i*OS) have supported custom URIs forever. What feature are they wanting? Do they want random websites to be able to install URI handlers?


They want to be able to re-install zoom for the user even if they have deleted it from /Applications.


GoToMeeting and Zoom are two things I always insist not to use. There are perfectly acceptable online-only counterparts that don't need to infect my computer.


I'm curious what your objection is to GTM. I've been using it for a decade and have really come to see it as the only reliable option for us.


Its installs are not incremental. How many versions of GTM are currently installed on your computer? Last time I cleaned it up, I had six.


Would be interesting to see a list of software that installs a local server (http or otherwise) and whether or not it typically runs in the background on startup.


This is ridiculous. I am cancelling my Zoom subscription.


What blows my mind the most about all of this is how such a successful company has managed to engineer such a shitty solution to this very common problem. I'm not even speaking from a security standpoint (which is a catastrophe), but this feels like some holier-than-thou neckbeard wanting to literally reinvent the wheel on everything. Encoding enums as images served from a local web server with various pixel widths??? You can't make this up. I never liked their UX and now I guess I don't like what's under the hood either. Good riddance.
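For the curious, the pixel-width trick works because `<img>` loads are exempt from CORS checks, so an attacker's page can fetch an image from the localhost server and read back its `naturalWidth`/`naturalHeight` to learn the encoded value. A toy sketch of the encoding half in Python (the GIF byte layout is real; the idea that Zoom used GIF specifically, and the example values, are purely illustrative):

```python
import struct

def status_gif(width, height):
    """Mint a minimal GIF header whose dimensions carry two small integers.
    A real server would return a complete image; the header is enough to
    show where the width/height live: little-endian uint16s right after
    the 6-byte "GIF89a" signature."""
    return b"GIF89a" + struct.pack("<HH", width, height) + b"\x00\x00\x00"

def read_dimensions(gif_bytes):
    """What an attacker's page effectively learns via img.naturalWidth/naturalHeight."""
    return struct.unpack("<HH", gif_bytes[6:10])

# Hypothetical example: encode "status 19, camera on" as a 19x1 "image".
print(read_dimensions(status_gif(19, 1)))  # → (19, 1)
```

The browser happily renders (or fails to render) the image either way; the information leak is entirely in its reported dimensions.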


Have been using Zoom for the last three years and I have recommended it to a lot of people.

Seeing how they handled this accident I will never recommend them again.


Messing up your decoding for weird unicode characters in your text message app is an accident, but this one wasn't an accident.


The title is click-baity. What it allows is for a website to automatically join you into a meeting with the camera enabled, without you asking. So it's annoying, but it doesn't let a website secretly grab your video without you knowing.

The reason WebRTC has per-site permissions is that WebRTC can indeed grab your video without you realising it, so it's important to give each site permission to use your camera. This isn't the case with Zoom... it pops up a massive window when you enter the meeting.


I don't even know why laptops have cameras. Do you really need to see someone for a conversation? Mine's permanently taped.


Blown out of proportion. A real vulnerability I wish was handled more seriously. But at the same time, even after I "fixed" the vulnerability, my preference in Chrome to always open Zoom links with Zoom made it moot. The problem is with lax browser security and CORS as a product "feature".

It is worth underscoring that the only reason this vulnerability exists is because Safari forced appropriate prompts. Zoom hacked around it, and got away with it. That's on browsers to fix.


brew zap formula[0] worked for me to uninstall the listener..

brew update && brew cask zap -f zoomus

[0]: https://github.com/Homebrew/homebrew-cask/blob/master/Casks/...


I added this to my uBlock Origin filters and it has fixed it:

  ||localhost:19421^$all
  ||127.0.0.1:19421^$all


If you must use Zoom and you use GNU/Linux, run it under firejail.

With firejail you can sandbox anything.


Nothing can beat a piece of duct tape


I know it's said in jest, but the trusty screwdriver makes for a much more permanent solution.


If you want to be taken seriously, don't include these dumb images in your post.


Zoom allows iframing the join page as well? Why are they not setting X-Frame-Options‽


Luckily I use Pi-hole; I just blacklisted zoom.us and zipow.com


What about Audio? Can this be used to exploit that as well?


Long live the App Store - developers, customers, Apple.


If you're interested in seeing if you're vulnerable to this, visit this website: http://zoomzeroday.com


...no thanks. The author already mentions links you can use to check; there's literally no reason to advertise this unless you, OP, are being malicious and/or didn't read the actual article.


I did read the article, and I don't have malicious intent. I was just trying to make a more easily sharable URL for people trying to test if they are vulnerable. I include links to the medium article and an update that there is now an update released to fix this web server issue.


I've lost all trust in Zoom


Might also affect Discord


It's pretty convenient for Zoom that this occurs after the IPO.


just came to say: lol


Jesus Christ


Note: "Zoom" is a videoconferencing app, not a built-in Mac OS accessibility feature for "zoom".

The article does not clearly state this, ceding a plain English word to a corporation, enabling a takeover of human language.

P.S.: This part

> Apr 26, 2019 — Video call with Mozilla and Zoom Security Teams

is funny, and would be way funnier if it was a non-consensual video call.

Finally, note that Zoom effectively does not pay for bug bounties, so researchers should think twice about donating their expertise to a selfish for-profit corporation, and users should think twice about using a videochat product that allows its entire security team to take blackout vacations and also doesn't pay its outsourced security researchers.


Finally, note that Zoom effectively does not pay for bug bounties, so researchers should think twice about donating their expertise to a selfish for-profit corporation

I've read this a few times and am curious if this has really become the prevailing view about what security researchers are doing (i.e., uncompensated labor) when they notify vendors about security vulnerabilities.

The traditional view (which I think was widespread in the 90s or whatever) was that engineers who find vulnerabilities in products have a special responsibility to the public, and owe a duty to the people at risk: the users of the product (or whoever would be harmed if the vulnerability were exploited to malicious ends). Just like if you used your training as an engineer to discover that the Bay Bridge had a structural flaw and that drivers were at risk (or, in the case of Diane Hartley, that the new Citicorp Center had a design flaw and officeworkers were at risk). And this duty can be discharged a few ways, but often the most efficient way to help the people at risk is to educate the vendor and keep on their ass until they fix the problem in a free update. If the vendor pays you, fantastic, but you shouldn't accept payment that would prevent you from discharging your duty to the people actually harmed by the vulnerability's existence (e.g., if you take the vendor's money and it comes with an indefinite NDA, and they never fix the problem and the users remain at risk of being harmed by bad actors forever, you have not behaved acceptably as an engineer). This view probably emerged at a time when bug-finders mostly had salaried jobs and were privileged not to have to depend on payments from the same vendors they were annoying with information on their product's flaws.

A newer view (probably informed by bug bounties, etc., and also a broader community of people doing this stuff) seems to "no more free bugs for software vendors" -- that researchers who find vulnerabilities in commercial products are producing knowledge that's of value to the vendor, and the vendor ought to give them compensation for it, and if the vendor doesn't want to do that, the researcher would basically just be doing uncompensated labor to give it to the vendor, and is free to go sell the fruits of their discovery to somebody who does value their labor instead. Even if that means selling the bug to unknown counterparties at auction and signing a forever NDA not to tell anybody else.

The first view is mostly what we teach students in Stanford's undergrad computer-ethics course and what I think is consistent with the rest of the literature on engineering ethics (and celebrated examples like Diane Hartley and William LeMessurier, etc.), but I do think it seems to be out-of-step with the prevailing view among contemporary vuln-finders. I'd love to find some reading where this is carefully discussed that we could assign students.


I can't imagine selling bugs to the highest bidder ever becoming ethically acceptable. You can't pretend not to know that the high bidder is probably a cybercriminal. If you do this, your hat is clearly black.

Once upon a time, vulnerabilities were just nuisances and people could justify some gray-hat casuistry when the damage was just some sysadmin overtime to clean up. But now there are serious organized crime rings and rogue nation-states using vulnerabilities to steal and extort billions and ruin people's lives.

It's OK to choose not to work on products with no bug bounties, but if you do find a bug in one you must disclose it responsibly.


>you must disclose it responsibly.

While most people agree selling a vulnerability is immoral, there is much debate on whether "full disclosure" is ok, and whether "responsible disclosure" is a term anyone should ever say (some argue the correct term is "coordinated disclosure").

https://news.ycombinator.com/item?id=18233897


The first view meets some sort of ideal (I guess) but causes all sorts of free riding problems. In larger society these sorts of problems are solved through regulations. For example if someone identifies a structural vulnerability in a bridge, the agency in charge of the bridge has a legal obligation to take steps to fix it. That sort of regulation doesn't exist in software land.

The second view as you describe it (selling to the highest bidder) is clearly black hat, but it is completely ethical for a researcher to disclose a vulnerability to the public if the vendor doesn't fix it in a reasonable amount of time. So Project Zero and this disclosure are both fine. Yes, ordinary users may be harmed in the crossfire, but the vendor should be liable for damages.


Beyond just a prevailing "view", this duty to public safety is actually explicitly codified in the laws and regulations of most professional engineering organizations. To act otherwise would be a) unethical and subsequently b) grounds for loss of license to practice.


If only software development was actually an Engineering profession....


I would say the 'first view' you've described is what the bulk of professionals in the information security industry would still espouse as the ideal.

In my opinion this second view you are observing is carried by a vocal minority of participants in bug bounty programs and would be good fodder for a computer-ethics course.


They’re donating their expertise because, yes, this research is extremely valuable and important, but the vendor should obviously be paying for it.


I feel like selling bugs to the highest bidder is usually ethically questionable, no matter how “new” your viewpoint is.


>The article does not clearly state this, ceding a plain English word to a corporation, enabling a takeover of human language.

The English language can handle it:

proper noun

- A noun belonging to the class of words used as names for unique individuals, events, or places.

- A noun denoting a particular person, place, organization, ship, animal, event, or other individual entity.

- A noun that denotes a particular thing; usually capitalized


>The article does not clearly state this, ceding a plain English word to a corporation, enabling a takeover of human language.

I agree with your outrage, but you have a long way to go. That sort of behavior is the soup du jour of SV the past ten years or so.

Keep fighting the good fight. I've given up, but I hope you win.


10 years is nothing on the scale of language development, and SV is nothing on the scale of the English-speaking world :) Fear not, I bet there aren't enough words to go around for this to be a big deal long-term.


> Offered and declined a financial bounty for the report due to policy on not being able to publicly disclose even after the vulnerability was patched.

They seem to pay bug bounties if you agree to keep it down.


that's not a bug bounty, that's reputation management


That’s a polite way of calling it what it really is — “hush money”.


> The article does not clearly state this, ceding a plain English word to a corporation, enabling a takeover of human language.

English usually wins.


It's pretty well known in white hat circles that Zoom has a paid private bounty program through one of the "big 2". I know several who have got paid. Say what you like about non-disclosure, but it is the reality for most programs. We can disclose for pay, or disclose for fame, but usually not both.


I've lost all trust in Zoom at this point


(prior reply deleted once I read about the fucking local webserver & phantom reinstallation bullshit. Fuck zoom.)


It's ridiculous to install a constantly running web service that uses tricks to circumvent CORS protection and to get around Safari's protections, both of which were rightly created to improve users' security.

It's not a "so-called vulnerability". As the article describes, this could be used in concert with another vulnerability to achieve RCE. Combining vulnerabilities is often how RCE is attained.

These actions undo the thoughtful work of information security professionals to protect users. It's astonishing to me that people can't see what's wrong here.


Yeah, I was focusing on the webcam thing. That piece, taken individually, isn't a big deal.

But the web server / CORS bypass is completely fucked up, nefarious, and unforgivable.

Accordingly, I edited my post.


Could you further explain the CORS bypass? Why do they have to do the image hack if they can just open up CORS on the local server? At that point couldn't they retrieve data via JS instead?


CORS isn't supported to localhost, aka you can't do that; hence the image-size hack


CORS is indeed supported and also required on localhost if you're using two different ports (e.g. an API server and a hot-reloading dev server for a UI).


It appears CORS _is_ supported to localhost according to this website.

If you have an open local server running this will detect it.

http://http.jameshfisher.com/2019/05/26/i-can-see-your-local...
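
The detection trick that site relies on boils down to attempting TCP connections to loopback ports and seeing which ones answer. A minimal sketch of the idea (the port range here is hypothetical; the thread doesn't name the port Zoom's server actually listens on):

```python
import socket

def local_port_open(port: int, timeout: float = 0.25) -> bool:
    """Return True if something is listening on 127.0.0.1:<port>."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex(("127.0.0.1", port)) == 0

# Hypothetical sweep of a range a hidden helper might occupy:
open_ports = [p for p in range(19400, 19500) if local_port_open(p)]
```

A web page can approximate the same probe by timing resource loads against localhost, which is why a silent local server is discoverable by any site you visit.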


But the image is being served from localhost no? Do image requests not abide by CORS?


They do not. At the time CORS was designed, lots of sites loaded images from other sites, and because images were considered static content that didn't change the server, this was at worst an information leak.

What Zoom has done here is abuse an HTTP GET via an <img> tag (which is not supposed to change anything) as a way to trigger a privileged local process to INSTALL software (among other things). This is a classic XSS and is number 7 on the OWASP Top 10 vulnerability list (2017 version).

For Zoom to contract as a BAA with HIPAA-regulated clients and various other bodies, they had to agree that they would NOT do this and that they had security teams and audit processes in place to prevent this sort of thing. Nearly ALL of our client contracts require we be aware of and mitigate AT LEAST the OWASP Top 10.
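
Because the page can't read the response body of a cross-origin image, the article describes the server smuggling its answers out through the image's *dimensions*, which `img.naturalWidth`/`naturalHeight` expose to any page. A sketch of that side channel (not Zoom's actual code; a minimal PNG built by hand whose only payload is its size):

```python
import struct
import zlib

def png_with_dimensions(width: int, height: int) -> bytes:
    """Build a minimal grayscale PNG whose only 'message' is its size."""
    def chunk(ctype: bytes, data: bytes) -> bytes:
        body = ctype + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: width, height, bit depth 8, grayscale, default compression/filter/interlace
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    # one filter byte plus <width> pixel bytes per row
    raw = b"".join(b"\x00" + b"\x00" * width for _ in range(height))
    return sig + chunk(b"IHDR", ihdr) + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b"")

def read_dimensions(png: bytes):
    """What a hostile page effectively learns via img.naturalWidth/naturalHeight."""
    return struct.unpack(">II", png[16:24])
```

The local server can map each status ("client installed", "meeting joined", and so on) to a distinct width/height pair, and the page decodes the reply without ever reading a byte of the response body.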


Thanks for the info!

I still don't fully understand _why_ they had to do this hack if they own the localhost server. They could just set CORS to be '*' and relax their CSP. Then they would be able to get data with JS.

For example, this website can see any local server on your network with open CORS, since it appears they relaxed their CSP.

http://http.jameshfisher.com/2019/05/26/i-can-see-your-local...


my understanding (have not tested this) is that CORS "*" does not work in all browsers between `localhost` and other domains. This is also AFAIK an intentional security feature. Even so, CORS "*" would be even more explicitly bad behaviour. The whole point of CORS is to prevent XSS from random sites linking to your end points.
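
For reference, "opening up CORS" on a local server just means sending back one response header; whether a given browser honors it for a localhost origin is a separate question. A minimal sketch (not Zoom's actual server) of what such a wide-open endpoint looks like:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PermissiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # "CORS *": any origin may read this response (if the browser allows it)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "running"}')

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port on loopback only.
server = HTTPServer(("127.0.0.1", 0), PermissiveHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
allow_origin = resp.headers["Access-Control-Allow-Origin"]
body = resp.read()
server.shutdown()
```

The image-dimension hack sidesteps this question entirely, since `<img>` loads never consult CORS in the first place.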


Yeah, I wouldn't even call this a vulnerability. I'd call it malware. Nothing should secretly reinstall deleted apps without user interaction. Never. The user expressed the intention to delete the app, and you're undoing it without their permission? Deliberately defeating expressed user intent. Malware. Period. It's the Zoom Trojan.


> It's ridiculous to install a constantly running web service that uses tricks to circumvent CORS protection and to get around Safari's protections, which were both rightly created to improve user's security.

All of this to avoid an extra click. I know UX is important, but it is not more so than security.


Jonathan pointed out something important on the chat last night. In many cases, the auto-join is a vulnerability in itself, even if the video doesn't turn on.

It allows the attacker to potentially unmask your identity if you are logged into Zoom. When you join the call, you will show up in the participants list.

This is definitely something that you would not want to happen on various parts of the web. It kills your ability to browse privately.


> But insisting Zoom change the software because it's possible some doofus might be duped into joining a meeting with someone is kinda ridiculous, IMO.

In my experience (the energy sector), most of the people I interact with on Zoom would definitely fall for joining some random meeting that popped up. They are incredibly good at their field of expertise, but certainly doofuses when it comes to knowing how to click on things in Zoom.


> the Zoom client starts up. It'd be hard to miss

They already have you on video at that point. The summary above is very fair; there's no point trying to throw more PR at this problem. Ignoring other issues and focusing on the main point: they could hugely improve security simply by implementing a confirmation dialog with "Yes" not selected as the default. They also need to communicate to their users why they did this, and be honest about it.


And WebEx and Bluejeans, too?


It's a fantastic piece of software I use daily, so I'm inclined to give them the benefit of the doubt before joining an internet pitchfork mob.


If you qualify software that performs such a blatantly awful/wrong practice as fantastic, then I'm afraid you need to redefine your views of what constitutes good software.

This is a truly heinous design and should be lambasted as such


as demonstrated, "responsible disclosure" is a huge time waster for the discoverer, and the price of this is undervalued even when the company has a clear bug bounty program.

it's more valuable than 90 days of a developer's time; its worth isn't really correlated to time at all


I guess this depends on your definition of responsible. Something like this, however, is bad enough that users should be informed right away so that they can take the steps necessary to secure themselves. Assuming they were responsive, I'd have given them the 10 days to confirm it was an actual issue, but I'd have expected them to notify the public and their users of the issue and mitigation steps within a week.


> users should be informed right away so that they can take steps necessary to secure themselves

For the record, this could be accomplished by a trustworthy source announcing "there is a critical vulnerability in Zoom's macOS software and you should uninstall it immediately pending vendor response". Some researchers do this already -- Tavis Ormandy has, for example.

It's not a binary choice between no disclosure and releasing an unpatched PoC.

By the way, I'm not trying to argue that this researcher behaved unethically, just sharing another option. My usual take is that the researcher gets a lot of leeway for having to make a difficult decision and presumably trying their best to balance consequences, similarly to how a pilot trying to land an emergency plane has great discretion in how they do so.


Unfortunately in this case "uninstall it immediately" does not actually mitigate the vulnerability, since it will just reinstall itself if you come across a triggering link.


Right, I'm talking about the working uninstall instructions in the Medium post.


Don't those steps effectively give away the vulnerability though?


Click on the app icon, hold, move to Trash.


It is mentioned in the third paragraph already, highlighted in green. They don't offer a method of clean removal to their users. They run a web server on your machine that will reinstall Zoom on your macOS whenever it is convenient for them (secretly, without asking you first).

See here: https://apple.stackexchange.com/questions/358651/unable-to-c...

That web server is exploitable, as explained in the article.

Note that most Zoom users (probably lots of business people) won't be capable of following the uninstall steps that are currently necessary.


Now, notwithstanding what I posted above, THIS is fucked up and inexcusable.


I do NOT appear to have the web server running, but I did have the ~/.zoomus folder and the ZoomOpener app there.

Is this because I'm scrupulous about killing LaunchAgents and LaunchDaemons?


Run this:

  ps aux | grep zoom

You'll probably see "ZoomOpener" there. It is running, but it's not in the "Force Quit" menu. Then, to kill it, run:

  killall zoom

Then you can follow the other directions indicated by the previous poster, who gave information about how to lock your ~/.zoomus directory down to root so that it can't install itself again.


I do not have ZoomOpener running.

My feeling is that removing the startup item probably cripples this, no? I mean, fuck them for doing this, and get rid of all of it, but I think the StartupItem is required for their hack to work.

Right?


Which isn't actually enough, since the surreptitiously installed server will happily go and reinstall the Zoom client for you whenever you load a zoom link, or a malicious link. You have to kill the server, and remove the ~/.zoomus directory as well. This is all pretty damning to be honest.


I would have loved to be a fly on the wall of the meetings where that policy was designed and approved.

Did no one at all speak up and say "hey, running secret webservers on obscure ports without telling the user is shady stuff"?


Just to be sure, I don't think that's enough. You might want to kill the running process and remove the binary (as described in the "Quick Fix" section of the blog post).


Leaves behind an exploitable web server on localhost


Not an option if your employer uses Zoom for all of its internal and customer-facing meetings.


... and delete the webserver from the background


The article’s actual title is: Zoom Zero Day: 4+ Million Webcams & maybe an RCE? Just get them to visit your website!

But, assuming I’m reading it correctly, the “maybe an RCE?” part seems like fear-mongering, because it would require that Zoom lose control of one of the domains that they trust for transparent client installs/upgrades.

I’m also a little concerned about how some parts of the article don’t match up. For example, the “UPDATE: June 7th, 2019:” does not have (as far as I can see) a matching entry in the Timeline. There is an entry for July 7, noting a regression; but there is an update the next day (July 8) noting that the regression has been fixed.


Dangling domains happen all the time. As long as the main one that's actually used is still controlled by the org, others can quite easily slip through the cracks, not renewed while still present in the codebase.


Indeed! It's not even a hypothetical in this case. According to the article, Zoom was just 5 days away from letting one of the domains expire when the author told them.



