Zoom RCE from Pwn2Own 2021 (computest.nl)
262 points by xnyhps on Aug 28, 2021 | 122 comments



FTA: "Using a combination of proxies, modified DNS records, sslsplit and a new CA certificate installed in Windows, we were able to inspect all traffic, including HTTP and XMPP, in our test environment."

I have set up Wireshark for troubleshooting. That's about it. What's the role of proxies, modified DNS records, etc. in this setup? How can I duplicate it?

Thanks.


In addition to what others have mentioned: you could probably find the session keys in RAM, but for a system without debug knobs, injecting your own certificate authority is probably easier.

For stuff using NSS (Firefox), OpenSSL, or GnuTLS, you can usually just ask nicely for a copy of the keys:

> The key log file is a text file generated by applications such as Firefox, Chrome and curl when the SSLKEYLOGFILE environment variable is set. To be precise, their underlying library (NSS, OpenSSL or boringssl) writes the required per-session secrets to a file. This file can subsequently be configured in Wireshark

https://wiki.wireshark.org/TLS#TLS_Decryption

https://gnutls.org/manual/html_node/Debugging-and-auditing.h...
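For anyone wanting to reproduce the key-log trick outside a browser: a minimal sketch (my own, not from the article) using Python's ssl module, which also honours SSLKEYLOGFILE when built against a recent OpenSSL (Python 3.8+). The file path is just an example; point Wireshark's TLS preferences at the same file to decrypt the capture.

```python
import os
import ssl
import urllib.request

# Hypothetical key-log location; Wireshark reads per-session secrets from here.
os.environ["SSLKEYLOGFILE"] = "/tmp/tls-keys.log"

# create_default_context() picks up SSLKEYLOGFILE automatically on supported
# builds; setting keylog_filename explicitly makes the intent obvious.
ctx = ssl.create_default_context()
ctx.keylog_filename = os.environ["SSLKEYLOGFILE"]

with urllib.request.urlopen("https://example.com", context=ctx) as resp:
    print(resp.status)  # the session's secrets are now in /tmp/tls-keys.log
```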


With DNS, one can avoid having to use the firewall for redirection.

The sslsplit documentation actually suggests DNS as an alternative to using a firewall.

There are a number of easy-to-use UNIX firewalls. Not sure about Windows.

Proxies allow easy inspection of HTTP traffic, among other things. Arguably sslsplit is itself a proxy, specifically a forward proxy.

There are many ways to monitor HTTP traffic.
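To make the moving parts concrete, here is a rough sketch (my own illustration, not from the write-up) of what the DNS-redirect-plus-proxy combination amounts to at the plain TCP level: point the client at the relay via DNS or /etc/hosts and every byte flows through a process you control. sslsplit adds TLS termination and re-encryption on top of this; the hostnames and ports below are placeholders.

```python
import socket
import threading

LISTEN = ("0.0.0.0", 8443)                    # where the redirected client lands
UPSTREAM = ("real.server.example", 443)       # the genuine endpoint

def pump(src, dst):
    # Copy bytes in one direction until the connection closes.
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)                # a real interceptor would log here
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

with socket.create_server(LISTEN) as srv:
    while True:
        conn, _addr = srv.accept()
        handle(conn)
```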

Why doesn't Zoom use certificate pinning?

(I avoid using sites/apps that force the use of third-party-controlled pinned certificates. What are they trying to hide from the user?)


Certificate pinning makes it impossible to examine what the software on my own machine is sending over my network! Please don't do that.


Isn't certificate pinning what keeps my employer from MITM'ing my personal email session on their network?


As an employer I would prefer employees not to use the corporate network for personal email. The network exists for business use.

As an employee I prefer not to use the corporate network for truly personal email.

If I am an employer that responsibly monitors the traffic to and from our network, including TLS traffic, an employee that uses our network for personal use with a surveillance "tech" company service such as Google Mail, Facebook, etc. is putting her own privacy at risk. Because I can extract her cookies from the traffic, all she has to do is forget to log out once, and I now have a "bearer token", i.e., a cookie with no expiration,^1 that lets me access her account at any time in the future.

1 The type of cookie that lets users stay "logged in" indefinitely. A non-"tech" company with sufficient legitimate sources of revenue besides online ads may not use such cookies. For example, if an employee logs in to her personal bank account using the corporate network but forgets to log out, the bank website will log her out automatically and the cookies will expire.


>As an employer I would prefer employees not to use the corporate network for personal email. The network exists for business use.

And as an employee that actually exists in 2021, I'd tell you to get a clue.

>As an employee I prefer not to use the corporate network for truly personal email.

And that's your preference. If you think everyone shares that preference or even realizes the implications, you're delusional.

>If I am the employer that responsibly monitors the traffic to and from our network, including TLS traffic, an employee that uses our network for personal use with a surveillance "tech" company service such as Google Mail, Facebook, etc. is putting her own privacy at risk.

No, you're putting them at risk by MITMing their traffic. There's absolutely nothing that forces you to do that. If you don't have separation between the network where humans live, and where The Business lives, that's what's irresponsible.


... As someone who exists in 2021, I have a smartphone. Why would I want to do my stuff on someone else's machine?


You might need to re-read your employment agreement. Some of these policies are clearly stated, and you signed up for them when you were employed.


Don't know why you are getting downvoted and people are getting emotional.

I have family members who work in compliance. Everything is fair game for surveillance. I know of someone who got fired from JPMorgan for accidentally uploading his WhatsApp chat history via work email (this is how chat history backup used to work); it contained references to drugs.

You can choose not to work for companies like this (indeed I have always fully owned my machine at work) but you're just kidding yourself if you think bigco aren't monitoring everything you do.


And the standard startup contract says business hours are "9-5"

The poster's point is that what they say doesn't match reality, contract or otherwise


I doubt he read it the first time. :)


I assume you're talking only about employees using corporate devices on the corporate network. If the employee can connect a personal device to the corporate network the employee will be safe from the MITM.


"If the employee can connect a personal device to the corporate network..."

Why not use the cellular network?


It's slower, and you have to pay for cell data.

But anyway, my point is not whether or not you should use a personal device on a corporate network; my point is that if you do use a personal device on a corporate network, you will be secure from MITMs.


Why shouldn't you pay? If it's personal use, why should the employer subsidise it?

My point is that if you don't use a personal device on the corporate network paid for by your employer, and instead use the personal device on the cellular network you pay for, then you will be "secure from MITMs".

More than one way to be "secure from MITMs".


I'm not saying the employer should subsidize it. Some employers might. If your employer provides that perk, it might make sense to use it. Similar to how if a restaurant provides free wifi it might make sense to use it.

I think the real way to be secure from MITMs is to use a device that you control the root CAs of. If you control the root CAs, you'll be safe no matter what network you're on. If you don't control the root CAs, you'll be vulnerable no matter what network you're on (but some networks will carry a higher likelihood of an attack).


Basic TLS is sufficient to stop your employer from MITM'ing your personal email session, as long as you control what certificates your machine trusts.

Certificate pinning is what protects major sites (that use pinning) from an advanced attacker or a rogue government who is able to get a proper CA to issue fake certificates.
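For anyone unsure what pinning actually checks: a minimal client-side sketch (my own illustration, pinning the leaf certificate's SHA-256 fingerprint; real deployments often pin the SPKI hash instead, and the fingerprint value here is hypothetical).

```python
import hashlib
import socket
import ssl

# Hypothetical pinned fingerprint; in practice this ships inside the app.
PINNED_LEAF_SHA256 = "replace-with-expected-hex-fingerprint"

def check_pin(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()  # normal CA validation still runs first
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            leaf = tls.getpeercert(binary_form=True)  # DER-encoded leaf cert
            seen = hashlib.sha256(leaf).hexdigest()
            if seen != PINNED_LEAF_SHA256:
                # A corporate MITM proxy's certificate fails here even though
                # its CA was added to the OS trust store.
                raise ssl.SSLError(f"pin mismatch: {seen}")
```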


> Basic TLS is sufficient to stop your employer from MITM'ing your personal email session, as long as you control what certificates your machine trusts.

Which, on almost any employer-issued device on a large corporate network today, you won't.

Personal stuff goes on personal devices with personal connectivity and uses personal accounts with personal security. Work stuff goes on work devices with work connectivity and uses work accounts with work security. Contaminating either with the other is just a recipe for bad things happening, often for both the employer and the employee.


By contrast, I'm typing on a work computer right now. We deploy no special certificates to attempt to MITM traffic, nor will we ever.

I'm using a Chromebook, which allows me to run multiple users at the same time, each with their own profiles. Each user has their own encryption keys for their hard drive. We have no corporate network, no VPN, and instead rely on attestation for authorization.

I prefer to use this device for personal use because I know how safe it is.


Yep. Pinning doesn't protect you, using a personal device protects you.

You mention needing to use personal connectivity. I don't think that's necessary. HTTPS should protect you from malicious networks.


> HTTPS should protect you from malicious networks.

Yes, but on the kind of network we're talking about, you probably won't be able to make an outbound HTTPS connection at all if you're not going via the required security infrastructure with an appropriate corporate-issued cert.


You're checking your personal email on your work computer? Your employer can see that. One way would be through screen recording. But even without screen recording, your employer can install its own certificates. Chrome at least ignores certificate pinning if there are custom installed local certificates.

If you're on a personal device (e.g. your personal phone) on a work wifi, you're secure whether or not certificate pinning is used.

So I don't really see any situation in which certificate pinning will help you. The purpose of certificate pinning is to protect against malicious regular root CAs. It's not to protect against your employer or anyone else who can install custom root CAs on your machine, because they could also install malware that steals data directly from Chrome.

>Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor.

https://chromium.googlesource.com/chromium/src/+/refs/heads/...


No. Your employer can't MITM your personal email session if you don't trust their MITM proxy's CA.


If your employer controls your work computer, they can set it to trust their MITM CA.

Cert pinning means they can't do that unless they're also modifying your email client binaries.


It depends on your email client. If your email client is Chrome, then the pinning won't help you at all.

>Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor.

https://chromium.googlesource.com/chromium/src/+/refs/heads/...


The HTTP and XMPP traffic is encrypted using TLS. The proxies were used to decrypt, log and re-encrypt this traffic in real-time.


And the new certificate and DNS records are to make the proxy look legit to the Zoom client, which would otherwise not accept TLS connections. Especially if there are DNS records which specify which CA is used for the certificate.


> Especially if there are DNS records which specify which CA is used for the certificate.

If you're thinking of CAA, those records are not for anybody except the CAs. They're an indication to the CA "You may/ may not issue for these names" and explicitly never an instruction to clients about what's trustworthy.

It's unusual but completely sound to have CAA set to forbid all CAs, switch it to allow just one CA, get a certificate issued, then put it back to blocking them all again for weeks or months. I'm not recommending that procedure, but it's sound, and if any software can't handle that, the software is broken.

The idea here is that all the public CAs are trustworthy, but their procedures may not be a good match to your particular way of doing things. For example, if a CA does ACME http-01 proof-of-control (like Let's Encrypt) and you let customers run arbitrary stuff on port 80 on your machines, that's a bad combination; you should probably get your certificates from a CA which doesn't use ACME http-01 and restrict CAA accordingly.
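If you want to see what a domain's CAA policy looks like in practice, here is a quick sketch using the third-party dnspython package (my assumption, not something from the thread); again, clients don't act on these records, CAs do.

```python
# pip install dnspython
import dns.resolver

def caa_policy(domain: str) -> list:
    try:
        answer = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []  # no CAA published: any publicly trusted CA may issue
    # Each record reads roughly like: 0 issue "letsencrypt.org"
    return [rdata.to_text() for rdata in answer]

if __name__ == "__main__":
    print(caa_policy("example.com"))
```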


Indeed, my mistake. Then I don't understand why they need to modify DNS records.


Not entirely sure if I'm understanding this correctly (since it doesn't really make sense to me), but this is what they wrote near the end:

> Anything we did differently could influence the heap layout. For example, we noticed that adding network interception could have some effect on how network requests were allocated, changing the heap layout.


It blows my mind that there are people who manage to find exploit chains like these, amazing job!


The article goes into detail on how much trial-and-error effort goes into making such an exploit chain: approximately two months of work each for two people. Even for other people who have the required skills, making such a time investment, with no certainty of success or reward, is a big barrier. Perhaps the math works out differently for blackhats, as the payoff is larger and perhaps more certain if they do get to a working exploit.


This is generally done through the use of (often custom) analyzers. I would wager, though I have little empirical evidence, that most non-trivial zero-days in large software like this are not strictly manually discovered.


Isn't this a bit like saying most software these days isn't manually built, because they use compilers?


Not sure the point of this comparison. Using compilers to build software has been all but required for a long time, and exploit discovery can be done just by using the software in unexpected ways, or by using complex reverse engineering and analysis tools.


It's more like you run a fuzzer and hope it breaks something.


Not at all.


This presumably doesn't apply to the web app, which is the only way I've used Zoom.


Are there any known cases of secrets leaking from a Zoom meeting through hacking? Specifically from audio and video, not chat?


> This meant that by sending a ResponseKey message with an AES-encrypted <encoded> element of more than 1024 bytes, it was possible to overflow a heap buffer.

This is what I was looking for. The fundamental bug was an overflow of a fixed-size heap buffer, leading to heap corruption.

We gotta get off memory-unsafe languages.


> We gotta get off memory-unsafe languages.

You read this whole post and that's what you got? Just the fact that this includes a heap grooming step should be pretty telling that it's not very reliable and that it can easily be broken (it probably won't work if you try it after the next Win10 update).

I mean, yeah, sure, buffer overflows are bad, but this is an extremely sophisticated attack that relies on like a zillion moving pieces, of which "memory-unsafe languages" are basically a footnote. Props to the dedication and expertise of the security researchers.


Just because there's heap grooming involved doesn't mean it's unreliable. Exploits that use heap grooming can often be ~100% reliable.

Our PoC for SIGRed required lots of heap grooming, but it was extremely reliable. https://www.graplsecurity.com/post/anatomy-of-an-exploit-rce...

The overflow was hardly a footnote either, it's the primary bug being exploited here.


FWIW, and this is not a 'dis' at the researchers, I would also not say this is "extremely sophisticated". Most attacks these days involve chaining lots of bugs like this and using grooming techniques. It's extremely impressive work and I have great respect for people who can do that, but I would reserve "extremely sophisticated" for cases where novel techniques are used, which isn't really the case here.


Would the exploit have been possible without this, though? Possibly but it would have been that much harder to accomplish. I think it's a fair if overdone observation that C is a bad language to use if you want to minimize the incidence of RCEs.


Why does it matter how reliable it is? What matters is that it's possible. If it breaks with an update, the technique can be pretty easily adapted to work again.


That's going to take a while. But at least the Linux kernel is starting to integrate Rust. It's doubtful they'll rewrite the whole thing in Rust, but getting it in there is a start.

I know the Chromium team was considering switching to Rust too, but who knows if that's ever going to happen. IIRC Chromium has more LOC than the Linux kernel.


There is a PR (or whatever they call them) to add the rust toolchain into chromium’s build infrastructure. The link is in my submission history. Didn’t get any traction on here, but fingers crossed!


Yup. I wouldn't hate it if it were illegal to write new applications that processed untrusted input in memory-unsafe languages, at least in the not too distant future. The fact that the industry doesn't see this as an urgent need is just embarrassing.


That's the last thing I want to hear from the authoritarian industry who also writes user-hostile software, embraces DRM, and is deathly scared of users having control over their general-purpose computers.

Insecurity is freedom. (Don't believe me? How is jailbreaking and rooting accomplished?)


I do not believe it is remotely reasonable to say that our software should be deliberately insecure so that people have the ability to root their own devices. That problem can be solved with other means, without exposing all of our devices to anybody else in the world who can send the same payload.

If I can root my device through an exploit then I am not at the mercy of the company that made the device. But I am now at the mercy of every single criminal or oppressive state that wants to use that exploit to harm me. And given that there is no way to confidently determine which pieces of software do not expose this capability, this cannot be an informed decision made by consumers.


> That problem can be solved with other means

That's what they always say --- so how about solving that problem first, before thrusting ourselves head-first into advocating for full authoritarianism?

> But I am now at the mercy of every single criminal or oppressive state that wants to use that exploit to harm me.

Good. That means power is not centralised. You can defend yourself instead. Besides, do you really want to be "at the mercy of the company that made the device"? As we have seen multiple times, they do not really act in your interest.

There's also plenty of dystopian sci-fi to show us what attempts at making a "perfect" society in any way will turn out. This applies to making software "perfectly secure" too.


> That's what they always say --- so how about solving that problem first, before thrusting ourselves head-first into advocating for full authoritarianism?

Expecting more secure applications is "advocating for full authoritarianism"? If anything, security vulnerabilities place individuals at far greater risk to authoritarianism since it exposes them to the people who have guns and can throw them in prison.

And software written in memory-safe languages is very very far from "perfectly secure". It just closes a very common class of vulnerability.

If you really want, you can use FLOSS for everything. Your use case of "I really want the ability to change any piece of code running on my device" is supported. Not well, since few people actually want this, but it is supported.


Imagine thinking we should, literally, police language.


Imagine thinking we should, literally, police engineering techniques.

If you build a bridge then you are expected to use techniques and systems that provide at least some degree of planned safety for the users of that bridge. It is virtually impossible to write a C++ program of any meaningful complexity that processes untrusted data in an unsandboxed environment that does not expose the owner of the device running that program to harm. To say otherwise is to ignore decades of observation.

Every single person who starts writing a new application in a memory-unsafe language that will deal with untrusted inputs is declaring up front that they are willing to tolerate the inevitable vulnerabilities and exploits caused by that decision.

I think it is very important that our industry develops a path to getting all such programs off of unsafe languages, since it is very clear that techniques like testing, fuzzing, and audits are not sufficient to actually produce safe programs.


I initially disagreed with your viewpoint and after reading your response you've actually changed my mind.

My only real gripe is I would prefer it came from the IEEE or something and not from some government agency; or worse, Oracle or someone trying to get everyone to use Java/their stuff.


I personally don't think that the IEEE would have any capability of really shifting the industry. It isn't like IEEE guidance for privacy preserving programs really moved the needle. You needed legislation like GDPR to do that (and even then it remains incomplete). Ultimately, adopting memory-safe languages for systems programming is going to be very expensive. You need more than just recommendations to make that happen.

I do think there is risk with legislation binding developers too much or forcing them into suboptimal approaches if things aren't written well. One could imagine legislation that does not permit the use of Rust because of the presence of `unsafe`, but that would be a terrible misstep.


You make a very sound argument about the engineering perspective.

Unfortunately, many of the folks writing such software aren't (formally-trained) engineers. Would you suggest that they receive training which allows them to think of software as infrastructure? I'm genuinely curious, not being sarcastic.


> Would you suggest that they receive training which allows them to think of software as infrastructure?

I don't know. I don't know enough about detailed practices in fields like civil engineering to have any idea what would translate. I'm not convinced that "teach every software engineer to use model checking for everything they ever write" is going to be a winning approach. This is why memory safe languages are so valuable. You don't need to teach engineers new techniques. You just outright eliminate an entire class of vulnerability that has persisted despite efforts to eliminate it with other means.


> Every single person who starts writing a new application in a memory-unsafe language that will deal with untrusted inputs is declaring up front that they are willing to tolerate the inevitable vulnerabilities and exploits caused by that decision.

Meanwhile we banished Java and Flash from browsers, with JavaScript still leading every Pwn2Own contest, because these "memory safe" languages are ultimately still implemented by humans paid to prioritize new features instead of security. I still haven't seen a website that absolutely needed multithreading; it certainly didn't break anything of note when it had to be disabled as a Spectre mitigation.


I worked on V8 for almost 7 years. It being written in C++ is a cause of a large number of issues. An even larger number of issues is caused by its absolutely massive complexity and the low-level nature of what it does, particularly the object model and the JIT compiler's complex optimizations. Low-level is really dangerous and error prone.

I think every VM should be rewritten in a memory-safe, GC'd language. While there are bugs at the meta-level (i.e. the compiler IR and object representation), making the runtime code itself memory-safe should be table stakes for even talking about a trustworthy implementation.


Memory safety is not a security panacea and you should run far away from anybody who says that it is. What I am arguing is that it is table stakes.

Browsers also have a uniquely difficult security challenge in that they, by design, execute untrusted code and compete based on the performance of their JS engines.


I don't think comparing software to buildings is always apt.

If a building collapses, it's likely that people will die.

The consequences of failing software can be mere annoyances depending on the context of its use.

Obviously certain industries that use software have much more dire consequences of failure though (eg. large machinery, transport, health care).

I think one could come up with all sorts of analogies that fit or don't fit, such as, applying a similar argument to door locks. Why should it be legal to use ordinary keyed locks on houses when they are so easy to circumvent with basic lockpicks?


Programs written in memory-unsafe languages are riddled with RCE vulns. This is true even for software written by companies that hire the very best security engineers in the world. The consequences of such software that processes untrusted input is more than mere annoyance. This sort of behavior is the root of RATs operated by both criminals and oppressive states. It does not matter if your program is intended for something as seemingly non-critical as text messaging - it will still be used to cause terrible harm.

I do not think that the lock is a reasonable comparison here, because exploitation of software scales so so so much more effectively than picking locks. One exploit easily scales to millions of devices. So the harm caused by vulnerable software has a much higher ceiling than the harm caused by a weak lock.


The point of the lock analogy is to point out the absurdity of analogies here.


Then drop the analogy.

If I install software that was written in C++ on a device I own and it processes untrusted content, then I put myself at fairly major risk of all sorts of harm. There are only two resolutions for this problem:

1. No more memory-unsafe languages on security boundaries.

2. Extremely effective sandboxing and process isolation.

#2 has proven very hard. But we know how to do #1. We just need to spend the effort.


Part of the problem is that the actual impact of vulnerabilities in a program is often divorced from its actual purpose. A simple TODO list that allows RCE is one example. It also has a wide variety of impact based on the user: is it just installed on a random personal computer? Or is it on a hospital server?

I don't know that it's particularly possible for a developer to truly understand all the possible impacts of an error in their program.

I'm not sure what the best way to handle that uncertainty is. Assuming all failures are critical would do the job, but certainly isn't free. However, doing something like what is suggested here, somehow requiring safer languages, might be a decent middle ground. The cost of using languages with more built-in safety features is often not very high; in fact, such languages often claim that those features make them cheaper to use.


Your point might make sense for web facing software because programs where lives are actually at stake are written in Ada or a subset of C with rigorous static analysis and engineering processes.

Now, it can't be denied that C and C++ are weak from a security perspective and that they should be avoided for network software as much as possible. But the problem with your take is the subtle implication that Rust is "safe" (not just memory-safe), when in fact there is no empirical evidence or track record of Rust being successfully used in anything remotely mission-critical. I mention this because you brought up the bridge example, when it is also possible that, due to language complexity, the new "bridge" built in Rust would turn out to be even more fragile (but still memory-safe).

Just the other day, there was a Rust GUI library posted here. The library uses a convoluted event handling mechanism of passing enum values as messages and additional book-keeping burden instead of straightforward closures just so that the compiler can prove the code is safe (just memory-safe, mind you). It is possible that because of such contortions required to pass the compiler, Rust could fare worse in the "general correctness" area[1]. It is just that we don't know yet. Even the particular safety issue that is mentioned in the GP comment could be solved by having built-in slice types and mandatory bounds checking (like Zig/Go/D). As usual, C/C++ have terrible defaults.

Again, I agree that there is a need for secure alternatives to C and C++. But the contention is whether Rust is that. Even Rust is actually far from optimal in the "safe systems language" space. There might be languages in the future that are as fast as Rust but more ergonomic. Microsoft Research, for example, is creating a research language named Verona[2] that aims to be memory-safe and concurrency-safe. There are also other attempts like Vale[3] that aim at this space. It is premature to think that Rust is the final evolutionary step in the landscape of systems language and suggest for everything to be moved to Rust ASAP. It often appears like reckless fanaticism.

[1]: There are, in fact, a few pieces of "anecdata" about Rust being less reliable: https://news.ycombinator.com/item?id=24027296 https://dev.to/yujiri8/it-seems-like-rust-software-us-bad-hk...

[2]: https://www.microsoft.com/en-us/research/project/project-ver...

[3]: https://vale.dev/


> But the problem with your take is the subtle implication that Rust is "safe" (not just memory-safe) when in fact there is no empirical evidence or track record of Rust being successfully used in anything remotely mission-critical.

An application built from the ground up in a language like Rust is going to have fewer vulnerabilities than the same application built in C++. I say this as a person who loves C++ and is intimately familiar with the state of the art of securing C++ applications. I am not proposing an immediate rewrite of everything, though I do personally believe that the "well, a rewrite will just introduce more vulns" concern is overblown. I expect papers in ICSE in the not too distant future to be able to validate one of our views.

Rust is not flawless, not even close. There are other alternatives and there can even be new languages in the future, but it has the most mindshare and it is an alternative today. For many years, people would simply say that there was nothing that could compete with C and C++ for systems programming. Rust very nearly handles all of the common use cases. But... I didn't even really mention Rust in my post so I think it is especially difficult to call me an evangelist for it.

Like it or not, ideas in research languages take ages to filter into real world ecosystems. My PhD is in the intersection of static analysis and security. I love this research. But the honest truth is that waiting for MSR to produce the path forward is not a winning strategy. Languages need ecosystems and I think it is more likely that the future will come from industry than directly from academia.


> the subtle implication that Rust is "safe" (not just memory-safe)

It's not a "subtle implication" it's a fact that Rust is also data race free and thus concurrency safe in the same sense you're attributing to Verona, although for very different reasons - it can't introduce data races. Verona, unlike Rust, is not in fact a production system, it's an academic toy for pondering new ways to approach concurrency. Perhaps ten years from now its findings will influence future Rust development.

It's certainly interesting that we're still at the place where people are going, "This is only better if you can't afford GC", when even Java is markedly less safe than Rust since it doesn't prevent data races. (Yes there is ConcurrentModificationException for this, no Java doesn't promise to raise this Exception, and if it happens that might already be too late).


Did they edit their comment? I saw nothing about Rust in it


No, that is why I wrote "subtle implication" there. Unfortunately, on online forums, the term "memory-safety" (which is a well-defined term in computer science) is nowadays almost always used in contexts of Rust evangelism. I would be very surprised if the GP's actual intent was that Zoom should have been written in a garbage-collected language and not Rust. The wider context of this discussion is whether memory-unsafe languages (i.e., C/C++) must be made illegal, with the implicit suggestion that Rust must be pushed as the alternative. If C/C++ is made illegal (because of "memory-unsafety"), then guess what would be the legal alternative if you can't afford GC overhead. Moreover, for people not using C/C++, the question of memory-safety/unsafety doesn't even arise in the first place.


> But the problem with your take is the subtle implication that Rust is "safe" (not just memory-safe) when in fact there is no empirical evidence or track record of Rust being successfully used in anything remotely mission-critical.

If you don't need the performance characteristics or OS-level interaction offered by systems languages, then please use an interpreted language. Please please please please please. But there aren't a lot of new projects started in C or C++ that fit this, since people have known for decades that using something else will be better if you don't need the specific features offered by systems languages.

> The wider context of this discussion at all is that whether memory-unsafe languages (ie., C/C++) must be made illegal with the implicit suggestion that Rust must be pushed as the alternative.

I never said this, and it would be wildly ridiculous for me to suggest this. I mention Rust elsewhere to describe poorly written legislation, not to say that legislation must demand that everybody bow down at the feet of the Rust community and donate their first-born child to the borrow checker.

You are reading way too much into my post.


Fair enough. I felt compelled to post in this thread because I've seen the "ban unsafe languages" sentiment expressed several times here and on Reddit before (especially on r/rust I remember reading some comments that had a hostile tone written by people who were serious about it). Your initial comment in this thread resembled one of those.

I think you've misunderstood why I mentioned Verona and Vale though. It is to challenge the notion that there could not be any other language than Rust that could be more ergonomic but with slightly different trade-offs. Moreover, I agree with your point regarding the ecosystem.


You sound paranoid.


That is a neat attempt at making it appear like I am somehow deluded and am imagining Rust evangelism. The person I replied to made a comment down thread that literally states that Rust must be given a free pass despite `unsafe` blocks on the face of such legislation against unsafe languages. Sounds completely illogical to me.

https://news.ycombinator.com/item?id=28343526


You can disable security features in java as well. Elsewhere, you mention GCed languages as an alternative. Would it be appropriate for me to assume that you are a java evangelist and then criticize you for not considering the harm that can be caused by turning off stack inspection? That's what you are doing to me.

The fact that the default is safe matters. It matters a lot. Heck, if you want to use C++ with a sound static analysis tool then I'd support doing that and I'd hope that legislation would support that too - but I think you'd be working 10x as hard as really necessary.


Yeah, I'm literally saying you are deluded and imagining things. The post you replied to mentions multiple GC and non-GC languages. That you also have a bad opinion about unsafe isn't really important.


Throwing ad-hominems at people criticising your language is not a good long term strategy, though it might appear to work for a while.

Not only was there no mention of "multiple GC and non-GC languages" in the comment I replied to or in the parent comments (except for a single mention of C++), I also don't get why I have a "bad opinion about unsafe" (or where I claimed it is important). Such a friendly community. Now I see why people don't engage with Rust evangelists. Lesson learned. Anyway, have a nice day!


We have quality and standards regulations for all sorts of things, thankfully. That software doesn't makes it the odd one out.


This is why I only run Zoom in Firejail.


I just decline Zoom meetings while politely saying “our cyber security division does not allow us to use Zoom.” Then send an alternative invitation. So far it seems to work just fine.


Doesn't work so well when your security team are the ones mandating you use zoom.


What do you recommend to use instead?


Jitsi Meet.


FWIW it's possible to run Zoom in a web browser, but they make it annoying. https://techcrunch.com/2020/03/20/psa-yes-you-can-join-a-zoo...


Yep. I run it in a web browser in a separate user account made for that purpose.


I run the Zoom snap on Ubuntu.


Is that more secure? Snaps seem to be shit for performance, so I avoid them by default, but maybe I should be favouring them when I have security concerns.


Not really.

https://github.com/ogra1/zoom-snap/blob/065831f1e83c1230810a...

It has the "home" permissions which means it can write "sudo pwn" into your ~/.bashrc, which will of course pwn you.


By default (without --classic on install) they run in a chroot. Makes saving files sent to you a hassle, as it can't write to your downloads directory.


Although they don’t make it easy to find the link, you can use Zoom in a browser which is the best way of limiting the damage it can cause if you have to use it in the first place.


Anyone know what logging/printing library exploit.py is using in that first embedded video?


The colored and animated logging parts are from pwntools (https://docs.pwntools.com/en/stable/).
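For reference, a tiny sketch of that logging style (my own, assuming pwntools is installed via pip install pwntools); the progress line animates while the [*]/[+] lines are the info/success output seen in the video.

```python
import time
from pwn import *  # canonical pwntools import; provides the `log` object

with log.progress("Heap groom") as p:   # animated spinner/progress line
    for step in ("spray", "punch holes", "place target"):
        p.status(step)                  # update the in-place status text
        time.sleep(0.3)
    p.success("layout ready")

log.info("sending overflow")            # [*]-style line
log.success("got code execution")       # [+]-style line
```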


Great, thank you!


No one should be installing native apps for this now that we have WebRTC.


To be frank, if Zoom were a web-only app (or maybe web plus web-in-an-Electron-shell, like e.g. Slack and WhatsApp), there'd be a vocal HN crowd complaining that there was no proper native app.


Last I checked you didn't have to install anything. I'm not sure about more advanced usage like screen sharing or how many timing options there are, but for generic "see me, see you" it works fine in the browser.


It does have a web app, but they make it incredibly hard to find. I’m not surprised that some don’t even know about it


Indeed, IIRC, you need to click “download”, reject the download, and then an “Open in your browser” dialog appears.


IME, my video always shows as either blank white, or psychedelic light show.

Android app works.


There was a setting they had, so the browser option is shown right away (well, after the xdg-open prompt).


I can confirm that the in browser version does not allow for remote desktop. I use zoom in a support role because webex is a laggy dumpster fire.


In my personal opinion, open source native app > web app > closed source native app


Define "this". The web app has less features[0] and you might be forced by your employer to use a feature that doesn't exist in the web version.

[0] https://support.zoom.us/hc/en-us/articles/360027397692-Deskt...


All of the apps that use WebRTC seem to have worse quality and latency than Zoom. Including the semi-hidden web version of Zoom.

This could just be a coincidence, but I suspect it's not. For all of its faults, Zoom calls are just much better than all of the other mainstream solutions I've tried, particularly with large groups.


WebRTC has had plenty of implementation issues. https://googleprojectzero.blogspot.com/2020/08/exploiting-an...


There's a difference between native Android bugs and forgoing the protection provided by the browser on desktop by relying on native apps instead.



WebRTC still requires you to implement your own signalling layer, which is where most of these problems occur. Using XMPP for signalling in combination with WebRTC is very common.


Unfortunately, Zoom deliberately cripples their web app to the point of being unusable. If your employer uses Zoom, there's no way to avoid the native app.


That’s why I keep it quarantined to my work computer. If friends/family want to use it, I use it on there.


lol we use Zoom for most of our meetings and I always used the web app, without any issues.


Sitting in a meeting and saying a few words is like 10% of what Zoom can do - there's webinars, breakout rooms, Q&A, polls, moderation, and a lot of other smaller features which are incomplete or unavailable on the web client.


There's a YC company that tries to make starting and scaling WebRTC super easy, which is far from trivial for a variety of clients/browsers or with 5+ participants simultaneously: https://www.daily.co


One company has piss-poor security, but there have been hundreds of native apps doing teleconferencing before.

Nothing to do with native or not; and pushing everything to a web browser makes a really complicated bit of software with weird quirks and potential hidden bugs. Yes, it's more tested, but when your code paths are literally infinite, "more eyes" isn't going to help.


Zoom is much faster, especially on older PCs, than Teams or (especially) Google Meet, because it’s native

pick your poison


So browser sandboxing? Is that fundamentally different from native sandboxes like snap, flatpak, et al?


Browser sandboxing is more battle-tested, probably a lot more researched, and has had more fuzzing performed on it.

I know that at least X11 is not sandboxed with snap/flatpak/etc., and there is no sandbox for macOS/Windows Zoom client, so using web client is infinitely more isolated.


Browser vendors push sandboxing technology and everything else kinda follows behind by years. It's unlikely you'll find a more powerful sandboxing approach than what's in Chrome.


>> for this

What exactly is "for this"?


I’d assume teleconferencing. And tbh, I’m not sure I disagree. WebRTC has some issues and certainly isn’t the greatest, but it feels like every teleconferencing solution goes through basically the same problems over and over again. I know some swear up and down that Zoom is better than any WebRTC solution and I am going to have to hard disagree, it has a larger featureset than say, Google Meet, but I don’t know anyone in my current org that isn’t disappointed in Zoom’s reliability or security issues. In my case the security issues I’ve personally heard of are less serious (mostly random people somehow getting into meetings — never witnessed that with Meet or anything else for that matter) but to be honest, I have zero trust in Zoom. If I could run it with less privileges than a browser tab I would.

I’d really prefer a world where people don’t have to deeply distrust software, but still adhere to principle of least privilege where it is reasonable to do so. I feel like if I have to install software natively, it better be software with a decent track record from a trustworthy team. However we’re really at a worst of both worlds situation with Zoom. I don’t trust it at all, and it gets a ton of privileges that are only checked in the sense that there might be some scrutiny from researchers.

Not saying I never had issues with the WebRTC solutions, but honestly, at worst I just found myself refreshing the tab and going on my way. Meanwhile I’ve been warned against even trying Zoom for Linux as apparently it makes the old Skype for Linux look like a solid product.



