E2EE on the web: is the web that bad? (emilymstark.com)
99 points by PaulHoule 9 months ago | 65 comments



Desktop applications are uniquely auditable by security professionals in a way that web sites inherently aren't. It's very, very hard to hide "send this unencrypted message to evilserver.com" in a desktop app in the same way it's possible to in a web app (because the web app could serve the evilserver.com code only to the targeted subject under surveillance and send only good code to the rest of the world). And mobile apps have the explicit review pipeline you mention. So web apps need some sort of "frozen, trustable, third-party reviewed" version to be able to match the security of desktop apps.


Code delivery happens in desktop apps too: when you download the binary from evilsite.com, or when you receive an auto-update, they can give you a different binary than the one the security professionals reviewed. That's assuming the professionals even reviewed the binary, and not just the source evilsite.com claimed it was built from.

It would also be difficult for said professionals to detect IP-(range)-specific backdoors (with as much obfuscation as you like; only send on Tuesdays; encrypted using a string constant elsewhere in the binary), even in App Store-delivered binaries, which are harder to vary per downloader.

Some web apps - [Cryptee](https://crypt.ee/threat-model) is a notable example - address this with a "trust on first use" approach that makes any change to the (web) code require approval. But that's in the same realm as a desktop app, where you've trusted it on the first download and trust it to have actually followed through on that promise.


That's not really fair in this day and age of Electron apps. If the Electron app downloads JS code to run in its embedded browser / NodeJS runtime, then it's subject to similar attacks. Hypothetically, so too could any traditional desktop binary download and execute code. An audit will only reveal that such capability exists, not whether or not it is being abused.

If that's genuinely part of your threat model, then you simply must run on-premises code only, backed up by robust network controls that allowlist very specific parts of the Internet.


Indeed, Microsoft Teams once contained a bug where specially formed HTML messages could trigger arbitrary code execution when received.

Any platform where the security model is two steps:

1. Check if the message contains any code. If so, reject it.

2. Scan the message for code and eval() it.

is extremely susceptible to code execution bugs.
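
To make the anti-pattern concrete, here's a hypothetical TypeScript sketch (illustrative only, not Teams' actual code): a naive blocklist check for step 1, followed by a render step that re-interprets the message as markup anyway.

    function renderMessage(message: string, target: HTMLElement): void {
      // Step 1: "check if the message contains any code" (a naive blocklist).
      if (/<script/i.test(message)) {
        throw new Error("code not allowed");
      }
      // Step 2: re-interpret the message as markup anyway.
      // <img src=x onerror=stealKeys()> sails straight past the filter above.
      target.innerHTML = message;
    }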


Correct. I do not think applications with such a vulnerability should ever be used or marketed as "end-to-end encrypted", since that implies a promise that a malicious or suborned vendor could not compromise your communications, which, in the case of remotely loaded code, is misleading. I think this is table stakes for any e2ee communication app (e.g. Signal can meet it easily, since Apple and Google and many third parties audit their code), but if it conflicts with your current app design, you should be realistic about whether a true e2ee-required threat model can be accommodated by your app.


There’s more tooling for non-professionals to keep a leash on desktop app network usage, too, like LuLu[0] and OpenSnitch[1]. There are ways around these tools, as with anything else, but it raises the bar a bit, since the user is unlikely to let random programs connect to servers they have no good reason to connect to, once made aware that the attempts are being made.

It’s not as reasonable for the layman to do this on the web, since it’s par for the course for a single visit to a big commercial website to kick off connections to dozens of unintelligible domains, and to sometimes break if these connections aren’t allowed. OS-level tools like LuLu aren’t of much help here, because significant limitations on the servers a browser can connect to essentially break the browser, which leaves you with extensions like NoScript and uMatrix, which for the reasons explained above aren’t as straightforward to operate.

[0]: https://objective-see.org/products/lulu.html [1]: https://github.com/evilsocket/opensnitch


You just described Manifest v3 Chrome extensions hosted on the Chrome Web Store. No remotely loaded code, frozen assets, third party review.


Correct. I know there are a lot of concerns about MV3 and I don't want to downplay them, but from a security perspective the team absolutely made the right and necessary calls to prevent the (currently widespread) abuse of browser extensions for tracking and ad injection, not to mention malware distribution.

I would trust an audited, Chrome-or-Firefox-store-distributed browser extension to implement end-to-end encryption. I don't think that extends to UI elements injected into the page itself, though: the UI elements where you're typing your messages have to be separate from the rest of the page, to prevent malicious web apps from overwriting them and capturing your input before it ever reaches the extension.


For what it's worth, the threat model for Chrome extensions is not the same as that of arbitrary websites. You can probably reasonably accomplish some things in extensions that are untenable on web pages.


> Desktop applications are uniquely auditable by security professionals in a way that web sites inherently aren't.

Except for the desktop applications that simply pop up a web view to load the web site from the internet.

(Looks at Discord, Slack, Zoom)

Not all Electron apps are this way; for example, LM Studio stores its interface locally and therefore doesn't rely on loading it from a remote server at application start time. But a surprising number of them are.


There is "Signed Pages" by the developer of EteSync. It is a browser extension, that checks webapps based on signatures in the html file. The addon then warns the user if the signature is not correct or - if I remember correctly - the source changed. This allows you to be sure what webapp code was delivered. But it seems like it did not really get used outside of his own projects. https://github.com/tasn/webext-signed-pages


The page you linked even says "While this doesn't protect you from a malicious developer". The whole point of e2ee is that it needs to be able to protect you from a malicious developer. Native apps do this by having local, auditable code. Web apps don't.

That said, this project could be extended with something like a public certificate transparency log showing which versions of the code have been signed and making the code associated with each signed version available for third-party inspection, which would help plug this loophole. I haven't seen any proposals for how to do that with web standards yet, but I expect that some people have thought of a few of them. While it would be very different from the web we have today (no dynamic server-side templates, only APIs!), I think it would be a welcome innovation for web security.


> Often, security experts argue that the web isn’t suited for E2EE applications because of the vast attack surface for code injection – abusable by the developer or by an external attacker. For many web applications, a web browser receives and runs code from a zillion servers, retrieved over a zillion TLS connections.

This feels like the wrong argument. I don't think this has anything to do with the suitability of end-to-end encryption. It is easily worked around with e.g. subresource integrity, or rolling your own signing scheme.
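
For instance, a minimal sketch of the SRI route (the URL and hash below are placeholders; the hash must be precomputed over the exact file being pinned):

    // Pin a dependency with Subresource Integrity: the browser refuses to
    // execute the script if its hash differs from the declared value.
    const s = document.createElement("script");
    s.src = "https://cdn.example.com/vendor.js"; // hypothetical URL
    s.integrity = "sha384-BASE64_HASH_OF_EXPECTED_FILE"; // placeholder
    s.crossOrigin = "anonymous"; // required for cross-origin SRI checks
    document.head.appendChild(s);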

The real problem with end-to-end security on the web is that you don't have a trusted base. You have to bootstrap your application from some sort of trusted base. On a website, you are redownloading the website every time.

The entire point of end-to-end encryption is that the service provider should not be able to intercept messages. The service provider is the attacker. This is impossible to prevent if you redownload the app every time you open the page†. You have to build trust from some starting point. If you have some trusted bootstrap code you can build from there with signatures, but you can't build trust out of thin air running code directly supplied by the party you are trying to protect against.

The problem isn't the zillion TLS servers. The problem is the first TLS server, which in the e2ee threat model we have to assume is evil.

† I guess service workers can extend that to once every 24 hours. Still not super compelling.


Subresource integrity hash checks are supposed to let you pin a particular version of a webpage / webapp, but the W3C managed to not let SRI work on bookmarks. If you could bookmark a specific version of a url with SRI, that would make so many problems go away.

You can have a personal startpage saved on your device's local storage, with an SRI link to a webapp, but that takes a bit of fiddling.

E2EE: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


> Subresource integrity hash checks are supposed to let you pin a particular version of a webpage / webapp

I don't think that is true. It's called subresource integrity, not resource integrity. I don't think pinning a specific top-level resource was ever part of the goal.

> but the W3C managed to not let SRI work on bookmarks. If you could bookmark a specific version of a url with SRI, that would make so many problems go away.

I don't really think this makes sense in the context of how web browsers currently work. Are you proposing that users just unilaterally pin versions of websites?


The idea is that users could bookmark reviewed and vetted versions of websites / apps.

It won't work "unilaterally" for obvious reasons but there are lots of p2p, e2e, security and crypto type apps where creators would love to be able to reduce the degree to which infra like CDNs and hosting need to be trusted.

The "page level SRI" mechanism wouldn't even need to work that well (eg limited support for web features). If you could get a trusted "bootstrapper" to work with no software installation then you could do the rest of the trust chain in js "userland".


> It won't work "unilaterally" for obvious reasons but there are lots of p2p, e2e, security and crypto type apps where creators would love to be able to reduce the degree to which infra like CDNs and hosting need to be trusted.

If we are moving out of the http/web space into some sort of distributed protocol, just use magnet links, problem solved. Or some equivalent hash-based content-addressable scheme. (How applicable that is depends on what space we are talking about.)


I agree but now you are asking your users to either install an extension or download something.

What many people want is the convenience of http/web with the immutability of content addressing, and the frustration of the OP at the start of our thread is that SRI feels so close to being able to provide that but stops short.

This has been a really interesting discussion for me because I think you understand the use cases and alternatives but it seems like you don't agree that hash pinned / immutable web sites natively in the browser would enable many interesting use cases.


I think I don't like it if it's implemented as metadata stored separately from the URL. As a separate URL scheme, where the scheme includes both the hash and the document location, I think it would be cool. Really, that essentially comes down to supporting magnet links with the webseed (ws) parameter in the browser. Which would be really cool.


Actually, a browser COULD implement subresource integrity for bookmarked URLs - it would just require another field in the bookmark metadata. They could arrange for 'right-click, save as bookmark' to pull the SRI hash from the link and auto-copy it into the bookmark. #featurerequest


[pinning a specific top level resource] - You can include an SRI link on one of your pages that points to the top level index.html page on someone else's domain, and the browser will verify the pinned hash of the index.html file. Try it and see. If their index page also uses SRI to pin its own resources, the entire site is pinned.

[just unilateraly pin versions of websites?] - If the other website is advertising itself as 'not supposed to change', then yes, this is a way of confirming that it has not changed.
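
One way to approximate this today, as a hedged sketch (and only one interpretation of the above): fetch() accepts an integrity option, and the promise rejects if the body doesn't match the pinned hash. Values below are placeholders.

    // Fetch a top-level document with a pinned hash.
    const res = await fetch("https://example.org/index.html", {
      integrity: "sha384-BASE64_HASH_OF_EXPECTED_FILE", // placeholder
    });
    const html = await res.text(); // never reached on a hash mismatch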


Perhaps bookmarklets could make that possible? One could load the resources using SRI hashes hardcoded directly in the JS contained in the bookmark itself.

Or even simpler, use the data: URI for the initial page as a bookmark.
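
A sketch of the data: URI variant (the URL is hypothetical and the loader body is elided): the entire trusted bootstrap lives inside the bookmark itself, so there is no server that can tamper with it.

    // Build a bookmarkable data: URL embedding a hash-checking loader.
    const loader = `<script>
      fetch("https://app.example/bundle.js")
        .then((r) => r.arrayBuffer())
        .then(/* hash-check against a pinned digest, then execute */);
    </script>`;
    const bookmark = "data:text/html," + encodeURIComponent(loader);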


The data: url to bootstrap trust is a really cool idea. I like that.

Of course then you are starting from a null origin, which makes certain traditional web architecture things hard. Maybe that doesn't matter if you design for it from the get-go. Too bad <iframe> doesn't support integrity. I guess you also sacrifice features that require a secure origin with this method.


This is where IPFS can shine, as (if you don't use the very-optional IPNS) you are browsing directly to a fixed hash that your browser could (it doesn't, but this is trivially fixable) verify; it thereby exists in a space between pre-downloaded and web-requested software (with different tradeoffs, some positive and some negative; but like, all three of these options have negatives).


Yes, I agree; this is one of the most interesting aspects of content-addressable networks like IPFS.


> It is easily worked around with e.g. subresource integrity, or rolling your own signing scheme.

Or, you know, not loading resources from third parties in your secure web application.


Yes, https://news.ycombinator.com/item?id=39315585

" Why Bloat Is Still Software’s Biggest Vulnerability > A 2024 plea for lean software "

The problem involves so many dependencies that it is impossible for an individual and maybe even megacorporations to fully audit the code base and prove the software behaves as intended. Everything these days is best effort.

We _need_ a native interface that portable software can target, to at LEAST get to the level where 'If you trust the OS, then the rest of the software is known to behave as expected' is a statement that can actually be made, giving a plausibly secure environment for End to End Encryption on anything.

This __has__ to ship in a DEFAULT install of Windows AND OSX AND mobile computing devices (so iOS and Android too). It absolutely has to be a baseline target, otherwise we'll never get past shipping the browser, again, out of band, as yet another thing that must have its entire monstrosity of libraries updated for every security patch.


> The problem involves so many dependencies that it is impossible for an individual and maybe even megacorporations to fully audit the code base and prove the software behaves as intended. Everything these days is best effort.

Everything has always been best effort; there was no point in time that formally verified software was mainstream. Even attempts at that have turned out broken later on (e.g. KRACK attack on wpa2) so it's not a panacea.

> We _need_ a native interface that portable software can target, to at LEAST get to the level where 'If you trust the OS, then the rest of the software is known to behave as expected' is a statement that can actually be made, giving a plausibly secure environment for End to End Encryption on anything.

This doesn't really make sense unless this portable interface contains the entire App. Like how do you imagine such a system would work? What would such an interface do or contain that would bring this dream even remotely close to reality?


> there was no point in time that formally verified software was mainstream. Even attempts at that have turned out broken later on (e.g. KRACK attack on wpa2)

In what ways was wpa2 an attempt at formally verified software? The KRACK attack exploited an issue in the standard itself. Were notable implementations of it formally verified? Was the spec itself verified in some flawed ways? I read the Wikipedia page and couldn't find anything relevant.


See section 6.4 of https://papers.mathyvanhoef.com/ccs2017.pdf

Essentially the spec was formally verified, but it turned out that the formal definition of "secure" they used wasn't sufficient. Formal verification only works if you properly define all the security-relevant properties that need to be proven, and the process of defining them can itself have errors.


It shouldn't even be as hard as it appears. If the business logic is designed as a generic library made to be portable across platforms, then you can call into it from a native application built using whatever the native UI toolkits and conventions for that platform are.

Libtorrent is a shining example of this architecture. It runs everywhere, even on mobile platforms. Rust becoming more popular makes this even more accessible than it used to be. There are even libraries like PyO3 to make writing bindings to other languages easier.
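
The web-flavored version of the same pattern (my assumption, not necessarily the setup described above) would be a core compiled to WebAssembly behind a thin TypeScript shell; "core.wasm" and its exports are hypothetical.

    // Load the shared business-logic core and hand its exports to the UI layer.
    async function loadCore() {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("core.wasm"),
        {},
      );
      // The platform-specific shell only does UI; the logic stays in the core.
      return instance.exports as unknown as {
        add: (a: number, b: number) => number;
      };
    }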


Isn't the main issue with e2ee on the web the fact that you're battling on three fronts?

1. You need to securely get the code and resources to the browser. A state level attacker can make this very hard.

2. You need to securely run the code on the client which may or may not include code from n-third parties.

3. You need to securely handle data and logic from the client which may or may not involve n-third parties.

Phew. That's a heck of a challenge.


> You need to securely get the code and resources to the browser. A state level attacker can make this very hard.

Even worse, you need to securely get code to the browser without trusting yourself. The E2EE model assumes that you (the service provider/app maker) will turn evil at some future point.

Code distribution is hard enough when you can trust yourself. It's basically impossible if the threat you are protecting against is your future self.


An amateur attacker can make getting cryptographic code securely to a browser very hard. :)


4. You need to not trust the cert authorities installed in a system.


If we do care about the delta in security model between the web and other platforms, then we could build some kind of code bundling and signing mechanism for web applications, perhaps with some kind of transparency layer on top to make the code publicly auditable and make it harder to target specific users with malicious code. A bundling/signing/transparency solution for the web could probably be built out of some of a collection of mechanisms that already exist or have at least been explored. Related ideas include Subresource Integrity, Isolated Web Apps, Signed Exchanges and Web Packaging, Meta’s Code Verify extension, and source code and supply chain transparency proposals.

Incidentally, I've actually just recently developed a solution to this exact problem: https://www.websign.app.

WebSign started a while back as an internal framework used by the Cyph E2EE messenger (https://www.cyph.com), and @eganist and I gave a talk that covered part of the TOFU architecture at Black Hat and DEF CON. Now there's a static web hosting service built around it for others to use, which takes care of bundling and code signing during deployment.

If anyone here has a use case for it, we're looking for pilot customers now. Just shoot me an email at ryan@cyph.com.


I thought it was vaguely interesting until the part where you write that a blockchain-based DAO "will enhance the scalability and transparency" of your infrastructure...


Appreciate the feedback on the messaging. I can see how mentioning that risks painting the whole thing with a brush that won't appeal to everyone.

To be clear, what you quoted is an idea that has been requested by some potential customers — essentially decentralized and distributed code review/signing. It's not a feature of the current product, and likely won't be until someone pays for it, although I do think it would be cool.

Either way, it's not something that would detract from the fundamental value proposition. It would be an optional add-on.


I see. Yeah, it made me think you're making an interesting service just to let someone compromise it if they collect enough tokens. I guess more specific phrasing wouldn't hurt for skeptics like me.


Ahh, yeah, definitely nothing like that. I'll make a note to clarify the language there in the next website update.


I think Code Verify (mentioned in the article) is the right approach https://engineering.fb.com/2022/03/10/security/code-verify/

Although it's all open source, if you install the published version of their extension, iiuc, it will only work with certain Facebook sites out of the box. There should be an easier way to enroll new sites - I'm thinking TOFU with fingerprinting.


E2EE can be done anywhere, but of course some entities like Google or Apple might try to spy locally.


Indeed, the article's assertion that "mobile is convincingly a more secure platform" falls over if you don't trust the gatekeepers holding the keys.


And even if you trust them, they could be compromised now or in the future:

https://www.computerworld.com/article/3712380/russia-hacks-m...


So that's fine and all, but if you've lost that trust in the platform itself, you can't trust any E2EE; it's not as if browsers somehow fare better in that analysis; they are in fact strictly worse.


Browsers can run on platforms that don't have secret baseband firmware, obligatory auto-updates, etc. Obviously a layer on top of a specific platform can never have better security than that underlying platform, but you can run a browser on a better platform.


Browsers do add one more layer of trust on top of the OS, but so does any mobile app?


It seems like "code from a zillion servers, retrieved over a zillion TLS connections" can be solved by serving the app from a single origin. You have to vendor your dependencies but that's probably wise anyway.


Isn't the whole point of using E2EE on Company A's messenger app that you distrust Company A?

After all, they could easily serve up slightly different js when you log into their website; for all practical purposes it would be undetectable.

Being reasonably confident that you're running the same code as everyone else seems pretty important to me - and that's not how the web works.


That's the ideal but browsers don't support that today. Trusting the client but not the server is weaker but still better than nothing.


> That's the ideal but browsers don't support that today.

Which is why people say E2EE is unsuitable to the browser.

Sure, there exist other threats and other security issues in the browser, many of which can be addressed by various means, but that is a different thing from E2EE.


Yes, it's odd to me that everyone has to serve some random JS from some random (to me, very untrusted) third-party CDN server. Why does every single random site have to have like 8 different domains with random character variations and serve JavaScript from them? To me that seems completely bonkers, absolutely untrustworthy, and extremely lame, as if they have no actual money to do it like a normal person. Like FB serving stuff from facebook dot com but also from the random, lame domain fbcdn AND also fbsbx (wtf is that?!? I don't trust it at all). And that's on the "reasonable side"; a random WordPress site or eshop is a complete nightmare in this regard.


The reason used to be performance, but now, with modern browsers and especially HTTP/3, I doubt that is still true.

I guess the separation into CDN and non-CDN stuff could still make sense since you can serve static resources from even closer to the user than you serve your dynamic stuff while avoiding having the CDN proxy traffic to the website backend.


There is still a reason for E2EE where you trust the source, but the source simply _doesn't want_ your data. Unfortunately, data on a server is necessary in an infrastructure that has no guaranteed persistence (looking especially at you, Safari), or when you want to switch clients seamlessly (for example from your laptop to your phone).


I'd say neither web nor desktop is easy. Cryptocat was a great example: it started off as a web app and got a lot of heat, was rewritten as a desktop app, and still failed.

https://en.wikipedia.org/wiki/Cryptocat#History


That's more to do with the E2EE implementation than how it was deployed... Cryptocat did a lot of things wrong other than using crypto in the browser.


It seems like using a browser extension instead of a website gets you most of the way there:

- Extensions are signed with a developer key

- You can be pretty sure any code changes pushed out to one user are pushed out to everyone

- You benefit from the Chrome Web Store review process (or whatever equivalent Apple and Mozilla do)

- Extensions are permissioned and sandboxed


More relevant to the article, Chrome extensions can't just "open a zillion connections." Included scripts must be unminified and external scripts are strictly declared ahead of time with more secure default permissions.

I like that idea.


Indeed, Manifest V3 disallows remotely hosted scripts entirely: https://developer.chrome.com/docs/extensions/develop/migrate...


I mean, kinda. It's little more than a consumer-grade padlock or maybe "do not cross" tape: there are countless ways around it, and it mostly just encourages normal cases to be more static (which is a good thing! but very far from a security tool).


It's not only E2EE. Let's say you would rather not know your users' passwords (which I think is commonly agreed upon as good security hygiene, at least for storage, but it makes sense that you wouldn't want to have them on your servers at all). You are going to hash them client-side with some password hash, or even better using some aPAKE. Great, except it's all moot, since your password-handling code comes from your compromised server. I wish there were practical solutions to this problem.
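
For reference, the client-side half is straightforward; the hard part is exactly what's said above, trusting the code that runs it. A minimal sketch using Web Crypto's PBKDF2 (parameters are illustrative, and an aPAKE would look quite different):

    // Derive a password hash client-side so the cleartext never leaves the page.
    async function hashPassword(
      password: string,
      salt: Uint8Array,
    ): Promise<ArrayBuffer> {
      const key = await crypto.subtle.importKey(
        "raw",
        new TextEncoder().encode(password),
        "PBKDF2",
        false,
        ["deriveBits"],
      );
      return crypto.subtle.deriveBits(
        { name: "PBKDF2", hash: "SHA-256", salt, iterations: 600_000 },
        key,
        256, // bits
      );
    }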


Mutual authentication without sharing secrets is already supported at a protocol level in some sense, via client certificates. The relatively straightforward and intuitive nature of password authentication might explain why they haven't caught on as a replacement.


"Unlike web apps, desktop applications are often written in vulnerable memory-unsafe languages like C++."

As if browsers (arguably the most attacked desktop applications) are not written in C++.


Signed code bundles with STS must-staple-style semantics for preventing downgrades sound reasonable. You would probably also need some kind of protection in the browser runtime that prevents or limits the scope of changes to execution that can be invoked via web resources outside of that bundle.

Kinda starts to point towards a move from traditional WWW domain/location security semantics to abstract identity-based approaches.


seeing "memory-unsafe languages" in an article is one way to get ppl to instantly click away



