The equivalent to the app signing cert for a web app is the TLS cert. If security is important to you, don't let third parties control your TLS cert!
It's so common now to let CDNs (primarily Cloudflare) run your TLS frontend that this article apparently doesn't even consider the idea of hosting an app entirely from servers the app author controls.
That said, it's true that a TLS cert is necessarily more exposed than an app signing cert can be. If you're serious about security, your app signing key will be on an airgapped machine. The TLS private key, however, has to be available on a networked machine in order to complete handshakes.
The certificate is public; it's fine for copies of it to be on all edge devices. The problem today is that the associated private key has to be on those edge devices too, and that's what Delegated Credentials solves.
> you can trust in TLS when you’re downloading signed software too; but for the web, you only trust in the connection, there’s nothing else to save you if you can’t trust that connection.
While Signed HTTP Exchanges were originally developed for a more nefarious purpose (to allow the URL to be changed by a trusted proxy), I think the idea or one like it can apply to serving trusted web content. Think of it as instead of your current TLS cert verifying your host, it would also verify the full URL and content including headers. It's a bit untenable for regular use, but some apps could leverage it for extra trust.
> When designing E2EE protocols for persistent vs ephemeral applications, we need to figure out where we need long-term identity in terms of cryptographic keys, and where we don’t.
I would hope that web apps always lean towards ephemeral key use whenever possible (i.e., generating a key pair in the browser upon authentication and POSTing the public key, with the private key held only in local JS memory for just that page). If this means the webapp has to be built to work with 20 different keys for a user because they opened 20 tabs, so be it. I know people are afraid of doing anything like key generation in the browser, but we can't write off the possibility of E2EE web apps altogether. I fear the browser allowing access to the OS's key management or the system's TPM for key storage, because it may lead to overuse of/over-reliance on long-term keys, but I'm sure it'll happen if it hasn't already.
I'm hopeful that Signed HTTP Exchanges lead to what you describe, but another Chrome-originating technology that could be extended/abused to achieve a similar goal is the <portal> tag.
There is already a little trick[0] that can be done with bookmarklets (or locally saved files) which allow you to bootstrap a page with a known set of JavaScript code running on it, but it has the disadvantage that the URL bar doesn't contain a familiar domain. If the <portal> spec[1] ends up supporting SRI[2] integrity hashes in a sensible way, this little bootstrapping technique could actually be practical.
Combine SRI with CSPs and Cache-Control: immutable and you could already commit a page to never change. All that's missing for TOFU is fingerprinting this combination, watching for changes, and surfacing the information to the user.
Unfortunately that by itself does not guarantee security. The code that is verified by the bookmarklet could download additional code when it runs, and that code would not be verified.
My point is that verifying that the content doesn't change is by itself not enough. You also have to verify that it was secure to begin with, and that is much harder, especially for your typical end-user.
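Right, and a strict Content-Security-Policy is one partial answer to the "downloads additional code" hole, by pinning exactly which script is allowed to execute. A sketch (the hash and API origin are placeholders, not from any real deployment):

```
Content-Security-Policy: default-src 'none';
    script-src 'sha384-HASH_OF_AUDITED_BUNDLE';
    connect-src https://api.example.com
```

With a hash-only script-src and no 'unsafe-eval', the verified page can't load or eval additional script at runtime, though it can still move data over whatever connect-src permits, so this narrows the audit surface rather than eliminating it.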
That's a separate problem to solve. But for audits to even make sense you first need to solve the problem of sites changing under your feet, i.e. enabling TOFU.
I started building an end-to-end encryption API once that includes server/client setups. I promise I'll finish it one day (there's still client-client to do; client-server is all done). https://gitlab.com/DrRoach/NetworkAPI
While this article points out there are still opportunities for a malicious actor to gain access to the private keys stored locally in the browser, wouldn't that still be an improvement over only using https and server side encryption?
Do you even need a "protocol" if the clients trust each other?
Client A generates a random key, maybe a nonce, and a session ID, then encrypts that with B's public key, signs it with A's private key, and sends it to B. Only B can decrypt the message; A and B now share a key.
Or maybe that is the protocol.
Anyway, if you know someone's public key and they know yours - you're already bootstrapped for a secure channel?
Edit: now seeing the page, I see this is more a link to the API for libsodium, and that obviously makes sense - to have a standard implementation (and I guess this does some tricks for generating public/private session keys from long-lasting public keys?)
> this basically boils down to TOFU (trust on first use), but the trust does not perpetuate across uses, so it’s more like, TOAU (trust on any use). The trust is ephemeral, the meeting is ephemeral, the ID is ephemeral. For a lot of meetings, this is perfectly acceptable.
I think I would call this TFSU (Trust For Single Use). Trust On Any Use sounds like complete and total trust.
Not sure why I got downvoted - go try it yourself. Group calls are not supported at all in the webapp. As announced last month it is coming soon - but certainly not yet.
E2E is an illusion on anything other than a free Linux running on a free BIOS with no security enclave.
You can't have E2E on mobile devices, and you can't have E2E on any other OS.
(And you'll probably have a hard time finding the right combination of hardware and Linux distro to have it on Linux)
This seems to pick an arbitrary expansion of what “end to end” means, where “end” is “the OS layer on the source/destination computers”.
What if the monitor is backdoored and sends copies of the display buffer to The Secret World Government? What if the keyboard has a hardware keylogger? What if we’re all living in an elaborate computer simulation of a global pandemic?
As an alternate comparison: it’s still end-to-end encrypted communication if I take the securely received message, print out a copy, and tape it to a bulletin board at the town square.
The “end-to-end” refers to the transmission path. It’s a defense against MITM, and can be accomplished by plenty of systems that aren’t Linux.
I feel like you meant the question to be rhetorical, but for the sake of clarifying: there is tremendous value in protecting against MITM, even if there remain other attack vectors.
Encrypting traffic end-to-end over the network protects against entire categories of attack. For some attackers (for example: ISPs), end-to-end encryption essentially removes their ability to compromise traffic contents. For other attackers, it forces them to ignore those categories of attack and instead narrows them to things like compromising the device.

Notably, Linux is not magically immune to device compromise, even if you’re running a magical open-source BIOS. And unlike Windows/OSX, Linux doesn’t have Apple/Microsoft paying large, motivated security teams whose work is pushed to all their devices. At best, Linux has commercial distro providers like RedHat paying for security work. At worst, it relies on the good will and skill sets of open source maintainers.

In trade, Apple/Microsoft offer lower customizability/visibility into the OS. But since the average user is not interested in (or qualified to do) security hardening of devices, Linux isn’t likely to buy them anything meaningful in the field of device security.
All of this is to say “life is hard. We shouldn’t make it harder by protesting the concept of E2E encryption due to the obvious fact that it does not cure all ailments.”
The moment the information is unencrypted and made available via a user interface, you've lost all control.
You don't control the iOS rendering loop. You don't control the Android rendering system. (You might think you do, though, since much of Android is open source.)
You don't control the OS core libraries, you don't control the microcode of the CPU.
You don't control the blitting to a screen device or the recording of photons on a camera.
And I'm not even talking about external manipulation to exfiltrate data.
You might control the content of the IP packets sent. You don't control any other IP packets sent.
Yeah, even that is not true.
Do you know what Apple does with text you enter into a text field? Or the letters you type on a virtual keyboard?
It's closed source and even if it were open source, you have no way of checking if the binary has been produced by that source code.
https://jitsi.org/blog/e2ee/