I wonder whether they'll make it unverifiable (like Keybase[1]), opportunistic (like WhatsApp[2]), require confirmation on every call so nobody practically uses it (like Telegram[3]), hide the key verification screen so 0.001% of users practically do it (like Signal[4]), use long-lived keys that can decrypt all historical traffic (like PGP), make encrypted mode some second-rate mode that removes most of the features you need (like Telegram's encrypted chats; I see that the announcement already mentions some features that become unavailable when you turn on encryption), or whether they'll have a novel trick to break the encryption at will. At a minimum, I expect it'll be an obfuscated binary with auto-updates and no published independent code reviews.
Let's hope Microsoft surprises us all! (as I see most of the comments being skeptical)
[1] Try to verify which encryption keys your mobile keybase chat client is using or if the server injected its own keys. Bonus points if you spot the "this chat is end-to-end encrypted" banner while you're doing this. Details: https://security.stackexchange.com/questions/222055/how-can-...
[2] You have to go into settings to enable even just seeing key changes, let alone having the client stop you from sending messages when a previously verified key changes.
[3] Telegram calls have encryption, but every time you have to verify the emojis again and again instead of it just storing the other party's key.
[4] Threema shows the verification status for every chat. My mother-in-law, of her own volition, wanted to verify keys with me. She later tried Signal to compare, since she needs to use something encrypted for her work (medical data), but didn't get that she had to verify keys on Signal, too. I find that even techies often don't know where to find that in Signal, or even realize that it's required for the security claims to apply.
I doubt it; that's not the kind of feature that Teams customers want.
Most businesses have a need for discovery and audit, and Teams is a phone application with lawful interception needs, so you’re always going to have an interception capability.
End of the day, it’s a way to sell the full Microsoft stack… the only way to have an E2E encrypted meeting is to not allow dial-in, which is a way to encourage adoption of Teams as a unified communication solution. (Which is the strategy Zoom is attempting as well)
Lawful interception is a joke. Dragnet spying is what passive encryption breaking enables, and it's what the authorities are always after, not targeted surveillance for a specific purpose. If there were any respect for the Fourth Amendment by LEO and the government at large, I might sympathize a bit more with lawful interception schemes.
Meanwhile, in reality, I look forward to the need to have every person location-tracked even without a cell phone. I'll cynically bet it's part of "the discussion" by the time I'm at retirement age. Being untracked just presents too many dangers!
>...uses long-lived keys that can decrypt all historical traffic (like PGP)...
It is pretty clear from what the article describes that they are not doing some sort of static encryption thing like PGP. They are providing a system that is completely under the control of a central authority. Static encryption would normally give control to the end users at the cost of having the users responsible for their own key information. They are not going for any sort of user empowerment here.
> They are providing a system that is completely under the control of a central authority.
That could be the dictionary definition of the opposite of end-to-end encryption. Since the claim is that it will be e2ee, I'd assume they at least make an attempt to meet the definition and subvert it in some other way.
It's the classical definition of end-to-end encryption: you encrypt it and use _our_ channel to send the keys to the recipient of the message. We promise (rotfl) that we won't look.
> [3] Telegram calls have encryption, but every time you have to verify the emojis again and again instead of it just storing the other party's key.
I was under the impression that this feature was a "gamification" of encryption for completely non-technical people who otherwise wouldn't even think about it, to be honest.
I'm not the GP, but yes it is. The umbrage that the GP seems to be taking is that Signal doesn't require you to verify your key, nor is it particularly visible.
As I understand it, Signal takes a "trust on first appearance" approach and only really starts bugging you about verifying keys if they suddenly change.
> The umbrage that the GP seems to be taking is that Signal doesn't require you to verify your key, nor is it particularly visible.
Only the latter really, but yes. It doesn't need to be required -- see the footnote text where I elaborate: Threema also doesn't require it, but it's very visible, so people do it and thereby avoid having to trust the server.
> only really starts bugging you about verifying keys if they suddenly change
Does it? I haven't seen that, though I think the only key changes so far have been in group chats, perhaps that's different. This would be quite an improvement over WhatsApp that I wasn't aware of (if it really does this).
> Signal advises you whenever a safety number has changed. This allows users to check the privacy of their communication with a contact and helps protect against any attempted man-in-the-middle attacks.
> The most common scenarios where a safety number advisory is displayed are when a contact switches to a new phone or re-installs Signal, but these actions don't always result in a safety number change. However, if a safety number changes frequently or unexpectedly it may be a sign that something is wrong.
Oh just that little message saying that the safety number changed? I thought it would actually, like you said, start bugging you to verify the safety number upon "sudden" changes.
It used to, but things were changed at some point to make Signal nag the user less on a safety number (fingerprint) change. Now I think there is just a message that scrolls off the chat window. There is some sort of icon next to the identity in the roster that indicates that it is unverified.
It is indeed. The pattern that OP apparently doesn't like is that the cryptographic material needed to actually verify the other party is somewhat buried, and as a result surely few people do this.
As critiques of signal go, it is perhaps not entirely unfounded, but I'm not sure it'd be my first - or even second - qualm.
I don't necessarily agree with their decision to put it behind a button that isn't obvious, but I can see why they made that decision. Every additional step required to set up a chat makes actually getting to a chat more tedious.
All of the encrypted messaging apps have to deal with the fundamental problem of deferring trust verification to something else, somewhere else, and that work has to go somewhere, sometime. It seems Signal has decided that (1) it shouldn't be centralized, like through Keybase, and (2) verification of identity on first contact is less of a risk than MITM, and the fact that it's possible and people do it is enough of a general deterrence. It'll stop anyone who wants to be stealthy.
> Every additional step required to set up a chat ...
Nobody was arguing that it should get in your way. Again, see the alternative app I mentioned as an example of how it can be done: in Threema it also doesn't get in your way, you also have to go into the user's profile to do the verification, but the status is displayed while you're chatting rather than being just something ignored in a back menu for super nerds.
This is indeed what I meant. Of course it's encrypted by default, but unless you verify the keys, you're trusting the server to be honest which is not the point of end-to-end encryption.
Because you can verify keys, the chance of them getting caught inserting interception keys is pretty decent (if a small % of users does the verification), so they're quite likely to remain honest and anyone who might hack a Signal server would also think twice where the gain is worth triggering this indicator of compromise.
You are trusting the server to be honest, but maybe not when you think.
In Signal's design, participants have a long term identity key, and the thing you're verifying is essentially just the combination of your long term identity key with the other party's long term identity key, but deterministically ordered so that you both see the exact same value. They call this the "Safety Number". So e.g. maybe your identity can be summarised as A4 and Jim's identity is C6, Signal will show you the value A4C6 as the "Safety Number" for Jim, while Jim also sees A4C6 as the "Safety Number" for lucgommans.
This value is calculated by your client (and Jim's client). The server could present you a new long term identity key for Jim (because Jim's phone dropped dead and he bought a new one, or because the Secret Police want to intercept messages for Jim) but this triggers your client to warn you that the Safety Number changed and you need to decide if this is still Jim.
The Safety Number isn't calculated for each message, or call, or whatever, because it's made from these long-term keys.
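A minimal sketch of that deterministic ordering (this is an illustration of the concept only, not Signal's actual algorithm, which derives per-user numeric fingerprints through many iterated hashes of the identity key; the key values and digit lengths here are invented):

```python
import hashlib

def fingerprint(identity_key: bytes) -> str:
    # Toy stand-in for a per-user fingerprint: hash the long-term
    # identity key and keep a short string of decimal digits.
    digest = hashlib.sha256(identity_key).hexdigest()
    return str(int(digest, 16))[:30]

def safety_number(my_key: bytes, their_key: bytes) -> str:
    # Sort the two fingerprints so both parties compute the exact
    # same combined value, regardless of who runs the calculation.
    a, b = sorted([fingerprint(my_key), fingerprint(their_key)])
    return a + b

# Hypothetical identity keys; both sides see the identical value.
alice = b"alice-long-term-identity-key"
jim = b"jim-long-term-identity-key"
assert safety_number(alice, jim) == safety_number(jim, alice)
```

The point of the sorting step is exactly what's described above: you and Jim each feed in the same two long-term keys, so you both display the same Safety Number without ever exchanging it.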
The Signal UI reflects the reality that the only way to be sure Jim is seeing the same Safety Number as you is to physically meet up and compare. I think it has pretty nice affordance for that scenario, you can scan a QR code from another Signal user to verify them.
It's tempting to think "but I could just read my Safety Number out on this call, and then we verify that way". Signal won't prevent you from attempting this, but it isn't necessarily safe, so it also doesn't encourage it. Can a nation-state adversary fake up the "verification" step in a voice call? Maybe. Would you notice if they tried? Maybe. Best to sidestep the maybes entirely and, if your threat model requires it, actually perform verification of the Safety Number in person.
I'm confused, what is it that you're trying to say? None of it adds up to "you are trusting the server" and all of it (except the part saying that you're necessarily trusting the server) is exactly my understanding.
It's unclear to me whether you understood when you are being given potentially untrustworthy information by the server.
Some of the other designs you described try to have users verify the ephemeral end-to-end encryption keys. If Signal did that then obviously each call, text conversation, or whatever has new keys, trust doesn't continue from one to another. But Signal's long-term identity key relates everything together. The Safety Number is about ensuring you really have Bob's long-term identity key (and Bob has yours) rather than about this particular call, conversation, etc.
WebRTC media transports use DTLS-SRTP, which is (loosely) the UDP/RTP version of TLS.
In a peer-to-peer WebRTC session, running on -- say -- a non-hacked version of Chrome, you can be pretty confident that you already have good end-to-end encryption.
The problem is that Teams calls are generally routed through media servers (they are not peer-to-peer sessions). So the encryption is from each client to/from the media server, not "end-to-end." In a Teams call running on -- say -- a non-hacked version of Chrome, you can be pretty confident you have good on-the-wire encryption. But Microsoft can decrypt your media streams as they pass through the Teams media servers.
The media server has several reasons it (probably) needs to decrypt the SRTP streams.
First, the WebRTC standard, today, does not specify any mechanism for "end-to-end" encryption of media passing through a media server.
The WebRTC specification mandates support for DTLS-SRTP, which has very nice "zero trust" properties. Certificate fingerprints are exchanged out-of-band (via the application-level signaling channel) but the actual encryption keys are generated in-band (via the WebRTC media channels) and never exposed.
But, with this approach, it's not possible to share keys between multiple participants, which means it's not possible to route an encrypted media stream to multiple receiving clients.
There was a debate during the WebRTC standards process about whether to also support SDES, in which keys are exchanged via the signaling channel. Because you have to trust the signaling channel, and it's trivial to log keys, SDES has obvious attack surfaces that DTLS-SRTP doesn't.
SDES support was dropped from the draft standard. Most WebRTC implementations only implement DTLS-SRTP.
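For illustration, the out-of-band part of DTLS-SRTP is just an attribute line in the SDP carried over the signaling channel. A minimal sketch of pulling it out, assuming a made-up SDP fragment (the `a=fingerprint:` syntax follows RFC 8122):

```python
import re

# Invented fragment of an SDP answer as delivered by the signaling channel.
sdp = """v=0
o=- 46117317 2 IN IP4 127.0.0.1
a=fingerprint:sha-256 4A:AD:B9:B1:3F:82:18:3B:54:02:12:DF:3E:5D:49:6B
a=setup:actpass
"""

def extract_fingerprint(sdp_text: str) -> tuple[str, str]:
    # DTLS-SRTP: the certificate fingerprint travels via signaling;
    # the in-band DTLS handshake later proves the peer actually holds
    # the certificate matching this fingerprint.
    m = re.search(r"^a=fingerprint:(\S+)\s+([0-9A-F:]+)$", sdp_text, re.MULTILINE)
    if m is None:
        raise ValueError("no fingerprint attribute in SDP")
    return m.group(1), m.group(2)

algo, fp = extract_fingerprint(sdp)
# A client compares fp against the hash of the certificate presented
# during the DTLS handshake before trusting the media path.
```

This is why SDES was the weaker option: there, the actual keys (not just a certificate fingerprint) would sit in the signaling messages, ready to be logged.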
Second, the media server needs some codec-level information that is part of the encrypted SRTP stream in order to route and manage the video. This is solvable with RTP header extensions that pull all the data needed by the media server out of the encrypted RTP payload and into the unencrypted RTP header. But each codec needs its own header extension, which needs to be standardized and supported by WebRTC implementations and media servers.
Third, some things you probably want the media server to do at least some of the time require decrypting the media streams (keyframe regeneration, recording, transcription, bridging to telephone dial-in).
All of which suggests that what it really means to promise "end-to-end" encryption is ... at least a little bit up for debate. If you trust your signaling channel but not your media server, and both are provided by the same application or service, is that end-to-end encryption? If you trust your media server some of the time (when you want to record a session, for example) but not all the time, how do you verify that the appropriate key exchange mechanism is being used at the appropriate times?
There is an experimental API (WebRTC Insertable Streams), and an IETF Draft (Secure Frames), that together may allow true end-to-end encryption in the near future in a standards-compliant fashion. Note, however, that this still leaves key generation and exchange up to the application.
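The core idea behind Secure Frames is to encrypt each media frame's payload before it reaches the transport, leaving just enough framing in the clear for a media server to route packets without being able to read the media. A toy sketch of that shape (the hash-based XOR keystream stands in for a real AEAD cipher such as AES-GCM, and the 8-byte counter prefix is an invented layout, not the SFrame wire format):

```python
import hashlib

def keystream(key: bytes, frame_counter: int, length: int) -> bytes:
    # Toy keystream: NOT real cryptography. A real implementation
    # would use an AEAD cipher keyed per sender.
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + frame_counter.to_bytes(8, "big") + block.to_bytes(4, "big")
        ).digest()
        block += 1
    return out[:length]

def protect_frame(key: bytes, frame_counter: int, payload: bytes) -> bytes:
    # The counter travels in the clear so the media server can forward
    # frames and receivers can derive the matching keystream; only
    # endpoints holding `key` can recover the media payload.
    ks = keystream(key, frame_counter, len(payload))
    ciphertext = bytes(p ^ k for p, k in zip(payload, ks))
    return frame_counter.to_bytes(8, "big") + ciphertext

def unprotect_frame(key: bytes, protected: bytes) -> bytes:
    counter = int.from_bytes(protected[:8], "big")
    ciphertext = protected[8:]
    ks = keystream(key, counter, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = b"shared-by-the-application-layer"
frame = b"\x01\x02video frame bytes"
assert unprotect_frame(key, protect_frame(key, 7, frame)) == frame
```

Note how the last line of the sketch mirrors the caveat above: the key is "shared by the application layer", i.e. the standard leaves key generation and exchange up to the application.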
It's mind-boggling how many features Teams calls are missing.
It still does not have an "annotate" tool. Other features are good-to-have, but not being able to point things out in a shared screen is a deal breaker for remote work collaboration.
The biggest issue I have with Teams, Zoom, Hangouts is that I have to use all three in the course of the day and I can never find anything (mute, share, etc.) because they all have different design patterns (which change seemingly with every other release) so I end up hanging up on calls when I want to unmute myself.
Zoom has it but it's awkward. Slack has a very convenient crayon that just works most of the time and is nice for quickly pointing at things, but Slack calls still seem iffy sometimes. Who else has annotation?
You can format your post as a "quote", but that doesn't give you a true pointer (e.g. you can't click the quote to go to the post, like in a WhatsApp or Slack reply).
To add insult to injury, it's very clunky to quote and respond in the same post. You have to paste the text you're going to quote, then hit return and type your answer, and only after that select the text and press "quote" in the rich editor.
If you don't follow that particular order everything (including your response) becomes part of the quote.
I'm ranting, but it seems such a trivial thing to get right that it bothers me. And I need it every day.
Just type > before pasting the quoted text. But you have to type some text on a separate line as a placeholder for your own response before you paste your quote, or else your response will be part of the quote, as you just said.
In Slack I can type > to quote something, and a line break will continue the quote. However, pressing backspace returns to normal text
In Teams you're stuck on quote-mode
Notice that in Slack I don't need to use quotes for replies, as the reply-to-message works
Retraction as I write this, after some intensive googling: apparently pressing Shift+Enter twice in Teams breaks you out of quote mode. So at least there's that.
> According to The Guardian, NSA had access to chats and emails on Hotmail.com and Skype because Microsoft had "developed a surveillance capability to deal" with the interception of chats, and "for Prism collection against Microsoft email services will be unaffected because Prism collects this data prior to encryption."
End to end, but to which ends? I seriously doubt that MS will cut itself off. There are too many reasons (national security response, marketing, police investigations etc) for them to keep at least some access to content in extremis. The "end" will be some midpoint within MS systems or control.
Also, empty words are one thing and deeds are another. We can look at MS's past actions, which already happened, and one of them was this: the first thing they did after buying Skype was to disable the p2p calls they couldn't intercept and rework the app architecture into a centralized one.
Good to see Microsoft competing with Apple. Teams already kills an i9 octa-core 32GB Macbook ensuring fans max out, this will ensure they can kill the next MBP launching soon.
When I am in a video chat, Glances reports Teams taking between 120% and 140% of the CPU. That's when I'm a passive participant, not even sharing my camera.
Microsoft was the very first partner in the NSA's PRISM surveillance program (the #1 used source in US intelligence reports!) that allows the US federal government access to stored communications in Microsoft servers (think Skype, Hotmail, MSN messenger) directly and without a warrant, for any user on the service. It now includes all major US providers, but MS was first, and all of the administration and planning of the surveillance programs themselves happens in Microsoft software, on Windows.
The Snowden PRISM slides are screenshots of PowerPoint.
Microsoft is as integrated in the global surveillance apparatus as anyone possibly could be.
Making products that don't advance the US global surveillance programs and national interest would be to bite the hand that feeds them.
Is "apple silicon optimized builds" somehow required for E2E encryption or how is this comment related to the submission itself? General comments about product X just because it's mentioned in the submission feels pretty boring.
While obviously not a requirement for encryption, my comment is designed to draw attention to what's becoming a pattern of Microsoft not focusing on much needed performance improvements and instead continuing to layer new features on top of a shaky core experience.
Heck, I'd settle for one that could take focus reliably and consistently when changing between screens. There's some stupid invisible "Microsoft Teams Notification" it always focuses to instead. And it never bothers to drop me directly into the chat window when I do manage to focus it.