Okay, can someone give a good guess as to what's the real reason to do this? Not the feel-good BS - what's the business case? How is this gonna make them money?
It seems this just makes them lose access to a ton of data to mine for advertisement. I chat with a friend on IG about something and I immediately get ads for it. It's a bit creepy, but I feel it's working the way they'd want it to (never bring up watches, you will get watch ads for the next 6 months)
Are they bleeding a lot of users to Signal/Telegram b/c they lack encryption? (my impression is only nerds care about encryption)
Are they getting harassed by requests from law enforcement?
Are they in hot water b/c of child porn?
Do they need plausible deniability?
I don't really get why they're rolling this out. Like, what's their angle? Seems like something users don't care too much about, and they lose a ton of valuable data.
They must be able to do good targeted advertising without message contents, with public likes and other data on scrolling behavior, especially as AI tools improve. Maybe having this data is more trouble than it's worth. Data is a liability as well as an asset.
That's kind of what I thought before I saw inside.
What actually happens is the reverse. 99% of engineers can have amazing values that you share, but they do not ultimately make the decisions. The board, Zuck, and the $ do.
Nonsense -- they do as long as Mark, and his chosen exec team, control whether they work there or not. Anything else is a pretty lie people tell themselves because they like the paycheck.
I don't think they need the exact content to sell ads.
1. sell ads based on the message itself, get crushed in the media; 2. encrypt messages but sell ads based on profile and metadata, get good publicity. I think they are doing option 2
Messaging is just part of the platform. My guess is that they want to forgo this part and concentrate on others
- It's hard to imagine a project getting signed off for just being "nice" - especially when it hurts their own business interests.
- I don't really see it making sense as a PR move to build trust. I think outside of the tech sector they are doing fine on that front. The vast majority of people use their Bytedance, Meta, Tencent, etc. apps without considering whether they're encrypted.
- I don't think this announcement will get any substantial press coverage
- It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever
> It could be preemptive so that they don't get bad PR when they end up being "complicit" in getting people sent to jail for abortions (in the US) or being gay (in some African countries) or whatever
Exactly this. With the recent laws passed they see how their altruistic "save the children" partnerships with law enforcement could be twisted for causes that aren't as popular everywhere.
It's the "right" thing to do only for a small population of tech workers. Most people do not care, and ad customers would be very upset if this degrades targeting.
1. More and more people care about this, e.g. journalists, politicians, etc. Apple has been talking about this a lot, though some of that is marketing for Messages, and their customers are already somewhat aware of private messaging.
2. It may not degrade ad targeting that much. I imagine doomscrolling does it way more: you engaged with this post, you ignored that one, and so on.
for 2, I will say anecdotally I have never in my life bought something directly from an ad until the last 2 years on instagram. It actually found things directly useful to me that I did not know about beforehand (I did still go through an hour of research or so, but was amazed at the algorithm discovery capability)
I think end-to-end encryption should be the minimum requirement for any private/direct messaging in any chat application. For group chats and larger I don't think it's as necessary, since the likelihood that the conversations will leak is much higher; reasonable encryption for those is fine. I do think it's entirely possible to have a conversation that sounds incriminating out of context and in fact is not even remotely relevant. If my shitposting conversations from my teens were taken out of context and shown in a courtroom I'd be facing several life sentences in an asylum.
I think you're right insofar as they are trying to reposition themselves as more trustworthy. I think they see the writing on the walls.
But at the end of the day, their ultimate end is to make more money. If they do the right thing it isn't out of some altruistic motive. It's because they think doing so will make them more money.
> Maybe [Facebook is] doing it because it's the right thing to do
This is the most autistic thing I have read on the internet this year, second to a personal friend sperging out thinking he was going to wife up the first girl he met at a party.
I'm not sure what Diffie-Hellman has to do with anything here, but yeah, there's no reason encryption would prevent interoperability as long as all clients are using the same protocol (which they would have to do anyway in order to be interoperable).
Encryption makes it practically impossible to transform messages between different protocols, since the ciphertext contains not only the text content of the message but also formatting and some attributes (e.g. `reply_to`). Even if the message format were the same, E2EE algorithms also differ between protocols, and you can't re-encrypt the message for other protocols server-side.
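A toy sketch of why that server-side translation fails. The XOR "cipher", the key handling, and the envelope fields here are stand-ins for a real E2EE scheme and message format, not any actual protocol; the point is only that the relay sees one opaque blob containing text, formatting, and attributes together:

```python
import json
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in for a real E2EE cipher (e.g. the Double Ratchet) -- NOT secure.
    # XOR is symmetric, so the same function also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

# What the sending client actually encrypts: not just the text, but the
# whole protocol-specific envelope (formatting, reply_to, etc.).
envelope = json.dumps({
    "text": "sure, see you at 6",
    "format": "markdown",
    "reply_to": "msg_8f3a",   # hypothetical message id
}).encode()

key = secrets.token_bytes(32)  # shared only by the two endpoints
ciphertext = toy_encrypt(key, envelope)

# The server relays only this opaque blob. Without the key it cannot parse
# the envelope, so it cannot rewrite `reply_to`, re-encode the formatting
# for another protocol, or re-encrypt under a different scheme.
print(len(ciphertext), "opaque bytes on the wire")
```

Only a client holding the key can decrypt, re-encode, and re-encrypt for another protocol, which is the workaround the comment below mentions.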
I thought the opposite, at least as a first thought: Roughly two or three years ago, facebook announced their intent to integrate their messengers - so that you could send a message from your fb inbox to whatsapp, from whatsapp to instagram. And since whatsapp has E2E as a major part of their marketing, I'd think adding it to FB and IG rather than removing it from WA would be the way to go.
(though of course it's not REALLY: it harasses you to back up your messages all the freaking time, and when I say "never", as I ALWAYS do, it asks again in 2 weeks. I assume once they're backed up on Meta's servers, there goes the encryption. But that's a parlor trick and they STILL have that data, as I assume at least 80% back up anyway and the rest are mostly worn down by the constant prompting.)
That's because a single org controls all three messengers and they can develop them to converge to the same message format and to the same encryption mechanism. At the same point Signal or XMPP will use a different format and a different mechanism, making them incompatible with messages from Meta, unless a client with a private key reencrypts them.
but then, why do they not take no for an answer and keep nagging about it, and interpret "never" as "not in the next two weeks, but ask again, please!" if they don't have an interest in having these messages there?
(and no, "it's to help YOU, the hapless user!" is of course never the right answer. Corporations never do things for users without an interest of their own.)
"How can we be sure a 3rd party implements the encryption properly" is the counter-argument.
How would you refute that? Trust users to check that some code is the same on both devices? What would prevent a bad actor from MITMing the whole thing from the start?
It's not man-in-the-middle, it's man-on-the-end. If your chat app wants to spy on you, there is nothing you can do, but at least it becomes obvious and easy to analyze because it's client side code. It's not a counter argument to interoperability. You need to trust both sides, the same way web works.
Hmm, but it’s OK to trust that web browsers implement TLS properly? And your router isn’t MITMing you? Or your SSH app exfiltrating all your server information? Why is this different?
Can it do so if the encryption and key management is at the client?
> Or your SSH app exfiltrating all your server information
That's a small niche, and most services don't expose SSH to the public.
> OK to trust that web browsers implement TLS properly
hmm, you may have a point, maybe they'll ensure that only whitelisted browsers can access it, like Chrome with DRM for HTML. Only purpose is public safety. /s
FWIU, this[1] was decrypting iMessages successfully. But it was also storing all your iMessages in a server-side database accessible to the server (instead of being e2e encrypted like iMessage is supposed to be) and leaking the authentication token to access the iMessages over unencrypted HTTP.
Since they control the client, is it possible that the "ad profiling" can still take place on the client, after the message is received and decrypted for visualisation?
The E2EE only means the message is not readable "in transit" (as in after it leaves a Facebook client)
1. Meta can read the metadata perfectly well (who communicates with whom and when), which is enough for ads.
2. Meta doesn't want to be able to read messages, since it's a PR nightmare when doing so. Case: Ordered to do so by a government agency. People could switch to Signal.
3. Data isn't readable "in transit", since it's encrypted with HTTPS. Only Facebook servers could read it if they wanted.
If two of your recipients interacted with a certain ad, there's a chance you have similar interests.
Combine this with the frequency of your chatting and your location (at least based on ip) and the other little bits of stuff users give about themselves, Meta doesn't really need to know specifically what the contents of your messages are.
In the mass of their users, an informed guess is more than enough.
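The metadata-only inference described above can be sketched in a few lines. Everything here is hypothetical (the contacts, frequencies, and interest labels are made up, and the scoring is a trivial weighted count, not anything Meta is known to run); it just shows that contacts' ad interactions plus chat frequency already yield a ranked interest profile with zero message content:

```python
from collections import defaultdict

# Hypothetical metadata only: who you chat with, how often, and which ad
# categories *they* interacted with. No message content involved.
chat_frequency = {"alice": 40, "bob": 5, "carol": 12}  # messages/week
contact_interests = {
    "alice": ["running", "watches"],
    "bob": ["gaming"],
    "carol": ["running"],
}

def infer_interests(freq, interests):
    """Score likely interests from contacts' ad interactions, weighted by
    chat frequency: people you message constantly count for more."""
    scores = defaultdict(float)
    for contact, weight in freq.items():
        for topic in interests.get(contact, []):
            scores[topic] += weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(infer_interests(chat_frequency, contact_interests))
# "running" ends up on top: two contacts share it, one of them frequent.
```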
They control the client, so they can do whatever they want. They can take the plain text, encrypt it with my key, encrypt it with their key, concatenate the two, send it to FB, split off their "copy" and decrypt it to do whatever with, and send my "copy" on to the recipient.
e2e isn't a tech issue, it's a trust issue. Do you* trust FB?
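The double-encryption trick described above fits in a few lines. This is a toy illustration, not a claim about Messenger's actual code: XOR stands in for a real cipher, and the second key is a hypothetical one the client quietly shares with the server:

```python
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher (encrypts and decrypts) -- NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

recipient_key = secrets.token_bytes(32)  # the advertised "end-to-end" key
server_key = secrets.token_bytes(32)     # hypothetical key shared with FB

plaintext = b"never bring up watches"
n = len(plaintext)

# Client: encrypt once for the recipient, once for the server, concatenate.
wire = xor(recipient_key, plaintext) + xor(server_key, plaintext)

# Server: split off its fixed-length copy and read everything; the first
# half is forwarded to the recipient, who sees a perfectly valid E2EE message.
server_copy = wire[n:]
recovered = xor(server_key, server_copy)
```

The user-visible crypto checks out on both ends, which is exactly why this is a trust problem rather than a protocol problem.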
As opposed to the prior step, "0. Analysis During Composition", in which the Messenger client is doing all the metadata analysis/collection while you are typing, and already knows all the tags its going to assign to you for Meta, before the message is encrypted.
Sure, third parties won't be able to see your message. But you did give Meta permission to analyse your content prior to posting.
This anti-pattern is all over Meta's products. You can see it in use when you type an update in Facebook using a browser - just try to leave your comment un-posted, or close the page, etc. Every single keystroke prompts Meta's analysis - which is completed when you press "Post" (prior to encryption/transfer ..)
So this is some slick positioning on the part of Meta's technical PR managers ..
> is it possible that the "ad profiling" can still take place on the client
I believe this is the future in a GDPR world. The server sends a list to the client of 1000 ads, and the client decides which to show based on all the data available locally and a big local neural network model to decide which you're most likely to click.
IIUC the Brave browser is already experimenting with this model. They promise[0] "privacy-preserving" ads to users AND targeting to advertisers:
"...when a Brave Ad is matched to you, it is done on your own device, by your own device, inside Brave itself. Your personal data never leaves your own device."
The mechanism is very similar to what you describe.
The problem is that the 'secret sauce' of ad targeting is that model that decides what you're most likely to click... Ad networks really don't want that model outside their data centers...
Alas, the GDPR might force a rethink on that when it gets enforced with teeth.
No, this is not what E2EE means at all. E2EE means the message is not readable in transit nor is it measured, scanned, sampled, copied, exported, or modified in any way without explicit action taken to do so by one of the legitimate parties to the conversation.
If the client just leaks the plaintext or leaks any information about the plaintext that encryption is supposed to protect then the encryption scheme cannot be described as "end to end".
The client dictates what ads are shown. FB knows which ads are shown to whom. FB can now deduce what topics people are talking about. Technically, convo info has leaked. If someone is getting served ads for Trump, they probably like Trump. If they are getting ads for Biden, they probably like Biden. Etc.
Yes, so that would violate the end to end principle. If the client downloaded all of the possible ads and the selection was totally local, and interaction with any of them was a user choice I think that could still be fairly described as E2E though. Or ads were fetched by private information retrieval.
“What self interested, selfish reason do these terrible people have to this ostensibly good thing?” - paraphrasing your question.
Answer - Message content wasn’t used for advertising. I believe it had been tried at some point and found to be sort of useless. But people like you won’t believe that, so end to end encryption might help build trust and increase engagement.
That's only a valid paraphrase if you think prioritising making money and value for shareholders above all else makes you "terrible people".
Personally I doubt that any more than low single digit percentages of people care at all about E2EE. Even me, as a tech person, I don't care about it, and I actively avoid Signal because of the inconveniences that E2EE causes.
This has been a very big effort to implement, and FB no longer deploys those kinds of resources on vague whims. I think most likely something to do with regulations, and not wanting to be on the hook for user message content, but it's just wild guessing really.
A security conscious person should assume that whatever can be exploited will be exploited - especially when dealing with actors that are economically incentivized to do creepy things.
We still have to trust that Zuck is going to do e2e correctly. I just don't have that trust in him. Messenger app is doing the encrypting, and I don't trust that FB isn't doing it in a way that they get the message also.
Firstly E2E can’t be turned off on a dime. WhatsApp e2e has never been turned off since it was turned on.
Secondly, please educate yourself about what the government actually thinks about the ad business you’ve described as “shady”. Even if it was “shady” to show ads based on preferences and never reveal or sell those preferences to a third party … the elected representatives in government really like having social media ads as an option in elections.
> The senator’s office told Katie that they really wanted to ban that practice but knew they would never get it through the Senate since so many campaigns relied on the tools for their elections. So instead, they said they were going to pressure tech companies like ours to ban the use of the tool in the hopes that if one of us did so, the others would as well. Although we did not stop using Custom Audiences entirely, Facebook and other platforms did dramatically reduce the targeting options for political advertisers.
> show ads based on preferences and never reveal or sell those preferences to a third party
That's not how modern ad markets work. Those preferences are indeed revealed to third parties, specifically ad exchanges and DSPs, as part of the bidstream data.
Now, you say, those bidstream data contain no PII! Except that de-anonymizing those data is absolutely key to targeting, and is widely practiced.
Recently in the news: "Patternz", an Israeli spy-tech company, for years hoovered up and stored all the bidstream data across 87 ad exchanges and SSPs including Google, Yahoo, MoPub, AdColony, and OpenX, de-anonymized them, and claims to have profiles on billions of users including their location history, home address, interests, information about 'people nearby', 'co-workers' and 'family members'. (See also: https://pbs.twimg.com/media/F-5bA6QW8AAyfSK.jpg )
Please stop spreading dangerous misinformation about the threat programmatic advertising poses to our privacy and national security. Your extremely sensitive data are being passed around willy-nilly and this will not change until RTB is outlawed.
Ylva Johansson, the EU commissioner who proposed that (apparently failing [1]) law, has used Meta’s model behaviour in reporting CSAM material to NCMEC & EU authorities as the justification for why that law should exist.
Considering it was Meta’s policy to scan even when not mandated, it seems like an internal shift in attitude.
Given that E2EE messengers usually require being run on a smartphone as primary device, my guess is that they are trying to push the last remaining non-app-and-web-only users to their messenger app.
The end-to-end encryption also works on the web. I’ve used it and it’s excellent. You need to use a PIN to access your past messages from their backup HSMs, but other than that it’s completely transparent.
If I understand the parent comment right, this was an argument against ProtonMail's End-to-End Encrypted Webmail 5+ years ago.
The argument being that some assurances typically associated with E2EE (that "even we can't see what you're doing") are shakier without a disinterested third party serving the application to the user. If you have some target user `Mr. X`, and you operate the distribution of your app `Y`, you could theoretically serve them a malicious app that sidesteps E2EE. And since it's just a web app: the blast radius is much smaller than if you were to go through the whole update process with Google or Apple and have it distributed to all users.
Yes, and my guess is that they are planning on removing the standalone messenger from the web version. You'll probably need to have the FB Messenger app installed on a smartphone device in order to use E2EE. That would make it impossible to write messages on the web version (i.e. facebook.com) without having an app installed. I currently do not have the app installed and am able to write messages on the pure web version of FB on desktop. My guess is that they are enabling E2EE to get the last remaining desktop-only-and-website-only messenger users to install the app. Hope that cleared it up.
Again, my point is not that FB Messenger will stop working in the web browser altogether. My point is that FB Messenger will stop working in the web browser if you don't have the FB Messenger app installed on your smart phone as the primary device.
In a way that works well on low power mobile devices?
Most people I know using FB messenger do so on desktop via facebook.com and the app on mobile. I don't see them removing the former any time soon but if the web only version still exists for mobile users perhaps that will go.
Yes, that was explicitly stated in an interview[1] a while back. Quoting from the specific section:
> “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on. But that basically has just meant completely rewriting Messenger and Instagram direct from scratch.
What’s the total cost of encryption engineering / bau? I assume the ‘Facebook cares about my privacy’ goodwill from unknowing users will be worth more, but building a ‘secure’ public reputation has to start somewhere.
They can easily identify what and who you're talking to with message metadata, which is usually not encrypted. They can cooperate with government agencies this way. You don't need to know the exact content of a message, you just need to know who you're talking to and when.
Well, it's encrypted in transit, and maybe encrypted at rest on their backend. But once the text, after being decrypted, appears in their text views and web pages, I wouldn't put it past them to tag every single word, glean lots of data/metadata from there, and send it home to do their magic without associating it with the identity. I thought that was a given. They also haven't touched upon it, except maybe the "Logging limitations" part - that section read like hogwash to me.
A kind of fatigue is setting in when it comes to Fb messenger and Instagram. They have already bloated these apps and they can’t really add any other gimmicks. So they are trying the “other” gimmick now.
My take, or guess, is that they are doing it because they really have nothing else to do.
Complying with warrants and other requests has a cost. By claiming not to have access to them, they can save money. I think they or some other advertisers have used the actual messages before, but concluded it was too noisy to be worth it.
I have no side to take in this discussion, but just wanted to point out that the USA is not the only country that exists. I know it sometimes seems that way on Hacker News, but I promise you that there is a big wide world out there that has nothing to do with the Supreme Court :)
What is the share of revenue from US? I would guess it's not thaaat far from 50%. The median income in the US is like 20x of India, so presumably ad views from the US ought to be quite a lot more valuable. I would guess EU + US is the vast majority of revenue.
The post they're replying to says "half of their customers", which implies 100% of their customers are in America, which is obviously completely wrong.
The number of users matters much less for FB. The number of users who can be monetized matters much more, and the CPC for US users is always much higher than for other countries.
At this point, with the impact they have on the global stage and the fact that they will only pay their taxes where they want, it's a bit irrelevant to keep this frame of thought.
If their entire user base is American, and exactly half are women, and every single one of those women have had an abortion in the last year, and they all live in states where it's illegal, yeah, half of their customers might end up in jail.
My guess is deniability, just like Apple. Apple wanted to make CSAM detection work, which would have made the iPhone essentially a law-enforcement tool, but when their users hit back, they just made iCloud e2ee. With the number of child predators on FB, I am guessing that Meta wants to wash their hands of responsibility.