Two points:

- Focusing on adding e2e encryption for messaging services serves as a nice deflector for FB's privacy issues. Their advertising systems aren't going to change in terms of efficacy and revenue potential if private messages between users are inaccessible to Facebook. I'd be surprised if they're using this data anyway, but it's Facebook, so who knows. In any case, expect the messaging to focus on increasing privacy through message and video encryption, which completely ignores the underlying issue of the profile-building and behavior modification that Facebook's non-messaging platforms allow, an issue I didn't hear any plans for addressing.

- IMHO, pretty much all posturing around privacy by Facebook should not be taken seriously until they announce a change to their business model. Since they haven't, it doesn't take much effort to tease out the rest: their business model relies upon surveilling user behavior and selling behavior modification products, so you can expect no announcements of product changes that would significantly undermine those efforts in the name of privacy until the business model changes. Everything until then is at best noise, and at worst dishonest framing designed to take the heat off of them among those who are ignorant of the underlying dynamics, like regulators or the general public.




Just a note: they do "read" your messages, for example to suggest Spotify music to you. I'm not sure if that's still a thing; it came out around the same time I stopped using FB.

https://www.theverge.com/2017/8/14/16143354/facebook-messeng...


Just a note on your first point. End-to-end encryption only encrypts messages in transit.

This does not make messages inaccessible to Facebook, as they control both endpoints.


End to end means from client to client. Facebook wouldn't be able to see the messages.


Not really. End-to-end encryption means messages leave the app encrypted and only the recipient's app will be able to read them. The middleman remains.

A good analogy: it's like writing a letter and asking the mailman to put it into an envelope for you, so he leaves the room and comes back with your sealed envelope.

The mailman then looks at you and says "I won't read it, I promise." Wink wink.

That's end-to-end encryption for the commons.


The main gap in trust is that Facebook does not disclose their source code or provide a way for users to confirm their device is running the published code. Fundamentally, if their implementation properly implements a published e2e protocol, they should not be able to read the messages, since the only things traveling in the clear over the wire through their servers are public keys.
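To make that concrete, here's a minimal sketch of the public-key idea using PyNaCl (my choice of library for illustration; it implies nothing about Facebook's actual implementation). The "server" is implicit: it only ever relays public keys and ciphertext.

    # Minimal e2e sketch with PyNaCl (pip install pynacl).
    from nacl.public import PrivateKey, Box

    # Each client generates its keypair locally; private keys never leave the device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob with her private key and his public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # The relay in the middle sees only ciphertext.
    # Bob decrypts on his end with his private key and Alice's public key.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"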


> The main gap in trust is that Facebook does not disclose their source code

Nah, nobody gives a damn about the source code, or about reproducible builds to ensure the binary they're executing was compiled from that source.

The main gap in trust is that Facebook has a long history of lying and cheating to maximize its own gain, so there's no basis to trust that their new moves are good for users.

But conspiracy theories about e2e being read by Facebook are probably bogus, and certainly a distraction.

Even though the source is closed, I'd bet they're doing a credible job of securing messages so that even Facebook can't read them.

That's not the issue, it's a distraction from what's really important.

What's really important is that Facebook has lost control of the monster it created. This is a way to let the monster loose and avoid accountability.

Their platform amplifies harmful content like incitement to violence, terrorist recruiting and coordination, and political propaganda.

By encrypting everything so even Facebook can't read it, Facebook escapes accountability for the harm their platform inflicts on people.

Much like a chemical company dumping toxic waste into public water, Facebook is dumping its pollution on the public, using strong encryption to make it physically impossible for Facebook to control the monster it created.


Facebook's F8 keynote stated repeatedly that in person-to-person messaging "even Facebook" would not be able to decrypt those messages.


If one reads that statement carefully, it says nothing about whether Facebook can read messages before encryption. It only says they wouldn't be able to decrypt them once encrypted.


Read a bit more carefully and you'll see "decrypt" isn't in quotes, and as such is my word, not Zuckerberg's.

The keynote's online. (https://www.facebook.com/FacebookforDevelopers/videos/422572...) He mentions end-to-end encryption a variety of times, but one example is at about 15:23, where he states, and here I do quote, "without having to worry about hackers, governments, or even us being able to see what you're saying".

Now, skepticism about Zuckerberg and Facebook is warranted, but my recollection of the keynote is that statements like this didn't leave much wiggle room on this particular point. They were playing word games in other areas, like abusing the term "interoperability" to mean "between the different Facebook-owned apps", but I don't think they were here.


They also own the client, so it'd be trivial to send the data back to Facebook after it's decrypted on the client. It'd be really stupid to do that, but it's Facebook.


I think that what he means is if Facebook-owned apps are at each end, then end-to-end encryption means less because Facebook has access to both end points. You don't have to MITM a connection if you have access to the ends.


> End to end means from client to client. Facebook wouldn't be able to see the messages.

Maybe they'll start training a personalized ML model on client devices and maybe even send it back to the mothership for further exploitation.


This kind of stuff is interesting, and I think you're on the right track (simply based on intuition; I'm not really informed on this at all). I'm interested in compressing models for performance on devices with less computational power, like Google's Learn2Compress (https://ai.googleblog.com/2018/05/custom-on-device-ml-models...)
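For what it's worth, the on-device pattern being speculated about looks roughly like this in plain NumPy. Every name here is made up for illustration and implies nothing about what Facebook actually ships:

    # Hypothetical on-device training: the raw data stays local,
    # but the learned weights could be shipped back.
    import numpy as np

    rng = np.random.default_rng(0)
    FEATURES = 8  # made-up feature count

    # Pretend these are interactions logged on the device.
    X = rng.normal(size=(100, FEATURES))
    y = (X @ rng.normal(size=FEATURES) > 0).astype(float)

    # Tiny logistic regression trained locally with gradient descent.
    w = np.zeros(FEATURES)
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)

    # The privacy question: nothing above left the device, but this would.
    # upload_weights(w)  # hypothetical call back to the mothership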


Have you read the client code? How would you know?

Just because the protocol is well-formed doesn’t mean the totality of the implementation is trustworthy.

#ShowUsTheCode


Read the client code? Bah! How do you know that's what's in the compiled binary?


Reproducible builds. This is pretty much exactly their purpose.
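As a sketch, the verification step they enable looks something like this (the paths are hypothetical):

    # Build the app from source yourself, then byte-compare your build
    # against the binary the store shipped you.
    import hashlib

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    shipped = sha256("messenger-from-app-store.apk")     # hypothetical path
    rebuilt = sha256("messenger-built-from-source.apk")  # hypothetical path
    print("binaries match" if shipped == rebuilt else "binaries differ")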


The non-facetious point here is that you have to root your trust in something (whether that's the maker of your reproducible build system, or your device, or your app, or the online service you use, or the chip foundry that made the CPU that runs your built-from-scratch-paranoid-OS).

It's better to have to trust somewhat verifiable promises about the Facebook app than to have to trust unverifiable promises about Facebook-the-entire-organization. That's the advantage that E2E provides.


Reading the client code means bupkis. Trust is not derived from source code, but from where you got your stack (phone hardware, operating system, build tools, application source code, distribution platform included).

https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...


Clearly it's derived from all of the above. The end goal we have for Hubs (hubs.mozilla.com) is to allow the theoretical limit of auditing to be done by the public when using our hosted services, with respect to both the code and the operations. And of course, you can always run the bits yourself if you don't trust that audit.


I went to hubs.mozilla.com and couldn't figure out what it does. I then tried making/joining a Hub, but it stalled at one of the loading steps, so I still couldn't figure out what a Hub is.

They seem to be VR meeting spaces: https://blog.mozvr.com/introducing-hubs-a-new-way-to-get-tog...

The model you describe seems like a good one, similar to what most Linux distributions do. The distro maintainers are trusted, and they compile packages and distribute the binaries, but people can run the package generation scripts themselves to get their own package straight from the source. Reproducible builds allow users to confirm that the maintainers aren't doing anything sketchy. That probably isn't a possibility here, but this model is still far better than what FB/WhatsApp and even Signal do.


Sorry you hit a snag. Yes, we're building a web-based avatar-centric communications tool, which also supports VR. If you have more info on your setup (browser, OS, link to the room that failed) and what you saw that'd be fantastic so we can fix it! Feel free to email me directly gfodor at mozilla.com.


This.

Although when it comes to Facebook, we have to take them at their word that it's truly end-to-end.


You can use the same line of argument to say the messages are accessible to whoever makes the device, since they could in principle monitor the message. It's kind of an empty criticism.


Exactly. Hence their attempt some years ago at selling Facebook-integrated smartphones. End-to-end encryption, wink wink ;)


I can't tell if this comment is facetious or not. It seems to argue that Facebook tried to market a mobile phone so that they could defeat the end-to-end encryption they planned to offer in their applications.


Does your reasoning apply to other companies? Coca-Cola, for example, is very interested in building profiles and modifying behavior. I estimate they're much less sophisticated with respect to building profiles (although if you think in terms of flavors instead of demographics, perhaps not...). They are definitely very effective in terms of behavior modification. The product does significant harm. It also hijacks an evolutionary flaw (people like other people's feedback = FB; people like sweet things = CC).


Not the OP, but I think it definitely does. What's the long-term public health damage caused by disease epidemics like obesity and diabetes? Tens if not hundreds of billions, or even more?

It's time to bring the hammer down on advertising and the manipulation of people's behaviour. Imagine if the government tried to nudge people the way these companies do; we'd never hear the end of it, and rightfully so.


Basically that’s what the field of public health is: https://bioethics.hms.harvard.edu/sites/g/files/mcu336/f/Der...

Non health related example: https://datasmart.ash.harvard.edu/news/article/how-governmen...

Also, any taxes on goods like cigarettes or fuel are nudges. Also zoning.


> Imagine if the government would try to nudge people in the way these companies do

You've never seen a Got Milk ad, have you?


"You've never seen a Got Milk ad, have you?"

That's not the government; it is a private association of "... milk processors and dairy farms."[1]

[1] https://en.wikipedia.org/wiki/Got_Milk%3F


> Does your reasoning apply to other companies? Coca-Cola, for example, is very interested in building profiles and modifying behavior.

(Not the OP) Personally, yes, it applies in full to all other companies. That said, marketing and advertising companies (which Facebook counts as) are the most egregious with this sort of thing.


> ...surveilling user behavior and selling behavior modification products

Excellent description of personalized advertising.


> surveilling user behavior and selling behavior modification

This is fine in my book as long as they involve more sociologists/psychologists in the process and are transparent about unintended consequences and about what behavior modification they are indulging in.

The thing that is becoming clearer and clearer from the accumulating data about people's behavior is that, left to themselves, ALL people have low awareness of their own damaging behavior, whether it's toward themselves, their families, or their communities. Those that do have some awareness have little clue about how to climb out of those holes. Spotting issues early, alerting/educating people about them, and showing them what options they have to improve their own behavior is a huge opportunity to do good.


> This is fine in my book as long as they involve more sociologists/psychologists in the process and are transparent about unintended consequences and about what behavior modification they are indulging in.

Even with your conditions, this would be the exact opposite of "fine" in my book. The involvement of sociologists/psychologists would make it even worse.


It's like being force-fed drugs and then saying that this is ok because doctors have approved it.


Why? These fields have never had access to this level of data. What they bumbled about doing in the past without all the data cannot be used to judge what their impact will be in the future with it.


I'm confused as to what you're saying here. Are you saying (as I originally thought) that the involvement of sociologists and psychologists makes manipulating users more acceptable?

Or are you saying, as it sounds like here, that access to Facebook data is good for the fields of psychology and sociology?

In any case, the involvement of socio/psychologists in the process of user manipulation makes the situation worse precisely because it could very likely make that manipulation more effective.


>Focusing on adding e2e encryption for messaging services serves as a nice deflector for FB's privacy issues.

Although it's not all that's desired, it's something. After all, don't forget that entities much less lawful than Facebook are after your data (like the NSA).


IMHO, e2e encryption is going to quickly become table stakes for any kind of internet-based communications tool. It's well on its way there with the publication of the protocols used in Signal and the various open-source implementations of them.
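As an illustration of how accessible the building blocks have become, here's the X25519 key agreement that Signal's protocol is built on, sketched with the Python `cryptography` package (the real Signal protocol layers X3DH and the Double Ratchet on top of this):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key
    # and arrives at the same shared secret.
    alice_shared = alice_priv.exchange(bob_priv.public_key())
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    assert alice_shared == bob_shared

    # Derive a symmetric session key from the shared secret.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"session",
    ).derive(alice_shared)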



