Facebook's Zuckerberg Preaches Privacy, but Evidence Is Elusive (bloomberg.com)
202 points by pseudolus on May 1, 2019 | 115 comments



Two points:

- Focusing on adding e2e encryption for messaging services serves as a nice deflector for FB's privacy issues. Their advertising systems aren't going to change in terms of efficacy and revenue potential if private messages between users are inaccessible to Facebook. I'd be surprised if they're using this data anyway, but it's Facebook, so who knows. In any case, expect the messaging to focus on increasing privacy through message and video encryption, which completely ignores the underlying issue of profile-building and behavior modification that Facebook's non-messaging platforms enable, an issue I heard no plans to address.

- IMHO, pretty much all posturing around privacy by Facebook should not be taken seriously until they announce a change to their business model. Since they haven't, it doesn't take much effort to tease out the rest: their business model relies upon surveilling user behavior and selling behavior modification products, so you can expect no announcements of product changes that would significantly undermine those efforts in the name of privacy until the business model changes. Everything until then is, at best, noise; at worst, it's dishonest framing meant to take the heat off of them among those who are ignorant of the underlying dynamics, like regulators or the general public.


Just a note: they do "read" your messages, for example to suggest Spotify music to you. I'm not sure if that's still a thing; it came out around the same time I stopped using FB.

https://www.theverge.com/2017/8/14/16143354/facebook-messeng...


Just a note on your first point. End-to-end encryption only encrypts messages in transit.

This does not make messages inaccessible to Facebook, as they control both endpoints.


End to end means from client to client. Facebook wouldn't be able to see the messages.


Not really. End-to-end encryption means messages will leave the app encrypted and only the recipient's app will be able to read them. The middleman remains.

A good analogy: it's like writing a letter and asking the mailman to put it into an envelope, so he leaves the room and comes back with your sealed envelope.

The mailman then looks at you and says, "I won't read it, I promise." Wink wink.

That's end-to-end encryption for the commons.


The main gap in trust is that Facebook does not disclose its source code or provide a way for users to confirm their device is running the published code. Fundamentally, if their implementation properly implements a published e2e protocol, they should not be able to read the messages, since the only things traveling in the clear over the wire through their servers are public keys.
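To make the mechanics concrete, here's a minimal sketch of that pattern using PyNaCl (a libsodium binding). This illustrates generic public-key e2e messaging, not Facebook's actual protocol:

    # Minimal e2e sketch with PyNaCl (pip install pynacl).
    # Illustrative only -- not Facebook's actual protocol.
    from nacl.public import PrivateKey, Box

    # Each client generates its own keypair; private keys never leave the device.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Only the public keys are exchanged through the server.
    alice_pk, bob_pk = alice_sk.public_key, bob_sk.public_key

    # Alice encrypts for Bob; the server only ever relays this ciphertext.
    ciphertext = Box(alice_sk, bob_pk).encrypt(b"meet at noon")

    # Bob decrypts with his private key and Alice's public key.
    assert Box(bob_sk, alice_pk).decrypt(ciphertext) == b"meet at noon"

The catch, as others note below, is that Facebook ships the client on both ends, so nothing in the protocol stops the app itself from leaking plaintext before encryption or after decryption.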


> The main gap in trust is that facebook does not disclose their source code

Nah, nobody gives a damn about the source code, or reproducible builds to ensure the binary they're executing was compiled with that source.

The main gap in trust is that Facebook has a long history of lying and cheating to maximize its own gain, so there's no basis to trust that their new moves are good for users.

But conspiracy theories about e2e being read by Facebook are probably bogus, and certainly a distraction.

Even though the source is closed, I'd bet they're doing a credible job of securing messages so that even Facebook can't read them.

That's not the issue, it's a distraction from what's really important.

What's really important is that Facebook has lost control of the monster it created. This is a way to let the monster loose and avoid accountability.

Their platform amplifies harmful content like incitement to violence, terrorist recruiting and coordination, and political propaganda.

By encrypting everything so even Facebook can't read it, Facebook escapes accountability for the harm their platform inflicts on people.

Much like a chemical company dumping toxic waste into public water, Facebook is dumping its pollution on the public, using strong encryption to make it physically impossible for Facebook to control the monster it created.


Facebook's F8 keynote stated repeatedly that in person-to-person messaging "even Facebook" would not be able to decrypt those messages.


If one reads that statement carefully, it says nothing about whether Facebook can read a message before encryption. It only says they wouldn't be able to decrypt it once encrypted.


Read a bit more carefully and you'll see "decrypt" isn't in quotes, and as such is my word, not Zuckerberg's.

The keynote's online. (https://www.facebook.com/FacebookforDevelopers/videos/422572...) He mentions end-to-end encryption a variety of times, but one example is at about 15:23, where he states, and here I do quote, "without having to worry about hackers, governments, or even us being able to see what you're saying".

Now, skepticism about Zuckerberg and Facebook is warranted, but my recollection of the keynote is that statements like this didn't leave much wiggle room on this particular point. They were playing word games in other areas, like abusing the term "interoperability" to mean "between the different Facebook-owned apps", but I don't think they were here.


They also own the client so it'd be trivial to send the data back to Facebook after it's decrypted on the client. It'd be really stupid to do that, but it's Facebook.


I think that what he means is if Facebook-owned apps are at each end, then end-to-end encryption means less because Facebook has access to both end points. You don't have to MITM a connection if you have access to the ends.


> End to end means from client to client. Facebook wouldn't be able to see the messages.

Maybe they'll start training a personalized ML model on client devices and maybe even send it back to the mothership for further exploitation.


This kind of stuff is interesting, and I think you're on the right track (based purely on intuition; I'm not really informed on this at all). I'm interested in compressing models for performance on devices with less computational power, like Google's Learn2Compress (https://ai.googleblog.com/2018/05/custom-on-device-ml-models...).
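As a toy illustration of the scenario upthread, here's what on-device training with only model updates sent back could look like. This is a hypothetical sketch of the general federated-learning idea, not anything Facebook is known to ship:

    import numpy as np

    def local_update(weights, X, y, lr=0.1, steps=10):
        # A few steps of linear-regression SGD on one device's private data.
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w - weights  # only this delta leaves the device

    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

    # The server averages the deltas; raw data never leaves the devices,
    # though the deltas themselves can still leak information about it.
    deltas = [local_update(global_w, X, y) for X, y in devices]
    global_w += np.mean(deltas, axis=0)

Whether those uploaded updates count as "privacy-preserving" is exactly the kind of question the parent is raising.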


Have you read the client code? How would you know?

Just because the protocol is well-formed doesn’t mean the totality of the implementation is trustworthy.

#ShowUsTheCode


Read the client code? Bah! How do you know that's what's in the compiled binary?


Reproducible builds. This is pretty much exactly their purpose.
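For the unfamiliar, the check is just that independently compiled binaries hash identically to what the vendor ships. A minimal sketch, with hypothetical file paths:

    import hashlib
    from pathlib import Path

    def sha256(path):
        # Hash a build artifact so independent builds can be compared.
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # If the build is reproducible, a binary you compile yourself from the
    # published source matches the vendor's distributed binary bit for bit.
    # (In practice, embedded signatures must be stripped or compared separately.)
    assert sha256("my_build/app.apk") == sha256("vendor_release/app.apk")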


The non-facetious point here is that you have to root your trust in something (whether that's the maker of your reproducible build system, or your device, or your app, or the online service you use, or the chip foundry that made the CPU that runs your built-from-scratch-paranoid-OS).

It's better to have to trust somewhat verifiable promises about the Facebook app than to have to trust unverifiable promises about Facebook-the-entire-organization. That's the advantage that E2E provides.


Reading the client code means bupkis. Trust is not derived from source code, but from where you got your stack (phone hardware, operating system, build tools, application source code, distribution platform included).

https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...


Clearly it's derived from all of the above. The end goal we have for Hubs (hubs.mozilla.com) is to allow the theoretical limit of public auditing when using our hosted services, with respect to both the code and the operations. And of course, you can always run the bits yourself if you don't trust that audit.


I went to hubs.mozilla.com and couldn't figure out what it does. I then tried making/joining a Hub but it got stalled at one of the loading steps so I still couldn't figure out what a Hub is.

They seem to be VR meeting spaces: https://blog.mozvr.com/introducing-hubs-a-new-way-to-get-tog...

The model you describe seems like a good one, similar to what most Linux distributions do. The distro maintainers are trusted and they compile packages and distribute the binaries, but people can run the package generation scripts themselves to get their own package straight from the source. Reproducible builds allow users to confirm that the maintainers aren't doing anything sketchy - that probably isn't a possibility here, but this model is still far better than what FB/WhatsApp and even Signal do.


Sorry you hit a snag. Yes, we're building a web-based avatar-centric communications tool, which also supports VR. If you have more info on your setup (browser, OS, link to the room that failed) and what you saw that'd be fantastic so we can fix it! Feel free to email me directly gfodor at mozilla.com.


This.

Although when it comes to Facebook, we have to take them at their word that it's truly end-to-end.


You can use the same line of argument to say the messages are accessible to whoever makes the device, since they could in principle monitor the message. It's kind of an empty criticism.


Exactly. Hence their attempt some years ago at selling Facebook-integrated smartphones. End-to-end encryption, wink wink ;)


I can't tell if this comment, which seems to make the argument that Facebook tried to market a mobile phone so that they could defeat the end-to-end encryption they planned to offer in their applications, is facetious or not.


Does your reasoning apply to other companies? Coca-Cola, for example, is very interested in building profiles and modifying behavior. I estimate they're much less sophisticated w/r/t building profiles (although if you think in terms of flavors instead of demographics, perhaps not...). They are definitely very effective in terms of behavior modification. The product does significant harm. It also hijacks an evolutionary flaw (people like other people's feedback = FB; people like sweet things = CC).


Not the OP, but I think it definitely does. What's the long-term public health damage caused by disease epidemics like obesity and diabetes? Tens if not hundreds of billions, or even more?

It's time to bring the hammer down on advertising and the manipulation of people's behaviour. Imagine if the government tried to nudge people the way these companies do; we'd never hear the end of it, and rightfully so.


Basically that’s what the field of public health is: https://bioethics.hms.harvard.edu/sites/g/files/mcu336/f/Der...

Non health related example: https://datasmart.ash.harvard.edu/news/article/how-governmen...

Also, any taxes on goods like cigarettes or fuel are nudges. So is zoning...


> Imagine if the government would try to nudge people in the way these companies do

You've never seen a Got Milk ad, have you?


"You've never seen a Got Milk ad, have you?"

That's not the government - it is a private association of "... milk processors and dairy farms."[1]

[1] https://en.wikipedia.org/wiki/Got_Milk%3F


> Does your reasoning apply to other companies? Coca-cola, for example, is very interested in building profiles and modifying behavior.

(Not the OP) Personally, yes, it applies in full to all other companies. That said, marketing and advertising companies (which Facebook counts as) are the most egregious with this sort of thing.


> ...surveiling user behavior and selling behavior modification products

Excellent description of personalized advertising.


> surveiling user behavior and selling behavior modification

This is fine in my book as long as they involve more sociologists/psychologists in the process and are transparent about unintended consequences and about what behavior modification they are indulging in.

The thing that is becoming clearer and clearer from the accumulating data about people's behavior is that, left to themselves, ALL people have low awareness of their own damaging behavior, whether it's damaging to themselves, their families, or their communities. Those that do have some awareness have little clue how to climb out of their holes. Spotting issues early, alerting/educating people about them, and showing them what options they have to improve their own behavior is a huge opportunity to do good.


> This is fine in my book as long as they involve more sociologists/psychologists in the process and are transparent about unintended consequences and about what behavior modification they are indulging in.

Even with your conditions, this would be the exact opposite of "fine" in my book. The involvement of sociologists/psychologists would make it even worse.


It's like being force-fed drugs and then saying that this is ok because doctors have approved it.


Why? These fields have never had access to this level of data. What they bumbled about doing in the past without all the data cannot be used to judge what their impact is going to be in the future with it.


I'm confused as to what you're saying here. Are you saying (as I originally thought) that the involvement of sociologists and psychologists makes manipulating users more acceptable?

Or are you saying, as it sounds like here, that access to Facebook data is good for the fields of psychology and sociology?

In any case, the involvement of socio/psychologists in the process of user manipulation makes the situation worse precisely because it could very likely make that manipulation more effective.


> Focusing on adding e2e encryption for messaging services serves as a nice deflector for FB's privacy issues.

Although it's not all that's desired, it's something. After all, don't forget that entities with much less legal accountability than Facebook are after your data (like the NSA).


IMHO, e2e encryption is quickly going to become table stakes for any kind of internet-based communications tool. It's well on its way, given the publication of the protocols used in Signal and the various open-source implementations of them.


> “It’s going to take time,” Zuckerberg said of Facebook’s privacy-focused future. “I’m sure we’re going to keep on unearthing old issues for a while, so it may feel like we’re not making progress at first. But I think that we’ve shown, time and again as a company, that we can do what it takes to evolve and build the products that people want.”

IOW, they'll talk about privacy for a while until everyone stops talking about how evil FB is. Then they'll slowly stop talking about it and work their way back down to where they are now. All the while not actually doing anything at all to change.


It's about FB/Zuckerberg redefining "privacy" as something between FB users, and speaking to that. They don't have to work their way back down from that. Privacy between users and FB (i.e., none, or even negative) isn't mentioned at all.


> It's about FB/Zuckerberg redefining "privacy" as something between FB users, and speaking to that

Even more so, it'll be about FB redefining "privacy" as "whatever FB is doing with their users," then lobbying themselves into regulatory capture for the nation (globe?) as a whole.

I forget where I heard it, but there's an old aphorism, "of course you're free, because this is what freedom looks like."


This is 100% accurate given that it's exactly what they have done time and again since almost their founding.


This reminds me of when the cloud started to get hip and executives would insert the word everywhere. "Cloud" in 2019 ~= "privacy" or "AI".


"We need to synergize with The Cloud so we can synthesize profits going forward!"


Have there ever been any privacy limitations that Facebook placed on itself that actually accomplished anything?

We've seen internal memos raising questions get effectively ignored. Facebook's own actions seem to indicate there is no limit to what they'll do to users. Their own actions seem to indicate there is no limit to what they'll do to the companies they partner with, specifically lying about what level of access they're providing. Their own actions seem to indicate they don't care even when they operate on another platform: when they released a traffic-monitoring VPN on Apple's App Store and it was removed by Apple, they just renamed it and put it out there again.

Facebook as an entity seems antithetical to privacy at its core. It couldn't have grown into what it is with limits... I can't imagine they are capable of being anything else.


That depends on the definition of "privacy". Over time they have limited access to their APIs, so that a new "Cambridge Analytica" would have a harder time extracting large amounts of data.

Of course that serves their business: in the beginning they had to be the hub everybody connects to, to get as much attention as possible. Now their business is to monopolize the data, so that ads are sold via their systems.

Framing that as privacy is a great strategy, from Facebook's perspective.


> Framing that as privacy is a great strategy, from Facebook's perspective.

Indeed, because it lets them pretend like they're defenders of privacy while being able to completely ignore the privacy threat that Facebook itself presents.


The whole "Cambridge Analytica" affair seems to involve a limit (one we don't know is actually there or working...) that was only needed because... they weren't limiting anything before.

I'm just not sure how that works.

I do agree that their privacy motivations may be a way to make sure that "we got ours, now nobody else should".


> Facebook's own actions seem to indicate there is no limit to what they'll do to users.

I actually think there is a limit. That limit is that they won't do anything that would cause an exodus of users.

It's like the old saying: "find the amount of tyranny that people will tolerate, and you've found the amount of tyranny that they live under."


> “A lot of the focus is on changing the way that consumer-to-consumer interaction works,” said Greg Sparrow, senior vice president and general manager at CompliancePoint, a data privacy and security consultancy. “While that is laudable and it’s great that they’re doing that, but fundamentally it doesn’t address the problem on the back-end side, which is businesses gaining access to this information and how they’re using it from a data monetization perspective.”

Bingo. Most people on Facebook are well aware of how "public" their posts are but aren't aware of how public their personal information is to advertisers. Facebook's new "privacy" focus isn't intended to solve the platform's real privacy problem, it's intended to distract from it.

Edit: Spelling


> Most people on Facebook are well aware of how "public" their posts are but aren't aware of how public this personal information is to advertisers.

How public is it, then? As a Facebook advertiser I've never seen this elusive personal information collected from the masses, available for indiscriminate pickings. Facebook only sells access to eyeballs coupled with anonymized targeting based on this personal information you refer to, not the information itself.


The average Facebook user is largely unaware of how Facebook tracks their activity far beyond what they say and do on facebook.com in order to harvest data about their personal lives: their financial situation, their relationship status, their medical history, etc. Just because you can't download a file of someone's personal information "for indiscriminate pickings" doesn't mean they aren't selling access to it.

For example, someone might be gay and not yet have told friends and family. Facebook probably knows from their browsing history. How difficult would it be for someone to run an ad on Facebook, cleverly disguised as an "article" to encourage clicks, targeting gay people in a particular region? Five minutes, tops. Well, every gay person who clicks on that link has just given away their IP address and location information, and the purchaser of that advertisement has a pretty accurate list of gay people and where they might be located. Hopefully they're using that list for benign purposes, but who's to say?

Do you think the average Facebook user is aware of how their information is leaked by simply clicking on a link in a Facebook advertisement?


You (or someone who believes the same) should actually try this experiment and see if the reality matches the expectation, to any level of (potentially) destructive accuracy.


Here's a paper that describes it, confirmed with real profiles, not just speculation: https://hal.archives-ouvertes.fr/hal-01955327/document


Thank you, I appreciate the reference. I'll take a deep look at this.


> Facebook only sells access to eyeballs coupled with anonymized targeting based on this personal information you refer to, not the information itself.

True, but that isn't a whole lot better. Surveillance companies like Facebook like to make a big deal out of this, but I don't think it means as much as they like to pretend.


With each click on an ad, IP address and more are leaked to the advertiser. Couple that with fine targeting, and it makes a dangerous combination.


What danger do you foresee in your thought experiments?


The danger that I foresee is that additional information about me will leak to companies that I don't want it leaked to.

That is sufficient all by itself, even if that information is never overtly used in a way that personally harms me.


Microtargeted mass manipulation for questionable political ends

https://en.m.wikipedia.org/wiki/Cambridge_Analytica#Methods


Hi user, we noticed you've clicked on and visited a number of sites for gay bars while on vacation. Your home government in XYZ is cracking down on "immoral behavior" and has compelled us to send them a list of users who have interests like yours.

Yes we know you never publicly posted about being gay or joined a group for gay men, but your internet history says differently.


Why wouldn't they just go to that user's ISP and get that info?


Do you typically use your home ISP "while on vacation"?


Facebook has sold hundreds of billions of clicks globally to advertisers, most likely trillions. Has there been a single incident of the scenario you've described?


>>> What danger do you foresee in your thought experiments?

>> [example foreseen danger]

> Has there been one incident of the scenario you've described?

Don't shift the goal posts. You asked for a foreseen danger. @britch gave an example foreseen danger.

I also think the example is unfortunately realistic. I would not be surprised if in the near future repressive governments attempt to compel data brokers, pimps, and hoarders to provide information on 'undesirables'.


Isn't this at least the third time he's said these things?

Unless there is a complete engineering stop, the development of privacy policies and tools, and a review of all existing tools and processes, the necessary cultural change will not occur.

Facebook has the money to do that, and could possibly drive positive change throughout the industry if they did something like this, the way Microsoft made the Security Development Lifecycle a thing in the early 2000s.

But I'm already reading news that all Facebook messaging platforms will be able to send messages to each other, which surely violates everything WhatsApp stood for in the past. There are also tons of privacy questions that come up with such merging of technologies and the data they connect.


Cookie Monster Preaches Restraint, but Evidence is Elusive


This is such a fantastic analogy I'm not sure if it's funnier or scarier


Right?

Don't forget that Zuckerberg started out making an app to get laid. It's a long, long road from there to preaching ethics, and whether or not he has actually reformed, he has not exhibited that reformation.


"Privacy?" Hah! More like "plausible deniability for fueling ethnic violence." If they can't read the messages, then they aren't responsible. Meanwhile, most of the useful ad targeting data is in the metadata, which they still collect. They're truly shameless.

This is also funny, in a grim sort of way:

> Late last year, Facebook admitted that Clear History is taking longer than expected -- it turns out that browsing data, which the company uses to help send more targeted advertising to users on its social platforms -- is more deeply ingrained into Facebook’s systems than anyone realized. Simply finding and deleting the correct data without disrupting Facebook’s advertising and analytics businesses has been a big enough challenge that the product hasn’t gotten off the ground...

If Facebook were serious about this, they could easily implement a "nuke all of my data" option that would wipe all of your history. Not just the stuff gathered from beacons, but everything, so people could start over from scratch with their new understanding of how Facebook operates.

But they clearly aren't serious. I suspect that the only real "clear history" option will be exporting your data, deleting your profile, and making a new one, preferably using a new email address from a public computer.


Facebook will not change until there is sufficient external pressure - be that an authority, competition, etc. - to force it to change. End of. We've seen the same story for the last couple of years; Facebook is bad with privacy, Facebook makes token attempt to improve privacy, Facebook gets busted for being bad with privacy, and repeat.


Facebook will never change... it will just become irrelevant one day.


I'm fine with that. Sooner the better.


The unfortunate part is that Facebook is designed to replicate the addictive qualities of sugar, and of even heavier drugs like cocaine.

As a result, it's not that you can't make your own social platform, it's that your platform lacks critical components for instant gratification.

Maybe I am wrong though. Open to discussion for my perspective.


You're correct in that it is focused on making the user's brain feel good. I think a more accurate comparison, however, would be casino gambling. The slot machines ding at you and flash lights and make your brain release the dopamine. Facebook has the users on a similar bent via notification graphics and sounds, feeds, and various other things. It's the same idea though, and it is legitimately addictive (I've seen many family members get taken in by it to the point where they get anxiety if they can't constantly check their notifications or whatever).


I have been putting thought into a social platform that would truly be for users and not just a gamification of attention.

- First it would have a cost, either in money to pay someone to develop/run the platform or in time/knowledge to tinker with some open-source solution. That shouldn't be a problem as the target market is the group of people who are making a conscious choice to dodge the ad platform. BUT it does severely reduce the network effect, possibly down to the point of becoming an ultra-specialized niche club.

- Then, there's the point you bring up: you'll essentially be trying to sell steamed broccoli inside of a candy store.


I've also been thinking about something like this, modeled on a mutual fund. I actually don't think people have an issue with the ads so much as with the lack of transparency and control over the information that's "leaked" by interacting with them. The approach I've been tinkering with is:

- User data is held in a trust with independent trustees (like a mutual fund complex) who oversee and set limits on how the platform company uses the data for advertising.

- Users can "withdraw" their data at any time.

- Instead of micro-targeting individual users, advertisements would be sold to groups (like a mutual fund) based on anonymized data from the users who choose to join that group.

- Users could "invest" in a particular group by joining groups that reflect their interests and creating content/posts in those groups. Daily upvotes would represent a creator's "share" of the advertising proceeds for that group (like dividends), so good content is rewarded.

- Development/maintenance would be paid for by a flat % of advertising revenue taken off the top before the "profits" are distributed to "shareholders" (a toy sketch of the payout math follows below).
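Here's that sketch; all names, numbers, and the fee percentage are hypothetical:

    # Toy payout model for the group-based ad revenue idea above.
    # All figures are hypothetical illustrations.

    def distribute(ad_revenue, upvotes_by_creator, platform_fee=0.15):
        # Split a group's ad revenue among creators by upvote share;
        # the fee off the top funds development/maintenance.
        pool = ad_revenue * (1 - platform_fee)
        total = sum(upvotes_by_creator.values())
        return {creator: pool * votes / total
                for creator, votes in upvotes_by_creator.items()}

    # A group earned $1,000 today; three creators split the pool by upvotes.
    print(distribute(1000.0, {"alice": 60, "bob": 30, "carol": 10}))
    # -> {'alice': 510.0, 'bob': 255.0, 'carol': 85.0}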


> That shouldn't be a problem as the target market is the group of people who are making a conscious choice to dodge the ad platform.

Personally, Facebook and the like have pretty much poisoned the whole idea of "social media" for me. My default position these days is that if it's "social media", then I can't trust it.

I don't know how many people share my attitude. It might not be enough to matter.


This is analogous to Google declaring one fine day that it's getting out of the advertising business, but merely adding security features to do so. MZ describes steps to improve security and to retool history features to have less immortality and permanence, but people already share Facebook posts and profiles as images, not only as stateful markup. People will continue to expose each other, and there's essentially no way Facebook can guarantee privacy in that regard. Limiting partner access to user data will ultimately be a trade-off against what they can afford to lose, and that's not a commitment to privacy either.

His proposed strategy doesn't really make sense and seems like misdirection and lip service. You can't change the way people use Facebook, but you can forgo any pretense of privacy, which may be the only thing that can honestly and realistically be done.


If you have made a pact with FIST (FB/Insta/Snap/Twitter) or the other members of the gang, then it is akin to jumping into a pool of sharks after cutting yourself open: not even the friendliest of sharks is going to resist the urge to rip you apart and feed itself!

These breaches and preaches are now a daily occurrence, having found a new level of tedium, especially since overtly parasitical behaviour has long been admitted and identified within certain ecosystems.


As with investments, the implicit question of whether one can trust Facebook (etc.) going forward is actually orthogonal to their current or previous behaviour. While past behaviour might inform and be predictive to a great degree, there is no way to bind "future Facebook" to any real guarantees. In Debian licensing, we call this the "tentacles of evil" test.


I'm reminded of the scorpion and frog fable for some reason.


It's like an atheist who preaches about religion or a Ponzi scheme fraudster talking about fundamentals of economics.


I think that the new User Interface that Facebook is rolling out will show whether or not Facebook has a real commitment to privacy.


> I think that the new User Interface that Facebook is rolling out will show whether or not Facebook has a real commitment to privacy.

How would you determine if the new user interface provided (or not) any [new or perceived] level of privacy?


I am indifferent to FB. They started with a promise, then they realized the money potential and went ahead abusing it. They were caught and faced some flak. They realized that people have started caring about privacy, hence this new pitch. I don't say that Mark Zuckerberg is evil; he himself is helpless. He can't change the FB DNA overnight; it would risk the very existence of the company. In summary, don't expect things to change anytime soon.


He's arguably the most powerful and protected CEO in America. He literally cannot be ousted from FB by the board.

If one of the most powerful executives in America can't bring about wholesale change to the business and is truly helpless as you describe, who else could possibly come in and make a difference?

If the company's existence is threatened, so be it. They're in this position due to the actions they've chosen, and immortality is guaranteed for no one.


No, he is evil. He called people who gave FB their data "... dumb f#@$s" back when he was 19. It's just that in the beginning they weren't pushing the monetization yet. Now that is all it is about, and people are starting to understand how they are being exploited. The only thing that has changed is that you are aware. HE is the FB DNA.


Not that I disagree with you, but when people bring this up I like to point out that I said a lot of stupid stuff as a teenager, and the vast majority would be lying if they claimed that they didn't, too.


That's a fair point, and I'd be more inclined to give the benefit of the doubt if his behavior had changed in the last 15 years. The recent episode with the VPN app and teens shows that he doesn't care (and they renamed and republished that app multiple times).


> they renamed and republished that app multiple times

Do you have a source for that? I'm only aware of Onavo code being shared with another FB survey app which was running in parallel (i.e., not the same app renamed and republished).


> I said a lot of stupid stuff as a teenager, and the vast majority would be lying if they claimed that they didn't, too.

Sure, but the stupid stuff that I said as a teenager actually reflected true aspects of my attitude and personality. I suspect the same is true of the vast majority of people.


And if saying that stupid stuff made you stupid rich, do you think you would have changed your ways and done the right thing?

He has both the money and the power to do the right thing, yet clearly has no desire to.


> ... I said a lot of stupid stuff as a teenager

Yet this declaration was specifically about his view on privacy at Facebook, not just random shit.


Considering the average age of a Fortune 500 CEO is 57, Zuckerberg is still a teenager...


> I don't say the Mark Zuckerberg is evil

"Evil" is a a loaded term. I think it's more accurate to say that Zuckerberg is knowingly deceptive in an effort to ensure that Facebook can continue to engage in pervasive surveillance without too much public outrage.


Helpless? Facebook should probably change their leadership then.


Helpless in terms of capitalism. Their entire business model depends on privacy violation. Zuck cannot change this without risking the future of the company. He is helpless in this regard.


That's not being helpless. That's making a business decision.


Fool me once, shame on you... Fool me 100 times, shame on me...


I think "elusive" is a particularly kind word...


Why are we still listening to anything that tech CEOs say?

There seem to be absolutely no consequences to their repeated misdirections and outright lies.


LOL, evidence is in plain sight.


People really need to stop listening to what Zuckerberg says and judge him on his actions.

That way, you'll see that the "dumb fucks" comment he made in his youth was not merely an example of immature cockiness tempered over time by maturity, but rather a profound insight into who he is and what he wants.


The Big Lie?


I am producing baking soda. Unfortunately, it also contains arsenic. But it is very popular baking soda and I am making a lot of money. Some people complain, so I pledge that one day I will change my production process to make my baking soda arsenic-free.


Is this an argument for tighter regulation of the internet? The only reason you can't get away with selling baking soda with arsenic is (quite rightly) because of food regulation.


It is unfortunate that we need regulation to prevent people from doing what is ethically wrong.

Ideally the internet would not need regulation, if only people cared more about doing what's right... and not about what makes their shareholders rich.


It is unfortunate. I remember the earlier days of the internet when regulation wasn't brought up like this constantly.

I think the unfortunate reality is that whenever there's money involved, certain players will win out over time and become gigantic, and by the very nature of their being at the top, they are unethical. Naturally, the less ethical and more ruthless money-makers win in the market, all other things being equal relative to their competitors.


If I understand your analogy, you're saying that privacy violations are like food-safety violations: they should be prevented by legislative action and government force. I think the analogy is flawed. When you die from poisoning, it's an irremediable situation; you can't just say, "I'll never do business with them again!" and fix it. Also, you have to eat something; you can't opt out of the whole market. Finally, it directly affects your physical safety, so that intentional disregard of food safety is akin to violence.

Facebook's problems, and privacy violations in general, are different. It's not necessary to engage in mediated social interaction; you can opt out with no loss except convenience. There's no bodily harm, let alone death, so if you're burned once, you can simply never do business with them again.

That means the market could fix this problem. That it hasn't says that there aren't enough people who agree that it is a problem, or that the cost is worth the benefit to them. In a case like that, not legislation but education is the solution.


> if you're burned once, you can simply never do business with them again.

The problem is that avoiding doing business with them does not protect you from their behavior.

> That means the market could fix this problem.

Maybe, maybe not. But generally speaking, the free market cannot fix all problems, and particularly has an issue fixing problems that are imposed on people who already don't do business with the bad actor.


Once they've gathered information about you and sold it, you can't take it back. For most people living in western democracies, living with no privacy won't kill you, but where is the guarantee that your government and the corporations it empowers will always be so benign?


I thought about mentioning that, but even under less liberal governments, the case where an invasion of privacy is fatal is pretty exceptional. I decided not to muddy my exposition with it.



