Facebook denies it collects call and SMS data from phones without permission (techcrunch.com)
411 points by wil_wheat_on on March 26, 2018 | 359 comments



I think we are talking about this permission concept differently. Legally, yes, they had permission. But the fact that they used dark UX patterns to request that permission should not be forgotten.

Even though legally they are in the right, we as users should make this fact irrelevant and just abandon the platform. Let them be right, let them win the argument but lose the battle with the general public.


I'm still completely convinced that this whole scandal will not damage Facebook one bit in the long term. Yes, the stock will be a bit lower for a few weeks, and some people will leave, but the largest part of the Facebook user base has not even realised the full scale of the scandal. They're not going anywhere; they're going to continue as is, since there really is no alternative to Facebook right now, and tbh any alternative would probably just do the same thing to its users, so why bother. The only solution for people concerned with this is giving up on social media altogether. The majority, however, won't care at all and will continue using Facebook.


> the largest part of the Facebook users has not even realized the scandal in full scale.

Exactly. That's why they must be regulated. Pharma companies, food companies, and airlines, to mention three, have the same sorts of characteristics in their businesses.

Examples: Airlines have regulations because the typical user has no way to evaluate maintenance regimes or navigation procedures. The big information-hoovering dotcoms need the same sorts of regulations for the same sorts of reasons. "We obtained user permission" needs to be a stronger claim than "we tricked users into checking a box."


Regulation comes in many forms, and I believe the first step will be the GDPR.


The GDPR is a godsend. It would never have come out of the US; the corporate-funded politicians would have just eaten it alive.

But because European politicians have some integrity left and the internet is inherently global, most internet companies need to do this for everyone.

Yes, it's not perfect, but it's a great first step.


The internet is as inherently global as the postal service is. What differentiates it is that it's faster, cheaper, and immaterial. In principle, the legal situation hasn't changed compared to before the internet was a thing. If you just want to do business in one country over the internet, you can do so. What has changed is that a company and a customer can reach each other easily, cheaply, and quickly across national borders. The drawback to international business is having to comply with each country's laws, rules, and policies.

With that in mind, the Project Gutenberg lawsuit makes sense. Imagine a service that copies a requested book and sends you that copy, all by post. If that company shipped into a country where the book in question is still protected by copyright, it is clear that there would be legal action.


No, in fact, airlines have regulations because big flying objects loaded with fuel are seriously dangerous and can kill (not in the un*x console meaning of the word) lots of people at once. Very unlike social platforms, to be fair.

If regulations were introduced because users don't know what's good and beautiful, believe me, the IT industry would be the most regulated thing in the world.


> Very unlike social platforms

A social media platform that can be used to organize and manipulate hundreds of millions of people globally is far more dangerous than any airplane.


Don't you think that this "organize and manipulate hundreds of millions of people globally" thing is a very blurry fear-laden sensationalist wording, which practically implies that social platforms possess some kind of hypnotic power to drive humans to do something on orders from secret circles?


Not at all. People are rather easy to manipulate, if you control the flow of information. Hypnotic power is not required.

The formula is simple: promote emotionally powerful events that support your agenda, ignore events that inspire feelings counter to your agenda, deify supporters, demonize opponents, censor and ostracize anyone who speaks against the agenda.

That formula has led nations to war, sent innocent people to prison, changed laws, overthrown governments, led to acts of terror, and caused hundreds of millions (perhaps billions) of deaths throughout history.


Well, with your formula you have just described the work of the media since the invention of the printing press. However, it doesn't work as easily as you describe. Almost every political campaign tries to use this playbook, and the absolute majority of them fail. In fact, if you look for the best historical examples fitting your argument, you may find they generally come from countries that strictly regulated their media.


For full effect, it requires control of the flow of information. It has indeed often failed in America and other countries where people had relatively free access to information.

But with digital information replacing newspapers, magazines, and TV, and with access to digital information being filtered through a handful of companies, those companies have immense power approaching that of governments that strictly regulate the media.


airplanes are kind of unlike banks and trading markets too yet those are also heavily regulated.


To be fair, banks and trading markets affect the economy directly, which is a primary concern of the government.


Sort of. For now.


> That's why they must be regulated.

I don't agree with GP's assessment that users "haven't realized", but I do agree that the "majority however won't care". So why regulate if a majority of people are ok with it? They're not stupid; they just don't prioritize the way some here do, and there doesn't have to be a law forcing them to prioritize this way. The false equivalence to the expertise of airplane mechanics doesn't hold in a security context, and these kinds of analogies only make others dismiss the point more quickly. You want to regulate transparency? Ok. But if some people accept these things, let them.


> They're not too stupid, they just don't prioritize the way some here do

While I agree that most people aren't stupid, I would argue that many people are ignorant. They simply don't know why they should care, probably due to a lack of understanding what it means for a company to have a copy of their data, and what that could enable those companies to do and figure out.


> While I agree that most people aren't stupid, I would argue that many people are ignorant

If that's true, solve that problem directly via education and enforcing transparency. There are also other solutions such as grants w/ stipulations, enforcement of existing fraud statutes, etc.

Let's assume there are 3 segments of people here: 1) the ignorant, 2) the non-ignorant accepting, and 3) the non-ignorant unaccepting. We have to stop letting #3 (arguably the smallest group, but probably consisting of most people here) run the show wrt laws. There are too many consequences to pieces of legislation that all of us business owners have to conform to, regardless of whether it is targeted at us. Most often we don't even recognize the consequences because we are so focused on how much good we're going to do and how much we're going to help everyone. I dub it digital security theater.


> If that's true, solve that problem directly via education and enforcing transparency.

This doesn't always apply. Should car regulations be abolished in favor of educating the ignorant?


Of course it doesn't always apply. Of course car regulations (or the aforementioned airline ones) have a purpose. Not sure why this physical-safety equivalence keeps coming up. You have to take each problem at face value instead of bending to analogies and what-ifs. We have to stop guessing and throwing legislative spaghetti at walls hoping it sticks. There are paths that arrive at legislation, but it's not the first brick in the path.


Many look to Twitter as their new social media platform, together with IG (which is owned by Facebook, sure, but it's "harder" to be political there, which is what it all comes down to in the end) and Snapchat as a messenger.

Google+ failed, so why wouldn't Facebook eventually do so too? The youth don't have Facebook; it's mostly the 20-40 year olds who do. This 20-40 generation follows mainstream media news, and most of them are capable of realizing the harm in not protecting one's own privacy. Also, the current theme of news regarding social media is that it's being seen as a threat to democracy, due to the ease of mass manipulation through political propaganda. A huge attack! Threaten democracy and the people will hate it.


Google+ had all the necessary requirements to be successful, but it failed at creating the network effect.

Facebook does have the network effect, so killing its network effect will be much more difficult.


The reverse network effect is also real. People who leave, for whatever reason, reduce the value of the service to others. A service quickly goes from the first place to keep in contact to yet another place to post event announcements and not much else.


If Google+ were launching today, they might have a better chance. More people are interested in Facebook alternatives. At the time, Facebook had already supplanted MySpace and Google+ didn't offer anything attractive enough to switch again.


There's no point swapping Facebook for another advertising-driven social network in the same vein. If a company like Apple were running it and people were paying for it, that would be a different matter.

Google have contributed to privacy issues, e.g. with poor permission management.


I'd say the big difference with Google is that they can treat a social network as a loss leader. Google has other (far more profitable) privacy-invading revenue sources; Facebook doesn't.

It's not an ideal solution, however. If as a society we have decided social networks are important, a not-for-profit or a decentralised, federated scheme is probably the way to go.


Running a social website should be cheap enough that they don't need to be invasive. The company would be leaving money on the table, as it were, but winning is worth enough that it's a good trade-off.


"If a company like Apple were running it and people were paying for it, that would be a different matter."

Oooh, if only Apple did "social" apps well; they are famously not-so-great at them, though I'm not sure where that comes from at root.

Shooting from the hip: maybe their intense internal secrecy translates into a worldview where a hyper-focus on the 1:1 relationship with their customer obscures their ability to see how their customers (and potential non-customers!) could interact with each other; it might even be dangerous.


Google could not be more confusing. I spend 10 hours a day on the computer, and I will avoid Google stuff because it’s just not worth the time it takes to educate yourself about their nonsense, so I wouldn’t expect anyone less tech savvy to want to bother with them either.


The main problem with Google is that I expect any service they build (beyond their biggest ones) to be shutdown eventually (probably sooner rather than later). It's not worth taking a chance on them anymore.


That's amusing considering Microsoft has shut down more products and services than Google ever has. Apple is also up there.


Except Insta's sponsored ads are more pernicious (from a mobile UX pov) than FB's, including the fact that FB still tracks you on every site. All credit to the Zucc for buying Insta and WhatsApp, since the entire world (especially on WhatsApp) is locked into that ecosystem.


I don't have a single non-technical friend who cares. They've assumed FB has been doing this all along and that's just the tradeoff for using the service. To that end, they're not wrong.


I don't know that it's that simple. Facebook survives because of network effects. Which means it is vulnerable to mass exodus, sort of like a run on the bank. So maybe no one single scandal will bring it down, but a slow, steady trickle of controversy will chip away at the user base until it hits a critical point and spirals to nothing. The continuous publication of evidence about its negative mental health effects, the scandals about political manipulation, and the terrible privacy practices seem to be driving away many of the trend setters, which means the apathetic masses will eventually follow.


Anecdote here: I was talking to a college student who hadn’t heard about the whole Cambridge Analytica to-do. Why? She uses Facebook to check the news, and unsurprisingly it wasn’t coming up on trending news.


It came up a lot on my newsfeed, including the post by Mark Zuckerberg about it which went viral https://www.facebook.com/zuck/posts/10104712037900071

Is your friend into tech news a lot? If not maybe that's why it wasn't shown. The only people I know who care about this story are tech people.


That is the whole issue: using one source for news while knowing any source has some degree of bias. The only way around the issue is to keep an open mind and actually view news sources that you might or might not like, to get the whole picture.

I don't like CNN or FOX, but I still skim them for news -- articles, not TV or videos.


It is an easily testable premise. Ask anybody if they have deleted their Facebook account. If they are hesitating they probably won't.


Actually deleting the account doesn't matter -- what matters more is that you don't use the account, and here's why.

Facebook's revenue is almost exclusively from advertising. Advertising costs on Facebook depend on reach, PPM, and/or PPC. Facebook advertisers don't pay for "active users" or even "users" -- they pay per impression or per click. In other words, your data is relatively worthless if you aren't around to be targeted based upon it.

Even if a user still has a profile, if they aren't there to see the ads, that means less reach and fewer clicks. When advertisers realize the apparent ineffectiveness of FB advertising, ad rates will decline, reducing revenues for FB. It's unlikely FB would actually run out of money, because with reduced scale comes reduced costs -- eventually the entire platform might be operated by Zuckerberg at a WeWork rented desk after having laid off everyone else.

My point is that your data is pretty much useless if you aren't there to be exploited because of it.
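The arithmetic behind this can be sketched in a few lines. This is a minimal illustration with hypothetical CPM/CPC-style rates (not Facebook's actual pricing): revenue scales with impressions and clicks, not with the number of stored profiles.

```python
# Hypothetical impression/click pricing: revenue depends on ads seen
# and clicked, so a dormant profile's data earns nothing by itself.

def ad_revenue(impressions, clicks, cpm=5.00, cpc=0.50):
    """Revenue from per-1000-impressions pricing plus per-click pricing."""
    return impressions / 1000 * cpm + clicks * cpc

# An active user who sees 2000 ads a month and clicks 10 of them:
active = ad_revenue(impressions=2000, clicks=10)  # 2 * 5.00 + 10 * 0.50 = 15.0

# A dormant account generates no impressions and no clicks:
dormant = ad_revenue(impressions=0, clicks=0)     # 0.0

print(active, dormant)
```

However you tune the (made-up) rates, the dormant account's contribution stays at zero, which is the point being made above.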

Facebook and Google are huge really for one main reason -- self-service advertising. Some local bike shop can experiment with $200 in PPC super-easily. That local bike shop can't as easily run a radio ad or TV ad and measure the results as easily. So Google and FB have made advertising something that is accessible to those with both lower budgets and lower sophistication. The Coca Colas and General Motors of the world don't care about FB or Google at all -- they have the resources to sponsor the Olympics or national TV shows or pro baseball teams.


Technically that is correct, but I don't really agree in practice. Most users lack the discipline to simply walk away from a still-available account when Facebook is a savage drug addiction for so many people.

I downloaded my Facebook archive this weekend and it was a massive 12 KB. I know entirely what it means to have a living account and not feed the beast, but I am also the rarest exception that disproves the norm. Deleting a Facebook account means the drug addict cannot relapse.


Except that with any account at all, whatever information is available can be sold to any third party that wants it through Facebook Research.

Advertising isn't the whole story by a long shot... hence CA / SCL Group (and who else shall we add next?)


Wait until the GDPR is here; then we can discuss it. I really wonder how they will pass all those "privacy checks".


It doesn't matter. They have the money, people and time to throw at compliance.

GDPR won't dent Facebook in the least. They'll bend where they have to, and no further. They'll pay occasional fines and that won't matter either.

If you take it to an extreme and the only thing they can do is show non-personalized advertising and do zero tracking, they'll still print billions in profit in the EU market. That's what owning most of Europe does for you. Their gross profit margin globally is roughly 87%. Cut that in half and they still have a very profitable business, with margins comparable to Google's.

Facebook could run entirely untargeted advertising on their platform globally and still yield enviable profit margins. Their platform is extremely tightly run on costs compared to most tech giants.

People talking about doom on this aren't discussing facts or using logic; they're projecting a revenge fantasy out of emotion. It's driven by a desire to punish Facebook.


"occasional fines"?

Living in the EU, I am thrilled at the party we will have when 1-2 million Facebook users send this letter to FB [1]. I strongly believe that this will hurt them financially (where it actually matters/hurts).

I (try to) sinkhole every tracker, advertiser, etc. on my PC and jailbroken iPhone. I am sure that I don't block every one, but 90% is better than 0%.
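For the curious: "sinkholing" here usually means mapping tracker hostnames to an unroutable address in a hosts file, so lookups resolve to nowhere. A minimal sketch of generating such entries (the domains are made-up placeholders, not a real blocklist):

```python
# Build /etc/hosts-style sinkhole entries that resolve tracker
# domains to the unroutable address 0.0.0.0.

SINKHOLE_IP = "0.0.0.0"

def hosts_entries(domains):
    """Return deduplicated, sorted hosts-file lines for the given domains."""
    return [f"{SINKHOLE_IP} {d}" for d in sorted(set(domains))]

trackers = ["tracker.example.com", "ads.example.net", "tracker.example.com"]
for line in hosts_entries(trackers):
    print(line)
# 0.0.0.0 ads.example.net
# 0.0.0.0 tracker.example.com
```

In practice people append these lines to /etc/hosts (or use a DNS-level blocker that does the same thing network-wide), which is how "90% blocked" setups like the one described are typically built.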

It may keep me busy, but when I start receiving answers that FB, Amazon, etc. have provided my data to XYZ broker, I will be even more thrilled to send THOSE guys the same letter.

Of course after every inquiry they will be receiving a "right to be forgotten" letter.

Hurt them where it does until they learn to respect people and not see them as items to be traded.

[1]: https://www.linkedin.com/pulse/nightmare-letter-subject-acce...

Ps: I am not a hater. I enjoyed FB until it started trying to manipulate me, e.g. altering my timeline and showing me what THEY deemed "good for me" (profitable for them).

Pps: They make enough from advertising. It was excessive greed that got them to start the sell-out.


> and jailbroken iphone

As someone who's jailbroken dozens of iPhones, I understand the appeal. That said, you seem to value privacy/security, and I can unequivocally state that having a jailbroken iPhone is a huge step backwards on both.


The idea that a GDPR request will cause Facebook any significant financial harm is laughable. They already have a facility that lets you download your data; expect a similar tool or portal to appear that will provide you with access to this data. No more, no less, and a couple of months' work for a few engineers and interns.


Do you think it will be possible for US citizens to take advantage of the GDPR by just saying they live in, or are a citizen of, the EU?

Will Facebook even check or will they just play it safe and accept most requests?


It does matter, because if/when they comply with these new regulations, it removes a large portion of the reasons to be mad at them, since it will help curb all of the shady stuff they have been doing for more than a decade.

So yes, it does matter, and them being compliant is a good thing.


Sometimes it's not about "oh I am gonna crush you financially" but "if you want to play, you need to play by the rules".


Please stop, what you’re saying is too accurate. Could you potentially edit some pitchforks and such into your comment?


Every company's destiny is decided by revenue and profit. Facebook will be fine as long as the advertising money keeps coming in. Will it? I don't see why not. Companies don't care about user privacy.


>"Will it? I don't see why not. Companies don't care about user privacy."

Except they do care about their associations and brand management:

'At least three companies -- Sonos, Commerzbank (CRZBF) and Mozilla -- have pulled advertisements off Facebook after a data scandal engulfed the social network. Others are asking tough questions.

Unilever, which owns brands including Dove, Lipton, and Ben & Jerry's, issued a forceful warning to the platforms in February, saying they had become a "swamp" of fake news, racism, sexism and extremism.

"We cannot continue to prop up a digital supply chain which at times is little better than a swamp in terms of its transparency," said Unilever marketing boss Keith Weed'[1]

[1] http://money.cnn.com/2018/03/23/technology/advertisers-faceb...


...Dove ... issued a forceful warning to the platforms in February, saying they had become a "swamp" of fake news, racism, sexism and extremism.

Well, they're contributing to this [1].

As much as they "hate" the platform, they still love the ad targeting it provides. They all do. Which is probably one of the only reasons these brands even bother.

They just can't help themselves when it comes to being able to push $X towards women only, in their 30's, who have small impressionable children at home, are affluent enough to buy expensive soap, etc, etc, etc.

[1] https://www.nytimes.com/2017/10/08/business/dove-ad-racist.h...


Agreed.

Many people don't realize that we still live in a global society largely made up of many super-sized Black Pearls [0], with a Jack Sparrow as captain of each.

Dear Captain Jack is cool sometimes, and he could bring you to the treasure, but you should never trust him to take care of you. In fact, regardless of what he says, he doesn't care about you even a little; all he cares about is himself. If you fall overboard, he will not bother to save you, unless it happens by coincidence.

So yes, you may follow him closely for an amazing adventure if you want to, but whether or not you survive in the end depends entirely on you.

[0] Yes, that pirate ship.


Companies do care about being associated with negative PR, though.

Plus, how effective is the advertising in motivating purchase versus sowing seeds of discontent?

It has utility, yes, but of a kind the majority seem to consider "not good".


Pure capitalists, under whose values most of the larger companies are run, care only about profit. Sometimes negative PR brings additional profit; there is no remorse in such cases.

Facebook's controlling minds don't care about you; they care about whether there's a reduction in profit [or profit-generating ability].


You don't have to abstract this too much. Facebook's "controlling minds" are just one mind: Mark's. Facebook does what Zuckerberg wants it to do. And if he doesn't want to maximize profit, he doesn't have to.


In today's world there's no entity that would leave full control in a single person's hands -- neither Facebook, nor the US, nor any other non-prehistoric government -- although for obvious reasons the public needs to be given a recognizable figure who can be turned into a scapegoat when needed. Should Zuckerberg piss off his top investors, he would be sacked in no time.


It won't. Outside of TechCrunch, HN and the NYT, nobody's talking about it. My wife and I are both software engineers, and nobody is talking about it at our respective workplaces. We were socializing with a large group this weekend, and while FB was brought up, it was only in the context of following a group. Nobody mentioned the "scandal" going on. Not a single person.

Ok, fine, this is anecdata. But we're talking heavily tech leaning people and we didn't hear anything from them.

Nothing is going to come of this.


> I'm still completely convinced that this whole scandal will not damage Facebook a bit, in the long term. Yeah, the stock is going to be a bit lower for some weeks, some people will leave, but the largest part of the Facebook users has not even realised the scandal in full scale

They'll probably be damaged if FB becomes a downmarket social network only used by those who are unsophisticated and unaware. It may not be apparent now, but it will be when some new product captures those users and starts building social momentum that FB can't counter. That's literally how FB got big, and I think they've only been able to so far prevent it happening to them because they've been allowed to buy their competition.


Intelligent and wealthy people care. That's the group of people who made Facebook what it is today, and everyone else will eventually leave if the people they admire have left.


I'm afraid that I agree. Convenience and network effects are the two most awesomely powerful drivers of adoption and retention and Facebook has both.


put your money where your mouth is and buy that stock


Because they make it so easy to join but difficult to leave. It's basically a cult.


> But the fact that they used those dark ux patterns to request that permission should not be forgotten.

I posted this in another thread, but it's relevant here too:

Isn't this an Android issue? I mean, Facebook is collecting the data, but they're asking for it like every other Android app does.

As an example, iOS used to just show a dialog that mentions an app wants to use your location, and it was later changed to:

1. a dialog that asks if it's ok to use your location while "you use the App" with,

2. a required blurb by the app that specifies how it's used, and

3. a separate dialog for when the usage of your location also applies "even when you are not using the app", with a clear option to instead grant the permission only if the app is running in the foreground.

iOS also warns you when you use a custom keyboard if the keyboard wants "full access". And clearly explains this means it can "transmit anything you type, including things you have previously typed with this keyboard", including "sensitive information such as your credit card numbers or street address".

I quoted directly from the dialog boxes because I think the wording is well crafted to make it very clear for non-savvy users. Android, on the other hand, literally just asks you if you want to use Messenger as your "SMS app". This tells the user nothing: it doesn't inform them what data can be accessed or what can be done with it (can it leave the phone?), nor does it give them any further options.

I know Facebook privacy is the hot topic right now, but my problem with this is that there are lots of other apps that can, and probably have, used these permissions too, and are probably storing this information. This is the opportunity to put Google under fire for their terrible Android permission system, which extends to random games asking for full phone access and a myriad of other sketchy things, but by focusing on Facebook this opportunity is being lost.


I think the business sector needs to have a look at the way consent is done in the medical field, where huge terms and conditions with nasty stuff hidden away aren't ok.

The concept in medicine is that consent for treatment is done in a simple way that encompasses things that are commonly an issue (but often a minor inconvenience), things that are rarely an issue but are disastrous, and things that might specifically be an extra-big issue for the patient (e.g. damage to fingertip sensation for a typist, or something like that).

This puts the responsibility for communicating the salient issues on the person requesting consent, forcing it to be phrased in an understandable way. I see some companies are doing a summary of their terms and conditions (I've seen it with Virgin and Google), which is welcome, as long as they include what is common and important in that summary and don't hide things in the T&Cs.


I'm not sure what "understandable way" you are talking about -- as a patient anyway. Every time I've been or taken a family member to see a doctor, I have to sign a multi-page consent agreement (which I read about as closely as the Terms of Use on a website). What's the point? Am I not going to sign it? No, I'm there because I need care. I don't read it, and neither does anyone else I've asked about it.


Does the doctor's admin indicate the general gist of the document and have you initial any important sections? This seems to be common now in the UK - I think because having people sign a document isn't valid unless they are aware of the content in at least general terms (I'm pretty sure this was tested in court).

Assumed consent of an ignorant party is effectively no consent at all.


I imagine that's exactly what the business sector doesn't want to do.

I remember when Google and Microsoft changed their privacy policies 2-3 years ago to become much more generic and "unified" -- meaning they could now use your data however they saw fit across all of their services and products. A great piece of PR, that "unified" trick, though, so props to the PR team, I suppose.

The tech media mostly treated it with a shrug even when there was something that sounded quite suspicious in there, and had a tone of "yeah, but that's probably just used for X, and it's unlikely Google/Microsoft will use it for other nefarious purposes. It's just for legal reasons, you see."

No, those provisions weren't put in there "just for legal reasons"; they were put in there on purpose because they were meant to be used. And if they did put them in "for legal reasons", it's because they intend to use the data and need the legal cover. It's weird how the media took away exactly the wrong message back then.

It also bears repeating, because the media didn't cover it much back then, and it was quite a sneaky thing for Google to do: Google changed its privacy policy to no longer keep your data "anonymized." Many people, even on HN, like to defend these companies' data collection "because they anonymize it" -- which in itself has always been a joke, as studies have proven -- but they no longer even do that:

https://www.extremetech.com/internet/238093-google-quietly-c...


And to be fair, Facebook was one of the first companies, 10 years ago already, to offer their terms in standard legalese as well as translated into plain language!

They have done great things in that regard by pushing the whole industry forward.


Every single contract should be this way. What's the point of a contract if it's clear that one of the parties cannot (or will not) understand it?


The problem is not that they took it. There are always bad actors.

The problem is that very few people cared.

So even if people start leaving facebook in mass, it will just start all over again with another bad actor.


It was probably autocorrect, but just FYI: en masse - it's from French, meaning something like, "in a large group".

https://en.m.wiktionary.org/wiki/en_masse


The irony is, I'm French, and I tried to "englishify" "en masse", thinking it made sense.


English is already partly French, thanks to the Normans. You can usually drop individual French words straight in without a problem, and sometimes short idiomatic phrases. Trying to translate will just shed any idiomatic character. En masse is already idiomatically associated with groups of people, whereas "in mass" makes us think in terms of kilograms.

How many tons of people are leaving Facebook now?

As another example, if you translated tete a tete into "head to head", those actually have slightly different meanings in English. Tete a tete translates idiomatically to "one on one", a private meeting of the minds, while "head to head" is a direct--usually public--confrontation or encounter, as one might find in a sporting match.


Voila, I learned the fact du jour. This makes my day, or my joie de vivre if I may say.

Do I do it right?


Yes. Those are all meaningful to native Anglophones. Those who learned English as a secondary language might have some trouble, though.

You could think of English as a contest between other languages, to see which one can get the biggest share of the etymology. French is one of the leaders. A lot of the core words come from Norman, but modern terms, particularly those drawn from cuisine and fashion, have made it in as well.


These little facts give life a certain je ne sais quoi


I speak Spanish (native) and Italian (fluent) on top of English (best language).

I found French class hard growing up. Until I realized that this "Latin" language had as much in common with English as it did with my two Romance languages.


Or you can say, they did not care because bad actors paid them?


Again, leaders are not the main issue. The problem is more that very few users cared.


> So even if people start leaving facebook in mass, it will just start all over again with another bad actor.

unless they want to prevent this from happening in the long run?


Who are "they" in your sentence? If "they" is Facebook, then that's not the point. If "they" are the users, then I doubt that the current outrage is anything more than the scandal "du jour". I spent 10 years talking about this with people around me. Very few cared enough to even start thinking about it for a minute. Not to mention making life decisions with it in mind.


> Even though legally they are in the right

Even that is dubious. Law is interpreted by judges for this exact reason: people pushing it to extremes.


What Facebook acquired from their users was "expressed consent", which is notoriously easy to get using dark UI patterns. But the technical community has been using "informed consent" in their definition of malware for a while now, and it is highly questionable if dark UI patterns and "informed consent" are compatible concepts.

The legal system (at least in the US and EU) needs to catch up to this, and it is doing so. In the EU, the GDPR [0] will require a "freely given, specific, informed and unambiguous indication of the data subject's wishes", which is a lot more than Facebook's claim of having "permission". We will see how judges interpret that.

[0] https://www.eugdpr.org/


> We will see how judges interpret that.

Judges are getting more tech-savvy, and I think lawyers are also getting better at explaining the deceit. There is hope, the question is how long it will take at this point.


> What Facebook acquired from their users was "expressed consent"

I'm not so sure about that, but that's what courts are for.


I think that was groestl's point: there is what it literally says on paper, and there is the question of whether users can reasonably be expected to understand this.


I don't think it was even clear on paper. It never said store forever or even store.


I don't see how a popup that asks for permission in clear and simple terms is dubious at all. I don't want to have to print out the terms and conditions, get them notarized with my signature on them, and snailmail them back to the company when I want to use an app's features in the future, but if explicitly consenting to a popup isn't good enough then I fear that's the path we'd be headed down.


> if explicitly consenting to a popup isn't good enough then I fear that's the path we'd be headed down.

If the pop-up does not adequately convey the consequences of agreeing, it cannot reasonably be considered a form of consent.


If someone sells me a car, and I paint it pink, and they're mad after the fact because they hate pink, did I fail to adequately convey the consequences of their agreeing to sell me the car? I don't think so; they agreed to sell me the car and that should be enough.


I fail to see how your analogy maps to the current situation.


Someone sold a car and it was used in an unexpected way. People agreed to give Facebook data and it was used in unexpected ways.


Totally agree. My mom wouldn't know that damn dialog has two buttons. This is the epitome of “we’re shitty by default, but if you know your way around our system, then you’re fine.” Yeah... how about no?

Just wrote about it here: http://www.tnhh.net/posts/shitty-by-default.html


More than likely you're right. Your statement raises a curiosity for me:

What is the legal precedent for UX patterns considered deceptive enough to be intentionally malicious?

The best I could find was a settlement with linkedin in 2015: https://www.fastcodesign.com/3051906/after-lawsuit-settlemen...


I prefer to think of it like “technical” permission (like obtaining the permission from android) and legal permission (aka consent). I doubt FB got consent.


I think this is a very good way of describing the issue.

Unfortunately, there's also the little matter of EULAs.


Right. And judges have enforced those to the letter. But with GDPR and the like coming into play, I think judges will have to accept the reality that EULAs can’t give actual consent.


> Legally yes, they had permission.

California is a two-party consent state for recording, and it's not an unreasonable stretch to say this should also include call metadata. If one party (i.e. Me) opted out, and someone else didn't, they now have a log that I didn't give them permission to make.

I think this is probably the most strong case users will have against Facebook, and I honestly think it's a pretty damned good one, but IANAL, YMMV, etc.

Either way, if you're still using the Facebook app after this has come to light... you're probably setting yourself up for a world of pain. Even if you don't delete Facebook outright, the risks of having their terribly coded app on your phone vastly outweigh the benefits at this point (you know, if you didn't delete it when people realized it decreased their battery life by 10-20%, or that it asked for every permission Android had an option for, with no way to tell Android to restrict access back-in-the-day...)


it's not an unreasonable stretch to say this should also include call metadata

Yes. It is an unreasonable stretch. Cal. Penal Code § 632 prohibits "eavesdrop[ping] upon or record[ing] [a] confidential communication." There is no set of circumstances in the history of telecommunications where seeing your itemized phone bill is the same as eavesdropping.

I understand you feel violated. You should. What Facebook did is vile. But that's not a reason to throw due process out the window.


Recording that a confidential communication happened appears to read on to that law, that's still part of "recording a communication".

If a law said i couldn't "record your visit to a shop" and I shared commercially that you "went to the shop at X time and stayed Y minutes" even if I didn't record the details of your purchases, or what you were wearing, or whatever, it would still appear to be a contravention of the letter of that law.


> California is a two-party consent state for recording, and it's not an unreasonable stretch to say this should also include call metadata

It is very unreasonable. All modern phones automatically keep a log of your phone calls (call history). Should I ask for the consent of anyone I call, or delete the call from my phone history immediately after, or otherwise face criminal penalties?


We don't even know that they always did this with permission, because in the past they didn't actually need permission for it, which could be why Facebook is trying to make such a big deal out of saying that collecting call history and texts is "normal" when they collect your contacts.

Android 8.0 required apps to tell users that they're collecting this data. So the question is: did they collect it before 8.0 without permission?

https://developer.android.com/about/versions/oreo/android-8....


There is nothing to suggest dark UX patterns were at play here. People simply gave Facebook access to everything because it was fashionable and convenient.

People simply do not care about their privacy if it's in conflict with convenience. It has been common knowledge for years that Facebook will directly use that data against you, and still people, completely aware of this, didn't care.

The only thing that made people care about their loss of privacy was a change in fashion. A new trend. Somehow, now, it is suddenly bad where before it wasn't. I simply cannot find the words to blame Facebook for this.


It's probably like a spouse having racy pictures of you, and now you've found out they've done something bad with racy pictures of previous spouses. You gave them the pictures, and now you feel uncomfortable that you did. Legally you gave them permission, but how you feel about it has changed.

Also, Zuck arguing he has your consent is like the spouse saying "You mumbled 'okay' when I badgered to take pictures of you naked! That's permission!". Also Zuck saying it's secure is like your spouse having those pictures in a Windows share in his frat house but telling his frat bros "tell me that you agree not to look at those pictures." (More like your spouse saying the small print says he's allowed to distribute those pics to other people, but only if they promise not to misuse them...).


I think the statement

> People simply do not care about their privacy if it's in conflict with convenience

is only partially correct.

So far I have been able to distinguish these kinds of people:

- people unaware

- people aware that try their best to keep privacy (if they are on facebook, they limit their usage - more or less like "lurkers")

- people aware and still posting stuff on FB/giving some permissions and they still do "something" (a bit more than lurkers above)

- people aware but "I don't have anything to hide", yet still unaware of the full potential of lack of privacy and they post whatever they want

- people aware and educated about privacy and "I don't have anything to hide" (yes, they do exist) and they still post whatever they want

> I simply cannot find the words to blame Facebook for this.

I have to disagree here as well. There are certain things which I agree with you on, e.g., this feature was enabled by default, because users want to have more convenience. Indeed! However, what about last week's scandal with Cambridge Analytica? And what else aren't we aware of? I am trying my best not to connect these two facts, yet, it seems there is a pattern when it's about user data => who cares, they don't know what they want.


> However, what about last week's scandal with Cambridge Analytica?

This isn't news to me. Any data submitted by a user to Facebook is forever surrendered to Facebook who then owns it. Facebook can do whatever they want with their property.

The only surprise there is that users so happily agree to this madness only to be shocked by the result. What did they think was going to happen? Still, I don't blame Facebook for this.

It reminds me how people (Twitter addicts) were shocked at this website 10 years ago: http://pleaserobme.com/


"I'm shocked you're shocked" is just fancy-grade despair.


> There is nothing to suggest dark UX patterns were at play here

https://techcrunch.com/wp-content/uploads/2018/03/opt-in_scr...

The No button is barely recognizable as a button, and it and the explanation are gray on white, while the Yes button and the misleading title are much more emphasized. And even half the explanation is bullshit like "better experience for everyone". Plenty of dark UX in my book.

(Though I assume a lot of people would have agreed to this, even without the dubious UI)


I'm a frequent Facebook user, and I'm connected to most of my relatives and close friends on Facebook. I have absolutely no need to share my phone contact list (where I have numbers of colleagues and people who I wouldn't call friends) with Facebook.


When you install malware on your computer, it asks for permission too. People don't understand why their USB drive suddenly needs their account password, but they need to get the files that are stored on there so they type it in. That UAC popup may as well be triggered at random based on how often it shows up, and it never explains why.

Is that consent? Do you have a hard time finding the words to blame the malware creators?


I mainly use Kubuntu, but on MS Windows, IIRC, UAC popups say "ABC.exe needs permission to modify your computer" (or similar); that's some explanation at least. If you've no idea what that program is or why it suddenly needs permission then you can refuse (or seek help).

It's not great, but it's a little better than you suggest.


The difference there is that malware actually is a dark pattern and operates without your permission. Facebook is not a dark pattern and is nothing if you do not willingly surrender your data to their legal ownership.


Of course we’re talking about different things. Facebook knows it and is confusing the issue intentionally. The last thing they want is a nuanced discussion of informed consent and how it relates to click-through alerts nobody ever reads.


“Privacy means people know what they’re signing up for. In plain English, and repeatedly. That’s what it means. I’m an optimist. I believe people are smart. And some people want to share more data than other people do. Ask them. Ask them every time. Make them tell you to stop asking them if they’re tired of you asking them. Let them know precisely what you’re going to do with their data. That’s what we think.” — Steve Jobs, 2010 WSJ AllThingsD Conference

Video: https://www.youtube.com/watch?v=39iKLwlUqBo&app=desktop

Transcript: https://www.digitalmusicnews.com/2018/03/25/steve-jobs-user-...

Recent article: https://qz.com/1236322/apples-steve-jobs-tried-to-warn-faceb...


It's almost like the company he headed made next to no money from selling its users' data.

Sarcasm aside, it's a good quote, but misses the crux of the issue: incentives. User data sharing/selling (whether through partnerships or advertisement targeting) is Facebook's revenue model. Everything else they make money off of is insignificant compared to that. This, I think, is what people mean when they say something is "in a company's DNA": was a dubious partnership with Cambridge Analytica or an only-deceptively-authorized potential collection of contact/SMS data some sinister plot, or the goal all along? Probably not. Was behavior like this considered less-than-scary to Facebook's decision-makers, swept under the rug, or seen as a minor extension of what they already do to make money? Almost certainly.


I suspected this many years ago when familiar faces from our customers started to appear in the friend suggestion box. I've never installed the FB app on my phone, but our less privacy oriented customers probably have.

What pisses me off is that while I can choose what information I share, I haven't given other people nor Facebook permission to read phone call logs that involve me.


The worst part is that it probably suggested that these various customers befriend each other, just because they all had you in their contact lists. Scenarios like these have been reported with unrelated patients of medical professionals.


Once I accidentally linked my Facebook account with Instagram. I was bombarded with friend suggestions of people I follow on Instagram. I follow strangers on Instagram when I like their photo feeds. Facebook is only for family, close friends and relatives. I linked Instagram with a dummy Facebook account to flush my Instagram profile of the Facebook friend list and vice versa. There was no other way to delink the two accounts.


Wow. I just imagined working as a sales agent or something like that, and how much trouble Facebook could cause me in that case. Even for somebody who has always felt some disdain for Facebook, this is quite mind-blowing.


Do you think you have the right to share with people your call logs without the permission of the people you were calling? I would say you do. So I don't see how you can reasonably object to someone sharing with Facebook the fact that they called you in their call log.


It matters, to me, whether it's for commercial gain and whether it's done with respect for PII.

So "I called $phoneNumber" is OK but "I'll sell you details of $person's phone calls" is not.


To be fair, I get weirdly familiar faces from different times in my life coming in my suggestion box... and I've never had facebook on my phone. I do have Instagram, but this was a trend before it was installed on my phone.

Not to mention that I rarely use my phone's "phone" features, including SMS. I live an ocean away from most of my family and have never had many friends. I have a couple here in the country outside my spouse, and our conversations are sporadic at times since they both have young children.


The same applies to Gmail use.

While (an abstract) you may be OK with trading your privacy for the convenience of a free email account, it automatically means me also getting swept up by Google's information dragnet.


Sales lead idea:

Make a facebook account and befriend all (and only) your competitors, market to the suggested friends.


Ok, well this is just great. So Facebook has all my contacts from their acquisition of WhatsApp, and even though I've never used any of their apps or had an account with them, I'm pretty sure everyone I typically communicate with has the app and did not opt out of this collection. So they get data on me for free that the police would need a court order to obtain. This kind of data retention has been highly controversial in Europe and is currently illegal for carriers in some countries, even though the security services obviously love it (https://en.wikipedia.org/wiki/Data_retention); it is one of the few things I've actively fought against politically.

I really hope they will eventually crater like all the other social networks before them and I should have a way of asking them to delete all data they have on me including meta data from sms/phone calls they might have collected.


> I should have a way of asking them to delete all data they have on me including meta data from sms/phone calls they might have collected.

With GDPR they will have a legal requirement that "delete your account" deletes all of your data if you're in the EU. Currently they don't delete conversations with other people, and don't delete any ML data which was based on your data, but hopefully that will be ruled illegal under the GDPR requirements.


I think the problem is that most of the data they have on me is implicit in the data they have stored about other people. They definitely have my phone number, but I'm not sure they store the result of a computation they could easily do on demand with what would effectively be a join statement, namely reconstructing my call history with everyone that has the facebook app installed. Again, I'm not sure if the GDPR has any provisions against that, because it affects two parties: one that has consented to the storage and another that hasn't. Also, it is not clear to me how an average citizen would be aware that, even if they only have your phone number stored, they are most likely able to reconstruct most of your call history as well.
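To illustrate that "join statement" (purely hypothetical schema and numbers, nothing to do with Facebook's actual storage): once everyone's uploaded call logs sit in one table, a non-user's history falls out of a simple query.

```python
import sqlite3

# Purely hypothetical schema: each app user's uploaded call log also
# describes the other party, who may not be a user at all.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE call_logs(
    uploader_id        INTEGER REFERENCES users(id),
    counterpart_number TEXT,    -- the other party on the call
    called_at          TEXT,
    duration_s         INTEGER
);
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO call_logs VALUES
    (1, '+15550001', '2018-03-01T10:00', 120),
    (2, '+15550001', '2018-03-02T18:30', 45),
    (2, '+15559999', '2018-03-03T09:15', 300);
""")

# Reconstruct the history of the non-user +15550001 from other people's uploads
rows = con.execute("""
    SELECT u.name, c.called_at, c.duration_s
    FROM call_logs AS c JOIN users AS u ON u.id = c.uploader_id
    WHERE c.counterpart_number = '+15550001'
    ORDER BY c.called_at
""").fetchall()
print(rows)  # [('Alice', '2018-03-01T10:00', 120), ('Bob', '2018-03-02T18:30', 45)]
```

The owner of +15550001 never consented to anything, yet two rows of their call history come straight back.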


> namely reconstructing my call history with everyone that has the facebook app installed

IANAL, but I think that would be okay, as long as they don't associate any (or strip all) personally identifiable information from your phone number, and don't store the phone number in plaintext.


Crazy thought. When a company with a ton of accumulated assets (data) craters, do the rules of bankruptcy and/or mitigating their losses to investors force them to just sell these assets outright?

This may not happen in the next few decades with the likes of Facebook, but imagine what kind of data much smaller companies have that will just be sold off to the highest bidder, possibly de-anonymized to increase its value. Does a user privacy contract hold any weight when a company is about to dissolve?


That's impossible with the GDPR for privacy relevant data. That data cannot legally be an asset of a company because a company cannot own it. A company may be a data processor or data controller, but not a data owner for other peoples private data. They may choose to ask each individual if they agree to let another company use the data, but that's a different thing.


> Does a user privacy contract hold any weight when a company is about to dissolve?

It would hold about the same weight as it does now: Effectively ignored.


My knowledge in this area is limited, so maybe others can pipe in, but assuming massive amounts of data on millions (or most?) Facebook users is out in the wild, which cannot be 'put back':

1. how well can this be actually used currently to create a proper psychosocial(?) profile of an individual? What's the current state of the art? Any suggestions for books or papers to read on the matter?

2. do we have any idea of how much our ability to do 1 will improve over time, given the same data? I often find it difficult to separate the Derren Brown style 'profiling' bullshit from what's actually possible. And I also keep reading that 'big data' is not quite as easy to use as it's often made out to be.

3. how valuable does this 'stale' data remain? Personality is by definition supposed to remain stable, but I recall learning that even things like friendships, political/world views, and other non-personality characteristics are relatively unchanging for most people after somewhere (early?) in their twenties. This could mean that even if Facebook does privacy 'properly', or somehow disappears, there's a rich dataset of individuals that, even if only in the future, could be shockingly powerful for manipulation. And we're already scarily susceptible to manipulation using 'conventional' means.


I use WhatsApp only for communication with office colleagues. I had my Instagram linked to my mobile number (2FA). Soon my office colleagues' profiles started showing up in my Instagram follow suggestions. This happened despite the fact that I have not uploaded my contact list to Instagram. I had to block all my colleagues on Insta, and then remove my number from my Instagram profile.


"how well can this be actually used currently to create a proper psychosocial(?) profile of an individual? What's the current state of the art? Any suggestions for books or papers to read on the matter?"

I suspect quite poorly.

It's 2018 and I've been hearing pitches and narratives to do with fancy targeting of advertisements and user profiling now for 22 years.

And what do we have? If I search for something obscure related to a (product), I will see unbelievably blunt and almost comically generic (product) ads for a week. Even if my search makes it obvious that I am not a prospective customer.

What advertisers receive for their money on these platforms is laughable and I am sure there's a complicated sales pitch with graphs and fancy terms of art and mentions of "AI" ... and it's all just bullshit, just like it has always been.


I had one incident which convinced me that their algorithm is easily fooled even by a single misclick.

I hate ads promoting alcohol so I always report them to FB. Once I misclicked, and instead of reporting it I opened the linked article. Based on that click, FB added alcohol to my interests. It did not matter that I had reported all other similar ads before, and also after, that.


Whether the algorithms seem effective to you, as a single consumer of the advertisements, is irrelevant.

Facebook ad campaigns most definitely work (though I'd imagine there's some variance of effectiveness based on what exactly you're trying to sell with the ad campaigns), and lookalike audiences are an effective way of determining who to show your ads to.

It's really quite simple, as a company you can spend money on Facebook ads and you will get traffic/conversions/etc.


I agree. But I was reacting to comments discussing whether this algorithm is capable of doing more than targeting ads to make them cost-effective.


That's part of the reason I asked. Based on the ridiculously bad suggestions I'd often get (from search, Netflix, and the like), it all seemed a bit too much like hot air.

But a few things made me (sort of) change my mind over time, and worry more actively:

1. the ability to properly use the data available is a skill in itself. The fact that this 'new, non-private' internet is still relatively young (really just a decade, post-smartphone I'd say) could mean that most companies/governments simply didn't know yet how to properly use it. Considering the many examples of the slowness with which organizations tend to adapt, that doesn't seem unlikely.

2. There's a huge difference between the data available, and the targeting achievable pre-smartphone, and post-smartphone. On top of that, I distinctly remember that throughout most of my youth (so up until a few years before world-wide Facebook) it was still very much a 'default' attitude to be very careful about your personal information online. Not quite the "don't go into chatrooms" of my childhood, but still the "don't use your real name, ideally, on <national social network>".

3. We've made quite a number of advances already in storing and analyzing 'big data', from what I understand. And a decade or two of research on this new world of personal internet data must have led to some significant advances, no? I mean, so far we've gotten really good at manipulating people with whatever new technology entered the scene. Just not right away, necessarily.

4. from personal experience, and conversations with people who do/did targeted advertising on Facebook, it really does seem shockingly easy to target specific messages to specific groups. I find it hard to believe that this does not fundamentally change the whole game of manipulation, compared to things like TV and newspapers. Hell, 'old fashioned' direct marketing / door to door sales is perhaps a precursor of all this. I remember reading quite a while back how advanced the techniques were to decide what neighborhoods to visit or snailmail-spam for particular products or services. If companies were willing to spend the amount of money necessary for that, it must have some effect. Being able to target essentially large parts of the entire world in that same way has got to be at least somewhat more powerful, which is scary enough maybe.

I'm not sure of any of this, though. Just thinking out loud and very curious to hear what others think (although ideally I'd also love some suggestions of where to find the best research/articles on the matter).


Regarding 1, I recall reading that 23 likes are enough to guess a person's sexual orientation with better accuracy than close colleagues can. Couldn't find a source on this in 5 minutes though.


I feel very disturbed by all this news coming out. Not by its contents (I don't feel there's anything new here), but by the way it is written. Since when is it not known in the tech world that Facebook's business model is built around making a piece of legal spyware? At least where I live it's impolite to give someone's contacts to anyone without asking the person first. I don't see why it is any different with Facebook. The other thing I don't understand is why users aren't being paid for having Facebook accounts. If you look at it from a certain perspective, every like and every photo is working for Facebook; it does not feel like work, but it is, since you're creating the products that are being sold.


Well, the users of facebook do get the facebook service in exchange for the data they provide.

TANSTAAFL. If you don’t pay with money, you’re paying with something else. Facebook is not a charity. The question is not whether they’re allowed to obtain (non-monetary) payment for using their service, it’s whether it was (and is) clear what the price is.

What irks me is why none of these services allow monetary payment. Why can't I pay for Facebook with the express agreement that none of my data is sold? It wouldn't have mass-market appeal, but it would silence many of the critics. (Same deal for all ad-funded platforms: just let me pay with money instead of time or attention.)


FB income : 15,920,000,000$ [1]

FB users : 2,129,000,000 [2]

$7.47/year

This is quite affordable. Curious how marketing companies would react to such an idea.

[1] https://www.nasdaq.com/symbol/fb/financials?query=income-sta... [2] https://www.statista.com/statistics/264810/number-of-monthly...
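A quick sanity check of the division above (the $7.47 figure truncates rather than rounds):

```python
# Sanity check of the per-user figure quoted above: annual income / MAU.
income = 15_920_000_000   # USD, annual
users = 2_129_000_000     # monthly active users
print(round(income / users, 2))  # 7.48
```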


I keep hearing the argument that the people most likely to pay to not look at ads are the most interesting to advertisers.....


And it seems very plausible. I’m sure a HN reader is worth far more than $7.47 a year to Facebook.


>TANSTAAFL.

Bless you?

(Seriously what does this mean?)


There Ain't No Such Thing As A Free Lunch.


You don't need to pay willing volunteers.


The Facebook blog post includes this screenshot [0]. That's definitely an opt-in, although I could imagine a non-technical person clicking through it without reading the grey text. Better than nothing.

[0]: https://fbnewsroomus.files.wordpress.com/2018/03/opt-in_scre...


I'd like to formally welcome everyone to the world of a/b test-driven development!

Nice easy-to-read high-contrast giant-font headings that seem legit: "Text anyone in your phone!" - Yay! that's exactly what I want!!!

Tricky hard-to-read low-contrast micro-font fine print that looks like way too much trouble to try to read, detailing what ACTUALLY happens when you're tricked into clicking the:

One brightly-colored-only-thing-that-looks-remotely-like-I-should-click-it call to action.

Add in the cute emoji looking user who obviously LOVES giving up his privacy, and you've got a winner!

Wow project manager, would you look at these great test results!!! everyone wants to give us their private info!!! what idiots!

-every front-end engineer ever


The opt-out button is also "Not Now", so they can hit you with this screen again and again until you accidentally opt in or want to make it go away.


yeah, that's how I recently clicked it - had to quickly install messenger (needed to make a video call, which the lite client I normally use doesn't support), clicked right through this. And normally I'm careful with such stuff.


It is not "better than nothing," it is the minimum that is technically and legally required.

The UI is intentionally deceptive.

The text is deceptive. The big text says "Text anyone in your phone," which sounds great, but doesn't actually say anything about data sharing. The small, low-contrast text describes what they're doing. They know that by putting it small and grey, most people won't read it.

The buttons are deceptive. It looks like there's only one button.

Even putting all the permission stuff in a row on first startup is a push ploy. Everyone knows people just keep clicking OK until they get to the actual app. People don't read the pages. People don't read the user agreements. They just want to get through the roadblocks as fast as possible so they can get to what they want.

None of this is by accident. All of these elements are chosen to push people towards the desired outcome, to upload your contacts, SMS, and call history as quickly as possible.


it appears to be a result of extensive A/B/C ... XY testing. In a way, the users themselves have chosen it.


I know this might come across as a nitpick, but A/B testing and similar techniques aren't at all about what users choose. They are about constructing a statistically correct test between two alternatives measured in some predetermined way (eg. transactions/users but equally well consenting users/total users) with the least amount of bias.

The bit about minimizing bias means that you couldn't possibly give users the A/B variants to choose from, but instead must ensure, as best you can, that no user is exposed to different variants of the same experiment. You would typically do this by giving the client a cookie with a random seed which you can then hash with a secret and the id of the experiment to get a randomly distributed but sticky/consistent variant assignment that doesn't scale client storage with the number of experiments. Where things get more interesting is how to change behavior when a user logs into a new device. Do you switch to the user's set of variant assignments or keep the experience on this device consistent?
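The sticky-assignment scheme described above can be sketched as follows (the names and the exact hashing recipe are illustrative, not any particular framework's API):

```python
import hashlib

def assign_variant(user_seed, experiment_id, secret, variants):
    """Sticky, roughly uniform variant assignment (illustrative sketch).

    One random per-client seed (e.g. from a cookie) covers any number of
    experiments: hashing seed + secret + experiment id gives a bucket that
    is stable for a given user/experiment pair but independent across
    experiments, so no per-experiment client storage is needed.
    """
    digest = hashlib.sha256(
        f"{user_seed}:{secret}:{experiment_id}".encode()
    ).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same inputs always land the user in the same variant.
v = assign_variant("cookie-1a2b", "contacts-upload-prompt", "s3cret", ["A", "B"])
```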

This being said, yeah, I'm pretty sure this is the result of optimization on "what fraction of users give consent" while keeping an eye on uninstall rate as a health metric.

Source: Redesigned/rewrote A/B testing (and general optimization) framework for large e-commerce company many years ago.


That's an incredibly delusional way of framing it. What do you think the A/B test was set up to optimize, privacy or number of uploads?

This permission screen is a push poll.[0]

[0] https://en.wikipedia.org/wiki/Push_poll


Number of uploads, obviously. Hacking human laziness.


Well, then it isn’t really much of a choice, is it? It’s a false choice. The product manager will apply as much pressure as needed to get the desired effect. To quote Captain Ramsey in Crimson Tide, “[Y]ou can get a horse to deal cards. It’s just a matter of voltage.”


In the same way that people chose to be robbed at gunpoint when muggers A/B tested mugging people while armed versus mugging people while unarmed.


> People don't read the pages. People don't read the user agreements.

Then maybe "people" shouldn't act completely outraged when they find out that their call history was uploaded?


The ToS and other conditions attached to various services and relationships are literally far too long and complex for anyone to read, understand, and recall in full detail.

This point has been the subject of numerous articles and documentaries.

Hoback states that if one were to read everything in these user/service agreements, “it would take one full month, that’s 180 hours every year,” and that according to The Wall Street Journal, “consumers lose over $250 million dollars due to what’s hidden in these agreements.”

https://en.m.wikipedia.org/wiki/Terms_and_Conditions_May_App...


I totally agree!

But first of all, the prompt shown in the article's screenshot was not hidden in a long ToS.

Second of all, I agree that we should work towards solving the problem that overly-long ToSs represent. What I do not agree with is the assertion that, in the presence of an overly-long ToS, we allow people to click through and then get outraged at the company for doing something the ToS allowed it to do. If it really is far too long and complex for you to read, don't use the product.


Tools which become ubiquitous or prerequisites for other services are not discretionary.

Facebook is the Internet for many people, is required for authenticating by numerous services, and is how numerous groups and communities organise.

As such, its terms of service, like those of numerous other services, should be defined in a standard set of obligations, rights, and responsibilities, by law.


> Tools which become ubiquitous or prerequisites for other services are not discretionary...As such, its terms and services as those of numerous other services, should be defined in a standard set of obligations, rights, and responsibilities, by law.

I see this sentiment in some form a lot. Is there a name for this principle? Where does it come from? I don't in general agree with it - speaking generally, I think enforcing this principle is a way to solve some problems, but not the best way - and would be interested in discussions about the principle itself.


Government.


I don't understand - would appreciate if you elaborate.


Pardon the delay, but I've been trying to think of a good brief description. I'm not sure I've got one that's particularly clear.

"Common weal", that is, the common wealth or common good, a/k/a social benefit, is probably the best general description. The notion being that there are positive externalities not addressed by the market (or there are offsetting negative externalities not imposed on the producer).

In either case, a useful functioning requires some entity with the interest, capacity, and power, to act in the public interest. The notion is an old one. It comprises Book V of Adam Smith's Wealth of Nations ("The Expenses of the Sovereign"), or in contemporary economics, the area of public sector economics (or welfare economics).

And is the domain of government, as my initial response indicated.

https://www.etymonline.com/word/commonweal

https://en.wikisource.org/wiki/The_Wealth_of_Nations/Book_V/...

https://en.wikipedia.org/wiki/Commonwealth

https://en.wikipedia.org/wiki/Public_economics


What services require Facebook for authentication? The only one I can think of is Tinder (not sure if that's actually the case, I've never used it) but I don't think a few non-essential services requiring a Facebook account makes it not discretionary.

If you needed a Facebook account to file a tax return or something, then I would agree that it's not discretionary.


No one chose anything.

The best description I've recently heard of these kinds of push questions is "exploiting a bug in human behavior. And when the bug is in the brain, there is no patch."


So how do we solve this? What is the general framework we use to agree to transactions of this kind where users allow access for some data in return for something?


Effectively, what GDPR requires. You must inform users what the data will be used for when asking for opt-in permission to use it, as well as offering opt-out and right-to-deletion.


this.


Dark patterns like this one need to be hammered down by law. We should consider the most prominent button the "default" and such a form an "opt-out".


Is this really a dark pattern? The text is right there. It's not convoluted legalese, it's extremely straightforward. If people aren't willing to read that, how exactly are you supposed to get anyone's consent for anything?


You are supposed to get consent by showing both options equally. Note also the wording of "not now". That is the kind of option that feels like the app is going to bother you about it again. Moreover, it communicates inevitability. You can opt out for now.

All of this is clearly made to get people to opt-in. Heck, it is not even clear that the "not now" option is clickable.


Consent can be tricky when you create a path of higher and a path of lower resistance. This is especially so for people who try to avoid conflict (teenage girls especially). They may give consent simply because it's what is expected of them, rather than because it's what they actually want. UI design for such consent forms has to be carefully considered so as to make it 'socially acceptable' to decline.

That specific popup is a clear example of how to maximize acceptance in a big portion of the population. I can almost guarantee you that popup design was chosen based on A/B testing, and had the highest rate of acceptance among the designs tested.


Yes - call and text history are not required to make this feature work, only your contact list (which is all iOS sends). They bundled this extra-invasive data collection into the consent request. And we are used to apps requesting access to contacts to find people you know on the platform, so syncing your contact list is expected for this feature. What's not so expected is also sharing your call and text history, which is conveniently mentioned in a low-contrast text blurb. That's the dark pattern.


If you click "not now" it pops up again later.

At least it used to, and it was really abrasive about it. Definitely dark pattern behaviour


I hate to play devil's advocate against something I agree with here, but there are a lot of subjective ideas here, most obviously determining which button is "most prominent."


In a world where we have already been preconditioned to certain design patterns, the blue button is clearly the more prominent and will subconsciously be considered the "default" option.


Worse than that - I expect most technology-illiterate users won't realise the "Not now" text is a button at all.

(It's grey-on-white, designed to look identical to the copy text above the button. The only differences are placement and a small difference in font size.)


That should be considered an opt-out. It's barely noticeable that it's an opt-anything when the prominent option is the creepy option.


This is exactly why I do not have any call/SMS info in my FB data downloads. I read such stuff and deny the app these things.

Btw, that denial is something I am required by law to make here in Germany. I would need written permission from every contact in my phonebook before uploading. Same goes for WhatsApp.


Hm, isn't it illegal to offer this option in Germany then?


I am not sure about the wording of the law. But imho no, this would not be illegal to offer. It would be illegal to use, unless you have permission from everybody.


You are wrong about Germany. Only if you are a company you need permission.


No, you are wrong, this law applies to everyone.


This is true. And it seems that people really do it. I get calls from previous colleagues or companies asking whether they can pass my number on.


That is actually a different screen than when they first rolled it out. You had to dig into another screen to opt out.


I partially blame the Android permissions system. The permissions are so generic you have no idea what you are granting. The permission in this case: "Allow INSERT APP NAME HERE to access your contacts?"

Access what? Read access? Write access? Get all of their name and contact fields? Send them text messages? Call them? Spam?

If I don't give them the access, what happens next? Can I not use the app? Will the app break? Will the app just ignore it?

There's no information. If you google for more information, you have to dig for it in some obscure Google Groups forum, and even then it's not really clear. Because when you google for Android app permissions, you get results meant for Android developers, not normal human beings (jk about the normal human beings part).


If you haven't already installed Signal on your phone, please go and do so. To have alternatives to Facebook/W/I we need to build up network effects. Pretty easy to convince tech people to do this, and it would be a good start for getting Signal in as a replacement.


Is signal decentralized? If it is decentralized, how is the network effect built? (Is everyone isolated on their server with a subset of all signal users, or is there a way to communicate with someone using another signal sever?).


It can be federated (and was in the past) but Moxie has complained about federation holding development back.

https://en.wikipedia.org/wiki/Signal_(software)#Federation


> It is unlikely that we will ever federate with any servers outside of our control again, it makes changes really difficult. https://github.com/LibreSignal/LibreSignal/issues/37#issueco...

So doesn't that mean a single party will control Signal? What prevents them from becoming the next Facebook?


It is not decentralized. All Signal users can communicate with all others.


While Signal is nice it's rather worrying that it's free. I'd rather use a paid alternative like Threema.


Good luck convincing your friends to buy any sort of IM application. It's already difficult now to convince them "just for privacy".


Easy. I respond on Threema within 30 minutes and on WhatsApp within half a week.


They are now in the process of becoming a nonprofit and recently won a $50m grant.


They haven't won anything. It was given to them by the cofounder of WhatsApp because he dislikes Facebook.

Nonprofit doesn't mean the developers don't have to eat. Money has to come from somewhere.


Again, this tells me the current security model is out of date. Permissions to stop some random hax0r from 0wning your root are basically useless. I don't care about /var/www/... while I'd be desperate if my ~/Documents was whisked away by some dodgy app.

Signed/unsigned is only a stopgap; we need user-friendly UX to control which app has access to what data and, even within the same app, to establish Chinese walls between personal-data access and networked code.


Facebook Messenger keeps reminding me to let it access my contacts “to find my friends”; after I tell it no, it stops reminding me for a while, then comes back again.

This attitude is just an example of the constant weaseling to get access to the data.


Everybody is talking about Facebook these days, but is anybody considering deleting their WhatsApp and Instagram accounts too? I mean, aren't they part of Facebook nowadays?

Wouldn't it be consistent to abandon all Facebook services?


I did. I abandoned Instagram, WhatsApp, and FB for a year and I don't miss a thing. Keep in touch with real friends instead of checking up on the virtual ones.


If you have international family/friends, WhatsApp is the most ubiquitous and easy to use option I’ve come across.


Honest question: Is it really? That can be just my network, but I've never stumbled upon anyone using WhatsApp here in Poland. I only know this name because I follow tech news. Almost everyone uses Facebook's Messenger, some people use Telegram, sometimes Hangouts, gamers use Discord, techies Signal or Jabber and that's pretty much it.


Of course, my observations are limited to me and I don't have any numbers. My interactions are with people in Brazil/US/UK/Australia/India/South Africa/Kenya/Spain/Central America.

I know WeChat is more popular in East Asia, and I know a few Telegram/Signal users, but I can find about almost everyone on WhatsApp.


When I was in the Netherlands last year they actually had public signage advertising a WhatsApp group chat (for emergencies and the like) for the surrounding community.

I was surprised by how popular it is.


I'm from Brazil and live in the Netherlands. WhatsApp is ubiquitous in both places. Not using it today is akin to not having a mobile phone years ago.


My friends from Germany, Switzerland, and France all primarily use WhatsApp; I've no idea why.


And why are alternatives like Telegram not an easy-to-use option?


That might be an option, but it's not as ubiquitous or easy to use. We’re dealing with people who barely read and speak a single language, much less English.


WhatsApp still sends (for now) way less data than Facebook and also has much better security. I wonder how long that will continue.


WhatsApp sends the most important one: your social graph, aka your contact list, plus how often and when you interact with whom. I also assume the content of the messages you send and receive can be read and monitored.


> I also assume the content of the messages you send and receive can be read and monitored.

Whatsapp uses the Signal protocol for message encryption[0], so they ostensibly cannot read your messages. They probably have a good handle on who you talk to and when, though.

[0] https://signal.org/blog/whatsapp-complete/


They could obscure cell numbers too - they chose not to


Never used WhatsApp, and haven't used Instagram for a very long time. It's only Facebook I need to cut off (and the recent news is just providing the courage for something I've been considering for years already :P).


Could anyone here imagine if an application we installed on our computers did this? What if the Twitter app went around my computer after asking for admin permissions to find all my contacts and messages from different programs, just to beam them up to Facebook?

Such a thing was considered a virus in my time.


Oh, if you have Windows 10, the vendor's assumption already is that the same rules apply as on mobile platforms. In that mindset, there's little point in differentiating.


I have been thinking about it and I think the issue with Facebook in particular is how AGGRESSIVE and CARELESS they have always been in violating privacy. Just like Uber their philosophy is "It's easier to get forgiveness than permission". And of course they have an official slogan of "Move fast and break things".

So all these different controversies are not just a coincidence but are very much baked into their organizational culture. They are obsessed with winning at all costs and clearly see privacy issues as just inconveniences to achieving their goals.

I am not sure if it is possible to imagine a social network that truly respected privacy at its core, that didn't try to get away with as much as it could, that put its users interests ahead of increasing Daily Active Users.

But if Facebook actually operated like that, I suspect there would be a lot less concern.

I don't know if that is even possible though.


I remember integrating Facebook into an app I was working on several years ago. It was in the pre-M days, so Android was still using the old install-time permission system.

As soon as Facebook was installed, without even running the app, it would access the contact database and do network operations (very probably uploads...).

For this reason, well, among many others, I never installed the Facebook app on any of my personal devices at the time.

To their credit, when M arrived, they updated their app pretty quickly to the new permission model.

Still, I try to be as conservative as possible with critical permissions.

The only one I regret is WhatsApp, now that Facebook owns them.


This is an issue for other messaging clients as well, having "Sync contacts" enabled by default. By the time you get into the settings to disable it, your contacts might be on a server already.


Yeah, it always made me frown when, at the time, people were using the hidden API to remove permissions manually after install (and complained when Google removed that testing tool before shipping proper incremental permissions in the next release). That's just not how it works.

Thankfully, now that permissions are incremental, this issue is solved.

Except some apps still target an old API level in order to keep install-time permissions... like Snapchat.

This will soon no longer be possible, since Google will force apps to target the latest API level.

Still, people don't seem to realize what giving an app access to 'contacts' or 'SMS' means.

I am not sure how to solve that tbh :/


The issues surrounding Facebook are systemic to our corporate environment and culture. So really it's a multifaceted problem that the media has disguised as Facebook's problem alone.

They're an American company, so profit of course takes precedence over everything else. If users are naive enough to think everything they ever did on Facebook could be private, then they should just cease using it for private communications.

Culturally, there's a giant reluctance to actually pay for most things digital. I don't see why it's OK to pay $5 for a mobile game on your phone but people aren't willing to fork over $10/yr for a modicum of privacy.

Should take a poll on this....

Thoughts?


Since here's yet another thread about Facebook, I'll ask a somewhat unrelated question I already asked on HN but never saw an answer to: can somebody explain why the scandal around Cambridge Analytica (and, "suddenly", a bunch of other rumors about FB) escalated just now, even though we've known about FB, Cambridge Analytica & Trump for about a year at least (well, I knew for a year; surely lots of people knew earlier)?

I suspect I know the answer, but I have faint hope that it's not true and somebody can provide a bit more "sane" explanation.


For me, it was because a year ago there was only one source for this info, and it was a somewhat dubious one (Vice). I assumed after the initial report that what CA did was more in line with Obama’s tactics, which were more benign and standard. It’s only recently become totally clear that the worst of the reports were true.


Scout.ai was where it first broke I believe with their “The Rise of the Weaponized AI Propaganda Machine” piece from last Feb.


Can't say I'm 100% sure of it, but around a month ago Facebook switched the news feed algorithm from promoting posts from pages you like to promoting posts from your friends.


I was not aware of CA until recently. Weren't the undercover videos of the executives admitting illegal doings only just recently released?


Err, seriously, is that their defense? Even with permission, why would they need that personal data for any legit purpose their users would deem useful?


I've just installed NetGuard [1] and found both the Facebook and Messenger apps to be very chatty. I guess I expected that. But what surprised me is that my SMS app, Textra [2], was making calls to graph.facebook.com. I have paid for the app, so there should be no need to contact Facebook to serve advertising.

As it is just there for SMS, I've completely removed its access to the internet, which is possible through NetGuard.

1. https://play.google.com/store/apps/details?id=eu.faircode.ne...

2. https://play.google.com/store/apps/details?id=com.textra&hl=...


From FB official statement: "When you sign up for Messenger or Facebook Lite on Android, or log into Messenger on an Android device, you are given the option to continuously upload your contacts as well as your call and text history."

Why does FB need users' call and text history in the first place? These privacy options are enabled by default when you install the Messenger app, and the majority of people are not aware of these privacy settings on their phones. And even if you alter a privacy setting on your phone, upon the next update of the Messenger application it gets reset to defaults! And we all know the high frequency of app updates. It is as if all of this is part of a bigger systemic issue, and it smells of foul play.


They need call and text history to data mine, so they can make money off of you by selling ads and building a psychological profile that will help make their products more intrusive.


So, this happened, and we have confirmation of the dark patterns, consented to or not. What are good alternatives in this "network effect" age? Facebook I hardly use anymore. It exists undeleted because it serves as a kind of social ID. I don't have the app installed anywhere, nor do I access it from my regular browser.

The thing I'm struggling to find an alternative for is WhatsApp. How could I talk to all my contacts in a cross-platform way without annoying them about installing yet another app?

Text/SMS - costs money because of shitty carriers.

iMessage - Works great, but only on iPhones. :-(

Signal - good on paper - still have to trust Moxie not to sell out, and needs people to install it to be of any use.


I don’t believe Signal will be sold. Moxie (thanks to Brian Acton) just launched the Signal Foundation [1]

[1] https://signal.org/blog/signal-foundation/


Matrix (Riot is the app you would install) is federated, free software, isn't tied to your phone number, and has optional OTR-like encryption (it's based on Signal's encryption but has much more solid group chat encryption). Also messages don't go through a single server (only the home-servers that are involved in the communication), so the OTR-defeating problem that Signal has (that you can defeat the repudiation of the crypto by watching the server logs) is not nearly as much of an issue. It also has VOIP and video calls, that are also encrypted.


I have been using XMPP for a few years now, in the way it was supposed to be used:

- federated

- multiple devices with different client software

- own server

It certainly isn't perfect, as there are a lot of 'classic' clients which lack the modern extensions (e.g. OMEMO [1], Carbons, MAM).

Clients ranked by personal preference:

- Conversations: very good, Android only, Downside: no calls (neither audio nor video)

- Gajim: 'classic' client, requires a few additional modules (slightly better than Pidgin when it comes to pictures)

- Pidgin: 'classic' client, requires a few additional modules

- Dino: very good Desktop client on paper (XEP wise), sadly it is lacking a few important desktop integration features like a tray icon, but as those should be easy to implement that client has a lot of potential.

- ChatSecure: iOS, Downsides: I never really got it to work properly. Without server side Push Notification extension it only received messages when the app was open. With the extension it worked for a few minutes, but still not what I would consider usable.

- Jitsi: certainly could be a good client, but at the moment it has no OMEMO support and therefore I stopped testing it.

While I am not a particular Matrix fan, I can recommend their Matrix vs. XMPP comparison:

https://matrix.org/docs/guides/faq.html#what-is-the-differen...

It basically boils down to: In 2012 XMPP was lacking some basic functionality required for mobile use-cases, so they went to build something technologically very different (Matrix). In the meantime, XMPP has developed its own extensions to support those use-cases and now we have two technologically very different solutions, set to solve similar use-cases.

[1]: https://omemo.top

EDIT: removed the indention from the list


Thanks for the detailed information. Please don't use indentation to set off text, though. It makes it really hard to read due to side-scrolling, particularly on mobile.


Also: Facebook works on every platform, while at Signal they have a strange cellphone fetish.

I would love to use Signal, but I have to use Telegram. The Telegram desktop app is awesome, and it doesn't need a cellphone.


Signal has a desktop app nowadays and Telegram still requires a phone number to use.

While Signal's QR code pairing method is more awkward than Telegram's SMS one, it also prevents your account from getting hacked remotely in Iran by stealing your SMS messages: https://securityaffairs.co/wordpress/49976/intelligence/tele...


But with Signal you still need a smartphone (i.e. very bad for privacy), while with Telegram you don't. Also, with Telegram you can protect the account with a password, so you can use a burner phone, literally once, and then forget about it.


With Telegram you have no privacy - no one uses the e2ee (secret chats). Perhaps because the desktop client doesn't support the feature, and even if it did, secret chats wouldn't sync across devices.


It's not like the chats are in clear text, because they are not. They aren't encrypted end-to-end, but they are still encrypted.


Wire works on every platform (you don't need a cellphone to sign up, only e-mail), and they released their server code a while ago as well. Documentation on how to set it up is a work in progress, which means that in the future (actually even now, but most people don't know how to set it up :P) you won't even have to rely on their servers; you can easily self-host one.

https://wire.com/en/download/

https://github.com/wireapp/wire-desktop

https://github.com/wireapp/wire-server

https://wire.com/en/security/


WhatsApp doesn't work on every platform. Not really. The web interface is just a remote control for your phone.


good point.


Unlike Telegram, Signal supports e2e encryption on Linux, Mac, Windows, Android, and iOS.


+ Signal, WhatsApp, Telegram, and Viber cannot be used without a phone number (or, in some cases, without a working phone), which makes them all untrustworthy.

I can only vote for Matrix/Riot once again.


One thing that might make it easier to sell the idea of using Signal is that you can also use it to send regular SMS messages as well as private ones. That means you don't have to switch between two apps, which is what you would be doing otherwise.


I highly recommend Telegram. The issue of having to install yet another app is still present, but my close friends and I have made the switch and haven't looked back since.


Telegram, while good software, is the worst of the messengers, because they're based in Russia.

The Russian government recently asked for the encryption keys and they have like one more week to comply.

CORRECTION: They're currently in Dubai and are relocating as needed.


The Telegram founder often claims he has no links with Russia, apart from being born there. https://www.calvertjournal.com/news/show/9478/telegram-found...

As far as I know the team is in Dubai right now.


Telegram ftw


So I downloaded my Facebook data a few days ago and there were no phone numbers in there, no call history; actually much less of everything than I thought they'd have. Are they lying to me, or does this depend on other factors?


I’d argue the data archive one can download is incomplete. For one, it’s missing Likes.

Second, I never put my phone number into Facebook, but I know they have it, because at one time it was pre-filled in the enable-account-recovery-by-phone dialog.

So what you can export isn’t the full story. The question, however, is: how do you prove that, and how do you really ask for ALL the data they have on you?


Are you sure it's not your browser auto-fill?


Yes, I’m sure, because I deactivated that and because they’ve shown me 2-3 different numbers over time for whatever reason.


Or it was just harvested out of the contacts of a friend who has your phone number and email address.


Exactly. My point is: FB has way more data about me than the download archive of my data contains/suggests.


The reports going around mostly stem from people who used the Facebook app on older versions of Android. A combination of people not understanding what Facebook would do with their data, and poor design in the Android permission system, granted access to call and message logs, which is how Facebook got them.


Because Facebook doesn't give you what they really know.

The download-your-data feature for a time showed the truth, in 2013: http://www.zdnet.com/article/firm-facebooks-shadow-profiles-...

So we may presume FB, especially in the US with little to no privacy laws constraining corporations, won't tell you what they really know.


The problem is not the collecting; it's connecting the data with users who did not give permission.

The social graph should be one-way, not two-way, and unrelated data should be discarded.

Like in the post we had a few days ago, where someone exported his Facebook data and found his mother's call history in it.


What is the legality of Facebook recording the SMS conversations of someone who never signed up for FB, but whose text messages got caught in the feed of someone who did?

Secondly, were they just siphoning text or did they get a whole bunch of nudie photos too?


Much of this blame goes on Android and some on iOS/Apple.

The phone/call permissions, especially on Android, were needed by any app that wanted to suspend when a call came in, and by analytics/social libs.

I always hated that about Android, especially because even harmless games made it look like you were taking contacts and monitoring calls.

Of course this would be abused, and it is, all over the place. Unfortunately lots of apps/games are specifically built to harvest data, and this was a major flaw in the permissions handling. You shouldn't need a "monitor phone calls" permission just to allow a call to come in over your app while someone is using or playing it.


> Much of this blame goes on Android and some on iOS/Apple.

Which part of the blame goes on Apple?


Apple had zero permission management on the contact book; it was wide open. FB and LinkedIn especially exploited that heavily, and partly still do to this day.


Oh really? I didn't know that. Can you link me up with something about that? Thanks.


Many apps grabbed contact lists. Path, for instance, even had an issue with the FTC because of it [1]. Apple implemented better permissions pretty quickly. I recall other apps doing the same up to 2013.

[1] https://www.ftc.gov/news-events/press-releases/2013/02/path-...


Not so true.

"Users can grant or deny access to contact data on a per-app basis. Any call to CNContactStore will block the app while the user is being asked to grant or deny access. Note that the user is prompted only the first time access is requested; any subsequent CNContactStore calls use the existing permissions."

https://developer.apple.com/documentation/contacts#1773385


They do today, back up to 2012-2013 they did not. Apps were also overly abusing it back then such as Path [1] and many others.

> The operator of the Path social networking app has agreed to settle Federal Trade Commission charges that it deceived users by collecting personal information from their mobile device address books without their knowledge and consent.

Android didn't really tighten permissions until runtime permissions arrived with Android 6 Marshmallow in 2015.

[1] https://www.ftc.gov/news-events/press-releases/2013/02/path-...


> The phone/call permissions, especially for Android, were needed for any app that wants to suspend when a call comes in.

No they weren't


You needed READ_PHONE_STATE if you wanted to detect an incoming call, suspend your app and save its data, and let the call be accepted.

For instance, you are playing a game and a call comes in: you needed the permission to allow that, to avoid crashing the game, and to save your data, as well as for some analytics/social network integration.

Also, if you allowed OS-level music to play over game audio, you needed it to handle the music and app state when a call came in.

It was/is a default on many large app platforms, including game engines like Unity and any social network integration such as Google Play Game Services and Unity Analytics [1][2]. With READ_PHONE_STATE an app could also get the phone number and more.

For games it wasn't such a big deal, but an app like Facebook that is always running and kept alive by playing a silent sound [3] could record every call that ever came in, and apparently did. With these holes, apps could scrape everything, and they did [4].

[1] https://forum.unity.com/threads/unity-5-1-adds-android-permi...

[2] https://stackoverflow.com/questions/39668549/why-has-the-rea...

[3] https://www.reddit.com/r/iphone/comments/3opxhm/facebook_app...

[4] https://arstechnica.com/information-technology/2018/03/faceb...
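For context, on pre-Marshmallow Android these permissions were declared in the manifest (often merged in automatically by an engine or SDK, as the Unity threads above describe) and granted wholesale at install time. A rough sketch of what a typical game's manifest ended up containing (package name hypothetical):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.somegame">
    <!-- Pulled in by engines like Unity when the code references the
         device's unique identifier; granted at install, no runtime prompt. -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <!-- Added whenever networking or analytics classes are referenced. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
</manifest>
```

The user saw all of these as one take-it-or-leave-it list at install, which is why READ_PHONE_STATE shipped in so many apps whose developers never asked for it directly.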


Where's the reference that you needed READ_PHONE_STATE to suspend your app? Shouldn't the app be automatically suspended when the caller app goes into the foreground?


The Facebook app itself is the real issue, as both the Facebook app and Facebook Messenger require pretty much everything, including READ_PHONE_STATE and contact permissions [1][2].

> Where's the reference that you needed READ_PHONE_STATE to suspend your app? Shouldn't the app be automatically suspended when the caller app goes into the foreground?

Mostly you needed it so analytics and social platforms could access the unique device identifier, for instance in Unity up to 2015 [3][4].

> The Android build enforces READ_PHONE_STATE if the code has references to SystemInfo.deviceUniqueIdentifier. INTERNET is added when any network classes are referenced. ACCESS_NETWORK_STATE is added when calling Application.internetReachability

Also, early on it was required for saving state and/or ensuring the app didn't crash. Mobile OSes were moving fast, and without it apps didn't auto-suspend, which they mostly do now. You can see some of this discussion in the links I included above. Anything older than Android Ice Cream Sandwich required it as well.

It is also added by many plugins, such as Google Play Game Services or other analytics packages, that most people didn't check. There was a reason the market was and is flooded with analytics packages.

It keeps reappearing in analytics packages, and many developers aren't checking closely enough; see this 2016 example [4][5][6][7].

Largely it is due to people just building and shipping fast. Other things trigger it too, but the most common are doing things on suspend when a call comes in, letting music play in your game/app and pause when a call comes in, and analytics packages.

For the most part it is not needed now, but from Android's beginning up to around 2016 it still was in many areas.

[1] https://play.google.com/store/apps/details?id=com.facebook.k...

[2] https://play.google.com/store/apps/details?id=com.facebook.o...

[3] https://answers.unity.com/questions/987433/read-phone-state-...

[4] https://forum.unity.com/threads/unity-5-1-adds-android-permi...

[5] https://stackoverflow.com/questions/39668549/why-has-the-rea...

[6] https://github.com/facebook/facebook-sdk-for-unity/issues/58

[7] https://forum.unity.com/threads/read_phone_state-permission-...


There is a lot of misinformation in this thread about Android's role in all this. Clearly they need better, more fine-grained permissions - but they have been working towards that goal for years now.

There also seems to be a lot of pro-Apple bias that is unwarranted.


"Without permission." Translated: our lawyers say that some part of the 50-page user agreement you clicked can be interpreted in a way that technically allows us to do this.


In the last days, FB stock has lost ~15% of its value (from $185 to $160), that's like 2 Twitters..

Next earnings call will be early May probably.


There will be a rebound, if EFX is any indicator, and FB is a hotter stock with a higher profit margin.


unless it takes a turn for the worse.


Everything is possible. Part of last week's action was the cumulative plunge in the stock market, Trump's trade wars, rising rates, and so on.


Well, the whole point is that people are not aware of what they have opted into and Facebook exploits this for all it is worth.


It's weird that people can get so upset about these limited excesses of surveillance by companies they've given explicit consent to, when more and more of the world's governments are literally recording everything forever and sharing it with a rapidly expanding set of organizations made up of the same kind of people.


I’ve seen this comment written a lot here. And while I get it and even partially agree, there is a problem of scope and precedent.

It’s easy for us to say “stupid user, wtf did you think fb was doing?” But, the truth is we don’t actually know.

As tech people we are even just guessing at the real scope of FB’s collection and how they use that data. It’s not like this is something a typical user could really know because there hasn’t been a Facebook before.


And I didn't have to know, because I never signed up for Facebook: it's obvious that centralization leads to perverse incentives. It's been obvious from the start but dismissed without consideration, a la https://xkcd.com/743/


Yea, but as to the comic: man, sometimes you really do just need to get some fucking work done. If I sent a LibreOffice document around to my company, well, that would be dumb.


That's reasonable. But it isn't reasonable to let your personal choices be dictated by where you work. I'm not saying use LibreOffice for work. I'm more saying that you shouldn't let your work define your opinions.


I always enjoy these lawyer correct texts by facebook and how we all know it's been going on for a long time now. If only the general public would suddenly all figure this out!

It's 50/50 now if everyone will forget this next month, or if the new trend continues to remove facebook.


Yup. Everything has been reduced to generating memes/outrage and reacting to them.

Only exit from the trap is turn off the internet.


Of course it does and can deny it - isn't a user forced to grant the permission when installing the app?

IMHO it is up to Google to stop abuses like this at the Android OS level: allow users to deny any particular permission an app demands and still install the app, encourage users to grant/deny every permission consciously, and force app authors to handle denial of any permission gracefully, without crashing and without refusing to function (except in cases where it is physically impossible to fulfill the function the user invokes without the missing permission).


Both Google and Apple should provide "middle" controls. Even if I grant the app "access" to my contacts, I want to be able to select "which" contacts and which fields.

E.g. if I use whatsapp for 3 people out of my 100 contacts saved in my phone, I want to be able to give to whatsapp only the access to the phones of these 3 contacts. Others should be invisible for whatsapp, if I want so.

And they should not get the addresses, birth dates and my notices about even these 3 contacts. They should not be able to see them, unless I explicitly want to allow.

I know, users are lazy, but at least the fine grained control and white-listing should be an option. Now it's all or nothing.


There are rooted utilities that do this, or allow you to spew fake data at those apps. That is, if you could unlock your bootloader and install the software you want.

You should really blame phone manufacturers/carriers and Congress for giving you the privilege of paying for a piece of locked hardware that you don't own. They're only loaning you a billboard, after all, so what do you expect? Personally I wouldn't put any important information on a locked device, nor do I ever acquire unlockable devices. People have lost their minds giving up all of the most important electronic freedoms that form the backbone of the future.


> nor do I ever acquire unlockable devices.

Do you mean non-unlockable?


Yes, that's what I meant!


As far as I know, the selection of officially unlockable phones is extremely small. Unlocking a phone usually means using hacks that void the warranty and/or violate licenses. I absolutely support, morally, the people choosing to go this way (and am one of them myself), yet I think it's sad that people are forced into illegal-ish hackery to configure their phones a reasonable way and protect their privacy. I have also heard of a man who was put in prison for hacking his Nintendo, so the whole thing gets scarier and scarier.


We should be thankful that they haven't cross shared the WhatsApp contact info - they probably have a similar stream of information on that side.



How do we know they haven’t shared WhatsApp contact data with their main FB database? I was operating under the assumption based on some news articles that they long broke whatever promises they made.


I'm sure they had their "permission" on page 27 of the terms of service. This isn't news...

My opinion is still that those terms of services are silly. If I ever do start my own business, one of the things on the checklist is to have a minimal, if any, terms of service, because nobody wants to waste time reading them, and 99% of what is in ToSes is in the law anyway. Except, of course, data collection beyond functional requirements. That would be a good thing to hide in long ToSes.


If you read the TechCrunch article you wouldn't have to guess where it was shown. There is a screenshot showing the prompt and its explanation: "Continuously upload info about your contacts like phone numbers and nicknames, and your call and text history." Your solution of hiding this information in a minimal terms of service is strictly worse.

My takeaway is that you can tell users exactly what you're doing and they'll still be outraged. So don't do creepy stuff.


I got my contacts uploaded at some point despite always making sure to reject their prompt. I'm sure people will claim it was just a "bug" or I must have just mistapped (which I have no reason to believe was the case, but how would I prove this), but either way, it's oddly convenient how it works out for them.


That is worrying. The iOS security model would have served you better; Facebook could never accidentally forget you’d said no.


Facebook probably knows if the user is drunk.

Ask for permission at 3am on Sunday. People will press Allow just to get done whatever they thought they were doing, and forget they ever pressed it.


> That is worrying.

Indeed. :\

> The iOS security model would have served you better; Facebook could never accidentally forget you’d said no.

Interesting, so you mean this is the case even if I uninstall and reinstall the app? What if I wipe the OS and reinstall (as "cleanly" as is possible)?


In the iOS security model there's no fixed grab-bag of permissions you accept when an app is installed. Instead, the app must explicitly ask for access to each protected resource on an as-needed basis while it is running. The request must happen when you try to use that particular feature, and the app is expected to keep working if you say no, just with the corresponding features disabled. So for example, on iOS WhatsApp only requests camera access when you try to take a photo in the app.

If you say no, facebook can't access the data. Facebook can't conveniently forget that you said no and access the data anyway - if you say no at the system prompt, the OS won't give the app access to that data in the first place.

The system isn't perfect - it turns out my mum wasn't getting chat notifications on her iphone because she doesn't know what notifications are and she's been saying no to the prompts. But I find it somewhat refreshing to see beginner users erring on the side of saying no, rather than always saying yes to every random prompt the computer spits out. Fail-private is better than fail-public.


I don't think Facebook ever "forgot" what I said, rather it was that I was reinstalling the app for whatever reason, so it really didn't have that information. That's why I asked what would happen if something similar happened on iOS.

Also note that Android's new security model (version 6+?) is pretty similar to what you described, but yes, I do believe this incident occurred on an older version.


Privacy is like eating your vegetables, or doing those push-ups every morning. Given any excuse, people will ignore common sense and the lizard brain takes over. Facebook knows this like no other. The outrage, too, comes from the lizard brain once it realizes the danger of the situation it is now in.


I think this should be amended to "don't do creepy stuff" in the cloud.

This is why encryption on our phones is so important. I want my phone knowing all sorts of "creepy" things about me (medical records, etc) so that it can be more useful. What I don't want is that information anywhere except under my immediate control.

Storing encrypted in the cloud is OK as well, I suppose, as long as the encryption key is only available to me.


The article mentions people who did not get that prompt and still had their contacts and calls uploaded. Could this prompt have been added later, so that people who used FB before it was added never had the chance to opt out?

I did not see an OS permission prompt in the screenshot, so if the app had local access, a bug in the settings could have reset/flipped the options and uploaded the data anyway, like how Windows forgets your privacy settings on updates.


One man's creep is another man's crop.


I don't know... are they not written in blood, so to speak?

For example, I assume all warranty disclaimers include "not even the implied warranty of merchantability or fitness for a particular purpose" because somewhere, in a legal dispute, "no warranty" was not enough.


And if the warranty-disclaimed item had been sold, then depending on the nature of the sale, such a term probably wouldn't hold up in many jurisdictions. Otherwise such a term would be carte blanche for fraud.


> nobody wants to waste time reading them

How about a law that requires the party presenting legal paperwork to show that the recipient could have read the document. Use an algorithmic method to estimate the reading time, take off ~33% to account for variance in reading ability, and declare that the minimum time that must be spent with the document.

If you cannot show that e.g. someone clicked "I Agree" at least 5 minutes (or whatever) after they received the ToS text, then the court will assume prima facie that they have not read the ToS and are not responsible for anything in it. The friction this would add to the checkout/signup process grows with the document size, creating an incentive to write short and simple ToS.
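A back-of-the-envelope version of that estimate. The 200-wpm reading rate and the ~33% discount are illustrative assumptions, not figures from any actual statute:

```python
def min_reading_seconds(tos_text, wpm=200, discount=0.33):
    """Estimate the minimum time a court would assume a reader needs
    for this document, per the scheme sketched above: time for a full
    read at the assumed rate, minus a discount for fast readers."""
    words = len(tos_text.split())
    full_read_s = words / wpm * 60      # seconds for a complete read
    return full_read_s * (1 - discount)  # knock off ~33% for variance

# A hypothetical 5,000-word ToS: 25 minutes at 200 wpm,
# so roughly 16.75 minutes after the discount.
print(min_reading_seconds("word " * 5000) / 60)
```

The incentive mechanism is visible directly in the numbers: halve the word count and the mandatory wait halves with it.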


No thank you.

How about present your TOS in a form that is understandable by the average user in a span of 2 minutes? Without being a lawyer...(and I doubt you have 2 minutes at all)


The OP's point was that the app creator would be forced to consider how much text to put in the EULA: with such a forced-timeout law, if your EULA means the user has to wait 10 minutes, you have to think about removing some crap.

"Don't do creepy things" should be a different law.


I think that’s a good way to create an incentive to write short ToSs. They shouldn’t be legally enforceable unless people are actually reading them; nothing good can come of contracts nobody reads.


Various ToSes already attempt this. Either by forcing you to scroll to the end, timing you, or by putting verification next to each paragraph. Still, no one reads them. The problem is putting a legal contract before a leisure app. I'd rather see legislation that requires the reading level and length of the ToS to match the reading level and attention span of the average user.


The problem, as I’m sure you’re aware, is that the reading level and attention span in this case are minimal. People click through ToSes just as they click through installers. Even if the ToS was 5 words on the screen in 36pt bold font saying “WE COLLECT: Text, call, and contact data”, people would still not take the time to read or understand or care about what that actually means. They want to see cat / friend / baby photos, and will mash buttons until that happens, onboarding processes and tutorials be damned.


And what happens if some big company asks you to 'polish' the ToS a bit, to make it more 'lawyer style', because otherwise they won't spend an additional $10M on your business?


This is the real bullshit.

Other people's lawyers are the worst.


Just for the record; your lawyers are other people's other people's lawyers.


Statistically speaking, they're probably not the same "other people's lawyers." As in, you're probably small and they're probably big.

Big firms acting for small clients generally doesn't work out so well for the small client.


I got this objection once for a mere 100k. The most enterprisey client stalled negotiations for quite some time because our ToS wasn't long enough. There was no issue with the content, just the length.


Wtf, this is a thing?


Personally, I'd say no because I value the trust of my users over corporate money.


FB can't collect call and SMS data from my browser ;)

It was so patently obvious that they would do this with any app that I am surprised there's a stink being made about it now.

I don't understand why people fall prey to the "install our privacy invading app" when the service works just fine via a browser.


Should we talk about Google's shoveling of Google Now, err Feed, err Assistant, constantly asking us to pay with our location, search and voice history?

And this is pre-installed on the phone, with the same dark patterns that would push any non-privacy conscious person to activate it, even by mistake.


It's time for a decentralized social network that's not funded by advertising money and therefore has no need to collect data. The only platform I know of that does something similar is Mastodon... are there other alternatives?


I think we should go one level deeper and build an open, low-level decentralized protocol that allows others to create networks and 'feeds' on top. The data should belong to the person, and the app/website should provide the features.


Does the damn opt-in nag users every time the app opens? Something like that awful LinkedIn prompt to vacuum up your phone's contacts every time you dare use the app?


Facebook should seriously consider offering premium accounts with no advertising and extra benefits. Extra privacy guarantees. Invitation only, gradually roll it out.


I don't understand the context of your post.

"Pay us money, and we won't allow companies to manipulate you via advertising."

Or

"Pay us money, and we won't use every tool at our disposal to suck every bit of possible data out of your life"

Both of them are gross, and no user should be subject to either of those dilemmas. I'm not saying that every product should be free to use without any cost; advertising is absolutely necessary to keep certain products and services free. Some companies take it too far, though.

Put it this way: eating beef requires that a cow is slaughtered. Should we go for the painless, instantaneous kill, or since it's going to die anyway, should we vivisect it?


Granting permission isn't a default setting obfuscated by deliberately poor interface design and reset every time you decide to update your API.


Facebook will fall at some point, it’s a disease.


We all know that a "denial" means jack shit nowadays.


meaningful consent is a thing.


Facebook is in denial


"Without permission" is the crucial part. Of course, you have to give the app the permission to do it when you install it, otherwise it will refuse to run. That is why the Android permissions model is fundamentally broken.


Could you explain how permissions could be better achieved? Also, could you provide examples of platforms with good permission/security model?


CopperheadOS[1] has a good permission model (compared to stock Android). Notice that they had implemented a better permission model over Android's even in older Android versions which didn't have runtime granular permissions. It proves what the ad company Google itself could have implemented in stock Android had they not been an ad company.

CyanogenMod's Privacy Guard[2] - basically a proxy that sits between the apps and the ContentProviders, and provides user-configurable "fake data" - was another good approach. Not sure if its successor LineageOS has this feature working - a search shows user complaints that it doesn't work as expected - but I hope it has retained the feature.

I feel a distro that combines both approaches would have been best. Both approaches also prove that there was no technical impediment to implementing them in stock Android.

[1]: https://copperhead.co/android/docs/usage_guide#permission-mo...

[2]: https://www.androidcentral.com/cyanogen-os-privacy-guard-kee...


> It proves what the ad company Google itself could have done had they not been an ad company.

I think you make fair points, but I think this does not prove anything. CopperheadOS has a different user base than Android. A permission model that CopperheadOS users understand (e.g., CopperheadOS users are likely to be more technical) may not work for Android user base.


Their existence proves what was possible in stock Android. Why it didn't happen that way is open to speculation.

Personally, I feel it was because Google's main business did not, and does not, provide any incentives to design stronger permission and privacy models because it itself depends on collecting information about users.

Was usability also a factor? It may very well have been.

However, I disagree with thinking that uses usability as an excuse to treat a user base numbering in the hundreds of millions as a homogeneous set who don't know anything and can't learn anything new.

People fall in a spectrum of capabilities, and more importantly, every individual is capable of moving around in that spectrum with time.

For example, a non-technical user who started out giving one app all permissions may realize their mistake when their email or phone number turn up in google searches, and become more careful with other apps.

Stock Android could have catered to that and standardized on very granular runtime permissions as the default model. They already had existing ACL models like iOS, Windows policies, and SELinux to copy from. They could have left the simplification to the market - the equipment manufacturers and users - to decide. But stock Android made it a binary, all-or-nothing choice for a long time, and left any additional protection to equipment manufacturers, who of course didn't implement anything either, because they too had no incentives to protect user information or standardize the security APIs.

Even now, Android's runtime permissions, while comparatively more granular, are not granular enough, and in practice become a binary choice where some apps refuse to work if a particular permission is not granted.

I have also noticed how Google, in the Play Store, keeps the permission information hidden away in an obscure location at the bottom of the page, and provides no way to filter apps by permissions. How do I find an app that lets me draw on images without asking for contact book access? Not possible without opening every app's page and checking its permissions. I usually try a bit, give up, and head back to GIMP on the desktop.

Is that for better usability? Does better usability mean keeping users ignorant and uneducated? I think it's not a good approach, and based on anecdotes from my personal network, I also think it's a mistaken assumption.


It seems like it should be an obvious given that just installing an FB app on your phone shouldn't hand FB a record of all your phone calls and text messages. Which is what appears to have happened on Android and nowhere else. The sensible question really is 'wtf is wrong with Android' not 'who and how is somehow managing to do this better'.

Nobody asks about cars that don't come with a face stabbing device nor writes long comparative reviews about the best car to get if you prefer not to get stabbed in the face.


> It seems like it should be an obvious given that just installing an FB app on your phone shouldn't hand FB a record of all your phone calls and text messages.

Has it been established anywhere that this was not the case (i.e. that just installing the app uploaded all calls and text messages)?


The claim is that it had permission to do that right after installation, due to the way Android permissions worked at the time.


Read the article and explain how they could have been worse!

For anyone with a basic appreciation of "honesty" or "ethical behaviour", it is fairly obvious that simple improvements would include:

Don't deceive the user about why the app is requesting permission. Don't deceive the user about what you will do with the permissions. Provide the user with an app that does what you say it does, for the purposes you say it is for.

Examples of platforms with a good model: in this context, it is well documented that Apple has been superior to Android.


iOS is pretty good. The onus for dealing with rejected permissions (in many cases) is put onto the app developer.

For example, if you make a photo editing app, you need to request access to the user's photo gallery at runtime of the app.

If they refuse, it's fully expected that your app will continue to work with reduced functionality. IIRC you will have trouble getting approved for the app store if your app breaks after permissions are refused.


But requesting permissions as they're needed is how it works on Android too, since M (late 2015).


Yes, "was broken" seems like a better description. Still, it's pretty bad it lasted as long as it did - Symbian S60 already had that model before Android was even a thing.


Better questions are:

- What is the minimum amount of data sharing required?

- What happens when permission is denied? Does the app close?

- Do people even understand what is being shared?

- Are these click through "consent" screens really giving informed consent? Are they deceptive and biased to get users to give permission without really understanding what is going on? ("Text anyone in your phone" doesn't sound like "Continuously upload SMS and call history." Nor does a giant blue button versus no button, look like there's even an option to say no.)

- Why is this data even allowed to be shared? (I understand that SMS and call data has never been shareable on iOS.)


I notice the Facebook response is all written in the present tense - technically not specifying whether they were abusing the laxness of Android API level 16 up until Oct 2017...

And I trust everything Facebook says precisely as much as Zuckerberg told us all we should with his much-quoted quip: "They trust me. Dumb fucks."


I feel like if you're going to quote a very inflammatory and controversial "quip", you should at least provide a source. Maybe I'm the only one on HN who hasn't heard of it, but even so, sources shouldn't be omitted.



So, do you feel it's irrelevant to mention that it was said in a chat conversation, in 2004 when Zuckerberg was 20 years old? I'm not saying that makes what he said completely fine, but it's not irrelevant for context.


I did say it was "a quip" - and I'm sure quite a few people here understood the context - apologies for those who didn't.

For me, it's kind of a geek culture equivalent of a pop culture reference - not quite as ubiquitously recognised as "Oh my god! They killed Kenny!", but probably (in this audience) as recognisable as 'Quoth the Raven, "Eat my shorts!"'


I wasn't the one you originally replied to but it looks like you are shifting the goal posts here.

> Maybe I'm the only one on HN that hasn't heard about it, but even so, sources shouldn't be omitted.

You asked for sources and I merely provided one such source as "proof" that Zuck actually did use those words in the past.


Sorry, you're right that I mixed you up with whoever posted the original comment. I do think context matters though, of course that makes no sense to say specifically to you.



If journalists are going to rehash Facebook blather word by word, syllable by syllable, then I expect free endorsements of alternatives. It's a small public service in exchange for the free content and clicks.



