
Can you further explain how this works?

1. FB does something wrong that affects 700k of its 1b users at the time.

2. FB loses any moral authority to object to things that violate their terms of service.

That is a bit of a leap, but I am open to hearing what I’m not understanding about your thought process.




Not the op, but I'll try:

1. FB didn't just "do something wrong" once. They have repeatedly engaged in research without informed consent, as the OP mentions. These are not isolated incidents, but rather Facebook's modus operandi.

2. NYU uses volunteers to collect information on what Facebook ads are shown to them. A university project using volunteers to research how people are targeted for political ads.

3. Facebook is trying to use its ToS so that its political targeting remains opaque.

I do not see any leaps here. I do not understand how a company such as Facebook has any moral authority to object to _anything_, much less to this research, but I'm open to hearing what I'm not understanding about your thought process.


> FB didn't just "do something wrong" once. They have repeatedly engaged in research without informed consent, as the OP mentions. These are not isolated incidents, but rather Facebook's modus operandi.

Are you referring to AB testing here?

Are you aware that every internet service does this, on an incredibly regular basis, without informed consent?

You can argue that this is unethical, but you should then be calling for online experimentation without informed consent to be banned.


No, it was actual experimentation to affect emotional states, not AB testing. Just from a random Google search: https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...


What's the difference?

Speaking as a psychologist who's extremely familiar with both experimentation to affect emotional states and AB testing, I really, really don't see one.

I personally think there should be ethics reviews for AB tests, but that's a fringe position in the industry right now.


You can look up "Surveillance Capitalism" by Zuboff for numerous examples of Facebook's overreach.

Consider "A 61-Million-Person Experiment in Social Influence and Political Mobilisation", the 700,000 people whose emotional states Fb experimented with, the documents which revealed how Fb tried to pinpoint when young Australians and New-Zealanders were vulnerable to advertising.

The difference between AB testing and Facebook's experiments should be obvious to anyone familiar with those experiments, as the aim has been to research and alter human behaviour in general, whereas AB testing has to do specifically with understanding user engagement and satisfaction with regard to the product.

Trying to dress up this experimentation as AB testing, or as no different from AB testing, is incredibly dangerous. As a psychologist extremely familiar with both this kind of experiment and AB testing, you really, really should see a difference.


From a technical point of view, they are 100% exactly the same. You use the same tools and methods to accomplish similar goals (to understand the reaction of people to particular treatments/interventions).

This article, right: https://www.nature.com/articles/nature11421 ?

So, this is a messaging experiment, which is pretty standard within psychology. The only interesting thing about this (and the only reason it's in Nature) is the size, not the experimental design.

> The difference between AB testing and Facebook's experiments should be obvious to anyone familiar with those experiments, as the aim has been to research and alter human behaviour in general

This is what AB tests do. You change the colour of the button, and the conversion rate (i.e. the proportion of people that give you money) goes up or down. How is this different from people taking an action (self-reported, btw, so potentially garbage) around an election? Seriously, I would love to know the actual difference here.
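To make the technical claim concrete, here is a minimal sketch in Python of the analysis that sits under either kind of experiment; the user counts, rates, and function names are made up for illustration. You randomly assign users to a control or treatment condition, record a binary outcome, and compare the two rates:

    import random
    from statistics import NormalDist

    # Hypothetical data: randomly assign each user to a condition and record
    # a binary outcome. The outcome could equally be "converted", "posted a
    # more positive status", or "reported voting".
    random.seed(0)
    users = range(10_000)
    assignment = {u: random.choice(["control", "treatment"]) for u in users}

    def simulated_outcome(condition):
        # Placeholder: pretend the treatment nudges the base rate from 10% to 11%.
        base_rate = 0.10 if condition == "control" else 0.11
        return random.random() < base_rate

    outcomes = {u: simulated_outcome(assignment[u]) for u in users}

    def group_counts(name):
        group = [outcomes[u] for u in users if assignment[u] == name]
        return sum(group), len(group)

    # Two-proportion z-test comparing the groups.
    x1, n1 = group_counts("control")
    x2, n2 = group_counts("treatment")
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    print(f"control={p1:.3f} treatment={p2:.3f} z={z:.2f} p={p_value:.3f}")

The only things that differ between a button-colour test and the Nature experiment are the intervention and the outcome variable; the random assignment and the statistics are the same.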

You may not like Facebook; hell, I may not like Facebook. But it's just inaccurate to say that they are uniquely bad because they run experiments on people, when my entire career, both in academia and in industry, has been running experiments on people and that's somehow OK.


I don't see how the technical point of view is relevant.

I listed three examples that should make it obvious. For some reason you refuse to accept the obvious: that these kinds of tests try to alter real-world behaviour, not behaviour with regard to the product.

There's a big difference between changing a button's color on a page to see how user engagement changes and trying to find when teenagers are in a more vulnerable emotional state in order to serve them ads. If you can't see the obvious, I'm afraid there isn't anything more to say.


I'm curious as to the metrics here: is emotionally manipulating 700k users less bad than manipulating 1 billion? How about 10 users? Or 1 user?

Also: is it more bad or less bad if third parties do it (political advertising, disinformation, Cambridge Analytica, etc.)?


I think it's pretty neutral to say that harming fewer people is less bad, assuming the amount of harm is the same. Do you disagree?



