FB _already_ filters out updates based on some black-box algorithm. So they tweaked the parameters of that algorithm to filter out the "happier" updates and observed what happened. How is this unethical? The updates were posted by the users' friends! FB didn't manufacture the news items; they were always there.
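A minimal sketch of the kind of tweak being described (every name here is an assumption; the real ranking algorithm isn't public):

    # Hypothetical illustration: down-weighting "happier" posts in a ranked feed.
    # base_rank and sentiment stand in for whatever scoring FB actually uses.
    FEED_SIZE = 50

    def rank_feed(posts, base_rank, sentiment, positive_weight=1.0):
        # positive_weight < 1.0 down-ranks positive updates (the experimental
        # condition); the posts themselves are never altered or manufactured.
        def score(p):
            return base_rank(p) + positive_weight * max(0.0, sentiment(p))
        return sorted(posts, key=score, reverse=True)[:FEED_SIZE]

    # Control feed:      rank_feed(posts, base_rank, sentiment)
    # Experimental feed: rank_feed(posts, base_rank, sentiment, positive_weight=0.5)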
I detest FB as much as the next guy, but this is ridiculous.
In this case, I don't think there was actual risk, but just from reading the PNAS paper it doesn't sound like the study went through the proper process. If it was reviewed by an IRB, then it did go through the proper process and it's ethically sound, though still a PR nightmare.
Facebook altering its black-box proprietary algorithm for what to show is something it does every day. That can't be unethical by itself.
It's possible that for it to be published, a higher standard would be needed. That is, the actions they took were ethical, but perhaps inappropriate for scientific publication.
But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.
> It's possible that for it to be published, a higher standard would be needed. That is, the actions they took were ethical, but perhaps inappropriate for scientific publication.
> But, if that were the case, the peer reviewers and the journal in which it was published should have flagged that. That it was published shows they didn't have any significant concerns.
This would not be a scientific ethics issue if they had explained their IRB review in the manuscript. In my experience, it is so unusual not to do this that it is reasonable to assume something funny is going on.
For example, the journal could be incentivized to look the other way in order to publish a high publicity article. I'm NOT saying that's what PNAS did, but just because something is published doesn't mean it's ethical. (See: https://www.ncbi.nlm.nih.gov/pubmed/20137807)
Fair point, yes, it is not evidence of it being ethical, or even of PNAS believing it was ethical. But it's reasonable to assume that a high-profile journal like PNAS is extremely aware of the relevant ethical considerations, and likely (but not certainly) would not violate them.
Then why did they let the manuscript go to press without the "this study was reviewed by the University of Somewhere IRB (protocol #XXXX) and was ruled exempt" sentence?
I see this enough in papers that it seems pretty standard to me, and it especially makes sense in a paper where the editors think there are potential ethical issues.
If I were an author of this paper, I would have spent a sentence or two explaining why there was no risk to human subjects, etc.
Addressing this by basically saying it's OK because of a ToS that no one reads is the worst possible way to handle it. It seems to me more like no one thought about it at all than that the editors carefully considered it. I just don't see how you get from recognizing big ethical issues to not even addressing them in the manuscript.
Well, it could be, but (1) a commercial site optimizing itself seems quite reasonable, and (2) the public knew it was optimizing itself and did not complain in any significant way.
That "optimization" might be reasonable does not imply that any method is, "optimization" is not an actual activity, but a label for the purpose of a wide variety of things you can do. Murdering your competitors is also good for "optimizing" your bottom line, but still unethical, to put it mildly.
Also, I strongly doubt that the public has any clue what Facebook is doing. For all the public knows and understands, Facebook could be employing magic message fairies.
But why is it unethical when done for science, yet suddenly OK when done by news stations, politicians, advertising agencies, motivational speakers, salesmen, etc.?
> when done by news stations, politicians, advertising agencies, motivational speakers, salesmen
Each of those is clearly identified.
If Facebook put a little icon next to each "experimental study" status update disclosing the party that funded the study, it would be different.
Even in "science", some studies are funded and others are not.
Are you even serious? Claiming that mass media manipulation is overt and identifiable... HN stops being self-correcting when one happens to hate Facebook, Google, or LinkedIn. (Not that I like them personally.)
Any ad, political speech, sales pitch, PR/journalism, etc. that you see today is identified by a named author/publisher/vendor byline. Assessment of possible manipulation is left as an exercise for the viewer, who can decide whether to ignore the communication.
There are any number of technical means by which:
(a) opt-in permission could be requested in advance of a study
(b) opt-out option could be advertised in advance of a study
(c) start and end dates of non-optional study could be disclosed
This is about CHOICE of participation, not the NATURE of the study. (A rough sketch of what (a) could look like follows.)
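As a rough illustration of (a) and (c) (all names hypothetical; nothing here is a real Facebook API), enrollment could be gated on an explicit per-user flag:

    # Hypothetical sketch: opt-in enrollment with disclosed study dates.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        research_opt_in: bool = False  # set via an explicit settings checkbox

    def enroll_subjects(users, study_start, study_end, notify):
        # Enroll only users who opted in, and disclose the study window.
        subjects = [u for u in users if u.research_opt_in]
        for u in subjects:
            notify(u, f"You are enrolled in a study running {study_start} "
                      f"to {study_end}; you can opt out at any time in settings.")
        return subjects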
"Who can decide whether to ignore the communication"
Wow... Not understanding basic theories of communication and human irrationality: people cannot critically process every message they take in, so much of it gets accepted without critical thought.
"opt-in permission could be requested in advance of a study"
Wow... Not understanding basic theories of psychological and sociological studies, which hold that subjects should not be informed about a study, or their behavior will change.
Standard rules of ethics for experiments on human subjects say that (with a few exceptions) subjects should always be informed about the study. If that changes the subjects' behavior such that the study is no longer valid, it's the researcher's obligation to come up with a better design that works in the face of informed consent, or to give up and study something easier.
It's been fairly well established that "I wanted to learn something" isn't an adequate excuse for doing things to people without informing them or receiving their consent.
Before you talk about peoples' "clueless 'ethics'", you might want to read the professional standards of the field, for example the American Psychological Association's Ethics Code. The section on "informed consent to research" is here: http://www.apa.org/ethics/code/index.aspx?item=11#802
Users of Facebook see it as a neutral platform for communicating with people they know. Consumers of the things you listed know it's top-down messaging coming from people they don't know or necessarily trust.
So, there is a difference. It's still a complex question, though -- is filtering or prioritizing based on emotional sentiment really different from what they are already doing with inserting ads and such?
I see it this way: they did a study, so it's fair.
Were they to filter posts by emotional sentiment as a part of their normal operations, I'd find it unethical, or at least something I might not want. But I'm totally fine with them subjecting users (including myself) to random research studies, as those are temporary situations, and with Facebook's data sets, they can have great benefits for humanity.
Perhaps Facebook should provide an opt-in option for users to be subjects of various sociological experiments at unspecified times. I'd happily select it.
All Facebook users have agreed to be part of research experiments. It's in their ToS. If you got people to agree to allow you to enter their homes for research, it wouldn't be unethical for you to do so.
ToS are a copout, and in my eyes a tragedy of the modern legislation around software services.
Every single company in the world knows very well that 99.99% of their userbase won't read the ToS, and they use this to do whatever they want with their users' information and privacy.
There need to be dramatic improvements in that area.
Oh lord, not this "break into the house" fallacy again.
Your FB profile is not your house; it is just some data you have shared with FB. FB decides what to do with the data: how to share it, where to share it, when to share it, who to share it with, etc.
Everybody knows that FB _already_ manipulates the feed to change your mood: to make you more engaged with the site; to make you click more on ads; etc. It's been doing this basically forever.
But they _already_ manipulate the feeds with the specific intent of increasing user engagement: more views, more ad clicks, more time spent, etc.
As I understand it, the American scholarly tradition uses a 'bright line' definition of human experimentation that stresses getting ethical review board approval for pretty much everything involving humans other than the experimenter. Doing anything that needs review board approval without getting review board approval is seen as highly unethical, even if approval would clearly be granted were it requested.
For example, I once saw a British student e-mail out surveys about newspaper-buying habits; one was sent to an American academic, who replied to the student's supervisor saying the student should be thrown out of university for performing experiments on humans without ethical review board approval.