A few pieces on the Facebook Experiment. I’m still mulling over my view.
Paul Bernal: The Facebook Experiment: the ‘why’ questions…:
Perhaps Facebook will look a little bad for a little while – but the potential financial benefit from the new stream of advertising revenue, the ability to squeeze more money from a market that looks increasingly saturated and competitive, outweighs that cost.
Based on the past record, they’re quite likely to be right. People will probably complain about this for a while, and then when the hoo-haa dies down, Facebook will still have over a billion users, and new ways to make money from them. Mark Zuckerberg doesn’t mind looking like the bad guy (again) for a little while. Why should he? The money will continue to flow – and whether it impacts upon the privacy and autonomy of the people on Facebook doesn’t matter to Facebook one way or another. It has ever been thus….
(Via Paul Bernal’s Blog)
A contrarian view from Rohan Samarajiva: Confused objections to Facebook emotional contagion research:
I am puzzled by the predominantly negative reaction in the mainstream media (MSM), though perhaps less in blogs and such, to the manipulation of Facebook content in the recently published research article.
It seems to me that MSM’s reaction is hypocritical. They manipulate their content all the time to evoke different emotional responses from their readers/viewers/listeners. The difference is that conducting research on resultant emotional changes in MSM is not as easy as on Facebook. For example, magazines have used different cover images, darkening or lightening faces and so on. Their only indicator of success is whether version A sold more than version B. Not very nuanced.
(Via LIRNEasia)
And Ed Felten: Privacy Implications of Social Media Manipulation:
To be clear, I am not concluding that Facebook necessarily learned much of anything about the manipulability of any particular user. Based on what we know I would bet against the experiment having revealed that kind of information about any individual. My point is simpler: experiments that manipulate user experience impact users’ privacy, and that privacy impact needs to be taken into account in evaluating the ethics of such experiments and in determining when users should be informed.
(Via Freedom to Tinker)
And finally from Robin Wilton: Ethical Data Handling and Facebook’s “Emotional Contagion” Study:
Once, in a workshop, while discussing mechanisms for privacy preference expression, I said I would be happier for data subjects to have some means of expressing a preference than none. An older, wiser participant made the following wry remark: “That only brings a benefit if someone is prepared to give weight to their preference. If not… well, ten million times zero is still zero”. And that’s the weight Facebook appears to have given to the legitimate interests of its data subjects.