Peter Waldkirch is a second-year LL.B. student at the University of Ottawa.
The rapid rise of online social networks (can you believe that Facebook only opened itself to the general public in 2006?) has already raised many privacy-related issues. For example, I would suspect that many readers of IPilogue have already heard stories of people losing their jobs over careless online behaviour (my personal favourite remains that of a BC NDP candidate who was forced to withdraw from the provincial elections over racy Facebook photos; that's a hard one to live down).
The Canadian Privacy Commissioner's last annual report to Parliament focused largely on the privacy issues raised by the pervasive use of social networking sites amongst Canadian youths (I summarized the report here on IPilogue). At the heart of many of these discussions lies the tension between traditional concepts of privacy on the one hand and the desire to share personal information (which is what leads people to social networking sites in the first place) on the other. Clearly, new conceptual approaches are needed. One interesting foray in this direction is Professor James Grimmelmann's paper "Privacy As Product Safety" (available on SSRN).
After a brief introduction to some of the pitfalls to thinking about privacy in the online world, Grimmelmann poses his fundamental question: “is the loss of privacy in social media something the law ought to worry about, and if so, how?” Addressing the first part of that question, he identifies four “Myths of Privacy on Facebook”: that users don’t care about privacy; that they make rational privacy choices; that their desire for privacy is unrealistic; and that regulating Facebook as if it were a database would be sufficient (the last, according to Grimmelmann, is really a half-myth – a distraction, rather than actually false).
The first myth, he points out, seems to have a fair amount of traction. This goes back to what I mentioned above as one of the central tensions of current privacy debates: people want to share personal information. That's the whole point of signing up for something like Facebook in the first place. But for Grimmelmann, this is far from the end of the story, and shouldn't be taken to mean that people have given up on any sense of privacy; that's "an elegant theory, except for the inconvenient fact that it doesn't fit the available data." He points out that there have already been several user revolts on Facebook over privacy changes – from the rollout of the News Feed to the ill-fated Beacon. Although this doesn't really settle the issue of what privacy means in today's world, it's enough to show that many users do, at least, care.
The myth of rational privacy choices is deeply undermined by the simple fact that privacy policies can be complicated and confusing beasts; even a careful user can fail to successfully navigate a complicated array of privacy options. Facebook's former policy of automatically sharing a user's photos with their entire geographic network is a case in point.
Grimmelmann's discussion of the myth that the desire for privacy is unrealistic is very interesting. Here, he points out that the idea of "privacy" only has real meaning when embedded in actual social contexts. He cites the recent fad of women posting the colour of their bra in their status updates (purportedly to raise awareness of breast cancer – though I, for one, only learned of that long after the fad had ended). If a co-worker were to ask a colleague the next day what colour her bra was, that would likely be a form of workplace harassment. In other words, the same datum has different privacy values depending on the context in which it is embedded. The issue is not that privacy is dead, but that we need to consider the ways in which various practices are redefining the idea of privacy itself.
The final myth that Grimmelmann addresses is the idea that thinking about Facebook as a database will be sufficient to protect user privacy. He identifies “limited data collection”, “full disclosure”, and “no secondary use” as the key concepts in a database-centred model of privacy protection, and argues that while they are important, they don’t really get to the heart of the “user-user relationships on Facebook.” It’s hard, for example, to define what the “purpose” of sharing personal information on such a site really means – people use Facebook to “connect and share data”, but actually identifying the scope of that can be problematic. Grimmelmann argues that if “secondary use” just means “’any use not originally contemplated by the user,’ then all we’ve managed to do is restate the problem. We got into this mess precisely because users have shown themselves unable to predict all the ways in which their information might be seen.” Still, I would suggest that, despite their drawbacks, these sorts of database-centric protections can be powerful in addressing contemporary privacy issues. The work done by Canada’s Office of the Privacy Commissioner (responding to a complaint launched by the Canadian Internet Policy and Public Interest Clinic), which has led to Facebook revising its privacy practices globally, shows that there is still potential here.
Up to this point, Grimmelmann has simply been arguing that privacy on sites such as Facebook should be legally protected and that we need new ways of thinking about online privacy. To aid us in this rethinking, he proposes using concepts drawn from product-safety law. Just as the manufacturers of physical goods can be liable for design or other flaws in their products, so too should the providers of online services. For example, by focusing on consumers' expectations of the product, we can gain insight into which privacy practices result in user backlash. He uses the recent example of Google Buzz (and the privacy backlash it generated) to show how product-safety concepts can be usefully applied to contemporary privacy issues.
This is certainly an interesting idea, and should definitely be pursued. Hopefully, thinking about privacy in terms of product safety can help contribute to (what I would suggest is) a badly needed rethinking of privacy in today's world. Some issues remain, however. For example, I'm not certain that a "consumer expectations" analysis will be easier to apply than the "secondary use" analysis he critiques earlier – both ultimately lead to the question: "what exactly did the user consent to?" It seems to me that this remains the real heart of the issue. Take, for example, the heated response generated when Facebook first introduced its News Feed. By today's standards, that controversy seems almost quaint. Ben Parr, one of the leaders of the anti-News Feed revolt, certainly feels differently about the matter now, and writes that "I'm not afraid of losing my privacy anymore."
Grimmelmann doesn’t argue that product-safety style thinking is sufficient on its own, but rather that some form of tort liability inspired by product-safety law could be a useful part of a larger strategy that includes regulation and user education. It’s a fascinating idea, and one that those interested in conceptualizing privacy should certainly check out.
One Response
Peter, thanks for your interesting post and coverage of Professor Grimmelmann’s work. I agree with your criticism that it would be difficult to identify “consumer expectations” when it comes to social networks like Facebook. But this is indicative of a bigger problem with using product liability as a model to regulate the online world: how do you define harm?
For example, with traditional consumer products, users would expect to use a product as intended by the manufacturer without physical injury to themselves or others. If someone were physically injured, that would be a very tangible form of harm and one for which it would be possible to calculate financial compensation. The problem with the online world is in identifying which "harms" should be recognized and how to compensate for them financially. Is harm limited to "financial harms" such as credit card data theft, or should it also include such things as publishing a user's purchase habits? With respect to the latter, some users may not in fact define this as a harm (which goes to your point about identifying user expectations). Furthermore, how can one quantify the financial damage from having one's purchase habits published?
I have discussed privacy and harm in some of my previous IPOsgoode posts: http://www.iposgoode.ca/2010/01/madrid-privacy-standard-still-in-its-infancy/ (see my second point).
http://www.iposgoode.ca/2009/11/ip-osgoode-speaks-professor-jacqueline-lipton-on-privacy-in-web/ (see point four)