In my last privacy post I identified certain cloud-computing privacy issues that the free market may be able to regulate on its own. This post outlines a risk-based approach to analyzing the privacy issues that legislation may be required to address.
A risk-based analysis is beneficial in that it changes how a problem is viewed and the type of solution sought. With respect to privacy in cloud computing, instead of trying to create a "Cadillac" solution with a worldwide integrated system of service providers and third-party credential certifiers, a risk-based approach seeks to accomplish only what is necessary and efficient. It does so by recognizing that time and resources are finite, and that some privacy issues are more important to manage than others.
Risk has two components. The first is the probability, or likelihood, of an event occurring. The second is the severity of the consequences if the event does occur. Because risk is the product of the two, an event with a low probability but high severity may expose someone to the same amount of risk as an event with a high probability but low severity.
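To make that comparison concrete, here is a minimal sketch in Python. The events and dollar figures are hypothetical, chosen purely to illustrate how two very different events can carry the same exposure:

```python
# Expected risk exposure as the product of probability and severity.
# All numbers below are hypothetical, chosen only for illustration.

def expected_risk(probability: float, severity: float) -> float:
    """Return exposure as the probability of an event times its cost."""
    return probability * severity

# A rare but costly event: 1% chance of a $100,000 loss.
rare_but_severe = expected_risk(probability=0.01, severity=100_000)

# A common but cheap event: 50% chance of a $2,000 loss.
common_but_mild = expected_risk(probability=0.50, severity=2_000)

print(rare_but_severe)   # 1000.0
print(common_but_mild)   # 1000.0 -- the same exposure
```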
Once a risk is identified and decomposed, the benefits of taking it and the costs of managing it can be assessed. A risk-based analysis considers both the rewards of taking a specific risk and the costs of managing that risk. Four options are available for risk management:
1) Accept: Risk can be accepted as a potential cost of engaging in an activity. This is a good option for risks that are too small or remote to be of concern. Accepting a risk may also be the only option if the costs of managing it are prohibitive. Frequently an opportunity-cost analysis is used to determine which risks should be actively managed and which can simply be accepted (a toy version of that comparison follows this list).
2) Transfer: Risk may be actively managed by transferring it to a third party through, for example, an outsourcing arrangement or an insurance policy.
3) Mitigate: An alternative to transferring risk is to mitigate it by putting controls or preventive systems in place.
4) Avoid: Finally, risk may be avoided by not engaging in the activity that creates the risk in the first place.
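As a rough illustration of the opportunity-cost comparison mentioned above, the following sketch picks whichever of the four options carries the lowest expected cost. The figures and the cost models for each option are invented for this example, not drawn from any real assessment:

```python
# A toy opportunity-cost comparison across the four options.
# All figures are hypothetical; a real assessment would also weigh
# the benefits foregone by avoiding the activity altogether.

probability, severity = 0.05, 40_000    # the hypothetical risk being managed
expected_loss = probability * severity  # exposure if the risk is accepted

options = {
    "accept":   expected_loss,               # bear the exposure as-is
    "transfer": 2_500,                       # e.g. an insurance premium
    "mitigate": 1_200 + 0.2 * expected_loss, # controls cut exposure to 20%
    "avoid":    3_000,                       # value of the activity given up
}

best = min(options, key=options.get)
print(best, options[best])  # mitigate 1600.0
```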
One of the privacy issues Reshika Dhir identified in her comment on my last post, and one that has been widely criticized in the media, is Facebook's information retention policy. Applying a risk-based analysis to this issue may provide further insight into whether, and how, this risk should be managed.
Facebook's privacy policy gives Facebook the ability to retain a copy of the data a user posts or generates on Facebook even after the user deactivates their account. Facebook's "The Information we Collect" policy states: "You may remove your User Content from the Site at any time. If you choose to remove your User Content, the license granted above will automatically expire, however you acknowledge that the Company may retain archived copies of your User Content." The "User Content Posted on the Site" section of the Terms and Conditions likewise states: "You understand and acknowledge that, even after removal, copies of User Content may remain viewable in cached and archived pages or if other Users have copied or stored your User Content." And the "Changing and Removing Information" section of the privacy policy states: "Individuals who wish to deactivate their Facebook account may do so on the My Account page. Removed information may persist in backup copies for a reasonable period of time but will not be generally available to members of Facebook."
At face value this practice may offend users' sensibilities. It seems unjust that users cannot control the information they have posted and generated on Facebook. But what are the true risks and benefits of Facebook retaining this information?
The risk a user is exposed to when their account is deactivated appears to be no greater than the risk while the account is active. The account information does not change; Facebook merely retains it. Furthermore, it appears that Facebook does not continue to serve deactivated-account information to the network. Rather, Facebook notes that user information may be archived within its system, and that some users may have copied, saved, or cached Facebook content to their own computers, putting it outside of Facebook's control. In terms of the two components of risk, the severity remains constant, but the probability of a privacy-compromising event occurring is diminished once an account is disabled, because Facebook no longer actively provides the user's information to the network.
Another potential risk is that a user may not be able to "turn over a new leaf" and erase their Facebook past. A situation may arise in which a user no longer wants others to see the acts they have committed, the pictures they have taken, or the things they have said. While such a risk seems remote, it could have severe repercussions for a user's career and social network.
While this may be disadvantageous to a user, there are benefits to society that should not be overlooked. First, retention acts as a deterrent to online misbehavior: closer social scrutiny by a community may give users an incentive to act as good citizens and refrain from misconduct, and the threat of having information retained in perpetuity furthers this end. Second, a user's content is frequently generated with the aid or assistance of other users; examples include posting and tagging photos, messaging, and creating groups. Should all of that content be removed irrespective of who else authored it? Lastly, some users may want to return to Facebook and continue with their profile.
Based on the popularity of Facebook within Canada (currently around 7 million users), it can be inferred that most users accept the risk as a cost of staying connected. Users' options to transfer this risk are limited. Mitigating it is also difficult, since there is always the possibility that other users will save Facebook data on their own computers. The best mitigation strategy is therefore to take full advantage of Facebook's privacy settings and to carefully scrutinize any information before posting it to a profile. Lastly, there is always the option to avoid the risk by using a different network, or by not using the service at all.
New applications of technology have given rise to a multitude of potential privacy issues. Adopting a risk-based approach when analyzing these issues may help create realistic solutions and ensure that limited resources are spent on areas that will have the greatest impact.
One Response
One of the key messages in Brandon’s post is that users need to mitigate their risk/exposure to privacy abuses by carefully thinking about what information they are making available online and also taking full advantage of privacy controls.
I think this is very sensible.
However, something that stands out in my mind here is that any risk assessment (whether it be done by a user of a service or by a provider of a service) requires an understanding of the terms that are being agreed upon.
There was an interesting study done recently by Carnegie Mellon researchers (unfortunately the original study, as linked to by the OUT-LAW article, no longer appears to be available on the web) that estimated the time required for a user to read the online privacy policies of every website they visit in a year at approximately 201 hours. Is this not a striking example of inefficiency? Moreover, users often don't read the terms of service (or the privacy policies) of the sites they use. How would that affect the risk assessment?
I'm sure most of us would agree that shortening privacy policies and using less jargon in online terms of service would be very helpful, but that alone probably isn't enough. If we wish to be able to move easily from one web service (or device) to another, perhaps some kind of standardization of privacy rules and policies is needed in order to accomplish what is necessary and efficient?