
Does the Canadian Online Harms Proposal Increase Privacy Risks?

Photo by AbsolutVision (Unsplash)

Emily Prieur is an IPilogue Writer and a 3L JD Candidate at Queen’s University Faculty of Law.


Online privacy is, without a doubt, an area of growing concern. As technology takes an ever-greater presence in our lives, lawmakers are right to turn their minds to its potential deleterious effects. However, with the Liberal government’s new proposal, legal scholars and law students alike are left wondering whether the Canadian Online Harms proposal will do more harm than good.

What is the online harms proposal?

In July 2021, the Canadian government released its plans for addressing online harms. The proposal was presented in two parts: a discussion guide outlining the government’s intent to regulate social media platforms, and a technical paper outlining the details of the proposed law. The objective of the proposal is to reduce harmful content online. The government has a narrow focus, targeting content related to 1) child sexual exploitation, 2) terrorism, 3) hate speech, 4) non-consensual sharing of intimate images, and 5) content that incites violence. This proposal is consistent with the government’s previous content regulation efforts, such as Bill C-10 and the proposed legislative changes to the Canadian Human Rights Act and the Criminal Code. Canada is not the only government paving the way towards a more tolerant online community. In fact, the European Commission launched its own “Code of conduct on countering illegal hate speech online” in May 2016.

How does the proposed legislation target online harm?

The proposed legislation requires online platforms to proactively monitor all user speech and evaluate its potential for harm within the five categories of regulated harmful content outlined above. Additionally, any person in Canada may flag content as harmful, and online platforms must address flagged content within 24 hours. If enacted, the legislation would also require online communication services like Facebook and Twitter to report content falling within the five categories to law enforcement.

The proposal bestows additional responsibilities upon the Digital Safety Commissioner, a role that would be established by the new legislation. These responsibilities include the power to hold hearings on any complaint made to it, any detected non-compliance, or any matter within its jurisdiction under the Act, especially if the Commissioner believes doing so would be in the public interest. The language within the Act is worrisome, stating that the Digital Safety Commissioner may also conduct inspections of online platforms at any time, on either a routine or ad hoc basis, “further to complaints, evidence of non-compliance, or at the Digital Safety Commissioner’s own discretion.” Ultimately, the language delineating the Commissioner’s responsibilities is unclear, creating concerns that the not-yet-appointed Commissioner may have unfettered power over online communication services.

The legislation also proposes penalties for non-compliance. The proposed monetary penalties are astonishingly high; online communication services could face fines of up to 3 percent of their gross global revenue or up to $10 million (whichever is higher) if they do not comply with the new rules. Companies like Facebook and Twitter may also foot the bill for the proposal itself (e.g., the cost of a new Commission and Digital Safety Commissioner), as the legislation grants the Digital Safety Commissioner the power to impose regulatory charges to recover the costs of monitoring online harm.

How would successful implementation of the proposal negatively impact privacy rights?

In essence, the Canadian Online Harms proposal grants increased surveillance powers to the state.

The first issue with the proposal is the increased surveillance that would accompany any of the rules it outlines. The requirement to monitor all user speech online is alarming, given that the proposal sidesteps any potential restrictions on monitoring. Many argue that such a proposal could allow the government to expand its surveillance powers, and representatives at the Citizen Lab go so far as to call this aspect of the proposal “chilling”. Emily Laidlaw, the Canada Research Chair in Cybersecurity Law at the University of Calgary, agrees that the obligation to monitor websites creates a privacy risk. Laidlaw stated, “You’re essentially saying that a private body needs to actively monitor and surveil all of the different communications on its platform.”

Additionally, the proposal encourages the use of machine learning to identify harmful content, stating that “an OCSP must take all reasonable measures, which can include the use of automated systems.” The Canadian government has recently had issues with its use of machine learning technology and facial recognition software, as discussed in a recent article by IPilogue Guest Writer Shannon Flynn.

Looking forward

Given the increasing use of online platforms, the Government of Canada is justified in its efforts to curb online harm. However, a certain skepticism towards government efforts is reasonable, especially in the wake of the Bill C-10 controversy. Although its efforts to curb online harms are noble, the government must also consider any harms those efforts may have on Canadians’ privacy.
