
What to Do When Anything Is Possible: A Brief Note on the Problems Surrounding the Regulation of Deepfakes

Photo Credit: Markus Winkler (Unsplash.com)

Ali Mesbahian is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.


Thanks to deepfakes, it is becoming increasingly difficult to tell whether a video is real. Deepfakes are synthetic video and audio recordings, produced by machine-learning techniques, that graft the likeness or voice of individuals onto recordings in which they never appeared, yielding realistic impersonations. As the technology behind deepfakes develops, it also becomes more accessible. The result is the widespread use of deepfakes for nefarious purposes, most notably pornography (96 percent of deepfakes online, according to a 2019 inquiry) and influencing elections.

The rapid development of deepfakes also threatens to distort our understanding of the world. What if we reach a point where a layperson can no longer be certain of the veracity of a video in which someone is saying or doing something controversial? Politicians can then exploit that pervasive doubt to avoid accountability when real scandals arise. Danielle Citron and Bobby Chesney, law professors at the University of Virginia and the University of Texas respectively, call this the “liar’s dividend”: because we know that deepfakes enable anyone to be made to say or do anything in a video or audio recording, we lose trust in our own eyes and ears, allowing accurate information to be dismissed as “fake news” in unprecedented ways.

Legal Responses to Deepfakes

Currently, Canada has no legislation that explicitly criminalizes the abuse of deepfakes. While remedies grounded in copyright infringement, defamation, violations of privacy, and impersonation in elections may reach some deepfakes, a uniform and direct effort is needed to combat their nefarious use. As Citron explains, a coordinated international response is necessary given how easily accessible falsified videos are from different parts of the world. At the same time, we must address this issue cautiously. As Chesney notes, a government granted the regulatory authority to determine what is or isn’t true tends “to act on behalf of the ideological powers that be”. In other words, the concern is that undue censorship may arise in the name of protecting the accuracy of information.

The recently proposed Digital Charter Implementation Act includes privacy provisions that may directly impact the regulation of deepfakes. Despite invoking the Charter, the Act “does not specifically recognize privacy as the fundamental human right it is”. As Emily Laidlaw, a law professor at the University of Calgary, explains, while the Act invokes human-rights language, it is essentially framed as consumer protection legislation, making corporations and tech platforms responsible for safeguarding the privacy interests of their users and liable in the event of a breach. Therefore, if the proposed framework is used to regulate deepfakes, we may return to the problem I raised in my earlier piece: outsourcing our fundamental rights to corporations and expanding corporate power rather than limiting it.
