Until recently, art has been considered a uniquely human phenomenon. Creativity, more generally, was perhaps Homo sapiens' most defining characteristic. This seemingly secure axiom began to collapse in the 1950s, when artificial intelligence (AI) budded within the field of computer science. Through the creation of AI, humans have transferred the locus of creativity outside of their bodies. Creativity is no longer confined to the space between our ears.
Initially, AI progress was sluggish, leading to an "AI winter" during much of the late 20th century. Only in the 1990s did AI development begin to accelerate again. The exponentially increasing digitization and creation of data during the 2010s further aided AI development, which relies on massive datasets to train burgeoning AIs. Although there are significant limits to the degree of creativity that machines can formulate, these limits are being rapidly expanded. AI's recent contribution to art is a revealing development.
In 2018, a portrait christened Edmond de Belamy was produced by an AI built by Obvious, a Parisian art collective. Obvious trained a machine learning program on a database of tens of thousands of portraits created between the 1300s and the 1900s, and the program generated a unique portrait. The painting surpassed its pre-auction estimate of $7,000-$10,000 and sold for $432,500 at a Christie's auction in New York. Similarly, an AI composer called Aiva, trained on thousands of classical compositions, has released albums whose pieces have been used in video games and movies.
These developments pose problems for current intellectual property law schemes because they are premised on incentivizing and rewarding human creativity. What is to be done when creativity becomes increasingly un-human?
Thus far, copyright law has not fully evolved to grapple with the intricate questions that creative AI raises. It was not until September 27, 2019, that the World Intellectual Property Organization (WIPO) held the first Conversation on IP and AI, bringing together member states and other stakeholders to discuss the impact of AI on IP policy, with a view to collectively formulating the questions that policymakers need to ask. On December 13, 2019, WIPO published a draft issues paper on the impact of artificial intelligence on IP policy and invited member states to provide comments and suggestions. The submission period recently closed on February 14, 2020.
Hopefully, lawmakers will continue discussing the issue of AI and IP law. In the meantime, academics are responsible for preparing the way for the evolution of AI copyright by exploring the intersection of computer science, economics, and law. To date, the common law has only begun to elucidate the issue; because AI is still an emergent and rapidly evolving technology, a detailed and unified vision for how it will impact the development of copyright law is still pending.
An intriguing article introducing guidelines for the field of AI copyright was recently published in the March 2020 issue of Nature Machine Intelligence by Jason K. Eshraghian. Dr. Eshraghian outlined how AI artists and the collaborators involved should assess their legal ownership, laying out some guiding principles that are "only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded".
Before exploring how AI artists can protect their interests, it is useful to understand the fundamental requirements of copyright law. According to the US Copyright Office, an artwork eligible for copyright must be an "original work of authorship fixed in a tangible medium". Given this principle, Eshraghian explored whether it is possible for AI to exercise creativity, skill, or any other indicator of originality. Ultimately, he determined that, at present, AI's range of creativity does not rise to the standard used by the US Copyright Office, which states that copyright law protects the "fruits of intellectual labor founded in the creative powers of the mind."
Due to the current limitations of the technology, the development of the most advanced AI relies on some form of initial human input in order to prime a computer’s ability to create. At the moment, AI is a tool that can be used to produce creative work in the same way that a camera is a tool used to shoot creative content. Photographers do not need to comprehend the intricate technology of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their oeuvre.
The same concept applies to programmers developing an AI neural network. As long as the datasets they use as input yield an original and creative production, their work will be protected through traditional copyright law. The programmers do not need to understand the advanced, high-level mathematics underlying most AI systems, which operate as black-box algorithms whose inner workings are difficult to interpret.
Will machines and computer programs eventually be considered creative sources and be allowed to own copyrights? Eshraghian cited the recent decision in Warner-Lambert Co Ltd v Generics (UK) Ltd, wherein Lord Briggs, Justice of the Supreme Court of the UK, determined that "the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots".
In the meantime, Dr. Eshraghian suggests four lodestar rules in order to legally protect AI artists and collect information that may be useful in evolving copyright law in light of AI.
First, AI programmers should record their methods through online code repositories like GitHub or Bitbucket. Second, AI programmers should properly catalog their dataset inputs and the procedure by which they formulate their models. Demonstrating selectivity in deciding the input criteria signals human involvement and creativity. Third, in cases where user data is utilized, the programmer should catalog all runs of the AI algorithm to document the data selection process. This information could be useful in determining whether those whose information was used to create the user-based input also have a right to claim the copyright. Finally, the output should avoid infringing on others' content, which can be checked through methods like reverse image searches and software configuration management.
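The second and third of these rules lend themselves to a concrete illustration. Below is a minimal Python sketch, assuming a directory of training images and an append-only log file; the names used here (log_training_run, runs.jsonl, the example hyperparameters) are hypothetical and not drawn from Eshraghian's article. The idea is simply that hashing every input file and logging the selection criteria and settings of each run produces the kind of documented, human-directed selection process the rules call for.

```python
# Hypothetical illustration (not from the article): fingerprint the dataset
# and log every training run so the selection process can be documented later.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

RUN_LOG = Path("runs.jsonl")  # append-only log, committed alongside the code


def file_sha256(path: Path) -> str:
    """Hash one input file so its exact contents can be proven later."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_training_run(dataset_dir: str, selection_criteria: str, hyperparams: dict) -> None:
    """Record which inputs were used, why they were selected, and with what settings."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "selection_criteria": selection_criteria,  # evidence of human selectivity
        "hyperparameters": hyperparams,
        "inputs": {
            p.name: file_sha256(p)
            for p in sorted(Path(dataset_dir).glob("*"))
            if p.is_file()
        },
    }
    with RUN_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    # Example values only; the dataset path and settings are placeholders.
    log_training_run(
        dataset_dir="portraits/",
        selection_criteria="Portraits painted between 1300 and 1900, frontal pose only",
        hyperparams={"model": "GAN", "epochs": 500, "learning_rate": 2e-4},
    )
```

Committing a log like this to the same repository as the model code ties each output back to a verifiable record of the inputs and the decisions behind them.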
AI-generated artwork is still a vanguard concept, and the copyright law surrounding it is unclear, giving a lot of flexibility to AI artists. The guiding principles Dr. Eshraghian lays out will hopefully inform the legislation we will eventually need for IP policy as AI matures and begin an important conversation among all the stakeholders involved.
Written by Joaquin Francis Arias. Joaquin is a contributing IPilogue editor, President of Osgoode’s Legal Entrepreneurs Organization and IP Osgoode Innovation Clinic Fellow.
Response
A natural extension of the discussion in this article is whether an AI that creates works like Edmond de Belamy infringes the copyright of the painters of the original works. While this is not an issue for paintings from the years 1300 to 1900 (their copyright terms have lapsed), a recent dispute has shown that it is very much a live issue. Artist Amel Chamandy is claiming copyright infringement against Adam Basanta, who used AI to randomly generate abstract images and then scan them against existing artworks. He displayed the images that were at least an 80% match with existing artworks, one of which was Chamandy's. Chamandy claims that the AI's use of her work, even just for comparison, was an infringement of her copyright.
If Canada were to institute a text and data mining exception for AI, even for commercial uses, issues like this would not arise. Text and data mining exceptions allow AI to use existing data for training purposes, and to create useful outputs based on high quality input. Without such an exception, AI training data are limited to works in the public domain. While it may not be critical to allow the generation of artwork in this way, it becomes an urgent issue if copyright exists in X-rays, ultrasounds and other medical images. For AI to live up to its full potential, it needs access to as much high quality information as possible.