By Neeraja Seshadri and Sindhu A
I. Introduction
Deepfakes use machine learning to swap faces, ascribing the conduct of one individual to another by creating a digital impersonation of them. This form of artificial intelligence is used to create an impression of events that never took place; it alters reality.
The most common way of creating deepfakes is by using Generative Adversarial Networks (GANs), in which two algorithms, a generator and a discriminator, are pitted against each other to produce synthetic yet realistic media. Deepfakes have been deployed as weapons to harass women in the form of revenge porn and to manipulate viewers in political campaigns, among other use cases. However, the technology also holds vast potential for legitimate purposes, as witnessed, for example, in the entertainment sector, where it has been used to create parodies and resurrect deceased actors.
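For illustration, the adversarial training loop at the heart of a GAN can be sketched in a few lines. The example below is a deliberately minimal, hypothetical toy: both "networks" are single linear or logistic units fitted to one-dimensional data, whereas real deepfake systems use deep convolutional networks trained on images. It merely shows the adversarial dynamic in which the generator learns to mimic "authentic" data while the discriminator learns to tell the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z to a fake sample x = g_w * z + g_b
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), estimated probability that x is real
d_w, d_b = 0.1, 0.0
lr = 0.01

def train(steps=2000, batch=64):
    global g_w, g_b, d_w, d_b
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)   # "authentic" data: N(4, 1)
        z = rng.normal(0.0, 1.0, batch)      # noise input
        fake = g_w * z + g_b                 # generator output

        # Discriminator step: ascend log D(real) + log(1 - D(fake))
        p_real = sigmoid(d_w * real + d_b)
        p_fake = sigmoid(d_w * fake + d_b)
        d_w += lr * (np.mean((1 - p_real) * real) + np.mean(-p_fake * fake))
        d_b += lr * (np.mean(1 - p_real) + np.mean(-p_fake))

        # Generator step: ascend log D(fake), i.e. try to fool the discriminator
        p_fake = sigmoid(d_w * fake + d_b)
        g_grad = (1 - p_fake) * d_w          # gradient of log D w.r.t. the fake sample
        g_w += lr * np.mean(g_grad * z)
        g_b += lr * np.mean(g_grad)

train()
samples = g_w * rng.normal(0.0, 1.0, 1000) + g_b
# In a successful run the generated samples drift toward the real distribution's mean.
```

The two updates pull in opposite directions, which is the "pitted against each other" dynamic described above: each improvement in the discriminator sharpens the training signal for the generator, and vice versa.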
Social media platforms are among the most popular channels through which deepfakes reach a broad audience. However, their content moderation policies do not appear adequate to deal with deepfakes. Facebook, along with Microsoft and Amazon, created the ‘Deepfake Detection Challenge’, the result of which was a technology that can detect deepfakes, but with a mere 65.18% accuracy. Deepfakes pose several legal challenges with implications for different areas of law, including copyright infringement, privacy and data protection, and merely creating a technology to detect deepfakes does not come close to resolving these issues.
Current legislative developments concerning deepfakes fail to consider the implications for copyright, possibly due to the misconception that the protection granted against copyright infringement may be invoked to battle deepfakes. However, copyright laws in different jurisdictions do not protect rights in all circumstances in which they are violated by deepfakes. This article examines the doctrine of “fair use” under US copyright law and “fair dealing” under the Indian Copyright Act, 1957 and the United Kingdom Copyright, Designs and Patents Act 1988 to understand the effectiveness of copyright law in addressing the implications of this disruptive technology, given the contrasting approaches in these jurisdictions.
II. The United States Position
The primary issue in the US is that the doctrine of fair use is too broad, as it grants protection to various forms of deepfakes, including those created with malicious intent. The doctrine of fair use in the US, codified in 17 U.S.C. § 107 (the Copyright Act of 1976, as amended by the Digital Millennium Copyright Act, 1998 (DMCA)), is based on a four-factor test: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion taken, and the effect of the use on potential markets. This doctrine can extend protection to deepfakes under the concept of “transformative use” as propounded in Campbell v. Acuff-Rose. A use is transformative when the purpose and character of a copyrighted work are altered to create content with new expression, meaning or message. When a deepfake is created, its purpose and nature typically differ from those of the original copyrighted work, and it can therefore be argued not to affect the market value of the original. Courts in the US have also categorically laid down that even where there is substantial copying, a work that is transformative may still be accorded protection under fair use.
This liberal approach to transformative use arguably allows the doctrine of fair use to be extended to a majority of deepfake content, irrespective of whether it was created with a bona fide or a mala fide intention. This potentially allows deepfakes created with malicious intent to be protected as parodies under the fair use doctrine, making various safeguards, such as notice-and-takedown and intermediary liability under Section 512 of the DMCA and Section 230 of the Communications Decency Act, unavailable.
A further issue in the US is the lack of moral rights protection, which includes the author’s right to reputation and the right of attribution to the work. Article 6bis of the Berne Convention, 1886 deals with the protection of works and the rights of their authors, providing creators with the means to control how their works are used, by whom, and on what terms. However, in the US, these rights are extended only to authors of works of visual art, under the Visual Artists Rights Act of 1990, and not to authors of all copyrighted works. This creates a precarious situation in which authors may be unable to protect their work and reputation when they are tarnished by deepfake technology.
III. The Indian Position
In India, the doctrine of fair dealing under Section 52 of the Indian Copyright Act, 1957 (ICA) governs which works are excluded from being considered infringing. Unlike the US position, fair dealing operates as an exception to copyright infringement, with the legislation laying down an exhaustive list of acts that are not deemed infringing. While the Indian position on fair dealing is often criticized for being rigid, it proves convenient in tackling deepfakes created with malicious intent, as the use of this technology does not fall under any of the acts listed in Section 52 of the ICA. However, the provision might equally fail to protect the use of deepfake technology for legitimate purposes.
Further, Indian courts have begun adopting the concept of transformative use in relation to the term ‘review’ under Section 52(1)(a)(ii) of the ICA, as observed in University of Oxford and Ors. v. Narendra Publishing House and Ors. The courts have incorporated the doctrine of fair use into the concept of fair dealing as an exception, allowing specific types of work to be protected owing to their beneficial nature to society as a whole. However, the existing Indian precedents on transformative use have primarily dealt with guidebooks under the category of literary works, and this interpretation cannot readily be extended to deepfakes.
Section 57 of the ICA provides for the rights of paternity and integrity, in compliance with the moral rights requirement under the Berne Convention, 1886. In the context of deepfakes, the right to integrity under Section 57(1)(b) of the ICA plays an essential role, since deepfakes can be regarded as a distortion, mutilation or modification of a person’s work. Provisions for civil and criminal liability under Section 55 and Section 63 of the ICA provide for damages, injunctive relief, imprisonment and fines against infringers. These provisions arguably provide adequate deterrence against deepfakes created for malicious purposes but fail to extend protection to deepfakes created for legitimate purposes.
Following the judgment in Myspace Inc. v. Super Cassettes Industries Ltd., intermediary liability under Section 79 of the Information Technology Act, 2000 (IT Act) can be imposed for copyright infringement. The Delhi High Court harmoniously interpreted the provisions of the ICA and the IT Act and laid down that, in cases of copyright infringement, intermediaries have a responsibility to take down infringing content when notified by private parties, even without a court order. However, issues may still arise concerning the detection of deepfakes: the detection technology remains unreliable, which challenges intermediaries’ content moderation policies when taking down deepfake content.
IV. The United Kingdom Position
UK law provides for fair dealing under Section 29 and Section 30 of the Copyright, Designs and Patents Act 1988 (CDPA). These provisions create specific exceptions within the overall CDPA framework, permitting the use of copyrighted material without the permission of the copyright owner in certain situations. They cover three such instances, namely non-commercial research and private study, criticism or review, and the reporting of current events. No UK statute defines fair dealing or what constitutes fairness; in Hubbard v. Vosper, Lord Denning held that ‘it is impossible to define what is “fair dealing.” It must be a question of degree.’ This judgment was the first judicial attempt to lay down a test for fairness in the absence of a statutory definition. Several other parameters have subsequently emerged. These include the nature of the work, the method of obtaining the work, the amount of work appropriated, the character or use of the dealing, the commercial nature of the dealing, the motive of the dealing, the impact of the dealing on the market for the original work, and whether alternative non-copyrighted work was available. Fair dealing further protects works created as a parody, caricature or pastiche, as provided under Section 30A and Schedule 2, paragraph 2A of the CDPA.
Although, as in India, the concept of fair dealing in the UK has attracted criticism for being overly rigid and restrictive in scope, the above position provides ample room to deal with deepfakes. Deepfakes created for a legitimate purpose may justify protection on the grounds of use for research or pastiche. Conversely, the Civil Division of the England and Wales Court of Appeal held in Hyde Park Residence Ltd v. Yelland & Ors that the motive of the alleged infringer is highly relevant when assessing fair dealing, a consideration that can be used to deny protection to deepfakes created with malicious intent.
V. Conclusion
Copyright is often discussed as a vehicle to address deepfakes, but there is a lack of subject-specific legislation. This article has examined the approaches taken in the US, India and the UK.
The complex nature and multifaceted use cases of deepfakes therefore require legislators and the judiciary to consider the motive behind their creation when deciding whether copyright protection extends to them. The exhaustive list under the fair dealing provision of the Indian copyright legislation overlooks the potential use of deepfakes for genuine and bona fide purposes such as entertainment, education and MedTech. As a result, whilst India appears several steps ahead in tackling malicious uses of deepfake technology, it seems ill-equipped even to acknowledge the legitimate uses of deepfakes. By contrast, the liberal approach of US copyright law creates a potential berth for deepfakes created with bona fide as well as mala fide intention to be protected under transformative use. The parameters developed by the judiciary in the UK allow deepfake creators to present their case for fair dealing protection.
The UK position on fair dealing appears significantly superior to fair use in the US and fair dealing in India, as it arguably strikes a balance between the extremely liberal US fair use regime and the narrow Indian fair dealing concept when dealing with deepfakes.
 Jessica Ice, ‘Defamatory Political Deepfakes and the First Amendment’ (2019) 70 Case W Res L Rev 417.
 Karen Hao, ‘Inside the world of AI that forges beautiful art and terrifying deepfakes’ (MIT Technology Review, 1 December 2018) <https://www.technologyreview.com/2018/12/01/138847/inside-the-world-of-ai-that-forges-beautiful-art-and-terrifying-deepfakes/> accessed 21 August 2020.
 Drew Harwell, ‘Fake-porn videos are being weaponized to harass and humiliate women: “Everybody is a potential target” Risk’ (The Washington Post, 30 December 2018) <https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/?noredirect=on> accessed 7 August 2020.
 Regina Mihindukulasuriya, ‘Why the Manoj Tiwari deepfakes should have India deeply worried’ (The Print, 29 February 2020) <https://theprint.in/tech/why-the-manoj-tiwari-deepfakes-should-have-india-deeply-worried/372389/> accessed 3 August 2020.
 Mika Westerlund, ‘The Emergence of Deepfake Technology: A Review’ (2019) 9(11) Technology Innovation Management Review 39, 42.
 Robin Pomeroy, ‘This iconic film star will star in a new movie – from beyond the grave’ (World Economic Forum, 8 November 2019) <https://www.weforum.org/agenda/2019/11/james-dean-cgi-deepfakes/> accessed 31 July 2020.
 James Vincent, ‘Facebook contest reveals deepfake detection is still an “unsolved problem”’ (The Verge, 12 June 2020) <https://www.theverge.com/21289164/facebook-deepfake-detection-challenge-unsolved-problem-ai> accessed 4 August 2020.
 ‘WIPO’s Conversation on IP and AI to Continue as a Virtual Meeting’ (World Intellectual Property Organization, 29 May 2020) <https://www.wipo.int/pressroom/en/articles/2020/article_0013.html> accessed 25 July 2020.
 Yash Raj, ‘Obscuring the Lines of Truth: The Alarming Implications of Deepfakes’ (Jurist, 17 June 2020) <https://www.jurist.org/commentary/2020/06/yash-raj-deepfakes/> accessed 21 August 2020.
 The Copyright Act 1957 No.14 Acts of Parliament 1957 (ICA 1957) (India).
 Copyright, Designs and Patents Act 1988 (CDPA 1988) (UK).
 Copyright Act of 1976, 17 U.S.C. § 107 (USA).
 Rich Stim, ‘What Is Fair Use?’ (Stanford University Libraries) <https://fairuse.stanford.edu/overview/fair-use/what-is-fair-use/> accessed 29 July 2020.
 Campbell v Acuff-Rose Music, Inc, 510 US 569 (1994).
 Danielle K. Citron and Robert Chesney, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753.
 Patrick Cariou v Richard Prince, 714 F.3d 694 (2013); Rogers v Koons, 960 F.2d 301 (1992); Leibovitz v Paramount Pictures, 137 F.3d 109 (1998); Seltzer v Green Day, 725 F.3d 1170 (2013); Blanch v Koons, 467 F.3d 244 (2006); Bill Graham Archives v Dorling Kindersley Ltd, 448 F.3d 605 (2006).
 Rich Stim, ‘Measuring Fair Use?’ (Stanford University Libraries) <https://fairuse.stanford.edu/overview/fair-use/four-factors/> accessed 21 August 2020.
 17 U.S.C. §512.
 Communications Decency Act 47 U.S.C. § 230 (1996) (USA).
 Berne Convention for the Protection of Literary and Artistic Works, Sept. 9, 1886, as revised at Paris on July 24, 1971, and amended in 1979, S. Treaty Doc. No. 99-27 (1986).
 Visual Artists Rights Act 17 U.S.C. §106A (1990) (USA).
 ICA 1957, s 52.
 Ayush Sharma, ‘Indian Perspective of Fair Dealing under Copyright Law: Lex Lata or Lex Ferenda?’ (2009) 14 Journal of Intellectual Property Rights 523, 529.
 ICA 1957, s 52.
 ibid s 52(1)(a)(ii).
 University of Oxford v Narendra Publishing House, ILR (2009) 2 Del 221.
 Super Cassettes Industries Ltd v Mr. Chintamani Rao and Ors (2011) SCC OnLine Del 4712.
 University of Cambridge v B.D. Bhandari (2011) SCC OnLine Del 3215; Saregama India Limited v Balaji Motion Pictures Limited and Ors (2019) SCC OnLine Del 10036.
 ICA 1957, s 57.
 ibid s 57 (1)(b).
 ibid s 55.
 ibid s 63.
 The Information Technology Act 2000 No.21 Acts of Parliament 2000 (India).
 Myspace Inc v Super Cassettes Industries Ltd (2016) SCC OnLine Del 6382.
 CDPA, s 29.
 ibid s 30.
 Hubbard v Vosper [1972] 2 QB 84.
 British Broadcasting Corp v British Satellite Broadcasting Ltd [1991] 3 All ER 833; Beloff v Pressdram Ltd [1973] 1 All ER 241.
 Time Warner Entertainments LP v Channel Four Television Corporation plc [1994] EMLR 1.
 Beloff v Pressdram Ltd [1973] 1 All ER 241 (Ch D).
 Hubbard (n 38).
 Pro Sieben AG v Carlton UK Television Ltd [1999] 1 WLR 605.
 Newspaper Licensing Agency v Marks & Spencer plc EMLR 369.
 Hyde Park Residence Ltd v Yelland & Ors [2000] EWCA Civ 37.
 Fraser-Woodward Ltd v British Broadcasting Corporation, Brighter Pictures Ltd [2005] EWHC 472 (Ch).
 Hyde Park Residence Ltd (n 45).
 CDPA, s 30A.
 ibid sch 2 para 2A.
 Giuseppina D’Agostino, ‘Healing Fair Dealing?: A Comparative Copyright Analysis of Canadian Fair Dealing to UK Fair Dealing and US Fair Use’ (2007) Comparative Research in Law & Political Economy. Research Paper No. 28/2007 <http://digitalcommons.osgoode.yorku.ca/clpe/244> accessed 2 August 2020.
 Hyde Park Residence Ltd (n 45).