EU’s New AI Regulation: Addressing Liability Concerns and Its Interplay with the GDPR

Sanjana L. B. and Sanah Javed[1]

Abstract

The European Union has spearheaded regulation in the digital space, and the recent proposal to regulate artificial intelligence is a testament to this. Much like the GDPR, the proposed AI regulation envisages extraterritorial application. The new framework leans towards industry self-regulation and broadly categorises artificial intelligence systems based on the risk involved in their usage. Further, the liability framework attached to these risk categories goes beyond the existing product liability regime. This article analyses the nuances of a long-arm liability framework under the new regulation. Additionally, the authors discuss the need for congruence between the new regulation and the GDPR, given the EU’s innovation-centric approach and the relationship between AI and data.

 

Introduction

On April 21, 2021, the European Union (‘EU’) put forth the ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence’ (‘AI framework’ or ‘proposed framework’).[2] The AI framework is rooted in making Artificial Intelligence (‘AI’) technology ethical and enhancing the EU’s position as a ‘future-proof’ and ‘innovation-friendly’ jurisdiction.[3]

The broad intent behind the AI framework is to provide guidance for developing AI and to make the EU a favourable jurisdiction for developers.[4] Soft-touch regulations such as these enable both the industry and regulators to continuously learn about the technology, its challenges, and its risks,[5] allowing the rules to keep pace with AI technology’s constant growth.[6] Accepting industry insights, the EU has chosen not to draw red lines in the AI framework,[7] opting instead to impose hefty penalties. Further, the framework addresses issues surrounding liability for ‘AI Output’.[8] It proposes that providers and users of AI in third countries must also comply with the framework where the AI Output is used in the EU, thereby extending the liability regime beyond the EU.[9]

This article discusses, first, the liability and responsibility mechanism under the framework for developers and users of AI; and second, the interplay of the AI framework with the General Data Protection Regulation (‘GDPR’)[10] and ways to further strengthen the AI framework, keeping in view the industry-friendly stance of the EU.

I. How does the proposed framework tackle liability for harms arising out of AI technology?

The AI framework outlines the liability of developers, distributors, and third parties engaged in the functioning of AI systems.[11] This, read with the Product Liability Directive, makes the producer of the technology strictly liable for any harm arising from its use.[12]

The AI framework provides that developers of ‘high risk’ AI must establish a risk management system under which they foresee the possible risks that might arise from the use of the technology.[13] Further, Articles 11 and 12 impose obligations of technical documentation and automated record-keeping. However, industry players have pointed out that while AI is being developed, the complete scope of its functionality and usability cannot be accurately predicted.[14] Often, AI developed for a particular purpose has the ability to go beyond that intended purpose.[15] These ex-ante obligations on AI developers are overly burdensome because, first, the framework applies to AI technology developed abroad that may be used within the EU,[16] and second, the scope of liability exceeds that under the Product Liability Directive.[17]
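By way of illustration, the automated record-keeping contemplated by Article 12 is often realised in practice as an append-only event log attached to every inference a system makes. The minimal Python sketch below reflects the authors’ illustrative assumptions about what such a record could contain; the framework prescribes no particular schema, and the field names and helper function here are hypothetical.

```python
import hashlib
import json
import time

def log_inference(log_file, model_version, input_data, output):
    """Append one audit record per inference call (illustrative only).

    The AI framework does not prescribe a schema; these fields are
    assumptions about what a reviewable record might contain."""
    record = {
        "timestamp": time.time(),            # when the output was produced
        "model_version": model_version,      # which version of the system ran
        # Hash rather than store the raw input, to limit exposure of
        # any personal data contained in it.
        "input_sha256": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,                    # the AI Output itself
    }
    with open(log_file, "a") as f:           # append-only: past records
        f.write(json.dumps(record) + "\n")   # are never overwritten

# Example: record a single (hypothetical) credit-scoring decision.
log_inference("audit.log", "scorer-v1.3", {"age": 41, "income": 52000}, "approve")
```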

i. Extra-territorial Application of the AI framework

The application of the AI framework extends to cases where the AI technology is neither developed nor deployed in the EU, but the AI Output is used in the EU.[18] Hence, the AI framework has broad extraterritorial application.[19] A foreign developer must ensure that, wherever there is a possibility of the AI Output being used in the EU, the risk assessment, transparency, monitoring, and other obligations are fulfilled.

Hence, the developer may face liability even where the AI Output was never intended to be introduced in the EU market. On this note, observations have previously been made on the parallels between the proposed framework and the GDPR’s extraterritorial application.[20] Further, it has been noted that for the framework to apply extraterritorially, AI systems produced elsewhere must have an EU nexus.[21] It is proposed that the AI framework must account for the developer’s intention to make the product available in the EU market before liability can be attributed, by adopting the ‘Offering Test’ as applied to the GDPR.[22] A connection between the developer’s technology and its services or output being offered in the EU[23] must be established; if the output is available merely inadvertently, a significant number of developers that have no commercial presence in the EU, and therefore no nexus, would also be covered under the framework.[24]

ii. Going beyond the Product Liability regime

The European Parliament, in its ‘Recommendations on a civil liability regime for artificial intelligence’,[25] acknowledges that AI has self-learning capabilities that go beyond the intention of the producer or product developer. Hence, the operator, both at the frontend and the backend, must also be accountable for harms caused by AI.[26] The recommendations specify that the liability framework must work ex-ante to minimise risk or ex-post to compensate for it.[27] The AI framework’s liability regime is modelled along the lines of these recommendations; however, it does more than just minimise risk. It imposes obligations of technical documentation and record-keeping, and further requires risk assessment and transparency about the manner in which the system produces its output, raising concerns over the disclosure of trade secrets and competitively sensitive information.[28]

The framework indicates that the Directive on Trade Secrets[29] will take precedence, and that if information falling within the ambit of a trade secret must be disclosed to the authorities, the latter are bound by confidentiality obligations.[30] However, the confidentiality obligation includes broadly worded exceptions and lacks sanctions in case the authorities breach confidentiality. Industry players value their trade secrets as these give them a competitive edge in the market; asking them to comply with transparency obligations at the risk of losing this edge hinders their incentive to innovate.

In response to the EU’s White Paper on AI, stakeholders suggested that the AI developer’s liability must be restricted to harms arising in the development phase, and not extend to the exploitation phase, as the developer’s ability to mitigate harms in the latter case is low.[31] AI functionality may change beyond the developer’s initial intent owing to the self-learning capabilities of the technology, making the requirements of technical documentation and record-keeping impractical.[32]

Further, extending these obligations to developers in cases where merely the AI Output is used in the EU would increase compliance burdens.[33] Obligations in this regard must extend only to users of the technology[34] or those directly involved with the usage of AI Output. Hence, the ex-ante framework must impose merely a basic due diligence obligation on the developer and place the burden of risk assessment on the deployer of AI technology, especially where the AI is multi-purpose and its risks cannot adequately be predicted by the developer.[35]

II. Attaining congruence between the AI framework and the GDPR

The AI framework is expected to demonstrate the ‘Brussels Effect’ like the GDPR,[36] as it aims to set a standard for ethical technology.[37] However, while the AI framework takes a self-regulatory approach, the GDPR is stricter, with its extraterritorial application, close oversight, and robust sanctions, despite industries having lobbied for self-regulation.[38] There is a need to reconcile provisions in both frameworks owing to AI’s dependency on large datasets and the EU’s strict enforcement of data rights under the GDPR.

i. Relationship between data protection and AI regulation

AI systems are dependent on data sets,[39] as AI processes data during both the algorithmic training phase and the usage phase.[40] Therefore, AI development presupposes access to, and the creation of, massive data sets.[41]

When AI processes personal data, the provisions of the GDPR will apply.[42] The key points of concern between AI and data protection include the risk of re-identification of data subjects, profiling, and the classification of inferred personal information as ‘personal data’.[43] Further, whether the GDPR’s concepts of consent and purpose limitation are consistent with AI applications depends on whether consent is specific to the AI-related processing or application, whether consent to profiling practices becomes a prerequisite to availing services, the degree of freedom available to a data subject when providing such consent, and the freedom to withdraw consent.[44]
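To make the re-identification risk concrete, consider a linkage attack: a dataset stripped of names can often be tied back to individuals by joining it with a public register on shared quasi-identifiers such as postcode, birth year, and sex. The sketch below uses fabricated toy rows purely for illustration; it is not drawn from the cited sources.

```python
import pandas as pd

# 'Anonymised' health records: direct identifiers removed, but
# quasi-identifiers (postcode, birth year, sex) remain.
health = pd.DataFrame({
    "postcode":   ["1011", "2034"],
    "birth_year": [1985, 1990],
    "sex":        ["F", "M"],
    "diagnosis":  ["diabetes", "asthma"],
})

# A public register holding names alongside the same quasi-identifiers.
register = pd.DataFrame({
    "name":       ["A. Jansen", "B. de Vries"],
    "postcode":   ["1011", "2034"],
    "birth_year": [1985, 1990],
    "sex":        ["F", "M"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses,
# turning 'anonymous' records back into personal data under the GDPR.
reidentified = health.merge(register, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```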

Further, the AI framework imposes obligations of high accuracy, transparency, and security on high-risk AI systems when it comes to data governance,[45] all of which must be enabled by access to accurate, complete, and representative data sets.[46] Given the close relationship between AI and data, any new rules to regulate AI must take into consideration the rights and obligations under the GDPR and their interaction with obligations under the new rules.

ii. How does the AI framework fit into the GDPR?

A potential concern arising from the GDPR’s interplay with the AI framework is the grounds to process ‘special categories of personal data’ (‘SCPD’).[47][48] Processing of SCPD is prohibited except under a limited list of grounds in the GDPR.[49] The AI framework allows providers of AI to process SCPD to carry out bias monitoring, detection, and correction in high-risk AI.[50]

Providers of high-risk AI need to, on the one hand, carefully monitor bias in algorithms and, on the other, comply with the GDPR when processing SCPD. The GDPR permits the processing of SCPD when the data subject provides explicit consent for specific purposes, and for reasons of substantial public interest;[51] however, as discussed above, algorithm training for AI systems requires massive data sets,[52] and it is therefore not feasible for AI developers to obtain explicit consent from individual data subjects when they train their systems.[53] That said, providers of high-risk AI may be permitted to process SCPD under the public interest ground, given the risks posed by biased algorithms,[54] subject to the safeguards mentioned in the GDPR.[55]
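To illustrate why bias monitoring presupposes access to SCPD, consider a common fairness check that compares a model’s positive-outcome rates across groups defined by a protected attribute; computing it necessarily means processing that attribute. The Python sketch below uses demographic parity as an illustrative metric of the authors’ choosing; the AI framework does not prescribe any particular metric or threshold.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome rates
    across protected groups (an illustrative fairness metric only)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: loan approvals (1 = approved) alongside a protected
# attribute -- exactly the SCPD a provider would need to process.
approved  = [1, 0, 1, 1, 0, 1, 0, 0]
ethnicity = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(approved, ethnicity)
print(rates)  # per-group approval rates: {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5 -- a large gap would flag the model for review
```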

Processing SCPD under the AI framework is permitted when ‘strictly necessary’;[56] but the AI framework is not itself a legal ground to process SCPD.[57] Without a corresponding provision under the GDPR allowing providers to process SCPD for obligations under the AI framework, uncertainties may persist for AI providers and data subjects. This, coupled with the extraterritorial application and hefty penalties for non-compliance under both frameworks, warrants clarity from the regulators. Therefore, the authors suggest that the GDPR be suitably modified to establish that the obligations of high-risk AI providers under the AI framework can be a valid ground to process SCPD, subject to safety and privacy standards. This will help the industry conduct bias monitoring while remaining compliant with the GDPR.

Conclusion

The AI framework seeks to be forward-looking, and its ramifications will be far-reaching. The authors note above that applying the AI framework merely on the basis of AI Output appears overbroad; therefore, to enforce extraterritoriality without hindering global innovation, the AI framework must borrow the ‘Offering Test’ from the GDPR to determine liability. Further, as obligations under the AI framework will require high-risk AI providers to process SCPD, there is a need to ensure congruence between the AI framework and the GDPR. To this end, modifying the latter to enable high-risk AI providers to process SCPD will help achieve effective bias monitoring, detection, and correction.

 

Footnotes

[1] Sanjana L. B., V-year student, Symbiosis Law School, Hyderabad, and Sanah Javed, Associate, Trilegal. All views expressed in this article are the authors’ personal views.

[2] Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ COM (2021) 206 final (AI framework)

[3] ‘Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence’ (European Commission Press Corner, 21 April 2021) <https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682> accessed 12 June 2021

[4] Natasha Lomas, ‘Europe Lays Out Plan for Risk-Based AI Rules to Boost Trust and Uptake’ (TechCrunch, 21 April 2021) <https://techcrunch.com/2021/04/21/europe-lays-out-plan-for-risk-based-ai-rules-to-boost-trust-and-uptake/> accessed 12 June 2021

[5] Denmark and Others, Innovation and Trustworthy AI: Two Sides of the Same Coin (Position Paper, 2020) 2

[6] In a recent event by the Center for Data Innovation on 5 May 2021, Mr. Kilian Gross, AI Policy Development and Coordination, DG CONNECT, European Commission, also affirmed that the framework envisions a ‘future-proof’ regulation; See ‘What’s Next on EU’s proposed AI framework’ (DataInnovation, 5 May 2021) <https://www.youtube.com/watch?v=vdcSKXeiDAU&t=2s> accessed 21 June 2021; Adam Satariano, ‘Europe Proposes Strict Rules for Artificial Intelligence’ (The New York Times, 21 April 2021) <https://www.nytimes.com/2021/04/16/business/artificial-intelligence-regulation.html> accessed 12 June 2021; Javier Espinoza and Madhumita Murgia, ‘Europe Attempts to Take Leading Role in Regulating Uses of AI’ (Financial Times, 24 April 2021) <https://www.ft.com/content/360faa3e-4110-4f38-b618-dd695deece90> accessed 12 June 2021

[7] Tom Simonite, ‘How Tech Companies Are Shaping the Rules Governing AI’ (Wired, 16 May 2019) <https://www.wired.com/story/how-tech-companies-shaping-rules-governing-ai/> accessed 16 June 2021; See also, Friederike Reinhold and Angela Müller, ‘AlgorithmWatch’s response to the European Commission’s proposed regulation on Artificial Intelligence – A major step with major gaps’ (AlgorithmWatch, April 2021) <https://algorithmwatch.org/en/response-to-eu-ai-regulation-proposal-2021/> accessed 12 June 2021

[8] AI framework, art 3(1); AI ‘Output’ is understood as ‘content, predictions, recommendations or decisions influencing the environments they interact with’ derived from AI.

[9] AI framework, recital 11

[10] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 (GDPR)

[11] AI framework, chapter II-III

[12] Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC) <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31985L0374&from=EN> (Product Liability Directive)

[13] AI framework, art 9

[14] ‘Recommendations for Regulating AI’, Google, 7, 9 <https://ai.google/static/documents/recommendations-for-regulating-ai.pdf> accessed 16 June 2021

[15] Ibid

[16] Mark MacCarthy and Kenneth Propp, ‘Machines learn that Brussels writes the rules: The EU’s new AI regulation’ (LawfareBlog, 28 April 2021) <https://www.lawfareblog.com/machines-learn-brussels-writes-rules-eus-new-ai-regulation> accessed 16 June 2021

[17] European Parliament, Annex to ‘Resolution of 20 October 2020 with recommendations to the Commission on Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) Civil Liability Regime for Artificial Intelligence’, para B (8), 2020 <https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html#title1> accessed 16 June 2021 (Annex to Civil Liability Regime)

[18] ‘New Horizons: European Commission Proposes Measures to Regulate AI’ (JDSupra, 27 April 2021) <https://www.jdsupra.com/legalnews/new-horizons-european-commission-4961736/> accessed 16 June 2021

[19] Dan Whitehead, ‘Why the EU’s AI regulation is a groundbreaking proposal’ (IAPP, 22 April 2021) <https://iapp.org/news/a/why-the-eus-ai-regulation-is-a-ground-breaking-proposal/> accessed 16 June 2021

[20] Blanca Escribano and Peter Katko, ‘European draft Regulation on Artificial Intelligence: Key questions answered’ (EY, 15 May 2021) <https://www.ey.com/en_es/law/european-draft-regulation-on-artificial-intelligence-key-questions-answered> accessed 16 June 2021

[21] Julia M Wilson and others, ‘New Draft Rules on Use of Artificial Intelligence’ (BakerMcKenzie, 14 May 2021) <https://www.bakermckenzie.com/en/insight/publications/2021/05/new-draft-rules-on-the-use-of-ai> accessed 16 June 2021

[22] The ‘Offering Test’ suggests that the law applies only where the offerings are made available in the EU. Any inadvertent or incidental service would not bring the entity within the framework of the GDPR.

[23] Dino Wilkinson, ‘New guidance on the application of the GDPR outside Europe’ (Clyde & Co., 27 November 2019) <https://www.clydeco.com/en/insights/2019/11/new-guidance-on-the-application-of-the-gdpr-outsid> accessed 16 June 2021

[24] Dan Whitehead, ‘AI & Algorithms (Part 3): Why the EU Regulation is a groundbreaking proposal’ (Engage, 3 May 2021) <https://www.engage.hoganlovells.com/knowledgeservices/news/ai-algorithms-part-3-why-the-eus-ai-regulation-is-a-groundbreaking-proposal> accessed 28 July 2021

[25] European Parliament, ‘Resolution of 20 October 2020 with recommendations to the Commission on Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) Civil Liability Regime for Artificial Intelligence’, 2020 <https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html#title1> accessed 16 June 2021 (Civil Liability Regime)

[26] Ibid, para 12

[27] Annex to Civil Liability Regime (n 17)

[28] Nazrin Huseinzade, ‘Algorithm Transparency: How to Eat the Cake and Have it Too’ (European Law Blog, 27 January 2021) <https://europeanlawblog.eu/2021/01/27/algorithm-transparency-how-to-eat-the-cake-and-have-it-too/> accessed 16 June 2021

[29] Council Directive (2016/943) of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L0943&from=EN>; AI framework, para 3.5, explanatory memorandum and art 70(1)(a)

[30] AI framework, art 70

[31] Nazrin Huseinzade (n 28)

[32] ‘Recommendations for Regulating AI’, Google (n 14) 7, 9

[33] Sam Shead, ‘Europe’s proposed A.I. law could cost its economy $36 billion, think tank warns’ (CNBC, 26 July 2021) <https://www-cnbc-com.cdn.ampproject.org/c/s/www.cnbc.com/amp/2021/07/26/aia-europes-proposed-ai-law-could-cost-its-economy-36-billion.html> accessed 28 July 2021

[34] ‘Recommendations for Regulating AI’, Google (n 14) 7, 9

[35] Ibid

[36] Anu Bradford, The Brussels Effect: How the European Union Rules the World (OUP 2020) 131

[37] Jeremy Kahn, ‘Europe Proposes Strict A.I. Regulation Likely to Have an Impact Around The World’ (Fortune, 21 April 2021) <https://fortune.com/2021/04/21/europe-artificial-intelligence-regulation-global-impact-google-facebook-ibm/> accessed 12 June 2021

[38] Bradford (n 36) 133, 138, 139

[39] ‘Big Data is Too Big Without AI’ (Maryville University) <https://online.maryville.edu/blog/big-data-is-too-big-without-ai/> accessed 12 June 2021

[40] ‘Artificial Intelligence and Data Protection – How the GDPR Regulates AI’ (Centre for Information Policy Leadership, March 2020) <https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl-hunton_andrews_kurth_legal_note_-_how_gdpr_regulates_ai__12_march_2020_.pdf> accessed 12 June 2021, 4, stating that the ‘algorithmic training phase’ refers to the phase where AI is trained using data sets to create a model, identify patterns and connect various data points

[41] Giovanni Sartor and Francesca Lagioia, The impact of the General Data Protection Regulation (GDPR) on artificial intelligence (European Union 2020) 16

[42] ‘Artificial Intelligence and Data Protection – How the GDPR Regulates AI’ (n 40) 4

[43] Sartor and Lagioia (n 41) 74-76

[44] Ibid

[45] AI framework, art 10

[46] Mark MacCarthy and Kenneth Propp, ‘Machines learn that Brussels writes the rules: The EU’s new AI regulation’ (Brookings, 4 May 2021) <https://www.brookings.edu/blog/techtank/2021/05/04/machines-learn-that-brussels-writes-the-rules-the-eus-new-ai-regulation/> accessed 12 June 2021

[47] SCPD includes data revealing racial or ethnic origin, religious beliefs, sexual orientation, and biometric data, among others.

[48] GDPR, art 9

[49] GDPR, art 9(2)

[50] AI framework, art 10(5), recital 44

[51] GDPR, art 9(2)(a), art 9(2)(g)

[52] Sartor and Lagioia (n 41) 16

[53] Sartor and Lagioia (n 41) 49

[54] AI framework, recital 44

[55] GDPR, art 9(2)(g)

[56] AI framework, art 10(5)

[57] AI framework, recital 41