May I have some artificial intelligence with my human rights? About the European Commission’s recent Proposal for a Regulation on Artificial Intelligence

by Vera Lúcia Raposo*

 

On 21 April 2021, the European Commission (EC) released a proposal for a future European Union (EU) regulation on Artificial Intelligence (hereafter, the ‘Proposal’). While its structure draws inspiration from the General Data Protection Regulation (GDPR), its content builds upon other relevant EU interventions, namely the EC’s White Paper on Artificial Intelligence (AI), released in February 2020. The aim is to facilitate and develop the use of AI within the EU so as to create a truly Digital Single Market, while still protecting the fundamental rights, EU values, and ethical principles that are potentially threatened by AI features such as its opacity (some have called it a black box), its complexity, its demand for copious amounts of data (big data, with all the attached concerns), and the possibility of automation. So far, so good. We expected nothing less from the EC. However, the proposed regulation might fall short of this aim.

The definition of AI used by the Proposal is presented as being broad enough to encompass future developments of this technology. However, the fast pace of technological advancement means that it will eventually become outdated.

The Proposal is based on a risk-assessment model, under which three categories of AI emerge:

  • Unacceptable-risk AI, banned under Title II of the Proposal. This category includes AI that distorts behaviours (dark patterns), exploits the vulnerabilities of specific groups (micro-targeting), scores people based on their behaviours (social scoring), or performs real-time biometric identification for law enforcement purposes in public spaces (facial recognition);
  • High-risk AI, referred to in Title III of the Proposal, which is allowed, albeit under particularly strict standards;
  • Low-risk AI, for which only transparency requirements are imposed in some cases (namely chatbots and deep fakes), as established in Title IV of the Proposal.

The identification of a serious risk is based on the potential threat to health, safety, and/or fundamental rights. Notably, the ‘high-risk’ label depends not only on the specific task performed but also on its purpose. Moreover, the Proposal does not clearly define the consequences of an erroneous classification. A correction mechanism should be in place to amend any wrong classification, but the Proposal remains silent on this issue.

High-risk AI systems are either products or safety components of products whose regime has been harmonised under EU law by the legislation listed in Annex II (medical devices, toys, elevators, and many others), or stand-alone AI systems that pose particular threats to fundamental rights and are listed in Annex III. These lists are to be updated in line with the latest technological developments.

For high-risk AI, a two-step system of control and monitoring is established. First, control is exercised through mandatory requirements with which AI systems must comply before being placed on the market. A conformity assessment is undertaken by a third party following harmonised standards, which may lead to the issuance of the ‘CE marking of conformity’. Second, an ex post monitoring mechanism will be put in place by market surveillance authorities, which will use the powers conferred on them by Regulation (EU) 2019/1020 on market surveillance. Moreover, AI providers are also obliged to report serious incidents and malfunctions.

Low-risk AI systems almost go unnoticed in the Proposal, except for the provisions on transparency. The two-step procedure is not required in this case. However, providers may voluntarily commit to similar standards by creating their own codes of conduct (Article 69).


Such a legal framework requires the establishment of specific bodies at the national level (notifying authorities and notified bodies), as well as at the European level, particularly the European Artificial Intelligence Board. The latter is a supranational supervisory authority, similar to the European Data Protection Board (EDPB), created under the umbrella of the GDPR, and also to the recently proposed European Board for Digital Services established in the proposal for the Digital Services Act.

Another innovation is the EU database, in which stand-alone high-risk AI must be registered before being placed on the market, to facilitate transparency and tracking. The information is supposed to be uploaded to the database by the provider of the high-risk AI system. Neutral entities, such as the notified bodies, could instead be entrusted with this task for all AI systems. Such a solution would lead to better results in terms of AI safety and transparency.

Accountability is another matter of concern. Indeed, compliance requirements fall on AI providers, i.e., those who develop AI and place the AI system (or its outputs) on the EU market, even if they are not established within EU territory. In addition to the above-mentioned tasks of control, monitoring, and information updating, their obligations include the implementation of risk management and quality management systems, the development of detailed technical documentation, the maintenance of automatically generated logs, and transparency obligations requiring that those interacting with AI systems be informed that they are doing so. Moreover, compliance is also expected from AI users, i.e., people or entities using AI in a professional context who are established in the EU, or who are not established in the EU but whose systems’ outputs are used in the EU, along with distributors and importers. In the event of non-compliance, the Proposal foresees maximum administrative fines of up to €20m or 4% of total worldwide annual turnover, whichever is higher, similarly to the GDPR clauses. However, no accountability mechanisms are in place, and individuals harmed by AI systems can barely obtain redress. The 2019 report on Liability for Artificial Intelligence and Other Emerging Digital Technologies and the European Parliament Resolution of 20 October 2020 on a Civil Liability Regime for Artificial Intelligence suggest that such a mechanism might be put in place, as is also supported by the Coordinated Plan on Artificial Intelligence, released together with the Proposal.


Some highly controversial topics, such as facial recognition technologies (FRT), are expected to raise discussion amongst experts. Interestingly, the 2020 White Paper did not ban FRT, although a draft version had suggested a time-limited ban on the use of FRT in public spaces. The White Paper merely recommended the use of appropriate precautions, without giving much detail on such recommendations. In contrast, Article 5 of the Proposal clearly prohibits the use of some forms of FRT. It is expected that Article 5/1/d, in particular, will be contested by those defending this technology. The norm bans ‘real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes’, allegedly one of the more threatening uses of FRT from the perspective of fundamental human rights. Considering its potential added value for crime control, the Proposal provides for some exceptions: the search for victims of crime, including missing children; a ‘specific, substantial and imminent threat to the life or physical safety of natural persons, or of a terrorist attack’; and the detection, localisation, identification, or prosecution of a perpetrator or suspect of one of the crimes referred to in Article 2/2 of the Framework Decision on the European Arrest Warrant. The latter include grievous crimes, such as terrorism, trafficking in human beings, murder, and rape, but also quite different crimes, such as corruption, fraud, and the facilitation of unauthorised entry and residence. Moreover, the ban leaves several loopholes: it does not cover the use of FRT for law enforcement purposes that does not take place in real time or is not carried out in public spaces, the use of FRT by public entities not related to law enforcement, or the use of FRT by private individuals and companies. Critics might argue that many doors are still left open.

Overall, the Proposal is more human-rights-friendly than the White Paper, but arguably also more conservative, a potential downside for the EU’s digital competitiveness on a global scale. At its core, the regulatory idea, the categorisation of AI systems by level of risk, is the same in the Proposal and the White Paper. However, the White Paper had a more pro-technology approach and, unlike the Proposal, did not elaborate in detail on potential human rights violations. Some have even pointed out that the White Paper was more concerned with the economic outputs of a massive investment in AI than with its consequences for human rights. Although this criticism might be excessive, the White Paper did place a stronger emphasis on technological development, as evidenced by the various sections dedicated to this aim (proportionately, it contains more discussion of development and innovation than the Proposal does).

In contrast, the Proposal gives more space to the protection of fundamental rights (though many will argue not enough), as expressed in multiple binding norms imposing risk assessments and safety requisites, whose violation can lead to severe economic penalties. Assuming this proposal does indeed become the new AI Regulation, my guess is that European AI developers and manufacturers will be asked to comply with so many different requirements, also stemming from other norms (the GDPR, the Medical Devices Regulation, the In Vitro Diagnostic Medical Devices Regulation, the Data Governance Act proposal), that AI in Europe risks becoming a scientific topic rather than a real industry.

In sum, the Proposal essentially pivots around two core concepts. The first is compliance, based on a system of harmonised rules, monitoring, and good governance. Consequently, the second is the principle of trust (‘trust’ and ‘trustworthy’ are emphasised throughout the Proposal). On the one hand, AI developers should be able to carry out their businesses within the EU market by complying with a single body of rules valid in all Member States. On the other hand, AI users should be able to rely on the safety of these systems.

Innovation should have been the third. The impetus to bring about innovation in AI technologies is confined to Title V of the Proposal which, albeit introducing interesting provisions, falls short of what a truly digital single market would require. The most promising initiative is the creation of regulatory sandboxes to encourage innovation, though under strict (too strict?) requirements. AI investment in the EU might be hampered by such an ‘innovation hole’, which could give other leading players an advantage. Given China’s outstanding technological development, including in AI, the EU might not be able to catch up with China in the near future (or ever). Whether the Proposal strikes a fair balance between innovation and human rights, or whether it will instead lead the EU to stagnation in the domain of AI, remains to be seen.

 

*Vera Lúcia Raposo is Associate Professor at the Faculty of Law of the University of Macau, China, and Assistant Professor at the Faculty of Law of the University of Coimbra, Portugal. Her main lines of research are health law and digital law.