The EU Commission’s Proposal for the Regulation of Artificial Intelligence and “human-centric” Regulation for AI as Medical Device

*Daria Onitiu

source: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines.1.html

The EU Commission’s proposal for the Regulation of Artificial Intelligence (the ‘AI Act’) will bring significant changes to standard-setting for high-risk systems, including medical AI. The proposal’s risk-based approach intends to balance the socio-economic benefits of medical AI against the need for harmonised standards for safety-critical applications in healthcare. From medical diagnostic systems to the robot surgeon, medical AI illustrates the need for interdisciplinary perspectives in the formal governance of these novel tools in a dynamic healthcare setting.

Setting the scene: ‘high-risk’ systems and medical AI

A critical point of debate is that the AI Act treats almost all AI tools as ‘high-risk’. High-risk systems are those AI systems whose nature, such as increased autonomy and opacity, requires enhanced mandatory obligations under the Act (Title III). This risk-based classification has been criticised for enabling the over-regulation of AI systems in healthcare. For instance, the AI Act’s broad-brush definition of “AI”, which includes statistical and logic-based applications of algorithms, ‘could also [encompass] systems that are not commonly considered AI being regulated by the Act, potentially impacting innovation’.[1]

Nevertheless, ‘the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used’, as rightly acknowledged in Title III of the proposal. Therefore, it is not the prescriptive, albeit non-exhaustive, nature of the proposal’s risk-based framework that tips the balance between innovation and the formal governance of AI systems. The EU’s vision is to promote a holistic alignment of EU values with a product safety approach.

Accordingly, the real question is the AI Act’s consistency of values when considering the governance of AI software as a medical device. The EU Commission White Paper is clear in this regard, noting that ‘the sector and the intended use’ of a system may create ‘distinct’ risks to fundamental rights and safety in healthcare, such as ‘safety considerations and legal implications concerning AI systems that provide specialized medical information to physicians, AI systems providing medical information directly to the patient and AI systems performing medical tasks themselves directly on a patient…’ (p. 17). The reasoning behind this varied approach to AI governance is the need for ‘human-centric’ regulation to achieve trust, entailing, for instance, the design of AI systems in healthcare with the involvement of human control, as well as enhanced transparency requirements addressing algorithmic opacity (p. 21). These values, regarding the ‘systemic, individual, and societal aspects’ of technology, ultimately shape the balance between innovation and regulation (p. 11).

Accordingly, the current discourse on AI governance is framed as a balancing exercise, considering the EU’s future efforts regarding the EU New Legislative Framework (Annex II of the AI Act). However, this process of value-alignment currently stagnates with respect to the transparency of medical AI systems, given that efforts to regulate medical AI centre on procedural alignment with other sectoral legislation, including the Medical Device Regulation (MDR).

Medical AI: a balancing exercise

The current EU Commission proposal follows the spirit of other sectoral legislation, including the MDR, reinforcing the modalities, and the associated problems, of medical AI in shaping human usage and decision-making beyond the laboratory setting. The proposal, just like the MDR, is a legal tool protecting product safety. With the AI Act, the EU Commission’s vision of human-centric regulation becomes an “ecosystem of protecting functionality and intended uses of AI” as medical devices.

Consequently, this perspective raises two interesting problems that deserve further attention. The first is that formal governance remains tied to the system’s performance, intended use, and functionality. The European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry recognises this, claiming that international standards need to consider lifecycle changes to software as a medical device on the ground, and subsequently inform and update the MDR (pp. 11-13). The problems of technical documentation tied to the lack of standardisation still exist, and the AI Act does not include a requirement to verify that the system supports clinical outcomes, including patient-centred care. The second is that the AI Act’s progressive outlook on transparency in Articles 13-14 stalls, as its contribution covers only the algorithm’s functional revelation of foreseeable risks, leaving out the subject’s perception of the nature of risk and thereby undermining shared decision-making in a healthcare setting. This further reinforces the MDR’s gaps in verifying software as a medical device via levels of explanation for meaningful clinical outcomes.

Therefore, considerations about AI technologies’ inherent, as well as distinct, risks to fundamental rights and safety are subsumed under the system’s innovative capacity to outperform human judgement. Most requirements in the AI Act, including the ‘appropriate type and degree of transparency’, as well as the identification of technical safeguards for oversight (Article 13(1); Article 14(3)(a)), are left to the manufacturer. There is no appropriate involvement of the user or of the subject impacted by these novel technologies. Follow-up measures, such as post-market surveillance under the AI Act and the MDR (Articles 61 and 89 respectively), will fulfil the function of monitoring lifecycle changes in product development, but do not provide the necessary confidence to develop safe and trustworthy systems in line with EU values.

Value-alignment is key for legal certainty

What this shows is that we should not reduce the socio-economic impact of AI to a question of legislative competence, but should treat the governance of safety-critical systems as a task of value-alignment. Indeed, the significant overlap between the AI Act and the MDR creates risks of double standards at the cost of legal certainty, concerning the governance and enforcement of safety considerations. Nevertheless, what we need first is a risk-based approach that integrates an interdisciplinary perspective on EU values into the modalities of AI systems, such as the use of machine learning approaches in healthcare. In this way, the focus will shift from the analysis of prescriptive AI regulation to the formal governance of novel technologies in the long term.

AI governance and medical AI: an interdisciplinary approach

The modalities of AI systems necessitate a new approach to standard-setting, one that goes beyond a vision of the EU’s proactive approach restricted to an AI system’s functionality (p. 2). Restricting AI governance in this way creates a false dichotomy, stifling innovation as well as the rapid advancement of AI in “narrow” domains. An interdisciplinary approach to AI governance means tools that test a system’s operation on the ground, considering user perspectives on the tool’s reliability, a patient’s perception of risk, and core ethical values in decision-making, including patient-centred care. This outlook will eventually provide a more consistent approach to AI governance in healthcare, as well as legal certainty.

(This blogpost is the author’s current work in progress. Please contact the author for the latest version of the work).

Details about the author

*Daria Onitiu is a Research Associate based at Edinburgh Law School. She researches at the Governance & Regulation Node within the UKRI Trustworthy Autonomous Systems Project. Her work intends to identify the transparency goals of medical diagnostic systems, and how to translate notions of accountability into a healthcare setting. Twitter @DariaOnitiu

[1] Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi, ‘Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US’ (2021) 27 (6) Science and Engineering Ethics 1, 6.

Enhancing the Legal Experience with Virtual Reality: A Legal Tech Perspective

source: https://www.healtheuropa.eu/could-virtual-reality-ease-the-burden-of-coronavirus-isolation/99566/

*Avv. Andrea Puligheddu

1. Introduction

In his most renowned novel, Neuromancer, the writer William Gibson introduced the key concept of cyberspace. The main character, Henry Dorsett Case, is a low-level hustler who aims to reconnect himself to the Matrix, a virtual reality computing platform situated, precisely, in cyberspace. Moving on from this fictional idea, in 1989 the writer and musician Jaron Lanier developed the concept of virtual reality, referring to it as a three-dimensional reality system implemented with stereo viewing goggles and haptic gloves.

Since those days, virtual reality (VR) has typically been portrayed both as a medium, like the telephone or television, and at the same time explored as a new kind of dimension in its philosophical meaning. Within a few years of technological development, this new medium has rapidly reached a high level of commercialisation, and the mass adoption of VR as a standard for gaming, education, and entertainment is just around the corner. Especially during the first part of the COVID-19 pandemic (from February 2020 to May 2020), we witnessed a rapid uptake of these systems among geek and non-geek consumers alike. Even from the investors’ point of view, the outlook seems rather good: in 2019, venture capitalists invested at least 4.1 billion in the VR industry, and these numbers seem destined to grow, and to scale up faster than in the previous three years.

As soon as these systems began to be commercialised for the entertainment industry, a large number of professional sectors started to explore what types of business uses were possible by introducing VR into company production processes and internal activities. Several successful use cases are available, such as the training and control activities carried out by QA operators, or applications in education and law enforcement.

Despite the virtuous models studied, there is one professional category above all whose absence seems a serious loss of opportunity: legal services.

In this short contribution, we will delve into the meaning of virtual reality from the perspective of legal practice. In particular, we will examine, on the one hand, whether these kinds of technologies are genuinely disruptive for the legal profession and, on the other hand, what legal tech applications are concretely possible in order to place the legal industry on this new technological path.

2. Virtual Reality background

For the purposes of this post, “virtual reality” or “VR” means an immersive three-dimensional computer-generated environment. Secondly, beyond the definition of the environment, we need to narrow the scope of the analysis on the hardware side as well. In order to keep the discussion commonly understandable, in this brief article we will consider only systems constituted by a minimal set of particular machines. Accordingly, the basic hardware infrastructure assumed as an example in this post is a pair of common VR goggles and wired clothing.

Therefore, the VR system as defined consists of a computer-generated environment, experienced through tethered or standalone devices such as those described above. One of the key factors of every VR experience resides in its peripheral sensors. Sometimes these sensors are programmed with a dual purpose, for instance to allow the user to interact with the real world while within the virtual dimension. Visual tools and motion detection, configured in a suitably smart way, enable a high level of responsiveness and an immersive user experience across several different types of possible applications. We will not examine every hardware component of these systems in depth; nevertheless, high-response audio and a clear video setup are definitely other essential ingredients for a disruptive VR experience.

At this point, it is sufficient to focus on at least two fundamental points for the scope of this post: the importance of sensors in ensuring good-quality feedback for users involved in a VR experience, and the immersive value that the experience must itself possess in order to introduce innovation into the work activity carried out. This last aspect, not yet examined, is explored in the next section from a legal tech point of view.

3. Legal tech and related outcomes 

Changing perspective, Legal Technology (also known, briefly, as “Legal Tech”) represents another phenomenon that has soared rapidly within the span of a few years. The term, born in the late 1980s, reached its current level of notoriety through the studies published by Richard Susskind in his book Tomorrow’s Lawyers: An Introduction to Your Future. Susskind’s main standpoint was that at least three driving factors of innovation can fundamentally be identified with regard to lawyers’ typical activities: the ‘more-for-less’ resizing of legal tasks, the full liberalisation of the legal business, and a reinforced alliance between legal matters and technology.

Starting from these three considerations, legal tech has evolved rapidly: from theory, it has literally become a new way of rethinking the legal profession worldwide, introducing massive automation of legal processes through code and developing new patterns to solve traditional legal problems.

The range of services offered is wide. Some address typical legal tasks, such as due diligence, invoicing methods, report creation, assessments, gap analysis, and so on; others, more advanced, focus on contracts and provide systems able to manage different contract templates according to the client’s needs, as sketched below. However, the most visionary and disruptive services devised by legal tech experts involve the fields of artificial intelligence (AI) and blockchain, combining sophisticated algorithms for legal research, for document signing and transmission, or even for trial activities in court.
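To make the contract-template idea concrete, here is a minimal sketch of the document-automation pattern such services rely on: a template with placeholders filled from client data. The template text, field names, and function are invented for illustration and do not reflect any specific product.

```typescript
// Minimal document-automation sketch: fill a contract template with client data.
// Template text and field names are invented for illustration only.

const template =
  "This Services Agreement is made between {{provider}} and {{client}}, " +
  "effective {{date}}, for a fee of {{fee}}.";

// Replace each {{field}} placeholder with the matching value, leaving
// unknown placeholders untouched so missing data is easy to spot.
function fillTemplate(tpl: string, fields: Record<string, string>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    fields[key] ?? match
  );
}

console.log(
  fillTemplate(template, {
    provider: "Example Legal LLP",
    client: "Acme S.r.l.",
    date: "1 July 2021",
    fee: "EUR 5,000",
  })
);
```

Real products layer clause libraries, approval workflows, and e-signature on top, but the core pattern of structured data flowing into standardised legal text is essentially this.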

Productivity plays an important role in the real growth of a law firm’s overall efficiency, but there are also disadvantages related to such an adventurous method of growth. Software represents, more or less, a further layer of intermediation between lawyer and client, and could annihilate the active interaction that is an important phase of every good legal practice. How to solve this? VR can represent an opportunity.

4. Improving legal experience: VR scenarios and LT tools synergy

One of the possible ways to reduce the loss of immersivity described above may be sought in VR development.

Picture this: a room furnished in the style of medieval Japan, with two historiated seats placed in the middle and two digitalised avatars, a legal counsellor and his client, discussing an agreement, or due diligence if you prefer, in a professional meeting. Imagine that their dialogue is supported by several tools: suppose that the counsellor, at a certain point, with a simple hand gesture detected by the peripheral sensors, can open a new software window shown on the visual hub of both parties. The software panel shown could, of course, change depending on the topic at hand. For example, during the discussion of an agreement it can be useful to show the electronic version of the document using a writing tool. Adding further variations, some notes could be inserted during the dialogue, perhaps with the help of voice recognition software which summarises all the important statements that emerged during the meeting. In the meantime, a legal research tool scans all the words in the agreement, estimating the probability of a favourable verdict in case of a claim or the number of possible legal issues related to the subject of the meeting.
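To illustrate how such a meeting might be wired together, here is a hedged sketch of the event flow just described: a gesture opens a shared document window, and transcribed speech is collected as meeting notes. Every name in it (MeetingEvent, DocumentPanel, the document identifier) is invented for illustration; no real VR SDK or legal-research API is implied.

```typescript
// Hypothetical event wiring for the VR meeting scenario above.
// All names and identifiers are illustrative, not a real SDK.

interface MeetingEvent {
  kind: "gesture" | "speech";
  payload: string; // gesture name, or a transcribed utterance
}

class DocumentPanel {
  // Render a document on the shared visual hub of both avatars.
  show(documentId: string): void {
    console.log(`Displaying ${documentId} to counsellor and client`);
  }
}

class MeetingSession {
  private notes: string[] = [];

  constructor(private panel: DocumentPanel) {}

  // Route sensor and speech events to the appropriate legal-tech tool.
  handle(event: MeetingEvent): void {
    if (event.kind === "gesture" && event.payload === "open-palm") {
      // The counsellor's hand gesture opens the draft agreement.
      this.panel.show("draft-agreement-v3");
    } else if (event.kind === "speech") {
      // A real system would send this to a summarisation service;
      // here we simply keep the raw statement as a meeting note.
      this.notes.push(event.payload);
    }
  }

  summary(): string[] {
    return this.notes;
  }
}

const session = new MeetingSession(new DocumentPanel());
session.handle({ kind: "gesture", payload: "open-palm" });
session.handle({ kind: "speech", payload: "Client accepts clause 4.2 subject to review." });
console.log(session.summary());
```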

Audio and visual impact plays a fundamental role as well. For this reason, coming back to our scenario, while the two are talking the software panel could be endowed with audio alerts tied to the more complex passages of the legal text, or with visual cues that highlight possible interpretative issues or mistakes. A perspective like this can greatly increase the number of possible variations. The list of scenarios could be absolutely huge, but in order to keep things concise we limit our vision to this one: a more accurate portrayal of human emotions by virtual characters can definitely help law firms to challenge any inherent bias within the firm and their clients’ businesses, boost the identification of legal needs, prevent several judicial issues from being delivered to the courts, and at the same time relieve the justice system of useless trials as well as of other administrative costs.

It is evident that this unique synergy between Legal Tech and Virtual Reality represents a disruptive opportunity: the former adds tools that, if integrated, can definitely accelerate legal counselling activities; the latter remedies the former’s lack of immersivity, bringing the participants into wonderful scenarios that protect the delicate relationship between the client and his lawyer.

5. Current legal issues, risks and challenges

Based on the facts set out above, it can be affirmed that a synergy between VR applications and Legal Tech methods should represent a clear plus in enhancing the legal experience of the future. Nevertheless, especially with regard to EU regulation, there are several legal issues related to this constantly evolving process that deserve to be briefly explored.

First of all, there is an unsolved dilemma related to the preservation of full confidentiality between client and lawyer, considering both as categories. In other words: cybersecurity, and how to preserve it.

In fact, regarding this aspect, some of the challenges that lawyers will face in this new environment will be familiar from the real-life one. These include, for instance, concerns about the security of conversations and documents shared within the VR environment, retaining company secrets during the development process, and protecting communications in the virtual world. Is client confidentiality as safe and secure as it is commonly presumed to be during real-life counselling?

The European Union has recently developed, through the activity of ENISA, a framework providing the legal basis for the application of cyber law. Following this set of rules basically guarantees key users a minimum level of proper security incident response, also in accordance with the recent review of the Directive on Security of Network and Information Systems (the NIS2 Directive).

On the other hand, a proper impact assessment for such a hybrid VR/Legal Tech system has to consider the protection of intellectual property rights, and facing this task is not easy at all. The aim of preserving clients’ secrets from possible violations occurring at various moments (such as integrations with external tools like document add-ons or research engines) is squarely at odds with current VR-based sharing methods, which are not yet covered by dedicated information security tools such as antivirus software or firewalls. Naturally, their development could represent an easy obstacle to cross, even if the history of information security teaches us that hurried solutions to complex problems often represent a risk rather than a solution.

The third problem is cultural as well as legal, namely the privacy-compliant adoption by clients of a digitalised version of legal services. In fact, according to statistics, one of the most widespread concerns about VR systems used for purposes other than gaming relates to data security and privacy: in 2020, roughly 50% of the subjects interviewed identified it as the major problem with these systems, a drop compared to 2019 (when the figure was higher than 60%).

Although the GDPR has already faced the problem with its Article 22 provision, this is not at all sufficient to ensure a high level of protection for personal data processed through these virtual environments. Some of the main aspects that will have to be explored in order to reach a proper level of processing security are, among others, the conduct of targeted Data Protection Impact Assessments (DPIAs) (Article 35 GDPR), the identification of security measures in accordance with Article 32 GDPR, and data breach management as set out in Articles 33 and 34 of the Regulation.

Personal data are constantly involved in the VR process, from the digitalisation of bodies or the creation of avatars at the start of almost all software designed for social interaction in VR, to the data collected during sessions of activity. Voice recordings, behavioural profiling, and huge amounts of identifiable and sensitive data, differing in variety, volume, and velocity of processing, are continually used by algorithms in common software as well as in VR applications. Privacy risks to VR users are particularly relevant, given the new information that Facebook, for instance, will be able to collect from its immersive VR platforms. These platforms currently track a user’s head movement and could potentially have the capability to track eye movement and, to one extent or another, biometric information as well. Biometric issues are only part of the problem.

6. Conclusion

As virtual reality and legal tech software come into normal use worldwide, lawyers who want to face the future challenges of law practice need to become accustomed to innovation. There are admittedly several legal and operational risks that remain unsolved, but the immensity of VR systems blended with Legal Tech integration represents a path to cross, a morning star to strive for. Even if liability and privacy issues are still open, it is our duty, and for legal tech experts our passion, to study and tailor a new body of law and practical guidelines where necessary, able to guide this transition wisely into another great era for human beings. The paradigm has already changed. Are we prepared for it?

 

Andrea Puligheddu is a Privacy & Cybersecurity Lawyer. He has held the role of Data Protection Officer for many leading public and private entities in the world of health software and technology companies, internet service providers, infosec, telco, and hardware suppliers. He lectures in photography and artistic production law for a nationally accredited media school, and in GDPR and Privacy Law at several Executive Master’s of Law programmes on the UNI ISO 11697:2017 and UNI PdR 43:2018 standard controls. He is accredited as a Cyber Security Auditor for international certification authorities.

More at: https://www.studiolegaleprivacy.com/en/andrea-puligheddu/

May I have some artificial intelligence with my human rights? About the recent European Commission’s Proposal on a Regulation for Artificial Intelligence

by Vera Lúcia Raposo*

 

On 21 April 2021, the European Commission released a proposal for a future European Union (EU) regulation on Artificial Intelligence (hereafter, the ‘Proposal’). While its structure sought inspiration in the General Data Protection Regulation (GDPR), its content was built upon other relevant EU interventions, namely the EC’s White Paper on Artificial Intelligence (AI), released in February 2020. The aim is to facilitate and develop the use of Artificial Intelligence within the EU so as to create a truly Digital Single Market, while still protecting fundamental rights, EU values, and ethical principles that are potentially threatened by AI features such as its opacity (some have called it a black box), complexity, demand for copious amounts of data (big data, with all the attached concerns), and capacity for automation. So far, so good. We expected nothing less from the European Commission (EC). However, the proposed regulation might fall short of this aim.

The definition of AI used in the Proposal is presented as being broad enough to encompass future developments of this technology. However, the fast pace of technological advancement implies that it will eventually become outdated.

The proposal is based on a risk-assessment model, under which three categories of AI emerge (sketched as a simple data model after the list):

  • Unacceptable-risk AI, banned in Title II of the Proposal. This category includes AI that distorts behaviour (dark patterns), exploits the vulnerabilities of specific groups (micro-targeting), scores people based on their behaviour (social scoring), or performs real-time biometric identification for law enforcement purposes in public spaces (facial identification);
  • High-risk AI, referred to in Title III of the Proposal, which is allowed, albeit under particularly strict standards;
  • Low-risk AI, for which only transparency requisites are demanded in some cases (namely chatbots and deep fakes), as established in Title IV of the Proposal.
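As a reading aid only, the three tiers can be pictured as a data model mapping each category to the kind of obligation the Proposal attaches to it. This is a hedged sketch paraphrasing Titles II-IV; the tier names and obligation strings are my own shorthand, and real classification turns on legal analysis, not a lookup table.

```typescript
// Schematic model of the Proposal's three risk tiers (Titles II-IV).
// Tier names and obligation summaries are illustrative shorthand only.

enum RiskTier {
  Unacceptable = "unacceptable", // banned practices (Title II)
  High = "high",                 // permitted under strict requirements (Title III)
  Low = "low",                   // transparency duties in some cases (Title IV)
}

const obligations: Record<RiskTier, string[]> = {
  [RiskTier.Unacceptable]: ["prohibited from the EU market"],
  [RiskTier.High]: [
    "ex-ante conformity assessment before market entry",
    "risk and quality management systems",
    "technical documentation and automatically generated logs",
  ],
  [RiskTier.Low]: ["transparency requirements (e.g. disclosing chatbots and deep fakes)"],
};

console.log(obligations[RiskTier.High]);
```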

The identification of a serious risk is based on the potential menace to health, safety, and/or fundamental rights. Notably, the label of ‘high-risk’ depends not only on the specific task performed, but also on its purpose. Moreover, where AI systems are erroneously classified, the consequences are not clearly defined in the Proposal. A correction mechanism should be in place to amend any wrong classification, but the Proposal remains silent on this issue.

High-risk AI systems are products or safety components of products whose regime has been harmonised under EU law by the legislation listed in Annex II (medical devices, toys, elevators, and many others), or stand-alone AI systems that pose special threats to fundamental rights and that are referred to in Annex III. These lists shall be updated in accordance with the latest technological developments.

For high-risk AI, a two-step system of control and monitoring is established. First, control is exercised through mandatory requirements with which AI systems must comply before entering the market. A conformity assessment is undertaken by a third party following harmonised standards, which may lead to the issuance of the ‘CE marking of conformity’. Second, an ex-post monitoring mechanism will be put in place by market surveillance authorities, using the powers conferred on them by Regulation (EU) 2019/1020 on market surveillance. Moreover, AI providers are also obliged to report serious incidents and malfunctions.

Low-risk AI systems almost go unnoticed in the Proposal, except for the provisions on transparency. The two-step procedure is not required in this case. However, providers can voluntarily undertake to follow it by creating their own codes of conduct (Article 69).

source: https://euraxess.ec.europa.eu/

Such a legal framework requires the setting up of specific bodies at the national level (notifying authorities and notified bodies), as well as at the European level, particularly the European Artificial Intelligence Board. The latter is a supranational supervisory authority, similar to the European Data Protection Board (EDPB) created under the umbrella of the GDPR, and also to the recently proposed European Board for Digital Services established in the proposal for the Digital Services Act.

Another innovation is the EU database, in which stand-alone high-risk AI must be registered before entering the market, to facilitate transparency and tracking. The information is supposed to be uploaded to the database by the provider of the high-risk AI system. Neutral entities, such as the notified bodies, could instead be entrusted with this task for all AI systems; such a solution would lead to better results in terms of AI safety and transparency.

Accountability is another matter of concern. Indeed, compliance requirements fall on AI providers, i.e., those who develop the AI and put the AI system (or its outputs) on the EU market, even if they are not established within EU territory. In addition to the above-mentioned tasks of control, monitoring and information updating, their obligations include the implementation of risk management and quality management systems, the development of detailed technical documentation, the maintenance of automatically generated logs, and transparency obligations requiring that those interacting with AI systems be informed that they are doing so. Moreover, compliance is also expected from AI users, i.e., persons or entities established in the EU using AI in a professional context, or not established in the EU but whose outputs are used in the EU, along with distributors and importers. In the event of non-compliance, the Proposal foresees administrative fines of up to €20m or 4% of total worldwide annual turnover, similarly to the GDPR clauses. However, no accountability mechanisms are in place, and individuals harmed by AI systems can barely obtain redress. The 2019 report on Liability for Artificial Intelligence and other Emerging Digital Technologies and the European Parliament Resolution of 20 October 2020 on a Civil Liability Regime for Artificial Intelligence suggest that such a mechanism might be put in place, as is also supported by the Coordinated Plan on Artificial Intelligence, released together with the Proposal.
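For readers unfamiliar with the GDPR-style fine structure mentioned above, the two figures operate as alternative ceilings. Assuming the ‘whichever is higher’ rule familiar from Article 83 GDPR (the text above gives only the two figures), the cap can be sketched as:

```typescript
// Sketch of a GDPR-style fine ceiling: the higher of a fixed amount
// and a percentage of worldwide annual turnover. Figures follow the
// text above (EUR 20m or 4%); the "whichever is higher" rule is the
// GDPR convention and is assumed here, not quoted from the Proposal.

function fineCeilingEUR(worldwideAnnualTurnoverEUR: number): number {
  const fixedCap = 20_000_000;
  const turnoverCap = 0.04 * worldwideAnnualTurnoverEUR;
  return Math.max(fixedCap, turnoverCap);
}

// A company with EUR 1bn in turnover faces a ceiling of EUR 40m, not EUR 20m.
console.log(fineCeilingEUR(1_000_000_000)); // 40000000
```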

source: https://digital-strategy.ec.europa.eu/

Some highly controversial topics, such as facial recognition technologies (FRT), are expected to raise discussion amongst experts. Interestingly, the 2020 White Paper did not ban FRT, although a draft version had suggested a time-limited ban on the use of FRT in public spaces. The White Paper merely recommended the use of appropriate cautions, without giving much detail on such recommendations. Instead, Article 5 of the Proposal clearly prohibits the use of some forms of FRT. It is expected that Article 5/1/d, in particular, will be contested by those defending this technology. The norm bans ‘real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes’, allegedly one of its more threatening uses from the perspective of fundamental human rights. Considering its potential added value for crime control, the Proposal provides for some exceptions: the search for crime victims, including missing children; a ‘specific, substantial and imminent threat to the life or physical safety of natural persons, or of a terrorist attack’; and the detection, localisation, identification, or prosecution of a perpetrator or suspect of the crimes referred to in Article 2/2. The latter include serious crimes, such as terrorism, trafficking in human beings, murder, and rape, but also rather different crimes such as corruption, fraud, and the facilitation of unauthorised entry and residency. Moreover, the ban leaves several loopholes: it does not cover the use of FRT for law enforcement purposes that does not take place in real time or is not carried out in public spaces, FRT used by other public entities not related to law enforcement, or FRT used by private individuals and companies. Critics might argue that many doors are still left open.

Overall, the Proposal is more human-rights-friendly than the White Paper, but eventually also more conservative, a potential downside for the EU’s competitive digital capacity on a global scale. At its core, the regulatory idea, the categorisation of AI systems by level of risk, is the same in the Proposal and the White Paper. However, the White Paper had a more pro-technology approach and, unlike the Proposal, did not elaborate in detail on the potential human rights violations. Some have even pointed out that the White Paper was more concerned with the economic outputs of a massive investment in AI than with its consequences for human rights. Although the criticism might be excessive, the White Paper did place a stronger emphasis on technological development, as results from the various sections dedicated to this aim (comparatively, there is proportionately more discussion of development and innovation than in the Proposal).

In contrast, the Proposal gives more space to the protection of fundamental rights (though many will argue not enough), as expressed in multiple binding norms imposing risk assessments and safety requisites, whose violation can lead to severe economic penalties. Assuming this Proposal indeed becomes the new AI Regulation, my guess is that European AI developers and manufacturers will be asked to comply with so many different requirements, also coming from other norms (the GDPR, the Medical Devices Regulation, the In Vitro Diagnostic Medical Devices Regulation, the Data Governance Act proposal), that AI in Europe will become a scientific topic, though not a real industry.

In sum, the Proposal essentially pivots around two core concepts. The first is compliance, based on a system of harmonised rules, monitoring, and good governance. Consequently, the second is the principle of trust (‘trust’ and ‘trustworthy’ are emphasised throughout the Proposal). On the one hand, developers of AI shall be able to rely on rules to carry out their businesses within the EU market by complying with a unified body of rules for all Member States. On the other hand, AI users should be able to rely on its safety.

Innovation should have been the third characteristic. The impetus to bring about innovation in AI technologies is confined to Title V which, albeit introducing interesting provisions, falls short of what would be required for a truly digital single market. The most promising initiative is the creation of regulatory sandboxes to encourage innovation, though under strict (too strict?) requirements. AI investment in the EU might be hampered by this ‘innovation hole’, which could give an advantage to other leading players. Given the outstanding pace of Chinese technological development, including in AI, the EU might not be able to catch up with China in the near future (or ever). Whether the Proposal strikes a fair balance between innovation and human rights, or whether it will instead lead the EU to stagnation in the domain of AI, remains to be seen.

 

*Vera Lúcia Raposo is Associate Professor at the Faculty of Law of the University of Macau, China, and Assistant Professor at the Faculty of Law of the University of Coimbra, Portugal. Her main lines of research are health law and digital law.