The EU Commission’s Proposal for the Regulation of Artificial Intelligence and “human-centric” Regulation for AI as Medical Device

*Daria Onitiu


The EU Commission’s proposal for the Regulation of Artificial Intelligence (the ‘AI Act’) will bring significant changes to standard-setting for high-risk systems, including medical AI. The proposal’s risk-based approach intends to balance the socio-economic benefits of medical AI against the need for harmonised standards for safety-critical applications in healthcare. From medical diagnostic systems to the robot surgeon, medical AI illustrates the need for interdisciplinary perspectives on the formal governance of these novel tools in a dynamic healthcare setting.

Setting the scene: ‘high-risk’ systems and medical AI

A critical point of debate is that the AI Act considers almost all AI tools to be ‘high-risk’. High-risk systems are those AI systems that, by their nature, such as their increased autonomy and opacity, attract enhanced mandatory obligations under the Act (Title III). This risk-based classification has been criticised for enabling the over-regulation of AI systems in healthcare. For instance, the AI Act’s broad-brush definition of “AI”, which includes statistical and logic-based applications of algorithms, ‘could also [lead to] systems that are not commonly considered AI being regulated by the Act, potentially impacting innovation’.[1]

Nevertheless, ‘the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used’, as rightly acknowledged in Title III of the proposal. Therefore, it is not the prescriptive, albeit non-exhaustive, nature of the proposal’s risk-based framework that tips the balance between innovation and the formal governance of AI systems. The EU’s vision is to promote the holistic alignment of EU values with a product safety approach.

Accordingly, the real question concerns the AI Act’s consistency of values when it comes to the governance of AI software as a medical device. The EU Commission White Paper is clear in this regard: ‘the sector and the intended use’ of a system may create ‘distinct’ risks to fundamental rights and safety in healthcare, such as ‘safety considerations and legal implications concerning AI systems that provide specialized medical information to physicians, AI systems providing medical information directly to the patient and AI systems performing medical tasks themselves directly on a patient…’ (p. 17). The reasoning for this varied approach to AI governance is the need for ‘human-centric’ regulation to achieve trust, such as designing AI systems in healthcare with human control involved, as well as enhanced transparency requirements addressing algorithmic opacity (p. 21). These values regarding the ‘systemic, individual, and societal aspects’ of technology ultimately shape the balance between innovation and regulation (p. 11).

The current discourse on AI governance is thus framed as a balancing exercise, considering the EU’s future efforts regarding the EU New Legislative Framework (Annex II of the AI Act). However, this process of value-alignment currently stagnates with respect to the transparency of medical AI systems, given the efforts to align the regulation of medical AI procedurally with other sectoral legislation, including the Medical Device Regulation (MDR).

Medical AI: a balancing exercise

The EU Commission’s current proposal follows the spirit of other sectoral legislation, including the MDR, reinforcing the modalities of medical AI, including the associated problems of shaping human usage and decision-making beyond the laboratory setting. The proposal, just like the MDR, is a legal tool protecting product safety. With the AI Act, the EU Commission’s vision of human-centric regulation becomes an “ecosystem of protecting functionality and intended uses of AI” as medical devices.

Consequently, this perspective raises two problems that deserve further attention. First, formal governance remains tied to the system’s performance, intended use, and functionality. The European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry recognises this, claiming that international standards need to consider lifecycle changes to software as a medical device on the ground, and subsequently inform and update the MDR (pp. 11-13). The problems of technical documentation tied to the lack of standardisation still exist, and the AI Act does not include a requirement to verify that the system supports clinical outcomes, including patient-centred care. Second, the AI Act’s progressive outlook on transparency in Articles 13-14 stagnates because its contribution covers only the algorithm’s functional disclosure of foreseeable risks, leaving out the subject’s perception of the nature of risk and thereby undermining shared decision-making in a healthcare setting. This further reinforces the gaps in the MDR when it comes to verifying software as a medical device via levels of explanation geared towards meaningful clinical outcomes.

Therefore, considerations about AI technologies’ inherent as well as distinct risks to fundamental rights and safety are subsumed under the system’s innovative capacity to outperform human judgement. Most requirements in the AI Act, including the ‘appropriate type and degree of transparency’ as well as the identification of technical safeguards for oversight (Article 13(1); Article 14(3)(a)), are left to the manufacturer. There is no appropriate involvement of the user, nor of the subject impacted by these novel technologies. Follow-up measures, such as post-market surveillance under both the AI Act and the MDR (Articles 61 and 89 respectively), will fulfil the function of monitoring life-cycle changes in product development, but they do not provide the necessary confidence to develop safe and trustworthy systems in line with EU values.

Value-alignment is key for legal certainty

What this shows is that we should not reduce the socio-economic impact of AI to a question of legislative competence, but should instead consider the governance of safety-critical systems a task of value-alignment. Indeed, the significant overlap between the AI Act and the MDR creates a risk of double standards at the cost of legal certainty in the governance and enforcement of safety considerations. Nevertheless, what we need first is a risk-based approach that incorporates an interdisciplinary perspective on EU values into the modalities of AI systems, such as the use of machine learning approaches in healthcare. This way, the focus will shift from the analysis of prescriptive AI regulation to the formal governance of novel technologies in the long term.

AI governance and medical AI: an interdisciplinary approach

The modalities of AI systems necessitate a new approach to standard-setting, one that goes beyond the EU’s proactive vision restricted to an AI system’s functionality (p. 2). Restricting AI governance in this way creates a false dichotomy, stifling innovation as well as the rapid advancement of AI in “narrow” domains. An interdisciplinary approach to AI governance means tools that test a system’s operation on the ground, considering user perspectives on the tool’s reliability, a patient’s perception of risk, and core ethical values in decision-making, including patient-centred care. This outlook will eventually provide a more consistent approach to AI governance in healthcare, as well as legal certainty.

(This blog post is the author’s current work in progress. Please contact the author for the latest version of the work.)

Details about the author

*Daria Onitiu is a Research Associate based at Edinburgh Law School. She researches at the Governance & Regulation Node within the UKRI Trustworthy Autonomous Systems Project. Her work aims to identify the transparency goals of medical diagnostic systems and how to translate notions of accountability into a healthcare setting. Twitter @DariaOnitiu

[1] Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi, ‘Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US’ (2021) 27 (6) Science and Engineering Ethics 1, 6.