It is always interesting when two entirely separate parts of one’s life collide. What you may not know about me is that I am not only a lawyer but also a theology student. I was therefore intrigued to read that Pope Francis (very validly) called for human oversight of the use of AI (in the context of technological weapons) during the G7 summit last week.
The need for human oversight has been recognized by the EU legislator in Article 14 of the EU AI Act for high-risk AI systems. Human oversight must aim to prevent or minimise the risks to health, safety, or fundamental rights that might arise when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. The latter is defined in the AI Act as: “the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems”.
Misuse concerns both incorrect use of the AI system and off-label use. For MedTech companies, this obligation goes beyond the normal development processes under the MDR / IVDR. A first issue is identifying the expected misuse. The provider cannot simply wait and observe clinical use of the device, because it is obliged to identify measures enabling human oversight before the AI system is placed on the market or put into service. These measures must be proportionate to the risk, the level of autonomy, and the context in which the AI system is used.
The AI Act remains silent on how such off-label use should be determined and when off-label use can be considered “reasonably foreseeable”. What if the AI system is deployed off-label in an entirely different manner than the provider anticipated? Should the provider have made different assumptions about which uses outside the intended purpose of the device were reasonably foreseeable?
The human oversight obligation comes with its own design requirements. The provider must enable the persons charged with human oversight to actually understand the relevant capacities and limitations of the high-risk AI system and to monitor its operation. These deployers will often be ordinary healthcare providers without a technical background. They must be able to interpret and correct the output of the AI system, or to override, decline to use, or reverse that output. All of this must be considered in the design process.
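To make this concrete, below is a minimal, purely illustrative sketch in Python of what an oversight-friendly output flow could look like. All names, types, and the review flow are my own assumptions for illustration; they are not prescribed by the AI Act or drawn from any real product. The point is simply that the AI suggestion is never released directly: every output passes through a mandatory human review step that supports accepting, correcting, overriding, or declining it.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class ReviewAction(Enum):
    """Decisions a human overseer can take on an AI output (illustrative only)."""
    ACCEPT = "accept"      # use the AI output as-is
    CORRECT = "correct"    # use a human-amended output
    OVERRIDE = "override"  # replace the AI output entirely
    DECLINE = "decline"    # do not use the AI output at all


@dataclass
class ModelOutput:
    value: str         # the AI system's suggestion
    confidence: float  # model confidence, shown to the reviewer
    explanation: str   # plain-language rationale to aid interpretation


@dataclass
class ReviewedOutput:
    action: ReviewAction
    final_value: Optional[str]  # None when the reviewer declines the output


def reviewed_inference(
    model: Callable[[str], ModelOutput],
    case: str,
    review: Callable[[ModelOutput], ReviewedOutput],
) -> ReviewedOutput:
    """Run the model, then route its output through a mandatory human review step."""
    suggestion = model(case)
    decision = review(suggestion)
    # An audit trail of suggestion and decision supports later monitoring.
    print(f"AI suggested {suggestion.value!r} (conf={suggestion.confidence:.2f}); "
          f"reviewer chose {decision.action.value}")
    return decision


if __name__ == "__main__":
    # Stand-ins for a real model and a real clinician-facing review UI.
    def demo_model(case: str) -> ModelOutput:
        return ModelOutput("finding: benign", 0.62, "low-confidence pattern match")

    def demo_review(out: ModelOutput) -> ReviewedOutput:
        # A real UI would present out.explanation and out.confidence and let
        # the clinician choose; here we simply decline a low-confidence output.
        if out.confidence < 0.8:
            return ReviewedOutput(ReviewAction.DECLINE, None)
        return ReviewedOutput(ReviewAction.ACCEPT, out.value)

    reviewed_inference(demo_model, "patient-123", demo_review)
```

The design choice worth noting is structural: the review step is part of the output path itself, not an optional add-on, which is one way to give the deployer the interpret/override/decline capabilities described above.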
This requires a new way of thinking from MedTech companies developing AI systems: ‘oversight by design’. They should incorporate it into their development processes as soon as possible.
#AI #artificialintelligence #medtech #MDR #IVDR #IVD #medicaldevice #AIAct #AIA #EUAIAct #EUMDR #healthcare #SaMD #connecteddevice