As the lead author, I am very glad to see the final version of the European Medicines Agency Reflection Paper on the use of AI in the medicinal product lifecycle published online.
Together with CHMP Methodology Working Party colleagues and EMA representatives, we have taken this paper from the very first drafting stages, through the public consultation period last year, and finally to CHMP/CVMP adoption.
Below is my summary of how comments from external stakeholders on the draft version were implemented. I am very interested to hear your thoughts on this document, feeding into the upcoming work on formal EMA scientific guidelines on AI in clinical development and AI in pharmacovigilance.
------------------------------
The CHMP adoption of the draft Reflection Paper (RP) on 10 July 2023, and the CVMP adoption on 13 July 2023, were followed by a Public Consultation phase ending 31 December 2023. In total, 1342 comments were received from 66 stakeholders, including regulatory bodies, public and private consortia, organisations, and individuals. Many comments were very helpful in moving the discussion forward and improving the document in terms of both form and content.
All comments have been reviewed by the EMA CHMP Methodology Working Party drafting group and weighed against each other and against current regulatory requirements. Further, the fast-changing technical and regulatory landscape, including the recent adoption of the European Union AI Act, has been taken into account during the implementation.
On a general note, several comments have raised topics or asked for a level of prescriptive detail that could not be accommodated in these initial reflections on the use of AI. These comments will instead be taken into consideration in the future drafting process of formal EMA scientific guidelines in the field of AI.
In line with comments, technical terms and definitions have been harmonised wherever possible and are supported by an expanded glossary. In addition, the scope of the reflection paper has been further clarified so as not to exclude shallow machine learning technology, which may share many of the features and challenges addressed in the RP.
Given the clear definition of high-risk AI in Annex III of the AI Act, the terminology relating to risk has been reworded, replacing the term “high risk” with “high regulatory impact” and “high patient risk” to clarify that most regulatory requirements are founded in medicines legislation.
In response to comments, further clarification has been provided regarding requirements on the availability of data used in modelling, the use of third-party AI systems, and the need for prospective testing of machine learning models.
The key message of the RP remains: while AI technology holds the potential to improve many, if not all, aspects of the medicinal product lifecycle, trustworthiness for regulators, payers, and patients alike must not be compromised by the introduction of new technology.