AI encompasses a range of technologies (such as statistical models, diverse algorithms and self-modifying systems) that are increasingly being applied across all stages of a medicine's lifecycle: from preclinical development, to clinical trial data recording and analysis, to pharmacovigilance and clinical use optimisation. This range of applications brings regulatory challenges, including the transparency of algorithms and the interpretability of their outputs, as well as the risks of AI failures and the wider impact such failures would have on AI uptake in medicine development and on patients' health.
The report identifies key issues linked to the regulation of future therapies using AI and makes specific recommendations for regulators and stakeholders involved in medicine development to foster the uptake of AI. Some of the main findings and recommendations include:
- Regulators may need to apply a risk-based approach to assessing and regulating AI, which could be informed through exchange and collaboration within ICMRA;
- Sponsors, developers and pharmaceutical companies should establish strengthened governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk of a medicinal product;
- Regulatory guidelines for AI development, validation and use with medicinal products should be developed in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.
The report is based on a horizon-scanning exercise on AI, conducted by the ICMRA Informal Network for Innovation working group and led by EMA. The goal of this network is to identify topics that are challenging for medicine regulators, to explore the suitability of existing regulatory frameworks, and to develop recommendations for adapting regulatory systems in order to facilitate safe and timely access to innovative medicines.
The implementation of the recommendations will be discussed by ICMRA members in the coming months.