A recent article[1] in the Journal of the American Medical Association, authored by Haider J. Warraich, MD[2]; Troy Tazbaz[3]; and Robert M. Califf, MD[4], reviews the history of artificial intelligence (AI) regulation by the U.S. Food and Drug Administration (FDA) and offers the agency's perspective on the considerations it faces in regulating potential uses of AI in medical products, clinical research and drug design, as well as the challenges of adapting the regulatory system to new technologies that require continuous feedback.
The FDA has been evaluating AI-enabled medical devices for nearly 30 years, beginning with its 1995 approval of PAPNET, a partially AI-enabled software tool that employed neural networks to help prevent misdiagnosis of cervical cancer.[5] The FDA has subsequently authorized nearly 1,000 AI-enabled medical devices.[6] Recent years have seen substantial advances in AI development and its use in medical devices and drug design, including a notable 10-fold increase between 2020 and 2021 in new drug applications utilizing AI during drug discovery and development.[7]
The FDA’s medical product centers have identified four areas of focus for the integration of AI in medical products: (1) fostering collaboration with developers, patient groups, academia, global regulators and other interested parties to safeguard public health; (2) promoting development of harmonized standards, guidelines, best practices and tools across U.S. and global markets; (3) advancing development of regulatory approaches that support innovation; and (4) supporting research related to evaluation and monitoring of AI performance.[8] The FDA announced a commitment to developing a tailored regulatory framework for AI-enabled medical devices in January 2021.[9] As part of this commitment, the FDA has published several guidance documents regarding AI and health-related software, including final guidance on clinical decision-support software published in 2022.[10] The guidance also clarifies FDA policy on the definition of a medical device as amended by the 21st Century Cures Act of 2016, which excludes certain types of medical software from that definition, such as software intended for maintaining or encouraging a healthy lifestyle that is unrelated to the diagnosis, cure, mitigation, prevention or treatment of a disease or condition.[11] The guidance aims to make development more efficient as the AI landscape continues to change rapidly by helping developers understand which changes to an AI model must be submitted for FDA review.[12] Warraich et al. note that regulatory schemes must be flexible enough to support a diverse spectrum of AI models in medical device applications with varying levels of risk, while conforming to global standards as much as possible.[13]
Particular risks identified with AI-enabled devices include algorithm failure, model bias, clinician overreliance, incorrect interpretation and poor algorithm input.[14] Generative AI applications, such as large language models (LLMs), which may be prone to hallucinations, have not been approved by the FDA to date.[15] Given these risks, along with the capacity of AI models to evolve and learn, the authors note a particular need for post-market monitoring of AI-enabled devices, a need not previously associated with traditional medical products.[16] Whereas traditional medical products remain unchanged after initial approval, the authors suggest that the contextual sensitivity of AI requires continuous, recurrent monitoring within the environment in which it is used, and that the scale of effort required for such ongoing monitoring may eclipse any currently contemplated regulatory scheme.[17] Suggested starting points for developing robust frameworks to monitor AI-assisted decision making in healthcare include applications in cardiology, oncology or other areas with significant evidence-based support for clinical decision making, as well as lower-risk clinical workflows where time is not of the essence, allowing thorough evaluation of AI outputs.[18]
The authors note that the FDA will continue to be centrally involved in ensuring an intentional focus on health outcomes, but that it is the collective responsibility of all involved parties to develop and optimize solutions for assessing the ongoing safety and effectiveness of AI in medical devices, drug design, and clinical research.[19]
Editor: Brenden S. Gingrich, Ph.D.
_____________________________________
[1] Haider J. Warraich, MD et al., FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine, JAMA, October 15, 2024, doi: 10.1001/jama.2024.21451.
[2] Haider J. Warraich, MD, joined the FDA as a Senior Clinical Advisor for Chronic Disease in September 2023. See https://haiderwarraich.com/.
[3] Troy Tazbaz is the Director of the FDA's Digital Health Center of Excellence within the Center for Devices and Radiological Health. See https://www.fiercehealthcare.com/digital-health/former-oracle-exec-troy-tazbaz-tapped-fda-director-digital-health.
[4] Robert M. Califf, MD, has served as the Commissioner of Food and Drugs since February 17, 2022. See https://www.fda.gov/about-fda/fda-organization/robert-califf.
[5] Warraich et al. at E1.
[6] Id.
[7] Id. at E2.
[8] Id.
[9] FDA, Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan, January 2021, https://www.fda.gov/media/145022/download.
[10] FDA, Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff, September 28, 2022, https://www.fda.gov/media/109618/download.
[11] Federal Food, Drug, and Cosmetic Act § 520(o); 21st Century Cures Act, Pub. L. No. 114-255 (2016).
[12] Warraich et al. at E2.
[13] Warraich et al. at E2-E3.
[14] Id. at E3.
[15] Id.
[16] Id. at E3-E4.
[17] Id. at E5.
[18] Id. at E4.
[19] Id. at E5.