The World Health Organization (WHO) recently released guidelines for the ethics and governance of large multi-modal models (LMMs). In the past year, LMMs like ChatGPT have come to the forefront of the news, and people have begun adopting them across many fields with varying degrees of success. Within the healthcare space, LMMs have the potential to respond to patients’ inquiries, identify research topics, and maintain electronic health records. However, the use of LMMs in healthcare raises many legal and ethical questions, such as how they can be used effectively without jeopardizing patient safety and privacy.
On January 18, the WHO released over 40 recommendations for the use of LMMs to promote and protect patient health. In its recommendations, the WHO called for governments to invest in public data sets that require developers and users to adhere to ethical and privacy standards in order to gain access to the data. The WHO also recommended mandatory post-release audits and impact assessments of LMMs to enhance transparency about each model’s accuracy and to uncover any biases.
The WHO’s Chief Scientist, Dr. Jeremy Farrar, is optimistic that LMMs can “achieve better health outcomes and overcome persisting health inequities.” But Dr. Farrar says these goals can only be reached if “those who develop, regulate, and use these technologies identify and fully account for the associated risks.” Companies like Epic Systems have already begun integrating LMMs into their products, and Dr. Alain Labrique, the WHO Director for Digital Health and Innovation, has called on governments to “lead efforts to effectively regulate the development and use of AI technologies, such as LMMs.”