New WHO Guidelines to Revolutionize AI in Healthcare

By HospiMedica International staff writers
Posted on 24 Oct 2023

As healthcare data becomes increasingly abundant and analytical methods such as machine learning, logic-based systems, and statistical techniques advance, artificial intelligence (AI) has the potential to reshape the healthcare landscape. However, AI technologies, including large language models, are sometimes deployed rapidly without a full understanding of how they will perform, which could either benefit or harm end-users such as healthcare providers and patients. Because AI systems can handle sensitive personal data, strong legal and regulatory frameworks are needed to safeguard privacy, security, and data integrity.

The World Health Organization (WHO, Geneva, Switzerland) has released a new publication outlining key regulatory considerations for the use of AI in healthcare. The report underscores the need to verify the safety and efficacy of AI systems, make beneficial systems rapidly available to those who need them, and foster dialogue among stakeholders, including developers, regulators, healthcare staff, and patients. WHO acknowledges that AI can significantly enhance healthcare by strengthening clinical trials; improving medical diagnosis, treatment, self-care, and individualized care; and supporting healthcare professionals in a range of tasks. For instance, AI can be especially useful in settings with a shortage of medical specialists, helping with tasks such as interpreting retinal scans and radiological images. The publication identifies six key regulatory areas for AI in healthcare: transparency and documentation; risk management; external validation of data and clarity about the intended use of AI; data quality; privacy and data protection; and collaboration.
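To make the first of these areas more concrete, the sketch below shows one way a development team might record basic transparency and documentation fields for a healthcare AI system. The `ModelDocumentation` class, its field names, and the example entry are illustrative assumptions for this sketch, not a schema prescribed by the WHO publication.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelDocumentation:
    """Illustrative documentation record for a healthcare AI system.

    The fields loosely mirror themes from the WHO report (intended use,
    training data provenance, validation evidence); they are assumptions
    made for this sketch, not a mandated format.
    """
    name: str
    intended_use: str
    training_data_sources: list[str] = field(default_factory=list)
    validation_populations: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)


# Hypothetical entry for a retinal-scan triage model.
doc = ModelDocumentation(
    name="retinal-screening-assistant",
    intended_use="Triage support for diabetic retinopathy screening",
    training_data_sources=["de-identified retinal scans from partner clinics"],
    validation_populations=["adults aged 40-75 across multiple sites"],
    known_limitations=["not validated for pediatric patients"],
)

# Serialize the record so it can be published alongside the model.
print(json.dumps(asdict(doc), indent=2))
```

Keeping such a record as structured data rather than free text makes it easier to publish, review, and update alongside the model itself.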


Image: The WHO has outlined considerations for regulation of artificial intelligence for health (Photo courtesy of 123RF)

AI systems are complex, shaped not only by the algorithms they are built on but also by the data they are trained on, which is often sourced from clinical settings and user interactions. Effective regulation can manage the risk that AI systems amplify biases present in their training data. For example, models trained on data that fail to capture the diversity of the populations they serve may produce biased or inaccurate outputs. To minimize such risks, regulations can require that attributes such as gender, race, and ethnicity of the people represented in the training data are transparently reported, and that datasets are deliberately made representative. The WHO's new publication aims to provide a framework of fundamental principles that national or regional governments and regulatory bodies can adopt or adapt when developing AI guidance for healthcare.
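As a rough illustration of the kind of transparent reporting described above, the Python sketch below summarizes how each demographic attribute is distributed in a hypothetical training-data manifest. The column names and the `representation_report` helper are assumptions made for this example; actual reporting requirements would be defined by regulators.

```python
import pandas as pd

# Hypothetical training-data manifest with self-reported demographic
# attributes; column names and values are placeholders for illustration.
manifest = pd.DataFrame({
    "sex": ["female", "male", "female", "male", "female"],
    "ethnicity": ["group_a", "group_a", "group_b", "group_a", "group_a"],
    "age_group": ["18-39", "40-64", "65+", "18-39", "40-64"],
})


def representation_report(df: pd.DataFrame, attributes: list[str]) -> dict:
    """Return the share of each category per attribute, so gaps in
    representation can be reported alongside the model documentation."""
    return {
        attr: df[attr].value_counts(normalize=True).round(3).to_dict()
        for attr in attributes
    }


# Print the proportion of each category; a skewed distribution here would
# flag a dataset that under-represents part of the intended population.
for attribute, shares in representation_report(
        manifest, ["sex", "ethnicity", "age_group"]).items():
    print(attribute, shares)
```

A report like this does not remove bias by itself, but it makes under-representation visible so that datasets can be deliberately rebalanced.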

Related Links:
WHO

