Advanced AI Systems Could Assist Anesthesiologists in Operating Room
Posted on 03 Feb 2022
A new deep learning algorithm trained to optimize doses of propofol to maintain unconsciousness during general anesthesia could augment patient monitoring.
A new study by researchers at MIT (Cambridge, MA, USA) and Massachusetts General Hospital (Boston, MA, USA) suggests the day may be approaching when advanced artificial intelligence (AI) systems could assist anesthesiologists in the operating room. The team of neuroscientists, engineers and physicians has demonstrated a machine learning algorithm for continuously automating dosing of the anesthetic drug propofol.
Using deep reinforcement learning, in which the software’s neural networks simultaneously learned how its dosing choices maintain unconsciousness and how to critique the efficacy of those actions, the algorithm outperformed more traditional software in sophisticated, physiology-based patient simulations. When given recorded data from nine real surgeries, it also closely matched the dosing decisions of the attending anesthesiologists. These advances make it more feasible for computers to maintain patient unconsciousness with no more drug than is needed, freeing anesthesiologists for their many other responsibilities in the operating room, including making sure patients remain immobile, experience no pain, remain physiologically stable, and receive adequate oxygen. By helping to optimize drug dosing, the researchers say, the algorithm could improve patient care.
The research team designed a machine learning approach that would not only learn how to dose propofol to maintain patient unconsciousness, but also how to do so in a way that would optimize the amount of drug administered. They accomplished this by endowing the software with two related neural networks: an “actor” with the responsibility to decide how much drug to dose at every given moment, and a “critic” whose job was to help the actor behave in a manner that maximizes “rewards” specified by the programmer. For instance, the researchers experimented with training the algorithm using three different rewards: one that penalized only overdosing, one that questioned providing any dose, and one that imposed no penalties.
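The three reward variants the researchers describe can be sketched as a single reward function with a switchable penalty term. This is a toy illustration, not the study's actual reward: the function name, the penalty weight `lam`, and the dose units are all illustrative assumptions.

```python
def reward(depth_error, dose, scheme, lam=0.05, max_safe_dose=1.0):
    """Toy reward signal for a dosing agent.

    depth_error: gap between measured and target unconsciousness level
    dose: amount of propofol the actor chose to give this step
    scheme: which of the three penalty variants to apply
    """
    tracking = -abs(depth_error)         # reward staying near the target depth
    if scheme == "dose_penalty":         # question every dose given
        return tracking - lam * dose
    if scheme == "overdose_penalty":     # penalize only doses beyond a safe cap
        return tracking - lam * max(0.0, dose - max_safe_dose)
    return tracking                      # "no_penalty": tracking term only
```

Under the dose-penalty scheme, any nonzero dose costs something, which pushes the actor toward the smallest infusion that still holds the target depth.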
In every case they trained the algorithm with simulations of patients that employed advanced models of both pharmacokinetics, or how quickly propofol reaches the relevant regions of the brain after a dose is administered, and pharmacodynamics, or how the drug actually alters consciousness when it reaches its destination. Patient unconsciousness levels, meanwhile, were reflected in measures of brain waves, as they can be in real operating rooms. By running hundreds of rounds of simulation with a range of values for these conditions, both the actor and the critic could learn how to perform their roles for a variety of kinds of patients.
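A simulated patient of this kind can be imagined as a compartment model: pharmacokinetics moves the drug from the infusion into the plasma and on to an effect site in the brain, and a sigmoid pharmacodynamic curve maps effect-site concentration to depth of unconsciousness. The minimal one-compartment sketch below is far simpler than the advanced models the study used, and its rate constants and Hill parameters are illustrative assumptions only.

```python
def simulate(doses, dt=5.0, k_pe=0.02, k_e=0.01, c50=2.0, gamma=3.0):
    """Toy PK/PD patient: infusion -> plasma -> effect site -> response.

    doses: infusion rate at each time step
    Returns the simulated unconsciousness depth (0 to 1) at each step.
    """
    cp = ce = 0.0                        # plasma and effect-site concentrations
    depth = []
    for u in doses:
        cp += dt * (u - k_e * cp)        # pharmacokinetics: uptake and elimination
        ce += dt * k_pe * (cp - ce)      # effect-site equilibration lag
        # pharmacodynamics: Hill (sigmoid) curve from concentration to effect
        depth.append(ce**gamma / (c50**gamma + ce**gamma))
    return depth
```

Even this toy captures the two features that make dosing hard: the drug's effect lags the infusion, and the concentration-to-effect curve is nonlinear.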
The most effective reward system turned out to be the “dose penalty” one in which the critic questioned every dose the actor gave, constantly chiding the actor to keep dosing to a necessary minimum to maintain unconsciousness. Without any dosing penalty the system sometimes dosed too much and with only an overdose penalty it sometimes gave too little. The “dose penalty” model learned more quickly and produced less error than the other value models and the traditional standard software, a “proportional integral derivative” controller.
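For context, the traditional baseline works very differently: a proportional integral derivative (PID) controller carries no learned model of the patient and simply reacts to the error between the target and the measured depth of unconsciousness. A minimal sketch follows; the gains and the toy "patient" in the test are illustrative assumptions, not values from the study.

```python
class PID:
    """Minimal proportional integral derivative controller, the traditional
    baseline the learned policy was compared against."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # clamp at zero: an infusion pump can add drug but cannot remove it
        return max(0.0, self.kp * error + self.ki * self.integral
                   + self.kd * derivative)
```

Driving a simple first-order plant toward a target depth, such a controller settles on a steady infusion; unlike the learned actor, it cannot anticipate the patient's response, only react to it.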
After training and testing the algorithm with simulations, the researchers put the “dose penalty” version to a more real-world test by feeding it patient consciousness data recorded from real cases in the operating room. The testing demonstrated both the strengths and limits of the algorithm. During most tests, the algorithm’s dosing choices closely matched those of the attending anesthesiologists from the time unconsciousness had been induced until it was no longer necessary. The algorithm, however, adjusted dosing as frequently as every five seconds, while the anesthesiologists (who all had plenty of other things to do) typically did so only every 20-30 minutes.
As the tests showed, the algorithm is not optimized for inducing unconsciousness in the first place, the researchers acknowledged. The software also does not know on its own when surgery is over, but it is a straightforward matter for the anesthesiologist to manage that process. One of the most important challenges any such AI system will continue to face is ensuring that the data it is fed about patient unconsciousness is accurate. Accordingly, the researchers are also working to improve the interpretation of data sources such as brain wave signals, in order to raise the quality of patient monitoring data under anesthesia.
“Anesthesiologists have to simultaneously monitor numerous aspects of a patient’s physiological state, and so it makes sense to automate those aspects of patient care that we understand well,” said Gabe Schamberg, a former MIT postdoc who is also the study’s corresponding author.
“Algorithms such as this one allow anesthesiologists to maintain more careful, near continuous vigilance over the patient during general anesthesia,” said senior author Emery N. Brown, a neuroscientist at The Picower Institute for Learning and Memory and Institute for Medical Engineering and Science at MIT and an anesthesiologist at MGH.
Related Links:
MIT
Massachusetts General Hospital