Imagine a future day at work powered by artificial intelligence (AI) technologies. Your patient that day has been risk-stratified preoperatively, and all conditions have been optimized according to the most current guidelines. As you talk with the patient, the preoperative note is updated automatically, aided by AI voice-recognition technology. Vital signs are monitored remotely. If there is a problem placing the I.V., AI-enhanced ultrasound makes it a breeze. On induction, you select which medications to use, and an intelligent infusion pump determines a personalized dose for the patient and titrates it automatically. During the operative course, EHR-integrated algorithms predict blood loss, duration of surgery, and risk for complications, and suggest the best course of action for mitigation. In the PACU, care is optimized by AI models tailored to the specific type of surgery to minimize recovery time and optimize pain control. Additional algorithms can further aid decision-making for postoperative care and discharge.
“Understanding the limitations of AI models is essential to their development, implementation, and use and will allow us to innovate while maintaining high patient safety standards. These technologies are ours for the taking.”
The technologies to achieve this future are becoming attainable with the latest wave of innovative AI tools for perioperative care. It is up to us, the anesthesiologists, to determine when and how these technologies are integrated into our practice. The transition from research to innovative practice has been discussed for many decades. According to the Belmont Report (The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. 1979), which underlies most current federal and ethical regulations for research, “practice is designed solely to enhance the well-being of an individual patient and research is designed to test a hypothesis and contribute to generalizable universal knowledge.” Achieving last-mile translation from research into clinical practice is fraught with challenges, but some recent research studies have the potential to lead to meaningful intraoperative insights.
There are already AI-enabled technologies that promise to augment current clinical anesthesia practice. For example, AI-enabled ultrasound can help even untrained anesthesiologists perform fast and accurate intraoperative echocardiograms (Int J Cardiovasc Imaging 2021;37:577-86; Circulation 2018;138:1623-35). Furthermore, rich pre- and intraoperative data can be leveraged by algorithms to predict multiple postoperative outcomes. One such model used routine intraoperative vital signs to predict the risk for postoperative heart failure (BMC Med Inform Decis Mak 2019;19:260). Another approach aimed to predict the risk of in-hospital mortality for patients undergoing surgery under general anesthesia and yielded an accurate and interpretable model that can provide actionable insights (NPJ Digit Med 2021;4:8).
Despite the hype, however, all models have limitations, and understanding them is essential for the successful implementation of AI. The decision to follow a model’s recommendation remains in the hands of the anesthesiologist. This necessitates understanding the procedure through which the model was developed, its application, performance metrics, and limitations. One such shortcoming is propagating pre-existing bias in the patient cohort. AI tools are only as accurate as the data on which they are trained, so training on patient cohorts in which some groups received inequitable care may perpetuate and even amplify bias. For example, an algorithm trained to predict complex medical care needs based on past health care spending assigned higher risk scores to White patients, who historically incurred more health care spending than Black patients with equal needs. This resulted in more specialty referrals for White patients, perpetuating bias in both spending and access to care (Science 2019;366:447-53). These risks can be mitigated by several approaches, for example, by using diverse patient cohorts and ensuring equitable treatment of patients. Another way to detect and mitigate bias is to use interpretable models. This rapidly growing field of machine learning includes models designed to help us understand how predictions are made, in contrast to fully “black box” models, in which the process remains entirely opaque. The pursuit of interpretable approaches is becoming more common in health care AI because we need to understand model predictions and failures while the models are in use. For example, in prognostication algorithms, these approaches can highlight the predictors and characterize the contribution made by each. In the model designed to predict in-hospital mortality after surgery (NPJ Digit Med 2021;4:8), understanding which factors contribute to the predictions allows the anesthesiologist to focus on the most significant and modifiable ones.
Clinical practice stands to benefit when models are not only highly predictive but also when model decision-making processes are transparent.
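As a rough illustration of what such transparency can look like, the sketch below breaks a single prediction from an additive (logistic) model into per-feature contributions to the log-odds. This is not the published model: the feature names, coefficients, and patient values are invented purely for demonstration.

```python
import numpy as np

# Hypothetical, illustrative coefficients for a logistic model predicting
# in-hospital mortality risk. These weights and feature names are invented
# for demonstration and do NOT come from any published model.
features = ["age", "asa_status", "min_intraop_map", "estimated_blood_loss"]
coefficients = np.array([0.04, 0.80, -0.05, 0.001])
intercept = -6.0

def explain_prediction(x):
    """Return per-feature contributions to the log-odds plus the overall risk."""
    contributions = coefficients * x          # additive terms in the log-odds
    log_odds = intercept + contributions.sum()
    risk = 1.0 / (1.0 + np.exp(-log_odds))    # logistic link function
    return dict(zip(features, contributions)), risk

# One hypothetical patient: 72 years old, ASA 3, lowest MAP 55 mmHg, EBL 400 mL
contribs, risk = explain_prediction(np.array([72, 3, 55, 400]))
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>22}: {c:+.2f} to log-odds")
print(f"predicted risk: {risk:.1%}")
```

Ranking the contributions this way lets the clinician see, for an individual patient, which factors drive the predicted risk and which of those factors are modifiable.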
The creation and adoption of relevant models for anesthesia practice depend on the participation of our specialty. While model development happens at the crossroads of applied mathematics, computer science, and AI theory, even the highest-performing model will fail to translate to our practice if it does not solve a relevant problem in a clinically meaningful way. We, the anesthesiologists, provide invaluable domain expertise and can contribute to numerous phases of model development. Not only can we aid project design and deployment, but we stand to improve model development itself. We can define relevant variables or known causal relationships that make a model more adherent to well-understood physiological behavior. For example, a model intended to predict and treat intraoperative bradycardia will benefit from anesthesiologist input about the pharmacology of the intraoperative drugs that can affect heart rate. Similarly, if an intermediate model has difficulty predicting extreme bradycardia, the anesthesiologist can recommend increasing the number of laparoscopic surgeries in the dataset, which mitigates the issue by increasing the representation of such severe cases in the cohort. Without this insight, a data scientist may instead choose to train models on vastly more data, leading to increased computational requirements, greater model complexity, and even risk of unsatisfactory performance due to artifacts.
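In machine-learning terms, the dataset adjustment described above addresses a class-imbalance problem: rare but clinically important events (here, severe bradycardia) are underrepresented in training data. One generic technique is to oversample the rare cases; the sketch below uses synthetic labels and invented numbers purely to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: 1 = severe bradycardia case (rare), 0 = all other cases.
# These labels are synthetic; in practice they would come from the EHR.
labels = np.array([1] * 20 + [0] * 980)

def oversample_minority(y, target_fraction=0.2, rng=rng):
    """Resample minority-class indices until they reach target_fraction of the cohort."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    # Minority count m needed so that m / (m + len(majority)) == target_fraction
    needed = int(target_fraction * len(majority) / (1 - target_fraction))
    resampled = rng.choice(minority, size=needed, replace=True)
    return np.concatenate([majority, resampled])

idx = oversample_minority(labels)
print(f"severe-bradycardia fraction before: {labels.mean():.1%}")
print(f"severe-bradycardia fraction after:  {labels[idx].mean():.1%}")
```

Enriching the cohort with real additional cases (such as more laparoscopic surgeries) is preferable when those data exist; resampling is the fallback when they do not.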
Moreover, anesthesiologists are well positioned to lead the subsequent adoption and seamless integration of AI algorithms into clinical practice, as we are the ultimate end users of such technologies. These approaches require integration with electronic medical records, monitors, anesthesia machines, and infusion pumps. In the bradycardia example, a prediction of high bradycardia probability could direct the infusion pump to temporarily pause the administration of remifentanil. Thus, the development and implementation of innovative AI models should be performed by anesthesiologist-led multidisciplinary teams, which benefit from the expertise of physicians, data scientists, and biomedical engineers alike. We have historically embraced innovation, and, as a result, the safety of anesthesia has increased tremendously. The recent advancements in AI are poised to augment our clinical practice but will necessitate our education and engagement as a specialty.
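As a toy illustration of such pump integration, the decision rule could reduce to a threshold check. The function name, threshold, and action strings below are hypothetical; a real closed-loop device would additionally require alarms, clinician override, and regulatory clearance.

```python
# Hypothetical safeguard: recommend pausing a remifentanil infusion when the
# model's predicted bradycardia probability crosses a threshold.
PAUSE_THRESHOLD = 0.8  # illustrative cutoff, not a validated value

def pump_action(predicted_bradycardia_prob, pump_running=True):
    """Return the recommended pump state given the model's risk estimate."""
    if pump_running and predicted_bradycardia_prob >= PAUSE_THRESHOLD:
        return "pause"     # hold the infusion and alert the anesthesiologist
    return "continue"

print(pump_action(0.92))  # pause
print(pump_action(0.15))  # continue
```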
AI technologies are here to stay – they are already widely used in other industries. As more algorithms become commercially available, bringing with them promises of transforming patient care and optimizing intraoperative workflows, we need to stay vigilant. Understanding the limitations of AI models is essential to their development, implementation, and use and will allow us to innovate while maintaining high patient safety standards. These technologies are ours for the taking. They can augment our practice if we guide and adopt them to improve patient care. An AI-augmented future is on the horizon – the power to shape it is ours.