Innovations in technology have been a cornerstone of the advancement of anesthesiology and its contributions to improving patient safety and outcomes. Burgeoning consumer technologies have, as expected, moved into the health care space as both the technology industry and health care delivery systems explore how new innovations can improve patient care, increase workflow efficiency, and reduce cost. Perioperative care generates a wealth of data matched by few other domains in medicine. Anesthesiologists are tasked with consuming a multitude of data quickly to avoid risk and keep patients safe. The development of tools to help clinicians collect and process information is accelerating and is central to the innovation moving through perioperative medicine, including telemedicine. Over the next decade, data tasks will increasingly be performed by computer-based systems, offering new, exciting pathways to improved care, provided the challenges that inevitably accompany such innovation can be overcome. Changes in how data are consumed and acted upon by anesthesiologists and others are under way; the benefits, challenges, and impact of several key examples are described here. In this article, we focus on technology innovations that are becoming part of everyday life and how they might shape anesthesiology practice in the future.

Cloud computing, wearable sensors, and the internet of things (IoT)

In place of local servers and expensive infrastructure, more health care systems are migrating their data to the cloud. Clouds are not just repositories for passive data storage; they also allow health care systems to develop applications that automate data migration through their information technology platforms, enabling decision-support systems in anesthesiology and perioperative practices. Infrastructure-, platform-, and software-as-a-service products provide a range of data services, from basic storage and access to more advanced features such as embedded “intelligent” models that help users make sense of the information. Advantages include cost savings, data transparency with improved access for providers and patients, and greater capacity to manage “big data” and uncover latent insights that are difficult to capture with current systems. The market estimate for cloud computing in health care was nearly $28 billion in 2020 and is expected to increase to $64 billion within the next four years, and more than 80% of health care organizations have adopted at least one cloud service. Software applications and IoT devices, including wearable sensors, are more easily connected to the cloud than to traditional health information systems, creating new possibilities for patient surveillance and risk assessment. The Apple Watch has a single-lead ECG function that may be used to detect atrial fibrillation, and Guardian Connect (Medtronic) predicts major blood glucose fluctuations before they occur using IBM Watson technology. Wearable mobile sensor technologies in uniquely customized form factors are in various phases of development and testing, including devices that non-invasively monitor blood pressure, cardiac output, ejection fraction, and other advanced cardiac measurements. It is conceivable that such devices will be in common use as the trend toward less invasive, longitudinal monitoring gains greater footing in perioperative care. The simultaneous emergence of wearable sensors and cloud computing opens new opportunities for remote surveillance monitoring and biometric tracking of a variety of physiological indices, both inside and outside the hospital setting. Device measurements can be automatically sent to the cloud to alert clinicians to potential patient risks, making early, cost-saving interventions possible. Data security concerns, a shortage of specialists with expertise in developing and managing a health care system’s cloud presence, and vendor lock-in, in which a customer is limited to a single vendor for practical, financial, or other reasons, remain challenges to broad adoption of cloud technology. Its potential impact rests in capturing valuable new health information and increasing interoperability between devices and datasets, while making data more broadly available to stakeholders and artificial intelligence (AI) applications.
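
To make such a surveillance pipeline concrete, the minimal Python sketch below shows a wearable reading being pushed to a cloud ingestion endpoint and screened by simple threshold rules. The endpoint URL, the VitalsReading fields, and the alert thresholds are all illustrative assumptions, not any vendor's actual API.

```python
"""Minimal sketch of a wearable-to-cloud surveillance pipeline.
All names are hypothetical: the endpoint URL, the VitalsReading fields,
and the alert thresholds are assumptions, not a real vendor API."""
import json
import time
from dataclasses import dataclass, asdict

import requests  # widely used HTTP client

CLOUD_ENDPOINT = "https://example-health-cloud.invalid/api/v1/vitals"  # hypothetical

@dataclass
class VitalsReading:
    patient_id: str
    timestamp: float
    heart_rate: int   # beats per minute
    systolic_bp: int  # mmHg, e.g., from a cuffless wearable sensor
    spo2: float       # peripheral oxygen saturation, 0-100%

def flag_risks(reading: VitalsReading) -> list[str]:
    """Simple threshold rules standing in for a cloud-hosted model."""
    alerts = []
    if reading.systolic_bp < 90:
        alerts.append("possible hypotension")
    if reading.spo2 < 92:
        alerts.append("possible hypoxemia")
    if reading.heart_rate > 120:
        alerts.append("tachycardia")
    return alerts

def push_to_cloud(reading: VitalsReading) -> None:
    """Send one reading to the (hypothetical) cloud ingestion endpoint."""
    payload = json.dumps(asdict(reading))
    try:
        requests.post(CLOUD_ENDPOINT, data=payload,
                      headers={"Content-Type": "application/json"}, timeout=5)
    except requests.RequestException:
        pass  # in practice: buffer locally and retry

if __name__ == "__main__":
    reading = VitalsReading("patient-001", time.time(), 128, 86, 90.5)
    push_to_cloud(reading)
    for alert in flag_risks(reading):
        print(f"ALERT for {reading.patient_id}: {alert}")
```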

Machine learning (ML) and predictive analytics

The FDA's approval of a machine learning (ML)-based diabetic retinopathy application opened the door for the use of ML in clinical medicine. The confluence of cheap, available computing power, sophisticated machine-learning algorithms, and data at scale has made it possible to develop powerful, highly accurate models predicting poor outcomes after surgery. Models for mortality, hospital length of stay, and intermediate outcomes, including acute kidney injury, deep venous thrombosis, re-intubation, and delirium, have all been developed. More recently, models predicting which patients require therapy and the effect of treatment have been reported, as have models predicting adverse events. Perhaps the best example is intraoperative hypotension, a common adverse event with a clear relationship to multiple organ system injuries. The Hypotension Prediction Index (HPI) was one of the first models created to target hypotension and the only one demonstrated to reduce hypotension exposure when incorporated into care delivery, compared to care without it. New-age prediction models offer unprecedented opportunities to avoid poor outcomes with early interventions, but data sharing and standardization, patient safety, accountability and transparency for predictions made, and AI literacy for providers and patients are all areas where further work is required to bring these advances into routine clinical care.
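
As a simplified illustration of how such outcome-prediction models are built, the Python sketch below trains a classifier on synthetic hemodynamic features to predict impending hypotension. The features, data, and model choice are assumptions for demonstration only; the commercial HPI is derived differently, from high-resolution arterial waveform data.

```python
"""Sketch: training a classifier to predict impending intraoperative
hypotension. Synthetic data and illustrative features only; this is
not the method behind the Hypotension Prediction Index."""
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: mean arterial pressure (mmHg), heart rate (bpm),
# and pulse pressure (mmHg) sampled minutes before a possible event.
n = 2000
X = np.column_stack([
    rng.normal(75, 12, n),   # MAP
    rng.normal(80, 15, n),   # HR
    rng.normal(45, 10, n),   # pulse pressure
])
# Label: hypotension (MAP < 65 mmHg) develops within the next 5 minutes.
# Risk is simulated as rising when MAP is already low.
p = 1 / (1 + np.exp(0.25 * (X[:, 0] - 68)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUROC on held-out data: {auroc:.2f}")
```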

As we expand our understanding of ML algorithms, models, and the databases used to build them, we will further refine such technologies for daily clinical decision-making. While ML is a way for computer systems to learn patterns, professional societies and individual practitioners are investigating whether ML can add value beyond their clinical decision-making skills. Recent work using the EEG as the target for driving remifentanil and propofol titration during anesthetic cases argues that ML-based approaches could supplement our clinical decision trees. However, there are areas of concern for the use of ML in anesthesiology and perioperative medicine: its potential impact on patient safety and reliability, the removal of clinician autonomy, and a negative impact on clinical decision heuristics.

1) Because the data used to build the models underlying ML applications drive those applications' outputs, the patient safety and reliability of ML can only be as good as the data behind it. While models are being refined with ever more data points, we cannot deny a future in which we routinely interact with these algorithms. The question then becomes who carries accountability: the anesthesiologist or the machine. Because of the potential need for rescue interventions, anesthetic delivery systems are unlikely to be fully autonomous and will require some clinician supervision. However, what accountability, and therefore liability, lies with the anesthesiologist versus the ML system still needs to be established.

2) Transferring most clinical decision autonomy to a computer system is a very real possibility in specialties such as oncology. In anesthesiology and perioperative medicine, however, we believe the frequency of acute interventions makes it unlikely that clinical autonomy will be transferred to ML systems.

3) Any negative impact of ML on clinical decision heuristics would need to be mitigated. Could we forget how to perform a safety check on our anesthesia machines? Unfortunately, case reports of errors linked to growing reliance on automated systems are appearing in the literature. This risk could be mitigated with high-fidelity simulation or by extending the time devoted to learning core clinical skills.

Future of ML in anesthesiology: Clinical skills such as central venous catheter placement are not easily replaced by computers or robots, but clinical decision-making will increasingly be supported by ML-driven applications working alongside us. ML models should incorporate multiple variables so that algorithms can be tailored to individual patients; for example, disparate ethnic groups or residents of different regions may have unique physiologies and environmental factors affecting their clinical presentations.

Automated systems

Incorporating the themes described above, but moving beyond systems designed to support human activity, are those that operate independently of clinicians, or nearly so, to deliver care autonomously. In 2018, the FDA approved the first autonomous AI diagnostic system, IDx-DR, which screens for more severe forms of diabetic retinopathy in undiagnosed adults 22 years of age and older using digital photographs of the patient’s retina. Similar, albeit less developed, systems maintain hemodynamic and hypnotic targets during surgery, primary tasks of modern anesthesia providers, using AI-guided administration of vasoactive, fluid, and anesthetic therapies. Early testing suggests these systems outperform humans acting alone, consistent with prior findings that humans and machines perform best when working together. Fully automated systems share the risks of their component technologies (see above) but carry additional ones, including marginalizing the physician-patient relationship and skill atrophy among clinicians. Nonetheless, transitioning low-risk, time-consuming tasks to machines would allow humans to focus on more complex activities, increasing efficiency and optimizing resources.
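
The supervision principle behind such systems can be illustrated with a deliberately simple sketch: a proportional adjustment of a vasopressor infusion toward a mean arterial pressure (MAP) target, with an explicit hand-off to the clinician outside a safe envelope. The gains, limits, and thresholds below are illustrative assumptions, not a validated or approved control strategy.

```python
"""Sketch of a clinician-supervised closed-loop vasopressor controller.
Purely illustrative: the gain, rate ceiling, and escalation thresholds
are assumptions, not a tested control strategy."""

TARGET_MAP = 75.0  # mmHg, clinician-set target
MAX_RATE = 0.5     # mcg/kg/min, hard safety ceiling
GAIN = 0.01        # proportional gain (illustrative)

def next_infusion_rate(current_rate: float, measured_map: float) -> float:
    """One proportional step toward the MAP target, clamped to safe bounds."""
    error = TARGET_MAP - measured_map
    rate = current_rate + GAIN * error
    return min(max(rate, 0.0), MAX_RATE)

def requires_clinician(measured_map: float) -> bool:
    """Escalate to a human for values outside the autonomous envelope."""
    return measured_map < 55 or measured_map > 110

# Example loop over a few simulated MAP readings
rate = 0.05
for measured_map in [72, 66, 58, 52]:
    if requires_clinician(measured_map):
        print(f"MAP {measured_map}: hand control to clinician")
        break
    rate = next_infusion_rate(rate, measured_map)
    print(f"MAP {measured_map}: infusion rate -> {rate:.3f} mcg/kg/min")
```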

Telemedicine in anesthesiology

Telemedicine has grown steadily over the past decade, but the COVID-19 pandemic dramatically accelerated its adoption. Even as pandemic-related restrictions recede, most outpatient practices are expected to continue virtual patient care at a significant level. Many anesthesiology preoperative clinics have similarly adapted and will expand this practice through a variety of technologies. Telemedicine pre-anesthesia evaluation can provide safe patient care while reducing patient inconvenience, time away from work, travel time, and the cost associated with an extra in-person presurgical evaluation visit. Initial studies in telemedicine asked patients to report to a telemedicine site equipped for both the history and the physical examination, but patients were unwilling to do this. There was also marked concern about the potential for an absent or inadequate physical examination, particularly the cardiopulmonary and airway exams. As technology has developed and access has increased, telemedicine has gradually moved into the patient’s home, where a mobile, tablet, or computer device with an internet connection is all that is necessary for an evaluation.

Many patients have indeed received successful telemedical preoperative evaluations, with histories being done virtually and electronic stethoscopes allowing cardiopulmonary examination. In general, studies that compare telemedicine visits to in-person visits report high concordance of physical examinations, reduced cancellations on the day of surgery, improved patient satisfaction, reduction in cost, and less travel time and time away from work.

Intraoperative applications of telemedicine exist, although more compelling evidence on their utility and at-scale implementation is needed for these innovations to significantly change practice. One example of a successful intraoperative tele-anesthesia setup includes a telemedicine unit that integrated all physiologic data (ECG, oxygen saturation, carbon dioxide, blood pressure, breath and heart sounds) for bidirectional voice and live videoconferencing over a low-bandwidth satellite link between Ecuador and the United States. In addition, a pilot study in patients undergoing surgery demonstrated the feasibility of EEG-controlled closed-loop administration of propofol over a distance of 200 km, using a teletherapeutic propofol infusion connected through the internet to a remote computer, with EEG monitoring used to adjust the propofol dosage.
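
The core logic of such an EEG-guided closed loop can be sketched as follows: a processed-EEG depth index arriving over the network drives stepwise adjustments of the propofol infusion to keep the index within a target band. The index band, step size, and rate limits are illustrative assumptions and do not reproduce the pilot study’s actual controller or transport layer.

```python
"""Sketch of EEG-guided, remotely supervised propofol dosing logic.
The depth-index band, step size, and rate limits are illustrative
assumptions, not the cited pilot study's actual controller."""

DEPTH_TARGET = (40, 60)    # processed-EEG index band for general anesthesia
STEP = 10.0                # ml/h adjustment per control cycle (illustrative)
RATE_LIMITS = (0.0, 200.0) # allowable infusion range, ml/h

def adjust_propofol(rate_ml_h: float, eeg_index: float) -> float:
    """One control cycle: deepen if too light, lighten if too deep."""
    low, high = DEPTH_TARGET
    if eeg_index > high:    # patient too light -> increase infusion
        rate_ml_h += STEP
    elif eeg_index < low:   # patient too deep -> decrease infusion
        rate_ml_h -= STEP
    return min(max(rate_ml_h, RATE_LIMITS[0]), RATE_LIMITS[1])

# Simulated stream of depth-index values arriving over a network link
rate = 80.0
for eeg_index in [68, 62, 55, 38, 45]:
    rate = adjust_propofol(rate, eeg_index)
    print(f"EEG index {eeg_index}: propofol rate -> {rate:.0f} ml/h")
```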

In summary, trends toward automation and algorithmic clinical and operational decision support are strongly evident in most health care practices, including anesthesiology. As troves of data are generated from electronic records, notes, images, sensors, and devices, it will become imperative for health care systems to employ AI-enabled technologies that help clinicians and researchers derive greater insight from these multidimensional data sets in order to improve monitoring, diagnostics, and therapies in patient care. While technology innovations are likely to have a significant impact on anesthesiology practices, challenges in adoption remain and will require focused attention. These include a perceived threat to, and competition with, anesthesiologists’ practice and expertise; data privacy concerns; the ethics of machine and algorithm performance in guiding clinical decision-making; the evolving regulatory landscape for digital diagnostics and therapeutics; and technical and financial challenges in IT interoperability and integration.