In December 2021, an infusion pump manufacturer sent a letter to its customers, many of which are anesthesiology departments, warning that its pumps posed a safety risk. After an upstream occlusion alarm, the user can clear the alarm state without clearing the upstream occlusion; the pump's interface then indicates that it is running normally when, in reality, it is not running at all (full occlusion) or is running at a lower rate than programmed (partial occlusion). The only way to detect that this is occurring is to look at the drip rate. The manufacturer reportedly received 51 reports of serious injury and three of patient death over five years, potentially associated with this issue (asamonitor.pub/3yqtrDb).

“Machines take me by surprise with great frequency” – Alan Turing

In the OR, we think every day about how we work as a team with our nursing colleagues, surgical colleagues, anesthesiology residents, nurse anesthetists, and anesthesiologist assistants. We know that we need to communicate clearly and share a mental model of how to advance the care of our patients and promote their safety in the perioperative period. However, the care of our patients does not depend only on how we interact with our colleagues. Incidents like the one reported in this case remind us that we also rely on smooth interactions with technology in the OR to deliver safe care.

Technology in the OR fulfills at least one of two roles: it can help us deliver care to our patients, and it can provide feedback about how that care is going. All the devices we work with daily – infusion pumps, vital sign monitors, bed controllers, ventilators, anesthesia machines, drug carts – are critical extensions of us that allow for safer patient care. To integrate technology into our practice, however, we need to understand how it works and maintain an up-to-date mental model of its status. The medication infusion pump in the safety report has a clinically relevant flaw: it can fail to provide accurate feedback about its status, leaving clinicians unaware that a pump programmed to run a critical medication, such as a pressor, is in fact not infusing.

Now imagine that you are providing care to a hypotensive patient, a routine occurrence in the OR. Your mental model says that the patient is vasodilated from the anesthetic gas, so you treat the patient with a phenylephrine infusion. Unfortunately, they remain hypotensive, leaving you perplexed and considering further interventions. Suddenly you notice that the phenylephrine infusion pump was never started and is not infusing the medication. You activate the pump, and the patient's blood pressure improves. Now take the same scenario, except that the pump appears to be infusing the phenylephrine when it is not. In this case, you must update your mental model to consider another cause of hypotension, and you may take the patient's care in a less appropriate direction simply because the infusion pump incorrectly gave you feedback that it was operating properly. We rely on feedback from devices to make appropriate clinical decisions in the OR and to update our mental model, or understanding, of the clinical situation.

Another contributing factor is the continuous cacophony of alarms we hear in the OR during the course of an anesthetic. While we often quickly silence alarms and ignore the best practice advisories in our electronic health record, we do so at our own risk: sometimes these alarms represent real problems. We need to work as a system to reduce the number of unnecessary alarms so that clinicians can focus on the alarms that affect patient care. The issue of alarm fatigue was recently discussed in the November 2021 issue of this publication (ASA Monitor 2021;85:23).

There are other, subtler ways that feedback from technology shapes how we manage patient care in the OR, and it is important for device manufacturers to consider real-world use of their devices and take steps to avoid confusion (Anesthesiology 2020;133:653-65). For example, if a device has a data input field labeled “patient weight,” a clinician might reasonably believe that “weight” means exactly what it says: enter the patient's actual weight. However, buried 100 pages into the 180-page user's manual, we find that “weight” actually means the patient's ideal weight, not their actual weight. Additional real-world testing or anesthesiologist feedback might have produced a label less likely to cause confusion. When the clinician misunderstands the intended definition of the label, there can be real clinical consequences. If the device is an anesthesia machine, the clinician may deliver larger tidal volumes than intended – to use hypothetical numbers, a ventilator set to a lung-protective 6 mL/kg would deliver 600 mL based on a 100-kg actual weight versus 420 mL based on a 70-kg ideal weight. If it is a medication delivery system, the result could be an overdose. Unclear labels and definitions hamper our ability to understand what the machine is doing and how we should interact with it to carry out our intended anesthetic plan.

When these types of incidents occur, they are not cognitive errors but rather opportunities for improved device design, and our approach to preventing them should reflect that. Best practice is for medical devices to have safety steps built in that support clinical decision-making rather than relying on the clinician at the bedside to troubleshoot programming defaults. When that doesn't occur, there are steps we can take to prevent or mitigate patient harm from device design errors. On the front line, we can promote point-of-care education to help clinicians understand in detail how to use equipment, especially infrequently used or unfamiliar equipment. Education alone, although important, will not solve the problem, however. In providing clinical care, we should always seek multiple sources of feedback. For example, when the blood pressure cuff isn't reading, we check the end-tidal CO2 measurement, the patient's pulse, and the pulse oximeter waveform. We need to adopt that same mindset with every piece of data we evaluate. If the patient isn't responding to an infusion as we think they should, we can confirm that it is the correct medication, that the pump is programmed correctly, and that the drip rate seems appropriate for the intended dose. Finally, we can't afford to be overly reliant on technology. Technology can make our care safer, but even the best-designed device can malfunction at the worst possible time. Technology failure should always be on our differential diagnosis when care is not proceeding as anticipated.

There are also actions we can take as a specialty to decrease the risk of this problem recurring. When departments make purchasing decisions, it is important to consider the usability and design of the equipment, with robust clinical input, in addition to price. Purchasing power is a strong lever we can pull to promote thorough testing and human-centered design.

Finally, even with well-designed equipment, errors can still arise. The medication infusion pump issue discussed here, like all the issues discussed in previous months' articles, came to light through the submission of safety reports. While the role of safety reporting systems in identifying patient safety risks is undeniable, we are concerned that the recent homicide conviction of a nurse in Tennessee may discourage anesthesia clinicians from reporting errors.