Accurate estimation of surgical transfusion risk is essential for efficient allocation of blood bank resources and for other aspects of anesthetic planning. This study hypothesized that a machine learning model incorporating both surgery- and patient-specific variables would outperform the traditional approach that uses only procedure-specific information, allowing for more efficient allocation of preoperative type and screen orders.
The American College of Surgeons National Surgical Quality Improvement Program Participant Use File was used to train four machine learning models to predict the likelihood of red cell transfusion using surgery-specific and patient-specific variables. A baseline model using only procedure-specific information was created for comparison. The models were trained on surgical encounters that occurred at 722 hospitals in 2016 through 2018. The models were internally validated on surgical cases that occurred at 719 hospitals in 2019. Generalizability of the best-performing model was assessed by external validation on surgical cases occurring at a single institution in 2020.
Transfusion prevalence was 2.4% (73,313 of 3,049,617), 2.2% (23,205 of 1,076,441), and 6.7% (1,104 of 16,053) across the training, internal validation, and external validation cohorts, respectively. The gradient boosting machine outperformed the baseline model and was the best-performing model. At a fixed 96% sensitivity, this model had positive predictive values of 0.06 and 0.21 and recommended type and screens for 36% and 30% of the patients in internal and external validation, respectively. By comparison, the baseline model at the same sensitivity had positive predictive values of 0.04 and 0.144 and recommended type and screens for 57% and 45% of the patients in internal and external validation, respectively. The most important predictor variables were overall procedure-specific transfusion rate and preoperative hematocrit.
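The fixed-sensitivity operating point used above can be illustrated in code: pick the probability threshold that captures at least 96% of transfused patients, then report the positive predictive value and the fraction of patients who would be flagged for a type and screen. This is a minimal sketch with synthetic data; the function names and example values are hypothetical and not taken from the study.

```python
import math


def threshold_at_sensitivity(y_true, y_score, target_sens=0.96):
    """Return the highest score threshold whose sensitivity >= target_sens.

    y_true: 1 if the patient was transfused, else 0.
    y_score: model-predicted transfusion probability for each patient.
    """
    pos_scores = sorted(
        (s for s, y in zip(y_score, y_true) if y == 1), reverse=True
    )
    # Number of true positives needed to reach the target sensitivity.
    k = math.ceil(target_sens * len(pos_scores))
    return pos_scores[k - 1]


def ppv_and_flag_rate(y_true, y_score, thr):
    """PPV and fraction of patients flagged (score >= thr)."""
    flagged = [(y, s >= thr) for y, s in zip(y_true, y_score)]
    n_flagged = sum(1 for _, f in flagged if f)
    tp = sum(1 for y, f in flagged if f and y == 1)
    return tp / n_flagged, n_flagged / len(y_true)


if __name__ == "__main__":
    # Synthetic example: three transfused and three non-transfused patients.
    y_true = [1, 1, 1, 0, 0, 0]
    y_score = [0.9, 0.8, 0.7, 0.6, 0.2, 0.1]
    thr = threshold_at_sensitivity(y_true, y_score)
    ppv, flag_rate = ppv_and_flag_rate(y_true, y_score, thr)
    print(thr, ppv, flag_rate)
```

In the study's framing, a lower flag rate at the same sensitivity means fewer preoperative type and screen orders while still capturing the same share of patients who go on to be transfused.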
A personalized transfusion risk prediction model was created using both surgery- and patient-specific variables to guide preoperative type and screen orders and showed better performance compared to the traditional procedure-centric approach.
- Accurate surgical transfusion risk assessment helps strike the correct balance between blood bank resource utilization and blood product availability
- The most widely used risk assessment methods focus on historical procedure-specific transfusion rate and do not incorporate patient-specific factors
- A machine learning–based prediction algorithm to identify patients at risk of red cell transfusion was derived and validated using more than 4 million national surgical registry records
- The algorithm demonstrated that the inclusion of patient factors decreased the number of recommended type and screen orders from 57% to 36% of cases while maintaining 96% sensitivity
- When validated using data from a single center, the algorithm reduced the number of recommended type and screen orders from 46% to 31%
- The most important variables for model prediction included procedure-specific transfusion rate, preoperative hematocrit, age, and laboratory indicators of coagulopathy