Title: Interpretable Machine Learning for Early Detection of Critical Patients in ED

Authors: Waqar Aziz Sulaiman, Andreas Panayides, Eirini Schiza, Efthyvoulos Kyriacou, Antonis Kakas and Constantinos Pattichis

Conference: IEEE CBMS 2025

Tags: death prediction, emergency department, explainable AI, ICU, machine learning, XAI

Abstract: Emergency departments (EDs) face increasing demands that require efficient methods to quickly identify patients at risk of critical outcomes (i.e., inpatient death or admission to the ICU within 12 hours). This study aimed to develop interpretable rule-based models for predicting critical patient outcomes using machine learning. We used data from the MIMIC-IV-ED database, applying Gradient Boosting (GB) and Logistic Regression (LR) models to identify at-risk patients based on 13 readily available initial triage variables. Gradient Boosting achieved better performance on the test set (Accuracy: 78.21%, AUROC: 0.887, AUPRC: 0.445) than Logistic Regression (Accuracy: 77.27%, AUROC: 0.863, AUPRC: 0.370). Using the Te2Rules method, we extracted 43 clinically interpretable rules from the GB model, achieving high overall fidelity (98.90%). Furthermore, categorizing the extracted rules by Rule Coverage Index (RCI) into "high," "medium," and "low" improved their clinical applicability. Our method aims to strike a practical balance between predictive accuracy and interpretability, potentially assisting clinicians in promptly identifying critically ill patients during the early stages of assessment.
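The modeling setup described in the abstract (GB vs. LR on 13 triage variables, compared by AUROC and AUPRC on a held-out test set) can be sketched as follows. This is an illustrative sketch only: synthetic, class-imbalanced data stands in for the MIMIC-IV-ED triage variables, and the scikit-learn defaults used here are assumptions, not the authors' actual pipeline or hyperparameters. Rule extraction with Te2Rules (a separate `te2rules` package) would follow the same fit step and is omitted here.

```python
# Sketch of the GB-vs-LR comparison from the abstract, on synthetic data.
# Assumptions: scikit-learn defaults; 13 features mimicking the triage
# variable count; ~10% positive class to reflect outcome imbalance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cohort: 13 features, imbalanced outcome.
X, y = make_classification(n_samples=5000, n_features=13,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Evaluate both models on the held-out split with the paper's two
# ranking metrics: AUROC and AUPRC (average precision).
gb_auroc = roc_auc_score(y_te, gb.predict_proba(X_te)[:, 1])
gb_auprc = average_precision_score(y_te, gb.predict_proba(X_te)[:, 1])
lr_auroc = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
lr_auprc = average_precision_score(y_te, lr.predict_proba(X_te)[:, 1])
print(f"GB: AUROC={gb_auroc:.3f}, AUPRC={gb_auprc:.3f}")
print(f"LR: AUROC={lr_auroc:.3f}, AUPRC={lr_auprc:.3f}")
```

On real triage data, the fitted GB ensemble would then be passed to Te2Rules' explainer to extract the if-then rule list whose fidelity and coverage the paper reports.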