
Table 2 Overview of the included prediction models

From: Risk prediction models for falls in hospitalized older patients: a systematic review and meta-analysis

| Author (year) | Continuous variable processing method | Missing data handling | Modelling methods | Validation method | Model performance: discrimination | Model performance: calibration | Model performance: others | Model presentation | Final predictors | Interpretability |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dormosh [16] 2023 | Continuous variable | Multiple imputation | LR + Lasso | 10-fold cross-validation | AUROC: 0.695 (0.667–0.724) | Calibration curve | - | Risk score formula | Fall history (0.34), Cardiac arrhythmias (0.35), Renal failure (0.33), Antipsychotics (0.48), Admission to neurologic department (0.53), Admission to emergency department (0.56), Heart rate (0.01), Katz ADL score (0.06), DOS score (0.04), Missing potassium (-0.38), Missing calcium (-0.25), Missing PaCO2 (-0.32), Missing DOS score (-0.41) | - |
| Adeli [17] 2023 | Continuous variable | Imputation | ANN | Leave-one-out cross-validation | AUROC: 0.762 | - | Acc: 0.731; Spe: 0.732; Average precision: 0.499; Precision: 0.441; Recall: 0.728; F1: 0.549 | Prediction probability | - | NR |
| Zhao [18] 2020 | Continuous variable | NR | LR | Random split | AUROC A: 0.874 (0.784–0.964); B: 0.847 (0.771–0.924) | Calibration curve | Sen: 0.692; Spe: 0.855 | Nomogram | History of fractures (1.67), Orthostatic hypotension (1.72), Functional status (1.07), Sedative-hypnotics (2.11), Level of serum albumin (2.53) | - |
| Wijesinghe [19] 2020 | NR | NR | LR; SVM; RF | 10-fold cross-validation | NR | NR | Precision: 0.756; Recall: 0.937; F1: 0.836 | Prediction probability | NR | NR |
| Kawazoe [20] 2022 | Categorical variables | Multiple imputation | BERT + Bi-LSTM | Temporal validation | AUROC: 0.851 | NR | Sen: 0.737; Spe: 0.839; Precision: 0.093; F1: 0.165 | Prediction probability | - | NR |
| Chu [21] 2022 | Categorical variables | NR | DNN; XGBoost; LightGBM; RF; SGD; LR | Random split | AUROC: 0.694 | NR | Acc: 0.730; Sen: 0.694; Spe: 0.694; Precision: 0.694; Recall: 0.694; F1: 0.730 | Prediction probability | - | PFI |
| Alharbi [22] 2022 | Categorical variables | NR | CatBoost | Independent dataset validation | NR | NR | Dataset SERV: Acc: 0.942, Sen: 0.916, Spe: 0.968, PPV: 0.968, NPV: 0.918, F1: 0.941; Dataset SV: Acc: 0.989, Sen: 0.988, Spe: 0.990, PPV: 0.992, NPV: 0.985, F1: 0.990 | Software-based platform | - | NR |
| Peel [23] 2021 | Categorical variables | NR | LR | Independent dataset validation | AUROC: 0.700 (0.630–0.760) | NR | Sen: 0.72; Spe: 0.60 | Scoring table | Sex (0.63), BMI (0.53), Fall in last 90 days (0.51), Balance problem (0.69), Psychological problems (0.86), Age (-0.03) | - |
| Vratsistas-Curto [24] 2018 | Continuous variable | NR | LR | Bootstrap | AUROC: 0.730 (0.660–0.810) | NR | - | Scoring table | Mobility/transfers, Mental status/cognition, Male sex | - |
| Beauchet [25] 2018 | Categorical variables | Exclude | ANN | Random split | NR | NR | Acc: 0.838; Sen: 0.296; Spe: 0.943; PPV: 0.500; NPV: 0.874; F1: 0.372 | Prediction probability | - | NR |
| GholamHosseini [26] 2014 | NR | NR | NR | Random split | NR | NR | Acc: 0.740; Sen: 0.850; PPV: 0.850; F1: 0.850 | Scoring table | Real-time vital signs, Motion data, Medications, Fall history, Muscle strength | NR |
| Neumann [27] 2013 | Continuous variable | - | LR; DT; Add-up model | Temporal validation | NR | NR | Sen: 0.460; Spe: 0.711; PPV: 0.149; NPV: 0.923 | Scoring table | Mental alteration, Fall history, Insecure mobility | - |
| Marschollek [28] 2012 | Continuous variable | Mean imputation | LR; DT | 10-fold cross-validation | AUROC: 0.63 | NR | Acc: 0.660; Sen: 0.554; Spe: 0.671; PPV: 0.150; NPV: 0.935; F1: 0.237 | Prediction probability | High age, Low Barthel index, Cognitive impairment, Multi-medication, Co-morbidity | PFI |

NR: Not reported or not applicable; A: Training data; B: Test data; Multiple imputation: imputation of missing values was performed multiple times by constructing estimation models; AUROC: Area under the receiver operating characteristic curve; Nomogram: a graphical tool that sums the scores of the influencing factors to obtain the final prediction probability; LR: Logistic regression; DT: Decision tree; SVM: Support vector machine; RF: Random forest; DNN: Deep neural network; XGBoost: eXtreme Gradient Boosting; LightGBM: Light Gradient Boosting Machine; SGD: Stochastic gradient descent; CatBoost: Categorical boosting; ANN: Artificial neural network; Acc: Accuracy; Sen: Sensitivity; Spe: Specificity; PPV: Positive predictive value; NPV: Negative predictive value; F1: F1 score (F1 = (2 × Precision × Recall) / (Precision + Recall)); BERT: Bidirectional encoder representations from transformers; Bi-LSTM: Bidirectional long short-term memory; PFI: Permutation feature importance
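The performance metrics reported in the table follow the standard confusion-matrix definitions, with F1 computed from the formula given in the footnote. As an illustrative aid only (the counts below are hypothetical and are not taken from any of the included studies), a minimal sketch of the calculation:

```python
# Illustrative only: confusion-matrix metrics as abbreviated in the table footnote.
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)                              # PPV
    recall = tp / (tp + fn)                                 # Sen (sensitivity)
    specificity = tn / (tn + fp)                            # Spe
    npv = tn / (tn + fn)                                    # NPV
    accuracy = (tp + tn) / (tp + fp + tn + fn)              # Acc
    f1 = (2 * precision * recall) / (precision + recall)    # footnote formula
    return {"Acc": accuracy, "Sen": recall, "Spe": specificity,
            "PPV": precision, "NPV": npv, "F1": f1}

# Hypothetical counts, chosen only to demonstrate the arithmetic
print(classification_metrics(tp=74, fp=19, tn=158, fn=28))
```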