International Journal of Molecular Medical Science, 2025, Vol.15, No.2, 54-68
http://medscipublisher.com/index.php/ijmms

2016, totaling 4. In terms of study populations, 5 studies focused on general pediatric patients, 3 on critically ill pediatric patients, 3 on pediatric trauma patients, 1 on pediatric cancer patients, and 1 on pediatric surgical patients. Regarding the source of study populations, 3 were single-center studies and 10 were multi-center studies. In terms of predicted outcomes, 10 studies addressed VTE, 2 DVT, and 1 PE. Total sample sizes ranged from 104 to 536,423 cases.

Table 2 summarizes the predictive model information from the 13 included studies. Three studies reported missing data, one study was unclear about missing data, and three studies described methods for handling missing data. Logistic regression was used for model construction in all 13 studies. Eight studies converted continuous variables into categorical variables. The most commonly used predictor across the models was CVC, which was included as a final predictor in 8 studies; other common predictors were age and surgery, each used in 6 studies. The reported AUC or C-statistics ranged from 0.67 to 0.954, with some studies also reporting positive predictive value, negative predictive value, sensitivity, and specificity. Of the 13 studies, 7 performed calibration, with the Hosmer-Lemeshow test being the most commonly used method (employed 5 times).

Figure 2 Document screening flow chart

3.3 Model validation
All 13 included studies underwent internal or external validation: 12 studies underwent internal validation and 1 study underwent external validation.

3.4 Quality evaluation results
Table 3 summarizes the applicability and risk of bias of the included studies according to the PROBAST tool. In terms of applicability, all studies were rated as low risk in each domain and overall.
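To illustrate the modeling approach reported above, the sketch below fits a logistic regression on synthetic data using hypothetical versions of the three most common predictors (CVC, age, surgery), computes the AUC, and applies a standard Hosmer-Lemeshow decile-based calibration test. This is a minimal illustration on simulated data, not the model or data of any included study; all coefficients and variable ranges are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2

# Simulated cohort; predictor effects are illustrative assumptions only
rng = np.random.default_rng(0)
n = 1000
cvc = rng.integers(0, 2, n)          # central venous catheter (yes/no)
age = rng.uniform(0, 18, n)          # age in years
surgery = rng.integers(0, 2, n)      # surgery (yes/no)
logit = -4 + 1.5 * cvc + 0.08 * age + 1.0 * surgery
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated VTE outcome

# Model construction: logistic regression, as in all 13 included studies
X = np.column_stack([cvc, age, surgery])
model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

# Discrimination: area under the ROC curve (AUC / C-statistic)
auc = roc_auc_score(y, prob)

def hosmer_lemeshow(y_true, y_prob, g=10):
    """Hosmer-Lemeshow goodness-of-fit: chi-square over g risk-ordered groups."""
    order = np.argsort(y_prob)
    stat = 0.0
    for idx in np.array_split(order, g):
        observed = y_true[idx].sum()          # observed events in the group
        expected = y_prob[idx].sum()          # expected events in the group
        mean_p = y_prob[idx].mean()
        stat += (observed - expected) ** 2 / (len(idx) * mean_p * (1 - mean_p))
    return stat, chi2.sf(stat, g - 2)         # p-value on g-2 degrees of freedom

hl_stat, hl_p = hosmer_lemeshow(y, prob)
print(f"AUC = {auc:.3f}, HL statistic = {hl_stat:.2f}, p = {hl_p:.3f}")
```

A non-significant Hosmer-Lemeshow p-value (conventionally p > 0.05) indicates no detectable lack of calibration, which is why the test is commonly reported alongside the AUC in the included studies.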
In the participant domain, all included studies were rated as low risk of bias. In the predictor domain, all included studies were rated as unclear risk of bias, primarily because they did not specify whether predictors were assessed without knowledge of the outcome. In the outcome domain, all included studies were likewise rated as unclear risk of bias, mainly because it was not stated whether outcomes were determined without knowledge of the predictors. In the statistical analysis domain, 7 studies were rated as high risk: 5 did not specify how missing data were handled, and 3 did not explicitly state whether overfitting and underfitting were considered. During the quality evaluation, it was found that most studies did not report how missing data were handled, which increased the risk of bias. This finding suggests that model developers need to be more rigorous and detailed when designing and reporting their research processes.