Original Article
ARTICLE IN PRESS
doi: 10.25259/JNRP_131_2025

Feasibility comparison of deep learning image regressions to estimate intracranial pressure from cranial computed tomography in hydrocephalus

Division of Neurosurgery, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Hat Yai, Thailand
Department of Electrical Engineering, Faculty of Engineering, Prince of Songkla University, Hat Yai, Thailand.

*Corresponding author: Thara Tunthanathip, Division of Neurosurgery, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Hat Yai, Thailand. thara7640@gmail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Tunthanathip T, Duangsoithong R, SaeHeng S. Feasibility comparison of deep learning image regressions to estimate intracranial pressure from cranial computed tomography in hydrocephalus. J Neurosci Rural Pract. doi: 10.25259/JNRP_131_2025

Abstract

Objectives:

Hydrocephalus (HCP) is a neurosurgical condition that causes functional disability. The purpose of the present study was to compare the predictive performances of various deep learning (DL) architectures in predicting intracranial pressure (ICP) values from cranial computed tomography (CT) scans.

Materials and Methods:

We retrospectively analyzed 242 HCP patients who underwent external ventricular drainage with intraoperative ICP measurement. Four clinically relevant axial CT slices were extracted per patient and used to train four DL architectures: convolutional neural network (CNN), dense convolutional network, LeNet, and residual neural network. Performance was evaluated using mean absolute error (MAE), mean squared error (MSE), root MSE (RMSE), and coefficient of determination (R2).

Results:

The best-performing model (CNN) achieved MAE 4.96 cmH2O and RMSE 6.22 cmH2O, but explained only a small proportion of variance (R2 = 0.18). Model performance was modest across all architectures, with limited explanatory power. A contributing factor was the dataset imbalance, as a large proportion of patients presented with moderate to high ICP values, increasing variability and constraining regression learning across the full ICP spectrum.

Conclusion:

DL regression applied to cranial CT demonstrated limited explanatory performance for estimating ICP and is not sufficient as a substitute for invasive monitoring. However, these findings provide preliminary feasibility evidence and a challenge for improvement. Future research using larger, balanced, multi-center datasets and incorporating multimodal data will be necessary to enhance predictive accuracy and clinical applicability.

Keywords

Deep learning
Hydrocephalus
Intracranial hypertension
Intracranial pressure
Regression analysis

INTRODUCTION

One of the neurosurgical disorders that cause functional disability globally is hydrocephalus (HCP), especially in low- and middle-income nations.[1,2] It results from disturbances in cerebrospinal fluid (CSF) dynamics, leading to ventricular dilatation and elevated intracranial pressure (ICP), both of which are related to unfavorable outcomes.[3,4] Therefore, external ventricular drainage (EVD) and other CSF diversion procedures remain standard interventions to control ICP.

Elevated ICP is a key prognostic factor across neurosurgical conditions. In traumatic brain injury, ICP >20 mmHg has been associated with mortality and reduced cerebral perfusion pressure,[5] while levels above 25–30 mmHg predict unfavorable outcomes in subarachnoid and intraventricular hemorrhage.[6,7] Accurate prediction of ICP is therefore critical for timely intervention and prognostication.

Machine learning (ML) and deep learning (DL) approaches have increasingly been applied for ICP prediction. Prior studies have reported strong performance of random forests and logistic regression in forecasting ICP crises,[8,9] and ML regression models using computed tomography (CT)-derived features such as ventricular size and optic nerve sheath diameter achieved favorable accuracy with XGBoost.[10]

At present, computer vision (CV) models have been studied in neurosurgical fields such as image classification, tumor classification, neuroanatomic segmentation, and surgical posture analysis.[11-13] Danilov et al. employed a CV model to assess the accuracy of automatically recognizing a neurosurgeon’s optimal posture and hand location.[11] In addition, DL-based image models have been used to classify types of pineal tumors on magnetic resonance imaging.[12] For ICP prediction, however, our literature review found little evidence on the ability to predict ICP values directly from medical images. Therefore, the present study aims to compare the predictive performance of several DL architectures for ICP estimation in patients with HCP.

MATERIALS AND METHODS

The present study’s procedure is demonstrated in Figure 1. Patients with HCP who underwent EVD between January 2019 and December 2024 were identified from the hospital database. Patients were excluded if they had no preoperative CT scan, had undergone craniectomy, were infants with open fontanelles, or had no intraoperative ICP value in the operative record. Clinical characteristics and cranial CT scans were obtained from the remaining individuals, and intraoperative ICP values were collected from electronic medical records.

Figure 1: Workflow of the present study. MAE: Mean absolute error, MSE: Mean squared error, RMSE: Root mean squared error, R2: Coefficient of determination.

Cranial CT scans for investigation

Images of cranial CT scans were retrieved from the institutional picture archiving and communication system and de-identified. Axial levels were selected by protocol and verified by a neurosurgeon with 10 years’ experience. We selected four clinically meaningful axial levels that capture ventricular size and cisternal patency relevant to ICP:[10] the lateral ventricle’s greatest span [Figure 2a], the level of the foramen of Monro [Figure 2b], the level of the basal cisterns [Figure 2c], and the level of the fourth ventricle [Figure 2d]. All images were subsequently randomly split at a 7:2:1 ratio into training, validation, and test datasets.

Figure 2: Cranial computed tomography scans with an axial view for investigation. (a) The lateral ventricle’s greatest distance level (white arrow), (b) foramen of Monro level (white arrow), (c) basal cistern level (white arrow), and (d) fourth ventricle level (white arrow).
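The 7:2:1 split described above can be sketched as follows (illustrative; the random seed and exact shuffling procedure are assumptions, as the paper does not specify them):

```python
import random

def split_dataset(image_ids, seed=42):
    """Shuffle image IDs and split them into train/validation/test at 7:2:1."""
    ids = list(image_ids)
    rng = random.Random(seed)      # fixed seed for a reproducible split
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]   # remaining ~10%
    return train, val, test

# 242 patients x 4 axial slices = 968 images
train, val, test = split_dataset(range(968))
```

Note that splitting at the image level can place slices from the same patient in both the training and test sets; a patient-level split would avoid this form of leakage.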

Data and image preprocessing

The dataset comprised annotated medical images and their corresponding ICP values. Metadata contained patient identifiers, image IDs, and ICP values. All CT scans were acquired in grayscale (Hounsfield units). For models requiring 3-channel input (e.g., residual neural network [ResNet], dense convolutional network [DenseNet], and Visual Geometry Group-16 [VGG16]), the single grayscale channel was replicated across three channels, resulting in identical values across R, G, and B. This approach preserved grayscale intensity information without introducing artificial color mapping. We did not use pseudo-coloring or multi-window fusion in this study. All images were initially resized to 128 × 128 for dataset standardization. For DenseNet121 and ResNet50, images were subsequently up-scaled to 224 × 224 during model-specific preprocessing to meet backbone input requirements. Normalization was performed using a mean of 0.5 and a standard deviation of 0.5 to standardize pixel values. Image augmentation was applied to the training dataset to improve the model’s robustness to variations in image quality and orientation. The augmentations included random horizontal flipping (50% probability), rotation (±15°), color jitter (brightness, contrast, saturation, and hue adjustments), and random affine transformations (translation up to 10% of image dimensions). These transformations were implemented using the PyTorch library.
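The channel-replication and normalization steps can be sketched in NumPy (a minimal illustration of the described preprocessing; resizing and augmentation are omitted, and pixel values are assumed already scaled to [0, 1]):

```python
import numpy as np

def preprocess_slice(gray, mean=0.5, std=0.5):
    """Replicate a single grayscale channel to 3 channels and normalize.

    Stacking the same 2-D slice three times yields identical R, G, and B
    values, preserving intensity without artificial color mapping; the
    result is then standardized with mean 0.5 and std 0.5.
    """
    rgb = np.stack([gray, gray, gray], axis=-1)  # (H, W) -> (H, W, 3)
    return (rgb - mean) / std

gray_slice = np.full((128, 128), 0.5)            # toy mid-gray CT slice
x = preprocess_slice(gray_slice)                 # mid-gray maps to 0 after normalization
```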

DL architectures for model training

Four DL architectures were used in the present study: convolutional neural network (CNN), DenseNet, LeNet, and ResNet.

CNN architecture

The network comprised convolutional and max-pooling layers for feature extraction, followed by fully connected layers for regression. Input CT images (128 × 128 × 3, with grayscale values replicated across three channels) were processed through four convolutional layers (32, 64, 128, and 128 filters, respectively; kernel size 3 × 3, rectified linear unit (ReLU) activation), each followed by 2 × 2 max-pooling to progressively reduce spatial resolution and capture increasingly abstract features. The extracted features were then flattened and passed through two dense layers: one with 128 neurons (ReLU) and a final single-neuron output layer for ICP regression.
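The layer-by-layer spatial dimensions implied by this description can be checked with a short calculation (assuming 'same' padding, so each 3 × 3 convolution preserves the spatial size and only each 2 × 2 max-pool halves it; the padding scheme is not stated in the paper):

```python
def cnn_feature_shapes(size=128, filters=(32, 64, 128, 128)):
    """Trace the feature-map shape through four conv + 2x2 max-pool stages."""
    shapes = []
    for f in filters:
        size //= 2                 # each 2x2 max-pool halves the resolution
        shapes.append((size, size, f))
    return shapes

stages = cnn_feature_shapes()
# Flattened feature count feeding the 128-neuron dense layer
flattened = stages[-1][0] * stages[-1][1] * stages[-1][2]
```

Under these assumptions the final feature map is 8 × 8 × 128, so the flatten layer produces 8,192 features.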

DenseNet architecture

The model accepted 128 × 128 × 3 input images, which were resized to 224 × 224 for compatibility with the DenseNet121 backbone. Pixel values were normalized to 0–1 using a rescaling layer. DenseNet121, used without its top layers, served as the feature extractor. A global average pooling layer then reduced feature map dimensionality and consolidated spatial information. The pooled features were passed through a dense layer of 128 neurons with ReLU activation, followed by a single-neuron dense layer with linear activation to generate the continuous ICP output.

LeNet architecture

The model processed 128 × 128 × 3 input images. Feature extraction began with two convolutional layers: the first with six 5 × 5 filters and the second with 16 filters of the same size, both using ReLU activation. The resulting feature maps were flattened and passed through two fully connected layers with 120 and 84 neurons (ReLU), enabling high-level representation learning. Finally, a single-neuron dense layer with linear activation generated the continuous regression output.

ResNet architecture

The model processed 128 × 128 × 3 input images, resized to 224 × 224 for ResNet50 compatibility. Pixel values were normalized to 0–1. ResNet50, pretrained on ImageNet and used without its top layers, served as the feature extractor through convolutional layers and residual connections. A global average pooling layer reduced feature dimensions, followed by a dense layer with 128 units (ReLU) to model non-linear relationships. To limit overfitting, a dropout layer (rate 0.5) was applied. The final output layer consisted of a single neuron with linear activation to generate the continuous regression value.

Training model and validation

The model underwent training on the dataset for 100 epochs with a batch size of 32, providing efficient data processing while ensuring stability in weight changes. During each epoch, the model’s weights were updated based on backpropagation, and the validation dataset was used to monitor its performance on unseen data. The training history captured key performance metrics, including training loss and validation loss over each epoch.
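The epoch/batch loop with per-epoch validation monitoring can be illustrated with a toy NumPy stand-in (a linear model substitutes for the CNN here; the optimizer, learning rate, and data are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def train_with_validation(x_tr, y_tr, x_val, y_val,
                          epochs=100, batch_size=32, lr=0.1):
    """Mini-batch gradient descent on a toy linear model y = w*x + b,
    recording training and validation MSE after every epoch."""
    rng = np.random.default_rng(0)
    w, b = 0.0, 0.0
    history = {"train_loss": [], "val_loss": []}
    n = len(x_tr)
    for _ in range(epochs):
        idx = rng.permutation(n)                 # reshuffle each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            xb, yb = x_tr[batch], y_tr[batch]
            err = (w * xb + b) - yb
            w -= lr * 2 * np.mean(err * xb)      # gradient of MSE w.r.t. w
            b -= lr * 2 * np.mean(err)           # gradient of MSE w.r.t. b
        history["train_loss"].append(np.mean(((w * x_tr + b) - y_tr) ** 2))
        history["val_loss"].append(np.mean(((w * x_val + b) - y_val) ** 2))
    return (w, b), history

x = np.linspace(-1, 1, 200)
y = 3 * x + 1                                    # noiseless toy target
(w, b), hist = train_with_validation(x[:160], y[:160], x[160:], y[160:])
```

The recorded `history` plays the role of the training history described above: diverging validation loss with falling training loss signals overfitting, as observed for LeNet and ResNet in the Results.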

Predictive performance of the model using the test dataset

The performance of each trained model was assessed using the test set. Predicted ICP values were plotted against actual ICP values in a scatter plot. In addition, essential metrics were determined, including mean absolute error (MAE), mean squared error (MSE), root MSE (RMSE), and the coefficient of determination (R2).

For interpretation, low MSE and RMSE demonstrate that the model produces predictions close to the true values, whereas low MAE suggests that the model is consistently accurate across individual predictions. An R2 value near 1 indicates a strong fit, meaning the model can explain the majority of the variance in the ICP value.[10] The models were implemented in Python 3.9 (Python Software Foundation) using the TensorFlow 2.0 library. Training and evaluation were performed in a Google Colab environment with an NVIDIA Tesla L4 GPU (Google). Finally, the model with the best predictive performance was subsequently deployed as an online application using Streamlit version 1.41.1 (Streamlit Inc.).
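The four metrics follow directly from the predicted and actual ICP values; a minimal NumPy sketch (the toy values below are illustrative, not study data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, and R^2 for a set of ICP predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return mae, mse, rmse, r2

# Toy ICP values in cmH2O
mae, mse, rmse, r2 = regression_metrics([10, 20, 30], [12, 19, 28])
# mae = 5/3, mse = 3.0, r2 = 0.955
```

Note that R2 can be negative on a test set (as in Table 2 for DenseNet, LeNet, and ResNet), which means the model predicts worse than simply using the mean ICP value.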

Ethical considerations

A Human Research Ethics Committee approved the present study (REC 68-017-10-1). Because it was a retrospective analysis, the present study did not require patients’ informed consent. However, the identity numbers of patients were encoded before the training process.

RESULTS

The present work included 280 individuals with HCP who received EVD between January 2019 and December 2024. Sixteen patients without preoperative CT scans, 13 patients with decompressive craniectomy, eight neonates with open fontanelles, and one patient without an intraoperative ICP value in the operative record were excluded. Consequently, clinical features and cranial CT scan images were obtained for 242 patients, as illustrated in Table 1.

Table 1: Demographic data (n=242).
Factor n (%)
Age group (year)
  0–15 19 (7.9)
  >15–60 122 (50.4)
  >60 101 (41.7)
  Average age-year (SD) 25.90 (20.58)
Gender
  Male 112 (46.3)
  Female 130 (53.7)
Underlying disease
  Hypertension 77 (31.8)
  Diabetes mellitus 28 (11.6)
  Dyslipidemia 24 (9.9)
  Renal failure 10 (4.2)
  Cirrhosis 3 (1.2)
Symptoms
  Motor weakness 177 (73.1)
  Headache 95 (39.3)
  Seizure 27 (11.2)
Glasgow coma scale
  13–15 45 (18.6)
  9–12 62 (25.6)
  3–8 135 (55.8)
Pupillary light reflex
  React both eyes 200 (82.6)
  React one eye 15 (6.2)
  Fixed both eyes 27 (11.2)
Cause of hydrocephalus
  Subarachnoid hemorrhage 88 (36.4)
  Intracerebral hemorrhage 87 (36.0)
  Intracranial tumor 44 (18.2)
  Brain abscess/meningitis 11 (4.5)
  Trauma 9 (3.7)
  Congenital cause 3 (1.2)
  Average intraoperative ICP-cmH2O (SD) 22.86 (7.39)

ICP: Intracranial pressure, SD: Standard deviation, cmH2O: Centimeters of water

The cohort showed a female predominance, with an average age of 25.90 years (±20.58). Common underlying diseases were hypertension, diabetes mellitus, and dyslipidemia, and motor weakness was the most common symptom. HCP was most frequently caused by intracerebral hemorrhage, subarachnoid hemorrhage, and intracranial tumors. The average intraoperative ICP was 22.86 cmH2O (±7.39).

Training model and validation

Training and validation losses for the DL architectures are shown in Figure 3a-d. The CNN’s training and validation losses progressively declined, whereas the validation losses of LeNet and ResNet rose over the epochs.

Figure 3: Training loss by deep learning models (a) convolutional neural network, (b) dense convolutional network, (c) LeNet, and (d) residual neural network.

Model predictive performance with the test dataset

Predicted and actual ICP values were plotted for the CNN, DenseNet, LeNet, and ResNet architectures using the test dataset, as shown in Figure 4a-d. The CNN’s predictions are more evenly distributed along the 45° line than those of the other architectures. In addition, MAE, MSE, RMSE, and R2 were calculated and compared among the DL architectures in Table 2. The CNN model had the lowest error parameters and the highest R2 value. In detail, the CNN model demonstrated an MAE of 4.96 cmH2O, an MSE of 38.76 cmH2O², and an RMSE of 6.22 cmH2O, and it could explain approximately 18% of the variance in the ICP value. For implementation, the CNN model was subsequently deployed as a web application at https://cticp123.streamlit.app/, as shown in Figure 5.

Figure 4: Scatter plot between prediction and actual intracranial pressure value by deep learning models (a) convolutional neural network, (b) dense convolutional network, (c) LeNet, and (d) residual neural network.
Figure 5: Screenshot of web application for intracranial pressure prediction. ICP: Intracranial pressure, cmH2O: Centimeters of water.
Table 2: Predictive performances of deep learning architectures for predicting intracranial pressure.
Architectures MAE (cmH2O) MSE (cmH2O²) RMSE (cmH2O) R2
Convolutional neural network 4.96 38.76 6.22 0.18
DenseNet 6.58 69.73 8.35 −0.45
LeNet 6.90 82.62 9.09 −0.72
ResNet 6.61 70.82 8.41 −0.48

CNN: Convolutional neural network, DenseNet: Dense convolutional network, MAE: Mean absolute error, MSE: Mean squared error, ResNet: Residual neural network, RMSE: Root mean square error, R2: R-squared

DISCUSSION

ICP prediction has been explored to enhance neurological critical care, particularly in resource-limited settings where invasive monitoring imposes economic burdens.[14,15] Several approaches have used clinical features, neuroanatomical indices, and cranial CT scans. Alali et al. developed clinical prediction rules incorporating Marshall classification, midline shift >5 mm, GCS ≤4, pupillary asymmetry, and abnormal reactivity, achieving an AUC of 0.86.[16]

DL-based regression on medical images remains challenging; the present study compared the predictive performance of various DL models for predicting the ICP value in HCP patients. The CNN architecture performed best, with the lowest errors and the highest R2. Nevertheless, the R2 values of all models in the current investigation were low. This could be explained by the imbalanced distribution of ICP values in our dataset: a large proportion of patients presented with moderate-to-high ICP, while relatively few cases fell within the very high range. This restricted variability may have constrained the regression models’ ability to capture patterns across the full ICP spectrum, effectively biasing predictions toward high values. Therefore, larger, prospective, multi-center studies are essential to validate the feasibility of CT-based ICP regression and improve generalizability. However, the other error parameters were close to those of prior studies.[10,17-19] Trakulpanitkit and Tunthanathip used ventricular indices and optic nerve sheath diameter to predict ICP in HCP patients, reporting XGBoost performance with MAE 3.62–3.89 cmH2O, RMSE 6.46–6.72 cmH2O, and R2 0.50–0.51.[10] Similarly, Fong et al. developed an ML-based algorithm for ICP crisis detection, achieving RMSE 3.56–4.51 mmHg.[19] In contrast, our CNN model achieved MAE 4.96 cmH2O, RMSE 6.22 cmH2O, and R2 0.18. The lower explanatory power may reflect that ventricular index-based methods target specific anatomic predictors, whereas DL models automatically extract features from the whole image, potentially diluting ICP-relevant signals.[20,21]

An advantage of regression-based ICP prediction is that it yields continuous values, which can be converted to cerebral perfusion pressure (CPP). Since CPP is critical for maintaining cerebral blood flow, optimal CPP-targeted therapy has been advocated, especially in traumatic brain injury, with guidelines recommending 60–70 mmHg to minimize ischemia and edema.[22-24] Multimodal ICP prediction combining clinical features, CT imaging, and neuroanatomical indices may serve as a screening tool to guide patient-specific management.[25,26] Moreover, predictive DL models could be integrated into computerized clinical decision support systems (CDSS) through web applications. A systematic review by Souza et al. reported that CDSS improved care processes, clinical outcomes, costs, and patient safety.[27]

To the best of our knowledge, this is the first study to predict ICP directly from cranial CT scans. Several limitations must be acknowledged. The optimal number of training images for CNNs remains debated; Shahinfar et al. suggested 150–500 images per class for classification tasks,[28] but evidence for image regression is limited. Larger multi-center datasets are needed to improve training, optimize models, and allow robust validation.[28-30] Future external validation on unseen data will also be essential to confirm the generalizability of our findings.

CONCLUSION

DL regression applied to cranial CT demonstrated limited explanatory performance for estimating ICP and is not sufficient as a substitute for invasive monitoring. However, these findings provide preliminary feasibility evidence and a challenge for improvement. Future research using larger, balanced, multi-center datasets and incorporating multimodal data will be necessary to enhance predictive accuracy and clinical applicability.

Ethical approval:

The research/study was approved by the Institutional Review Board at the Faculty of Medicine, Prince of Songkla University, number REC 68-017-10-1, dated February 05, 2025.

Declaration of patient consent:

Patient consent was not required as patients’ identities are not disclosed or compromised.

Conflicts of interest:

There are no conflicts of interest.

Use of artificial intelligence (AI)-assisted technology for manuscript preparation:

The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.

Financial support and sponsorship: Nil.

References

  1. , , , , , , et al. Global hydrocephalus epidemiology and incidence: systematic review and meta-analysis. J Neurosurg. 2018;130:1065-79.
  2. , , , . Hydrocephalus in low and middle-income countries-progress and challenges. Neurol India. 2021;69:S292-7.
  3. , , . Understanding and modeling the pathophysiology of hydrocephalus: In search of better treatment options. Physiologia. 2024;4:182-201.
  4. , . Development and internal validation of nomogram for chronic subdural hematoma recurrence after surgery. J Xiangya Med. 2025;10:2.
  5. , , , , , , et al. Impact of intracranial pressure and cerebral perfusion pressure on severe disability and mortality after head injury. Neurocrit Care. 2006;4:8-13.
  6. , , , , . Initial intracranial pressure is an independent predictor of unfavorable functional outcomes after aneurysmal subarachnoid hemorrhage. J Clin Neurosci. 2021;94:152-8.
  7. , , , , , . Occurrence and impact of intracranial pressure elevation during treatment of severe intraventricular hemorrhage. Crit Care Med. 2012;40:1601-8.
  8. , , , , , , et al. Prediction of intracranial pressure crises after severe traumatic brain injury using machine learning algorithms. J Neurosurg. 2023;139:528-35.
  9. , , . Prognosis of subarachnoid hemorrhage determined by intracranial pressure thresholds. J Cerebrovasc Endovasc Neurosurg. 2025 Jul 23.
  10. , . Comparison of intracranial pressure prediction in hydrocephalus patients among linear, non-linear, and machine learning regression models in Thailand. Acute Crit Care. 2023;38:362-70.
  11. , , , , , , et al. Computer vision for assessing surgical movements in neurosurgery. Stud Health Technol Inform. 2024;316:934-8.
  12. , , . Comparative analysis of deep learning architectures for performance of image classification in pineal region tumors. J Med Artif Intell. 2025;8:13.
  13. , , . Image segmentation of operative neuroanatomy into tissue categories using a machine learning construct and its role in neurosurgical training. Oper Neurosurg. 2022;23:279-86.
  14. , . Cost-effectiveness of intracranial pressure monitoring in severe traumatic brain injury in Southern Thailand. Acute Crit Care. 2025;40:69-78.
  15. , , , , . Image-based detection of the internal carotid arteries and sella turcica in endoscopic endonasal transsphenoidal surgery. Neurosurg Focus. 2025;59:E11.
  16. , , , , , , et al. A clinical decision rule to predict intracranial hypertension in severe traumatic brain injury. J Neurosurg. 2018;131:612-9.
  17. , , , , , , et al. Dynamic nomogram for predicting long-term survival in patients with brain abscess. Chin Neurosurg J. 2025;11:15.
  18. , , . Interpreting regression models in clinical outcome studies. Bone Joint Res. 2015;4:152-3.
  19. , , , , . IntraCranial pressure prediction AlgoRithm using machinE learning (I-CARE): Training and validation study. Crit Care Explor. 2023;6:e1024.
  20. , , , , , , et al. Automatic detection of mild cognitive impairment based on deep learning and radiomics of MR imaging. Front Med (Lausanne). 2024;11:1305565.
  21. , , . Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recogn Lett. 2021;141:61-7.
  22. , , , , , , et al. Benefit on optimal cerebral perfusion pressure targeted treatment for traumatic brain injury patients. J Crit Care. 2017;41:49-55.
  23. , , , , , , et al. Can optimal cerebral perfusion pressure in patients with severe traumatic brain injury be calculated based on minute-by-minute data monitoring? Acta Neurochir Suppl. 2016;122:245-8.
  24. , , , , , , et al. Guidelines for the management of severe traumatic brain injury, Fourth Edition. Neurosurgery. 2017;80:6-15.
  25. , . Development and internal validation of a nomogram for predicting outcomes in children with traumatic subdural hematoma. Acute Crit Care. 2022;37:429-37.
  26. , , , , , . Necessity of inhospital neurological observation for mild traumatic brain injury patients with negative computed tomography brain scans. J Health Sci Med Res. 2000;38:267-74.
  27. , , , , , , et al. Computerized clinical decision support systems for primary preventive care: A decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci. 2011;6:87.
  28. , , . “How many images do I need?” Understanding how sample size per class affects deep learning model performance metrics for balanced designs in autonomous wildlife monitoring. Eco Inform. 2020;57:101085.
  29. , , , , . Economic impact of a machine learning-based strategy for preparation of blood products in brain tumor surgery. PLoS One. 2022;17:e0270916.
  30. , , , , , . Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans Med Imaging. 2016;35:1299-312.