Latest week ending November 15, 2025
AI and Radiomics Transform Cancer Prognosis and Detection
Key Takeaways
- Artificial intelligence (AI) and radiomics are increasingly demonstrating their value in enhancing cancer diagnosis and prognosis.
- These advanced imaging and AI techniques are also proving crucial for predicting survival and disease progression in specific aggressive cancers.
- Beyond detection and prognosis, radiomics and deep learning are enhancing risk stratification and characterization across various cancers.
- Advances in imaging modalities and AI are also streamlining clinical workflows and providing non-invasive diagnostic alternatives.
Artificial intelligence (AI) and radiomics are increasingly demonstrating their value in enhancing cancer diagnosis and prognosis. For early prostate cancer detection, AI-based technologies have shown a median AUC-ROC of 0.88, improving diagnostic accuracy, reducing inter-reader variability, and decreasing reporting times. In inoperable pancreatic cancer, a combined clinical-radiomics nomogram, which integrates radiomic signatures with clinical factors such as age and tumor size, achieved superior predictive performance with a C-index of 0.892 for overall survival in patients undergoing concurrent chemoradiotherapy. Similarly, a novel multimodal deep learning model (TRIM-uHCC) for unresectable hepatocellular carcinoma offers significantly improved prognostic stratification (C-indices: 0.71-0.79) compared with standard guideline-based staging systems, potentially guiding individualized treatment decisions.
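The nomogram results above rest on a standard recipe: combine a radiomic signature with clinical covariates in a multivariable survival model and summarize discrimination with a concordance index (C-index). The sketch below illustrates that general recipe with the lifelines library on synthetic data; the column names (rad_signature, age, tumor_size_mm) and all values are hypothetical placeholders and do not reproduce any published model.

```python
# Minimal sketch of a combined clinical-radiomics survival model summarized by a
# C-index. All data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200

# Hypothetical cohort: one radiomic signature score plus two clinical covariates.
rad_signature = rng.normal(0.0, 1.0, n)
age = rng.normal(65.0, 8.0, n)
tumor_size_mm = rng.normal(35.0, 10.0, n)

# Simulate overall survival whose hazard rises with the radiomic score and tumor size.
risk = 0.9 * rad_signature + 0.03 * (tumor_size_mm - 35.0)
os_months = rng.exponential(scale=18.0 * np.exp(-risk))
event = (rng.uniform(size=n) < 0.7).astype(int)  # roughly 30% censored

df = pd.DataFrame({
    "rad_signature": rad_signature,
    "age": age,
    "tumor_size_mm": tumor_size_mm,
    "os_months": os_months,
    "event": event,
})

# Fit a Cox proportional hazards model on the combined clinical + radiomic features.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")

# C-index: the chance the model correctly ranks which of two patients survives longer.
# Higher predicted hazard should mean shorter survival, hence the negation.
c_index = concordance_index(df["os_months"], -cph.predict_partial_hazard(df), df["event"])
print(f"Apparent (in-sample) C-index: {c_index:.3f}")
```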
These advanced imaging and AI techniques are also proving crucial for predicting survival and disease progression in specific aggressive cancers. Machine learning models utilizing 18F-DOPA PET radiomics have achieved ROC AUCs of 81-83% in predicting remaining survival for glioblastoma patients and can help differentiate tumor progression from treatment effects, potentially surpassing current RANO criteria. Complementing this, amide proton transfer-weighted (APTw) MRI shows promise in differentiating early progressive disease from pseudoprogression in IDH-wildtype glioblastoma, particularly when combined with DWI and PWI, achieving an AUC of 0.90. In early oral squamous cell carcinoma (OSCC), PET radiomics-based machine learning models accurately predict late cervical lymph node metastasis with an AUC of 0.977 and 87.5% accuracy, enabling better risk stratification. For pharyngeal cancer, a deep learning model combining baseline and adaptive radiation therapy CT images predicts local recurrence, neck lymph node relapse, and distant metastases, with AUCs ranging from 0.747 to 0.793.
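For readers less familiar with the metric behind these studies, the sketch below shows how a radiomics-based classifier is commonly evaluated with cross-validated ROC AUC using scikit-learn. It assumes the shape/intensity/texture features were already extracted from the PET or MRI volumes (e.g. with a toolkit such as pyradiomics); the feature matrix and labels are randomly generated placeholders, so the printed AUC is not meaningful and nothing here reproduces the cited models.

```python
# Sketch of scoring a radiomics-based classifier with cross-validated ROC AUC.
# Random numbers stand in for a real radiomics feature matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 50))        # 120 patients x 50 radiomic features (placeholder)
y = rng.integers(0, 2, size=120)      # binary label, e.g. progression vs. treatment effect

# Standardize features, then fit a simple regularized classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Stratified 5-fold cross-validation, scored by area under the ROC curve.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated ROC AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```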
Beyond detection and prognosis, radiomics and deep learning are enhancing risk stratification and characterization across various cancers. A longitudinal radiomics-based approach, incorporating time-varying radiomic modeling from serial chest CT scans, has improved lung cancer risk prediction in USPSTF-ineligible patients. This composite model achieved 78% accuracy, 89% sensitivity, and 67% specificity, outperforming existing risk models like the Brock model. In breast cancer, integrating radiomics with genomics and other multi-omics data is becoming vital for precision management, providing insights into tumor heterogeneity, subtype prediction, and treatment response forecasting. Furthermore, a dual-channel deep learning framework has been shown to effectively capture intratumoral heterogeneity on CECT images for preoperative risk stratification of thymic epithelial tumors (TETs), demonstrating an AUC of 0.74-0.76 on an external test set and outperforming conventional methods and radiologists' visual assessment.
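The accuracy, sensitivity, and specificity figures reported for the lung screening model follow directly from a binary confusion matrix. The short sketch below shows that arithmetic with scikit-learn on made-up predictions; it is purely illustrative and does not use data from the cited study.

```python
# How accuracy, sensitivity, and specificity are derived from a binary confusion
# matrix; the labels and predictions below are illustrative only.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])  # 1 = cancer on follow-up (toy labels)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1])  # model's binary prediction (toy)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate (recall for the positive class)
specificity = tn / (tn + fp)   # true negative rate
print(f"accuracy={accuracy:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```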
Advances in imaging modalities and AI are also streamlining clinical workflows and providing non-invasive diagnostic alternatives. Diffusion-weighted MRI (DWI) with ADC mapping offers a highly accurate non-contrast imaging alternative for detecting abdominal aortic endoleaks after endovascular aneurysm repair, achieving 95% sensitivity and 93% specificity, which is particularly beneficial for patients with renal insufficiency or gadolinium allergies. For left ventricular thrombi (LVT), delayed-phase cardiac CT images (AUC 0.95) and CT-derived extracellular volume (ECV) maps (AUC 0.98) significantly improve detection and characterization, with ECV maps demonstrating the highest classification accuracy. Additionally, a new deep learning network (DMCD-Net) for pulmonary embolism segmentation in CTPA can reduce false positives and enhance focus on challenging regions, aiding in the quantitative assessment of PE severity. In radiology departments, AI models have achieved high F1 scores (up to 0.979) in automating the identification of follow-up recommendations in radiology reports, significantly improving patient safety and workflow efficiency.
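As a brief aside on the DWI result above, the ADC values behind it come from a mono-exponential fit of signal decay across b-values. The sketch below shows the standard two-point computation with NumPy on toy arrays; the b-values and signal intensities are made up, and a real pipeline would operate on co-registered volumes loaded from DICOM or NIfTI rather than hand-written arrays.

```python
# Two-point ADC (apparent diffusion coefficient) map from diffusion-weighted MRI.
# Signal model: S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S_low / S_high) / (b_high - b_low).
# The arrays and b-values below are toy placeholders, not data from the cited study.
import numpy as np

b_low, b_high = 0.0, 800.0               # b-values in s/mm^2
s_low = np.full((4, 4), 1000.0)          # signal at b = 0 (tiny 4x4 "slice")
s_high = np.full((4, 4), 400.0)          # signal at b = 800

eps = 1e-6                               # guard against log/division issues in background voxels
adc = np.log(np.maximum(s_low, eps) / np.maximum(s_high, eps)) / (b_high - b_low)

# ~1.15 x 10^-3 mm^2/s here, within the range typically seen in soft tissue.
print(f"ADC at (0, 0): {adc[0, 0] * 1e3:.2f} x 10^-3 mm^2/s")
```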