Enhancing prognostic outcome prediction in oropharyngeal cancer patients using multi-modality imaging and advanced AI techniques

PhD defence of B. Ma

This PhD research focuses on improving the prediction of treatment outcomes for patients with oropharyngeal cancer, a type of head and neck cancer. Current clinical tools often lack the accuracy needed to personalize treatment. To address this, the research developed and tested advanced artificial intelligence (AI) models that combine information from CT, PET, and MRI scans with clinical data.

The findings showed that combining different imaging types with clinical data leads to more accurate predictions of patient outcomes, such as survival and recurrence. Using deep learning models that learn directly from the medical scans, the research found that including automatically generated tumor segmentation maps and information about cancerous lymph nodes improved prediction performance. Among the imaging modalities, MRI scans, especially T2-weighted images, were particularly effective for predicting local tumor control and survival.
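
To give a concrete impression of this kind of approach, the sketch below shows how stacked image volumes (for example CT, PET, and an automatically generated tumor map) and clinical variables could be fused in a single deep learning model that outputs a prognostic risk score. It is a minimal, hypothetical PyTorch example; the channel layout, layer sizes, and variable names are assumptions for illustration and do not reproduce the models developed in the thesis.

```python
# Hypothetical sketch: multi-channel 3D CNN that fuses imaging and clinical data.
# Channel layout, sizes, and layer choices are illustrative, not the thesis model.
import torch
import torch.nn as nn


class MultiModalPrognosisNet(nn.Module):
    def __init__(self, in_channels: int = 3, n_clinical: int = 8):
        super().__init__()
        # Small 3D CNN encoder over stacked volumes (e.g. CT, PET, tumor mask).
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global average pooling to one feature vector
        )
        # Late fusion: concatenate image features with clinical variables,
        # then predict a single risk score (e.g. for a Cox-style survival loss).
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 1),
        )

    def forward(self, volumes: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(volumes).flatten(1)      # (B, 32) image features
        fused = torch.cat([feats, clinical], dim=1)   # (B, 32 + n_clinical)
        return self.head(fused)                       # (B, 1) prognostic risk score


# Toy usage: CT, PET, and an auto-generated tumor mask stacked as 3 channels.
model = MultiModalPrognosisNet(in_channels=3, n_clinical=8)
volumes = torch.randn(2, 3, 64, 64, 64)   # batch of 2 cropped volumes
clinical = torch.randn(2, 8)              # e.g. age, stage, HPV status (encoded)
risk = model(volumes, clinical)
print(risk.shape)  # torch.Size([2, 1])
```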

A new Transformer-based model (TransRP) was developed, which outperformed traditional AI models by better capturing both local and global patterns in imaging data. Interestingly, simpler models like DenseNet also showed strong performance, especially on external validation datasets.
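
To illustrate the general idea behind such a Transformer-based prognosis model, the sketch below combines a small convolutional stem, which captures local patterns, with a Transformer encoder over the resulting feature tokens, which captures global relations across the volume. This is a hypothetical approximation written for illustration, not the published TransRP implementation; all layer sizes and names are placeholders.

```python
# Illustrative sketch of a CNN + Transformer design for outcome prediction,
# combining local convolutional features with global self-attention.
# Not the published TransRP implementation; all sizes are placeholders.
import torch
import torch.nn as nn


class CNNTransformerPrognosis(nn.Module):
    def __init__(self, in_channels: int = 2, embed_dim: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # CNN stem: captures local texture and downsamples the volume.
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer encoder: models global relations between the remaining
        # spatial positions, treated as a sequence of tokens.
        # (Positional encodings are omitted here for brevity.)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, dim_feedforward=128,
            batch_first=True,
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, 1)  # single prognostic risk score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                       # (B, C, D', H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N, C): one token per position
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)   # prepend a learnable [CLS] token
        tokens = self.transformer(tokens)
        return self.head(tokens[:, 0])             # predict from the [CLS] token


# Toy usage with a two-channel (e.g. CT + PET) crop around the tumor.
model = CNNTransformerPrognosis(in_channels=2)
x = torch.randn(2, 2, 32, 32, 32)
print(model(x).shape)  # torch.Size([2, 1])
```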

Overall, this research highlights the potential of combining multi-modality imaging with AI for better outcome prediction. These findings support the development of more personalized treatment strategies and encourage further clinical validation using larger and more diverse datasets.