Journal of Southern Medical University ›› 2024, Vol. 44 ›› Issue (1): 138-145. DOI: 10.12122/j.issn.1673-4254.2024.01.16


A multi-modal feature fusion classification model based on distance matching and discriminative representation learning for differentiation of high-grade glioma from solitary brain metastasis

ZHANG Zhenyang, XIE Jincheng, ZHONG Weixiong, LIANG Fangrong, YANG Ruimeng, ZHEN Xin   

  1. School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; 2. Department of Radiology, Second Affiliated Hospital of South China University of Technology (Guangzhou First People's Hospital), Guangzhou 510180, China; 3. School of Medicine, South China University of Technology, Guangzhou 510006, China
  Published: 2024-01-24

Abstract: Objective To evaluate the performance of a new multimodal feature fusion classification model based on distance matching and discriminative representation learning for differentiating high-grade glioma (HGG) from solitary brain metastasis (SBM). Methods We collected multi-parametric magnetic resonance imaging (MRI) data from 61 patients with HGG and 60 patients with SBM, and delineated regions of interest (ROI) on T1WI, T2WI, T2-weighted fluid-attenuated inversion recovery (T2_FLAIR), and contrast-enhanced T1WI (CE_T1WI) images. Radiomics features were extracted from each sequence using Pyradiomics and fused with the proposed multimodal feature fusion classification model based on distance matching and discriminative representation learning. The model's performance in differentiating HGG from SBM was evaluated with five-fold cross-validation using specificity, sensitivity, accuracy, and the area under the ROC curve (AUC), and compared quantitatively with that of other feature fusion models. Visualization experiments were conducted on the fused features to validate the feasibility and effectiveness of the proposed model. Results In five-fold cross-validation, the proposed multimodal feature fusion classification model achieved a specificity of 0.871, a sensitivity of 0.817, an accuracy of 0.843, and an AUC of 0.930 for distinguishing HGG from SBM, and the fused features also showed excellent discriminative performance in the visualization experiments. Conclusion The proposed multimodal feature fusion classification model differentiates HGG from SBM effectively and offers significant advantages over other feature fusion classification models for this task.
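
The evaluation workflow described in the Methods (per-sequence radiomics extraction with Pyradiomics followed by five-fold cross-validated classification reporting specificity, sensitivity, accuracy, and AUC) can be sketched in Python as below. This is a minimal illustration only: the file layout (data/<case_id>/<sequence>.nii.gz with a matching mask) is hypothetical, and a plain logistic regression on concatenated per-sequence features stands in for the paper's distance-matching and discriminative-representation fusion model, which is not reproduced here.

    # Minimal sketch: radiomics extraction per MRI sequence + five-fold CV evaluation.
    # Assumptions: hypothetical file layout; simple feature concatenation + logistic
    # regression in place of the paper's fusion model.
    import numpy as np
    from radiomics import featureextractor                      # PyRadiomics
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import roc_auc_score, confusion_matrix

    SEQUENCES = ["T1WI", "T2WI", "T2_FLAIR", "CE_T1WI"]          # four MRI sequences
    extractor = featureextractor.RadiomicsFeatureExtractor()     # default radiomics settings

    def extract_case_features(case_id):
        """Concatenate radiomics features from all four sequences for one patient."""
        feats = []
        for seq in SEQUENCES:
            result = extractor.execute(f"data/{case_id}/{seq}.nii.gz",
                                       f"data/{case_id}/mask_{seq}.nii.gz")
            # keep numeric feature values, drop the "diagnostics_*" metadata entries
            feats.extend(float(v) for k, v in result.items()
                         if not k.startswith("diagnostics_"))
        return np.array(feats)

    def cross_validate(X, y, n_splits=5, seed=0):
        """Five-fold CV reporting mean specificity, sensitivity, accuracy, and AUC."""
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        scores = []
        for train_idx, test_idx in skf.split(X, y):
            scaler = StandardScaler().fit(X[train_idx])
            clf = LogisticRegression(max_iter=1000)
            clf.fit(scaler.transform(X[train_idx]), y[train_idx])
            prob = clf.predict_proba(scaler.transform(X[test_idx]))[:, 1]
            pred = (prob >= 0.5).astype(int)
            tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
            scores.append((tn / (tn + fp),                       # specificity
                           tp / (tp + fn),                       # sensitivity
                           (tp + tn) / len(test_idx),            # accuracy
                           roc_auc_score(y[test_idx], prob)))    # AUC
        return np.mean(scores, axis=0)

In this sketch, X would be the stacked feature vectors of all 121 cases and y the binary labels (HGG = 1, SBM = 0); the reported metrics are averaged over the five folds, mirroring the evaluation protocol stated in the abstract.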

Key words: feature fusion; shared representation learning; discriminant analysis; high-grade glioma; solitary brain metastasis