南方医科大学学报 ›› 2024, Vol. 44 ›› Issue (1): 138-145. doi: 10.12122/j.issn.1673-4254.2024.01.16


基于距离匹配及判别表征学习的多模态特征融合分类模型研究:高级别胶质瘤与单发性脑转移瘤的鉴别诊断

张振阳,谢金城,钟伟雄,梁芳蓉,杨蕊梦,甄 鑫   

  1. 南方医科大学生物医学工程学院,广东 广州 510515;2. 华南理工大学附属第二医院(广州市第一人民医院)放射科,广东 广州 510180;3. 华南理工大学医学院,广东 广州 510006
  • 发布日期:2024-01-24

A multi-modal feature fusion classification model based on distance matching and discriminative representation learning for differentiation of high-grade glioma from solitary brain metastasis

ZHANG Zhenyang, XIE Jincheng, ZHONG Weixiong, LIANG Fangrong, YANG Ruimeng, ZHEN Xin   

  1. School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; 2. Department of Radiology, Second Affiliated Hospital of South China University of Technology (Guangzhou First People's Hospital), Guangzhou 510180, China; 3. School of Medicine, South China University of Technology, Guangzhou 510006, China
  • Published:2024-01-24

摘要: 目的 探索基于距离匹配及判别表征学习的多模态特征融合分类模型在鉴别高级别胶质瘤(HGG)与单发性脑转移瘤(SBM)中的鉴别能力和应用价值。方法 收集121例患者(61例HGG和60例SBM)的多参数磁共振成像(MRI)图像,在T1WI、T2WI、T2加权液体衰减反转恢复(T2_FLAIR)和T1WI增强(CE_T1WI)4种常规轴位MRI图像上勾画目标感兴趣区域(ROI),并使用开源影像组学工具Pyradiomics从4个MRI序列分别提取影像组学特征。使用本研究提出的基于距离匹配及判别表征学习的多模态特征融合方法对4个MRI序列的影像组学特征进行融合,构建分类模型。采用五折交叉验证方法,以特异性(SPE)、灵敏度(SEN)、准确率(ACC)和ROC曲线下面积(AUC)评价该分类模型的鉴别性能,并与其他特征融合分类模型对HGG与SBM的鉴别能力进行定量比较;同时对本研究所提特征融合方法得到的融合特征进行样本散点可视化实验,验证所提多模态特征融合分类模型的可行性和有效性。结果 五折交叉验证结果显示,本研究所提出的基于距离匹配及判别表征学习的多模态特征融合分类模型在鉴别HGG与SBM中的SPE、SEN、ACC、AUC分别为0.871、0.817、0.843、0.930,且该特征融合方法在可视化实验中表现优秀。结论 基于距离匹配及判别表征学习的多模态特征融合分类模型在鉴别HGG与SBM中具有优秀的鉴别能力和较高的应用价值;与其他特征融合分类模型相比,本研究提出的分类模型在该鉴别分类任务中具有较大优势。

关键词: 特征融合;共享表征学习;判别分析;高级别胶质瘤;单发性脑转移瘤

Abstract: Objective To explore the performance of a multimodal feature fusion classification model based on distance matching and discriminative representation learning for differentiating high-grade glioma (HGG) from solitary brain metastasis (SBM). Methods We collected multi-parametric magnetic resonance imaging (MRI) data from 121 patients (61 with HGG and 60 with SBM) and delineated the regions of interest (ROI) on axial T1WI, T2WI, T2-weighted fluid-attenuated inversion recovery (T2_FLAIR), and contrast-enhanced T1WI (CE_T1WI) images. Radiomics features were extracted from each sequence using the open-source tool Pyradiomics and fused with the proposed multimodal feature fusion method based on distance matching and discriminative representation learning to build the classification model. The discriminative performance of the model for differentiating HGG from SBM was evaluated by five-fold cross-validation using specificity, sensitivity, accuracy, and the area under the ROC curve (AUC), and was quantitatively compared with that of other feature fusion models. Scatter-plot visualization of the fused features was performed to verify the feasibility and effectiveness of the proposed model. Results In five-fold cross-validation, the proposed multimodal feature fusion classification model achieved a specificity of 0.871, a sensitivity of 0.817, an accuracy of 0.843, and an AUC of 0.930 for distinguishing HGG from SBM, and the fused features performed excellently in the visualization experiments. Conclusion The proposed multimodal feature fusion classification model differentiates HGG from SBM with excellent discriminative ability and high application value, and shows clear advantages over other feature fusion classification models in this discrimination task.
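The evaluation protocol described above (five-fold cross-validation reporting specificity, sensitivity, accuracy, and AUC over four fused MRI feature sets) can be sketched as follows. This is a minimal illustration under stated assumptions: the cohort sizes (61 HGG, 60 SBM) and the four sequence names come from the abstract, but the feature values are random stand-ins for the Pyradiomics output, and plain concatenation plus logistic regression is used in place of the paper's distance-matching and discriminative representation learning model, which is not reproduced here.

```python
# Hypothetical sketch of the five-fold cross-validation evaluation protocol.
# Synthetic features replace the real radiomics data; simple concatenation
# stands in for the paper's distance-matching fusion model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_hgg, n_sbm, n_feat = 61, 60, 30          # cohort sizes from the abstract
# synthetic stand-ins for the per-sequence radiomics feature matrices
sequences = {name: rng.normal(size=(n_hgg + n_sbm, n_feat))
             for name in ("T1WI", "T2WI", "T2_FLAIR", "CE_T1WI")}
X = np.hstack(list(sequences.values()))    # naive fusion by concatenation
y = np.array([1] * n_hgg + [0] * n_sbm)    # 1 = HGG, 0 = SBM

spe, sen, acc, auc = [], [], [], []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    scaler = StandardScaler().fit(X[train])
    clf = LogisticRegression(max_iter=1000).fit(
        scaler.transform(X[train]), y[train])
    prob = clf.predict_proba(scaler.transform(X[test]))[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y[test], pred).ravel()
    spe.append(tn / (tn + fp))             # specificity (SPE)
    sen.append(tp / (tp + fn))             # sensitivity (SEN)
    acc.append((tp + tn) / (tp + tn + fp + fn))  # accuracy (ACC)
    auc.append(roc_auc_score(y[test], prob))     # AUC

print(f"SPE={np.mean(spe):.3f} SEN={np.mean(sen):.3f} "
      f"ACC={np.mean(acc):.3f} AUC={np.mean(auc):.3f}")
```

With random features the averaged metrics hover around chance level; the point of the sketch is the fold-wise computation of the four reported metrics, not the reported 0.871/0.817/0.843/0.930 values, which depend on the real data and fusion model.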

Key words: feature fusion; shared representation learning; discriminant analysis; high-grade glioma; solitary brain metastasis