Journal of Southern Medical University ›› 2023, Vol. 43 ›› Issue (6): 985-993. doi: 10.12122/j.issn.1673-4254.2023.06.14


A semi-supervised network-based tissue-aware contrast enhancement method for CT images

ZHOU Hao, ZENG Dong, BIAN Zhaoying, MA Jianhua   

  1. Department of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab (Huangpu), Guangzhou 510515, China
  • Online: 2023-06-20  Published: 2023-07-07


Abstract: Objective To propose a tissue-aware contrast enhancement network (T-ACEnet) for CT image enhancement and to verify that the enhanced images improve the accuracy of existing organ segmentation tasks. Methods The original CT images were mapped to low-dynamic-range grayscale images with lung-window and soft-tissue-window contrasts, and the supervised sub-network learned the optimal window width and window level settings for the lungs and abdominal soft tissues via lung masks. The self-supervised sub-network used an extreme value suppression loss function to preserve organ edge structure information. The images generated by T-ACEnet were then fed into a segmentation network to segment multiple abdominal organs. Results The images produced by T-ACEnet conveyed the information of multiple window settings in a single image, facilitating preliminary screening of lesions by physicians. Compared with the second-best method, T-ACE images improved SSIM, QABF, VIFF, and PSNR by 0.51, 0.26, 0.10, and 14.14, respectively, and reduced MSE by an order of magnitude. When T-ACE images were used as input to the segmentation network, organ segmentation accuracy was effectively improved over the original CT images without changing the model: all five quantitative segmentation metrics improved, with a maximum gain of 4.16%. Conclusion The proposed T-ACEnet perceptually enhances the contrast of organ tissues and provides more comprehensive and continuous diagnostic information, and the generated T-ACE images significantly improve the performance of organ segmentation tasks.

Key words: computed tomography; deep learning; CT image visualization; multi-organ segmentation
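The windowing operation described in the Methods (mapping raw CT values to lung-window and soft-tissue-window contrasts) is the standard window width/level transform. The sketch below illustrates that mapping in NumPy; the specific level/width values (about -600/1500 HU for the lung window and 40/400 HU for the abdominal soft-tissue window) are typical clinical settings assumed for illustration and are not taken from the paper.

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map CT values (in Hounsfield units) to [0, 1] using a window level/width."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Dummy CT slice in HU for demonstration; a real input would be a reconstructed slice.
ct_hu = np.random.uniform(-1000.0, 1000.0, size=(512, 512))

# Typical (assumed) clinical window settings:
lung_view = apply_window(ct_hu, level=-600.0, width=1500.0)  # lung window
soft_view = apply_window(ct_hu, level=40.0, width=400.0)     # abdominal soft-tissue window
```

Each call compresses the full Hounsfield range into a narrow display range, which is why a single conventionally windowed image cannot show lung and soft-tissue contrast at once; the T-ACEnet described above aims to combine such window settings into one enhanced image.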