Journal of Southern Medical University ›› 2023, Vol. 43 ›› Issue (6): 985-993. doi: 10.12122/j.issn.1673-4254.2023.06.14


A semi-supervised network-based tissue-aware contrast enhancement method for CT images

ZHOU Hao, ZENG Dong, BIAN Zhaoying, MA Jianhua   

  1. Department of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab (Huangpu), Guangzhou 510515, China
  • Online: 2023-06-20  Published: 2023-07-07

Abstract: Objective To propose a tissue-aware contrast enhancement network (T-ACEnet) for CT image enhancement and to validate its accuracy in CT image organ segmentation tasks. Methods The original CT images were mapped to low-dynamic-range grayscale images with lung- and soft-tissue-window contrasts, and the supervised sub-network learned the optimal window width and level settings for the lung and abdominal soft tissues using the lung mask. The self-supervised sub-network then used an extreme value suppression loss function to preserve more organ edge structure information. The images generated by the T-ACEnet were fed into the segmentation network to segment multiple abdominal organs. Results The images obtained by T-ACEnet provided more window setting information in a single image, allowing physicians to perform preliminary lesion screening. Compared with the second-best methods, T-ACE images improved the SSIM, QABF, VIFF, and PSNR metrics by 0.51, 0.26, 0.10, and 14.14, respectively, and reduced the MSE by an order of magnitude. When T-ACE images were used as input to the segmentation networks, organ segmentation accuracy was effectively improved over the original CT images without any change to the model. All 5 quantitative segmentation indices improved, with a maximum improvement of 4.16%. Conclusion The T-ACEnet can perceptually improve the contrast of organ tissues and provide more comprehensive and continuous diagnostic information, and the T-ACE images generated by this method can significantly improve the performance of organ segmentation tasks.

Key words: computed tomography; deep learning; CT image visualization; multi-organ segmentation
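
Note: The abstract describes mapping the original CT images to low-dynamic-range grayscale views under lung and soft-tissue window settings. The following is a minimal, self-contained sketch (NumPy) of conventional CT window width/level mapping for illustration only; the specific window values below are common textbook defaults, not the settings learned by T-ACEnet, and the function names are hypothetical.

```python
import numpy as np


def apply_window(hu_image: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map a CT image in Hounsfield units (HU) to [0, 1] for a given window level/width."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    windowed = np.clip(hu_image, lo, hi)
    return (windowed - lo) / (hi - lo)


# Illustrative window settings (assumed typical values, not from the paper).
LUNG_LEVEL, LUNG_WIDTH = -600.0, 1500.0
SOFT_LEVEL, SOFT_WIDTH = 40.0, 400.0

if __name__ == "__main__":
    # Synthetic CT slice in HU, purely for demonstration.
    ct_hu = np.random.uniform(-1000, 400, size=(512, 512)).astype(np.float32)

    lung_img = apply_window(ct_hu, LUNG_LEVEL, LUNG_WIDTH)  # lung-window contrast
    soft_img = apply_window(ct_hu, SOFT_LEVEL, SOFT_WIDTH)  # soft-tissue-window contrast

    # Such low-dynamic-range views are the kind of window-specific targets
    # a windowing sub-network could be supervised against.
    print(lung_img.min(), lung_img.max(), soft_img.min(), soft_img.max())
```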