Journal of Southern Medical University, 2024, Vol. 44, Issue 5: 950-959. doi: 10.12122/j.issn.1673-4254.2024.05.17

• Technical Methods •

Reconstruction from CT truncated data based on dual-domain transformer coupled feature learning

Chen WANG1,2(), Mingqiang MENG1,2, Mingqiang LI2, Yongbo WANG1,2, Dong ZENG1,2, Zhaoying BIAN1,2, Jianhua MA1,2()

  1. School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
  2. Pazhou Lab (Huangpu), Guangzhou 510005, China
  • Received: 2023-10-31  Online: 2024-05-20  Published: 2024-06-06
  • Contact: Jianhua MA, E-mail: wangchen9909@outlook.com; jhma@smu.edu.cn
  • About the first author: WANG Chen, master's degree candidate, E-mail: wangchen9909@outlook.com
  • Supported by: National Natural Science Foundation of China (U21A6005)


Abstract:

Objective To propose a CT truncated data reconstruction model (DDTrans) based on projection and image dual-domain Transformer coupled feature learning for reducing the truncation artifacts and image structure distortion caused by an insufficient field of view (FOV) in CT scanning. Methods Transformer-based restoration models were built in both the projection domain and the image domain, using the long-range dependency modeling capability of the Transformer attention module to capture global structural features, thereby restoring the projection data and enhancing the reconstructed images. A differentiable Radon back-projection operator layer was constructed between the projection-domain and image-domain networks to enable end-to-end training of DDTrans. A projection consistency loss was further introduced to constrain the forward-projection of the reconstructed image and improve reconstruction accuracy. Results Experiments on simulated Mayo data showed that, for both partial truncation and interior scanning, DDTrans outperformed the comparison algorithms in removing truncation artifacts at the FOV edges and restoring information outside the FOV. Conclusion DDTrans can effectively remove CT truncation artifacts, ensuring accurate reconstruction of data within the FOV and approximate reconstruction of data outside the FOV.
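The training objective sketched in the Methods, an image-domain fidelity term plus a projection consistency term that re-projects the reconstructed image and compares it with the restored sinogram, can be illustrated as below. This is a minimal sketch, not the authors' implementation: the function names, the weight `lam`, and the small dense matrix standing in for the differentiable Radon forward projector are all illustrative assumptions (in DDTrans the projector would be an autograd-aware operator layer so gradients flow end-to-end between the two networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the differentiable forward Radon projector A.
# A small dense matrix keeps the sketch self-contained; each row
# plays the role of one ray integral over the flattened image.
n_pix, n_rays = 16, 24
A = rng.normal(size=(n_rays, n_pix))

def dual_domain_loss(x_pred, x_gt, p_restored, lam=0.1):
    """Image-domain MSE plus projection consistency MSE.

    x_pred     : reconstructed image (flattened), image-network output
    x_gt       : ground-truth image (flattened)
    p_restored : sinogram restored by the projection-domain network
    lam        : weight of the consistency term (illustrative value)
    """
    image_loss = np.mean((x_pred - x_gt) ** 2)
    # Re-project the reconstruction and compare with the restored sinogram.
    proj_consistency = np.mean((A @ x_pred - p_restored) ** 2)
    return image_loss + lam * proj_consistency

# Toy usage: a perfect reconstruction whose re-projection matches the
# restored sinogram incurs zero loss.
x_gt = rng.normal(size=n_pix)
assert dual_domain_loss(x_gt, x_gt, A @ x_gt) == 0.0
```

The consistency term is what couples the two domains: even when the image-domain output looks plausible, a mismatch against the restored projections outside the measured FOV is penalized, which is how the model constrains the approximate reconstruction beyond the scan FOV.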

Key words: CT truncation artifacts, transformer, deep learning, dual-domain