OrbitNet: A fully automated transformer-based model for multi-organ segmentation of the orbit in CT images
Abstract:
The delineation of orbital organs is a vital step in the diagnosis of orbital diseases and in preoperative planning.
However, accurate multi-organ segmentation remains a clinical challenge, with two main difficulties. First, soft-tissue contrast is relatively low, so organ boundaries are often not clearly visible. Second, the optic nerve and the rectus muscles are difficult to distinguish because they are spatially adjacent and geometrically similar. To address these challenges, we propose OrbitNet, a model that automatically segments orbital organs in CT images. Specifically, we present a global feature extraction module based on the transformer architecture, called the FocusTrans encoder, which enhances the extraction of boundary features. To make the network focus on edge features of the optic nerve and rectus muscles, SA blocks replace the convolution blocks in the decoding stage. In addition, we use a structural similarity (SSIM) loss as part of a hybrid loss function to better learn the edge differences between organs. OrbitNet was trained and tested on a CT dataset collected by the Eye Hospital of Wenzhou Medical University. Experimental results show that the proposed model achieves superior performance: the average Dice similarity coefficient (DSC) is 83.9%, the average 95% Hausdorff distance (HD95) is 1.62 mm, and the average symmetric surface distance (ASSD) is 0.47 mm. OrbitNet also performs well on the MICCAI 2015 challenge dataset.
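For reference, the SSIM term used in such a hybrid loss is commonly defined as below; the window size, constants, and weighting used in OrbitNet are not specified in this abstract, so this is a standard formulation rather than the paper's exact configuration:

\[
\mathrm{SSIM}(P,G) = \frac{(2\mu_P\mu_G + C_1)(2\sigma_{PG} + C_2)}{(\mu_P^2 + \mu_G^2 + C_1)(\sigma_P^2 + \sigma_G^2 + C_2)},
\qquad
\mathcal{L}_{\mathrm{SSIM}} = 1 - \mathrm{SSIM}(P,G),
\]

where \(P\) is the predicted segmentation map, \(G\) the ground truth, \(\mu\) and \(\sigma^2\) the local (window-wise) means and variances, \(\sigma_{PG}\) their covariance, and \(C_1, C_2\) small constants for numerical stability. A hybrid loss of the illustrative form \(\mathcal{L} = \mathcal{L}_{\mathrm{seg}} + \lambda\,\mathcal{L}_{\mathrm{SSIM}}\), with \(\mathcal{L}_{\mathrm{seg}}\) a region-overlap term such as Dice and \(\lambda\) a weighting factor assumed here, penalizes both regional overlap errors and structural discrepancies. Because SSIM is computed over local windows, it is sensitive to boundary structure that overlap-based losses alone tend to under-weight.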