[Paper Translation] Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models


Original paper: https://arxiv.org/pdf/2012.08721v2.pdf

Code: https://github.com/ICT-MIRACLE-lab/CTPelvic1K


Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models


Abstract

Purpose: Pelvic bone segmentation in CT has always been an essential step in clinical diagnosis and surgery planning of pelvic bone diseases. Existing methods for pelvic bone segmentation are either hand-crafted or semi-automatic and achieve limited accuracy when dealing with image appearance variations due to the multi-site domain shift, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Due to the lack of a large-scale pelvic CT dataset with annotations, deep learning methods are not fully explored.

Methods: In this paper, we aim to bridge the data gap by curating a large pelvic CT dataset pooled from multiple sources, including 1,184 CT volumes with a variety of appearance variations. Then we propose for the first time, to the best of our knowledge, to learn a deep multi-class network for segmenting lumbar spine, sacrum, left hip, and right hip, from multiple-domain images simultaneously to obtain more effective and robust feature representations. Finally, we introduce a post-processor based on the signed distance function (SDF).

Results: Extensive experiments on our dataset demonstrate the effectiveness of our automatic method, achieving an average Dice of 0.987 for a metal-free volume. The SDF post-processor yields a 15.1% decrease in Hausdorff distance compared with the traditional post-processor.

Conclusion: We believe this large-scale dataset will promote the development of the whole community, and we open source the images, annotations, codes, and trained baseline models at https://github.com/ICT-MIRACLE-lab/CTPelvic1K.

Keywords: CT dataset, Pelvic segmentation, Deep learning, SDF post-processing


Introduction

The pelvis is an important structure connecting the spine and lower limbs and plays a vital role in maintaining the stability of the body and protecting the internal organs of the abdomen. Abnormalities of the pelvis, such as hip dysplasia and pelvic fractures, can have a serious impact on physical health. For example, pelvic fractures are among the most severe and life-threatening bone injuries: they can injure other organs at the fracture site, and in the most severe situation, open pelvic fractures, the mortality rate can reach 45%. Medical imaging plays an important role in the whole process of diagnosis and treatment of patients with pelvic injuries. Compared with X-ray images, CT preserves the actual anatomic structure, including depth information, and provides surgeons with more details about the damaged site, so it is often used for 3D reconstruction, follow-up, and evaluation of postoperative effects. In these applications, accurate segmentation of the pelvic bones is crucial for assessing the severity of pelvic injuries and helping surgeons make correct judgments and choose appropriate surgical approaches. In the past, surgeons segmented the pelvis manually from CT using software such as Mimics (https://en.wikipedia.org/wiki/Mimics), which is time-consuming and non-reproducible. To address these clinical needs, we here present an automatic algorithm that can accurately and quickly segment pelvic bones from CT.


Existing methods for pelvic bone segmentation from CT mostly use simple thresholding, region growing, and handcrafted models, such as deformable models, statistical shape models, and watershed-based methods. These methods focus on local gray-level information and have limited accuracy due to the density difference between cortical and trabecular bone: trabecular bone is similar to the surrounding tissues in terms of texture and intensity. Bone fractures, if present, further lead to weak edges. Recently, deep learning-based methods have achieved great success in image segmentation; however, their effectiveness for CT pelvic bone segmentation is not fully known.
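
To make the limitation of such intensity-based baselines concrete, below is a minimal sketch of threshold-plus-connected-component bone extraction from a CT volume; the 200 HU cutoff and the use of SciPy are illustrative assumptions, not values or tools taken from the paper.

```python
import numpy as np
from scipy import ndimage

def threshold_bone_mask(ct_hu: np.ndarray, hu_threshold: float = 200.0) -> np.ndarray:
    """Naive bone segmentation: keep voxels above an HU cutoff, then keep only
    the largest connected component.  Dense cortical bone passes the cutoff
    easily, but low-density trabecular bone and fracture edges are often lost."""
    mask = ct_hu > hu_threshold
    labeled, num_components = ndimage.label(mask)
    if num_components == 0:
        return mask
    # Keep the largest connected component as the putative pelvis.
    sizes = ndimage.sum(mask, labeled, index=range(1, num_components + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```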


Although there are some datasets related to the pelvic bones, only a few of them are open-sourced, and those are small (fewer than 5 volumes or 200 slices), far smaller than datasets for other organs. A previous study conducted experiments based on deep learning, but the result was not very good (Dice = 0.92), with a dataset of only 200 CT slices. For the robustness of a deep learning method, it is essential to have a comprehensive dataset that includes as many real scenes as possible. In this paper, we bridge this gap by curating a large-scale CT dataset and exploring the use of deep learning for this task, which is, to the best of our knowledge, the first attempt in this area at such a scale, with more statistical significance and reference value. To build a comprehensive dataset, we have to deal with diverse image appearance variations due to differences in imaging resolution and field-of-view (FOV), domain shift arising from different sites, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Fig. fig-pic-3 gives some examples of these various conditions. Among the above-mentioned appearance variations, metal artifacts are the most difficult to handle. Further, we aim at a multi-class segmentation problem that separates the pelvis into multiple bones, including the lumbar spine, sacrum, left hip, and right hip, instead of simply segmenting out the whole pelvis from CT. The contributions of this paper are summarized as follows:

- A pelvic CT dataset pooled from multiple domains and different manufacturers, including 1,184 CT volumes (over 320K CT slices) with diverse appearance variations (including 75 CTs with metal artifacts). Their multi-bone labels are carefully annotated by experts. We open source it to benefit the whole community.
- A deep multi-class segmentation network that learns more effective representations for joint lumbar spine, sacrum, left hip, and right hip segmentation from multi-domain labeled images, thereby yielding the desired accuracy and robustness.
- A fully automatic analysis pipeline that achieves high accuracy, efficiency, and robustness, thereby enabling its potential use in clinical practice.


Our Dataset

Dataset name       | #     | Mean spacing (mm)  | Mean size       | Tr/Val/Ts       | Source and Year
ABDOMEN            | 35    | (0.76, 0.76, 3.80) | (512, 512, 73)  | 21/7/7          | Public 2015
COLONOG *          | 731   | (0.75, 0.75, 0.81) | (512, 512, 323) | 440/146/145     | Public 2008
MSD_T10            | 155   | (0.77, 0.77, 4.55) | (512, 512, 63)  | 93/31/31        | Public 2019
KITS19             | 44    | (0.82, 0.82, 1.25) | (512, 512, 240) | 26/9/9          | Public 2019
CERVIX             | 41    | (1.02, 1.02, 2.50) | (512, 512, 102) | 24/8/9          | Public 2015
CLINIC *           | 103   | (0.85, 0.85, 0.80) | (512, 512, 345) | 61/21/21        | Collected 2020
CLINIC-metal *     | 75    | (0.83, 0.83, 0.80) | (512, 512, 334) | 0(61)/0/14      | Collected 2020
Our Datasets       | 1,184 | (0.78, 0.78, 1.46) | (512, 512, 273) | 665(61)/222/236 | -

* Sub-datasets whose raw data is available in DICOM format, so scanner manufacturer information is accessible. Numbers in parentheses in the Tr column are unannotated metal-affected CTs reserved for unsupervised learning.

Data Collection

To build a comprehensive pelvic CT dataset that replicates practical appearance variations, we curate a large dataset of pelvic CT images from seven sources, two of which come from a clinic and five from existing CT datasets. An overview of our large dataset is shown in the table above. These seven sub-datasets are curated separately from different sites and sources, with different characteristics often encountered in the clinic. From these sources, we exclude some cases of very low quality or without the pelvic region, and we remove unrelated areas outside the pelvis in our current dataset. Among them, the raw data of COLONOG, CLINIC, and CLINIC-metal are stored in DICOM format, so additional information such as the scanner manufacturer can be accessed. More details about our dataset are given in Online Resource 1 (https://drive.google.com/file/d/115kLXfdSHS9eWxQmxhMmZJBRiSfI8_4_/view). We reformat all DICOM images to NIfTI to simplify data processing and de-identify the images, meeting the institutional review board (IRB) policies of the contributing sites. All existing sub-datasets are at least under the Creative Commons license CC BY-NC-SA, and we will keep their licenses unchanged. For the CLINIC and CLINIC-metal sub-datasets, we will open-source them under the Creative Commons license CC BY-NC-SA 4.0.
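
As a hedged illustration of this reformatting step, the following sketch converts one DICOM series into a compressed NIfTI volume with SimpleITK; the function name and file paths are hypothetical, and the paper does not prescribe a specific conversion toolkit.

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, out_path: str) -> None:
    """Read one DICOM series and write it out as a NIfTI volume; patient-identifying
    DICOM header tags are not carried over into the NIfTI header."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(series_files)
    image = reader.Execute()
    sitk.WriteImage(image, out_path)

# Hypothetical usage:
# dicom_series_to_nifti("CLINIC/case_0001_dicom", "CLINIC/case_0001.nii.gz")
```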



Data Annotation

Annotating thousands of cases at the scale of our dataset is a subjective and time-consuming task. We therefore introduce a strategy of Annotation by Iterative Deep Learning (AID) to speed up our annotation process. In the AID workflow, we first train a deep network with a few precisely annotated data. The deep network is then used to automatically annotate more data, followed by revision from human experts. The human-corrected annotations and their corresponding images are added to the training set to retrain a more powerful deep network. These steps are repeated iteratively until the annotation task is finished; in the final rounds, only minimal modification is needed from human experts. The annotation pipeline is shown in Fig. fig-pic-10.

In Step 1, two senior experts are invited to precisely annotate, pixel-wise, 40 cases of the CLINIC sub-dataset as the initial database, starting from the results of a simple thresholding method and using the ITK-SNAP (Philadelphia, PA) software. All annotations are performed in the transverse plane; the sagittal and coronal planes are used to assist the judgment in the transverse plane. The reason for starting from the CLINIC sub-dataset is that cancellous bone and the surrounding tissues exhibit similar appearances at the fracture site, which requires more prior-knowledge guidance from doctors. In Step 2, we train a deep network with the updated database and make predictions on 100 new cases selected randomly at a time. In Step 3, junior annotators refine the labels based on the prediction results, with each junior annotator responsible for only part of the 100 new cases. A coordinator checks the quality of the refinement by all junior annotators. For easy cases, the annotation process ends at this stage; for hard cases, senior experts are invited to make more precise annotations. Steps 2 and 3 are repeated until we finish the annotation of all images in our dataset. Finally, we conduct another round of scrutiny for outliers and mistakes and make the necessary corrections to ensure the final quality of our dataset.

In Fig. fig-pic-10, the 'junior annotators' are graduate students in the field of medical image analysis, the 'coordinator' is a medical image analysis practitioner with many years of experience, and the 'senior experts' are cooperating doctors in the partner hospital, one of the best orthopedic hospitals in our country. In total, we have annotations for 1,109 metal-free CTs and 14 metal-affected CTs. The remaining 61 metal-affected CTs are left unannotated and planned for use in unsupervised learning.
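
The AID workflow described above can be summarized by the following sketch. The `train_network`, `predict`, and `expert_revision` callables are hypothetical placeholders for the actual training, inference, and manual correction steps; only the loop structure and the batch size of 100 follow the description above.

```python
def annotation_by_iterative_deep_learning(seed_set, unlabeled_pool,
                                          train_network, predict, expert_revision,
                                          batch_size=100):
    """Sketch of the AID workflow: start from a small, precisely annotated seed
    set, then alternate model-assisted annotation with expert revision until
    the unlabeled pool is exhausted."""
    labeled = list(seed_set)                    # Step 1: e.g. 40 precisely annotated CLINIC cases
    while unlabeled_pool:
        model = train_network(labeled)          # Step 2: (re)train on everything labeled so far
        batch = [unlabeled_pool.pop()
                 for _ in range(min(batch_size, len(unlabeled_pool)))]
        revised = [expert_revision(case, predict(model, case))  # Step 3: juniors refine predictions,
                   for case in batch]                           # seniors handle hard cases
        labeled.extend(revised)                 # grow the training database and repeat
    return labeled
```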


Segmentation Methodology

The overall pipeline of our deep approach for segmenting pelvic bones is illustrated in Fig. fig-pic-5. The input is a 3D CT volume. (i) First, the input is sent to our segmentation module, a plug-and-play (PnP) module that can be replaced at will. (ii) After segmentation is done, we send the multi-class 3D prediction to an SDF post-processor, which removes some false predictions and outputs the final multi-bone segmentation result.
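
As a rough illustration of the SDF idea (not necessarily the paper's exact implementation), the sketch below filters one bone class by keeping the largest connected component together with any fragments lying within a distance margin of it; the 10 mm margin and the use of SciPy are assumptions made for illustration. For a multi-class prediction, such a filter would be applied per class before recombining the labels.

```python
import numpy as np
from scipy import ndimage

def sdf_filter(binary_pred: np.ndarray, spacing, margin_mm: float = 10.0) -> np.ndarray:
    """Suppress far-away false-positive islands in a boolean mask for one bone
    class while keeping nearby fragments (e.g. fracture pieces): measure the
    distance from every voxel to the largest connected component and drop
    voxels beyond a margin."""
    labeled, num_components = ndimage.label(binary_pred)
    if num_components == 0:
        return binary_pred
    sizes = ndimage.sum(binary_pred, labeled, index=range(1, num_components + 1))
    main_component = labeled == (int(np.argmax(sizes)) + 1)
    # Euclidean distance (in mm) to the main component; zero inside it, so this
    # behaves like the positive half of a signed distance map.
    dist_mm = ndimage.distance_transform_edt(~main_component, sampling=spacing)
    return binary_pred & (dist_mm <= margin_mm)
```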



Segmentation Module

Based on our large-scale dataset coll