



Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models



Purpose: Pelvic bone segmentation in CT has always been an essential step in clinical diagnosis and surgery planning of pelvic bone diseases. Existing methods for pelvic bone segmentation are either hand-crafted or semi-automatic and achieve limited accuracy when dealing with image appearance variations due to the multi-site domain shift, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Due to the lack of a large-scale pelvic CT dataset with annotations, deep learning methods are not fully explored.

Methods: In this paper, we aim to bridge the data gap by curating a large pelvic CT dataset pooled from multiple sources, including 1,184 CT volumes with a variety of appearance variations. Then we propose for the first time, to the best of our knowledge, to learn a deep multi-class network for segmenting lumbar spine, sacrum, left hip, and right hip, from multiple-domain images simultaneously to obtain more effective and robust feature representations. Finally, we introduce a post-processor based on the signed distance function (SDF).

Results: Extensive experiments on our dataset demonstrate the effectiveness of our automatic method, achieving an average Dice of 0.987 on metal-free volumes. The SDF post-processor yields a decrease of 15.1% in Hausdorff distance compared with the traditional post-processor.

Conclusion: We believe this large-scale dataset will promote the development of the whole community. We open-source the images, annotations, code, and trained baseline models at https://github.com/ICT-MIRACLE-lab/CTPelvic1K.

Keywords: CT dataset · Pelvic segmentation · Deep learning · SDF post-processing




The pelvis is an important structure connecting the spine and lower limbs and plays a vital role in maintaining the stability of the body and protecting the internal organs of the abdomen. Abnormalities of the pelvis, such as hip dysplasia and pelvic fractures, can have a serious impact on physical health. For example, pelvic fractures are among the most severe and life-threatening bone injuries: they can wound other organs at the fracture site, and the mortality rate can reach 45% in the most severe situation, open pelvic fractures. Medical imaging plays an important role in the whole process of diagnosis and treatment of patients with pelvic injuries. Compared with X-ray images, CT preserves the actual anatomic structure, including depth information, providing surgeons with more details about the damaged site, so it is often used for 3D reconstruction, follow-up, and evaluation of postoperative effects. In these applications, accurate pelvic bone segmentation is crucial for assessing the severity of pelvic injuries and helping surgeons make correct judgments and choose appropriate surgical approaches. In the past, surgeons segmented the pelvis from CT manually using software such as Mimics (https://en.wikipedia.org/wiki/Mimics), which is time-consuming and non-reproducible. To address these clinical needs, we here present an automatic algorithm that can accurately and quickly segment pelvic bones from CT.


Existing methods for pelvic bone segmentation from CT mostly use simple thresholding, region growing, and handcrafted models, including deformable models, statistical shape models, watershed, and others. These methods focus on local gray-level information and achieve limited accuracy due to the density differences between cortical and trabecular bone. Moreover, trabecular bone is similar to the surrounding tissues in terms of texture and intensity. Bone fractures, if present, further lead to weak edges. Recently, deep learning-based methods have achieved great success in image segmentation; however, their effectiveness for CT pelvic bone segmentation is not fully known.
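As a point of reference for the classical baselines mentioned above, the following is a minimal sketch of HU thresholding with small-component removal. The threshold value (cortical bone is typically bright, roughly above 200-300 HU) and the toy volume are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def threshold_bone(volume_hu, threshold=250, min_voxels=50):
    """Binarize a CT volume at a fixed HU threshold and drop tiny components."""
    mask = volume_hu > threshold
    labeled, n = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))   # voxels per component
    keep_labels = np.nonzero(sizes >= min_voxels)[0] + 1  # labels of large components
    return np.isin(labeled, keep_labels)

# Toy example: a bright "bone" block inside soft tissue, plus a speck of noise.
vol = np.full((32, 32, 32), 40.0)      # soft tissue ~40 HU
vol[10:20, 10:20, 10:20] = 400.0       # bone-like block
vol[2, 2, 2] = 500.0                   # isolated noise voxel
mask = threshold_bone(vol)
print(mask.sum())                      # 1000: the 10x10x10 block, noise removed
```

This kind of rule-based pipeline illustrates exactly the weakness discussed above: a single global threshold cannot separate trabecular bone from soft tissue of similar intensity.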



Although there are some datasets related to the pelvic bones, only a few of them are open-sourced, and they are small in size (fewer than 5 images or 200 slices), far smaller than datasets for other organs. One prior study conducted experiments based on deep learning, but the result was limited (Dice = 0.92), with a dataset of only 200 CT slices. For the robustness of a deep learning method, it is essential to have a comprehensive dataset that includes as many real scenes as possible. In this paper, we bridge this gap by curating a large-scale CT dataset and exploring the use of deep learning in this task, which marks, to the best of our knowledge, the first attempt in this area with more statistical significance and reference value. To build a comprehensive dataset, we have to deal with diverse image appearance variations due to differences in imaging resolution and field-of-view (FOV), domain shift arising from different sites, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Fig. fig-pic-3 gives some examples of these various conditions. Among the above-mentioned appearance variations, metal artifacts are the most difficult to handle. Further, we aim at a multi-class segmentation problem that separates the pelvis into multiple bones, including the lumbar spine, sacrum, left hip, and right hip, instead of simply segmenting out the whole pelvis from CT. The contributions of this paper are summarized as follows:
- A pelvic CT dataset pooled from multiple domains and different manufacturers, including 1,184 CT volumes (over 320K CT slices) of diverse appearance variations (including 75 CTs with metal artifacts). Their multi-bone labels are carefully annotated by experts. We open-source it to benefit the whole community;
- A deep multi-class segmentation network that learns more effective representations for joint lumbar spine, sacrum, left hip, and right hip segmentation from multi-domain labeled images, thereby yielding the desired accuracy and robustness;
- A fully automatic analysis pipeline that achieves high accuracy, efficiency, and robustness, thereby enabling its potential use in clinical practice.


Our Dataset

Dataset name   Metal  #      Mean spacing (mm)    Mean size         Tr/Val/Ts         Source and Year
ABDOMEN        -      35     (0.76, 0.76, 3.80)   (512, 512, 73)    21/7/7            Public 2015
COLONOG        yes    731    (0.75, 0.75, 0.81)   (512, 512, 323)   440/146/145       Public 2008
MSD_T10        -      155    (0.77, 0.77, 4.55)   (512, 512, 63)    93/31/31          Public 2019
KITS19         -      44     (0.82, 0.82, 1.25)   (512, 512, 240)   26/9/9            Public 2019
CERVIX         -      41     (1.02, 1.02, 2.50)   (512, 512, 102)   24/8/9            Public 2015
CLINIC         yes    103    (0.85, 0.85, 0.80)   (512, 512, 345)   61/21/21          Collected 2020
CLINIC-metal   yes    75     (0.83, 0.83, 0.80)   (512, 512, 334)   0(61)/0/14        Collected 2020
Our dataset    -      1,184  (0.78, 0.78, 1.46)   (512, 512, 273)   665(61)/222/236   -

(Metal: the sub-dataset contains metal-affected CTs. Numbers in parentheses count unannotated metal-affected CTs reserved for unsupervised learning.)

Data Collection

To build a comprehensive pelvic CT dataset that can replicate practical appearance variations, we curate a large dataset of pelvic CT images from seven sources, two of which come from clinical practice and five from existing CT datasets. An overview of our large dataset is shown in Table tab0. These seven sub-datasets are curated separately from different sites and sources, with different characteristics often encountered in the clinic. From these sources, we exclude some cases of very low quality or without a pelvic region, and we remove the unrelated areas outside the pelvis in our current dataset. Among them, the raw data of COLONOG, CLINIC, and CLINIC-metal are stored in DICOM format, so more information, such as the scanner manufacturer, can be accessed. More details about our dataset are given in Online Resource 1 (https://drive.google.com/file/d/115kLXfdSHS9eWxQmxhMmZJBRiSfI8_4_/view). We reformat all DICOM images to NIfTI to simplify data processing and de-identify images, meeting the institutional review board (IRB) policies of the contributing sites. All existing sub-datasets are at least under the Creative Commons license CC-BY-NC-SA, and we will keep the license unchanged. For the CLINIC and CLINIC-metal sub-datasets, we will open-source them under the Creative Commons license CC-BY-NC-SA 4.0.



Data Annotation

Annotating the thousands of cases in our dataset is truly a subjective and time-consuming task. We therefore introduce a strategy of Annotation by Iterative Deep Learning (AID) to speed up our annotation process. In the AID workflow, we begin by training a deep network with a few precisely annotated data. The deep network is then used to automatically annotate more data, followed by revision from human experts. The human-corrected annotations and their corresponding images are added to the training set to retrain a more powerful deep network. These steps are repeated iteratively until we finish the annotation task; in the end, only minimal modification is needed from human experts. The annotation pipeline is shown in Fig. fig-pic-10. In Step 1, two senior experts are invited to precisely annotate, pixel-wise, 40 cases of the CLINIC sub-dataset as the initial database, based on the results of a simple thresholding method, using the ITK-SNAP (Philadelphia, PA) software. All annotations are performed in the transverse plane; the sagittal and coronal planes are used to assist the judgment in the transverse plane. The reason for starting from the CLINIC sub-dataset is that the fractured bone and surrounding tissues exhibit similar appearances at the fracture site, which requires more prior knowledge and guidance from doctors. In Step 2, we train a deep network with the updated database and make predictions on 100 new data selected randomly at a time. In Step 3, some junior annotators refine the labels based on the prediction results, and each junior annotator is only responsible for part of the 100 new data. A coordinator checks the quality of all the junior annotators' refinements. For easy cases, the annotation process ends at this stage; for hard cases, senior experts are invited to make more precise annotations. Steps 2 and 3 are repeated until we finish the annotation of all images in our dataset.
Finally, we conduct another round of scrutiny for outliers and mistakes and make necessary corrections to ensure the final quality of our dataset. In Fig. fig-pic-10, the `Junior annotators' are graduate students in the field of medical image analysis, the `Coordinator' is a medical image analysis practitioner with many years of experience, and the `Senior experts' are cooperating doctors in the partner hospital, one of the best orthopedic hospitals in our country. In total, we have annotations for 1,109 metal-free CTs and 14 metal-affected CTs. The remaining 61 metal-affected CTs are left unannotated and planned for use in unsupervised learning.
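The AID loop described above can be sketched schematically as follows. Here `train`, `predict`, and `expert_revise` are hypothetical placeholders standing in for the real network training, inference, and human-revision steps; the toy stand-ins at the bottom exist only so the sketch runs end to end.

```python
def aid_annotation(seed_labeled, unlabeled, batch_size=100):
    """Iteratively grow the labeled database until all cases are annotated."""
    database = list(seed_labeled)              # Step 1: precisely annotated seed set
    remaining = list(unlabeled)
    while remaining:
        model = train(database)                # Step 2: (re)train on current database
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        predictions = [predict(model, case) for case in batch]
        revised = [expert_revise(p) for p in predictions]  # Step 3: human refinement
        database.extend(zip(batch, revised))   # corrected labels join the training set
    return database

# Toy stand-ins so the sketch executes.
def train(db): return len(db)                  # "model" = size of training set
def predict(model, case): return case % 2      # dummy label
def expert_revise(pred): return pred           # experts accept the prediction

db = aid_annotation(seed_labeled=[(0, 0)], unlabeled=list(range(1, 9)), batch_size=4)
print(len(db))                                 # 9 = 1 seed case + 8 newly annotated
```

The design intent is that `expert_revise` becomes cheaper each round as the retrained model's predictions improve.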



Segmentation Methodology

The overall pipeline of our deep learning approach for segmenting pelvic bones is illustrated in Fig. fig-pic-5. The input is a 3D CT volume. (i) First, the input is sent to our segmentation module, a plug-and-play (PnP) module that can be replaced at will. (ii) After segmentation is done, we send the multi-class 3D prediction to an SDF post-processor, which removes some false predictions and outputs the final multi-bone segmentation result.




Segmentation Module

Based on our large-scale dataset collected from multiple sources, together with its annotations, we use a fully supervised method to train a deep network to learn an effective representation of the pelvic bones. The deep learning framework we choose here is the 3D U-Net cascade version of nnU-Net, a robust state-of-the-art deep learning-based medical image segmentation method. The 3D U-Net cascade contains two 3D U-Nets: the first is trained on downsampled images (stage 1 in Fig. fig-pic-5), and the second is trained on full-resolution images (stage 2 in Fig. fig-pic-5). A 3D network can better exploit the useful 3D spatial information in 3D CT images. Training on downsampled images first enlarges the size of the patches relative to the image, enabling the 3D network to learn more contextual information; training on full-resolution images then refines the segmentation results predicted by the former U-Net.
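The two-stage data flow described above can be sketched as follows: a first network segments a downsampled volume for global context, its upsampled prediction is stacked with the full-resolution image, and a second network refines it. The `stage1_net`/`stage2_net` callables are hypothetical placeholders, not nnU-Net's actual API; the dummy lambdas merely exercise the plumbing.

```python
import numpy as np
from scipy.ndimage import zoom

def cascade_predict(volume, stage1_net, stage2_net, factor=2):
    low = zoom(volume, 1 / factor, order=1)    # stage 1 input: downsampled CT
    coarse = stage1_net(low)                   # coarse labels with large context
    up = zoom(coarse, factor, order=0)         # nearest-neighbour upsampling of labels
    up = up[:volume.shape[0], :volume.shape[1], :volume.shape[2]]  # guard odd sizes
    stacked = np.stack([volume, up])           # stage 2 input: image + coarse labels
    return stage2_net(stacked)                 # refined full-resolution prediction

# Dummy "networks" just to exercise the data flow.
s1 = lambda x: (x > 0.5).astype(np.int64)
s2 = lambda x: x[1]                            # return the (upsampled) coarse channel
out = cascade_predict(np.random.rand(16, 16, 16), s1, s2)
print(out.shape)                               # (16, 16, 16)
```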



SDF Post-Processor

Post-processing is useful for a stable system in clinical use, preventing mispredictions in complex scenes. In segmentation tasks, current systems usually decide whether to remove outliers according to the size of each connected region. However, in pelvic fracture scenes, broken bones may also be removed as outliers. To this end, we introduce SDF filtering as our post-processing module, adding a distance-based criterion besides the size-based one. We calculate the SDF based on the maximum connected region (MCR) of each anatomical structure in the prediction result, obtaining a 3D distance map that increases from the bone border to the image boundary; this map helps determine whether an `outlier prediction', as defined by the traditional MCR-based method, should actually be removed.
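A simplified sketch of this idea, assuming a single-class binary prediction, might look like the following. The distance map is computed from the maximum connected region (strictly, an unsigned distance outside the bone rather than a full signed distance function), and other components are kept only if they lie close to it; the `max_dist` threshold is an illustrative choice, and the 2D toy mask stands in for a 3D volume.

```python
import numpy as np
from scipy import ndimage

def sdf_filter(pred, max_dist=10.0):
    """Keep the MCR plus any fragment within max_dist voxels of it."""
    labeled, n = ndimage.label(pred)
    if n <= 1:
        return pred.astype(bool)
    sizes = ndimage.sum(pred, labeled, range(1, n + 1))
    mcr = labeled == (np.argmax(sizes) + 1)          # maximum connected region
    dist = ndimage.distance_transform_edt(~mcr)      # distance-to-MCR map
    keep = np.zeros_like(mcr)
    for lbl in range(1, n + 1):                      # keep fragments close to the MCR
        comp = labeled == lbl
        if dist[comp].min() <= max_dist:
            keep |= comp
    return keep

pred = np.zeros((40, 40), dtype=bool)
pred[5:20, 5:20] = True                              # main bone (the MCR), 225 px
pred[22:25, 5:8] = True                              # nearby fragment, 9 px (kept)
pred[35:38, 35:38] = True                            # distant outlier (removed)
out = sdf_filter(pred)
print(out.sum())                                     # 234 = 225 + 9
```

Note how a plain MCR filter would have discarded the nearby fragment along with the distant outlier, which is exactly the fracture scenario motivating the distance criterion.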


Experiments

Implementation Details

We implement our method based on the open source code of nnU-Net (https://github.com/mic-dkfz/nnunet). We also used MONAI (https://monai.io/) during our algorithm development; for details, please refer to Online Resource 1. For our metal-free dataset, we randomly select 3/5, 1/5, and 1/5 of the cases in each sub-dataset as the training, validation, and testing sets, respectively, and keep this data partition unchanged in both the all-dataset and sub-dataset experiments.
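A per-sub-dataset split of this kind can be sketched as below. The seed and the rounding choices are illustrative assumptions of ours (not taken from the paper), as are the hypothetical case IDs; a fixed seed is what keeps the partition stable across experiments.

```python
import random

def split_cases(case_ids, seed=0):
    """Partition one sub-dataset into 3/5 train, 1/5 val, 1/5 test."""
    ids = sorted(case_ids)                     # deterministic starting order
    rng = random.Random(seed)                  # fixed seed -> reproducible split
    rng.shuffle(ids)
    n = len(ids)
    n_tr, n_val = int(n * 3 / 5), round(n / 5)
    return ids[:n_tr], ids[n_tr:n_tr + n_val], ids[n_tr + n_val:]

# Hypothetical IDs for the 103-case CLINIC sub-dataset.
tr, val, ts = split_cases([f"CLINIC_{i:03d}" for i in range(103)])
print(len(tr), len(val), len(ts))              # 61 21 21
```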


Results and Discussion

Segmentation Module

To prove that learning from our large-scale pelvic CT dataset helps improve the robustness of our segmentation system, we conduct a series of experiments in different aspects. Firstly, we test the performance of models of different dimensions on our entire dataset. Exp (a) in Table tab_basic shows the quantitative results, where $\Phi_{ALL}$ denotes a deep network model trained on the `ALL' dataset. Following the conventions of most literature, we use the Dice coefficient (DC) and Hausdorff distance (HD) as the metrics for quantitative evaluation. All results are computed on our testing set. As discussed in Sect. subsectionSegModule, $\Phi_{ALL(3D\_cascade)}$ shows the best performance, achieving an average DC of 0.987 and HD of 5.50 voxels, which means the 3D U-Net cascade can learn the semantic features of pelvic anatomy better than the 2D/3D U-Net. As the following experiments are all trained with the 3D U-Net cascade, the mark $(3D\_cascade)$ of $\Phi_{ALL(3D\_cascade)}$ is omitted for notational clarity.
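For concreteness, minimal reference implementations of the two metrics might look as follows. Dice is computed per class on binary masks; the Hausdorff distance here is the exact symmetric voxel-mask variant computed via distance transforms, and the 2D toy masks stand in for 3D volumes.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, gt):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance in voxel units between two boolean masks."""
    d_to_gt = distance_transform_edt(~gt)      # distance of each voxel to gt
    d_to_pred = distance_transform_edt(~pred)  # distance of each voxel to pred
    return max(d_to_gt[pred].max(), d_to_pred[gt].max())

a = np.zeros((20, 20), dtype=bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), dtype=bool); b[5:15, 5:17] = True   # two columns wider
print(round(dice(a, b), 3), hausdorff(a, b))               # 0.909 2.0
```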



Secondly, we train six deep networks, one per single sub-dataset ($\Phi_{ABDOMEN}$, etc.), and test each of them on every sub-dataset. Quantitative and qualitative results are shown in Table tab_subdataset and Fig. fig-pic-6, respectively. We also calculate the performance of $\Phi_{ALL}$ on each sub-dataset.



For a fair comparison, cross-testing of the sub-dataset networks is also conducted on each sub-dataset's testing set. We observe that the evaluation metrics of model $\Phi_{ALL}$ are generally better than those of the models trained on a single sub-dataset. The models trained on a single sub-dataset find it difficult to perform consistently well in other domains, except for $\Phi_{COLONOG}$, which originally contains the largest amount of data from various sources. This observation implies that the domain gap problem does exist and that the solution of collecting data directly from multiple sources is effective. More intuitively, we show the `Average' values in heat map format in Fig. table2fig. Furthermore, we run cross-validation over these six metal-free sub-datasets to verify the generalization ability of this solution, with models marked as $\Phi_{ex\,ABDOMEN}$, etc. The results of $\Phi_{ex\,COLONOG}$ fully demonstrate that training with data from multiple sources can achieve good results on data that has not been seen before. Whereas the models trained separately on the other five sub-datasets cannot achieve good results on COLONOG, aggregating these five sub-datasets yields a result comparable to $\Phi_{ALL}$ while using only one third of the amount of data. More data from multiple sources can be seen as additional constraints on model learning, prompting the network to learn better feature representations of the pelvic bones and the background. In Fig. fig-pic-6, the above discussions can be seen intuitively through the qualitative results. For more experimental results and discussions, e.g. `Generalization across manufacturers' and `Limitations of the dataset', please refer to Online Resource 1.



SDF post-processor

Exp (b) in Table tab_basic shows the effect of the post-processing module. The SDF post-processor yields decreases of 80.7% and 15.1% in HD compared with no post-processor and the MCR post-processor, respectively; for details, please refer to Online Resource 1. The visual effects on two cases are displayed in Fig. fig-pic-8. Large fragments near the anatomical structure are kept by SDF post-processing but removed by the MCR method.




To benefit the pelvic surgery and diagnosis community, we curate and open-source a large-scale pelvic CT dataset pooled from multiple domains, including 1,184 CT volumes (over 320K CT slices) of various appearance variations, and present a deep learning-based pelvic segmentation system which, to the best of our knowledge, marks the first attempt of its kind in the literature. We train a multi-class network for segmentation of the lumbar spine, sacrum, left hip, and right hip using multi-domain images to obtain more effective and robust features. SDF filtering further improves the robustness of the system. This system lays a solid foundation for our future work. We plan to test the significance of our system in real clinical practice and to explore more options based on our dataset, e.g. devising a module for metal-affected CTs and a domain-independent pelvic bone segmentation algorithm.

This research was supported in part by the Youth Innovation Promotion Association CAS (grant 2018135). The authors have no relevant financial or non-financial interests to disclose. Please refer to URL. Please refer to URL. We have obtained approval from the Ethics Committee of the clinical hospital. Not applicable.

