[Paper Translation] Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models




Deep Learning to Segment Pelvic Bones: Large-scale CT Datasets and Baseline Models



Purpose: Pelvic bone segmentation in CT has always been an essential step in clinical diagnosis and surgery planning of pelvic bone diseases. Existing methods for pelvic bone segmentation are either hand-crafted or semi-automatic and achieve limited accuracy when dealing with image appearance variations due to the multi-site domain shift, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Due to the lack of a large-scale pelvic CT dataset with annotations, deep learning methods are not fully explored.

Methods: In this paper, we aim to bridge the data gap by curating a large pelvic CT dataset pooled from multiple sources, including 1,184 CT volumes with a variety of appearance variations. Then we propose for the first time, to the best of our knowledge, to learn a deep multi-class network for segmenting lumbar spine, sacrum, left hip, and right hip, from multiple-domain images simultaneously to obtain more effective and robust feature representations. Finally, we introduce a post-processor based on the signed distance function (SDF).

Results: Extensive experiments on our dataset demonstrate the effectiveness of our automatic method, which achieves an average Dice of 0.987 on metal-free volumes. The SDF post-processor yields a 15.1% decrease in Hausdorff distance compared with the traditional post-processor.

Conclusion: We believe this large-scale dataset will promote the development of the whole community, and we open-source the images, annotations, code, and trained baseline models at https://github.com/ICT-MIRACLE-lab/CTPelvic1K.

Keywords: CT dataset; Pelvic segmentation; Deep learning; SDF post-processing






The pelvis is an important structure connecting the spine and lower limbs and plays a vital role in maintaining the stability of the body and protecting the internal organs of the abdomen. Abnormalities of the pelvis, such as hip dysplasia and pelvic fractures, can seriously impact physical health. For example, pelvic fractures are among the most severe and life-threatening bone injuries; they can wound other organs at the fracture site, and the mortality rate can reach 45% in the most severe situation, open pelvic fractures. Medical imaging plays an important role in the whole process of diagnosis and treatment of patients with pelvic injuries. Compared with X-ray images, CT preserves the actual anatomic structure, including depth information, providing surgeons with more details about the damaged site, so it is often used for 3D reconstruction, follow-up, and evaluation of postoperative effects. In these applications, accurate pelvic bone segmentation is crucial for assessing the severity of pelvic injuries and helping surgeons make correct judgments and choose appropriate surgical approaches. In the past, surgeons segmented the pelvis from CT manually using software like Mimics (https://en.wikipedia.org/wiki/Mimics), which is time-consuming and non-reproducible. To address these clinical needs, we here present an automatic algorithm that can accurately and quickly segment pelvic bones from CT.


Existing methods for pelvic bone segmentation from CT mostly use simple thresholding, region growing, and handcrafted models, including deformable models, statistical shape models, watershed, and others. These methods focus on local gray-level information and have limited accuracy due to the density differences between cortical and trabecular bones; trabecular bone is also similar to the surrounding tissues in texture and intensity. Bone fractures, if present, further lead to weak edges. Recently, deep learning-based methods have achieved great success in image segmentation; however, their effectiveness for CT pelvic bone segmentation is not fully known.
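For concreteness, the intensity-thresholding baseline mentioned above can be sketched in a few lines. The 300 HU cutoff and the function name are illustrative assumptions, not values from the paper; the sketch also hints at why such methods struggle where trabecular bone and soft tissue overlap in intensity.

```python
import numpy as np

def threshold_bone(ct_hu, cutoff=300):
    """Toy bone mask from a CT volume in Hounsfield units.

    Dense cortical bone (typically several hundred HU) passes the cutoff,
    but low-density trabecular bone near soft-tissue HU values is missed,
    which is exactly the failure mode discussed above.
    """
    return ct_hu >= cutoff

# Example: only the two high-density voxels survive the cutoff.
mask = threshold_bone(np.array([-100, 100, 400, 1000]))
```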




Although there are some datasets related to the pelvic bones, only a few of them are open-sourced, and those are small (fewer than 5 images or 200 slices), far less than for other organs. One prior study conducted experiments based on deep learning, but the result was not very good (Dice = 0.92), with a dataset of only 200 CT slices. For the robustness of a deep learning method, it is essential to have a comprehensive dataset that includes as many real scenes as possible. In this paper, we bridge this gap by curating a large-scale CT dataset and exploring the use of deep learning for this task, which marks, to the best of our knowledge, the first attempt in this area, with more statistical significance and reference value. To build a comprehensive dataset, we have to deal with diverse image appearance variations due to differences in imaging resolution and field-of-view (FOV), domain shift arising from different sites, the presence of contrasted vessels, coprolith and chyme, bone fractures, low dose, metal artifacts, etc. Fig. fig-pic-3 gives some examples of these various conditions. Among the above-mentioned appearance variations, metal artifacts are the most difficult to handle. Further, we aim at a multi-class segmentation problem that separates the pelvis into multiple bones, namely the lumbar spine, sacrum, left hip, and right hip, instead of simply segmenting out the whole pelvis from CT. The contributions of this paper are summarized as follows:
- A pelvic CT dataset pooled from multiple domains and different manufacturers, including 1,184 CT volumes (over 320K CT slices) of diverse appearance variations (including 75 CTs with metal artifacts). Their multi-bone labels are carefully annotated by experts, and we open-source the dataset to benefit the whole community;
- A deep multi-class segmentation network that learns more effective representations for joint lumbar spine, sacrum, left hip, and right hip segmentation from multi-domain labeled images, thereby yielding the desired accuracy and robustness;
- A fully automatic analysis pipeline that achieves high accuracy, efficiency, and robustness, thereby enabling its potential use in clinical practice.


Our Dataset

Dataset name   DICOM  #      Mean spacing (mm)   Mean size        Tr/Val/Ts        Source and Year
ABDOMEN        -      35     (0.76, 0.76, 3.80)  (512, 512, 73)   21/7/7           Public, 2015
COLONOG        Yes    731    (0.75, 0.75, 0.81)  (512, 512, 323)  440/146/145      Public, 2008
MSD_T10        -      155    (0.77, 0.77, 4.55)  (512, 512, 63)   93/31/31         Public, 2019
KITS19         -      44     (0.82, 0.82, 1.25)  (512, 512, 240)  26/9/9           Public, 2019
CERVIX         -      41     (1.02, 1.02, 2.50)  (512, 512, 102)  24/8/9           Public, 2015
CLINIC         Yes    103    (0.85, 0.85, 0.80)  (512, 512, 345)  61/21/21         Collected, 2020
CLINIC-metal   Yes    75     (0.83, 0.83, 0.80)  (512, 512, 334)  0(61)/0/14       Collected, 2020
Our dataset    -      1,184  (0.78, 0.78, 1.46)  (512, 512, 273)  665(61)/222/236  -

Data Collection

To build a comprehensive pelvic CT dataset that replicates practical appearance variations, we curate a large dataset of pelvic CT images from seven sources, two of which come from a clinic and five from existing CT datasets. An overview of our large dataset is shown in Table tab0. These seven sub-datasets are curated separately from different sites and sources, with different characteristics often encountered in the clinic. From these sources, we exclude some cases of very low quality or without a pelvic region and remove the unrelated areas outside the pelvis in our current dataset. Among them, the raw data of COLONOG, CLINIC, and CLINIC-metal are stored in DICOM format, so more information, such as the scanner manufacturer, can be accessed. More details about our dataset are given in Online Resource 1 (https://drive.google.com/file/d/115kLXfdSHS9eWxQmxhMmZJBRiSfI8_4_/view). We reformat all DICOM images to NIfTI to simplify data processing and de-identify images, meeting the institutional review board (IRB) policies of the contributing sites. All existing sub-datasets are under a Creative Commons license of at least CC BY-NC-SA, and we will keep their licenses unchanged. The CLINIC and CLINIC-metal sub-datasets will be open-sourced under Creative Commons license CC BY-NC-SA 4.0.



Data Annotation

Annotating a dataset of thousands of cases is truly a subjective and time-consuming task. We therefore introduce a strategy of Annotation by Iterative Deep Learning (AID) to speed up our annotation process. In the AID workflow, we first train a deep network with a few precisely annotated data. The deep network is then used to automatically annotate more data, followed by revision from human experts. The human-corrected annotations and their corresponding images are added to the training set to retrain a more powerful deep network. These steps are repeated iteratively until the annotation task is finished; in the end, only minimal modification is needed from human experts. The annotation pipeline is shown in Fig. fig-pic-10. In Step 1, two senior experts are invited to precisely annotate, pixel-wise, 40 cases of the CLINIC sub-dataset as the initial database, starting from the results of a simple thresholding method and using the ITK-SNAP (Philadelphia, PA) software. All annotations are performed in the transverse plane; the sagittal and coronal planes are used to assist the judgment in the transverse plane. The reason for starting from the CLINIC sub-dataset is that the cancellous bone and surrounding tissues exhibit similar appearances at the fracture site, which requires more prior-knowledge guidance from doctors. In Step 2, we train a deep network with the updated database and make predictions on 100 new data selected randomly at a time. In Step 3, some junior annotators refine the labels based on the prediction results, with each junior annotator responsible for only part of the 100 new data. A coordinator checks the quality of refinement by all junior annotators. For easy cases, the annotation process ends at this stage; for hard cases, senior experts are invited to make more precise annotations. Steps 2 and 3 are repeated until we finish the annotation of all images in our dataset.
Finally, we conduct another round of scrutiny for outliers and mistakes and make necessary corrections to ensure the final quality of our dataset. In Fig. fig-pic-10, the `Junior annotators' are graduate students in the field of medical image analysis, the `Coordinator' is a medical image analysis practitioner with many years of experience, and the `Senior experts' are cooperating doctors in the partner hospital, one of the best orthopedic hospitals in our country. In total, we have annotations for 1,109 metal-free CTs and 14 metal-affected CTs. The remaining 61 metal-affected CTs are left unannotated and are planned for use in unsupervised learning.
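The AID loop described above can be sketched schematically as follows. Here `train`, `predict`, and `expert_revise` are placeholders for the actual network training, inference, and human-revision steps; the control flow is our reading of the workflow, not the authors' code.

```python
def aid_annotate(seed_set, unlabeled, train, predict, expert_revise, batch=100):
    """Schematic Annotation-by-Iterative-Deep-Learning (AID) loop.

    seed_set  : precisely annotated (image, label) pairs (Step 1)
    unlabeled : images still awaiting annotation
    Each round: retrain on the current database, auto-annotate a batch
    (Step 2), have experts revise the predictions (Step 3), and fold the
    corrected pairs back into the database.
    """
    database = list(seed_set)
    while unlabeled:
        model = train(database)                        # retrain on all data so far
        chunk, unlabeled = unlabeled[:batch], unlabeled[batch:]
        preds = [predict(model, x) for x in chunk]     # automatic annotation
        database += [expert_revise(x, y) for x, y in zip(chunk, preds)]
    return database
```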



Segmentation Methodology

The overall pipeline of our deep learning approach for segmenting pelvic bones is illustrated in Fig. fig-pic-5. The input is a 3D CT volume. (i) First, the input is sent to our segmentation module, a plug-and-play (PnP) module that can be replaced at will. (ii) After segmentation is done, we send the multi-class 3D prediction to an SDF post-processor, which removes some false predictions and outputs the final multi-bone segmentation result.




Segmentation Module

Based on our large-scale multi-source dataset and its annotations, we use a fully supervised method to train a deep network to learn an effective representation of the pelvic bones. The deep learning framework we choose here is the 3D U-Net cascade version of nnU-Net, a robust state-of-the-art deep learning-based medical image segmentation method. The 3D U-Net cascade contains two 3D U-Nets: the first is trained on downsampled images (stage 1 in Fig. fig-pic-5), and the second is trained on full-resolution images (stage 2 in Fig. fig-pic-5). A 3D network can better exploit the useful 3D spatial information in 3D CT images. Training on downsampled images first enlarges the size of patches in relation to the image and thus enables the 3D network to learn more contextual information. Training on full-resolution images then refines the segmentation results predicted by the first U-Net.
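The two-stage logic of the cascade can be sketched with numpy/scipy as below; `coarse_net` and `fine_net` stand in for the two trained 3D U-Nets, and the details (interpolation orders, channel stacking, the 0.5 downsampling factor) are illustrative assumptions rather than nnU-Net's actual implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def cascade_predict(volume, coarse_net, fine_net, factor=0.5):
    """Schematic two-stage cascade inference on one 3D CT volume."""
    # Stage 1: segment a downsampled copy, so patches cover more context.
    low = zoom(volume, factor, order=1)
    coarse = coarse_net(low)
    # Upsample the coarse label map back to full resolution (nearest neighbor
    # to keep labels discrete).
    up = zoom(coarse, [s / c for s, c in zip(volume.shape, coarse.shape)], order=0)
    # Stage 2: refine at full resolution, with the coarse map as an extra channel.
    return fine_net(np.stack([volume, up]))
```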



SDF Post Processor

Post-processing is useful for a stable system in clinical use, preventing mispredictions in some complex scenes. In segmentation tasks, current systems usually decide whether to remove outliers according to the size of each connected region. However, in pelvic fracture scenes, broken bones may also be removed as outliers. To this end, we introduce SDF filtering as our post-processing module, which adds a distance constraint besides the size constraint. We calculate the SDF based on the maximum connected region (MCR) of each anatomical structure in the prediction result, obtaining a 3D distance map that increases from the bone border to the image boundary; this map helps determine whether an `outlier prediction' flagged by the traditional MCR-based method should actually be removed.
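A minimal sketch of this distance-based filtering idea, assuming scipy. An unsigned distance transform from the MCR border stands in for the SDF outside the bone, and the `max_dist` threshold is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np
from scipy import ndimage

def sdf_postprocess(mask, max_dist=10.0):
    """Keep the maximum connected region (MCR) of a binary mask, plus any
    smaller components (e.g. bone fragments) within `max_dist` voxels of
    the MCR; distant false positives are discarded."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(bool)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    mcr_label = int(np.argmax(sizes)) + 1
    mcr = labeled == mcr_label
    # Distance map increasing from the MCR border outward.
    dist = ndimage.distance_transform_edt(~mcr)
    keep = mcr.copy()
    for lab in range(1, n + 1):
        if lab == mcr_label:
            continue
        comp = labeled == lab
        if dist[comp].min() <= max_dist:  # fragment close to the main bone
            keep |= comp
    return keep
```

In contrast, a pure MCR post-processor would keep only `mcr` and drop every fragment regardless of its distance to the main bone.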



Experiments

Implementation Details

We implement our method based on the open-source code of nnU-Net (https://github.com/mic-dkfz/nnunet). We also used MONAI (https://monai.io/) during algorithm development; for details, please refer to Online Resource 1. For our metal-free dataset, we randomly select 3/5, 1/5, and 1/5 of the cases in each sub-dataset as the training, validation, and testing sets, respectively, and keep this data partition unchanged in both the all-dataset and sub-dataset experiments.
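A per-sub-dataset 3/5 : 1/5 : 1/5 split as described above can be sketched as follows; the seed and case-ID handling are illustrative assumptions.

```python
import random

def split_cases(case_ids, seed=42):
    """Shuffle one sub-dataset's case IDs reproducibly and split them into
    training, validation, and testing sets in a 3/5 : 1/5 : 1/5 ratio."""
    ids = sorted(case_ids)          # canonical order before shuffling
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_tr, n_val = round(n * 3 / 5), round(n / 5)
    return ids[:n_tr], ids[n_tr:n_tr + n_val], ids[n_tr + n_val:]
```

Fixing the seed keeps the partition identical across the all-dataset and sub-dataset experiments, as the text requires.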


Results and Discussion

Segmentation Module

To show that learning from our large-scale pelvic CT dataset helps improve the robustness of our segmentation system, we conduct a series of experiments from different aspects. First, we test the performance of models of different dimensions on our entire dataset. Exp (a) in Table tab_basic shows the quantitative results, where $ \Phi_{ALL} $ denotes a deep network model trained on the `ALL' dataset. Following the conventions in most of the literature, we use the Dice coefficient (DC) and Hausdorff distance (HD) as the metrics for quantitative evaluation. All results are tested on our testing set. As discussed in the Segmentation Module section, $ \Phi_{ALL(3D\_cascade)} $ shows the best performance, achieving an average DC of 0.987 and an HD of 5.50 voxels, which means the 3D U-Net cascade can learn the semantic features of pelvic anatomy better than the 2D/3D U-Net. As the following experiments are all trained with the 3D U-Net cascade, the subscript $ (3D\_cascade) $ of $ \Phi_{ALL(3D\_cascade)} $ is omitted for notational clarity.
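For reference, the Dice coefficient used throughout these evaluations can be computed per class as below; the convention of scoring two empty masks as 1.0 is our assumption for the degenerate case.

```python
import numpy as np

def dice(pred, gt, label):
    """Dice coefficient for one class label in two integer label volumes:
    DC = 2 |P ∩ G| / (|P| + |G|)."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # label absent from both volumes: perfect agreement
    return 2.0 * np.logical_and(p, g).sum() / denom
```

A multi-class prediction (lumbar spine, sacrum, left hip, right hip) is then scored by averaging `dice` over the four labels.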



Second, we train six deep networks, one per sub-dataset ( $ \Phi_{ABDOMEN} $ , etc.), and test each of them on every sub-dataset. Quantitative and qualitative results are shown in Table tab_subdataset and Fig. fig-pic-6, respectively. We also calculate the performance of $ \Phi_{ALL} $ on each sub-dataset.




For a fair comparison, cross-testing of the sub-dataset networks is also conducted on each sub-dataset's testing set. We observe that the evaluation metrics of model $ \Phi_{ALL} $ are generally better than those of models trained on a single sub-dataset. Models trained on a single sub-dataset find it difficult to consistently perform well in other domains, except $ \Phi_{COLONOG} $ , whose training data, the largest amount, originally come from various sources. This observation implies that the domain gap problem does exist and that collecting data directly from multiple sources is an effective solution. More intuitively, we show the `Average' values in heat-map format in Fig. table2fig. Furthermore, we perform cross-validation over these six metal-free sub-datasets to verify the generalization ability of this solution, with models marked as $ \Phi_{ex\ABDOMEN} $ , etc. The results of $ \Phi_{ex\COLONOG} $ fully show that training with data from multiple sources can achieve good results on data that has never been seen before. While the models trained separately on the other five sub-datasets cannot achieve good results on COLONOG, aggregating these five sub-datasets yields a result comparable with $ \Phi_{ALL} $ , using only one third of the amount of data. More data from multiple sources can be seen as additional constraints on model learning, prompting the network to learn better feature representations of the pelvic bones and the background. These observations can be seen intuitively in the qualitative results of Fig. fig-pic-6. For more experimental results and discussions, e.g. `Generalization across manufacturers' and `Limitations of the dataset', please refer to Online Resource 1.



SDF post-processor

Exp (b) in Table tab_basic shows the effect of the post-processing module. The SDF post-processor yields decreases of 80.7% and 15.1% in HD compared with no post-processor and the MCR post-processor, respectively; for details, please refer to Online Resource 1. The visual effects on two cases are displayed in Fig. fig-pic-8. Large fragments near the anatomical structure are kept by SDF post-processing but are removed by the MCR method.





To benefit the pelvic surgery and diagnosis community, we curate and open-source a large-scale pelvic CT dataset pooled from multiple domains, including 1,184 CT volumes (over 320K CT slices) of various appearance variations, and present a deep learning-based pelvic segmentation system, which, to the best of our knowledge, marks the first attempt in the literature. We train a multi-class network for segmentation of the lumbar spine, sacrum, left hip, and right hip using multi-domain images to obtain more effective and robust features, and SDF filtering further improves the robustness of the system. This system lays a solid foundation for our future work. We plan to test the significance of our system in real clinical practice and to explore more options based on our dataset, e.g. devising a module for metal-affected CTs and a domain-independent pelvic bone segmentation algorithm. This research was supported in part by the Youth Innovation Promotion Association CAS (grant 2018135). The authors have no relevant financial or non-financial interests to disclose. Please refer to URL. We have obtained approval from the Ethics Committee of the clinical hospital. Not applicable.

