CN113706486A - Pancreatic tumor image segmentation method based on densely connected network transfer learning - Google Patents
Info
- Publication number: CN113706486A
- Application number: CN202110944394.6A
- Authority: CN (China)
- Prior art keywords: network, segmentation, image, module, net
- Prior art date: 2021-08-17
- Publication date: Fri Nov 26 2021
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415 — Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/194 — Segmentation involving foreground-background separation
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/10104 — Positron emission tomography [PET]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30096 — Tumor; Lesion
Abstract
The invention discloses a pancreatic tumor image segmentation method based on transfer learning with a densely connected network. The scheme is as follows: acquire positron emission tomography (PET) and magnetic resonance imaging (MRI) data, preprocess them, and divide them into a training set and a test set; construct a segmentation network and train it on the PET training set to obtain first-stage trained network parameters W1; using a transfer strategy, set the initial parameters of the feature extraction module of the segmentation network to the values of the corresponding module in W1, randomly initialize the parameters of the remaining modules, and retrain the segmentation network on the MRI training set to obtain second-stage trained network parameters W2; feed the MRI test set into the segmentation network loaded with W2 to obtain the segmentation result. The invention improves the performance of MRI image segmentation, solves the prior-art difficulty of training a network on small datasets, and can be used to assist doctors with automatic target-region delineation before pancreatic tumor treatment.
Description
Technical field

The invention belongs to the technical field of image processing, and in particular relates to a pancreatic tumor image segmentation method that can be used to help doctors complete automatic target-region delineation before pancreatic tumor treatment.
Background

At present, pancreatic tumors remain among the most lethal malignancies worldwide, and their incidence is rising year by year. According to GLOBOCAN 2020, the latest global cancer burden report released by the International Agency for Research on Cancer, there were an estimated 495,700 new pancreatic tumor cases and about 466,000 deaths worldwide in 2020. Because of their poor prognosis, pancreatic tumors cause nearly as many deaths as new cases and are the seventh leading cause of cancer death in both men and women. According to a study of 28 European countries, pancreatic tumors are expected to surpass breast cancer as the third leading cause of cancer death by 2025. The radiotherapy dose for patients with pancreatic tumors is usually limited by the organs near the tumor; without reducing dose coverage, the tumor margin in the pancreas should be located as accurately as possible to achieve an optimal radiotherapy plan. Accurate segmentation of pancreatic tumor lesions is therefore essential in radiotherapy.
In medical imaging, multimodal data are widely used because different imaging mechanisms provide complementary information about organs and tumors at multiple levels. The medical images commonly used to diagnose tumors are CT, MRI and PET images. CT images are used to diagnose musculoskeletal disease; MRI images provide good soft-tissue contrast, including T2-weighted MRI images, which are suitable for diagnosing peritumoral edema; PET images lack tissue anatomical detail but provide quantitative metabolic and functional information about lesions. In PET imaging, the image intensity in the tumor region is higher than in normal tissue and organ regions, so the approximate region of a pancreatic tumor can be located relatively easily. In recent years, multimodal imaging has received increasing attention for its potential application in radiotherapy planning for cancer patients. Fully exploiting and integrating all available imaging data for target segmentation can greatly improve accuracy.
Zhang Guoqing et al. disclosed a pancreatic tumor CT image segmentation method in Chinese patent CN113034461A. The method is divided into an image encoding path and an image decoding path. In the encoding path, each layer consists of deformable convolution, batch normalization (BN) and ReLU functions, and feature maps are passed to the next layer through a 2×2 max-pooling layer; the last layer of the encoding path is a densely connected convolutional network of three blocks. In the decoding path, the feature map of each layer consists of a first part and a second part, which are combined by a BConvLSTM: the first part is obtained by applying an upsampling function to the feature map of the previous layer, and the second part is the feature map of the current decoding layer. The BConvLSTM contains an input gate, an output gate, a forget gate and a memory cell.
Liang et al., in the article "On the Development of MRI-Based Auto-Segmentation of Pancreatic Tumor Using Deep Neural Networks" published in the International Journal of Radiation Oncology, Biology, Physics in 2019, introduced a method for pancreatic tumor segmentation in MRI images. The method crops the original images with a sliding square window to expand the data volume and trains a three-dimensional convolutional neural network on MRI images from 27 patients, through which pancreatic tumors in MRI images can be segmented.
Zhu et al., in the article "Multi-Scale Coarse-to-Fine Segmentation for Screening Pancreatic Ductal Adenocarcinoma" published on arXiv in 2018, proposed a pancreatic tumor CT image segmentation method. Because pancreatic tumor size varies between patients, the method trains three corresponding segmentation networks on CT volumes of sizes 64^3, 32^3 and 16^3. At test time, the original image is first cropped to size 64^3 and the pancreatic tumor is segmented with the network for that size; after this coarse segmentation, the image is further cropped to 32^3 according to the segmentation result and segmented with the corresponding network, and so on. Using segmentation networks at these three scales, coarse-to-fine segmentation is achieved and the final pancreatic tumor segmentation result is obtained.
None of the above existing pancreatic tumor segmentation methods addresses the small number of labeled images available for medical image segmentation, and each uses images of only one modality, i.e. either pancreatic tumor CT images or pancreatic tumor MRI images, without fully combining multimodal image information. This leads to low pancreatic tumor segmentation accuracy, which cannot meet the need for automatic delineation of the pancreatic tumor region before radiotherapy.
Summary of the invention
The purpose of the present invention is to address the above shortcomings of the prior art by proposing a pancreatic tumor image segmentation method based on densely connected network transfer learning, so as to improve the segmentation accuracy for pancreatic tumors in MRI images when pancreatic tumor image data are scarce.
The technical idea of the invention is as follows: the residual modules in the feature extraction part of the existing Mask-RCNN network structure are replaced with densely connected modules, yielding a segmentation network structure DM-net based on densely connected modules; a transfer learning strategy then applies the knowledge learned from pancreatic tumor PET image segmentation to the segmentation of pancreatic tumor MRI images, thereby achieving accurate segmentation with only a small number of pancreatic tumor MRI images.
According to the above idea, the implementation of the present invention includes the following:
(1) Obtain positron emission tomography (PET) data and magnetic resonance imaging (MRI) data from the hospital, preprocess them, and divide the preprocessed data into a training set and a test set in a ratio of 8:2;
(2) Construct a segmentation network DM-net consisting of a feature extraction module, a region proposal network, a region-of-interest alignment module and a three-branch module in cascade;
(3) Initialize the initial parameters of the segmentation network with the He initialization method, and set the loss function of the segmentation network DM-net as: loss = loss_cls + loss_box + loss_mask,

where loss_cls is the loss of the classification branch, loss_box is the loss of the detection branch, and loss_mask is the loss of the segmentation branch;
(4) Using the Adam optimizer with the above loss function as the optimization objective, iteratively learn the parameters of the segmentation network DM-net on the PET training data set until the value of the loss function no longer decreases, obtaining the first-stage trained network parameters W1;
(5) Train DM-net on the MRI data via the transfer learning strategy:
(5a) Set the parameters of the feature extraction module of DM-net to the values of the corresponding module in the trained network parameters W1, and re-initialize the parameters of the region proposal network, the region-of-interest alignment module and the three-branch module with the He initialization method;
(5b) Keeping the loss function of the network unchanged, iteratively learn the parameters of the segmentation network DM-net on the MRI training data set until the value of the loss function no longer decreases, obtaining the second-stage trained network parameters W2;
(6) Load the second-stage trained network parameters W2 into the segmentation network DM-net, and input the MRI test data set into DM-net to obtain an output probability map;
(7) Set the probability threshold to 0.5 and compare each pixel value of the output probability map with this threshold to obtain the final segmentation result:
pixels of the output probability map with values less than 0.5 are set to 0, indicating the background region;
pixels of the output probability map with values greater than 0.5 are set to 1, indicating pancreatic tumor.
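As a minimal NumPy sketch of the thresholding in step (7): the function name is illustrative, and the behavior at exactly 0.5 (which the text leaves unspecified) is taken as background here.

```python
import numpy as np

def threshold_probability_map(prob_map, thresh=0.5):
    """Binarize the network's output probability map: 1 = pancreatic tumor,
    0 = background. Values exactly equal to thresh fall to background."""
    return (prob_map > thresh).astype(np.uint8)
```

For example, a 2x2 probability map [[0.2, 0.7], [0.5, 0.9]] is mapped to [[0, 1], [0, 1]].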
Compared with the prior art, the present invention has the following advantages:
1. A segmentation network can be trained with only a small number of pancreatic tumor MRI images.
By using a transfer learning strategy, the invention transfers the knowledge learned from pancreatic tumor PET image segmentation to pancreatic tumor MRI image segmentation: the trained PET image segmentation model is used as a pre-trained model, which makes learning the MRI segmentation network easier, so a segmentation network can be trained with only a small number of pancreatic tumor MRI images.
2. The performance of pancreatic tumor MRI image segmentation is improved.
The invention borrows the structure of the existing Mask-RCNN, i.e. multi-task learning that performs instance segmentation, object detection and classification simultaneously, so these tasks can share features and reinforce each other. At the same time, the Mask-RCNN structure is improved by replacing its feature extraction module with stacked densely connected modules, so that all earlier layers are densely connected to later layers, achieving feature reuse. In addition, by using the transfer learning strategy, the invention fuses pancreatic tumor PET images with pancreatic tumor MRI images, so the two modalities provide complementary information for pancreatic tumor segmentation. These three points improve the performance of pancreatic tumor MRI image segmentation.
Description of drawings
Fig. 1 is the implementation flowchart of the present invention;

Fig. 2 is the structure diagram of the DM-net network based on densely connected modules constructed in the present invention;

Fig. 3 shows examples of the pancreatic tumor MRI and PET images used in the present invention;

Fig. 4 is a comparison of the segmentation results of the present invention and three existing target segmentation methods on MRI pancreatic tumors.
Detailed description

The implementation and effects of the present invention are described in further detail below with reference to the accompanying drawings.

Referring to Fig. 1, the implementation steps of the present invention are as follows:
Step 1. Construct the positron emission tomography PET and magnetic resonance imaging MRI datasets and divide them into a training set and a test set.
(1.1) Obtain positron emission tomography PET data and magnetic resonance imaging MRI data from the hospital;
(1.2) Taking the position of the MRI image as the reference, use 3D Slicer software to adjust the spatial position of the PET image of the same patient so that it overlaps the MRI image, then apply random rotations, horizontal flips and vertical flips in turn to expand the data volume to 8 times the original;
(1.3) Crop both the augmented PET images and the MRI images from the original size of 512×512 to 320×320;
(1.4) Normalize the cropped PET images and MRI images respectively by the following formula:

Y = (X - X_min) / (X_max - X_min),

where Y is the normalized image, X is the input image, X_min is the minimum pixel gray value of the input image, and X_max is the maximum pixel gray value of the input image;
(1.5) Divide the normalized PET images and MRI images into a training set and a test set in a ratio of 8:2.
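Steps (1.3) through (1.5) can be sketched as follows in NumPy. The crop position (a center crop), the deterministic split order and all function names are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np

def center_crop(img, size=320):
    """Crop a 512x512 slice to size x size around the image center (step 1.3);
    the patent does not specify the crop location, center crop is assumed."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def min_max_normalize(img):
    """Min-max normalization to [0, 1] as in step (1.4)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def flip_augment(img):
    """Horizontal and vertical flips used to enlarge the dataset (step 1.2)."""
    return [img, img[:, ::-1], img[::-1, :]]

def split_train_test(items, ratio=0.8):
    """8:2 train/test split (step 1.5); a deterministic order is assumed."""
    k = int(len(items) * ratio)
    return items[:k], items[k:]
```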
Step 2. Construct the segmentation network DM-net.

Referring to Fig. 2, this step is implemented as follows:
(2.1) Construct the feature extraction module: it consists of four densely connected modules in cascade, where the output of the previous densely connected module is concatenated in the channel direction with the output of the next densely connected module; each densely connected module consists of a ReLU activation function and a 3×3 two-dimensional convolutional layer;
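The dense connectivity described in (2.1) can be sketched as a toy NumPy forward pass. The growth rate of 4 channels per module, the random weights and the tiny input size are illustrative assumptions, not the patent's actual layer widths; the convolution is written as cross-correlation, the usual deep-learning convention.

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 cross-correlation: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def dense_module(x, w):
    """One densely connected module: ReLU activation then 3x3 conv (step 2.1)."""
    return conv3x3(np.maximum(x, 0.0), w)

def feature_extractor(x, weights):
    """Cascade of dense modules; each module's input is the channel-wise
    concatenation of all earlier inputs and outputs (dense connection)."""
    for w in weights:
        y = dense_module(x, w)
        x = np.concatenate([x, y], axis=0)  # splice along the channel direction
    return x
```

With 1 input channel and 4 new channels per module, four modules yield 1 + 4×4 = 17 output channels, and the original input channel is carried through unchanged.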
(2.2) Construct the region proposal network: it consists of a candidate anchor box extraction unit and a binary classification network. The anchor box extraction unit obtains multiple candidate anchor boxes by a sliding-window method; the binary classification network, a cascade of several convolutional layers and several fully connected layers, judges whether a candidate anchor box contains a pancreatic tumor region, thereby screening possible candidate regions from all candidate anchor boxes;
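The sliding-window anchor extraction in (2.2) can be sketched as below; the anchor sizes, aspect ratios and feature-map stride are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def sliding_window_anchors(feat_h, feat_w, stride, sizes, ratios):
    """Generate candidate anchor boxes (x1, y1, x2, y2) centered at every
    feature-map position, one per (size, aspect-ratio) pair."""
    anchors = []
    for fy in range(feat_h):
        for fx in range(feat_w):
            cx, cy = (fx + 0.5) * stride, (fy + 0.5) * stride
            for s in sizes:
                for r in ratios:
                    w = s * np.sqrt(r)   # box width for this aspect ratio
                    h = s / np.sqrt(r)   # box height keeps area s*s constant
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)
```

A 2×2 feature map with stride 16, one size and two ratios yields 2×2×1×2 = 8 candidate boxes; the binary classification network would then score each box.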
(2.3) Construct the region-of-interest alignment module: it consists of a grid division unit, a bilinear interpolation unit and a max-pooling unit in cascade, where:

the grid size of the grid division unit is L/7 × H/7, where L is the length of the candidate box and H is its height;

the bilinear interpolation unit uses 4 sampling points, i.e. four points are randomly taken within each grid cell and their gray values are obtained by bilinear interpolation;

the sampling kernel of the max-pooling unit has size 2×2 and stride 2;
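The bilinear interpolation used by the alignment module in (2.3) can be sketched as a standalone NumPy function for one fractional sampling point; this is a generic sketch of the interpolation itself, not the patent's exact implementation.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate the value of a 2-D array at fractional (y, x):
    a weighted average of the four surrounding integer-grid pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])
```

For the 2×2 grid [[0, 1], [2, 3]], sampling at the center (0.5, 0.5) returns the average of all four values, 1.5.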
(2.4) Construct the three-branch module: it consists of a classification module, a detection module and a segmentation module in parallel, where:

the classification module is a fully connected network formed by stacking several fully connected layers, with 2 neurons in the last fully connected layer;

the detection module is a fully connected network formed by stacking several fully connected layers, with 4 neurons in the last fully connected layer;

the segmentation module is a fully convolutional network consisting of several upsampling layers and 3×3 two-dimensional convolutional layers;
(2.5) Cascade the feature extraction module, the region proposal network, the region-of-interest alignment module and the three-branch module in turn to form the segmentation network DM-net.
Step 3. Initialize the initial parameters of the segmentation network with the He initialization method, and set the loss function of the segmentation network DM-net.
(3.1) Initialize the initial parameters of the segmentation network with the He initialization method; the initialized parameters W of the l-th layer obey the distribution

W ~ N(0, 2/n_l),

i.e. a normal distribution with mathematical expectation 0 and variance 2/n_l, where n_l is the number of neurons in the l-th layer of the segmentation network DM-net.

(3.2) Set the loss function of the segmentation network DM-net as: loss = loss_cls + loss_box + loss_mask,
where loss_cls is the loss of the classification branch, loss_box the loss of the detection branch, and loss_mask the loss of the segmentation branch, computed as:

loss_cls = -(1/N_box) Σ_i [ p_i* · log p_i + (1 - p_i*) · log(1 - p_i) ]
loss_box = (1/N_box) Σ_i p_i* · smoothL1(t_i - t_i*)
loss_mask = -(1/N_p) Σ_i [ y_i · log pred_i + (1 - y_i) · log(1 - pred_i) ]

where p_i is the predicted classification probability of the i-th candidate box; p_i* = 1 when the i-th candidate box contains a pancreatic tumor and p_i* = 0 when it does not; t_i is the parameterized coordinate of candidate box i and t_i* the parameterized coordinate of its ground-truth label; y_i is the segmentation label of the i-th pixel of the input image, with y_i = 0 if pixel i belongs to the background region and y_i = 1 if it belongs to the pancreatic tumor; pred_i is the predicted probability that the i-th pixel belongs to the pancreatic tumor; N_box is the number of candidate boxes in the image and N_p the number of pixels in the image.
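The three terms can be sketched numerically as below. The exact formulas appear only as figures in the original, so this sketch uses the standard cross-entropy and smooth-L1 forms consistent with the symbols defined above; `dm_net_loss` and its argument layout are illustrative assumptions:

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1 penalty used for the box regression term."""
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def dm_net_loss(p, p_star, t, t_star, pred, y):
    """loss = loss_cls + loss_box + loss_mask, as set in step 3.
    p: predicted box classification probabilities; p_star: 1 if the box
    contains a tumor, else 0; t, t_star: predicted / ground-truth
    parameterized box coordinates (N_box x 4); pred: per-pixel tumor
    probabilities; y: per-pixel segmentation labels."""
    eps = 1e-7
    loss_cls = -np.mean(p_star * np.log(p + eps)
                        + (1 - p_star) * np.log(1 - p + eps))
    # the box term only counts candidate boxes that truly contain a tumor
    loss_box = np.sum(p_star[:, None] * smooth_l1(t - t_star)) / len(p)
    loss_mask = -np.mean(y * np.log(pred + eps)
                         + (1 - y) * np.log(1 - pred + eps))
    return loss_cls + loss_box + loss_mask
```

With perfect predictions all three terms approach zero, and any mis-classified box or pixel raises the total.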
Step 4. Train DM-net with the positron emission tomography (PET) training data set to obtain the once-trained network parameters W1.
(4.1) Take 4 PET images from the PET training data set and input them into the segmentation network DM-net to obtain the segmentation, classification and detection results of each image; compute the loss function value of each image with the formulas of step 3, then average the loss function values of the 4 images to obtain the average loss function value over the PET images;
(4.2) back-propagate the computed average loss function value to obtain the gradients, and update the network parameters of DM-net with the Adam optimizer;
(4.3) repeat (4.1)-(4.2) until all data in the training data set have been learned, which completes one iteration of learning;
(4.4) repeat (4.3) for several iterations until the computed average loss function value no longer decreases, obtaining the once-trained network parameters W1.
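Steps (4.1)-(4.4) amount to mini-batch training with batch size 4, an averaged loss, Adam updates, and a stop once the epoch loss plateaus. The toy numpy sketch below mirrors that loop on a one-parameter stand-in "network"; the quadratic stand-in loss, the learning rate and all names are illustrative, not the patent's model:

```python
import numpy as np

def adam_step(w, g, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (toy re-implementation of the optimizer used for DM-net)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    w = w - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return w, (m, v, t)

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=400)            # stand-in "training set"
w = np.array(0.0)                                # stand-in network parameter
state = (np.zeros_like(w), np.zeros_like(w), 0)
prev = np.inf
for epoch in range(200):                         # (4.4) repeat iterations
    losses = []
    for i in range(0, len(data), 4):             # (4.1) take 4 images at a time
        batch = data[i:i + 4]
        loss = np.mean((w - batch) ** 2)         # average loss of the 4 images
        grad = np.mean(2 * (w - batch))          # (4.2) back-propagation (analytic here)
        w, state = adam_step(w, grad, state)
        losses.append(loss)
    cur = float(np.mean(losses))
    if cur >= prev - 1e-6:                       # (4.4) stop when loss stops falling
        break
    prev = cur
W1 = w                                           # once-trained parameters
```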
Step 5. Train DM-net with the magnetic resonance (MRI) data set through the transfer learning strategy to obtain the twice-trained network parameters W2.
(5.1) Set the parameter values of the feature extraction module of DM-net to the values of the corresponding module in the trained parameters W1, and re-initialize the parameters of the region candidate network, the region-of-interest alignment module and the three-branch module with the He initialization method.
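Step (5.1) is a selective weight copy: keep the PET-trained feature-extractor weights and re-draw everything else with He initialization. The module names and shapes in this sketch are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def he(n_in, n_out):
    """He initialization: N(0, 2 / n_in)."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

# W1: parameters after PET training (step 4); module names are illustrative.
W1 = {"feature_extractor": rng.normal(size=(64, 64)),
      "region_candidate": rng.normal(size=(64, 32)),
      "roi_align": rng.normal(size=(32, 32)),
      "three_branch": rng.normal(size=(32, 8))}

# (5.1) transfer the PET-trained feature extractor, re-initialize the rest.
W = {name: (params.copy() if name == "feature_extractor" else he(*params.shape))
     for name, params in W1.items()}
```

The transferred feature extractor carries the PET-learned low-level representations over to the MRI domain, while the freshly initialized heads are free to adapt to the new modality.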
(5.2) Take 4 MRI images from the MRI training data set and input them into the segmentation network DM-net to obtain the segmentation, classification and detection results of each image; compute the loss function value of each image with the formulas of step 3, then average the loss function values of the 4 images to obtain the average loss function value over the MRI images.
(5.3) Back-propagate the computed average loss function value to obtain the gradients, and update the network parameters of DM-net with the Adam optimizer.
(5.4) Repeat (5.2)-(5.3) until all data in the training data set have been learned, which completes one iteration of learning.
(5.5) Repeat (5.4) for several iterations until the computed average loss function value no longer decreases, obtaining the twice-trained network parameters W2.
Step 6. Test the pancreatic tumor MRI test images with the trained DM-net.
Load the twice-trained network parameters W2 into the segmentation network DM-net to obtain the trained DM-net, and input the MRI test data set into DM-net to obtain the output probability map.
Step 7. Obtain the final segmentation result from the output probability map.
Set the probability threshold to 0.5 and compare each pixel value of the output probability map with this threshold:
set the value of pixels whose value is less than 0.5 to 0;
set the value of pixels whose value is greater than 0.5 to 1;
with 0 representing the background region and 1 the pancreatic tumor, the segmentation of the MRI image is complete.
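Step 7 is a plain 0.5 threshold on the probability map; a minimal numpy sketch (the text leaves pixels exactly equal to 0.5 unspecified, so strict `>` is assumed here):

```python
import numpy as np

# Step 7: binarize the output probability map at threshold 0.5
# (0 = background region, 1 = pancreatic tumor).
prob_map = np.array([[0.1, 0.6],
                     [0.8, 0.4]])
segmentation = (prob_map > 0.5).astype(np.uint8)
```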
The effects of the present invention are further illustrated by the following simulations.
1. Simulation conditions
The simulation platform is a desktop computer with an Intel Core i9-9900K CPU at 3.6 GHz and 32 GB of memory; the neural network models are built and trained with Python 3.6, Keras 2.1.5 and TensorFlow 1.9.0, accelerated with an Nvidia 1080 GPU, CUDA 9.0 and cuDNN v7.
The images used in the simulations are pancreatic tumor MRI images and PET images. The MRI and PET images come from the same batch of patients and can therefore be registered. As shown in Figure 3, the first column shows the PET images of the patients and the second column the corresponding MRI images; the pancreatic tumor regions are marked with curved outlines.
The segmentation performance is evaluated with the DICE coefficient, the sensitivity SEN and the specificity SPE, computed as:

DICE = 2|A ∩ B| / (|A| + |B|)
SEN = TP / (TP + FN)
SPE = TN / (TN + FP)

where A denotes the ground-truth label and B the prediction; TP is the number of truly positive points classified as positive, TN the number of truly negative points classified as negative, FP the number of truly negative points classified as positive, and FN the number of truly positive points classified as negative.
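The three metrics can be computed directly from binary masks; a sketch consistent with the definitions above (the function names are illustrative):

```python
import numpy as np

def dice(a, b):
    """DICE = 2|A ∩ B| / (|A| + |B|) between label mask A and prediction B."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def sen_spe(a, b):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = np.logical_and(a == 1, b == 1).sum()
    tn = np.logical_and(a == 0, b == 0).sum()
    fp = np.logical_and(a == 0, b == 1).sum()
    fn = np.logical_and(a == 1, b == 0).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity captures how much of the tumor is found, specificity how little background is mislabeled, and DICE balances the two in a single overlap score.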
The existing image segmentation networks used in the simulations include the U-shaped network Unet, the residual network ResNet, the mask segmentation-detection network Mask-RCNN, its transfer-learning variant T-Mask-RCNN, and the dense-connection-based network DM-net.
2. Simulation contents
Simulation 1: the network of the present invention and the existing networks Unet, ResNet, Mask-RCNN, T-Mask-RCNN and DM-net, six image segmentation networks in total, are trained with the MRI and PET image training sets, and each trained network is tested with the MRI test set; the test-set segmentation results of each method are shown in Figure 4.
The Dice coefficient, sensitivity SEN and specificity SPE between the test-set segmentation results of each method and the true test-set labels are computed, as shown in Table 1:
Table 1
As can be seen from Figure 4, the present invention achieves higher segmentation accuracy on pancreatic tumor MRI images than the other existing image segmentation networks; since Unet and ResNet fail to segment the pancreatic tumor, their results are not shown in Figure 4.
The following conclusions can be drawn from Table 1:
comparing Mask-RCNN with DM-net shows that adding the dense connection module to Mask-RCNN greatly improves the sensitivity while the Dice coefficient changes little;
comparing T-Mask-RCNN with the present invention shows that adding the dense connection module raises the Dice coefficient, that is, makes the segmentation more accurate;
the comparison of DM-net with the present invention, and of Mask-RCNN with T-Mask-RCNN, both show that transfer learning effectively improves the segmentation accuracy;
together, these comparisons show that the dense connection module and the transfer strategy used in the present invention improve the segmentation performance.
Since Unet and ResNet fail to segment the pancreatic tumor, their evaluation metrics are not reported.
Simulation 2: the evaluation metrics of the test-set segmentation results of the present invention are compared with those of the pancreatic tumor segmentation methods proposed by Liang et al. and by Zhu et al.; the results are shown in Table 2.
Table 2
Algorithm | Dice (%) | SEN (%) | SPE (%)
---|---|---|---
Liang et al. | 72 | 79 | 94
Zhu et al. | 74.23 | 77.04 | 99.31
Present invention | 76.33 | 77.08 | 99.61
From Table 2 it can be concluded that the present invention improves on the accuracy, sensitivity and specificity of the pancreatic tumor segmentation methods in the existing literature.
Claims (6)
1. A pancreatic tumor segmentation method based on dense connection network migration learning is characterized by comprising the following steps:
(1) acquiring Positron Emission Tomography (PET) data and Magnetic Resonance Imaging (MRI) data from a hospital, preprocessing the PET data and the MRI data, and dividing the preprocessed data into a training set and a test set according to the ratio of 8: 2;
(2) constructing a segmentation network DM-net formed by cascading a feature extraction module, a region candidate network, an interested region alignment module and a three-branch module;
(3) initializing initial parameters of the segmentation network by using a He initialization method, and setting a loss function of the segmentation network DM-net as follows: loss = loss_cls + loss_box + loss_mask,
wherein loss_cls is the loss of the classification branch, loss_box the loss of the detection branch, and loss_mask the loss of the segmentation branch;
(4) using an Adam optimizer and taking the loss function as an optimization target, and using a Positron Emission Tomography (PET) training data set to iteratively learn the parameters of the segmentation network DM-net until the value of the loss function is not reduced any more, so as to obtain a network parameter W1 trained once;
(5) training DM-net by a transfer learning strategy and using magnetic resonance MRI data:
(5a) setting the parameter value of the feature extraction module of the DM-net as the value of the corresponding module in the trained network parameters W1, and re-initializing the parameters of the area candidate network, the area-of-interest alignment module and the three-branch module by using a He initialization method;
(5b) keeping the loss function of the network unchanged, and performing iterative learning on the parameters of the partitioned network DM-net by using a nuclear magnetic resonance MRI training data set until the value of the loss function is not reduced any more, so as to obtain a secondarily trained network parameter W2;
(6) loading the secondarily trained network parameter W2 into a segmentation network DM-net, and inputting a nuclear magnetic resonance MRI test data set into the DM-net to obtain an output probability map;
(7) setting the probability threshold value to be 0.5, and comparing each pixel point value of the output probability map with the threshold value thereof to obtain a final segmentation result:
setting the value of the pixel point with the median value of less than 0.5 in the output probability map as 0, representing the background area,
and setting the value of the pixel point with the median of the output probability map larger than 0.5 as 1, and representing the pancreatic tumor.
2. The method of claim 1, wherein the preprocessing of Positron Emission Tomography (PET) data and Magnetic Resonance Imaging (MRI) data in (1) is performed as follows:
(1a) taking the position of a nuclear magnetic resonance MRI image as a reference, using 3D Slicer software to adjust the spatial position of the positron emission computed tomography (PET) image of the same patient so that it overlaps with the nuclear magnetic resonance MRI image, and sequentially performing random rotation, horizontal flipping and vertical flipping to expand the data volume to 8 times the original;
(1b) cropping the expanded Positron Emission Tomography (PET) image and the expanded Magnetic Resonance Imaging (MRI) image from the original size of 512×512 to 320×320;
(1c) the cropped positron emission computed tomography PET image and the magnetic resonance MRI image are normalized respectively by the following formula:

Y = (X - X_min) / (X_max - X_min)

wherein Y is the normalized image, X is the input image, X_min is the minimum gray value of the pixels of the input image, and X_max is the maximum gray value of the pixels of the input image.
3. The method according to claim 1, wherein the modular structure of the split network DM-net constructed in (2) is as follows:
the feature extraction module is formed by cascading four dense connection modules, the output of the previous dense connection module being spliced with the output of the next dense connection module in the channel direction, and each dense connection module is composed of linear rectification ReLU activation functions and 3×3 two-dimensional convolutional layers;
the region candidate network is composed of a candidate-anchor-frame extraction unit and a two-classification network, the two-classification network being formed by cascading a plurality of convolutional layers and a plurality of fully connected layers;
the region of interest alignment module is formed by cascading a grid division unit, a bilinear interpolation unit and a maximum pooling unit, wherein:
the grid size of the grid dividing unit is L/7 multiplied by H/7, L is the length of the candidate frame, and H is the height of the candidate frame;
the sampling point of the bilinear interpolation unit is 4, namely four points are randomly selected in each grid to obtain the gray values of the four points by using a bilinear interpolation method;
the size of the sampling core of the maximum pooling unit is 2 multiplied by 2, and the step length is 2;
the three-branch module is formed by a classification module, a detection module and a segmentation module in parallel, wherein:
the classification module is formed by stacking a plurality of full connection layers, and the number of neurons of the last full connection layer is 2;
the detection module is formed by stacking a plurality of full connection layers, and the number of neurons of the last full connection layer is 4;
the segmentation module is composed of a plurality of upsampled layers and a 3 x 3 two-dimensional convolutional layer.
4. The method of claim 1, wherein the loss loss_cls of the classification branch, the loss loss_box of the detection branch and the loss loss_mask of the segmentation branch in (3) are respectively calculated as follows:

loss_cls = -(1/N_box) Σ_i [ p_i* · log p_i + (1 - p_i*) · log(1 - p_i) ]
loss_box = (1/N_box) Σ_i p_i* · smoothL1(t_i - t_i*)
loss_mask = -(1/N_p) Σ_i [ y_i · log pred_i + (1 - y_i) · log(1 - pred_i) ]

wherein p_i is the predicted classification probability of the i-th candidate box, p_i* = 1 when the i-th candidate box contains a pancreatic tumor and p_i* = 0 when it does not; t_i is the parameterized coordinate of candidate box i and t_i* is the parameterized coordinate of the true label of candidate box i; y_i is the segmentation label corresponding to the i-th pixel of the input image, y_i = 0 if pixel i belongs to the background region and y_i = 1 if pixel i belongs to the pancreatic tumor; pred_i is the predicted probability that the i-th pixel belongs to the pancreatic tumor; N_box is the number of candidate boxes in the image and N_p is the number of pixels in the image.
5. The method of claim 1, wherein the parameters of the segmentation network DM-net are iteratively learned using a positron emission tomography PET training dataset in (4) as follows:
(4a) 4 positron emission computed tomography (PET) images are acquired from a training data set and are respectively input into a segmentation network DM-net to obtain the segmentation result, the classification result and the detection result of each image, the loss function value of each image is calculated through the formula in the step (3), and the loss function values of the 4 images are averaged to obtain the average loss function value of the PET images;
(4b) carrying out back propagation on the calculated average loss function value to obtain a gradient value, and updating the network parameters of the partitioned network DM-net by using an Adam optimizer;
(4c) repeating the processes (4a) - (4b) until all data in the training data set are learned, and finishing iterative learning once;
(4d) and repeating the process (4c) for a plurality of times of iterative learning until the calculated average loss function value is not reduced, thereby obtaining a trained network parameter W1.
6. The method of claim 1, wherein the parameters of the segmented network DM-net are iteratively learned using the MRI training dataset in (5b) as follows:
(5b1) 4 nuclear magnetic resonance MRI images are acquired from the training data set and are respectively input into a segmentation network DM-net to obtain the segmentation result, the classification result and the detection result of each image, the loss function value of each image is calculated through the formula in (3), and the loss function values of the 4 images are averaged to obtain the average loss function value of the nuclear magnetic resonance MRI images;
(5b2) carrying out back propagation on the calculated average loss function value to obtain a gradient value, and updating the network parameters of the partitioned network DM-net by using an Adam optimizer;
(5b3) repeating the processes of (5b1) - (5b2) until all data in the training data set are learned, and finishing iterative learning;
(5b4) and repeating (5b3) the process, and carrying out iterative learning for a plurality of times until the calculated average loss function value is not reduced any more, so as to obtain the secondarily trained network parameter W2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110944394.6A CN113706486B (en) | 2021-08-17 | 2021-08-17 | Pancreatic tumor image segmentation method based on dense connection network migration learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113706486A true CN113706486A (en) | 2021-11-26 |
CN113706486B CN113706486B (en) | 2024-08-02 |
Family
ID=78653084
Cited By (5)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114529562A (en) * | 2022-02-22 | 2022-05-24 | 安徽大学 | Medical image segmentation method based on auxiliary learning task and re-segmentation constraint |
CN114742119A (en) * | 2021-12-30 | 2022-07-12 | 浙江大华技术股份有限公司 | Cross-supervised model training method, image segmentation method and related equipment |
CN114937171A (en) * | 2022-05-11 | 2022-08-23 | 复旦大学 | Alzheimer's classification system based on deep learning |
CN115222007A (en) * | 2022-05-31 | 2022-10-21 | 复旦大学 | An improved particle swarm parameter optimization method for glioma multi-task integrated network |
CN115527036A (en) * | 2022-11-25 | 2022-12-27 | 南方电网数字电网研究院有限公司 | Power grid scene point cloud semantic segmentation method and device, computer equipment and medium |
Citations (11)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636806A (en) * | 2018-11-22 | 2019-04-16 | 浙江大学山东工业技术研究院 | A kind of three-dimensional NMR pancreas image partition method based on multistep study |
US20190385021A1 (en) * | 2018-06-18 | 2019-12-19 | Drvision Technologies Llc | Optimal and efficient machine learning method for deep semantic segmentation |
CN110751651A (en) * | 2019-09-27 | 2020-02-04 | 西安电子科技大学 | MRI pancreas image segmentation method based on multi-scale transfer learning |
CN111476713A (en) * | 2020-03-26 | 2020-07-31 | 中南大学 | Weather image intelligent recognition method and system based on multi-depth convolutional neural network fusion |
US20200265579A1 (en) * | 2019-02-14 | 2020-08-20 | Koninklijke Philips N.V. | Computer-implemented method for medical image processing |
CN111640120A (en) * | 2020-04-09 | 2020-09-08 | 之江实验室 | Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network |
US20210034913A1 (en) * | 2018-05-23 | 2021-02-04 | Beijing Sensetime Technology Development Co., Ltd. | Method and device for image processing, and computer storage medium |
AU2020103905A4 (en) * | 2020-12-04 | 2021-02-11 | Chongqing Normal University | Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning |
CN112381787A (en) * | 2020-11-12 | 2021-02-19 | 福州大学 | Steel plate surface defect classification method based on transfer learning |
CN113011306A (en) * | 2021-03-15 | 2021-06-22 | 中南大学 | Method, system and medium for automatic identification of bone marrow cell images in continuous maturation stage |
CN113034461A (en) * | 2021-03-22 | 2021-06-25 | 中国科学院上海营养与健康研究所 | Pancreas tumor region image segmentation method and device and computer readable storage medium |
Patent Citations (11)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210034913A1 (en) * | 2018-05-23 | 2021-02-04 | Beijing Sensetime Technology Development Co., Ltd. | Method and device for image processing, and computer storage medium |
US20190385021A1 (en) * | 2018-06-18 | 2019-12-19 | Drvision Technologies Llc | Optimal and efficient machine learning method for deep semantic segmentation |
CN109636806A (en) * | 2018-11-22 | 2019-04-16 | 浙江大学山东工业技术研究院 | A kind of three-dimensional NMR pancreas image partition method based on multistep study |
US20200265579A1 (en) * | 2019-02-14 | 2020-08-20 | Koninklijke Philips N.V. | Computer-implemented method for medical image processing |
CN110751651A (en) * | 2019-09-27 | 2020-02-04 | 西安电子科技大学 | MRI pancreas image segmentation method based on multi-scale transfer learning |
CN111476713A (en) * | 2020-03-26 | 2020-07-31 | 中南大学 | Weather image intelligent recognition method and system based on multi-depth convolutional neural network fusion |
CN111640120A (en) * | 2020-04-09 | 2020-09-08 | 之江实验室 | Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network |
CN112381787A (en) * | 2020-11-12 | 2021-02-19 | 福州大学 | Steel plate surface defect classification method based on transfer learning |
AU2020103905A4 (en) * | 2020-12-04 | 2021-02-11 | Chongqing Normal University | Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning |
CN113011306A (en) * | 2021-03-15 | 2021-06-22 | 中南大学 | Method, system and medium for automatic identification of bone marrow cell images in continuous maturation stage |
CN113034461A (en) * | 2021-03-22 | 2021-06-25 | 中国科学院上海营养与健康研究所 | Pancreas tumor region image segmentation method and device and computer readable storage medium |
Non-Patent Citations (3)
* Cited by examiner, † Cited by third partyTitle |
---|
LIANG, Y et al.: "On the Development of MRI-Based Auto-Segmentation of Pancreatic Tumor Using Deep Neural Networks", International Journal of Radiation Oncology Biology Physics, 30 September 2019 (2019-09-30) * |
LING Tong et al.: "Prostate segmentation in CT images using a multimodal U-shaped network", CAAI Transactions on Intelligent Systems, 5 July 2018 (2018-07-05) * |
LI Yanzhi et al.: "Research on an aurora image classification algorithm based on an improved convolutional neural network", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition), 31 December 2019 (2019-12-31) * |
Also Published As
Publication number | Publication date |
---|---|
CN113706486B (en) | 2024-08-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-11-26 | PB01 | Publication | |
2021-12-14 | SE01 | Entry into force of request for substantive examination | |
2024-08-02 | GR01 | Patent grant | |