Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (8): 1850-1864. DOI: 10.3778/j.issn.1673-9418.2203023
• Graphics and Image •
XIE Juanying, ZHANG Kaiyun
Received: 2022-02-14; Revised: 2022-03-31; Online: 2022-08-01; Published: 2022-08-19
About author: XIE Juanying, born in 1971 in Xi'an, Shaanxi, Ph.D., professor, Ph.D. supervisor, senior member of CCF. Her research interests include machine learning, data mining, biomedical data analysis, etc.
XIE Juanying, ZHANG Kaiyun. XR-MSF-Unet: Automatic Segmentation Model for COVID-19 Lung CT Images[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(8): 1850-1864.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2203023
Table 1 COVID-19 CT image datasets for experiments

| Dataset | CT slices | COVID-19 CT slices | COVID-19 cases |
|---|---|---|---|
| COVID-19-1 | 100 | 100 | ~60 |
| COVID-19-2 | 829 | 373 | 9 |
| COVID-19-3 | 1844 | 1844 | 20 |
| COVID-19-4 | 785 | 785 | 50 |
Table 2 Comparison of model performance using different optimizers

| Optimizer | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| RMSProp | 0.5785 | 0.4147 | 0.5676 | 0.5516 |
| Adam | 0.5665 | 0.3886 | 0.5325 | 0.6092 |
| SGD | 0.2110 | 0.1594 | 0.2366 | 0.4189 |
| Adamax | 0.5287 | 0.4552 | 0.5592 | 0.6505 |
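The tables report Dice, IOU, F1-Score and Sensitivity for every configuration, but the metric implementation itself is not shown in this excerpt. The sketch below computes the standard definitions of these measures from binary masks in PyTorch; the function name and the smoothing constant are illustrative assumptions, not the authors' code. (For a single binary mask, Dice and F1-Score coincide; the separate columns in the tables presumably come from different averaging over slices.)

```python
import torch

def segmentation_metrics(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Standard overlap metrics for binary masks (values in {0, 1}).

    `pred` and `target` must have the same shape; `eps` avoids division by
    zero on empty masks. This is a generic sketch, not the paper's code.
    """
    pred = pred.float().flatten()
    target = target.float().flatten()

    tp = (pred * target).sum()          # true positives
    fp = (pred * (1 - target)).sum()    # false positives
    fn = ((1 - pred) * target).sum()    # false negatives

    dice = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    iou = (tp + eps) / (tp + fp + fn + eps)
    precision = (tp + eps) / (tp + fp + eps)
    sensitivity = (tp + eps) / (tp + fn + eps)   # recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)

    return {"Dice": dice.item(), "IOU": iou.item(),
            "F1-Score": f1.item(), "Sensitivity": sensitivity.item()}
```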
Table 3 Testing results of data augmentation efficacy

| Dataset | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| Without data augmentation | 0.5785 | 0.4147 | 0.5676 | 0.5516 |
| With data augmentation | 0.6144 | 0.4173 | 0.6236 | 0.6219 |
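Table 3 only shows that augmentation improves every metric; the specific transforms are not listed in this excerpt. A minimal sketch of a typical paired augmentation for 2D CT slices and their masks (flips plus a small random rotation, applied identically to both) is given below; the chosen transforms and parameters are assumptions for illustration, not the paper's pipeline.

```python
import random
import torchvision.transforms.functional as TF

def augment_pair(image, mask):
    """Apply the same random flip/rotation to a CT slice and its mask.

    Illustrative only: the augmentations actually used in the paper are
    not specified in this excerpt.
    """
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:
        image, mask = TF.vflip(image), TF.vflip(mask)
    angle = random.uniform(-10, 10)
    image = TF.rotate(image, angle)
    mask = TF.rotate(mask, angle)
    return image, mask
```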
Table 4 Experimental results of parameter X of residual blocks embedded in XR module

| Residual block parameter X | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| 0 | 0.6144 | 0.4173 | 0.6236 | 0.6219 |
| 1 | 0.6034 | 0.3949 | 0.5242 | 0.6081 |
| 2 | 0.6014 | 0.4049 | 0.5387 | 0.6186 |
| 4 | 0.6180 | 0.4223 | 0.5633 | 0.6257 |
| 8 | 0.6234 | 0.4436 | 0.5831 | 0.6376 |
| 16 | 0.6272 | 0.4308 | 0.5691 | 0.6262 |
| 32 | 0.6354 | 0.4587 | 0.6243 | 0.6458 |
| 64 | 0.5605 | 0.3309 | 0.4513 | 0.5851 |
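Table 4 sweeps the parameter X of the residual blocks embedded in the XR module, with X = 32 performing best and X = 0 reducing to the plain U-Net baseline. One plausible reading, given the XR name and the cited ResNeXt work [21], is that X is the number of parallel transformation paths in an aggregated residual block; the sketch below expresses that with grouped convolution, and the class name, channel width, and layer layout are all assumptions rather than the paper's definition.

```python
import torch.nn as nn

class AggregatedResidualBlock(nn.Module):
    """ResNeXt-style residual block whose `cardinality` plays the role of X.

    Illustrative stand-in for the residual blocks embedded in the XR module;
    channel widths and layout are assumptions.
    """
    def __init__(self, channels: int = 64, cardinality: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1,
                      groups=cardinality, bias=False),  # X parallel paths
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # identity shortcut
```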
Table 5 Experimental results for testing kernel sizes of SAM module

| Kernel size | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| 1×1 | 0.6243 | 0.4483 | 0.6115 | 0.6486 |
| 3×3 | 0.6009 | 0.3986 | 0.5295 | 0.6136 |
| 5×5 | 0.6008 | 0.4094 | 0.5388 | 0.6238 |
| 7×7 | 0.5679 | 0.3391 | 0.4606 | 0.5903 |
| 9×9 | 0.6104 | 0.4161 | 0.5553 | 0.6250 |
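Table 5 varies only the convolution kernel size inside the SAM (spatial attention) module, with 1×1 working best here. The paper's exact SAM design is not reproduced in this excerpt, so the block below is a CBAM-style spatial attention layer with a configurable kernel size, offered as one plausible reading; the class name and structure are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention with a configurable kernel size.

    Illustrative stand-in for the SAM module tested in Table 5, not the
    authors' exact implementation.
    """
    def __init__(self, kernel_size: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)         # (N, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values   # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                 # reweight feature map
```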
Table 6 Experiments for testing weights for feature fusion of MSF module

| Weight ratio | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| 1:1 | 0.6243 | 0.4483 | 0.6115 | 0.6486 |
| 2:3 | 0.0678 | 0.0438 | 0.0586 | 0.0712 |
| 3:2 | 0.1950 | 0.1986 | 0.1327 | 0.1134 |
| 3:7 | 0.0701 | 0.0483 | 0.0634 | 0.0736 |
| 7:3 | 0.2635 | 0.1449 | 0.2569 | 0.2924 |
| 1:4 | 0.0565 | 0.0401 | 0.0558 | 0.0771 |
| 4:1 | 0.2195 | 0.1359 | 0.2774 | 0.2333 |
| 1:9 | 0.5947 | 0.3796 | 0.5062 | 0.6040 |
| 9:1 | 0.6063 | 0.3908 | 0.5205 | 0.6075 |
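Table 6 sweeps the ratio used to fuse two feature branches in the MSF module, and equal weighting (1:1) clearly performs best. As a minimal sketch of what such fixed-weight fusion can look like (the function name and the normalization are assumptions, not the MSF module's actual fusion rule):

```python
import torch

def weighted_fusion(feat_a: torch.Tensor, feat_b: torch.Tensor,
                    w_a: float = 1.0, w_b: float = 1.0) -> torch.Tensor:
    """Fuse two same-shaped feature maps with fixed scalar weights.

    A 1:1 ratio (w_a == w_b) corresponds to the best row in Table 6.
    Illustrative only; the MSF module's fusion is not shown in this excerpt.
    """
    total = w_a + w_b
    return (w_a * feat_a + w_b * feat_b) / total
```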
Table 7 Performance of U-Net embedding MSF module in different positions

| Position of MSF module | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| L1 | 0.4601 | 0.3357 | 0.3724 | 0.5839 |
| L2 | 0.6243 | 0.4483 | 0.6115 | 0.6486 |
| L3 | 0.5804 | 0.3478 | 0.4721 | 0.5935 |
| L4 | 0.5939 | 0.4074 | 0.5419 | 0.6201 |
| L5 | 0.5823 | 0.3640 | 0.4847 | 0.6011 |
| L6 | 0.6007 | 0.4127 | 0.5475 | 0.6173 |
| L7 | 0.6175 | 0.3978 | 0.5304 | 0.6087 |
| L8 | 0.6097 | 0.4131 | 0.5465 | 0.6196 |
Table 8 Performance comparison of MSF and other attention modules

| Model | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| U-Net[4] | 0.6144 | 0.4173 | 0.6236 | 0.6219 |
| SE[29] | 0.5515 | 0.3076 | 0.4232 | 0.5792 |
| CBAM[30] | 0.5929 | 0.3620 | 0.4923 | 0.5986 |
| SCSE[31] | 0.6152 | 0.4095 | 0.5457 | 0.6167 |
| ECA[32] | 0.5937 | 0.3973 | 0.5284 | 0.6161 |
| MSF+U-Net | 0.6243 | 0.4483 | 0.6115 | 0.6486 |
Table 9 Ablation experiments and complexity analysis of MSF module

| Baseline | Components (DRF / GAM / LAM / SAM) | Dice | IOU | F1-Score | Sensitivity | #Parameters/10⁶ | FPS |
|---|---|---|---|---|---|---|---|
| U-Net | — | 0.6144 | 0.4173 | 0.6236 | 0.6219 | 16.47 | 43 |
| | √ √ √ | 0.2701 | 0.1487 | 0.2643 | 0.3735 | 48.76 | 16 |
| | √ √ √ | 0.5364 | 0.4018 | 0.5903 | 0.6026 | 52.52 | 18 |
| | √ √ √ | 0.6076 | 0.4162 | 0.5531 | 0.6278 | 55.66 | 9 |
| | √ √ √ | 0.1320 | 0.1234 | 0.2319 | 0.3239 | 56.69 | 16 |
| | √ √ √ √ | 0.6243 | 0.4483 | 0.6115 | 0.6486 | 64.54 | 30 |
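Besides the segmentation metrics, Table 9 reports model size (#Parameters, in millions) and inference speed (FPS). A minimal sketch of how such numbers are commonly measured in PyTorch is given below; the batch size, input resolution, and timing protocol are assumptions, not the authors' setup.

```python
import time
import torch

def count_parameters_millions(model: torch.nn.Module) -> float:
    """Total trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def measure_fps(model: torch.nn.Module, input_size=(1, 1, 512, 512),
                n_iters: int = 100, device: str = "cpu") -> float:
    """Rough frames-per-second estimate on dummy input (illustrative protocol)."""
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(10):                      # warm-up runs
        model(x)
    if device.startswith("cuda"):
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iters):
        model(x)
    if device.startswith("cuda"):
        torch.cuda.synchronize()
    return n_iters * input_size[0] / (time.time() - start)
```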
Table 10 Ablation experimental results of different modules on performance of U-Net

| Model | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| U-Net | 0.6144 | 0.4173 | 0.6236 | 0.6219 |
| XR+U-Net | 0.6354 | 0.4587 | 0.6243 | 0.6458 |
| MSF+U-Net | 0.6243 | 0.4483 | 0.6115 | 0.6486 |
| XR-MSF-Unet | 0.6465 | 0.4769 | 0.6358 | 0.6702 |
Table 11 Performance comparison of XR-MSF-Unet and other methods

| Model | Dice | IOU | F1-Score | Sensitivity | #Parameters/10⁶ | FPS |
|---|---|---|---|---|---|---|
| U-Net[4] | 0.6144 | 0.4173 | 0.6236 | 0.6219 | 16.47 | 45 |
| Attention U-Net[40] | 0.6052 | 0.3913 | 0.5203 | 0.5396 | 33.26 | 46 |
| UNet++[13] | 0.5513 | 0.3172 | 0.4323 | 0.5856 | 34.93 | 26 |
| FusionNet[38] | 0.5959 | 0.3787 | 0.5062 | 0.6067 | 77.88 | 19 |
| SegNet[39] | 0.5461 | 0.3031 | 0.4195 | 0.5805 | 28.08 | 62 |
| FCN[23] | 0.6017 | 0.4023 | 0.5595 | 0.6181 | 19.17 | 66 |
| PraNet[41] | 0.5391 | 0.3103 | 0.5210 | 0.5973 | 31.04 | 41 |
| BASNet[42] | 0.6342 | 0.4617 | 0.6115 | 0.6396 | 83.03 | 19 |
| CaraNet[43] | 0.6132 | 0.4457 | 0.5918 | 0.6035 | 44.48 | 40 |
| UNeXt[44] | 0.6242 | 0.4516 | 0.6145 | 0.6458 | 24.56 | 41 |
| XR-MSF-Unet | 0.6465 | 0.4769 | 0.6358 | 0.6702 | 98.21 | 15 |
Table 12 Generalization test of XR-MSF-Unet model

| Dataset | Dice | IOU | F1-Score | Sensitivity |
|---|---|---|---|---|
| COVID-19-1 | 0.6465 | 0.4769 | 0.6358 | 0.6702 |
| COVID-19-2 | 0.7950 | 0.8217 | 0.8453 | 0.8564 |
| COVID-19-3 | 0.8816 | 0.7715 | 0.8601 | 0.8752 |
| COVID-19-4 | 0.7278 | 0.7611 | 0.7715 | 0.8069 |
[1] Diagnosis and treatment protocol for novel coronavirus pneumonia (trial version 5)[EB/OL]. (2020-02-04)[2022-01-31]. http://www.nhc.gov.cn/yzygj/s7653p/202002/3b09b894ac9b4204a79db5b8912d4440/files/7260301a393845fc87fcf6dd52965ecb.pdf.
[2] LIU C, XIAO Z Y, DU N M. Application of improved convolutional neural network in medical image segmentation[J]. Journal of Frontiers of Computer Science and Technology, 2019, 13(9): 1593-1603.
[3] HU M, ZHOU X D, HUANG H C, et al. Computed-tomography image segmentation of cerebral hemorrhage based on improved U-shaped neural network[J]. Journal of Electronics & Information Technology, 2022, 44(1): 11-17.
[4] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[J]. Medical Image Computing and Computer-Assisted Intervention, 2015, 9351: 234-241.
[5] SHEN H Y, WU Y. Liver CT image segmentation method based on MSFA-Net[J/OL]. Journal of Frontiers of Computer Science and Technology (2021-08-03)[2022-01-31]. http://kns.cnki.net/kcms/detail/11.5602.TP.20210803.1652.004.html.
[6] YU S, WANG K J, HE L M, et al. Pneumothorax segmentation method based on improved U-net network[J]. Computer Engineering and Applications, 2022, 58(3): 207-214.
[7] QIAN B X, XIAO Z Y, SONG W. Application of improved convolutional neural network in lung image segmentation[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(8): 1358-1367.
[8] JI G H, HUANG M H, ZHANG Q, et al. CT manifestations and dynamic changes of corona virus disease 2019[J]. Chinese Journal of Medical Imaging Technology, 2020, 36(2): 242-247.
[9] XU Y H, LV X Y, ZHANG J Z, et al. CT features of different clinical types of COVID-19[J]. Chinese Journal of Medical Imaging, 2020, 28(12): 887-890.
[10] WANG Z, LIU Q D, DOU Q, et al. Contrastive cross-site learning with redesigned net for COVID-19 CT classification[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24(10): 2806-2813.
[11] BUTT C, GILL J L, CHUN D, et al. Deep learning system to screen coronavirus disease 2019 pneumonia[J]. Applied Intelligence, 2020(5): 1-7.
[12] CHEN J, WU L L, ZHANG J, et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography[J]. Scientific Reports, 2020, 10(1): 191-196.
[13] ZHOU Z W, SIDDIQUEE M M R, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation[C]// LNCS 11045: Proceedings of the 2018 International Workshop on Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Cham: Springer, 2018: 3-11.
[14] SHAN F, GAO Y Z, WANG J, et al. Abnormal lung quantification in chest CT images of COVID-19 patients with deep learning and its application to severity prediction[J]. Medical Physics, 2021, 48(4): 1633-1645.
[15] CHEN X C, YAO L N, ZHANG Y, et al. Residual attention U-Net for automated multi-class segmentation of COVID-19 chest CT images[J]. arXiv:2004.05645, 2020.
[16] FAN D P, ZHOU T, JI G P, et al. Inf-Net: automatic COVID-19 lung infection segmentation from CT images[J]. IEEE Transactions on Medical Imaging, 2020, 39: 2626-2637.
[17] BUDAK Ü, ÇIBUK M, CÖMERT Z, et al. Efficient COVID-19 segmentation from CT slices exploiting semantic segmentation with integrated attention mechanism[J]. Journal of Digital Imaging, 2021, 34(2): 263-272.
[18] KUMAR SINGH V, ABDEL-NASSER M, PANDEY N, et al. LungINFseg: segmenting COVID-19 infected regions in lung CT images based on a receptive-field-aware deep learning framework[J]. Diagnostics, 2021, 11(2): 158.
[19] WANG X, ZHENG C S, DENG X B, et al. A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT[J]. IEEE Transactions on Medical Imaging, 2020, 39(8): 2615-2625.
[20] WU C W, LIANG Y X, TIAN H Y. Research on COVID-19 CT image classification method based on improved convolutional neural network[J]. Computer Engineering and Applications, 2022, 58(2): 225-234.
[21] XIE S N, GIRSHICK R B, DOLLÁR P, et al. Aggregated residual transformations for deep neural networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5987-5995.
[22] JIANG F, GU Q, HAO H Z, et al. Survey on content-based image segmentation methods[J]. Journal of Software, 2017, 28(1): 160-183.
[23] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 39(4): 640-651.
[24] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2012, 60: 84-90.
[25] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, Jun 7-12, 2015. Washington: IEEE Computer Society, 2015: 1-9.
[26] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[27] CHAUDHARI S, POLATKAN G, RAMANATH R, et al. An attentive survey of attention models[J]. arXiv:1904.02874, 2019.
[28] TSOTSOS J K, CULHANE S M, WAI W Y K, et al. Modeling visual attention via selective tuning[J]. Artificial Intelligence, 1995, 78(1/2): 507-545.
[29] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 7132-7141.
[30] WOO S, PARK J, LEE J, et al. CBAM: convolutional block attention module[C]// LNCS 11211: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 3-19.
[31] ROY A G, NAVAB N, WACHINGER C. Concurrent spatial and channel ‘squeeze & excitation’ in fully convolutional networks[C]// LNCS 11070: Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Granada, Sep 16-20, 2018. Cham: Springer, 2018: 421-429.
[32] WANG Q, WU B, ZHU P, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 11531-11539.
[33] MILLETARI F, NAVAB N, AHMADI S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation[C]// Proceedings of the 4th International Conference on 3D Vision, Stanford, Oct 25-28, 2016. Washington: IEEE Computer Society, 2016: 565-571.
[34] JENSSEN H B. COVID-19 CT segmentation dataset[EB/OL]. [2022-01-31]. https://medicalsegmentation.com/covid19/.
[35] MA J, WANG Y, AN X, et al. Toward data efficient learning: a benchmark for COVID-19 CT lung and infection segmentation[J]. Medical Physics, 2021, 48(3): 1197-1210.
[36] MOROZOV S, ANDREYCHENKO A, PAVLOV N, et al. MosMedData: chest CT scans with COVID-19 related findings dataset[J]. arXiv:2005.06465, 2020.
[37] CANNY J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
[38] QUAN T M, HILDEBRAND D, JEONG W K. FusionNet: a deep fully residual convolutional neural network for image segmentation in connectomics[J]. Frontiers in Computer Science, 2021, 3: 1197-1210.
[39] BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39: 2481-2495.
[40] OKTAY O, SCHLEMPER J, FOLGOC L L, et al. Attention U-Net: learning where to look for the pancreas[J]. arXiv:1804.03999, 2018.
[41] FAN D P, JI G P, ZHOU T, et al. PraNet: parallel reverse attention network for polyp segmentation[C]// LNCS 12266: Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, Lima, Oct 4-8, 2020. Cham: Springer, 2020: 263-273.
[42] QIN X, FAN D, HUANG C, et al. Boundary-aware segmentation network for mobile and Web applications[J]. arXiv:2101.04704, 2021.
[43] LOU A, GUAN S, LOEW M, et al. CaraNet: context axial reverse attention network for segmentation of small medical objects[J]. arXiv:2108.07368, 2021.
[44] JEYA M J V, VISHAL M P. UNeXt: MLP-based rapid medical image segmentation network[J]. arXiv:2203.04967, 2022.
[45] ISENSEE F, JAEGER P F, KOHL S A A, et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation[J]. Nature Methods, 2021, 18: 203-211.