Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (3): 552-564. DOI: 10.3778/j.issn.1673-9418.2106100
• Surveys and Frontiers •
Overview of Blind Deblurring Methods for Single Image
LIU Liping, SUN Jian+, GAO Shiyan
Received: 2021-06-28
Revised: 2021-08-26
Online: 2022-03-01
Published: 2021-09-01
About author: LIU Liping, born in 1977 in Tangshan, Hebei, Ph.D. candidate, professor. Her research interests include pattern recognition and intelligent systems, mining engineering, etc.
Corresponding author: SUN Jian, + E-mail: 1125439094@qq.com
LIU Liping, SUN Jian, GAO Shiyan. Overview of Blind Deblurring Methods for Single Image[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(3): 552-564.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2106100
Method | Applicable scenario | Mechanism | Advantages | Limitations |
---|---|---|---|---|
ECP[ | Motion blur | Combines BCP and DCP (bright and dark channel priors) | Considers both dark-channel and bright-channel information without any complex processing | Sensitive to low-light conditions and to the overall brightness of the image |
PSF estimation[ | Defocus blur, motion blur | Recovers the spatially varying PSF with sub-pixel accuracy | Fast and accurate; resolves the spatially varying PSF at sub-pixel resolution from a single image | Can only solve for unimodal kernels |
FastGyro[ | Motion blur | Uses inertial measurements to make existing feature detectors and descriptors robust to motion blur | Increases the number of detected points, offers higher repeatability and better detector localization accuracy | Unsuitable for strong motion blur; deblurring quality depends heavily on the angular velocity measured by the gyroscope |
VEM[ | Motion blur | Edge-selection algorithm based on variational Bayesian inference | Easy to implement; stable across varying content | May fall into local minima far from the ground truth |
LMG[ | Uniform blur | Half-quadratic splitting strategy within a coarse-to-fine MAP framework | Handles a variety of specific scenes | Inefficient on images with non-Gaussian noise; iterative variable updates take a long time |
CME[ | Motion blur | Uses edge profiles to predict step edges directly | Fast, widely applicable, lower computational complexity | Cannot handle diverse large-scale blur |
GCRF[ | Uniform blur | Learns a good kernel-fusion model from the training set in a data-driven way | Recovers the underlying kernel more accurately, giving sharper deconvolution results with less ringing | Kernel fusion relies entirely on the kernel estimates of individual methods, so the improvement is limited |
Fourier burst accumulation[ | Camera-shake blur | Weighted average of the images in the Fourier domain, with weights determined by the magnitude of the Fourier spectrum | Fast, low memory footprint, avoids the ringing typical of most deconvolution algorithms | Slow optimization |
ADMM[ | Camera-shake blur | Optimized edge detector | Fast, better perceptual quality, can be integrated into mobile phones | Cannot recover a perfect image with zero blur |
RGTV[ | Uniform blur, Gaussian blur | Alternately solves for the skeleton image and the blur kernel | Promotes a bimodal weight distribution for the sharp image from the blurred observation; enjoys desirable spectral properties | Poor estimation for non-uniform defocus blur |
PLMG with low-rank prior[ | Motion blur | Efficient ADMM and half-quadratic splitting | Effectively suppresses ringing in the latent image while preserving most details | Sensitive to noise in the blurred image |
L0 sparse prior[ | Synthetic and real blurred images | Analyzes the pixel and gradient distributions of label images | Effectively suppresses ringing around image edges; faster | Iterations tend to get stuck in local optima |
Hybrid gradient sparse prior[ | Real blurred images | Combines the sparsity of high-order image gradients with low-order gradients to build a hybrid gradient regularizer | Restores sharper edges and smoother details | Some ringing remains |
Dark-pixel prior[ | Real blur, motion blur | Exploits the fact that the dark pixels of a blurred image are non-sparse | Applicable to a large variety of images, better generality | Cannot be extended to restoring blurred video |
Table 1 Traditional blind deblurring methods
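Most entries in Table 1 can be read as variants of a single maximum a posteriori (MAP) estimation problem. The sketch below is a generic formulation assumed for illustration (each method substitutes its own image prior ρ and kernel prior φ); it is not the exact objective of any one entry.

```latex
% Generic MAP blind deblurring objective; the priors \rho(x) and \phi(k) vary per method
\begin{aligned}
  y &= k \otimes x + n, \\
  (\hat{x},\hat{k}) &= \arg\min_{x,\,k}\;
      \underbrace{\lVert k \otimes x - y \rVert_2^2}_{\text{data fidelity}}
      \;+\; \lambda\,\rho(x) \;+\; \gamma\,\phi(k).
\end{aligned}
```

Here y is the blurred observation, x the latent sharp image, k the blur kernel (PSF), and n noise; methods such as LMG and the L0/hybrid gradient priors differ mainly in the choice of ρ(x), and the minimization is typically carried out by half-quadratic splitting or ADMM, alternating between updating x and k from coarse to fine scales.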
Method | Applicable scenario | Mechanism | Advantages | Limitations |
---|---|---|---|---|
Spatially variant RNN[ | Motion blur, dynamic scene blur | Formulates the deblurring process with an infinite impulse response model | Weights can be learned by another network, with different weights learned for different blur | Must handle large regions and spatially varying structure at the same time |
SRN[ | Motion blur | New multi-scale recurrent network architecture | Fewer trainable parameters, more efficient training | Limited by the fixed dataset and training epochs |
DMPHN[ | Motion blur | End-to-end hierarchical CNN model similar to spatial pyramid matching | Needs only small filters, fast inference | Requires a large amount of GPU memory |
UID-GAN[ | Blur in specific image domains | Uses a disentanglement model to separate the content and blur features of a blurred image | No paired training images needed; unsupervised learning | Cannot preserve some details of text images |
DPSR[ | Low-resolution blurred images | Designs a new SISR degradation model | Deep plug-and-play framework that handles arbitrary blur kernels | Performs poorly on most real images, which do not match the degradation model |
BIE-RVD[ | Motion blur | Spatio-temporal video autoencoder based on an end-to-end differentiable architecture | Fast network, more accurate (especially for large blur) | Complex training tasks |
UMSN[ | Blurred natural images such as faces | Learns class-specific features independently and combines them to deblur face images | Deblurred images yield higher recognition accuracy while preserving important facial regions | Performs poorly on blurred images with indistinct features |
DDMS[ | Motion blur | Builds a fully convolutional architecture capable of filter transformation and feature modulation | Dispenses entirely with multi-scale processing and large filters; real-time deblurring | Unsuitable for restoring images whose important content lies in high frequencies |
Dr-Net[ | Arbitrary blur | Models the proximal operators of the image prior and the data fidelity with deep networks whose parameters are learned from data | Fast network, handles heterogeneous blur better | Optimal convolutional-layer weights must be found through the optimization framework |
DeepGyro[ | Motion blur | Integrates gyroscope measurements into a convolutional neural network | Minimizes artifacts near patch boundaries; handles extreme, spatially varying motion blur in real time without degrading already-sharp images | Sensitive to light sources in natural images |
MCGAN[ | Face and text blur | Combines a multi-class GAN with a new training loss for recovering details | Recovers image details of specific object categories from low-resolution blurred inputs | Reconstructed faces may contain checkerboard artifacts; with multi-class training it is hard to learn one unified model for all image classes |
MSLS[ | Uniform blur, non-uniform blur | Simultaneously recovers the latent sharp image and the blur kernel from a single blurred observation | Fewer artifacts in the restored image, fast runtime | Requires objects with contours much larger than the blur kernel |
RSRN[ | Blurred millimeter-wave radiation images | Multi-level residual recursive structure with multi-scale recurrent connections | More stable network training, better preservation of detail | Performs poorly on other blur types |
Deep multi-level wavelet transform[ | Motion blur | Multi-scale dilated dense block (MDDB) and spatial-domain reconstruction module (SDRM) | Enlarges the receptive field, reduces mapping complexity | Relatively complex training process |
Table 2 Blind deblurring methods based on deep learning
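To make the coarse-to-fine idea shared by several entries in Table 2 (e.g., SRN, MSCNN) concrete, the following is a minimal PyTorch sketch of a multi-scale residual deblurring network with shared weights. The module names and the simple additive fusion of the upsampled estimate are illustrative assumptions, not the architecture of any specific method in the table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleBlock(nn.Module):
    """One small residual stage; real networks (SRN, DMPHN) are far deeper."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual prediction: sharp estimate = blurred input + learned correction
        return x + self.body(x)

class CoarseToFineDeblur(nn.Module):
    """Apply the same block from the coarsest scale to the finest, feeding the
    upsampled previous estimate into the next scale (weight sharing, as in SRN)."""
    def __init__(self, num_scales=3):
        super().__init__()
        self.block = ScaleBlock()
        self.num_scales = num_scales

    def forward(self, blurred):
        estimate = None
        for s in reversed(range(self.num_scales)):  # coarse -> fine
            size = (blurred.shape[-2] // 2 ** s, blurred.shape[-1] // 2 ** s)
            x = F.interpolate(blurred, size=size, mode='bilinear', align_corners=False)
            if estimate is not None:
                # Hypothetical additive fusion; SRN concatenates features instead
                x = x + F.interpolate(estimate, size=size, mode='bilinear',
                                      align_corners=False)
            estimate = self.block(x)
        return estimate

if __name__ == "__main__":
    net = CoarseToFineDeblur()
    restored = net(torch.rand(1, 3, 128, 128))  # toy input
    print(restored.shape)  # torch.Size([1, 3, 128, 128])
```

In practice such networks are trained end to end with an L1 or L2 loss between the finest-scale estimate and the sharp ground truth, often with additional losses at the coarser scales.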
Method | PSNR/dB | SSIM |
---|---|---|
Zhang et al. (SVRNN)[ | 29.19 | 0.9306 |
Tao et al. (SRN)[ | 30.26 | 0.9342 |
Kupyn et al. (DeblurGAN)[ | 28.70 | 0.9580 |
Kupyn et al. (DeblurGAN-v2)[ | 29.55 | 0.9345 |
Aljadaany et al. (Dr-Net)[ | 30.35 | 0.9610 |
Sun et al. (KCNN)[ | 24.64 | 0.8429 |
Nah et al. (MSCNN)[ | 29.08 | 0.9135 |
Pan et al. (dark channel prior)[ | 23.52 | 0.8336 |
Whyte et al. (parameterized geometric model)[ | 24.53 | 0.8458 |
Xu et al. ( | 20.30 | 0.7407 |
Kim et al. (new energy model)[ | 23.64 | 0.8239 |
Liu et al. (hybrid neural network recursive filters for low-level vision)[ | 25.75 | 0.8654 |
Gong et al. (MBMF)[ | 26.06 | 0.8632 |
Table 3 PSNR and SSIM of different methods on GoPro dataset
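The PSNR and SSIM values in Tables 3-6 are obtained in the usual way by comparing each restored image with its ground-truth sharp image and averaging over the test set. The snippet below is a minimal sketch using scikit-image; grayscale float inputs in [0, 1] are assumed to keep the SSIM call simple, and dataset loading is omitted.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored: np.ndarray, sharp: np.ndarray):
    """PSNR (dB) and SSIM between one restored image and its ground truth.
    Both arrays are float grayscale images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(sharp, restored, data_range=1.0)
    ssim = structural_similarity(sharp, restored, data_range=1.0)
    return psnr, ssim

# Toy example; a real GoPro/Kohler evaluation loops over all test images
sharp = np.random.rand(256, 256)
restored = np.clip(sharp + 0.01 * np.random.randn(256, 256), 0.0, 1.0)
print(evaluate_pair(restored, sharp))
```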
Method | PSNR/dB | SSIM |
---|---|---|
Tao et al. (SRN)[ | 26.75 | 0.8370 |
Kupyn et al. (DeblurGAN)[ | 26.10 | 0.8160 |
Kupyn et al. (DeblurGAN-v2)[ | 26.72 | 0.8360 |
Aljadaany et al. (Dr-Net)[ | 27.20 | 0.8650 |
Sun et al. (KCNN)[ | 25.22 | 0.7735 |
Nah et al. (MSCNN)[ | 26.48 | 0.8079 |
Whyte et al. (parameterized geometric model)[ | 27.03 | 0.8090 |
Xu et al. ( | 27.47 | 0.8110 |
Kim et al. (new energy model)[ | 24.68 | 0.7937 |
Table 4 PSNR and SSIM of different methods on Kohler dataset
Method | PSNR/dB | SSIM |
---|---|---|
Kupyn et al.[ | 26.45 | 0.880 |
Yasarla et al.[ | 27.75 | 0.897 |
Nah et al.[ | 24.12 | 0.823 |
Cho et al.[ | 16.82 | 0.574 |
Pan et al.[ | 20.93 | 0.727 |
Krishnan et al.[ | 19.30 | 0.670 |
Xu et al.[ | 20.11 | 0.711 |
Shan et al.[ | 19.57 | 0.670 |
Zhong et al.[ | 16.41 | 0.614 |
Shen et al.[ | 25.58 | 0.861 |
Table 5 PSNR and SSIM of different methods on Helen dataset
Method | PSNR/dB | SSIM |
---|---|---|
Kupyn et al.[ | 25.42 | 0.884 |
Yasarla et al.[ | 26.62 | 0.908 |
Nah et al.[ | 22.43 | 0.832 |
Cho et al.[ | 13.03 | 0.445 |
Pan et al.[ | 18.59 | 0.677 |
Krishnan et al.[ | 18.38 | 0.672 |
Xu et al.[ | 18.93 | 0.685 |
Shan et al.[ | 18.43 | 0.644 |
Zhong et al.[ | 17.26 | 0.695 |
Shen et al.[ | 24.34 | 0.860 |
Table 6 PSNR and SSIM of different methods on CelebA dataset
[1] STOCKHAM T G, CANNON T M, INGEBRETSEN R B. Blind deconvolution through digital signal processing[J]. Proceedings of the IEEE, 1975, 63(4): 678-692.
[2] OPPENHEIM A, SCHAFER R, STOCKHAM T. Nonlinear filtering of multiplied and convolved signals[J]. IEEE Transactions on Audio and Electroacoustics, 1968, 16(3): 437-466.
[3] FERGUS R, SINGH B, HERTZMANN A, et al. Removing camera shake from a single photograph[J]. ACM Transactions on Graphics, 2006, 25(3): 787-794.
[4] YOU Y L, KAVEH M. Blind image restoration by anisotropic regularization[J]. IEEE Transactions on Image Processing, 1999, 8(3): 396-407.
[5] CHAN T F, WONG C K. Total variation blind deconvolution[J]. IEEE Transactions on Image Processing, 1998, 7(3): 370-375.
[6] KRISHNAN D, FERGUS R. Fast image deconvolution using hyper-Laplacian priors[C]// Proceedings of the 22nd International Conference on Neural Information Processing Systems, Vancouver, Dec 7-10, 2009. Red Hook: Curran Associates, 2009: 1033-1041.
[7] PAN J S, SU Z X. Fast ℓ0-regularized kernel estimation for robust motion deblurring[J]. IEEE Signal Processing Letters, 2013, 20(9): 841-844.
[8] PAN J S, LIN Z C, SU Z X, et al. Robust kernel estimation with outliers handling for image deblurring[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 2800-2808.
[9] PAN J S, HU Z, SU Z X, et al. L0-regularized intensity and gradient prior for deblurring text images and beyond[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(2): 342-355.
[10] PAN J S, SUN D Q, PFISTER H, et al. Deblurring images via dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(10): 2315-2328.
[11] HE K M, SUN J, TANG X O. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341-2353.
[12] YAN Y Y, REN W Q, GUO Y F, et al. Image deblurring via extreme channels prior[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6978-6986.
[13] JOSHI N, SZELISKI R, KRIEGMAN D J. PSF estimation using sharp edge prediction[C]// Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Jun 24-26, 2008. Washington: IEEE Computer Society, 2008: 1-8.
[14] JIA J Y. Single image motion deblurring using transparency[C]// Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, Jun 18-23, 2007. Washington: IEEE Computer Society, 2007: 1-8.
[15] BAR L, SOCHEN N A, KIRYATI N. Restoration of images with piecewise space-variant blur[C]// LNCS 4485: Proceedings of the 2007 International Conference on Scale Space and Variational Methods in Computer Vision, Ischia, May 30-Jun 2, 2007. Berlin, Heidelberg: Springer, 2007: 533-544.
[16] SOREL M, FLUSSER J. Space-variant restoration of images degraded by camera motion blur[J]. IEEE Transactions on Image Processing, 2008, 17(2): 105-116.
[17] XU L, JIA J Y. Two-phase kernel estimation for robust motion deblurring[C]// LNCS 6311: Proceedings of the 11th European Conference on Computer Vision, Heraklion, Sep 5-11, 2010. Berlin, Heidelberg: Springer, 2010: 157-170.
[18] BAE S, DURAND F. Defocus magnification[J]. Computer Graphics Forum, 2007, 26(3): 571-579.
[19] DAI S Y, WU Y. Motion from blur[C]// Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Jun 24-26, 2008. Washington: IEEE Computer Society, 2008: 1-8.
[20] LEVIN A. Blind motion deblurring using image statistics[C]// Proceedings of the 20th Annual Conference on Neural Information Processing Systems, Vancouver, Dec 4-7, 2006. Cambridge: MIT Press, 2006: 841-848.
[21] MUSTANIEMI J, KANNALA J, SÄRKKÄ S, et al. Fast motion deblurring for feature detection and matching using inertial measurements[C]// Proceedings of the 24th International Conference on Pattern Recognition, Beijing, Aug 20-24, 2018. Washington: IEEE Computer Society, 2018: 3068-3073.
[22] YANG L G, JI H. A variational EM framework with adaptive edge selection for blind motion deblurring[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 10167-10176.
[23] CHEN L, FANG F M, WANG T T, et al. Blind image deblurring with local maximum gradient prior[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1742-1750.
[24] VASU S, RAJAGOPALAN A N. From local to global: edge profiles to camera motion in blurred images[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4447-4456.
[25] MAI L, LIU F. Kernel fusion for better image deblurring[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, Jun 7-12, 2015. Piscataway: IEEE, 2015: 371-380.
[26] DELBRACIO M, SAPIRO G. Burst deblurring: removing camera shake through Fourier burst accumulation[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, Jun 7-12, 2015. Piscataway: IEEE, 2015: 2385-2393.
[27] PENDYALA S, RAMESHA P, BNS A V, et al. Blur detection and fast blind image deblurring[C]// Proceedings of the 2015 Annual IEEE India Conference, New Delhi, Dec 17-20, 2015. Piscataway: IEEE, 2015: 1-4.
[28] BAI Y C, CHEUNG G, LIU X M, et al. Graph-based blind image deblurring from a single photograph[J]. IEEE Transactions on Image Processing, 2018, 28(3): 1404-1418.
[29] ZHOU Z H, ZHANG Y L, TANG Q F, et al. Multi-scale low-rank blind image deblurring method[J]. Journal of Xi'an Jiaotong University, 2021, 55(9): 168-177.
[30] LIU N, ZHAO H M, LI D P, et al. Blind restoration of motion blur label image based on L0 sparse priors[J]. Journal of South China University of Technology (Natural Science Edition), 2021, 49(3): 8-16.
[31] XU N S, WANG C, REN G Q, et al. Blind image restoration method regularized by hybrid gradient sparse prior[J]. Opto-Electronic Engineering, 2021, 48(6): 58-69.
[32] TU C M, CHEN G B, LIU C. Dark-pixel-prior blind deblurring method[J]. Computer Engineering and Applications, 2020, 56(10): 213-219.
[33] CRONJE J. Deep convolutional neural networks for dense non-uniform motion deblurring[C]// Proceedings of the 2015 International Conference on Image and Vision Computing, New Zealand, Nov 23-24, 2015. Piscataway: IEEE, 2015: 1-5.
[34] XU X Y, PAN J S, ZHANG Y J, et al. Motion blur kernel estimation via deep learning[J]. IEEE Transactions on Image Processing, 2018, 27(1): 194-205.
[35] HRADIŠ M, KOTERA J, ZEMCÍK P, et al. Convolutional neural networks for direct text deblurring[C]// Proceedings of the British Machine Vision Conference 2015, Swansea, Sep 7-10, 2015. Durham: BMVA Press, 2015: 1-13.
[36] SCHULER C J, HIRSCH M, HARMELING S, et al. Learning to deblur[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(7): 1439-1451.
[37] ZHANG J W, PAN J S, REN J S J, et al. Dynamic scene deblurring using spatially variant recurrent neural networks[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 2521-2529.
[38] TAO X, GAO H Y, SHEN X Y, et al. Scale-recurrent network for deep image deblurring[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Piscataway: IEEE, 2018: 8174-8182.
[39] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. arXiv:1406.2661, 2014.
[40] LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4681-4690.
[41] CHEN Y, WU F G, ZHAO J S. Motion deblurring via using generative adversarial networks for space-based imaging[C]// Proceedings of the 16th IEEE International Conference on Software Engineering Research, Management and Applications, Kunming, Jun 13-15, 2018. Washington: IEEE Computer Society, 2018: 37-41.
[42] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[J]. arXiv:1704.00028, 2017.
[43] KUPYN O, BUDZAN V, MYKHAILYCH M, et al. DeblurGAN: blind motion deblurring using conditional adversarial networks[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8183-8192.
[44] KUPYN O, MARTYNIUK T, WU J R, et al. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 8878-8887.
[45] ZHANG H G, DAI Y C, LI H D, et al. Deep stacked hierarchical multi-patch network for image deblurring[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 15-20, 2019. Piscataway: IEEE, 2019: 5978-5986.
[46] LU B Y, CHEN J C, CHELLAPPA R. UID-GAN: unsupervised image deblurring via disentangled representations[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2020, 2(1): 26-39.
[47] ZHANG K, ZUO W M, ZHANG L. Deep plug-and-play super-resolution for arbitrary blur kernels[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1671-1681.
[48] PUROHIT K, SHAH A B, RAJAGOPALAN A N. Bringing alive blurred moments[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 6830-6839.
[49] YASARLA R, PERAZZI F, PATEL V M. Deblurring face images using uncertainty guided multi-stream semantic networks[J]. IEEE Transactions on Image Processing, 2020, 29: 6251-6263.
[50] PUROHIT K, RAJAGOPALAN A N. Region-adaptive dense network for efficient motion deblurring[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, Feb 7-12, 2020. Menlo Park: AAAI, 2020: 11882-11889.
[51] ALJADAANY R, PAL D K, SAVVIDES M. Douglas-Rachford networks: learning both the image prior and data fidelity terms for blind image deconvolution[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 10235-10244.
[52] MUSTANIEMI J, KANNALA J, SÄRKKÄ S, et al. Gyroscope-aided motion deblurring with deep networks[C]// Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision, Waikoloa Village, Jan 7-11, 2019. Piscataway: IEEE, 2019: 1914-1922.
[53] XU X Y, SUN D Q, PAN J S, et al. Learning to super-resolve blurry face and text images[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 251-260.
[54] BAI Y C, JIA H Z, JIANG M, et al. Single-image blind deblurring using multi-scale latent structure prior[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 30(7): 2033-2045.
[55] XU G H, LIU Y Y, ZHU L. Millimeter wave radiation image deblurring based on residual recursive network[J/OL]. Progress in Laser and Optoelectronics [2021-08-06]. http://kns.cnki.net/kcms/detail/31.1690.TN.20210804.1202.038.html.
[56] CHEN S Z, CAO S P, CUI M Y, et al. Image blind deblurring algorithm based on deep multi-level wavelet transform[J]. Journal of Electronics & Information Technology, 2021, 43(1): 154-161.
[57] LEVIN A, WEISS Y, DURAND F, et al. Understanding and evaluating blind deconvolution algorithms[C]// Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, Jun 20-25, 2009. Washington: IEEE Computer Society, 2009: 1964-1971.
[58] SUN L B, CHO S, WANG J, et al. Edge-based blur kernel estimation using patch priors[C]// Proceedings of the 2013 IEEE International Conference on Computational Photography, Cambridge, Apr 19-21, 2013. Washington: IEEE Computer Society, 2013: 1-8.
[59] LAI W S, HUANG J B, HU Z, et al. A comparative study for single image blind deblurring[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 1701-1709.
[60] KÖHLER R, HIRSCH M, MOHLER B J, et al. Recording and playback of camera shake: benchmarking blind deconvolution with a real-world database[C]// LNCS 7578: Proceedings of the 12th European Conference on Computer Vision, Florence, Oct 7-13, 2012. Berlin, Heidelberg: Springer, 2012: 27-40.
[61] NAH S, KIM T H, LEE K M. Deep multi-scale convolutional neural network for dynamic scene deblurring[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 257-265.
[62] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[63] CHO S, LEE S. Fast motion deblurring[J]. ACM Transactions on Graphics, 2009, 28(5): 145.
[64] PAN J S, SUN D Q, PFISTER H, et al. Blind image deblurring using dark channel prior[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 1628-1636.
[65] LI L, PAN J S, LAI W S, et al. Learning a discriminative prior for blind image deblurring[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Washington: IEEE Computer Society, 2018: 6616-6625.
[66] WHYTE O, SIVIC J, ZISSERMAN A, et al. Non-uniform deblurring for shaken images[J]. International Journal of Computer Vision, 2012, 98(2): 168-186.
[67] KRISHNAN D, TAY T, FERGUS R. Blind deconvolution using a normalized sparsity measure[C]// Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, Jun 20-25, 2011. Washington: IEEE Computer Society, 2011: 233-240.
[68] XU L, ZHENG S C, JIA J Y. Unnatural L0 sparse representation for natural image deblurring[C]// Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, Jun 23-28, 2013. Washington: IEEE Computer Society, 2013: 1107-1114.
[69] KIM T Y, LEE K M. Segmentation-free dynamic scene deblurring[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 23-28, 2014. Washington: IEEE Computer Society, 2014: 2766-2773.
[70] LIU S F, PAN J S, YANG M H. Learning recursive filters for low-level vision via a hybrid neural network[C]// LNCS 9908: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Cham: Springer, 2016: 560-576.
[71] GONG D, YANG J, LIU L Q, et al. From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 3806-3815.
[72] SHAN Q, JIA J Y, AGARWALA A. High-quality motion deblurring from a single image[J]. ACM Transactions on Graphics, 2008, 27(3): 73.
[73] ZHONG L, CHO S, METAXAS D N, et al. Handling noise in single image deblurring using directional filters[C]// Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, Jun 23-28, 2013. Washington: IEEE Computer Society, 2013: 612-619.
[74] SHEN Z Y, LAI W S, XU T F, et al. Deep semantic face deblurring[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8260-8269.
[75] PAN J S, HU Z, SU Z X, et al. Deblurring text images via L0-regularized intensity and gradient prior[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 23-28, 2014. Washington: IEEE Computer Society, 2014: 2901-2908.
[76] MADAM N T, KUMAR S, RAJAGOPALAN A N. Unsupervised class-specific deblurring[C]// LNCS 11214: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 358-374.
[77] LU B Y, CHEN J C, CHELLAPPA R. Unsupervised domain-specific deblurring via disentangled representations[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 10225-10234.