计算机科学与探索 (Journal of Frontiers of Computer Science and Technology) ›› 2022, Vol. 16 ›› Issue (11): 2471-2486. DOI: 10.3778/j.issn.1673-9418.2203082
Out of Domain Face Anti-spoofing: A Survey

SHI Yichen1, FENG Jun1,+, XIAO Lixuan1, HE Jingjing1, HU Jingjing2

Received: 2022-03-21
Revised: 2022-05-16
Online: 2022-11-01
Published: 2022-11-16

Corresponding author: + E-mail: fengjun@stdu.edu.cn
About author: SHI Yichen, born in 1998 in Taiyuan, Shanxi, M.S. candidate. His research interests include face anti-spoofing and transfer learning.
Supported by:
Abstract:
Face anti-spoofing (FAS) is an important means of protecting face recognition models, ensuring that a system remains secure and reliable when confronted with various presentation attacks. Current deep-learning-based FAS models achieve satisfactory results when the test data follow the same distribution as the training data, but when a trained model performs inference in out-of-domain scenarios, such as cross-domain transfer or out-of-distribution settings, its accuracy drops sharply. This paper focuses on the problems that silent (non-interactive) face anti-spoofing models encounter in real-world scenarios, namely unknown environments and unknown attack types. The corresponding solutions are divided into four categories: methods based on domain adaptation, methods based on domain generalization, methods based on zero-shot/few-shot learning, and methods based on anomaly detection. Each category and the deep learning methods it contains are summarized and compared, and the mechanism, model structure, advantages, limitations and applicable scenarios of the main methods are outlined. The public datasets, evaluation metrics and evaluation protocols commonly used for out-of-domain face anti-spoofing are introduced, together with the results of current state-of-the-art methods under some of these protocols. Finally, the difficulties and challenges of face anti-spoofing in practical applications are discussed, and future research directions are summarized.
CLC Number:
史屹琛, 封筠, 肖立轩, 贺晶晶, 胡晶晶. 领域外人脸活体检测综述[J]. 计算机科学与探索, 2022, 16(11): 2471-2486.
SHI Yichen, FENG Jun, XIAO Lixuan, HE Jingjing, HU Jingjing. Out of Domain Face Anti-spoofing: A Survey[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(11): 2471-2486.
| Category | Method | Mechanism | Backbone | Advantages | Limitations | Applicable scenario |
|---|---|---|---|---|---|---|
| Domain distribution discrepancy | OR-DA[ | Minimizes the maximum mean discrepancy (MMD) between the source- and target-domain feature spaces | AlexNet | Reduces the statistical distance between domains; strong interpretability | MMD alone cannot fully reflect the discrepancy between domains | Visible light |
| | DTCNN[ | Reduces the kernel-based MMD distance between the source and target domains | | | | |
| Adversarial transfer learning | Adversarial[ | Uses adversarial training so that the feature extractor learns features shared by the source and target domains | ResNet18 | Novel network structure | Adversarial training is unstable | Visible light |
| | USDAN[ | Designs different distribution alignment operations to strengthen generalization in unsupervised and semi-supervised domain adaptation | ResNet18 | Switches flexibly between unsupervised and semi-supervised settings | Only a single scheme for aligning the conditional distributions | Visible light |
| | DR-UDA[ | Disentangles domain-related from domain-irrelevant features and learns a shared embedding space for the source and target domains | ResNet18 | Classifies with domain-irrelevant features | Simple disentanglement strategy, not combined with prior knowledge | Visible light |
| | Deep feature augmentation[ | Extends a classical domain adaptation algorithm to deep neural networks and defines a domain adaptation layer based on deep feature augmentation | FCN | Needs little target-domain data; fast transfer | Not applicable when target-domain labels are unknown or no target samples are available | Visible light |
| Other methods | SDA[ | Proposes a self-domain adaptation framework trained with meta-learning | DepthNet | Opens a new direction for face anti-spoofing: mining discriminative cues from information unique to the test domain | Meta-learning is hard to train | Visible light |
Table 1 Summary of FAS methods based on domain adaptation
| Category | Method | Mechanism | Backbone | Advantages | Limitations | Applicable scenario |
|---|---|---|---|---|---|---|
| Meta-learning | RFMetaFAS[ | Adopts a fine-grained learning strategy and uses domain knowledge as auxiliary supervision for meta-learning in a regularized feature space | CNN | Mines more general discriminative cues; highly generalizable | Bi-level optimization of meta-learning is hard to train | Visible light |
| | HFN+MP[ | Designs a hierarchical fusion network and extracts meta features via meta-learning | ResNet50 | Fuses RGB images with meta pattern (MP) information, advancing research on hybrid methods | Meta feature extractor still needs optimization | Visible light |
| | PDL-FAS[ | Trains the classifier with the MLDG framework | PRNet | Unlike conventional methods, does not require domain labels | Performance falls short of state-of-the-art methods | Visible light |
| | D2AM[ | Designs regularization based on DRLM (domain representation learning module) and MMD to simulate harder and richer domain-shift scenarios | DepthNet | Highly interpretable; solves the mixed-domain FAS problem | — | Visible light |
| Adversarial transfer learning | MADDG[ | Uses depth supervision with multiple discriminators to improve the generalization of the feature extractor, together with a dual-force triplet mining constraint | DepthNet | Extracts domain-independent discriminative features | Multiple discriminators are hard to fit | Visible light |
| | SSDG[ | Uses single-side adversarial learning and an asymmetric triplet loss | ResNet18 | Asymmetric treatment improves generalization to unseen domains | Asymmetric design still needs refinement | Visible light |
| | DRDG[ | Performs dual reweighting with a sample reweighting module (SRM) and a feature reweighting module (FRM) | — | Reweights at both the feature and sample levels; strong interpretability | Model speed needs improvement | Visible light |
| | CADG[ | Uses output features and classification predictions as conditioning variables to assist adversarial domain generalization | Attention-UNet ResNet18 | Good matching of data distributions | Sample mining is limited by the batch size | Visible light |
| | HWT[ | Uses the detail sub-band maps of the hypercomplex wavelet transform (HWT) to extract rich image detail features | CNN | Fuses texture features, depth maps and rPPG signals, introducing prior knowledge | High memory footprint; hard to train | Visible light |
| | SSAN[ | Disentangles and recombines content and style features, trained with contrastive learning | DepthNet ResNet18 | Establishes a large-scale FAS benchmark | — | Visible light |
Table 2 Summary of FAS methods based on domain generalization
| Method | Mechanism | Backbone | Advantages | Limitations | Applicable scenario |
|---|---|---|---|---|---|
| ViTranZFAS[ | Fine-tunes a pre-trained ViT model | ViT | Introduces ViT into zero-shot/few-shot tasks | Large GPU memory footprint; hard to train | Visible light |
| CM-PAD[ | Proposes a continual meta-learning face anti-spoofing framework that follows the few-shot learning paradigm | DepthNet | Uses past knowledge to mitigate catastrophic forgetting | — | Visible light |
| AIM-FAS[ | Proposes AIM-FAS, using meta-learning to solve zero-shot/few-shot FAS | DepthNet | Proposes three zero-shot/few-shot FAS benchmarks | — | Visible light |
| SASA[ | Uses style transfer to create auxiliary domains for semantic alignment and adversarial learning | ResNet18 | Introduces a penalty mechanism; source-domain performance stays stable | Augmentation at the sample level; training speed is unsatisfactory | Visible light |
| DTN[ | Proposes a novel deep tree network (DTN) that learns features hierarchically and detects unknown spoofing attacks | Deep Tree Network | Fast inference; introduces the new SiW-M dataset | The maximum-variance splitting strategy could be improved | Visible light |
Table 3 Summary of FAS methods based on zero/few shot learning
| Method | Mechanism | Backbone | Advantages | Limitations | Applicable scenario |
|---|---|---|---|---|---|
| Anomaly[ | First to introduce anomaly detection into the FAS task | — | Opens a new research direction for FAS; outperforms conventional binary classification | Performance degrades under other imaging conditions | Visible light |
| End2End-Anomaly[ | Trains a one-class classifier with generated pseudo-negative samples | VGG-Face | Pseudo-negative sampling in feature space; relatively fast | Pseudo-negative generation strategy could be improved | Visible light |
| MCCNN[ | Proposes a one-class framework that learns representations with a multi-channel convolutional neural network (MCCNN) | LightCNN | Applicable to a wide range of scenarios | — | Grayscale, infrared, visible light, depth, thermal |
| IQM-GMM[ | Uses image quality measure (IQM) features and a Gaussian mixture model (GMM) to represent the probability distribution of genuine samples | GMM | Better suited to protocols where replay attacks are seen and print attacks unseen, or vice versa | One-class GMM classification performance is unsatisfactory | Visible light |
| Dataset Construction[ | Trains on a mixed dataset that adds non-professionally collected data | CNN | Better generalization | Non-professional data are unstable | Visible light |
Table 4 Summary of FAS methods based on anomaly detection
| Dataset | Year | Subjects | Size | Characteristics | Attack types |
|---|---|---|---|---|---|
| Oulu-NPU[ | 2017 | 55 | 4,950 videos | Three different illumination conditions and backgrounds | Print, video replay |
| CASIA-MFSD[ | 2012 | 50 | 600 videos | Low, medium and high image quality | Print, video replay |
| Replay-Attack[ | 2012 | 50 | 1,300 videos | Controlled and complex environments | Print, video replay |
| MSU-MFSD[ | 2015 | 35 | 380 videos | Single scenario | Print, video replay |
| SiW[ | 2018 | 165 | 4,478 videos | Varying camera distance, pose, illumination and expression | Print, video replay |
| HQ-WMCA[ | 2020 | 51 | 2,904 videos | Multiple modalities (color, depth, thermal, near-infrared spectra, shortwave infrared) | Print, video replay, mask, makeup, accessories |
| CASIA-SURF[ | 2020 | 1,000 | 21,000 videos | Multiple modalities (color, depth, infrared) | Print |
Table 5 Overview of mainstream datasets
| Method | O&C&I→M | O&M&I→C | O&C&M→I | I&C&M→O |
|---|---|---|---|---|
| MMD-AAE[ | 27.08 | 44.59 | 31.58 | 40.98 |
| MADDG[ | 17.69 | 24.50 | 22.19 | 27.98 |
| SSDG-M[ | 16.67 | 23.11 | 18.21 | 25.17 |
| DR-MD-Net[ | 17.02 | 19.68 | 20.87 | 25.02 |
| RFMeta[ | 13.89 | 20.27 | 17.30 | 16.45 |
| NAS-FAS[ | 19.53 | 16.54 | 14.51 | 13.80 |
| D2AM[ | 12.70 | 20.98 | 15.43 | 15.27 |
| SDA[ | 15.40 | 24.50 | 15.60 | 23.10 |
| DRDG[ | 12.43 | 19.05 | 15.56 | 16.63 |
| ANRL[ | 10.83 | 17.83 | 16.03 | 15.67 |
| SSAN-M[ | 10.42 | 16.47 | 14.00 | 19.51 |

All values are HTER/%.
Table 6 Results of cross-dataset testing on CASIA-MFSD, Replay-Attack, MSU-MFSD and Oulu-NPU
| Method | Replay | Print | Mask Attacks | Makeup Attacks | Partial Attacks | Average |
|---|---|---|---|---|---|---|
| Auxiliary[ | 16.80 | 6.90 | 21.42 | 27.07 | 31.60 | 23.6±18.5 |
| SpoofTrace[ | 7.80 | 7.30 | 8.98 | 25.77 | 15.77 | 14.2±13.2 |
| CDCN[ | 8.70 | 7.70 | 10.26 | 20.43 | 16.10 | 13.6±11.7 |
| CDCN-PS[ | 12.10 | 7.40 | 10.02 | 19.10 | 15.33 | 12.9±11.1 |
| SSR-FCN[ | 7.40 | 19.50 | 10.54 | 13.37 | 14.07 | 12.4±9.2 |
| DC-CDN[ | 12.10 | 9.70 | 8.44 | 17.30 | 13.03 | 11.9±10.3 |
| BCN[ | 12.80 | 5.70 | 8.04 | 15.33 | 13.70 | 11.2±9.2 |

All values are ACER/%.
Table 7 Results of cross-type testing on SiW-M dataset
| Dataset | Subset | Dataset | Subset |
|---|---|---|---|
| CASIA-SURF[ | P1 | Rose-Youtu[ | P2 |
| WMCA[ | P1 | WFFD[ | P2 |
| MSU-MFSD[ | P1 | CelebA-Spoof[ | P2 |
| HKBU-MARs V2[ | P1 | CASIA-MFSD[ | P2 |
| Oulu-NPU[ | P1 | Replay-Attack[ | P2 |
| CeFA[ | P1 | SiW[ | P2 |
Table 8 Datasets and their corresponding numbers
[1] | SHARIF M, BHAGAVATULA S, BAUER L, et al. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition[C]// Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Oct 24-28, 2016. New York: ACM, 2016: 1528-1540. |
[2] | 翁泽佳, 陈静静, 姜育刚. 基于域对抗学习的可泛化虚假人脸检测方法研究[J]. 计算机研究与发展, 2021, 58(7): 1476-1489. |
WENG Z J, CHEN J J, JIANG Y G. On the generalization of face forgery detection with domain adversarial learning[J]. Journal of Computer Research and Development, 2021, 58(7): 1476-1489. | |
[3] | DE FREITAS PEREIRA T, ANJOS A, DE MARTINO J M, et al. LBP-TOP based countermeasure against face spoofing attacks[C]// LNCS 7728: Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Nov 5-9, 2012. Berlin, Heidelberg: Springer, 2012: 121-132. |
[4] | BOULKENAFET Z, KOMULAINEN J, HADID A. Face anti-spoofing based on color texture analysis[C]// Proceedings of the 2015 IEEE International Conference on Image Processing, Quebec City, Sep 27-30, 2015. Piscataway: IEEE, 2015: 2636-2640. |
[5] | KOMULAINEN J, HADID A, PIETIKÄINEN M. Context based face anti-spoofing[C]// Proceedings of the IEEE 6th International Conference on Biometrics: Theory, Applications and Systems, Arlington, Sep 29-Oct 2, 2013. Piscataway: IEEE, 2013: 1-8. |
[6] | PATEL K, HAN H, JAIN A K. Secure face unlock: spoof detection on smartphones[J]. IEEE Transactions on Information Forensics and Security, 2016, 11(10): 2268-2283. |
[7] | BOULKENAFET Z, KOMULAINEN J, HADID A. Face antispoofing using speeded-up robust features and Fisher vector encoding[J]. IEEE Signal Processing Letters, 2016, 24(2): 141-145. |
[8] | LIN B, LI X, YU Z, et al. Face liveness detection by rPPG features and contextual patch-based CNN[C]// Proceedings of the 2019 3rd International Conference on Biometric Engineering and Applications, Stockholm, May 29-31, 2019. New York: ACM, 2019: 61-68. |
[9] | LIU Y, JOURABLOO A, LIU X. Learning deep models for face anti-spoofing: binary or auxiliary supervision[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 389-398. |
[10] | YANG J, LEI Z, LI S Z. Learn convolutional neural network for face anti-spoofing[J]. arXiv:1408.5601, 2014. |
[11] | YU Z T, ZHAO C X, WANG Z Z, et al. Searching central difference convolutional networks for face anti-spoofing[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 5294-5304. |
[12] | YU Z T, QIN Y X, ZHAO H S, et al. Dual-cross central difference network for face anti-spoofing[J]. arXiv:2105.01290, 2021. |
[13] | CHEN S, SONG X, FENG Z, et al. Face anti-spoofing with local difference network and binary facial mask supervision[J]. Journal of Electronic Imaging, 2022, 31(1): 013007. |
[14] | ATOUM Y, LIU Y J, JOURABLOO A, et al. Face anti-spoofing using patch and depth-based CNNs[C]// Proceedings of the 2017 IEEE International Joint Conference on Biometrics, Denver, Oct 1-4, 2017. Piscataway: IEEE, 2017: 319-328. |
[15] | WANG Z Z, YU Z T, ZHAO C X, et al. Deep spatial gradient and temporal depth learning for face anti-spoofing[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 5042-5051. |
[16] | KIM T, KIM Y H, KIM I, et al. BASN: enriching feature representation using bipartite auxiliary supervisions for face anti-spoofing[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-28, 2019. Piscataway: IEEE, 2019: 494-503. |
[17] | YU Z T, LI X B, NIU X S, et al. Face anti-spoofing with human material perception[C]// LNCS 12352: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 557-575. |
[18] | YU Z, LI X, WANG P, et al. TransRPPG: remote photoplethysmography transformer for 3D mask face presentation attack detection[J]. IEEE Signal Processing Letters, 2021, 28: 1290-1294. |
[19] | 王宏飞, 程鑫, 赵祥模, 等. 光流与纹理特征融合的人脸活体检测算法[J]. 计算机工程与应用, 2022, 58(6): 170-176. |
WANG H F, CHENG X, ZHAO X M, et al. Face liveness detection based on fusional optical flow and texture features[J]. Computer Engineering and Applications, 2022, 58(6): 170-176. |
[20] | 汪亚航, 宋晓宁, 吴小俊. 结合混合池化的双流人脸活体检测网络[J]. 中国图象图形学报, 2020, 25(7): 1408-1420. |
WANG Y H, SONG X N, WU X J. Two-stream face spoofing detection network combined with hybrid pooling[J]. Journal of Image and Graphics, 2020, 25(7): 1408-1420. | |
[21] | 马思源, 郑涵, 郭文. 应用深度光学应变特征图的人脸活体检测[J]. 中国图象图形学报, 2020, 25(3): 618-628. |
MA S Y, ZHENG H, GUO W. Deep optical strain feature map for face anti-spoofing[J]. Journal of Image and Graphics, 2020, 25(3): 618-628. | |
[22] | ZHOU K Y, LIU Z W, QIAO Y, et al. Domain generalization: a survey[J]. arXiv:2103.02503, 2021. |
[23] | 谢晓华, 卞锦堂, 赖剑煌. 人脸活体检测综述[J]. 中国图象图形学报, 2022, 27(1): 63-87. |
XIE X H, BIAN J T, LAI J H. Review on face liveness detection[J]. Journal of Image and Graphics, 2022, 27(1): 63-87. | |
[24] | 马玉琨, 徐姚文, 赵欣, 等. 人脸识别系统的活体检测综述[J]. 计算机科学与探索, 2021, 15(7): 1195-1206. |
MA Y K, XU Y W, ZHAO X, et al. Review of presentation attack detection in face recognition system[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(7): 1195-1206. |
[25] | 卢子谦, 陆哲明, 沈冯立, 等. 人脸反欺诈活体检测综述[J]. 信息安全学报, 2020, 5(2): 18-27. |
LU Z Q, LU Z M, SHEN F L, et al. A survey of face anti-spoofing[J]. Journal of Cyber Security, 2020, 5(2): 18-27. | |
[26] | 邓雄, 王洪春, 赵立军, 等. 人脸识别活体检测研究方法综述[J]. 计算机应用研究, 2020, 37(9): 2579-2585. |
DENG X, WANG H C, ZHAO L J, et al. Survey on face anti-spoofing in face recognition[J]. Application Research of Computers, 2020, 37(9): 2579-2585. | |
[27] | 蒋方玲, 刘鹏程, 周祥东. 人脸活体检测综述[J]. 自动化学报, 2021, 47(8): 1799-1821. |
JIANG F L, LIU P C, ZHOU X D. A review on face anti-spoofing[J]. Acta Automatica Sinica, 2021, 47(8): 1799-1821. | |
[28] | WANG M, DENG W H. Deep visual domain adaptation: a survey[J]. Neurocomputing, 2018, 312: 135-153. |
[29] | PAN S J, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2009, 22(10): 1345-1359. |
[30] | TZENG E, HOFFMAN J, SAENKO K, et al. Adversarial discriminative domain adaptation[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 2962-2971. |
[31] | WANG J D, LAN C L, LIU C, et al. Generalizing to unseen domains: a survey on domain generalization[J]. arXiv:2103.03097, 2021. |
[32] | GRETTON A, BORGWARDT K M, RASCH M J, et al. A kernel two-sample test[J]. The Journal of Machine Learning Research, 2012, 13(1): 723-773. |
[33] | LI H, LI W, CAO H, et al. Unsupervised domain adaptation for face anti-spoofing[J]. IEEE Transactions on Information Forensics and Security, 2018, 13(7): 1794-1809. |
[34] | TU X G, ZHANG H S, XIE M, et al. Deep transfer across domains for face antispoofing[J]. Journal of Electronic Imaging, 2019, 28(4): 043001. |
[35] | KIM Y E, NAM W J, MIN K, et al. Style-guided domain adaptation for face presentation attack detection[J]. arXiv: 2203.14565, 2022. |
[36] | HAMBLIN J, NIKHAL K, RIGGAN B S. Understanding cross domain presentation attack detection for visible face recognition[C]// Proceedings of the 16th IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway: IEEE, 2021: 1-8. |
[37] | 孙文赟, 金忠, 赵海涛, 等. 基于深度特征增广的跨域小样本人脸欺诈检测算法[J]. 计算机科学, 2021, 48(2): 330-336. |
SUN W Y, JIN Z, ZHAO H T, et al. Cross-domain few-shot face spoofing detection method based on deep feature augmentation[J]. Computer Science, 2021, 48(2): 330-336. |
[38] | HUANG H P, SUN D, LIU Y, et al. Adaptive transformers for robust few-shot cross-domain face anti-spoofing[J]. arXiv: 2203.12175, 2022. |
[39] | WANG G Q, HAN H, SHAN S G, et al. Improving cross-database face presentation attack detection via adversarial domain adaptation[C]// Proceedings of the 2019 International Conference on Biometrics, Crete, Jun 4-7, 2019. Piscataway: IEEE, 2019: 1-8. |
[40] | El-DIN Y S, MOUSTAFA M N, MAHDI H. Adversarial unsupervised domain adaptation guided with deep clustering for face presentation attack detection[C]// Proceedings of the 2021 International Conference on Image Processing and Vision Engineering, Apr 28-30, 2021: 36-45. |
[41] | JIA Y P, ZHANG J, SHAN S G, et al. Unified unsupervised and semi-supervised domain adaptation network for cross-scenario face anti-spoofing[J]. Pattern Recognition, 2021, 115: 107888. |
[42] | WANG G Q, HAN H, SHAN S G, et al. Unsupervised adversarial domain adaptation for cross-domain face presentation attack detection[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 56-69. |
[43] | TU X G, MA Z, ZHAO J, et al. Learning generalizable and identity-discriminative representations for face anti-spoofing[J]. ACM Transactions on Intelligent Systems and Technology, 2020, 11(5): 1-19. |
[44] | WANG J J, ZHANG J Y, BIAN Y, et al. Self-domain adaptation for face anti-spoofing[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2021: 2746-2754. |
[45] | MOHAMMADI A, BHATTACHARJEE S, MARCEL S. Domain adaptation for generalization of face presentation attack detection in mobile settings with minimal information[C]// Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, May 4-8, 2020. Piscataway: IEEE, 2020: 1001-1005. |
[46] | SHAO R, LAN X Y, YUEN P C. Regularized fine-grained meta face anti-spoofing[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, Feb 7-12, 2020. Menlo Park: AAAI, 2020: 11974-11981. |
[47] | CAI R Z, LI Z, WAN R J, et al. Learning meta pattern for face anti-spoofing[J]. arXiv:2110.06753, 2021. |
[48] | KIM Y E, LEE S W. Domain generalization with pseudo- domain label for face anti-spoofing[C]// Proceedings of the 6th Asian Conference on Pattern Recognition, Jeju Island, Nov 9-12, 2021: 431-442. |
[49] | LI D, YANG Y X, SONG Y Z, et al. Learning to generalize: meta-learning for domain generalization[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 3490-3497. |
[50] | CHEN Z H, YAO T P, SHENG K K, et al. Generalizable representation learning for mixture domain face anti-spoofing[J]. arXiv:2105.02453, 2021. |
[51] | ULYANOV D, VEDALDI A, LEMPITSKY V. Instance normalization: the missing ingredient for fast stylization[J]. arXiv:1607.08022, 2016. |
[52] | 蔡体健, 尘福春, 刘文鑫, 等. 一种基于条件对抗域泛化的人脸活体检测方法[J]. 计算机应用研究. DOI: 10.19734/j.issn.1001-3695.2021.12.0685. |
CAI T J, CHEN F C, LIU W X, et al. Face anti-spoofing method based on conditional adversarial domain generalization[J]. Application Research of Computers. DOI: 10.19734/j.issn.1001-3695.2021.12.0685. |
[53] | 李策, 李兰, 宣树星, 等. 采用超复数小波生成对抗网络的活体人脸检测算法[J]. 西安交通大学学报, 2021, 55(5): 113-122. |
LI C, LI L, XUAN S X, et al. Face anti-spoofing algorithm using generative adversarial networks with hypercomplex wavelet[J]. Journal of Xi’an Jiaotong University, 2021, 55(5): 113-122. | |
[54] | WANG Z, WANG Z Z, YU Z T, et al. Domain generalization via shuffled style assembly for face anti-spoofing[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, Jun 18-24, 2022. Piscataway: IEEE, 2022: 4113-4123. |
[55] | LIU S C, LU S T, XU H Y, et al. Feature generation and hypothesis verification for reliable face anti-spoofing[C]// Proceedings of the 36th AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2022: 1782-1791. |
[56] | SHAO R, LAN X Y, LI J W, et al. Multi-adversarial discriminative deep domain generalization for face presentation attack detection[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 10023-10031. |
[57] | JIA Y P, ZHANG J, SHAN S G, et al. Single-side domain generalization for face anti-spoofing[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 14-19, 2020. Piscataway: IEEE, 2020: 8481-8490. |
[58] | LIU S B, ZHANG K Y, YAO T P, et al. Dual reweighting domain generalization for face presentation attack detection[C]// Proceedings of the 30th International Joint Conference on Artificial Intelligence, Aug 19-27, 2021: 867-873. |
[59] | WANG Y Q, YAO Q M, KWOK J T, et al. Generalizing from a few examples: a survey on few-shot learning[J]. ACM Computing Surveys, 2020, 53(3): 1-34. |
[60] | WANG W, ZHENG V W, YU H, et al. A survey of zero-shot learning: settings, methods, and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-37. |
[61] | GEORGE A, MARCEL S. On the effectiveness of vision transformers for zero-shot face anti-spoofing[C]// Proceedings of the 2021 IEEE International Joint Conference on Biometrics, Shenzhen, Aug 4-7, 2021. Piscataway: IEEE, 2021: 1-8. |
[62] | PÉREZ-CABO D, JIMÉNEZ-CABELLO D, COSTA-PAZO A, et al. Learning to learn face-PAD: a lifelong learning approach[C]// Proceedings of the 2020 IEEE International Joint Conference on Biometrics, Houston, Sep 28-Oct 1, 2020. Piscataway: IEEE, 2020: 1-9. |
[63] | QIN Y X, ZHAO C X, ZHU X Y, et al. Learning meta model for zero-and few-shot face anti-spoofing[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, Feb 7-12, 2020. Menlo Park: AAAI, 2020: 11916-11923. |
[64] | QUAN R, WU Y, YU X, et al. Progressive transfer learning for face anti-spoofing[J]. IEEE Transactions on Image Processing, 2021, 30: 3946-3955. |
[65] | YANG B W, ZHANG J, YIN Z F, et al. Few-shot domain expansion for face anti-spoofing[J]. arXiv:2106.14162, 2021. |
[66] | LIU Y J, STEHOUWER J, JOURABLOO A, et al. Deep tree learning for zero-shot face anti-spoofing[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4680-4689. |
[67] | YANG J K, ZHOU K Y, LI Y X, et al. Generalized out-of-distribution detection: a survey[J]. arXiv:2110.11334, 2021. |
[68] | ARASHLOO S R, KITTLER J, CHRISTMAS W. An anomaly detection approach to face spoofing detection: a new formulation and evaluation protocol[J]. IEEE Access, 2017, 5: 13868-13882. |
[69] | ABDUH L, IVRISSIMTZIS I. Training dataset construction for anomaly detection in face anti-spoofing[C]// Proceedings of the Computer Graphics & Visual Computing, Lincoln, Sep 8-9, 2021. Aire-la-Ville: The Eurographics Association, 2021: 21-26. |
[70] | BAWEJA Y, OZA P, PERERA P, et al. Anomaly detection-based unknown face presentation attack detection[C]// Proceedings of the 2020 IEEE International Joint Conference on Biometrics, Houston, Sep 28-Oct 1, 2020. Piscataway: IEEE, 2020: 1-9. |
[71] | GEORGE A, MARCEL S. Learning one class representations for face presentation attack detection using multi-channel convolutional neural networks[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 361-375. |
[72] | NIKISINS O, MOHAMMADI A, ANJOS A, et al. On effectiveness of anomaly detection approaches against unseen presentation attacks in face anti-spoofing[C]// Proceedings of the 2018 International Conference on Biometrics, Gold Coast, Feb 20-23, 2018. Piscataway: IEEE, 2018: 75-81. |
[73] | FATEMIFAR S, ARASHLOO S R, AWAIS M, et al. Client-specific anomaly detection for face presentation attack detection[J]. Pattern Recognition, 2021, 112: 107696. |
[74] | PÉREZ-CABO D, JIMÉNEZ-CABELLO D, COSTA-PAZO A, et al. Deep anomaly detection for generalized face anti-spoofing[C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1591-1600. |
[75] | BOULKENAFET Z, KOMULAINEN J, LI L, et al. Oulu-NPU: a mobile face presentation attack database with real-world variations[C]// Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition, Washington, May 30-Jun 3, 2017. Piscataway: IEEE, 2017: 612-618. |
[76] | ZHANG Z, YAN J, LIU S, et al. A face antispoofing database with diverse attacks[C]// Proceedings of the 2012 5th IAPR International Conference on Biometrics, New Delhi, Mar 29-Apr 1, 2012. Piscataway: IEEE, 2012: 26-31. |
[77] | CHINGOVSKA I, ANJOS A, MARCEL S. On the effectiveness of local binary patterns in face anti-spoofing[C]// Proceedings of the 2012 International Conference of Biometrics Special Interest Group, Darmstadt, Sep 6-7, 2012. Piscataway: IEEE, 2012: 1-7. |
[78] | WEN D, HAN H, JAIN A K. Face spoof detection with image distortion analysis[J]. IEEE Transactions on Information Forensics and Security, 2015, 10(4): 746-761. |
[79] | HEUSCH G, GEORGE A, GEISSBÜHLER D, et al. Deep models and shortwave infrared information to detect face presentation attacks[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2020, 2(4): 399-409. |
[80] | ZHANG S F, LIU A J, WAN J, et al. CASIA-SURF: a large-scale multi-modal benchmark for face anti-spoofing[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2020, 2(2): 182-193. |
[81] | LI H L, PAN S J, WANG S Q, et al. Domain generalization with adversarial feature learning[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 5400-5409. |
[82] | WANG G, HAN H, SHAN S, et al. Cross-domain face presentation attack detection via multi-domain disentangled representation learning[C]// Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 6678-6687. |
[83] | YU Z T, WAN J, QIN Y X, et al. NAS-FAS: static-dynamic central difference network search for face anti-spoofing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 43(9): 3005-3023. |
[84] | LIU S B, ZHANG K Y, YAO T P, et al. Adaptive normalized representation learning for generalizable face anti-spoofing[C]// Proceedings of the 29th ACM International Conference on Multimedia, Chengdu, Oct 20-24, 2021. New York: ACM, 2021: 1469-1477. |
[85] | LIU Y, STEHOUWER J, LIU X. On disentangling spoof trace for generic face anti-spoofing[C]// LNCS 12363: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 406-422. |
[86] | YU Z, LI X, SHI J, et al. Revisiting pixel-wise supervision for face anti-spoofing[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021, 3(3): 285-295. |
[87] | DEB D, JAIN A K. Look locally infer globally: a generalizable face anti-spoofing approach[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 1143-1157. |
[88] | LIU S Q, YUEN P C, ZHANG S P, et al. 3D mask face anti-spoofing with remote photoplethysmography[C]// LNCS 9911: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Cham: Springer, 2016: 85-100. |
[89] | LIU A J, TAN Z C, WAN J, et al. CASIA-SURF CeFA: a benchmark for multi-modal cross-ethnicity face anti-spoofing[C]// Proceedings of the 2021 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, Jan 3-8, 2021. Piscataway: IEEE, 2021: 1179-1187. |
[90] | JIA S, LI X, HU C, et al. 3D face anti-spoofing with factorized bilinear coding[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(10): 4031-4045. |
[91] | ZHANG Y H, YIN Z F, LI Y D, et al. CelebA-Spoof: large-scale face anti-spoofing dataset with rich annotations[C]// LNCS 12357: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 70-85. |
|||||