[1] LIANG R G, LV P Z, ZHAO Y, et al. A survey of audio-visual deepfake detection techniques[J]. Journal of Cyber Security, 2020, 5(2): 1-17.
梁瑞刚, 吕培卓, 赵月, 等. 视听觉深度伪造检测技术研究综述[J]. 信息安全学报, 2020, 5(2): 1-17.
[2] MIN F, LU T W, ZHANG Y D. Automatic face replacement in photographs based on active shape models[C]//Proceedings of the 2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications, Wuhan, Nov 28-29, 2009. Piscataway: IEEE, 2009: 170-173.
[3] Welcome to Chechnya[EB/OL]. [2021-07-12]. https://www.welcometochechnya.com.
[4] Insecam[EB/OL]. [2021-07-12]. https://www.insecam.org.
[5] BORSHUKOV G, PIPONI D, LARSEN O, et al. Universal capture: image-based facial animation for “The Matrix Reloaded”[C]//Proceedings of the 2005 International Conference on Computer Graphics and Interactive Techniques, Los Angeles, Jul 31-Aug 4, 2005. New York: ACM, 2005: 16.
[6] JOSHI N, MATUSIK W, ADELSON E H, et al. Personal photo enhancement using example images[J]. ACM Transactions on Graphics, 2010, 29(2): 12.
[7] LEYVAND T, COHEN-OR D, DROR G, et al. Data-driven enhancement of facial attractiveness[J]. ACM Transactions on Graphics, 2008, 27(3): 38.
[8] NARUNIEC J, HELMINGER L, SCHROERS C, et al. High-resolution neural face swapping for visual effects[J]. Computer Graphics Forum, 2020, 39(4): 173-184.
[9] ZHANG J N, ZENG X F, WANG M M, et al. FreeNet: multi-identity face reenactment[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 5325-5334.
[10] LUO C W, YU J, WANG Z F. Synthesizing performance-driven facial animation[J]. Acta Automatica Sinica, 2014, 40(10): 2245-2252.
罗常伟, 於俊, 汪增福. 视频驱动人脸动画合成[J]. 自动化学报, 2014, 40(10): 2245-2252.
[11] H.R.3600-Deepfakes report act of 2019[EB/OL]. [2021-07-12]. https://www.congress.gov/bill/116th-congress/house-bill/3600.
[12] Provisions on the administration of online audio and video information services[EB/OL]. [2021-07-12]. http://www.cac.gov.cn/2019-11/29/c_1576561820967678.htm.
网络音视频信息服务管理规定[EB/OL]. [2021-07-12]. http://www.cac.gov.cn/2019-11/29/c_1576561820967678.htm.
[13] ALEXANDER O, ROGERS M, LAMBETH W, et al. Creating a photoreal digital actor: the Digital Emily project[C]//Proceedings of the 2009 Conference for Visual Media Production, London, Nov 12-13, 2009. Piscataway: IEEE, 2009: 176-187.
[14] CAI L, GUO Y D, ZHANG J Y. High-quality 3D face reconstruction from multi-view images[J]. Journal of Computer-Aided Design & Computer Graphics, 2020, 32(2): 305-314.
蔡麟, 郭玉东, 张举勇. 基于多视角的高精度三维人脸重建[J]. 计算机辅助设计与图形学学报, 2020, 32(2): 305-314.
[15] BLANZ V, VETTER T. A morphable model for the synthesis of 3D faces[C]//Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, Aug 8-13, 1999. New York: ACM, 1999: 187-194.
[16] CAO C, WENG Y L, ZHOU S, et al. FaceWarehouse: a 3D facial expression database for visual computing[J]. IEEE Transactions on Visualization and Computer Graphics, 2013, 20(3): 413-425.
[17] PAYSAN P, KNOTHE R, AMBERG B, et al. A 3D face model for pose and illumination invariant face recognition [C]//Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance, Genova, Sep 2-4, 2009. Washington: IEEE Computer Society, 2009: 296-301.
[18] BOOTH J, ROUSSOS A, PONNIAH A, et al. Large scale 3D morphable models[J]. International Journal of Computer Vision, 2018, 126(2): 233-254.
[19] ROMDHANI S, VETTER T. Estimating 3D shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior[C]//Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, Jun 20-26, 2005. Washington: IEEE Computer Society, 2005: 986-993.
[20] AMBERG B, BLAKE A, FITZGIBBON A W, et al. Reconstructing high quality face-surfaces using model based stereo[C]//Proceedings of the 11th International Conference on Computer Vision, Rio de Janeiro, Oct 14-20, 2007. Washington: IEEE Computer Society, 2007: 1-8.
[21] YE Z P, XIA W Y, SUN Z Y, et al. From traditional rendering to differentiable rendering: theories, methods and applications[J]. Science China: Information Sciences, 2021, 51(7): 1043-1067.
叶子鹏, 夏雯宇, 孙志尧, 等. 从传统渲染到可微渲染: 基本原理、方法和应用[J]. 中国科学: 信息科学, 2021, 51(7): 1043-1067.
[22] TUAN TRAN A, HASSNER T, MASI I, et al. Regressing robust and discriminative 3D morphable models with a very deep neural network[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 1493-1502.
[23] CHEN A P, CHEN Z, ZHANG G L, et al. Photo-realistic facial details synthesis from single image[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 9428-9438.
[24] GECER B, PLOUMPIS S, KOTSIA I, et al. GANFIT: generative adversarial network fitting for high fidelity 3D face reconstruction[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 1155-1164.
[25] DENG Y, YANG J L, XU S C, et al. Accurate 3D face reconstruction with weakly-supervised learning: from single image to image set[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 285-295.
[26] YU Z P, CHI J, YE Y N, et al. Detailed features-preserving 3D facial expression transfer[J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(2): 186-198.
于志平, 迟静, 叶亚男, 等. 细节特征保持的三维面部表情迁移方法[J]. 计算机辅助设计与图形学学报, 2021, 33(2): 186-198.
[27] VLASIC D, BRAND M, PFISTER H, et al. Face transfer with multilinear models[C]//Proceedings of the 2006 International Conference on Computer Graphics and Interactive Techniques, Boston, Jul 30-Aug 3, 2006. New York: ACM, 2006: 24.
[28] THIES J, ZOLLHÖFER M, NIESSNER M, et al. Real-time expression transfer for facial reenactment[J]. ACM Transactions on Graphics, 2015, 34(6): 183.
[29] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, Dec 8-13, 2014. Red Hook: Curran Associates, 2014: 2672-2680.
[30] WANG K F, GOU C, DUAN Y J, et al. Generative adversarial networks: the state of the art and beyond[J]. Acta Automatica Sinica, 2017, 43(3): 321-332.
王坤峰, 苟超, 段艳杰, 等. 生成式对抗网络GAN的研究进展与展望[J]. 自动化学报, 2017, 43(3): 321-332.
[31] MIRZA M, OSINDERO S. Conditional generative adversa-rial nets[J]. arXiv:1411.1784, 2014.
[32] ODENA A, OLAH C, SHLENS J. Conditional image synthesis with auxiliary classifier GANs[C]//Proceedings of the 34th International Conference on Machine Learning, Sydney, Aug 6-11, 2017: 2642-2651.
[33] PERARNAU G, VAN DE WEIJER J, RADUCANU B, et al. Invertible conditional GANs for image editing[J]. arXiv: 1611.06355, 2016.
[34] CHEN X, DUAN Y, HOUTHOOFT R, et al. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2016, Barcelona, Dec 5-10, 2016. Red Hook: Curran Associates, 2016: 2172-2180.
[35] DENTON E L, CHINTALA S, SZLAM A, et al. Deep generative image models using a Laplacian pyramid of adversarial networks[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, Dec 7-12, 2015. Red Hook: Curran Associates, 2015: 1486-1494.
[36] KARRAS T, AILA T, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[C]//Proceedings of the 6th International Conference on Learning Representations, Vancouver, Apr 30-May 3, 2018: 1-26.
[37] KARRAS T, LAINE S, AILA T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4401-4410.
[38] ISOLA P, ZHU J Y, ZHOU T H, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5967-5976.
[39] WANG T C, LIU M Y, ZHU J Y, et al. High-resolution image synthesis and semantic manipulation with conditional GANs[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8798-8807.
[40] MALIK S. Digital face replacement in photographs[R]. Toronto: University of Toronto, 2003.
[41] BLANZ V, SCHERBAUM K, VETTER T, et al. Exchanging faces in images[J]. Computer Graphics Forum, 2004, 23(3): 669-676.
[42] CHENG Y T, TZENG V, LIANG Y, et al. 3D-model-based face replacement in video[C]//Proceedings of the 2009 International Conference on Computer Graphics and Interactive Techniques, New Orleans, Aug 3-7, 2009. New York: ACM, 2009: 1.
[43] LIN Y, LIN Q, TANG F, et al. Face replacement with large-pose differences[C]//Proceedings of the 20th ACM Multimedia Conference, Nara, Oct 29-Nov 2, 2012. New York: ACM, 2012: 1249-1250.
[44] LIN Y, WANG S J, LIN Q, et al. Face swapping under large pose variations: a 3D model based approach[C]//Proceedings of the 2012 IEEE International Conference on Multimedia and Expo, Melbourne, Jul 9-13, 2012. Washington: IEEE Computer Society, 2012: 333-338.
[45] DALE K, SUNKAVALLI K, JOHNSON M K, et al. Video face replacement[J]. ACM Transactions on Graphics, 2011, 30(6): 130.
[46] PÉREZ P, GANGNET M, BLAKE A. Poisson image editing[J]. ACM Transactions on Graphics, 2003, 22(3): 313-318.
[47] CHEN T, CHENG M M, TAN P, et al. Sketch2Photo: Internet image montage[J]. ACM Transactions on Graphics, 2009, 28(5): 124.
[48] BOYKOV Y, JOLLY M P. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images[C]//Proceedings of the 8th International Conference on Computer Vision, Vancouver, Jul 7-14, 2001. Washington: IEEE Computer Society, 2001: 105-112.
[49] BITOUK D, KUMAR N, DHILLON S, et al. Face swapping: automatically replacing faces in photographs[J]. ACM Transactions on Graphics, 2008, 27(3): 39.
[50] MOSADDEGH S, SIMON L, JURIE F. Photorealistic face de-identification by aggregating donors' face components[C]//LNCS 9005: Proceedings of the 12th Asian Conference on Computer Vision, Singapore, Nov 1-5, 2014. Cham: Springer, 2014: 159-174.
[51] KEMELMACHER-SHLIZERMAN I. Transfiguring portraits[J]. ACM Transactions on Graphics, 2016, 35(4): 94.
[52] COOTES T F, TAYLOR C J, COOPER D H, et al. Active shape models-their training and application[J]. Computer Vision and Image Understanding, 1995, 61(1): 38-59.
[53] WANG H X, PAN C H, GONG H F, et al. Facial image composition based on active appearance model[C]//Proceedings of the 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing, Las Vegas, Mar 30-Apr 4, 2008. Piscataway: IEEE, 2008: 893-896.
[54] COOTES T F, EDWARDS G J, TAYLOR C J. Active appearance models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(6): 681-685.
[55] GARRIDO P, VALGAERTS L, REHMSEN O, et al. Automatic face reenactment[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Jun 23-28, 2014. Washington: IEEE Computer Society, 2014: 4217-4224.
[56] CHEN T, TAN P, MA L Q, et al. PoseShop: human image database construction and personalized content synthesis[J]. IEEE Transactions on Visualization and Computer Graphics, 2012, 19(5): 824-837.
[57] SUNKAVALLI K, JOHNSON M K, MATUSIK W, et al. Multi-scale image harmonization[J]. ACM Transactions on Graphics, 2010, 29(4): 125.
[58] ZHANG X J, SONG J, PARK J I. The image blending method for face swapping[C]//Proceedings of the 4th IEEE International Conference on Network Infrastructure and Digital Content, Beijing, Sep 19-21, 2014. Piscataway: IEEE, 2014: 95-98.
[59] YAN S, HE S, LEI X, et al. Video face swap based on auto-encoder generation network[C]//Proceedings of the 2018 International Conference on Audio, Language and Image Processing, Shanghai, Jul 16-17, 2018. Piscataway: IEEE, 2018: 103-108.
[60] KORSHUNOVA I, SHI W Z, DAMBRE J, et al. Fast face-swap using convolutional neural networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 3677-3685.
[61] JOHNSON J, ALAHI A, LI F F. Perceptual losses for real-time style transfer and super-resolution[C]//LNCS 9906: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Cham: Springer, 2016: 694-711.
[62] ULYANOV D, LEBEDEV V, VEDALDI A, et al. Texture networks: feed-forward synthesis of textures and stylized images[C]//Proceedings of the 33rd International Conference on Machine Learning, New York, Jun 19-24, 2016: 2027-2041.
[63] LI L Z, BAO J M, YANG H, et al. Advancing high fidelity identity swapping for forgery detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 5073-5082.
[64] NIRKIN Y, MASI I, TUAN A T, et al. On face segmentation, face swapping, and face perception[C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, Xi'an, May 15-19, 2018. Washington: IEEE Computer Society, 2018: 98-105.
[65] NIRKIN Y, KELLER Y, HASSNER T. FSGAN: subject agnostic face swapping and reenactment[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 7183-7192.
[66] DONG H, NEEKHARA P, WU C, et al. Unsupervised image-to-image translation with generative adversarial networks[J]. arXiv:1701.02676, 2017.
[67] OLSZEWSKI K, LI Z M, YANG C, et al. Realistic dynamic facial textures from a single image using GANs[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 5439-5448.
[68] NATSUME R, YATAGAWA T, MORISHIMA S. RSGAN: face swapping and editing using face and hair representation in latent spaces[C]//Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, Aug 12-16, 2018. New York: ACM, 2018: 69.
[69] NATSUME R, YATAGAWA T, MORISHIMA S. FSNet: an identity-aware generative model for image-based face swapping[C]//LNCS 11366: Proceedings of the 14th Asian Conference on Computer Vision, Perth, Dec 2-6, 2018. Cham: Springer, 2018: 117-132.
[70] BAO J M, CHEN D, WEN F, et al. Towards open-set identity preserving face synthesis[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 6713-6722.
[71] LI Z W, LI Z M, FEI T L, et al. Face image restoration based on residual generative adversarial network[J]. Computer Science, 2020, 47(S1): 230-236.
李泽文, 李子铭, 费天禄, 等. 基于残差生成对抗网络的人脸图像复原[J]. 计算机科学, 2020, 47(S1): 230-236.
[72] PUMAROLA A, AGUDO A, MARTINEZ A M, et al. GANimation: anatomically-aware facial animation from a single image[C]//LNCS 11214: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 818-833.
[73] PUMAROLA A, AGUDO A, MARTINEZ A M, et al. GANimation: one-shot anatomically consistent facial animation[J]. International Journal of Computer Vision, 2020, 128(3): 698-713.
[74] SANCHEZ E, VALSTAR M. Triple consistency loss for pairing distributions in GAN-based face synthesis[J]. arXiv:1811.03492, 2018.
[75] PHAM H X, WANG Y, PAVLOVIC V. Generative adversarial talking head: bringing portraits to life with a weakly supervised neural network[J]. arXiv:1803.07716, 2018.
[76] ZHOU Y Q, SHI B E. Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder[C]//Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction, San Antonio, Oct 23-26, 2017. Washington: IEEE Computer Society, 2017: 370-376.
[77] BOZORGTABAR B, RAD M S, EKENEL H K, et al. Using photorealistic face synthesis and domain adaptation to improve facial expression analysis[C]//Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition, Lille, May 14-18, 2019. Piscataway: IEEE, 2019: 1-8.
[78] DING H, SRICHARAN K, CHELLAPPA R. ExprGAN: facial expression editing with controllable expression intensity[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 6781-6788.
[79] PIGHIN F H, HECKER J, LISCHINSKI D, et al. Synthesizing realistic facial expressions from photographs[C]//Proceedings of the 2006 International Conference on Computer Graphics and Interactive Techniques, Boston, Jul 30-Aug 3, 2006. New York: ACM, 2006: 19.
[80] DE LA HUNTY M, ASTHANA A, GOECKE R. Linear facial expression transfer with active appearance models[C]//Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Aug 23-26, 2010. Washington: IEEE Computer Society, 2010: 3789-3792.
[81] THEOBALD B J, MATTHEWS I, MANGINI M, et al. Mapping and manipulating facial expression[J]. Language and Speech, 2009, 52(2/3): 369-386.
[82] LIU Z C, SHAN Y, ZHANG Z Y. Expressive expression mapping with ratio images[C]//Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, Aug 12-17, 2001. New York: ACM, 2001: 271-276.
[83] BREGLER C, COVELL M, SLANEY M. Video rewrite: driving visual speech with audio[C]//Proceedings of the 24th Annual Conference on Computer Graphics and Interac-tive Techniques, Los Angeles, Aug 3-8, 1997. New York: ACM, 1997: 353-360.
[84] SUWAJANAKORN S, SEITZ S M, KEMELMACHER-SHLIZERMAN I. Synthesizing Obama: learning lip sync from audio[J]. ACM Transactions on Graphics, 2017, 36(4): 95.
[85] YANG F, WANG J, SHECHTMAN E, et al. Expression flow for 3D-aware face component transfer[J]. ACM Transactions on Graphics, 2011, 30(4): 60.
[86] BLANZ V, BASSO C, POGGIO T A, et al. Reanimating faces in images and video[J]. Computer Graphics Forum, 2003, 22(3): 641-650.
[87] WILLIAMS L. Performance-driven facial animation[C]// Proceedings of the 17th Annual Conference on Computer Graphics and Interactive Techniques, Dallas, Aug 6-10, 1990. New York: ACM, 1990: 235-242.
[88] GARRIDO P, VALGAERTS L, SARMADI H, et al. VDub: modifying face video of actors for plausible visual alignment to a dubbed audio track[J]. Computer Graphics Forum, 2015, 34(2): 193-204.
[89] SUWAJANAKORN S, SEITZ S M, KEMELMACHER-SHLIZERMAN I. What makes Tom Hanks look like Tom Hanks[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Dec 7-13, 2015. Washington: IEEE Computer Society, 2015: 3952-3960.
[90] YAN Y F, LYU K, XUE J, et al. Facial animation method based on deep learning and expression AU parameters[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(11): 1973-1980.
闫衍芙, 吕科, 薛健, 等. 基于深度学习和表情AU参数的人脸动画方法[J]. 计算机辅助设计与图形学学报, 2019, 31(11): 1973-1980.
[91] THIES J, ZOLLHÖFER M, STAMMINGER M, et al. Face2Face: real-time face capture and reenactment of RGB videos[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 2387-2395.
[92] THIES J, ZOLLHÖFER M, STAMMINGER M, et al. FaceVR: real-time gaze-aware facial reenactment in virtual reality[J]. ACM Transactions on Graphics, 2018, 37(2): 25.
[93] THIES J, ZOLLHÖFER M, THEOBALT C, et al. HeadOn: real-time reenactment of human portrait videos[J]. ACM Transactions on Graphics, 2018, 37(4): 164.
[94] AVERBUCH-ELOR H, COHEN-OR D, KOPF J, et al. Bringing portraits to life[J]. ACM Transactions on Graphics, 2017, 36(6): 196.
[95] THIES J, ZOLLHÖFER M, NIESSNER M. Deferred neural rendering: image synthesis using neural textures[J]. ACM Transactions on Graphics, 2019, 38(4): 66.
[96] KIM H, GARRIDO P, TEWARI A, et al. Deep video portraits[J]. ACM Transactions on Graphics, 2018, 37(4): 163.
[97] KOUJAN M R, DOUKAS M C, ROUSSOS A, et al. Head2Head: video-based neural head synthesis[C]//Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition, Buenos Aires, Nov 16-20, 2020. Piscataway: IEEE, 2020: 16-23.
[98] DOUKAS M C, KOUJAN M R, SHARMANSKA V, et al. Head2Head++: deep facial attributes re-targeting[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021, 3(1): 31-43.
[99] NAGANO K, SEO J, XING J, et al. paGAN: real-time avatars using dynamic textures[J]. ACM Transactions on Graphics, 2018, 37(6): 258.
[100] ZAKHAROV E, SHYSHEYA A, BURKOV E, et al. Few-shot adversarial learning of realistic neural talking head models[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 9459-9468.
[101] GENG J, SHAO T, ZHENG Y, et al. Warp-guided GANs for single-photo facial animation[J]. ACM Transactions on Graphics, 2018, 37(6): 231.
[102] WILES O, KOEPKE A S, ZISSERMAN A. X2Face: a network for controlling face generation using images, audio, and pose codes[C]//LNCS 11217: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 670-686.
[103] SIAROHIN A, LATHUILIÈRE S, TULYAKOV S, et al. Animating arbitrary objects via deep motion transfer[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 2377-2386.
[104] SIAROHIN A, LATHUILIÈRE S, TULYAKOV S, et al. First order motion model for image animation[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2019, Vancouver, Dec 8-14, 2019: 7137-7147.
[105] FU Y, GUO G, HUANG T S. Age synthesis and estimation via faces: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(11): 1955-1976.
[106] LI M, ZUO W M, ZHANG D. Deep identity-aware transfer of facial attributes[J]. arXiv:1610.05586, 2016.
[107] SHEN W, LIU R J. Learning residual images for face attribute manipulation[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 1225-1233.
[108] KAPANIA S, GOYAL S, LAMBA S, et al. Multiple domain image to image translation for facial attribute transfer[J]. International Journal of Information Systems & Management Science, 2019, 2(2).
[109] SHEN Y J, GU J J, TANG X O, et al. Interpreting the latent space of GANs for semantic face editing[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 9243-9252.
[110] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 2242-2251.
[111] HE Z L, ZUO W M, KAN M N, et al. AttGAN: facial attribute editing by only changing what you want[J]. IEEE Transactions on Image Processing, 2019, 28(11): 5464-5478.
[112] LIU M, DING Y K, XIA M, et al. STGAN: a unified selective transfer network for arbitrary image attribute editing[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 3673-3682.
[113] ZHU D, LIU S, JIANG W, et al. UGAN: untraceable GAN for multi-domain face translation[J]. arXiv:1907.11418, 2019.
[114] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 8789-8797.
[115] LIU Y, FAN H, NI F C, et al. ClsGAN: selective attribute editing model based on classification adversarial network[J]. Neural Networks, 2021, 133: 220-228.
[116] HUANG Z K, ZHENG Z D, YAN C G, et al. Real-world automatic makeup via identity preservation makeup net[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence, Yokohama, Jul 2020: 652-658.
[117] JIN X, HAN R, NING N, et al. Facial makeup transfer combining illumination transfer[J]. IEEE Access, 2019, 7: 80928-80936.
[118] QIAN S J, LIN K, WU W, et al. Make a face: towards arbitrary high fidelity face manipulation[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 10032-10041.
[119] LIU M, BREUEL T M, KAUTZ J. Unsupervised image-to-image translation networks[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 700-708.
[120] LAMPLE G, ZEGHIDOUR N, USUNIER N, et al. Fader networks: manipulating images by sliding attributes[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 5967-5976.
[121] SHU Z X, YUMER E, HADAP S, et al. Neural face editing with intrinsic image disentangling[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5444-5453.
[122] KINGMA D P, DHARIWAL P. Glow: generative flow with invertible 1×1 convolutions[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2018, Montréal, Dec 3-8, 2018: 10236-10245.
[123] YIN W D, LIU Z W, LOY C C. Instance-level facial attributes transfer with geometry-aware flow[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence, the 31st Innovative Applications of Artificial Intelligence Conference, the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, Honolulu, Jan 27-Feb 1, 2019. Menlo Park: AAAI, 2019: 9111-9118.
[124] BROCK A, DONAHUE J, SIMONYAN K. Large scale GAN training for high fidelity natural image synthesis[C]//Proceedings of the 7th International Conference on Learning Representations, New Orleans, May 6-9, 2019: 1-35.
[125] KARRAS T, LAINE S, AITTALA M, et al. Analyzing and improving the image quality of StyleGAN[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 8107-8116.
[126] KARNEWAR A, WANG O. MSG-GAN: multi-scale gradients for generative adversarial networks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 7796-7805.
[127] HU Y K, FAN X, YU L T, et al. Graph based neural network regression strategy for facial image super resolution[J]. Journal of Software, 2018, 29(4): 914-925.
呼延康, 樊鑫, 余乐天, 等. 图神经网络回归的人脸超分辨率重建[J]. 软件学报, 2018, 29(4): 914-925.
[128] XU R B, LU T, WANG Y, et al. Face hallucination algorithm via combined learning[J]. Journal of Computer Applications, 2020, 40(3): 710-716.
许若波, 卢涛, 王宇, 等. 基于组合学习的人脸超分辨率算法[J]. 计算机应用, 2020, 40(3): 710-716.
[129] CHEN Y, TAI Y, LIU X M, et al. FSRNet: end-to-end learning face super-resolution with facial priors[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 2492-2501.
[130] IIZUKA S, SIMO-SERRA E, ISHIKAWA H. Globally and locally consistent image completion[J]. ACM Transactions on Graphics, 2017, 36(4): 107.
[131] CHEN Z Y, NIE S L, WU T F, et al. High resolution face completion with multiple controllable attributes via fully end-to-end progressive generative adversarial networks[J]. arXiv:1801.07632, 2018.
[132] ZHANG L Z, WANG J C, XU Y S, et al. Nested scale-editing for conditional image synthesis[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 5476-5486.
[133] ZHOU H Q, CAO L, DU K N. Sketch face synthesis based on multi-discriminator cyclic generative adversarial network[J]. Computer Engineering and Applications, 2021, 57(3): 231-238.
周华强, 曹林, 杜康宁. 多判别器循环生成对抗网络的素描人脸合成[J]. 计算机工程与应用, 2021, 57(3): 231-238.
[134] JIANG B, LIU H Y, YANG C, et al. A face inpainting algorithm with local attribute generative adversarial network[J]. Journal of Computer Research and Development, 2019, 56(11): 2485-2493.
蒋斌, 刘虹雨, 杨超, 等. 一种基于局部属性生成对抗网络的人脸修复算法[J]. 计算机研究与发展, 2019, 56(11): 2485-2493.
[135] ZHANG H, XU T, LI H S, et al. StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 5908-5916.
[136] ZHANG H, XU T, LI H S, et al. StackGAN++: realistic image synthesis with stacked generative adversarial networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(8): 1947-1962.
[137] NASIR O R, JHA S K, GROVER M S, et al. Text2FaceGAN: face generation from fine grained textual descriptions[C]//Proceedings of the 5th IEEE International Conference on Multimedia Big Data, Singapore, Sep 11-13, 2019. Piscataway: IEEE, 2019: 58-67.
[138] CHEN X, QING L B, HE X H, et al. FTGAN: a fully-trained generative adversarial networks for text to face generation[J]. arXiv:1904.05729, 2019.
[139] DI X, PATEL V M. Face synthesis from visual attributes via sketch using conditional VAEs and GANs[J]. arXiv: 1801.00077, 2017.
[140] BAO J M, CHEN D, WEN F, et al. CVAE-GAN: fine-grained image generation through asymmetric training[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 2764-2773.
[141] WANG T R, ZHANG T, LOVELL B C. Faces à la Carte: text-to-face generation via attribute disentanglement[C]//Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, Jan 3-8, 2021. Piscataway: IEEE, 2021: 3379-3387.
[142] LU Y Y, WU S Z, TAI Y W, et al. Image generation from sketch constraint using contextual GAN[C]//LNCS 11220: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 213-228.
[143] SANGKLOY P, LU J W, FANG C, et al. Scribbler: controlling deep image synthesis with sketch and color[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 6836-6845.
[144] KAZEMI H, IRANMANESH M, DABOUEI A, et al. Facial attributes guided deep sketch-to-photo synthesis[C]//Proceedings of the 2018 IEEE Winter Applications of Computer Vision Workshops, Lake Tahoe, Mar 15, 2018. Washington: IEEE Computer Society, 2018: 1-8.
[145] CHEN S Y, SU W C, GAO L, et al. Deep generation of face images from sketches[J]. arXiv:2006.01047, 2020.
[146] WANG T C, LIU M Y, ZHU J Y, et al. Video-to-video synthesis[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2018, Montréal, Dec 3-8, 2018: 1152-1164.
[147] JO Y, PARK J. SC-FEGAN: face editing generative adversarial network with user's sketch and color[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 1745-1753.
[148] LIU C T, CAO L, DU K N. Portrait coloring based on joint consistent cyclic generative adversarial network[J]. Computer Engineering and Applications, 2020, 56(16): 183-190.
刘昌通, 曹林, 杜康宁. 基于联合一致循环生成对抗网络的人像着色[J]. 计算机工程与应用, 2020, 56(16): 183-190.