[1] SHAO L, WU D, LI X. Learning deep and wide: a spectral method for learning deep networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(12): 2303-2308.
[2] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507.
[3] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. arXiv:1406.2661, 2014.
[4] DOU K Z, CAI L Z M, NAN T B, et al. Tibetan speech synthesis based on neural network[J]. Journal of Chinese Information Processing, 2019, 33(2): 75-80.
都格草, 才让卓玛, 南措吉, 等. 基于神经网络的藏语语音合成[J]. 中文信息学报, 2019, 33(2): 75-80.
[5] XU D, WEI C, PENG P, et al. GE-GAN: a novel deep learning framework for road traffic state estimation[J]. Transportation Research Part C: Emerging Technologies, 2020, 117: 102635.
[6] WU Y, YANG F, XU Y, et al. Privacy-protective-GAN for privacy preserving face de-identification[J]. Journal of Computer Science and Technology, 2019, 34(1): 47-60.
[7] HU M F, LIU J W, ZUO X. Survey on deep generative model[J/OL]. Acta Automatica Sinica[2020-10-17]. https://doi.org/10.16383/j.aas.c190866.
胡铭菲, 刘建伟, 左信. 深度生成模型综述[J/OL]. 自动化学报[2020-10-17]. https://doi.org/10.16383/j.aas.c190866.
[8] WANG Z W, SHE Q, WARD T E. Generative adversarial networks in computer vision: a survey and taxonomy[J]. arXiv:1906.01529, 2019.
[9] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv:1411.1784, 2014.
[10] ODENA A, OLAH C, SHLENS J. Conditional image synthesis with auxiliary classifier GANs[J]. arXiv:1610.09585, 2016.
[11] CHEN X, DUAN Y, HOUTHOOFT R, et al. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets[J]. arXiv:1606.03657, 2016.
[12] PERARNAU G, VAN DE WEIJER J, RADUCANU B, et al. Invertible conditional GANs for image editing[J]. arXiv:1611.06355, 2016.
[13] DASH A, GAMBOA J C B, AHMED S, et al. TAC-GAN: text conditioned auxiliary classifier generative adversarial network[J]. arXiv:1703.06412, 2017.
[14] ODENA A. Semi-supervised learning with generative adversarial networks[J]. arXiv:1606.01583, 2016.
[15] SPRINGENBERG J T. Unsupervised and semi-supervised learning with categorical generative adversarial networks[J]. arXiv:1511.06390, 2015.
[16] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv:1511.06434, 2015.
[17] XU B, WANG N Y, CHEN T Q, et al. Empirical evaluation of rectified activations in convolutional network[J]. arXiv:1505.00853, 2015.
[18] YU C C, KANG M, CHEN Y B, et al. Endangered Tujia language speech enhancement research based on improved DCGAN[C]//LNCS 11856: Proceedings of the 18th China National Conference on Chinese Computational Linguistics, Kunming, Oct 18-20, 2019. Cham: Springer, 2019: 394-404.
[19] ZHAO J B, MATHIEU M, LECUN Y. Energy-based generative adversarial network[J]. arXiv:1609.03126, 2016.
[20] DENTON E L, CHINTALA S, SZLAM A, et al. Deep generative image models using a Laplacian pyramid of adversarial networks[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2015, Montreal, Dec 7-12, 2015. Red Hook: Curran Associates, 2015: 1486-1494.
[21] OLKKONEN H, PESOLA P. Gaussian pyramid wavelet transform for multiresolution analysis of images[J]. Graphical Models & Image Processing, 1996, 58(4): 394-398.
[22] BURT P J, ADELSON E H. The Laplacian pyramid as a compact image code[J]. IEEE Transactions on Communications, 1983, 31(4): 532-540.
[23] ZHANG H, XU T, LI H S, et al. StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 5908-5916.
[24] ZHANG H, XU T, LI H S, et al. StackGAN++: realistic image synthesis with stacked generative adversarial networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1947-1962.
[25] JOHNSON J, GUPTA A, LI F F. Image generation from scene graphs[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 1219-1228.
[26] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 2242-2251.
[27] ISOLA P, ZHU J Y, ZHOU T H, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 5967-5976.
[28] ZHU J Y, ZHANG R, PATHAK D, et al. Toward multimodal image-to-image translation[J]. arXiv:1711.11586, 2017.
[29] CHOI Y, CHOI M J, KIM M, et al. StarGAN: unified gene-rative adversarial networks for multi-domain image-to-image translation[J]. arXiv:1711.09020, 2017.
[30] BANSAL A, MA S G, RAMANAN D, et al. Recycle-GAN: unsupervised video retargeting[J]. arXiv:1808.05174, 2018.
[31] ZHANG H, GOODFELLOW I J, METAXAS D N, et al. Self-attention generative adversarial networks[J]. arXiv:1805.08318v2, 2018.
[32] BROCK A, DONAHUE J, SIMONYAN K. Large scale GAN training for high fidelity natural image synthesis[J]. arXiv:1809.11096, 2018.
[33] DONAHUE J, KRÄHENBÜHL P, DARRELL T, et al. Adversarial feature learning[J]. arXiv:1605.09782, 2016.
[34] DONAHUE J, SIMONYAN K. Large scale adversarial representation learning[J]. arXiv:1907.02544, 2019.
[35] KARRAS T, LAINE S, AILA T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 4401-4410.
[36] MAO X D, LI Q, XIE H R, et al. Least squares generative adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Washington: IEEE Computer Society, 2017: 2813-2821.
[37] ARJOVSKY M, BOTTOU L. Towards principled methods for training generative adversarial networks[J]. arXiv:1701.04862, 2017.
[38] BERTHELOT D, SCHUMM T, METZ L. BEGAN: boundary equilibrium generative adversarial networks[J]. arXiv:1703.10717, 2017.
[39] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[J]. arXiv:1704.00028, 2017.
[40] NOWOZIN S, CSEKE B, TOMIOKA R. f-GAN: training generative neural samplers using variational divergence minimization[J]. arXiv:1606.00709, 2016.
[41] LEDIG C, THEIS L, HUSZAR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 105-114.
[42] WANG X T, YU K, WU S X, et al. ESRGAN: enhanced super-resolution generative adversarial networks[J]. arXiv:1809.00219, 2018.
[43] VOLKHONSKIY D, NAZAROV I, BORISENKO B. Steganographic generative adversarial networks[C]//Proceedings of the 12th International Conference on Machine Vision, Amsterdam, Nov 16-18, 2019. San Francisco: SPIE, 2019: 114333M.
[44] SHI H C, DONG J, WANG W, et al. SSGAN: secure steganography based on generative adversarial networks[C]//LNCS 10735: Proceedings of the 18th Pacific-Rim Conference on Multimedia, Harbin, Sep 28-29, 2017. Cham: Springer, 2017: 534-544.
[45] QIAN Y L, DONG J, WANG W, et al. Deep learning for steganalysis via convolutional neural networks[C]//Proceedings of SPIE-The International Society for Optical Engineering. San Francisco: SPIE, 2015: 1-10.
[46] WANG Y J, NIU K, YANG X Y. Information hiding scheme based on generative adversarial network[J]. Journal of Computer Applications, 2018, 38(10): 2923-2928.
王耀杰, 钮可, 杨晓元. 基于生成对抗网络的信息隐藏方案[J]. 计算机应用, 2018, 38(10): 2923-2928.
[47] SANTANA E, HOTZ G. Learning a driving simulator[J]. arXiv:1608.01230, 2016.
[48] HUANG R, ZHANG S, LI T Y, et al. Beyond face rotation: global and local perception GAN for photorealistic and identity preserving frontal view synthesis[J]. arXiv:1704.04086, 2017.
[49] REED S E, AKATA Z, YAN X C, et al. Generative adversarial text to image synthesis[J]. arXiv:1605.05396v1, 2016.
[50] ZHAO S Y, LI J W. Generative adversarial network for generating low-rank images[J]. Acta Automatica Sinica, 2018, 44(5): 829-839.
赵树阳, 李建武. 基于生成对抗网络的低秩图像生成方法[J]. 自动化学报, 2018, 44(5): 829-839.
[51] ZHU J Y, KRÄHENBÜHL P, SHECHTMAN E, et al. Generative visual manipulation on the natural image manifold[C]//LNCS 9909: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Cham: Springer, 2016: 597-613.
[52] LI Y J, LIU S F, YANG J M, et al. Generative face completion[J]. arXiv:1704.05838, 2017.
[53] LIU G L, REDA F A, SHIH K J, et al. Image inpainting for irregular holes using partial convolutions[C]//LNCS 11215: Proceedings of the 15th European Conference on Computer Vision, Munich, Sep 8-14, 2018. Cham: Springer, 2018: 89-105.
[54] CHEN J, DONG X L, LIANG J X, et al. Research on the local style transfer of clothing by CycleGAN based on attention mechanism[J/OL]. Computer Engineering[2020-12-14]. https://doi.org/10.19678/j.issn.1000-3428.0059665.
陈佳, 董学良, 梁金星, 等. 基于注意力机制的CycleGAN服装局部风格迁移研究[J/OL]. 计算机工程[2020-12-14]. https://doi.org/10.19678/j.issn.1000-3428.0059665.
[55] LI C, ZHANG Y, HUANG C H. Improved super-resolution reconstruction of image based on generative adversarial networks[J]. Computer Engineering and Applications, 2020, 56(4): 191-196.
李诚, 张羽, 黄初华. 改进的生成对抗网络图像超分辨率重建[J]. 计算机工程与应用, 2020, 56(4): 191-196.
[56] MA S, FU J L, CHEN C W, et al. DA-GAN: instance-level image translation by deep attention generative adversarial networks[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 5657-5666.
[57] LIN Z F, YIN M X, YANG F, et al. Survey of image translation based on conditional generative adversarial network[J]. Journal of Chinese Computer Systems, 2020, 41(12): 2569-2581.
林振峰, 尹梦晓, 杨锋, 等. 基于条件生成式对抗网络的图像转换综述[J]. 小型微型计算机系统, 2020, 41(12): 2569-2581.
[58] LIANG X D, LEE L, DAI W, et al. Dual motion GAN for future-flow embedded video prediction[J]. arXiv:1708.00284, 2017.
[59] TULYAKOV S, LIU M Y, YANG X D, et al. MoCoGAN: decomposing motion and content for video generation[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 1526-1535.
[60] ZHANG Z Z, YANG L, ZHENG Y F. Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 9242-9251.
[61] CHEN J W, CHAO H Y, YANG M. Image blind denoising with generative adversarial network based noise modeling[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 3155-3164.
[62] CHENG Z X, SUN H M, TAKEUCHI M, et al. Performance comparison of convolutional autoencoders, generative adversarial networks and super-resolution for image compression[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 2613-2616.
[63] GADELHA M, MAJI S, WANG R. 3D shape induction from 2D views of multiple objects[J]. arXiv:1612.05872, 2016.
[64] LI Q Z, BAI W X, NIU J. Underwater image color correction and enhancement based on improved cycle-consistent generative adversarial networks[J/OL]. Acta Automatica Sinica[2020-12-14]. https://doi.org/10.16383/j.aas.c200510.
李庆忠, 白文秀, 牛炯. 基于改进CycleGAN的水下图像颜色校正与增强[J/OL]. 自动化学报[2020-12-14]. https://doi.org/10.16383/j.aas.c200510.
[65] SUN X, DING X L. Data augmentation method based on generative adversarial networks for facial expression recognition sets[J]. Computer Engineering and Applications, 2020, 56(4): 115-121.
孙晓, 丁小龙. 基于生成对抗网络的人脸表情数据增强方法[J]. 计算机工程与应用, 2020, 56(4): 115-121.
[66] YU H Y, LI G R, SU L, et al. Conditional GAN based individual and global motion fusion for multiple object tracking in UAV videos[J]. Pattern Recognition Letters, 2020, 131: 219-226.
[67] GAO L L, CHEN D Y, ZHAO Z, et al. Lightweight dynamic conditional GAN with pyramid attention for text-to-image synthesis[J]. Pattern Recognition, 2021, 110: 107384.
[68] SALIMANS T, GOODFELLOW I J, ZAREMBA W, et al. Improved techniques for training GANs[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2016, Barcelona, Dec 5-10, 2016. Red Hook: Curran Associates, 2016: 2226-2234.
[69] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, Dec 4-9, 2017. Red Hook: Curran Associates, 2017: 6626-6637.
[70] CHE T, LI Y R, JACOB A P, et al. Mode regularized generative adversarial networks[J]. arXiv:1612.02136v5, 2016.
[71] GURUMURTHY S, SARVADEVABHATLA R K, BABU R V. DeLiGAN: generative adversarial networks for diverse and limited data[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 4941-4949.
[72] DZIUGAITE G K, ROY D M, GHAHRAMANI Z. Training generative neural networks via maximum mean discrepancy optimization[J]. arXiv:1505.03906, 2015.
[73] VALLENDER S S. Calculation of the Wasserstein distance between probability distributions on the line[J]. Theory of Probability & Its Applications, 1974, 18(4): 784-786.
[74] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[75] ZHANG L, ZHAO J Y, YE X L, et al. Collaborative generative adversarial networks[J]. Acta Automatica Sinica, 2018, 44(5): 804-810.
张龙, 赵杰煜, 叶绪伦, 等. 协作式生成对抗网络[J]. 自动化学报, 2018, 44(5): 804-810.
[76] LUCIC M, KURACH K, MICHALSKI M, et al. Are GANs created equal? A large-scale study[J]. arXiv:1711.10337v1, 2017.
[77] ZHOU B, KHOSLA A, LAPEDRIZA A, et al. Learning deep features for discriminative localization[J]. arXiv:1512.04150, 2015.
[78] JOLICOEUR-MARTINEAU A. The relativistic discriminator: a key element missing from standard GAN[J]. arXiv:1807.00734, 2018.
[79] LI D, CHEN D C, JIN B H, et al. MAD-GAN: multivariate anomaly detection for time series data with generative adversarial networks[C]//LNCS 11730: Proceedings of the 28th International Conference on Artificial Neural Networks, Munich, Sep 17-19, 2019. Cham: Springer, 2019: 703-716.
[80] KARRAS T, AILA T, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[J]. arXiv:1710.10196v3, 2017.
[81] METZ L, POOLE B, PFAU D, et al. Unrolled generative adversarial networks[J]. arXiv:1611.02163, 2016.
[82] LIN Z N, KHETAN A, FANTI G C, et al. PacGAN: the power of two samples in generative adversarial networks[C]//Proceedings of the Annual Conference on Neural Information Processing Systems 2018, Montréal, Dec 3-8, 2018. Red Hook: Curran Associates, 2018: 1505-1514.
[83] KODALI N, ABERNETHY J, HAYS J, et al. On convergence and stability of GANs[J]. arXiv:1705.07215, 2017.
[84] NGUYEN T D, LE T, VU H, et al. Dual discriminator generative adversarial nets[J]. arXiv:1709.03831, 2017.