[1] FAZI M B, FULLER M. Computational aesthetics[M]. New York: John Wiley & Sons, Inc., 2016.
[2] HOENIG F. Defining computational aesthetics[C]//Proceedings of the 1st Eurographics Workshop on Computational Aesthetics in Graphics, Visualization, and Imaging, Girona, May 18-20, 2005. Eurographics Association, 2005: 13-18.
[3] WANG W N, YI J J, HE Q H. Review for computational image aesthetics[J]. Journal of Image and Graphics, 2012, 17(8): 893-901.
王伟凝, 蚁静缄, 贺前华. 可计算图像美学研究进展[J]. 中国图象图形学报, 2012, 17(8): 893-901.
[4] YANG H T, SHI P, HE S K, et al. A comprehensive survey on image aesthetic quality assessment[C]//Proceedings of the 18th IEEE/ACIS International Conference on Computer and Information Science, Beijing, Jun 17-19, 2019. Piscataway: IEEE, 2019: 294-299.
[5] DOSHI N, SHIKHENAWIS G, MITRA S K. Image aesthetics assessment using multi channel convolutional neural networks[C]//LNCS 1148: Proceedings of the 4th International Conference on Computer Vision and Image Processing, Jaipur, Sep 27-29, 2019. Berlin, Heidelberg: Springer, 2019: 15-24.
[6] GE R X. A method of image aesthetics classification based on color harmony and composition[J]. Software Guide, 2017, 16(11): 221-224.
葛瑞雪. 融合色彩和谐性与构图的图像美学分类方法研究[J]. 软件导刊, 2017, 16(11): 221-224.
[7] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[J]. arXiv:1406.2661v1, 2014.
[8] NGUYEN D T, PHAM T D, BATCHULUUN G, et al. Presentation attack face image generation based on a deep generative adversarial network[J]. Sensors, 2020, 20(7): 1810.
[9] LIN J X, XIA Y C, QIN T, et al. Conditional image-to-image translation[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 5524-5532.
[10] CHERIAN A, SULLIVAN A. Sem-GAN: semantically-consistent image-to-image translation[C]//Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision, Waikoloa Village, Jan 7-11, 2019. Piscataway: IEEE, 2019: 1797-1806.
[11] CHEN Y Z, HU H F. An improved method for semantic image inpainting with GANs: progressive inpainting[J]. Neural Processing Letters, 2019, 49(3): 1355-1367.
[12] ZHANG N, JI H, LIU L, et al. Exemplar-based image inpainting using angle-aware patch matching[J]. EURASIP Journal on Image and Video Processing, 2019: 70.
[13] EHSANI K, MOTTAGHI R, FARHADI A. SeGAN: segmenting and generating the invisible[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Washington: IEEE Computer Society, 2018: 6144-6153.
[14] SUN X, LI X G, LI J F, et al. Review on deep learning based image super-resolution restoration algorithms[J]. Acta Automatica Sinica, 2017, 43(5): 697-709.
孙旭, 李晓光, 李嘉锋, 等. 基于深度学习的图像超分辨率复原研究进展[J]. 自动化学报, 2017, 43(5): 697-709.
[15] LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 1132-1140.
[16] ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein GAN[J]. arXiv:1701.07875, 2017.
[17] SINN M, RAWAT A. Non-parametric estimation of Jensen-Shannon divergence in generative adversarial network training[C]//Proceedings of the 2018 International Conference on Artificial Intelligence and Statistics, Playa Blanca, Apr 9-11, 2018: 642-651.
[18] MIYATO T, KATAOKA T, KOYAMA M, et al. Spectral normalization for generative adversarial networks[C]//Proceedings of the 6th International Conference on Learning Representations, Vancouver, Apr 30-May 3, 2018: 1-26.
[19] WANG C Y, XU C, YAO X, et al. Evolutionary generative adversarial networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(6): 921-934.
[20] ZHANG H, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks[J]. arXiv:1805.08318, 2018.
[21] ZHU L Y. Aesthetic dictionary[M]. Shanghai: Shanghai Lexicographical Publishing House, 2010.
[22] MAI L, JIN H L, LIU F. Composition-preserving deep photo aesthetics assessment[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 497-506.
[23] DENG X, CUI C, FANG H, et al. Personalized image aesthetics assessment[C]//Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. New York: ACM, 2017: 2043-2046.
[24] DATTA R, JOSHI D, LI J, et al. Studying aesthetics in photographic images using a computational approach[C]//LNCS 3953: Proceedings of the 9th European Conference on Computer Vision, Graz, May 7-13, 2006. Berlin, Heidelberg: Springer, 2006: 288-301.
[25] DHAR S, ORDONEZ V, BERG T L. High level describable attributes for predicting aesthetics and interestingness[C]// Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, Jun 20-25, 2011. Washington: IEEE Computer Society, 2011: 1657-1664.
[26] KONG S, SHEN X H, LIN Z L, et al. Photo aesthetics ranking network with attributes and content adaptation[C]//LNCS 9905: Proceedings of the 14th European Conference on Computer Vision, Amsterdam, Oct 11-14, 2016. Cham: Springer, 2016: 662-679.
[27] TALEBI H, MILANFAR P. NIMA: neural image assessment[J]. IEEE Transactions on Image Processing, 2018, 27(8): 3998-4011.
[28] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Jul 21-26, 2017. Washington: IEEE Computer Society, 2017: 2261-2269.
[29] SHAO Q, MA H P. Convolutional neural network text classification model with self-attention mechanism[J]. Journal of Chinese Computer Systems, 2019, 40(6): 1137-1141.
邵清, 马慧萍. 融合self-attention机制的卷积神经网络文本分类模型[J]. 小型微型计算机系统, 2019, 40(6): 1137-1141.
[30] XU T Y, WANG Z. Text-to-image synthesis optimization based on aesthetic assessment[J]. Journal of Beijing University of Aeronautics and Astronautics, 2019, 45(12): 2438-2448.
徐天宇, 王智. 基于美学评判的文本生成图像优化[J]. 北京航空航天大学学报, 2019, 45(12): 2438-2448.
[31] BARRON J T. A general and adaptive robust loss function[J]. arXiv:1701.03077, 2017.