[1] WU Y, WANG X, LI G, et al. AnimeSR: learning real-world super-resolution models for animation videos[J]. arXiv:2206.07038, 2022.
[2] LI S Y, ZHAO S, YU W, et al. Deep animation video interpolation in the wild[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 20-25, 2021. Piscataway: IEEE, 2021: 6587-6595.
[3] BRANWEN G. Danbooru2021: a large-scale crowdsourced and tagged anime illustration dataset[EB/OL]. (2022-01-23)[2023-03-22]. https://www.gwern.net/Danbooru2021.
[4] ZHANG L, JI Y, LIU C. DanbooRegion: an illustration region dataset[C]//LNCS 12358: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 137-154.
[5] IKUTA H, OGAKI K, ODAGIRI Y. Blending texture features from multiple reference images for style transfer[C]//Proceedings of the SIGGRAPH Asia 2016 Technical Briefs, Macao, China, Dec 5-8, 2016. New York: ACM, 2016: 1-4.
[6] RIOS E A, CHENG W H, LAI B C. DAF:re: a challenging, crowd-sourced, large-scale, long-tailed dataset for anime character recognition[J]. arXiv:2101.08674, 2021.
[7] ZHENG Y, ZHAO Y, REN M, et al. Cartoon face recognition: a benchmark dataset[C]//Proceedings of the 28th ACM International Conference on Multimedia, Seattle, Oct 12-16, 2020. New York: ACM, 2020: 2264-2272.
[8] LI H, GUO S, LYU K, et al. A challenging benchmark of anime style recognition[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, Jun 18-24, 2022. Piscataway: IEEE, 2022: 4720-4729.
[9] BLENDER FOUNDATION. Blender: open source 3D creation suite[CP/OL]. (1994)[2023-01-20]. https://www.blender.org/.
[10] SHUGRINA M, LIANG Z, KAR A, et al. Creative Flow+ dataset[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, Jun 16-20, 2019. Piscataway: IEEE, 2019: 5384-5393.
[11] LI S Y, LI Y H, LI B, et al. AnimeRun: 2D animation visual correspondence from open source 3D movies[J]. arXiv:2211.05709, 2022.
[12] KIM K, PARK S, LEE J, et al. AnimeCeleb: large-scale animation CelebHeads dataset for head reenactment[C]//LNCS 13668: Proceedings of the 17th European Conference on Computer Vision, Tel Aviv, Oct 23-27, 2022. Cham: Springer, 2022: 414-430.
[13] BLOC97. Anime4K: a high-quality real-time upscaler for anime video[EB/OL]. (2018)[2023-01-22]. https://github.com/bloc97/Anime4K.
[14] NAGADOMI. Waifu2x: image and video super-resolution using deep convolutional neural networks[EB/OL]. (2015) [2023-01-22]. https://github.com/nagadomi/waifu2x.
[15] DONG C, LOY C C, HE K, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.
[16] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//LNCS 9351: Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Oct 5-9, 2015. Cham: Springer, 2015: 234-241.
[17] LIU Z, LIN Y, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Oct 10-17, 2021. Piscataway: IEEE, 2021: 9992-10002.
[18] BILIBILI. Real-CUGAN: improved training of generative adversarial networks using realistic unpaired data synthesis[EB/OL]. (2021)[2023-01-22]. https://github.com/bilibili/ailab/tree/main/Real-CUGAN.
[19] WANG X T. Anime model[EB/OL]. (2021)[2023-01-22]. https://github.com/xinntao/Real-ESRGAN/blob/master/docs/anime_model.md.
[20] WANG X T. Anime video model[EB/OL]. (2021)[2023-01-22]. https://github.com/xinntao/Real-ESRGAN/blob/master/docs/anime_video_model.md.
[21] CHEN S, ZWICKER M. Improving the perceptual quality of 2D animation interpolation[C]//LNCS 13677: Proceedings of the 17th European Conference on Computer Vision, Tel Aviv, Oct 23-27, 2022. Cham: Springer, 2022: 271-287.
[22] SÝKORA D, BURIÁNEK J, ŽÁRA J. Unsupervised colorization of black-and-white cartoons[C]//Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, Annecy, Jun 7-9, 2004. New York: ACM, 2004: 121-127.
[23] LUCAS B D, KANADE T. An iterative image registration technique with an application to stereo vision[C]//Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Aug 24-28, 1981. San Francisco: Morgan Kaufmann Publishers Inc., 1981: 674-679.
[24] ZHU H, LIU X, WONG T T, et al. Globally optimal toon tracking[J]. ACM Transactions on Graphics, 2016, 35(4): 75.
[25] LIU X, MAO X, YANG X, et al. Stereoscopizing cel animations[J]. ACM Transactions on Graphics, 2013, 32(6): 223.
[26] THASARATHAN H, EBRAHIMI M. Artist-guided semi-automatic animation colorization[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 3157-3160.
[27] SHI M, ZHANG J Q, CHEN S Y, et al. Reference-based deep line art video colorization with a few references[J]. IEEE Transactions on Visualization and Computer Graphics, 2023, 29(6): 2965-2979.
[28] HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 22-29, 2017. Piscataway: IEEE, 2017: 1501-1510.
[29] LI X, ZHANG B, LIAO J, et al. Deep sketch-guided cartoon video inbetweening[J]. IEEE Transactions on Visualization and Computer Graphics, 2021, 28(8): 2938-2952.
[30] SUN D, YANG X, LIU M Y, et al. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 8934-8943.
[31] ZHANG Q, WANG B, WEN W, et al. Line art correlation matching feature transfer network for automatic animation colorization[C]//Proceedings of the 2021 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, Jan 3-8, 2021. Piscataway: IEEE, 2021: 3872-3881.
[32] YU Y, QIAN J, WANG C, et al. Animation line art colorization based on optical flow method[J]. SSRN Electronic Journal, 2022. DOI: 10.2139/ssrn.4202289.
[33] TOPAZ LABS. Topaz Labs: AI image quality software[EB/OL]. [2023-01-28]. https://topazlabs.com.
[34] ADOBE. Adobe After Effects: create motion graphics and visual effects for film, TV, video, and web[EB/OL]. [2023-01-28]. https://www.adobe.com/products/aftereffects.html.
[35] TVPAINT. TVPaint: 2D animation software[CP/OL]. (1991)[2023-01-28]. http://www.tvpaint.com/.
[36] MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212.
[37] MITTAL A, MOORTHY A K, BOVIK A C. No-reference image quality assessment in the spatial domain[J]. IEEE Transactions on Image Processing, 2012, 21(12): 4695-4708.
[38] MA C, YANG C Y, YANG X, et al. Learning a no-reference quality metric for single-image super-resolution[J]. Computer Vision and Image Understanding, 2017, 158: 1-16.
[39] BLAU Y, MICHAELI T. The perception-distortion tradeoff[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 6228-6237.
[40] GU S, BAO J, CHEN D, et al. GIQA: generated image quality assessment[C]//LNCS 12356: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 369-385.
[41] SU S, YAN Q, ZHU Y, et al. Blindly assess image quality in the wild guided by a self-adaptive hyper network[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, Jun 13-19, 2020. Piscataway: IEEE, 2020: 3667-3676.
[42] YANG S, WU T, SHI S, et al. MANIQA: multi-dimension attention network for no-reference image quality assessment[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, Jun 18-24, 2022. Piscataway: IEEE, 2022: 1191-1200.
[43] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[44] PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 8026-8037.
[45] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[46] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-23, 2018. Piscataway: IEEE, 2018: 586-595.
[47] TEED Z, DENG J. RAFT: recurrent all-pairs field transforms for optical flow[C]//LNCS 12347: Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 402-419.