[1] GUPTA L, JAIN R, VASZKUN G. Survey of important issues in UAV communication networks[J]. IEEE Communications Surveys & Tutorials, 2016, 18(2): 1123-1152.
[2] HAYAT S, YANMAZ E, MUZAFFAR R. Survey on unmanned aerial vehicle networks for civil applications: a communications viewpoint[J]. IEEE Communications Surveys & Tutorials, 2016, 18(4): 2624-2661.
[3] YANG L, ZHANG W. Beam tracking and optimization for UAV communication[J]. IEEE Transactions on Wireless Communications, 2019, 18(11): 5367-5379.
[4] ZHANG C, ZHANG W, WANG W, et al. Research challenges and opportunities of UAV millimeter-wave communications[J]. IEEE Wireless Communications, 2019, 26(1): 58-62.
[5] ANDRE T, HUMMEL K A, SCHOELLIG A P, et al. Application-driven design of aerial communication networks[J]. IEEE Communications Magazine, 2014, 52(5): 129-137.
[6] MAO G, DRAKE S, ANDERSON B D O. Design of an extended Kalman filter for UAV localization[C]//Proceedings of the 2007 Information, Decision and Control. Piscataway: IEEE, 2007.
[7] KIM E, LEE S, KIM C, et al. Mobile beacon-based 3D-localization with multidimensional scaling in large sensor networks[J]. IEEE Communications Letters, 2010, 14(7): 647-649.
[8] SSU K F, OU C H, JIAU H C. Localization with mobile anchor points in wireless sensor networks[J]. IEEE Transactions on Vehicular Technology, 2005, 54(3): 1187-1197.
[9] VILLAS L A, GUIDONI D L, UEYAMA J. 3D localization in wireless sensor networks using unmanned aerial vehicle[C]//Proceedings of the 2013 IEEE International Symposium on Network Computing & Applications. Piscataway: IEEE, 2013: 135-142.
[10] YU T, WANG X, JIN J, et al. Cloud-orchestrated physical topology discovery of large-scale IoT systems using UAVs[J]. IEEE Transactions on Industrial Informatics, 2018, 14(5): 2261-2270.
[11] DENG L, LI J, HUANG J T, et al. Recent advances in deep learning for speech research at Microsoft[C]//Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2013: 8604-8608.
[12] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems 25, Lake Tahoe, Dec 3-6, 2012: 1097-1105.
[13] COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch[J]. Journal of Machine Learning Research, 2011, 12: 2493-2537.
[14] CHEN C, SEFF A, KORNHAUSER A, et al. DeepDriving: learning affordance for direct perception in autonomous driving[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Washington: IEEE Computer Society, 2015: 2722-2730.
[15] ESTEVA A, KUPREL B, NOVOA R A, et al. Dermatologist-level classification of skin cancer with deep neural networks[J]. Nature, 2017, 542(7639): 115-118.
[16] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489.
[17] GALLINARI P, THIRIA S, FOGELMAN F. Multilayer perceptrons and data analysis[C]//Proceedings of the IEEE 1988 International Conference on Neural Networks. Piscataway: IEEE, 1988: 391-399.
[18] LECUN Y, BOSER B, DENKER J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4): 541-551.
[19] ELMAN J L. Finding structure in time[J]. Cognitive Science, 1990, 14(2): 179-211.
[20] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[C]//Advances in Neural Information Processing Systems 27, Montreal, Dec 8-13, 2014: 2672-2680.
[21] RIFAI S, VINCENT P, MULLER X, et al. Contractive auto-encoders: explicit invariance during feature extraction[C]//Proceedings of the 28th International Conference on Machine Learning. Madison: Omnipress, 2011: 833-840.
[22] ZHANG L L, HAN S, WEI J, et al. nn-Meter: towards accurate latency prediction of deep-learning model inference on diverse edge devices[C]//Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services. New York: ACM, 2021: 81-93.
[23] LI Y, BAIK J, RAHMAN M M, et al. Pareto optimization of CNN models via hardware-aware neural architecture search for drainage crossing classification on resource-limited devices[C]//Proceedings of the 2023 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis. New York: ACM, 2023: 1767-1775.
[24] NAIR S, ABBASI S, WONG A, et al. MAPLE-Edge: a runtime latency predictor for edge devices[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 3660-3668.
[25] KONG Y, YANG P, CHENG Y. Edge-assisted on-device model update for video analytics in adverse environments[C]//Proceedings of the 31st ACM International Conference on Multimedia. New York: ACM, 2023: 9051-9060.
[26] LIU S, WANG T, LI J, et al. AdaMask: enabling machine-centric video streaming with adaptive frame masking for DNN inference offloading[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 3035-3044.
[27] 周飞燕, 金林鹏, 董军. 卷积神经网络研究综述[J]. 计算机学报, 2017, 40(6): 1229-1251.
ZHOU F Y, JIN L P, DONG J. Review of convolutional neural network[J]. Chinese Journal of Computers, 2017, 40(6): 1229-1251.
[28] GORI M, MONFARDINI G, SCARSELLI F. A new model for learning in graph domains[C]//Proceedings of the 2005 IEEE International Joint Conference on Neural Networks. Piscataway: IEEE, 2005: 729-734.
[29] BRUNA J, ZAREMBA W, SZLAM A, et al. Spectral networks and locally connected networks on graphs[C]//Proceedings of the 2nd International Conference on Learning Representations, Banff, Apr 14-16, 2014.
[30] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 5998-6008.
[31] KIPF T N, WELLING M. Variational graph auto-encoders[EB/OL]. [2024-01-15]. https://arxiv.org/abs/1611.07308.
[32] PAN S, HU R, LONG G, et al. Adversarially regularized graph autoencoder for graph embedding[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Jul 13-19, 2018: 2609-2615.
[33] JOHNSON D D. Learning graphical state transitions[C]// Proceedings of the 5th International Conference on Learning Representations, Toulon, Apr 24-26, 2017.
[34] XU D, CHENG W, LUO D, et al. Spatio-temporal attentive RNN for node classification in temporal attributed graphs[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, Aug 10-16, 2019: 3947-3953.
[35] LUO J H, WU J, LIN W. ThiNet: a filter level pruning method for deep neural network compression[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Washington: IEEE Computer Society, 2017: 5058-5066.
[36] GUPTA S, AGRAWAL A, GOPALAKRISHNAN K, et al. Deep learning with limited numerical precision[C]//Proceedings of the 32nd International Conference on Machine Learning, Lille, Jul 6-11, 2015: 1737-1746.
[37] VANHOUCKE V, SENIOR A, MAO M Z. Improving the speed of neural networks on CPUs[C]//Advances in Neural Information Processing Systems 24, Granada, Dec 12-14, 2011: 4.
[38] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer[EB/OL]. [2024-01-15]. https://arxiv.org/abs/1612.03928.
[39] DEFFERRARD M, BRESSON X, VANDERGHEYNST P. Convolutional neural networks on graphs with fast localized spectral filtering[C]//Advances in Neural Information Processing Systems 29, Barcelona, Dec 5-10, 2016: 3837-3845.
[40] BENGIO Y, SIMARD P, FRASCONI P. Learning long-term dependencies with gradient descent is difficult[J]. IEEE Transactions on Neural Networks, 1994, 5(2): 157-166.
[41] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[42] CHO K, VAN MERRIENBOER B, BAHDANAU D, et al. On the properties of neural machine translation: encoder-decoder approaches[EB/OL]. [2024-01-15]. https://arxiv.org/abs/1409.1259.
[43] CHUNG J, GULCEHRE C, CHO K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[EB/OL]. [2024-01-15]. https://arxiv.org/abs/1412.3555.
[44] 苗旭鹏, 王驭捷, 沈佳, 等. 面向多GPU的图神经网络训练加速[J]. 软件学报, 2023, 34(9): 4407-4420.
MIAO X P, WANG Y J, SHEN J, et al. Graph neural network training acceleration for multi-GPUs[J]. Journal of Software, 2023, 34(9): 4407-4420.
[45] AZAD A, JACQUELIN M, BULUÇ A, et al. The reverse Cuthill-McKee algorithm in distributed-memory[C]//Proceedings of the 2017 IEEE International Parallel and Distributed Processing Symposium. Washington: IEEE Computer Society, 2017: 22-31.
[46] CHIN T W, DING R Z, ZHANG C, et al. Towards efficient model compression via learned global ranking[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1515-1525.
[47] 王炯. 基于结构化剪枝的深度神经网络模型压缩方法研究[D]. 南京: 南京邮电大学, 2022.
WANG J. Research on deep neural network model compression method based on structural pruning[D]. Nanjing: Nanjing University of Posts and Telecommunications, 2022.