[1] United Nations. Department of Economic and Social Affairs. Population Division. World population prospects: the 2015 revision[EB/OL]. (2015-12-09)[2019-06-01]. http://www.un.org/en/development/desa/population/theme/ageing/WPA2015.shtml.
[2] World Health Organization. Falls: key facts[EB/OL]. (2018-01-16)[2019-06-01]. http://www.who.int/news-room/fact-sheets/detail/falls.
[3] Mubashir M, Shao L, Seed L. A survey on fall detection: principles and approaches[J]. Neurocomputing, 2013, 100(2): 144-152.
[4] Liang W J, Zhang Y H, Jing H, et al. Research on fall detection method based on SVM[J]. Measurement & Control Technology, 2014, 33(9): 33-35.
[5] Kong X B, Lin M, Tomiyama H. Fall detection for elderly persons using a depth camera[C]//Proceedings of the 2017 International Conference on Advanced Mechatronic Systems, Xiamen, Dec 6-9, 2017. Piscataway: IEEE, 2018: 269-273.
[6] Min W D, Cui H, Rao H, et al. Detection of human falls on furniture using scene analysis based on deep learning and activity characteristics[J]. IEEE Access, 2018, 6: 9324-9335.
[7] Shi X, Zhang T. Design of a wearable fall detection device[J]. Chinese Journal of Scientific Instrument, 2012, 33(3): 575-580.
[8] Mirmahboub B, Samavi S, Karimi N, et al. Automatic monocular system for human fall detection based on variations in silhouette area[J]. IEEE Transactions on Biomedical Engineering, 2013, 60(2): 427-436.
[9] Ren S Q, He K M, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[10] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, Dec 3-6, 2012. New York: ACM, 2012: 1097-1105.
[11] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Jun 27-30, 2016. Washington: IEEE Computer Society, 2016: 770-778.
[12] Simonyan K, Zisserman A. Two-stream convolutional networks for action recognition in videos[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Dec 8-13, 2014. Cambridge: MIT Press, 2014: 568-576.
[13] Tran D, Bourdev L, Fergus R, et al. Learning spatiotemporal features with 3D convolutional networks[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Dec 13-16, 2015. Piscataway: IEEE, 2015: 4489-4497.
[14] Qiu Z F, Yao T, Mei T. Learning spatio-temporal representation with pseudo-3D residual networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Oct 24-27, 2017. Piscataway: IEEE, 2017: 5534-5542.
[15] Hara K, Kataoka H, Satoh Y. Learning spatio-temporal features with 3D residual networks for action recognition[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, Venice, Oct 28-29, 2017. Piscataway: IEEE, 2017: 3154-3160.
[16] Tran D, Wang H, Torresani L, et al. A closer look at spatio-temporal convolutions for action recognition[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun 18-22, 2018. Piscataway: IEEE, 2018: 6450-6459.
[17] Auvinet E, Rougier C, Meunier J, et al. Multiple cameras fall dataset[EB/OL]. [2019-06-01]. http://www.iro.umontreal.ca/~labimage/Dataset/.
[18] Kwolek B, Kepski M. Human fall detection on embedded platform using depth maps and wireless accelerometer[J]. Computer Methods & Programs in Biomedicine, 2014, 117(3): 489-501.
[19] Soomro K, Zamir A R, Shah M. UCF101: a dataset of 101 human action classes from videos in the wild[EB/OL]. [2019-06-01]. http://crcv.ucf.edu/data/UCF101.php.
[20] Wang L M, Xiong Y J, Wang Z, et al. Towards good practices for very deep two-stream convnets[J]. arXiv:1507.02159, 2015.