[1] Yang Y D. Current situation and future prospect of industrial robots[J]. Electronic Technology and Software Engineering, 2019(19): 249-250. 杨粤东. 工业机器人的现状及未来展望[J]. 电子技术与软件工程, 2019(19): 249-250.
[2] Li A. Research on quality control methods and strategies of multi-variety and small-batch assembly enterprises[D]. Nanjing: Nanjing University, 2019. 李昂. 多品种小批量装配型企业的质控方法与策略研究[D]. 南京: 南京大学, 2019.
[3] Zhang H X, Lv X Y, Leng W C, et al. Recent advances on vision-based robot learning by demonstration[J]. Recent Patents on Mechanical Engineering, 2018, 11(4): 269-284.
[4] Hovland G E, Sikka P, McCarragher B J. Skill acquisition from human demonstration using a hidden Markov model[C]//Proceedings of the 1996 IEEE International Conference on Robotics & Automation, Minneapolis, Apr 22-28, 1996. Piscataway: IEEE, 1996: 2706-2711.
[5] Billard A, Calinon S, Dillmann R, et al. Robot programming by demonstration[M]//Handbook of Robotics. Berlin, Heidelberg: Springer, 2008: 1371-1394.
[6] Aksoy E E, Abramov A, Dorr J, et al. Learning the semantics of object-action relations by observation[J]. The International Journal of Robotics Research, 2011, 30(10): 1229-1249.
[7] Cubek R, Ertel W, Palm G. High-level learning from demonstration with conceptual spaces and subspace clustering[C]//Proceedings of the 2015 IEEE International Conference on Robotics and Automation, Seattle, May 26-30, 2015. Piscataway: IEEE, 2015: 2592-2597.
[8] Dean E, Ramirez A K, Bergner F, et al. Integration of robotic technologies for rapidly deployable robots[J]. IEEE Transactions on Industrial Informatics, 2017, 14(4): 1691-1700.
[9] Ramirez-Amaro K, Dean-Leon E, Bergner F, et al. A semantic-based method for teaching industrial robots new tasks[J]. KI-Künstliche Intelligenz, 2019, 33(2): 117-122.
[10] Kyrarini M, Haseeb M A, Ristic-Durrant D, et al. Robot learning of industrial assembly task via human demonstrations[J]. Autonomous Robots, 2018, 43(1): 239-257.
[11] Gu Y, Sheng W, Ou Y. Automated assembly skill acquisition through human demonstration[C]//Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, May 31-Jun 7, 2014. Piscataway: IEEE, 2014: 6313-6318.
[12] Gao G. Operation behavior recognition method for learning from demonstration of service robots[D]. Nanjing: Southeast University, 2018. 高歌. 面向服务机器人示范学习的操作行为识别方法[D]. 南京: 东南大学, 2018.
[13] Liu H, Qian K, Gui B X, et al. Task generalization of robot based on multi-demonstration motion primitive parameterization learning[J]. Robot, 2019, 41(5): 574-582. 刘环, 钱堃, 桂博兴, 等. 基于多演示动作基元参数化学习的机器人任务泛化[J]. 机器人, 2019, 41(5): 574-582.
[14] Wang Y, Xiong R, Shen L, et al. Towards learning from demonstration system for parts assembly: a graph based representation for knowledge[C]//Proceedings of the IEEE 4th Annual International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, Hong Kong, China, Jun 4-7, 2014. Piscataway: IEEE, 2014: 174-179.
[15] Wang Y, Cai J, Wang Y B, et al. Probabilistic graph based spatial assembly relation inference for programming of assembly task by demonstration[C]//Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Sep 28-Oct 2, 2015. Piscataway: IEEE, 2015: 4402-4407.
[16] Ji L, Xiong R, Wang Y, et al. A method of simultaneously action recognition and video segmentation of video streams[C]//Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics, Macau, China, Dec 5-8, 2017. Piscataway: IEEE, 2017: 1515-1520.
[17] Zheng Y J. Optimization and evaluation of imitation learning algorithm based on Gaussian mixture model[D]. Beijing: Beijing University of Technology, 2017. 郑逸加. 基于高斯混合模型的模仿学习算法的优化与评价[D]. 北京: 北京工业大学, 2017.
[18] Steinmetz F, Nitsch V, Stulp F. Intuitive task level programming by demonstration through semantic skill recognition[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3742-3749.
[19] Rohrbach M, Rohrbach A, Regneri M, et al. Recognizing fine-grained and composite activities using hand-centric features and script data[J]. International Journal of Computer Vision, 2016, 119(3): 346-373.
[20] Huang B D, Ye M L, Li S L, et al. A vision-guided multi-robot cooperation framework for learning-by-demonstration and task reproduction[C]//Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, Sep 24-28, 2017. Piscataway: IEEE, 2017: 4797-4804.
[21] Wang Y, Jiao Y M, Xiong R, et al. MASD: a multimodal assembly skill decoding system for robot programming by demonstration[J]. IEEE Transactions on Automation Science and Engineering, 2018, 15(4): 1722-1734.
[22] Kramberger A. A comparison of learning by demonstration methods for force-based robot skills[C]//Proceedings of the 2014 International Conference on Robotics in Alpe-Adria-Danube Region, Smolenice, Sep 3-5, 2014. Piscataway: IEEE, 2014: 1-6.
[23] Schutter D, Joris D S, Aertbelien E. Combining imitation learning with constraint-based task specification and control[J]. IEEE Robotics and Automation Letters, 2019, 4(2): 1892-1899.
[24] Takano W, Yamada Y, Nakamura Y. Generation of action description from classification of motion and object[J]. Robotics and Autonomous Systems, 2017, 91: 247-257.
[25] Jesus S, David A R, Mauricio M, et al. Semantic reasoning in service robots using expert systems[J]. Robotics and Autonomous Systems, 2019, 114: 77-92.
[26] Ramirez-Amaro K, Yang Y Z, Cheng G D. A survey on semantic-based methods for the understanding of human movements[J]. Robotics and Autonomous Systems, 2019, 119: 31-50.
[27] Caccavale R, Saveriano M, Finzi A, et al. Kinesthetic teaching and attentional supervision of structured tasks in human-robot interaction[J]. Autonomous Robots, 2019, 43(6): 1291-1307.
[28] Zhang S L. Research on motion planning method of robotic arm visual interaction LFD[D]. Wuhan: Wuhan University of Science and Technology, 2019. 张思伦. 机械臂视觉交互LFD运动规划方法的研究[D]. 武汉: 武汉科技大学, 2019.
[29] Ferreira M, Costa P, Rocha L, et al. Stereo-based real-time 6-DoF work tool tracking for robot programing by demonstration[J]. The International Journal of Advanced Manufacturing Technology, 2016, 85: 57-69.
[30] Lin H I, Lin Y H. A novel teaching system for industrial robots[J]. Sensors, 2014, 14(4): 6012-6031.
[31] Wang C Y. Study on demonstration learning of humanoid robot arm based on Kinect[D]. Harbin: Harbin Institute of Technology, 2017. 王朝阳. 基于Kinect的类人机械臂演示学习研究[D]. 哈尔滨: 哈尔滨工业大学, 2017.
[32] Jha A, Chiddarwar S S, Alakshendra V, et al. Kinematics-based approach for robot programming via human arm motion[J]. Journal of the Brazilian Society of Mechanical Sciences and Engineering, 2016, 39(7): 2659-2675.
[33] Sander M, Aguirre A, Benavides F. Humanoid robot learning by demonstration based on visual bootstrapping technique[C]//Proceedings of the 2016 XLII Latin American Computing Conference, Valparaiso, Oct 10-14, 2016. Piscataway: IEEE, 2016: 1-8.
[34] Ji L. Part pose inference and grasping planning for demonstration programming of industrial assembly[D]. Hangzhou: Zhejiang University, 2019. 吉梁. 面向工业装配演示编程的零件位姿推理与抓取规划[D]. 杭州: 浙江大学, 2019.
[35] Chen J, Ren H L, Lau H Y K. Automate robot reaching task with learning from demonstration[C]//Proceedings of the 18th International Conference on Advanced Robotics, Hong Kong, China, Jul 10-12, 2017. Piscataway: IEEE, 2017: 543-548.
[36] Duan J H, Ou Y S, Xu S, et al. Sequential learning unification controller from human demonstrations for robotic compliant manipulation[J]. Neurocomputing, 2019, 366: 35-45.
[37] Koc O, Peters J. Learning to serve: an experimental study for a new learning from demonstrations framework[J]. IEEE Robotics and Automation Letters, 2018, 4(2): 1784-1791.
[38] Dantam N, Essa I, Stilman M. Linguistic transfer of human assembly tasks to robots[C]//Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots & Systems, Vilamoura, Oct 7-12, 2012. Piscataway: IEEE, 2012: 237-242.
[39] Ahmadzadeh S R, Paikan A, Mastrogiovanni F, et al. Learning symbolic representations of actions from human demonstrations[C]//Proceedings of the 2015 IEEE International Conference on Robotics and Automation, Seattle, May 26-30, 2015. Piscataway: IEEE, 2015: 3801-3808.
[40] Das N, Prakash R, Behera L. Learning object manipulation from demonstration through vision for the 7-DOF barrett WAM[C]//Proceedings of the 2016 IEEE International Conference on Control, Measurement and Instrumentation, Kolkata, Jan 8-10, 2016. Piscataway: IEEE, 2016: 391-396.
[41] Gu Y, Sheng W, Crick C, et al. Automated assembly skill acquisition and implementation through human demonstration[J]. Robotics and Autonomous Systems, 2018, 99: 1-16.
[42] Lambrecht J, Kleinsorge M, Rosenstrauch M, et al. Spatial programming for industrial robots through task demonstration[J]. International Journal of Advanced Robotic Systems, 2013, 10(5): 254.
[43] Chen F, Lv H, Pang Z, et al. WristCam: a wearable sensor for hand trajectory gesture recognition and intelligent human-robot interaction[J]. IEEE Sensors Journal, 2019, 19(19): 8441-8450.
[44] Ramirez-Amaro K, Beetz M, Cheng G. Transferring skills to humanoid robots by extracting semantic representations from observations of human activities[J]. Artificial Intelligence, 2017, 247: 95-118.
[45] Lin H I, Chiang Y P. Understanding human hand gestures for learning robot pick-and-place tasks[J]. International Journal of Advanced Robotic Systems, 2015, 12(5): 49.
[46] Wang Y, Xiong R, Yu H S, et al. Perception of demonstration for automatic programing of robotic assembly: framework, algorithm, and validation[J]. IEEE/ASME Transactions on Mechatronics, 2018, 23(3): 1059-1070.
[47] Lee K, Su Y, Kim T K, et al. A syntactic approach to robot imitation learning using probabilistic activity grammars[J]. Robotics and Autonomous Systems, 2013, 61(12): 1323-1334.
[48] Chen S J, Yin D, Zhang R, et al. Study on cognitive reasoning of home service robot demonstration learning[J]. Journal of Chinese Computer Systems, 2013, 34(6): 1441-1445. 陈世佳, 尹东, 张荣, 等. 认知推理的家庭服务机器人演示学习研究[J]. 小型微型计算机系统, 2013, 34(6): 1441-1445.
[49] Wang F, Qi H, Zhou X Q, et al. Collaborative robot demonstration programming and optimization method based on multi-source information fusion[J]. Robot, 2018(4): 551-559. 王斐, 齐欢, 周星群, 等. 基于多源信息融合的协作机器人演示编程及优化方法[J]. 机器人, 2018(4): 551-559.
[50] Wang W T, Li R, Chen Y, et al. Facilitating human-robot collaborative tasks by teaching learning collaboration from human demonstrations[J]. IEEE Transactions on Automation Science and Engineering, 2018, 16(2): 1-14.
[51] Tremblay J, To T, Molchanov A, et al. Synthetically trained neural networks for learning human-readable plans from real-world demonstrations[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, Brisbane, May 21-25, 2018. Piscataway: IEEE, 2018: 5659-5666.
[52] Li B Y, Lu T, Li X C, et al. An automatic robot skills learning system from robot's real-world demonstrations[C]//Proceedings of the 2019 Chinese Control and Decision Conference, Nanchang, Jun 3-5, 2019. Piscataway: IEEE, 2019: 5138-5142.
[53] Zhang T H, Mccarthy Z, Jow O, et al. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, Brisbane, May 21-25, 2018. Piscataway: IEEE, 2018: 5628-5635.
[54] Rahmatizadeh R, Abolghasemi P, Boloni L, et al. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, Brisbane, May 21-25, 2018. Piscataway: IEEE, 2018: 3758-3765.