[1] VON NEUMANN. The computer and the brain[M]. Beijing: Peking University Press, 2010.
[2] AKOPYAN F, SAWADA J, CASSIDY A, et al. TrueNorth: design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2015, 34(10): 1537-1557.
[3] SATO S, NEMOTO K, AKIMOTO S, et al. Implementation of a new neurochip using stochastic logic[J]. IEEE Transactions on Neural Networks, 2003, 14(5): 1122-1127.
[4] LI H G, HAYAKAWA Y, SATO S, et al. Hardware implementation of an inverse function delayed neural network using stochastic logic[J]. IEICE Transactions on Information & Systems, 2006, E89-D(9): 2572-2578.
[5] LIU Y D, LIU S T, WANG Y Z, et al. A survey of stochastic computing neural networks for machine learning applications[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(7): 2809-2824.
[6] FARAJI S R, NAJAFI M H, LI B Z, et al. Energy-efficient convolutional neural networks with deterministic bit-stream processing[C]//Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, Florence, Mar 25-29, 2019. Piscataway: IEEE, 2019: 1757-1762.
[7] SIM H U, NGUYEN D, LEE J, et al. Scalable stochastic-computing accelerator for convolutional neural networks[C]//Proceedings of the 22nd Asia and South Pacific Design Automation Conference, Chiba, Jan 16-19, 2017. Piscataway: IEEE, 2017: 696-701.
[8] KIM K, KIM J, YU J, et al. Dynamic energy accuracy trade-off using stochastic computing in deep neural networks[C]//Proceedings of the 53rd Annual Design Automation Conference, Austin, Jun 5-9, 2016. New York: ACM, 2016: 124.
[9] LIU Y D, WANG Y Z, LOMBARDI F, et al. An energy-efficient online learning stochastic computational deep belief network[J]. IEEE Journal of Emerging and Selected Topics in Circuits and Systems, 2018, 8(3): 454-465.
[10] JENSON D, RIEDEL M D. A deterministic approach to stochastic computation[C]//Proceedings of the 35th International Conference on Computer-Aided Design, Austin, Nov 7-10, 2016. New York: ACM, 2016: 102.
[11] ZHANG Z D, WANG R S, ZHANG Z, et al. Circuit reliability comparison between stochastic computing and binary computing[J]. IEEE Transactions on Circuits and Systems II: Express Briefs, 2020, 67(12): 3342-3346.
[12] SIM H U, LEE J. A new stochastic computing multiplier with application to deep convolutional neural networks[C]//Proceedings of the 54th Annual Design Automation Conference, Austin, Jun 18-22, 2017. New York: ACM, 2017: 29.
[13] GUO K Y, ZENG S L, YU J C, et al. A survey of FPGA-based neural network inference accelerators[J]. ACM Transactions on Reconfigurable Technology and Systems, 2019, 12(1): 2.
[14] YUAN Z, LIU Y P, YUE J S, et al. STICKER: an energy-efficient multi-sparsity compatible accelerator for convolutional neural networks in 65-nm CMOS[J]. IEEE Journal of Solid-State Circuits, 2020, 55(2): 465-477.
[15] MOONS B, VERHELST M. A 0.3-2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets[C]//Proceedings of the 2016 IEEE Symposium on VLSI Circuits, Honolulu, Jun 15-17, 2016. Piscataway: IEEE, 2016: 1-2. |