Brain-Like Computing: Contents in This Journal

    Hardware Architecture of Stochastic Computing Neural Network
    CHEN Yuhao, SONG Yinjie, ZHU Yanan, GAO Yunfei, LI Hongge
    Journal of Frontiers of Computer Science and Technology    2021, 15 (11): 2105-2115.   DOI: 10.3778/j.issn.1673-9418.2105050

    Stochastic computing is a form of logic computation that converts binary numbers into probabilistically coded digital pulse streams. At the cost of computational throughput and latency, it offers low power consumption and high energy efficiency. This paper explains the basic concepts of stochastic computing and designs single-channel and multi-channel stochastic computing circuits to improve speed and accuracy. Based on these circuits, a stochastic pulse neuron is designed, and a reconfigurable neural network computing architecture, BUAA-ChouSuan, is realized. The design is implemented on a Kintex-7 FPGA; the lookup-table (LUT) usage of the stochastic multiply-accumulate (MAC) unit is 80% lower than that of a conventional MAC. In stochastic convolutional neural network (SCNN) experiments, LeNet and AlexNet are tested. At a clock frequency of 350 MHz, the average energy efficiency reaches 0.536 TSOPS/W, and processing element (PE) utilization exceeds 90%.
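    The reason a stochastic MAC is so cheap in LUTs can be illustrated with a minimal software sketch. In unipolar stochastic computing, a value in [0, 1] is encoded as the 1-density of a bit stream, so multiplication reduces to a bitwise AND of two independent streams, and stream length trades latency against accuracy. The Python sketch below only illustrates this coding scheme under assumed stream lengths and seeds; it is not the paper's BUAA-ChouSuan circuit.

```python
import random

def to_stream(p, length, seed=None):
    """Encode a value p in [0, 1] as a unipolar stochastic bit stream:
    each bit is 1 with probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_stream(bits):
    """Decode a unipolar stream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

def sc_multiply(a, b, length=1024):
    """Unipolar stochastic multiplication: a bitwise AND of two
    independent streams yields a stream whose 1-density approximates a*b."""
    sa = to_stream(a, length, seed=1)
    sb = to_stream(b, length, seed=2)
    product_stream = [x & y for x, y in zip(sa, sb)]
    return from_stream(product_stream)

if __name__ == "__main__":
    # Longer streams trade latency for accuracy; shorter ones do the opposite.
    for n in (64, 1024, 16384):
        print(n, sc_multiply(0.5, 0.25, length=n))  # exact value is 0.125
```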

    Computing In-Memory Design Based on Double Word Line and Double Threshold 4T SRAM
    LIN Zhiting, NIU Jianchao, WU Xiulong, PENG Chunyu
    Journal of Frontiers of Computer Science and Technology    2021, 15 (11): 2116-2126.   DOI: 10.3778/j.issn.1673-9418.2011090

    To cope with the memory wall of the von Neumann architecture, the computing-in-memory (CIM) architecture embeds logic inside the memory and completes operations while reading data, so that the storage cells themselves gain computing capability and data transfer between the processor and the memory is reduced. To realize a large-capacity, low-cost memory, this paper proposes a storage system based on a double-word-line, double-threshold 4T SRAM (static random access memory), which supports not only data storage and readout but also BCAM (binary content addressable memory) operations and logic operations such as AND, NOR, and XOR. During a logic operation, two rows of stored data are selected through the decoding circuit, all bit lines are pre-discharged to a low level, and the bit-line sense amplifier compares the bit-line voltage with a reference voltage and outputs the operation result. During a BCAM operation, the externally input data are decoded by the decoding circuit to control the on/off state of the left and right transmission transistors of the storage cell, and the bit-line sense amplifiers output the matching result through a NOR gate. The proposed circuit is built and simulated in a 65 nm CMOS process. Compared with the 6T memory cell, the area of the 4T memory cell is reduced by 25%. Compared with the single-word-line 4T structure, the double-word-line 4T structure saves about 47% of the read power consumption in very large scale integration (VLSI) applications. The maximum energy consumed by data matching during a BCAM operation is 909.72 fJ, and the operation speed of an N-column array can reach 16161.6×N MB/Hz at a word-line voltage of 600 mV.
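    The dual-row sensing scheme described above can be mimicked with a small behavioral model: when two rows of a column are activated on a pre-discharged bit line, the number of selected cells storing '1' sets the bit-line level, and comparing that level against reference voltages recovers AND, NOR, and XOR. The Python sketch below is purely behavioral, with assumed voltage steps and reference values; it is not the paper's transistor-level 4T SRAM circuit.

```python
def bitline_level(bit_a: int, bit_b: int, step: float = 0.3) -> float:
    """Idealized bit-line voltage after both selected rows are activated:
    each selected cell storing '1' is assumed to charge the pre-discharged
    bit line by one unit step (toy model, not a circuit simulation)."""
    return (bit_a + bit_b) * step  # 0.0, 0.3 or 0.6 in this toy model

def in_memory_logic(bit_a: int, bit_b: int):
    """Derive AND / NOR / XOR of two stored bits from the bit-line level,
    using two assumed sense-amplifier reference voltages."""
    v = bitline_level(bit_a, bit_b)
    ref_low, ref_high = 0.15, 0.45
    and_out = v > ref_high                  # both cells store 1
    nor_out = v < ref_low                   # neither cell stores 1
    xor_out = not and_out and not nor_out   # exactly one cell stores 1
    return int(and_out), int(nor_out), int(xor_out)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", in_memory_logic(a, b))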

    PEST: Energy-Efficient NEST Brain-Like Simulator Implemented by PYNQ Cluster
    LI Peiqi, YU Gongjian, HUA Xia, LIU Jiahang, CHAI Zhilei
    Journal of Frontiers of Computer Science and Technology    2021, 15 (11): 2127-2141.   DOI: 10.3778/j.issn.1673-9418.2011047

    Large-scale brain-like simulation with high performance and low power consumption is one of the most challenging problems in brain-like computing. At present, brain-like computing is implemented mainly in hardware or in software. Dedicated brain-like chips and systems implemented in hardware provide better energy efficiency, but they are costly and poorly adaptable. Software-based simulation (such as NEST) offers good usability but suffers from slow computation. To combine the two approaches through software and hardware co-design, preserving a good application ecosystem while obtaining higher computing energy efficiency, this paper proposes PEST, an energy-efficient implementation of the NEST brain-like simulator based on the FPGA heterogeneous platform PYNQ cluster. By building a large-scale PYNQ cluster, it designs software and hardware data interaction interfaces to realize a scalable brain-like computing system based on the NEST simulator, designs FPGA hardware circuits for integrate-and-fire (IAF) neurons, and uses MPI distributed computing to improve NEST's computing efficiency. Experimental results show that, for different computing models and with the optimal configuration of the PYNQ cluster, the performance of the neuron-update stage on PEST is improved by more than 4.6 times compared with an AMD 3600X and by more than 7.5 times compared with a Xeon 2620, and PEST's neuron-update energy efficiency is more than 5.3 times higher than that of the 3600X and 7.9 times higher than that of the Xeon 2620.
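    The per-timestep neuron update that PEST accelerates can be sketched in software. The snippet below is a simplified forward-Euler leaky integrate-and-fire (IAF) update with assumed parameter values; NEST's iaf_* models use exact integration and the paper's FPGA circuit differs in detail, so this is only an illustration of the kind of computation being offloaded.

```python
import numpy as np

def iaf_update(v, i_syn, dt=0.1, tau_m=10.0, c_m=250.0,
               v_rest=-70.0, v_th=-55.0, v_reset=-70.0):
    """Advance membrane potentials v (mV) by one timestep dt (ms).

    v     : array of membrane potentials, one per neuron
    i_syn : array of synaptic input currents (pA)
    Returns the updated potentials and a boolean spike mask.
    """
    # Leaky integration: decay toward v_rest plus input current over c_m (pF).
    dv = (-(v - v_rest) / tau_m + i_syn / c_m) * dt
    v = v + dv
    spiked = v >= v_th
    v[spiked] = v_reset  # reset neurons that crossed threshold
    return v, spiked

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = np.full(1000, -70.0)              # 1000 neurons starting at rest
    for _ in range(100):                  # 10 ms of simulated time
        i_syn = rng.normal(400.0, 50.0, size=v.shape)
        v, spiked = iaf_update(v, i_syn)
    print(int(spiked.sum()), "neurons spiked in the final step")
```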
