-
Publication No.: US20200250539A1
Publication Date: 2020-08-06
Application No.: US16482710
Filing Date: 2018-07-13
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
Abstract: The application provides a processing method and device. Weights and input neurons are quantized separately, and a weight dictionary, a weight codebook, a neuron dictionary, and a neuron codebook are determined. A computational codebook is then determined from the weight codebook and the neuron codebook. Because the computational codebook combines the two types of quantized data, it simplifies subsequent data processing.
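The idea behind the computational codebook can be illustrated with a minimal Python sketch. The codebook values and sizes below are invented for illustration, not taken from the patent: once weights and neurons are each quantized to a small set of codewords, every possible product can be precomputed, so a run-time multiplication becomes a table lookup by two indices.

```python
def nearest(value, codebook):
    """Index of the codeword closest to value (the quantization step)."""
    return min(range(len(codebook)), key=lambda j: abs(value - codebook[j]))

# Hypothetical tiny codebooks standing in for the clustered
# weight/neuron dictionaries described in the abstract.
weight_codebook = [-0.5, 0.0, 0.5, 1.0]
neuron_codebook = [0.0, 0.25, 0.75]

# "Computational codebook": the product of every codeword pair,
# precomputed once.
comp_codebook = [[w * n for n in neuron_codebook] for w in weight_codebook]

def quantized_product(w, n):
    """Replace a multiplication with two index lookups into the table."""
    wi = nearest(w, weight_codebook)
    ni = nearest(n, neuron_codebook)
    return comp_codebook[wi][ni]

print(quantized_product(0.48, 0.8))  # looks up 0.5 * 0.75 = 0.375
```

The table has only `len(weight_codebook) * len(neuron_codebook)` entries, which is why combining the two quantized representations keeps the lookup structure small.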
-
Publication No.: US20200174547A1
Publication Date: 2020-06-04
Application No.: US16615293
Filing Date: 2019-01-09
Inventors: Zhou Fang, Bingrui Wang
IPC Classes: G06F1/3287, G06F1/324, G06F1/3296
Abstract: Disclosed in the present application are a control device, method, and equipment for a processor. The control device comprises an arithmetic circuit and a memory connected to the arithmetic circuit. The arithmetic circuit outputs a control signal according to acquired sensor data, and the control signal controls the processor. The control device, method, and equipment may determine, according to preset key information, whether the processor needs to be started, or whether the energy consumption of a processor currently in operation needs to be reduced, thereby improving endurance.
-
Publication No.: US20200160221A1
Publication Date: 2020-05-21
Application No.: US16715170
Filing Date: 2019-12-16
Inventors: Yao ZHANG, Bingrui WANG
Abstract: The present disclosure provides a computation device configured to perform a machine learning computation. The device includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract the computation instruction from the storage unit, parse it into one or more operation instructions, and send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented as fixed-point data, thereby improving the processing speed and efficiency of training operations.
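The fixed-point representation mentioned in the abstract can be sketched generically; the 8-fractional-bit format below is an illustrative choice, not the patent's format. A float is scaled by 2^f and stored as an integer, and a product of two such integers carries 2f fractional bits, so it is shifted right to renormalize:

```python
def to_fixed(x, frac_bits):
    """Encode a float as a signed integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits):
    """Decode a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point values; the raw product has 2*frac_bits
    fractional bits, so shift right once to renormalize."""
    return (a * b) >> frac_bits

fb = 8
a = to_fixed(1.5, fb)     # 1.5 * 256 = 384
b = to_fixed(-0.25, fb)   # -0.25 * 256 = -64
p = fixed_mul(a, b, fb)
print(from_fixed(p, fb))  # -0.375
```

All arithmetic stays in integers, which is the source of the speed and efficiency gains the abstract claims for training operations.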
-
Publication No.: US20200097826A1
Publication Date: 2020-03-26
Application No.: US16699027
Filing Date: 2019-11-28
Inventors: Zidong Du, Xuda Zhou, Shaoli Liu, Tianshi Chen
IPC Classes: G06N3/08, G06N3/04, G06F13/16, G06F1/3296, G06F9/38
Abstract: The present disclosure provides a processing device including a coarse-grained pruning unit configured to perform coarse-grained pruning on the weights of a neural network to obtain pruned weights, and an operation unit configured to train the neural network according to the pruned weights. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, to set all or part of the M weights to 0. The processing device can reduce memory accesses while reducing the amount of computation, thereby achieving an acceleration ratio and reducing energy consumption.
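Coarse-grained pruning with a sliding window can be sketched as follows. The "preset condition" used here (every weight in the window is below a magnitude threshold) and the window size are illustrative assumptions; the patent leaves the condition abstract:

```python
def coarse_prune(weights, window, threshold):
    """Slide a fixed-size window over the weight list; when every weight
    in the window is below `threshold` in magnitude (one possible
    'preset condition'), zero the whole group at once."""
    pruned = list(weights)
    for start in range(0, len(pruned) - window + 1, window):
        group = pruned[start:start + window]
        if all(abs(w) < threshold for w in group):
            pruned[start:start + window] = [0.0] * window
    return pruned

w = [0.02, -0.01, 0.9, 0.03, 0.01, 0.02, -0.8, 0.04]
print(coarse_prune(w, window=2, threshold=0.05))
# groups (0.02, -0.01) and (0.01, 0.02) are zeroed;
# groups containing 0.9 and -0.8 survive intact
```

Zeroing whole groups rather than individual weights is what makes the sparsity "coarse-grained": entire blocks can be skipped in storage and computation, which is how the memory-access reduction arises.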
-
Publication No.: US20200097806A1
Publication Date: 2020-03-26
Application No.: US16699029
Filing Date: 2019-11-28
Inventors: Tianshi Chen, Yifan Hao, Shaoli Liu
IPC Classes: G06N3/063, G06N3/04, G06F12/0875, G06N3/08
Abstract: The present disclosure provides a processing device including a coarse-grained pruning unit configured to perform coarse-grained pruning on the weights of a neural network to obtain pruned weights, and an operation unit configured to train the neural network according to the pruned weights. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, to set all or part of the M weights to 0. The processing device can reduce memory accesses while reducing the amount of computation, thereby achieving an acceleration ratio and reducing energy consumption.
-
Publication No.: US20200097796A1
Publication Date: 2020-03-26
Application No.: US16698991
Filing Date: 2019-11-28
Inventors: Zidong DU, Shaoli LIU, Tianshi CHEN
Abstract: A computing device comprising a computing module, which comprises one or more computing units, and a control module, which comprises a computing control unit used for controlling shutdown of the computing units of the computing module according to a determining condition. A computing method is also provided. The computing device and method offer low power consumption and high flexibility, can be combined with software upgrades, and can thereby further increase the computing speed, reduce the amount of computation, and reduce the computing power consumption of an accelerator.
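The control idea can be sketched in a few lines of Python. The `ComputeUnit` class and the particular "determining condition" (only the first `needed` units are required for the current workload) are hypothetical stand-ins; the patent does not specify the condition:

```python
class ComputeUnit:
    """Toy model of one computing unit with a power state."""
    def __init__(self, name):
        self.name = name
        self.active = True

def gate_units(units, needed):
    """Hypothetical control step: power down every unit whose index is
    not required for the current workload (the 'determining condition'),
    and report which units remain powered."""
    for i, u in enumerate(units):
        u.active = i < needed
    return [u.name for u in units if u.active]

units = [ComputeUnit(f"u{i}") for i in range(4)]
print(gate_units(units, 2))  # only u0 and u1 stay powered
```

Shutting units down at this granularity is what lets software (e.g. after an upgrade that shrinks a workload) translate directly into lower accelerator power.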
-
Publication No.: US20200050927A1
Publication Date: 2020-02-13
Application No.: US16658800
Filing Date: 2019-10-21
Inventors: Tianshi CHEN, Yimin ZHUANG, Qi GUO, Shaoli LIU, Yunji CHEN
Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.
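The bit-position decomposition described above can be sketched for a single dot product. This is a generic bit-plane technique consistent with the abstract, not the patented circuit; the 8-bit unsigned format is an illustrative assumption:

```python
def bitplane_dot(a_row, b_col, bits=8):
    """Dot product computed by decomposing each value of a_row into bit
    planes: each bit position gets its own 'storage space', contributes a
    partial sum of b_col entries (the intermediate result), and the
    partial sums are accumulated with the bit's weight 2**k."""
    assert all(0 <= a < (1 << bits) for a in a_row)  # unsigned values only
    total = 0
    for k in range(bits):  # one storage space per bit position
        partial = sum(b for a, b in zip(a_row, b_col) if (a >> k) & 1)
        total += partial << k  # accumulate, scaled by the bit's weight
    return total

print(bitplane_dot([3, 5], [10, 20]))  # 3*10 + 5*20 = 130
```

Each intermediate result needs only additions (selected by a single bit), so a full multiplier is never instantiated; the multiplications collapse into shifts at accumulation time.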
-
Publication No.: US20190354159A1
Publication Date: 2019-11-21
Application No.: US16528973
Filing Date: 2019-08-01
Inventors: Shaoli LIU, Lei ZHANG, Tianshi CHEN
IPC Classes: G06F1/324, G06F1/3296
Abstract: The application provides a dynamic voltage and frequency scaling (DVFS) device for a convolutional operation device. The DVFS device acquires the working-state information of the convolutional operation device and its internal units/modules in real time, and scales their working voltage or working frequency accordingly, so as to reduce the overall running power consumption of the convolutional operation device during the convolutional operation.
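A DVFS policy of the kind described can be sketched with a toy utilization-driven rule. The thresholds, step size, and frequency bounds below are invented for illustration; the patent's actual scaling logic is not specified in the abstract:

```python
def scale_frequency(util, freq, f_min=200, f_max=1000, step=100):
    """Toy DVFS step: raise the clock (MHz) when the monitored unit is
    busy, lower it when mostly idle, clamped to [f_min, f_max].
    `util` is the observed utilization in [0, 1]."""
    if util > 0.8:                      # busy: scale up
        return min(freq + step, f_max)
    if util < 0.3:                      # idle: scale down to save power
        return max(freq - step, f_min)
    return freq                         # steady state: leave unchanged

print(scale_frequency(0.9, 900))  # busy unit steps up, capped at 1000
print(scale_frequency(0.1, 300))  # idle unit steps down, floored at 200
```

Running this per unit/module, rather than once for the whole device, mirrors the abstract's point that each internal unit can be scaled from its own working-state information.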
-
Publication No.: US20190327479A1
Publication Date: 2019-10-24
Application No.: US16457397
Filing Date: 2019-06-28
Inventors: Tianshi CHEN, Yuzhe LUO, Qi GUO, Shaoli LIU, Yunji CHEN
IPC Classes: H04N19/42, G06N3/04, H04N19/60, H04N19/124, H04N19/182, H04N19/172, H04N19/13
Abstract: Aspects of data compression/decompression for neural networks are described herein. The aspects may include a model data converter configured to convert neural network content values into pseudo video data. The neural network content values may refer to weight values and bias values of the neural network. The pseudo video data may include one or more pseudo frames. The aspects may further include a compression module configured to encode the pseudo video data into one or more neural network data packages.
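The conversion step can be sketched without an actual video codec: weights are quantized to 8-bit "pixels" and packed into fixed-size frames that a standard encoder could then compress. The value range, frame size, and padding scheme here are illustrative assumptions, not the patent's format:

```python
def to_pseudo_frames(values, width, height, lo=-1.0, hi=1.0):
    """Quantize float model values in [lo, hi] to 8-bit pixels and pack
    them into width*height pseudo frames, zero-padding the last frame,
    so an ordinary video encoder could consume them."""
    pixels = [round((min(max(v, lo), hi) - lo) / (hi - lo) * 255)
              for v in values]
    frame_size = width * height
    pixels += [0] * (-len(pixels) % frame_size)  # pad to a whole frame
    return [pixels[i:i + frame_size]
            for i in range(0, len(pixels), frame_size)]

frames = to_pseudo_frames([0.0, 1.0, -1.0, 0.5, -0.5], width=2, height=2)
print(len(frames), frames[0])  # 2 frames; the first holds [128, 255, 0, 191]
```

Adjacent weights in a trained layer tend to be correlated, which is exactly the spatial redundancy that video codecs are built to exploit.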
-
Publication No.: US20190311252A1
Publication Date: 2019-10-10
Application No.: US16440257
Filing Date: 2019-06-13
Inventors: Tianshi CHEN, Yimin ZHUANG, Qi GUO, Shaoli LIU, Yunji CHEN
Abstract: Aspects of a neural network operation device are described herein. The aspects may include a matrix element storage module configured to receive a first matrix that includes one or more first values, each of the first values being represented in a sequence that includes one or more bits. The matrix element storage module may be further configured to respectively store the one or more bits in one or more storage spaces in accordance with positions of the bits in the sequence. The aspects may further include a numeric operation module configured to calculate an intermediate result for each storage space based on one or more second values in a second matrix and an accumulation module configured to sum the intermediate results to generate an output value.