-
Publication No.: US20220121599A1
Publication Date: 2022-04-21
Application No.: US17564431
Application Date: 2021-12-29
Inventors: Shaoli LIU, Zhen LI, Yao ZHANG
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
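As a rough illustration of the dataflow described in the abstract above (one calculation device reads operation data from shared storage, computes a result, and forwards it directly to a second device), here is a minimal Python sketch. All names (SharedStorage, CalculationDevice, process) and the example operation are assumptions for illustration only, not the patented implementation.

```python
class SharedStorage:
    """Illustrative stand-in for the on-chip storage device."""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]


class CalculationDevice:
    """Illustrative stand-in for a calculation device on the NoC."""
    def __init__(self, name, storage):
        self.name = name
        self.storage = storage
        self.inbox = []

    def receive(self, result):
        # Results forwarded from another device arrive here.
        self.inbox.append(result)


def process(first_device, second_device, key):
    # Step 1: the first device accesses shared storage to obtain the
    # first operation data.
    operation_data = first_device.storage.read(key)
    # Step 2: the first device performs an operation (a sum, purely as
    # an example) to obtain the first operation result.
    result = sum(operation_data)
    # Step 3: the result is sent directly to the second device rather
    # than being written back through storage.
    second_device.receive(result)
    return result


storage = SharedStorage()
storage.write("x", [1.0, 2.0, 3.0])
dev_a = CalculationDevice("dev_a", storage)
dev_b = CalculationDevice("dev_b", storage)
print(process(dev_a, dev_b, "x"))  # 6.0, now also in dev_b.inbox
```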
-
Publication No.: US20220035762A1
Publication Date: 2022-02-03
Application No.: US17278812
Application Date: 2019-10-18
Inventors: Yao ZHANG, Shaoli LIU, Jun LIANG, Yu CHEN
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
-
Publication No.: US20210224069A1
Publication Date: 2021-07-22
Application No.: US16745743
Application Date: 2020-01-17
Inventors: Tianshi CHEN, Shaoli LIU, Zai WANG, Shuai HU
Abstract: The present disclosure provides a computing method that is applied to a computing device. The computing device includes a memory, a register unit, and a matrix computing unit. The method includes the following steps: controlling, by the computing device, the matrix computing unit to obtain a first operation instruction, where the first operation instruction includes a matrix reading instruction for a matrix required for executing the instruction; controlling, by the computing device, an operating unit to send a reading command to the memory according to the matrix reading instruction; and controlling, by the computing device, the operating unit to read the matrix corresponding to the matrix reading instruction in a batch reading manner, and executing the first operation instruction on the matrix. The technical solutions in the present disclosure have the advantages of fast computing speed and high efficiency.
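The abstract describes an operation instruction that carries a matrix reading instruction, with the operating unit fetching the named matrix in one batch and then executing the operation on it. A minimal sketch of that control flow follows; the instruction format and the stand-in "transpose" operation are illustrative assumptions, not the patent's instruction set.

```python
# Hypothetical memory holding named matrices, and a hypothetical
# instruction naming which matrix to read and what to do with it.
memory = {"A": [[1, 2], [3, 4]], "B": [[5, 6], [7, 8]]}
instruction = {"op": "transpose", "matrix_read": "A"}

def execute(instr, mem):
    # The matrix named in the reading instruction is fetched from memory
    # in a single batch read rather than element by element.
    matrix = mem[instr["matrix_read"]]
    if instr["op"] == "transpose":
        return [list(row) for row in zip(*matrix)]
    raise ValueError("unsupported op")

print(execute(instruction, memory))  # [[1, 3], [2, 4]]
```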
-
Publication No.: US20210150685A1
Publication Date: 2021-05-20
Application No.: US17119148
Application Date: 2020-12-11
Inventors: Tianshi CHEN, Shaoli LIU, Zai WANG, Shuai HU
Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. By means of the examples in the present disclosure, a calculation apparatus of a terminal device can call an operation instruction to process first information and output second information matching a target desired by a user, thereby improving information processing efficiency. The present technical solution has the advantages of fast computation speed and high efficiency.
-
Publication No.: US20200265300A1
Publication Date: 2020-08-20
Application No.: US16831273
Application Date: 2020-03-26
Inventors: Shaoli LIU, Xuda ZHOU, Zidong DU, Daofu LIU
Abstract: The application provides an operation method and device. An operation is realized by looking up quantized data, which simplifies the structure and reduces computation energy consumption while allowing a plurality of operations to be realized.
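The abstract's key idea is that an operation on quantized data can be realized by table lookup rather than by arithmetic. The sketch below assumes 2-bit quantization with a small codebook and a precomputed product table; the specific quantization scheme and the dot-product operation are illustrative assumptions.

```python
import itertools

# Assume activations and weights are quantized to 2-bit codes (0..3)
# that index a small codebook of real values.
codebook = [-1.0, -0.25, 0.25, 1.0]

# Precompute a lookup table of products for every pair of codes, so the
# "multiplication" at run time is a single table lookup.
product_table = {
    (a, b): codebook[a] * codebook[b]
    for a, b in itertools.product(range(4), repeat=2)
}

def dot_via_lookup(act_codes, weight_codes):
    # The operation is realized by looking up quantized data rather than
    # by performing full-precision multiplications.
    return sum(product_table[(a, w)] for a, w in zip(act_codes, weight_codes))

print(dot_via_lookup([0, 3, 2], [3, 3, 1]))  # -1.0 + 1.0 - 0.0625 = -0.0625
```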
-
Publication No.: US20200150971A1
Publication Date: 2020-05-14
Application No.: US16698998
Application Date: 2019-11-28
Inventors: Shaoli LIU, Shengyuan ZHOU, Zidong DU
Abstract: The disclosure provides a data processing device and method. The data processing device may include a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
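A minimal sketch of the described structure, in which a task queue is configured from stored task configuration information. The configuration field names and the priority-based ordering rule are illustrative assumptions, not details from the abstract.

```python
from collections import deque

# Hypothetical stand-in for the task configuration information storage
# unit: one configuration record per task.
task_configs = [
    {"task_id": 2, "priority": 1},
    {"task_id": 0, "priority": 0},
    {"task_id": 1, "priority": 2},
]

def configure_task_queue(configs):
    # The task queue configuration unit builds a queue according to the
    # stored configuration information (here, ordered by priority).
    ordered = sorted(configs, key=lambda c: c["priority"])
    return deque(c["task_id"] for c in ordered)

queue = configure_task_queue(task_configs)
print(list(queue))  # [0, 2, 1]
```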
-
Publication No.: US20200089534A1
Publication Date: 2020-03-19
Application No.: US16693999
Application Date: 2019-11-25
Inventors: Tianshi CHEN, Shengyuan ZHOU, Shaoli LIU
Abstract: The disclosure provides a task segmentation device and method, a task processing device and method, and a multi-core processor. The task segmentation device includes a granularity task segmentation unit configured to segment a task by adopting at least one granularity to form subtasks, and a task segmentation granularity selection unit configured to select the granularity to be adopted.
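The abstract separates granularity selection from the segmentation itself. The sketch below assumes a simple size-based selection rule and list slicing as the segmentation; both are illustrative assumptions rather than the patented method.

```python
def select_granularity(task_size, num_cores):
    # Hypothetical selection rule: pick the largest granularity that
    # still yields at least one subtask per core.
    return max(1, task_size // num_cores)

def segment_task(task, granularity):
    # Segment the task into subtasks of the chosen granularity.
    return [task[i:i + granularity] for i in range(0, len(task), granularity)]

work_items = list(range(10))
g = select_granularity(len(work_items), num_cores=4)
print(g, segment_task(work_items, g))  # 2, five subtasks of two items each
```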
-
Publication No.: US20190318246A1
Publication Date: 2019-10-17
Application No.: US16455347
Application Date: 2019-06-27
Inventors: Yunji CHEN, Xinkai SONG, Shaoli LIU, Tianshi CHEN
Abstract: Aspects of data modification for neural networks are described herein. The aspects may include a connection value generator configured to receive one or more groups of input data and one or more weight values and to generate one or more connection values based on the one or more weight values. The aspects may further include a pruning module configured to modify the one or more groups of input data and the one or more weight values based on the connection values. Further still, the aspects may include a computing unit configured to update the one or more weight values and/or calculate one or more input gradients.
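The abstract describes generating connection values from the weights and pruning both the input data and the weights based on those values. Here is a minimal sketch, assuming magnitude-based connection values and a fixed threshold; both choices are assumptions not stated in the abstract.

```python
def connection_values(weights):
    # Simple stand-in for the connection value generator: one value per
    # weight, here just its magnitude.
    return [abs(w) for w in weights]

def prune(inputs, weights, threshold=0.1):
    # The pruning module keeps only the inputs and weights whose
    # connection value exceeds the threshold.
    kept = [(x, w)
            for x, w, c in zip(inputs, weights, connection_values(weights))
            if c > threshold]
    pruned_inputs = [x for x, _ in kept]
    pruned_weights = [w for _, w in kept]
    return pruned_inputs, pruned_weights

x = [0.5, -1.2, 0.3, 2.0]
w = [0.02, 0.8, -0.05, 0.4]
print(prune(x, w))  # ([-1.2, 2.0], [0.8, 0.4])
```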
-
Publication No.: US20240054012A1
Publication Date: 2024-02-15
Application No.: US18259684
Application Date: 2021-12-30
Inventors: Yingnan ZHANG, Qinglong CHAI, Lu CHAO, Yao ZHANG, Shaoli LIU, Jun LIANG
CPC Classification: G06F9/4881, G06F9/5005, G06F9/485
Abstract: The present disclosure provides a circuit, method, and system for inter-chip communication. The method is implemented in a computation apparatus, where the computation apparatus is included in a combined processing apparatus, and the combined processing apparatus further includes a general interconnection interface and another processing apparatus. The computation apparatus interacts with the other processing apparatus to jointly complete a computation operation specified by a user. The combined processing apparatus also includes a storage apparatus, which is connected to the computation apparatus and the other processing apparatus respectively and is used for storing data of both.
-
Publication No.: US20220156562A1
Publication Date: 2022-05-19
Application No.: US17440479
Application Date: 2020-03-20
Inventors: Shaoli LIU, Xishan ZHANG
IPC Classification: G06N3/04
Abstract: A neural network operation module comprises a storage unit that stores the output neurons, weight precision, and output neuron gradient precision of a multi-layer neural network; a controller unit that obtains an average value Y1 of the absolute values of the output neurons before fixed-point conversion and an average value Y2 of the absolute values of the output neurons after fixed-point conversion, obtains the output neuron gradient precision of two adjacent layers of the multi-layer neural network if Y1/Y2 is greater than a preset threshold K, and obtains an estimated value An of the error transfer precision; when An is greater than a preset precision Ar, the output neuron gradient precision and weight precision of the two adjacent layers are increased; and an operation unit that represents the output neuron gradients and weights of the two adjacent layers according to the increased precision.
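The abstract gives a concrete control flow: compare Y1/Y2 against a threshold K, estimate the error transfer precision An from the gradient precisions of two adjacent layers, and increase the gradient and weight precisions when An exceeds a preset precision Ar. The sketch below mirrors only that control flow; the formula used for An and the way precision is "increased" are assumptions made purely for illustration.

```python
def adjust_precision(y1, y2, grad_prec_layer_n, grad_prec_layer_n1,
                     weight_prec, K=2.0, A_r=1e-3):
    """Illustrative control flow only; the error-transfer estimate and the
    precision update rule are assumptions, not the patent's formulas."""
    if y1 / y2 > K:
        # Assumed estimate of error transfer precision from the gradient
        # precisions of the two adjacent layers.
        A_n = grad_prec_layer_n * grad_prec_layer_n1
        if A_n > A_r:
            # Increase precision, modeled here as halving the representable
            # step size of gradients and weights in both layers.
            grad_prec_layer_n /= 2
            grad_prec_layer_n1 /= 2
            weight_prec /= 2
    return grad_prec_layer_n, grad_prec_layer_n1, weight_prec

print(adjust_precision(y1=5.0, y2=2.0, grad_prec_layer_n=0.1,
                       grad_prec_layer_n1=0.1, weight_prec=0.05))
# (0.05, 0.05, 0.025): precisions are refined because 5.0/2.0 > K and An > Ar.
```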