-
Publication No.: US11442785B2
Publication Date: 2022-09-13
Application No.: US16720145
Filing Date: 2019-12-19
Inventors: Shaoli Liu, Xishan Zhang
Abstract: The present disclosure provides a computation method and a related product. The computation method adopts a fusion approach to perform machine learning computations. Technical effects of the present disclosure include fewer computations and lower power consumption.
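The abstract does not spell out what is fused, so the following Python sketch is only an illustration of the general fusion idea rather than the patented method: a matrix multiplication and an activation are computed in a single pass so the intermediate result never has to be written back to memory. All function names here are hypothetical.

```python
import numpy as np

def unfused(x, w, b):
    # Two separate passes: the intermediate y is fully materialized in memory.
    y = x @ w + b
    return np.maximum(y, 0.0)

def fused_linear_relu(x, w, b):
    # One fused pass: each output row is produced and activated immediately,
    # avoiding the extra write/read of the intermediate result.
    out = np.empty((x.shape[0], w.shape[1]), dtype=x.dtype)
    for i in range(x.shape[0]):
        row = x[i] @ w + b             # partial result stays local
        out[i] = np.maximum(row, 0.0)  # activation applied right away
    return out

x = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 3).astype(np.float32)
b = np.zeros(3, dtype=np.float32)
assert np.allclose(unfused(x, w, b), fused_linear_relu(x, w, b), atol=1e-5)
```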
-
Publication No.: US20220121599A1
Publication Date: 2022-04-21
Application No.: US17564431
Filing Date: 2021-12-29
Inventors: Shaoli LIU, Zhen LI, Yao ZHANG
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system that is used for executing machine learning computations and comprises a storage device and a calculation device. The method comprises: accessing, by a first calculation device in the network-on-chip processing system, the storage device to obtain first operation data; performing, by the first calculation device, an operation on the first operation data to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
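Purely as a software analogue of the data flow described above (the class and method names are assumptions, not the patent's interfaces), a first calculation device reads operation data from the shared storage device, performs an operation, and forwards the result to a second calculation device:

```python
import queue

class StorageDevice:
    """Shared storage holding the operation data (illustrative only)."""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data[key]

class CalculationDevice:
    """A calculation device with an inbox for results sent by peers."""
    def __init__(self, storage):
        self.storage = storage
        self.inbox = queue.Queue()

    def compute_and_forward(self, key, peer):
        data = self.storage.read(key)       # access the storage device
        result = sum(x * x for x in data)   # example of a first operation
        peer.inbox.put(result)              # send result to the second device

storage = StorageDevice()
storage.write("op_data", [1.0, 2.0, 3.0])
dev1, dev2 = CalculationDevice(storage), CalculationDevice(storage)
dev1.compute_and_forward("op_data", dev2)
print(dev2.inbox.get())  # 14.0 -- dev2 consumes dev1's result directly
```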
-
Publication No.: US11307864B2
Publication Date: 2022-04-19
Application No.: US16698992
Filing Date: 2019-11-28
Inventors: Tianshi Chen, Lei Zhang, Shaoli Liu
IPC Classes: G06F9/445, G06F9/46, G06F9/48, G06F9/38, G06F9/30, G06F17/16, G06F3/01, G06F9/50, G06F9/54, G06F11/07, G06F11/10, G06F11/30, G06F12/0875, G06K9/62, G06N3/04, G06N3/063, G06V40/16, G06F7/57, G06F7/544, G06F1/324
Abstract: The disclosure provides a data processing device and method. The data processing device may include a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the stored configuration information.
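As a rough illustration of how the two units could interact (the configuration fields such as priority are invented for this sketch and are not taken from the patent), stored task configuration entries are turned into an ordered task queue:

```python
from collections import deque

class TaskConfigStore:
    """Task configuration information storage unit (illustrative)."""
    def __init__(self):
        self.configs = []

    def add(self, name, priority, payload):
        self.configs.append({"name": name, "priority": priority, "payload": payload})

class TaskQueueConfigurator:
    """Task queue configuration unit: builds a queue from the stored configs."""
    def configure(self, store):
        ordered = sorted(store.configs, key=lambda c: c["priority"])
        return deque(ordered)

store = TaskConfigStore()
store.add("conv_layer", priority=1, payload="...")
store.add("pooling", priority=2, payload="...")
task_queue = TaskQueueConfigurator().configure(store)
while task_queue:
    task = task_queue.popleft()
    print("dispatch", task["name"])
```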
-
Publication No.: US20220083909A1
Publication Date: 2022-03-17
Application No.: US17361633
Filing Date: 2021-06-29
Inventors: Yao ZHANG, Guang JIANG, Xishan ZHANG, Shiyi ZHOU, Di HUANG, Chang LIU, Jiaming GUO
IPC Classes: G06N20/00
Abstract: The present disclosure relates to a method, a device, and related products for processing data. In an embodiment of the present disclosure, when data related to a neural network are processed, an optimal truncation threshold for a plurality of pieces of data is determined. The data are truncated using the truncation threshold, and the plurality of pieces of data are quantized from a high-precision format to a low-precision format. The method in the present disclosure keeps the precision of data processing as high as possible while reducing the amount of data to be processed. In addition, the method also helps to significantly reduce the amount of data transmission, thereby greatly accelerating data exchange among a plurality of computing devices.
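One common way of choosing such a truncation threshold, shown here only as an illustrative sketch rather than the patented procedure, is to search for the clipping value that minimizes the quantization error when mapping FP32 data onto an INT8 grid:

```python
import numpy as np

def quantize(data, threshold, bits=8):
    """Clip to [-threshold, threshold] and map onto a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = threshold / qmax
    q = np.clip(np.round(data / scale), -qmax, qmax)
    return q * scale  # dequantized values, used here to measure the error

def best_truncation_threshold(data, candidates=100):
    """Pick the clipping threshold minimizing mean squared quantization error."""
    max_abs = np.abs(data).max()
    best_t, best_err = max_abs, np.inf
    for t in np.linspace(max_abs / candidates, max_abs, candidates):
        err = np.mean((data - quantize(data, t)) ** 2)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

data = np.random.randn(10000).astype(np.float32) * 2.0
t = best_truncation_threshold(data)
print(f"chosen threshold: {t:.3f}, data max: {np.abs(data).max():.3f}")
```

Note that the optimal threshold is usually smaller than the data maximum: clipping a few outliers buys a finer quantization step for the bulk of the values.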
-
Publication No.: US20220035762A1
Publication Date: 2022-02-03
Application No.: US17278812
Filing Date: 2019-10-18
Inventors: Yao ZHANG, Shaoli LIU, Jun LIANG, Yu CHEN
Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system that is used for executing machine learning computations and comprises a storage device and a calculation device. The method comprises: accessing, by a first calculation device in the network-on-chip processing system, the storage device to obtain first operation data; performing, by the first calculation device, an operation on the first operation data to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
-
Publication No.: US20210224069A1
Publication Date: 2021-07-22
Application No.: US16745743
Filing Date: 2020-01-17
Inventors: Tianshi CHEN, Shaoli LIU, Zai WANG, Shuai HU
Abstract: The present disclosure provides a computing method applied to a computing device. The computing device includes a memory, a register unit, and a matrix computing unit. The method includes the following steps: controlling, by the computing device, the matrix computing unit to obtain a first operation instruction, where the first operation instruction includes a matrix reading instruction for the matrix required to execute the instruction; controlling, by the computing device, an operating unit to send a reading command to the memory according to the matrix reading instruction; and controlling, by the computing device, the operating unit to read the matrix corresponding to the matrix reading instruction in a batch reading manner and to execute the first operation instruction on that matrix. The technical solutions in the present disclosure have the advantages of fast computing speed and high efficiency.
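The following is a loose software model of that control flow; the instruction format and register layout are invented for illustration only. The unit decodes the matrix-read field, fetches the whole matrix from memory in one batch access, and then executes the operation:

```python
import numpy as np

class MatrixComputingUnit:
    def __init__(self, memory, registers):
        self.memory = memory        # flat memory holding the matrix data
        self.registers = registers  # register unit: address/shape per matrix id

    def execute(self, instruction):
        # instruction = (opcode, matrix_id); the matrix-read field names a matrix.
        opcode, matrix_id = instruction
        addr, rows, cols = self.registers[matrix_id]
        # Batch read: fetch the whole matrix with a single contiguous access.
        matrix = self.memory[addr:addr + rows * cols].reshape(rows, cols)
        if opcode == "MAT_TRANSPOSE":
            return matrix.T
        raise ValueError(f"unsupported opcode {opcode}")

memory = np.arange(12, dtype=np.float32)
registers = {0: (0, 3, 4)}           # matrix 0: starts at address 0, shape 3x4
unit = MatrixComputingUnit(memory, registers)
print(unit.execute(("MAT_TRANSPOSE", 0)).shape)  # (4, 3)
```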
-
Publication No.: US11036480B2
Publication Date: 2021-06-15
Application No.: US17130469
Filing Date: 2020-12-22
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); classifying the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled result of the corresponding general model can be executed directly when an algorithm is run, which avoids repeated compilation, greatly improves the efficiency of implementing machine learning algorithms, and shortens the time from compilation to obtaining execution results.
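A toy rendering of steps S1201 to S1204 in Python, with the caveat that interpreting "stack data" as per-run instruction data and "heap data" as persistent model parameters is an assumption made only for this sketch:

```python
def acquire_task_parameters():
    # S1201: task parameters of a machine learning task (illustrative values)
    return {"op": "dense", "input_dim": 8, "output_dim": 4,
            "weights": [[0.1] * 8 for _ in range(4)], "bias": [0.0] * 4}

def classify(params):
    # S1202: split the task parameters into task instructions and model parameters
    instructions = {k: v for k, v in params.items() if k in ("op", "input_dim", "output_dim")}
    model_params = {k: v for k, v in params.items() if k in ("weights", "bias")}
    return instructions, model_params

def aggregate(instructions, model_params):
    # S1203: aggregate by data type (assumed: per-run data vs. persistent data)
    stack_data = {"instructions": instructions}
    heap_data = {"parameters": model_params}
    return stack_data, heap_data

def integrate(stack_data, heap_data):
    # S1204: integrate into one general, directly executable model object
    return {"stack": stack_data, "heap": heap_data}

model = integrate(*aggregate(*classify(acquire_task_parameters())))
print(sorted(model.keys()))  # ['heap', 'stack']
```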
-
Publication No.: US20210150685A1
Publication Date: 2021-05-20
Application No.: US17119148
Filing Date: 2020-12-11
Inventors: Tianshi CHEN, Shaoli LIU, Zai WANG, Shuai HU
Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. With the examples in the present disclosure, a calculation apparatus of a terminal device can call an operation instruction to process the first information and output the second information corresponding to a target desired by a user, thereby improving information processing efficiency. The technical solution has the advantages of a fast computation speed and high efficiency.
-
Publication No.: US20210109729A1
Publication Date: 2021-04-15
Application No.: US17130469
Filing Date: 2020-12-22
Inventors: Weijian DU, Linyang WU, Xunyu CHEN
Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); classifying the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled result of the corresponding general model can be executed directly when an algorithm is run, which avoids repeated compilation, greatly improves the efficiency of implementing machine learning algorithms, and shortens the time from compilation to obtaining execution results.
-
Publication No.: US10901815B2
Publication Date: 2021-01-26
Application No.: US16693918
Filing Date: 2019-11-25
Inventors: Tianshi Chen, Shengyuan Zhou, Shaoli Liu
IPC Classes: G06F9/54, G06F9/22, G06F12/0875, G06F13/28, G06N3/063, G06N3/04, G06N3/08, G06F30/27, G06F15/163, H04L12/70
Abstract: A data sharing system may include a storage module and at least two processing modules. The at least two processing modules share the storage module and communicate with each other to implement data sharing. A data sharing method for the data sharing system is also provided. According to the disclosure, storage communication overhead may be reduced, and data access delay may be effectively reduced.
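A minimal host-side analogue of such sharing, using Python's multiprocessing shared memory rather than the on-chip hardware the patent describes: two processing modules attach to one shared storage block instead of each keeping a private copy, which is the kind of sharing that cuts storage communication overhead.

```python
from multiprocessing import Process, shared_memory
import numpy as np

def worker(name, offset, value):
    # A processing module attaches to the shared storage module and writes to it.
    shm = shared_memory.SharedMemory(name=name)
    buf = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
    buf[offset] = value
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4 * 8)
    buf = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
    buf[:] = 0.0
    # Two processing modules share the same storage module.
    p1 = Process(target=worker, args=(shm.name, 0, 1.0))
    p2 = Process(target=worker, args=(shm.name, 1, 2.0))
    p1.start(); p2.start(); p1.join(); p2.join()
    print(buf[:2])  # both modules' writes are visible: [1. 2.]
    shm.close(); shm.unlink()
```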