-
1.
Publication No.: US11036480B2
Publication Date: 2021-06-15
Application No.: US17130469
Filing Date: 2020-12-22
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of a corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
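Several of the publications listed here share this abstract, so a single illustration suffices. The Python sketch below is one possible reading of steps S1201 to S1204; the dataclass layout, the "op_" naming convention for instruction-like parameters, and the bytes-based split between heap and stack data are assumptions made for illustration only and are not taken from the patents themselves.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple


@dataclass
class GeneralModel:
    """Illustrative container for the 'general machine learning model'."""
    stack_data: List[Dict[str, Any]] = field(default_factory=list)  # per-run, unshareable data
    heap_data: List[Dict[str, Any]] = field(default_factory=list)   # static, shareable data


def classify(task_params: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    """S1202: split task parameters into task instructions and model parameters.
    The 'op_' prefix is a made-up convention marking instruction-like entries."""
    instructions = {k: v for k, v in task_params.items() if k.startswith("op_")}
    model_params = {k: v for k, v in task_params.items() if not k.startswith("op_")}
    return instructions, model_params


def aggregate(instructions: Dict[str, Any], model_params: Dict[str, Any]):
    """S1203: group entries by data type; here, raw byte blobs are treated as heap data."""
    stack, heap = [], []
    for name, value in {**instructions, **model_params}.items():
        entry = {"name": name, "value": value}
        (heap if isinstance(value, (bytes, bytearray)) else stack).append(entry)
    return stack, heap


def generate_general_model(task_params: Dict[str, Any]) -> GeneralModel:
    """S1201-S1204: acquire task parameters and build a reusable model object."""
    instructions, model_params = classify(task_params)     # S1202
    stack, heap = aggregate(instructions, model_params)    # S1203
    return GeneralModel(stack_data=stack, heap_data=heap)  # S1204


# Example: a tiny task whose weight blob lands in heap data while shape info stays on the stack.
model = generate_general_model({"op_conv": "conv2d", "weights": b"\x00\x01", "input_shape": (1, 3, 224, 224)})
```

Once built, such an object could be serialized and executed directly on later runs, which is the reuse the abstract credits with avoiding repetitive compilation.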
-
2.
Publication No.: US11726754B2
Publication Date: 2023-08-15
Application No.: US17849650
Filing Date: 2022-06-26
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of a corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
-
3.
Publication No.: US11334330B2
Publication Date: 2022-05-17
Application No.: US17130370
Filing Date: 2020-12-22
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of a corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
-
4.
Publication No.: US11334329B2
Publication Date: 2022-05-17
Application No.: US16975082
Filing Date: 2019-05-07
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of a corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
-
5.
Publication No.: US11379199B2
Publication Date: 2022-07-05
Application No.: US17130300
Filing Date: 2020-12-22
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201), performing classification processing on the task parameters to obtain task instructions and model parameters (S1202), aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203), and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). By means of the method, the compiled result of a corresponding general-purpose model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
-
6.
Publication No.: US11221877B2
Publication Date: 2022-01-11
Application No.: US16575344
Filing Date: 2019-09-18
Inventors: Linyang Wu, Xiaofu Meng
Abstract: The present disclosure provides a task parallel processing method, device, system, storage medium, and computer equipment that distribute and schedule tasks to be executed according to a task directed acyclic graph (DAG), thereby realizing task parallelism on a multi-core processor and improving the efficiency of data processing.
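As a rough illustration of DAG-driven task parallelism of the kind this abstract describes, the sketch below schedules callables onto a thread pool as soon as their prerequisites finish. The graph representation, the thread-pool executor, and the ready-set bookkeeping are assumptions made for illustration, not the patented scheme.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait


def run_dag(tasks, deps, workers=4):
    """Run the callables in `tasks` respecting `deps` (task name -> set of prerequisite names)."""
    remaining = {name: set(deps.get(name, ())) for name in tasks}
    futures = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining or futures:
            # Submit every task whose prerequisites have all completed.
            for name in [n for n, d in remaining.items() if not d]:
                futures[pool.submit(tasks[name])] = name
                del remaining[name]
            if not futures:
                raise ValueError("cycle detected in task graph")
            # Wait for at least one running task, then unblock its dependents.
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            for fut in done:
                finished = futures.pop(fut)
                fut.result()  # re-raise any exception from the task
                for d in remaining.values():
                    d.discard(finished)


# Example: b and c can run in parallel once a finishes; d waits for both.
run_dag(
    {"a": lambda: print("a"), "b": lambda: print("b"), "c": lambda: print("c"), "d": lambda: print("d")},
    {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}},
)
```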
-
7.
Publication No.: US11113104B2
Publication Date: 2021-09-07
Application No.: US16705190
Filing Date: 2019-12-05
Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, as well as a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When the runtime system runs on the first processor, it obtains an offline model and the corresponding input data of an original network from the first memory and controls the second processor to run the offline model of the original network. The offline model of the original network includes the model parameters, instructions, and interface data of the respective computation nodes of the original network.
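The following minimal Python sketch mirrors the roles named in this abstract; the class and method names (OfflineModel, Accelerator.execute, RuntimeSystem.run) are illustrative placeholders and assumptions, not an actual API from the patent.

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class OfflineModel:
    """Per-node artifacts the abstract attributes to an offline model."""
    model_parameters: Dict[str, bytes]
    instructions: List[bytes]
    interface_data: Dict[str, Any]  # input/output layout of each computation node


class Accelerator:
    """Stand-in for the 'second processor' that actually executes the offline model."""
    def execute(self, model: OfflineModel, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Real hardware would consume model.instructions here; this stub just echoes.
        return {"outputs": inputs, "nodes": list(model.interface_data)}


class RuntimeSystem:
    """Plays the role of the runtime system running on the 'first processor'."""
    def __init__(self, first_memory: Dict[str, OfflineModel], accelerator: Accelerator):
        self.first_memory = first_memory
        self.accelerator = accelerator

    def run(self, network: str, inputs: Dict[str, Any]) -> Dict[str, Any]:
        model = self.first_memory[network]               # obtain the offline model
        return self.accelerator.execute(model, inputs)   # control the second processor


# Example wiring: one stored offline model, one run.
runtime = RuntimeSystem({"net": OfflineModel({"w": b"\x01"}, [b"\x02"], {"conv1": {"in": (1, 3)}})}, Accelerator())
print(runtime.run("net", {"x": [1.0, 2.0]}))
```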
-
8.
Publication No.: US11360811B2
Publication Date: 2022-06-14
Application No.: US16702491
Filing Date: 2019-12-03
Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when running on the first processor, causes the first processor to implement a plurality of virtual devices, comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network.
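One way to picture the three virtual devices this abstract names is as small cooperating objects, as in the hedged sketch below. The class names, the powered flag, and the tuple-valued memory store are assumptions made for illustration only.

```python
class SecondProcessor:
    """Stub accelerator controlled by the virtual devices below."""
    def __init__(self):
        self.powered = False

    def run(self, offline_model, input_data):
        return {"model": offline_model, "result": input_data}


class DataProcessingDevice:
    """Virtual device: fetches an offline model and its input data from memory."""
    def __init__(self, memory):
        self.memory = memory

    def load(self, name):
        return self.memory[name]  # (offline_model, input_data)


class EquipmentManagementDevice:
    """Virtual device: turns the second processor on or off."""
    def __init__(self, processor):
        self.processor = processor

    def power(self, on: bool):
        self.processor.powered = on


class TaskExecutionDevice:
    """Virtual device: has the second processor run the offline model."""
    def __init__(self, processor):
        self.processor = processor

    def run(self, offline_model, input_data):
        assert self.processor.powered, "second processor must be powered on first"
        return self.processor.run(offline_model, input_data)


# Wiring the three virtual devices together, roughly as the runtime system would.
memory = {"resnet": ("offline-model-bytes", [1, 2, 3])}
acc = SecondProcessor()
dp, em, te = DataProcessingDevice(memory), EquipmentManagementDevice(acc), TaskExecutionDevice(acc)
model, data = dp.load("resnet")
em.power(True)
print(te.run(model, data))
```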
-
9.
Publication No.: US11307836B2
Publication Date: 2022-04-19
Application No.: US17130348
Filing Date: 2020-12-22
Inventors: Weijian Du, Linyang Wu, Xunyu Chen
Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, the compiled result of a corresponding general model can be executed directly when an algorithm is run, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.