General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11036480B2

    Publication (Announcement) Date: 2021-06-15

    Application No.: US17130469

    Filing Date: 2020-12-22

    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled results of a corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.
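    The four steps named in the abstract (S1201-S1204) read as a classify-aggregate-package pipeline. The Python sketch below is only one illustration of that reading, not the patented implementation; the names (classify_task_parameters, aggregate_by_data_type, GeneralModel) and the rule used to separate task instructions from model parameters are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class GeneralModel:
        stack_data: bytes  # per-task data, e.g. serialized task instructions
        heap_data: bytes   # shareable data, e.g. serialized model parameters

    def classify_task_parameters(task_params: dict) -> tuple[dict, dict]:
        # S1202: split task parameters into task instructions and model parameters.
        # The "op_" prefix rule here is purely illustrative.
        instructions = {k: v for k, v in task_params.items() if k.startswith("op_")}
        model_params = {k: v for k, v in task_params.items() if not k.startswith("op_")}
        return instructions, model_params

    def aggregate_by_data_type(instructions: dict, model_params: dict) -> tuple[bytes, bytes]:
        # S1203: aggregate each group according to data type into stack and heap segments.
        stack_data = repr(sorted(instructions.items())).encode("utf-8")
        heap_data = repr(sorted(model_params.items())).encode("utf-8")
        return stack_data, heap_data

    def generate_general_model(task_params: dict) -> GeneralModel:
        # S1201: the task parameters are received as input.
        instructions, model_params = classify_task_parameters(task_params)
        stack_data, heap_data = aggregate_by_data_type(instructions, model_params)
        # S1204: integrate both segments into one general model object that can be
        # stored and later executed without recompiling the algorithm.
        return GeneralModel(stack_data=stack_data, heap_data=heap_data)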

    General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11726754B2

    Publication (Announcement) Date: 2023-08-15

    Application No.: US17849650

    Filing Date: 2022-06-26

    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled results of a corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.

    General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11334330B2

    Publication (Announcement) Date: 2022-05-17

    Application No.: US17130370

    Filing Date: 2020-12-22

    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled results of a corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.

    General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11334329B2

    Publication (Announcement) Date: 2022-05-17

    Application No.: US16975082

    Filing Date: 2019-05-07

    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled results of a corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.

    General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11379199B2

    Publication (Announcement) Date: 2022-07-05

    Application No.: US17130300

    Filing Date: 2020-12-22

    Abstract: Disclosed are a general-purpose machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general-purpose machine learning model (S1204). With this method, the compiled results of a corresponding general-purpose model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.

    Task parallel processing method, apparatus and system, storage medium and computer device

    Publication (Announcement) No.: US11113104B2

    Publication (Announcement) Date: 2021-09-07

    Application No.: US16705190

    Filing Date: 2019-12-05

    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors and first and second memories. The first memory stores offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The second memory stores an operating system configured to run on the first processor or the second processor. When the runtime system runs on the first processor, it obtains an offline model and the corresponding input data of an original network from the first memory and controls the second processor to run the offline model of the original network. The offline model of the original network includes model parameters, instructions, and interface data of the respective computation nodes of the original network.
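    Read literally, the abstract describes a runtime system on a general-purpose first processor that fetches a precompiled offline model (model parameters, instructions, and per-node interface data) together with its input data from the first memory, then drives a second processor, such as an accelerator, to execute it. The sketch below is a hypothetical rendering of that control flow; the OfflineModel, SecondProcessor, and RuntimeSystem names and the dict-based model of the first memory are illustrative, not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class OfflineModel:
        # Contents named in the abstract: model parameters, instructions,
        # and interface data of the network's computation nodes.
        model_parameters: bytes
        instructions: bytes
        interface_data: dict

    class SecondProcessor:
        # Stand-in for the device that actually executes offline models.
        def run(self, model: OfflineModel, input_data: bytes) -> bytes:
            raise NotImplementedError("device-specific execution")

    class RuntimeSystem:
        # Runs on the first processor; the first memory is modeled here as a
        # dict mapping a network name to its (offline model, input data) pair.
        def __init__(self, first_memory: dict, second_processor: SecondProcessor):
            self.first_memory = first_memory
            self.second_processor = second_processor

        def run_original_network(self, network_name: str) -> bytes:
            # Obtain the offline model and its input data from the first memory,
            # then control the second processor to run the offline model.
            model, input_data = self.first_memory[network_name]
            return self.second_processor.run(model, input_data)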

    Task parallel processing method, apparatus and system, storage medium and computer device

    Publication (Announcement) No.: US11360811B2

    Publication (Announcement) Date: 2022-06-14

    Application No.: US16702491

    Filing Date: 2019-12-03

    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when running on the first processor, causes the first processor to implement a plurality of virtual devices, comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network.
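    This variant splits the runtime system into three virtual devices: a data processing device that fetches the offline model and its input, an equipment management device that powers the second processor on and off, and a task execution device that dispatches the model to the second processor. The sketch below only illustrates that division of responsibilities under those assumptions; every class name is hypothetical, and the memory and processor objects are minimal stand-ins.

    class DataProcessingDevice:
        # Obtains an offline model and its input data from the shared memory
        # (modeled here as a dict keyed by network name).
        def __init__(self, memory: dict):
            self.memory = memory

        def fetch(self, network_name: str):
            return self.memory[network_name]

    class EquipmentManagementDevice:
        # Controls turning the second processor on or off.
        def __init__(self, second_processor):
            self.second_processor = second_processor

        def power(self, on: bool):
            self.second_processor.powered = on

    class TaskExecutionDevice:
        # Controls the second processor to run the offline model.
        def __init__(self, second_processor):
            self.second_processor = second_processor

        def execute(self, offline_model, input_data):
            assert self.second_processor.powered, "second processor must be powered on"
            return self.second_processor.run(offline_model, input_data)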

    General machine learning model, and model file generation and parsing method

    Publication (Announcement) No.: US11307836B2

    Publication (Announcement) Date: 2022-04-19

    Application No.: US17130348

    Filing Date: 2020-12-22

    Abstract: Disclosed are a general machine learning model generation method and apparatus, a computer device, and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). With this method, the compiled results of a corresponding general model can be executed directly while an algorithm is running, which avoids repetitive compilation, greatly improves the efficiency of machine learning algorithm implementation, and shortens the time from compilation to obtaining execution results.