-
Publication No.: US20190156246A1
Publication Date: 2019-05-23
Application No.: US15884279
Filing Date: 2018-01-30
Applicant: Amazon Technologies, Inc.
Inventor: Calvin Yue-Ren Kuo, Jiazhen Chen, Jingwei Sun, Haiyang Liu
Abstract: A provider network implements a machine learning deployment service for generating and deploying packages to implement machine learning at connected devices. The service may receive from a client an indication of an inference application, a machine learning framework to be used by the inference application, a machine learning model to be used by the inference application, and an edge device to run the inference application. The service may then generate a package based on the inference application, the machine learning framework, the machine learning model, and a hardware platform of the edge device. To generate the package, the service may optimize the model based on the hardware platform of the edge device and/or the machine learning framework. The service may then deploy the package to the edge device. The edge device then installs the inference application and performs actions based on inference data generated by the machine learning model.
-
Publication No.: US12293260B2
Publication Date: 2025-05-06
Application No.: US15884279
Filing Date: 2018-01-30
Applicant: Amazon Technologies, Inc.
Inventor: Calvin Yue-Ren Kuo, Jiazhen Chen, Jingwei Sun, Haiyang Liu
IPC: G06N20/00, G06F8/60, G06F18/214, G06N5/04, H04W4/38
Abstract: A provider network implements a machine learning deployment service for generating and deploying packages to implement machine learning at connected devices. The service may receive from a client an indication of an inference application, a machine learning framework to be used by the inference application, a machine learning model to be used by the inference application, and an edge device to run the inference application. The service may then generate a package based on the inference application, the machine learning framework, the machine learning model, and a hardware platform of the edge device. To generate the package, the service may optimize the model based on the hardware platform of the edge device and/or the machine learning framework. The service may then deploy the package to the edge device. The edge device then installs the inference application and performs actions based on inference data generated by the machine learning model.
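To make the described workflow concrete, below is a minimal, hypothetical Python sketch of the flow the abstract outlines: receive a deployment request naming the inference application, framework, model, and edge device; optimize the model for the device's hardware platform and framework; bundle a package; and deploy it. All class, function, and parameter names here are illustrative assumptions, not part of the patent claims or any AWS API.

```python
from dataclasses import dataclass


# Hypothetical types mirroring the inputs named in the abstract: an inference
# application, the ML framework it uses, the ML model, and the target edge device.
@dataclass
class DeploymentRequest:
    inference_application: str   # e.g., archive/URI of the application code
    framework: str               # e.g., "mxnet" or "tensorflow"
    model_uri: str               # location of the trained model artifact
    edge_device_id: str          # registered edge device that will run inference
    hardware_platform: str       # e.g., "arm64-gpu" or "x86_64-cpu"


@dataclass
class Package:
    application: str
    framework: str
    optimized_model: str
    target_platform: str


def optimize_model(model_uri: str, hardware_platform: str, framework: str) -> str:
    """Placeholder for the optimization step the abstract describes: the model is
    tuned for the edge device's hardware platform and/or the chosen framework."""
    return f"{model_uri}?optimized_for={hardware_platform}&framework={framework}"


def generate_package(req: DeploymentRequest) -> Package:
    # The package is generated from the application, framework, model, and the
    # edge device's hardware platform, as in the abstract.
    optimized = optimize_model(req.model_uri, req.hardware_platform, req.framework)
    return Package(req.inference_application, req.framework, optimized, req.hardware_platform)


def deploy(package: Package, edge_device_id: str) -> None:
    # Stand-in for pushing the package to the connected edge device, which would
    # then install the inference application and act on the model's inference data.
    print(f"Deploying {package} to device {edge_device_id}")


if __name__ == "__main__":
    request = DeploymentRequest(
        inference_application="s3://example-bucket/app.zip",
        framework="mxnet",
        model_uri="s3://example-bucket/model.tar.gz",
        edge_device_id="edge-device-001",
        hardware_platform="arm64-gpu",
    )
    deploy(generate_package(request), request.edge_device_id)
```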
-