-
Publication No.: US20210173896A1
Publication Date: 2021-06-10
Application No.: US15934277
Application Date: 2018-03-23
Applicant: Amazon Technologies, Inc.
Inventor: Poorna Chand Srinivas PERUMALLA , Pracheer GUPTA , Stefano STEFANI
IPC: G06F17/30
Abstract: Techniques are described for a nearest neighbor search service that enables users to perform nearest neighbor searches. The service includes an interface through which users can create collections of searchable vectors, add vectors to and update vectors in a collection, delete vectors from a collection, and search for the nearest neighbors of a given vector. The service allows vectors in a collection to be added, updated, and deleted in real time while searches run concurrently.
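The patent discloses no code; the following is a minimal Python sketch of the kind of interface the abstract describes (create a collection, add/update/delete vectors, query nearest neighbors). The class and method names (VectorCollection, upsert, delete, query) are illustrative assumptions, and the brute-force L2 scan stands in for whatever index the service actually uses.

    # Minimal sketch of a searchable vector collection (assumed API, not the
    # patented implementation); brute-force L2 search stands in for the real index.
    import numpy as np

    class VectorCollection:
        def __init__(self, dim):
            self.dim = dim
            self.vectors = {}          # id -> 1-D numpy array

        def upsert(self, vec_id, vector):
            """Add a new vector or update an existing one in real time."""
            vector = np.asarray(vector, dtype=np.float32)
            assert vector.shape == (self.dim,)
            self.vectors[vec_id] = vector

        def delete(self, vec_id):
            self.vectors.pop(vec_id, None)

        def query(self, vector, k=5):
            """Return ids and distances of the k nearest stored vectors."""
            if not self.vectors:
                return []
            ids = list(self.vectors)
            matrix = np.stack([self.vectors[i] for i in ids])
            dists = np.linalg.norm(matrix - np.asarray(vector, dtype=np.float32), axis=1)
            order = np.argsort(dists)[:k]
            return [(ids[i], float(dists[i])) for i in order]

    # Usage: searches can run while vectors are being added or removed.
    coll = VectorCollection(dim=3)
    coll.upsert("a", [1.0, 0.0, 0.0])
    coll.upsert("b", [0.0, 1.0, 0.0])
    print(coll.query([0.9, 0.1, 0.0], k=1))   # -> [("a", ...)]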
-
Publication No.: US20200005124A1
Publication Date: 2020-01-02
Application No.: US16020788
Application Date: 2018-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Sudipta SENGUPTA , Poorna Chand Srinivas PERUMALLA , Dominic Rajeev DIVAKARUNI , Nafea BSHARA , Leo Parker DIRAC , Bratin SAHA , Matthew James WOOD , Andrea OLGIATI , Swaminathan SIVASUBRAMANIAN
Abstract: Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes: receiving an application instance configuration for an application of the application instance that is to utilize a portion of an attached accelerator during execution of a machine learning model, the configuration including an arithmetic precision of the machine learning model to be used in determining the portion of the accelerator to provision; provisioning the application instance and the portion of the accelerator attached to the application instance, wherein the application instance is implemented using a physical compute instance in a first location and the portion of the accelerator is implemented using a physical accelerator in a second location; loading the machine learning model onto the portion of the accelerator; and performing inference for the application using the machine learning model loaded onto the attached portion of the accelerator.
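As a rough illustration of the flow the abstract describes (precision in the configuration drives how much of a remote accelerator is provisioned, then the model is loaded and inference runs), here is a hedged Python sketch. The precision-to-fraction mapping, the AcceleratorSlice class, and every function name are assumptions, not the patented mechanism.

    # Hedged sketch: arithmetic precision in the instance configuration decides
    # how large a slice of a (possibly remote) accelerator to provision.
    from dataclasses import dataclass

    # Assumed mapping: lower precision -> smaller accelerator portion needed.
    PRECISION_TO_FRACTION = {"fp32": 1.0, "fp16": 0.5, "int8": 0.25}

    @dataclass
    class InstanceConfig:
        model_uri: str
        precision: str            # e.g. "fp16"

    @dataclass
    class AcceleratorSlice:
        fraction: float           # share of a physical accelerator in another location
        model: object = None

    def provision(config: InstanceConfig) -> AcceleratorSlice:
        return AcceleratorSlice(fraction=PRECISION_TO_FRACTION[config.precision])

    def load_model(slice_, config):
        # Placeholder for fetching the model from config.model_uri and loading
        # it onto the provisioned portion of the accelerator.
        slice_.model = f"model from {config.model_uri} at {config.precision}"

    def infer(slice_, inputs):
        # Placeholder inference call routed to the attached accelerator portion.
        return {"output": inputs, "served_by_fraction": slice_.fraction}

    cfg = InstanceConfig(model_uri="s3://example-bucket/model.bin", precision="fp16")
    sl = provision(cfg)
    load_model(sl, cfg)
    print(infer(sl, [1, 2, 3]))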
-
Publication No.: US20200004596A1
Publication Date: 2020-01-02
Application No.: US16020776
Application Date: 2018-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Sudipta SENGUPTA , Poorna Chand Srinivas PERUMALLA , Dominic Rajeev DIVAKARUNI , Nafea BSHARA , Leo Parker DIRAC , Bratin SAHA , Matthew James WOOD , Andrea OLGIATI , Swaminathan SIVASUBRAMANIAN
Abstract: Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes receiving an application instance configuration for an application of the application instance that is to utilize a portion of an attached accelerator during execution of a machine learning model, the configuration including: an indication of the central processing unit (CPU) capability to be used, an arithmetic precision of the machine learning model to be used, an indication of the accelerator capability to be used, a storage location of the application, and an indication of an amount of random access memory to use.
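A hedged sketch of what such an application instance configuration might look like as a structured object follows; the field names mirror the items listed in the abstract, but the concrete types and example values are assumptions.

    # Hedged sketch: the configuration fields named in the abstract expressed as
    # a simple structure (field names follow the abstract; types/values assumed).
    from dataclasses import dataclass

    @dataclass
    class ApplicationInstanceConfig:
        cpu_capability: str          # indication of the CPU capability to be used
        precision: str               # arithmetic precision of the ML model ("fp32", "fp16", ...)
        accelerator_capability: str  # indication of the accelerator capability to be used
        application_location: str    # storage location of the application
        ram_gb: int                  # amount of random access memory to use

    config = ApplicationInstanceConfig(
        cpu_capability="4-vcpu",
        precision="fp16",
        accelerator_capability="half-accelerator-slot",
        application_location="s3://example-bucket/app.tar.gz",
        ram_gb=16,
    )
    print(config)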
-
Publication No.: US20190220783A1
Publication Date: 2019-07-18
Application No.: US15872547
Application Date: 2018-01-16
Applicant: Amazon Technologies, Inc.
Inventor: Nagajyothi NOOKULA , Poorna Chand Srinivas PERUMALLA , Aashish JINDIA , Danjuan YE , Eduardo Manuel CALLEJA , Song GE , Vinay HANUMAIAH , Wanqiang CHEN , Safeer MOHIUDDIN , Romi BOIMER , Madan Mohan Rao JAMPANI , Fei CHEN
CPC classification number: G06N20/00 , G06F9/5044 , G06F9/5066 , G06N5/022
Abstract: Techniques for generating and executing an execution plan for a machine learning (ML) model using one of an edge device and a non-edge device are described. In some examples, a request for the generation of the execution plan includes at least one objective for the execution of the ML model and the execution plan is generated based at least in part on comparative execution information and network latency information.
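To make the planning step concrete, a hedged Python sketch follows: it picks between an edge device and a non-edge device for a latency objective by comparing estimated on-device execution time against remote execution time plus network latency. The numbers, names, and the single "minimize latency" objective are illustrative assumptions, not the patented planner.

    # Hedged sketch: choose where to execute an ML model given comparative
    # execution estimates and network latency (illustrative numbers and names).

    def plan_execution(objective, edge_exec_ms, non_edge_exec_ms, network_latency_ms):
        """Return 'edge' or 'non-edge' for a simple latency-minimizing objective."""
        if objective != "minimize_latency":
            raise ValueError("only the latency objective is sketched here")
        remote_total_ms = non_edge_exec_ms + network_latency_ms
        return "edge" if edge_exec_ms <= remote_total_ms else "non-edge"

    # Comparative execution information: the model runs slower on the edge
    # device, but the round trip to the non-edge device adds network latency.
    print(plan_execution("minimize_latency",
                         edge_exec_ms=45.0,
                         non_edge_exec_ms=12.0,
                         network_latency_ms=60.0))   # -> "edge"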
-
Publication No.: US20200004597A1
Publication Date: 2020-01-02
Application No.: US16020810
Application Date: 2018-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Sudipta SENGUPTA , Poorna Chand Srinivas PERUMALLA , Dominic Rajeev DIVAKARUNI , Nafea BSHARA , Leo Parker DIRAC , Bratin SAHA , Matthew James WOOD , Andrea OLGIATI , Swaminathan SIVASUBRAMANIAN
Abstract: Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes: provisioning an application instance and portions of at least one accelerator attached to the application instance to execute a machine learning model of an application of the application instance; loading the machine learning model onto the portions of the at least one accelerator; receiving scoring data in the application; and utilizing each of the portions of the attached at least one accelerator to perform inference on the scoring data in parallel, using only one response from the portions of the accelerator.
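The abstract's pattern of running the same inference on several accelerator portions in parallel and keeping only one response can be sketched with Python's standard concurrent.futures; the simulated slots and timings below are assumptions, not the patented implementation.

    # Hedged sketch: fan the same scoring data out to several accelerator
    # portions and keep only the first response that comes back (simulated slots).
    import concurrent.futures as cf
    import random
    import time

    def infer_on_slot(slot_id, scoring_data):
        # Stand-in for inference on one provisioned accelerator portion.
        time.sleep(random.uniform(0.01, 0.05))
        return {"slot": slot_id, "result": sum(scoring_data)}

    def parallel_inference(scoring_data, slots=(0, 1, 2)):
        with cf.ThreadPoolExecutor(max_workers=len(slots)) as pool:
            futures = [pool.submit(infer_on_slot, s, scoring_data) for s in slots]
            # Use only one response: take whichever portion answers first.
            first_done = next(cf.as_completed(futures))
            return first_done.result()

    print(parallel_inference([1.0, 2.0, 3.0]))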
-
Publication No.: US20200004595A1
Publication Date: 2020-01-02
Application No.: US16020819
Application Date: 2018-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Sudipta SENGUPTA , Poorna Chand Srinivas PERUMALLA , Dominic Rajeev DIVAKARUNI , Nafea BSHARA , Leo Parker DIRAC , Bratin SAHA , Matthew James WOOD , Andrea OLGIATI , Swaminathan SIVASUBRAMANIAN
Abstract: Implementations detailed herein include description of a computer-implemented method. In an implementation, the method at least includes: attaching a first set of one or more accelerator slots of an accelerator appliance to an application instance of a multi-tenant provider network according to an application instance configuration that defines the per-accelerator-slot capabilities to be used by an application of the application instance, wherein the multi-tenant provider network comprises a plurality of computing devices configured to implement a plurality of virtual compute instances, and wherein the first set of one or more accelerator slots is implemented using physical accelerator resources accessible to the application instance; and, while the application performs inference using a machine learning model loaded onto the first set of one or more accelerator slots of the attached accelerator appliance, managing resources of the accelerator appliance using an accelerator appliance manager of the accelerator appliance.
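As a loose illustration of the appliance-side bookkeeping the abstract mentions, the sketch below attaches accelerator slots to an application instance according to a per-slot capability configuration and lets a manager object track which slots are in use. The class name, the capability strings, and the bookkeeping logic are all assumptions rather than the disclosed design.

    # Hedged sketch: an accelerator appliance manager that attaches slots to an
    # application instance per a slot-capability configuration (assumed design).

    class AcceleratorApplianceManager:
        def __init__(self, total_slots):
            # slot id -> name of the application instance using it (or None)
            self.slots = {i: None for i in range(total_slots)}

        def attach(self, instance_name, slot_capabilities):
            """Attach one free slot per requested capability; return the slot ids."""
            attached = []
            for capability in slot_capabilities:
                free = next(i for i, user in self.slots.items() if user is None)
                self.slots[free] = instance_name
                attached.append((free, capability))
            return attached

        def detach(self, instance_name):
            for slot, user in self.slots.items():
                if user == instance_name:
                    self.slots[slot] = None

        def utilization(self):
            used = sum(1 for user in self.slots.values() if user is not None)
            return used / len(self.slots)

    manager = AcceleratorApplianceManager(total_slots=4)
    print(manager.attach("app-instance-1", ["fp16-tops:8", "fp16-tops:8"]))
    print(manager.utilization())   # -> 0.5
    manager.detach("app-instance-1")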
-