-
Publication No.: US20190095122A1
Publication Date: 2019-03-28
Application No.: US15717963
Filing Date: 2017-09-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Doshi , Daniel Rivas Barragan , Federico Ardanaz , Suraj Prabhakaran
IPC: G06F3/06
Abstract: According to various aspects, a computing system may include one or more first memories of a first memory type, one or more second memories of a second memory type different from the first memory type, and a memory controller. The memory controller may be configured to receive telemetry data associated with at least one of the one or more first memories and the one or more second memories, execute a data transfer between the one or more first memories and the one or more second memories in a first operation mode of the memory controller, suspend a data transfer between the one or more first memories and the one or more second memories in a second operation mode of the memory controller, and switch between the first operation mode and the second operation mode based on the telemetry data.
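The mode-switching behavior the abstract describes can be illustrated with a minimal Python sketch. The class, the bandwidth threshold, and the telemetry field are illustrative assumptions for exposition, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    bandwidth_utilization: float  # assumed metric, 0.0..1.0, for the shared memory bus

class TieredMemoryController:
    """Toy controller that migrates data between a first-tier and a
    second-tier memory, but suspends migration when telemetry shows
    the bus is under pressure."""
    TRANSFER, SUSPEND = "transfer", "suspend"

    def __init__(self, threshold=0.8):  # threshold is an assumed policy knob
        self.threshold = threshold
        self.mode = self.TRANSFER

    def on_telemetry(self, t: Telemetry):
        # Switch between the two operation modes based on telemetry data.
        self.mode = self.SUSPEND if t.bandwidth_utilization >= self.threshold else self.TRANSFER

    def migrate(self, pages):
        # Data transfer proceeds only in the first operation mode.
        return list(pages) if self.mode == self.TRANSFER else []

ctrl = TieredMemoryController()
ctrl.on_telemetry(Telemetry(0.3))
moved = ctrl.migrate(["p1", "p2"])  # low utilization: transfer mode
ctrl.on_telemetry(Telemetry(0.95))
held = ctrl.migrate(["p3"])         # saturated: migration suspended
```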
-
Publication No.: US20190044831A1
Publication Date: 2019-02-07
Application No.: US15857526
Filing Date: 2017-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Suraj Prabhakaran , Raghu Kondapalli , Alexander Bachmutsky
IPC: H04L12/24
Abstract: Various systems and methods for implementing a service-level agreement (SLA) are described. An apparatus receives a request from a requester via a network interface of a gateway, the request comprising an inference model identifier that identifies a handler of the request and a response time indicator. The response time indicator either relates to a time within which the request is to be handled or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler, which is a platform or an inference model, to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
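The routing decision sketched in the abstract, with an indicator that is either a deadline or undefined, can be modeled in a few lines of Python. The handler registry, model names, and latency figures below are invented for illustration:

```python
# Hypothetical handler registry: model id -> list of (network location, typical latency in ms)
HANDLERS = {
    "resnet50": [("edge-gw-1", 20), ("cloud-1", 120)],
}

UNDEFINED = None  # the response time indicator may indicate an undefined time

def route(model_id, response_time_ms):
    """Pick a network location whose typical latency is consistent with
    the response-time indicator; an undefined indicator accepts any handler."""
    candidates = HANDLERS.get(model_id, [])
    for location, latency in sorted(candidates, key=lambda c: c[1]):
        if response_time_ms is UNDEFINED or latency <= response_time_ms:
            return location
    return None  # no handler can meet the indicated response time

fast = route("resnet50", 50)          # must be handled within 50 ms
any_ok = route("resnet50", UNDEFINED) # undefined: any handler will do
```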
-
Publication No.: US12217192B2
Publication Date: 2025-02-04
Application No.: US18091874
Filing Date: 2022-12-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Kshitij Arun Doshi , Da-Ming Chiang , Joe Cahill
Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for implementing a registration of the inference model, and invoking particular variants of an inference model, are also described.
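Gateway-side selection of a model variant from contextual data might look like the following sketch. The variant registry, the precision/battery fields, and the selection policy are all hypothetical stand-ins for the model specification and contextual data the abstract mentions:

```python
# Hypothetical registry of inferencing-model variants hosted on remote services.
VARIANTS = [
    {"model": "detector", "precision": "int8", "min_battery": 0.0},
    {"model": "detector", "precision": "fp32", "min_battery": 0.5},
]

def select_variant(spec, context):
    """Gateway-side selection: pick the richest variant whose
    requirements are satisfied by the edge device's contextual data."""
    eligible = [v for v in VARIANTS
                if v["model"] == spec and context["battery"] >= v["min_battery"]]
    # Assumed policy: prefer higher precision when the context allows it.
    order = {"fp32": 2, "int8": 1}
    return max(eligible, key=lambda v: order[v["precision"]], default=None)

low_power = select_variant("detector", {"battery": 0.2})   # only int8 qualifies
plugged_in = select_variant("detector", {"battery": 1.0})  # fp32 preferred
```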
-
Publication No.: US20230396669A1
Publication Date: 2023-12-07
Application No.: US18234791
Filing Date: 2023-08-16
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Ned Smith , Kshitij Doshi , Alexander Bachmutsky , Suraj Prabhakaran
IPC: H04L67/1004 , H04L41/12 , H04L41/5006
CPC classification number: H04L67/1004 , H04L41/12 , H04L41/5006 , H04L41/5019
Abstract: Technologies for function as a service (FaaS) arbitration include an edge gateway, multiple endpoint devices, and multiple service providers. The edge gateway receives a registration request from a service provider that is indicative of an FaaS function identifier and a transform function. The edge gateway verifies an attestation received from the service provider and registers the service provider. The edge gateway receives a function execution request from an endpoint device that is indicative of the FaaS function identifier. The edge gateway selects the service provider based on the FaaS function identifier, programs an accelerator with the transform function, executes the transform function with the accelerator to transform the function execution request to a provider request, and submits the provider request to the service provider. The service provider may be selected based on an expected service level included in the function execution request. Other embodiments are described and claimed.
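The register-then-arbitrate flow in the abstract can be reduced to a small sketch. The attestation check is a placeholder, and the provider names, transform, and service levels are invented; the real design programs the transform into an accelerator rather than calling a Python function:

```python
class EdgeGateway:
    """Toy FaaS arbiter: providers register a function id plus a
    transform function; requests are transformed and forwarded."""
    def __init__(self):
        self.providers = {}  # faas_id -> list of (provider, transform, service_level)

    def register(self, provider, faas_id, transform, service_level, attestation):
        # Stand-in for verifying the attestation received from the provider.
        if attestation != "valid":
            raise ValueError("attestation failed")
        self.providers.setdefault(faas_id, []).append((provider, transform, service_level))

    def execute(self, faas_id, request, expected_level=0):
        # Select a provider meeting the expected service level, then run the
        # transform (the accelerator's job in the abstract) to turn the
        # function execution request into a provider-specific request.
        for provider, transform, level in self.providers.get(faas_id, []):
            if level >= expected_level:
                return provider, transform(request)
        return None

gw = EdgeGateway()
gw.register("acme", "resize", lambda r: {"acme_op": r["op"]},
            service_level=2, attestation="valid")
result = gw.execute("resize", {"op": "thumb"}, expected_level=1)
```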
-
Publication No.: US11799952B2
Publication Date: 2023-10-24
Application No.: US16241891
Filing Date: 2019-01-07
Applicant: Intel Corporation
Inventor: Suraj Prabhakaran , Kshitij A. Doshi , Francesc Guim Bernat
IPC: H04L67/1036 , H04L67/1004 , H04L9/40 , H04L41/046 , H04L67/51 , H04L67/61
CPC classification number: H04L67/1036 , H04L41/046 , H04L63/08 , H04L67/1004 , H04L67/51 , H04L67/61
Abstract: A computing cluster can receive a request to perform a workload from a client. The request can include a service discovery agent. If the request is authenticated and permitted on the computing cluster, the service discovery agent is executed. Execution of the service discovery agent can lead to discovery of resource capabilities of the cluster and selection of the appropriate resource based on performance requirements. The selected resource can be deployed for execution of the workload.
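A request-borne discovery agent as described can be sketched as a callable shipped with the request; the cluster authenticates the client, then lets the agent inspect resources and pick one. The cluster contents, client names, and FLOPS-based selection rule are illustrative assumptions:

```python
# Hypothetical cluster state: its resource capabilities and permitted clients.
CLUSTER = {
    "resources": [
        {"name": "cpu-node", "flops": 1e12},
        {"name": "gpu-node", "flops": 5e13},
    ],
    "allowed_clients": {"client-a"},
}

def discovery_agent(resources, required_flops):
    """Agent included in the request: discovers the cluster's resource
    capabilities and selects the smallest resource that suffices."""
    fits = [r for r in resources if r["flops"] >= required_flops]
    return min(fits, key=lambda r: r["flops"], default=None)

def handle_request(client, agent, required_flops):
    # Only authenticated, permitted requests may execute their agent.
    if client not in CLUSTER["allowed_clients"]:
        return None
    return agent(CLUSTER["resources"], required_flops)

chosen = handle_request("client-a", discovery_agent, required_flops=2e13)
denied = handle_request("intruder", discovery_agent, required_flops=2e13)
```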
-
Publication No.: US11706158B2
Publication Date: 2023-07-18
Application No.: US17510077
Filing Date: 2021-10-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L49/25 , H04L12/66 , H04L47/33 , H04L49/20 , H04L41/5019 , H04L41/0823
CPC classification number: H04L49/25 , H04L12/66 , H04L41/0823 , H04L41/5019 , H04L47/33 , H04L49/205
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device that includes a processor platform with at least one processor supporting a plurality of non-accelerated function-as-a-service (FaaS) operations, and an accelerated platform with at least one accelerator supporting a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
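The dispatch decision, run on the processor or select an accelerator platform, can be sketched as below. The accelerator inventory, the `afaas` flag, and the slot-based availability check are assumed details, not from the patent:

```python
# Hypothetical inventory of accelerator platforms in the device edge network.
ACCELERATORS = [
    {"name": "fpga-0", "kind": "fpga", "free_slots": 1},
    {"name": "gpu-0", "kind": "gpu", "free_slots": 4},
]

def dispatch(request):
    """Route a FaaS request: AFaaS requests go to an accelerator
    platform matching their compute requirements; the rest (and any
    overflow) run on the non-accelerated processor platform."""
    if not request.get("afaas"):
        return "cpu"
    # Identify compute requirements, then select a platform meeting them.
    for acc in ACCELERATORS:
        if acc["kind"] == request["kind"] and acc["free_slots"] > 0:
            return acc["name"]
    return "cpu"  # fall back to non-accelerated execution

plain = dispatch({"afaas": False})
accel = dispatch({"afaas": True, "kind": "gpu"})
```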
-
Publication No.: US20230142539A1
Publication Date: 2023-05-11
Application No.: US18068409
Filing Date: 2022-12-19
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ignacio Astilleros Diez , Timothy Verrall
IPC: H04L47/50 , H04L67/10 , H04L67/60 , H04L67/2866
CPC classification number: H04L47/50 , H04L67/10 , H04L67/60 , H04L67/2866 , H04L49/90
Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
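The two-level handoff, a gateway-level queue feeding a rack-level queue, maps naturally onto a priority queue. This sketch assumes a `priority` service parameter and models the rack-level queue manager as a plain list; both are illustrative simplifications of the hardware circuitry the abstract describes:

```python
import heapq

class GatewayQueueManager:
    """Gateway-level queue manager: parses service parameters from
    client requests, orders them in a queue, and drains them toward
    a rack-level queue manager."""
    def __init__(self):
        self._q, self._seq = [], 0

    def enqueue(self, request):
        prio = request["params"]["priority"]  # parsed service parameter (lower = sooner)
        heapq.heappush(self._q, (prio, self._seq, request))
        self._seq += 1  # tie-breaker keeps FIFO order within a priority

    def drain_to(self, rack_queue):
        while self._q:
            _, _, req = heapq.heappop(self._q)
            rack_queue.append(req)  # hand off to the rack-level manager

gwq = GatewayQueueManager()
gwq.enqueue({"fn": "encode", "params": {"priority": 5}})
gwq.enqueue({"fn": "infer", "params": {"priority": 1}})
rack = []
gwq.drain_to(rack)
```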
-
Publication No.: US11436433B2
Publication Date: 2022-09-06
Application No.: US15857562
Filing Date: 2017-12-28
Applicant: Intel Corporation
Inventor: Alexander Bachmutsky , Kshitij A. Doshi , Francesc Guim Bernat , Raghu Kondapalli , Suraj Prabhakaran
IPC: G06K9/62 , H04L67/1097 , G06N3/08 , H04L67/125 , H04L67/12 , H04L67/10 , G06N3/063 , H04W4/38
Abstract: An apparatus for training artificial intelligence (AI) models is presented. In embodiments, the apparatus may include an input interface to receive in real time model training data from one or more sources to train one or more artificial neural networks (ANNs) associated with the one or more sources, each of the one or more sources associated with at least one of the ANNs; a load distributor coupled to the input interface to distribute in real time the model training data for the one or more ANNs to one or more AI appliances; and a resource manager coupled to the load distributor to dynamically assign one or more computing resources on ones of the AI appliances to each of the ANNs in view of amounts of the training data received in real time from the one or more sources for their associated ANNs.
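The resource manager's core job, assigning compute in proportion to incoming training data volume, can be sketched with a simple proportional split. The largest-remainder rounding and the rate figures are assumptions added so the toy allocation sums exactly:

```python
def assign_resources(data_rates, total_units):
    """Resource-manager sketch: split compute units across ANNs in
    proportion to the training data arriving for each; largest-
    remainder rounding keeps the total exact."""
    total_rate = sum(data_rates.values())
    raw = {ann: total_units * rate / total_rate for ann, rate in data_rates.items()}
    alloc = {ann: int(r) for ann, r in raw.items()}
    # Hand leftover units to the ANNs with the largest fractional remainders.
    leftover = total_units - sum(alloc.values())
    for ann in sorted(raw, key=lambda a: raw[a] - alloc[a], reverse=True)[:leftover]:
        alloc[ann] += 1
    return alloc

# e.g. ann-a receives 3x the training data of ann-b, with 8 units to share
alloc = assign_resources({"ann-a": 300, "ann-b": 100}, total_units=8)
```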
-
Publication No.: US20220224657A1
Publication Date: 2022-07-14
Application No.: US17510077
Filing Date: 2021-10-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L49/25 , H04L12/66 , H04L47/33 , H04L49/20 , H04L41/5019 , H04L41/0823
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device that includes a processor platform with at least one processor supporting a plurality of non-accelerated function-as-a-service (FaaS) operations, and an accelerated platform with at least one accelerator supporting a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
-
Publication No.: US11356339B2
Publication Date: 2022-06-07
Application No.: US17066400
Filing Date: 2020-10-08
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Suraj Prabhakaran , Raghu Kondapalli , Alexander Bachmutsky
IPC: H04L29/08 , G06F17/30 , G06F11/34 , G06F11/30 , H04L41/5019 , H04L67/12 , H04L67/63 , H04L67/61 , H04L41/0806 , H04L41/5041 , G06N5/04
Abstract: Various systems and methods for implementing a service-level agreement (SLA) are described. An apparatus receives a request from a requester via a network interface of a gateway, the request comprising an inference model identifier that identifies a handler of the request and a response time indicator. The response time indicator either relates to a time within which the request is to be handled or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler, which is a platform or an inference model, to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
-