-
Publication Number: US20190042955A1
Publication Date: 2019-02-07
Application Number: US15857087
Filing Date: 2017-12-28
Applicant: Joe Cahill , Da-Ming Chiang , Kshitij Arun Doshi , Francesc Cesc Guim Bernat , Suraj Prabhakaran
Inventor: Joe Cahill , Da-Ming Chiang , Kshitij Arun Doshi , Francesc Cesc Guim Bernat , Suraj Prabhakaran
IPC: G06N5/04
Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for registering the inferencing model and for invoking particular variants of an inferencing model are also described.
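The gateway-side flow in this abstract (receive a request, pick a model variant from the model specification and device context, execute it, and return results to the edge device) can be illustrated with a minimal Python sketch. All names here (InferenceRequest, MODEL_REGISTRY, select_variant) are hypothetical, not the patent's actual interfaces.

```python
# Minimal sketch of the gateway-side inferencing flow. All names are
# illustrative assumptions, not the patent's interfaces.
from dataclasses import dataclass, field


@dataclass
class InferenceRequest:
    model_spec: str                               # which inferencing model is wanted
    context: dict = field(default_factory=dict)   # contextual data from the edge device
    payload: list = field(default_factory=list)   # input data to run inference on


# Registry of model variants hosted on remote services, keyed by (model, tier).
MODEL_REGISTRY = {
    ("object-detect", "low-latency"): lambda xs: [f"fast:{x}" for x in xs],
    ("object-detect", "high-accuracy"): lambda xs: [f"accurate:{x}" for x in xs],
}


def select_variant(req: InferenceRequest):
    """Pick a model implementation based on the spec and the device context."""
    tier = "low-latency" if req.context.get("battery_low") else "high-accuracy"
    return MODEL_REGISTRY[(req.model_spec, tier)]


def handle_request(req: InferenceRequest):
    """Gateway entry point: select, execute, and return results to the edge device."""
    model = select_variant(req)
    return model(req.payload)


if __name__ == "__main__":
    req = InferenceRequest("object-detect", {"battery_low": True}, ["frame-0"])
    print(handle_request(req))   # results would be sent back to the edge device
```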
-
Publication Number: US20190041960A1
Publication Date: 2019-02-07
Application Number: US16011842
Filing Date: 2018-06-19
Applicant: Francesc Guim Bernat , Suraj Prabhakaran , Timothy Verrall , Karthik Kumar , Mark A. Schmisseur
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Timothy Verrall , Karthik Kumar , Mark A. Schmisseur
Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
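The power-budgeting decision described here can be sketched as a simple capping function: the wattage granted to a tenant's portion of the pooled resources is bounded by what the tenant's power budget can afford at the current power cost. This is a minimal illustration under assumed names (TenantBudget, allocate_power), not the claimed implementation.

```python
# Rough sketch of a budget-and-cost-aware power allocation. Names and the
# budget model are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class TenantBudget:
    tenant_id: str
    budget_units: float   # how much the tenant may spend on power for this window


def allocate_power(budget: TenantBudget, requested_watts: float,
                   cost_per_watt: float) -> float:
    """Return the wattage to grant to the tenant's resources for this workflow.

    The grant is capped so that the cost at the current power price never
    exceeds the tenant's remaining budget.
    """
    affordable_watts = budget.budget_units / cost_per_watt
    return min(requested_watts, affordable_watts)


if __name__ == "__main__":
    budget = TenantBudget("tenant-a", budget_units=50.0)
    # At a cost of 0.4 units/W, the tenant can afford up to 125 W.
    print(allocate_power(budget, requested_watts=200.0, cost_per_watt=0.4))  # -> 125.0
```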
-
Publication Number: US20180026849A1
Publication Date: 2018-01-25
Application Number: US15655864
Filing Date: 2017-07-20
Applicant: FRANCESC GUIM BERNAT , SUSANNE M. BALLE , DANIEL RIVAS BARRAGAN , JOHN CHUN KWOK LEUNG , SURAJ PRABHAKARAN , MURUGASAMY K. NACHIMUTHU , SLAWOMIR PUTYRSKI
Inventor: FRANCESC GUIM BERNAT , SUSANNE M. BALLE , DANIEL RIVAS BARRAGAN , JOHN CHUN KWOK LEUNG , SURAJ PRABHAKARAN , MURUGASAMY K. NACHIMUTHU , SLAWOMIR PUTYRSKI
CPC classification number: H04L43/16 , G06F16/2282 , G06F16/2379 , H04L41/0816 , H04L41/0896 , H04L41/12 , H04L41/16 , H04L41/5009 , H04L41/5025 , H04L41/5054 , H04L43/0858 , H04L43/0876 , H04L43/0894 , H04L47/722 , H04L47/803 , H04L47/805 , H04L67/10 , H04L67/1031 , H04L67/34 , H04Q9/00 , H04Q2209/20
Abstract: Techniques for managing static and dynamic partitions in software-defined infrastructures (SDI) are described. An SDI manager component may include one or more processor circuits to access one or more resources. The SDI manager component may include a partition manager to create one or more partitions using the one or more resources, each partition including a plurality of nodes of a similar resource type. The SDI manager may generate an update to a pre-composed partition table, stored within a non-transitory computer-readable storage medium, that includes the created one or more partitions, and may receive a request from an orchestrator for a node. The SDI manager may select one of the created partitions for the orchestrator based upon the pre-composed partition table and identify the selected partition to the orchestrator. Other embodiments are described and claimed.
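A simplified sketch of the partition-table flow: partitions of same-type nodes are pre-composed into a table, and an orchestrator's request for a node is answered by selecting a matching partition. The class and field names below are assumptions for illustration.

```python
# Simplified sketch of a pre-composed partition table and partition selection.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Partition:
    partition_id: str
    resource_type: str                        # e.g. "compute" or "storage"
    nodes: list = field(default_factory=list)


class SDIManager:
    def __init__(self):
        self.partition_table = {}             # pre-composed table: partition_id -> Partition

    def create_partition(self, partition_id: str, resource_type: str, nodes: list) -> Partition:
        part = Partition(partition_id, resource_type, nodes)
        self.partition_table[partition_id] = part   # update the pre-composed table
        return part

    def select_for_orchestrator(self, resource_type: str) -> Optional[Partition]:
        """Answer an orchestrator's node request with a matching pre-composed partition."""
        for part in self.partition_table.values():
            if part.resource_type == resource_type:
                return part
        return None


if __name__ == "__main__":
    sdi = SDIManager()
    sdi.create_partition("p1", "compute", ["node-1", "node-2"])
    print(sdi.select_for_orchestrator("compute").partition_id)   # -> p1
```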
-
Publication Number: US20190158606A1
Publication Date: 2019-05-23
Application Number: US16235137
Filing Date: 2018-12-28
Applicant: FRANCESC GUIM BERNAT , PATRICK BOHAN , KSHITIJ ARUN DOSHI , BRINDA GANESH , ANDREW J. HERDRICH , MONICA KENGUVA , KARTHIK KUMAR , PATRICK G. KUTCH , FELIPE PASTOR BENEYTO , RASHMIN PATEL , SURAJ PRABHAKARAN , NED M. SMITH , PETAR TORRE , ALEXANDER VUL
Inventor: FRANCESC GUIM BERNAT , PATRICK BOHAN , KSHITIJ ARUN DOSHI , BRINDA GANESH , ANDREW J. HERDRICH , MONICA KENGUVA , KARTHIK KUMAR , PATRICK G. KUTCH , FELIPE PASTOR BENEYTO , RASHMIN PATEL , SURAJ PRABHAKARAN , NED M. SMITH , PETAR TORRE , ALEXANDER VUL
IPC: H04L29/08 , H04L12/26 , H04L12/911 , H04L12/24
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system that provides computing capabilities for a connected edge device at an identified service level; identifying a mobility condition for the service based on a change in network connectivity with the connected edge device; and migrating the service to another edge computing system based on the identified mobility condition, so that the service can continue at the other edge computing system and provide computing capabilities for the connected edge device at the identified service level.
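The service-migration example can be sketched as a small decision function: detect a mobility condition when the serving node can no longer meet the identified service level, then move the service to a candidate node that can. The latency-based mobility test and all names here are assumptions, not the claimed mechanism.

```python
# Minimal sketch of mobility-driven service migration between edge nodes.
from dataclasses import dataclass


@dataclass
class EdgeNode:
    name: str
    latency_ms: float   # current latency to the connected edge device


def detect_mobility_condition(current: EdgeNode, max_latency_ms: float) -> bool:
    """A mobility condition exists when the serving node can no longer meet the SLO."""
    return current.latency_ms > max_latency_ms


def migrate_if_needed(current: EdgeNode, candidates: list, max_latency_ms: float) -> EdgeNode:
    if not detect_mobility_condition(current, max_latency_ms):
        return current
    # Pick the candidate node that best satisfies the identified service level.
    best = min(candidates, key=lambda n: n.latency_ms)
    return best if best.latency_ms <= max_latency_ms else current


if __name__ == "__main__":
    serving = EdgeNode("edge-a", latency_ms=42.0)
    others = [EdgeNode("edge-b", 9.0), EdgeNode("edge-c", 15.0)]
    print(migrate_if_needed(serving, others, max_latency_ms=20.0).name)   # -> edge-b
```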
-
Publication Number: US20210021594A1
Publication Date: 2021-01-21
Application Number: US17032696
Filing Date: 2020-09-25
Applicant: Francesc Guim Bernat , Ned M. Smith , Kshitij Arun Doshi , Suraj Prabhakaran , Brinda Ganesh
Inventor: Francesc Guim Bernat , Ned M. Smith , Kshitij Arun Doshi , Suraj Prabhakaran , Brinda Ganesh
IPC: H04L29/06
Abstract: Various aspects of methods, systems, and use cases for biometric security for edge platform management are described. An edge cloud system to implement biometric security for edge platform management comprises a biometric sensor and an edge node in an edge network, the edge node to: receive a request to access a feature of the edge node, the request originating from an entity and comprising an entity identifier and a feature identifier; receive, from the biometric sensor, biometric data of the entity; authenticate the entity using the biometric data; and, in response to authenticating the entity using the biometric data, grant access to the feature based on a crosscheck of the received entity identifier and the received feature identifier against an access control list that correlates entity identifiers to feature identifiers.
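A toy sketch of the access decision: authenticate the entity with the biometric data, then crosscheck the (entity identifier, feature identifier) pair against the access control list. The data layout and function names are assumptions, not the claimed design.

```python
# Toy sketch of biometric authentication followed by an ACL crosscheck.
ENROLLED_BIOMETRICS = {"alice": "template-123"}   # entity_id -> enrolled template
ACCESS_CONTROL_LIST = {("alice", "node-reset"), ("alice", "telemetry-read")}


def authenticate(entity_id: str, biometric_sample: str) -> bool:
    """Stand-in for a real biometric match against the enrolled template."""
    return ENROLLED_BIOMETRICS.get(entity_id) == biometric_sample


def grant_access(entity_id: str, feature_id: str, biometric_sample: str) -> bool:
    if not authenticate(entity_id, biometric_sample):
        return False
    # ACL crosscheck: the entity must be correlated to this exact feature.
    return (entity_id, feature_id) in ACCESS_CONTROL_LIST


if __name__ == "__main__":
    print(grant_access("alice", "node-reset", "template-123"))    # True
    print(grant_access("alice", "node-reset", "wrong-sample"))    # False
```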
-
Publication Number: US20210021533A1
Publication Date: 2021-01-21
Application Number: US17033140
Filing Date: 2020-09-25
Applicant: Francesc Guim Bernat , Ned M. Smith , Kshitij Arun Doshi , Suraj Prabhakaran , Timothy Verrall , Kapil Sood , Tarun Viswanathan
Inventor: Francesc Guim Bernat , Ned M. Smith , Kshitij Arun Doshi , Suraj Prabhakaran , Timothy Verrall , Kapil Sood , Tarun Viswanathan
IPC: H04L12/841 , H04L12/911 , H04L12/933
Abstract: Systems and techniques for intelligent data forwarding in edge networks are described herein. A request may be received from an edge user device for a service via a first endpoint. A time value may be calculated using a timestamp of the request. Motion characteristics may be determined for the edge user device using the time value. A response to the request may be transmitted to a second endpoint based on the motion characteristics.
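The forwarding logic can be sketched as two small steps: derive a motion characteristic from positions and timestamps, then pick which endpoint should receive the response. The one-dimensional motion model, the speed threshold, and all names are illustrative assumptions.

```python
# Small sketch of motion-aware response forwarding between two endpoints.
import time


def estimate_speed(prev_pos: float, prev_ts: float, cur_pos: float, cur_ts: float) -> float:
    """Very simple 1-D motion characteristic: displacement over elapsed time."""
    elapsed = max(cur_ts - prev_ts, 1e-6)
    return (cur_pos - prev_pos) / elapsed


def pick_response_endpoint(speed: float, first: str, second: str) -> str:
    # A fast-moving device will likely have left the first endpoint's coverage,
    # so the response is forwarded toward the second endpoint instead.
    return second if abs(speed) > 10.0 else first


if __name__ == "__main__":
    t0 = time.time()
    speed = estimate_speed(prev_pos=0.0, prev_ts=t0, cur_pos=30.0, cur_ts=t0 + 2.0)
    print(pick_response_endpoint(speed, "endpoint-1", "endpoint-2"))   # -> endpoint-2
```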
-
Publication Number: US20190042884A1
Publication Date: 2019-02-07
Application Number: US15857562
Filing Date: 2017-12-28
Applicant: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Alexander BACHMUTSKY , Raghu KONDAPALLI , Kshitij A. DOSHI
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Alexander BACHMUTSKY , Raghu KONDAPALLI , Kshitij A. DOSHI
Abstract: An apparatus for training artificial intelligence (AI) models is presented. In embodiments, the apparatus may include an input interface to receive in real time model training data from one or more sources to train one or more artificial neural networks (ANNs) associated with the one or more sources, each of the one or more sources associated with at least one of the ANNs; a load distributor coupled to the input interface to distribute in real time the model training data for the one or more ANNs to one or more AI appliances; and a resource manager coupled to the load distributor to dynamically assign one or more computing resources on ones of the AI appliances to each of the ANNs in view of amounts of the training data received in real time from the one or more sources for their associated ANNs.
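The load distributor and resource manager described here can be sketched as a proportional assignment: compute resources on the AI appliances are assigned to each ANN according to its share of the training data currently streaming in from its source. The function below is a minimal illustration under assumed names.

```python
# Condensed sketch of data-volume-proportional resource assignment for ANNs.
def assign_resources(total_units: int, incoming_bytes_per_ann: dict) -> dict:
    """Split `total_units` of compute across ANNs by their share of incoming data."""
    total_bytes = sum(incoming_bytes_per_ann.values()) or 1
    return {
        ann: round(total_units * nbytes / total_bytes)
        for ann, nbytes in incoming_bytes_per_ann.items()
    }


if __name__ == "__main__":
    # Source A is currently producing twice as much training data as source B.
    print(assign_resources(12, {"ann-a": 2_000_000, "ann-b": 1_000_000}))
    # -> {'ann-a': 8, 'ann-b': 4}
```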
-
Publication Number: US20190235773A1
Publication Date: 2019-08-01
Application Number: US16378828
Filing Date: 2019-04-09
Applicant: Mark Schmisseur , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran
Inventor: Mark Schmisseur , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran
CPC classification number: G06F3/0622 , G06F3/0604 , G06F3/0658 , G06F3/0659 , G06F3/0685 , G06F21/6218
Abstract: Examples relate to a memory controller or memory controller device for a memory pool of a computer system, to a management apparatus or management device for the computer system, to an apparatus or device for a compute node of the computer system, and to corresponding methods and computer programs. The memory pool comprises computer memory that is accessible to a plurality of compute nodes of the computer system via the memory controller. The memory controller comprises interface circuitry for communicating with the plurality of compute nodes and control circuitry configured to obtain an access control instruction via the interface circuitry. The access control instruction indicates that access to a portion of the computer memory of the memory pool is to be granted to one or more processes executed by the plurality of compute nodes of the computer system, and comprises information related to a node identifier and a process identifier for each of the one or more processes. The control circuitry is configured to provide access to the portion of the computer memory of the memory pool to the one or more processes based on the access control instruction.
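A minimal sketch of the access-control check: the controller records the (node identifier, process identifier) pairs carried by an access control instruction for a region of the pool, and only accesses that match a recorded pair are served. Class and method names are assumptions.

```python
# Minimal sketch of per-region (node, process) access control for a memory pool.
class PoolMemoryController:
    def __init__(self):
        self.grants = {}   # region -> set of (node_id, process_id) allowed to access it

    def apply_access_instruction(self, region: str, allowed_pairs: list):
        """Record which node/process pairs may access the given region of the pool."""
        self.grants.setdefault(region, set()).update(allowed_pairs)

    def access(self, region: str, node_id: str, process_id: int) -> bool:
        return (node_id, process_id) in self.grants.get(region, set())


if __name__ == "__main__":
    ctrl = PoolMemoryController()
    ctrl.apply_access_instruction("region-7", [("node-3", 4321)])
    print(ctrl.access("region-7", "node-3", 4321))   # True
    print(ctrl.access("region-7", "node-5", 4321))   # False
```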
-
Publication Number: US20190158300A1
Publication Date: 2019-05-23
Application Number: US16235894
Filing Date: 2018-12-28
Applicant: Dario Sabella , Ned M. Smith , Neal Oliver , Kshitij Arun Doshi , Suraj Prabhakaran , Miltiadis Filippou , Francesc Guim Bernat
Inventor: Dario Sabella , Ned M. Smith , Neal Oliver , Kshitij Arun Doshi , Suraj Prabhakaran , Miltiadis Filippou , Francesc Guim Bernat
Abstract: An architecture to allow Multi-Access Edge Computing (MEC) billing and charge tracking is disclosed. In an example, a tracking process, such as is performed by an edge computing apparatus, includes: receiving a computational processing request for a service operated with computing resources of the edge computing apparatus from a connected edge device within a first access network, wherein the computational processing request includes an identification of the connected edge device; identifying a processing device, within the first access network, for performing the computational processing request; and storing the identification of the connected edge device, a processing device identification, and data describing the computational processes completed by the processing device in association with the computational processing request.
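The charge-tracking step can be sketched as appending a record that ties together the requesting edge device, the processing device, and a description of the completed work. The record layout and names below are assumptions for illustration.

```python
# Bare-bones sketch of a MEC charge-tracking record store.
from dataclasses import dataclass, field


@dataclass
class ChargeRecord:
    edge_device_id: str          # the connected edge device that made the request
    processing_device_id: str    # the device in the access network that did the work
    work_description: str        # data describing the completed computational processes


@dataclass
class BillingTracker:
    records: list = field(default_factory=list)

    def track(self, edge_device_id: str, processing_device_id: str, work: str):
        self.records.append(ChargeRecord(edge_device_id, processing_device_id, work))


if __name__ == "__main__":
    tracker = BillingTracker()
    tracker.track("ue-42", "mec-host-1", "video transcode, 3.2 CPU-seconds")
    print(tracker.records[0])
```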
-
Publication Number: US20190141610A1
Publication Date: 2019-05-09
Application Number: US16235685
Filing Date: 2018-12-28
Applicant: Dario Sabella , Ned M. Smith , Neal Oliver , Kshitij Arun Doshi , Suraj Prabhakaran , Francesc Guim Bernat , Miltiadis Filippou
Inventor: Dario Sabella , Ned M. Smith , Neal Oliver , Kshitij Arun Doshi , Suraj Prabhakaran , Francesc Guim Bernat , Miltiadis Filippou
Abstract: Various systems and methods for enhancing a distributed computing environment with multiple edge hosts and user devices, including in multi-access edge computing (MEC) network platforms and settings, are described herein. A device of a lifecycle management (LCM) proxy apparatus obtains a request, from a device application, for an application multiple context of an application. The application multiple context for the application is determined. The request from the device application for the application multiple context is authorized. A device application identifier based on the request is added to the application multiple context. A response created for the device application based on the authorization of the request is transmitted to the device application. The response includes an identifier of the application multiple context.
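A rough sketch of the LCM-proxy exchange: authorize the device application's request, create an application multiple context entry that records the device application identifier, and respond with the context identifier. The structure and names are illustrative assumptions, not the MEC-defined interfaces.

```python
# Rough sketch of an LCM proxy granting an application multiple context.
import uuid


class LcmProxy:
    def __init__(self):
        self.contexts = {}   # context_id -> {"app": ..., "device_app_ids": [...]}

    def request_multiple_context(self, device_app_id: str, app_name: str) -> dict:
        if not self._authorize(device_app_id):
            return {"status": "denied"}
        context_id = str(uuid.uuid4())
        self.contexts[context_id] = {"app": app_name, "device_app_ids": [device_app_id]}
        # The response carries the identifier of the application multiple context.
        return {"status": "granted", "context_id": context_id}

    def _authorize(self, device_app_id: str) -> bool:
        return device_app_id.startswith("dev-")   # placeholder authorization check


if __name__ == "__main__":
    proxy = LcmProxy()
    print(proxy.request_multiple_context("dev-001", "ar-overlay"))
```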