-
Publication No.: US20190042515A1
Publication Date: 2019-02-07
Application No.: US15848218
Filing Date: 2017-12-20
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Da-Ming Chiang , Kshitij A. Doshi , Suraj Prabhakaran , Mark A. Schmisseur
IPC: G06F13/40 , G06F13/362 , G06N3/04 , G06F13/42 , G06F9/455
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL configured to communicatively couple the first training accelerator to a third training accelerator on the second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and the second and third training accelerators without aid of the processor.
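The decoder-and-link arrangement in this abstract reduces to a routing decision: same-platform peers are reached over the platform ICL, remote peers over the fabric ICL, in both cases without the host processor in the data path. Below is a minimal Python sketch of that decision under those assumptions; the names (Accelerator, SystemDecoder, share) are hypothetical and not taken from the patent.

```python
# Hedged sketch: a system decoder picks the platform ICL for peers on the
# same hardware platform and the fabric ICL for peers on a remote platform,
# so accelerator data moves without involving the host processor.
from dataclasses import dataclass

@dataclass
class Accelerator:
    accel_id: int
    platform_id: int  # which hardware platform hosts this accelerator

class SystemDecoder:
    def __init__(self, local: Accelerator):
        self.local = local

    def share(self, peer: Accelerator, data: bytes) -> str:
        # Same platform: use the platform inter-chip link (ICL).
        if peer.platform_id == self.local.platform_id:
            return f"platform-ICL -> accel {peer.accel_id}: {len(data)} bytes"
        # Different platform: tunnel over the fabric ICL instead.
        return (f"fabric-ICL -> platform {peer.platform_id}, "
                f"accel {peer.accel_id}: {len(data)} bytes")

# Example: accelerator 0 on platform 0 shares gradients with a local and a remote peer.
decoder = SystemDecoder(Accelerator(0, 0))
print(decoder.share(Accelerator(1, 0), b"gradients"))  # platform ICL
print(decoder.share(Accelerator(2, 1), b"gradients"))  # fabric ICL
```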
-
Publication No.: US12132664B2
Publication Date: 2024-10-29
Application No.: US18068409
Filing Date: 2022-12-19
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ignacio Astilleros Diez , Timothy Verrall
IPC: H04L47/50 , H04L67/10 , H04L67/2866 , H04L67/60 , H04L49/90
CPC classification number: H04L47/50 , H04L67/10 , H04L67/2866 , H04L67/60 , H04L49/90
Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests, received from client devices, based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters; and hardware queue manager communication interface circuitry to send service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, those service requests corresponding to functions as a service provided by resources in the physical rack.
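The two-tier queueing described here can be pictured as a gateway-level priority queue that drains toward rack-level queue managers. The sketch below shows one plausible shape of that flow, assuming requests carry priority and deadline_ms service parameters and a rack identifier; all of those field names are invented for illustration.

```python
# Hedged sketch of the two-level queueing idea: parse service parameters out
# of incoming requests, order them in a priority queue, and forward queued
# FaaS requests to the rack-level queue manager.
import heapq

class GatewayQueueManager:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps ordering stable for equal priorities

    def enqueue(self, request: dict) -> None:
        # "Parse" the service parameters carried in the request itself.
        priority = request.get("priority", 10)
        deadline_ms = request.get("deadline_ms", float("inf"))
        heapq.heappush(self._queue, (priority, deadline_ms, self._seq, request))
        self._seq += 1

    def drain_to_rack(self, send) -> None:
        # Forward requests, highest priority first, to the rack-level manager.
        while self._queue:
            _, _, _, request = heapq.heappop(self._queue)
            send(request["rack"], request)

gw = GatewayQueueManager()
gw.enqueue({"fn": "resize-image", "priority": 2, "deadline_ms": 50, "rack": "rack-3"})
gw.enqueue({"fn": "transcode", "priority": 5, "rack": "rack-1"})
gw.drain_to_rack(lambda rack, req: print(f"{rack} <- {req['fn']}"))
```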
-
Publication No.: US20240320179A1
Publication Date: 2024-09-26
Application No.: US18680970
Filing Date: 2024-05-31
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Da-Ming Chiang , Kshitij A. Doshi , Suraj Prabhakaran , Mark A. Schmisseur
IPC: G06F13/40 , G06F9/455 , G06F9/50 , G06F9/54 , G06F13/362 , G06F13/42 , G06N3/02 , G06N3/04 , G06N3/045 , G06N3/08
CPC classification number: G06F13/4068 , G06F9/45533 , G06F9/5027 , G06F9/54 , G06F13/362 , G06F13/4265 , G06F13/4282 , G06N3/02 , G06N3/04 , G06N3/045 , G06N3/08 , G06F2213/0026
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL configured to communicatively couple the first training accelerator to a third training accelerator on the second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and the second and third training accelerators without aid of the processor.
-
Publication No.: US11972298B2
Publication Date: 2024-04-30
Application No.: US17666366
Filing Date: 2022-02-07
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
CPC classification number: G06F9/505 , G06F9/5044 , G06F9/5083 , G06F2209/509
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
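The key step in this abstract is asking the present location's accelerators for "transformed" (device-neutral) workload data before handing off. The Python sketch below illustrates one assumed trigger, migrating when the requesting device is predicted to leave before a transfer budget elapses; the classes and method names (export_transformed_state and friends) are hypothetical.

```python
# Rough sketch of the migration flow, under an assumed mobility-based trigger.
class EdgeAccelerator:
    def __init__(self, accel_id):
        self.accel_id = accel_id
        self.state = {}

    def process(self, workload_id, data):
        self.state[workload_id] = data  # in-flight intermediate state

    def export_transformed_state(self, workload_id):
        # Produce a device-neutral representation the target edge can resume from.
        return {"accel": self.accel_id, "workload": workload_id,
                "payload": self.state.pop(workload_id, None)}

def maybe_migrate(workload_id, local_accels, send_to_next_edge,
                  predicted_exit_s, xfer_budget_s=5.0):
    if predicted_exit_s > xfer_budget_s:
        return False  # device stays long enough; keep processing locally
    for accel in local_accels:
        send_to_next_edge(accel.export_transformed_state(workload_id))
    return True

accels = [EdgeAccelerator(0), EdgeAccelerator(1)]
for a in accels:
    a.process("wl-7", data=b"partial-results")
maybe_migrate("wl-7", accels, send_to_next_edge=print, predicted_exit_s=2.0)
```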
-
Publication No.: US11809252B2
Publication Date: 2023-11-07
Application No.: US16524868
Filing Date: 2019-07-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Karthik Kumar , Uzair Qureshi , Timothy Verrall
CPC classification number: G06F1/30 , G06F1/263 , G06F11/1474 , G06F2201/87
Abstract: Examples described herein relate to management of battery-use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed-up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: criticality level of data, priority of an application that processes the data, or priority level of resource. The computing resource can back-up data to a persistent storage media. The computing resource can store a log of data that is backed-up or not backed-up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
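The allocation policy sketched in this abstract, backing up the most critical data first until the battery budget runs out and logging what did and did not reach persistent media, is easy to illustrate. In the Python sketch below, the joule-based cost model and every field name (criticality, app_priority, size_mb) are assumptions for illustration, not details from the patent.

```python
# Hedged sketch of battery-aware backup ordering: spend a finite battery
# budget on the most critical datasets first, and keep a log so recovery
# code can tell which data is available. Numbers are illustrative.
def backup_on_battery(datasets, battery_budget_j, cost_per_mb_j=0.5):
    log = []
    # Highest criticality first; ties broken by application priority.
    for d in sorted(datasets, key=lambda d: (-d["criticality"], -d["app_priority"])):
        cost = d["size_mb"] * cost_per_mb_j
        if cost <= battery_budget_j:
            battery_budget_j -= cost
            log.append({"name": d["name"], "backed_up": True})   # flushed to persistent media
        else:
            log.append({"name": d["name"], "backed_up": False})  # lost unless power returns
    return log

log = backup_on_battery(
    [{"name": "txn-journal", "criticality": 3, "app_priority": 9, "size_mb": 40},
     {"name": "metrics", "criticality": 1, "app_priority": 2, "size_mb": 500}],
    battery_budget_j=100)
print(log)  # txn-journal backed up; metrics did not fit the budget
```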
-
Publication No.: US11743143B2
Publication Date: 2023-08-29
Application No.: US17832903
Filing Date: 2022-06-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Suraj Prabhakaran , Raghu Kondapalli , Alexander Bachmutsky
IPC: H04L12/70 , H04L41/0806 , H04W48/08 , H04L67/12 , H04L41/5019 , H04L41/5041 , H04L67/61 , H04L67/63 , G06N5/04
CPC classification number: H04L41/5019 , H04L41/0806 , H04L41/5045 , H04L67/12 , H04L67/61 , H04L67/63 , G06N5/04
Abstract: Various systems and methods for implementing a service-level agreement (SLA) apparatus receive a request from a requester via a network interface of the gateway, the request comprising an inference model identifier that identifies a handler of the request, and a response time indicator. The response time indicator either relates to a time within which the request is to be handled or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler that is a platform or an inference model to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
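The routing step here takes two inputs, a model identifier and a response time indicator that may be undefined, and resolves them to a handler location that can meet the indicated time. The Python sketch below shows one way that lookup could work; the handler table, location names, and latency figures are invented for illustration.

```python
# Minimal sketch of SLA-aware routing: pick a handler location whose
# expected latency satisfies the response time indicator, or fall back
# when the indicator is undefined.
HANDLERS = {
    "resnet50": [  # ordered nearest-first
        {"location": "edge-gw-1", "expected_ms": 8},
        {"location": "regional-dc", "expected_ms": 40},
        {"location": "cloud", "expected_ms": 120},
    ],
}

def route(model_id, response_time_ms=None):
    candidates = HANDLERS.get(model_id, [])
    if response_time_ms is None:
        # Undefined response time: any handler is acceptable; take the most
        # centralized one (assumed cheapest here).
        return candidates[-1]["location"] if candidates else None
    for h in candidates:
        if h["expected_ms"] <= response_time_ms:
            return h["location"]
    return None  # no handler can meet the SLA

print(route("resnet50", response_time_ms=50))  # -> edge-gw-1
print(route("resnet50"))                       # -> cloud
```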
-
Publication No.: US20230039631A1
Publication Date: 2023-02-09
Application No.: US17973268
Filing Date: 2022-10-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Da-Ming Chiang , Kshitij A. Doshi , Suraj Prabhakaran , Mark A. Schmisseur
IPC: G06F13/40 , G06F13/362 , G06N3/04 , G06F13/42 , G06F9/455 , G06N3/08 , G06F9/50 , G06F9/54 , G06N3/02
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL configured to communicatively couple the first training accelerator to a third training accelerator on the second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and the second and third training accelerators without aid of the processor.
-
Publication No.: US20230022620A1
Publication Date: 2023-01-26
Application No.: US17875672
Filing Date: 2022-07-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , H04L47/70 , H04L43/0811 , H04W4/40 , H04L67/10 , H04W4/70 , H04L41/5019 , H04L67/00 , G06F9/48
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to a second edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing system to provide computing capabilities for the connected edge device with the identified service level.
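The specific example in this abstract pairs a mobility condition (degrading connectivity to the edge device) with a service-level check on the destination. The Python sketch below shows one assumed form of that logic; the signal threshold, capacity_slots field, and edge names are all illustrative, not from the patent.

```python
# Hedged sketch of QoS-preserving service migration: when connectivity to
# the current edge node degrades, move the service to an edge system that
# can still deliver the identified service level.
def check_mobility_and_migrate(service, signal_dbm, candidate_edges,
                               weak_signal_dbm=-95):
    # Mobility condition: the link to the current edge node is degrading.
    if signal_dbm > weak_signal_dbm:
        return service["edge"]  # stay put
    for edge in candidate_edges:
        # Pick the first edge that can honor the service's service level.
        if edge["capacity_slots"] > 0 and edge["latency_ms"] <= service["sla_ms"]:
            edge["capacity_slots"] -= 1
            service["edge"] = edge["name"]
            break
    return service["edge"]

svc = {"name": "v2x-assist", "edge": "edge-A", "sla_ms": 20}
edges = [{"name": "edge-B", "latency_ms": 15, "capacity_slots": 4},
         {"name": "edge-C", "latency_ms": 30, "capacity_slots": 8}]
print(check_mobility_and_migrate(svc, signal_dbm=-102, candidate_edges=edges))  # -> edge-B
```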
-
Publication No.: US11431648B2
Publication Date: 2022-08-30
Application No.: US16004542
Filing Date: 2018-06-11
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Doshi , Suraj Prabhakaran
IPC: H04L12/927 , H04L12/24 , H04L29/08 , H04L12/911 , H04L12/14 , H04L12/26 , H04L47/80 , H04L41/5009 , H04L67/1031 , H04L47/726 , H04L43/0852 , H04L43/0888 , H04L43/087
Abstract: Technologies for providing adaptive utilization of different interconnects for workloads include a compute device. The compute device includes a connection abstraction logic unit to determine a quality of service target to be satisfied in the execution of a workload that is to communicate with at least one other workload through one or more interconnects of a set of interconnects associated with the compute device, determine a quality of service property of each interconnect of the set of interconnects, and allocate, as a function of the determined quality of service property of each interconnect, one or more of the set of interconnects to the workload to satisfy the quality of service target. The compute device also includes circuitry to execute the workload and communicate with the at least one other workload through the allocated one or more interconnects. Other embodiments are also described and claimed.
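The connection abstraction described here amounts to matching measured per-interconnect QoS properties against a workload's QoS target and allocating a satisfying subset. The Python sketch below shows one greedy version of that matching; the interconnect names, latency, and bandwidth figures are illustrative assumptions.

```python
# Sketch of QoS-driven interconnect allocation: pick low-latency links first
# so the latency bound holds for every allocated link, and stop once the
# aggregate bandwidth meets the workload's target.
def allocate_interconnects(interconnects, target_bw_gbps, max_latency_ns):
    chosen, bw = [], 0.0
    for ic in sorted(interconnects, key=lambda ic: ic["latency_ns"]):
        if ic["latency_ns"] > max_latency_ns:
            break  # everything after this is too slow for the target
        chosen.append(ic["name"])
        bw += ic["free_bw_gbps"]
        if bw >= target_bw_gbps:
            return chosen  # aggregate bandwidth now meets the QoS target
    return None  # target not satisfiable with the available links

links = [{"name": "UPI-0", "latency_ns": 90, "free_bw_gbps": 20},
         {"name": "PCIe-4", "latency_ns": 150, "free_bw_gbps": 32},
         {"name": "Ethernet", "latency_ns": 5000, "free_bw_gbps": 100}]
print(allocate_interconnects(links, target_bw_gbps=45, max_latency_ns=200))
# -> ['UPI-0', 'PCIe-4']
```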
-
Publication No.: US11412052B2
Publication Date: 2022-08-09
Application No.: US16235137
Filing Date: 2018-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , H04L43/0811 , H04L67/10 , H04L41/5019 , H04L67/00 , H04L41/5003 , H04L47/70 , H04W4/40 , H04W4/70 , G06F9/48
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to a second edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing system to provide computing capabilities for the connected edge device with the identified service level.
-